Journal articles on the topic "Machine-to-Machine, Device-to-Device, Resource allocation"

To see other types of publications on this topic, follow this link: Machine-to-Machine, Device-to-Device, Resource allocation.

Consult the top 50 journal articles for your research on the topic "Machine-to-Machine, Device-to-Device, Resource allocation".

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Nardini, Giovanni, Antonio Virdis, and Giovanni Stea. "Modeling Network-Controlled Device-to-Device Communications in SimuLTE." Sensors 18, no. 10 (October 19, 2018): 3551. http://dx.doi.org/10.3390/s18103551.

Abstract:
In Long Term Evolution-Advanced (LTE-A), network-controlled device-to-device (D2D) communications allow User Equipments (UEs) to communicate directly, without involving the Evolved Node-B in data relaying, while the latter still retains control of resource allocation. The above paradigm allows reduced latencies for the UEs and increased resource efficiency for the network operator, and is therefore foreseen to support several services, from Machine-to-machine to vehicular communications. D2D communications introduce research challenges that might affect the performance of applications and upper-layer protocols, hence simulations represent a valuable tool for evaluating these aspects. However, simulating D2D features might pose additional computational burden to the simulation environment. To this aim, a careful modeling is required to reduce computational overhead. In this paper, we describe our modeling of network-controlled D2D communications in SimuLTE, a system-level LTE-A simulation library based on OMNeT++. We describe the core modeling choices of SimuLTE, and show how these allow an easy extension to D2D communications. Moreover, we describe in detail the modeling of specific problems arising with D2D communications, such as scheduling with frequency reuse, connection mode switching and broadcast transmission. We document the computational efficiency of our modeling choices, showing that simulation of D2D communications is not more complex than simulation of classical cellular communications of comparable scale. Results show that the heaviest computational burden of D2D communication lies in estimating the Sidelink channel quality. We show that SimuLTE allows one to evaluate the interplay between D2D communication and end-to-end performance of UDP- and TCP-based services. Moreover, we assess the accuracy of using a binary interference model for frequency reuse, and we evaluate the trade-off between speed of execution and accuracy in modeling the reception probability.
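The binary interference model for frequency reuse mentioned in this abstract can be illustrated with a small sketch (illustrative only, not the SimuLTE implementation): two D2D links may reuse the same resource block only if the interference each transmitter causes at the other link's receiver stays below a fixed threshold. The path-loss model, the threshold value, and all names below are assumptions made for the example.

```python
import itertools
import math

PATHLOSS_EXP = 3.5                   # assumed path-loss exponent
INTERFERENCE_THRESHOLD_DBM = -60.0   # assumed per-link interference tolerance


def dist(p, q) -> float:
    """Euclidean distance between two (x, y) points in metres."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def rx_power_dbm(tx_dbm: float, distance_m: float) -> float:
    """Very simple log-distance path-loss model (illustrative only)."""
    return tx_dbm - 10 * PATHLOSS_EXP * math.log10(max(distance_m, 1.0))


def can_share_rb(link_a: dict, link_b: dict) -> bool:
    """Binary check: both cross-interference terms must stay under the threshold."""
    a_to_b = rx_power_dbm(link_a["tx_dbm"], dist(link_a["tx"], link_b["rx"]))
    b_to_a = rx_power_dbm(link_b["tx_dbm"], dist(link_b["tx"], link_a["rx"]))
    return max(a_to_b, b_to_a) < INTERFERENCE_THRESHOLD_DBM


if __name__ == "__main__":
    links = [
        {"tx": (0, 0), "rx": (10, 0), "tx_dbm": 20.0},
        {"tx": (300, 0), "rx": (310, 0), "tx_dbm": 20.0},
        {"tx": (40, 0), "rx": (50, 0), "tx_dbm": 20.0},
    ]
    for i, j in itertools.combinations(range(len(links)), 2):
        print(f"links {i} and {j} may reuse the same RB: {can_share_rb(links[i], links[j])}")
```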
2

Pise, Prakash. "Mobile Cloud IoT for Resource Allocation with Scheduling in Device-Device Communication and Optimization based on 5G Networks." International Journal on Future Revolution in Computer Science & Communication Engineering 8, no. 3 (September 15, 2022): 33–42. http://dx.doi.org/10.17762/ijfrcsce.v8i3.2094.

Abstract:
The Internet of Things (IoT) is revolutionising the technical environment of traditional methods and has applications in smart cities, smart industries, etc. Additionally, the application areas of IoT-enabled models are resource-constrained and demand quick answers, low latencies, and high bandwidth, all of which are outside their capabilities. The above-mentioned issues are addressed by cloud computing (CC), which is viewed as a resource-rich solution. However, the excessive latency of CC prevents it from being practical, and the performance of IoT-based smart systems suffers from longer delays. CC is an affordable, emergent, dispersed computing pattern that features an extensive assembly of diverse autonomous methods. This research proposes a novel technique for resource allocation and task scheduling for device-device communication in a mobile cloud IoT environment based on 5G networks. Here, resource allocation is carried out using a virtual machine based Markov model infused with wavelength division multiplexing. Task scheduling is carried out using meta-heuristic moth flame optimization with chaotic maps, so that by scheduling tasks in a smaller search space, system resources are conserved. We run simulation tests on benchmark problems and real-world situations to confirm the effectiveness of the suggested approach. The measured results are a resource utilization of 95%, response time of 89%, computational cost of 35%, power consumption of 38%, and QoS of 85%.
3

Springer, Tom, Elia Eiroa-Lledo, Elizabeth Stevens, and Erik Linstead. "On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures." Electronics 10, no. 6 (March 15, 2021): 689. http://dx.doi.org/10.3390/electronics10060689.

Abstract:
As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for target architectures at the “edge” that are resource-constrained. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove to be critical as these technologies move to the edge. In order to address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These types of mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time critical applications, such as the control tasks which maintained low-latency deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely-used, industry-standard real-time operating systems.
4

Rodriguez Medel, Abel, and Jose Marcos C. Brito. "Random-Access Accelerator (RAA): A Framework to Speed Up the Random-Access Procedure in 5G New Radio for IoT mMTC by Enabling Device-To-Device Communications." Sensors 20, no. 19 (September 25, 2020): 5485. http://dx.doi.org/10.3390/s20195485.

Abstract:
Mobile networks face a great challenge in serving the expected billions of Internet of Things (IoT) devices in the upcoming years. Due to the limited simultaneous access in mobile networks, devices must compete with each other for resource allocation during the Random-Access procedure. This contention provokes a non-negligible delay during device registration because of the great number of collisions experienced. To overcome this problem, a framework called Random-Access Accelerator (RAA) is proposed in this work to speed up network access in massive Machine Type Communication (mMTC). RAA exploits Device-To-Device (D2D) communications, where devices with already assigned resources act as relays for the rest of the devices trying to gain access to the network. The simulation results show an acceleration of the registration procedure of 99% and a freeing of up to 74% of the allocated spectrum in comparison with the conventional Random-Access procedure. In addition, device energy consumption is preserved at the same level as in legacy networks by using a custom version of Bluetooth as the wireless technology for D2D communications. The proposed framework can be taken into account for the standardization of mMTC in Fifth-Generation New Radio (5G NR).
5

Vo, Ta-Hoang, Zhi Ding, Quoc-Viet Pham, and Won-Joo Hwang. "Access Control and Pilot Allocation for Machine-Type Communications in Crowded Massive MIMO Systems." Symmetry 11, no. 10 (October 11, 2019): 1272. http://dx.doi.org/10.3390/sym11101272.

Abstract:
Massive machine-type communication (mMTC) in 5G New Radio (5G-NR) or the Internet of Things (IoT) is a network of physical devices such as vehicles, smart meters, sensors, and smart appliances, which can communicate and interact in real time without human intervention. In IoT systems, the number of networked devices is expected to be in the tens of billions, while radio resources remain scarce. To connect the massive number of devices with limited bandwidth, it is crucial to develop new access solutions that can improve resource efficiency and reduce control overhead as well as access delay. The key idea is to control the number of arriving devices that want to access the system, and then to allow only the strongest device (the one with the largest channel gain; each device is able to check whether it is the strongest) to transmit to the BS. In this paper, we consider a random access problem in the massive MIMO context for collision resolution, in which the access class barring (ACB) factor is dynamically adjusted in each time slot to maximize the access success rate for the strongest-user collision resolution (SUCRe) protocol. We propose a dynamic ACB scheme to find the optimal ACB factor in the next time slot and then apply the SUCRe protocol to achieve good performance. This method is called dynamic access class barring combined strongest-user collision resolution (DACB-SUCR). In addition, we investigate two different ACB schemes, the fixed ACB and the traffic-aware ACB, to compare with the proposed dynamic ACB. Analysis and simulation results demonstrate that, compared with the SUCRe protocol, the proposed DACB-SUCR method can remarkably reduce pilot collisions and increase the access success rate. It is also shown that the dynamic ACB gives better performance than the fixed ACB and the traffic-aware ACB.
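As a rough illustration of the access class barring idea referred to in this abstract (a sketch, not the paper's DACB-SUCR algorithm): each backlogged device draws a uniform random number and only attempts access if it falls below the current ACB factor, while the base station re-tunes the factor each slot from its backlog estimate. The load-based update rule and the preamble count below are assumptions for the example.

```python
import random

N_PREAMBLES = 54  # typical LTE/NR random-access preamble count (assumed here)


def acb_factor(estimated_backlog: int, n_preambles: int = N_PREAMBLES) -> float:
    """Classic load-based barring: admit roughly as many devices as preambles."""
    if estimated_backlog <= 0:
        return 1.0
    return min(1.0, n_preambles / estimated_backlog)


def attempt_access(n_backlogged: int, p_acb: float) -> int:
    """Each device passes the barring check independently with probability p_acb."""
    return sum(1 for _ in range(n_backlogged) if random.random() <= p_acb)


if __name__ == "__main__":
    random.seed(0)
    backlog = 500
    p = acb_factor(backlog)
    contenders = attempt_access(backlog, p)
    print(f"ACB factor {p:.3f}: {contenders} of {backlog} devices contend this slot")
```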
6

Shah, Sayed-Chhattan. "Design of a Machine Learning-Based Intelligent Middleware Platform for a Heterogeneous Private Edge Cloud System." Sensors 21, no. 22 (November 19, 2021): 7701. http://dx.doi.org/10.3390/s21227701.

Abstract:
Recent advances in mobile technologies have facilitated the development of a new class of smart city and fifth-generation (5G) network applications. These applications have diverse requirements, such as low latencies, high data rates, significant amounts of computing and storage resources, and access to sensors and actuators. A heterogeneous private edge cloud system was proposed to address the requirements of these applications. The proposed heterogeneous private edge cloud system is characterized by a complex and dynamic multilayer network and computing infrastructure. Efficient management and utilization of this infrastructure may increase data rates and reduce data latency, data privacy risks, and traffic to the core Internet network. A novel intelligent middleware platform is proposed in the current study to manage and utilize heterogeneous private edge cloud infrastructure efficiently. The proposed platform aims to provide computing, data collection, and data storage services to support emerging resource-intensive and non-resource-intensive smart city and 5G network applications. It aims to leverage regression analysis and reinforcement learning methods to solve the problem of efficiently allocating heterogeneous resources to application tasks. This platform adopts parallel transmission techniques, dynamic interface allocation techniques, and machine learning-based algorithms in a dynamic multilayer network infrastructure to improve network and application performance. Moreover, it uses container and device virtualization technologies to address problems related to heterogeneous hardware and execution environments.
7

Bankov, Dmitry, Evgeny Khorov, Andrey Lyakhov, and Jeroen Famaey. "Resource Allocation for Machine-Type Communication of Energy-Harvesting Devices in Wi-Fi HaLow Networks." Sensors 20, no. 9 (April 25, 2020): 2449. http://dx.doi.org/10.3390/s20092449.

Abstract:
The recent Wi-Fi HaLow technology focuses on adopting Wi-Fi for the needs of the Internet of Things. A key feature of Wi-Fi HaLow is the Restricted Access Window (RAW) mechanism that allows an access point to divide the sensors into groups and to assign each group to an exclusively reserved time interval where only the stations of a particular group can transmit. In this work, we study how to optimally configure RAW in a scenario with a high number of energy harvesting sensor devices. For such a scenario, we consider a problem of device grouping and develop a model of data transmission, which takes into account the peculiarities of channel access and the fact that the devices can run out of energy within the allocated intervals. We show how to use the developed model in order to determine the optimal duration of RAW intervals and the optimal number of groups that provide the required probability of data delivery and minimize the amount of consumed channel resources. The numerical results show that the optimal RAW configuration can reduce the amount of consumed channel resources by almost 50%.
8

Farhad, Arshad, and Jae-Young Pyun. "Resource Management for Massive Internet of Things in IEEE 802.11ah WLAN: Potentials, Current Solutions, and Open Challenges." Sensors 22, no. 23 (December 5, 2022): 9509. http://dx.doi.org/10.3390/s22239509.

Abstract:
IEEE 802.11ah, known as Wi-Fi HaLow, is envisioned for long-range and low-power communication. It is sub-1 GHz technology designed for massive Internet of Things (IoT) and machine-to-machine devices. It aims to overcome the IoT challenges, such as providing connectivity to massive power-constrained devices distributed over a large geographical area. To accomplish this objective, IEEE 802.11ah introduces several unique physical and medium access control layer (MAC) features. In recent years, the MAC features of IEEE 802.11ah, including restricted access window, authentication (e.g., centralized and distributed) and association, relay and sectorization, target wake-up time, and traffic indication map, have been intensively investigated from various aspects to improve resource allocation and enhance the network performance in terms of device association time, throughput, delay, and energy consumption. This survey paper presents an in-depth assessment and analysis of these MAC features along with current solutions, their potentials, and key challenges, exposing how to use these novel features to meet the rigorous IoT standards.
9

Sanyal, Rajarshi, and Ramjee Prasad. "Enabling Cellular Device to Device Data Exchange on WISDOM 5G by Actuating Cooperative Communication Based on SMNAT." International Journal of Interdisciplinary Telecommunications and Networking 6, no. 3 (July 2014): 37–59. http://dx.doi.org/10.4018/ijitn.2014070104.

Abstract:
The key attributes envisioned for LTE-Advanced pertaining to 5G networks are ubiquitous presence, device convergence, massive machine connectivity, ultrahigh throughput, and a moderated carbon footprint of the network and the user equipment, actuated by offloading cellular data traffic and by enabling device to device communication. The present method of mobility management and addressing, as the authors have foreseen in LTE-Advanced, can solve some issues of cellular traffic backhaul towards the access and core network by actuating a local breakout and enabling communication directly between devices. But most of the approaches look towards an enhancement of the radio resource allocation process and are prone to interference. Besides, most of these proposals delve into Device to Device (D2D) mode initiation from the device end, but no research has so far addressed the concept of a network-initiated D2D process, which can further optimise channel utilisation and network operations. In their attempt to knot these loose ends together, the authors furnish the concept of WISDOM (Wireless Innovative System for Dynamic Operating Mega communications) (Badoi Cornelia-I., Prasad N., Croitoru V., Prasad R., 2011; Prasad R., June 2013; Prasad R., December 2013) and SMNAT (Sanyal, R., Cianca, E. and Prasad, R., 2012a). Further, the authors explore how SMNAT (Smart Mobile Network Access Topology) can engage with WISDOM in cooperative communication to actuate D2D communication initiated by the device or the network. WISDOM is an architectural concept for 5G networks based on a cognitive radio approach. The cognition, sustained by adaptation techniques, is a way to provide communication, convergence, connectivity, co-operation, and content, anytime and anywhere. Though D2D communication using a dedicated spectrum in a multi-cell environment is possible through advanced network coding or the use of fractional frequency reuse, the physical proximity of the two devices is still a key requisite. In this paper the authors discuss SMNAT, which employs physical layer addressing to enable D2D communication agnostic to the spatial coordinates of the devices.
10

Ali, Anum, Ghalib A. Shah, and Junaid Arshad. "Energy Efficient Resource Allocation for M2M Devices in 5G." Sensors 19, no. 8 (April 17, 2019): 1830. http://dx.doi.org/10.3390/s19081830.

Abstract:
Resource allocation for machine-type communication (MTC) devices is one of the key challenges in the 5G network, as it affects the lifetime of battery-powered devices and also the quality of service of the applications. MTC devices are battery-constrained and cannot afford a lot of power consumption due to spectrum usage. In this paper, we propose a novel resource allocation algorithm termed the threshold controlled access (TCA) protocol. We propose a novel technique of uplink resource allocation in which the devices make a decision on resource allocation blocks based on their battery status and the related application's power profile, which eventually leads to the required quality of service (QoS) metric. The first phase of the TCA algorithm selects the number of carriers to be allocated to a certain device for a better lifetime of low-power MTC devices. In the second phase, an efficient solution is implemented by introducing a threshold value. A certain value of the threshold is selected through a mapping based on a QoS metric. The threshold enhances the selection of subcarriers for low-powered devices, such as small e-health sensors. The algorithm is simulated for the physical layer of the 5G network. Simulation results show that the proposed algorithm is less complex and achieves better performance when compared to existing solutions in the literature.
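A minimal sketch of a threshold-controlled allocation in the spirit of the TCA description above; the battery thresholds per QoS class and the subcarrier budgets are invented for illustration and are not taken from the paper.

```python
# Hypothetical mapping from QoS class to a battery threshold (0..1) and subcarrier budgets.
QOS_THRESHOLDS = {"e-health": 0.2, "metering": 0.4, "video": 0.6}
FULL_ALLOCATION = 12     # subcarriers granted above the threshold (assumed)
REDUCED_ALLOCATION = 4   # subcarriers granted below the threshold (assumed)


def allocate_subcarriers(battery_level: float, qos_class: str) -> int:
    """Second phase of the sketch: compare battery level against a QoS-derived threshold."""
    threshold = QOS_THRESHOLDS.get(qos_class, 0.5)
    return FULL_ALLOCATION if battery_level >= threshold else REDUCED_ALLOCATION


if __name__ == "__main__":
    devices = [("sensor-1", 0.15, "e-health"), ("cam-7", 0.80, "video"), ("meter-3", 0.30, "metering")]
    for name, battery, qos in devices:
        print(name, "->", allocate_subcarriers(battery, qos), "subcarriers")
```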
11

Wu, Yali, Shuang Zhang, Zhengxuan Liu, Xiaoshuang Liu, and Jianfeng Li. "An Efficient Resource Allocation for Massive MTC in NOMA-OFDMA Based Cellular Networks." Electronics 9, no. 5 (April 25, 2020): 705. http://dx.doi.org/10.3390/electronics9050705.

Abstract:
To alleviate random access congestion and support massive-connections with less energy consumption for machine-type communications (MTC) in the 5G cellular network, we propose an efficient resource allocation for massive MTC (mMTC) with hybrid non-orthogonal multiple access (NOMA)-orthogonal frequency division multiple access (OFDMA). First, a hybrid multiple access scheme, including the NOMA-based congestion-alleviating access scheme (NCAS) and OFDMA-based congestion-alleviating access scheme (OCAS), is proposed, in which the NOMA based devices coexist with OFDMA based ones. Then, aiming at maximizing the system access capacity, a traffic-aware resource blocks (RBs) allocation is investigated to optimize RBs allocation for preamble transmission and data packets transmission, as well as to optimize the RBs allocation between NCAS and OCAS for the RBs usage efficiency improvement. Next, aiming at the problem of high computational complexity and improving energy efficiency in hybrid NOMA-OFDMA based cellular M2M communications, this paper proposes an improved low complexity power allocation algorithm. The feasibility conditions of power allocation solution under the maximum transmit power constraints and quality of service (QoS) requirements of the devices is investigated. The original non-convex optimization problem is solved under the feasibility conditions by two iterative algorithms. Finally, a device clustering scheme is proposed based on the channel gain difference and feasible condition of power allocation solution, by which NOMA based devices and OFDMA based devices can be determined. Simulation results show that compared with non-orthogonal random access and transmission (NORA-DT), the proposed resource allocation scheme for hybrid NOMA-OFDMA systems can efficiently improve the performance of access capacity and energy efficiency.
12

Feng Hu, Andong Chen, Hexing Yang, and Hongliu Zhang. "5G Hybrid System Design and Energy Efficient Resource Allocation Deployment." Electrotehnica, Electronica, Automatica 70, no. 2 (May 15, 2022): 47–55. http://dx.doi.org/10.46904/eea.22.70.2.1108006.

Abstract:
5G communication provides a promising platform for new, innovative, and diverse enhanced mobile broadband (eMBB) and massive device connectivity applications, such as streaming media, machine vision and Internet of Things (IoT), real-time and dynamic data processing, and intensive computation. However, 5G multimedia device deployment relies on the coverage of base stations, which is inefficient and costly for wide-area coverage and physical penetration. In this paper, a 5G and wide-area Ad Hoc network fusion architecture is proposed to flexibly provide scalable 5G and extensible low-power device interconnection liberated from geographical restrictions, which consists of a low-power wide-area network and an edge processing gateway. Moreover, the intelligent edge gateway near a specific base station can support real-time ultra-high-definition video stream access and achieve traffic optimization by compressing, intelligently identifying, and preprocessing the video streams to alleviate traffic congestion. The coverage capacity efficiency of wide-area Ad Hoc networks is restricted by the "funnel effect" in multihop cascading, and adaptive resource allocation strategies present a promising approach to realizing energy-efficient deployment. A non-convex optimization problem is formulated to maximize the energy efficiency of the Ad Hoc network deployment. Then, a coordination and optimization strategy for internal resource allocation in the deployed multihop nodes, based on a Lagrange relaxation algorithm, is presented to solve the optimization problem. Actual system deployment and real measurements proved that the system runs normally and stably. The experimental simulation test results show that the proposed 5G wide-area Ad Hoc network can effectively make up for the adaptive streaming needs of 5G coverage blind spots. Compared with static resource allocation, the proposed resource allocation and deployment scheme reduces energy consumption by 42.31%.
13

Hou, Wenjun, Song Li, Yanjing Sun, Jiasi Zhou, Xiao Yun, and Nannan Lu. "Interference-Aware Subcarrier Allocation for Massive Machine-Type Communication in 5G-Enabled Internet of Things." Sensors 19, no. 20 (October 18, 2019): 4530. http://dx.doi.org/10.3390/s19204530.

Abstract:
Massive machine-type communication (mMTC) is investigated as one of the three typical scenarios of the 5th-generation (5G) network. In this paper, we propose a 5G-enabled internet of things (IoT) in which some enhanced mobile broadband devices transmit video streams to a centralized controller and some mMTC devices exchange short packet data with adjacent devices via D2D communication to promote inter-device cooperation. Since massive MTC devices have data transmission requirements in 5G-enabled IoT with limited spectrum resources, the subcarrier allocation problem is investigated to maximize the connectivity of mMTC devices subject to the quality of service (QoS) requirements of enhanced Mobile Broadband (eMBB) devices and mMTC devices. To solve the formulated mixed-integer non-linear programming (MINLP) problem, which is NP-hard, an interference-aware subcarrier allocation algorithm for mMTC communication (IASA) is developed to maximize the number of active mMTC devices. Finally, the performance of the proposed algorithm is evaluated by simulation. Numerical results demonstrate that the proposed algorithm outperforms three traditional benchmark methods and significantly improves the utilization of the uplink spectrum. This indicates that the proposed IASA algorithm provides a better solution for IoT applications.
14

Han, Tongzhou, and Danfeng Zhao. "Energy Efficiency of User-Centric, Cell-Free Massive MIMO-OFDM with Instantaneous CSI." Entropy 24, no. 2 (February 3, 2022): 234. http://dx.doi.org/10.3390/e24020234.

Abstract:
In the user-centric, cell-free, massive multi-input, multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) system, a large number of deployed access points (APs) serve user equipment (UEs) simultaneously, using the same time–frequency resources, and the system is able to ensure fairness between each user; moreover, it is robust against fading caused by multi-path propagation. Existing studies assume that cell-free, massive MIMO is channel-hardened, the same as centralized massive MIMO, and these studies address power allocation and energy efficiency optimization based on the statistics information of each channel. In cell-free, massive MIMO systems, especially APs with only one antenna, the channel statistics information is not a complete substitute for the instantaneous channel state information (CSI) obtained via channel estimation. In this paper, we propose that energy efficiency is optimized by power allocation with instantaneous CSI in the user-centric, cell-free, massive MIMO-OFDM system, and we consider the effect of CSI exchanging between APs and the central processing unit. In addition, we design different resource block allocation schemes, so that user-centric, cell-free, massive MIMO-OFDM can support enhanced mobile broadband (eMBB) for high-speed communication and massive machine communication (mMTC) for massive device communication. The numerical results verify that the proposed energy efficiency optimization scheme, based on instantaneous CSI, outperforms the one with statistical information in both scenarios.
15

Chen, Mingzhe, Nir Shlezinger, H. Vincent Poor, Yonina C. Eldar, and Shuguang Cui. "Communication-efficient federated learning." Proceedings of the National Academy of Sciences 118, no. 17 (April 22, 2021): e2024789118. http://dx.doi.org/10.1073/pnas.2024789118.

Abstract:
Federated learning (FL) enables edge devices, such as Internet of Things devices (e.g., sensors), servers, and institutions (e.g., hospitals), to collaboratively train a machine learning (ML) model without sharing their private data. FL requires devices to exchange their ML parameters iteratively, and thus the time it requires to jointly learn a reliable model depends not only on the number of training steps but also on the ML parameter transmission time per step. In practice, FL parameter transmissions are often carried out by a multitude of participating devices over resource-limited communication networks, for example, wireless networks with limited bandwidth and power. Therefore, the repeated FL parameter transmission from edge devices induces a notable delay, which can be larger than the ML model training time by orders of magnitude. Hence, communication delay constitutes a major bottleneck in FL. Here, a communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss. In this framework, a probabilistic device selection scheme is designed such that the devices that can significantly improve the convergence speed and training loss have higher probabilities of being selected for ML model transmission. To further reduce the FL convergence time, a quantization method is proposed to reduce the volume of the model parameters exchanged among devices, and an efficient wireless resource allocation scheme is developed. Simulation results show that the proposed FL framework can improve the identification accuracy and convergence time by up to 3.6% and 87% compared to standard FL.
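The probabilistic device selection and parameter quantization described in this abstract can be caricatured as follows; the scoring rule (gradient norm divided by estimated upload time) and the uniform quantizer are stand-ins for illustration, not the paper's exact formulation.

```python
import numpy as np


def selection_probabilities(grad_norms, upload_times):
    """Favour devices whose updates help most per unit of transmission time (assumed score)."""
    scores = np.asarray(grad_norms, dtype=float) / np.asarray(upload_times, dtype=float)
    return scores / scores.sum()


def quantize(update: np.ndarray, n_bits: int = 4) -> np.ndarray:
    """Uniform quantization of a model update to reduce the transmitted volume."""
    levels = 2 ** n_bits - 1
    lo, hi = update.min(), update.max()
    step = (hi - lo) / levels if hi > lo else 1.0
    return lo + np.round((update - lo) / step) * step


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = selection_probabilities(grad_norms=[2.0, 0.5, 1.2], upload_times=[1.0, 0.8, 2.5])
    chosen = rng.choice(3, size=2, replace=False, p=probs)   # sample 2 of 3 devices
    update = rng.normal(size=8).astype(np.float32)
    print("selected devices:", chosen)
    print("quantized update:", quantize(update))
```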
16

Dhar, Sauptik, Junyao Guo, Jiayi (Jason) Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. "A Survey of On-Device Machine Learning." ACM Transactions on Internet of Things 2, no. 3 (July 2021): 1–49. http://dx.doi.org/10.1145/3450494.

Abstract:
The predominant paradigm for using machine learning models on a device is to train a model in the cloud and perform inference using the trained model on the device. However, with increasing numbers of smart devices and improved hardware, there is interest in performing model training on the device. Given this surge in interest, a comprehensive survey of the field from a device-agnostic perspective sets the stage for both understanding the state of the art and for identifying open challenges and future avenues of research. However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.). Hence, covering such a large number of topics in a single survey is impractical. This survey finds a middle ground by reformulating the problem of on-device learning as resource constrained learning where the resources are compute and memory. This reformulation allows tools, techniques, and algorithms from a wide variety of research areas to be compared equitably. In addition to summarizing the state of the art, the survey also identifies a number of challenges and next steps for both the algorithmic and theoretical aspects of on-device learning.
17

Liao, Teh-Lu, Hong-Ru Lin, Pei-Yen Wan, and Jun-Juh Yan. "Improved Attribute-Based Encryption Using Chaos Synchronization and Its Application to MQTT Security." Applied Sciences 9, no. 20 (October 21, 2019): 4454. http://dx.doi.org/10.3390/app9204454.

Abstract:
In recent years, Internet of Things (IoT) has developed rapidly and been widely used in industry, agriculture, e-health, smart cities, and families. As the total amount of data transmission will increase dramatically, security will become a very important issue in data communication in the IoT. There are many communication protocols for Device to Device (D2D) or Machine to Machine (M2M) in IoT. One of them is Message Queuing Telemetry Transport (MQTT), which is quite prevalent and easy to use. MQTT is designed for resource-constrained devices, so its security is not as strong as other communication protocols. To enhance MQTT security, it needs an additional function to overcome its weakness. However, considering the limited computational abilities of resource-constrained devices, they cannot use too powerful or complicated cryptographic algorithms. Therefore, this paper proposes novel improved attribute-based encryption (ABE) integrated with chaos synchronization to enhance the MQTT security. Finally, a small size of IoT is implemented to simulate resource-constrained devices equipped with a human–machine interface and monitoring software to show and verify the performance of MQTT communication with this improved ABE algorithm.
18

Mhetre, Nalini A., Arvind V. Deshpande, and Parikshit Narendra Mahalle. "Device Classification-Based Context Management for Ubiquitous Computing using Machine Learning." International Journal of Engineering and Advanced Technology 10, no. 5 (June 30, 2021): 135–42. http://dx.doi.org/10.35940/ijeat.e2688.0610521.

Abstract:
Ubiquitous computing comprises scenarios where networks, devices within the network, and software components change frequently. Market demand and cost-effectiveness are forcing device manufacturers to introduce new-age devices. Also, the Internet of Things (IoT) is transitioning rapidly from the IoT to the Internet of Everything (IoE). Due to this enormous scale, effective management of these devices becomes vital to support trustworthy and high-quality applications. One of the key challenges of IoT device management is proactive device classification with the logically semantic type and using that as a parameter for device context management. This would enable smart security solutions. In this paper, a device classification approach is proposed for the context management of ubiquitous devices based on unsupervised machine learning. To classify unknown devices and to label them logically, a proactive device classification model is framed using a k-Means clustering algorithm. To group devices, it uses the information of network parameters such as Received Signal Strength Indicator (rssi), packet_size, number_of_nodes in the network, throughput, etc. Experimental analysis suggests that the well-formedness of clusters can be used to derive cluster labels as a logically semantic device type which would be a context for resource management and authorization of resources.
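A toy version of the clustering step described above, using k-Means over a few of the network features named in the abstract (rssi, packet_size, throughput); the synthetic data, the feature scaling, and the choice of k are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic feature rows: [rssi (dBm), packet_size (bytes), throughput (kbps)]
sensors = rng.normal([-80, 64, 50], [5, 10, 10], size=(20, 3))
cameras = rng.normal([-55, 1400, 4000], [5, 100, 500], size=(20, 3))
X = np.vstack([sensors, cameras])

features = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# The cluster index is an anonymous label; a semantic device type would be assigned afterwards.
for cluster_id in range(2):
    members = X[labels == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} devices, mean rssi {members[:, 0].mean():.1f} dBm")
```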
19

Xia, Xian Fei, Kai Wu, Qing Peng Zhu, Yu Sun, and Qing Hai Jiang. "A New Type of Automatic Feeding Device for Biomass Briquetting Machine." Applied Mechanics and Materials 365-366 (August 2013): 32–36. http://dx.doi.org/10.4028/www.scientific.net/amm.365-366.32.

Abstract:
Biomass is a kind of important renewable energy resource, while the solidification technology provides an effective way for its energy utilization. The current feeding device of the biomass briquetting machine in China is manual or semi-automated, with low production efficiency. A new kind of automatic feeding device has been designed to meet the continuous feed requirements of a production line with the production capacity of 10~12 T/h in this paper. The shaftless screw device can realize continuous material supply, and the batch hopper provide materials for different briquetting machine, at the same time the crank-rocker mechanism provides certain vibration to prevent material clogging. This device gives a uniform material supply with a high degree of automation, which can improve the biomass fuel production effectively proved by practice.
20

Kamola, Mariusz. "Internet of Things with Lightweight Identities Implemented Using DNS DANE—Architecture Proposal." Sensors 18, no. 8 (August 1, 2018): 2517. http://dx.doi.org/10.3390/s18082517.

Abstract:
Domain Name Service (DNS) and its certification related resource records are appealing alternative to the standard X.509 certification framework, in provision of identities for Internet of Things (IoT) smart devices. We propose to also use DNS to store device owner identification data in device certificates. A working demonstration software has been developed as proof of this concept, which uses an external identity provider run by national authorities. As a result, smart devices are equipped with certificates that safely identify both the device and its owner. Hardware requirements make such a framework applicable to constrained devices. It stimulates mutual trust in machine-to-machine and man-to-machine communication, and creation of a friendlier environment for sale, lease, and data exchange. Further extensions of the proposed architecture are also discussed.
21

Llisterri Giménez, Nil, Marc Monfort Grau, Roger Pueyo Centelles, and Felix Freitag. "On-Device Training of Machine Learning Models on Microcontrollers with Federated Learning." Electronics 11, no. 4 (February 14, 2022): 573. http://dx.doi.org/10.3390/electronics11040573.

Abstract:
Recent progress in machine learning frameworks has made it possible to now perform inference with models using cheap, tiny microcontrollers. Training of machine learning models for these tiny devices, however, is typically done separately on powerful computers. This way, the training process has abundant CPU and memory resources to process large stored datasets. In this work, we explore a different approach: training the machine learning model directly on the microcontroller and extending the training process with federated learning. We implement this approach for a keyword spotting task. We conduct experiments with real devices to characterize the learning behavior and resource consumption for different hyperparameters and federated learning configurations. We observed that in the case of training locally with fewer data, more frequent federated learning rounds more quickly reduced the training loss but involved a cost of higher bandwidth usage and longer training time. Our results indicate that, depending on the specific application, there is a need to determine the trade-off between the requirements and the resource usage of the system.
22

Zahorulko, Andrii, Aleksey Zagorulko, Kateryna Kasabova, Bogdan Liashenko, Alexander Postadzhiev, and Mariana Sashnova. "Improving a tempering machine for confectionery masses." Eastern-European Journal of Enterprise Technologies 2, no. 11 (116) (April 30, 2022): 6–11. http://dx.doi.org/10.15587/1729-4061.2022.254873.

Abstract:
This paper reports an improved model of a tempering machine for heating the formulation mixture of marshmallow, characterized by heat supply to the working tank through the replacement of a steam jacket with heating by a film resistive electric heater of radiative type (FREhRT). The heat exchange surface of the device was increased by heating the stirrer with FREhRT; secondary energy (30–85 °C) was used by converting it with Peltier elements for the autonomous operation of superchargers cooling the engine compartment. The proposed solution leads to an increase in the efficiency of the device, which is explained by a decrease in its specific metal consumption through the use of FREhRT. A reduction in the duration of heating (75 °C) of a marshmallow formulation mixture was experimentally established: 530 s in the examined model, compared with 645 s in the analog. That confirmed the reduction in heating time to the set temperature by 21.7 % compared to the MT-250 basic design. The calculations established a decrease of 13 % in the specific energy consumption for heating a unit volume of product when using the improved structure, 205.7 kJ/kg, compared with 232.1 kJ/kg for the basic one. The increase in the efficiency of the proposed structure is explained by a decrease in the specific metal consumption of the device from 474 kg/m2 in the base apparatus to 273 kg/m2 in the improved one. The study results confirm the increase in the resource efficiency of the improved tempering machine, which is achieved by eliminating the steam jacket and by increasing the heat exchange surface through heating the stirrer. The heat transfer by FREhRT simplifies the operational performance of the temperature stabilization system in the working tank. The reported results could prove useful when designing thermal devices with electric heat supply under conditions of secondary energy use, which is relevant for ensuring resource efficiency.
23

Souza, Camilo, Edjair Mota, Diogo Soares, Pietro Manzoni, Juan-Carlos Cano, Carlos T. Calafate, and Enrique Hernández-Orallo. "FSF: Applying Machine Learning Techniques to Data Forwarding in Socially Selfish Opportunistic Networks." Sensors 19, no. 10 (May 23, 2019): 2374. http://dx.doi.org/10.3390/s19102374.

Abstract:
Opportunistic networks are becoming a solution to provide communication support in areas with overloaded cellular networks, and in scenarios where a fixed infrastructure is not available, as in remote and developing regions. A critical issue, which still requires a satisfactory solution, is the design of an efficient data delivery solution trading off delivery efficiency, delay, and cost. To tackle this problem, most researchers have used either the network state or node mobility as a forwarding criterion. Solutions based on social behaviour have recently been considered as a promising alternative. Following the philosophy from this new category of protocols, in this work, we present our “FriendShip and Acquaintanceship Forwarding” (FSF) protocol, a routing protocol that makes its routing decisions considering the social ties between the nodes and both the selfishness and the device resources levels of the candidate node for message relaying. When a contact opportunity arises, FSF first classifies the social ties between the message destination and the candidate to relay. Then, by using logistic functions, FSF assesses the relay node selfishness to consider those cases in which the relay node is socially selfish. To consider those cases in which the relay node does not accept receipt of the message because its device has resource constraints at that moment, FSF looks at the resource levels of the relay node. By using the ONE simulator to carry out trace-driven simulation experiments, we find that, when accounting for selfishness on routing decisions, our FSF algorithm outperforms previously proposed schemes, by increasing the delivery ratio up to 20%, with the additional advantage of introducing a lower number of forwarding events. We also find that the chosen buffer management algorithm can become a critical element to improve network performance in scenarios with selfish nodes.
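To make the "logistic function over selfishness" idea in this abstract concrete, here is a small illustrative decision rule that combines the social-tie class, a logistic transform of a selfishness score, and the relay's free resources; the weights, thresholds, and field names are invented for the example and do not come from the FSF paper.

```python
import math

TIE_WEIGHT = {"friend": 1.0, "acquaintance": 0.6, "stranger": 0.2}  # assumed weights


def willingness(selfishness: float, steepness: float = 6.0) -> float:
    """Logistic transform: highly selfish nodes approach zero willingness."""
    return 1.0 / (1.0 + math.exp(steepness * (selfishness - 0.5)))


def forward_to(relay: dict, destination_tie: str, threshold: float = 0.3) -> bool:
    """Forward only if social tie, willingness and free resources jointly exceed a threshold."""
    score = (TIE_WEIGHT.get(destination_tie, 0.2)
             * willingness(relay["selfishness"])
             * min(relay["free_buffer"], relay["battery"]))
    return score >= threshold


if __name__ == "__main__":
    relay = {"selfishness": 0.35, "free_buffer": 0.7, "battery": 0.9}
    print("forward via friend of destination:", forward_to(relay, "friend"))
    print("forward via stranger:", forward_to(relay, "stranger"))
```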
24

Alsharif, Maram, and Danda B. Rawat. "Study of Machine Learning for Cloud Assisted IoT Security as a Service." Sensors 21, no. 4 (February 3, 2021): 1034. http://dx.doi.org/10.3390/s21041034.

Abstract:
Machine learning (ML) has been emerging as a viable solution for intrusion detection systems (IDS) to secure IoT devices against different types of attacks. ML-based IDS (ML-IDS) normally detect network traffic anomalies caused by known attacks as well as newly introduced attacks. Recent research focuses on the functionality metrics of ML techniques, depicting their prediction effectiveness, but overlooks their operational requirements. ML techniques are resource-demanding and require careful adaptation to fit the limited computing resources of a large sector of their operational platform, namely, embedded systems. In this paper, we propose a cloud-based service architecture for managing ML models that best fit different IoT device operational configurations for security. An IoT device may benefit from such a service by offloading to the cloud heavy-weight activities such as feature selection, model building, training, and validation, thus reducing the IDS maintenance workload at the IoT device and getting the security model back from the cloud as a service.
25

Bogdanov, S. I., V. G. Ryabtsev, and K. V. Evseev. "Resource saving in the design of multiple robots control systems." IOP Conference Series: Earth and Environmental Science 965, no. 1 (January 1, 2022): 012059. http://dx.doi.org/10.1088/1755-1315/965/1/012059.

Abstract:
The research objective is the design of information technology for automated control system model synthesis, reducing the labour intensity of digital machine model design by improving the control system's information conversion tools. Structural-functional digital machine models are proposed for the project development of robotic manipulator control devices. Application of the structural-functional digital machine models reduces the labour intensity and project timing of robotic center control system design due to the conversion of the cycle scheme to an intermediate representation, which is convenient for digital machine synthesis using modern development tools. When applying the proposed structural-functional model, a process engineer can rely on keeping all important technological process details, and a programmer can avoid a large number of errors when developing the device algorithms. The developed algorithm can be represented as state diagrams, which can be adapted for an integrated-circuit chip as a digital machine using specific computer-aided design tools.
26

Markussen, Jonas, Lars Bjørlykke Kristiansen, Rune Johan Borgli, Håkon Kvale Stensland, Friedrich Seifert, Michael Riegler, Carsten Griwodz, and Pål Halvorsen. "Flexible device compositions and dynamic resource sharing in PCIe interconnected clusters using Device Lending." Cluster Computing 23, no. 2 (September 21, 2019): 1211–34. http://dx.doi.org/10.1007/s10586-019-02988-0.

Abstract:
Modern workloads often exceed the processing and I/O capabilities provided by resource virtualization, requiring direct access to the physical hardware in order to reduce latency and computing overhead. For computers interconnected in a cluster, access to remote hardware resources often requires facilitation both in hardware and specialized drivers with virtualization support. This limits the availability of resources to specific devices and drivers that are supported by the virtualization technology being used, as well as what the interconnection technology supports. For PCI Express (PCIe) clusters, we have previously proposed Device Lending as a solution for enabling direct low latency access to remote devices. The method has extremely low computing overhead, and does not require any application- or device-specific distribution mechanisms. Any PCIe device, such as network cards, disks, and GPUs, can easily be shared among the connected hosts. In this work, we have extended our solution with support for a virtual machine (VM) hypervisor. Physical remote devices can be "passed through" to VM guests, enabling direct access to physical resources while still retaining the flexibility of virtualization. Additionally, we have also implemented multi-device support, enabling shortest-path peer-to-peer transfers between remote devices residing in different hosts. Our experimental results prove that multiple remote devices can be used, achieving bandwidth and latency close to native PCIe, and without requiring any additional support in device drivers. I/O intensive workloads run seamlessly using both local and remote resources. With our added VM and multi-device support, Device Lending offers highly customizable configurations of remote devices that can be dynamically reassigned and shared to optimize resource utilization, thus enabling a flexible composable I/O infrastructure for VMs as well as bare-metal machines.
27

Han, Shujun, Xiaodong Xu, Litong Zhao, and Xiaofeng Tao. "Joint time and power allocation for uplink cooperative non-orthogonal multiple access based massive machine-type communication network." International Journal of Distributed Sensor Networks 14, no. 5 (May 2018): 155014771877821. http://dx.doi.org/10.1177/1550147718778215.

Abstract:
Non-orthogonal multiple access is an essential promising solution to support large-scale connectivity required by massive machine-type communication scenario defined in the fifth generation (5G) mobile communication system. In this article, we study the problem of energy minimization in non-orthogonal multiple access–based massive machine-type communication network. Focusing on the massive machine-type communication scenario and assisted by grouping method, we propose an uplink cooperative non-orthogonal multiple access scheme with two phases, transmission phase and cooperation phase, for one uplink cooperative transmission period. Based on uplink cooperative non-orthogonal multiple access, the machine-type communication device with better channel condition and more residual energy will be selected as a group head, which acts as a relay assisting other machine-type communication devices to communicate. In the transmission phase, machine-type communication devices transmit data to the group head. Then, the group head transmits the received data with its own data to base station in the cooperation phase. Because the massive machine-type communication devices are low-cost dominant with limited battery, based on uplink cooperative non-orthogonal multiple access, we propose a joint time and power allocation algorithm to minimize the system energy consumption. Furthermore, the proposed joint time and power allocation algorithm includes dynamic group head selection and fractional transmit time allocation algorithms. Simulation results show that the proposed solution for uplink cooperative non-orthogonal multiple access–based massive machine-type communication network outperforms other schemes.
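The group-head selection rule summarized in this abstract (better channel condition plus more residual energy) can be sketched as a simple weighted score; the normalization and the weight alpha below are assumptions for illustration, not the paper's exact criterion.

```python
def select_group_head(devices, alpha: float = 0.5):
    """Pick the device maximizing a convex combination of normalized channel gain and residual energy."""
    max_gain = max(d["channel_gain"] for d in devices)
    max_energy = max(d["residual_energy"] for d in devices)

    def score(d):
        return (alpha * d["channel_gain"] / max_gain
                + (1 - alpha) * d["residual_energy"] / max_energy)

    return max(devices, key=score)


if __name__ == "__main__":
    group = [
        {"id": "mtc-1", "channel_gain": 0.8, "residual_energy": 0.3},
        {"id": "mtc-2", "channel_gain": 0.6, "residual_energy": 0.9},
        {"id": "mtc-3", "channel_gain": 0.4, "residual_energy": 0.5},
    ]
    print("group head:", select_group_head(group)["id"])
```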
28

Santos, P. V., José Carlos Alves, and João Canas Ferreira. "A Reconfigurable Custom Machine for Accelerating Cellular Genetic Algorithms." U.Porto Journal of Engineering 2, no. 2 (March 20, 2018): 2–13. http://dx.doi.org/10.24840/2183-6493_002.002_0002.

Abstract:
In this work we present a reconfigurable and scalable custom processor array for solving optimization problems using cellular genetic algorithms (cGAs), based on a regular fabric of processing nodes and local memories. Cellular genetic algorithms are a variant of the well-known genetic algorithm that can conveniently exploit the coarse-grain parallelism afforded by this architecture. To ease the design of the proposed computing engine for solving different optimization problems, a high-level synthesis design flow is proposed, where the problem-dependent operations of the algorithm are specified in C++ and synthesized to custom hardware. A spectrum allocation problem was used as a case study and successfully implemented in a Virtex-6 FPGA device, showing relevant figures for the computing acceleration.
29

Liberis, Edgar, and Nicholas D. Lane. "Differentiable Neural Network Pruning to Enable Smart Applications on Microcontrollers." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 4 (December 21, 2022): 1–19. http://dx.doi.org/10.1145/3569468.

Abstract:
Wearable, embedded, and IoT devices are a centrepiece of many ubiquitous computing applications, such as fitness tracking, health monitoring, home security and voice assistants. By gathering user data through a variety of sensors and leveraging machine learning (ML), applications can adapt their behaviour: in other words, devices become "smart". Such devices are typically powered by microcontroller units (MCUs). As MCUs continue to improve, smart devices become capable of performing a non-trivial amount of sensing and data processing, including machine learning inference, which results in a greater degree of user data privacy and autonomy, compared to offloading the execution of ML models to another device. Advanced predictive capabilities across many tasks make neural networks an attractive ML model for ubiquitous computing applications; however, on-device inference on MCUs remains extremely challenging. Orders of magnitude less storage, memory and computational ability, compared to what is typically required to execute neural networks, impose strict structural constraints on the network architecture and call for specialist model compression methodology. In this work, we present a differentiable structured pruning method for convolutional neural networks, which integrates a model's MCU-specific resource usage and parameter importance feedback to obtain highly compressed yet accurate models. Compared to related network pruning work, compressed models are more accurate due to better use of MCU resource budget, and compared to MCU specialist work, compressed models are produced faster. The user only needs to specify the amount of available computational resources and the pruning algorithm will automatically compress the network during training to satisfy them. We evaluate our methodology using benchmark image and audio classification tasks and find that it (a) improves key resource usage of neural networks up to 80x; (b) has little to no overhead or even improves model training time; (c) produces compressed models with matching or improved resource usage up to 1.4x in less time compared to prior MCU-specific model compression methods.
30

Dunmade, Israel. "Lifecycle assessment of a stapling machine." International Journal of Engineering & Technology 4, no. 1 (December 18, 2014): 12. http://dx.doi.org/10.14419/ijet.v4i1.3813.

Abstract:
A stapler is a mechanical device used to join two or more sheets of paper together by driving a thin metal staple through the sheets. Staplers are widely used in schools, offices, businesses, government and homes. The anticipated large quantity of waste that is disposed of annually presents a great risk of environmental pollution as well as opportunities for economically viable resource recycling. This study evaluates the potential environmental impacts of a stapling machine and its end-of-life management opportunities. The environmental lifecycle assessment (LCA) process was used for the evaluation. The assessment was implemented with the aid of SimaPro software version 7.3.3. Results of the analyses revealed that climate change and eutrophication are the significant potential environmental impacts. Each stapler has a Global Warming Potential of 1.265130 kg CO2-eq and a maximum Eutrophication Potential of 0.113067 kg O2-eq. Further examination also showed that most of the impacts are from material selection, product distribution, and end-of-life management of the stapling machine. This study provides insights on the potential environmental impacts of stapling machines and potential opportunities for improvements in their end-of-life management.
31

Salimi, Nayema, Antonio Gonzalez-Fiol, David Yanez, Kristen Fardelmann, Emily Harmon, Katherine Kohari, Sonya Abdel-Razeq, Urania Magriples, and Aymen Alian. "Ultrasound Image Quality Comparison Between a Handheld Ultrasound Transducer and Mid-Range Ultrasound Machine." POCUS Journal 7, no. 1 (April 21, 2022): 154–59. http://dx.doi.org/10.24908/pocus.v7i1.15052.

Abstract:
Objectives: Not all labor and delivery floors are equipped with ultrasound machines which can serve the needs of both obstetricians and anesthesiologists. This cross-sectional, blinded, randomized observational study compares the image resolution (RES), detail (DET), and quality (IQ) acquired by a handheld ultrasound, the Butterfly iQ, and a mid-range mobile device, the Sonosite M-turbo US (SU), to evaluate their use as a shared resource. Methods: Seventy-four pairs of ultrasound images were obtained for different imaging purposes: 29 for spine (Sp), 15 for transversus abdominis plane (TAP) and 30 for diagnostic obstetrics (OB) purposes. Each location was scanned by both the handheld and mid-range machine, resulting in 148 images. The images were graded by three blinded experienced sonographers on a 10-point Likert scale. Results: The mean differences for Sp imaging favored the handheld device (RES: -0.6 [(95% CI -1.1, -0.1), p = 0.017], DET: -0.8 [(95% CI -1.2, -0.3), p = 0.001] and IQ: -0.9 [(95% CI -1.3, -0.4), p = 0.001]). For the TAP images, there was no statistical difference in RES or IQ, but DET favored the handheld device (-0.8 [(95% CI -1.2, -0.5), p < 0.001]). For OB images, the SU was favored over the handheld device for RES, DET and IQ, with mean differences of 1.7 [(95% CI 1.2, 2.1), p < 0.001], 1.6 [(95% CI 1.2, 2.0), p < 0.001] and 1.1 [(95% CI 0.7, 1.5), p < 0.001], respectively. Conclusions: Where resources are limited, a handheld ultrasound may be considered as a potential low-cost alternative to a more expensive ultrasound machine for point of care ultrasonography, better suited to anesthetic vs. diagnostic obstetrical indications.
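For readers unfamiliar with the reporting format above (mean difference, 95% CI and p-value for paired scores), a minimal Python/SciPy sketch of a paired comparison is shown below. The Likert scores are made-up placeholders, not study data, and the study's actual statistical analysis may differ.

```python
# Paired comparison of per-image scores from two devices: mean difference,
# 95% confidence interval and paired t-test. All numbers are hypothetical.
import numpy as np
from scipy import stats

handheld = np.array([7, 8, 6, 9, 7, 8, 7, 6, 8, 7], dtype=float)   # hypothetical scores
midrange = np.array([8, 8, 7, 9, 8, 9, 8, 7, 9, 8], dtype=float)   # hypothetical scores

diff = midrange - handheld                      # positive favours the mid-range unit
t, p = stats.ttest_rel(midrange, handheld)      # paired t-test
sem = stats.sem(diff)                           # standard error of the mean difference
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1, loc=diff.mean(), scale=sem)
print(f"mean difference {diff.mean():.2f} (95% CI {ci_low:.2f}, {ci_high:.2f}), p = {p:.3f}")
```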
Styles APA, Harvard, Vancouver, ISO, etc.
32

Popyk, P. S. « Adaptability of reliability of seeding device with dispenser of directed action as direction of resource saving ». Naukovij žurnal «Tehnìka ta energetika» 11, no 3 (18 novembre 2020) : 163–71. http://dx.doi.org/10.31548/machenergy2020.03.163.

Texte intégral
Résumé :
The article analyzes the application of the latest precision-seeding technologies, using the example of a seeding machine with a directional metering unit. To this end, we analyzed the cost of sowing material, a parameter of agricultural production that affects its efficiency. The object of the study is a sowing machine with a directional dispenser, an innovative design solution intended to improve agricultural production on the basis of resource conservation. As a result of the new dispenser design, the accuracy of the technological process of forming a regular single-grain flow is increased. The relationship between the distance from the seed to the cell and the suction force acting on the seed is established, and equations for the dynamics of seed movement and for the seed's exposure time to a cell are obtained. The forces acting on the seeds as they are moved by the directional dispenser are analyzed. Rational operating phases of the sowing device with the directional dispenser are substantiated and its operating parameters are defined.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Johnson, Anju P., Hussain Al-Aqrabi et Richard Hill. « Bio-Inspired Approaches to Safety and Security in IoT-Enabled Cyber-Physical Systems ». Sensors 20, no 3 (5 février 2020) : 844. http://dx.doi.org/10.3390/s20030844.

Texte intégral
Résumé :
Internet of Things (IoT) and Cyber-Physical Systems (CPS) have profoundly influenced the way individuals and enterprises interact with the world. Although attacks on IoT devices are becoming more commonplace, security metrics often focus on software, network, and cloud security. For CPS systems employed in IoT applications, the implementation of hardware security is crucial. The identity of electronic circuits measured in terms of device parameters serves as a fingerprint. Estimating the parameters of this fingerprint assists the identification and prevention of Trojan attacks in a CPS. We demonstrate a bio-inspired approach for hardware Trojan detection using unsupervised learning methods. The bio-inspired principles of pattern identification use a Spiking Neural Network (SNN), and glial cells form the basis of this work. When hardware device parameters are in an acceptable range, the design produces a stable firing pattern. When unbalanced, the firing rate reduces to zero, indicating the presence of a Trojan. This network is tunable to accommodate natural variations in device parameters and to avoid false triggering of Trojan alerts. The tolerance is tuned using bio-inspired principles for various security requirements, such as forming high-alert systems for safety-critical missions. The Trojan detection circuit is resilient to a range of faults and attacks, both intentional and unintentional. Also, we devise a design-for-trust architecture by developing a bio-inspired device-locking mechanism. The proposed architecture is implemented on a Xilinx Artix-7 Field Programmable Gate Array (FPGA) device. Results demonstrate the suitability of the proposal for resource-constrained environments with minimal hardware and power dissipation profiles. The design is tested with a wide range of device parameters to demonstrate the effectiveness of Trojan detection. This work serves as a new approach to enable secure CPSs and to employ bio-inspired unsupervised machine intelligence.
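The firing-rate readout described in this abstract can be pictured with a small behavioural sketch: a leaky integrate-and-fire neuron whose drive collapses once a monitored device parameter leaves a tolerance band, so the cell fires steadily in the acceptable range and goes silent otherwise. This is only an illustration of the principle, not the paper's SNN/glial FPGA design; the drive mapping, constants and deviations are assumptions.

```python
# Behavioural sketch of a tolerance-band "Trojan alert": a leaky integrate-and-fire
# neuron keeps firing while the parameter deviation stays inside the band and
# falls silent when it drifts out. All constants and the mapping are assumed.
def firing_rate(deviation: float, tolerance: float = 0.1,
                sim_steps: int = 1000, dt: float = 1e-3) -> float:
    tau, v_th, v_reset = 20e-3, 1.0, 0.0
    excess = max(0.0, abs(deviation) - tolerance)
    drive = max(0.0, 2.0 - 4.0 * excess / tolerance)   # assumed mapping; >1.0 lets the cell fire
    v, spikes = 0.0, 0
    for _ in range(sim_steps):
        v += dt * (-v / tau + drive / tau)             # leaky integration
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes / (sim_steps * dt)                   # spikes per second

for d in (0.0, 0.08, 0.2):     # 0.2 exceeds the tolerance band and silences the neuron
    print(f"deviation {d:.2f}: {firing_rate(d):.0f} Hz")
```

The `tolerance` argument plays the role of the tunable sensitivity mentioned in the abstract: tightening it turns the same readout into a high-alert detector.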
Styles APA, Harvard, Vancouver, ISO, etc.
34

Pierangeli, Davide, Giulia Marcucci, Daniel Brunner et Claudio Conti. « Noise-enhanced spatial-photonic Ising machine ». Nanophotonics 9, no 13 (23 mai 2020) : 4109–16. http://dx.doi.org/10.1515/nanoph-2020-0119.

Texte intégral
Résumé :
Ising machines are novel computing devices for the energy minimization of Ising models. These combinatorial optimization problems are of paramount importance for science and technology, but remain difficult to tackle at large scale by conventional electronics. Recently, various photonics-based Ising machines demonstrated fast computation of an Ising ground state by processing data through multiple temporal or spatial optical channels. Experimental noise acts as a detrimental effect in many of these devices. In contrast, here we demonstrate that an optimal noise level enhances the performance of spatial-photonic Ising machines on frustrated spin problems. By controlling the error rate at the detection stage, we introduce a noisy-feedback mechanism in an Ising machine based on spatial light modulation. We investigate the device performance on systems with hundreds of individually addressable spins with all-to-all couplings and find an increased success probability at a specific noise level. The optimal noise amplitude depends on graph properties and size, indicating an additional tunable parameter that helps in exploring complex energy landscapes and in avoiding getting stuck in local minima. Our experimental results identify noise as a potentially valuable resource for optical computing. This concept, which also holds in different nanophotonic neural networks, may be crucial in developing novel hardware with optics-enabled parallel architectures for large-scale optimization.
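The role of noise can be illustrated with a toy numerical sketch: greedy single-spin flips on an Ising energy get stuck in local minima, while a small random-flip probability, standing in for the controlled detection error, lets the search escape them. This is not a model of the optical hardware; the couplings and noise levels are arbitrary.

```python
# Toy sketch: minimizing H = -(1/2) s^T J s by greedy single-spin flips, with a
# tunable random-flip probability acting as a stand-in for "detection errors".
import numpy as np

rng = np.random.default_rng(0)

def ising_energy(J, s):
    return -0.5 * s @ J @ s

def minimize(J, noise, steps=10000):
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    best = ising_energy(J, s)
    for _ in range(steps):
        i = rng.integers(n)
        delta = 2 * s[i] * (J[i] @ s)          # energy change if spin i is flipped
        if delta < 0 or rng.random() < noise:  # greedy flip, or a noisy "error" flip
            s[i] = -s[i]
        best = min(best, ising_energy(J, s))
    return best

n = 60
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)  # frustrated couplings
for p in (0.0, 0.02, 0.3):     # zero, moderate and excessive noise
    print(f"noise {p:.2f}: best energy {minimize(J, p):.1f}")
```

Whether the moderate noise level wins depends on the instance, which mirrors the abstract's point that the optimal noise amplitude is graph-dependent.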
Styles APA, Harvard, Vancouver, ISO, etc.
35

Jo, Heeseung, Jinkyu Jeong, Myoungho Lee et Dong Hoon Choi. « Exploiting GPUs in Virtual Machine for BioCloud ». BioMed Research International 2013 (2013) : 1–11. http://dx.doi.org/10.1155/2013/939460.

Texte intégral
Résumé :
Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move to the cloud to enhance their computational performance and utilize virtually unlimited cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By enabling each VM to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM on demand, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.
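The on-demand hot plug-in/out multiplexing described here can be pictured with a small orchestration sketch around the standard Xen toolstack commands `xl pci-attach` and `xl pci-detach`. The domain names and PCI address are placeholders, the GPU must already be prepared for passthrough (for example, bound to pciback), and this is not the authors' BioCloud implementation.

```python
# Orchestration sketch: attach a passthrough GPU to one Xen VM, let it run a job,
# then detach it and hand it to the next VM. Requires root and the Xen toolstack.
import subprocess
import time

GPU_BDF = "0000:03:00.0"                    # hypothetical PCI address of the GPU

def xl(*args: str) -> None:
    subprocess.run(["xl", *args], check=True)

def run_on(domain: str, seconds: int) -> None:
    xl("pci-attach", domain, GPU_BDF)       # hot plug the GPU into the guest
    try:
        time.sleep(seconds)                 # stand-in for the guest's GPU workload
    finally:
        xl("pci-detach", domain, GPU_BDF)   # hot unplug so the next VM can use it

for vm in ("bio-vm-1", "bio-vm-2"):         # hypothetical guest domains
    run_on(vm, seconds=60)
```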
Styles APA, Harvard, Vancouver, ISO, etc.
36

Wolfenden, Joseph, Alexandra S. Alexandrova, Frank Jackson, Storm Mathisen, Geoffrey Morris, Thomas H. Pacey, Narender Kumar, Monika Yadav, Angus Jones et Carsten P. Welsch. « Cherenkov Radiation in Optical Fibres as a Versatile Machine Protection System in Particle Accelerators ». Sensors 23, no 4 (16 février 2023) : 2248. http://dx.doi.org/10.3390/s23042248.

Texte intégral
Résumé :
Machine protection systems in high power particle accelerators are crucial. They can detect, prevent, and respond to events which would otherwise cause damage and significant downtime to accelerator infrastructure. Current systems are often resource heavy and operationally expensive, reacting after an event has begun to cause damage; this leads to facilities only covering certain operational modes and setting lower limits on machine performance. Presented here is a new type of machine protection system based upon optical fibres, which would be complementary to existing systems, elevating existing performance. These fibres are laid along an accelerator beam line in lengths of ∼100 m, providing continuous coverage over this distance. When relativistic particles pass through these fibres, they generate Cherenkov radiation in the optical spectrum. This radiation propagates in both directions along the fibre and can be detected at both ends. A calibration based technique allows the location of the Cherenkov radiation source to be pinpointed to within 0.5 m with a resolution of 1 m. This measurement mechanism, from a single device, has multiple applications within an accelerator facility. These include beam loss location monitoring, RF breakdown prediction, and quench prevention. Detailed here are the application processes and results from measurements, which provide proof of concept for this device for both beam loss monitoring and RF breakdown detection. Furthermore, highlighted are the current challenges for future innovation.
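The two-ended detection scheme lends itself to a simple timing picture: a Cherenkov burst generated at position z reaches the two fibre ends at times whose difference is proportional to its offset from the midpoint. The sketch below implements that textbook relation; it is an illustration only, not the paper's calibration-based technique, and the group index is an assumed value.

```python
# Locate a Cherenkov source along a fibre of length L from arrival times at both ends.
C = 299_792_458.0          # vacuum speed of light, m/s
N_GROUP = 1.47             # assumed group index of a silica fibre

def source_position(t_near: float, t_far: float, length: float) -> float:
    """Position in metres from the 'near' end: z = (L + v*(t_near - t_far)) / 2."""
    v = C / N_GROUP
    return 0.5 * (length + v * (t_near - t_far))

# Example: a 100 m fibre where the near detector fires 245 ns before the far one.
print(f"{source_position(t_near=0.0, t_far=245e-9, length=100.0):.1f} m from the near end")
```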
Styles APA, Harvard, Vancouver, ISO, etc.
37

Katsidimas, Ioannis, Vassilis Kostopoulos, Thanasis Kotzakolios, Sotiris E. Nikoletseas, Stefanos H. Panagiotou et Constantinos Tsakonas. « An Impact Localization Solution Using Embedded Intelligence—Methodology and Experimental Verification via a Resource-Constrained IoT Device ». Sensors 23, no 2 (12 janvier 2023) : 896. http://dx.doi.org/10.3390/s23020896.

Texte intégral
Résumé :
Recent advances in both hardware and software have facilitated the embedded intelligence (EI) research field and enabled the integration of machine learning and decision-making in resource-scarce IoT devices and systems, realizing “conscious” and self-explanatory objects (smart objects). In the context of the broad use of WSNs in advanced IoT applications, this is the first work to provide an extreme-edge system addressing structural health monitoring (SHM) on a polymethyl methacrylate (PMMA) thin plate. To the best of our knowledge, state-of-the-art solutions primarily utilize impact positioning methods based on the time of arrival of the stress wave, while in the last decade machine learning data analysis has been performed by equipment that is more expensive and resource-abundant than general/development-purpose IoT devices, both for the collection and the inference stages of the monitoring system. In contrast to existing systems, we propose a methodology and a system, implemented on a low-cost device, that performs an online and on-device impact localization service from an agnostic perspective regarding the material and the sensors’ location (as none of those attributes are used). To this end, a design of experiments and the corresponding methodology to build an experimental time-series dataset for impact detection and localization are proposed, using ceramic piezoelectric transducers (PZTs). The system is excited with a steel ball released from varying heights. Based on TinyML technology for embedding intelligence in low-power devices, we implement and validate random forest and shallow neural network models that localize, in real time (less than 400 ms latency), any impacts occurring on the structure, achieving higher than 90% accuracy.
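A minimal sketch of the kind of sensor-agnostic pipeline described here: fixed-length windows of piezo samples are reduced to simple statistical features and classified with a random forest. The data below are synthetic placeholders, and the feature set is an assumption for illustration, not the authors' exact design.

```python
# Windowed piezo time series -> statistical features -> random forest impact-zone classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, window_len, n_zones = 300, 256, 4
X_raw = rng.normal(size=(n_windows, window_len))   # stand-in piezo windows
y = rng.integers(n_zones, size=n_windows)          # stand-in impact-zone labels

def features(w: np.ndarray) -> np.ndarray:
    # Energy/shape summaries that assume nothing about material or sensor placement.
    return np.array([w.max(), w.min(), w.std(), np.abs(w).mean(), np.argmax(np.abs(w))])

X = np.vstack([features(w) for w in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")   # ~chance here, since the data are random
```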
Styles APA, Harvard, Vancouver, ISO, etc.
38

James, Bonney Lee, Sumsum P. Sunny, Andrew Emon Heidari, Ravindra D. Ramanjinappa, Tracie Lam, Anne V. Tran, Sandeep Kankanala et al. « Validation of a Point-of-Care Optical Coherence Tomography Device with Machine Learning Algorithm for Detection of Oral Potentially Malignant and Malignant Lesions ». Cancers 13, no 14 (17 juillet 2021) : 3583. http://dx.doi.org/10.3390/cancers13143583.

Texte intégral
Résumé :
Non-invasive strategies that can identify oral malignant lesions and dysplastic oral potentially malignant lesions (OPML) are necessary in cancer screening and long-term surveillance. Optical coherence tomography (OCT) can be a rapid, real-time and non-invasive imaging method for frequent patient surveillance. Here, we report the validation of a portable, robust OCT device in 232 patients (347 lesions) in different clinical settings. The device, deployed with algorithm-based automated diagnosis, showed efficacy in delineating benign and normal oral lesions (n = 151), OPML (n = 121), and malignant lesions (n = 75) in community and tertiary care settings. This study showed that OCT images analyzed by the automated image processing algorithm could distinguish dysplastic OPML and malignant lesions with a sensitivity of 95% and 93%, respectively. Furthermore, we explored the ability of multiple (n = 14) artificial neural network (ANN) based feature extraction techniques to delineate high-grade OPML (moderate/severe dysplasia). The support vector machine (SVM) model built over the ANN features delineated high-grade dysplasia with a sensitivity of 83%, which in turn can be employed to triage patients for tertiary care. The study provides evidence for the utility of the robust and low-cost OCT instrument as a point-of-care device in resource-constrained settings and for the potential clinical application of the device in screening and surveillance of oral cancer.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Kaur Dhaliwal, Japman, Mohd Naseem, Aadil Ahamad Lawaye et Ehtesham Husain Abbasi. « Fibonacci Series based Virtual Machine Selection for Load Balancing in Cloud Computing ». International Journal of Engineering & ; Technology 7, no 3.12 (20 juillet 2018) : 1071. http://dx.doi.org/10.14419/ijet.v7i3.12.17634.

Texte intégral
Résumé :
The rapid advancement of the internet has given birth to many technologies. Cloud computing is one of the fastest-emerging technologies; it aims to process large-scale data using the computational capabilities of shared resources and supports distributed parallel processing. With cloud computing, data can be processed on a pay-per-use basis, which eliminates the need for individual users to own dedicated devices. As cloud computing grows, more users are attracted to it. However, providing efficient execution time and load distribution is a major challenge in distributed systems. In our approach, a weighted round-robin algorithm is combined with the benefits of the Fibonacci sequence, which results in better execution time than static round robin. Relevant virtual machines are chosen and jobs are assigned to them. In addition, the number of resources utilized concurrently is reduced, which saves resources and thereby reduces cost. There is no need to deploy new resources, as resources such as virtual machines are already available.
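The combination of weighted round robin with the Fibonacci sequence can be sketched as follows. Since the abstract does not specify how weights map to virtual machines, the mapping below (later, presumably more capable, VMs receive larger Fibonacci weights) is an assumption for illustration only.

```python
# Weighted round-robin dispatcher whose weights come from the Fibonacci sequence.
from itertools import cycle

def fibonacci(n: int) -> list:
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def build_schedule(vms: list) -> cycle:
    # Assumed mapping: later VMs get larger Fibonacci weights, i.e. more slots per cycle.
    weights = fibonacci(len(vms))
    slots = [vm for vm, w in zip(vms, weights) for _ in range(w)]
    return cycle(slots)

schedule = build_schedule(["vm-small", "vm-medium", "vm-large"])  # weights 1, 1, 2
for i in range(8):
    print(f"job-{i} -> {next(schedule)}")
```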
Styles APA, Harvard, Vancouver, ISO, etc.
40

Meng, Zhaozong, et Joan Lu. « Integrating Technical Advance in Mobile Devices to Enhance the Information Retrieval in Mobile Learning ». International Journal of Information Retrieval Research 3, no 3 (juillet 2013) : 1–25. http://dx.doi.org/10.4018/ijirr.2013070101.

Texte intégral
Résumé :
Wireless technologies have long been used in learning activities to promote interaction between teachers and learners. Recent advances in mobile technologies have created new opportunities to improve the flexibility, efficiency, and functionality of learning interaction systems. This investigation identifies the weaknesses of existing systems and integrates emerging mobile technologies to establish an open interaction framework that effectively enhances interaction using student-facing mobile devices and public wireless infrastructure. The main work of this investigation comprises: (1) a teacher-learner response model for mobile-based interaction, proposed and described with state-machine logic; (2) a presentation-content retrieval mechanism designed to efficiently utilise the limited computation resources; (3) device-independence and context-aware techniques integrated to create cross-platform applications with mobile device features; (4) an open media framework built for flexible learning material distribution and question organization. A lightweight, mobile-oriented Web-based Wireless Response System (mobile-WRS) is implemented as a case study. In-house testing and classroom application of the mobile-WRS in universities demonstrate that the proposed system outperforms peer works in usability, interface, operational efficiency, learning material distribution, results presentation, and performance assessment.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Meng, Zhaozong, et Joan Lu. « Integrating Technical Advance in Mobile Devices to Enhance the Information Retrieval in Mobile Learning ». International Journal of Information Retrieval Research 4, no 1 (janvier 2014) : 61–85. http://dx.doi.org/10.4018/ijirr.2014010104.

Texte intégral
Résumé :
Wireless technology has long been used for interaction between teachers and learners in learning activities. Recent advances in mobile technologies have created new opportunities to improve the flexibility, efficiency, and functionality of learning interaction systems. This investigation identifies the weaknesses of existing systems and integrates emerging mobile technologies to establish an open interaction framework that effectively enhances interaction using student-facing mobile devices and public wireless infrastructure. The main work of this investigation comprises: (1) a teacher-learner response model for mobile-based interaction, proposed and described with state-machine logic; (2) a presentation-content retrieval mechanism designed to efficiently utilize the limited resources; (3) device-independent and context-aware techniques integrated to create cross-platform applications with mobile device features; (4) an open media framework created for flexible learning material distribution and question organization. A lightweight, mobile-oriented web-based wireless response system (mobile-WRS) is implemented as a case study. In-house testing and classroom application of the mobile-WRS in universities demonstrate that the proposed system outperforms peer works in usability, interface, operational efficiency, learning material distribution, results presentation and performance assessment.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Tabakov, Petr A., et Aleksey P. Tabakov. « Device for checking crankshaft bending and its straightening ». Tekhnicheskiy servis mashin, no 2 (10 juin 2020) : 96–101. http://dx.doi.org/10.22314/2618-8287-2020-58-2-96-101.

Texte intégral
Résumé :
Before grinding, crankshafts are checked for curvature; the permissible runout of the middle main journal should be within 0.03-0.05 mm. Straightening is performed on a hydraulic press, with the crankshaft resting on prisms at the outer main journals, and to check the degree of straightening the crankshaft has to be moved to the grinding machine and fixed on centers. Such devices have many disadvantages. (Research purpose) The research purpose is to expand the functionality of the crankshaft straightening device and to develop drawings and a patent application for a device that can check the bending of the crankshaft on centers as well as straighten it with a fixed reverse bend, which eliminates repeated straightening and increases the life of the shaft and the service life of the internal combustion engine. (Materials and methods) The article proposes upgraded, patent-protected equipment that increases crankshaft life, raises labor productivity by 3-4 times and improves straightening accuracy. (Results and discussion) The authors prepared drawings and obtained patent No. 191590, dated August 14, 2019, for a device that checks shaft bending on centers and performs straightening under the press. The article describes how the equipment works. (Conclusions) Checking the crankshaft for bending on centers and straightening it with a fixed reverse bend on a single device significantly improves labor productivity and straightening accuracy. More than 99 percent of crankshafts straightened by the proposed method are processed without breakdowns.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Ramana, Kadiyala, Rajanikanth Aluvalu, Vinit Kumar Gunjan, Ninni Singh et M. Nageswara Prasadhu. « Multipath Transmission Control Protocol for Live Virtual Machine Migration in the Cloud Environment ». Wireless Communications and Mobile Computing 2022 (22 avril 2022) : 1–14. http://dx.doi.org/10.1155/2022/2060875.

Texte intégral
Résumé :
For mobile cloud computing (MCC), a local virtual machine (VM) based cloudlet is proposed to improve the performance of real-time, resource-intensive mobile applications. When a mobile device (MD) discovers a cloudlet nearby, it takes some time to set up a VM inside the cloudlet before data offloading from the MD to the VM can begin. Live virtual machine migration refers to the process of transferring a running VM from one host to another without interrupting its state. In theory, the live migration process must not render the instance being migrated unavailable during its execution; in practice, however, there is always a service downtime associated with the process. This paper addresses the need to reduce service downtime during live VM migration in the cloud and provides a solution by implementing and optimizing multipath transmission control protocol (MPTCP) capability within an Infrastructure as a Service (IaaS) cloud to increase the efficiency of live migration. We also introduce an algorithm, the α-best fit algorithm, to optimize bandwidth usage and effectively streamline MPTCP performance.
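As a rough illustration of best-fit selection in this setting, the sketch below picks, for a given bandwidth demand, the path whose spare capacity fits most tightly. It is a generic best-fit routine, not the authors' α-best fit algorithm (the abstract does not describe the role of α), and all names and numbers are hypothetical.

```python
# Generic best-fit selection: choose the path with the smallest spare bandwidth
# that still satisfies the demand, leaving larger paths free for larger flows.
from typing import Dict, Optional

def best_fit(paths: Dict[str, float], demand: float) -> Optional[str]:
    feasible = {p: bw for p, bw in paths.items() if bw >= demand}
    if not feasible:
        return None
    return min(feasible, key=feasible.get)

spare_bw = {"subflow-A": 400.0, "subflow-B": 150.0, "subflow-C": 950.0}  # Mbps, hypothetical
print(best_fit(spare_bw, demand=120.0))   # -> subflow-B, the tightest fit
```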
Styles APA, Harvard, Vancouver, ISO, etc.
44

Kwon, Jisu, et Daejin Park. « Hardware/Software Co-Design for TinyML Voice-Recognition Application on Resource Frugal Edge Devices ». Applied Sciences 11, no 22 (22 novembre 2021) : 11073. http://dx.doi.org/10.3390/app112211073.

Texte intégral
Résumé :
On-device artificial intelligence has attracted attention globally, and attempts to combine the Internet of Things and TinyML (tiny machine learning) applications are increasing. Although most edge devices have limited resources, time and energy costs are important when running TinyML applications. In this paper, we propose a structure in which the preprocessing of externally input data in a TinyML application, normally performed in software on the microcontroller unit of an edge device, is offloaded to hardware. Specifically, resistor–transistor logic, which not only performs windowing using the Hann function but also acquires the raw audio data, is added to the inter-integrated circuit sound (I2S) module that collects audio data in the voice-recognition application. As a result of the experiment, the windowing function could be excluded from the TinyML application on the embedded board. When the length of the hardware-implemented Hann window is 80 and the quantization degree is 2⁻⁵, this exclusion reduces the execution time of the front-end function and the energy consumption by 8.06% and 3.27%, respectively.
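The front-end step moved into hardware here is ordinary Hann windowing, w[n] = 0.5 * (1 - cos(2*pi*n / (N - 1))). The sketch below computes it in floating point for an 80-sample frame; the paper's hardware uses quantized coefficients (the abstract cites a quantization degree of 2⁻⁵), which is not modelled here, and the audio frame is a random stand-in.

```python
# Hann windowing of an 80-sample audio frame, the step the MCU no longer computes in software.
import numpy as np

N = 80                                              # window length used in the experiment
n = np.arange(N)
hann = 0.5 * (1.0 - np.cos(2.0 * np.pi * n / (N - 1)))

frame = np.random.default_rng(0).normal(size=N)     # stand-in for raw I2S audio samples
windowed = frame * hann                             # tapered frame handed to the TinyML front end
print(windowed[:5])
```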
Styles APA, Harvard, Vancouver, ISO, etc.
45

Khalfaoui, Sameh, Jean Leneutre, Arthur Villard, Ivan Gazeau, Jingxuan Ma et Pascal Urien. « Security Analysis of Machine Learning-Based PUF Enrollment Protocols : A Review ». Sensors 21, no 24 (16 décembre 2021) : 8415. http://dx.doi.org/10.3390/s21248415.

Texte intégral
Résumé :
The demand for Internet of Things services is increasing exponentially, and consequently a large number of devices are being deployed. To efficiently authenticate these objects, the use of physical unclonable functions (PUFs) has been introduced as a promising solution for the resource-constrained nature of these devices. The use of machine learning PUF models has been recently proposed to authenticate the IoT objects while reducing the storage space requirement for each device. Nonetheless, the use of mathematically clonable PUFs requires careful design of the enrollment process. Furthermore, the secrecy of the machine learning models used for PUFs and the scenario of leakage of sensitive information to an adversary due to an insider threat within the organization have not been discussed. In this paper, we review the state-of-the-art model-based PUF enrollment protocols. We identify two architectures of enrollment protocols based on the participating entities and the building blocks that are relevant to the security of the authentication procedure. In addition, we discuss their respective weaknesses with respect to insider and outsider threats. Our work serves as a comprehensive overview of the ML PUF-based methods and provides design guidelines for future enrollment protocol designers.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Gritsenko, Alexander Vladimirovich, Konstantin Vyacheslavovich Glemba et Grigoriy Nikolaevich Salimonenko. « Engine diagnostics by selective gas analysis of exhaust gases ». Transport of the Urals, no 2 (2022) : 84–91. http://dx.doi.org/10.20291/1815-9400-2022-2-84-91.

Texte intégral
Résumé :
The paper considers a method of individual gas analysis for vehicles using promising means, including an oscilloscope and an electronically controlled load device. During the research, the lower and upper limits of variation of the toxicity parameters for modern gasoline cars, regardless of the number of cylinders, were theoretically justified. This generalized theoretical model can be used by machine-building plants and car-service enterprises to assess the current state of catalytic converters, spark plugs and electromagnetic injectors, as well as to forecast the remaining service life over upcoming use cycles.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Chen, Chunlei, Peng Zhang, Huixiang Zhang, Jiangyan Dai, Yugen Yi, Huihui Zhang et Yonghui Zhang. « Deep Learning on Computational-Resource-Limited Platforms : A Survey ». Mobile Information Systems 2020 (1 mars 2020) : 1–19. http://dx.doi.org/10.1155/2020/8454327.

Texte intégral
Résumé :
Nowadays, the Internet of Things (IoT) gives rise to a huge amount of data. IoT nodes equipped with smart sensors can immediately extract meaningful knowledge from the data through machine learning technologies. Deep learning (DL) is constantly contributing significant progress to smart sensing due to its dramatic superiority over traditional machine learning. The promising prospect of wide-ranging applications puts forward demands for the ubiquitous deployment of DL in various contexts. As a result, performing DL on mobile or embedded platforms is becoming a common requirement. Nevertheless, a typical DL application can easily exhaust an embedded or mobile device owing to a large number of multiply-accumulate (MAC) operations and memory access operations. Consequently, it is a challenging task to bridge the gap between deep learning and resource-limited platforms. We summarize typical applications of resource-limited deep learning and point out that deep learning is an indispensable impetus of pervasive computing. Subsequently, we explore the underlying reasons for the high computational overhead of DL by reviewing fundamental concepts of a neural network, including capacity, generalization, and backpropagation. Guided by these concepts, we investigate the principles of representative research works, as well as three types of solutions: algorithmic design, computational optimization, and hardware revolution. In pursuit of these solutions, we identify challenges to be addressed.
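To make the "large number of MAC operations" concrete, here is a back-of-the-envelope count for a single standard convolution layer; the layer dimensions are arbitrary examples, not taken from the survey.

```python
# Multiply-accumulate (MAC) count of one standard convolution layer:
# MACs = H_out * W_out * C_out * C_in * k * k
def conv_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int) -> int:
    return h_out * w_out * c_out * c_in * k * k

# e.g. a 3x3 convolution producing a 112x112x64 feature map from 32 input channels
macs = conv_macs(112, 112, 32, 64, 3)
print(f"{macs:,} MACs for one layer")   # about 231 million, before the rest of the network
```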
Styles APA, Harvard, Vancouver, ISO, etc.
48

Li, Xuejing, Yajuan Qin, Huachun Zhou, Du Chen, Shujie Yang et Zhewei Zhang. « An Intelligent Adaptive Algorithm for Servers Balancing and Tasks Scheduling over Mobile Fog Computing Networks ». Wireless Communications and Mobile Computing 2020 (23 juillet 2020) : 1–16. http://dx.doi.org/10.1155/2020/8863865.

Texte intégral
Résumé :
With the increasing popularity of terminals and applications, the corresponding service requirements have been growing significantly. In order to improve the quality of service on resource-constrained user devices and reduce the large latency of service migration caused by long distances in cloud computing, mobile fog computing (MFC) has been proposed to provide supplementary resources by adding a fog layer with several servers near the user devices. Focusing on cloud-aware MFC networks with multiple servers, we formulate a problem whose optimization objective is to improve the quality of service, relieve the constrained resources of the user device, and balance the workload of the participating servers. Taking into consideration the data size of the remaining tasks, the power consumption of the user device, and the appended workload of the participating servers, this paper designs a machine learning-based algorithm which aims to generate intelligent adaptive strategies for load balancing of collaborative servers and dynamic scheduling of sequential tasks. Based on the proposed algorithm and software-defined networking technology, the tasks can be executed cooperatively by the user device and the servers in the MFC network. In addition, we conducted experiments to verify the algorithm's effectiveness under different numerical parameters, including task arrival rate, available server workload, and wireless channel condition. The simulation results show that the proposed intelligent adaptive algorithm achieves superior performance in terms of latency and power consumption compared to candidate algorithms.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Arri, Harwant Singh, Ramandeep Singh, Sudan Jha, Deepak Prashar, Gyanendra Prasad Joshi et Ill Chul Doo. « Optimized Task Group Aggregation-Based Overflow Handling on Fog Computing Environment Using Neural Computing ». Mathematics 9, no 19 (7 octobre 2021) : 2522. http://dx.doi.org/10.3390/math9192522.

Texte intégral
Résumé :
Scheduling resources or jobs on a fog computing network in a manner that increases device efficacy and throughput, reduces response time, and keeps the system well balanced is a non-deterministic challenge. Using machine learning as a component of neural computing, we developed an improved Task Group Aggregation (TGA) overflow handling system for fog computing environments. By using TGA in conjunction with an Artificial Neural Network (ANN), we may assess the model's QoS characteristics to detect an overloaded server and then move the model's data to virtual machines (VMs). Overloaded and underloaded virtual machines are balanced according to parameters such as CPU, memory, and bandwidth to control fog computing overflow concerns with the help of the ANN and the machine learning concept. Additionally, the Artificial Bee Colony (ABC) algorithm, a neural computing technique, is employed as an optimization method to separate services and users according to their individual characteristics. The response time and success rate were both enhanced using the newly proposed optimized ANN-based TGA algorithm. Compared to the present work's minimal reaction time, the total improvement in average success rate is about 3.6189 percent, and Resource Scheduling Efficiency has improved by 3.9832 percent. Virtual machine efficiency for resource scheduling, average success rate, average task completion success rate, and virtual machine response time are all improved. The proposed TGA-based overflow handling on a fog computing domain enhances response time compared to the current approaches. Fog computing, for example, demonstrates how artificial intelligence-based systems can be made more efficient.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Nayyar, Anand, Pijush Kanti Dutta Pramankit et Rajni Mohana. « Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems : Advancements, Applications, and Solutions ». Scalable Computing : Practice and Experience 21, no 3 (1 août 2020) : 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Texte intégral
Résumé :
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities (e.g., computation, communication, and control) and the physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather it can be said that IoT is the base or enabling technology for CPS and CPS is considered as the grownup development of IoT, completing the IoT notion and vision. Both are merged into closed-loop, providing mechanisms for conceptualizing, and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS that includes traditional embedded and control systems are supposed to be transformed by the evolving and innovative methodologies and engineering of IoT. Several applications areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around the uniquely identifiable and internet-connected devices and embedded systems; whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborated together to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real‐time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuous evolving IoT, has posed several challenges. For example, the enormous amount of data collected from the physical things makes it difficult for Big Data management and analytics that includes data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. 
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions are proposed, there are still huge possibilities for innovative propositions to make IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We had received 30 research papers and out of which 14 papers are selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud described the mechanisms for monitoring by using the concept of reinforcement learning and prediction of the cloud resources, which forms the critical parts of cloud expertise in support of controlling and evolution of the IT resources and has been implemented using LSTM. The proper utilization of the resources will generate revenues to the provider and also increases the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features which are used in state of art speaker recognition models and analyse variants of Mel frequency cepstrum coefficients (MFCC) predominantly used in feature extraction which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from knowledge graph to identify the domain knowledge of users, items, and relationships among these, knowledge graph is a labelled multidimensional directed graph that represents the relationship among the users and the items. The proposed approach uses about 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system expects under the users' interest that is beneficial for both seller and buyer. The proposed system is compared with baseline methods in area of recommendation system using three parameters: precision, recall and NDGA through online and offline evaluation studies with user data and it is observed that proposed system is better as compared to other baseline systems. Benbrahim et al. 
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using Deep Learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results observed that the proposed system is accurate on the order of 94.06% on the validation set and 93.93% on the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is also minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server. User authentication related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on User and Cyber-Physical Devices. Sumit Kumar and Jaspreet Singh in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" presented a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated research challenges and the trade-off between security and privacy in the area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the technology used in self-driving cars for perception. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app. The approach performed better than the conventional method as the objects were clearer in the frame, which made it easier for the object detection to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" performed a literature review on IoV and Trust and proposed a Hybrid Trust model that separates the malicious and trusted nodes to secure the interaction of vehicles in IoV. To test the model, simulation was conducted on varied threshold values. The results observed that the PDR of a trusted node is 0.63, which is higher than the PDR of a malicious node at 0.15. On the basis of PDR, number of available hops and Trust Dynamics, the malicious nodes are identified and discarded.
Saniya Zahoor and Roohie Naaz Mir in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications" highlighted the recent studies and related information in data management for pervasive IoT applications having limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The comparison of the proposed framework is done with the sequential approach through simulations and empirical data analysis. The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al. in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services" presented a review of video analysis over the LVS & VoDS video application. The researchers compared different messaging brokers which help to deliver each frame in a distributed pipeline, to analyze the impact of two message brokers for video analysis to achieve LVS & VoDS using AWS Elemental services. In addition, the researchers also analysed the Kafka configuration parameter for reliability on full-service-mode. Saniya Zahoor and Roohie Naaz Mir in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks" presented the design and modeling of a resource-constrained BAN system and also discussed the various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources such as energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The comparison of the AEC approach is done with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review" outlined major authentication issues in IoT, mapped their existing solutions and further tabulated Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strength of Blockchain technology, work done in this field, its adoption in the COVID-19 fight, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology and tabulated some future directions to stimulate attempts in this area. Bhadwal et al. in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach" proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are produced in the form of two confusion matrices wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.
The semantic evaluation of 100 tokens produces an accuracy of 94% while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network" proposed a deterministic novel energy efficient fuzzy logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After selection of cluster heads, non-cluster head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance by balancing the load and improvement in terms of stability period, packets forwarded to the base station, improved average energy and extended lifetime.
Styles APA, Harvard, Vancouver, ISO, etc.