To view other types of publications on this topic, follow the link: Self-adaptation, Data-driven, Machine learning, Software architecture.

Journal articles on the topic "Self-adaptation, Data-driven, Machine learning, Software architecture"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 34 journal articles for your research on the topic "Self-adaptation, Data-driven, Machine learning, Software architecture".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lopes, Rui Pedro, Bárbara Barroso, Leonel Deusdado, André Novo, Manuel Guimarães, João Paulo Teixeira, and Paulo Leitão. "Digital Technologies for Innovative Mental Health Rehabilitation." Electronics 10, no. 18 (September 14, 2021): 2260. http://dx.doi.org/10.3390/electronics10182260.

Full text of the source
Abstract:
Schizophrenia is a chronic mental illness, characterized by the loss of the notion of reality, failing to distinguish it from the imaginary. It affects the patient in life’s major areas, such as work, interpersonal relationships, or self-care, and the usual treatment is performed with the help of anti-psychotic medication, which targets primarily the hallucinations, delirium, etc. Other symptoms, such as decreased emotional expression or avolition, require a multidisciplinary approach, including psychopharmacology, cognitive training, and many forms of therapy. In this context, this paper addresses the use of digital technologies to design and develop innovative rehabilitation techniques, particularly focusing on mental health rehabilitation and contributing to the promotion of well-being and health from a holistic perspective. Serious games and virtual reality allow for the creation of immersive environments that contribute to a more effective and lasting recovery, with improvements in terms of quality of life. The use of machine learning techniques will allow the real-time analysis of the data collected during the execution of the rehabilitation procedures, as well as enable their dynamic and automatic adaptation according to the profile and performance of the patients, by increasing or reducing the exercises’ difficulty. It relies on the acquisition of biometric and physiological signals, such as voice, heart rate, and game performance, to estimate the stress level, thus adapting the difficulty of the experience to the skills of the patient. The system described in this paper is currently in development, in collaboration with a health unit, and is an engineering effort that combines hardware and software to develop a rehabilitation tool for schizophrenic patients. A clinical trial is also planned to assess the effectiveness of the system on negative symptoms in patients with schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles
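To make the adaptation loop described in the abstract above concrete, the sketch below maps a few physiological and performance cues to a stress score and adjusts the exercise level accordingly. It is a minimal illustration only: the feature names, weights, and thresholds are invented assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the weights and thresholds below are invented for the
# example and do not come from the cited paper.

def estimate_stress(heart_rate_bpm: float, voice_jitter: float, error_rate: float) -> float:
    """Map normalized physiological/performance cues to a 0..1 stress score."""
    hr_norm = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)   # 60-120 bpm -> 0..1
    jitter_norm = min(max(voice_jitter / 0.02, 0.0), 1.0)          # 2% jitter -> 1.0
    err_norm = min(max(error_rate, 0.0), 1.0)
    return 0.4 * hr_norm + 0.3 * jitter_norm + 0.3 * err_norm

def adapt_difficulty(current_level: int, stress: float,
                     low: float = 0.3, high: float = 0.7) -> int:
    """Raise difficulty when the patient seems comfortable, lower it when stressed."""
    if stress > high:
        return max(1, current_level - 1)
    if stress < low:
        return current_level + 1
    return current_level

if __name__ == "__main__":
    level = 3
    stress = estimate_stress(heart_rate_bpm=95, voice_jitter=0.015, error_rate=0.4)
    print("stress:", round(stress, 2), "-> next level:", adapt_difficulty(level, stress))
```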
2

Rodríguez-Gracia, Diego, José A. Piedra-Fernández, Luis Iribarne, Javier Criado, Rosa Ayala, Joaquín Alonso-Montesinos, and Capobianco-Uriarte Maria de las Mercedes. "Microservices and Machine Learning Algorithms for Adaptive Green Buildings." Sustainability 11, no. 16 (August 9, 2019): 4320. http://dx.doi.org/10.3390/su11164320.

Full text of the source
Abstract:
In recent years, the use of services for Open Systems development has consolidated and strengthened. Advances in the Service Science and Engineering (SSE) community, promoted by the reinforcement of Web Services and Semantic Web technologies and the presence of new Cloud computing techniques, such as the proliferation of microservices solutions, have allowed software architects to experiment and develop new ways of building open and adaptable computer systems at runtime. Home automation, intelligent buildings, robotics, and graphical user interfaces are some of the social atmosphere environments in which to apply such innovative trends. This paper presents a schema for the adaptation of Dynamic Computer Systems (DCS) using interdisciplinary techniques on model-driven engineering, service engineering and soft computing. The proposal manages an orchestrated microservices schema for adapting component-based software architectural systems at runtime. This schema has been developed as a three-layer adaptive transformation process that is supported on a rule-based decision-making service implemented by means of Machine Learning (ML) algorithms. The experimental development was implemented in the Solar Energy Research Center (CIESOL), applying the proposed microservices schema for adapting home architectural atmosphere systems in Green Buildings.
APA, Harvard, Vancouver, ISO, and other styles
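As a purely illustrative companion to the rule-based decision-making service mentioned in the abstract above, the sketch below trains a small decision tree on synthetic sensor readings and exports it as human-readable rules. The features, labels, and data are placeholders, not the CIESOL deployment.

```python
# Sketch of a rule-based decision service backed by a learned model; the data and
# feature names are synthetic placeholders rather than the paper's dataset.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [indoor_temp_C, solar_irradiance_W_m2, occupancy]
X = [
    [21.0, 150.0, 1], [27.5, 820.0, 1], [29.0, 900.0, 0],
    [19.0, 100.0, 0], [25.5, 700.0, 1], [23.0, 300.0, 1],
]
# hypothetical adaptation decision: 0 = keep the current configuration,
# 1 = switch the architectural components to a "cooling" configuration
y = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned tree can be exported as readable rules, one way a rule-based
# decision-making service could be populated from data.
print(export_text(model, feature_names=["indoor_temp", "irradiance", "occupancy"]))
print("decision for [26.0, 750.0, 1]:", model.predict([[26.0, 750.0, 1]])[0])
```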
3

Park, Seunghyun, and Jin-Young Choi. "Malware Detection in Self-Driving Vehicles Using Machine Learning Algorithms." Journal of Advanced Transportation 2020 (January 17, 2020): 1–9. http://dx.doi.org/10.1155/2020/3035741.

Full text of the source
Abstract:
The recent trend for vehicles to be connected to unspecified devices, vehicles, and infrastructure increases the potential for external threats to vehicle cybersecurity. Thus, intrusion detection is a key network security function in vehicles with open connectivity, such as self-driving and connected cars. Specifically, when a vehicle is connected to an external device through a smartphone inside the vehicle or when a vehicle communicates with external infrastructure, security technology is required to protect the software network inside the vehicle. Existing technology with this function includes vehicle gateways and intrusion detection systems. However, it is difficult to block malicious code based on application behaviors. In this study, we propose a machine learning-based data analysis method to accurately detect abnormal behaviors due to malware in large-scale network traffic in real time. First, we define a detection architecture, which is required by the intrusion detection module to detect and block malware attempting to affect the vehicle via a smartphone. Then, we propose an efficient algorithm for detecting malicious behaviors in a network environment and conduct experiments to verify algorithm accuracy and cost through comparisons with other algorithms.
APA, Harvard, Vancouver, ISO, and other styles
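The abstract above describes a supervised classifier over vehicle network traffic. The sketch below shows one generic way such a detector can be trained and evaluated; the flow features and synthetic data are assumptions for illustration and do not reflect the paper's dataset or chosen algorithm.

```python
# Generic traffic classifier sketch with synthetic flow features (not the paper's data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# features: [packets_per_s, mean_payload_bytes, distinct_dst_ports, conn_duration_s]
benign = rng.normal([50, 400, 3, 10], [15, 120, 1, 4], size=(n, 4))
malicious = rng.normal([300, 90, 25, 2], [80, 40, 8, 1], size=(n, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)  # 0 = normal, 1 = malware-like behaviour

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```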
4

Thembelihle, Dlamini, Michele Rossi, and Daniele Munaretto. "Softwarization of Mobile Network Functions towards Agile and Energy Efficient 5G Architectures: A Survey." Wireless Communications and Mobile Computing 2017 (2017): 1–21. http://dx.doi.org/10.1155/2017/8618364.

Full text of the source
Abstract:
Future mobile networks (MNs) are required to be flexible with minimal infrastructure complexity, unlike current ones that rely on proprietary network elements to offer their services. Moreover, they are expected to make use of renewable energy to decrease their carbon footprint and of virtualization technologies for improved adaptability and flexibility, thus resulting in green and self-organized systems. In this article, we discuss the application of software defined networking (SDN) and network function virtualization (NFV) technologies towards softwarization of the mobile network functions, taking into account different architectural proposals. In addition, we elaborate on whether mobile edge computing (MEC), a new architectural concept that uses NFV techniques, can enhance communication in 5G cellular networks, reducing latency due to its proximity deployment. Besides discussing existing techniques, expounding their pros and cons and comparing state-of-the-art architectural proposals, we examine the role of machine learning and data mining tools, analyzing their use within fully SDN- and NFV-enabled mobile systems. Finally, we outline the challenges and the open issues related to evolved packet core (EPC) and MEC architectures.
APA, Harvard, Vancouver, ISO, and other styles
5

Alexander, Francis J., James Ang, Jenna A. Bilbrey, Jan Balewski, Tiernan Casey, Ryan Chard, Jong Choi, et al. "Co-design Center for Exascale Machine Learning Technologies (ExaLearn)." International Journal of High Performance Computing Applications 35, no. 6 (September 27, 2021): 598–616. http://dx.doi.org/10.1177/10943420211029302.

Full text of the source
Abstract:
Rapid growth in data, computational methods, and computing power is driving a remarkable revolution in what variously is termed machine learning (ML), statistical learning, computational learning, and artificial intelligence. In addition to highly visible successes in machine-based natural language translation, playing the game Go, and self-driving cars, these new technologies also have profound implications for computational and experimental science and engineering, as well as for the exascale computing systems that the Department of Energy (DOE) is developing to support those disciplines. Not only do these learning technologies open up exciting opportunities for scientific discovery on exascale systems, they also appear poised to have important implications for the design and use of exascale computers themselves, including high-performance computing (HPC) for ML and ML for HPC. The overarching goal of the ExaLearn co-design project is to provide exascale ML software for use by Exascale Computing Project (ECP) applications, other ECP co-design centers, and DOE experimental facilities and leadership class computing facilities.
APA, Harvard, Vancouver, ISO, and other styles
6

Kondratenko, Yuriy, Igor Atamanyuk, Ievgen Sidenko, Galyna Kondratenko, and Stanislav Sichevskyi. "Machine Learning Techniques for Increasing Efficiency of the Robot’s Sensor and Control Information Processing." Sensors 22, no. 3 (January 29, 2022): 1062. http://dx.doi.org/10.3390/s22031062.

Full text of the source
Abstract:
Real-time systems are widely used in industry, including technological process control systems, industrial automation systems, SCADA systems, testing, and measuring equipment, and robotics. The efficiency of executing an intelligent robot’s mission in many cases depends on the properties of the robot’s sensor and control systems in providing the trajectory planning, recognition of the manipulated objects, adaptation of the desired clamping force of the gripper, obstacle avoidance, and so on. This paper provides an analysis of the approaches and methods for real-time sensor and control information processing with the application of machine learning, as well as successful cases of machine learning application in the synthesis of a robot’s sensor and control systems. Among the robotic systems under investigation are (a) adaptive robots with slip displacement sensors and fuzzy logic implementation for sensor data processing, (b) magnetically controlled mobile robots for moving on inclined and ceiling surfaces with neuro-fuzzy observers and neuro controllers, and (c) robots that are functioning in unknown environments with the prediction of the control system state using statistical learning theory. All obtained results concern the main elements of the two-component robotic system with the mobile robot and adaptive manipulation robot on a fixed base for executing complex missions in non-stationary or uncertain conditions. The design and software implementation stage involves the creation of a structural diagram and description of the selected technologies, training a neural network for recognition and classification of geometric objects, and software implementation of control system components. The Swift programming language is used for the control system design and the CreateML framework is used for creating a neural network. Among the main results are: (a) expanding the capabilities of the intelligent control system by increasing the number of classes for recognition from three (cube, cylinder, and sphere) to five (cube, cylinder, sphere, pyramid, and cone); (b) increasing the validation accuracy (to 100%) for recognition of five different classes using CreateML (YOLOv2 architecture); (c) increasing the training accuracy (to 98.02%) and testing accuracy (to 98.0%) for recognition of five different classes using Torch library (ResNet34 architecture) in less time and number of epochs compared with Create ML (YOLOv2 architecture); (d) increasing the training accuracy (to 99.75%) and testing accuracy (to 99.2%) for recognition of five different classes using Torch library (ResNet34 architecture) and fine-tuning technology; and (e) analyzing the effect of dataset size impact on recognition accuracy with ResNet34 architecture and fine-tuning technology. The results can help to choose efficient (a) design approaches for control robotic devices, (b) machine-learning methods for performing pattern recognition and classification, and (c) computer technologies for designing control systems and simulating robotic devices.
APA, Harvard, Vancouver, ISO, and other styles
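The abstract above mentions fine-tuning a ResNet34 with the Torch library for five geometric classes (cube, cylinder, sphere, pyramid, cone). The sketch below shows a typical transfer-learning setup of that kind; the dataset path, transforms, and hyperparameters are assumptions, not the authors' training code.

```python
# Transfer-learning sketch: ResNet34 backbone frozen, new 5-class head trained.
# "shapes_dataset/train" is a placeholder ImageFolder path (train/<class>/*.jpg).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

classes = ["cube", "cylinder", "sphere", "pyramid", "cone"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("shapes_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)  # older torchvision: pretrained=True
for p in model.parameters():                     # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(classes))  # new trainable classification head

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```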
7

Akbari, Ali, Jonathan Martinez, and Roozbeh Jafari. "Facilitating Human Activity Data Annotation via Context-Aware Change Detection on Smartwatches." ACM Transactions on Embedded Computing Systems 20, no. 2 (March 2021): 1–20. http://dx.doi.org/10.1145/3431503.

Full text of the source
Abstract:
Annotating activities of daily living (ADL) is vital for developing machine learning models for activity recognition. In addition, it is critical for self-reporting purposes such as in assisted living where the users are asked to log their ADLs. However, data annotation becomes extremely challenging in real-world data collection scenarios, where the users have to provide annotations and labels on their own. Methods such as self-reports that rely on users’ memory and compliance are prone to human errors and become burdensome since they increase users’ cognitive load. In this article, we propose a light yet effective context-aware change point detection algorithm that is implemented and run on a smartwatch for facilitating data annotation for high-level ADLs. The proposed system detects the moments of transition from one to another activity and prompts the users to annotate their data. We leverage freely available Bluetooth low energy (BLE) information broadcasted by various devices to detect changes in environmental context. This contextual information is combined with a motion-based change point detection algorithm, which utilizes data from wearable motion sensors, to reduce the false positives and enhance the system's accuracy. Through real-world experiments, we show that the proposed system improves the quality and quantity of labels collected from users by reducing human errors while eliminating users’ cognitive load and facilitating the data annotation process.
APA, Harvard, Vancouver, ISO, and other styles
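A minimal sketch of the fusion idea described above: a BLE-context change score is combined with a motion-based change score, and the user is prompted for a label only when both indicate a transition. The scoring rule and thresholds are illustrative assumptions, not the paper's parameters.

```python
# Illustrative fusion of context and motion change signals; thresholds are assumptions.

def ble_change_score(prev_devices: set, curr_devices: set) -> float:
    """1 - Jaccard similarity between the BLE device sets seen in two time windows."""
    if not prev_devices and not curr_devices:
        return 0.0
    union = prev_devices | curr_devices
    inter = prev_devices & curr_devices
    return 1.0 - len(inter) / len(union)

def should_prompt(motion_score: float, prev_devices: set, curr_devices: set,
                  motion_thr: float = 0.6, context_thr: float = 0.5) -> bool:
    """Prompt only when both the motion signal and the BLE context changed,
    which is the intuition for suppressing false positives."""
    context_score = ble_change_score(prev_devices, curr_devices)
    return motion_score > motion_thr and context_score > context_thr

if __name__ == "__main__":
    prev = {"kitchen_speaker", "fridge_beacon"}
    curr = {"tv_remote", "living_room_beacon"}
    print(should_prompt(motion_score=0.8, prev_devices=prev, curr_devices=curr))  # True
```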
8

Nawrocki, Piotr, and Bartlomiej Sniezynski. "Adaptive Context-Aware Energy Optimization for Services on Mobile Devices with Use of Machine Learning." Wireless Personal Communications 115, no. 3 (August 13, 2020): 1839–67. http://dx.doi.org/10.1007/s11277-020-07657-9.

Full text of the source
Abstract:
In this paper we present an original adaptive task scheduling system, which optimizes the energy consumption of mobile devices using machine learning mechanisms and context information. The system learns how to allocate resources appropriately: how to schedule services/tasks optimally between the device and the cloud, which is especially important in mobile systems. Decisions are made taking the context into account (e.g. network connection type, location, potential time and cost of executing the application or service). In this study, a supervised learning agent architecture and service selection algorithm are proposed to solve this problem. Adaptation is performed online, on a mobile device. Information about the context, the task description, the decision made and its results, such as power consumption, is stored and constitutes training data for a supervised learning algorithm, which updates the knowledge used to determine the optimal location for the execution of a given type of task. To verify the proposed solution, appropriate software has been developed and a series of experiments have been conducted. Results show that as a result of the experience gathered and the learning process performed, the decision module has become more efficient in assigning tasks to either the mobile device or cloud resources.
APA, Harvard, Vancouver, ISO, and other styles
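To illustrate the kind of supervised placement decision the abstract describes, the sketch below fits a small classifier on made-up context features and uses it to choose between on-device and cloud execution. The features, labels, and retraining note are simplified assumptions, not the system's actual agent architecture.

```python
# Toy "where to execute" decision learned from context features (synthetic data).
from sklearn.tree import DecisionTreeClassifier

# context features: [network (0=cellular, 1=wifi), input_size_MB, battery_pct, task_type]
X = [
    [1, 50.0, 80, 0], [0, 50.0, 20, 0], [1, 1.0, 90, 1],
    [0, 0.5, 60, 1], [1, 200.0, 30, 0], [0, 5.0, 15, 1],
]
# observed best placement: 0 = run locally, 1 = offload to the cloud
y = [1, 0, 0, 0, 1, 1]

scheduler = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def place_task(network, size_mb, battery, task_type):
    return "cloud" if scheduler.predict([[network, size_mb, battery, task_type]])[0] else "device"

print(place_task(network=1, size_mb=120.0, battery=45, task_type=0))
# After execution, the measured energy/time outcome would be appended to the training
# data and the model re-fit, mirroring the online adaptation loop the abstract describes.
```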
9

Kretsis, Aristotelis, Ippokratis Sartzetakis, Polyzois Soumplis, Katerina Mitropoulou, Panagiotis Kokkinos, Petros Nicopolitidis, Georgios Papadimitriou, and Emmanouel Varvarigos. "ARMONIA: A Unified Access and Metro Network Architecture." Applied Sciences 10, no. 23 (November 24, 2020): 8318. http://dx.doi.org/10.3390/app10238318.

Full text of the source
Abstract:
We present a self-configured and unified access and metro network architecture, named ARMONIA. The ARMONIA network monitors its status, and dynamically (re-)optimizes its configuration. ARMONIA leverages software defined networking (SDN) and network functions virtualization (NFV) technologies. These technologies enable the access and metro convergence and the joint and efficient control of the optical and the IP equipment used in these different network segments. Network monitoring information is collected and analyzed utilizing machine learning and big data analytics methods. Dynamic algorithms then decide how to adapt and dynamically optimize the unified network. The ARMONIA network enables unprecedented resource efficiency and provides advanced virtualization services, reducing the capital expenditures (CAPEX) and operating expenses (OPEX) and lowering the barriers for the introduction of new services. We demonstrate the benefits of the ARMONIA network in the context of dynamic resource provisioning of network slices. We observe significant spectrum and equipment savings when compared to static overprovisioning.
APA, Harvard, Vancouver, ISO, and other styles
10

Alwakeel, Lyan, and Kevin Lano. "Functional and Technical Aspects of Self-management mHealth Apps: Systematic App Search and Literature Review." JMIR Human Factors 9, no. 2 (May 25, 2022): e29767. http://dx.doi.org/10.2196/29767.

Full text of the source
Abstract:
Background: Although the past decade has witnessed the development of many self-management mobile health (mHealth) apps that enable users to monitor their health and activities independently, there is a general lack of empirical evidence on the functional and technical aspects of self-management mHealth apps from a software engineering perspective. Objective: This study aims to systematically identify the characteristics and challenges of self-management mHealth apps, focusing on functionalities, design, development, and evaluation methods, as well as to specify the differences and similarities between published research papers and commercial and open-source apps. Methods: This research was divided into 3 main phases to achieve the expected goal. The first phase involved reviewing peer-reviewed academic research papers from 7 digital libraries, and the second phase involved reviewing and evaluating apps available on Android and iOS app stores using the Mobile Application Rating Scale. Finally, the third phase involved analyzing and evaluating open-source apps from GitHub. Results: In total, 52 research papers, 42 app store apps, and 24 open-source apps were analyzed, synthesized, and reported. We found that the development of self-management mHealth apps requires significant time, effort, and cost because of their complexity and specific requirements, such as the use of machine learning algorithms, external services, and built-in technologies. In general, self-management mHealth apps are similar in their focus, user interface components, navigation and structure, services and technologies, authentication features, and architecture and patterns. However, they differ in terms of the use of machine learning, processing techniques, key functionalities, inference of machine learning knowledge, logging mechanisms, evaluation techniques, and challenges. Conclusions: Self-management mHealth apps may offer an essential means of managing users’ health, expecting to assist users in continuously monitoring their health and encourage them to adopt healthy habits. However, developing an efficient and intelligent self-management mHealth app with the ability to reduce resource consumption and processing time, as well as increase performance, is still under research and development. In addition, there is a need to find an automated process for evaluating and selecting suitable machine learning algorithms for the self-management of mHealth apps. We believe that these issues can be avoided or significantly reduced by using a model-driven engineering approach with a decision support system to accelerate and ameliorate the development process and quality of self-management mHealth apps.
APA, Harvard, Vancouver, ISO, and other styles
11

Dobrojevic, Milos, and Nebojsa Bacanin. "IoT as a Backbone of Intelligent Homestead Automation." Electronics 11, no. 7 (March 24, 2022): 1004. http://dx.doi.org/10.3390/electronics11071004.

Full text of the source
Abstract:
The concepts of smart agriculture, with the aim of highly automated industrial mass production leaning towards self-farming, can be scaled down to the level of small farms and homesteads with the use of more affordable electronic components and open-source software. The backbone of smart agriculture, in both cases, is the Internet of Things (IoT). Single-board computers (SBCs) such as the Raspberry Pi, working under Linux or Windows IoT operating systems, make an affordable platform for smart devices with a modular architecture, suitable for the automation of various tasks by using machine learning (ML), artificial intelligence (AI) and computer vision (CV). Similarly, the Arduino microcontroller enables the building of nodes in the IoT network, capable of reading various physical values, wirelessly sending them to other computers for processing and, furthermore, controlling electronic elements and machines in the physical world based on the received data. This review gives a limited overview of currently available technologies for smart automation of industrial agricultural production and of alternative, smaller-scale projects applicable in homesteads, based on Arduino and Raspberry Pi hardware, as well as a draft proposal of an integrated homestead automation system based on the IoT.
APA, Harvard, Vancouver, ISO, and other styles
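To make the IoT backbone discussed above concrete, here is a minimal sketch of a sensor node publishing readings to an MQTT broker that a homestead controller could subscribe to. The broker hostname, topic, fake sensor read, and the choice of MQTT itself are illustrative assumptions rather than anything prescribed by the review.

```python
# Hypothetical Raspberry Pi sensor node publishing over MQTT (placeholders throughout).
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = "homestead-broker.local"   # placeholder broker hostname
TOPIC = "homestead/greenhouse/soil_moisture"

def read_soil_moisture() -> float:
    # Stand-in for a real ADC/GPIO sensor read on the device.
    return round(random.uniform(20.0, 60.0), 1)

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x additionally takes a CallbackAPIVersion argument
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

try:
    while True:
        payload = json.dumps({"moisture_pct": read_soil_moisture(), "ts": time.time()})
        client.publish(TOPIC, payload, qos=1)
        time.sleep(60)
except KeyboardInterrupt:
    client.loop_stop()
    client.disconnect()
```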
12

Gunawan, Teddy Surya, B. Herawan Hayadi, Cindy Paramitha, and Muhammad Sadikin. "IoT Framework Current Trends and Recent Advances to Management Company in The PT.TNC." JUDIMAS 1, no. 2 (March 31, 2021): 164. http://dx.doi.org/10.30700/jm.v1i2.1104.

Full text of the source
Abstract:
The Internet of Things (IoT) is a fast-growing and user-friendly technology that connects everything together and enables effective communication between the people who connect "Things". The Internet of Things, also known as the Internet of Objects, usually refers to remote systems between projects; such systems will be remote and self-designable. However, the world's largest information technology companies tend to release products in the form of services to avoid disclosing detailed design and implementation knowledge. Hence, the overall trend in academic institutions is to use these mainstream IoT platforms as "black boxes". IoT spans sensors, computer architecture, software, security, packaging, and technology selection based on the amount of data needed and the power available. Fundamental ways to collect and store data include SQL, NoSQL, and time-series databases, while machine learning algorithms provide outputs such as regression, classification, and anomaly detection. The expected benefits are improved service quality, reduced service costs, new models (precision services), reduced consumption costs for higher-quality products or services, and improved health and safety.
APA, Harvard, Vancouver, ISO, and other styles
13

Agnew, Dennis, Nader Aljohani, Reynold Mathieu, Sharon Boamah, Keerthiraj Nagaraj, Janise McNair, and Arturo Bretas. "Implementation Aspects of Smart Grids Cyber-Security Cross-Layered Framework for Critical Infrastructure Operation." Applied Sciences 12, no. 14 (July 7, 2022): 6868. http://dx.doi.org/10.3390/app12146868.

Full text of the source
Abstract:
Communication networks in power systems are a major part of the smart grid paradigm. They enable and facilitate the automation of power grid operation as well as self-healing in contingencies. Such dependencies on communication networks, though, create room for cyber-threats. An adversary can launch an attack on the communication network, which in turn reflects on power grid operation. Attacks could be in the form of false data injection into system measurements, flooding the communication channels with unnecessary data, or intercepting messages. Using machine learning-based processing on data gathered from communication networks and the power grid is a promising solution for detecting cyber threats. In this paper, a co-simulation of a cyber-security cross-layer strategy is presented. The advantage of such a framework is the augmentation of valuable data that enhances the detection as well as the identification of anomalies in the operation of the power grid. The framework is implemented on the IEEE 118-bus system. The system is constructed in Mininet to simulate a communication network and obtain data for analysis. A distributed three-controller software-defined networking (SDN) framework is proposed that utilizes the Open Network Operating System (ONOS) cluster. According to our findings, the suggested architecture outperforms a single-SDN-controller framework by more than ten times in throughput. This allows a higher flow of data throughout the network while decreasing congestion caused by a single controller’s processing restrictions. Furthermore, our CECD-AS approach outperforms state-of-the-art physics- and machine learning-based techniques in terms of attack classification. The performance of the framework is investigated under various types of communication attacks.
APA, Harvard, Vancouver, ISO, and other styles
14

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions." Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Full text of the source
Abstract:
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities (e.g., computation, communication, and control) and the physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather it can be said that IoT is the base or enabling technology for CPS and CPS is considered as the grownup development of IoT, completing the IoT notion and vision. Both are merged into closed-loop, providing mechanisms for conceptualizing, and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS that includes traditional embedded and control systems are supposed to be transformed by the evolving and innovative methodologies and engineering of IoT. Several applications areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around the uniquely identifiable and internet-connected devices and embedded systems; whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborated together to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real‐time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuous evolving IoT, has posed several challenges. For example, the enormous amount of data collected from the physical things makes it difficult for Big Data management and analytics that includes data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. 
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions are proposed, there are still huge possibilities for innovative propositions to make IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We had received 30 research papers and out of which 14 papers are selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud described the mechanisms for monitoring by using the concept of reinforcement learning and prediction of the cloud resources, which forms the critical parts of cloud expertise in support of controlling and evolution of the IT resources and has been implemented using LSTM. The proper utilization of the resources will generate revenues to the provider and also increases the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features which are used in state of art speaker recognition models and analyse variants of Mel frequency cepstrum coefficients (MFCC) predominantly used in feature extraction which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from knowledge graph to identify the domain knowledge of users, items, and relationships among these, knowledge graph is a labelled multidimensional directed graph that represents the relationship among the users and the items. The proposed approach uses about 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system expects under the users' interest that is beneficial for both seller and buyer. The proposed system is compared with baseline methods in area of recommendation system using three parameters: precision, recall and NDGA through online and offline evaluation studies with user data and it is observed that proposed system is better as compared to other baseline systems. Benbrahim et al. 
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using a Deep Learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show that it achieves an accuracy of 94.06% on the validation set and 93.93% on the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is also minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, and it also has an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server. User-authentication-related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on User and Cyber-Physical Devices. Sumit Kumar and Jaspreet Singh in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" presented a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the technology used in self-driving cars for perception. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app. The approach performed better than the conventional method, as the objects were clearer in the frame, which made it easier for the object detection to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" performed a literature review on IoV and Trust and proposed a Hybrid Trust model that separates the malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, a simulation was conducted on varied threshold values, and the results show that the PDR of a trusted node is 0.63, which is higher than the PDR of a malicious node, which is 0.15. On the basis of PDR, the number of available hops and Trust Dynamics, the malicious nodes are identified and discarded.
Saniya Zahoor and Roohie Naaz Mir in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications" highlighted the recent studies and related information in data management for pervasive IoT applications having limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The comparison of the proposed framework is done with the sequential approach through simulations and empirical data analysis. The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al. in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services" presented a review of video analysis over the LVS & VoDS video application. The researchers compared different messaging brokers, which help to deliver each frame in a distributed pipeline, to analyze the impact of two message brokers on video analysis for achieving LVS & VoS using AWS Elemental services. In addition, the researchers also analysed the Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks" presented the design and modeling of a resource-constrained BAN system and also discussed the various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources such as energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The comparison of the AEC approach is done with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review" outlined major authentication issues in IoT, mapped their existing solutions and further tabulated Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strengths of Blockchain technology, work done in this field, its adoption in the COVID-19 fight, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology and tabulated some future courses of action to stir attempts in this area. Bhadwal et al. in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach" proposed a rule-based machine translation system to bridge the language barrier between the Hindi and Sanskrit languages by converting any text in Hindi to Sanskrit. The results are produced in the form of two confusion matrices wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.
The semantic evaluation of 100 tokens produces an accuracy of 94% while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication-based assistive Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network" proposed a deterministic novel energy-efficient fuzzy logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance by balancing the load and improvements in terms of stability period, packets forwarded to the base station, average energy and extended lifetime.
APA, Harvard, Vancouver, ISO, and other styles
15

Yaser, Ahmed Latif, Hamdy M. Mousa, and Mahmoud Hussein. "Improved DDoS Detection Utilizing Deep Neural Networks and Feedforward Neural Networks as Autoencoder." Future Internet 14, no. 8 (August 12, 2022): 240. http://dx.doi.org/10.3390/fi14080240.

Full text of the source
Abstract:
Software-defined networking (SDN) is an innovative network paradigm, offering substantial control of network operation through a network’s architecture. SDN is an ideal platform for implementing projects involving distributed applications, security solutions, and decentralized network administration in a multitenant data center environment due to its programmability. As its usage rapidly expands, network security threats are becoming more frequent, leading SDN security to be of significant concern. Machine-learning (ML) techniques for intrusion detection of DDoS attacks in SDN networks utilize standard datasets and fail to cover all classification aspects, resulting in under-coverage of attack diversity. This paper proposes a hybrid technique to recognize denial-of-service (DDoS) attacks that combines deep learning and feedforward neural networks as autoencoders. Two datasets were analyzed for the training and testing of the model, first statically and then iteratively. The auto-encoding model is constructed by stacking the input layer and hidden layers of self-encoding models layer by layer, with each self-encoding model using a hidden layer. To evaluate our model, we use a three-part data split (train, test, and validate) rather than the common two-part split (train and test). The resulting proposed model achieved a higher accuracy for the static datasets: for the ISCX-IDS-2012 dataset, accuracy reached a high of 99.35% in training, 99.3% in validation and 99.99% in precision, recall, and F1-score; for the UNSW2018 dataset, the accuracy reached a high of 99.95% in training, 99.94% in validation, and 99.99% in precision, recall, and F1-score. In addition, the model achieved great results with a dynamic dataset (using an emulator), reaching a high of 97.68% in accuracy.
APA, Harvard, Vancouver, ISO, and other styles
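The sketch below is a generic reconstruction-error autoencoder for flow features, meant only to illustrate the family of models the abstract refers to. The synthetic data, layer sizes, and thresholding rule are assumptions and do not reproduce the paper's hybrid deep/feedforward design or its datasets.

```python
# Generic autoencoder anomaly detector on synthetic "flow features" (not the paper's model).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 20))   # stand-in for normalized benign flows
attack = rng.normal(3.0, 1.5, size=(200, 20))    # shifted distribution = DDoS-like flows

autoencoder = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(8, activation="relu"),   # bottleneck
    layers.Dense(16, activation="relu"),
    layers.Dense(20, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=64, validation_split=0.1, verbose=0)

def reconstruction_error(x):
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(normal), 99)   # assumed operating point
print("share of attack flows flagged:", float(np.mean(reconstruction_error(attack) > threshold)))
```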
16

Demertzis, Konstantinos, Lazaros Iliadis, and Elias Pimenidis. "Geo-AI to aid disaster response by memory-augmented deep reservoir computing." Integrated Computer-Aided Engineering 28, no. 4 (August 27, 2021): 383–98. http://dx.doi.org/10.3233/ica-210657.

Full text of the source
Abstract:
It is a fact that natural disasters often cause severe damage both to ecosystems and humans. Moreover, man-made disasters can have enormous moral and economic consequences for people. A typical example is the large deadly and catastrophic explosion in Beirut on 4 August 2020, which destroyed a very large area of the city. This research paper introduces a Geo-AI disaster response computer vision system, capable of mapping an area using material from Synthetic Aperture Radar (SAR). SAR is a unique form of radar that can penetrate clouds and collect data day and night under any weather conditions. Specifically, the Memory-Augmented Deep Convolutional Echo State Network (MA/DCESN) is introduced for the first time in the literature, as an advanced Machine Vision (MAV) architecture. It uses a meta-learning technique, which is based on a memory-augmented approach. The target is the employment of Deep Reservoir Computing (DRC) for domain adaptation. The developed Deep Convolutional Echo State Network (DCESN) combines a classic Convolutional Neural Network (CNN) with a Deep Echo State Network (DESN), and analog neurons with sparse random connections. Its training is performed following the Recursive Least Squares (RLS) method. In addition, the integration of external memory allows the storage of useful data from past processes, while facilitating the rapid integration of new information without the need for retraining. The proposed DCESN implements a set of original modifications regarding training settings, memory retrieval mechanisms, addressing techniques, and ways of assigning attention weights to memory vectors. As is experimentally shown, the whole approach produces remarkable stability, high generalization efficiency and significant classification accuracy, significantly extending the state-of-the-art Machine Vision methods.
APA, Harvard, Vancouver, ISO, and other styles
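As background for the reservoir-computing component named in the abstract, here is a minimal plain echo state network in NumPy: no convolutional front-end, deep reservoir, or external memory, and the RLS training is replaced by a simple ridge-regression readout. It is a sketch of the general technique, not the MA/DCESN.

```python
# Plain echo state network on a toy one-step-ahead prediction task.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

def run_reservoir(u):
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t:t + 1] + W @ x)   # reservoir state update
        states.append(x.copy())
    return np.array(states)

# toy task: predict the next sample of a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)   # ridge-regression readout
pred = X @ W_out
print("train MSE:", float(np.mean((pred - y) ** 2)))
```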
17

Vidács, László, Márk Jelasity, László Tóth, Péter Hegedűs, and Rudolf Ferenc. "A mesterséges intelligencia néhány biztonsági vetülete." Scientia et Securitas 1, no. 1 (December 17, 2020): 29–34. http://dx.doi.org/10.1556/112.2020.00005.

Full text of the source
Abstract:
Research on the trustworthiness, interpretability and security of deep neural networks lags behind the widespread application of the technology in industrial applications. For example, in image recognition, modern solutions are capable of nearly human performance. However, with targeted adversarial noise, these systems can be arbitrarily manipulated. Here, we discuss some of the security problems and point out that quality assurance methods used in traditional software development should also be adapted when developing AI-based systems, whether in the security of artificial neural networks or traditional components of AI systems. One of the main concerns about neural networks today that – to the best of our knowledge – affects all deep neural networks is the existence of adversarial examples. These examples are relatively easy to find and, according to a recent experiment, a well-chosen input can attack several networks at the same time. In this paper we also present a wider perspective on the security of neural architectures, borrowed from the traditional software engineering discipline. While in traditional development several methods are widely applied for software testing and fault localization, there is a lack of similar well-established methods in the neural network context. In the case of deep neural networks, systematic testing tools and methods are at an early stage, and a methodology to test and verify the proper behavior of neural networks is highly desirable. Robustness testing of machine learning algorithms is a further issue. This requires the generation of large random input data using fuzz testing methods. The adaptation of automatic fault localization techniques has already started by defining notions like code coverage for neural networks. Lastly, we argue that the effective development of high-quality AI-based systems needs well-suited frameworks that can facilitate the daily work of scientists and software developers – like the Deep-Water framework, presented in the closing part of the paper.
APA, Harvard, Vancouver, ISO, and other styles
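Because the abstract centres on adversarial examples, here is a minimal sketch of the fast gradient sign method, one standard way such inputs are crafted. The toy model and input are placeholders; the paper discusses the phenomenon and testing practices rather than this particular attack implementation.

```python
# FGSM sketch: perturb the input in the direction that increases the loss.
import torch
import torch.nn as nn

def fgsm(model: nn.Module, x: torch.Tensor, label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return x perturbed by eps * sign(dLoss/dx), clipped to the valid pixel range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # tiny stand-in classifier over 28x28 single-channel "images"
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    y = torch.tensor([3])
    x_adv = fgsm(model, x, y)
    print("max pixel perturbation:", (x_adv - x).abs().max().item())
```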
18

Bosse, Stefan, Michael Koerdt, and Daniel Schmidt. "Robust and Adaptive Signal Segmentation for Structural Monitoring Using Autonomous Agents." Proceedings 2, no. 3 (November 14, 2017): 105. http://dx.doi.org/10.3390/ecsa-4-04917.

Full text of the source
Abstract:
Monitoring of mechanical structures is a Big Data challenge and includes Structural Health Monitoring (SHM) and Non-destructive Testing (NDT). The sensor data produced by common measuring techniques, e.g., guided wave propagation analysis, is characterized by a high dimensionality in the temporal and spatial domain. There are off- and on-line methods applied at maintenance- or run-time, respectively. On-line methods (SHM) usually are constrained by low-resource processing platforms, sensor noise, unreliability, and real-time operation requiring advanced and efficient sensor data processing. Commonly, structural monitoring is a task that maps high-dimensional input data on low-dimensional output data (information, which is feature extraction), e.g., in the simplest case a Boolean output variable “Damaged”. Machine Learning (ML), e.g., supervised learning, can be used to derive such a mapping function. But ML quality and performance depends strongly on the input data size. Therefore, adaptive and reliable input data reduction (that is feature selection) is required at the first layer of an automatic structural monitoring system. Assuming some kind of two-dimensional sensor data (or n-dimensional data in general), image segmentation can be used to identify Regions of Interest (ROI), e.g., of wave propagation fields. Wave propagation in materials underlie reflections that must be distinguished, especially in hybrid materials (e.g., combining metal and fibre-plastic composites) there are complex wave propagation fields. The image segmentation is one of the most crucial parts of image processing. Major difficulties in image segmentation are noise and the differing homogeneity (fuzziness and signal gradients) of regions, complicating the definition of suitable threshold conditions for the edge detection or region splitting/clustering. Many traditional image segmentation algorithms are constrained by this issue. Artificial Intelligence can aid to overcome this limitation by using autonomous agents as an adaptive and self-organizing software architecture, presented in this work. Using a collection of co-operating agents decomposes a large and complex problem in smaller and simpler problems with a Divide-and-Conquer approach. Related to the image segmentation scenario, agents are working mostly autonomous (de-coupled) on dynamically bounded data from different regions of a signal or an image (i.e., distributed with simulated mobility), adapted to the locality, being reliable and less sensitive to noisy sensor data. In this work, self-organizing agents perform segmentation. They are evaluated with measured high-dimensional data from piezo-electric acusto-ultrasonic sensors recording the wave propagation in plate-like structures. Commonly, SHM deploys only a small set of sensors and actuators at static positions delivering only a few temporal resolved sensor signals (1D), whereas NDT methods additionally can use spatial scanning to create images of wave signals (2D). Both one-dimensional temporal and two-dimensional spatial segmentation are considered to find characteristic ROI.
APA, Harvard, Vancouver, ISO, and other styles
19

Baloian, Nelson, and José Pino. "Editorial introduction to J.UCS special issue Challenges for Smart Environments – Human-Centered Computing, Data Science, and Ambient Intelligence I." JUCS - Journal of Universal Computer Science 27, no. 11 (November 28, 2021): 1149–51. http://dx.doi.org/10.3897/jucs.76554.

Full text of the source
Abstract:
Modern technologies and various domains of human activities increasingly rely on data science to develop smarter and autonomous systems. This trend has already changed the whole landscape of the global economy becoming more AI-driven. Massive production of data by humans and machines, its availability for feasible processing with advent of deep learning infrastructures, combined with advancements in reliable information transfer capacities, open unbounded horizons for societal progress in close future. Quite naturally, this brings also new challenges for science and industry. In that context, Internet of things (IoT) is an enormously huge factory of monitoring and data generation. It enables countless devices to act as sensors which record and manipulate data, while requiring efficient algorithms to derive actionable knowledge. Billions of end-users equipped with smart mobile phones are also producing immensely large volumes of data, being it about user interaction or indirect telemetry such as location coordinates. Social networks represent another kind of data-intensive sources, with both structured and unstructured components, containing valuable information about world’s connectivity, dynamism, and more. Last but not least, to help businesses run smoothly, today’s cloud computing infrastructures and applications are also serviced and managed through measuring huge amounts of data to leverage in various predictive and automation tasks for healthy performance and permanent availability. Therefore, all these technology areas, experts and practitioners, are facing innovation challenges on building novel methodologies, accurate models, and systems for respective data-driven solutions which are effective and efficient. In view of the complexity of contemporary neural network architectures and models with millions of parameters they derive, one of such challenges is related to the concept of explainability of the machine learning models. It refers to the ability of the model to give information which can be interpreted by humans about the reasons for the decision made or recommendation released. These challenges can only be met with a mix of basic research, process modeling and simulation under uncertainty using qualitative and quantitative methods from the involved sciences, and taking into account international standards and adequate evaluation methods. Based on a successful funded collaboration between the American University of Armenia, the University of Duisburg-Essen and the University of Chile, in previous years a network was built, and in September 2020 a group of researchers gathered (although virtually) for the 2nd CODASSCA workshop on “Collaborative Technologies and Data Science in Smart City Applications”. This event has attracted 25 paper submissions which deal with the problems and challenges mentioned above. The studies are in specialized areas and disclose novel solutions and approaches based on existing theories suitably applied. The authors of the best papers published in the conference proceedings on Collaborative Technologies and Data Science in Artificial Intelligence Applications by Logos edition Berlin were invited to submit significantly extended and improved versions of their contributions to be considered for a journal special issue of J.UCS. There was also a J.UCS open call so that any author could submit papers on the highlighted subject. 
For this volume, we selected those dealing with more theoretical issues which were rigorously reviewed in three rounds and 6 papers nominated to be published. The editors would like to express their gratitude to J.UCS foundation for accepting the special issues in their journal, to the German Research Foundation (DFG), the German Academic Exchange Service (DAAD) and the universities and sponsors involved for funding the common activities and thank the editors of the CODASSCA2020 proceedings for their ongoing encouragement and support, the authors for their contributions, and the anonymous reviewers for their invaluable support. The paper “Incident Management for Explainable and Automated Root Cause Analysis in Cloud Data Centers” by Arnak Poghosyan, Ashot Harutyunyan, Naira Grigoryan, and Nicholas Kushmerick addresses an increasingly important problem towards autonomous or self-X systems, intelligent management of modern cloud environments with an emphasis on explainable AI. It demonstrates techniques and methods that greatly help in automated discovery of explicit conditions leading to data center incidents. The paper “Temporal Accelerators: Unleashing the Potential of Embedded FPGAs” by Christopher Cichiwskyj and Gregor Schiele presents an approach for executing computational tasks that can be split into sequential sub-tasks. It divides accelerators into multiple, smaller parts and uses the reconfiguration capabilities of the FPGA to execute the parts according to a task graph. That improves the energy consumption and the cost of using FPGAs in IoT devices. The paper “On Recurrent Neural Network based Theorem Prover for First Order Minimal Logic” by Ashot Baghdasaryan and Hovhannes Bolibekyan investigates using recurrent neural networks to determine the order of proof search in a sequent calculus for first-order minimal logic with a history mechanism. It demonstrates reduced durations in automated theorem proving systems.  The paper “Incremental Autoencoders for Text Streams Clustering in Social Networks” by Amal Rekik and Salma Jamoussi proposes a deep learning method to identify trending topics in a social network. It is built on detecting changes in streams of tweets. The method is experimentally validated to outperform relevant data stream algorithms in identifying “hot” topics. The paper “E-Capacity–Equivocation Region of Wiretap Channel” by Mariam Haroutunian studies a secure communication problem over the wiretap channel, where information transfer from the source to a legitimate receiver needs to be realized maximally secretly for an eavesdropper. This is an information-theoretic research which generalizes the capacity-equivocation region and secrecy-capacity function of the wiretap channel subject to error exponent criterion, thus deriving new and extended fundamental limits in reliable and secure communication in presence of a wiretapper. The paper “Leveraging Multifaceted Proximity Measures among Developers in Predicting Future Collaborations to Improve the Social Capital of Software Projects” by Amit Kumar and Sonali Agarwal targets improving the social capital of individual software developers and projects using machine learning. Authors’ approach applies network proximity and developer activity features to build a classifier for predicting the future collaborations among developers and generating relevant recommendations.
20

Srivastava, Rajat, Vinay Avasthi, and Priya R. Krishna. "Self-Adaptive Optimization Assisted Deep Learning Model for Partial Discharge Recognition." Parallel Processing Letters 32, no. 01n02 (March 2022). http://dx.doi.org/10.1142/s0129626421500249.

Abstract:
In power systems, research is being conducted on diagnosing and monitoring the condition of power equipment in a precise way. Partial discharge (PD) measurement under high voltage is recognized as the most established and useful approach for assessing the electrical behaviour of insulation material. PD analysis can localize dielectric failures even in small regions before dielectric breakdown occurs. Therefore, PD condition monitoring with accurate feature specification is an appropriate means of extending the life span of electrical apparatus. In this research work, a novel data-driven approach is introduced to detect PD pulses in power cables using optimization-based machine learning models. The proposed model encompasses two major phases: feature extraction and recognition. The first phase concentrates on extracting wavelet scattering transform-based features. In the second phase, these features are fed as input to an optimized Deep Belief Network (DBN), whose hidden-neuron count is tuned via a Self-Adaptive Border Collie Optimization (SA-BCO) algorithm. Finally, the performance is evaluated in terms of diverse performance measures.
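As a rough illustration of the two-phase pipeline summarized above (wavelet-based feature extraction followed by an optimized recognizer), the sketch below uses PyWavelets features as a stand-in for the wavelet scattering transform, an MLP classifier as a stand-in for the DBN, and a random search over hidden-layer sizes as a stand-in for SA-BCO; all signals, ranges, and parameters are illustrative assumptions rather than the authors' implementation.

```python
# Stand-in pipeline: wavelet features -> classifier with an optimized hidden-layer size.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def wavelet_features(pulse, wavelet="db4", level=4):
    """Summarise each decomposition band by its energy and log-variance."""
    coeffs = pywt.wavedec(pulse, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats += [np.sum(band ** 2), np.log(np.var(band) + 1e-12)]
    return np.array(feats)

def optimise_hidden_neurons(X, y, candidates=range(8, 129, 8), folds=3, seed=0):
    """Stand-in for SA-BCO: score candidate hidden-neuron counts, keep the best one."""
    rng = np.random.default_rng(seed)
    best_n, best_score = None, -np.inf
    for n in rng.permutation(list(candidates)):
        clf = MLPClassifier(hidden_layer_sizes=(int(n),), max_iter=500, random_state=seed)
        score = cross_val_score(clf, X, y, cv=folds).mean()
        if score > best_score:
            best_n, best_score = int(n), score
    return best_n, best_score

# Usage with synthetic PD-like pulses (two classes of damped oscillations).
t = np.linspace(0, 1, 512)
pulses, labels = [], []
for k in range(200):
    f = 40 if k % 2 == 0 else 80
    pulses.append(np.exp(-8 * t) * np.sin(2 * np.pi * f * t) + 0.05 * np.random.randn(t.size))
    labels.append(k % 2)
X = np.vstack([wavelet_features(p) for p in pulses])
y = np.array(labels)
print(optimise_hidden_neurons(X, y))
```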
21

AHIRE, KAVITA, and Jyoti Yadav. "Network Topology Classification in SDN Ecosystem using Machine Learning." International Journal of Next-Generation Computing, July 26, 2022. http://dx.doi.org/10.47164/ijngc.v13i2.410.

Abstract:
To meet the increasing network demands of enterprise environments and data centers, traditional network architectures have been replaced by software-enabled hardware devices for developing agile, dynamic, and programmable networks. Software Defined Networking (SDN) is a paradigm shift that abstracts network design and infrastructure in software and then implements it across hardware devices. SDN is used to build and manage the network in a customized way. The SDN architecture offers network virtualization, network programmability, and flexibility by decoupling the control and data planes, which further enriches network performance. As such, the SDN controller is a tactical control point: it normally directs flow control to the switches and/or routers and hosts the application logic for deploying intelligent networks. SDN combined with Machine Learning (ML) and Artificial Intelligence (AI) techniques builds network models that can take decisions based on self-learning and self-management capabilities. Accurate classification of the topology is of prime importance to satisfy future network prerequisites such as unpredictable traffic patterns, dynamic scaling, flexibility, and centralized control. The controller needs exact information about the network topology in order to configure and manage the network, so topology classification is an important component of any Software Defined Network architecture. This paper presents the classification of topologies using different supervised ML algorithms. The accuracy obtained from the Support Vector Machine (SVM) and Classification and Regression Trees (CART) is 95% and 90%, respectively. The experimental results show that, according to k-fold cross-validation, the SVM algorithm is the most accurate among the evaluated ML algorithms, with a mean accuracy of 85%.
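A minimal sketch of the classification experiment described above, with SVM and CART evaluated under k-fold cross-validation; the synthetic topology features (node count, link count, average degree, diameter) are assumptions for illustration and do not reproduce the paper's SDN dataset.

```python
# Supervised topology classification with SVM and CART plus stratified k-fold validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

def synth_topology(kind):
    """Toy feature vector [nodes, links, avg_degree, diameter] per topology class."""
    n = int(rng.integers(8, 64))
    if kind == 0:      # star-like
        links, diam = n - 1, 2
    elif kind == 1:    # ring-like
        links, diam = n, n // 2
    else:              # mesh-like
        links, diam = n * (n - 1) // 4, 3
    return [n, links, 2 * links / n, diam]

X = np.array([synth_topology(k % 3) for k in range(300)], dtype=float)
y = np.array([k % 3 for k in range(300)])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf", C=10.0, gamma="scale")),
                  ("CART", DecisionTreeClassifier(max_depth=6, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: mean accuracy {scores.mean():.2f} (+/- {scores.std():.2f})")
```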
22

De Blasi, Stefano, Maryam Bahrami, Elmar Engels, and Alexander Gepperth. "Safe contextual Bayesian optimization integrated in industrial control for self-learning machines." Journal of Intelligent Manufacturing, February 13, 2023. http://dx.doi.org/10.1007/s10845-023-02087-3.

Abstract:
Intelligent manufacturing applications and agent-based implementations are scientifically investigated due to the enormous potential of industrial process optimization. The most widespread data-driven approach is the use of experimental history under test conditions for training, followed by execution of the trained model. Since factors such as tool wear affect the process, the experimental history has to be compiled extensively. In addition, individual machine noise implies that the models are not easily transferable to other (theoretically identical) machines. In contrast, a continual learning system should have the capacity to adapt (slightly) to a changing environment, e.g., another machine under different working conditions. Since this adaptation can potentially have a negative impact on process quality, especially in industry, safe optimization methods are required. In this article, we present a significant step towards self-optimizing machines in industry, by introducing a novel method for efficient safe contextual optimization and continuously trading off between exploration and exploitation. Furthermore, an appropriate data discard strategy and local approximation techniques enable continual optimization. The approach is implemented as a generic software module for an industrial edge control device. We apply this module to a steel straightening machine as an example, enabling it to adapt safely to changing environments.
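The core loop of safe contextual optimization can be illustrated as follows: a Gaussian-process surrogate over (context, parameter) pairs, an upper-confidence-bound acquisition for the exploration/exploitation trade-off, and a lower-confidence-bound filter that keeps proposals within a safety threshold. This is a simplified sketch under assumed kernels, thresholds, and a toy objective, not the authors' algorithm or its industrial edge implementation.

```python
# One proposal step of a (simplified) safe contextual Bayesian optimisation loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose(history_X, history_y, context, candidates, safety_threshold, beta=2.0):
    """Pick the next parameter for the given context from a pessimistically safe set."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4, normalize_y=True)
    gp.fit(history_X, history_y)
    X_cand = np.column_stack([np.full(len(candidates), context), candidates])
    mu, sigma = gp.predict(X_cand, return_std=True)
    safe = (mu - beta * sigma) >= safety_threshold      # pessimistic safety check
    if not safe.any():                                   # fall back to the safest-looking point
        return candidates[int(np.argmax(mu - beta * sigma))]
    ucb = mu + beta * sigma                               # optimistic acquisition
    ucb[~safe] = -np.inf
    return candidates[int(np.argmax(ucb))]

# Usage on a toy process whose optimum shifts with the context (e.g. material batch).
rng = np.random.default_rng(1)
f = lambda c, x: 1.0 - (x - 0.5 * c) ** 2 + 0.01 * rng.standard_normal()
X_hist = rng.uniform(0, 1, size=(20, 2))                 # columns: context, parameter
y_hist = np.array([f(c, x) for c, x in X_hist])
next_x = propose(X_hist, y_hist, context=0.8,
                 candidates=np.linspace(0, 1, 101), safety_threshold=0.5)
print("next safe parameter:", next_x)
```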
23

Scrimieri, Daniele, Omar Adalat, Shukri Afazov, and Svetan Ratchev. "An integrated data- and capability-driven approach to the reconfiguration of agent-based production systems." International Journal of Advanced Manufacturing Technology, November 29, 2022. http://dx.doi.org/10.1007/s00170-022-10553-0.

Abstract:
Industry 4.0 promotes highly automated mechanisms for setting up and operating flexible manufacturing systems, using distributed control and data-driven machine intelligence. This paper presents an approach to reconfiguring distributed production systems based on complex product requirements, combining the capabilities of the available production resources. A method is introduced for both checking the "realisability" of a product, by matching required operations and capabilities, and for adapting resources. The reconfiguration is handled by a multi-agent system, which reflects the distributed nature of the production system and provides an intelligent interface to the user. This is all integrated with a self-adaptation technique for learning how to improve the performance of the production system as part of a reconfiguration. This technique is based on a machine learning algorithm that generalises from past experience of adjustments. The mechanisms of the proposed approach have been evaluated on a distributed robotic manufacturing system, demonstrating their efficacy. Nevertheless, the approach is general and can be applied to other scenarios.
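The "realisability" check described above reduces, in its simplest form, to matching each required operation against the declared capabilities of the available resources; the sketch below illustrates this with hypothetical operation and capability names that are not taken from the paper.

```python
# Minimal capability-matching check: a product is realisable if every required
# operation is covered by at least one available resource.
from typing import Dict, List, Set, Tuple

def check_realisability(required_ops: List[str],
                        resources: Dict[str, Set[str]]) -> Tuple[bool, Dict[str, List[str]], List[str]]:
    """Return (realisable?, operation -> candidate resources, missing operations)."""
    assignment: Dict[str, List[str]] = {}
    missing: List[str] = []
    for op in required_ops:
        candidates = [name for name, caps in resources.items() if op in caps]
        if candidates:
            assignment[op] = candidates
        else:
            missing.append(op)
    return (not missing), assignment, missing

# Usage: two robots and a fixture offering capabilities for a drilled-and-glued part.
resources = {
    "robot_a": {"pick", "place", "drill"},
    "robot_b": {"pick", "place", "glue"},
    "fixture_1": {"clamp"},
}
ok, plan, missing = check_realisability(["clamp", "drill", "glue", "inspect"], resources)
print(ok, plan, missing)   # not realisable: "inspect" is missing, which would trigger adaptation
```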
24

Benato, Lisa, Erik Buhmann, Martin Erdmann, Peter Fackeldey, Jonas Glombitza, Nikolai Hartmann, Gregor Kasieczka, et al. "Shared Data and Algorithms for Deep Learning in Fundamental Physics." Computing and Software for Big Science 6, no. 1 (May 3, 2022). http://dx.doi.org/10.1007/s41781-022-00082-6.

Abstract:
We introduce a Python package that provides simple and unified access to a collection of datasets from fundamental physics research—including particle physics, astroparticle physics, and hadron- and nuclear physics—for supervised machine learning studies. The datasets contain hadronic top quarks, cosmic-ray-induced air showers, phase transitions in hadronic matter, and generator-level histories. While public datasets from multiple fundamental physics disciplines already exist, the common interface and provided reference models simplify future work on cross-disciplinary machine learning and transfer learning in fundamental physics. We discuss the design and structure and outline how additional datasets can be submitted for inclusion. As a showcase application, we present a simple yet flexible graph-based neural network architecture that can easily be applied to a wide range of supervised learning tasks. We show that our approach reaches performance close to dedicated methods on all datasets. To simplify adaptation for various problems, we provide easy-to-follow instructions on how graph-based representations of data structures, relevant for fundamental physics, can be constructed, and provide code implementations for several of them. Implementations are also provided for our proposed method and all reference algorithms.
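One of the graph-based representations mentioned above can be illustrated by building a k-nearest-neighbour graph over a set of constituents; the feature layout and choice of k are assumptions for illustration, and the package's own loaders and reference models are not reproduced here.

```python
# Turn a set of constituents (e.g. jet particles in the eta-phi plane) into a k-NN
# graph edge list that a graph neural network library can consume.
import numpy as np

def knn_graph(points: np.ndarray, k: int = 4):
    """Return an edge index of shape (2, N*k) connecting each node to its k nearest neighbours."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                  # exclude self-loops
    neighbours = np.argsort(dist, axis=1)[:, :k]    # indices of the k closest nodes
    senders = np.repeat(np.arange(points.shape[0]), k)
    receivers = neighbours.reshape(-1)
    return np.stack([senders, receivers])

# Usage: 16 synthetic constituents embedded in a two-dimensional feature plane.
rng = np.random.default_rng(0)
constituents = rng.normal(size=(16, 2))
edges = knn_graph(constituents, k=3)
print(edges.shape)   # (2, 48) edge index, ready for a graph network
```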
25

Tsouvalas, Vasileios, Aaqib Saeed, and Tanir Ozcelebi. "Federated Self-Training for Semi-Supervised Audio Recognition." ACM Transactions on Embedded Computing Systems, March 4, 2022. http://dx.doi.org/10.1145/3520128.

Abstract:
Federated Learning is a distributed machine learning paradigm dealing with decentralized and personal datasets. Since data reside on devices like smartphones and virtual assistants, labeling is entrusted to the clients or labels are extracted in an automated way. Specifically, in the case of audio data, acquiring semantic annotations can be prohibitively expensive and time-consuming. As a result, an abundance of audio data remains unlabeled and unexploited on users’ devices. Most existing federated learning approaches focus on supervised learning without harnessing the unlabeled data. In this work, we study the problem of semi-supervised learning of audio models via self-training in conjunction with federated learning. We propose FedSTAR to exploit large-scale on-device unlabeled data to improve the generalization of audio recognition models. We further demonstrate that self-supervised pre-trained models can accelerate the training of on-device models, significantly improving convergence within fewer training rounds. We conduct experiments on diverse public audio classification datasets and investigate the performance of our models under varying percentages of labeled and unlabeled data. Notably, we show that with as little as 3% labeled data available, FedSTAR on average can improve the recognition rate by 13.28% compared to the fully-supervised federated model.
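The self-training ingredient of this approach can be sketched as follows: clients pseudo-label confident unlabeled samples with the current model, train locally, and the server averages the parameters. The linear model, confidence threshold, and toy embeddings below are illustrative assumptions, not the FedSTAR implementation, and for brevity the clients train from scratch rather than from the global weights.

```python
# Semi-supervised federated round: pseudo-labeling on clients + parameter averaging.
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])

def client_update(global_model, X_lab, y_lab, X_unlab, conf=0.8, seed=0):
    """One client round: pseudo-label confident unlabeled samples, then fit locally."""
    proba = global_model.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= conf
    X_train = np.vstack([X_lab, X_unlab[keep]])
    y_train = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    local = SGDClassifier(loss="log_loss", random_state=seed)
    local.partial_fit(X_train, y_train, classes=CLASSES)
    return local.coef_, local.intercept_

def server_average(updates):
    """FedAvg-style server step: average client parameters (equal weights for simplicity)."""
    coefs, intercepts = zip(*updates)
    return np.mean(coefs, axis=0), np.mean(intercepts, axis=0)

# Usage with synthetic two-class "audio embeddings": a small labeled seed plus a
# larger unlabeled pool whose labels are never revealed to the clients.
rng = np.random.default_rng(0)
def make_split(n):
    y = rng.choice(CLASSES, n)
    return rng.normal(size=(n, 8)) + 2.0 * y[:, None], y

X_seed, y_seed = make_split(40)
global_model = SGDClassifier(loss="log_loss", random_state=0).fit(X_seed, y_seed)

updates = []
for client in range(3):
    X_lab, y_lab = make_split(10)
    X_unlab, _ = make_split(50)          # labels discarded: on-device unlabeled data
    updates.append(client_update(global_model, X_lab, y_lab, X_unlab))
coef, intercept = server_average(updates)
print("aggregated coefficient shape:", coef.shape)
```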
26

Casado, Fernando E., Dylan Lema, Marcos F. Criado, Roberto Iglesias, Carlos V. Regueiro, and Senén Barro. "Concept drift detection and adaptation for federated and continual learning." Multimedia Tools and Applications, July 17, 2021. http://dx.doi.org/10.1007/s11042-021-11219-x.

Abstract:
Smart devices, such as smartphones, wearables, robots, and others, can collect vast amounts of data from their environment. This data is suitable for training machine learning models, which can significantly improve their behavior and, therefore, the user experience. Federated learning is a young and popular framework that allows multiple distributed devices to train deep learning models collaboratively while preserving data privacy. Nevertheless, this approach may not be optimal for scenarios where data distribution is non-identical among the participants or changes over time, causing what is known as concept drift. Little research has yet been done in this field, but this kind of situation is quite frequent in real life and poses new challenges to both continual and federated learning. Therefore, in this work, we present a new method, called Concept-Drift-Aware Federated Averaging (CDA-FedAvg). Our proposal is an extension of the most popular federated algorithm, Federated Averaging (FedAvg), enhancing it for continual adaptation under concept drift. We empirically demonstrate the weaknesses of regular FedAvg and prove that CDA-FedAvg outperforms it in this type of scenario.
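A client-side drift detector is the key ingredient such a method needs in order to decide when to switch from regular federated averaging to an adaptation phase; the Page-Hinkley test below is a generic stand-in used only for illustration, not the detector or adaptation strategy used by CDA-FedAvg.

```python
# Generic drift detector watching a stream of per-sample losses on one client.
class PageHinkley:
    def __init__(self, delta=0.005, threshold=1.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.cum, self.min_cum, self.n = 0.0, 0.0, 0.0, 0

    def update(self, loss: float) -> bool:
        """Feed one observation; return True when an increase (drift) is signalled."""
        self.n += 1
        self.mean += (loss - self.mean) / self.n          # running mean of the stream
        self.cum += loss - self.mean - self.delta         # cumulative deviation
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.threshold

# Usage: per-sample losses jump when the local data distribution changes.
import random
random.seed(0)
detector = PageHinkley(threshold=2.0)
stream = [random.gauss(0.2, 0.05) for _ in range(200)] + [random.gauss(0.8, 0.05) for _ in range(50)]
for t, loss in enumerate(stream):
    if detector.update(loss):
        print(f"drift signalled at step {t}; client switches to its adaptation phase")
        break
```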
27

Alabsi, Mohammed, Larry Pearlstein, Nithya Nalluri, Michael Franco-Garcia, and Zachary Leong. "Transfer learning evaluation based on optimal convolution neural networks architecture for bearing fault diagnosis applications." Journal of Vibration and Control, February 7, 2023, 107754632311557. http://dx.doi.org/10.1177/10775463231155713.

Abstract:
Intelligent fault diagnosis utilizing deep learning algorithms is currently a topic of great interest. When developing a new Convolutional Neural Network (CNN) architecture to address a machine diagnosis problem, it is common to use a deep model, with many layers, many feature maps, and large kernels. These models are capable of learning complex relationships and can potentially achieve superior performance on test data. However, not only does a large network potentially impose undue computational complexity for training and eventual deployment, it may also lead to more brittleness—where data outside of the curated dataset used in CNN training and evaluation is poorly handled. Accordingly, this paper will investigate a methodical approach for identifying a quasi-optimal CNN architecture to maximize robustness when a model is trained under one set of operating conditions, and deployed under a different set of conditions. Optuna software will be used to optimize a baseline CNN model for robustness to different rotational speeds and bearing model numbers. To further improve the network generalization capabilities, this paper proposes the addition of white Gaussian noise to the raw vibration training data. Results indicate that the number of trainable weights and associated multiplications in the optimized model were reduced by almost 95% without jeopardizing the network classification accuracy. Additionally, moderate Additive White Gaussian Noise (AWGN) improved the model adaptation capabilities.
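The two ideas, searching for a compact architecture with Optuna and adding white Gaussian noise to the raw training signals, can be sketched as follows; the tiny synthetic dataset, search ranges, and SNR are assumptions, and the block does not reproduce the paper's bearing data or baseline model.

```python
# Optuna search over a small 1D CNN, trained on AWGN-augmented synthetic vibration signals.
import numpy as np
import optuna
import tensorflow as tf

rng = np.random.default_rng(0)
T, n_classes = 256, 3

def make_signals(n):
    y = rng.integers(0, n_classes, n)
    t = np.linspace(0, 1, T)
    X = np.sin(2 * np.pi * (20 + 15 * y)[:, None] * t)[..., None].astype("float32")
    return X, y

def add_awgn(X, snr_db=10.0):
    """Add white Gaussian noise at a target signal-to-noise ratio (in dB)."""
    power = np.mean(X ** 2)
    sigma = np.sqrt(power / (10 ** (snr_db / 10)))
    return X + rng.normal(0.0, sigma, X.shape).astype("float32")

X_train, y_train = make_signals(300)
X_val, y_val = make_signals(100)
X_train = add_awgn(X_train, snr_db=10.0)   # noise is added only to the training split

def objective(trial):
    filters = trial.suggest_int("filters", 4, 32, step=4)
    kernel = trial.suggest_int("kernel", 3, 15, step=2)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(T, 1)),
        tf.keras.layers.Conv1D(filters, kernel, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=3, batch_size=32, verbose=0)
    return model.evaluate(X_val, y_val, verbose=0)[1]   # validation accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
print(study.best_params)
```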
28

Vieira, Diogo Munaro, Chrystinne Fernandes, Carlos Lucena, and Sérgio Lifschitz. "Driftage: a multi-agent system framework for concept drift detection." GigaScience 10, no. 6 (June 1, 2021). http://dx.doi.org/10.1093/gigascience/giab030.

Abstract:
Background: The amount of data and the behavior changes in society happen at a swift pace in this interconnected world. Consequently, machine learning algorithms lose accuracy because they do not know these new patterns. This change in the data pattern is known as concept drift. There exist many approaches for dealing with these drifts. Usually, these methods are costly to implement because they require (i) knowledge of drift detection algorithms, (ii) software engineering strategies, and (iii) continuous maintenance concerning new drifts. Results: This article proposes Driftage, a new framework using multi-agent systems to considerably simplify the implementation of concept drift detectors and to divide concept drift detection responsibilities between agents, enhancing the explainability of each part of drift detection. As a case study, we illustrate our strategy using an electromyography-based muscle activity monitor. We show a reduction in the number of false-positive drifts detected, improving detection interpretability and enabling concept drift detectors' interactivity with other knowledge bases. Conclusion: We conclude that Driftage introduces a new paradigm for implementing concept drift algorithms with a multi-agent architecture, which contributes to splitting drift detection responsibility, improving algorithm interpretability, and enabling more dynamic algorithm adaptation.
29

Chia, Nai-Hui, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. "Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning." Journal of the ACM, August 10, 2022. http://dx.doi.org/10.1145/3549524.

Abstract:
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang’s breakthrough quantum-inspired algorithm for recommendation systems [STOC’19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén, Su, Low, and Wiebe [STOC’19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all prior results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ2-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
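The key primitive highlighted above, ℓ2-norm (length-squared) sampling for approximating matrix products from a few sampled terms, can be illustrated with the standard randomized matrix-multiplication estimator; this is only a sketch of the underlying idea, not the paper's SVT framework, and it assumes the sampling probabilities are available, as the quantum-inspired input model provides.

```python
# Length-squared sampling estimator for A @ B: sample s column/row pairs and
# average unbiased rank-1 terms.
import numpy as np

def approx_matmul(A, B, s, rng=np.random.default_rng(0)):
    """Estimate A @ B from s column/row pairs drawn with length-squared probabilities."""
    col_norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    probs = col_norms / col_norms.sum()
    idx = rng.choice(A.shape[1], size=s, p=probs)
    est = np.zeros((A.shape[0], B.shape[1]))
    for j in idx:
        est += np.outer(A[:, j], B[j, :]) / (s * probs[j])   # unbiased rank-1 term
    return est

# Usage: the relative error shrinks as the number of samples s grows.
rng = np.random.default_rng(1)
A, B = rng.normal(size=(50, 2000)), rng.normal(size=(2000, 40))
exact = A @ B
for s in (50, 200, 800):
    err = np.linalg.norm(approx_matmul(A, B, s) - exact) / np.linalg.norm(exact)
    print(f"s={s}: relative Frobenius error {err:.3f}")
```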
30

Chrysafiadi, Konstantina, Margaritis Kamitsios, and Maria Virvou. "Fuzzy-based dynamic difficulty adjustment of an educational 3D-game." Multimedia Tools and Applications, February 16, 2023. http://dx.doi.org/10.1007/s11042-023-14515-w.

Abstract:
An educational game aims to employ the pleasant and fascinating environment of a game for educational purposes. However, when the game’s educational content or playing environment does not match the player’s learning needs or game-playing skills, the player can find the game either too easy or too difficult and decide to drop it. Therefore, a successful educational game has to offer Dynamic Difficulty Adjustment (DDA) of both the game’s educational content and the game’s playing environment. Considering the above, this paper presents a fuzzy-based DDA mechanism for a 3D-game that teaches the programming language ‘HTML’. The game dynamically adapts the difficulty of battles and maze navigation to each individual player’s skills. The novelty of the presented game lies in the application of DDA to the two dimensions (educational content and playing environment) of an educational game and in the use of fuzzy logic for achieving a balance between the game’s difficulty and the user’s playing skills. The adaptation is realized in real time, during play, and not only each time the player changes game level/track. Furthermore, fuzzy logic allows DDA to be applied without the need for large amounts of learning data, unlike other Artificial Intelligence techniques for DDA (i.e., machine learning techniques). The presented game was evaluated through a comparative evaluation, which included questionnaires, an experiment in real conditions, and a t-test statistical analysis. The evaluation results showed that the incorporated adaptation mechanism increases the user’s motivation and succeeds in keeping the players’ interest in the game undiminished.
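A fuzzy difficulty-adjustment step of the kind described above can be sketched with triangular membership functions and a small Sugeno-style rule base; the membership shapes, rule outputs, and performance score below are illustrative assumptions, not the game's actual rule base.

```python
# Fuzzify recent player performance, apply three rules, defuzzify into a difficulty change.
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def difficulty_delta(performance: float) -> float:
    """performance in [0, 1] (e.g. recent success rate) -> change of difficulty."""
    low = tri(performance, -0.01, 0.0, 0.5)
    medium = tri(performance, 0.2, 0.5, 0.8)
    high = tri(performance, 0.5, 1.0, 1.01)
    # Sugeno-style rules: struggling -> ease off, balanced -> keep, cruising -> harder.
    rules = [(low, -0.2), (medium, 0.0), (high, +0.2)]
    weight = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weight if weight else 0.0

# Usage: difficulty is nudged after every encounter, not only at level changes.
difficulty = 0.5
for success_rate in (0.9, 0.85, 0.4, 0.2):
    difficulty = min(1.0, max(0.0, difficulty + difficulty_delta(success_rate)))
    print(f"success={success_rate:.2f} -> difficulty={difficulty:.2f}")
```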
31

Olney, Brooks, and Robert Karam. "Diverse, Neural Trojan Resilient Ecosystem of Neural Network IP." ACM Journal on Emerging Technologies in Computing Systems, February 2, 2022. http://dx.doi.org/10.1145/3471189.

Abstract:
Adversarial machine learning is a prominent research area aimed towards exposing and mitigating security vulnerabilities in AI/ML algorithms and their implementations. Data poisoning and neural Trojans enable an attacker to drastically change the behavior and performance of a Convolutional Neural Network (CNN) merely by altering some of the input data during training. Such attacks can be catastrophic in the field, e.g., for self-driving vehicles. In this paper, we propose deploying a CNN as an ecosystem of variants, rather than a singular model. The ecosystem is derived from the original trained model, and though every derived model is structurally different, they are all functionally equivalent to the original and each other. We propose two complementary techniques: stochastic parameter mutation, where the weights θ of the original are shifted by a small, random amount, and a delta-update procedure which functions by XOR’ing all of the parameters with an update file containing the Δθ values. This technique is effective against transferability of a neural Trojan to the greater ecosystem by amplifying the Trojan’s malicious impact to easily detectable levels; thus, deploying a model as an ecosystem can render the ecosystem more resilient against a neural Trojan attack.
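The two complementary techniques can be sketched as follows: stochastic parameter mutation derives a functionally near-equivalent variant by shifting each weight by a small random amount, and a delta update is applied by XOR-ing stored bit patterns; tensor sizes and the mutation scale are assumptions for illustration, not the paper's configuration.

```python
# Stochastic parameter mutation plus a bit-exact XOR delta "update file".
import numpy as np

def mutate(weights: np.ndarray, scale=1e-4, seed=0) -> np.ndarray:
    """Derive a variant model by shifting weights by small random perturbations."""
    rng = np.random.default_rng(seed)
    return weights + rng.uniform(-scale, scale, size=weights.shape).astype(weights.dtype)

def make_delta(old: np.ndarray, new: np.ndarray) -> np.ndarray:
    """Bitwise XOR delta that turns the old float32 parameters into the new ones."""
    return old.view(np.uint32) ^ new.view(np.uint32)

def apply_delta(old: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Applying the XOR delta restores the variant exactly, bit for bit."""
    return (old.view(np.uint32) ^ delta).view(np.float32)

# Usage: derive a variant, ship only the XOR delta, reconstruct it exactly.
theta = np.random.default_rng(1).normal(size=1024).astype(np.float32)
theta_variant = mutate(theta, scale=1e-4, seed=42)
delta = make_delta(theta, theta_variant)
assert np.array_equal(apply_delta(theta, delta), theta_variant)
print("max |delta theta|:", np.max(np.abs(theta_variant - theta)))
```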
32

Segura, Albert, Jose Maria Arnau, and Antonio Gonzalez. "Irregular accesses reorder unit: improving GPGPU memory coalescing for graph-based workloads." Journal of Supercomputing, July 18, 2022. http://dx.doi.org/10.1007/s11227-022-04621-1.

Abstract:
GPGPU architectures have become the dominant platform for massively parallel workloads, delivering high performance and energy efficiency for popular applications such as machine learning, computer vision or self-driving cars. However, irregular applications, such as graph processing, fail to fully exploit GPGPU resources due to their divergent memory accesses that saturate the memory hierarchy. To reduce the pressure on the memory subsystem for divergent memory-intensive applications, programmers must take into account the SIMT execution model and memory coalescing in GPGPUs, devoting significant effort to complex optimization techniques. Despite these efforts, we show that irregular graph processing still suffers from low GPGPU performance. We observe that in many irregular applications the mapping of data to threads can be safely changed. In other words, it is possible to relax the strict relationship between thread and data processed to reduce memory divergence. Based on this observation, we propose the Irregular accesses Reorder Unit (IRU), a novel hardware extension tightly integrated in the GPGPU pipeline. The IRU reorders data processed by the threads on irregular accesses to improve memory coalescing, i.e., it tries to assign data elements to threads so as to produce coalesced accesses in SIMT groups. Furthermore, the IRU is capable of filtering and merging duplicated accesses, significantly reducing the workload. Programmers can easily utilize the IRU with a simple API, or let the compiler issue instructions from our extended ISA. We evaluate our proposal for state-of-the-art graph-based algorithms and a wide selection of applications. Results show that the IRU achieves a memory coalescing improvement of 1.32x and a 46% reduction in the overall traffic in the memory hierarchy, which results in 1.33x speedup and 13% energy savings on average, while incurring a small 5.6% area overhead.
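A small software model conveys the intuition behind the reorder unit: when the thread-to-data mapping can be relaxed, sorting and de-duplicating the irregular indices before assigning them to SIMT lanes reduces the number of memory transactions per warp; the warp size and cache-line width below are conventional values assumed for illustration, not a model of the actual hardware.

```python
# Count distinct cache lines touched per warp before and after reordering irregular indices.
import numpy as np

WARP, LINE = 32, 16   # threads per warp, elements per cache line (assumed values)

def transactions(indices: np.ndarray) -> int:
    """Distinct cache lines touched per warp, a proxy for memory transactions."""
    total = 0
    for w in range(0, len(indices), WARP):
        total += len(np.unique(indices[w:w + WARP] // LINE))
    return total

rng = np.random.default_rng(0)
edges = rng.integers(0, 10_000, size=4096)   # irregular neighbour indices (e.g. graph edges)
reordered = np.sort(edges)                    # relaxed thread-to-data mapping
deduplicated = np.unique(edges)               # optional filtering of duplicate accesses

print("baseline transactions:  ", transactions(edges))
print("reordered transactions: ", transactions(reordered))
print("deduplicated workload:  ", len(deduplicated), "of", len(edges), "accesses remain")
```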
33

Alotaibi, Ghala, Mohammed Awawdeh, Fathima Fazrina Farook, Mohamed Aljohani, Razan Mohamed Aldhafiri, and Mohamed Aldhoayan. "Artificial intelligence (AI) diagnostic tools: utilizing a convolutional neural network (CNN) to assess periodontal bone level radiographically—a retrospective study." BMC Oral Health 22, no. 1 (September 13, 2022). http://dx.doi.org/10.1186/s12903-022-02436-3.

Abstract:
Background: The purpose of this investigation was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the accuracy and usefulness of this system for the detection of alveolar bone loss in periapical radiographs in the anterior region of the dental arches. We also aimed to evaluate the usefulness of the system in categorizing the severity of bone loss due to periodontal disease. Method: A data set of 1724 intraoral periapical images of upper and lower anterior teeth in 1610 adult patients was retrieved from the ROMEXIS software management system at King Saud bin Abdulaziz University for Health Sciences. Using a combination of a pre-trained deep CNN architecture and a self-trained network, the radiographic images were used to determine the optimal CNN algorithm. The diagnostic and predictive accuracy, precision, confusion matrix, recall, F1-score, Matthews Correlation Coefficient (MCC), and Cohen's Kappa were calculated for the deep CNN algorithm in Python. Results: The periapical radiograph dataset was divided randomly into 70% training, 20% validation, and 10% testing datasets. With the deep learning algorithm, the diagnostic accuracy for classifying normal versus disease was 73.0%, and 59% for the classification of the levels of severity of the bone loss. The model showed a significant difference in the confusion matrix, accuracy, precision, recall, F1-score, Matthews Correlation Coefficient (MCC), Cohen's Kappa, and receiver operating characteristic (ROC) between the binary and multi-class classification models. Conclusion: This study revealed that the deep CNN algorithm (VGG-16) was useful for detecting alveolar bone loss in periapical radiographs, and has a satisfactory ability to detect the severity of bone loss in teeth. The results suggest that machines can perform better based on the level classification and the captured characteristics of the image diagnosis. With additional optimization of the periodontal dataset, it is expected that a computer-aided detection system can become an effective and efficient procedure for aiding in the detection and staging of periodontal disease.
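A hedged sketch of the kind of model the study describes, a pre-trained VGG-16 backbone with a small self-trained head for a multi-class severity task, is shown below; the input size, class count, directory paths, and training settings are assumptions, and the study's dataset and exact configuration are not reproduced.

```python
# Transfer-learning classifier: frozen VGG-16 features plus a small trainable head.
import tensorflow as tf

NUM_CLASSES = 3   # e.g. normal / moderate / severe bone loss (assumed labels)

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                     # keep the pre-trained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Usage: train/validation splits would be loaded from directories of radiographs
# (the paths below are placeholders).
# train_ds = tf.keras.utils.image_dataset_from_directory("radiographs/train", image_size=(224, 224))
# val_ds = tf.keras.utils.image_dataset_from_directory("radiographs/val", image_size=(224, 224))
# model.fit(train_ds, validation_data=val_ds, epochs=10)
model.summary()
```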
34

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.

Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stage of Literacy Technologies What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond of their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support. (Crowe, 2003) His goal was to take this existing technology previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not forgone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: 733mhz Pentium III 64mb RAM, 8 or 10gb internal hard disk drive CD/DVD ROM drive (speed unknown) Nvidia graphics processor, with HDTV support 4 USB 1.1 ports (adapter required), AC3 audio 10/100 ethernet port, Optional 56k modem (TechTV, 2001) While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing it if was possible to run Linux and additional OSS on the Xbox. 
In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks. If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy and Xbox, download and install Linux, and use this new device to write, create, and innovate . This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphic user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community / enthusiast level. Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from that of our students. However, several distributions including Lycoris, Mandrake, LindowsOS, and other are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. 
It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal Less than a year after Microsoft introduced the The Xbox, the Xbox Linux project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. Internet Explorer, the default browser on all systems running Windows-base operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not yet possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. 
They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet capable in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. Problems: Control and Access The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined,both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS as being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox required the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite that fact that such chips can still be ordered and shipped here by less conventional means, it does not change that fact that the chips themselves would be illegal in the U.S. due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and have taken a more defiant tone now with Microsoft regarding their circumvention efforts. 
(Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003) Problems: Learning Curves and Usability In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclinations. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and basically having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based of the popular Gentoo Linux distribution, and Ed’s Debian, based off the Debian GNU / Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, the Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. 
The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux, as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14 year olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possible Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and / or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different than what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institutional in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux. An Open Future, or a Closed one? It is unclear how far the Xbox Linux Project will be allowed to go in their efforts to invade an essentially a proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely include additional anticircumvention technologies incorporated into it that could set the Xbox Linux Project back by months, years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control. 
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as a user-friendly, easy-to-use desktop operating system, rather than just a server or "techno-geek" environment suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty or collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define "access" asserts that there must be a willingness for teachers and students to "fight for the technologies that they need to pursue their goals for their own teaching and learning." (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the "beige boxes" of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively in this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers.

About the Author: Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu

Works Cited
Baron, Denis. "From Pencils to Pixels: The Stages of Literacy Technologies." Passions, Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, eds. Utah: Utah State University Press, 1999. 15–33.
Becker, David. "Ballmer: Mod Chips Threaten Xbox." News.com. 21 Oct 2002. http://news.com.com/2100-1040-962797.php
Finnie, Scott. "Desktop Linux Edges Into The Mainstream." TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software
Additional sources cited in the text (URLs as given in the original reference list): http://xbox-linux.sourceforge.net/docs/debian.php ; http://news.com.com/2100-1040-978957.html?tag=nl ; http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml ; http://www.neoseeker.com/news/story/1062/ ; http://www.bookreader.co.uk ; http://www.theregister.co.uk/content/archive/29439.html ; http://gentoox.shallax.com/ ; http://ragib.hypermart.net/linux/ ; http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html ; http://www.xbox-linux.sourceforge.net ; http://www.theregister.co.uk/content/archive/27487.html ; http://www.theregister.co.uk/content/archive/26078.html ; http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047 ; http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html ; http://www.wired.com/news/business/0,1367,61984,00.html ; http://www.gentoo.org/main/en/about.xml ; http://www.gentoo.org/main/en/philosophy.xml ; http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html ; http://xbox-linux.sourceforge.net/docs/usershelpusers.html ; http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/

Citation reference for this article. MLA Style: Cesarini, Paul. "'Opening' the Xbox." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>. APA Style: Cesarini, P. (2004, Jul 1). 'Opening' the Xbox. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>
