Journal articles on the topic 'Integrated Scheduler Architecture'

Consult the top 50 journal articles for your research on the topic 'Integrated Scheduler Architecture.'

1

Shahane, Priti M., and Narayan Pisharoty. "Implementation of ISLIP scheduler for NOC router on FPGA." International Journal of Engineering & Technology 7, no. 2.12 (April 3, 2018): 268. http://dx.doi.org/10.14419/ijet.v7i2.12.11302.

Abstract:
Network on chip (NoC) effectively replaces the traditional bus-based architecture in a system on chip (SoC). The NoC resolves the communication bottleneck of bus-based interconnection in SoCs, where large numbers of intellectual property (IP) modules are integrated on a single chip for better performance. In a NoC architecture, the router is the dominant component and should provide a contention-free architecture with low latency. The router consists of an input block, a scheduler, and a crossbar switch. The design of the scheduler drives the performance of the NoC router in terms of latency; hence, a starvation-free scheduler is of paramount importance in NoC router design. The iSLIP (iterative SLIP) scheduler uses a programmable priority encoder, which makes it a faster and more efficient scheduler than a plain round-robin arbiter. In this paper, a 2x4 NoC router using the iSLIP scheduler is proposed. The proposed design is implemented in Verilog on a Xilinx Spartan-3 device.
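To make the grant/accept mechanism behind iSLIP concrete, here is a minimal Python model of a single iSLIP iteration with rotating grant and accept pointers. It is only a behavioural approximation for illustration, not the authors' Verilog design; the port counts, function names, and request pattern are assumptions.

```python
def islip_iteration(requests, grant_ptr, accept_ptr):
    """One iSLIP iteration. requests[i][j] is True if input i has a cell for
    output j; grant_ptr[j] / accept_ptr[i] are the rotating-priority pointers."""
    n_in, n_out = len(requests), len(requests[0])

    # Grant phase: each output grants the requesting input closest to its pointer.
    grants = {}  # output -> granted input
    for j in range(n_out):
        for k in range(n_in):
            i = (grant_ptr[j] + k) % n_in
            if requests[i][j]:
                grants[j] = i
                break

    # Accept phase: each input accepts the granting output closest to its pointer.
    granted_by_input = {}
    for j, i in grants.items():
        granted_by_input.setdefault(i, []).append(j)

    matches = []  # accepted (input, output) pairs
    for i, outs in granted_by_input.items():
        for k in range(n_out):
            j = (accept_ptr[i] + k) % n_out
            if j in outs:
                matches.append((i, j))
                # Pointers advance one position beyond the matched port only on
                # acceptance, which is what keeps the scheme starvation-free.
                grant_ptr[j] = (i + 1) % n_in
                accept_ptr[i] = (j + 1) % n_out
                break
    return matches

# Example with 2 inputs and 4 outputs, as in the 2x4 router discussed above.
reqs = [[True, False, True, False],
        [True, True, False, False]]
print(islip_iteration(reqs, grant_ptr=[0, 0, 0, 0], accept_ptr=[0, 0]))
```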
2

Zagan, Ionel, and Vasile Găitan. "Hardware RTOS: Custom Scheduler Implementation Based on Multiple Pipeline Registers and MIPS32 Architecture." Electronics 8, no. 2 (February 14, 2019): 211. http://dx.doi.org/10.3390/electronics8020211.

Abstract:
The task context switch operation, the inter-task synchronization and communication mechanisms, as well as the jitter that occurs when handling aperiodic events, are crucial factors in implementing real-time operating systems (RTOS). In practice and in the literature, several solutions can be identified for improving the response speed and performance of real-time systems. Software implementations of RTOS-specific functions can introduce significant delays, adversely affecting the deadlines required by certain applications. This paper presents an original implementation of a dedicated processor, based on multiple pipeline registers, and hardware support for a dynamic scheduler with the following characteristics: it performs unitary event management, provides access to shared architectural resources, and prioritizes and executes the multiple events expected by the same task. The paper also presents a method through which interrupts are assigned to tasks. Through dedicated instructions, the integrated hardware scheduler implements task synchronization with multiple prioritized events, thus ensuring efficient operation of the processor in the context of real-time control.
3

Wang, Hong Chun, and Wen Sheng Niu. "Design and Analysis of AFDX Network Based High-Speed Avionics System of Civil Aircraft." Advanced Materials Research 462 (February 2012): 445–51. http://dx.doi.org/10.4028/www.scientific.net/amr.462.445.

Abstract:
Avionics Full Duplex Switched Ethernet (AFDX), standardized as ARINC 664, is a major upgrade for the integrated avionics systems of civil aircraft. It has become the current communication technology in the avionics context and provides a backbone network for civil avionics systems. This paper focuses on the features of the AFDX network protocol. An AFDX switch architecture based on shared memory is proposed to meet the requirements of real-time avionics systems. In addition, frame filtering, traffic policing, and frame scheduling functions are used to eliminate uncertainties in heavy traffic flows. The End System (ES) host-target architecture is also investigated in this paper. The virtual link scheduler, redundancy management, and protocol stack in the ES are designed to ensure determinism and reliability of data communication. The AFDX switch and ES have been successfully developed, and a configuration tool, an ARINC 615A loader, and a simulation tool related to the AFDX network are also provided as a package solution to support avionics system construction. Finally, the AFDX switch and ESes have passed the ARINC 664 protocol conformance test and certification; the test results show that our AFDX products meet the requirements of real-time communication, determinism, and reliability defined in ARINC 664.
4

Manishankar, S., and S. Sathayanarayana. "Performance evaluation and resource optimization of cloud based parallel Hadoop clusters with an intelligent scheduler." International Journal of Engineering & Technology 7, no. 4.20 (November 29, 2018): 4220. http://dx.doi.org/10.14419/ijet.v7i4.13372.

Abstract:
Data generated from real-time information systems is always incremental in nature. Processing such huge incremental data at large scale requires a parallel processing system such as a Hadoop-based cluster. A major challenge that arises in all cluster-based systems is how efficiently the resources of the system can be used. The research carried out proposes a model architecture for a Hadoop cluster with additional integrated components: a super node that manages the cluster's computations and a mediation manager that performs performance monitoring and evaluation. The super node is equipped with an intelligent, adaptive scheduler that schedules jobs with optimal resources. The scheduler is termed intelligent because it automatically decides which resource to use for which computation, using a cross-mapping of resources and jobs with a genetic algorithm that finds the best-matching resource. The mediation node deploys Ganglia, a standard monitoring tool for Hadoop clusters, to collect and record the performance parameters of the Hadoop cluster. The system overall schedules different jobs with optimal usage of resources, thus achieving better efficiency than the native capacity scheduler in Hadoop. The system is deployed on top of an OpenNebula cloud environment for scalability.
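The cross-mapping of jobs to resources with a genetic algorithm that the abstract mentions can be pictured with a generic sketch like the one below, which evolves job-to-node assignments to minimize makespan. The fitness model, parameters, and data are invented for illustration and are not the paper's implementation.

```python
import random

JOB_COST = [8, 3, 5, 7, 2, 6]      # hypothetical work units per job
NODE_SPEED = [1.0, 2.0, 1.5]       # hypothetical relative node speeds

def makespan(assignment):
    """assignment maps each job index to a node index; lower makespan is better."""
    load = [0.0] * len(NODE_SPEED)
    for job, node in enumerate(assignment):
        load[node] += JOB_COST[job] / NODE_SPEED[node]
    return max(load)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    jobs, nodes = len(JOB_COST), len(NODE_SPEED)
    population = [[random.randrange(nodes) for _ in range(jobs)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=makespan)              # elitist selection: keep the fitter half
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, jobs)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:    # mutate one gene
                child[random.randrange(jobs)] = random.randrange(nodes)
            children.append(child)
        population = parents + children
    best = min(population, key=makespan)
    return best, makespan(best)

print(evolve())   # e.g. ([1, 0, 2, 1, 0, 2], 6.0) depending on random seed
```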
5

Chen, Chi-Ting, Ling-Ju Hung, Sun-Yuan Hsieh, Rajkumar Buyya, and Albert Y. Zomaya. "Heterogeneous Job Allocation Scheduler for Hadoop MapReduce Using Dynamic Grouping Integrated Neighboring Search." IEEE Transactions on Cloud Computing 8, no. 1 (January 1, 2020): 193–206. http://dx.doi.org/10.1109/tcc.2017.2748586.

6

Audah, Lukman, Zhili Sun, and Haitham Cruickshank. "QoS based Admission Control using Multipath Scheduler for IP over Satellite Networks." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 6 (December 1, 2017): 2958. http://dx.doi.org/10.11591/ijece.v7i6.pp2958-2969.

Abstract:
This paper presents a novel scheduling algorithm to support quality of service (QoS) for multiservice applications over integrated satellite and terrestrial networks, using an admission control system with multipath selection capabilities. The algorithm exploits the multipath routing paradigm over LEO and GEO satellite constellations in order to achieve optimum end-to-end QoS of the client-server Internet architecture for HTTP web service, file transfer, video streaming, and VoIP applications. The proposed multipath scheduler over the satellite networks uses a load-balancing technique based on optimum time-bandwidth in order to accommodate bursts of application traffic. The method tries to balance the bandwidth load and queue length on each satellite link in order to fulfil the optimum QoS level for each traffic type. Each connection of a traffic type is routed over the link with the least bandwidth load and queue length at the current time in order to avoid congestion. The multipath routing scheduling decision is made at per-connection granularity so that packet reordering at the receiver side can be avoided. The performance evaluation of IP over satellite has been carried out using multiple connections, different file sizes, and bit-error-rate (BER) variations to measure packet delay, loss ratio, and throughput.
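A minimal sketch of the per-connection path choice described above, assuming each satellite link exposes a current bandwidth load and queue length; the data structure, values, and names are illustrative assumptions, not the paper's scheduler.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    bandwidth_load: float   # fraction of capacity currently reserved
    queue_length: int       # packets waiting on this link

def pick_link(links):
    # Prefer the lowest bandwidth load, with the shortest queue as tie-breaker.
    return min(links, key=lambda l: (l.bandwidth_load, l.queue_length))

links = [Link("LEO-1", 0.70, 40), Link("LEO-2", 0.35, 55), Link("GEO-1", 0.35, 12)]
chosen = pick_link(links)
chosen.queue_length += 1          # the whole connection stays on this link,
print(chosen.name)                # which avoids receiver-side packet reordering
```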
7

Guzmán Ortiz, Eduardo, Beatriz Andres, Francisco Fraile, Raul Poler, and Ángel Ortiz Bas. "Fleet management system for mobile robots in healthcare environments." Journal of Industrial Engineering and Management 14, no. 1 (January 28, 2021): 55. http://dx.doi.org/10.3926/jiem.3284.

Abstract:
Purpose: The purpose of this paper is to describe the implementation of a Fleet Management System (FMS) that plans and controls the execution of logistics tasks by a set of mobile robots in a real-world hospital environment. The FMS is developed upon an architecture that hosts a routing engine, a task scheduler, an Endorse Broker, a controller, and a backend Application Programming Interface (API). The routing engine handles the geo-referenced data and the calculation of routes; the task scheduler implements algorithms to solve the task allocation problem and the trolley loading problem using an Integer Linear Programming (ILP) model and a Genetic Algorithm (GA), depending on the problem size. The Endorse Broker provides a messaging system to exchange information with the robotic fleet, while the controller implements the control rules to ensure the execution of the work plan. Finally, the backend API exposes some FMS functionalities to external systems. Design/methodology/approach: The first part of the paper focuses on the dynamic path planning problem of a set of mobile robots in indoor spaces such as hospitals, laboratories, and shopping centres. A review of algorithms developed in the literature to address dynamic path planning is carried out, and an analysis of the applications of such algorithms in mobile robots that operate in real indoor spaces is performed. The second part of the paper focuses on the description of the FMS, which consists of five integrated tools to support multi-robot dynamic path planning and fleet management. Findings: The literature review, carried out in the context of the path planning problem of multiple mobile robots in indoor spaces, has highlighted great challenges due to the characteristics of the environment in which the robots move. The developed FMS for mobile robots in healthcare environments has resulted in a tool that enables: (i) the interpretation of geo-referenced data; (ii) the calculation and recalculation of dynamic path plans and task execution plans, through the implementation of advanced algorithms that take dynamic events into account; (iii) the tracking of task execution; (iv) fleet traffic control; and (v) communication with external systems. Practical implications: The proposed FMS has been developed under the scope of the ENDORSE project, which seeks to develop safe, efficient, and integrated indoor robotic fleets for logistic applications in healthcare and commercial spaces. Moreover, a computational analysis is performed using a virtual hospital floor plan. Originality/value: This work proposes a novel FMS, which consists of integrated tools to support mobile multi-robot dynamic path planning in a real-world hospital environment. These tools include: a routing engine that handles the geo-referenced data and the calculation of routes; a task scheduler that includes a mathematical model to solve the path planning problem when a low number of robots is considered (for large problems, a genetic algorithm is also implemented to compute the dynamic path planning with less computational effort); an Endorse Broker that exchanges information between the robotic fleet and the FMS in a secure way; a backend API that provides an interface to manage the master data of the FMS, to calculate an optimal assignment of a set of tasks to a group of robots to be executed on a specific date and time, and to add a new task to be executed in the current shift; and, finally, a controller that ensures that the robots execute the tasks that have been assigned by the task scheduler.
8

Dong, Zihang, Yunming Cao, Naixue Xiong, and Pingping Dong. "EE-MPTCP: An Energy-Efficient Multipath TCP Scheduler for IoT-Based Power Grid Monitoring Systems." Electronics 11, no. 19 (September 28, 2022): 3104. http://dx.doi.org/10.3390/electronics11193104.

Abstract:
The Internet-of-Things (IoT) based monitoring system has significantly promoted the intelligence and automation of power grids. The inspection robots and wireless sensors used in the monitoring system usually have multiple network interfaces to achieve high-throughput and reliable transmission. The concurrent usage of these available interfaces with Multipath TCP (MPTCP) can enhance the quality of service of the communications. However, traditional MPTCP scheduling algorithms may bring about data disorder and even buffer blocking, which severely affects the transmission performance of MPTCP. In addition, common MPTCP improvement mechanisms for IoT pay insufficient attention to energy consumption, which is important for battery-limited wireless sensors. With the aim of conserving energy without loss of throughput, this paper develops an integrated multipath scheduler for energy consumption optimization named energy-efficient MPTCP (EE-MPTCP). EE-MPTCP first constructs a target optimization function that considers both network throughput and energy consumption. Then, based on the proposed MPTCP transmission model and an existing energy efficiency model, the network throughput and energy consumption of each path can be estimated. Finally, a heuristic scheduling algorithm is proposed to find a suitable set of paths for each application. As confirmed by experiments based on a Linux testbed as well as the NS3 simulation platform, the proposed scheduler can shorten the average completion time and reduce the energy consumption by up to 79.9% and 79.2%, respectively.
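As a rough illustration of the throughput/energy trade-off the abstract describes, the sketch below scores candidate paths with a weighted objective and greedily keeps paths while the objective improves. The objective form, weights, path names, and numbers are assumptions, not the EE-MPTCP formulation.

```python
paths = [
    # name, estimated throughput (Mbit/s), estimated energy cost (J per MB)
    ("wifi", 45.0, 0.8),
    ("lte",  30.0, 2.5),
    ("eth",  80.0, 0.4),
]

ALPHA = 1.0   # weight on throughput (hypothetical)
BETA = 10.0   # weight on energy (hypothetical trade-off weight)

def objective(subset):
    throughput = sum(t for _, t, _ in subset)
    energy = sum(e for _, _, e in subset)
    return ALPHA * throughput - BETA * energy

def select_paths(candidates):
    chosen, best = [], float("-inf")
    # Greedy heuristic: add paths in order of individual score while the
    # overall objective still improves.
    for path in sorted(candidates, key=lambda p: ALPHA * p[1] - BETA * p[2], reverse=True):
        trial = chosen + [path]
        if objective(trial) > best:
            chosen, best = trial, objective(trial)
    return [name for name, _, _ in chosen]

print(select_paths(paths))   # subset of path names, depending on the weights
```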
9

Lyu, Chenghao, Qi Fan, Fei Song, Arnab Sinha, Yanlei Diao, Wei Chen, Li Ma, et al. "Fine-grained modeling and optimization for intelligent resource management in big data processing." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 3098–111. http://dx.doi.org/10.14778/3551793.3551855.

Abstract:
Big data processing at production scale presents a highly complex environment for resource optimization (RO), a problem crucial for meeting the performance goals and budgetary constraints of analytical users. The RO problem is challenging because it involves a set of decisions (the partition count, the placement of parallel instances on machines, and the resource allocation to each instance), requires multi-objective optimization (MOO), and is compounded by the scale and complexity of big data systems while having to meet stringent time constraints for scheduling. This paper presents a MaxCompute-based integrated system to support multi-objective resource optimization via fine-grained instance-level modeling and optimization. We propose a new architecture that breaks RO into a series of simpler problems, new fine-grained predictive models, and novel optimization methods that exploit these models to make effective instance-level RO decisions well under a second. Evaluation using production workloads shows that our new RO system could reduce latency by 37-72% and cost by 43-78% at the same time, compared to the current optimizer and scheduler, while running in 0.02-0.23 s.
10

Dossis, Michael F. "Formal ESL Synthesis for Control-Intensive Applications." Advances in Software Engineering 2012 (June 27, 2012): 1–30. http://dx.doi.org/10.1155/2012/156907.

Abstract:
Due to the massive complexity of contemporary embedded applications and integrated systems, long-standing effort has been invested in high-level synthesis (HLS) and electronic system level (ESL) methodologies to automatically produce correct implementations from high-level, abstract, and executable specifications written in program code. If the HLS transformations that are applied to the source code are formal, then the generated implementation is correct by construction. The focus of this work is on application-specific design, which can deliver optimal and customized implementations, as opposed to platform- or IP-based design, which is bound by the limits and constraints of the preexisting architecture. This work surveys and reviews past and current research in the area of ESL and HLS. Then, a prototype HLS compiler tool that has been developed by the author is presented, which utilizes compiler-generators and logic programming to turn synthesis into a formal process. The scheduler PARCS and the formal compilation of the system are tested with a number of benchmarks and real-world applications. This demonstrates the usability and applicability of the presented method.
11

Bollamma, K. S. Shraddha, S. Manishankar, and M. V. Vishnu. "Optimizing the performance of hadoop clusters through efficient cluster management techniques." International Journal of Engineering & Technology 7, no. 2.31 (May 29, 2018): 19. http://dx.doi.org/10.14419/ijet.v7i2.31.13389.

Abstract:
Processing huge volumes of data has become a critical task in the age of the Internet; even though data processing has evolved to a next-generation level, data processing and information extraction still have many problems to solve. With the increase in data size, retrieving useful information within a given span of time is a herculean task. The most widely adopted solution is the use of a distributed computing environment supporting data processing, involving a suitable model architecture with a large, complex structure. Although processing has improved considerably, efficiency, energy utilization, and accuracy have been compromised. This research aims to propose an efficient environment for data processing with optimized energy utilization and increased performance. The Hadoop environment, common and popular among big data processing platforms, has been chosen as the base for enhancement. A multi-node Hadoop cluster architecture is created, on top of which an efficient cluster monitor is set up, and an algorithm to manage the efficiency of the cluster is formulated. The cluster monitor incorporates ZooKeeper and YARN (node and resource manager). ZooKeeper monitors the cluster nodes of the distributed system and identifies critical performance problems. YARN plays a vital role in managing the resources efficiently and controlling the nodes with the help of a hybrid scheduler algorithm. Thus, this integrated platform helps in monitoring the distributed cluster as well as improving the performance of overall big data processing.
12

Fagiar, Muaz, Yasser Mohamed, and Simaan AbouRizk. "Simulation-Assisted Project Data Integration for Development and Analysis of As-Built Schedules." Buildings 13, no. 4 (April 6, 2023): 974. http://dx.doi.org/10.3390/buildings13040974.

Abstract:
As-built schedules are an essential tool for evaluating contractors' schedule performance and analyzing delay and lost productivity claims. Yet, most often, construction schedules are not updated as frequently and/or as accurately as required, which limits the availability of as-built schedules. Furthermore, the retrospective development of as-built schedules, when sufficient and reliable project data is available, is a lengthy and costly process. This study describes a simulation-assisted modeling approach that automatically processes and integrates schedule progress data and develops as-built schedules at the activity level. The proposed method uses conceptual entities that are central to the operation of simulation models and whose content changes as they route through the schedule network model. The approach introduces (1) an entity information model that records relevant schedule information in either a materialized or a virtual form, and (2) an entity lifecycle model that imitates the possible routes an entity instance may maneuver through in a schedule network model, which together simultaneously respond to schedule logic and invoke duration changes. To demonstrate its effectiveness, a prototype based on the framework was developed using Excel and MS Project and was tested with a real case study. The study is expected to facilitate the development of as-built schedules for the analysis of delay and time extension claims.
13

Dongarra, Jack, Mark Gates, Azzam Haidar, Yulu Jia, Khairul Kabir, Piotr Luszczek, and Stanimire Tomov. "HPC Programming on Intel Many-Integrated-Core Hardware with MAGMA Port to Xeon Phi." Scientific Programming 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/502593.

Abstract:
This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms for multicore with Intel Xeon Phi coprocessors. In particular, we consider algorithms for solving linear systems. Further, we give an overview of the MAGMA MIC library, an open source, high performance library, that incorporates the developments presented here and, more broadly, provides the DLA functionality equivalent to that of the popular LAPACK library while targeting heterogeneous architectures that feature a mix of multicore CPUs and coprocessors. The LAPACK-compliance simplifies the use of the MAGMA MIC library in applications, while providing them with portably performant DLA. High performance is obtained through the use of the high-performance BLAS, hardware-specific tuning, and a hybridization methodology whereby we split the algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware by minimizing data movements and mapping algorithmic requirements to the architectural strengths of the various heterogeneous hardware components. Our methodology and programming techniques are incorporated into the MAGMA MIC API, which abstracts the application developer from the specifics of the Xeon Phi architecture and is therefore applicable to algorithms beyond the scope of DLA.
14

Bolotin, Sergey, Haitham Bohan, Aldyn-kys Dadar, and Khenzig Biche-ool. "Scheduling Work under Integrated Urban Development Using the Method of Uncertain Resource Coefficients." Architecture and Engineering 6, no. 4 (December 24, 2021): 34–41. http://dx.doi.org/10.23968/2500-0055-2021-6-4-34-41.

Abstract:
Introduction: Planning integrated development of a residential area involves determining the composition of the objects to be built and creating an appropriate integration mechanism, backed up by a generalized work schedule. The existing methods of forming integrated work schedules do not use a systemic approach, based on a universal mathematical model, to describe the organizational and technological aspects of construction. Methods: The present study uses the method of uncertain resource coefficients to demonstrate a mechanism for systemically describing organizational and technological construction processes. We present a way of adapting this method to forming a generalized construction schedule during integrated development. The proposed adaptation mechanism is based on managing schedule calculations by rationally influencing the elements of the linear equation system that describes the organizational and technological processes. Results and Discussion: The solutions presented in the paper are fully consistent with the calculations obtained by different flow methods of organizing construction, as well as with the critical path method used in project management programs. The method described in the paper has been implemented in well-known project management software, Microsoft Project, as a macro program in the Visual Basic for Applications programming language, making it possible to form, calculate, and optimize a schedule for integrated territory development using the unified software toolkit.
15

Gellweiler, Christof. "Connecting Enterprise Architecture and Project Portfolio Management." International Journal of Information Technology Project Management 11, no. 1 (January 2020): 99–114. http://dx.doi.org/10.4018/ijitpm.2020010106.

Abstract:
Enterprise architecture (EA) and project portfolio management (PPM) are key areas when it comes to connecting enterprise strategy and information technology (IT) projects. Both management disciplines enhance business capabilities, integrate skilled resources, and govern affiliated processes and functions. A skillful comprehension of the links between these managerial areas is essential for effective IT planning. This article elaborates on the common grounds and structural attachment of EA and PPM, showing the substantiated relations between them and demonstrating their cohesiveness. From strategic planning to solution delivery, a conceptual model for IT project alignment integrates these IT management disciplines over two levels. EA ascertains the technical goals and constraints, whereas PPM determines the organizational goals and constraints. The results from both sides are combined to jointly propose, select, prioritize, and schedule IT projects. Roadmapping is a suitable approach to bring EA and PPM together.
16

Hatipkarasulu, Yilmaz. "A conceptual approach to graphically compare construction schedules." Construction Innovation 20, no. 1 (January 6, 2020): 43–60. http://dx.doi.org/10.1108/ci-01-2019-0001.

Abstract:
Purpose: This paper aims to present a graphical comparison method for construction schedules, which illustrates the differences for each individual activity. The method overlays the observed differences on a bar chart, creating a representation of whether each activity is ahead of, on, or behind schedule at a given date. Design/methodology/approach: The method is implemented using a Microsoft Project add-in (plug-in). The paper demonstrates the method and its potential uses with three illustration cases: a time impact analysis, an alternative analysis for the selection of subcontractors, and a multi-baseline analysis of an as-built schedule. Findings: The cases included in the paper show that the proposed method uses a simplified and familiar attribute comparison for each activity in a schedule. The method affords flexibility in presenting differences between schedules, such as the start/finish dates or duration. As the method does not rely on a specific software application or analysis method, it can be implemented in different software applications as well as with different performance or delay analysis techniques. The method also makes it possible to present multiple and selective baseline comparisons overlaid on an updated or as-built schedule. Originality/value: The method graphically presents a comparison of start dates, durations, and finish dates for each activity that can be integrated with any schedule. The method can be used for forensic analysis as well as for project control measures during construction. As the method does not rely on any specific performance or delay calculation method, it can be applied to any forensic analysis or delay analysis technique.
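A toy version of the per-activity comparison described above: compare a baseline schedule against an updated one and flag each activity as ahead of, on, or behind schedule. The activities, dates, and field names are illustrative assumptions rather than the add-in's implementation.

```python
from datetime import date

baseline = {  # activity -> planned finish
    "Excavation": date(2024, 3, 1),
    "Foundations": date(2024, 4, 15),
    "Framing": date(2024, 6, 1),
}
updated = {   # activity -> current forecast/actual finish
    "Excavation": date(2024, 2, 25),
    "Foundations": date(2024, 4, 15),
    "Framing": date(2024, 6, 20),
}

def compare(baseline, updated):
    status = {}
    for activity, planned in baseline.items():
        delta = (updated[activity] - planned).days
        if delta < 0:
            status[activity] = f"ahead by {-delta} d"
        elif delta == 0:
            status[activity] = "on schedule"
        else:
            status[activity] = f"behind by {delta} d"
    return status

for activity, s in compare(baseline, updated).items():
    print(f"{activity:12s} {s}")
```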
17

Muškinja, Miha, Paolo Calafiura, Charles Leggett, Illya Shapoval, and Vakho Tsulaia. "Raythena: a vertically integrated scheduler for ATLAS applications on heterogeneous distributed resources." EPJ Web of Conferences 245 (2020): 05042. http://dx.doi.org/10.1051/epjconf/202024505042.

Abstract:
The ATLAS experiment has successfully integrated High-Performance Computing resources (HPCs) into its production system. Unlike the current generation of HPC systems, and the LHC computing grid, the next generation of supercomputers is expected to be extremely heterogeneous in nature: different systems will have radically different architectures, and most of them will provide partitions optimized for different kinds of workloads. In this work we explore the applicability of concepts and tools realized in Ray (the high-performance distributed execution framework targeting large-scale machine learning applications) to ATLAS event throughput optimization on heterogeneous distributed resources, ranging from traditional grid clusters to exascale computers. We present a prototype of Raythena, a Ray-based implementation of the ATLAS Event Service (AES), a fine-grained event processing workflow aimed at improving the efficiency of ATLAS workflows on opportunistic resources, specifically HPCs. The AES is implemented as an event-processing task farm that distributes packets of events to several worker processes running on multiple nodes. Each worker in the task farm runs an event-processing application (Athena) as a daemon. The whole system is orchestrated by Ray, which assigns work in a distributed, possibly heterogeneous, environment. For all its flexibility, the AES implementation is currently composed of multiple separate layers that communicate through ad hoc command-line and file-based interfaces. The goal of Raythena is to integrate these layers through a feature-rich, efficient application framework. Besides increasing usability and robustness, a vertically integrated scheduler will enable us to explore advanced concepts such as dynamic shaping of workflows to exploit currently available resources, particularly on heterogeneous systems.
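The task-farm pattern described above can be sketched with Ray's public API as below: packets of events are dispatched to remote tasks that Ray schedules across the available workers and nodes. This is a generic Ray usage example, not the Raythena code; the packet size and the processing function are invented for illustration.

```python
import ray

ray.init()  # connects to (or starts) a local Ray instance

@ray.remote
def process_packet(packet):
    # Stand-in for handing a packet of events to an event-processing application.
    return [event * 2 for event in packet]

events = list(range(100))
packet_size = 10
packets = [events[i:i + packet_size] for i in range(0, len(events), packet_size)]

# Ray schedules one task per packet across whatever workers are available.
futures = [process_packet.remote(p) for p in packets]
results = ray.get(futures)
print(sum(len(r) for r in results), "events processed")
```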
18

Herlina, Lely, Muhammad Adha Ilhami, and Kiki Dwi Safitri. "Integrated Production and Distribution Planning using Mathematical Model for Maximizing Profit." Jurnal Ilmiah Teknik Industri 21, no. 2 (December 30, 2022): 142–48. http://dx.doi.org/10.23917/jiti.v21i2.19731.

Abstract:
In the three-dimensional concurrent engineering context, product, process, and supply chain designs are approached through the imperative-of-concurrency principle. The imperative of concurrency proposes that product architecture and supply chain architecture be designed simultaneously: the detailed product design and the production process design are conducted simultaneously, while the manufacturing system design is conducted simultaneously with the logistics system design. This study focuses on integrating production planning with distribution planning in a new plastic bag plant. Since the plant is relatively new, the number of orders is still scarce; therefore, production planning becomes critical. Producing more products than required creates inventories, while producing only as many as the orders require creates an intermittent production schedule. A mathematical model is developed to simultaneously schedule production and distribution to maximize profit. The model provides solutions on the number of materials to be delivered and the number of products produced and delivered within each period of the planning horizon.
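A deliberately small profit-maximization sketch in the spirit of the integrated production/distribution model described above, written with PuLP. The prices, capacities, and two-period structure are invented for illustration; the paper's actual formulation is richer.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

periods = [1, 2]
price, prod_cost, ship_cost, hold_cost = 10, 4, 1, 0.5
capacity, demand = 80, {1: 60, 2: 90}

produce = {t: LpVariable(f"produce_{t}", lowBound=0) for t in periods}
ship = {t: LpVariable(f"ship_{t}", lowBound=0) for t in periods}
inventory = {t: LpVariable(f"inv_{t}", lowBound=0) for t in periods}

model = LpProblem("production_distribution", LpMaximize)
# Objective: revenue from shipped product minus production, shipping and holding costs.
model += lpSum(price * ship[t] - prod_cost * produce[t]
               - ship_cost * ship[t] - hold_cost * inventory[t] for t in periods)

prev_inv = 0
for t in periods:
    model += produce[t] <= capacity                              # plant capacity
    model += ship[t] <= demand[t]                                # cannot sell more than ordered
    model += inventory[t] == prev_inv + produce[t] - ship[t]     # stock balance
    prev_inv = inventory[t]

model.solve()
for t in periods:
    print(t, value(produce[t]), value(ship[t]), value(inventory[t]))
```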
19

Salamy, Hassan, and Semih Aslan. "Pipelined-Scheduling of Multiple Embedded Applications on a Multi-Processor-SoC." Journal of Circuits, Systems and Computers 26, no. 03 (November 21, 2016): 1750042. http://dx.doi.org/10.1142/s0218126617500426.

Abstract:
Due to clock and power constraints, it is hard to extract more performance out of single-core architectures; thus, multi-core systems are now the architecture of choice to provide the needed computing power. In embedded systems, the multi-processor system-on-a-chip (MPSoC) is widely used to provide the power needed to effectively run complex embedded applications. However, to effectively utilize an MPSoC system, tools to generate optimized schedules are highly needed. In this paper, we design an integrated approach to task scheduling and memory partitioning for multiple applications utilizing the MPSoC system simultaneously. This is in contrast to the traditional decoupled approach that looks at task scheduling and memory partitioning as two separate problems. Our framework is also based on pipelined scheduling to increase the throughput of the system. Results on different benchmarks show the effectiveness of our techniques.
20

Gehlhoff, Felix, and Alexander Fay. "On agent-based decentralized and integrated scheduling for small-scale manufacturing." at - Automatisierungstechnik 68, no. 1 (January 28, 2020): 15–31. http://dx.doi.org/10.1515/auto-2019-0105.

Abstract:
Small-scale manufacturing often relies on flexible production systems that can cope with frequent changes of products and equipment. Transports are a significant part of the production flow, especially in the domain of large and heavy workpieces, which requires explicit planning to avoid unnecessary delays. This contribution takes a detailed look at how to create feasible integrated schedules within a decentralised or even heterarchical architecture and at which information the agents have to exchange. These schedules incorporate constraints such as the blocking constraint. They also consider dynamic setup and operation durations while finding a good-enough solution. The proposed agent-based solution applies to a wide variety of scheduling problems and shows positive properties in terms of scalability and reconfigurability.
21

Woolley, Brandon, Susan Mengel, and Atila Ertas. "An Evolutionary Approach for the Hierarchical Scheduling of Safety- and Security-Critical Multicore Architectures." Computers 9, no. 3 (September 3, 2020): 71. http://dx.doi.org/10.3390/computers9030071.

Abstract:
The aerospace and defense industry is facing an end-of-life production issue with legacy embedded uniprocessor systems. Most, if not all, embedded processor manufacturers have already moved towards system-on-a-chip multicore architectures. Current scheduling arrangements do not consider schedules related to safety and security. The methods are also inefficient because they arbitrarily assign larger-than-necessary windows of execution. This research creates a hierarchical scheduling framework as a model for real-time multicore systems to integrate the scheduling for safe and secure systems. This provides a more efficient approach which automates the migration of embedded systems’ real-time software tasks to multicore architectures. A novel genetic algorithm with a unique objective function and encoding scheme was created and compared to classical bin-packing algorithms. The simulation results show the genetic algorithm had 1.8–2.5 times less error (a 56–71% difference), outperforming its counterparts in uniformity in utilization. This research provides an efficient, automated method for commercial, private and defense industries to use a genetic algorithm to create a feasible two-level hierarchical schedule for real-time embedded multicore systems that address safety and security constraints.
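The abstract compares the genetic algorithm against classical bin-packing; the sketch below is a plain first-fit-decreasing packer that assigns task utilizations to cores, the kind of baseline such a comparison typically uses (the GA instead optimizes uniformity of utilization). The utilization values and per-core capacity are hypothetical.

```python
def first_fit_decreasing(utilizations, capacity=1.0):
    """Assign task utilizations to as few cores as possible, in first-fit order."""
    cores = []  # each core is a list of utilizations summing to <= capacity
    for u in sorted(utilizations, reverse=True):
        for core in cores:
            if sum(core) + u <= capacity:
                core.append(u)
                break
        else:
            cores.append([u])  # open a new core when nothing fits
    return cores

tasks = [0.42, 0.13, 0.35, 0.27, 0.50, 0.08, 0.21]
for i, core in enumerate(first_fit_decreasing(tasks)):
    print(f"core {i}: {core} (util {sum(core):.2f})")
```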
22

Kanda, Guard, and Kwangki Ryoo. "Design of an Integrated Cryptographic SoC Architecture for Resource-Constrained Devices." International Journal of Electrical and Electronics Research 10, no. 2 (June 30, 2022): 230–44. http://dx.doi.org/10.37391/ijeer.100231.

Abstract:
One of the active research areas in recent years that has seen researchers from numerous related fields converging, sharing ideas, and developing feasible solutions is hardware security. The hardware security discipline deals with protection from vulnerabilities by way of physical devices, such as hardware firewalls or hardware security modules, rather than installed software programs. These hardware security modules use physical security measures, logical security controls, and strong encryption to protect sensitive data that is in transit, in use, or stored from unauthorized interference. Without mechanisms to circumvent the ever-evolving attack strategies on hardware devices and the data that they process or store, billions of dollars will continue to be lost to attackers who ply their trade by targeting such vulnerable devices. This paper therefore proposes an integrated cryptographic SoC architecture as a solution to this menace. The proposed architecture provides security by way of key exchange, key management, and encryption. It is based on a true random number generator core that generates the secret keys used in Elliptic Curve Diffie-Hellman key exchange, where elliptic curve scalar multiplication is performed to obtain the public keys and, after their exchange, the shared keys. The architecture further relies on a key derivation function based on the CubeHash algorithm to obtain derived keys that provide the needed security using a ChaCha20-Poly1305 Authenticated Encryption with Associated Data (AEAD) core. The integrated SoC architecture is interconnected by an AMBA AHB-APB on-chip bus, and the system is scheduled and controlled using the PicoRV32 open-source RISC-V processor. The proposed architecture is tested and verified on a Virtex-4 FPGA board using a custom-designed GUI desktop application.
23

Manoharan, J. Samuel. "A Two Stage Task Scheduler for Effective Load Optimization in Cloud – FoG Architectures." September 2021 3, no. 3 (November 9, 2021): 224–42. http://dx.doi.org/10.36548/jei.2021.3.006.

Abstract:
In recent times, computing technologies have moved to a new dimension with the advent of cloud platforms, which provide seamless rendering of required services to consumers in either a static or a dynamic state. In addition, the nature of the data being handled in today's scenario has also become sophisticated, as real-time data acquisition systems equipped with high-definition (HD) capture have become common. Lately, cloud systems have also become prone to computing overheads owing to the huge volume of data being imposed on them, especially in real-time applications. To assist and simplify the computation performed by cloud systems, FoG platforms are being integrated into cloud interfaces to streamline and provide computing at the edge nodes rather than at the cloud core processors, thus reducing the load overhead on the cloud core processors. This research paper proposes a Two Stage Load Optimizer (TSLO), implemented as a double-stage optimizer with one stage deployed at the FoG level and the other at the cloud level. The computational complexity analysis is carried out extensively and compared with existing benchmark methods, and the superior performance of the suggested method is observed and reported.
24

Chéour, Rym, Mohamed Wassim Jmal, Sabrine Khriji, Dhouha El Houssaini, Carlo Trigona, Mohamed Abid, and Olfa Kanoun. "Towards Hybrid Energy-Efficient Power Management in Wireless Sensor Networks." Sensors 22, no. 1 (December 31, 2021): 301. http://dx.doi.org/10.3390/s22010301.

Abstract:
Wireless Sensor Networks (WSNs) have highly constrained resources; as a result, ensuring the proper functioning of the network is a requirement. Therefore, an effective WSN management system has to be integrated for network efficiency. Our objective is to model, design, and propose a homogeneous hybrid WSN architecture. This work features a dedicated power-utilization optimization strategy specifically for WSN applications, entitled Hybrid Energy-Efficient Power manager Scheduling (HEEPS). The pillars of this strategy are, on the one hand, intertask time-out Dynamic Power Management (DPM) and, on the other hand, Dynamic Voltage and Frequency Scaling (DVFS). All tasks are scheduled under Global Earliest Deadline First (GEDF) with new scheduling tests to overcome the Dhall effect. To minimize energy consumption, HEEPS predicts, defines, and models the behavior adapted to each sensor node, as well as the associated energy management mechanism. HEEPS's performance evaluation and analysis are performed using the STORM simulator. A comparison with the results obtained by various state-of-the-art approaches is presented. Results show that the proposed power manager effectively schedules tasks to dynamically use the available energy, with an estimated gain of up to 50%.
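A minimal illustration of the Global EDF (GEDF) rule mentioned above: at each scheduling point, the m ready jobs with the earliest absolute deadlines run on the m cores. The task parameters are hypothetical, and the DPM/DVFS decisions are omitted for brevity.

```python
import heapq

def gedf_step(ready_jobs, num_cores):
    """ready_jobs: list of (absolute_deadline, job_id). Returns the jobs to run now."""
    return heapq.nsmallest(num_cores, ready_jobs)

ready = [(12, "T1"), (7, "T2"), (30, "T3"), (9, "T4")]
for deadline, job in gedf_step(ready, num_cores=2):
    print(f"run {job} (deadline {deadline})")   # -> T2 and T4
```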
25

Gauthier, Marion, Romain Barillot, Anne Schneider, Camille Chambon, Christian Fournier, Christophe Pradal, Corinne Robert, and Bruno Andrieu. "A functional structural model of grass development based on metabolic regulation and coordination rules." Journal of Experimental Botany 71, no. 18 (June 4, 2020): 5454–68. http://dx.doi.org/10.1093/jxb/eraa276.

Abstract:
Shoot architecture is a key component of the interactions between plants and their environment. We present a novel model of grass, which fully integrates shoot morphogenesis and the metabolism of carbon (C) and nitrogen (N) at organ scale, within a three-dimensional representation of plant architecture. Plant morphogenesis is seen as a self-regulated system driven by two main mechanisms. First, the rate of organ extension and the establishment of architectural traits are regulated by concentrations of C and N metabolites in the growth zones and the temperature. Second, the timing of extension is regulated by rules coordinating successive phytomers instead of a thermal time schedule. Local concentrations are calculated from a model of C and N metabolism at organ scale. The three-dimensional representation allows the accurate calculation of light and temperature distribution within the architecture. The model was calibrated for wheat (Triticum aestivum) and evaluated for early vegetative stages. This approach allowed the simulation of realistic patterns of leaf dimensions, extension dynamics, and organ mass and composition. The model simulated, as emergent properties, plant and agronomic traits. Metabolic activities of growing leaves were investigated in relation to whole-plant functioning and environmental conditions. The current model is an important step towards a better understanding of the plasticity of plant phenotype in different environments.
26

Huang, Ye, Amos Brocco, Michele Courant, Beat Hirsbrunner, and Pierre Kuonen. "MaGate." International Journal of Distributed Systems and Technologies 1, no. 3 (July 2010): 24–39. http://dx.doi.org/10.4018/jdst.2010070102.

Abstract:
This work presents the design and architecture of a decentralized grid scheduler named MaGate, which is developed within the SmartGRID project and focuses on grid scheduler interoperation. The MaGate scheduler is modularly structured and emphasizes the functionality, procedure, and policy of delegating locally unsuited jobs to appropriate remote MaGates within the same grid system. To avoid an isolated solution, web services and several existing and emerging grid standards are adopted, as well as a series of interfaces to both publish MaGate capabilities and integrate functionalities from external grid components. Meanwhile, a specific swarm intelligence solution is employed as a critical complementary service for MaGate to maintain an optimized peer-to-peer overlay that supports efficient resource discovery. Regarding evaluation, the effectiveness brought by job sharing within a physically connected grid community using MaGate has been illustrated by means of experiments on communities of different scales and under various scenarios.
27

Edelkamp, S. "Taming Numbers and Durations in the Model Checking Integrated Planning System." Journal of Artificial Intelligence Research 20 (December 1, 2003): 195–238. http://dx.doi.org/10.1613/jair.1302.

Abstract:
The Model Checking Integrated Planning System (MIPS) is a temporal least commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant of the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear time algorithm to compute the parallel plan bypasses known NP hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating point arithmetic, weighted heuristic search exploration according to an inadmissible estimate and parameterized optimization.
28

Liu, Yuanxin, Zheng Lin, and Fengcheng Yuan. "ROSITA: Refined BERT cOmpreSsion with InTegrAted techniques." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8715–22. http://dx.doi.org/10.1609/aaai.v35i10.17056.

Abstract:
Pre-trained language models of the BERT family have defined the state of the art in a wide range of NLP tasks. However, the performance of BERT-based models is mainly driven by their enormous number of parameters, which hinders their application to resource-limited scenarios. Faced with this problem, recent studies have attempted to compress BERT into a small-scale model. However, most previous work primarily focuses on a single kind of compression technique, and little attention has been paid to the combination of different methods. When BERT is compressed with integrated techniques, a critical question is how to design the entire compression framework to obtain the optimal performance. In response to this question, we integrate three kinds of compression methods (weight pruning, low-rank factorization, and knowledge distillation (KD)) and explore a range of designs concerning model architecture, KD strategy, pruning frequency, and learning rate schedule. We find that a careful choice of the designs is crucial to the performance of the compressed model. Based on the empirical findings, our best compressed model, dubbed Refined BERT cOmpreSsion with InTegrAted techniques (ROSITA), is 7.5x smaller than BERT while maintaining 98.5% of the performance on five tasks of the GLUE benchmark, outperforming previous BERT compression methods with a similar parameter budget.
29

Beder, Christian, Julia Blanke, and Martin Klepal. "Towards Integrating Behaviour Demand Response into Simulation Based Heat Production Optimisation." Proceedings 2, no. 15 (August 23, 2018): 1125. http://dx.doi.org/10.3390/proceedings2151125.

Abstract:
Behaviour Demand Response (BDR) is the process of communicating with building occupants and integrating their behavioural flexibility into the energy value chain. In this paper we present an integrated behavioural model based on well-established behavioural theories and show how it can be used to provide predictable flexibility to production schedule optimisation. The proposed approach is two-fold: the model can be used to predict the expected behavioural flexibility of occupants as well as to generate optimal communication to trigger reliable BDR events. A system architecture is presented showing how BDR can be integrated into simulation-based building/district operation.
30

Ralph, Benjamin James, Marcel Sorger, Karin Hartl, Andreas Schwarz-Gsaxner, Florian Messner, and Martin Stockinger. "Transformation of a rolling mill aggregate to a cyber physical production system: from sensor retrofitting to machine learning." Journal of Intelligent Manufacturing 33, no. 2 (October 24, 2021): 493–518. http://dx.doi.org/10.1007/s10845-021-01856-2.

Abstract:
This paper describes the transformation of a rolling mill aggregate from a stand-alone solution into a fully integrated cyber-physical production system. Within this process, already existing load cells were substituted and additional inductive and magnetic displacement sensors were applied. After calibration, these were fully integrated into a six-layer digitalization architecture at the Smart Forming Lab at the Chair of Metal Forming (Montanuniversitaet Leoben). Within this framework, two front-end human-machine interfaces were designed; the first one serves as a condition monitoring system during the rolling process. The second user interface visualizes the result of a resilient machine learning algorithm, which was designed using Python and is not just able to predict and adapt the resulting rolling schedule of a defined metal sheet, but also to learn from additional rolling mill schedules carried out. This algorithm was created on the basis of a black-box approach, using data from more than 1900 milling steps with varying roll gap height, sheet width, and friction conditions. As a result, the developed program is able to interpolate and extrapolate between these parameters as well as different initial sheet thicknesses, serving as a digital twin for data-based recommendations on schedule changes between different rolling process steps. Furthermore, via the second user interface, it is possible to visualize the influence of these parameters on the result of the milling process. As the whole layer system runs on an internal server at the university, students and other interested parties are able to access the visualization and can therefore use the environment to deepen their knowledge of the characteristics and influence of the sheet metal rolling process as well as of data science and, especially, the fundamentals of machine learning. This algorithm also serves as a basis for further integration of materials-science-based data for predicting the influence of different materials on the rolling result. To that end, the rolled specimens were also analyzed regarding the influence of the plastic strain path on their mechanical properties, including anisotropy and material strength.
31

Zhao, Zhongyuan, Weiguang Sheng, Jinchao Li, Pengfei Ye, Qin Wang, and Zhigang Mao. "Similarity-Aware Architecture/Compiler Co-Designed Context-Reduction Framework for Modulo-Scheduled CGRA." Electronics 10, no. 18 (September 9, 2021): 2210. http://dx.doi.org/10.3390/electronics10182210.

Abstract:
Modulo-scheduled coarse-grained reconfigurable array (CGRA) processors have shown their potential for exploiting loop-level parallelism at high energy efficiency. However, these CGRAs need frequent reconfiguration during their execution, which makes them suffer from large area and power overhead for context memory and context fetching. To tackle this challenge, this paper uses an architecture/compiler co-designed method for context reduction. From the architecture perspective, we carefully partition the context into several subsections and, whenever fetching a new context, only fetch the subsections that differ from the former context word. We package each differing subsection with an opcode and index value to form a context-fetching primitive (CFP), and we explore the hardware design space by providing centralized and distributed CFP-fetching CGRAs to support this CFP-based context-fetching scheme. On the software side, we develop a similarity-aware tuning algorithm and integrate it into state-of-the-art modulo scheduling and memory access conflict optimization algorithms. The whole compilation flow can efficiently improve the similarity between contexts in each PE for the purpose of reducing both context-fetching latency and context footprint. Experimental results show that our HW/SW co-designed framework can improve area efficiency and energy efficiency by up to 34% and 21%, respectively, with only 2% performance overhead.
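The differential context fetch described above can be pictured with a toy model that splits each context word into fixed subsections and emits (index, new value) primitives only for the subsections that changed relative to the previous context. The subsection layout and values are invented for illustration and do not reflect the paper's encoding.

```python
def diff_context(prev, curr):
    """prev/curr: lists of per-subsection values for one PE's context word."""
    return [(idx, value) for idx, (old, value) in enumerate(zip(prev, curr)) if old != value]

prev_ctx = [0b0101, 0b0011, 0b1111, 0b0000]   # e.g. opcode, src mux, dst mux, const
curr_ctx = [0b0101, 0b0110, 0b1111, 0b0001]

primitives = diff_context(prev_ctx, curr_ctx)
print(primitives)                              # [(1, 6), (3, 1)]
print(f"fetch {len(primitives)}/{len(curr_ctx)} subsections")
```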
32

Ayman, Hassan Mohamed, Sameh Youssef Mahfouz, and Ahmed Alhady. "Integrated EDM and 4D BIM-Based Decision Support System for Construction Projects Control." Buildings 12, no. 3 (March 7, 2022): 315. http://dx.doi.org/10.3390/buildings12030315.

Abstract:
Project schedule monitoring and controlling are critical challenges of construction project management that are not adequately implemented, likely due to the predominance of earned value management and the lack of technology utilization, such as BIM tools. Unlike earned value, earned duration management (EDM) was developed with several indices to track schedule progress and measure the performance of a schedule. The goal of this research was to establish a decision support system to track and monitor construction project activities during construction, with better performance and accuracy. A survey was conducted and distributed among ten site engineers selected from different construction sites. The survey asked the site engineers about the possible durations of certain activities; based on their answers, the authors started developing the proposed system. In this study, we aimed to develop a decision support system (DSS) that combines BIM with EDM to help calculate the probabilistic total project duration, visually detect critical activities, visually monitor risky activities subject to delay, and visually categorize the accuracy of the estimated durations of delayed activities.
33

Alavipour, S. M. Reza, and David Arditi. "Maximizing expected contractor profit using an integrated model." Engineering, Construction and Architectural Management 26, no. 1 (February 18, 2019): 118–38. http://dx.doi.org/10.1108/ecam-04-2018-0149.

Abstract:
Purpose: Planning for increased contractor profits should start at the time the contract is signed because low profits and lack of profitability are the primary causes of contractor failure. The purpose of this paper is to propose an integrated profit maximization model (IPMM) that aims for maximum expected profit by using time-cost tradeoff analysis, adjusted start times of activities, minimized financing cost and minimized extension of work schedule beyond the contract duration. This kind of integrated approach was never researched in the past. Design/methodology/approach: IPMM is programmed into an automated system using MATLAB 2016a. It generates an optimal work schedule that leads to maximum profit by means of time-cost tradeoff analysis considering different activity acceleration/deceleration methods and adjusting the start/finish times of activities. While doing so, IPMM minimizes the contractor's financing cost by considering combinations of different financing alternatives such as short-term loans, long-term loans and lines of credit. IPMM also considers the impact of extending the project duration on project profit. Findings: IPMM is tested for different project durations, for the optimality of the solutions, differing activity start/finish times and project financing alternatives. In all cases, contractors can achieve maximum profit by using IPMM. Research limitations/implications: IPMM considers a deterministic project schedule, whereas stochastic time-cost tradeoff analysis can improve its performance. Resource allocation and resource leveling are not considered in IPMM, but can be incorporated into the model in future research. Finally, the long computational time is a challenge that needs to be overcome in future research. Practical implications: IPMM is likely to increase profits and improve the chances of contractors to survive and grow compared to their competitors. The practical value of IPMM is that any contractor can and should use IPMM since all the data required to run IPMM is available to the contractor at the time the contract is signed. The contractor who provides information about network logic, schedule data, cost data, contractual terms, and available financing alternatives and their APRs can use an automated IPMM that adjusts activity start times and durations, minimizes financing cost, eliminates or minimizes time extensions, minimizes total cost and maximizes expected profit. Originality/value: Unlike any prior study that looks into contractors' profits by considering the impact of only one or two factors at a time, this study presents an IPMM that considers all major factors that affect profits, namely, time-cost tradeoff analysis, adjusted start times of activities, minimized financing cost and minimized extension of work schedule beyond the contract duration.
APA, Harvard, Vancouver, ISO, and other styles
34

Sbiti, Maroua, Karim Beddiar, Djaoued Beladjine, Romuald Perrault, and Bélahcène Mazari. "Toward BIM and LPS Data Integration for Lean Site Project Management: A State-of-the-Art Review and Recommendations." Buildings 11, no. 5 (May 7, 2021): 196. http://dx.doi.org/10.3390/buildings11050196.

Full text
Abstract:
Over recent years, the independent adoption of lean construction and building information modeling (BIM) has shown improvements in construction industry efficiency. Because these approaches have overlapping concepts, it is thought that their synergistic adoption can bring many more benefits. Today, implementing the lean–BIM theoretical framework is still challenging for many companies. This paper conducts a comprehensive review with the intent to identify prevailing interconnected lean and BIM areas. To this end, 77 papers published in AEC journals and conferences over the last decade were reviewed. The proposed weighting matrix showed the most promising interactions, namely those related to 4D BIM-based visualization of construction schedules produced and updated by last planners. The authors also show evidence of the lack of a sufficiently integrated BIM–Last Planner System® framework and technologies. Thus, we propose a new theoretical framework considering all BIM and LPS interactions. In our model, we suggest automating the generation of the phase schedule using joint BIM data and a work breakdown structure database. Thereafter, the lookahead planning and weekly work plan are supported by a field application that must be able to exchange data with the enterprise resource planning system and document management systems, and report progress to the BIM model.
APA, Harvard, Vancouver, ISO, and other styles
35

García-Domínguez, Antonio, Mariano Marcos-Barcena, Inmaculada Medina-Bulo, and Lledo Prades. "Towards an Integrated SOA-Based Architecture for Interoperable and Responsive Manufacturing Systems Using the ISA-95 Object Model." Key Engineering Materials 615 (June 2014): 145–56. http://dx.doi.org/10.4028/www.scientific.net/kem.615.145.

Full text
Abstract:
Manufacturing systems need to integrate information from several sources, manage their business processes and coordinate the shop floor in order to meet the schedule at reduced costs. Holonic manufacturing has been proposed to solve these problems and has been normally implemented using agents. However, these implementations are not widely used due to their cost and limited interoperability. In this work, an approach that combines a service-oriented architecture with a multi-agent system is proposed. A list of rules is designed for deciding how to implement each holon, and is applied to the level 3 activities of ISA-95. The adoption of an enterprise service bus is suggested for decoupling service consumers and producers, and using a complex event processing engine is recommended to detect trends from the day-to-day operations of a plant. This approach is then applied to a ground ceramic tile manufacturing plant, deriving a general architecture and an implementation of a service-based holon based on the ISA-95 object model, with a human-facing side (a website) and a machine-facing side (a Web Service).
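The trend-detection role the abstract assigns to the complex event processing engine can be illustrated with a toy, hypothetical rule; it is not the paper's ISA-95 holon implementation or an enterprise service bus, and the event format, window size and threshold are assumptions for illustration only.

```python
# Toy stand-in for the complex event processing idea recommended in the paper:
# watch a stream of shop-floor events and raise a higher-level event when a
# trend emerges. Hypothetical sliding-window rule for illustration only.

from collections import deque

class DefectTrendDetector:
    def __init__(self, window=20, threshold=0.15):
        self.window = deque(maxlen=window)   # last N inspection results
        self.threshold = threshold           # defect ratio that triggers an alert

    def on_event(self, event):
        """event: {'type': 'inspection', 'defective': bool, 'lot': str}"""
        if event.get("type") != "inspection":
            return None
        self.window.append(1 if event["defective"] else 0)
        if len(self.window) == self.window.maxlen:
            rate = sum(self.window) / len(self.window)
            if rate >= self.threshold:
                return {"type": "quality_trend_alert", "defect_rate": rate}
        return None

if __name__ == "__main__":
    detector = DefectTrendDetector(window=10, threshold=0.3)
    stream = [{"type": "inspection", "defective": i % 3 == 0, "lot": f"L{i}"}
              for i in range(30)]
    for ev in stream:
        alert = detector.on_event(ev)
        if alert:
            print(alert)   # in the paper's design this would be published on the service bus
```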
APA, Harvard, Vancouver, ISO, and other styles
36

Korobko, Katerina. "INTEGRATED RESIDENTIAL AND COMMUNITY COMPLEXES OF MEDIUM FLOORS." Spatial development, no. 3 (April 14, 2023): 35–46. http://dx.doi.org/10.32347/2786-7269.2023.3.35-46.

Full text
Abstract:
Setting the problem: designing buildings for the centers of small towns and historical areas is challenging because such buildings must accommodate residential and public functions at the same time. The complexity of integrating these functions calls for the search for and use of special design techniques. Publications on integrated residential and community centers cover different stages of design, from studies of traditional medieval cities to more recent examples of urban center formation, especially in areas with regulated building heights. The article examines the peculiarities of the architectural and planning organization of housing depending on the social tasks assigned to the territory, and options for solving the related problems. The purpose of the article is to identify trends and define approaches to the formation of integrated residential and public complexes of medium rise. It was revealed that, as a result of changes in people's way of life brought about by various kinds of disasters, a large share of qualified workers began to prefer working at home or within walking distance of their housing. A need emerged for housing that can provide freedom and flexibility in the daily schedule. The expediency of combining two environments with different purposes is substantiated, and the factors of their mutual influence are taken into account. International and national experience was analyzed, and differences in the architectural and planning organization of integrated medium-rise housing were revealed. It is proposed to consider housing as a complex that can satisfy a person's various needs and raise their standard of living by providing social services, security, leisure and health care facilities. It was found that the identified trend is relevant now and for the coming years. Further research should clarify the methods of architectural and planning organization of such buildings.
APA, Harvard, Vancouver, ISO, and other styles
37

Song, Sua, Hongrae Kim, and Young-Keun Chang. "Design and Implementation of 3U CubeSat Platform Architecture." International Journal of Aerospace Engineering 2018 (December 24, 2018): 1–17. http://dx.doi.org/10.1155/2018/2079219.

Full text
Abstract:
This paper describes the main concept and development of a standard platform architecture for a 3U CubeSat, whose design and performance were implemented and verified through the development of the KAUSAT-5 3U CubeSat. The 3U standard platform is built in a 1.5U size and developed as a modular concept so that payloads and attitude control actuators can be added and expanded to meet the user's needs. In the case of the electrical power system, the solar panel, the battery, and the deployment mechanism are designed to be configured by the user. The mechanical system design maximizes the electrical capability to accommodate various payloads and to integrate and miniaturize EEE (Electrical, Electronic, and Electromechanical) parts and subsystem functions/performance into limited-size PCBs. The performance of KAUSAT-5, which adopts the standard platform, was verified by mounting the VSCMG (Variable Speed Control Moment Gyro), one payload for technical demonstration, at the bottom of the platform and the infrared (IR) camera, the other payload for a science mission, on the top. A 3U CubeSat equipped with an electro-optical camera is under development, implementing the standard platform to reduce development cost and schedule by minimizing additional verification.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Yujian, Fei Tong, Chuanyou Li, and Yuwei Xu. "Bi-Objective Workflow Scheduling on Heterogeneous Computing Systems Using a Memetic Algorithm." Electronics 10, no. 2 (January 18, 2021): 209. http://dx.doi.org/10.3390/electronics10020209.

Full text
Abstract:
Due to the high power bills and the negative environmental impacts, workflow scheduling with energy consciousness has been an emerging need for modern heterogeneous computing systems. A number of approaches have been developed to find suboptimal schedules through heuristics by means of slack reclamation or trade-off functions. In this article, a memetic algorithm for energy-efficient workflow scheduling is proposed for a quality-guaranteed solution with high runtime efficiency. The basic idea is to retain the advantages of population-based, heuristic-based, and local search methods while avoiding their drawbacks. Specifically, the proposed algorithm incorporates an improved non-dominated sorting genetic algorithm (NSGA-II) to explore potential task priorities and allocates tasks to processors by an earliest finish time (EFT)-based heuristic to provide a time-efficient candidate. Then, a local search method integrated with a pruning technique is launched with a low probability, to exploit the feasible region indicated by the candidate schedule. Experimental results on workflows from both randomly-generated and real-world applications suggest that the proposed algorithm achieves bi-objective optimization, improving makespan and energy saving by 4.9% and 24.3%, respectively. Meanwhile, it has a low time complexity compared with the similar work HECS.
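The EFT-based allocation step mentioned in the abstract can be sketched compactly. The snippet below is a hypothetical illustration only: it turns a given task priority order into a schedule on two heterogeneous processors, and it does not reproduce the NSGA-II priority search, the local search, the energy model or communication costs from the paper; the DAG and execution times are made up.

```python
# Hypothetical sketch of the earliest-finish-time (EFT) allocation step used to
# turn a task priority order into a schedule on heterogeneous processors.
# Communication costs and the paper's energy model are ignored for brevity.

# Execution time of each task on each processor (heterogeneous), and the DAG.
EXEC_TIME = {"t1": [4, 6], "t2": [3, 2], "t3": [5, 4], "t4": [2, 3]}
PREDECESSORS = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}

def eft_schedule(priority_order, num_procs=2):
    proc_ready = [0.0] * num_procs        # when each processor becomes free
    finish = {}                           # task -> finish time
    placement = {}                        # task -> processor index
    for task in priority_order:
        data_ready = max((finish[p] for p in PREDECESSORS[task]), default=0.0)
        # Pick the processor giving the earliest finish time for this task.
        best = min(
            range(num_procs),
            key=lambda p: max(proc_ready[p], data_ready) + EXEC_TIME[task][p],
        )
        start = max(proc_ready[best], data_ready)
        finish[task] = start + EXEC_TIME[task][best]
        proc_ready[best] = finish[task]
        placement[task] = best
    return max(finish.values()), placement

if __name__ == "__main__":
    makespan, placement = eft_schedule(["t1", "t2", "t3", "t4"])
    print(f"makespan = {makespan}, placement = {placement}")
```

In a memetic scheme like the one described, the genetic search would propose different priority orders and this kind of heuristic would evaluate each of them quickly.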
APA, Harvard, Vancouver, ISO, and other styles
39

Adamidi, E., E. Gazis, and K. Nikita. "Control System (CS) and Data Acquisition (DAQ) architecture for the radiation background monitoring of a Personnel Safety System in the ATLAS cavern." HNPS Proceedings 24 (April 1, 2019): 72. http://dx.doi.org/10.12681/hnps.1846.

Full text
Abstract:
EDUSAFE is a 4-year Marie Curie ITN project, which focuses on research into the use of Virtual Reality (VR) and Augmented Reality (AR) during planned and emergency maintenance in extreme environments with a high radiation background (HEP experiments, nuclear installations, space, deep sea, etc.). The scientific objective of this project is research into advanced VR and AR technologies for a personnel safety system platform, including features, methods and tools. Current technology is not efficient because of the significant time-lag in communication and data transmission, missing multi-input interfaces, and the lack of simultaneous supervision of multiple workers working in extremely high radiation background environments. The aim is to technically advance and combine several technologies and integrate them as an integral part of a personnel safety system to improve safety, maintain availability, reduce errors and decrease the time needed for scheduled or sudden interventions. The research challenges lie in the development of real-time data transmission (time-lags less than human interaction speed), instantaneous analysis of data coming from different inputs (vision, sound, touch, buttons), interaction with multiple on-site users, complex interfaces, portability and wearability. The result is an integrated wearable VR/AR system and Control System which can be implemented and tested as a prototype. The LHC at CERN and its existing Personnel Safety System, requirements and protocols will be used as a test and demonstration platform. In this article, the progress of the project will be presented, especially the major contribution of the NTUA team in developing and optimizing the Control System (CS) and the Data Acquisition System (DAQ).
APA, Harvard, Vancouver, ISO, and other styles
40

Berezka, Vitaly. "Application of the integrated decision support system for scheduling of development projects." MATEC Web of Conferences 251 (2018): 05033. http://dx.doi.org/10.1051/matecconf/201825105033.

Full text
Abstract:
Introduction: The higher level of specialization in the various disciplines and technologies typical of the participants in modern investment and construction (development) projects creates a need for advanced project management models. Decision support systems and tools for communication and the organization of joint activities are mandatory components of efficient project management. Purpose-designed information systems make it possible to computerize project management functions such as scheduling. However, the optimum decision is still found on the basis of personal assessment by decision makers. At the same time, as competition in the construction industry increases, the need for decision support systems that optimize investment and construction activities under multi-objective optimization becomes evident. Methods: The findings of DSS development projects in the construction industry have been used. The studies have been based on a system integration approach to engineering, the method of successive concessions and a procedure for searching for satisfactory values that meet the STEM criteria under given weights. Results: Principles of development and functioning, the architecture, and the organizational and process aspects of a DSS for scheduling under multi-objective optimization. Discussion: An integrated DSS capable of multi-objective optimization of schedules has been proposed.
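The method of successive concessions the abstract refers to can be illustrated with a small, hypothetical sketch: objectives are ranked, and at each step only candidates within an allowed concession of the best value found so far survive. The candidate schedules, objective values and concession limits below are invented for illustration and are not the paper's DSS.

```python
# Hypothetical sketch of the method of successive concessions: objectives are
# handled in priority order, and candidates whose value exceeds the best found
# so far by more than the allowed concession are discarded at each step.

def successive_concessions(candidates, objectives, concessions):
    """
    candidates : list of dicts with one numeric value per objective (minimised)
    objectives : objective names in decreasing order of priority
    concessions: allowed absolute worsening for each objective but the last
    """
    remaining = list(candidates)
    for obj, concession in zip(objectives, concessions + [0.0]):
        best = min(c[obj] for c in remaining)
        remaining = [c for c in remaining if c[obj] <= best + concession]
    return remaining

if __name__ == "__main__":
    schedules = [
        {"name": "S1", "duration": 120, "cost": 5.2, "resource_peak": 14},
        {"name": "S2", "duration": 118, "cost": 5.6, "resource_peak": 11},
        {"name": "S3", "duration": 125, "cost": 4.9, "resource_peak": 12},
    ]
    # Duration first (5-day concession), then cost (0.5 allowed), then resource peak.
    chosen = successive_concessions(
        schedules, ["duration", "cost", "resource_peak"], [5.0, 0.5]
    )
    print([c["name"] for c in chosen])   # -> ['S2'] with this toy data
```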
APA, Harvard, Vancouver, ISO, and other styles
41

Almatroushi, Hessa, Moncer Hariga, Rami As'ad, and AbdulRahman Al-Bar. "The multi resource leveling and materials procurement problem: an integrated approach." Engineering, Construction and Architectural Management 27, no. 9 (May 7, 2020): 2135–61. http://dx.doi.org/10.1108/ecam-10-2019-0563.

Full text
Abstract:
Purpose This paper proposes an integrated approach that seeks to jointly optimize project scheduling and material lot sizing decisions for time-constrained project scheduling problems. Design/methodology/approach A mixed integer linear programming model is devised, which utilizes the splitting of noncritical activities as a means of leveling the renewable resources. The developed model minimizes renewable resource leveling costs along with consumable resource related costs, and it is solved using the IBM ILOG CPLEX optimization package. A hybrid metaheuristic procedure is also proposed to efficiently solve the model for larger projects with a complex network structure. Findings The results confirmed the significance of the integrated approach, as both the project schedule and the material ordering policy turned out to be different once compared to the sequential approach under the same parameter settings. Furthermore, the integrated approach resulted in a substantial total cost reduction for low values of the acquiring and releasing costs of the renewable resources. Computational experiments conducted over 240 test instances of various sizes and complexities illustrate the efficiency of the proposed metaheuristic approach, as it yields solutions that are on average 1.14% away from the optimal ones. Practical implications This work highlights the necessity of having project managers address project scheduling and material lot sizing decisions concurrently, rather than sequentially, to better level resources and minimize material-related costs. Significant cost savings were generated by the developed model despite the use of a small-scale example, which illustrates the great potential the integrated approach has in real-life projects. For real-life projects with a complex network topology, practitioners are advised to make use of the developed metaheuristic procedure due to its superior time efficiency compared to exact solution methods. Originality/value The sequential approach, wherein a project schedule is established first followed by allocating the needed resources, is proven to yield a nonoptimized project schedule and materials ordering policy, leading to an increase in the project's total cost. The integrated approach proposed hereafter optimizes both decisions at once, ensuring the timely completion of the project at the least possible cost. The proposed metaheuristic approach provides a viable alternative to exact solution methods, especially for larger projects.
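To make the lot-sizing side of the problem concrete, the sketch below solves only the material ordering subproblem for demands implied by a fixed schedule, using the classic Wagner–Whitin dynamic program. It is not the paper's joint MILP, resource-leveling formulation or metaheuristic; the weekly demands and cost parameters are invented for illustration.

```python
# Hypothetical sketch of the material lot-sizing subproblem only: given period
# demands implied by a fixed activity schedule, choose order periods so that
# ordering plus holding costs are minimal (Wagner-Whitin dynamic program).

def wagner_whitin(demand, order_cost, holding_cost):
    n = len(demand)
    best = [0.0] * (n + 1)        # best[t] = min cost to cover periods 0..t-1
    decision = [0] * (n + 1)
    for t in range(1, n + 1):
        best[t] = float("inf")
        for j in range(t):        # last order placed in period j covers j..t-1
            carry = sum(demand[k] * (k - j) * holding_cost for k in range(j, t))
            cost = best[j] + order_cost + carry
            if cost < best[t]:
                best[t], decision[t] = cost, j
    # Recover the order periods from the decisions.
    orders, t = [], n
    while t > 0:
        orders.append(decision[t])
        t = decision[t]
    return best[n], sorted(orders)

if __name__ == "__main__":
    # Material demand per week derived from an (illustrative) activity schedule.
    weekly_demand = [40, 0, 70, 30, 0, 60]
    total_cost, order_weeks = wagner_whitin(weekly_demand, order_cost=90, holding_cost=1.0)
    print(f"cost = {total_cost}, order in weeks {order_weeks}")
```

The integrated approach discussed in the paper goes further: it lets the schedule itself shift so that both this ordering cost and the resource-leveling cost are reduced together rather than one after the other.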
APA, Harvard, Vancouver, ISO, and other styles
42

AlQahtani, Osama. "Effects of User Equipment and Integrated Access and Backhaul Schedulers on the Throughput of 5G Millimeter-Wave Networks." IT Professional 25, no. 3 (May 2023): 19–23. http://dx.doi.org/10.1109/mitp.2023.3271300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ali, Ahmed K. "A case study in developing an interdisciplinary learning experiment between architecture, building construction, and construction engineering and management education." Engineering, Construction and Architectural Management 26, no. 9 (October 21, 2019): 2040–59. http://dx.doi.org/10.1108/ecam-07-2018-0306.

Full text
Abstract:
Purpose The purpose of this paper is to highlight the value of interdisciplinary learning specifically in the architecture (ARCH), building construction (BC) and construction management and engineering (CEM) disciplines within the USA’s higher education system. The study attempts to expand the existing literature on integrated design and construction education and offer an alternative model for academic students’ collaboration when restructuring curriculums is not possible in the short term. Design/methodology/approach The study adopted a qualitative research methodology, which involved designing a structured learning experiment, followed by collecting the “lived experience” of 31 participants from three majors according to the institution’s institutional review board (IRB) office’s guidelines. The author hypothesized that students from different but related disciplines, working on a real-life project, would better understand the value of each other’s knowledge brought to the teamwork before graduation. The data were analyzed and compared to existing literature on integrated project delivery and collaborative learning models. Data collection (surveys) was approved by the higher education institution’s IRB No. 13-021. Findings Despite the already-existing curriculum obstacles, the majority of students were very pleased with this collaborative experiment. The results confirmed many of the expectations about how students viewed each other’s discipline. The preconceived notions were dissipated at the end of the study, and students expressed more appreciation for each other’s field and expressed interest in learning more about the thought processes of other disciplines. Research limitations/implications Typical conflicting academic schedules were the greatest obstacle in this experiment. Architecture students often devote the majority of their time to design studios and therefore are unable to fully engage in an integrated capstone project like this one as an extracurricular activity. Because of the chosen research approach, the research results may lack generalizability. Therefore, researchers are encouraged to test the proposed propositions further. Practical implications It is possible to develop a successful collaborative experience in the architecture, engineering and construction higher education system without major restructuring of the curriculums. The impact on students’ learning experience is greater than that of the existing separated education model. Originality/value This paper fulfills an identified need to study how integrated design and construction education occurs without creating new dedicated programs or coursework.
APA, Harvard, Vancouver, ISO, and other styles
44

DECKER, KEITH, ALAN GARVEY, MARTY HUMPHREY, and VICTOR LESSER. "A REAL-TIME CONTROL ARCHITECTURE FOR AN APPROXIMATE PROCESSING BLACKBOARD SYSTEM." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 02 (April 1993): 265–84. http://dx.doi.org/10.1142/s0218001493000145.

Full text
Abstract:
Approximate processing is an approach to real-time AI problem solving in domains in which compromise is possible between the resources required to generate a solution and the quality of that solution. It is a satisficing approach in which the goal is to produce acceptable solutions within the available time and computational resource constraints. Previous work has shown how to integrate approximate processing knowledge sources within the blackboard architecture. However, in order to solve real-time problems with hard deadlines using a blackboard system, we need to have: (1) a predictable blackboard execution loop, (2) a representation of the set of current and future tasks and their estimated durations, and (3) a model of how to modify those tasks when their deadlines are projected to be missed, and how the modifications will affect the task durations and results. This paper describes four components for achieving this goal in an approximate processing blackboard system. A parameterized low-level control loop allows predictable knowledge source execution, multiple execution channels allow dynamic control over the computation involved in each task, a meta-controller allows a representation of the set of current and future tasks and their estimated durations and results, and a real-time blackboard scheduler monitors and modifies tasks during execution so that deadlines are met. An example is given that illustrates how these components work together to construct a satisficing solution to a time-constrained problem in the Distributed Vehicle Monitoring Testbed (DVMT).
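The deadline-monitoring idea described here can be illustrated with a toy loop in which each pending task has a full and an approximate variant with different estimated durations, and the scheduler downgrades tasks when the projected finish misses the deadline. This is a hypothetical stand-in for the paper's blackboard meta-controller and scheduler, not their implementation; the task names and durations are invented.

```python
# Hypothetical sketch of deadline monitoring with approximate processing: when
# the projected finish time exceeds the deadline, the scheduler downgrades the
# latest pending tasks to their cheaper approximate variants first.

def projected_finish(now, tasks):
    return now + sum(t["duration"][t["mode"]] for t in tasks)

def enforce_deadline(now, tasks, deadline):
    # Downgrade tasks (last first) until the projection meets the deadline
    # or there is nothing left to approximate.
    for task in reversed(tasks):
        if projected_finish(now, tasks) <= deadline:
            break
        if task["mode"] == "full":
            task["mode"] = "approximate"
    return tasks

if __name__ == "__main__":
    pending = [
        {"name": "track_vehicle", "mode": "full", "duration": {"full": 8, "approximate": 3}},
        {"name": "classify",      "mode": "full", "duration": {"full": 6, "approximate": 2}},
        {"name": "fuse_tracks",   "mode": "full", "duration": {"full": 5, "approximate": 2}},
    ]
    enforce_deadline(now=0, tasks=pending, deadline=15)
    for t in pending:
        print(t["name"], t["mode"])   # the later tasks end up approximated
```

The real system also has to account for how the downgrade changes result quality, which is where the satisficing trade-off described in the abstract comes in.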
APA, Harvard, Vancouver, ISO, and other styles
45

He, Chao, and Ruyan Wang. "A QoE-Aware Energy Supply Scheme over a FiWi Access Network in the 5G Era." Sensors 20, no. 13 (July 7, 2020): 3794. http://dx.doi.org/10.3390/s20133794.

Full text
Abstract:
Integrated fiber-wireless (FiWi) should be regarded as a promising access network architecture for future 5G networks and beyond, owing to its seamless combination of the flexibility, ubiquity and mobility of the wireless mesh network (WMN) front-end with the large capacity, high bandwidth and strong robustness of the time- and wavelength-division multiplexed passive optical network (TWDM-PON) backhaul. However, a key issue for both traditional human-to-human (H2H) traffic and the emerging Tactile Internet is energy-conserving network operation. Therefore, a power-saving method should be instrumental in the design of the wireless retransmission-enabled architecture. Toward this end, this paper first proposes a novel energy-supply paradigm for the FiWi converged network infrastructure, i.e., the emerging power over fiber (PoF) technology instead of an external power supply. Then, the existing time-division multiple access (TDMA) scheme and PoF technology are leveraged to carry out joint dynamic bandwidth allocation (DBA) and provide enough power for the sleep schedule in each integrated optical network unit mesh portal point (ONU-MPP) branch. Additionally, the correlation between the transmitted optical power of the optical line terminal (OLT) and the quality of experience (QoE) guarantee affected by multiple hops in the wireless front-end is considered in detail. The research results prove that the envisioned paradigm can significantly reduce the energy consumption of the whole FiWi system while satisfying the average delay constraints, thus providing enough survivability for the multimode optical fiber.
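The DBA step mentioned in the abstract can be illustrated with a minimal, hypothetical limited-service round, in which each ONU-MPP reports its queued upstream traffic and the OLT grants it up to a fixed share of the TDMA cycle. The cycle capacity, grant cap and queue reports are assumptions for illustration; the paper's PoF energy supply, sleep scheduling and QoE/hop-count coupling are not modeled.

```python
# Minimal, hypothetical sketch of a limited-service dynamic bandwidth allocation
# (DBA) round in a TDMA-based PON backhaul.

CYCLE_BYTES = 12_000   # upstream capacity of one polling cycle (illustrative)

def limited_service_dba(reports, max_grant):
    """reports: {onu_id: queued_bytes}; returns {onu_id: granted_bytes}."""
    grants = {onu: min(queued, max_grant) for onu, queued in reports.items()}
    # If the capped grants still exceed the cycle, scale them down proportionally.
    total = sum(grants.values())
    if total > CYCLE_BYTES:
        scale = CYCLE_BYTES / total
        grants = {onu: int(g * scale) for onu, g in grants.items()}
    return grants

if __name__ == "__main__":
    queue_reports = {"ONU-MPP-1": 9_000, "ONU-MPP-2": 2_500, "ONU-MPP-3": 7_000}
    print(limited_service_dba(queue_reports, max_grant=5_000))
    # An idle or under-loaded ONU-MPP could be put to sleep for the rest of the
    # cycle, which is where an energy-saving policy like the paper's would plug in.
```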
APA, Harvard, Vancouver, ISO, and other styles
46

Lu, Peng, Xiao Cong, and Dongdai Zhou. "E-learning-Oriented Software Architecture Design and Case Study." International Journal of Emerging Technologies in Learning (iJET) 10, no. 4 (September 22, 2015): 59. http://dx.doi.org/10.3991/ijet.v10i4.4698.

Full text
Abstract:
Nowadays, E-learning systems have been widely applied to practical teaching. They are favored for their personalized course arrangement and flexible learning schedules. However, such systems do have some problems in practice; for example, the functions of a single piece of software are not diversified enough to fully satisfy teaching requirements. In order to cater to more applications in the teaching process, it is necessary to integrate functions from different systems, but differences in development techniques and inflexibility in design make this difficult to implement. The major reason for these problems is the lack of a sound software architecture. In this article, we build a domain model and a component model of an E-learning system, together with a WebService-based component integration method. We also propose an abstract E-learning framework that can express the semantic relationships among components and achieve high-level reuse on the basis of an informationized teaching mode. On this foundation, we form an E-learning-oriented layered software architecture containing a component library layer, an application framework layer and an application layer. Moreover, the system supports multiplexing across its layers and is not bound to a particular development language or tools. With the help of this software architecture, we can flexibly build customized E-learning systems like building blocks, through framework selection, component assembly and replacement. In addition, we exemplify how to build a concrete E-learning system on the basis of this software architecture.
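The "building blocks" idea can be illustrated with a toy sketch in which the application layer depends only on small component interfaces, so a component can be swapped without touching the rest of the system. This is a hypothetical illustration, not the paper's WebService-based component model; the interface and component names are invented.

```python
# Toy illustration of assembling an application from replaceable components
# behind a common interface, in the spirit of the layered architecture described.

from abc import ABC, abstractmethod

class ScheduleComponent(ABC):
    @abstractmethod
    def next_session(self, learner_id: str) -> str: ...

class FixedSchedule(ScheduleComponent):
    def next_session(self, learner_id: str) -> str:
        return "Monday 10:00 lecture"

class AdaptiveSchedule(ScheduleComponent):
    def next_session(self, learner_id: str) -> str:
        return f"next unfinished unit for {learner_id}"

class ELearningApp:
    def __init__(self, scheduler: ScheduleComponent):
        self.scheduler = scheduler          # component chosen at assembly time

    def plan(self, learner_id: str) -> str:
        return self.scheduler.next_session(learner_id)

if __name__ == "__main__":
    # Swap the scheduling "block" without changing the application layer.
    print(ELearningApp(FixedSchedule()).plan("alice"))
    print(ELearningApp(AdaptiveSchedule()).plan("alice"))
```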
APA, Harvard, Vancouver, ISO, and other styles
47

Hadiuzzaman, Md, Yang Zhang, Tony Z. Qiu, Ming Lu, and Simaan AbouRizk. "Modeling the impact of work-zone traffic flows upon concrete construction: a high level architecture based simulation framework." Canadian Journal of Civil Engineering 41, no. 2 (February 2014): 144–53. http://dx.doi.org/10.1139/cjce-2013-0103.

Full text
Abstract:
Congestion can restrict access to construction activities both on and close to a freeway, reducing construction efficiency, especially by delaying the delivery of materials. Although previous studies have investigated the impact of work-zone capacities on traffic flow, most did not consider interactions with construction resources. There is currently no mature standard or acceptable model that can be used for this purpose and that would benefit both traffic and construction in an integrated framework; however, well-established, powerful simulation systems unique to each domain do exist. To this end, in this paper, a construction-traffic interdisciplinary simulation (CTISIM) framework based on high level architecture is proposed to combine the power of each domain’s simulation systems. A ready-mixed concrete production and delivery problem is adopted to demonstrate the benefit of the CTISIM, which optimizes construction resource arrangement and a concrete production schedule based on the real-time traffic conditions in a work-zone.
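The coupling idea behind such a framework can be conveyed with a heavily simplified, hypothetical stand-in: a traffic delay model and a concrete delivery model advance in lock-step and exchange state each time step. This does not use the high level architecture runtime or reproduce the CTISIM framework; the delay profile, dispatch interval and travel time are invented.

```python
# Hypothetical, heavily simplified stand-in for the co-simulation coupling: the
# work-zone travel delay computed by a "traffic model" feeds the truck arrivals
# seen by a "construction model" within the same time loop.

import random

def traffic_delay(minute):
    """Extra travel time (min) through the work zone; peaks mid-morning."""
    peak = 15 if 120 <= minute <= 240 else 5
    return peak + random.uniform(-2, 2)

def simulate(shift_minutes=480, dispatch_interval=30, base_travel=20):
    random.seed(1)
    pours_completed, truck_eta = 0, []
    for minute in range(shift_minutes):
        if minute % dispatch_interval == 0:           # batch plant dispatches a truck
            truck_eta.append(minute + base_travel + traffic_delay(minute))
        while truck_eta and truck_eta[0] <= minute:   # truck arrives, crew pours
            truck_eta.pop(0)
            pours_completed += 1
    return pours_completed

if __name__ == "__main__":
    print(f"pours completed in one shift: {simulate()}")
```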
APA, Harvard, Vancouver, ISO, and other styles
48

Eckert, Martina, Alicia Aglio, María-Luisa Martín-Ruiz, and Víctor Osma-Ruiz. "A New Architecture for Customizable Exergames: User Evaluation for Different Neuromuscular Disorders." Healthcare 10, no. 10 (October 21, 2022): 2115. http://dx.doi.org/10.3390/healthcare10102115.

Full text
Abstract:
This paper presents a modular approach to generic exergame design that combines custom physical exercises in a meaningful and motivating story. This aims to provide a tool that can be individually tailored and adapted to people with different needs, making it applicable to different diseases and states of disease. The game is based on motion capturing and integrates four example exercises that can be configured via our therapeutic web platform “Blexer-med”. To prove the feasibility for a wide range of different users, evaluation tests were performed on 14 patients with various types and degrees of neuromuscular disorders, classified into three groups based on strength and autonomy. The users were free to choose their schedule and frequency. The game scores and three surveys (before, during, and after the intervention) showed similar experiences for all groups, with the most vulnerable having the most fun and satisfaction. The players were motivated by the story and by achieving high scores. The average usage time was 2.5 times per week, 20 min per session. The pure exercise time was about half of the game time. The concept has proven feasible and forms a reasonable basis for further developments. The full 3D exercise needs further fine-tuning to enhance the fun and motivation.
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Weijian, Zhimin Guo, Nuannuan Li, Mingyan Li, Qing Fan, and Min Luo. "A Blind Signature-Aided Privacy-Preserving Power Request Scheme for Smart Grid." Wireless Communications and Mobile Computing 2021 (June 29, 2021): 1–10. http://dx.doi.org/10.1155/2021/9988170.

Full text
Abstract:
Smart grid is an emerging power system capable of providing appropriate electricity generation and distribution adjustments in a two-way communication mode. However, privacy preservation is a critical issue in the power request system, since malicious adversaries could infer users’ daily schedules from the power transmission channel. Blind signatures are an effective method of hiding users’ private information. In this paper, we propose an untraceable blind signature scheme based on the reputable modification digital signature algorithm (MDSA). Moreover, we put forward an improved credential-based power request system architecture integrated with the proposed blind signature. In addition, we prove the blindness and unforgeability of our blind signature under the assumption of the Elliptic Curve Discrete Logarithm Problem (ECDLP). Meanwhile, we analyze the privacy preservation, unforgeability, untraceability, and verifiability of the proposed scheme. Computational cost analysis demonstrates that our scheme is more efficient than two other blind signatures.
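The blinding/unblinding mechanism that makes a power request untraceable can be illustrated with the textbook RSA blind signature (Chaum). To be clear, this is not the ECDLP-based MDSA scheme proposed in the paper, and the parameters below are toy values far too small for real use; the sketch only shows how a signer can sign a request without seeing its content.

```python
# Illustration of blinding/unblinding using the textbook RSA blind signature.
# NOT the paper's MDSA/ECDLP scheme; toy parameters, never use in practice.

import math

# Toy RSA key (the classic small example).
p, q, e = 61, 53, 17
n = p * q                          # 3233
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(message, r):
    assert math.gcd(r, n) == 1
    return (message * pow(r, e, n)) % n          # user hides the request

def sign_blinded(blinded):
    return pow(blinded, d, n)                    # utility signs without seeing it

def unblind(blind_sig, r):
    return (blind_sig * pow(r, -1, n)) % n       # user recovers a valid signature

if __name__ == "__main__":
    power_request = 42            # stand-in for a hashed power request
    r = 7                         # user's random blinding factor
    signature = unblind(sign_blinded(blind(power_request, r)), r)
    print("signature verifies:", pow(signature, e, n) == power_request)
```

Because the signer only ever sees the blinded value, it cannot later link the final signature back to the signing session, which is the untraceability property the scheme in the paper targets with elliptic-curve machinery instead.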
APA, Harvard, Vancouver, ISO, and other styles
50

Khan, Gitosree, Sabnam Sengupta, and Anirban Sarkar. "Dynamic service composition in enterprise cloud bus architecture." International Journal of Web Information Systems 15, no. 5 (December 2, 2019): 550–76. http://dx.doi.org/10.1108/ijwis-01-2019-0005.

Full text
Abstract:
Purpose Service composition based on non-scenario aspects has become one of the latest issues in enterprise software applications for multi-cloud environments, due to the phenomenal increase in the number of Web services. Traditional service composition patterns struggle to support dynamic, flexible and autonomous service composition on an inter-cloud platform. To address this problem, this paper aims to describe a dynamic service composition framework (SCF) that is enriched with various structural and functional aspects of composition patterns in a cloud computing environment. The proposed methodology helps to integrate various heterogeneous cloud services dynamically to acquire an optimal and novel enterprise solution for delivering services to end users automatically. Design/methodology/approach SCF and different composition patterns have been used to compose the services present in the inter-cloud architecture of the multi-agent-based system. Further, the proposed dynamic service composition algorithm is illustrated using a hybrid approach, where services are chosen according to various quality of service parameters. In addition, a priority-based service scheduling algorithm is proposed that facilitates the automated, optimal delivery of cloud services. Findings The proposed framework is capable of composing heterogeneous services and facilitates the structural and functional aspects of the service composition process in enterprise cloud-based applications in terms of flexibility, scalability, integrity and dynamicity of the cloud bus. The advantage of the proposed algorithm is that it helps to minimize execution cost and processing time and achieves a better success rate in delivering services according to customers’ needs. Originality/value The novelty of the proposed architecture is that it coordinates cloud participants, automates the service discovery pattern, reconfigures scheduled services and focuses on aggregating composite services in inter-cloud environments. Besides, the proposed framework supports several non-functional characteristics such as robustness, flexibility, dynamicity, scalability and reliability of the system.
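The priority-based scheduling idea can be illustrated with a toy sketch: each candidate cloud service receives a score from weighted QoS attributes and requests are served best-score first via a priority queue. The weights, attribute names and candidate services are assumptions for illustration; this is not the Enterprise Cloud Bus framework's actual algorithm.

```python
# Hypothetical sketch of priority-based service scheduling: score candidate
# services from weighted QoS attributes (cost and response time penalised,
# availability rewarded) and dispatch them best-score first.

import heapq

WEIGHTS = {"cost": 0.4, "response_time": 0.4, "availability": 0.2}

def qos_score(service):
    # Lower is better; in practice the attributes would be normalised first.
    return (WEIGHTS["cost"] * service["cost"]
            + WEIGHTS["response_time"] * service["response_time"]
            - WEIGHTS["availability"] * service["availability"])

def schedule(services):
    heap = [(qos_score(s), s["name"]) for s in services]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)          # dispatch the best-ranked service next
    return order

if __name__ == "__main__":
    candidates = [
        {"name": "storage-eu",   "cost": 3.0, "response_time": 120, "availability": 99.9},
        {"name": "storage-us",   "cost": 2.5, "response_time": 180, "availability": 99.5},
        {"name": "storage-asia", "cost": 2.0, "response_time": 90,  "availability": 98.0},
    ]
    print(schedule(candidates))
```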
APA, Harvard, Vancouver, ISO, and other styles