Academic literature on the topic 'Integrated Scheduler Architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Integrated Scheduler Architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Integrated Scheduler Architecture"

1

M. Shahane, Priti, and Narayan Pisharoty. "Implementation of ISLIP scheduler for NOC router on FPGA." International Journal of Engineering & Technology 7, no. 2.12 (April 3, 2018): 268. http://dx.doi.org/10.14419/ijet.v7i2.12.11302.

Full text
Abstract:
Network on chip (NoC) effectively replaces a traditional bus-based architecture in System on chip (SoC). The NoC provides a solution to the communication bottleneck of the bus-based interconnection in SoC, where large numbers of intellectual property (IP) modules are integrated on a single chip for better performance. In the NoC architecture, the router is a dominant component, which should provide a contention-free architecture with low latency. The router consists of an input block, a scheduler and a crossbar switch. The design of the scheduler drives the performance of the NoC router in terms of latency. Hence a starvation-free scheduler is of paramount importance in NoC router design. The iSLIP (iterative serial line internet protocol) scheduler has a programmable priority encoder, which makes it a faster and more efficient scheduler than a round-robin arbiter. In this paper a 2x4 NoC router using the iSLIP scheduler is proposed. The proposed design is implemented in Verilog on a Xilinx Spartan 3 device.
APA, Harvard, Vancouver, ISO, and other styles
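The grant-accept arbitration that iSLIP performs, as summarized in the abstract above, can be illustrated in a few lines of pseudocode. The sketch below is not the authors' Verilog/FPGA design; it is a minimal, single-iteration Python illustration of round-robin grant and accept pointers, with all names and structures chosen here for illustration only.

    # Minimal single-iteration iSLIP-style arbitration sketch (illustrative only).
    def islip_iteration(requests, grant_ptr, accept_ptr):
        """requests[i][j] is True if input i has a cell queued for output j.
        grant_ptr[j] / accept_ptr[i] are round-robin pointers per output / input."""
        n_in, n_out = len(requests), len(requests[0])

        # Grant phase: each output grants to the requesting input nearest its pointer.
        grants = {}
        for j in range(n_out):
            for k in range(n_in):
                i = (grant_ptr[j] + k) % n_in
                if requests[i][j]:
                    grants[j] = i
                    break

        # Accept phase: each input accepts the granting output nearest its pointer.
        granted_outputs = {}
        for j, i in grants.items():
            granted_outputs.setdefault(i, []).append(j)

        matches = []
        for i, outs in granted_outputs.items():
            for k in range(n_out):
                j = (accept_ptr[i] + k) % n_out
                if j in outs:
                    matches.append((i, j))
                    # Pointers advance one position beyond the matched port,
                    # only when a grant is actually accepted.
                    grant_ptr[j] = (i + 1) % n_in
                    accept_ptr[i] = (j + 1) % n_out
                    break
        return matches

The rotating pointers are what avoid starvation: once a port is served, it becomes the lowest-priority candidate on the next arbitration cycle.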
2

Zagan, Ionel, and Vasile Găitan. "Hardware RTOS: Custom Scheduler Implementation Based on Multiple Pipeline Registers and MIPS32 Architecture." Electronics 8, no. 2 (February 14, 2019): 211. http://dx.doi.org/10.3390/electronics8020211.

Full text
Abstract:
The task context switch operation, the inter-task synchronization and communication mechanisms, as well as the jitter that occurs when handling aperiodic events, are crucial factors in implementing real-time operating systems (RTOS). In practice and in the literature, several solutions can be identified for improving the response speed and performance of real-time systems. Software implementations of RTOS-specific functions can generate significant delays, adversely affecting the deadlines required for certain applications. This paper presents an original implementation of a dedicated processor, based on multiple pipeline registers, and hardware support for a dynamic scheduler with the following characteristics: it performs unitary event management, provides access to the architecture's shared resources, and prioritizes and executes the multiple events expected by the same task. The paper also presents a method through which interrupts are assigned to tasks. Through dedicated instructions, the integrated hardware scheduler implements task synchronization with multiple prioritized events, thus ensuring efficient functioning of the processor in the context of real-time control.
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Hong Chun, and Wen Sheng Niu. "Design and Analysis of AFDX Network Based High-Speed Avionics System of Civil Aircraft." Advanced Materials Research 462 (February 2012): 445–51. http://dx.doi.org/10.4028/www.scientific.net/amr.462.445.

Full text
Abstract:
Avionics Full Duplex Switched Ethernet (AFDX), standardized as ARINC 664, is a major upgrade for integrated avionics systems of civil aircraft. It has become the current communication technology in the avionics context and provides a backbone network for the civil avionics system. This paper focuses on features of the AFDX network protocol. An architecture for the AFDX switch based on shared memory is proposed to meet the requirements of real-time avionics systems. In addition, frame filtering, traffic policing and frame scheduling functions are used to eliminate uncertainties in heavy traffic flows. The End System (ES) host-target architecture is also researched in this paper. The virtual link scheduler, redundancy management, and protocol stack in the ES are designed to ensure determinism and reliability of data communication. The AFDX switch and ES have been successfully developed, and a configuration tool, an ARINC 615A loader and a simulation tool related to the AFDX network are also provided as a package solution to support avionics system construction. Finally, the AFDX switch and ESes have passed the ARINC 664 protocol conformance test and certification; the test results show that our AFDX products meet the requirements of real-time communication, determinism and reliability defined in ARINC 664.
APA, Harvard, Vancouver, ISO, and other styles
4

S., Manishankar, and S. Sathayanarayana. "Performance evaluation and resource optimization of cloud based parallel Hadoop clusters with an intelligent scheduler." International Journal of Engineering & Technology 7, no. 4.20 (November 29, 2018): 4220. http://dx.doi.org/10.14419/ijet.v7i4.13372.

Full text
Abstract:
Data generated from real-time information systems are always incremental in nature. Processing such huge incremental data at large scale requires a parallel processing system such as a Hadoop-based cluster. A major challenge that arises in all cluster-based systems is how efficiently the resources of the system can be used. The research carried out proposes a model architecture for a Hadoop cluster with additional integrated components: a super node that manages the cluster's computations, and a mediation manager that performs performance monitoring and evaluation. The super node is equipped with an intelligent, or adaptive, scheduler that schedules jobs with optimal resources. The scheduler is termed intelligent because it automatically decides which resource to use for which computation, with the help of a cross mapping of resources and jobs performed by a genetic algorithm that finds the best matching resource. The mediation node deploys Ganglia, a standard monitoring tool for Hadoop clusters, to collect and record the performance parameters of the Hadoop cluster. Overall, the system schedules different jobs with optimal usage of resources, thus achieving better efficiency than the native capacity scheduler in Hadoop. The system is deployed on top of an OpenNebula cloud environment for scalability.
APA, Harvard, Vancouver, ISO, and other styles
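The abstract above describes a genetic algorithm that cross-maps jobs to cluster resources. The following is a hedged Python sketch of that general idea, not the authors' implementation: the chromosome encoding, fitness function, population size, and mutation rate are all assumptions made for illustration.

    import random

    # Illustrative GA sketch for mapping jobs to cluster nodes (hypothetical values).
    def fitness(assignment, job_cost, node_capacity):
        """Lower estimated makespan -> higher fitness."""
        load = [0.0] * len(node_capacity)
        for job, node in enumerate(assignment):
            load[node] += job_cost[job] / node_capacity[node]
        return -max(load)

    def evolve(job_cost, node_capacity, pop=30, gens=50):
        n_jobs, n_nodes = len(job_cost), len(node_capacity)
        population = [[random.randrange(n_nodes) for _ in range(n_jobs)] for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=lambda a: fitness(a, job_cost, node_capacity), reverse=True)
            survivors = population[: pop // 2]
            children = []
            while len(survivors) + len(children) < pop:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_jobs)          # single-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:                   # occasional mutation
                    child[random.randrange(n_jobs)] = random.randrange(n_nodes)
                children.append(child)
            population = survivors + children
        return max(population, key=lambda a: fitness(a, job_cost, node_capacity))

Here a chromosome is simply a list assigning each job index to a node index, and the fitness rewards assignments whose most-loaded node finishes earliest.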
5

Chen, Chi-Ting, Ling-Ju Hung, Sun-Yuan Hsieh, Rajkumar Buyya, and Albert Y. Zomaya. "Heterogeneous Job Allocation Scheduler for Hadoop MapReduce Using Dynamic Grouping Integrated Neighboring Search." IEEE Transactions on Cloud Computing 8, no. 1 (January 1, 2020): 193–206. http://dx.doi.org/10.1109/tcc.2017.2748586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Audah, Lukman, Zhili Sun, and Haitham Cruickshank. "QoS based Admission Control using Multipath Scheduler for IP over Satellite Networks." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 6 (December 1, 2017): 2958. http://dx.doi.org/10.11591/ijece.v7i6.pp2958-2969.

Full text
Abstract:
This paper presents a novel scheduling algorithm to support quality of service (QoS) for multiservice applications over integrated satellite and terrestrial networks, using an admission control system with multipath selection capabilities. The algorithm exploits the multipath routing paradigm over LEO and GEO satellite constellations in order to achieve optimum end-to-end QoS of the client-server Internet architecture for HTTP web service, file transfer, video streaming and VoIP applications. The proposed multipath scheduler over the satellite networks advocates a load balancing technique based on optimum time-bandwidth in order to accommodate bursts of application traffic. The method tries to balance the bandwidth load and queue length on each link over the satellite in order to fulfil the optimum QoS level for each traffic type. Each connection of a traffic type is routed over the link with the least bandwidth load and queue length at the current time in order to avoid a congestion state. The multipath routing scheduling decision is based on per-connection granularity so that packet reordering at the receiver side can be avoided. The performance evaluation of IP over satellites has been carried out using multiple connections, different file sizes and bit-error-rate (BER) variations to measure packet delay, loss ratio and throughput.
APA, Harvard, Vancouver, ISO, and other styles
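The core per-connection decision described in the abstract above (route each new connection over the link with the least bandwidth load and queue length) can be sketched compactly. The weights, field names and numbers below are assumptions for illustration, not values from the paper.

    from dataclasses import dataclass

    @dataclass
    class Link:
        name: str
        bandwidth_load: float   # fraction of capacity currently in use (0..1)
        queue_length: int       # packets currently queued

    def select_link(links, w_load=0.5, w_queue=0.5, max_queue=1000):
        """Route a new connection over the least-loaded link; per-connection
        granularity avoids packet reordering at the receiver."""
        def cost(link):
            return w_load * link.bandwidth_load + w_queue * (link.queue_length / max_queue)
        return min(links, key=cost)

    links = [Link("LEO-1", 0.62, 340), Link("LEO-2", 0.41, 120), Link("GEO-1", 0.30, 510)]
    print(select_link(links).name)   # -> "LEO-2" with these illustrative numbers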
7

Guzmán Ortiz, Eduardo, Beatriz Andres, Francisco Fraile, Raul Poler, and Ángel Ortiz Bas. "Fleet management system for mobile robots in healthcare environments." Journal of Industrial Engineering and Management 14, no. 1 (January 28, 2021): 55. http://dx.doi.org/10.3926/jiem.3284.

Full text
Abstract:
Purpose: The purpose of this paper is to describe the implementation of a Fleet Management System (FMS) that plans and controls the execution of logistics tasks by a set of mobile robots in a real-world hospital environment. The FMS is developed upon an architecture that hosts a routing engine, a task scheduler, an Endorse Broker, a controller and a backend Application Programming Interface (API). The routing engine handles the geo-referenced data and the calculation of routes; the task scheduler implements algorithms to solve the task allocation problem and the trolley loading problem using an Integer Linear Programming (ILP) model and a Genetic Algorithm (GA), depending on the problem size. The Endorse Broker provides a messaging system to exchange information with the robotic fleet, while the controller implements the control rules to ensure the execution of the work plan. Finally, the backend API exposes some FMS functionalities to external systems.

Design/methodology/approach: The first part of the paper focuses on the dynamic path planning problem of a set of mobile robots in indoor spaces such as hospitals, laboratories and shopping centres. A review of algorithms developed in the literature to address dynamic path planning is carried out, and an analysis of the applications of such algorithms in mobile robots that operate in real indoor spaces is performed. The second part of the paper focuses on the description of the FMS, which consists of five integrated tools to support multi-robot dynamic path planning and fleet management.

Findings: The path planning problem of multiple mobile robots in indoor spaces, as covered by the literature review, poses great challenges due to the characteristics of the environment in which the robots move. The developed FMS for mobile robots in healthcare environments has resulted in a tool that enables: (i) interpretation of geo-referenced data; (ii) calculation and recalculation of dynamic path plans and task execution plans, through the implementation of advanced algorithms that take into account dynamic events; (iii) tracking of task execution; (iv) fleet traffic control; and (v) communication with external systems.

Practical implications: The proposed FMS has been developed under the scope of the ENDORSE project, which seeks to develop safe, efficient, and integrated indoor robotic fleets for logistic applications in healthcare and commercial spaces. Moreover, a computational analysis is performed using a virtual hospital floor plan.

Originality/value: This work proposes a novel FMS, which consists of integrated tools to support mobile multi-robot dynamic path planning in a real-world hospital environment. These tools include: a routing engine that handles the geo-referenced data and the calculation of routes; a task scheduler that includes a mathematical model to solve the path planning problem when a low number of robots is considered (for large problems, a genetic algorithm is also implemented to compute the dynamic path planning with less computational effort); an Endorse Broker that exchanges information between the robotic fleet and the FMS in a secure way; a backend API that provides an interface to manage the master data of the FMS, to calculate an optimal assignment of a set of tasks to a group of robots to be executed on a specific date and time, and to add a new task to be executed in the current shift; and, finally, a controller that ensures that the robots execute the tasks that have been assigned by the task scheduler.
APA, Harvard, Vancouver, ISO, and other styles
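The abstract above mentions switching between an exact ILP model and a genetic algorithm depending on problem size. As a rough sketch of that dispatch idea only, with the threshold, names and solver interfaces all assumed for illustration rather than taken from the paper:

    ILP_SIZE_LIMIT = 20   # hypothetical cut-off on tasks x robots

    def plan_tasks(tasks, robots, solve_ilp, solve_ga):
        """solve_ilp / solve_ga are injected solver callables returning an assignment."""
        problem_size = len(tasks) * len(robots)
        if problem_size <= ILP_SIZE_LIMIT:
            return solve_ilp(tasks, robots)   # exact, optimal, but slower
        return solve_ga(tasks, robots)        # approximate, but scales to large fleets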
8

Dong, Zihang, Yunming Cao, Naixue Xiong, and Pingping Dong. "EE-MPTCP: An Energy-Efficient Multipath TCP Scheduler for IoT-Based Power Grid Monitoring Systems." Electronics 11, no. 19 (September 28, 2022): 3104. http://dx.doi.org/10.3390/electronics11193104.

Full text
Abstract:
The Internet-of-Things (IoT) based monitoring system has significantly promoted the intelligence and automation of power grids. The inspection robots and wireless sensors used in the monitoring system usually have multiple network interfaces to achieve high-throughput and reliable transmission. The concurrent usage of these available interfaces with Multipath TCP (MPTCP) can enhance the quality of service of the communications. However, traditional MPTCP scheduling algorithms may bring about data disorder and even buffer blocking, which severely affects the transmission performance of MPTCP. Moreover, the common MPTCP improvement mechanisms for IoT pay insufficient attention to energy consumption, which is important for battery-limited wireless sensors. With the aim of conserving energy without loss of throughput, this paper develops an integrated multipath scheduler for energy consumption optimization named energy-efficient MPTCP (EE-MPTCP). EE-MPTCP first constructs a target optimization function which considers both network throughput and energy consumption. Then, based on the proposed MPTCP transmission model and an existing energy efficiency model, the network throughput and energy consumption of each path can be estimated. Finally, a heuristic scheduling algorithm is proposed to find a suitable set of paths for each application. As confirmed by experiments based on a Linux testbed as well as the NS3 simulation platform, the proposed scheduler can shorten the average completion time and reduce the energy consumption by up to 79.9% and 79.2%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
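To make the throughput-versus-energy trade-off in the abstract above concrete, here is a hedged sketch of one possible utility function and greedy subflow selection. The linear utility form, the weights, and the sample path figures are assumptions for illustration; they are not the EE-MPTCP algorithm itself.

    # Illustrative throughput/energy trade-off for subflow (path) selection.
    def utility(paths, alpha=1.0, beta=0.5):
        """paths: list of dicts with estimated 'throughput' (Mbps) and 'power' (W)."""
        throughput = sum(p["throughput"] for p in paths)
        energy = sum(p["power"] for p in paths)
        return alpha * throughput - beta * energy

    def choose_paths(candidates):
        """Greedily add paths while the utility keeps improving."""
        chosen, best = [], float("-inf")
        remaining = sorted(candidates, key=lambda p: p["throughput"] / p["power"], reverse=True)
        for path in remaining:
            trial = chosen + [path]
            if utility(trial) > best:
                chosen, best = trial, utility(trial)
            else:
                break
        return chosen

    candidates = [
        {"name": "wifi",  "throughput": 40.0, "power": 1.2},
        {"name": "lte",   "throughput": 25.0, "power": 2.0},
        {"name": "nbiot", "throughput": 0.3,  "power": 0.6},
    ]
    print([p["name"] for p in choose_paths(candidates)])   # -> ['wifi', 'lte'] here

With these illustrative numbers the low-rate path is excluded because its marginal throughput does not justify its energy cost, which is the flavour of decision the paper's heuristic makes per application.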
9

Lyu, Chenghao, Qi Fan, Fei Song, Arnab Sinha, Yanlei Diao, Wei Chen, Li Ma, et al. "Fine-grained modeling and optimization for intelligent resource management in big data processing." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 3098–111. http://dx.doi.org/10.14778/3551793.3551855.

Full text
Abstract:
Big data processing at production scale presents a highly complex environment for resource optimization (RO), a problem crucial for meeting the performance goals and budgetary constraints of analytical users. The RO problem is challenging because it involves a set of decisions (the partition count, placement of parallel instances on machines, and resource allocation to each instance), requires multi-objective optimization (MOO), and is compounded by the scale and complexity of big data systems while having to meet stringent time constraints for scheduling. This paper presents a MaxCompute-based integrated system to support multi-objective resource optimization via fine-grained instance-level modeling and optimization. We propose a new architecture that breaks RO into a series of simpler problems, new fine-grained predictive models, and novel optimization methods that exploit these models to make effective instance-level RO decisions well under a second. Evaluation using production workloads shows that our new RO system could reduce latency by 37-72% and cost by 43-78% at the same time, compared to the current optimizer and scheduler, while running in 0.02-0.23s.
APA, Harvard, Vancouver, ISO, and other styles
10

Dossis, Michael F. "Formal ESL Synthesis for Control-Intensive Applications." Advances in Software Engineering 2012 (June 27, 2012): 1–30. http://dx.doi.org/10.1155/2012/156907.

Full text
Abstract:
Due to the massive complexity of contemporary embedded applications and integrated systems, long effort has been invested in high-level synthesis (HLS) and electronic system level (ESL) methodologies to automatically produce correct implementations from high-level, abstract, and executable specifications written in program code. If the HLS transformations that are applied to the source code are formal, then the generated implementation is correct-by-construction. The focus of this work is on application-specific design, which can deliver optimal and customized implementations, as opposed to platform- or IP-based design, which is bound by the limits and constraints of the preexisting architecture. This work surveys and reviews past and current research in the area of ESL and HLS. Then, a prototype HLS compiler tool that has been developed by the author is presented, which utilizes compiler-generators and logic programming to turn synthesis into a formal process. The scheduler PARCS and the formal compilation of the system are tested with a number of benchmarks and real-world applications. This demonstrates the usability and applicability of the presented method.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Integrated Scheduler Architecture"

1

Saranya, N. "Efficient Schemes for Partitioning Based Scheduling of Real-Time Tasks in Multicore Architecture." Thesis, 2015. https://etd.iisc.ac.in/handle/2005/4495.

Full text
Abstract:
The correctness of hard real-time systems depends not only on their logical correctness but also on their ability to meet all their deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system. Existing schedulers in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks in all the cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute in any core with priorities higher than those of real-time tasks. In such systems, there is a need to develop a scheduler which minimizes the execution overhead of real-time tasks and ensures that the tasks' runtime is not affected. To this end, we develop an integrated scheduler architecture on the Linux kernel, called SchedISA, which executes hard real-time tasks with minimal interference from the Linux tasks while ensuring a fair share of CPU resources for the Linux tasks. We compared the execution overhead of real-time tasks in SchedISA implementing the partitioned earliest deadline first (P-EDF) scheduling algorithm with SCHED_DEADLINE's P-EDF implementation. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE.

Having developed a multicore scheduling architecture for scheduling hard real-time tasks, we explore existing multicore scheduling techniques and propose a new scheduling technique that is better in terms of efficiency and suitability than the existing multicore scheduling techniques. Existing real-time multicore schedulers use either a global or a partitioned scheduling technique to schedule real-time tasks. Partitioned scheduling is a static approach in which a task is mapped to a per-processor ready queue prior to scheduling it and cannot migrate. Partitioned scheduling makes ineffective use of the available processing power and incurs high overhead when real-time tasks are dynamic in nature. Global scheduling is a dynamic scheduling approach, where the processors share a single ready queue to execute the highest priority tasks. Global scheduling allows task migration, which results in high scheduling overhead.

In our work, we present a dynamic partitioning-based scheduling of real-time tasks, called DP scheduling. In DP scheduling, jobs of tasks are assigned to cores when they are released and remain in the same core till they finish execution. The partitioning in DP scheduling is done based on the slack time and priority of the existing jobs. If a job cannot be allocated to any core, then it is split and executed on more than one core. The DP scheduling technique attempts to retain the good features of both global and partitioned scheduling without compromising on resource utilization, and at the same time also tries to minimize the scheduling overhead. We have tested the DP scheduling technique with the EDF scheduling policy at each core, called the DP-EDF scheduling algorithm, and implemented it using the concept of SchedISA. We compared the performance of DP-EDF with the P-EDF and global EDF scheduling algorithms. Both simulation and experimental results show that the DP-EDF scheduling algorithm has better performance in terms of resource utilization, and comparable or better performance in terms of scheduling overhead, in comparison to these scheduling algorithms.
APA, Harvard, Vancouver, ISO, and other styles
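The release-time placement rule in the abstract above (assign each released job to a core whose existing jobs still leave enough slack, otherwise split it) can be sketched as follows. The admission test below is a deliberate simplification chosen for this illustration, not the thesis's exact schedulability analysis, and all field names are assumptions.

    # Illustrative sketch of slack-based dynamic partitioning at job release.
    def admissible(core_jobs, new_job, now):
        """Crude check: the new job and every queued job can still finish by its
        deadline if the core runs them in EDF order."""
        jobs = sorted(core_jobs + [new_job], key=lambda j: j["deadline"])
        t = now
        for j in jobs:
            t += j["remaining"]          # remaining execution time
            if t > j["deadline"]:
                return False
        return True

    def assign_job(cores, new_job, now):
        """cores: list of per-core job queues. Returns the chosen core index, or None,
        which would trigger the job-splitting path described in the abstract."""
        for idx, core_jobs in enumerate(cores):
            if admissible(core_jobs, new_job, now):
                core_jobs.append(new_job)
                return idx
        return None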

Book chapters on the topic "Integrated Scheduler Architecture"

1

Guevara, Ivan, Hafiz Ahmad Awais Chaudhary, and Tiziana Margaria. "Model-Driven Edge Analytics: Practical Use Cases in Smart Manufacturing." In Lecture Notes in Computer Science, 406–21. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19762-8_29.

Full text
Abstract:
In the Internet of Things (IoT) era, devices and systems generate enormous amounts of real-time data, and demand real-time analytics in an uninterrupted manner. The typical solution, a cloud-centred architecture providing an analytics service, cannot guarantee real-time responsiveness because of unpredictable workloads and network congestion. Recently, edge computing has been proposed as a solution to reduce latency in critical systems. For computation processing and analytics on the edge, the challenges include handling the heterogeneity of devices and data, and achieving processing on the edge in order to reduce the amount of data transmitted over the network. In this paper, we show how low-code, model-driven approaches benefit a Digital Platform for Edge analytics. The first solution uses EdgeX, an IIoT framework for supporting heterogeneous architectures, with the eKuiper rule-based engine. The engine fully automatically schedules tasks that retrieve data from the Edge infrastructure near where the data is generated, allowing us to create a continuous flow of information. The second solution uses FiWARE, an IIoT framework used in industry, using IoT agents to accomplish a pipeline for edge analytics. In our architecture, based on the DIME LC/NC Integrated Modelling Environment, both integrations of EdgeX/eKuiper and FiWARE happen by adding an External Native DSL to this Digital Platform. The DSL comprises a family of reusable Service-Independent Building blocks (SIBs), which are the essential modelling entities and (service) execution capabilities in the architecture's modelling layer. They provide users with capabilities to connect, control and organise devices and components, and develop custom workflows in a simple drag-and-drop manner.
APA, Harvard, Vancouver, ISO, and other styles
2

Mueller-Gritschneder, Daniel, Eric Cheng, Uzair Sharif, Veit Kleeberger, Pradip Bose, Subhasish Mitra, and Ulf Schlichtmann. "Cross-Layer Resilience Against Soft Errors: Key Insights." In Dependable Embedded Systems, 249–75. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_11.

Full text
Abstract:
Driven by technology scaling, integrated systems become more susceptible to various causes of random hardware faults, such as radiation-induced soft errors. Such soft errors may cause malfunction of the system due to corruption of data or control flow, which may lead to unacceptable risks for life or property in safety-critical applications. Hence, safety-critical systems deploy protection techniques such as hardening and redundancy at different layers of the system stack (circuit, logic, architecture, OS/schedule, compiler, software, algorithm) to improve resiliency against soft errors. Here, cross-layer resilience techniques aim at finding lower-cost solutions by providing accurate estimation of soft error resilience combined with a systematic exploration of protection techniques that work collaboratively across the system stack. This chapter demonstrates how to apply the cross-layer resilience principle to custom processors, fixed-hardware processors, accelerators, and SRAM memories (with a focus on soft errors) and presents key insights obtained.
APA, Harvard, Vancouver, ISO, and other styles
3

Lane, Jo Ann, and Barry Boehm. "System-of-Systems Cost Estimation." In Emerging Systems Approaches in Information Technologies, 204–13. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-976-2.ch012.

Full text
Abstract:
As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know “how much effort” and “how long” to implement the SoS. In order to answer these questions, it is important to first understand the types of activities performed in SoS architecture development and integration and how these vary across different SoS implementations. This article provides results of research conducted to determine types of SoS lead system integrator (LSI) activities and how these differ from the more traditional system engineering activities described in Electronic Industries Alliance (EIA) 632 (“Processes for Engineering a System”). This research further analyzed effort and schedule issues on “very large” SoS programs to more clearly identify and profile the types of activities performed by the typical LSI and to determine organizational characteristics that significantly impact overall success and productivity of the LSI effort. The results of this effort have been captured in a reduced-parameter version of the constructive SoS integration cost model (COSOSIMO) that estimates LSI SoS engineering (SoSE) effort.
APA, Harvard, Vancouver, ISO, and other styles
4

Lane, Jo Ann, and Barry Boehm. "System-of-Systems Cost Estimation." In Enterprise Information Systems, 986–96. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61692-852-0.ch406.

Full text
Abstract:
As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know “how much effort” and “how long” to implement the SoS. In order to answer these questions, it is important to first understand the types of activities performed in SoS architecture development and integration and how these vary across different SoS implementations. This article provides results of research conducted to determine types of SoS lead system integrator (LSI) activities and how these differ from the more traditional system engineering activities described in Electronic Industries Alliance (EIA) 632 (“Processes for Engineering a System”). This research further analyzed effort and schedule issues on “very large” SoS programs to more clearly identify and profile the types of activities performed by the typical LSI and to determine organizational characteristics that significantly impact overall success and productivity of the LSI effort. The results of this effort have been captured in a reduced-parameter version of the constructive SoS integration cost model (COSOSIMO) that estimates LSI SoS engineering (SoSE) effort.
APA, Harvard, Vancouver, ISO, and other styles
5

Breur, Tom. "Business Intelligence Architecture in Support of Data Quality." In Information Quality and Governance for Business Intelligence, 363–81. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4892-0.ch019.

Full text
Abstract:
Business Intelligence (BI) projects that involve substantial data integration have often proven failure-prone and difficult to plan. Data quality issues trigger rework, which makes it difficult to accurately schedule deliverables. Two things can bring improvement. Firstly, one should deliver information products in the smallest possible chunks, but without adding prohibitive overhead for breaking up the work in tiny increments. This will increase the frequency and improve timeliness of feedback on suitability of information products and hence make planning and progress more predictable. Secondly, BI teams need to provide better stewardship when they facilitate discussions between departments whose data cannot easily be integrated. Many so-called data quality errors do not stem from inaccurate source data, but rather from incorrect interpretation of data. This is mostly caused by different interpretation of essentially the same underlying source system facts across departments with misaligned performance objectives. Such problems require prudent stakeholder management and informed negotiations to resolve such differences. In this chapter, the authors suggest an innovation to data warehouse architecture to help accomplish these objectives.
APA, Harvard, Vancouver, ISO, and other styles
6

Sandhu, Rajinder, Adel Nadjaran Toosi, and Rajkumar Buyya. "An API for Development of User-Defined Scheduling Algorithms in Aneka PaaS Cloud Software." In Handbook of Research on Cloud Computing and Big Data Applications in IoT, 170–87. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8407-0.ch009.

Full text
Abstract:
Cloud computing provides resources using a multitenant architecture where infrastructure is created from one or more distributed datacenters. Scheduling of applications in cloud infrastructures is one of the main research areas in cloud computing. Researchers have developed many scheduling algorithms and evaluated them using simulators such as CloudSim. Their performance needs to be validated in real-time cloud environments to improve their usefulness. Aneka is one of the prominent PaaS software platforms which allows users to develop cloud applications using various programming models and the underlying infrastructure. This chapter presents a scheduling API developed for the Aneka software platform. Users can develop their own scheduling algorithms using this API and integrate them with Aneka to test their scheduling algorithms in real cloud environments. The proposed API provides all the required functionalities to integrate and schedule private, public, or hybrid clouds with the Aneka software.
APA, Harvard, Vancouver, ISO, and other styles
7

Davey, K., A. Moergeli, J. J. Brady, S. A. Saki, R. J. F. Goodfellow, and A. Del Amo. "Bart Silicon Valley Phase II – integrated cost & schedule life-cycle comparative risk analysis of single-bore versus twin-bore tunneling." In Tunnels and Underground Cities: Engineering and Innovation meet Archaeology, Architecture and Art, 4425–34. CRC Press, 2020. http://dx.doi.org/10.4324/9781003031659-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Davey, K., A. Moergeli, J. J. Brady, S. A. Saki, R. J. F. Goodfellow, and A. Del Amo. "Bart Silicon Valley Phase II – integrated cost & schedule life-cycle comparative risk analysis of single-bore versus twin-bore tunneling." In Tunnels and Underground Cities: Engineering and Innovation meet Archaeology, Architecture and Art, 4425–34. CRC Press, 2019. http://dx.doi.org/10.1201/9780429424441-468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Davey, K., A. Moergeli, J. J. Brady, S. A. Saki, R. J. F. Goodfellow, and A. Del Amo. "Bart Silicon Valley Phase II – integrated cost & schedule life-cycle comparative risk analysis of single-bore versus twin-bore tunneling." In Tunnels and Underground Cities: Engineering and Innovation meet Archaeology, Architecture and Art, 4425–34. CRC Press, 2020. http://dx.doi.org/10.1201/9781003031659-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alhakkak, Nada M. "BigGIS With Hadoop in MapReduce Environment." In Handbook of Research on Digital Research Methods and Architectural Tools in Urban Planning and Design, 25–32. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-9238-9.ch002.

Full text
Abstract:
BigGIS is a new product that resulted from developing GIS in the "Big Data" area; it is used for storing and processing big geographical data and helps in solving its issues. This chapter describes an optimized BigGIS framework in a MapReduce environment, M2BG. The suggested framework has been integrated into the MapReduce environment in order to solve the storage issues and benefit from the Hadoop environment. M2BG includes two steps: the BigGIS warehouse and BigGIS MapReduce. The first step contains three main layers: the Data Source and Storage Layer (DSSL), the Data Processing Layer (DPL), and the Data Analysis Layer (DAL). The second step is responsible for clustering using swarms as inputs for the Hadoop phase. It is then scheduled in the mapping part with the use of a preemptive priority scheduling algorithm: some data types are classified as critical and others as ordinary, and the reduce part uses a merge sort algorithm. M2BG should address security and be implemented with real data in the simulated environment and later in the real world.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Integrated Scheduler Architecture"

1

Jessop, Simon, Scott Valentine, and Michael Roemer. "CBM Integrated Maintenance Scheduler." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-51375.

Full text
Abstract:
Condition Based Maintenance (CBM) is a key technology enabling facility maintenance cost reduction. The CBM approach to maintenance replaces rigid time-based maintenance schedules with the "right maintenance at the right time" identified by real-time equipment health monitoring. This approach creates a new requirement for determining the best time to schedule newly identified critical maintenance actions in light of the real-world constraints of available labor and resources. One of the major challenges encountered when attempting to optimize a maintenance schedule is related to the resolution of the many and often complex interdependencies or constraints present throughout the maintenance process. This paper presents a CBM decision support software tool that leverages real-time current and future health condition information to optimize maintenance resources, tasking, and planning in order to maximize system readiness. Over the past year Impact Technologies, under contract to NAVSEA, has been developing technologies that will provide the necessary decision support tools to address this dynamic maintenance environment. The software scheduling tool utilizes an Open Systems Architecture for Condition-Based Maintenance (OSA-CBM) architecture to facilitate implementation into new or legacy systems. The tool employs a generic maintenance model that accounts for equipment reliability attributes, maintenance task material and labor requirements, system dependencies, and subsystem relationships. The focus of the development has been on Naval ship maintenance, but the model inputs can be adapted to a variety of applications including power generators, aircraft, ships, and production facilities. The core of the decision support tool is a multi-sweep optimization algorithm that is tuned to the maintenance scheduling problem. The algorithm has been designed to achieve the best computational speed. Benefits and risks of maintenance decisions have been quantified in terms of risk, which can be defined in terms of readiness or finances. The probability and consequence of each system failure are considered in light of the complex system interdependencies, such as dependent and redundant systems, to achieve the best overall system readiness. Novel post-processing steps identify the active solution constraints, further enhancing the user's ability to understand the issues that affect system availability.
APA, Harvard, Vancouver, ISO, and other styles
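The risk framing in the abstract above, where probability and consequence of failure drive scheduling decisions under labor constraints, can be illustrated with a minimal sketch. The scoring, field names and figures below are assumptions for this illustration, not the tool's actual model or its multi-sweep optimization algorithm.

    # Simple risk-ranked maintenance scheduling sketch (illustrative only).
    def risk(task):
        """Expected readiness (or financial) impact of deferring the task."""
        return task["failure_probability"] * task["consequence"]

    def rank_tasks(tasks, available_labor_hours):
        """Greedily pick the highest-risk tasks that fit the labor budget."""
        scheduled, hours = [], 0.0
        for task in sorted(tasks, key=risk, reverse=True):
            if hours + task["labor_hours"] <= available_labor_hours:
                scheduled.append(task["name"])
                hours += task["labor_hours"]
        return scheduled

    tasks = [
        {"name": "pump seal",   "failure_probability": 0.30, "consequence": 8.0, "labor_hours": 4},
        {"name": "valve check", "failure_probability": 0.05, "consequence": 2.0, "labor_hours": 1},
        {"name": "bearing",     "failure_probability": 0.20, "consequence": 9.0, "labor_hours": 6},
    ]
    print(rank_tasks(tasks, available_labor_hours=8))   # -> ['pump seal', 'valve check']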
2

Kohutka, Lukas, and Viera Stopjakova. "ASIC Architecture and Implementation of RED Scheduler for Mixed-Criticality Real-Time Systems." In 2020 27th International Conference on Mixed Design of Integrated Circuits and System (MIXDES). IEEE, 2020. http://dx.doi.org/10.23919/mixdes49814.2020.9156070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kohutka, Lukas. "A New FPGA - based Architecture of Task Scheduler with Support of Periodic Real-Time Tasks." In 2022 29th International Conference on Mixed Design of Integrated Circuits and System (MIXDES). IEEE, 2022. http://dx.doi.org/10.23919/mixdes55591.2022.9838055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Carlos Junior, Francisco, Ivan Silva, and Ricardo Jacobi. "A Partially Shared Thin Reconfigurable Array For Multicore Processor." In IX Simpósio Brasileiro de Engenharia de Sistemas Computacionais. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sbesc_estendido.2019.8645.

Full text
Abstract:
Reconfigurable architectures have been widely used as single-core processor accelerators. In the multi-core era, however, it is necessary to review the way that reconfigurable arrays are integrated into multi-core processors. Generally, a set of reconfigurable functional units is employed in a similar way as in single-core processors. Unfortunately, a considerable increase in area ensues from this practice. Besides, in applications with an unbalanced workload across their threads, this approach can lead to inefficient use of the reconfigurable architecture in cores with a low or even idle workload. To cope with this issue, this work proposes and evaluates a partially shared thin reconfigurable array, which allows reconfigurable resources to be shared among the processor's cores. Sharing is performed dynamically by the configuration scheduler hardware. The results show that the sharing mechanism provided 76% energy savings, improving performance by 41% on average when compared with a version without the proposed reconfigurable array. A comparison with a version of the reconfigurable array without the sharing mechanism was also performed and shows that the sharing mechanism improved system performance by up to 11.16%.
APA, Harvard, Vancouver, ISO, and other styles
5

Laliberty, Thomas J., David W. Hildum, Norman M. Sadeh, John McA’Nulty, Dag Kjenstad, Robert V. E. Bryant, and Stephen F. Smith. "A Blackboard Architecture for Integrated Process Planning/Production Scheduling." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/dfm-1291.

Full text
Abstract:
As companies increase the level of customization in their products, move towards smaller lot production and experiment with more flexible customer/supplier arrangements such as those made possible by Electronic Data Interchange (EDI), they increasingly require the ability to quickly, accurately and competitively respond to customer requests for bids on new products and efficiently work out supplier/subcontractor arrangements for these new products. This in turn requires the ability to rapidly convert standards-based product specifications into process plans and quickly integrate process plans for new orders into the existing production schedule to best accommodate the current state of the manufacturing enterprise. This paper describes IP3S, an Integrated Process Planning/Production Scheduling (IP3S) Shell for Agile Manufacturing. The IP3S Shell is designed around a blackboard architecture that emphasizes (1) concurrent development and dynamic revision of integrated process planning/production scheduling solutions, (2) the use of a common representation for exchanging process planning and production scheduling information, (3) coordination with outside information sources such as customer and supplier sites, (4) mixed-initiative decision support, enabling the user to interactively explore a number of tradeoffs, and (5) portability and ease of integration with legacy systems. The system is scheduled for initial evaluation in a large and highly dynamic machine shop at Raytheon's Andover manufacturing facility.
APA, Harvard, Vancouver, ISO, and other styles
6

Yuan, Chengyin, and Placid Ferreira. "An Integrated Environment for the Design and Control of Deadlock-Free Flexible Manufacturing Cells." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-61213.

Full text
Abstract:
At the enterprise level, manufacturing organizations are faced with accelerating technological cycles, global competition and an increasingly mobile work force. The flexibility of the enterprise and its ability to respond to various customer demands governs the competitiveness of the enterprise with respect to the changes in its market and in the society in which it operates. It has been recognized for many years that flexibility on the enterprise floor can always be achieved if the resulting cost of product and process changeovers and of its operations is not considered. However, with the increasing competitive pressures on today's manufacturing enterprise, a flexible manufacturing environment must be achieved at relatively low cost and high work-force productivity while maintaining a competitive advantage. To accomplish this goal, the manufacturing enterprise must be able to be reconfigured and verified with an increased level of automation that is scalable and flexible enough to meet diverse product demands quickly and economically. In this paper, we introduce recent research work on developing an integrated rapid prototyping environment, EMBench [22, 23], which provides design, control configuration, simulation and deployment services for flexible manufacturing systems. This rapid prototyping environment has its own user-friendly GUI (Graphical User Interface) that allows the user to issue various commands to the controller at different layers, from the simple joint servo to the complex manufacturing cell. In this paper, we also propose an implementation diagram for the controller of manufacturing cells that consists of a scheduler, a dispatcher, a real-time database and a structural control policy. All these internal components are responsible for storing the system configuration, optimizing the processing plan, releasing appropriate commands, etc. We also present the idea of a cell model and explore its characteristics and behaviors, as well as the resource and workstation models. All of the above modules and the architecture are developed using IEC-61499 function blocks that support scalable expansion and modular design. To demonstrate our theoretical achievements, we have developed various IEC-61499 function blocks to integrate various resources on the enterprise shop floor and achieve flexibility at a low cost. This software environment facilitates a modular, component-based mechanical and control design, simulation and prototyping tool for shop floor control.
APA, Harvard, Vancouver, ISO, and other styles
7

Fessy, Antoine, Sivananthan Jothee, Sebastien Jacquemin, Jonathan Sammon, Carolina Cruz, Igor Ferreira, Jill Bell, Hariz Akmal Hosen, and Amirul Asraf Askat. "Leveraging an Integrated Execution Model, Digital FEED Platform and Product Standardisation to Improve Project CAPEX." In Offshore Technology Conference Asia. OTC, 2022. http://dx.doi.org/10.4043/31583-ms.

Full text
Abstract:
This paper illustrates how a typical subsea development (Subsea Production Systems (SPS) and Subsea Umbilicals Risers Flowlines (SURF)) can benefit from an integrated execution model which will significantly improve CAPEX and time to first oil and reduce delivery risk. The PETRONAS Limbayong Deepwater Development offshore Sabah, Malaysia is a successful example of close collaboration between a contractor and operator to leverage integrated contracting models and extended service scope, while maximizing Malaysian participation. Digital platforms for Front End Engineering and Design (FEED) and Configure to Order (CTO) product designs were utilized in combination to assess and establish the optimal field architecture for improved cost and schedule. Adopting an integrated one-stop contract approach (SURF, SPS and Subsea Services) enabled an improved development schedule and a reduction in the cost and risk normally associated with split-contract interfaces. Digitalization of FEEDs and standardization of product configurations created value for the Limbayong field development, accelerating time to First Oil Date (FOD) as well as securing aggressive long-lead-item delivery schedules. The combination of the methods described above provides the required enhancement to a traditional execution approach that is ill-suited to current oil and gas economics. This approach is instrumental in making many subsea developments feasible and a preface for accelerated future collaborations.
APA, Harvard, Vancouver, ISO, and other styles
8

Lima, Clecio D., Kentaro Sano, Hiroaki Kobayashi, Tadao Nakamura, and Michael J. Flynn. "A Technology-Scalable Multithreaded Architecture." In Simpósio de Arquitetura de Computadores e Processamento de Alto Desempenho. Sociedade Brasileira de Computação, 2001. http://dx.doi.org/10.5753/sbac-pad.2001.22197.

Full text
Abstract:
Advances in integrated circuit technology have offered increasing transistor density with continuous performance improvement. Soon, it will be possible to integrate a billion transistors on a chip running at very high speeds. At this level of integration, however, physical constraints related to wire delay will become dominant, requiring microprocessors to be more partitioned and to use short wires for on-chip communication. On the other hand, effective parallel processing that takes advantage of the large number of transistors will be challenging. In this research, we propose the Shift Architecture, a multithreaded paradigm that maps statically scheduled threads onto multiple functional units. Communication is based on shift register files and restricted to contiguous functional units, requiring reduced wire lengths. Threads are dynamically interleaved on a cycle-by-cycle basis to maintain high processor utilization. We describe the basic concepts of our approach. A preliminary evaluation shows that this architecture has the potential for achieving high instruction throughput for multithreaded benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Yuan, Manuj Dhingra, and J. V. R. Prasad. "Benefits of Active Compressor Stability Management on Turbofan Engine Operability." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-51307.

Full text
Abstract:
Active compressor stability management can play a significant role towards the intelligent control of gas turbine engines. The present work utilizes a computer simulation to illustrate the potential operability benefits of compressor stability management when actively controlling a turbofan engine. The simulation, called the Modular Aero-Propulsion System Simulation (MAPSS) and developed at NASA Glenn, models the actuation, sensor, controller, and engine dynamics of a twin-spool, low-bypass turbofan engine. The stability management system is built around a previously developed stability measure called the correlation measure. The correlation measure quantifies the repeatability of the pressure signature of a compressor rotor. Earlier work has used laboratory compressor and engine rig data to develop a relationship between a compressor’s stability boundary and its correlation measure. Specifically, correlation measure threshold crossing events increase in magnitude and number as the compressor approaches the limit of stable operation. To simulate the experimentally observed behavior of these events, a stochastic model based on level-crossings of an exponentially distributed pseudo-random process has been implemented in the MAPSS environment. Three different methods of integrating active stability management within the existing engine control architecture have been explored. The results show that significant improvements in the engine emergency response can be obtained while maintaining instability-free compressor operation via any of the methods studied. Two of the active control schemes investigated utilize existing scheduler and controller parameters and require minimal additional control logic for implementation. The third method, while introducing additional logic, emphasizes the need as well as benefits of a more integrated stability management system.
APA, Harvard, Vancouver, ISO, and other styles
10

Stough, John, Tom DuBois, Leslie Hyatt, Alan Hammond, and Chris Kellow. "A Holistic Approach to Open Systems Architecture for Army Aviation." In Vertical Flight Society 75th Annual Forum & Technology Display. The Vertical Flight Society, 2019. http://dx.doi.org/10.4050/f-0075-2019-14551.

Full text
Abstract:
The Army is pivoting to meet the challenges of a rapidly evolving threat environment in an increasingly complex world; this requires an agile and adaptive capability, leveraging competition, while operating within the constraints of current budget cycles. A cross-cutting architectural approach provides an opportunity for the Army to maintain capability overmatch. Recent changes in acquisition law and the Army modernization strategy bring particularly strong emphasis on adoption of the Modular Open Systems Approach (MOSA) and Open Systems Architecture (OSA). Many current programs within Army Aviation rely on a best-effort approach ("Do MOSA") to deliver systems. Current programs measure success on the cost, performance, and schedule of the individual program, with little historical institutional support for aligning efforts across a larger "whole-system" context, such as a Combat Aviation Brigade (CAB). Specific programs, including the Utility Helicopter Program Office (UHPO) UH-60V and Crew Mission Station (CMS), as well as multiple Science & Technology (S&T) programs supporting Future Vertical Lift (FVL), such as the Mission Systems Architecture Demonstration (MSAD), exhibit aspects of the architectural momentum in the Army enterprise. The recommendation of our team is to develop an Army Aviation Enterprise Architecture Strategy that will provide the detail necessary to develop individual product lines while bringing synergy to architecture-related efforts in a holistic approach that maintains focus on Army Aviation as a whole force, in support of and integrated with the ground commander.
APA, Harvard, Vancouver, ISO, and other styles