Journal articles on the topic 'Heterogeneous operating environments'

Consult the top 50 journal articles for your research on the topic 'Heterogeneous operating environments.'

1

Liu, Yan Fang, Ming Chong Mao, Xiang Yang Xu, and Gang Shi. "Multi-Physics Coupled Thermo-Mechanics Analysis of a Hydraulic Solenoid Valve." Applied Mechanics and Materials 321-324 (June 2013): 102–7. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.102.

Abstract:
A solenoid valve is a complex heterogeneous system involving multi-physics coupling of mechanics, electronics, magnetics, thermotics, etc., whose reliability and life depend largely on the heat generated during operation. A multi-physics coupled thermo-mechanics model of a hydraulic proportional solenoid valve used in an automatic transmission was built with the finite element method (FEM), and the temperature and thermal deformation of the solenoid valve at different currents under two operating environments were analyzed. The calculated results show that the operating environment and current are important factors leading to thermal failure of solenoid valves. Because the model considers the multi-physics coupling control characteristics of mechanics, electronics, magnetics, thermotics, etc., it has high accuracy and can be used for the reliability design of solenoid valves.
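The abstract's central claim, that ambient environment and coil current jointly drive heating, can be illustrated with a deliberately simplified lumped thermal model. The paper itself uses a coupled FEM model; the resistance and thermal-resistance figures below are invented illustration values, not data from the study:

```python
def coil_temperature_rise(current_a, resistance_ohm, theta_ja, ambient_c):
    """Steady-state lumped estimate: Joule power I^2 * R flowing through
    a coil-to-ambient thermal resistance gives the coil temperature."""
    return ambient_c + current_a ** 2 * resistance_ohm * theta_ja

# same coil and current, two operating environments (hypothetical values)
t_bench   = coil_temperature_rise(1.5, 8.0, 12.0, ambient_c=25.0)    # lab bench
t_gearbox = coil_temperature_rise(1.5, 8.0, 12.0, ambient_c=110.0)   # hot transmission
```

Even this toy model shows the abstract's point: the same 18 W of Joule heating yields a far higher coil temperature in the hot transmission environment than on the bench.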
2

Sztrik, J., and O. Moeller. "Simulation of machine interference in randomly changing environments." Yugoslav Journal of Operations Research 12, no. 2 (2002): 237–46. http://dx.doi.org/10.2298/yjor0202237s.

Abstract:
The simulation tool lcpSim can be used to investigate special level-crossing problems of queueing systems of type HYPO_k/HYPO_r/1//n embedded in different Markovian environments (recently referred to as Markov-modulated ones). The observed system consists of n heterogeneous machines (requests) and a server that 'repairs' the broken machines according to the most commonly used service disciplines, such as FIFO, LIFO, PPS, HOL, Preemptive Priorities (Resume, Repeat), Transfer, Polling, and Nearest. We specify a maximum number of stopped machines for an operating system, and our aim is to give the main steady-state performance measures of the system, such as server utilization, machine utilization, mean waiting times, mean response times, the probability of an operating system, and the mean operating time of the system. These values can be calculated by the lcpSim (level-crossing problem Simulation) package for different random environment types and service disciplines.
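As a minimal illustration of the machine-interference (machine repairman) model this tool simulates, stripped of the Markov-modulated environment and the hyper-exponential (HYPO) distributions, the sketch below uses plain exponential failure/repair times, heterogeneous per-machine rates, and a single FIFO server. It is a generic textbook simulation, not lcpSim's implementation:

```python
import random

def repairman_utilization(fail_rates, repair_rates, horizon=10_000.0, seed=1):
    """Discrete-event simulation of n heterogeneous machines with
    exponential failure/repair times and one FIFO repair server.
    Returns the fraction of time the server is busy."""
    rng = random.Random(seed)
    n = len(fail_rates)
    t = 0.0
    up = [True] * n
    queue = []                    # FIFO queue of broken machine indices
    in_repair = None              # (completion_time, machine) or None
    busy = 0.0
    next_fail = [rng.expovariate(fail_rates[i]) for i in range(n)]
    while t < horizon:
        events = [(next_fail[i], "fail", i) for i in range(n) if up[i]]
        if in_repair is not None:
            events.append((in_repair[0], "done", in_repair[1]))
        when, kind, i = min(events)
        if in_repair is not None:         # server busy until the next event
            busy += min(when, horizon) - t
        t = when
        if t >= horizon:
            break
        if kind == "fail":
            up[i] = False
            queue.append(i)
        else:                             # repair finished
            up[i] = True
            next_fail[i] = t + rng.expovariate(fail_rates[i])
            in_repair = None
        if in_repair is None and queue:   # start the next repair immediately
            j = queue.pop(0)
            in_repair = (t + rng.expovariate(repair_rates[j]), j)
    return busy / horizon

# a slow repairman saturates; a fast one stays mostly idle
u_slow = repairman_utilization([1.0, 1.0, 1.0], [0.5, 0.5, 0.5])
u_fast = repairman_utilization([1.0, 1.0, 1.0], [10.0, 10.0, 10.0])
```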
3

Bricken, William, and Geoffrey Coco. "The VEOS Project." Presence: Teleoperators and Virtual Environments 3, no. 2 (January 1994): 111–29. http://dx.doi.org/10.1162/pres.1994.3.2.111.

Abstract:
The Virtual Environment Operating Shell (VEOS) was developed at the University of Washington's Human Interface Technology Laboratory as software infrastructure for the lab's research in virtual environments. VEOS was designed from scratch to provide a comprehensive and unified management facility to support generation of, interaction with, and maintenance of virtual environments. VEOS emphasizes rapid prototyping, heterogeneous distributed computing, and portability. We discuss the design, philosophy, and implementation of VEOS in depth. Within the Kernel, the shared database transformations are pattern-directed, communications are asynchronous, and the programmer's interface is LISP. An entity-based metaphor extends object-oriented programming to systems-oriented programming. Entities provide first-class environments and biological programming constructs such as perceive, react, and persist. The organization, structure, and programming of entities are discussed in detail. The article concludes with a description of the applications that have contributed to the iterative refinement of the VEOS software.
4

Snowdon, David N., and Adrian J. West. "AVIARY: Design Issues for Future Large-Scale Virtual Environments." Presence: Teleoperators and Virtual Environments 3, no. 4 (January 1994): 288–308. http://dx.doi.org/10.1162/pres.1994.3.4.288.

Abstract:
VR is already evolving away from single user small-scale demonstrators, and inexorably toward sophisticated environments in which many geographically distributed users can perform a diverse range of activities. There will therefore be a pressure to make such environments increasingly general purpose and dynamic in their support of applications, paralleling perhaps the historical evolution of conventional operating systems. It is from speculations about the nature of such a future large-scale VR system that the AVIARY project has developed. AVIARY provides multiple worlds, each with its own set of laws, that may be tailored to suit particular application domains. The overall structure enables a coherent relationship between worlds to be maintained, which is important both for purposes of code reuse, and to aid users in navigating the system. A prototype implementation exists that addresses underlying implementation issues in the AVIARY model, and, in particular, distribution across heterogeneous processor networks, dynamic management of objects and message types within the system, the separation of graphics processing, and the management of spatial extent. Implementations of the prototype have been tested on a Transputer array, and a heterogeneous network of Sun and Silicon Graphics workstations. The system is currently being ported to a 2.4-Gflop KSR-1 parallel supercomputer. This paper reviews approaches to distributed, multi-application VR systems, presents pertinent elements of the AVIARY design, and describes the prototype implementation with particular attention given to the issues of distribution.
5

Shmid, Alexander Viktorovich. "Practice and Prospects for Using the Emulator Family of IBM Mainframe Architecture." Proceedings of the Institute for System Programming of the RAS 32, no. 5 (2020): 57–66. http://dx.doi.org/10.15514/ispras-2020-32(5)-4.

Abstract:
This article describes a family of emulators for IBM mainframe architectures: their development history, functional features and capabilities, and many years of experience (since 1994) in developing emulators and putting them to use. The first task, relatively simple by modern standards, was to create a virtual machine in the VSE/ESA operating system for transferring legacy platform-dependent applications to that target environment. The problem was solved first for ES EVM computers in Russia, and then for the IBM 9221 in Germany and other Western countries. Transfers were later made to the OS/390 environment and to IBM AIX, quite modern at that time. Virtual execution of any existing IBM mainframe operating system was eventually provided in the main server OS environments: Linux, Windows, AIX, z/OS, and zLinux. A solution was also developed for combining any types of the resulting virtual computing nodes into heterogeneous, geographically distributed computing networks that provide, in particular, multiple mutual redundancy of nodes in the network.
6

McGarvey, Ronald G., Andreas Thorsen, Maggie L. Thorsen, and Rohith Madhi Reddy. "Measuring efficiency of community health centers: a multi-model approach considering quality of care and heterogeneous operating environments." Health Care Management Science 22, no. 3 (August 26, 2018): 489–511. http://dx.doi.org/10.1007/s10729-018-9455-5.

7

Pugliese, Roberto, Luca Gregoratti, Renata Krempska, Fulvio Billè, Juray Krempasky, Marino Marsi, and Alessandro Abrami. "A novel approach to the control of experimental environments: the ESCA microscopy data-acquisition system at ELETTRA." Journal of Synchrotron Radiation 5, no. 3 (May 1, 1998): 587–89. http://dx.doi.org/10.1107/s0909049597014374.

Abstract:
An efficient control system is today one of the key points for the successful operation of a beamline at third-generation synchrotron radiation sources. The high cost of these ultra-bright light sources and the limited beam time require effective instrument handling in order to reduce any waste of measurement time. The basic requirements for such control software are reliability, user-friendliness, modularity, and upgradability, as well as the capability of integrating a horde of different instruments, commercial tools, and independent pre-existing systems in a possibly distributed environment. A novel approach has been adopted to implement the data-acquisition system of the ESCA microscopy beamline at ELETTRA. The system is based on YASB, a software bus, i.e. an underlying control model to coordinate information exchanges and networking software to implement that model. This 'middleware' allows the developer to model applications as a set of interacting agents, i.e. independent software machines. Agents can be implemented using different programming languages and executed on heterogeneous operating environments, which promotes an effective collaboration between software engineers and experimental physicists.
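The agent/bus idea is easiest to see in miniature. The sketch below is a generic publish/subscribe bus in the spirit described; it is not YASB's actual API, and the class, topic, and handler names are invented for illustration:

```python
from collections import defaultdict

class SoftwareBus:
    """Toy message bus: agents register handlers for named topics and
    exchange messages without knowing how their peers are implemented
    or what language/host they run on."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def post(self, topic, payload):
        # deliver to every subscribed agent and collect their replies
        return [handler(payload) for handler in self._handlers[topic]]

bus = SoftwareBus()
# a 'detector' agent and a 'logger' agent coordinate only via the bus
bus.subscribe("acquire", lambda region: {"region": region, "counts": 1024})
bus.subscribe("acquire", lambda region: f"logged scan of {region}")
replies = bus.post("acquire", "sample-A")
```

The decoupling is the point: each handler could equally be a proxy for a process written in another language on another machine, which is what makes the pattern attractive in heterogeneous operating environments.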
8

Murphy, John, and Jane Grimson. "Cooperative Information Systems: Interoperability in Health Care Legacy Applications." International Journal of Cooperative Information Systems 7, no. 1 (March 1998): 1–17. http://dx.doi.org/10.1142/s0218843098000027.

Abstract:
We believe that the typical hospital computing environment is an especially fruitful domain for investigating the problems of interoperability and cooperation, because hospital computing environments consist of a heterogeneous collection of autonomous information systems. These systems range from centralised hospital-wide systems, such as patient administration systems, to departmental systems such as pharmacy stock control, laboratory information systems, accident and emergency systems, and so on. Many of these are legacy systems which have been operating for many years and are difficult to integrate and virtually impossible to rewrite. In this article we discuss and assess the work of the Jupiter Project and its successor LIOM (Legacy system Interoperability using Object-oriented Methods).
9

Ahmed, Azza E., Phelelani T. Mpangase, Sumir Panji, Shakuntala Baichoo, Yassine Souilmi, Faisal M. Fadlelmola, Mustafa Alghali, et al. "Organizing and running bioinformatics hackathons within Africa: The H3ABioNet cloud computing experience." AAS Open Research 1 (August 7, 2019): 9. http://dx.doi.org/10.12688/aasopenres.12847.2.

Abstract:
The need for portable and reproducible genomics analysis pipelines is growing globally as well as in Africa, especially with the growth of collaborative projects like the Human Health and Heredity in Africa Consortium (H3Africa). The Pan-African H3Africa Bioinformatics Network (H3ABioNet) recognized the need for portable, reproducible pipelines adapted to heterogeneous computing environments, and for the nurturing of technical expertise in workflow languages and containerization technologies. Building on the network’s Standard Operating Procedures (SOPs) for common genomic analyses, H3ABioNet arranged its first Cloud Computing and Reproducible Workflows Hackathon in 2016, with the purpose of translating those SOPs into analysis pipelines able to run on heterogeneous computing environments and meeting the needs of H3Africa research projects. This paper describes the preparations for this hackathon and reflects upon the lessons learned about its impact on building the technical and scientific expertise of African researchers. The workflows developed were made publicly available in GitHub repositories and deposited as container images on Quay.io.
10

Palacios, Filiberto Muñoz, Eduardo Steed Espinoza Quesada, Guillaume Sanahuja, Sergio Salazar, Octavio Garcia Salazar, and Luis Rodolfo Garcia Carrillo. "Test bed for applications of heterogeneous unmanned vehicles." International Journal of Advanced Robotic Systems 14, no. 1 (January 1, 2017): 172988141668711. http://dx.doi.org/10.1177/1729881416687111.

Abstract:
This article addresses the development and implementation of a test bed for applications of heterogeneous unmanned vehicle systems. The test bed consists of unmanned aerial vehicles (Parrot AR.Drone versions 1 and 2 and Bebop Drones 1.0 and 2.0, Parrot SA, Paris, France), ground vehicles (WowWee Rovio, WowWee Group Limited, Hong Kong, China), and the VICON and OptiTrack motion capture systems. The test bed allows the user to choose between two different development environments for aerial and ground vehicle applications. On the one hand, it is possible to select an environment based on the VICON system and LabVIEW (National Instruments) or Robot Operating System platforms, which make use of the Parrot AR.Drone software development kit or the Bebop_autonomy driver to communicate with the unmanned vehicles. On the other hand, it is possible to employ a platform that uses the OptiTrack system and allows users to develop their own applications, replacing the AR.Drone's original firmware with custom code. We have developed four experimental setups to illustrate the use of the Parrot software development kit, the Bebop driver (AutonomyLab, Simon Fraser University, British Columbia, Canada), and the firmware replacement for performing a strategy that involves both ground and aerial vehicle tracking. Finally, to illustrate the effectiveness of the developed test bed for the implementation of advanced controllers, we present experimental results for three consensus algorithms (static, adaptive, and neural network) used to make a team of multi-agent systems move together to track a target.
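Of the three controllers mentioned, the static consensus protocol is the simplest to state: each agent repeatedly nudges its state toward its network neighbours' states until all agree. The sketch below is the generic textbook form, not the authors' implementation; the gain and topology are illustrative:

```python
def consensus_step(x, adjacency, eps=0.2):
    """One synchronous round of the static consensus protocol:
    x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)."""
    n = len(x)
    return [x[i] + eps * sum(adjacency[i][j] * (x[j] - x[i]) for j in range(n))
            for i in range(n)]

# four agents on a ring; the protocol drives every state to the average (3.0)
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
x = [0.0, 2.0, 4.0, 6.0]
for _ in range(50):
    x = consensus_step(x, ring)
```

Convergence requires the gain to be small relative to the graph Laplacian's largest eigenvalue (here eps = 0.2 < 2/4 for the four-cycle); the adaptive and neural-network variants in the paper replace the fixed gain with learned terms.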
11

Meijide, Jessica, Patrick S. M. Dunlop, Marta Pazos, and María Angeles Sanromán. "Heterogeneous Electro-Fenton as “Green” Technology for Pharmaceutical Removal: A Review." Catalysts 11, no. 1 (January 9, 2021): 85. http://dx.doi.org/10.3390/catal11010085.

Abstract:
The presence of pharmaceutical products in the water cycle may cause harmful effects such as morphological, metabolic and sex alterations in aquatic organisms and the selection/development of organisms resistant to antimicrobial agents. The compounds' stability and persistent character hinder their elimination by conventional physico-chemical and biological treatments, and thus the development of new water purification technologies has drawn great attention from academic and industrial researchers. Recently, the electro-Fenton process has been demonstrated to be a viable alternative for the removal of these hazardous, recalcitrant compounds. This process occurs under the action of a suitable catalyst, with the majority of current scientific research focused on heterogeneous systems. A significant area of research centres on the development of an appropriate catalyst able to overcome the operating limitations associated with the homogeneous process, namely the short service life and the difficulty of separating/recovering the catalyst from polluted water. This review highlights current trends in the use of different materials as electro-Fenton catalysts for pharmaceutical compound removal from aquatic environments. The main challenges facing these technologies revolve around the enhancement of performance, stability for long-term use, life-cycle analysis considerations and cost-effectiveness. Although treatment efficiency has improved significantly, ongoing research efforts need to deliver economic viability at a larger scale due to the high operating costs, primarily related to energy consumption.
13

Liu, Fangzi, Zihong Li, Hua Xie, Lei Yang, and Minghua Hu. "Predicting Fuel Consumption Reduction Potentials Based on 4D Trajectory Optimization with Heterogeneous Constraints." Sustainability 13, no. 13 (June 23, 2021): 7043. http://dx.doi.org/10.3390/su13137043.

Abstract:
Investigating potential ways to improve fuel efficiency of aircraft operations is crucial for the development of the global air traffic management (ATM) performance target. The implementation of trajectory-based operations (TBOs) will play a major role in enhancing the predictability of air traffic and flight efficiency. TBO also provides new means for aircraft to save energy and reduce emissions. By comprehensively considering aircraft dynamics, available route limitations, sector capacity constraints, and air traffic control restrictions on altitude and speed, a “runway-to-runway” four-dimensional trajectory multi-objective planning method under loose-to-tight heterogeneous constraints is proposed in this paper. Taking the Shanghai–Beijing city pair as an example, the upper bounds of the Pareto front describing potential fuel consumption reduction under the influence of flight time were determined under different airspace rigidities, such as different ideal and realistic operating environments, as well as fixed and optional routes. In the congestion-free scenario with fixed route, the upper bounds on fuel consumption reduction range from 3.36% to 13.38% under different benchmarks. In the capacity-constrained scenario, the trade-off solutions of trajectory optimization are compressed due to limited available entry time slots of congested sectors. The results show that more flexible route options improve fuel-saving potentials up to 8.99%. In addition, the sensitivity analysis further illustrated the pattern of how optimal solutions evolved with congested locations and severity. The outcome of this paper would provide a preliminary framework for predicting and evaluating fuel efficiency improvement potentials in TBOs, which is meaningful for setting performance targets of green ATM systems.
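The "Pareto front" in this abstract is the set of time/fuel trade-offs not dominated by any other candidate trajectory. A generic non-dominated filter over (flight time, fuel burn) pairs, with made-up numbers and both objectives minimized, can be sketched as:

```python
def pareto_front(points):
    """Keep the non-dominated (flight_time, fuel) pairs: a point is
    dropped if some other, different point is no worse in both
    objectives (both objectives are minimized)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# illustrative trajectory candidates: (flight time in min, fuel burn in t)
candidates = [(100, 9.0), (110, 7.5), (120, 7.0), (115, 8.0), (105, 9.5)]
front = pareto_front(candidates)
```

Here (115, 8.0) and (105, 9.5) are dominated and drop out; the surviving points trace the trade-off curve whose upper bounds the paper analyses under different airspace constraints.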
14

Irawan, Addie, Mohammad Fadhil Abas, and Nurulfadzilah Hasan. "Robot Local Network Using TQS Protocol for Land-to-Underwater Communications." Journal of Telecommunications and Information Technology 1 (March 29, 2019): 23–30. http://dx.doi.org/10.26636/jtit.2019.125818.

Abstract:
This paper presents a model and an analysis of the Tag QoS switching (TQS) protocol proposed for heterogeneous robots operating in different environments. Collaborative control is a topic that is widely discussed in multi-robot task allocation (MRTA), an area which includes establishing network communication between each of the connected robots. Therefore, this research focuses on classifying, prioritizing, and analyzing the performance of the robot local network (RLN) model, which comprises a point-to-point topology network between robot peers (nodes) in the air, on land, and under water. The proposed TQS protocol was inspired by multiprotocol label switching (MPLS), achieving quality of service (QoS) through swapping and labeling operations applied to the data packet header. The OMNeT++ discrete event simulator was used to analyze the percentage of losses, average access delay, and throughput of the transmitted data in different classes of service (CoS) on a line of transmission between underwater and land environments. The results show that the lowest-priority data transmission performed worst, with low bitrates and extremely high packet loss rates when network traffic was busy. On the other hand, simulation results for the highest-CoS data forwarding show that its performance was not affected by the different data transmission rates characterizing different mediums and environments.
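MPLS-style label swapping, which TQS borrows, reduces forwarding to a table lookup keyed on the incoming interface and label. The toy step below illustrates the mechanism only; the table contents and field names are invented, not taken from the TQS specification:

```python
# label-forwarding table of one switch: (in_port, in_label) -> (out_port, out_label)
TABLE = {
    (1, 20): (3, 31),
    (3, 31): (2, 40),
}

def forward(packet, in_port):
    """Swap the packet's label and pick the output port, MPLS-style:
    forwarding is a constant-time table lookup, and the label can also
    encode the class of service used to prioritize the packet."""
    out_port, out_label = TABLE[(in_port, packet["label"])]
    return out_port, dict(packet, label=out_label)

out_port, pkt = forward({"label": 20, "payload": "telemetry"}, in_port=1)
```

Because only the small label (not the full header) is inspected at each hop, per-hop classification stays cheap even when the path crosses very different transmission mediums.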
15

Richmond, Craig M., Clemens Kielhauser, and Bryan T. Adey. "Performance measures for road managers facing diverse environments." Benchmarking: An International Journal 23, no. 7 (October 3, 2016): 1876–91. http://dx.doi.org/10.1108/bij-01-2015-0005.

Abstract:
Purpose: A key difficulty that plagues benchmarking in the public sector is heterogeneity in the production process. The purpose of this paper is to present a strategy for overcoming that difficulty using physical production models, and to demonstrate it using road renewal management as an example.
Design/methodology/approach: A physical production model is used to link required prices, inputs, and exposure to environmental factors to the desired services to be delivered. From this, a measure is derived that adjusts for the additional costs expected when operating in a more difficult environment. A case study presents methods for addressing specific parameterization issues that arise in an empirical application.
Findings: The method was found to be implementable and empirically better than the naive ratio measures commonly found in practice.
Research limitations/implications: Data and modeling issues were identified that, if addressed by public supervisors, are expected to greatly improve the quality of the measures.
Social implications: According to the raw data and simple ratios, a very large degree of inefficiency can potentially be eliminated by applying the recommended measures. In all likelihood the real potential is much smaller, but still significant.
Originality/value: Most applied benchmarking exercises use simple ratios as KPIs, which are easily dismissed where environments are heterogeneous. Data envelopment analysis and stochastic frontier analysis are generally difficult to relate to KPIs. The use of an explicit and specific process model with engineering content is therefore exceptional.
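The core adjustment idea can be shown in a few lines: instead of a naive cost-per-output ratio, actual cost is compared against the cost a production model predicts for that manager's operating environment. The linear cost model, surcharge factors, and numbers below are purely illustrative, not the paper's calibrated model:

```python
def expected_cost(km_renewed, env):
    """Illustrative physical production model: base cost per km plus
    surcharges for environmental exposure (terrain, winter severity)."""
    base_per_km = 50_000.0
    surcharges = {"terrain": 20_000.0, "winter": 10_000.0}
    per_km = base_per_km + sum(surcharges[k] * env[k] for k in surcharges)
    return km_renewed * per_km

def adjusted_ratio(actual_cost, km_renewed, env):
    """Values above 1 mean costlier than the model expects for this
    environment; below 1, cheaper than expected."""
    return actual_cost / expected_cost(km_renewed, env)

# same raw spending per km, very different environments
flat   = adjusted_ratio(6.0e5, 10, {"terrain": 0.0, "winter": 0.0})  # overspends
alpine = adjusted_ratio(6.0e5, 10, {"terrain": 1.0, "winter": 1.0})  # efficient
```

A naive ratio would score both managers identically; the environment-adjusted measure separates genuine inefficiency from an expensive operating environment.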
16

Shin, Changsun, and Meonghun Lee. "Swarm-Intelligence-Centric Routing Algorithm for Wireless Sensor Networks." Sensors 20, no. 18 (September 10, 2020): 5164. http://dx.doi.org/10.3390/s20185164.

Abstract:
Swarm intelligence (SI)-based bio-inspired algorithms exhibit features of heterogeneous individual agents, such as stability, scalability, and adaptability, in distributed and autonomous environments. Such an algorithm is applied here to the communication network environment to overcome the limitations of wireless sensor networks (WSNs). Herein, the swarm-intelligence-centric routing algorithm (SICROA) is presented for use in WSNs, aiming to leverage the advantages of the ant colony optimization (ACO) algorithm. The proposed routing protocol addresses the problems of the ad hoc on-demand distance vector (AODV) protocol and improves routing performance via collision avoidance, link-quality prediction, and maintenance methods. The proposed method improves network performance by replacing the periodic "Hello" message with an interrupt that facilitates the prediction and detection of link disconnections. Consequently, overall network performance can be further improved by prescribing appropriate procedures for processing each control message. The proposed SI-based approach thus provides an optimal solution to problems encountered in a complex environment, while operating in a distributed manner and adhering to simple rules of behavior.
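ACO-style routing, which SICROA builds on, picks the next hop stochastically: the probability of choosing neighbour j is proportional to its pheromone level raised to alpha times a local link-quality heuristic raised to beta. The sketch below is the generic ACO transition rule with illustrative values, not SICROA's actual parameters:

```python
def aco_next_hop_probs(pheromone, heuristic, alpha=1.0, beta=2.0):
    """ACO transition rule: neighbour j is chosen with probability
    proportional to tau_j**alpha * eta_j**beta, where tau is learned
    pheromone and eta is a local link-quality heuristic."""
    weights = {j: (pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in pheromone}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

# neighbour C has the best link quality, so it attracts the most traffic
probs = aco_next_hop_probs(pheromone={"A": 1.0, "B": 2.0, "C": 1.0},
                           heuristic={"A": 0.5, "B": 0.5, "C": 1.0})
```

Because the choice stays probabilistic, some traffic still explores the weaker links, which is how the colony keeps adapting when link quality changes.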
17

Werner, Stephan, Andreas Lauber, Martijn Koedam, Juergen Becker, Eric Sax, and Kees Goossens. "Cloud-based Design and Virtual Prototyping Environment for Embedded Systems." International Journal of Online Engineering (iJOE) 12, no. 09 (September 28, 2016): 52. http://dx.doi.org/10.3991/ijoe.v12i09.6142.

Abstract:
The design and test of multi-processor systems-on-chip (MPSoCs), and the development of distributed applications and/or operating systems executed on those hardware platforms, is one of the biggest challenges in today's system design. This applies in particular when short time-to-market constraints impose serious limitations on the exploration of the design space. The use of virtual platforms can help in decreasing the development and test cycles. In this paper, we present a cloud-based environment supporting the user in designing heterogeneous MPSoCs and developing distributed applications. The design environment generates virtual platforms automatically, allowing fast prototyping cycles, especially in the software development process, and exports the design to a hardware flow that synthesizes compatible FPGA designs. The extension of the peripheral models with debug information supports the developer during test and debug cycles and avoids the need to add special debug code to the application. This improves the readability, portability, and maintainability of the produced software. Additionally, this paper presents the benefits of using cloud-based design environments in engineers' training and education: the framework supports testing the system, including complex software stacks, with prerecorded data or testbenches.
18

Martín, Diego, Borja Bordel, and Ramón Alcarria. "Automatic Detection of Erratic Sensor Observations in Ami Platforms: A Statistical Approach †." Proceedings 31, no. 1 (November 20, 2019): 55. http://dx.doi.org/10.3390/proceedings2019031055.

Abstract:
This paper addresses the problem of data aggregation platforms operating in heterogeneous ambient intelligence (AmI) environments. In these platforms, device interoperability is a challenge and erratic sensor observations are difficult to detect. We propose ADES (Automatic Detection of Erratic Sensors), a statistical approach to detect erratic behavior in sensors and annotate those errors in a semantic platform. To do so, we propose three binary classification systems based on statistical tests for erratic observation detection, and we validate our approach by verifying whether ADES is able to classify sensors correctly by their observations. Results show that the first two classifiers (constant and random observations) had good accuracy rates and were able to classify most of the samples. In addition, all of the classifiers obtained a very low false positive rate.
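The paper's classifiers are statistical tests over a window of observations. The two checks below, a zero-spread test for "constant" sensors and a z-score test for outlier spikes, are a generic illustration of the idea with invented thresholds, not ADES's actual tests:

```python
from statistics import mean, pstdev

def classify_sensor(obs, flat_tol=1e-9, z_limit=4.0):
    """Two simple binary checks: a 'constant' sensor has (near) zero
    spread over the window, and a reading far outside mean +/- z*sigma
    of its preceding history is flagged as erratic."""
    if pstdev(obs) <= flat_tol:
        return "constant"
    mu, sigma = mean(obs[:-1]), pstdev(obs[:-1])
    if sigma > 0 and abs(obs[-1] - mu) > z_limit * sigma:
        return "erratic"
    return "normal"
```

A platform can run such checks on each incoming window and write the verdict back as a semantic annotation on the sensor, which is the aggregation-side behaviour the paper targets.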
19

Abudu, Prince M. "CommNets: Communicating Neural Network Architectures for Resource Constrained Systems." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9909–10. http://dx.doi.org/10.1609/aaai.v33i01.33019909.

Abstract:
Applications that require heterogeneous sensor deployments continue to face practical challenges owing to resource constraints within their operating environments (i.e. energy efficiency, computational power and reliability). This has motivated the need for effective ways of selecting a sensing strategy that maximizes detection accuracy for events of interest using available resources and data-driven approaches. Inspired by those limitations, we ask a fundamental question: whether state-of-the-art Recurrent Neural Networks can observe different series of data and communicate their hidden states to collectively solve an objective in a distributed fashion. We realize our answer by conducting a series of systematic analyses of a Communicating Recurrent Neural Network architecture on varying time-steps, objective functions and number of nodes. The experimental setup we employ models tasks synonymous with those in Wireless Sensor Networks. Our contributions show that Recurrent Neural Networks can communicate through their hidden states and we achieve promising results.
20

Patra, Sudhansu Shekhar, and R. K. Barik. "Dynamic Dedicated Server Allocation for Service Oriented Multi-Agent Data Intensive Architecture in Biomedical and Geospatial Cloud." International Journal of Cloud Applications and Computing 4, no. 1 (January 2014): 50–62. http://dx.doi.org/10.4018/ijcac.2014010105.

Abstract:
Cloud computing has recently received considerable attention as a promising approach for delivering information and communication technology (ICT) services as a utility. In the process of providing these services, it is necessary to improve the utilization of data centre resources, which operate in highly dynamic workload environments. Data centres are integral parts of cloud computing: generally, hundreds or thousands of virtual servers run at any instant, hosting many tasks, while the cloud system keeps receiving batches of task requests. Service-oriented architecture (SOA) and agent frameworks provide tools for developing the distributed, multi-agent systems that can be used to administer cloud computing environments with these characteristics. This paper presents SOQM (Service-Oriented, QoS-Assured and Multi-Agent Cloud Computing), an architecture which supports QoS-assured cloud service provision and request. Biomedical and geospatial data on the cloud can be analyzed through SOQM, which has allowed efficient management of the allocation of resources to the different system agents. The paper proposes a finite, heterogeneous, multiple-VM model in which VMs are dynamically allocated depending on requests from biomedical and geospatial stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
21

Di Giandomenico, Felicita, Antonia Bertolino, Antonello Calabrò, and Nicola Nostro. "An Approach to Adaptive Dependability Assessment in Dynamic and Evolving Connected Systems." International Journal of Adaptive, Resilient and Autonomic Systems 4, no. 1 (January 2013): 1–25. http://dx.doi.org/10.4018/jaras.2013010101.

Full text
Abstract:
Complexity, heterogeneity, interdependency and, especially, evolution of system/services specifications, related operating environments and user needs are increasingly relevant characteristics of modern and future software applications. Taking advantage of the experience gained in the context of the European project Connect, which addresses the challenging and ambitious topic of eternally functioning distributed and heterogeneous systems, this paper presents a framework to analyse and assess dependability and performance properties in dynamic and evolving contexts. The goal is to develop an adaptive approach by coupling stochastic model-based analysis, performed at design time to support the definition and implementation of software products complying with their stated dependability and performance requirements, with run-time monitoring to re-calibrate and enhance the dependability and performance prediction along evolution. The proposed framework for adaptive assessment is described and illustrated through a case study. To simplify the description while making the approach under study more concrete, the authors adopted the setting and terminology of the Connect project.
APA, Harvard, Vancouver, ISO, and other styles
22

Coluccia, Angelo, and Alessio Fascista. "A Review of Advanced Localization Techniques for Crowdsensing Wireless Sensor Networks." Sensors 19, no. 5 (February 26, 2019): 988. http://dx.doi.org/10.3390/s19050988.

Full text
Abstract:
The wide availability of sensing modules and computing capabilities in modern mobile devices (smartphones, smart watches, in-vehicle sensors, etc.) is driving the shift from mote-class wireless sensor networks (WSNs) to the new era of crowdsensing WSNs. In this emerging paradigm sensors are no longer static and homogeneous, but are rather worn/carried by people or cars. This results in a new type of wide-area WSN—crowd-based and overlaid on top of heterogeneous communication technologies—that paves the way for very innovative applications. To this aim, the positioning of mobile devices operating in the network becomes crucial. Indeed, the pervasive, almost ubiquitous availability of smart devices brings unprecedented opportunities but also poses new research challenges in their precise location under mobility and dense-multipath environments typical of urban and indoor scenarios. In this paper, we review recent advances in the field of wireless positioning with focus on cooperation, mobility, and advanced array processing, which are key enablers for the design of novel localization solutions for crowdsensing WSNs.
APA, Harvard, Vancouver, ISO, and other styles
23

Reggiani, Luca, Andrea Gola, Gian Mario Maggio, and Gianluigi Tiberi. "Design of Interference-Resilient Medium Access for High Throughput WLANs." International Journal of Distributed Sensor Networks 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/659569.

Full text
Abstract:
The proliferation of wireless communications systems poses new challenges in terms of coexistence between heterogeneous devices operating within the same frequency bands. In fact, in case of high-density concentration of wireless devices, like indoor environments, the network performance is typically limited by the mutual interference among the devices themselves, such as for wireless local area networks (WLANs). In this paper, we analyze a protocol strategy for managing multiple access in wireless networks. A network of sensors colocated with the WLAN terminals forms a control layer for managing the medium access and scheduling resources in order to limit collisions and optimize the WLAN data traffic; this control layer is based on a low-power wideband technology characterized by interference robustness, like CDMA (code division multiple access) or UWB (ultra-wideband) for sensors. In this work, we perform an analytical and simulative performance study of the saturated throughput, showing numerical results for the UWB-IR (Impulse Radio) sensors case and highlighting the advantage that can be provided particularly in very high capacity systems, which constitute the necessary evolution of current WLAN versions.
APA, Harvard, Vancouver, ISO, and other styles
24

Capiola, August, Holly C. Baxter, Marc D. Pfahler, Christopher S. Calhoun, and Philip Bobko. "Swift Trust in Ad Hoc Teams: A Cognitive Task Analysis of Intelligence Operators in Multi-Domain Command and Control Contexts." Journal of Cognitive Engineering and Decision Making 14, no. 3 (September 2020): 218–41. http://dx.doi.org/10.1177/1555343420943460.

Full text
Abstract:
Trust is important for establishing successful relationships and performance outcomes. In some contexts, however, rich information such as knowledge of and experience with a teammate is not available to inform one’s trust. Yet, parties in these contexts are expected to work together toward common goals for a relatively brief and finite period of time. This research investigated the antecedents to quickly-formed trust (often referred to as swift trust) in fast-paced, time-constrained contexts. We conducted a cognitive task analysis (CTA) based on 11 structured interviews of subject-matter experts (SMEs) in Intelligence (Intel)—a heterogeneous job category comprising distributed and co-located personnel within multi-domain command and control (MDC2) environments. Eight antecedents to swift trust emerged from these interviews (i.e., ability, integrity, benevolence, communication, mission-focus, self-awareness, shared perspectives/experiences, and calm), with further analysis implying that swift trust is a relevant and emergent state in MDC2 that facilitates reliance. These findings offer implications for teams operating in high-risk distributed contexts and should be expanded through basic experimental investigations as well as applied initiatives.
APA, Harvard, Vancouver, ISO, and other styles
25

HAMADI, YOUSSEF. "INTERLEAVED BACKTRACKING IN DISTRIBUTED CONSTRAINT NETWORKS." International Journal on Artificial Intelligence Tools 11, no. 02 (June 2002): 167–88. http://dx.doi.org/10.1142/s0218213002000836.

Full text
Abstract:
The adaptation of software technology to distributed environments is an important challenge today. In this work we combine parallel and distributed search, adding the potential speed-up of a parallel exploration to the processing of distributed problems. This paper extends DIBT, a distributed search procedure operating in distributed constraint networks. The extension is threefold. First, the ordered hierarchies used during backtracking are extended to remove partial orders. Second, the procedure is updated to face delayed-information problems arising in heterogeneous systems (we thank M. Yokoo for raising this potential problem). Third, the search is extended to simultaneously explore independent parts of a distributed search tree. The first and third points were first presented in 1999. The third improvement introduces parallelism into distributed search, which leads to Interleaved Distributed Intelligent BackTracking (IDIBT). Our results show that (1) insoluble problems do not greatly degrade performance over DIBT and (2) super-linear speed-up can be achieved when the distribution of solutions is non-uniform.
APA, Harvard, Vancouver, ISO, and other styles
26

Mousavi, M. Sina, Yuan Feng, Mostafa Afzalian, Josh McCann, and Jongwan Eun. "In situ characterization of temperature and gas production using membrane interface probe (MIP) and hydraulic profiling tool (HPT) in an operating municipal solid waste landfill." E3S Web of Conferences 205 (2020): 09009. http://dx.doi.org/10.1051/e3sconf/202020509009.

Full text
Abstract:
A modern Municipal Solid Waste (MSW) landfill is a renewable energy resource to produce a significant amount of heat and methane used for generating electricity. However, it is difficult to use those sources effectively because active and post-closure MSW landfills are heterogeneous spatially and temporally and exposed to complex environments with varying pressure and moisture in the landfill. With regard to the prediction of the sources, the analysis of in situ MSW properties is an alternative way to reduce the uncertainty and to understand complex processes under way in the landfill effectively. A Hydraulic Profiling Tool (HPT) and Membrane Interface Probe (MIP) measure the continuous profile of MSW properties with depth, including hydraulic pressure, temperature, hydraulic conductivity, electrical conductivity (EC), and concentration of selected volatile organic compounds and methane. In this study, we conducted a series of MIP with HPT tests to investigate the MSW characteristics of a landfill in Nebraska. The results of the test showed an increase in hydraulic pressure and temperature with depth. The EC profile showed a variety of different waste constituents, and MIP results showed the methane trapped beneath the top cover. The results in terms of hydraulic properties, temperature and EC obtained from different sites can be used to estimate the waste age and help design energy recovery systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Gupta, Pragya, and Markus Duchon. "Developing Self-Similar Hybrid Control Architecture Based on SGAM-Based Methodology for Distributed Microgrids." Designs 2, no. 4 (October 23, 2018): 41. http://dx.doi.org/10.3390/designs2040041.

Full text
Abstract:
Cyber-Physical Systems (CPS) are the complex systems that control and coordinate physical infrastructures, which may be geographically apart, via the use of Information and Communication Technology (ICT). One such application of CPS is smart microgrids. Microgrids comprise both power consuming and power producing infrastructure and are capable of operating in grid connected and disconnected modes. Due to the presence of heterogeneous smart devices communicating over multiple communication protocols in a distributed environment, a system architecture is required. The objective of this paper is to approach the microgrid architecture from the software and systems’ design perspective. The architecture should be flexible to support multiple communication protocols and be able to integrate various hardware technologies. It should also be modular and scalable to support various functionalities such as island mode operations, energy efficient operations, energy trading, predictive maintenance, etc. These requirements are the basis for designing the software architecture for the smart microgrids that should be able to manage not only electrical but all energy-related systems. In this work, we propose a distributed, hybrid control architecture suited for microgrid environments, where entities are geographically distant and need to operate in a cohesive manner. The proposed system architecture supports various design philosophies such as component-based design, hierarchical composition of components, peer-to-peer design, distributed decision-making and controlling as well as plug-and-play during runtime. A unique capability of the proposed system architecture is the self-similarity of the components for the distributed microgrids. The benefit of the approach is that it supports these design philosophies at all the levels in the hierarchy, in contrast to typical centralized architectures where decisions are taken only at the global level.
The proposed architecture is applied to a real system of 13 residential buildings in a low-voltage distribution network. The required implementation and deployment details for monitoring and controlling 13 residential buildings are also discussed in this work.
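The self-similarity described above, where a cluster of buildings exposes the same control interface as a single device, can be sketched with a composite pattern. The class names, the uniform interface, and the proportional curtailment rule are illustrative assumptions, not the paper's design.

```python
from typing import List

class Controllable:
    """Uniform control interface shared by every level of the hierarchy."""
    def power_kw(self) -> float:
        raise NotImplementedError
    def set_limit(self, kw: float) -> None:
        raise NotImplementedError

class Device(Controllable):
    """Leaf component, e.g. a PV inverter or battery in one building."""
    def __init__(self, output_kw: float):
        self.output_kw = output_kw
        self.limit_kw = float("inf")
    def power_kw(self) -> float:
        return min(self.output_kw, self.limit_kw)
    def set_limit(self, kw: float) -> None:
        self.limit_kw = kw

class Microgrid(Controllable):
    """Composite: contains Controllables, including other Microgrids,
    so a district looks exactly like one large device to its parent."""
    def __init__(self, children: List[Controllable]):
        self.children = children
    def power_kw(self) -> float:
        return sum(c.power_kw() for c in self.children)
    def set_limit(self, kw: float) -> None:
        # Naive equal-share curtailment across children (an assumption)
        share = kw / len(self.children)
        for c in self.children:
            c.set_limit(share)

building = Microgrid([Device(3.0), Device(5.0)])
district = Microgrid([building, Device(2.0)])
```

Because `Microgrid` and `Device` share one interface, decisions can be taken at any level of the hierarchy rather than only globally, which is the design philosophy the abstract contrasts with centralized architectures.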
APA, Harvard, Vancouver, ISO, and other styles
28

Andersson, Dennis. "An Externalizable Model of Tactical Mission Control for Knowledge Transfer." International Journal of Information Systems for Crisis Response and Management 6, no. 3 (July 2014): 16–37. http://dx.doi.org/10.4018/ijiscram.2014070102.

Full text
Abstract:
Organizations that deal with humanitarian assistance, disaster response and military activities are often exposed to dynamic environments where chaos rules. Under these circumstances, standard operating procedures may not always be applicable, forcing the controllers to resort to opportunistic, or even scrambled, control. The lack of tactical or strategic control forces the teams to rely on experience from scenario-based training and prior missions. Acquiring, and retaining, such experience is thus essential to prepare for future events. Based on ideas from the knowledge management community, this article proposes an externalizable control model, supporting methods for retaining mission experience through internalization via hypermedia. Such a knowledge base of experience can be used to simplify knowledge sharing, an important matter since first-hand experience from rare and extreme events is, naturally, rare. The knowledge base synthesizes actual decision making processes, complete with context, history, cues, and interactions, and is captured through a combination of heterogeneous multimedia recordings, sensor readings, and documents relating to the mission. The approach can complement regular training and apprenticeships, to help establish and maintain a pool of knowledge and increase tactical commanders' recognition-primed decision-making capability.
APA, Harvard, Vancouver, ISO, and other styles
29

Sarkar, Nurul I., Anita Xiao-min Kuang, Kashif Nisar, and Angela Amphawan. "Hospital Environment Scenarios using WLAN over OPNET Simulation Tool." International Journal of Information Communication Technologies and Human Development 6, no. 1 (January 2014): 69–90. http://dx.doi.org/10.4018/ijicthd.2014010104.

Full text
Abstract:
For the past ten years, heterogeneous wired and wireless networks have tended to integrate seamlessly, offering effective and reliable service for medical operations. One of the problems encountered by network practitioners is the seamless integration of network components into healthcare delivery. As a multiplexing hospital model, the implementation certainly presents some challenges. The major technical and performance issues involved are as follows. The operating parameters should remain aligned to the Quality of Service (QoS) requirements throughout the simulation. Bandwidth utilisation of wireless networking is a challenging issue for real-time multimedia transmission. IEEE 802.11 provides a relatively lower data rate than wired networks, so developers tend to adopt a compromise solution: either reduce the file size or compress the image packets. Communication performance varies constantly with signal strength, traffic load and interference; the radio signal attenuates greatly where metallic objects and microwave sources exist within the active range. To ensure devices do not interfere with other electronic equipment (e.g. heart monitors), the wireless spectrum has to be managed appropriately. This research paper aims to develop generic hospital network scenarios using Wireless Local Area Network (WLAN) over OPNET simulation, to evaluate the performance of the integrated network scenario for Intensive Care Units (ICU). This research makes use of computer simulation and discusses various aspects of the network design, so as to discover the performance behaviour pertaining to the effect of traffic type, traffic load and network size. In the ICU scenario, the performance of video conferencing degrades with network size; thus, a QoS-enabled device is recommended to reduce packet delay and data loss.
IEEE 802.11a suits the hospital environment because it mitigates interference on the 2.4 GHz band where most wireless devices operate. An experiment examines the effect of signal strength in WLAN and indicates that -88 dBm is the best signal-strength threshold. Although 802.11a generates slightly lower throughput than 802.11g, this issue can be addressed by placing more APs in the service area. This is important for medical devices that require future upgrades and Bluetooth deployment.
APA, Harvard, Vancouver, ISO, and other styles
30

Andersen, Björn, Martin Kasparick, Hannes Ulrich, Stefan Franke, Jan Schlamelcher, Max Rockstroh, and Josef Ingenerf. "Connecting the clinical IT infrastructure to a service-oriented architecture of medical devices." Biomedical Engineering / Biomedizinische Technik 63, no. 1 (February 23, 2018): 57–68. http://dx.doi.org/10.1515/bmt-2017-0021.

Full text
Abstract:
The new medical device communication protocol known as IEEE 11073 SDC is well-suited for the integration of (surgical) point-of-care devices, as are the established Health Level Seven (HL7) V2 and Digital Imaging and Communications in Medicine (DICOM) standards for the communication of systems in the clinical IT infrastructure (CITI). An integrated operating room (OR) and other integrated clinical environments, however, need interoperability between both domains to fully unfold their potential for improving the quality of care as well as clinical workflows. This work thus presents concepts for the propagation of clinical and administrative data to medical devices, physiologic measurements and device parameters to clinical IT systems, as well as image and multimedia content in both directions. Prototypical implementations of the derived components have proven to integrate well with systems of networked medical devices and with the CITI, effectively connecting these heterogeneous domains. Our qualitative evaluation indicates that the interoperability concepts are suitable to be integrated into clinical workflows and are expected to benefit patients and clinicians alike. The upcoming HL7 Fast Healthcare Interoperability Resources (FHIR) communication standard will likely change the domain of clinical IT significantly. A straightforward mapping to its resource model thus ensures the tenability of these concepts despite a foreseeable change in demand and requirements.
APA, Harvard, Vancouver, ISO, and other styles
31

Im, Cholhong, and Changsung Jeong. "ISOMP: An Instant Service-Orchestration Mobile M2M Platform." Mobile Information Systems 2016 (2016): 1–16. http://dx.doi.org/10.1155/2016/7263729.

Full text
Abstract:
Smartphones have greater computing power than ever before, providing convenient applications to improve our lives. In general, people find it difficult to locate suitable applications, and implementing new applications often requires professional skills. In this paper, we propose a new service platform that facilitates the implementation of new applications by composing prebuilt components that provide the context information of mobile devices such as location and contacts. Our platform introduces an innovative concept named context collaboration, in which smartphones exchange context information with each other, which in turn is used to deduce useful inferences. The concept is realized by instant orchestration, which assembles some components and implements a composite component. The interactive communication interface helps a mobile device to communicate with other devices using open APIs, such as SOAP and HTTP (REST). The platform also works in heterogeneous environments, for example, between Android and iOS operating systems. Throughout the platform, mobile devices can act as smart M2M machines with context awareness, enabling intelligent tasks on behalf of users. Our platform will open up a new and innovative pathway for both enhanced mobile context awareness and M2M, which is expected to be a fundamental feature of the next generation of mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
32

Ochieng, Hannington O., John O. Ojiem, Simon M. Kamwana, Joyce C. Mutai, and James W. Nyongesa. "Multiple-bean varieties as a strategy for minimizing production risk and enhancing yield stability in smallholder systems." Experimental Agriculture 56, no. 1 (March 20, 2019): 37–47. http://dx.doi.org/10.1017/s0014479719000085.

Full text
Abstract:
Common bean (Phaseolus vulgaris L.) is perhaps the most important grain legume in sub-Saharan Africa (SSA) smallholder systems for food security and household income. Although a wide choice of varieties is available, smallholder farmers in western Kenya realize yields that are low and variable since they operate in risky production environments. Significant seasonal variations exist in rainfall and in the severity of pests and diseases. This situation is worsened by the low and declining soil fertility, coupled with the low capacity of farmers to purchase production inputs such as fertilizers, fungicides and insecticides, and by land scarcity. The objective of this study was to investigate whether growing multiple-bean varieties instead of a single variety can enable farmers to enhance yield stability over seasons and ensure food security. Five common bean varieties were evaluated in multiple farms for 11 seasons at Kapkerer in Nandi County, western Kenya. Data were collected on grain yield, days to 50% flowering and major diseases. In addition, daily rainfall was recorded throughout the growing seasons. The five varieties were combined in all possible ways to create 31 single- and multiple-bean production strategies. The strategies were evaluated for grain yield performance and yield stability over seasons to determine the risk of not attaining a particular yield target. Results indicated that cropping multiple-bean varieties can be an effective way of reducing production risks in heterogeneous smallholder systems. Yield stability can be greatly enhanced across diverse environments, leading to improved food security, especially for the resource-poor smallholder farmers operating in risk-prone environments. Although the results show that some of the single-bean variety strategies were high yielding, their yield stability was generally lower than that of multiple strategies.
Resource-poor, risk-averse farmers can greatly increase the probability of exceeding their yield targets by cropping multiple-bean varieties with relatively low yields but high grain yield stability. Trading off high grain yield for yield stability might be an important strategy for minimizing bean production risks.
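The 31 strategies arise from all non-empty subsets of five varieties (2^5 - 1 = 31). The following Python sketch enumerates them and ranks each by the coefficient of variation of an equal-area mixture; the yield figures are invented for illustration, not the Kapkerer trial data.

```python
from itertools import combinations
from statistics import mean, pstdev

# Hypothetical per-season yields (t/ha) for five varieties over six seasons
yields = {
    "V1": [2.1, 0.6, 1.9, 1.1, 2.3, 0.7],
    "V2": [1.5, 1.4, 1.6, 1.3, 1.5, 1.4],
    "V3": [1.8, 1.0, 1.7, 0.9, 1.9, 1.2],
    "V4": [1.2, 1.1, 1.3, 1.2, 1.1, 1.3],
    "V5": [2.4, 0.4, 2.2, 0.5, 2.5, 0.6],
}

def mixture_series(varieties):
    # Season-by-season yield of an equal-area mixture of the chosen varieties
    return [mean(season) for season in zip(*(yields[v] for v in varieties))]

def cv(series):
    # Coefficient of variation: lower means more stable across seasons
    return pstdev(series) / mean(series)

# All 31 single- and multiple-variety strategies (non-empty subsets of 5)
strategies = [c for r in range(1, 6) for c in combinations(yields, r)]

most_stable = min(strategies, key=lambda s: cv(mixture_series(s)))
```

With these invented numbers, a multi-variety mixture beats every single variety on stability, mirroring the study's finding that stability, rather than peak yield, drives risk reduction.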
APA, Harvard, Vancouver, ISO, and other styles
33

Amin, M. Miftakul. "Interoperabilitas perangkat lunak menggunakan RESTful web service." Register: Jurnal Ilmiah Teknologi Sistem Informasi 4, no. 1 (November 24, 2018): 14. http://dx.doi.org/10.26594/register.v4i1.1129.

Full text
Abstract:
The development of information systems requires interoperability in heterogeneous environments, in terms of operating systems, software, programming languages, and databases, so that systems can communicate and exchange data or information. RESTful web services can be used as one of the technologies to realize interoperability. As a case study, a library application was built using the Slim Framework (PHP) on the server side and Visual Basic on the client side. Communication between client and server uses the HTTP methods GET, POST, PUT, and DELETE. Testing was carried out with the Postman software to assess the performance of the developed web service. The results show that client applications can access the web services provided on the server side as a form of interoperability.
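The REST style described above maps HTTP methods onto CRUD operations on a resource. The sketch below shows that mapping for a toy in-memory "books" resource; the routes, schema, and status codes are illustrative assumptions, not the actual API of the library application in the paper.

```python
import json

# In-memory "library" resource keyed by id
books = {1: {"title": "Sistem Informasi Perpustakaan"}}

def handle(method, path, body=None):
    """Dispatch an HTTP method and path to a CRUD action, REST-style."""
    parts = path.strip("/").split("/")
    if parts[0] != "books":
        return 404, None
    if method == "GET" and len(parts) == 1:
        return 200, list(books.values())          # collection
    if method == "GET":
        book = books.get(int(parts[1]))           # single resource
        return (200, book) if book else (404, None)
    if method == "POST":
        new_id = max(books, default=0) + 1        # create
        books[new_id] = json.loads(body)
        return 201, {"id": new_id}
    if method == "PUT" and len(parts) == 2:
        books[int(parts[1])] = json.loads(body)   # replace
        return 200, books[int(parts[1])]
    if method == "DELETE" and len(parts) == 2:
        books.pop(int(parts[1]), None)            # delete
        return 204, None
    return 405, None

status, created = handle("POST", "/books", '{"title": "REST Basics"}')
```

Because the contract is just HTTP methods plus JSON bodies, any client language (Visual Basic in the case study) can consume the service, which is the interoperability point the abstract makes.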
APA, Harvard, Vancouver, ISO, and other styles
34

Misbah, Anass, and Ahmed Ettalbi. "Automatic Conversion of a Conceptual Model to a Standard Multi-view Web Services Definition." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 6, no. 1 (March 19, 2018): 43. http://dx.doi.org/10.3991/ijes.v6i1.8285.

Full text
Abstract:
Information systems are becoming more and more heterogeneous, and here comes the need for more generic transformation algorithms and more automatic generation Meta rules. In fact, the large number of terminals, devices, operating systems, platforms and environments requires a high level of adaptation. Therefore, it is becoming more and more difficult to validate, generate and implement models, designs and codes manually.
Web services are one of the technologies used massively nowadays; hence, they are among the technologies that most require automatic rules for validation and automation. Many previous works have dealt with Web services by proposing new concepts such as Multi-view Web services, standard WSDL implementation of Multi-view Web services and, further, generic Meta rules for automatic generation of Multi-view Web services.
In this work we propose a new way of generating Multi-view Web services, based on an engine algorithm that takes as input both an initial Conceptual Model and a user's matrix, and then unrolls a generic algorithm to dynamically generate a validated set of points of view. This set of points of view is transformed into a standard WSDL implementation of Multi-view Web services by means of the automatic transformation Meta rules.
APA, Harvard, Vancouver, ISO, and other styles
35

Lavalle, Ana, Miguel A. Teruel, Alejandro Maté, and Juan Trujillo. "Improving Sustainability of Smart Cities through Visualization Techniques for Big Data from IoT Devices." Sustainability 12, no. 14 (July 11, 2020): 5595. http://dx.doi.org/10.3390/su12145595.

Full text
Abstract:
Fostering sustainability is paramount for Smart Cities development. Lately, Smart Cities are benefiting from the rising of Big Data coming from IoT devices, leading to improvements in monitoring and prevention. However, monitoring and prevention processes require visualization techniques as a key component. Indeed, in order to prevent possible hazards (such as fires, leaks, etc.) and optimize their resources, Smart Cities require adequate visualizations that provide insights to decision makers. Nevertheless, visualization of Big Data has always been a challenging issue, especially when such data are originated in real-time. This problem becomes even bigger in Smart City environments, since we have to deal with many different groups of users and multiple heterogeneous data sources. Without a proper visualization methodology, complex dashboards including data of different natures are difficult to understand. In order to tackle this issue, we propose a methodology based on visualization techniques for Big Data, aimed at improving the evidence-gathering process by assisting users in decision making in the context of Smart Cities. Moreover, in order to assess the impact of our proposal, a case study based on service calls for a fire department is presented. In this sense, our findings will be applied to data coming from citizen calls. Thus, the results of this work will contribute to the optimization of resources, namely fire extinguishing battalions, helping to improve their effectiveness and, as a result, the sustainability of a Smart City, operating better with fewer resources. Finally, in order to evaluate the impact of our proposal, we have performed an experiment with non-expert users in data visualization.
APA, Harvard, Vancouver, ISO, and other styles
36

ISHII, RENATO P., RODRIGO F. DE MELLO, LUCIANO J. SENGER, MARCOS J. SANTANA, REGINA H. C. SANTANA, and LAURENCE TIANRUO YANG. "IMPROVING SCHEDULING OF COMMUNICATION INTENSIVE PARALLEL APPLICATIONS ON HETEROGENEOUS COMPUTING ENVIRONMENTS." Parallel Processing Letters 15, no. 04 (December 2005): 423–38. http://dx.doi.org/10.1142/s0129626405002349.

Full text
Abstract:
This paper presents a new model for evaluating the impact of processing operations resulting from communication among processes. This model quantifies the traffic volume imposed on the communication network by means of latency and overhead parameters. Such parameters represent the load that each process imposes on the network and the delay on the CPU as a consequence of network operations. This delay is represented in the model by means of the slowdown metric. The equations that quantify the costs involved in processing operations and message exchange are defined. In the same way, equations to determine the maximum network bandwidth are used in scheduling decision-making. The proposed model uses a constant that delimits the maximum allowed usage of the communication network; this constant defines two possible scheduling techniques: group scheduling or scheduling through the communication network. Such techniques are incorporated into the DPWP policy, generating an extension of this policy. Experimental and simulation results confirm the performance enhancement of parallel applications under supervision of the extended DPWP policy, compared to executions supervised by the original DPWP.
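The decision rule sketched below illustrates the kind of constant-bounded bandwidth check the abstract describes: a communication cost model plus a usage threshold that switches between group scheduling and scheduling through the network. The cost formula, the 0.8 default, and the decision rule itself are illustrative assumptions, not the paper's exact equations.

```python
def transfer_time(msg_bytes, latency_s, bandwidth_bps):
    # Cost of pushing one message onto the network (simple linear model)
    return latency_s + (8 * msg_bytes) / bandwidth_bps

def choose_scheduling(process_traffic_bps, network_max_bps, k=0.8):
    """Pick a technique based on the network-usage constant k.

    k delimits the maximum allowed fraction of network bandwidth; if a
    process's traffic would exceed it, co-locate the communicating
    processes (group scheduling) instead of spreading them out.
    """
    if process_traffic_bps > k * network_max_bps:
        return "group"     # co-locate heavily communicating processes
    return "network"       # schedule across the communication network

decision = choose_scheduling(process_traffic_bps=90e6, network_max_bps=100e6)
```

A heavy communicator (90 Mbps against a 100 Mbps network with k = 0.8) is grouped; a light one is spread across the network, mirroring the two techniques incorporated into the extended DPWP policy.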
APA, Harvard, Vancouver, ISO, and other styles
37

Heckel, Kai, Marcel Urban, Patrick Schratz, Miguel Mahecha, and Christiane Schmullius. "Predicting Forest Cover in Distinct Ecosystems: The Potential of Multi-Source Sentinel-1 and -2 Data Fusion." Remote Sensing 12, no. 2 (January 17, 2020): 302. http://dx.doi.org/10.3390/rs12020302.

Full text
Abstract:
The fusion of microwave and optical data sets is expected to provide great potential for the derivation of forest cover around the globe. As Sentinel-1 and Sentinel-2 are now both operating in twin mode, they can provide an unprecedented data source to build dense spatial and temporal high-resolution time series across a variety of wavelengths. This study investigates (i) the ability of the individual sensors and (ii) their joint potential to delineate forest cover for study sites in two highly varied landscapes located in Germany (temperate dense mixed forests) and South Africa (open savanna woody vegetation and forest plantations). We used multi-temporal Sentinel-1 and single time steps of Sentinel-2 data in combination to derive accurate forest/non-forest (FNF) information via machine-learning classifiers. The forest classification accuracies were 90.9% and 93.2% for South Africa and Thuringia, respectively, estimated using autocorrelation-corrected spatial cross-validation (CV) for the fused data set. Sentinel-1-only classifications provided the lowest overall accuracy of 87.5%, while Sentinel-2-based classifications led to higher accuracies of 91.9%. Sentinel-2 short-wave infrared (SWIR) channels, biophysical parameters (Leaf Area Index (LAI), and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)) and the lower spectrum of the Sentinel-1 synthetic aperture radar (SAR) time series were found to be most distinctive in the detection of forest cover. In contrast to homogeneous forest sites, Sentinel-1 time series information improved forest cover predictions in open savanna-like environments with heterogeneous regional features. The presented approach proved to be robust and displayed the benefit of fusing optical and SAR data at high spatial resolution.
APA, Harvard, Vancouver, ISO, and other styles
38

Chitombo, Gideon. "Importance of Geology in Cave Mining." SEG Discovery, no. 119 (October 1, 2019): 1–21. http://dx.doi.org/10.5382/geo-and-mining-05.

Full text
Abstract:
Editor’s note: The Geology and Mining series, edited by Dan Wood and Jeffrey Hedenquist, is designed to introduce early-career professionals and students to a variety of topics in mineral exploration, development, and mining, in order to provide insight into the many ways in which geoscientists contribute to the mineral industry. Abstract Cave mining methods (generically referred to as block caving) are becoming the preferred mass underground mining options for large, regularly shaped mineral deposits that are too deep to mine by open pit. The depth at which caving is initiated has increased over the past few decades, and operational difficulties experienced in these new mines have indicated the need for a much improved geologic and geotechnical understanding of the rock mass, if the low-cost and high-productivity objectives of the method are to be maintained and the mines operated safely. Undercuts (the caving initiation level immediately above the ore extraction level) are now being developed at depths of &gt;1,000 m below surface, with the objective of progressively deepening to 2,000 and, eventually, 3,000 m. Many of the deeper deposits now being mined by caving have lower average metal grades than previously caved at shallower depths and comprise harder and more heterogeneous rock masses, and some are located in higher-stress and higher-temperature environments. As a result, larger caving block heights are required for engineering reasons; mining costs (capital and operating) are also escalating. In these deeper cave mining environments, numerous hazards must be mitigated if safety, productivity, and profitability are not to be adversely affected. Fortunately, potential hazards can be indicated and evaluated during exploration, discovery, and deposit assessment, prior to mine design and planning. Major hazards include rock bursts, air blasts, discontinuous surface subsidence, and inrushes of fines. These hazards are present during all stages of the caving process, from cave establishment (tunnel and underground infrastructure development, drawbell opening, and undercutting) through cave propagation and cave breakthrough to surface, up to and including steady-state production. Improved geologic input into mine design and planning will facilitate recognition and management of these risks, mitigating their consequences.
APA, Harvard, Vancouver, ISO, and other styles
39

Ciscon, Lawrence A., James D. Wise, and Don H. Johnson. "A Distributed Data Sharing Environment for Telerobotics." Presence: Teleoperators and Virtual Environments 3, no. 4 (January 1994): 321–40. http://dx.doi.org/10.1162/pres.1994.3.4.321.

Full text
Abstract:
We designed and implemented a distributed processing environment for uniformly communicating and sharing information among remotely located robots and human operators. This environment interconnects heterogeneous processors to withstand dynamically changing configurations and communications failures, and to support mixed-mode operation (simultaneous teleoperation and autonomous modes). The key design notion is undirected, data-driven communication: the recipients of data rather than the creators define communications paths for well-formed data blocks augmented by descriptive properties. Using this environment, we have implemented both a hierarchical path planner for an intelligent mobile robot operating over a local-area network and a telerobotic testbed that supports long-distance teleoperation.
APA, Harvard, Vancouver, ISO, and other styles
40

Ooshita, Fukuhito, Susumu Matsumae, Toshimitsu Masuzawa, and Nobuki Tokura. "Scheduling for broadcast operation in heterogeneous parallel computing environments." Systems and Computers in Japan 35, no. 5 (2004): 44–54. http://dx.doi.org/10.1002/scj.10533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Vilisov, V. Ya. "Machine training to the distribution of tasks in the multi-agent robotic system in the elimination of emergency situations." Informacionno-technologicheskij vestnik, no. 2 (July 30, 2018): 59–68. http://dx.doi.org/10.21499/2409-1650-2018-2-59-68.

Full text
Abstract:
An algorithm is presented for machine learning of a transport-type model for the optimal distribution of tasks in a heterogeneous group of robots operating in automatic mode without operator participation. It is assumed that the model is trained by an experienced operator at a test site that closely reproduces the real emergency situation in which the robots are to perform operations. Using the trained model in a real setting, tasks can be distributed under either a supervisory or a decentralized control scheme. Training can also continue during the regular operation of the robots. In this case, the trained model makes it possible to separate the model-tuning loop from the task-assignment loop, allowing the robots and the operator each to work at their own natural pace.
APA, Harvard, Vancouver, ISO, and other styles
42

Kyong, Joohyun, In-Kyu Han, and Sung-Soo Lim. "Heterogeneous Operating Systems Integrated Trace Method for Real-Time Virtualization Environment." IEMEK Journal of Embedded Systems and Applications 10, no. 4 (August 31, 2015): 233–39. http://dx.doi.org/10.14372/iemek.2015.10.4.233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Cha, Ji Hwan, and Maxim Finkelstein. "Stochastic modeling of quality of systems operating in a heterogeneous environment." Applied Stochastic Models in Business and Industry 35, no. 6 (September 4, 2019): 1344–65. http://dx.doi.org/10.1002/asmb.2484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kalil, Mohamed A., Hassan Al-Mahdi, and Andreas Mitschele-Thiel. "Performance Evaluation of Secondary Users Operating on a Heterogeneous Spectrum Environment." Wireless Personal Communications 72, no. 4 (May 22, 2013): 2251–62. http://dx.doi.org/10.1007/s11277-013-1147-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

LaRoque, Benjamin. "High-availability on-site deployment to heterogeneous architectures for Project 8 and ADMX." EPJ Web of Conferences 245 (2020): 07029. http://dx.doi.org/10.1051/epjconf/202024507029.

Full text
Abstract:
Project 8 is applying a novel spectroscopy technique to make a precision measurement of the tritium beta-decay spectrum, resulting in either a measurement of or further constraint on the effective mass of the electron antineutrino. ADMX is operating an axion haloscope to scan the mass-coupling parameter space in search of dark matter axions. Both collaborations are executing medium-scale experiments, where stable operations last for three to nine months and the same system is used for development and testing between periods of operation. It is also increasingly common to use low-cost computing elements, such as the Raspberry Pi, to integrate computing and control with custom instrumentation and hardware. This leads to situations where it is necessary to support software deployment to heterogeneous architectures on rapid development cycles while maintaining high availability. Here we present the use of docker containers to standardize packaging and execution of control software for both experiments and the use of kubernetes for management and monitoring of container deployment in an active research and development environment. We also discuss the advantages over more traditional approaches employed by experiments at this scale, such as detached user execution or custom control shell scripts.
APA, Harvard, Vancouver, ISO, and other styles
46

Beguelin, Adam, Jack J. Dongarra, George Al Geist, Robert Manchek, and Keith Moore. "HeNCE: A Heterogeneous Network Computing Environment." Scientific Programming 3, no. 1 (1994): 49–60. http://dx.doi.org/10.1155/1994/368727.

Full text
Abstract:
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
APA, Harvard, Vancouver, ISO, and other styles
47

El Mashade, Mohamed Bakry. "Performance Amelioration of Standard Variants of Adaptive Schemes Operating in Heterogeneous Environment." Radioelectronics and Communications Systems 63, no. 4 (April 2020): 171–85. http://dx.doi.org/10.3103/s0735272720040019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

GEIST, G. A., JAMES ARTHUR KOHL, STEPHEN SCOTT, and PHILIP M. PAPADOPOULOS. "HARNESS: ADAPTABLE VIRTUAL MACHINE ENVIRONMENT FOR HETEROGENEOUS CLUSTER." Parallel Processing Letters 09, no. 02 (June 1999): 253–73. http://dx.doi.org/10.1142/s0129626499000244.

Full text
Abstract:
This paper describes ongoing work on the Harness system for next-generation heterogeneous distributed computing. Harness is an adaptable, reliable virtual machine environment being built as a follow-on to PVM. The three fundamental concepts presented here are parallel plugins, fault-tolerant distributed control, and dynamically merging and splitting virtual machines. The distributed control mechanisms provide the support framework necessary for coordinating and applying parallel plugins that allow applications to customize or tune their operating environment on-the-fly. In the spirit of CUMULVS, Harness applications can plug into each other to couple for collaborative computing. Virtual machines that merge and split can assist applications in dynamically utilizing different computing resources to suit changing computational needs.
APA, Harvard, Vancouver, ISO, and other styles
49

D’Amato, Marcello, Christian Di Pietro, and Marco M. Sorge. "Credit allocation in heterogeneous banking systems." German Economic Review 21, no. 1 (April 28, 2020): 1–33. http://dx.doi.org/10.1515/ger-1000-0018.

Full text
Abstract:
World banking systems are almost invariably populated by relatively diverse financial institutions. This paper studies the operation of credit markets where heterogeneous banks compete for investment projects of varying quality in the presence of informational asymmetries. We emphasize two dimensions of heterogeneity – access to project-specific information vis-à-vis funding costs – which naturally reflect lenders’ comparative disadvantages in the competitive landscape. Two main findings stand out. First, competition across heterogeneous banks can produce multiple equilibria. Thus, economies with similar fundamentals may well display a variety of interest rates and/or market shares for the operating institutions. Second, market failures (overlending) always prove mitigated in heterogeneous banking systems, relative to a world with equally uninformed lenders. This discipline effect however comes with a chance for market fragility, whereby modest changes in the business environment or other fundamentals can trigger large shifts in the price of credit, leading to either highly selective markets or rather ones which overfund ventures of the lowest quality. Extensions of the basic model and some policy implications are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
50

Siler, C. G. F., R. J. Madix, and C. M. Friend. "Designing for selectivity: weak interactions and the competition for reactive sites on gold catalysts." Faraday Discussions 188 (2016): 355–68. http://dx.doi.org/10.1039/c5fd00192g.

Full text
Abstract:
A major challenge in heterogeneous catalysis is controlling reaction selectivity, especially in complex environments. When more than one species is present in the gas mixture, the competition for binding sites on the surface of a catalyst is an important factor in determining reaction selectivity and activity. We establish an experimental hierarchy for the binding of a series of reaction intermediates on Au(111) and demonstrate that this hierarchy accounts for reaction selectivity on both the single crystal surface and under operating catalytic conditions at atmospheric pressure using a nanoporous Au catalyst. A partial set of measurements of relative binding has been measured by others on other catalyst materials, including Ag, Pd and metal oxide surfaces; a comparison demonstrates the generality of this concept and identifies differences in the trends. Theoretical calculations for a subset of reactants on Au(111) show that weak van der Waals interactions are key to predicting the hierarchy of binding strengths for alkoxides bound to Au(111). This hierarchy is key to the control of the selectivity for partial oxidation of alcohols to esters on both Au surfaces and under working catalytic conditions using nanoporous gold. The selectivity depends on the competition for active sites among key intermediates. New results probing the effect of fluorine substitution are also presented to extend the relation of reaction selectivity to the hierarchy of binding. Motivated by an interest in synthetic manipulation of fluorinated organics, we specifically investigated the influence of the –CF3 group on alcohol reactivity and selectivity. 2,2,2-Trifluoroethanol couples on O-covered Au(111) to yield CF3CH2O–C(O)(CF3), but in the presence of methanol or ethanol it preferentially forms the respective 2,2,2-trifluoroethoxy-esters. The ester is not the dominant product in any of these cases, though, indicating that the rate of β-H elimination from adsorbed trifluoroethoxy is slower than that for either adsorbed methoxy or ethoxy, consistent with their relative estimated β-C–H bond strengths. The measured equilibrium constants for the competition for binding to the surface are 2.9 and 0.38 for ethanol and methanol, respectively, vs. 2,2,2-trifluoroethanol, indicating that the binding strength of 2,2,2-trifluoroethoxy is weaker than ethoxy, but stronger than methoxy. These results are consistent with weakening of the interactions between the surface and the alkyl group due to Pauli repulsion of the electron-rich CF3 group from the surface, which offsets the van der Waals attraction. These experiments provide guiding principles for understanding the effect of fluorination on heterogeneous synthesis and further demonstrate the key role of molecular structure in determining reaction selectivity.
APA, Harvard, Vancouver, ISO, and other styles
