Journal articles on the topic 'Managing redundancy'

To see the other types of publications on this topic, follow the link: Managing redundancy.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Managing redundancy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kalyuga, Slava, Paul Chandler, and John Sweller. "Managing split-attention and redundancy in multimedia instruction." Applied Cognitive Psychology 13, no. 4 (August 1999): 351–71. http://dx.doi.org/10.1002/(sici)1099-0720(199908)13:4<351::aid-acp589>3.0.co;2-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Jian Jun, Zhi Feng Liu, and Ji Kai Ma. "The Study of the Managing Node Redundancy of Real-Time Industrial Ethernet POWERLINK." Applied Mechanics and Materials 336-338 (July 2013): 2388–91. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.2388.

Full text
Abstract:
This paper analyses the master station redundancy technology of the POWERLINK network. Master station redundancy is achieved through network monitoring and data refresh. Managing Node redundancy ensures that POWERLINK cyclic production continues after the failure of the current master station; the switch-over (recovery) time of the POWERLINK system is within two POWERLINK cycle times. This ensures a very fast restoration of normal operation without any downtime for the control system.
APA, Harvard, Vancouver, ISO, and other styles
3

Proenza, Julián, José Miro-Julia, and Hans Hansson. "Managing redundancy in CAN-based networks supporting N-Version Programming." Computer Standards & Interfaces 31, no. 1 (January 2009): 120–27. http://dx.doi.org/10.1016/j.csi.2007.11.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wright, W. E., and J. C. Hall. "Advanced Aircraft Gas Turbine Engine Controls." Journal of Engineering for Gas Turbines and Power 112, no. 4 (October 1, 1990): 561–64. http://dx.doi.org/10.1115/1.2906205.

Full text
Abstract:
With the advent of vectored thrust, vertical lift, and fly-by-wire aircraft, the complexity of aircraft gas turbine control systems has evolved to the point wherein they must approach or equal the reliability of current quad-redundant flight control systems. To advance the technology of high-reliability engine controls, one solution to the Byzantine Generals Problem (Lamport et al., 1982) is presented as the foundation for a fault-tolerant engine control architecture. In addition to creating a control architecture, an approach to managing the architecture's redundancy is addressed.
APA, Harvard, Vancouver, ISO, and other styles
5

Burmester, Mike, Tri Van Le, and Alec Yasinsac. "Adaptive gossip protocols: Managing security and redundancy in dense ad hoc networks." Ad Hoc Networks 5, no. 3 (April 2007): 313–23. http://dx.doi.org/10.1016/j.adhoc.2005.11.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Su, Hang, Nima Enayati, Luca Vantadori, Andrea Spinoglio, Giancarlo Ferrigno, and Elena De Momi. "Online human-like redundancy optimization for tele-operated anthropomorphic manipulators." International Journal of Advanced Robotic Systems 15, no. 6 (November 1, 2018): 172988141881469. http://dx.doi.org/10.1177/1729881418814695.

Full text
Abstract:
Robot human-like behavior can enhance the performance of human–robot cooperation, with markedly more natural interaction. This also holds for redundant robots with anthropomorphic kinematics. In this article, we translated the human ability to manage redundancy into the control of a seven-degrees-of-freedom anthropomorphic robot arm (LWR4+, KUKA, Germany) during tele-operated tasks. We implemented a nonlinear regression method, based on neural networks, between the human arm elbow swivel angle and the hand target pose to achieve an anthropomorphic arm posture during tele-operation tasks. The method was assessed in simulation, and experiments were performed with virtual reality tracking tasks in a lab environment. The results showed that the robot achieves a human-like arm posture during tele-operation, and that subjects prefer to work with the biologically inspired robot. The proposed method can be applied to the control of anthropomorphic robot manipulators for tele-operated collaborative tasks, such as in factories or in operating rooms.
APA, Harvard, Vancouver, ISO, and other styles
7

Ahmed, Ejaz, Nik Bessis, Peter Norrington, and Yong Yue. "Managing Inconsistencies in Data Grid Environments." International Journal of Grid and High Performance Computing 2, no. 4 (October 2010): 51–64. http://dx.doi.org/10.4018/jghpc.2010100105.

Full text
Abstract:
Much work has been done in the area of data access and integration using various data mapping, matching, and loading techniques. One of the main concerns when integrating data from heterogeneous data sources is data redundancy, mainly due to the different business contexts and purposes for which the data systems were originally built. A common process for accessing data from integrated databases involves the use of each data source's own catalogue or metadata schema. In this article, the authors take the view that there is a greater chance of data inconsistencies, such as data redundancy, when integrating data within a grid environment than in traditional distributed paradigms. The importance of improving the data search and matching process is briefly discussed, and a partial service-oriented generic strategy is adopted to consolidate the distinct catalogue schemas of federated databases so that information can be accessed seamlessly. To this end, a matching strategy between structure objects and data values across federated databases in a grid environment is proposed.
APA, Harvard, Vancouver, ISO, and other styles
8

Mattei, Massimiliano, and Gaetano Paviglianiti. "MANAGING SENSOR HARDWARE REDUNDANCY ON A SMALL COMMERCIAL AIRCRAFT WITH H∞ FDI OBSERVERS." IFAC Proceedings Volumes 38, no. 1 (2005): 347–52. http://dx.doi.org/10.3182/20050703-6-cz-1902.01860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Walker, Brian, and Brunilde Sansó. "Managing Redundancy in Distributed Computer Networks: A State Transition Graph Approach for the Stashing Problem." Operations Research 46, no. 3 (June 1998): 305–15. http://dx.doi.org/10.1287/opre.46.3.305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kovalenko, Pavel Y., Valentin I. Mukhin, and Mihail D. Senyuk. "Development of a methodology for data validation in power systems using different types of measurements." E3S Web of Conferences 288 (2021): 01062. http://dx.doi.org/10.1051/e3sconf/202128801062.

Full text
Abstract:
In recent years, the technology of synchrophasor measurements has been introduced in power systems around the world. When managing a power system and predicting its operating conditions, an important task is to check the validity of power system data. At the same time, traditional types of measurements, such as digital fault recorders and telemetry devices, are still widely used. It is known that redundancy of measurements contributes to a more accurate solution of the data validation problem. It is therefore useful to create a method for data validation in power systems that can involve various types of measurements in order to increase redundancy and, hence, the overall accuracy of measurements. This study presents several validity criteria that use the idea described above. The results of testing the proposed methodology on a substation model in the Matlab software package are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
11

Varma Perumal, Khobinath, and Siva Priya Thiagarajah. "Enhancement of Undergraduate Time Management Through the Use of A Lab Schedule Reminder App." Journal of Engineering Technology and Applied Physics 1, no. 1 (June 19, 2019): 8–12. http://dx.doi.org/10.33093/jetap.2019.1.1.30.

Full text
Abstract:
Engineering undergraduates have to attend a certain number of laboratory hours to obtain their degree, with up to 10 laboratory sessions per semester. One problem that arises from this requirement is complicated lab schedule management, which leads to more missed lab sessions and to redundant laboratory replacement sessions, decreasing the efficiency of both students and staff. This paper provides a solution to this problem: a lab schedule mobile application built on Android. The LabSkedule application reminds students of lab sessions, reduces the hassle of viewing lab schedules, and helps them manage their various lab sessions. In a post-launch survey, 92% of students reported improved efficiency in managing lab schedules and improved attendance at laboratory sessions.
APA, Harvard, Vancouver, ISO, and other styles
12

Gružauskas, Valentas, and Mantas Vilkas. "Managing Capabilities for Supply Chain Resilience Through it Integration." Economics and Business 31, no. 1 (August 28, 2017): 30–43. http://dx.doi.org/10.1515/eb-2017-0016.

Full text
Abstract:
The trend toward e-commerce, a population estimated to reach 11 billion by 2050, and an increase in the urbanization level to 70 % require a rethinking of the current supply chain. These trends have changed the distribution process: delivery distances are decreasing, product variety is increasing, and more products are being sold in smaller quantities. The concept of supply chain resilience has therefore gained more recognition in recent years. The scientific literature analysis conducted by the authors indicates several capabilities that influence supply chain resilience: collaboration, flexibility, redundancy and integration are the most influential. However, the authors identify that the combination of these capabilities for supply chain resilience is under-researched. They indicate that by combining these capabilities with the upcoming technologies of Industry 4.0, supply chain resilience can be achieved. In future work, the authors plan to identify the influence of these capabilities on supply chain resilience, to quantify supply chain resilience, and to provide further practices for using the Industry 4.0 concept for supply chain resilience.
APA, Harvard, Vancouver, ISO, and other styles
13

BAYRAK, TUNCAY, and MARTHA R. GRABOWSKI. "SAFETY-CRITICAL WIDE AREA NETWORK PERFORMANCE EVALUATION." International Journal of Information Technology & Decision Making 02, no. 04 (December 2003): 651–67. http://dx.doi.org/10.1142/s0219622003000823.

Full text
Abstract:
There has been a considerable amount of research in the area of network performance evaluation. However, little of the research is focused on the evaluation of real-time safety-critical WANs, a need that motivated this research. Over the years, networks have been evaluated by different disciplines from different perspectives. Many of these evaluations focus on network technical performance, or an organization's performance when using a network, or individual users' performance when using a network. In this study, network performance was measured using empirical data from an operational WAN and by utilizing well-defined and well-known network performance metrics such as reliability, availability, and response time. In general, increased use of a real-time WAN in this study was associated with negative impacts on WAN performance and increased redundancy was generally associated with positive impacts, allowing greater system usage and higher network workload, as intended. The impacts of increasing redundancy on MTBF were mixed, as were the MTTR impacts; availability values varied considerably by port. The network performance data thus shows mixed empirical results from increases in network usage and redundancy, which highlights the importance of managing and measuring network performance at both the system and the local level.
APA, Harvard, Vancouver, ISO, and other styles
14

Zhong, Lunlong, and Félix Mora-Camino. "A two-stage approach for managing actuators redundancy and its application to fault tolerant flight control." Chinese Journal of Aeronautics 28, no. 2 (April 2015): 469–77. http://dx.doi.org/10.1016/j.cja.2015.02.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Jambunathan, Baskaran, and Dr Y. Kalpana. "Design of devops solution for managing multi cloud distributed environment." International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 637. http://dx.doi.org/10.14419/ijet.v7i2.33.14854.

Full text
Abstract:
DevOps is an emerging field in IT infrastructure management and cloud automation. Organizations spend time and money on continuous improvement and optimization of their infrastructure management, seeking avenues for efficiency and productivity gains, reduced redundancy and optimized cost in the development and deployment of their enterprise applications. More importantly, organizations have multiple environments to manage and hence need a mechanism to continuously monitor and optimize operational tasks and improve collaboration among teams across the organization. In addition, many companies have migrated, or are migrating, to the cloud; some have multiple cloud environments across regions, which makes management even more difficult and calls for a mechanism to address the key challenges and risks of managing DevOps in a multi-cloud setting. In this article, we discuss the key challenges and how they can be addressed in a multi-cloud environment, and we propose an integrated solution that addresses these challenges with the right set of tools and a framework design that enables DevOps in multiple clouds efficiently.
APA, Harvard, Vancouver, ISO, and other styles
16

Subrin, Kévin, Laurent Sabourin, Grigore Gogu, and Youcef Mezouar. "Performance Criteria to Evaluate a Kinematically Redundant Robotic Cell for Machining Tasks." Applied Mechanics and Materials 162 (March 2012): 413–22. http://dx.doi.org/10.4028/www.scientific.net/amm.162.413.

Full text
Abstract:
Machine tools and robots have both evolved fundamentally, and we can now question the abilities of new industrial robots concerning accurate task realization under high constraints. Requirements in terms of kinematic and dynamic capabilities in High Speed Machining (HSM) are increasingly demanding. To face the challenge of performance improvement, parallel and hybrid robotic architectures have emerged, and a new generation of industrial serial robots able to perform machining tasks has been designed. In this paper, we evaluate the performance criteria of an industrial robot included in a kinematically redundant robotic cell dedicated to a machining task. First, we present the constraints of the machining process (speed, accuracy, etc.). We then detail the direct geometric model and the kinematic model of a robot with a closed chain in the arm, and we propose a procedure for managing kinematic redundancy while integrating various criteria. Finally, we present the evolution of the criteria along a given trajectory in order to define the best location for a rotary table and to analyse the manipulator's stiffness.
APA, Harvard, Vancouver, ISO, and other styles
17

Perdana, Andreas, and Suhendro Yusuf. "Perancangan Arsitektur Enterprise Menggunakan Togaf Framework (Studi Kasus : Cv. Agung Lestari)." Sienna 1, no. 1 (July 21, 2020): 10–23. http://dx.doi.org/10.47637/sienna.v1i1.267.

Full text
Abstract:
CV. Agung Lestari is a company engaged in services for new vehicle documents. The company's activities use Information Systems (IS) and Information Technology (IT), and an application is already in use in the Administration Division. However, there is no data integration or connection between the information systems of the divisions. Unintegrated data allows for data redundancy, errors, a lack of data accuracy and reduced efficiency. A framework for planning, designing and managing infrastructure, called Enterprise Architecture (EA), is required. The resulting design comprises an Administration System, a Finance Management System, Human Resources and Accounting, with the data of every information system integrated.
APA, Harvard, Vancouver, ISO, and other styles
18

Wright, Robert, Ivan Stoianov, Panos Parpas, Kevin Henderson, and John King. "Adaptive water distribution networks with dynamically reconfigurable topology." Journal of Hydroinformatics 16, no. 6 (May 19, 2014): 1280–301. http://dx.doi.org/10.2166/hydro.2014.086.

Full text
Abstract:
This paper presents a novel concept of adaptive water distribution networks with dynamically reconfigurable topology for optimal pressure control, leakage management and improved system resilience. The implementation of District Meter Areas (DMAs) has greatly assisted water utilities in reducing leakage. DMAs segregate water networks into small areas, the flow in and out of each area is monitored and thresholds are derived from the minimum night flow to trigger the leak localization. A major drawback of the DMA approach is the reduced redundancy in network connectivity which has a severe impact on network resilience, incident management and water quality deterioration. The presented approach for adaptively reconfigurable networks integrates the benefits of DMAs for managing leakage with the advantages of large-scale looped networks for increased redundancy in connectivity, reliability and resilience. Self-powered multi-function network controllers are designed and integrated with novel telemetry tools for high-speed time-synchronized monitoring of the dynamic hydraulic conditions. A computationally efficient and robust optimization method based on sequential convex programming is developed and applied for the dynamic topology reconfiguration and pressure control of water distribution networks. An investigation is carried out using an operational network to evaluate the implementation and benefits of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
19

Bui, Hong T. M., Vinh Sum Chau, and Jacqueline Cox. "Managing the survivor syndrome as scenario planning methodology … and it matters!" International Journal of Productivity and Performance Management 68, no. 4 (April 8, 2019): 838–54. http://dx.doi.org/10.1108/ijppm-05-2018-0202.

Full text
Abstract:
Purpose – The importance of foresight is discussed in relation to why traditional scenario planning methodology is problematic at achieving it. The "survivor syndrome" is borrowed from the human resources literature and presented as a metaphor for foresight to illustrate how better "scenarios" can be achieved by understanding the syndrome better. A practice perspective is given on the use of a seven-theme framework as a method of interviewing survivors.
Design/methodology/approach – The paper draws from empirical research that took place during the 2008 global financial crisis to illustrate the richness of the insights that would otherwise not be obtainable through scenario planning methods that do not involve "survivors." In that research, semi-structured interviews were employed with key personnel at multiple levels of one private and one public organization that had undergone a redundancy process at the time of the crisis, to explore its effect on the remaining workforce.
Findings – The "survivor syndrome" itself would be minimized if managers considered the feelings of survivors with more open communication. Survivors in private firms were found generally to experience anxiety, but are more likely to remain motivated than their counterparts in the public sector. These detailed insights create more accurate "scenarios" in scenario planning exercises.
Originality/value – Organizational performance can be better enhanced if the survivor syndrome can be better managed. In turn, scenario planning, as a form of organizational foresight, is better practiced through managing the survivor syndrome. Scenario planning methodology has proliferated well in the human resource management literature.
APA, Harvard, Vancouver, ISO, and other styles
20

Yarosh, Olga, Natalia Kalkova, and Viktor Reutov. "Managing consumers’ visual attention in the context of information asymmetry." Upravlenets 11, no. 5 (November 6, 2020): 97–111. http://dx.doi.org/10.29141/2218-5003-2020-11-5-8.

Full text
Abstract:
The problem of imperfect information is one of the central focuses of institutional economics. Information asymmetry creates a contextual environment for marketing promotion. Its manifestations are associated with both the inability for the buyer to know the true price of the product and the risk of making the wrong choice. The paper deals with the assessment of information asymmetry that occurs in retail trade, as well as the development of algorithms for identifying areas of visual consumer interest that are responsible for consumer decision-making. The methodological framework of the research is based on experimental economics, including the methods of classical marketing and neuromarketing. The information base of the study embraces the research works of Russian and foreign scholars published in the leading peer-reviewed journals, as well as the authors’ previous studies, designed algorithms and methods for analyzing marketing and neuromarketing data. Empirical and experimental results show that when making a choice and a purchase decision, the buyer is guided by different types of information attributes that create information asymmetry. We put forward and statistically confirm five hypotheses concerning the optimization of consumer visual attention management, such as: there are differences in eye movement behavior of men and women when choosing goods; there is a correlation between time spent in a store and the number of impulse purchases made; the design of supermarket shelves increases the amount of visual attention; there is a relationship between the visual hierarchy of products and consumer choice; and information asymmetry of product display proves the redundancy of unstructured visual information. 
The research results are useful for retail businesses and are of high importance in terms of the fundamental understanding of the space organization in stores, as this allows getting new evidence about the possibilities of consumer visual attention management.
APA, Harvard, Vancouver, ISO, and other styles
21

Mohammed, Hala, Wameedh Flayyih, and Fakhrul Rokhani. "Tolerating Permanent Faults in the Input Port of the Network on Chip Router." Journal of Low Power Electronics and Applications 9, no. 1 (February 27, 2019): 11. http://dx.doi.org/10.3390/jlpea9010011.

Full text
Abstract:
Deep submicron technologies continue to develop according to Moore's law, allowing hundreds of processing elements and memory modules to be integrated on a single chip, forming multi-/many-processor systems-on-chip (MPSoCs). The network on chip (NoC) arose as an interconnection for this large number of processing modules. However, the aggressive scaling of transistors makes the NoC more vulnerable to both permanent and transient faults. Permanent faults persistently affect circuit functionality from the time of their occurrence. The router is the heart of the NoC; thus, this research focuses on tolerating permanent faults in the router's input buffer component, particularly the virtual channel state fields. These fields track packets from the moment they enter the input component until they leave for the next router. A hardware redundancy approach is used to tolerate faults in these fields because of their crucial role in managing router operation. Built-in self-test logic is integrated into the input port to periodically detect permanent faults without interrupting router operation. These approaches make the NoC router more reliable than the unprotected NoC router, with area and power overheads of at most 17% and 16%, respectively. In addition, the hardware redundancy approach preserves network performance in the presence of a single fault by avoiding virtual channel closure.
APA, Harvard, Vancouver, ISO, and other styles
22

Fernández, Javier D., Miguel A. Martínez-Prieto, Pablo de la Fuente Redondo, and Claudio Gutiérrez. "Characterising RDF data sets." Journal of Information Science 44, no. 2 (January 9, 2017): 203–29. http://dx.doi.org/10.1177/0165551516677945.

Full text
Abstract:
The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
APA, Harvard, Vancouver, ISO, and other styles
23

Tkachov, Vitalii, Andriy Kovalenko, Heorhii Kuchuk, and Iana Ni. "Method of ensuring the survivability of highly mobile computer networks." Advanced Information Systems 5, no. 2 (June 22, 2021): 159–65. http://dx.doi.org/10.20998/2522-9052.2021.2.24.

Full text
Abstract:
The article discusses the features of the functioning of mobile computer networks based on small-sized aircraft (highly mobile computer networks). It is shown that such networks, in contrast to stationary or low-mobility ones, have a low level of survivability in case of local damage to their nodes. The purpose of the article is to develop a method for ensuring the survivability of highly mobile computer networks under destructive external influences that cause local destruction of network nodes or of the links between them. The method assesses survivability at all stages of network operation and changes the main function so as to implement all available strategies for the functioning of the network, once the critical values of network integrity, and of its ability to perform at least one of the available functions, have been determined. The results obtained make it possible to continue theoretical research into strategies for managing highly mobile computer networks in extreme situations, and to develop an applied solution that ensures the survivability of highly mobile computer networks by building multifunctional or redundant structures and increasing their redundancy. The studies allow us to conclude that the proposed method can be used at the design stage of highly mobile computer networks characterized by increased survivability and capable of functioning under multiple local damages without catastrophic destructive consequences for the network structure.
APA, Harvard, Vancouver, ISO, and other styles
24

Rosenbeck Gøeg, K., and A. Randorff Højen. "SNOMED CT Implementation." Methods of Information in Medicine 51, no. 06 (2012): 529–38. http://dx.doi.org/10.3414/me11-02-0023.

Full text
Abstract:
Summary – Clinical practice, as well as research and quality assurance, benefits from the unambiguous clinical information that results from the use of a common terminology such as the Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT). A common terminology is a necessity for consistent reuse of data and for supporting semantic interoperability. Managing the use of terminology for large cross-specialty Electronic Health Record (EHR) systems, or beyond the level of single EHR systems, requires that mappings be kept consistent. The objective of this study is to provide a clear methodology for SNOMED CT mapping that enhances the applicability of SNOMED CT despite incompleteness and redundancy. Such mapping guidelines are presented based on an in-depth analysis of 14 different EHR templates retrieved from five Danish and Swedish EHR systems. Each mapping is assessed against defined quality criteria, and mapping guidelines are specified. Future work will include guideline validation.
APA, Harvard, Vancouver, ISO, and other styles
25

Benaziz, Besma, Okba Kazar, Laid Kahloul, Ilham Kitouni, and Samir Bourekkache. "Two-Level Data Collection for an Energy-Efficient Solution in Wireless Sensor Networks." International Journal of Agricultural and Environmental Information Systems 7, no. 4 (October 2016): 50–67. http://dx.doi.org/10.4018/ijaeis.2016100104.

Full text
Abstract:
Density in sensor networks often causes data redundancy, which is a frequent source of high energy consumption. Data collection techniques have been proposed to avoid retransmission of the same data by several sensors. In this paper, the authors propose a new data collection strategy for energy-efficient wireless sensor networks (WSNs), based on static agents and clustering of nodes, called the Two-Level Data Collection Strategy (TLDC). It consists of a two-level hierarchy of node groups and builds on accumulated experience to reorganize the groups. Cooperation between agents can significantly reduce the communication cost by managing the data collection smartly. To validate the proposed scheme, the authors use the timed automata (TA) model and the UPPAAL engine, and compare the results before and after reorganization. They establish that the proposed approach reduces the communication cost within a group and thus minimizes the energy consumed.
APA, Harvard, Vancouver, ISO, and other styles
26

Currie, Robert, Teng Li, and Andrew Washbrook. "Using ZFS to manage Grid storage and improve middleware resilience." EPJ Web of Conferences 214 (2019): 04043. http://dx.doi.org/10.1051/epjconf/201921404043.

Full text
Abstract:
ZFS is a powerful storage management technology combining filesystem, volume management and software RAID technology into a single solution. The WLCG Tier-2 computing at Edinburgh was an early adopter of ZFS on Linux, with this technology being used to manage all of our storage systems including servers with aging components. Our experiences of ZFS deployment have been shared with the Grid storage community which has led to additional sites adopting this technology. ZFS is highly configurable therefore allowing systems to be tuned to give best performance under diverse workloads. This paper highlights our experiences in tuning our systems for best performance when combining ZFS with DPM storage management. This has resulted in reduced system load and better data throughput. This configuration also provides the high redundancy required for managing older storage servers. We also demonstrate how ZFS can be combined with Distributed Replicated Block Device (DRBD) technology to provide a performant and resilient hypervisor solution to host multiple production Grid services.
APA, Harvard, Vancouver, ISO, and other styles
27

Nosrati, Masoud, and Mahmood Fazlali. "Community-based replica management in distributed systems." International Journal of Web Information Systems 14, no. 1 (April 16, 2018): 41–61. http://dx.doi.org/10.1108/ijwis-01-2017-0006.

Full text
Abstract:
Purpose One of the techniques for improving the performance of distributed systems is data replication, wherein new replicas are created to provide greater accessibility, fault tolerance and lower access cost of the data. In this paper, the authors propose a community-based solution for the management of data replication, based on a graph model of the communication latency between computing and storage nodes. Communities are clusters of nodes among which the communication latency is minimal. The purpose of this study is to use this method to minimize the latency and access cost of the data. Design/methodology/approach This paper uses the Louvain algorithm for finding the best communities. In the proposed algorithm, each time a node requests a file, the cost of accessing a file located outside the applicant's community is calculated and accumulated. When the accumulated cost exceeds a specified threshold, a new replica of the file is created in the applicant's community. Besides, the number of replicas of each file is limited to prevent the system from creating useless and redundant data. Findings To evaluate the method, four metrics were introduced and measured: communication latency, response time, data access cost and data redundancy. The results indicated acceptable improvement in all of them. Originality/value So far, this is the first research that aims at managing replicas via community detection algorithms. It opens many opportunities for further studies in this area.
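The accumulated-cost replication rule described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the names (`ReplicaManager`, `threshold`, `max_replicas`) are invented, and communities are assumed to have already been found (e.g. by the Louvain algorithm).

```python
from collections import defaultdict

class ReplicaManager:
    """Sketch of the accumulated-cost rule: when the total cost of
    cross-community accesses to a file exceeds a threshold, replicate
    the file into the requesting community (up to a replica limit)."""

    def __init__(self, threshold, max_replicas):
        self.threshold = threshold
        self.max_replicas = max_replicas
        self.replicas = defaultdict(set)       # file -> communities holding a replica
        self.accumulated = defaultdict(float)  # (file, community) -> accumulated cost

    def add_replica(self, file, community):
        self.replicas[file].add(community)

    def request(self, file, community, access_cost):
        if community in self.replicas[file]:
            return "local"                     # served inside the applicant's community
        self.accumulated[(file, community)] += access_cost
        if (self.accumulated[(file, community)] > self.threshold
                and len(self.replicas[file]) < self.max_replicas):
            self.replicas[file].add(community)  # create a new replica here
            self.accumulated[(file, community)] = 0.0
            return "replicated"
        return "remote"
```

Repeated remote accesses eventually trigger replication, after which the community is served locally; the `max_replicas` cap reflects the abstract's limit on replicas per file.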
APA, Harvard, Vancouver, ISO, and other styles
28

Timmerberg, Jean Fitzpatrick, Kristin J. Krosschell, Sally Dunaway Young, David Uher, Chris Yun, and Jacqueline Montes. "Essential competencies for physical therapist managing individuals with spinal muscular atrophy: A delphi study." PLOS ONE 16, no. 4 (April 22, 2021): e0249279. http://dx.doi.org/10.1371/journal.pone.0249279.

Full text
Abstract:
Background and purpose With the availability and development of disease-modifying therapies for individuals with spinal muscular atrophy (SMA), new emerging phenotypes must be characterized, and potential new treatment paradigms tested. There is an urgent demand to develop an educational program that provides physical therapists (PTs) worldwide the necessary knowledge and training to contribute to best-practice care and clinical research. A competency-based education framework focuses on outcomes, not process, and learners progress only after competencies are demonstrated. The first step toward such a framework is defining outcomes. The purpose of this Delphi study was to develop consensus on those competencies deemed essential within the SMA PT community. Methods Purposive selection and snowball sampling techniques were used to recruit expert SMA PTs. Three web-based survey rounds were used to achieve consensus, defined as agreement among >80% of respondents. The first round gathered demographic information on participants as well as information on the clarity and redundancy of a list of competencies; the second round collected the same information on the revised list, together with whether participants agreed that the identified domains, and their definitions, captured the essence of an SMA PT; and the third asked participants to rank their agreement with each competency. Results Consensus revealed 35 competencies, organized under 6 domains, which were deemed essential for a PT working with persons with SMA. Discussion In order to develop a curriculum to meet the physical therapy needs of persons with SMA, it is imperative to establish defined outcomes and to achieve consensus on those outcomes within the SMA community. Conclusions This study identified essential competencies that will help to provide guidance in the development of a formal education program to meet these defined outcomes.
This can foster best-practice care and clinical decision-making for all PTs involved in the care of persons with SMA in a clinical and research setting.
APA, Harvard, Vancouver, ISO, and other styles
29

Wahab, Raja Azhan Syah Raja, Siti Nurulain Mohd Rum, Hamidah Ibrahim, Fatimah Sidi, and Iskandar Ishak. "A Method for Processing Top-k Continuous Query on Uncertain Data Stream in Sliding Window Model." WSEAS TRANSACTIONS ON SYSTEMS AND CONTROL 16 (May 25, 2021): 261–69. http://dx.doi.org/10.37394/23203.2021.16.22.

Full text
Abstract:
A data stream is a series of data generated sequentially in time from different sources. Processing such data is very important in many contemporary applications such as sensor networks, RFID technology, mobile computing and many more. The huge amount of data generated, and its frequent changes within a short time, make conventional processing methods insufficient. The Sliding Window Model (SWM) was introduced by Datar et al. to handle this problem. Avoiding multiple scans of the whole data set, optimizing memory usage, and processing only the most recent tuples are the main challenges. In uncertain data, the number of possible world instances grows exponentially, and it is highly difficult to meet the demands of top-k query processing in the shortest amount of time. Following the generation rules and the probability theory of this model, a framework is needed to sustain a top-k processing algorithm over the SWM until the candidates expire. Based on the literature review, no existing work has tackled the issues arising from top-k query processing over the possible world instances of uncertain data streams within the SWM. The major issues resulting from these scenarios need to be addressed, especially computational redundancy, which increases the computational cost within the SWM. Therefore, the main objective of this research work is to propose top-k query processing methods over uncertain data streams in the SWM, utilizing the score and the Possible World (PW) setting. In this study, a novel expiration and object indexing method is introduced to address the computational redundancy issue. We believe the proposed method can reduce computational costs by managing the insertion and exit policy on the right tuple candidates within a specified window frame. This research work will contribute to the area of computational query processing.
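The window mechanics this abstract relies on, expiring out-of-window tuples and re-ranking the survivors, can be sketched as follows. This is an illustrative count-based sliding window, not the paper's PW-based method; all names are invented for the example.

```python
import heapq
from collections import deque

def sliding_topk(stream, window, k):
    """Sketch of top-k over a sliding window: expire tuples whose
    timestamp falls out of the window, then report the k highest-scoring
    live tuples. `stream` yields (timestamp, score) in timestamp order."""
    live = deque()  # (timestamp, score), ordered by arrival
    for ts, score in stream:
        live.append((ts, score))
        while live and live[0][0] <= ts - window:  # exit (expiration) policy
            live.popleft()
        yield ts, heapq.nlargest(k, (s for _, s in live))
```

Because arrivals are timestamp-ordered, expiration only ever removes from the front of the deque, which is the property an expiration-indexing scheme exploits to avoid rescanning the whole window.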
APA, Harvard, Vancouver, ISO, and other styles
30

Bărbat, Boldur E. "DOMINO: Trivalent Logic Semantics in Bivalent Syntax Clothes." International Journal of Computers Communications & Control 2, no. 4 (April 1, 2007): 303. http://dx.doi.org/10.15837/ijccc.2007.3.2362.

Full text
Abstract:
The paper describes a rather general software mechanism developed primarily for decision making in dynamic and uncertain environments (typical application: managing overbooking). DOMINO (Decision-Oriented Mechanism for "IF" as Non-deterministic Operator) is meant to deal with undecidability due to any kind of future contingents. Its description here is self-contained but, since a validation is underway within a much broader undertaking involving agent-oriented software, to avoid redundancy, several aspects explained in very recent papers are abridged here. In essence, DOMINO acts as an "IF" with enhanced semantics: it can answer "YES", "NO" or "UNDECIDABLE in the time span given" (it renders control to an exception handler). Despite its trivalent logic semantics, it respects the rigours of structured programming and the syntax of bivalent logic (it is programmed in plain C++ to be applicable to legacy systems too). As for most novel approaches, expectations are high, involving a less algorithmic, less probabilistic, less difficult to understand method to treat undecidability in dynamic and uncertain environments, where postponing decisions means keeping open alternatives (to react better to rapid environment changes).
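The trivalent "IF" semantics can be sketched in a few lines. This is an illustrative reconstruction in Python rather than the paper's C++ mechanism, with `None` standing for "no truth value within the time span given" and all names invented.

```python
from enum import Enum

class Tri(Enum):
    YES = 1
    NO = 0
    UNDECIDABLE = 2

def domino_if(predicate, on_undecidable=None):
    """Sketch of an 'IF' with enhanced semantics: the predicate returns
    True, False, or None when no verdict is available in the time span
    given; None renders control to an exception handler and yields
    UNDECIDABLE instead of forcing a premature binary decision."""
    verdict = predicate()
    if verdict is None:
        if on_undecidable is not None:
            on_undecidable()  # e.g. postpone the decision, keep alternatives open
        return Tri.UNDECIDABLE
    return Tri.YES if verdict else Tri.NO
```

The caller keeps ordinary bivalent syntax (a single conditional call) while the third outcome flows through the handler, which mirrors the abstract's point about trivalent semantics in bivalent clothes.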
APA, Harvard, Vancouver, ISO, and other styles
31

Barbashova, N. E., and Yu V. Gerasimova. "The Problems of Matching Objectives, Instruments, and Financial Flows in Regional Policy." World of new economy 12, no. 3 (June 3, 2019): 90–97. http://dx.doi.org/10.26794/2220-6469-2018-12-3-90-97.

Full text
Abstract:
In the article, authors considered the relationship between the priorities of the regional policy of the Russian Federation, the applied instruments, and the corresponding financial flows. The study was conducted on the basis of the analysis of normative legal acts of the Russian Federation in the field of regional policy, budget policy and strategic planning, as well as strategic and program documents of the Federal level. The authors paid special attention to the instruments of stimulation of regional economic development. The authors came to the conclusion that the successful implementation of regional policy is hampered by the problems associated with the incompleteness and inconsistency of the system of strategic planning documents, the lack of interdepartmental coordination, the redundancy of the number of incentive mechanisms, the complexity of managing financial flows from various sources, and the gaps in information support. The findings of the study were tested on the example of Primorsky Krai. According to the analysis, the authors have formulated recommendations for public authorities aimed at improving the effectiveness of incentive mechanisms of state regional policy.
APA, Harvard, Vancouver, ISO, and other styles
32

Lanza-Gutierrez, Jose M., N. C. Caballe, Broderick Crawford, Ricardo Soto, Juan A. Gomez-Pulido, and Fernando Paredes. "Exploring Further Advantages in an Alternative Formulation for the Set Covering Problem." Mathematical Problems in Engineering 2020 (July 15, 2020): 1–24. http://dx.doi.org/10.1155/2020/5473501.

Full text
Abstract:
The set covering problem (SCP) is an NP-complete optimization problem, fitting many problems in engineering. The traditional SCP formulation does not directly address either solution unsatisfiability or set redundancy. As a result, solving methods have to control both aspects to avoid obtaining infeasible or cost-suboptimal solutions. In recent years, an alternative SCP formulation was proposed that directly covers both aspects. This alternative formulation received limited attention because managing both aspects is considered straightforward at this time. This paper questions whether there is some advantage in the alternative formulation beyond addressing the two issues. Thus, two studies based on a metaheuristic approach are proposed to identify whether any concept in the alternative formulation could be considered for enhancing a solving method based on the traditional SCP formulation. As a result, the authors conclude that there are concepts from the alternative formulation which could be applied for guiding the search process and for designing heuristic feasibility operators. Thus, such concepts could be recommended for designing state-of-the-art algorithms addressing the traditional SCP formulation.
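For context, the two aspects the traditional formulation leaves to the solver, feasibility and set redundancy, are visible even in a plain greedy sketch like the one below. This is an illustrative baseline, not the paper's metaheuristic, and it assumes the given sets can cover the universe.

```python
def greedy_set_cover(universe, sets, costs):
    """Greedy SCP sketch: repeatedly pick the set with the best cost per
    newly covered element (feasibility control), then run a redundancy-
    elimination pass dropping any chosen set whose elements are already
    covered by the remaining chosen sets."""
    universe = set(universe)
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # most cost-effective set over the still-uncovered elements
        i = min((j for j in range(len(sets)) if sets[j] & uncovered),
                key=lambda j: costs[j] / len(sets[j] & uncovered))
        chosen.append(i)
        uncovered -= sets[i]
    # redundancy elimination: try dropping the costliest sets first
    for i in sorted(chosen, key=lambda j: -costs[j]):
        rest = set().union(*(sets[j] for j in chosen if j != i)) if len(chosen) > 1 else set()
        if universe <= rest:
            chosen.remove(i)
    return chosen
```

The two post-processing concerns this sketch handles explicitly are exactly what the alternative formulation folds into the objective itself.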
APA, Harvard, Vancouver, ISO, and other styles
33

Hema S and Dr.Kangaiammal A. "A Secure Method for Managing Data in Cloud Storage using Deduplication and Enhanced Fuzzy Based Intrusion Detection Framework." November 2020 6, no. 11 (November 30, 2020): 165–73. http://dx.doi.org/10.46501/ijmtst061131.

Full text
Abstract:
Cloud services increase data availability so as to offer flawless service to the client. Because of this increased availability, more redundancy and more memory space are required to store such data. Cloud computing requires substantial storage and efficient protection for all types of data. With the amount of data produced increasing exponentially with time, storing replicated data contents is inevitable. Hence, storage optimization approaches become an important prerequisite for enormous storage domains like cloud storage. Data deduplication is a technique which compresses data by eliminating replicated copies of identical data; it is widely utilized in cloud storage to conserve bandwidth and minimize storage space. Although data deduplication eliminates data redundancy and replication, it likewise presents significant data privacy and security problems for the end-user. Considering this, in this work a novel security-based deduplication model is proposed to reduce the hash value of a given file and provide additional security for cloud storage. In the proposed method, the hash value of a given file is reduced employing the Distributed Storage Hash Algorithm (DSHA), and to provide security the file is encrypted using an Improved Blowfish Encryption Algorithm (IBEA). This framework also proposes an enhanced fuzzy-based intrusion detection system (EFIDS) that defines rules for the major attacks and thereby alerts the system automatically. Finally, the combination of data exclusion and encryption allows cloud users to effectively manage their cloud storage by avoiding repeated data encroachment. It also saves bandwidth and alerts the system to attackers. Experimental results reveal that the discussed algorithm yields improved throughput and bytes saved per second in comparison with other chunking algorithms.
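The core deduplication step, storing a chunk only when its content hash is new, can be sketched as below. DSHA and IBEA are the paper's own algorithms; SHA-256 stands in here purely for illustration, and the class and attribute names are invented.

```python
import hashlib

class DedupStore:
    """Sketch of hash-based deduplication: a chunk is stored only if its
    content hash has not been seen before; duplicates cost only a reference."""

    def __init__(self):
        self.chunks = {}   # hash -> chunk bytes
        self.saved = 0     # bytes not re-stored thanks to deduplication

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.chunks:
            self.saved += len(data)   # duplicate: count the space saved
        else:
            self.chunks[digest] = data
        return digest

    def get(self, digest: str) -> bytes:
        return self.chunks[digest]
```

In a secure variant like the one the abstract describes, chunks would additionally be encrypted before storage, so that deduplication does not expose plaintext content.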
APA, Harvard, Vancouver, ISO, and other styles
34

Tahir, Muhammad, Asim Rafiq, and Danish Hassan. "Designing, Planning & Implementation of IT Infrastructure & Security for A Brokerage House." Sir Syed Research Journal of Engineering & Technology 1, no. 1 (December 19, 2018): 6. http://dx.doi.org/10.33317/ssurj.v1i1.32.

Full text
Abstract:
M-Commerce is widely known as a wireless network technology, used as an essential means of communication for business transactions. Mobile commerce is seen as an augmentation of E-commerce empowered by cell phones, with transactions either directly or indirectly carried out or supported through mobile phones. ABC Brokerage House is the pioneer in Online Trading Services; in online trade, customers can trade using the online trade software and website. Before switching users from the traditional trading system to the mobile trading system, the weaknesses of the infrastructure, security breaches, and risk factors should be considered carefully. The infrastructure solution should be designed with respect to interoperability, security, scalability, and non-repudiation of ABC Brokerage House, following best practices for a Mobile Commerce Trading System. The solution has a distributed nature and is a suitable architecture for managing the business processes of ABC Stock Brokerage House. It is well planned and designed, in accordance with ISO recommendations, and provides flexibility as well as a redundancy plan based on clustering of the database server. This solution also moves the ABC Brokerage House infrastructure to a real-world infrastructure by removing risk factors and implementing the latest technologies.
APA, Harvard, Vancouver, ISO, and other styles
35

Sottocornola, Simone. "Software Based Control and Monitoring of a Hardware Based Track Reconstruction System for the ATLAS Experiment." EPJ Web of Conferences 214 (2019): 01021. http://dx.doi.org/10.1051/epjconf/201921401021.

Full text
Abstract:
During Run 2 of the Large Hadron Collider (LHC) the instantaneous luminosity exceeded the nominal value of 10³⁴ cm⁻² s⁻¹ with a 25 ns bunch crossing period and the number of overlapping proton-proton interactions per bunch crossing increased to a maximum of about 80. These conditions pose a challenge to the trigger system of the experiments that has to manage rates while keeping a good efficiency for interesting physics events. This document summarizes the software based control and monitoring of a hardware-based track reconstruction system for the ATLAS experiment, called Fast Tracker (FTK), composed of associative memories and FPGAs operating at the rate of 100 kHz and providing high quality track information within the available latency to the high-level trigger. In particular, we will detail the commissioning of the FTK within the ATLAS online software system presenting the solutions adopted for scaling up the system and ensuring robustness and redundancy. We will also describe the solutions to challenges such as controlling the occupancy of the buffers, managing the heterogeneous and large configuration, and providing monitoring information at sufficient rate.
APA, Harvard, Vancouver, ISO, and other styles
36

Tahir, Muhammad, Asim Rafiq, and Danish Hassan. "Designing, Planning & Implementation of IT Infrastructure & Security for A Brokerage House." Sir Syed University Research Journal of Engineering & Technology 8, no. 1 (December 19, 2018): 6. http://dx.doi.org/10.33317/ssurj.v8i1.32.

Full text
Abstract:
M-Commerce is widely known as a wireless network technology, used as an essential means of communication for business transactions. Mobile commerce is seen as an augmentation of E-commerce empowered by cell phones, with transactions either directly or indirectly carried out or supported through mobile phones. ABC Brokerage House is the pioneer in Online Trading Services; in online trade, customers can trade using the online trade software and website. Before switching users from the traditional trading system to the mobile trading system, the weaknesses of the infrastructure, security breaches, and risk factors should be considered carefully. The infrastructure solution should be designed with respect to interoperability, security, scalability, and non-repudiation of ABC Brokerage House, following best practices for a Mobile Commerce Trading System. The solution has a distributed nature and is a suitable architecture for managing the business processes of ABC Stock Brokerage House. It is well planned and designed, in accordance with ISO recommendations, and provides flexibility as well as a redundancy plan based on clustering of the database server. This solution also moves the ABC Brokerage House infrastructure to a real-world infrastructure by removing risk factors and implementing the latest technologies.
APA, Harvard, Vancouver, ISO, and other styles
37

Currow, David, Matthew Maddocks, David Cella, and Maurizio Muscaritoli. "Efficacy of Anamorelin, a Novel Non-Peptide Ghrelin Analogue, in Patients with Advanced Non-Small Cell Lung Cancer (NSCLC) and Cachexia—Review and Expert Opinion." International Journal of Molecular Sciences 19, no. 11 (November 5, 2018): 3471. http://dx.doi.org/10.3390/ijms19113471.

Full text
Abstract:
Cancer cachexia is a multilayered syndrome consisting of the interaction between tumor cells and the host, at times modulated by the pharmacologic treatments used for tumor control. Key cellular and soluble mediators, activated because of this interaction, induce metabolic and nutritional alterations. This results in mass and functional changes systemically, and can lead to increased morbidity and reduced length and quality of life. For most solid malignancies, a cure remains an unrealistic goal, and targeting the key mediators is ineffective because of their heterogeneity/redundancy. The most beneficial approach is to target underlying systemic mechanisms, an approach where the novel non-peptide ghrelin analogue anamorelin has the advantage of stimulating appetite and possibly food intake, as well as promoting anabolism and significant muscle mass gain. In the ROMANA studies, compared with placebo, anamorelin significantly increased lean body mass in non-small cell lung cancer (NSCLC) patients. Body composition analysis suggested that anamorelin is an active anabolic agent in patients with NSCLC, without the side effects of other anabolic drugs. Anamorelin also induced a significant and meaningful improvement of anorexia/cachexia symptoms. The ROMANA trials have provided unprecedented knowledge, highlighting the therapeutic effects of anamorelin as an initial, but significant, step toward directly managing cancer cachexia.
APA, Harvard, Vancouver, ISO, and other styles
38

Ponnamma Divakaran, Pradeep Kumar. "Idea-generation communities: when should host firms intervene?" Journal of Business Strategy 38, no. 6 (November 20, 2017): 80–88. http://dx.doi.org/10.1108/jbs-04-2016-0041.

Full text
Abstract:
Purpose The purpose of this paper is to explore when, why and to what extent firms should intervene in firm-hosted idea-generation communities, and to develop a framework for firm-intervention. Design/methodology/approach A single case-study is conducted in a highly successful firm-hosted idea-generation community called Dell IdeaStorm, whereby the netnographic approach is applied. Findings The findings indicate that, overall, firm-participation is minimal and passive, and varies according to the three stages of the idea lifecycle in the community, such as ideation stage – here firm-participation is limited to acknowledgement of new ideas, checking for redundancy, managing search tool and profanity filtering; discussion and development stage – here firm-participation is more active by providing feedback and clarification when needed, troubleshooting, asking for additional input on an idea, etc.; and completion stage – here a firm intervenes to screen and select the most promising ideas for implementation and also provides status updates on ideas. Originality/value This study contributes by developing a new framework for firm-participation, which is useful for the early diagnosis of community issues in idea generation. The framework is also a tactical tool which can be used to guide community managers in selecting the correct moderation approach, depending on the specific stage in the idea lifecycle.
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Chin-Hsuan, Posen Lee, Yen-Lin Chen, Chen-Wen Yen, and Chao-Wei Yu. "Study of Postural Stability Features by Using Kinect Depth Sensors to Assess Body Joint Coordination Patterns." Sensors 20, no. 5 (February 27, 2020): 1291. http://dx.doi.org/10.3390/s20051291.

Full text
Abstract:
A stable posture requires the coordination of multiple joints of the body, and how the human body coordinates its many joints to maintain a stable posture is an active subject of research. The number of degrees of freedom (DOFs) of the human motor system is considerably larger than the DOFs required for posture balance, and how the central nervous system manages this redundancy remains unclear. To understand this phenomenon, in this study, three local inter-joint coordination pattern (IJCP) features were introduced to characterize the strength, changing velocity, and complexity of the inter-joint couplings by computing the correlation coefficients between joint velocity signal pairs. In addition, for quantifying the complexity of IJCPs from a global perspective, another set of IJCP features was introduced by performing principal component analysis on all joint velocity signals. A Microsoft Kinect depth sensor was used to acquire the motion of 15 joints of the body. The efficacy of the proposed features was tested using the captured motions of two age groups (18–24 and 65–73 years) when standing still. With regard to the redundant DOFs of the joints of the body, the experimental results suggested that the body uses an inter-joint coordination strategy intermediate between the two extreme modes of total joint dependence and total joint independence. In addition, comparative statistical results of the proposed features proved that aging increases the coupling strength, decreases the changing velocity, and reduces the complexity of the IJCPs. These results also suggested that with aging, the balance strategy tends to be more joint dependent. Because of the simplicity of the proposed features and the affordability of the easy-to-use Kinect depth sensor, such an assembly can be used to collect large amounts of data to explore the potential of the proposed features in assessing the performance of the human balance control system.
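The coupling-strength idea, computing correlation coefficients between joint velocity signal pairs, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it assumes non-constant velocity signals, and the function names are invented.

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coupling_strength(joint_velocities):
    """Mean absolute correlation over all joint-velocity signal pairs:
    values near 1.0 suggest fully dependent joints, values near 0
    suggest independent joints."""
    pairs = list(combinations(joint_velocities, 2))
    return sum(abs(pearson(x, y)) for x, y in pairs) / len(pairs)
```

An intermediate coordination strategy, in the abstract's sense, would show up as a coupling strength well away from both 0 and 1.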
APA, Harvard, Vancouver, ISO, and other styles
40

MacKeith, Joy, Anna Good, and Sara Burns. "Development of a co-produced tool for monitoring and supporting the mental health of young people." BJPsych Open 7, S1 (June 2021): S267. http://dx.doi.org/10.1192/bjo.2021.711.

Full text
Abstract:
Aims The aims were to develop and validate a tool for monitoring and supporting the mental health of young people. Based on extensive experience of developing similar tools, the hypothesis was that a user-friendly tool could be produced with sound psychometric properties. Background The Outcomes Star is a suite of collaboratively completed, strengths-based tools with the dual roles of both supporting and monitoring change. Service users are empowered through their active involvement in identifying their strengths and creating their care plan. Triangle, the creator of the Outcomes Star, was approached by a number of organisations to develop a version of the Star for young people with mental health issues in early intervention services, and also to support young people in managing a diagnosed mental illness. Method Using a series of focus groups and an iterative process of refinement, we gathered data from practitioners and service users on the domains in which they wish to create change and the steps of the change process. A draft version of the new tool was piloted in two organisations by 67 workers and 177 young people over six months. The pilot data were analysed to assess the psychometric properties of My Mind Star (acceptability, skew, factor structure, internal consistency, item redundancy and responsiveness). Result The resulting tool, My Mind Star, consisted of seven domains: Feelings and emotions, Healthy lifestyle, Where you live, Friends and relationships, School, training and work, How you use your time, and Self-esteem. Almost all young people and practitioners (94%) agreed that their completed Star was 'a good summary of my life right now' and that it gave a better idea of service users' support needs. Psychometric analyses indicated a unidimensional structure with good internal consistency (α = .76) and no item redundancy. My Mind Star was responsive to change between the first and second readings, with medium and small-medium effect sizes. Conclusion Initial findings suggest that My Mind Star has good psychometric properties and is perceived as acceptable and useful by young people and practitioners. Further research is planned to conduct a full validation of the psychometric properties of this Star, including inter-rater reliability and predictive validity. Financial sponsorship of the study: Action for Children
APA, Harvard, Vancouver, ISO, and other styles
41

Ieronimakis, Kristina M., Christopher J. Colombo, Justin Valovich, Mark Griffith, Konrad L. Davis, and Jeremy C. Pamplin. "The Trifecta of Tele-Critical Care: Intrahospital, Operational, and Mass Casualty Applications." Military Medicine 186, Supplement_1 (January 1, 2021): 253–60. http://dx.doi.org/10.1093/milmed/usaa298.

Full text
Abstract:
ABSTRACT Introduction Tele-critical care (TCC) has improved outcomes in civilian hospitals and military treatment facilities (MTFs). Tele-critical care has the potential to concurrently support MTFs and operational environments and could increase capacity and capability during mass casualty events. TCC services distributed across multiple hub sites may flexibly adapt to rapid changes in patient volume and complexity to fully optimize resources. Given the highly variable census in MTF intensive care units (ICU), the proposed TCC solution offers system resiliency and redundancy for garrison, operational, and mass casualty needs, while also maximizing return on investment for the Defense Health Agency. Materials and Methods The investigators piloted simultaneous TCC support to the MTF during three field exercises: (1) TCC concurrently monitored the ICU during a remote mass casualty exercise: the TCC physician monitored a high-risk ICU patient while the nurse monitored 24 simulated field casualties; (2) TCC concurrently monitored the garrison ICU and a remote military medical field exercise: the physician provided tele-mentoring during prolonged field care for a simulated casualty, and the nurse provided hospital ICU TCC; (3) the TCC nurse simultaneously monitored the ICU while providing reach-back support to field hospital nurses training in a simulation scenario. Results TCC proved feasible during multiple exercises with concurrent tele-mentoring to different care environments including physician and nurse alternating operational and hospital support roles, and an ICU nurse managing both simultaneously. ICU staff noted enhanced quality and safety of bedside care. Field exercise participants indicated TCC expanded multipatient monitoring during mass casualties and enhanced novice caregiver procedural capability and scope of patient complexity. 
Conclusions Tele-critical care can extend critical care services to anywhere at any time in support of garrison medicine, operational medicine, and mass casualty settings. An interoperable, flexibly staffed, and rapidly expandable TCC network must be further developed given the potential for large casualty volumes to overwhelm a single TCC provider with multiple duties. Lessons learned from development of this capability should have applicability for managing military and civilian mass casualty events.
APA, Harvard, Vancouver, ISO, and other styles
42

Yasir, Muntadher Naeem, and Muayad Sadik Croock. "Software engineering based self-checking process for cyber security system in VANET." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 6 (December 1, 2020): 5844. http://dx.doi.org/10.11591/ijece.v10i6.pp5844-5852.

Full text
Abstract:
Recently, the cyber security of the Vehicular Ad hoc Network (VANET) has been studied for its two modes of communication, Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I), which are considered important because they make it possible to keep pace with worldwide technological development. People's safety is a priority in the development of technology in general, and of VANETs for police vehicles in particular. In this paper, we propose a software engineering based self-checking process to ensure the high randomness of the generated keys. These keys are used in the underlying cyber security system for the VANET. The proposed self-checking process employs a set of NIST tests, including the frequency, block and runs tests, as a threshold for accepting the generated keys. The introduced cyber security system includes three levels. Firstly, the registration phase asks vehicles to register in the system, and the network excludes unregistered ones; the proposed software engineering based self-checking process is adopted in this phase. Secondly, the authentication phase checks the vehicles after the registration phase. Thirdly, the system detects DoS attacks. The obtained results show the efficient performance of the proposed system in managing the security of the VANET. The self-checking process increased the randomness of the generated keys, by which the security factor is increased.
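The frequency and runs tests named above have standard closed forms in NIST SP 800-22, and using them as an acceptance threshold for candidate keys can be sketched as follows. The function names and the 0.01 significance level are illustrative choices, not taken from the paper.

```python
from math import erfc, sqrt

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the hypothesis
    that ones and zeros are equally likely in the bit sequence."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)          # +1 for each 1, -1 for each 0
    return erfc(abs(s) / sqrt(n) / sqrt(2))

def runs_test(bits):
    """NIST SP 800-22 runs test: p-value based on the number of runs of
    identical consecutive bits. Returns 0.0 when the frequency
    prerequisite from the spec fails."""
    n = len(bits)
    pi = sum(bits) / n
    if abs(pi - 0.5) >= 2 / sqrt(n):          # prerequisite check
        return 0.0
    v = 1 + sum(1 for i in range(1, n) if bits[i] != bits[i - 1])
    return erfc(abs(v - 2 * n * pi * (1 - pi)) / (2 * sqrt(2 * n) * pi * (1 - pi)))

def accept_key(bits, alpha=0.01):
    """Self-checking threshold: accept a candidate key only if it passes
    both tests at significance level alpha."""
    return monobit_test(bits) >= alpha and runs_test(bits) >= alpha
```

A biased key (all ones) fails the frequency test, while a strictly alternating key passes the frequency test but fails the runs test, which is why the paper applies several NIST tests together.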
APA, Harvard, Vancouver, ISO, and other styles
43

De Toni, Alberto Felice, Giovanni De Zan, and Cinzia Battistella. "Organisational capabilities for internal complexity: an exploration in the Coop stores." Business Process Management Journal 22, no. 1 (February 5, 2016): 196–230. http://dx.doi.org/10.1108/bpmj-06-2015-0089.

Full text
Abstract:
Purpose – Managing organizations in complex environments is a major challenge. Complexity is due not only to the external environment (market and/or technological turbulence) but also to the internal configuration and specificities. A recent stream of studies in the organizational literature suggested that organizations should develop and deploy specific capabilities for facing complexity, namely dynamic capabilities. This means becoming more flexible. The paper aims to discuss these issues. Design/methodology/approach – This paper proposes four main capabilities to face four dimensions of complexity. It then investigates whether it is more appropriate to focus on a specific capability when facing higher levels of a specific dimension of complexity. The research methodology is a multiple case study in seven different organizational units of the same superstore corporation. Findings – Data showed some important results. First of all, internal complexity is unit specific rather than corporate or industry specific. Moreover, it can derive not only from unpredictability and rate of change, but also from the variety of elements and their interactions. All these elements form complexity. Internal complexity is characterized by four main elements: uncertainty, dynamicity, diversity and interdependence. Finally, for each of these elements, different organizational strategies are used: in the case of uncertainty, for example, a strategy used by the companies is the sharing of information and the development of redundancy. Originality/value – The originality lies in linking different capabilities with different dimensions of internal complexity.
APA, Harvard, Vancouver, ISO, and other styles
44

Garrett, K. A., K. F. Andersen, F. Asche, R. L. Bowden, G. A. Forbes, P. A. Kulakow, and B. Zhou. "Resistance Genes in Global Crop Breeding Networks." Phytopathology® 107, no. 10 (October 2017): 1268–78. http://dx.doi.org/10.1094/phyto-03-17-0082-fi.

Full text
Abstract:
Resistance genes are a major tool for managing crop diseases. The networks of crop breeders who exchange resistance genes and deploy them in varieties help to determine the global landscape of resistance and epidemics, an important system for maintaining food security. These networks function as a complex adaptive system, with associated strengths and vulnerabilities, and implications for policies to support resistance gene deployment strategies. Extensions of epidemic network analysis can be used to evaluate the multilayer agricultural networks that support and influence crop breeding networks. Here, we evaluate the general structure of crop breeding networks for cassava, potato, rice, and wheat. All four are clustered due to phytosanitary and intellectual property regulations, and linked through CGIAR hubs. Cassava networks primarily include public breeding groups, whereas others are more mixed. These systems must adapt to global change in climate and land use, the emergence of new diseases, and disruptive breeding technologies. Research priorities to support policy include how best to maintain both diversity and redundancy in the roles played by individual crop breeding groups (public versus private and global versus local), and how best to manage connectivity to optimize resistance gene deployment while avoiding risks to the useful life of resistance genes. [Formula: see text] Copyright © 2017 The Author(s). This is an open access article distributed under the CC BY 4.0 International license.
APA, Harvard, Vancouver, ISO, and other styles
45

Mancuso, Carol A., Jessica R. Berman, Laura Robbins, and Stephen A. Paget. "Caution Before Embracing Team Mentoring in Academic Medical Research Training: Recommendations from a Qualitative Study." HSS Journal®: The Musculoskeletal Journal of Hospital for Special Surgery 17, no. 2 (February 19, 2021): 158–64. http://dx.doi.org/10.1177/1556331621992069.

Full text
Abstract:
Background: Multidisciplinary team mentoring increasingly is being advocated for biomedical research training. Before implementing a curriculum that could include team mentoring, we asked faculty about their opinions of this mentoring approach. Questions/Purposes: The goals of this study were to ask faculty about the benefits, challenges, and drawbacks of team mentoring in research training. Methods: Twenty-two experienced mentors representing all academic departments at a single institution were interviewed about perceived benefits, drawbacks, and their willingness to participate in team mentoring. Responses were analyzed with qualitative techniques using grounded theory and a comparative analytic strategy. Results: Faculty noted academic pursuits in medicine usually occur within, and not across, specialties; thus, multidisciplinary team mentoring would require coordinating diverse work schedules, additional meetings, and greater time commitments. Other challenges included ensuring breadth of expertise without redundancy, skillfully managing group dynamics, and ensuring there is one decision-maker. Potential drawbacks for mentees included reluctance to voice preferences and forge unique paths, perceived necessity to simultaneously please many mentors, and less likelihood of establishing a professional bond with any particular mentor. Conclusions: Faculty recommended caution before embracing team mentoring models. An acceptable alternative might be a hybrid model with a primary mentor at the helm and a selected group of co-mentors committed to a multidisciplinary effort. This model requires training and professional development for primary mentors.
APA, Harvard, Vancouver, ISO, and other styles
46

Rasina Begum, B., and P. Chithra. "Improving Security on Cloud Based Deduplication System." Asian Journal of Computer Science and Technology 7, S1 (November 5, 2018): 16–19. http://dx.doi.org/10.51983/ajcst-2018.7.s1.1813.

Full text
Abstract:
Cloud computing provides a scalable platform for large amounts of data and for processes that serve various applications by means of on-demand service. The storage services offered by clouds have become a new source of profit growth by providing a comparatively cheap, scalable, location-independent platform for managing users' data. The client uses cloud storage and enjoys high-end applications and services drawn from a shared pool of configurable computing resources. This reduces the burden of local data storage and maintenance, but it raises severe security issues for users' outsourced data. Data redundancy promotes data reliability in cloud storage; at the same time, it increases storage space, bandwidth, and security threats due to server vulnerabilities. Data deduplication helps to improve storage utilization. Backups are also smaller, which means less hardware and backup media, but deduplication brings many security issues of its own. Data reliability is a particularly risky issue in a deduplication storage system because there is a single copy of each file stored in the server, shared by all the data owners. If such a shared file/chunk were lost, a large amount of data would become unreachable. The main aim of this work is to implement a deduplication system without sacrificing security in cloud storage. It combines deduplication and convergent-key cryptography with reduced overhead.
APA, Harvard, Vancouver, ISO, and other styles
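The convergent-key idea that this abstract combines with deduplication can be sketched as follows. This is a stdlib-only illustration under stated assumptions: the SHA-256-based keystream is a toy stand-in for a real cipher (not production-grade), and the function names and store layout are hypothetical:

```python
import hashlib

def convergent_key(plaintext: bytes) -> bytes:
    """Derive the key from the content itself: identical files yield
    identical keys, which is what makes deduplication possible."""
    return hashlib.sha256(plaintext).digest()

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream built from SHA-256 (illustration only,
    NOT a real cipher): XOR each 32-byte chunk with a hash of key+offset."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(c ^ p for c, p in zip(chunk, pad))
    return bytes(out)

def store(dedup_store: dict, plaintext: bytes) -> str:
    """Encrypt under the convergent key and index by ciphertext hash;
    duplicate uploads collapse to a single stored copy."""
    key = convergent_key(plaintext)
    ciphertext = keystream_xor(key, plaintext)
    tag = hashlib.sha256(ciphertext).hexdigest()
    dedup_store.setdefault(tag, ciphertext)  # second upload is a no-op
    return tag
```

Because the key is derived from the plaintext, two owners of the same file produce the same ciphertext and the server keeps one copy, which is exactly the reliability trade-off the abstract warns about.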
47

Formetta, G., R. Mantilla, S. Franceschi, A. Antonello, and R. Rigon. "The JGrass-NewAge system for forecasting and managing the hydrological budgets at the basin scale: models of flow generation and propagation/routing." Geoscientific Model Development 4, no. 4 (November 4, 2011): 943–55. http://dx.doi.org/10.5194/gmd-4-943-2011.

Full text
Abstract:
Abstract. This paper presents a discussion of the predictive capacity of the implementation of the semi-distributed hydrological modeling system JGrass-NewAge. This model focuses on the hydrological budgets of medium scale to large scale basins as the product of the processes at the hillslope scale with the interplay of the river network. The part of the modeling system presented here deals with: (i) estimation of the space-time structure of precipitation; (ii) estimation of runoff production; (iii) aggregation and propagation of flows in channels; (iv) estimation of evapotranspiration; (v) automatic calibration of the discharge with the method of particle swarming. The system is based on a hillslope-link geometrical partition of the landscape, combining raster and vectorial treatment of hillslope data with vector based tracking of flow in channels. Measured precipitation is spatially interpolated with the use of kriging. Runoff production at each channel link is estimated through a peculiar application of the Hymod model. Routing in channels uses an integrated flow equation and produces discharges at any link end, for any link in the river network. Evapotranspiration is estimated with an implementation of the Priestley-Taylor equation. The model system assembly is calibrated using the particle swarming algorithm. A two year simulation of hourly discharge of the Little Washita (OK, USA) basin is presented and discussed with the support of some classical indices of goodness of fit, and analysis of the residuals. A novelty with respect to traditional hydrological modeling is that each of the elements above, including the preprocessing and the analysis tools, is implemented as a software component, built upon Object Modelling System v3 and jgrasstools prescriptions, that can be cleanly switched in and out at run-time, rather than at compile time. The possibility of creating different modeling products by the connection of modules with or without the calibration tool, as for instance in the case of the present modeling chain, reduces redundancy in programming, promotes collaborative work, enhances the productivity of researchers, and facilitates the search for the optimal modeling solution.
APA, Harvard, Vancouver, ISO, and other styles
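The particle-swarm calibration step mentioned in the abstract can be illustrated with a minimal sketch. This is a generic one-dimensional PSO, not the JGrass-NewAge implementation; the inertia/acceleration coefficients, function name, and default arguments are assumptions:

```python
import random

def calibrate_pso(objective, lo, hi, n_particles=30, n_iter=100, seed=42):
    """Minimal particle swarm optimisation of one model parameter:
    minimise `objective` over the interval [lo, hi]."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                        # each particle's best position
    pval = [objective(x) for x in xs]
    gbest = min(pbest, key=objective)    # swarm-wide best position
    for _ in range(n_iter):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # keep inside bounds
            f = objective(xs[i])
            if f < pval[i]:
                pbest[i], pval[i] = xs[i], f
                if f < objective(gbest):
                    gbest = xs[i]
    return gbest
```

In a calibration setting, `objective` would be a goodness-of-fit error between simulated and observed discharge; here a simple quadratic error surface stands in for it.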
48

Conanec, Alexandre, Brigitte Picard, Denis Durand, Gonzalo Cantalapiedra-Hijar, Marie Chavent, Christophe Denoyelle, Dominique Gruffat, Jérôme Normand, Jérôme Saracco, and Marie-Pierre Ellies-Oury. "New Approach Studying Interactions Regarding Trade-Off between Beef Performances and Meat Qualities." Foods 8, no. 6 (June 7, 2019): 197. http://dx.doi.org/10.3390/foods8060197.

Full text
Abstract:
The beef cattle industry is facing multiple problems, from the unequal distribution of added value to the poor matching of its product with fast-changing demand. Therefore, the aim of this study was to examine the interactions between the main variables, evaluating the nutritional and organoleptic properties of meat and cattle performances, including carcass properties, to assess a new method of managing the trade-off between these four performance goals. For this purpose, each variable evaluating the parameters of interest was statistically modeled based on data collected from 30 Blonde d'Aquitaine heifers. The variables were obtained after a statistical pre-treatment (clustering of variables) to reduce the redundancy of the 62 initial variables. The sensitivity analysis evaluated the importance of each independent variable in the models, and a graphical approach completed the analysis of the relationships between the variables. Then, the models were used to generate virtual animals and study the relationships between nutritional and organoleptic quality. No apparent link between the nutritional and organoleptic properties of meat (r = −0.17) was established, indicating that no important trade-off between these two qualities was needed. The 30 best and worst profiles were selected based on nutritional and organoleptic expectations set by a group of experts from the INRA (French National Institute for Agricultural Research) and Institut de l'Elevage (French Livestock Institute). The comparison between the two extreme profiles showed that heavier and fatter carcasses led to low nutritional and organoleptic quality.
APA, Harvard, Vancouver, ISO, and other styles
49

YEH, JINN-YI, and TAI-HSI WU. "Solutions for product configuration management: An empirical study." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 19, no. 1 (February 2005): 39–47. http://dx.doi.org/10.1017/s0890060405050043.

Full text
Abstract:
Customers can directly express their preferences on many options when ordering products today. Mass customization manufacturing has thus emerged as a new trend, aiming to satisfy the needs of individual customers. This process of offering a wide product variety often induces exponential growth in the volume of information and redundancy in data storage. Thus, a technique for managing product configuration is necessary, on the one hand, to provide customers with faster-configured and lower-priced products, and on the other hand, to translate customers' needs into the product information needed for tendering and manufacturing. This paper presents a decision-making scheme through first constructing a product family model (PFM), in which the relationships between products, modules, and components are defined. The PFM is then transformed into a product configuration network. A product configuration problem assuming that customers would like to have a minimum-cost and customized product can be easily solved by finding the shortest path in the corresponding product configuration network. Genetic algorithms (GAs), mathematical programming, and tree-searching methods such as uniform-cost search and iterative deepening A* are applied to obtain solutions to this problem. An empirical case is studied in this work as an example. Computational results show that the solution quality of GAs retains 93.89% for a complicated configuration problem. However, the running time of GAs outperforms that of the other methods by a factor of at least 25. This feature is very useful for a real-time system.
APA, Harvard, Vancouver, ISO, and other styles
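The shortest-path formulation in this abstract can be sketched with a uniform-cost search over a small, entirely hypothetical configuration network; the node names, costs, and function name below are assumptions for illustration, not the paper's empirical case:

```python
import heapq

def min_cost_configuration(network, start, goal):
    """Uniform-cost search: cheapest path from start to goal, where each
    edge represents a component choice with its cost."""
    frontier = [(0, start, [start])]
    settled = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if settled.get(node, float("inf")) <= cost:
            continue  # already reached this node more cheaply
        settled[node] = cost
        for nxt, edge_cost in network.get(node, []):
            heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical configuration network: nodes are configuration stages,
# edges are component choices labelled with their costs.
network = {
    "start":  [("cpu_a", 120), ("cpu_b", 90)],
    "cpu_a":  [("ram_8", 40), ("ram_16", 70)],
    "cpu_b":  [("ram_8", 55), ("ram_16", 80)],
    "ram_8":  [("done", 0)],
    "ram_16": [("done", 0)],
}
```

Uniform-cost search is one of the tree-searching methods the paper compares against GAs; on a network this small it returns the provably minimum-cost configuration.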
50

Chang, Woojung, Alexander E. Ellinger, and Jennifer Blackhurst. "A contextual approach to supply chain risk mitigation." International Journal of Logistics Management 26, no. 3 (November 9, 2015): 642–56. http://dx.doi.org/10.1108/ijlm-02-2014-0026.

Full text
Abstract:
Purpose – As global supply networks proliferate, the strategic significance of supply chain risk management (SCRM) – defined as the identification, evaluation, and management of supply chain-related risks to reduce overall supply chain vulnerability – also increases. Yet, despite consistent evidence that firm performance is enhanced by appropriate fit between strategy and context, extant SCRM research focusses more on identifying sources of supply chain risk, types of SCRM strategy, and performance implications associated with SCRM than on the relative efficacy of alternative primary supply chain risk mitigation strategies in different risk contexts. Drawing on contingency theory, a conceptual framework is proposed that aligns well-established aspects of SCRM to present a rubric for matching primary alternative supply chain risk mitigation strategies (redundancy and flexibility) with particular risk contexts (severity and probability of risk occurrence). The paper aims to discuss these issues. Design/methodology/approach – Conceptual paper. Findings – The proposed framework addresses supply chain managers’ need for a basic rubric to help them choose and implement risk mitigation approaches. The framework may also prove helpful for introducing business students to the fundamentals of SCRM. Originality/value – The framework and associated research propositions provide a theoretically grounded basis for managing the firm’s portfolio of potential supply chain risks by applying appropriate primary risk mitigation strategies based on the specific context of each risk rather than taking a “one size fits all” approach to risk mitigation. An agenda for progressing research on contingency-based approaches to SCRM is also presented.
APA, Harvard, Vancouver, ISO, and other styles
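The rubric this abstract proposes, matching a primary mitigation strategy (redundancy or flexibility) to a risk's context (severity and probability), is essentially a decision table. The sketch below is one *illustrative* pairing under assumed semantics; the paper itself should be consulted for its authoritative mapping, and the function name and labels are hypothetical:

```python
def mitigation_strategy(severity: str, probability: str) -> str:
    """Illustrative context-to-strategy mapping (NOT the paper's exact
    rubric): severity and probability each take 'high' or 'low'."""
    high_sev = severity == "high"
    high_prob = probability == "high"
    if high_sev and not high_prob:
        return "redundancy"   # rare but severe: hold buffer stock/capacity
    if high_prob:
        return "flexibility"  # frequent disruption: adapt sourcing/routing
    return "monitor"          # low/low: accept the risk and watch it
```

The point of the contingency framing is precisely that the returned strategy depends on the risk's context rather than applying one mitigation approach to the whole portfolio.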