Journal articles on the topic 'Computations management'

To see the other types of publications on this topic, follow the link: Computations management.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Computations management.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Liden, Sam. "Practical considerations in optimal flight management computations." Journal of Guidance, Control, and Dynamics 9, no. 4 (July 1986): 427–32. http://dx.doi.org/10.2514/3.20128.

2

Han, Kyoungseok, Tam W. Nguyen, and Kanghyun Nam. "Battery Energy Management of Autonomous Electric Vehicles Using Computationally Inexpensive Model Predictive Control." Electronics 9, no. 8 (August 9, 2020): 1277. http://dx.doi.org/10.3390/electronics9081277.

Abstract:
With the emergence of vehicle-communication technologies, many researchers have focused on vehicle energy-efficiency control that exploits this connectivity. For instance, a traffic preview enables the vehicle to plan its speed and position trajectories over a prediction horizon so that energy consumption is minimized. To handle the strong uncertainties in the future traffic model, existing research generally employs a constrained controller; however, its computational expense largely prevents commercialization. This paper addresses the computational burden of the constrained controller by proposing a computationally tractable model predictive control (MPC) scheme for real-time implementation in autonomous electric vehicles. We present several remedies that make the constrained control computationally manageable, and we analyze its real-time feasibility and effectiveness in various driving conditions. In particular, both warm-starting and move-blocking significantly reduce the computations. Through these validations, we confirm the effectiveness of the proposed approach, which maintains good performance compared to alternative schemes.
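Warm-starting, one of the remedies named above, can be illustrated independently of the paper's controller. The toy below (all problem data invented, not the paper's vehicle model) solves a sequence of slowly drifting box-constrained quadratic programs, as a receding-horizon MPC would, and counts projected-gradient iterations with and without reusing the previous solution as the starting point:

```python
import numpy as np

def solve_box_qp(H, g, lo, hi, x0, tol=1e-6, max_iter=50000):
    """Minimize 0.5*x'Hx + g'x subject to lo <= x <= hi by projected gradient."""
    step = 1.0 / np.linalg.eigvalsh(H).max()  # 1/L, L = Lipschitz constant of the gradient
    x = x0.copy()
    for k in range(max_iter):
        x_new = np.clip(x - step * (H @ x + g), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

rng = np.random.default_rng(1)
n = 20
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)              # positive-definite Hessian
lo, hi = -np.ones(n), np.ones(n)     # box constraints (e.g. actuator limits)

# Receding horizon: at each step the linear term drifts slightly, as it
# would when the traffic preview is updated, and the warm-started solver
# reuses the previous optimum instead of restarting from zero.
g = rng.normal(size=n)
cold_iters = warm_iters = 0
x_prev = np.zeros(n)
for step_idx in range(20):
    g = g + 0.001 * rng.normal(size=n)            # slowly changing problem data
    _, k_cold = solve_box_qp(H, g, lo, hi, np.zeros(n))
    x_prev, k_warm = solve_box_qp(H, g, lo, hi, x_prev)
    cold_iters += k_cold
    warm_iters += k_warm

print(cold_iters, warm_iters)  # warm-started solves take fewer iterations overall
```

Because consecutive problems differ only slightly, the warm start begins near the new optimum, which is exactly why it pays off in receding-horizon control.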
3

Dulá, José H. "Computations in DEA." Pesquisa Operacional 22, no. 2 (December 2002): 165–82. http://dx.doi.org/10.1590/s0101-74382002000200005.

4

Mitsui, Jun, Shin-ichi Ogata, Sanae Shibata, Akira Matsumoto, and Masato Yamamoto. "BEHAVIOR OF FISH REEF BLOCKS FALLING THROUGH WATER." Coastal Engineering Proceedings 1, no. 33 (September 28, 2012): 38. http://dx.doi.org/10.9753/icce.v33.management.38.

Abstract:
An accurate and economical placement method for fish reef blocks utilizing free fall in the deep sea area is presented in this paper. Falling behavior of concrete blocks in water was investigated by hydraulic model experiments and 3-D numerical computations. As a result, a new fish reef block which falls through water with highly stable behavior was developed. The distribution range of the developed block on the sea bottom was predicted by a stochastic model and it was proved to be sufficiently narrow. In addition, a quick estimation method for the placement position was developed to determine the release point of the block in actual construction. By using hydrodynamic force coefficients to evaluate the forces acting on the block, the method can estimate the falling behavior of the block in a flow field with sufficient accuracy and in a short time.
5

Wood, Alan. "Parallel computers and computations." European Journal of Operational Research 27, no. 3 (December 1986): 385–86. http://dx.doi.org/10.1016/0377-2217(86)90338-3.

6

Ibrahim, Khaled Z., and François Bodin. "Efficient SIMDization and Data Management of the Lattice QCD Computation on the Cell Broadband Engine." Scientific Programming 17, no. 1-2 (2009): 153–72. http://dx.doi.org/10.1155/2009/634756.

Abstract:
Lattice Quantum Chromodynamics (QCD) models subatomic interactions on a four-dimensional discretized space-time continuum. The Lattice QCD computation is one of the grand challenges in physics, especially when modeling a lattice with small spacing. In this work, we study the implementation of the main kernel routine of Lattice QCD, which dominates the execution time, on the Cell Broadband Engine. We tackle the problems of efficient SIMD execution and of the limited bandwidth for data transfers to and from off-chip memory. For efficient SIMD execution, we present a runtime data-fusion technique that groups data processed similarly at runtime. We also introduce the analysis needed to reduce the pressure on the scarce memory bandwidth that limits the performance of this computation. We studied two implementations of the main kernel routine that exhibit different memory-access patterns and thus allow different sets of optimizations, and we show the attributes that make one implementation more favorable in terms of performance. For lattice sizes significantly larger than the local store, our implementation achieves 31.2 GFlops for single-precision computations and 16.6 GFlops for double-precision computations on the PowerXCell 8i, an order of magnitude better than the performance achieved on most general-purpose processors.
7

Skowron, Andrzej, Andrzej Jankowski, and Soma Dutta. "Interactive granular computing." Granular Computing 1, no. 2 (January 5, 2016): 95–113. http://dx.doi.org/10.1007/s41066-015-0002-1.

Abstract:
Decision support in solving problems related to complex systems requires relevant computation models for the agents as well as methods for reasoning about the properties of computations performed by agents. Agents perform computations on complex objects, e.g., (behavioral) patterns, classifiers, clusters, structural objects, sets of rules, aggregation operations, or (approximate) reasoning schemes. In Granular Computing (GrC), all such constructed and/or induced objects are called granules. To model interactive computations performed by agents, which are crucial for complex systems, we extend the existing GrC approach to an Interactive Granular Computing (IGrC) approach by introducing complex granules (c-granules, or granules for short). Many advanced tasks concerning complex systems may be classified as control tasks performed by agents aiming to achieve high-quality computational trajectories relative to the considered quality measures defined over the trajectories. Here, the new challenges are to develop strategies to control, predict, and bound the behavior of the system. We propose to investigate these challenges using the IGrC framework. The reasoning that aims at controlling computations so as to achieve the required targets is called adaptive judgement. This reasoning deals with granules and computations over them, and it is more than a mixture of reasoning based on deduction, induction, and abduction. Due to uncertainty, the agents generally cannot predict exactly the results of their actions (or plans). Moreover, the approximations of the complex vague concepts that initiate actions (or plans) drift with time, so adaptive strategies for evolving approximations of concepts are needed. In particular, adaptive judgement is much needed in the efficiency management of granular computations, carried out by agents, for risk assessment, risk treatment, and cost/benefit analysis.
In the paper, we emphasize the role of rough-set-based methods in IGrC. The discussed approach is a step towards realization of the Wisdom Technology (WisTech) program and has been developed over years of experience on different real-life projects.
8

Caron, Eddy, Bruno DelFabbro, Frédéric Desprez, Emmanuel Jeannot, and Jean-Marc Nicod. "Managing Data Persistence in Network Enabled Servers." Scientific Programming 13, no. 4 (2005): 333–54. http://dx.doi.org/10.1155/2005/151604.

Abstract:
The GridRPC model [17] is an emerging standard promoted by the Global Grid Forum (GGF) that defines how to perform remote client-server computations on a distributed architecture. In this model, data are sent back to the client at the end of every computation. This implies unnecessary communications when computed data are needed by another server in further computations. Since communication time is sometimes the dominant cost of remote computations, this cost has to be lowered. Several tools instantiate the GridRPC model, such as NetSolve, developed at the University of Tennessee, Knoxville, USA, and DIET, developed at the LIP laboratory, ENS Lyon, France. They are usually called Network Enabled Servers (NES). In this paper, we discuss the data management solutions chosen for these two NES environments (NetSolve and DIET) and present experimental results.
9

Skowron, Andrzej, and Andrzej Jankowski. "Interactive computations: toward risk management in interactive intelligent systems." Natural Computing 15, no. 3 (February 11, 2015): 465–76. http://dx.doi.org/10.1007/s11047-015-9486-5.

10

Adamski, Marcin, Krzysztof Kurowski, Marek Mika, Wojciech Piątek, and Jan Węglarz. "Security Aspects in Resource Management Systems in Distributed Computing Environments." Foundations of Computing and Decision Sciences 42, no. 4 (December 20, 2017): 299–313. http://dx.doi.org/10.1515/fcds-2017-0015.

Abstract:
In many distributed computing systems, aspects related to security are becoming more and more relevant. Security is ubiquitous and cannot be treated as a separate problem or challenge. In our opinion it should be considered in the context of resource management in distributed computing environments such as Grids and Clouds: scheduled computations can be much delayed by cyber-attacks or inefficient infrastructure, and users' valuable and sensitive data can be stolen even during a correct computation. To prevent such cases, new evaluation metrics for resource management are needed that represent the level of security of computing resources and, more broadly, of distributed computing infrastructures. In our approach, we introduce a new metric called reputation, which determines the level of reliability of computing resources from the security perspective and can be taken into account during scheduling procedures. The reputation metric is based on various relevant parameters regarding cyber-attacks (including energy attacks) and administrative activities such as security updates, bug fixes, and security patches. Moreover, we have conducted various computational experiments within the Grid Scheduling Simulator environment (GSSIM), inspired by real application scenarios. Finally, our experimental studies of new resource management approaches that take critical security aspects into account are also discussed in this paper.
11

Paganelli, Matteo, Domenico Beneventano, Francesco Guerra, and Paolo Sottovia. "Parallelizing Computations of Full Disjunctions." Big Data Research 17 (September 2019): 18–31. http://dx.doi.org/10.1016/j.bdr.2019.07.002.

12

Breiner, Spencer, Blake Pollard, and Eswaran Subrahmanian. "Functorial Model Management." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 1963–72. http://dx.doi.org/10.1017/dsi.2019.202.

Abstract:
In this paper we use formal tools from category theory to develop a foundation for creating and managing models in systems where knowledge is distributed across multiple representations and formats. We define a class of models which incorporate three different representations (computations, logical semantics, and data) as well as model mappings (functors) to establish relationships between them. We prove that our models support model merge operations called colimits and use these to define a methodology for model integration.
13

Kumar, Malay, and Manu Vardhan. "Privacy Preserving and Efficient Outsourcing Algorithm to Public Cloud." International Journal of Information Security and Privacy 12, no. 2 (April 2018): 1–25. http://dx.doi.org/10.4018/ijisp.2018040101.

Abstract:
The growth of cloud computing services and their proliferation in business and academia have created enormous opportunities for computation in third-party data management settings. This computing model allows the client to outsource large computations to cloud data centers, where the cloud server conducts the computation on the client's behalf. However, data privacy and computational integrity are major concerns for the client. In this article, the authors present an algorithm for secure outsourcing of a covariance matrix computation, the basic building block of many automatic classification systems. The algorithm first performs an efficient transformation to protect privacy and to verify the result produced by the cloud server. Analytical and experimental analysis shows that the algorithm simultaneously meets the design goals of privacy, verifiability, and efficiency, and that it is about 7.8276 times more efficient than direct implementation.
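The paper's exact transformation and verification protocol are not reproduced here, but the flavor of such privacy-preserving outsourcing can be sketched with a random-rotation mask, relying on the identity cov(XQ) = Qᵀ cov(X) Q for orthogonal Q (an illustrative scheme only, not the authors' algorithm):

```python
import numpy as np

def random_orthogonal(d, rng):
    # QR decomposition of a Gaussian matrix gives a random orthogonal Q.
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return Q

def covariance(X):
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / (len(X) - 1)

rng = np.random.default_rng(42)
d = 5
X = rng.normal(size=(200, d))     # client's private data matrix

Q = random_orthogonal(d, rng)     # secret key kept by the client
Y = X @ Q                         # masked data handed to the server
C_masked = covariance(Y)          # heavy computation done by the server
C = Q @ C_masked @ Q.T            # client unmasks: cov(XQ) = Q' cov(X) Q

print(np.allclose(C, covariance(X)))  # True
```

A rotation hides individual entries while preserving the result exactly, but it leaks the covariance spectrum; a practical scheme, like the one in the paper, needs a stronger transformation plus a verification step.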
14

Kafi, Nadeem, Zubair Ahmed Shaikh Shaikh, and Muhammad Shahid Shaikh. "Human Computations in Citizen Crowds: A Knowledge Management Solution Framework." Mehran University Research Journal of Engineering and Technology 37, no. 3 (July 1, 2018): 513–28. http://dx.doi.org/10.22581/muet1982.1803.06.

15

Büsing, Christina, Daniel Kadatz, and Catherine Cleophas. "Capacity Uncertainty in Airline Revenue Management: Models, Algorithms, and Computations." Transportation Science 53, no. 2 (March 2019): 383–400. http://dx.doi.org/10.1287/trsc.2018.0829.

16

Hakim, R. B. Fajriya, and Sugiyarto Sugiyarto. "The importance of understanding web frameworks for statisticians." Bulletin of Applied Mathematics and Mathematics Education 1, no. 2 (December 10, 2021): 47–60. http://dx.doi.org/10.12928/bamme.v1i2.3996.

Abstract:
As the web and computational technology continue to grow and huge volumes of data are produced on the web, these technologies have become important to statisticians' work, and it is worthwhile for statisticians to keep learning new aspects of computation. A lack of computational reasoning skills makes it hard for statisticians to work in a team. If statisticians do not take up this computational challenge more coherently, statistics will be marginalized and lose relevance at a time when its reputation within data science is growing significantly. In addition, people rely on information from the web, whatever their reason. Since the web's growth, several major transformations have occurred, from the most rudimentary concepts to new models of interaction between humans and machines. Simple interactivity means that users can enter data into an application on a web page, click a button, and receive a new web page with the results of the computations. Such applications are known as web applications, and most are built with web frameworks, which package common programming tasks for offering services over the Internet. This paper therefore gives a short overview of how the Flask web framework can compensate for statisticians' lack of computational skill in the simplest possible way: how a web framework is used to create a web page with an application form, run the application to compute a statistical calculation deployed on a local server, and produce a web page with the solutions.
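As a minimal sketch of the kind of web application described (the route name, form field, and statistics chosen here are invented for illustration, not taken from the paper), a Flask view can accept numbers from a form and return the computed summary:

```python
# Minimal Flask web application computing summary statistics from form input.
from statistics import mean, stdev

from flask import Flask, request

app = Flask(__name__)

@app.route("/summary", methods=["POST"])
def summary():
    # Parse a comma-separated list of numbers submitted from a form field.
    values = [float(v) for v in request.form["values"].split(",")]
    result = {"n": len(values), "mean": mean(values)}
    if len(values) > 1:
        result["stdev"] = stdev(values)
    return result  # Flask serializes a returned dict to JSON automatically.

if __name__ == "__main__":
    app.run(debug=True)  # local development server
```

Posting `values=1,2,3,4` to `/summary` returns a JSON object with the count, mean, and standard deviation, which is the whole request-compute-respond cycle the abstract describes.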
17

Apte, Madhura, Supriya Kelkar, Aishwarya Dorge, Shilpa Deshpande, Pooja Bomble, and Anushka Dhamankar. "Gateway based Trust Management System for Internet of Things." Revista Gestão Inovação e Tecnologias 11, no. 4 (August 19, 2021): 4750–63. http://dx.doi.org/10.47059/revistageintec.v11i4.2501.

Abstract:
The Internet of Things (IoT), a growing phenomenon, refers to the seamless integration of things into the information network. Security in IoT is undermined by various attacks that exploit the resource-constrained nature of the devices in the network. Thus, although IoT is evolving as an attractive next-generation networking paradigm, it can be adopted only once these security issues are resolved. This implies that, in a dynamic and collaborative IoT environment, the devices need to be trustworthy. This paper proposes a gateway-based trust management system and an algorithm for computing trust for the devices. The system focuses on keeping the computations on the devices lightweight and the network robust. The proposed system is tested against various IoT attacks, and results demonstrate that it can clearly identify a malicious device, if any, in the IoT network.
18

Carman, David W. "New Directions in Sensor Network Key Management." International Journal of Distributed Sensor Networks 1, no. 1 (February 2005): 3–15. http://dx.doi.org/10.1080/15501320490522803.

Abstract:
Sensor networks require efficient, low-latency key management techniques that enable strong security and tolerance of node compromise. Conventional interactive approaches using public-key certificate-based key management are not communication-efficient and are very time-consuming. Protocols that leverage elliptic curve cryptography reduce communications but still require considerable interactive exchange. Noninteractive techniques that leverage identity-based public-key cryptography show considerable promise, but these techniques are relatively immature and require considerable computation. Conversely, random key predistribution techniques reduce computation, but at the expense of many interactions. In this paper, we describe recent work in the cryptographic community that combines the benefits of both identity-based cryptography and random key predistribution into a framework we call identity-based random-key predistribution (IBRKP). IBRKP establishes pair-wise keys with virtually no extra communication and provides security-versus-node-memory trade-offs for the sensor network designer to engineer.
19

Møyner, Olav, Stein Krogstad, and Knut-Andreas Lie. "The Application of Flow Diagnostics for Reservoir Management." SPE Journal 20, no. 02 (September 8, 2014): 306–23. http://dx.doi.org/10.2118/171557-pa.

Abstract:
Flow diagnostics, as referred to herein, are computational tools derived from controlled numerical flow experiments that yield quantitative information regarding the flow behavior of a reservoir model in settings much simpler than would be encountered in the actual field. In contrast to output from traditional reservoir simulators, flow-diagnostic measures can be obtained within seconds. The methodology can be used to evaluate, rank, and/or compare realizations or strategies, and the computational speed makes it ideal for interactive visualization output. We also consider application of flow diagnostics as proxies in the optimization of reservoir-management workflows. In particular, by use of finite-volume discretizations for pressure, time of flight (TOF), and stationary tracers, we efficiently compute general Lorenz coefficients (and variants) that are shown to correlate well with simulated recovery. For efficient optimization, we develop an adjoint code for gradient computations of the considered flow-diagnostic measures. We present several numerical examples, including optimization of rates, well placements, and drilling sequences for two- and three-phase synthetic and real field models. Overall, optimizing the diagnostic measures implies substantial improvement in simulation-based objectives.
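The Lorenz coefficient used as a diagnostic measure can be sketched from per-cell pore volumes and fluxes. The variable names and trapezoidal integration below are illustrative, not the paper's (or MRST's) implementation: cells are sorted from fastest to slowest and the flow-capacity versus storage-capacity curve is integrated.

```python
import numpy as np

def lorenz_coefficient(pore_volume, flux):
    # Sort cells from fastest to slowest (largest flux per unit volume first).
    order = np.argsort(-flux / pore_volume)
    v, q = pore_volume[order], flux[order]
    # Cumulative storage capacity Phi and flow capacity F, both in [0, 1].
    Phi = np.concatenate([[0.0], np.cumsum(v) / v.sum()])
    F = np.concatenate([[0.0], np.cumsum(q) / q.sum()])
    # Lc = 2 * (area under F(Phi) - 1/2): 0 for perfectly homogeneous
    # displacement, approaching 1 for extreme heterogeneity.
    area = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(Phi))
    return 2.0 * (area - 0.5)

# Homogeneous reservoir: every cell carries flux in proportion to its volume.
print(lorenz_coefficient(np.ones(4), 2.0 * np.ones(4)))                # 0.0
# Heterogeneous: one of two equal-volume cells carries 90% of the flux.
print(lorenz_coefficient(np.array([1.0, 1.0]), np.array([9.0, 1.0])))  # approx. 0.4
```

Because the whole computation is a sort plus a cumulative sum, it runs in a fraction of a second even for large grids, which is what makes such measures usable as optimization proxies.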
20

Puzyrkov, Dmitry, Sergey Polyakov, Viktoriia Podryga, and Sergey Markizov. "Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics." EPJ Web of Conferences 173 (2018): 05014. http://dx.doi.org/10.1051/epjconf/201817305014.

Abstract:
At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic level, for example by means of molecular dynamics methods. The most interesting problems concern the study of complex processes under real physical conditions. Solving such problems requires high-performance computing systems of various types, for example GRID systems and HPC clusters. Given how time-consuming these computational tasks are, software is needed for automatic and unified monitoring of such computations. A complex computational task can be performed across different HPC systems, which requires synchronizing output data between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also a challenge in itself, requiring complex software tools and algorithms for proper generation of atomistic data on HPC systems. This paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for subsequent detailed molecular dynamics calculations and for managing those calculations, and presents the part of its concept aimed at initial data generation on HPC systems.
21

Möhring, Rolf H., Andreas S. Schulz, Frederik Stork, and Marc Uetz. "Solving Project Scheduling Problems by Minimum Cut Computations." Management Science 49, no. 3 (March 2003): 330–50. http://dx.doi.org/10.1287/mnsc.49.3.330.12737.

22

Skowron, Andrzej, and Andrzej Jankowski. "Correction to: Interactive computations: toward risk management in interactive intelligent systems." Natural Computing 18, no. 1 (December 18, 2018): 213. http://dx.doi.org/10.1007/s11047-018-9721-y.

23

Alsberg, Bjørn K., Håvard Bjerke, Gunn M. Navestad, and Per-Olof Åstrand. "GaussDal: An open source database management system for quantum chemical computations." Computer Physics Communications 171, no. 2 (September 2005): 133–53. http://dx.doi.org/10.1016/j.cpc.2005.04.008.

24

El-Sayed, Hesham, Sharmi Sankar, Heng Yu, and Gokulnath Thandavarayan. "Benchmarking of Recommendation Trust Computation for Trust/Trustworthiness Estimation in HDNs." International Journal of Computers Communications & Control 12, no. 5 (September 10, 2017): 612. http://dx.doi.org/10.15837/ijccc.2017.5.2895.

Abstract:
In recent years, Heterogeneous Distributed Networks (HDNs) have become a predominant technology enabling various applications in fields such as transportation, medicine, and war zones. Due to their arbitrary self-organizing nature and temporary topologies in the spatial-temporal region, distributed systems are vulnerable to a number of security issues and demand strong security countermeasures. Unlike static networks, the unique characteristics of HDNs demand cutting-edge security policies. Numerous cryptographic techniques have been proposed by different researchers to address the security issues in HDNs, but these techniques consume too many resources, resulting in high network overheads. Classified as a lightweight security scheme, the Trust Management System (TMS) is one of the most promising technologies, offering efficiency in terms of availability, scalability, and simplicity. It supports both node-level validation and data-level verification, enhancing trust between the attributes, and it thwarts a wide range of security attacks by incorporating various statistical techniques and integrated security services. In this paper, we present a literature survey of different TMSs that highlights reliable techniques adopted across HDNs. We then comprehensively study the existing distributed trust computations and benchmark them according to their effectiveness. Further, performance analysis is applied to the existing computation techniques, and the benchmarked outcome delivered by Recommendation Trust Computations (RTC) is discussed. A Receiver Operating Characteristic (ROC) curve illustrates better accuracy for RTC in comparison with Direct Trust Computations (DTC) and Hybrid Trust Computations (HTC). Finally, we propose future directions for research and highlight reliable techniques for building an efficient TMS in HDNs.
25

HUNTER, JEFFREY J. "SIMPLE PROCEDURES FOR FINDING MEAN FIRST PASSAGE TIMES IN MARKOV CHAINS." Asia-Pacific Journal of Operational Research 24, no. 06 (December 2007): 813–29. http://dx.doi.org/10.1142/s0217595907001553.

Abstract:
The derivation of mean first passage times in Markov chains involves the solution of a family of linear equations. By exploring the solution of a related set of equations, using suitable generalized inverses of the Markovian kernel I - P, where P is the transition matrix of a finite irreducible Markov chain, we are able to derive elegant new results for finding the mean first passage times. As a by-product we derive the stationary distribution of the Markov chain without the necessity of any further computational procedures. Standard techniques in the literature, using for example Kemeny and Snell's fundamental matrix Z, require the initial derivation of the stationary distribution followed by the computation of Z, the inverse of I - P + eπ^T, where e^T = (1, 1, …, 1) and π^T is the stationary probability vector. The procedures of this paper involve only the derivation of the inverse of a matrix of simple structure, based upon known characteristics of the Markov chain together with simple elementary vectors. No prior computations are required. Various possible families of matrices are explored, leading to different related procedures.
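The fundamental-matrix route that the paper aims to simplify can be sketched directly (a generic illustration of Kemeny and Snell's formulas, not the paper's new procedure): compute π, form Z = (I - P + eπ^T)^{-1}, then take M_ij = (Z_jj - Z_ij)/π_j for i ≠ j, with mean recurrence times 1/π_j on the diagonal.

```python
import numpy as np

def mean_first_passage_times(P):
    """Mean first passage times of a finite irreducible Markov chain
    via Kemeny and Snell's fundamental matrix Z."""
    n = P.shape[0]
    # Stationary distribution: solve pi (I - P) = 0 with sum(pi) = 1
    # as an overdetermined least-squares system.
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Fundamental matrix Z = (I - P + e pi^T)^{-1}.
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    # M_ij = (Z_jj - Z_ij) / pi_j for i != j; diagonal = mean return time 1/pi_j.
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]
    np.fill_diagonal(M, 1.0 / pi)
    return M

# Two-state check: for P = [[1-a, a], [b, 1-b]], m_01 = 1/a and m_10 = 1/b.
P = np.array([[0.5, 0.5], [0.25, 0.75]])
print(mean_first_passage_times(P))
```

For this chain π = (1/3, 2/3), so the matrix comes out as [[3, 2], [4, 1.5]], matching the closed-form two-state answers.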
26

Bélanger, Philippe, and Marc-André Picard. "A multi-factor HJM and PCA approach to risk management of VIX futures." Journal of Risk Finance 19, no. 5 (November 19, 2018): 524–47. http://dx.doi.org/10.1108/jrf-07-2017-0114.

Abstract:
Purpose: Previous studies have shown that VIX futures tend to roll down the term structure and converge towards the spot as they approach maturity. The purpose of this paper is to propose an approach to improve the prediction of the volatility index (VIX, the fear factor) level.
Design/methodology/approach: First, the authors use a forward-looking technique, the Heath-Jarrow-Morton (HJM) no-arbitrage framework, to capture the convergence of the futures contract towards the spot. Second, the authors use principal component analysis (PCA) to reduce dimensionality and save substantial computational time. Third, the authors validate the model with selected VIX futures maturities and test it on value-at-risk (VaR) computations.
Findings: The authors show that the use of multiple factors has a significant impact on the simulated VIX futures distribution, as well as on the computation of their VaR (gains in accuracy and computing time). This impact becomes much more compelling when analysing a portfolio of VIX futures of multiple maturities.
Research limitations/implications: The approach assumes the variance to be stationary and ignores the volatility smile. Nevertheless, the authors offer suggestions for future research.
Practical implications: VIX-level prediction is of paramount importance for market makers and participants, as there is no way to replicate the underlying asset of VIX futures. The authors propose a procedure that brings efficiency to both pricing and risk management.
Originality/value: This paper is the first to apply a forward-looking method by way of an HJM framework combined with PCA to VIX-level prediction in a portfolio context.
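The PCA step described above can be sketched generically: given a panel of daily term-structure changes, an eigendecomposition of their covariance yields the factors, and a small number of components typically explains almost all of the variance, which is where the computational saving comes from. The data below are synthetic stand-ins (a made-up level-plus-slope surface), not VIX futures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily changes of an 8-point futures term structure driven by
# a "level" factor, a "slope" factor, and small idiosyncratic noise.
maturities = np.linspace(1.0, 8.0, 8)
level = rng.normal(size=(500, 1)) * np.ones((1, 8))
slope = rng.normal(size=(500, 1)) * (maturities - maturities.mean())
X = level + slope + 0.05 * rng.normal(size=(500, 8))

# PCA via eigendecomposition of the covariance of centered changes.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigval, eigvec = np.linalg.eigh(cov)          # ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # reorder to descending

explained = eigval / eigval.sum()
# Two components capture nearly all variance of this two-driver surface,
# so simulations can run in a 2-dimensional factor space instead of 8.
print(explained[:2].sum())  # close to 1
```

Keeping only the leading eigenvectors and simulating their scores is exactly the dimensionality reduction that makes the repeated VaR computations cheap.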
27

Kho, Daniel Cheok Kiang, Mohammad Faizal Ahmad Fauzi, and Sin Liang Lim. "Hardware-Based Sobel Gradient Computations for Sharpness Enhancement." International Journal of Technology 10, no. 7 (November 27, 2019): 1315. http://dx.doi.org/10.14716/ijtech.v10i7.3263.

28

Kumar, Akshay, and S. B. Singh. "Computations of the signature reliability of the coherent system." International Journal of Quality & Reliability Management 34, no. 6 (June 5, 2017): 785–97. http://dx.doi.org/10.1108/ijqrm-10-2016-0178.

Abstract:
Purpose: The purpose of this paper is to compute the signature reliability of coherent systems.
Design/methodology/approach: The considered k-out-of-n coherent system consists of n elements connected in series. For these systems, the authors evaluate a mathematical structure using the universal generating function.
Findings: Using the universal generating function technique, the authors evaluate the tail signature, the Barlow-Proschan index, the expected lifetime, and the expected cost.
Originality/value: The authors develop an analysis of coherent systems based on the universal generating function technique.
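For a binary k-out-of-n system with i.i.d. components, a universal generating function reduces to convolving per-component probability vectors over performance levels. The toy below illustrates only that mechanic (component up = performance 1), not the paper's tail-signature, Barlow-Proschan, or cost derivations:

```python
import numpy as np

def ugf_product(components):
    """Combine component UGFs (probability vectors over performance levels)
    by polynomial multiplication, i.e. convolution."""
    u = np.array([1.0])
    for probs in components:
        u = np.convolve(u, probs)
    return u

# k-out-of-n with i.i.d. components, each working with probability p.
p, n, k = 0.9, 5, 3
comp = [np.array([1.0 - p, p])] * n   # performance 0 (down) or 1 (up)
u = ugf_product(comp)                 # u[j] = P(exactly j components work)
reliability = u[k:].sum()             # P(at least k components work)
print(reliability)                    # approx. 0.99144
```

The same representation extends to multi-state components by letting each probability vector span more than two performance levels, which is what makes the UGF convenient for signature-type computations.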
29

Pan, Wenxiang, Baodeng Hou, Ruixiang Yang, Xuzhu Zhan, Wenkai Tian, Baoqi Li, Weihua Xiao, et al. "Conceptual Framework and Computational Research of Hierarchical Residential Household Water Demand." Water 10, no. 6 (May 27, 2018): 696. http://dx.doi.org/10.3390/w10060696.

Abstract:
Although the quantity of household water consumption does not account for a huge proportion of the total water consumption amidst socioeconomic development, there has been a steadily increasing trend due to population growth and improved urbanization standards. As such, mastering the mechanisms of household water demand, scientifically predicting trends of household water demand, and implementing reasonable control measures are key focuses of current urban water management. Based on the categorization and characteristic analysis of household water, this paper used Maslow’s Hierarchy of Needs to establish a level and grade theory of household water demand, whereby household water is classified into three levels (rigid water demand, flexible water demand, and luxury water demand) and three grades (basic water demand, reasonable water demand, and representational water demand). An in-depth analysis was then carried out on the factors that influence the computation of household water demand, whereby equations for different household water categories were established, and computations for different levels of household water were proposed. Finally, observational experiments on household water consumption were designed, and observation and simulation computations were performed on three typical households in order to verify the scientific outcome and rationality of the computation of household water demand. The research findings contribute to the enhancement and development of prediction theories on water demand, and they are of high theoretical and realistic significance in terms of scientifically predicting future household water demand and fine-tuning the management of urban water resources.
APA, Harvard, Vancouver, ISO, and other styles
30

Echeverribar, Isabel, Mario Morales-Hernández, Pilar Brufau, and Pilar García-Navarro. "Numerical simulation of 2D real large scale floods on GPU: the Ebro River." E3S Web of Conferences 40 (2018): 06007. http://dx.doi.org/10.1051/e3sconf/20184006007.

Full text
Abstract:
Modern flood risk management and mitigation plans incorporate numerical models that are able to assess the response of the system and to help in decision-making processes. The shallow water system of equations (SWE) is widely used to model free-surface flow evolution in river flooding. Although 1D models are usually adopted when simulating long rivers due to their computational efficiency, 2D models better approximate the behaviour of the floodplains of meandering rivers by using a fine mesh, which implies unaffordable computations in real-world applications. However, advances in parallelization methods accelerate computation, making 2D models competitive. In particular, GPU technology offers important speed-ups that allow fast simulations of large-scale scenarios. In this work, an example of the scope of this technology is presented. Several past flood events have been modelled using GPU. The physical domain (the middle reach of the Ebro River in Spain) has an extent of 477 km², which gives rise to a large computational grid. The steps followed to carry out the numerical simulation are detailed, as well as the comparison between numerical results and observed flooded areas, reaching agreement of up to 87.25% and requiring only about one hour of computation time for a one-day flood event. These results support the feasible application of this numerical model in real-time simulation tools, with accurate and fast predictions useful for flood management.
APA, Harvard, Vancouver, ISO, and other styles
31

Hennig, Philipp, Michael A. Osborne, and Mark Girolami. "Probabilistic numerics and uncertainty in computations." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471, no. 2179 (July 2015): 20150142. http://dx.doi.org/10.1098/rspa.2015.0142.

Full text
Abstract:
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization, and the solution of differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much of contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can be flexibly adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
APA, Harvard, Vancouver, ISO, and other styles
32

Amirijoo, M., J. Hansson, and S. H. Son. "Specification and management of QoS in real-time databases supporting imprecise computations." IEEE Transactions on Computers 55, no. 3 (March 2006): 304–19. http://dx.doi.org/10.1109/tc.2006.45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Nagurney, A., and J. Dong. "Management of knowledge intensive systems as supernetworks: Modeling, analysis, computations, and applications." Mathematical and Computer Modelling 42, no. 3-4 (August 2005): 397–417. http://dx.doi.org/10.1016/j.mcm.2004.01.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Keqin. "Power and performance management for parallel computations in clouds and data centers." Journal of Computer and System Sciences 82, no. 2 (March 2016): 174–90. http://dx.doi.org/10.1016/j.jcss.2015.07.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Park, K. Sam. "Duality, efficiency computations and interpretations in imprecise DEA." European Journal of Operational Research 200, no. 1 (January 2010): 289–96. http://dx.doi.org/10.1016/j.ejor.2008.11.028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wong, Man Hong, and Shuzhong Zhang. "On distributional robust probability functions and their computations." European Journal of Operational Research 233, no. 1 (February 2014): 23–33. http://dx.doi.org/10.1016/j.ejor.2013.08.044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Khabarov, Nikolay, Alexey Smirnov, Juraj Balkovič, Rastislav Skalský, Christian Folberth, Marijn Van Der Velde, and Michael Obersteiner. "Heterogeneous Compute Clusters and Massive Environmental Simulations Based on the EPIC Model." Modelling 1, no. 2 (December 4, 2020): 215–24. http://dx.doi.org/10.3390/modelling1020013.

Full text
Abstract:
In recent years, the crop growth modeling community has invested immense effort in high-resolution global simulations estimating, inter alia, the impacts of projected climate change. The demand for computing resources in this context is high and is expressed in processor core-years per global simulation, implying several crops, several management systems, and a time span of several decades for a single climatic scenario. The anticipated need to model a richer set of alternative management options and crop varieties would increase the processing capacity requirements even more, raising the looming issue of computational efficiency. While several publications report on the successful application of the original field-scale crop growth model EPIC (Environmental Policy Integrated Climate) on modern supercomputers, the related performance-improvement issues, and especially the associated trade-offs, have so far received only limited coverage. This paper provides a comprehensive view of the principles of the EPIC setup for parallel computations and, for the first time, of those specific to heterogeneous compute clusters comprised of desktop computers that use their idle time to carry out massive computations. The suggested modification of the core EPIC model allows for a dramatic (order-of-magnitude) performance increase on a compute cluster powered by the open-source high-throughput computing software framework HTCondor.
APA, Harvard, Vancouver, ISO, and other styles
38

Jaumard, Brigitte, Marcelo Prais, and Celso Carneiro Ribeiro. "Penalty computations for the set partitioning problem." Annals of Operations Research 43, no. 5 (May 1993): 285–93. http://dx.doi.org/10.1007/bf02025299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Yixiang, and Hengyang Wu. "Domain semantics of possibility computations." Information Sciences 178, no. 12 (June 2008): 2661–79. http://dx.doi.org/10.1016/j.ins.2008.01.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Valderrama Bento da Silva, Pedro Henrique, Eduardo Camponogara, Laio Oriel Seman, Gabriel Villarrubia González, and Valderi Reis Quietinho Leithardt. "Decompositions for MPC of Linear Dynamic Systems with Activation Constraints." Energies 13, no. 21 (November 2, 2020): 5744. http://dx.doi.org/10.3390/en13215744.

Full text
Abstract:
Interconnections of dynamic subsystems that share limited resources are found in many applications, and the control of such systems of subsystems has drawn significant attention from scientists and engineers. For the operation of such systems, model predictive control (MPC) has become a popular technique, arguably for its ability to deal with complex dynamics and system constraints. The MPC algorithms found in the literature are mostly centralized, with a single controller receiving the signals and performing the computation of the output signals. However, the distributed structure of such interconnected subsystems is not necessarily exploited by standard MPC. To this end, this work proposes a hierarchical decomposition that splits the computations between a master problem (centralized component) and a set of decoupled subproblems (distributed components) with activation constraints, which brings organizational flexibility and distributed computation. Two general methods are considered for hierarchical control and optimization, namely Benders decomposition and outer approximation. Results are reported from a numerical analysis of the decompositions and a simulated application to energy management, in which a limited source of energy is distributed among the batteries of electric vehicles.
APA, Harvard, Vancouver, ISO, and other styles
41

Wirba, E. N., J. H. M. Tah, and R. Howes. "Risk interdependencies and natural language computations." Engineering, Construction and Architectural Management 3, no. 4 (April 1996): 251–69. http://dx.doi.org/10.1108/eb021034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Cornea-Hasegan, M. C., Dan C. Marinescu, and Zhongyun Zhang. "Data management for a class of iterative computations on distributed-memory MIMD systems." Concurrency: Practice and Experience 6, no. 3 (May 1994): 205–29. http://dx.doi.org/10.1002/cpe.4330060304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Noels, L., L. Stainier, and J. P. Ponthot. "Self-adapting time integration management in crash-worthiness and sheet metal forming computations." International Journal of Vehicle Design 30, no. 1/2 (2002): 67. http://dx.doi.org/10.1504/ijvd.2002.002026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ždankus, Narimantas Titas, and Gediminas Stelmokaitis. "Clay Slope Stability Computations/Molio šlaito stabilumo skaičiavimas." Journal of Civil Engineering and Management 14, no. 3 (September 30, 2008): 207–12. http://dx.doi.org/10.3846/1392-3730.2008.14.18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Chuan Kun, Xiao Juan Huang, Chun Li Wang, and Xin Jiang Gao. "Research on Petrochemical Plant Safety Management System Basis of Cloud Computing." Applied Mechanics and Materials 614 (September 2014): 507–12. http://dx.doi.org/10.4028/www.scientific.net/amm.614.507.

Full text
Abstract:
Cloud computing is a large-scale distributed computing paradigm that forms an abstract, virtual, and dynamically scalable resource pool. This pool provides services, including computing, storage, development platforms, and software, to users on demand over the Internet, and it therefore offers powerful computing and storage capacity, among many other advantages. In this paper, a petrochemical plant safety management system (PSMS) is proposed based on cloud computing, whose functions can be extended dynamically. It includes an online database of process safety knowledge and production-related information, which can be used to train operators and support online inquiry. It also contains hazard fingerprint libraries and hazard countermeasure libraries for chemical processes. PSMS can execute computations of different scales and types based on the massive real-time data collected from the process, and the computation results can be matched against the hazard fingerprint database to evaluate and predict the process safety situation. If PSMS interacts with the DCS, the process may eliminate hazards by itself based on the hazard countermeasure database. In this way, real-time process information and safety knowledge can be organized into actionable insight, which would greatly enhance process safety. The computing power of cloud computing guarantees that model computations are completed just in time, while its powerful server clusters and highly reliable data centers guarantee that the information system, the hazard fingerprint database, and the hazard countermeasure database can be extended dynamically, and that computation results are matched in time against hazard fingerprints and their countermeasures.
To take full advantage of cloud computing, making the system easy to expand and upgrade and able to serve the differing service needs of different petrochemical production systems, the software will be developed with a service-oriented architecture (SOA), which is more abstract than an object-oriented architecture (OOA).
APA, Harvard, Vancouver, ISO, and other styles
46

Rito, Guilherme, and Hervé Paulino. "Scheduling computations with provably low synchronization overheads." Journal of Scheduling 25, no. 1 (October 21, 2021): 107–24. http://dx.doi.org/10.1007/s10951-021-00706-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Magnanti, Thomas L. "Optimization: From Its Inception." Management Science 67, no. 9 (September 2021): 5349–63. http://dx.doi.org/10.1287/mnsc.2021.3955.

Full text
Abstract:
Optimization has been one of the most fundamental and extensive contributions of management science/operations research, with an enormous number of contributions and subfields developed by many researchers and practitioners. When the journal Management Science launched in 1954, little was known about optimization beyond some results in nonlinear optimization and the simplex method and duality developed for linear programming. Moreover, linear programming computations were limited to problems with at most 101 linear constraints. Then some early contributions by seminal researchers began to develop foundations for the field. I will review a few of these early contributions, focusing on the traveling salesman problem and integer programming, decomposition, and column generation. I will summarize some research and applied contributions since then, including the enormous development of computations. I will focus on linear and integer programs, with some material on combinatorial optimization. This paper was accepted by David Simchi-Levi, Special Section of Management Science: 65th Anniversary.
APA, Harvard, Vancouver, ISO, and other styles
48

Suk, Heejun, Jui-Sheng Chen, Eungyu Park, and You Hong Kihm. "Practical Application of the Galerkin Finite Element Method with a Mass Conservation Scheme under Dirichlet Boundary Conditions to Solve Groundwater Problems." Sustainability 12, no. 14 (July 13, 2020): 5627. http://dx.doi.org/10.3390/su12145627.

Full text
Abstract:
The Galerkin finite element method (FEM) has long been used to solve groundwater flow equations and compute the mass balance in a region. In this study, we propose a simple, new computational FEM procedure for global mass balance computations that can simultaneously obtain boundary fluxes at Dirichlet boundary nodes and finite element hydraulic heads at all nodes in only one step, whereas previous approaches usually require two steps. In those approaches, the first step obtains the Galerkin finite element hydraulic heads at all nodes, and the second step calculates the boundary fluxes using the obtained hydraulic heads. Comparisons between the new approach proposed in this study and previous approaches, such as Yeh’s approach and a conventional differential approach, were performed on two practical groundwater problems to illustrate the improved accuracy and efficiency of the new approach when computing the global mass balance or boundary fluxes. From the results of the numerical experiments, it can be concluded that the new approach provides a more efficient and much more accurate mass balance computation scheme than the previous approaches that have been widely used in commercial and public groundwater software.
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Wen-Chih, and Wei-Jen Cho. "A procedure for large-scale DEA computations." Computers & Operations Research 36, no. 6 (June 2009): 1813–24. http://dx.doi.org/10.1016/j.cor.2008.05.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Guerreiro, Andreia P., Vasco Manquinho, and José Rui Figueira. "Exact hypervolume subset selection through incremental computations." Computers & Operations Research 136 (December 2021): 105471. http://dx.doi.org/10.1016/j.cor.2021.105471.

Full text
APA, Harvard, Vancouver, ISO, and other styles