Follow this link to see other types of publications on the topic: HTC computation.

Journal articles on the topic "HTC computation"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "HTC computation".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Shen, Fan, Siyuan Rong, Naigang Cui, and Xianren Kong. "A tensor-based modelling and simulation method for multibody system". Engineering Computations 34, no. 4 (June 12, 2017): 1107–25. http://dx.doi.org/10.1108/ec-11-2015-0375.

Full text
Abstract
Purpose: The purpose of this paper is to provide a method that offers both convenient modelling and precise computation for research on complex multibody systems, such as robot arms and solar power satellites. Classical modelling methods do not always meet these two requirements. Design/methodology/approach: In this paper, tensor coordinates (TC) and homogeneous tensor coordinates (HTC) methods with gradient components are developed, which also have a convenient interface with classical theory. Findings: The HTC method proved its precision and effectiveness in two examples. In the HTC model, the equations take a more convenient matrix form, and the results coincide well with the classical ones. Research limitations/implications: Not many detailed operations are yet supported in the mathematics, which may be developed in further research. Practical implications: With the TC/HTC method, the research work can be separated more thoroughly: the simpler modelling work is left to scientists, while more of the computing work is handed to the computers. This may ease the burden on scientists in multibody modelling. Originality/value: The HTC method combines the advantages of absolute nodal coordinate formulations, tensors and homogeneous coordinates (HC), and it may be used in multibody mechanics or other related engineering fields.
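The HTC formalism described above extends classical homogeneous coordinates (HC). For orientation only, here is a minimal sketch of the HC idea the method builds on: composing planar homogeneous transforms for a hypothetical two-link arm. This is background to the abstract, not the paper's HTC formulation.

```python
import numpy as np

def hc_transform(theta, length):
    """Planar homogeneous transform: rotate by theta, then translate by
    the link length along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

# Hypothetical two-link arm: chaining per-link transforms gives the pose
# of the end effector in one matrix product, the convenience HC provides.
T = hc_transform(np.pi / 4, 1.0) @ hc_transform(np.pi / 6, 0.8)
end_effector = T @ np.array([0.0, 0.0, 1.0])  # origin of the last link frame
print(end_effector[:2])
```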
2

Chabicovsky, Martin, Petr Kotrbacek, Hana Bellerova, Jan Kominek, and Miroslav Raudensky. "Spray Cooling Heat Transfer above Leidenfrost Temperature". Metals 10, no. 9 (September 21, 2020): 1270. http://dx.doi.org/10.3390/met10091270.

Full text
Abstract
This study considers spray cooling starting at surface temperatures of about 1200 °C and finishing at the Leidenfrost temperature. Cooling is in the film boiling regime. The paper uses experimental techniques to study which spray parameters are necessary for good prediction of spray cooling intensity. The research is based on experiments with water and air-mist nozzles. The following spray parameters were measured together with the heat transfer coefficient: water flowrate, water impingement density, impact pressure, droplet size and velocity. Derived parameters such as droplet kinetic energy, droplet momentum and droplet Reynolds number are used in the tested correlations as well. Ten combinations of spray parameters used in correlation functions for the heat transfer coefficient (HTC) are studied and discussed. Correlation functions for the prediction of HTC are presented, and it is shown which spray parameters are necessary for reliable computation of HTC. The best results were obtained when the parameters impact pressure and water impingement density were used together. It was proven that correlations based only on water impingement density, which are the most frequent in the literature, cannot provide reliable results.
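To make the abstract's conclusion concrete, below is a minimal sketch of fitting a power-law correlation HTC = C·p^a·m^b to the two parameters the study found most useful together, impact pressure p and water impingement density m. All numbers here are hypothetical placeholders, not the paper's data or coefficients.

```python
import numpy as np

# Hypothetical calibration data: impact pressure p [kPa], water impingement
# density m [kg m^-2 s^-1], and measured HTC [W m^-2 K^-1].
p = np.array([5.0, 10.0, 20.0, 40.0])
m = np.array([2.0, 4.0, 8.0, 16.0])
htc = np.array([300.0, 480.0, 770.0, 1230.0])

# Fit HTC = C * p**a * m**b by linear least squares in log space.
A = np.column_stack([np.ones_like(p), np.log(p), np.log(m)])
coef, *_ = np.linalg.lstsq(A, np.log(htc), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"HTC ~ {C:.1f} * p^{a:.2f} * m^{b:.2f}")
```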
3

El-Sayed, Hesham, Sharmi Sankar, Heng Yu, and Gokulnath Thandavarayan. "Benchmarking of Recommendation Trust Computation for Trust/Trustworthiness Estimation in HDNs". International Journal of Computers Communications & Control 12, no. 5 (September 10, 2017): 612. http://dx.doi.org/10.15837/ijccc.2017.5.2895.

Full text
Abstract
In recent years, Heterogeneous Distributed Networks (HDNs) have become a predominant technology enabling various applications in fields such as transportation, medicine and war zones. Due to their arbitrary self-organizing nature and temporary topologies in the spatial-temporal region, distributed systems are vulnerable to several security issues and demand strong security countermeasures. Unlike other static networks, the unique characteristics of HDNs demand cutting-edge security policies. Numerous cryptographic techniques have been proposed by different researchers to address the security issues in HDNs. These techniques utilize too many resources, resulting in higher network overheads. Classified as a lightweight security scheme, the Trust Management System (TMS) tends to be one of the most promising technologies, featuring efficiency in terms of availability, scalability and simplicity. It advocates both node-level validation and data-level verification, enhancing trust between the attributes. Further, it thwarts a wide range of security attacks by incorporating various statistical techniques and integrated security services. In this paper, we present a literature survey of different TMSs that highlights reliable techniques adopted across HDNs. We then comprehensively study the existing distributed trust computations and benchmark them according to their effectiveness. Further, performance analysis is applied to the existing computation techniques, and the benchmarked outcome delivered by Recommendation Trust Computations (RTC) is discussed. A Receiver Operating Characteristics (ROC) curve illustrates better accuracy for Recommendation Trust Computations (RTC) in comparison with Direct Trust Computations (DTC) and Hybrid Trust Computations (HTC). Finally, we propose future directions for research and highlight reliable techniques for building an efficient TMS in HDNs.
4

Tran, Duc Q., Vaughn Barry, Ana G. Antun, Maria Ribeiro, Sidney F. Stein, and Christine L. Kempton. "Numeracy in Patients with Hemophilia". Blood 126, no. 23 (December 3, 2015): 40. http://dx.doi.org/10.1182/blood.v126.23.40.40.

Full text
Abstract
Background: Numeracy, defined as the ability to handle basic probability and numerical concepts including computation, estimation, logic, and problem solving, is an under-recognized component of health literacy. Numeracy has been shown to influence performance of health tasks in non-hemophilia populations. Little is known about numeracy in the hemophilia population. Since hemophilia treatment requires understanding of numerical concepts to manage factor replacement, it is likely that numeracy also influences performance of health tasks by patients with hemophilia. A greater understanding of numeracy status and the characteristics influencing numeracy in the hemophilia population may allow healthcare providers to better influence health task performance. The objective of this study is to explore numeracy in the hemophilia population using two different tests of numeracy and to evaluate characteristics that are associated with low numeracy. Methods: Using a cross-sectional design, adults with moderate or severe hemophilia A or B who spoke and read English were enrolled at their annual visit at the Emory/Children's Health Care of Atlanta Hemophilia Treatment Center (HTC). Numeracy was measured using the validated Schwartz Woloshin (SW) test, requiring answers in words, and the unvalidated stick figure test, requiring answers using images. Subjects were considered numerate with the SW numeracy test if all three questions were answered correctly, or with the stick figure numeracy test if all four questions were answered correctly. Demographic and socioeconomic characteristics collected included age, race, ethnicity, household income (more or less than $50,000), level of education completed (more or less than completion of college), and duration of time followed at this HTC. Clinical information including type and severity of hemophilia, history of viral infections, history of depression, and use of chronic medication was abstracted from the medical records. Descriptive statistics for each variable and bivariate associations between numerate status and each dependent variable were calculated. Multivariable modeling using logistic regression was performed using the validated SW numeracy test as the dependent variable. Results: Of 91 enrolled participants with complete data, all were men. Most had hemophilia A [n=82 (90%)] and severe disease [69 (76%)]. Median age was 34 years [interquartile range (IQR) 18]. Sixty-three (69%) were Caucasian; 5 (6%) were Hispanic; 55 (61%) reported income of <$50,000; 33 (36%) had received an undergraduate degree or higher. Median duration of time followed at the HTC was 17.0 years [IQR 18]. Twenty-four (26%) were HIV positive; 54 (59%) were HCV positive; and 19 (21%) had a history of depression. Forty-one (45%) used at least one chronic medication other than factor replacement. Using the SW numeracy test, 22 (24%) participants were numerate. Using the stick figure numeracy test, 60 (66%) were numerate. Only 20 (22%) of all the participants answered all seven questions correctly; two participants (2%) were numerate on the SW test but not on the stick figure test; 40 participants (44%) were numerate on the stick figure numeracy test but not on the SW test; 29 (32%) were not numerate on either test. On bivariable analysis, SW numeracy was associated with higher education (p<0.01), higher income (p=0.035), and the use of chronic medication (p=0.048). On multivariable analysis, after adjusting for age, race, and ethnicity, SW numeracy was associated with higher education (OR 6.21, 95% CI = 1.95-19.76), use of chronic medication (OR 4.31, 95% CI = 1.29-14.34), and less time followed at the HTC (OR 0.92, 95% CI = 0.86-0.97). Conclusion: Among patients with hemophilia, a significant proportion were not numerate. Patients with less than a college education were more likely to not be numerate. Accordingly, many patients with less than a college education may struggle to understand basic numeracy concepts, and this may influence their understanding of dosing, factor pharmacokinetics and probability. The impact of numeracy on health outcomes, and the utility of the SW and stick figure numeracy tests to help guide patient-centered discussions that involve mathematical concepts, are important areas of future research. Disclosures: Tran: Novo Nordisk: Honoraria. Kempton: Baxter Biopharmaceuticals: Honoraria; Biogen Idec: Honoraria; Kedrion Biopharma: Honoraria; CSL Behring: Honoraria.
5

Cui, Zhe, Shivalik Sen, Sriram Karthik Badam, and Niklas Elmqvist. "VisHive: Supporting web-based visualization through ad hoc computational clusters of mobile devices". Information Visualization 18, no. 2 (January 23, 2018): 195–210. http://dx.doi.org/10.1177/1473871617752910.

Full text
Abstract
Current web-based visualizations are designed for single computers and cannot make use of additional devices on the client side, even if today’s users often have access to several, such as a tablet, a smartphone, and a smartwatch. We present a framework for ad hoc computational clusters that leverage these local devices for visualization computations. Furthermore, we present an instantiating JavaScript toolkit called VisHive for constructing web-based visualization applications that can transparently connect multiple devices—called cells—into such ad hoc clusters—called a hive—for local computation. Hives are formed either using a matchmaking service or through manual configuration. Cells are organized into a master–slave architecture, where the master provides the visual interface to the user and controls the slaves and the slaves perform computation. VisHive is built entirely using current web technologies, runs in the native browser of each cell, and requires no specific software to be downloaded on the involved devices. We demonstrate VisHive using four distributed examples: a text analytics visualization, a database query for exploratory visualization, a density-based spatial clustering of applications with noise clustering running on multiple nodes, and a principal component analysis implementation.
6

Puzyrkov, Dmitry, Sergey Polyakov, Viktoriia Podryga, and Sergey Markizov. "Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics". EPJ Web of Conferences 173 (2018): 05014. http://dx.doi.org/10.1051/epjconf/201817305014.

Full text
Abstract
At the present stage of computer technology development it is possible to study the properties and processes in complex systems at the molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting problems are related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Considering how time-consuming these computational tasks are, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems. This requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also a significant challenge, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.
7

Irphan K, Ashiq, and Srisusindhran K. "Reputation based Route Computation for Wireless Ad-Hoc Network Using AODV". International Journal of Scientific Research 3, no. 4 (June 1, 2012): 200–202. http://dx.doi.org/10.15373/22778179/apr2014/68.

Full text
8

Sun, Xiaoqiang, F. Richard Yu, Peng Zhang, Weixin Xie, and Xiang Peng. "A Survey on Secure Computation Based on Homomorphic Encryption in Vehicular Ad Hoc Networks". Sensors 20, no. 15 (July 30, 2020): 4253. http://dx.doi.org/10.3390/s20154253.

Full text
Abstract
In vehicular ad hoc networks (VANETs), the security and privacy of vehicle data are core issues. In order to analyze vehicle data, computations must be performed on them. Encryption is a common method to guarantee the security of vehicle data in the process of data dissemination and computation. However, encrypted vehicle data cannot be analyzed easily and flexibly. Because homomorphic encryption supports computations on ciphertext, it can completely solve this problem. In this paper, we provide a comprehensive survey of secure computation based on homomorphic encryption in VANETs. We first describe the related definitions and the current state of homomorphic encryption. Next, we present the framework, communication domains, wireless access technologies and cyber-security issues of VANETs. Then, we describe the state of the art of secure basic operations, data aggregation, data query and other data computation in VANETs. Finally, several challenges and open issues are discussed for future research.
9

Rubio-Montero, Antonio Juan, Angelines Alberto-Morillas, Rosa De Lima, Pablo Colino-Sanguino, Jorge Blanco-Yagüe, Manuel Giménez, Fernando Blanco-Marcilla, Esther Montes-Prado, Alicia Acero, and Rafael Mayo-García. "Evolution of the maintainability of HPC facilities at CIEMAT headquarters". Revista UIS Ingenierías 19, no. 2 (May 3, 2020): 85–88. http://dx.doi.org/10.18273/revuin.v19n2-2020009.

Full text
Abstract
Since its establishment in 1951, CIEMAT has continuously boosted the use of computation as a research method, deploying innovative computing facilities. Hence, vectorial, MPP, NUMA, and distributed architectures have been managed at CIEMAT, resulting in extensive expertise in HPC maintainability as well as in the computational needs of the community related to international projects. Nowadays, the evolution of HPC hardware and software is progressively faster and implies a continuous challenge to increase their availability for the growing number of different initiatives supported. To address this task, the ICT team has been changing towards a flexible management model, with a look toward future acquisitions.
10

Ubene, Mitchell, Mohammad Heidari, and Animesh Dutta. "Computational Modeling Approaches of Hydrothermal Carbonization: A Critical Review". Energies 15, no. 6 (March 17, 2022): 2209. http://dx.doi.org/10.3390/en15062209.

Full text
Abstract
Hydrothermal carbonization (HTC) continues to gain recognition over other valorization techniques for organic and biomass residue in recent research. The hydrochar product of HTC can be effectively produced from various sustainable resources and has shown impressive potential for a wide range of applications. As industries work to adopt HTC in large processes, the need for reliable models that can be referred to for predictions and optimization studies is becoming imperative. Although much of the available HTC research has addressed modeling, a large gap remains in developing advanced computational models that can better describe the complex mechanisms, heat transfer, and fluid dynamics that take place in the reactor. This review aims to highlight the importance of expanding research on computational modeling for HTC conversion of biomass. It identifies six research areas that are recommended for further examination to contribute the advancements needed for large-scale and continuous HTC operations. The six areas identified for further investigation are variable feedstock compositions, heat of exothermic reactions, type of reactor and scale-up, consideration of pre-pressurization, consideration of the heat-up period, and porosity of feedstock. Addressing these areas in future HTC modeling efforts will greatly help with commercialization of this promising technology.
11

Völkel, Gunnar, Ludwig Lausser, Florian Schmid, Johann M. Kraus, and Hans A. Kestler. "Sputnik: ad hoc distributed computation". Bioinformatics 31, no. 8 (December 12, 2014): 1298–301. http://dx.doi.org/10.1093/bioinformatics/btu818.

Full text
12

Cebrian, Juan M., Baldomero Imbernón, Jesús Soto, and José M. Cecilia. "Evaluation of Clustering Algorithms on HPC Platforms". Mathematics 9, no. 17 (September 4, 2021): 2156. http://dx.doi.org/10.3390/math9172156.

Full text
Abstract
Clustering algorithms are among the most widely used kernels to generate knowledge from large datasets. These algorithms group a set of data elements (i.e., images, points, patterns, etc.) into clusters to identify patterns or common features of a sample. However, these algorithms are very computationally expensive, as they often involve expensive fitness functions that must be evaluated for all points in the dataset. This computational cost is even higher for fuzzy methods, where each data point may belong to more than one cluster. In this paper, we evaluate different parallelisation strategies on different heterogeneous platforms for state-of-the-art fuzzy clustering algorithms such as the Fuzzy C-means (FCM), the Gustafson–Kessel FCM (GK-FCM) and the Fuzzy Minimals (FM). The experimental evaluation includes performance and energy trade-offs. Our results show that, depending on the computational pattern of each algorithm, its mathematical foundation and the amount of data to be processed, each algorithm performs better on a different platform.
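For context on the kernels being parallelised, the sketch below shows the textbook Fuzzy C-means iteration: alternating fuzzy-membership and centroid updates. It is a plain single-threaded reference, not the paper's tuned heterogeneous implementations.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Textbook Fuzzy C-means on an (n_points, n_dims) array X."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                          # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))       # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centers, U = fcm(X, c=2)
```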
13

Guidolin, Michele, Zoran Kapelan, and Dragan Savić. "Using high performance techniques to accelerate demand-driven hydraulic solvers". Journal of Hydroinformatics 15, no. 1 (September 24, 2012): 38–54. http://dx.doi.org/10.2166/hydro.2012.198.

Full text
Abstract
Computer models of water distribution networks are commonly used to simulate large systems under complex dynamic scenarios. These models normally use so-called demand-driven solvers, which determine the nodal pressures and pipe flow rates that correspond to specified nodal demands. This paper investigates the use of data parallel high performance computing (HPC) techniques to accelerate demand-driven hydraulic solvers. The sequential code of the solver implemented in the CWSNet library is analysed to understand which computational blocks contribute the most to the total computation time of a hydraulic simulation. The results obtained show that, contrary to popular belief, the linear solver is not the code block with the highest impact on the simulation time; the pipe head loss computation is. Two data parallel HPC techniques, single instruction multiple data (SIMD) operations and general purpose computation on graphics processing units (GPGPU), are used to accelerate the pipe head loss computation and linear algebra operations in new implementations of the hydraulic solver of the CWSNet library. The results obtained on different network models show that the use of these techniques can significantly improve the performance of a demand-driven hydraulic solver.
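To illustrate why the head loss computation vectorises so well, here is a sketch that evaluates head loss for every pipe in one array expression, which NumPy (like the SIMD code discussed above) maps onto data-parallel hardware. The Hazen-Williams formula is used as a common choice; this is not claimed to be CWSNet's exact formulation.

```python
import numpy as np

def hazen_williams_headloss(q, c, d, length):
    """Vectorised Hazen-Williams head loss (SI units) for all pipes at once.
    q: flow [m^3/s], c: roughness coefficient, d: diameter [m], length [m]."""
    return 10.67 * length * np.abs(q) ** 1.852 / (c ** 1.852 * d ** 4.8704)

# One call evaluates every pipe in the network simultaneously.
q = np.array([0.05, 0.12, 0.03])
hl = hazen_williams_headloss(q, c=130.0,
                             d=np.array([0.3, 0.4, 0.2]),
                             length=np.array([500.0, 800.0, 250.0]))
```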
14

Setiawan, Iwan, Akbari Indra Basuki, and Didi Rosiyadi. "Optimized Hybrid DCT-SVD Computation over Extremely Large Images". Jurnal Teknik Elektro 13, no. 2 (December 20, 2021): 56–61. http://dx.doi.org/10.15294/jte.v13i2.31879.

Full text
Abstract
High performance computing (HPC) is required for image processing, especially for images with huge pixel counts. To avoid dependence on HPC equipment, which is very expensive to provide, a software approach is adopted in this work. Both hardware and software methods share the same goal: computation times as short as possible. The discrete cosine transformation (DCT) and singular value decomposition (SVD) are conventionally applied to the original image by treating it as a single matrix, which results in a heavy computational burden for large images. To overcome this problem, the original image is processed as second-order block matrices, which yields the hybrid DCT-SVD formula. "Hybrid" here means that the only parameter appearing in the formula is the intensity of the original pixels, as the DCT and SVD formulas are merged during derivation. Results show that, with Lena as the test image, computing the singular values with the hybrid formula is almost two seconds faster than with the conventional approach. Instead of pushing hard to provide the equipment, the size-related computational problem can be overcome simply by using the proposed formula.
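A rough sketch of why second-order blocks reduce cost: the singular values of many 2×2 blocks can be computed in one batched call, instead of a single large SVD over the whole image. This illustrates the cost difference only; it is not the paper's merged DCT-SVD formula.

```python
import numpy as np
from time import perf_counter

img = np.random.rand(512, 512)     # stand-in for a test image such as Lena

t0 = perf_counter()
s_full = np.linalg.svd(img, compute_uv=False)       # conventional: one big SVD
t1 = perf_counter()

# Blockwise route: carve the image into 2x2 blocks, then take the singular
# values of all blocks in a single batched call.
blocks = img.reshape(256, 2, 256, 2).transpose(0, 2, 1, 3).reshape(-1, 2, 2)
s_blocks = np.linalg.svd(blocks, compute_uv=False)
t2 = perf_counter()
print(f"full: {t1 - t0:.3f}s, blockwise: {t2 - t1:.3f}s")
```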
15

Mitsopoulos, Georgios, Elpida Panagiotatou, Vasiliki Sant, Evangelos Baltas, Michalis Diakakis, Efthymios Lekkas, and Anastasios Stamou. "Optimizing the Performance of Coupled 1D/2D Hydrodynamic Models for Early Warning of Flash Floods". Water 14, no. 15 (July 30, 2022): 2356. http://dx.doi.org/10.3390/w14152356.

Full text
Abstract
We pose the following research question: "what are (i) the minimum required computation grid and (ii) the required form of hydrodynamic equations, i.e., shallow water equations (SWE) or diffusion wave equations (DWE), in 2D modeling to minimize the computational time while maintaining an acceptable level of error in the prediction of water depths and the extent of flood inundated areas?". To answer this question, we apply the HEC-RAS 1D/2D model to simulate a disastrous flash flood in the town of Mandra, in Attica, Greece, in November 2017. HEC-RAS 1D/2D combines 1D modeling in the cross-sections of the two main streams of Mandra with 2D modeling in the rest of the potentially flooded area of the computational domain, which has an area of 18.36 km². We perform calculations for 8 scenarios that combine various grid sizes (with approximately 44,000–95,000 control volumes) with the use of the SWE or DWE. We derive the following conclusions: (i) calculated maximum water depths using the DWE were equal to 60–65% of the corresponding water depths using the SWE, i.e., the DWE significantly underestimated water depths; (ii) calculated total inundation areas using the SWE were approximately 4.9–7.9% larger than the corresponding inundation areas using the DWE; these differences can be considered acceptable; and (iii) the total computation times using the SWE, which ranged from 67 to 127 min, were 60–70% longer than the computation times using the DWE.
16

Almaiah, Mohammed Amin, Ziad Dawahdeh, Omar Almomani, Adeeb Alsaaidah, Ahmad Al-Khasawneh, and Saleh Khawatreh. "A new hybrid text encryption approach over mobile ad hoc network". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 6 (December 1, 2020): 6461. http://dx.doi.org/10.11591/ijece.v10i6.pp6461-6471.

Full text
Abstract
Data exchange has increased rapidly in recent years with the growing use of mobile networks. Sharing information (text, image, audio and video) over unsecured mobile network channels is liable to attack and theft. Encryption techniques are the most suitable methods to protect information from hackers. The Hill cipher algorithm is a symmetric technique with a simple structure and fast computation, but weak security, because sender and receiver need to use and share the same private key within a non-secure channel. Therefore, a novel hybrid encryption approach combining the elliptic curve cryptosystem and the Hill cipher (ECCHC) is proposed in this paper to convert the Hill cipher from a symmetric technique (private key) to an asymmetric one (public key), increase its security and efficiency, and resist hackers. Thus, there is no need to share the secret key between sender and receiver, and both can generate it from their private and public keys. The proposed approach therefore presents a new contribution in its ability to encrypt every character in the 128-character ASCII table by using its ASCII value directly, without needing to assign a numerical value to each character. The main advantages of the proposed method are its computational simplicity, security efficiency and faster computation.
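For orientation, a minimal sketch of the Hill cipher step operating directly on ASCII codes mod 128, as the abstract describes. The key matrix below is a hypothetical stand-in; in the ECCHC idea it would instead be derived from the elliptic-curve shared secret, so no key ever crosses the channel.

```python
import numpy as np

MOD = 128  # full ASCII table, using character codes directly

def hill_encrypt(text, K):
    """Encrypt pairs of ASCII codes with a 2x2 key matrix, mod 128."""
    codes = [ord(ch) for ch in text] + [32] * (len(text) % 2)  # pad with space
    pairs = np.array(codes).reshape(-1, 2).T
    return ((K @ pairs) % MOD).T.flatten().tolist()

# Hypothetical key; det = 11 is odd, hence K is invertible mod 128 and
# decryption with the modular inverse matrix is possible.
K = np.array([[3, 2],
              [5, 7]])
cipher = hill_encrypt("HTC", K)
```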
17

Read, Dwight W. "The algebraic logic of kinship terminology structures". Behavioral and Brain Sciences 33, no. 5 (October 2010): 399–401. http://dx.doi.org/10.1017/s0140525x10001378.

Full text
Abstract
Jones' proposed application of Optimality Theory assumes the primary kinship data are genealogical definitions of kin terms. This, however, ignores the fact that these definitions can be predicted from the computational, algebra-like structural logic of kinship terminologies, as has been discussed and demonstrated in numerous publications. The richness of human kinship systems derives from the cultural knowledge embedded in kinship terminologies as symbolic computation systems, not the post hoc constraints devised by Jones.
18

Sobehy, Abdallah, Eric Renault, and Paul Muhlethaler. "Position Certainty Propagation: A Localization Service for Ad-Hoc Networks". Computers 8, no. 1 (January 7, 2019): 6. http://dx.doi.org/10.3390/computers8010006.

Full text
Abstract
Location services for ad-hoc networks are of indispensable value for a wide range of applications, such as the Internet of Things (IoT) and vehicular ad-hoc networks (VANETs). Each context requires a solution that addresses the specific needs of the application. For instance, IoT sensor nodes have resource constraints (i.e., computational capabilities), so a localization service should be highly efficient to conserve the lifespan of these nodes. We propose an optimized, energy-aware, low-computation solution requiring three GPS-equipped nodes (anchor nodes) in the network. Moreover, the computations are lightweight and can be implemented distributively among nodes. Knowing the maximum communication range for all nodes and the distances between 1-hop neighbors, each node localizes itself and shares its location with the network in an efficient manner. We simulate our proposed algorithm in the NS-3 simulator and compare our solution with state-of-the-art methods. Our method is capable of localizing more nodes (≈90% of nodes in a network with an average degree ≈10).
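As background for the anchor-based setup, below is a sketch of classical trilateration: a node solves for its own position from three anchors by linearising the range equations (subtracting the first circle equation from the other two). This is the standard construction, not necessarily the paper's exact propagation step.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Position from three anchors and measured ranges."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Anchors at known positions; ranges measured to a node near (3, 4).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 8.06, 6.71]))
```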
19

Mukherjee, Debangshu, Kevin M. Roccapriore, Anees Al-Najjar, Ayana Ghosh, Jacob D. Hinkle, Andrew R. Lupini, Rama K. Vasudevan, et al. "A Roadmap for Edge Computing Enabled Automated Multidimensional Transmission Electron Microscopy". Microscopy Today 30, no. 6 (November 2022): 10–19. http://dx.doi.org/10.1017/s1551929522001286.

Full text
Abstract
The advent of modern, high-speed electron detectors has made the collection of multidimensional hyperspectral transmission electron microscopy datasets, such as 4D-STEM, routine. However, many microscopists find such experiments daunting, since analysis, collection, long-term storage, and networking of such datasets remain challenging. Some common issues are their large and unwieldy size, often several gigabytes, non-standardized data analysis routines, and a lack of clarity about the computing and network resources needed to utilize the electron microscope. The existing computing and networking bottlenecks introduce significant penalties in each step of these experiments, and thus real-time, analysis-driven automated experimentation for multidimensional TEM is challenging. One solution is to integrate microscopy with edge computing, where moderately powerful computational hardware performs the preliminary analysis before handing off the heavier computation to high-performance computing (HPC) systems. Here we trace the roots of computation in modern electron microscopy, demonstrate deep learning experiments running on an edge system, and discuss the networking requirements for tying together microscopes, edge computers, and HPC systems.
20

Pachos, Jiannis, and Paolo Zanardi. "Quantum Holonomies for Quantum Computing". International Journal of Modern Physics B 15, no. 09 (April 10, 2001): 1257–85. http://dx.doi.org/10.1142/s0217979201004836.

Full text
Abstract
Holonomic Quantum Computation (HQC) is an all-geometrical approach to quantum information processing. In the HQC strategy, information is encoded in degenerate eigenspaces of a parametric family of Hamiltonians. The computational network of unitary quantum gates is realized by adiabatically driving the Hamiltonian parameters along loops in a control manifold. By properly designing such loops, the nontrivial curvature of the underlying bundle geometry gives rise to holonomies, i.e., unitary transformations that implement the desired quantum gates. Conditions necessary for universal QC are stated in terms of the curvature associated with the non-abelian gauge potential (connection) over the control manifold. In view of their geometrical nature, the holonomic gates are robust against several kinds of perturbations and imperfections. This fact, along with the adiabatic fashion in which gates are performed, makes HQC in principle an appealing route towards universal fault-tolerant QC.
21

Ahamad, Danish, Md Mobin Akhtar, Shabi Alam Hameed, and Mahmoud Mohammad Mahmoud Al Qerom. "Provably Secure Authentication Approach for Data Security in Cloud Using Hashing, Encryption, and Chebyshev-Based Authentication". International Journal of Information Security and Privacy 16, no. 1 (January 2022): 1–20. http://dx.doi.org/10.4018/ijisp.2022010106.

Full text
Abstract
A secure and efficient authentication mechanism is a major concern in cloud computing due to the data shared between cloud server and user over the internet. This paper proposes an efficient Hashing, Encryption and Chebyshev (HEC)-based authentication approach to secure data communication. Through formal and informal security analysis, it is demonstrated that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The proposed approach addresses the security issues and ensures privacy and data security for the cloud user. Moreover, the proposed HEC-based authentication approach makes the system more robust and secure, and it has been verified in multiple scenarios. The proposed authentication approach also requires less computational time and memory than existing authentication techniques: its performance, measured in terms of computation time and memory, is 26 ms and 1878 bytes for a 100 Kb data size, respectively.
22

Osmani, Zeeshan. "AD HOC BASED VEHCULAR NETWORKING AND COMPUTATION". International Journal of Research in Engineering and Technology 05, no. 03 (March 25, 2016): 492–94. http://dx.doi.org/10.15623/ijret.2016.0503088.

Full text
23

Yang, Xiaodong, Ningning Ren, Aijia Chen, Zhisong Wang, and Caifen Wang. "HSC-MET: Heterogeneous signcryption scheme supporting multi-ciphertext equality test for Internet of Drones". PLOS ONE 17, no. 9 (September 29, 2022): e0274695. http://dx.doi.org/10.1371/journal.pone.0274695.

Full text
Abstract
The Internet of Drones (IoD) is considered a network and management architecture that enables unmanned aerial vehicles (UAVs) to collect data in controlled areas and that conducts access control for UAVs. However, current cloud-assisted IoD schemes cannot efficiently achieve secure communication between heterogeneous cryptosystems and do not support multi-ciphertext equality tests. To improve the security and performance of traditional schemes, we propose a heterogeneous signcryption scheme (HSC-MET) that supports a multi-ciphertext equality test. In this paper, we use a multi-ciphertext equality test technique to achieve simultaneous multi-user retrieval of multiple ciphertexts safely and efficiently. In addition, we adopt heterogeneous signcryption technology to realize secure data communication from public key infrastructure (PKI) to certificateless cryptography (CLC). At the same time, the proposed scheme is based on computation without bilinear pairing, which greatly reduces the computational cost. According to the security and performance analysis, the confidentiality, unforgeability and number security of HSC-MET are proved under the random oracle model (ROM) based on the computational Diffie-Hellman (CDH) problem.
24

Cheng, Xiaogang, Ren Guo, and Yonghong Chen. "Randomized quantum oblivious transfer with application in hacker-resistant computation". International Journal of Quantum Information 16, no. 04 (June 2018): 1850039. http://dx.doi.org/10.1142/s0219749918500399.

Full text
Abstract
Oblivious transfer (OT) is a fundamental primitive in cryptography, but it is well known that unconditionally secure OT is impossible in quantum cryptography. A variant of OT, i.e., randomized OT, is presented. We then show how to realize this variant in quantum cryptography with some security relaxations, which is inevitable because of the well-known impossibility result in quantum cryptography. We also present a new secure computational model, namely the HRC (Hacker-Resistant Computation) model. Since there are more and more hackers and increasing cyber threats on today's Internet, knowing how to protect the information and privacy stored on our computers and on cloud servers is very important, even when the computer or server has been breached by hackers. Finally, some interesting applications of the randomized OT variant to HRC are discussed.
25

Kohlberg, Elon, and Abraham Neyman. "Cooperative strategic games". Theoretical Economics 16, no. 3 (2021): 825–51. http://dx.doi.org/10.3982/te3648.

Full text
Abstract
The value is a solution concept for n‐person strategic games, developed by Nash, Shapley, and Harsanyi. The value of a game is an a priori evaluation of the economic worth of the position of each player, reflecting the players' strategic possibilities, including their ability to make threats against one another. Applications of the value in economics have been rare, at least in part because the existing definition (for games with more than two players) consists of an ad hoc scheme that does not easily lend itself to computation. This paper makes three contributions: We provide an axiomatic foundation for the value; exhibit a simple formula for its computation; and extend the value—its definition, axiomatic characterization, and computational formula—to Bayesian games. We then apply the value in simple models of corruption, oligopolistic competition, and information sharing.
26

Mironescu, Ion Dan, and Lucian Vinţan. "A Simulation Based Analysis of an Multi Objective Diffusive Load Balancing Algorithm". International Journal of Computers Communications & Control 13, no. 4 (July 25, 2018): 503–20. http://dx.doi.org/10.15837/ijccc.2018.4.3308.

Full text
Abstract
In this paper, we present a further development of our research on an optimal software-hardware mapping framework. We used a Petri net model of the complete hardware and software High Performance Computing (HPC) system running a Computational Fluid Dynamics (CFD) application to simulate the behaviour of the proposed diffusive two-level multi-objective load-balancing algorithm. We developed a meta-heuristic algorithm for generating an approximation of the Pareto-optimal set to be used as a reference. The simulations showed the advantages of this algorithm over other diffusive algorithms: reduced computational and communication overhead, and robustness due to low dependence on uncertain data. The algorithm also has the capacity to handle unpredictable events, such as a load increase due to domain refinement or the loss of a computation resource due to malfunction.
27

Pešek, Luděk, Pavel Šnábl, and Vítězslav Bula. "Dry Friction Interblade Damping by 3D FEM Modelling of Bladed Disk: HPC Calculations Compared with Experiment". Shock and Vibration 2021 (October 14, 2021): 1–16. http://dx.doi.org/10.1155/2021/5554379.

Full text
Abstract
Interblade contacts and the damping of a turbine bladed wheel with prestressed dry friction contacts are solved by the 3D finite element method with a surface-to-surface dry friction contact model. This makes it possible to model the spatial relative motions of contact pairs that occur during blade vibrations. To experimentally validate the model, a physical model of the bladed wheel with tie-boss couplings was built and tested. HPC computation with a proposed strategy was used to lower the computational time of the nonlinear solution of the wheel's resonant attenuation, which is used for damping estimation. Comparison of the experimental and numerical damping estimates yields very good agreement.
28

Anisiuba, Vitalis, Haibo Ma, Armin Silaen, and Chenn Zhou. "Computational Studies of Air-Mist Spray Cooling in Continuous Casting". Energies 14, no. 21 (November 4, 2021): 7339. http://dx.doi.org/10.3390/en14217339.

Full text
Abstract
Due to the significant reduction in water droplet size caused by the strong air-water interaction in the spray nozzle, air-mist spray is one of the promising technologies for achieving high-rate heat transfer. This study numerically analyzed the air-mist spray produced by a flat-fan atomizer using three-dimensional computational fluid dynamics simulations, and a multivariable linear regression was used to develop a correlation that predicts the heat transfer coefficient from casting operating conditions such as air pressure, water flow rate, casting speed, and standoff distance. A four-step simulation approach was used to simulate the air-mist spray cooling, capturing the turbulence and mixing of the two fluids in the nozzle, droplet formation, droplet transport and impingement heat transfer. Validations were made on the droplet size and on the VOF-DPM model, which were in good agreement with experimental results. A 33% increase in air pressure increases the lumped HTC by 3.09 ± 2.07% depending on the other casting parameters, while an 85% increase in water flow rate reduces the lumped HTC by 4.61 ± 2.57%. For casting speed, a 6.5% decrease in casting speed results in a 1.78 ± 1.42% increase in the lumped HTC. The results from this study provide useful information for continuous casting operations and optimization.
29

Devanaboyina, Tejaswini, Balakrishna Pillalamarri, and Rama Murthy Garimella. "Distributed Computation in Wireless Sensor Networks". International Journal of Wireless Networks and Broadband Technologies 4, no. 3 (July 2015): 14–32. http://dx.doi.org/10.4018/ijwnbt.2015070102.

Full text
Abstract
Wireless sensor networks are used to perform distributed sensing in various fields such as health, the military and the home, where the sensor nodes communicate among themselves and perform distributed computation over the sensed values to identify the occurrence of an event. An architecture for the distributed computation of primitive recursive functions and the median is presented in this paper. For the architecture for primitive recursive functions, this paper assumes a no-memory computational model of the sensor nodes, i.e., the sensor nodes only have two registers. This assumption is not made for the computation of the median. This paper also explores the applications of wireless sensor networks in building a smart, hassle-free transportation system. In view of emerging technologies like the Internet of Things and vehicular ad hoc networks, the transport system can be made user-friendly by including itinerary planning, dynamic speed boards, etc. Research is already moving in the direction of making transport systems efficient and user-friendly, and this paper serves as one more step in the process of achieving it.
30

Duggan, Ben, John Metzcar, and Paul Macklin. "DAPT: A package enabling distributed automated parameter testing". Gigabyte 2021 (June 4, 2021): 1–10. http://dx.doi.org/10.46471/gigabyte.22.

Full text
Abstract
Modern agent-based models (ABM) and other simulation models require evaluation and testing of many different parameters. Managing that testing for large scale parameter sweeps (grid searches), as well as storing simulation data, requires multiple, potentially customizable steps that may vary across simulations. Furthermore, parameter testing, processing, and analysis are slowed if simulation and processing jobs cannot be shared across teammates or computational resources. While high-performance computing (HPC) has become increasingly available, models can often be tested faster with the use of multiple computers and HPC resources. To address these issues, we created the Distributed Automated Parameter Testing (DAPT) Python package. By hosting parameters in an online (and often free) “database”, multiple individuals can run parameter sets simultaneously in a distributed fashion, enabling ad hoc crowdsourcing of computational power. Combining this with a flexible, scriptable tool set, teams can evaluate models and assess their underlying hypotheses quickly. Here, we describe DAPT and provide an example demonstrating its use.
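To convey the coordination pattern the abstract describes, here is a sketch in which workers atomically claim parameter sets from a shared table, run the model, and mark the row done. The schema, the SQLite backend and the ./model binary are all hypothetical stand-ins; DAPT's real API and storage backends differ.

```python
import sqlite3
import subprocess

def claim_and_run(db_path, worker_id):
    """Claim one pending parameter set, run the model on it, mark it done."""
    con = sqlite3.connect(db_path, isolation_level=None)  # manage txns manually
    con.execute("BEGIN IMMEDIATE")     # take the write lock before reading
    row = con.execute(
        "SELECT id, value FROM params WHERE status='todo' LIMIT 1").fetchone()
    if row is None:
        con.execute("COMMIT")
        return False                   # sweep finished
    run_id, value = row
    con.execute("UPDATE params SET status='running', worker=? WHERE id=?",
                (worker_id, run_id))
    con.execute("COMMIT")              # claim now visible to other workers
    subprocess.run(["./model", "--param", str(value)], check=True)
    con.execute("UPDATE params SET status='done' WHERE id=?", (run_id,))
    return True

while claim_and_run("sweep.db", worker_id="node-7"):
    pass
```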
31

Kang, Ji-Sun, et al. "Computational Efficiency Examination of a Regional Numerical Weather Prediction Model using KISTI Supercomputer NURION". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 10, 2021): 743–49. http://dx.doi.org/10.17762/turcomat.v12i6.2088.

Full text
Abstract
To resolve extreme weather events well, running a numerical weather prediction model with high resolution in time and space is essential. We explore how efficient such modeling can be using NURION. We examined one of the community numerical weather prediction models, WRF, on NURION, KISTI's 5th supercomputer and a national HPC resource. The scalability of the model was tested first, and we compared the computational efficiency of hybrid OpenMP + MPI runs with pure MPI runs. In addition to those parallel computing experiments, we tested a new storage layer called burst buffer to see whether it can accelerate frequent I/O. We found significant differences between the computational environments for running the WRF model. First, we tested the sensitivity of computational efficiency to the number of cores used per node. The sensitivity experiments clearly show that using all cores per node does not guarantee the best results; rather, leaving several cores per node free can give more stable and efficient computation. For the current experimental configuration of WRF, moreover, pure MPI runs give much better computational performance than any hybrid OpenMP + MPI runs. Lastly, we tested the burst buffer storage layer, which is expected to accelerate frequent I/O. However, our experiments show that its impact is not consistently positive: we clearly confirm a positive impact for relatively small problem sizes, while no impact was seen for bigger problems. The significant sensitivity to different computational configurations shown in this paper strongly suggests that HPC users should find the best computing environment before massive use of their applications.
32

Scherber, D. S., and H. C. Papadopoulos. "Distributed computation of averages over ad hoc networks". IEEE Journal on Selected Areas in Communications 23, no. 4 (April 2005): 776–87. http://dx.doi.org/10.1109/jsac.2005.843553.

Full text
33

Xie, Yong, Songsong Zhang, Xiang Li, and Yanggui Li. "An efficient cooperative message authentication based on reputation mechanism in vehicular ad hoc networks". International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771985491. http://dx.doi.org/10.1177/1550147719854910.

Full text
Abstract
Vehicular ad hoc networks have emerged as a promising approach to increasing road safety and efficiency. Vehicles periodically broadcast traffic-related status messages. Message authentication is a common way of ensuring information reliability, but it imposes an unaffordable computational cost on a single vehicle. In this article, we propose an efficient cooperative message authentication scheme based on a reputation mechanism. In the proposed scheme, a reputation model is used to assess the authentication efforts of vehicles, which enhances the initiative for cooperative message authentication and inhibits selfish behavior; a sequence optimization algorithm solves message overflow given the limited computation capacity of the onboard unit and improves the speed of message authentication while ensuring its reliability. Simulation results show that our scheme performs well in terms of authentication efficiency, packet loss ratio, and missed detection ratio.
34

Muralidhar K. and Madhavi K. "Setting Up Ad Hoc Computing as a Service in Mobile Ad Hoc Cloud Computing Environment". International Journal of Interdisciplinary Telecommunications and Networking 13, no. 1 (January 2021): 1–12. http://dx.doi.org/10.4018/ijitn.2021010101.

Full text
Abstract
Despite the rapid growth in the popularity and hardware capacity of mobile devices, they suffer from resource poverty, which limits their ability to meet increasing mobile users' demands. Computation offloading may offer a prominent solution, but it relies on the connection to a remote cloud and may fail in situations where there is poor or no connectivity. The cloudlet was introduced to cover this problem, but mobile users lose free mobility when using cloudlets. Offloading to the cloud or a cloudlet is not always the preferred solution. An alternative is to utilize nearby mobile devices as local resource suppliers and pool their capabilities as a mobile device cloud. In this paper, the authors present such an approach, known as the ad hoc computing as a service (AhCaaS) model, for computation offloading in an ad hoc manner by connecting to nearby mobile devices. They define a multi-attribute selection strategy to determine the optimal computation offloadee. They evaluated the proposed model, and the results show that AhCaaS reduces execution time and battery consumption, and avoids task reassignment.
35

Ayriyan, Alexander S. "Computational experiment in era of HPC". Discrete and Continuous Models and Applied Computational Science 27, no. 3 (December 15, 2019): 263–67. http://dx.doi.org/10.22363/2658-4670-2019-27-3-263-267.

Full text
Abstract
In this note we discuss the impact of the development of parallel computing architecture and technology on the typical life-cycle of the computational experiment. In particular, it is argued that the development and installation of high-performance computing systems is important in itself, regardless of specific scientific tasks, since the presence of cutting-edge HPC systems within an academic infrastructure opens wide possibilities and stimulates new research.
36

Ayriyan, Alexander S. "Computational experiment in era of HPC". Russian Family Doctor 27, no. 3 (December 15, 2019): 263–67. http://dx.doi.org/10.17816/rfd10662.

Full text
Abstract
In this note we discuss the impact of the development of parallel computing architecture and technology on the typical life-cycle of the computational experiment. In particular, it is argued that the development and installation of high-performance computing systems is important in itself, regardless of specific scientific tasks, since the presence of cutting-edge HPC systems within an academic infrastructure opens wide possibilities and stimulates new research.
37

Ayriyan, Alexander S. "Computational experiment in era of HPC". Russian Family Doctor 27, no. 3 (December 15, 2019): 263–67. http://dx.doi.org/10.17816/rfd10669.

Full text
Abstract
In this note we discuss the impact of the development of parallel computing architecture and technology on the typical life-cycle of the computational experiment. In particular, it is argued that the development and installation of high-performance computing systems is important in itself, regardless of specific scientific tasks, since the presence of cutting-edge HPC systems within an academic infrastructure opens wide possibilities and stimulates new research.
38

Kumar, Mahesh, G. Janardhana Reddy, G. Ravi Kiran, M. A. Mohammed Aslam, and O. Anwar Beg. "Computation of entropy generation in dissipative transient natural convective viscoelastic flow". Heat Transfer-Asian Research 48, no. 3 (January 16, 2019): 1067–92. http://dx.doi.org/10.1002/htj.21421.

Full text
39

Hariri, Sleimane, Sylvain Weill, Jens Gustedt, and Isabelle Charpentier. "A balanced watershed decomposition method for rain-on-grid simulations in HEC-RAS". Journal of Hydroinformatics 24, no. 2 (January 22, 2022): 315–32. http://dx.doi.org/10.2166/hydro.2022.078.

Full text
Abstract
Rain-on-grid simulations for the modeling of 2D unsteady flows in response to precipitation input are implemented in the Hydrologic Engineering Center's River Analysis System (HEC-RAS) software by means of a finite volume method and a sparse parallel linear solver running on a single processor. Obviously, this may fail due to memory shortage when the discretization yields linear systems that are too large. Such simulations are good candidates for the design of partition and domain decomposition methods for the modeling of very large catchments by means of hydrological sub-units. With load-balanced computation in mind, the proposed area-balanced partition method is based on the flow accumulation information. Implemented in HEC-RAS, the domain decomposition method comprises 2D–1D models combining 2D sub-unit meshes with 1D channels for extracting outflow data from upstream units and passing the information to downstream units. The workflow is automated using the HEC-RAS Controller. As a case study, we consider the French Saar catchment (1,747 km²). The new partition methods demonstrate a better equilibrium between sub-basin areas and between the computational loads of the sub-problems. The total computational time is substantially reduced. Domain decomposition is required for carrying out rain-on-grid simulations over the Saar watershed (2,800,000 elements). The resulting discharge and flood maps show very good agreement with available data.
40

Usman, Sardar, Rashid Mehmood, Iyad Katib, and Aiiad Albeshri. "Data Locality in High Performance Computing, Big Data, and Converged Systems: An Analysis of the Cutting Edge and a Future System Architecture". Electronics 12, no. 1 (December 23, 2022): 53. http://dx.doi.org/10.3390/electronics12010053.

Full text
Abstract
Big data has revolutionized science and technology, leading to the transformation of our societies. High-performance computing (HPC) provides the necessary computational power for big data analysis using artificial intelligence methods. Traditionally, HPC and big data had focused on different problem domains and had grown into two different ecosystems. Efforts have been underway for the last few years to bring the best of both paradigms into converged HPC and big data architectures. Designing HPC and big data converged systems is a hard task, requiring careful placement of data, analytics, and other computational tasks such that the desired performance is achieved with the least amount of resources. Energy efficiency has become the biggest hurdle in the realization of HPC, big data, and converged systems capable of delivering exascale and beyond performance. Data locality is a key parameter of HPDA system design, as moving even a byte costs heavily in both time and energy as the size of the system increases. Performance in terms of time and energy is the most important factor for users; energy particularly so, due to it being the major hurdle in high-performance system design and the increasing focus on green energy systems for environmental sustainability. Data locality is a broad term that encapsulates different aspects, including bringing computations to data, minimizing data movement by efficient exploitation of cache hierarchies, reducing intra- and inter-node communications, locality-aware process and thread mapping, and in situ and in-transit data analysis. This paper provides an extensive review of cutting-edge research on data locality in HPC, big data, and converged systems. We review the literature on data locality in HPC, big data, and converged environments and discuss challenges, opportunities, and future directions. Subsequently, using the knowledge gained from this extensive review, we propose a system architecture for future HPC and big data converged systems. To the best of our knowledge, there is no such review on data locality in converged HPC and big data systems.
APA, Harvard, Vancouver, ISO, etc. styles
41

Soni, Sonam. "Reliable Trust Computation model in Vehicular ad-hoc Network". American Journal of Advanced Computing 1, no. 3 (July 1, 2020): 1–5. http://dx.doi.org/10.15864/ajac.1304.

Full text
Abstract
This paper studies and analyzes existing work on trust models in order to build a new survey of trust in VANETs. It observes that many new techniques are available for building a new trust model in VANETs that provides better security across the entire trust-management environment. The paper covers the trust-calculation work carried out to date, summarizing the various trust models, the security requirements they address, and the open issues that remain.
APA, Harvard, Vancouver, ISO, etc. styles
42

Agarwal, Pallavi and Neha Bhardwaj. "Vehicular Ad Hoc Networks: Hashing and Trust Computation Techniques". International Journal of Grid and Distributed Computing 9, no. 7 (July 31, 2016): 301–6. http://dx.doi.org/10.14257/ijgdc.2016.9.7.30.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
43

Karpas, Erez, Michael Katz and Shaul Markovitch. "When Optimal Is Just Not Good Enough: Learning Fast Informative Action Cost Partitionings". Proceedings of the International Conference on Automated Planning and Scheduling 21 (March 22, 2011): 122–29. http://dx.doi.org/10.1609/icaps.v21i1.13447.

Full text
Abstract
Several recent heuristics for domain-independent planning adopt some action cost partitioning scheme to derive admissible heuristic estimates. Given a state, two methods for obtaining an action cost partitioning have been proposed: optimal cost partitioning, which results in the best possible heuristic estimate for that state but requires substantial computational effort, and ad-hoc (uniform) cost partitioning, which is much faster but usually less informative. These two methods represent almost opposite points in the tradeoff between heuristic accuracy and heuristic computation time. One proposed compromise between the two is to use an optimal cost partitioning computed for the initial state to evaluate all states. In this paper, we propose a novel method for deriving a fast, informative cost-partitioning scheme that is based on computing optimal action cost partitionings for a small set of states and using these to derive heuristic estimates for all states. Our method provides greater control over the accuracy/computation-time tradeoff, which, as our empirical evaluation shows, can result in better performance.
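To make the uniform baseline mentioned above concrete, here is an illustrative sketch (not the paper's code) of uniform cost partitioning: each action's cost is split evenly among the component heuristics, so the sum of their estimates under the partitioned costs remains admissible.

```python
# Illustrative uniform action-cost partitioning. "heuristics" are assumed
# to be callables h(state, costs) -> float; real planners partition costs
# among abstraction heuristics such as pattern databases.
def uniform_partition(costs: dict[str, float], k: int) -> dict[str, float]:
    """Give each of the k heuristics an equal share of every action cost."""
    return {action: cost / k for action, cost in costs.items()}

def h_sum(state, heuristics, costs):
    share = uniform_partition(costs, len(heuristics))
    # Admissible: across the k copies, each action's shares sum to its cost,
    # so no plan can be charged more than its true cost overall.
    return sum(h(state, share) for h in heuristics)
```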
APA, Harvard, Vancouver, ISO, etc. styles
44

Markus, A. A., W. M. G. Courage and M. C. L. M. van Mierlo. "A Computational Framework for Flood Risk Assessment in The Netherlands". Scientific Programming 18, no. 2 (2010): 93–105. http://dx.doi.org/10.1155/2010/782402.

Full text
Abstract
The safety of dikes in The Netherlands, located in the delta of the rivers Rhine, Meuse and Scheldt, has been the subject of debate for more than ten years. The safety (or flood risk) of a particular area may depend on the safety of other areas. This is referred to as effects of river system behaviour on flood risk (quantified as the estimated number of casualties and economic damage). A computational framework was developed to assess these effects. It consists of several components that are loosely coupled via data files and Tcl scripts to manage the individual programs and keep track of the state of the computations. The computations involved are lengthy (days or even weeks on a Linux cluster), which makes the framework currently more suitable for planning and design than for real-time operation. While the framework was constructed ad hoc, it can also be viewed more formally as a tuple space. Realising this makes it possible to adopt the philosophy for other similar frameworks.
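The tuple-space view mentioned in the abstract can be sketched in a few lines. The following is an illustrative toy (not the authors' Tcl framework): producers put tuples into a shared space and workers take the first tuple matching a pattern, which mirrors how the loosely coupled data files and control scripts coordinate.

```python
# Toy Linda-style tuple space; None in a pattern matches any field.
class TupleSpace:
    def __init__(self) -> None:
        self._space: list[tuple] = []

    def put(self, tup: tuple) -> None:          # Linda "out"
        self._space.append(tup)

    def take(self, pattern: tuple):             # Linda "in": remove a match
        for i, tup in enumerate(self._space):
            if len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup)
            ):
                return self._space.pop(i)
        return None

space = TupleSpace()
space.put(("scenario", 12, "ready"))            # a hydraulic run to perform
job = space.take(("scenario", None, "ready"))   # a worker claims the job
print(job)                                      # -> ('scenario', 12, 'ready')
```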
APA, Harvard, Vancouver, ISO, etc. styles
45

Yim, Won Cheol and John C. Cushman. "Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments". PeerJ 5 (June 22, 2017): e3486. http://dx.doi.org/10.7717/peerj.3486.

Full text
Abstract
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it still has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. This freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
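DCBLAST's core idea is query distribution: split the input FASTA into chunks and run an independent BLAST process per chunk. The sketch below is a simplified single-machine illustration, assuming Biopython and the NCBI blastn binary on PATH; DCBLAST itself generates job scripts for an HPC scheduler rather than calling subprocess directly, and the input file name is hypothetical.

```python
from Bio import SeqIO
import subprocess

def split_fasta(path: str, n_chunks: int) -> list[str]:
    """Write the query sequences into n_chunks roughly equal FASTA files."""
    records = list(SeqIO.parse(path, "fasta"))
    size = -(-len(records) // n_chunks)        # ceiling division
    chunks = []
    for start in range(0, len(records), size):
        name = f"chunk_{start // size}.fasta"
        SeqIO.write(records[start:start + size], name, "fasta")
        chunks.append(name)
    return chunks

for chunk in split_fasta("queries.fasta", 8):  # hypothetical input file
    # On a cluster, each chunk would run as a separate node's job; outputs
    # are simply concatenated afterwards because queries are independent.
    subprocess.run(["blastn", "-query", chunk, "-db", "nt",
                    "-out", chunk + ".out"], check=True)
```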
APA, Harvard, Vancouver, ISO, etc. styles
46

Pak, Young-Shang, Jung-Ho Shin and Soon-Min Jang. "Computational Study of Human Calcitonin (hCT) Oligomer". Bulletin of the Korean Chemical Society 30, no. 12 (December 20, 2009): 3006–10. http://dx.doi.org/10.5012/bkcs.2009.30.12.3006.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
47

Sharma, Pavan K., B. Gera, R. K. Singh and K. K. Vaze. "Computational Fluid Dynamics Modeling of Steam Condensation on Nuclear Containment Wall Surfaces Based on Semiempirical Generalized Correlations". Science and Technology of Nuclear Installations 2012 (2012): 1–7. http://dx.doi.org/10.1155/2012/106759.

Full text
Abstract
In water-cooled nuclear power reactors, significant quantities of steam and hydrogen could be produced within the primary containment following postulated design basis accidents (DBA) or beyond design basis accidents (BDBA). For accurate calculation of the temperature/pressure rise and of hydrogen transport in the reactor containment under such scenarios, a wall condensation heat transfer coefficient (HTC) is used. In the present work, the adaptation of a commercial CFD code, with the implementation of models for steam condensation on wall surfaces in the presence of noncondensable gases, is explained. Steam condensation has been modeled using an empirical average HTC, which was originally developed for "lumped-parameter" (volume-averaged) modeling of steam condensation in the presence of noncondensable gases. The present paper suggests a generalized HTC based on curve fitting of most of the reported semiempirical condensation models, each of which is valid for specific wall conditions. The methodology has been validated against the limited experimental data reported from the COPAIN experimental facility. This is the first step towards a CFD-based generalized analysis procedure for condensation modeling applicable to containment wall surfaces, which is being evolved further for specific wall surfaces within the multicompartment containment atmosphere.
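The curve-fitting step can be illustrated with scipy. The power-law dependence on noncondensable mass fraction and total pressure below is an assumed form for illustration only, and the sample points are hypothetical; the paper fits its generalized HTC to values generated from the reported semiempirical correlations.

```python
import numpy as np
from scipy.optimize import curve_fit

def htc_model(X, a, b, c):
    """Assumed form h = a * x_nc**b * p**c (x_nc: noncondensable mass
    fraction, p: total pressure in bar); not the paper's equation."""
    x_nc, p = X
    return a * x_nc ** b * p ** c

# Hypothetical pooled samples (x_nc, p) -> h [W/(m^2 K)]
x_nc = np.array([0.3, 0.5, 0.7, 0.8])
p = np.array([1.5, 2.0, 3.0, 4.0])
h = np.array([900.0, 520.0, 310.0, 260.0])

params, _ = curve_fit(htc_model, (x_nc, p), h, p0=(300.0, -0.7, 0.4))
a, b, c = params
print(f"fitted: h = {a:.1f} * x_nc^{b:.2f} * p^{c:.2f}")
```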
APA, Harvard, Vancouver, ISO, etc. styles
48

Yao, Yu Feng, Marwan Effendy and Jun Yao. "Evaluation of Wall Heat Transfer in Blade Trailing-Edge Cooling Passage". Applied Mechanics and Materials 284-287 (January 2013): 738–42. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.738.

Full text
Abstract
Model configurations of a turbine blade trailing-edge internal cooling passage, with elliptic pin-fins staggered in the streamwise and spanwise directions, are adopted for numerical investigation using computational fluid dynamics (CFD). A grid refinement study is performed first to identify a baseline mesh, followed by a validation study of passage total pressure loss, which gives discrepancies of 2% and 4%, respectively, for the two chosen configurations in comparison with experimental measurements. Further investigations focus on evaluating the wall heat transfer coefficient (HTC) of both the pin-fin and end walls. CFD-predicted pin-fin wall HTC is generally in good agreement with test data for the streamwise-staggered elliptic pin-fins, but some discrepancies occur for the spanwise-staggered elliptic pin-fins. CFD-predicted end wall HTC shows reasonably good agreement for the first three rows, but discrepancies in downstream rows are around a factor of 2-3. The ratio of averaged pin-fin to end wall HTC is estimated at 1.3-1.5, comparable to the 1.8-2.1 found for a circular pin-fin configuration. Further study should focus on improving end wall HTC predictions, probably through a conjugate heat transfer model.
APA, Harvard, Vancouver, ISO, etc. styles
49

Heintel, Markus. "Historical Height Samples with Shortfall: A Computational Approach". History and Computing 8, no. 1 (March 1996): 24–37. http://dx.doi.org/10.3366/hac.1996.8.1.24.

Full text
Abstract
Research in economic history frequently uses human height as a proxy for net nutrition. This anthropometric method enables historians to measure time trends and differences in nutritional status. However, the most widely used data sources for historical heights, military mustering registers, cannot be regarded as random samples of the underlying population. The lower side of the otherwise normal distribution is eroded by a phenomenon called shortfall, because shorter individuals are under-represented below a certain threshold (truncation point). This paper reviews two widely used methods for analyzing historical height samples with shortfall: the Quantile Bend Estimator (QBE) and the Reduced Sample Maximum Likelihood Estimator (RSMLE). Because of the drawbacks of these procedures, a new computational approach for identifying the truncation point of height samples with shortfall, using density estimation techniques, is proposed and illustrated on an Austrian dataset. Finally, this procedure, combined with a truncated regression model, is compared to the QBE for estimating the mean and the standard deviation. The results demonstrate the deficiencies of the QBE once again and reflect well on the new method.
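Once a truncation point has been identified, the mean and standard deviation can be recovered by maximizing the likelihood of a left-truncated normal. The sketch below is illustrative (not Heintel's procedure or data): it simulates shortfall below a hypothetical threshold and refits the underlying distribution.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(params, x, t):
    """Negative log-likelihood of a normal conditioned on x >= t."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(norm.logpdf(x, mu, sigma) - norm.logsf(t, mu, sigma))

t = 165.0                                  # hypothetical truncation point (cm)
rng = np.random.default_rng(0)
heights = rng.normal(170.0, 6.5, 20_000)
observed = heights[heights >= t]           # simulate shortfall of short men

res = minimize(neg_loglik, x0=(np.mean(observed), np.std(observed)),
               args=(observed, t), method="Nelder-Mead")
print("estimated mu, sigma:", res.x)       # close to the true 170.0 and 6.5
```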
APA, Harvard, Vancouver, ISO, etc. styles
50

Yoon, JunWeon, TaeYoung Hong, ChanYeol Park, Seo-Young Noh and HeonChang Yu. "Log Analysis-Based Resource and Execution Time Improvement in HPC: A Case Study". Applied Sciences 10, no. 7 (April 10, 2020): 2634. http://dx.doi.org/10.3390/app10072634.

Full text
Abstract
High-performance computing (HPC) uses many distributed computing resources to solve large computational science problems through parallel computation. Such an approach can reduce overall job execution time and increase the capacity for solving large-scale and complex problems. On a supercomputer, the job scheduler, the HPC system's flagship tool, is responsible for distributing and managing the resources of large systems. In this paper, we analyze the execution log of the job scheduler over a certain period of time and propose an optimization approach to reduce the idle time of jobs. In our experiment, the main root cause of job delay was found to be waiting for resources. Overall job execution time is significantly affected and delayed because idle resources must accumulate and be ready before a large-scale job can be launched. A backfilling algorithm can exploit these otherwise idle resources and help reduce job execution time. We therefore propose a backfilling algorithm that can be applied to the supercomputer. Experimental results show that the overall execution time is reduced.
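The backfilling idea can be shown in a few lines. The sketch below is an illustrative EASY-style backfill (an assumption; the paper describes its own variant): a queued job may start immediately only if it fits in the currently free nodes and will finish before the reservation made for the job at the head of the queue.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int
    walltime: int  # user-estimated runtime

def backfill(queue: list[Job], free_nodes: int,
             head_start_time: int, now: int = 0) -> list[Job]:
    """Start queued jobs that cannot delay the head job's reservation."""
    started = []
    for job in list(queue):
        fits = job.nodes <= free_nodes
        harmless = now + job.walltime <= head_start_time
        if fits and harmless:
            started.append(job)
            free_nodes -= job.nodes
            queue.remove(job)
    return started

queue = [Job("small-a", 2, 30), Job("small-b", 4, 120), Job("tiny", 1, 10)]
# The head-of-queue job is waiting for nodes that free up at t = 60.
print([j.name for j in backfill(queue, free_nodes=4, head_start_time=60)])
# -> ['small-a', 'tiny']  ("small-b" would overrun the head's reservation)
```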
APA, Harvard, Vancouver, ISO, etc. styles