
Journal articles on the topic "Parallel computers Evaluation"



Consult the top 50 journal articles for your research on the topic "Parallel computers Evaluation".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and a citation for the selected work will be generated automatically in the citation style of your choice (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided that the relevant parameters are present in the item's metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Choudhary, A., Wei-Keng Liao, D. Weiner, P. Varshney, R. Linderman, M. Linderman and R. Brown. "Design, implementation and evaluation of parallel pipelined STAP on parallel computers". IEEE Transactions on Aerospace and Electronic Systems 36, no. 2 (April 2000): 528–48. http://dx.doi.org/10.1109/7.845238.

2

Moorthi, M. Narayana, and R. Manjula. "Performance Evaluation and Analysis of Parallel Computers Workload". International Journal of Grid and Distributed Computing 9, no. 1 (January 31, 2016): 127–34. http://dx.doi.org/10.14257/ijgdc.2016.9.1.13.

3

TOUYAMA, TAKAYOSHI, and SUSUMU HORIGUCHI. "PERFORMANCE EVALUATION OF PRACTICAL PARALLEL COMPUTER MODEL LogPQ". International Journal of Foundations of Computer Science 12, no. 03 (June 2001): 325–40. http://dx.doi.org/10.1142/s0129054101000515.

Abstract:
Today's supercomputers are being replaced by massively parallel computers consisting of large numbers of processing elements, which satisfy the continuously increasing demand for computing power. Practical parallel computing models are expected to support the development of efficient parallel algorithms on massively parallel computers. We have therefore presented a practical parallel computation model, LogPQ, which incorporates communication queues into the LogP model. This paper addresses the performance of a parallel matrix multiplication algorithm under the LogPQ and LogP models. The algorithm is implemented on a Cray T3E, and its parallel performance is compared with that on the older CM-5. The comparison shows that the communication network of the T3E has better buffering behaviour than that of the CM-5, so no extra buffering needs to be provided on the T3E, although small effects remain for both send and receive buffering. The effect of message size also remains, which shows that the overhead and gap must be taken as proportional to the message size.
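For readers unfamiliar with the LogP family of models the abstract refers to, point-to-point communication cost can be sketched as follows (a toy calculator under the four LogP parameters; the queueing effects that LogPQ adds are not modelled here, and the function name is ours):

```python
def logp_send_time(L, o, g, k=1):
    """Time until the last of k back-to-back messages from one sender
    is received under the LogP model: latency L, per-message send/receive
    overhead o, and minimum gap g between consecutive injections
    (assuming g >= o). LogPQ would add terms for finite message queues."""
    inject_last = (k - 1) * g        # when the k-th injection can begin
    return inject_last + o + L + o   # send overhead + wire latency + receive overhead
```

A single message costs L + 2o; with L = 5, o = 1, g = 2, three pipelined messages complete at time 11 rather than three times the single-message cost, which is what makes the gap parameter matter.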
4

KRUSCHE, PETER. "EXPERIMENTAL EVALUATION OF BSP PROGRAMMING LIBRARIES". Parallel Processing Letters 18, no. 01 (March 2008): 7–21. http://dx.doi.org/10.1142/s0129626408003193.

Abstract:
The model of bulk-synchronous parallel computation (BSP) helps to implement portable general-purpose algorithms while maintaining predictable performance on different parallel computers. Nevertheless, when programming in "BSP style", the running time of an algorithm's implementation can depend heavily on the underlying communication library. In this study, an overview of existing approaches to practical BSP programming in C/C++ or Fortran is given, and benchmarks are reported for the two main BSP-like communication libraries, the Oxford BSP Toolset and PUB. Furthermore, a memory-efficient matrix multiplication algorithm was implemented and used to compare their performance on different parallel computers and to evaluate compliance with theoretical predictions.
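The BSP cost model against which such libraries are benchmarked is simple enough to state in a few lines (an illustrative sketch; the parameter names follow the usual BSP convention, not any particular library's API):

```python
def bsp_cost(supersteps, g, l):
    """Total BSP running time: each superstep contributes its local
    work w, a communication term h*g for an h-relation (h is the max
    number of words any processor sends or receives), and the barrier
    synchronisation cost l."""
    return sum(w + h * g + l for w, h in supersteps)
```

For example, two supersteps with (w, h) of (100, 10) and (40, 2) on a machine with g = 4 and l = 50 cost 190 + 98 = 288 time units; predictability comes from the fact that only these three machine parameters enter the formula.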
5

Onbasioglu, E., and Y. Paker. "A comparative workload-based methodology for performance evaluation of parallel computers". Future Generation Computer Systems 12, no. 6 (June 1997): 521–45. http://dx.doi.org/10.1016/s0167-739x(97)83070-1.

6

Furht, B. "A contribution to classification and evaluation of structures for parallel computers". Microprocessing and Microprogramming 25, no. 1-5 (January 1989): 203–8. http://dx.doi.org/10.1016/0165-6074(89)90196-8.

7

BLÖTE, H. W. J. "Statistical Mechanics and Special-Purpose Computers". International Journal of Modern Physics C 02, no. 01 (March 1991): 14–20. http://dx.doi.org/10.1142/s0129183191000032.

Abstract:
A number of special-purpose computers (SPCs) have been built in the last two decades, and more are under construction. In parallel with the evolution of general-purpose computers, the capacity of the fastest SPCs has grown considerably in this period. The increase in speed is partly due to the availability of faster components, but even more important is the introduction of new architectures using pipelining and parallel processing. Apart from becoming faster on average, SPCs have undergone a pronounced diversification that affects not only their speed but also their versatility and, of course, their cost. An evaluation of SPC performance and cost in comparison with general-purpose supercomputers shows that, under certain circumstances, SPCs can play a very useful role: they enable calculations that would otherwise not be feasible because of excessive cost. However, the effort needed to build even a relatively simple SPC can easily be underestimated.
8

Sueyoshi, Toshinori, Keizo Saisho and Itsujiro Arita. "Performance evaluation of the binary tree access mechanism in MIMD type parallel computers". Systems and Computers in Japan 17, no. 9 (1986): 47–57. http://dx.doi.org/10.1002/scj.4690170906.

9

Nguyen Thu, Thuy. "Parallel iteration of two-step Runge-Kutta methods". Journal of Science Natural Science 66, no. 1 (March 2021): 12–24. http://dx.doi.org/10.18173/2354-1059.2021-0002.

Abstract:
In this paper, we introduce the parallel iteration of two-step Runge-Kutta methods for solving non-stiff initial-value problems for systems of first-order differential equations (ODEs), y′(t) = f(t, y(t)), for use on parallel computers. Starting with an s-stage implicit two-step Runge-Kutta (TSRK) method of order p, we apply a highly parallel predictor-corrector iteration process in P(EC)^m E mode. In this way, we obtain an explicit two-step Runge-Kutta method that has order p for all m and that requires s(m+1) right-hand-side evaluations per step, of which each set of s evaluations can be computed in parallel. Through a number of numerical experiments, we show the superiority of the parallel predictor-corrector methods proposed in this paper over both sequential and parallel methods available in the literature.
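The P(EC)^m E iteration pattern can be illustrated with a deliberately simple stand-in: an Euler predictor and a trapezoidal corrector instead of the paper's s-stage TSRK corrector (all names here are ours; in the parallel methods of the paper, the s stage evaluations inside each correction are what run concurrently):

```python
def pece_step(f, t, y, h, m=2):
    """One step of a predictor-corrector scheme in P(EC)^m E mode:
    Predict with Euler, then m rounds of Evaluate-and-Correct using
    the trapezoidal rule, then a final Evaluation reused next step."""
    fy = f(t, y)
    yp = y + h * fy                              # P: predictor
    for _ in range(m):                           # (EC)^m
        yp = y + 0.5 * h * (fy + f(t + h, yp))   # correct with trapezoidal rule
    return yp, f(t + h, yp)                      # E: final evaluation
```

For y′ = y from y(0) = 1 with h = 0.1 and m = 2, one step gives 1.10525, close to e^0.1 ≈ 1.10517, illustrating how the explicit iteration approaches the implicit corrector's solution as m grows.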
10

Gupta, Anshul, Fred G. Gustavson, Mahesh Joshi and Sivan Toledo. "The design, implementation, and evaluation of a symmetric banded linear solver for distributed-memory parallel computers". ACM Transactions on Mathematical Software 24, no. 1 (March 1998): 74–101. http://dx.doi.org/10.1145/285861.285865.

11

Fu, Zheng-Qing, John Chrzas, George M. Sheldrick, John Rose and Bi-Cheng Wang. "A parallel program using SHELXD for quick heavy-atom partial structural solution on high-performance computers". Journal of Applied Crystallography 40, no. 2 (March 12, 2007): 387–90. http://dx.doi.org/10.1107/s0021889807003998.

Abstract:
A parallel algorithm has been designed for SHELXD to solve the heavy-atom partial structures of protein crystals quickly. Based on this algorithm, a program has been developed to run on high-performance multiple-CPU Linux PCs, workstations or clusters. Tests on the 32-CPU Linux cluster at SER-CAT, APS, Argonne National Laboratory, show that the parallelization speeds up the process dramatically, by a factor of roughly the number of CPUs applied, leading to reliable and nearly instant heavy-atom site solutions. This provides a practical opportunity to employ the heavy-atom search as an alternative tool for evaluating anomalous-scattering data quality during single/multiple-wavelength anomalous diffraction (SAD/MAD) data collection at synchrotron beamlines.
12

Puzyrev, Vladimir, Seid Koric and Scott Wilkin. "Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems". Computers & Geosciences 89 (April 2016): 79–87. http://dx.doi.org/10.1016/j.cageo.2016.01.009.

13

Aslot, Vishal, and Rudolf Eigenmann. "Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks". Scientific Programming 11, no. 2 (2003): 105–24. http://dx.doi.org/10.1155/2003/401032.

Abstract:
Modern computer systems have evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As multiprocessor hardware evolves, new ways of programming it are also developed; some merely adopt and standardize older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks, which we organize, summarize, and display using a quantitative model whose derivation we discuss in detail. We also discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less-than-ideal speedup on our platform.
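A common yardstick for the less-than-ideal speedup the authors measure is Amdahl's law (our illustration, not the paper's own quantitative model):

```python
def amdahl_speedup(serial_fraction, p):
    """Amdahl's law: upper bound on parallel speedup with p processors
    when a given fraction of the runtime is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)
```

Even 5% serial work caps the speedup on 16 processors at about 9.1x, which is why benchmark analyses of this kind focus on the loops where serialization creeps in.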
14

Wiebe, James H. "Teaching Mathematics with Technology: Order of Operations". Arithmetic Teacher 37, no. 3 (November 1989): 36–38. http://dx.doi.org/10.5951/at.37.3.0036.

Abstract:
The NCTM's recently released Curriculum and Evaluation Standards for School Mathematics (1989) recommends that calculators and computers be freely available to elementary school students for solving mathematical problems, exploring patterns and concepts, and investigating realistic applications. Most students, however, need some help in learning to use these tools, especially with problems involving more than one operation, as many realistic applications do. They need to know that different calculators and computer software tools use different internal algorithms for finding answers and thus may give different answers to the same problems. They need to be taught how to enter multistep problems and evaluate the displayed result, that is, to do parallel mental computations. This article focuses on teaching elementary school students the order in which calculators and computer languages evaluate mathematical expressions.
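The discrepancy the article teaches is easy to demonstrate: a simple four-function calculator evaluates strictly left to right, while languages with algebraic precedence do not (a sketch; the function and token format are ours):

```python
def left_to_right(tokens):
    """Evaluate an alternating [number, op, number, ...] list strictly
    left to right, as many simple four-function calculators do."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    result = tokens[0]
    for op, val in zip(tokens[1::2], tokens[2::2]):
        result = ops[op](result, val)
    return result
```

left_to_right([2, '+', 3, '*', 4]) gives 20, whereas Python's own 2 + 3 * 4 gives 14 because multiplication binds first: the same keystrokes, two different internal algorithms, two different answers.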
15

Laybourn, Mark, and John Pascoe. "What happens when quantum computing re-defines the assessment of investment risk?" APPEA Journal 57, no. 2 (2017): 486. http://dx.doi.org/10.1071/aj16140.

Abstract:
The dawn of quantum computing is upon us, and as the world's smartest minds determine how the technology will change our daily lives, we consider how it could help investors in oil and gas projects make better decisions. The oil and gas industry relies on investment for its survival, and investors expect a return commensurate with the risks of a project. The classical approach to investment evaluation relies on mathematics in which estimated project cash flows are assessed against a cost of capital and an upfront investment. The issue with this approach is that the key assumptions underpinning the cash-flow calculations, such as reserves, production and market prices, are themselves estimates, each of which introduces a degree of risk. If we analysed the financial models of recent oil and gas developments, we would find that the key assumptions underpinning the projects differed vastly from reality. The crystal ball of investment evaluation would benefit from a more powerful way to optimise estimates and assess risk. A quantum computer offers the ability to perform optimisation calculations not possible with classical computers: the theoretical ability to run infinite parallel processes (as opposed to sequential processes in classical computers) can fundamentally change the optimisation of estimates. Google and NASA were recently able to solve a highly specialised computing problem with a quantum computer 100 million times faster than a classical computer. The power to significantly improve estimation optimisation, and thereby reduce risk, will help investors achieve a higher degree of confidence and should see levels of investment increase.
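The classical evaluation the authors describe reduces to a net-present-value calculation along these lines (a minimal sketch with hypothetical numbers; discounting conventions vary by practice):

```python
def npv(rate, cashflows, investment):
    """Net present value: discount each year's estimated cash flow at
    the cost of capital and subtract the upfront investment. The risk
    the authors highlight lives in the cash-flow estimates themselves."""
    discounted = sum(cf / (1 + rate) ** t
                     for t, cf in enumerate(cashflows, start=1))
    return discounted - investment
```

For instance, a single 110 cash flow one year out exactly repays a 100 investment at a 10% cost of capital; perturbing the reserve or price assumptions behind that 110 is precisely where the estimation risk enters.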
16

Zhamanov, Azamat, Seong-Moo Yoo, Zhulduz Sakhiyeva and Meirambek Zhaparov. "Implementation and Evaluation of Flipped Classroom as IoT Element into Learning Process of Computer Network Education". International Journal of Information and Communication Technology Education 14, no. 2 (April 2018): 30–47. http://dx.doi.org/10.4018/ijicte.2018040103.

Abstract:
Students nowadays are hard to motivate with traditional teaching methods: computers, smartphones, tablets and other smart devices compete for their attention. Nevertheless, those smart devices can be used as auxiliary tools for modern teaching methods. In this article, the authors review two popular modern teaching methods, the flipped classroom and gamification. They then implement the flipped classroom as an element of the IoT (Internet of Things) in the learning process of a computer networks course, using Cisco Networking Academy tools instead of traditional instruction. A survey given to the students shows positive feedback. The authors report the impact of the flipped-classroom implementation with data obtained from two parallel sections (one a flipped classroom, the other a traditional classroom). The results show that the flipped-classroom approach outperforms the traditional classroom, with an increase of approximately 20% in the averages for attendance, lab work, quizzes, midterm exams and the final exam.
17

Ballestrero, P., P. Baglietto and C. Ruggiero. "Molecular dynamics for proteins: Performance evaluation on massively parallel computers based on mesh networks using a space decomposition approach". Journal of Computational Chemistry 17, no. 4 (March 1996): 469–75. http://dx.doi.org/10.1002/(sici)1096-987x(199603)17:4<469::aid-jcc7>3.0.co;2-s.

18

Nunome, Atsushi, Hiroaki Hirata, Haruo Niimi and Kiyoshi Shibayama. "Performance evaluation of dynamic load balancing scheme with load prediction mechanism using the load growing acceleration for massively parallel computers". Systems and Computers in Japan 35, no. 11 (2004): 69–79. http://dx.doi.org/10.1002/scj.10212.

19

Zaharov, A. I., V. A. Lokhvitskii, D. Yu Starobinets and A. D. Khomonenko. "Evaluation of the impact of parallel image processing on the operational efficiency of the Earth remote sensing spacecraft control complex". Sovremennye problemy distantsionnogo zondirovaniya Zemli iz kosmosa 16, no. 1 (2019): 61–71. http://dx.doi.org/10.21046/2070-7401-2019-16-1-61-71.

20

YAMAGIWA, SHINICHI, LEONEL SOUSA, KEVIN FERREIRA, KEIICHI AOKI, MASAAKI ONO and KOICHI WADA. "MAESTRO2: EXPERIMENTAL EVALUATION OF COMMUNICATION PERFORMANCE IMPROVEMENT TECHNIQUES IN THE LINK LAYER". Journal of Interconnection Networks 07, no. 02 (June 2006): 295–318. http://dx.doi.org/10.1142/s0219265906001715.

Abstract:
Cluster computers have become the vehicle of choice for building high-performance computing environments. To fully exploit the computing power of these environments, high-performance network technologies and protocols have to be applied, since the communication patterns of parallel applications running on clusters demand low latency and high throughput. Our previous work identified the main drawbacks of conventional network technologies and proposed techniques to address them within a new network solution called Maestro. This paper describes not only the architecture and the evaluation platform of Maestro2 but also the technologies introduced to enhance the performance of the original Maestro framework. Novel techniques are proposed at the link layer and at the switching level, and their design and implementation in Maestro2 are discussed: continuous network burst and out-of-order switching. Moreover, a specialized communication library has been developed to take full advantage of these new techniques for implementing high-speed cluster networks based on the Maestro2 environment. Experimental results clearly show that the proposed techniques provide significant improvements in both latency and throughput, which are essential for efficient cluster computing.
21

Forster, Richárd, and Fülöp Ágnes. "Jet browser model accelerated by GPUs". Acta Universitatis Sapientiae, Informatica 8, no. 2 (December 1, 2016): 171–85. http://dx.doi.org/10.1515/ausi-2016-0008.

Abstract:
In recent decades, experimental particle physics has developed thanks, among other factors, to the growing capacity of computers, which has made it possible to probe the structure of matter down to the level of quarks and gluons and the quark-gluon plasma of the strong interaction. Experimental evidence has supported the theory through measurements of predicted results. Since the field's inception, researchers have been interested in track reconstruction. We studied the jet browser model, which was developed for a 4π calorimeter. This method works on the measured data set, which contains the coordinates of interaction points in the detector space, and it allows the trajectory reconstruction of the final-state particles to be examined. We keep the total energy constant, satisfying the Gauss law. Using GPUs, the evaluation of the model can be drastically accelerated: we achieved up to a 223-fold speedup compared to a CPU-based parallel implementation.
22

Hirahara, Kazuro. "Toward Advanced Earthquake Cycle Simulation". Journal of Disaster Research 4, no. 2 (April 1, 2009): 99–105. http://dx.doi.org/10.20965/jdr.2009.p0099.

Abstract:
Recent earthquake cycle simulations based on laboratory-derived rate- and state-dependent friction laws, run on super-parallel computers, have successfully reproduced historical earthquake cycles. Earthquake cycle simulation is thus a powerful tool for providing information on the occurrence of the next Nankai megathrust earthquake, if simulation is combined with data assimilation of historical data and of the crustal activity data now observed by networks extending from the land to the ocean floor. Present earthquake cycle simulations, however, adopt simplifications that differ from the actual, complex situation. Running simulations that relax these simplifications places huge computational demands and is difficult with present supercomputers. Looking toward advanced simulation of Nankai megathrust earthquake cycles with next-generation petaflop supercomputers, we present (1) an evaluation of the effects of the actual medium in earthquake cycle simulation, (2) improvements in deformation data from GPS and InSAR and in the inversion for estimating frictional parameters, and (3) the estimation of the occurrence of large inland earthquakes in southwest Japan and of Nankai megathrust earthquakes.
23

KALYANARAMAN, ANANTHARAMAN, and SRINIVAS ALURU. "EFFICIENT ALGORITHMS AND SOFTWARE FOR DETECTION OF FULL-LENGTH LTR RETROTRANSPOSONS". Journal of Bioinformatics and Computational Biology 04, no. 02 (April 2006): 197–216. http://dx.doi.org/10.1142/s021972000600203x.

Abstract:
LTR retrotransposons constitute one of the most abundant classes of repetitive elements in eukaryotic genomes. In this paper, we present a new algorithm for the detection of full-length LTR retrotransposons in genomic sequences. The algorithm identifies regions in a genomic sequence that show the structural characteristics of LTR retrotransposons. Three key components distinguish our algorithm from current software: (i) a novel method that preprocesses the entire genomic sequence in linear time and produces high-quality pairs of LTR candidates in run time that is constant per pair, (ii) a thorough alignment-based evaluation of candidate pairs to ensure high-quality prediction, and (iii) a robust parameter set encompassing both structural constraints and quality controls, providing users with a high degree of flexibility. We implemented our algorithm in a software program called LTR_par, which can be run on both serial and parallel computers. Validation of our software against the yeast genome indicates superior results in both quality and performance compared with existing software. Additional validations are presented on rice BACs and the chimpanzee genome.
24

Zhang, Y., and H. A. Salisch. "APPLICATION OF NEURAL NETWORKS TO THE EVALUATION OF RESERVOIR QUALITY IN A LITHOLOGICALLY COMPLEX FORMATION". APPEA Journal 38, no. 1 (1998): 776. http://dx.doi.org/10.1071/aj97051.

Abstract:
Neural networks are non-algorithmic, analog, distributive and massively parallel information processing systems that share a number of performance characteristics with biological neural networks and the human brain. They can simulate, on computers, nervous systems of living animals that work differently from conventional computing, in order to analyse and solve complex practical problems. Neural networks are able to discover highly complex relationships between the variables presented to them. Studies show that neural networks can solve a great number of practical problems arising in modelling, prediction, assessment, recognition and image processing. In particular, neural networks suit problems where some results are known but the manner in which they can be achieved is not known (or is difficult to implement), or where the results themselves are not known. An important challenge for geologists, geophysicists and reservoir engineers is to accurately determine petrophysical parameters and to improve reservoir evaluation and description. It is important to be able to obtain realistic values of petrophysical parameters from well logs, because core data are often unavailable either because of borehole conditions or because of the high cost of coring. In lithologically complex formations, conventional petrophysical evaluation methods cannot be used because of the lithological heterogeneity. This paper presents an application of neural networks to estimate petrophysical parameters from well logs and to evaluate reservoir quality in the Mardie Greensand in the Carnarvon Basin in Western Australia.
25

Lu, Qiao, Silin Li, Tuo Yang and Chenheng Xu. "An adaptive hybrid XdeepFM based deep Interest network model for click-through rate prediction system". PeerJ Computer Science 7 (September 17, 2021): e716. http://dx.doi.org/10.7717/peerj-cs.716.

Abstract:
Recent advances in communication enable individuals to use phones and computers to access information on the web, and e-commerce has seen rapid development; Alibaba, for example, has nearly 1.2 billion customers in China. Click-through rate (CTR) forecasting is a primary task in e-commerce advertising systems. From the traditional logistic regression algorithm to the latest popular deep neural network methods that follow a similar embedding-and-MLP paradigm, several algorithms are used to predict CTR. This research proposes a hybrid model combining the Deep Interest Network (DIN) and the eXtreme Deep Factorization Machine (xDeepFM) to perform CTR prediction robustly. The cores of DIN and xDeepFM are attention and feature crossing, respectively. DIN uses an adaptive local activation unit that incorporates the attention mechanism to learn user interest adaptively from historical behaviors related to specific advertisements. xDeepFM further includes a critical component, a Compressed Interaction Network (CIN), which implicitly generates feature interactions at the vector-wise level; a CIN, a plain DNN and a linear part are combined into one unified model to form xDeepFM. The proposed end-to-end hybrid model is a parallel ensemble of models via a multilayer perceptron: DIN and xDeepFM are trained in parallel, and their output is fed into the multilayer perceptron. We used the e-commerce Alibaba dataset, with focal loss as the loss function, for experimental evaluation, applying online hard example mining (OHEM) in the training process. The experimental results indicate that the proposed hybrid model outperforms the other models.
26

Ivetić, Damjan, Željko Vasilić, Miloš Stanić and Dušan Prodanović. "Speeding up the water distribution network design optimization using the ΔQ method". Journal of Hydroinformatics 18, no. 1 (January 21, 2015): 33–48. http://dx.doi.org/10.2166/hydro.2015.118.

Abstract:
To optimize the design of a water distribution network (WDN), a large number of possible solutions need to be examined; hence computational efficiency is an important issue. To accelerate the computation, one can use more powerful computers, parallel computing systems with adapted hydraulic solvers, hybrid algorithms, more efficient hydraulic methods, or any combination of these techniques. This paper explores the possibility of speeding up optimization by using variations of the ΔQ method to solve the network hydraulics. First, the ΔQ method was used inside the evaluation function, where each tested alternative was hydraulically solved and ranked. Then, the convergence criterion was relaxed in order to reduce the computation time; although the accuracy of the hydraulic results was reduced, the solutions obtained were still feasible and interesting. Another modification was also tested, in which the ΔQ method was used just once, to solve the hydraulics of the initial network, and the unknown flow corrections were added to the list of unknown variables subject to optimization. Two case networks were used for testing, and the results were compared with those obtained using EPANET2. The results show that using the ΔQ method in hydraulic computations can significantly accelerate the optimization of WDNs.
27

Min, Li, Ya Nan Zhang and Zhen Bang Gong. "Development of a Robotic System to Examine Outer Cylinder of Pipes in Steam Generators". Advanced Materials Research 228-229 (April 2011): 38–43. http://dx.doi.org/10.4028/www.scientific.net/amr.228-229.38.

Abstract:
The examination of the outer cylinder of pipes in steam generators is difficult because of the confined space, and a robotic system has been developed to do the job. The robot consists of a base, four rotary balls, a slide guide, two steppers, a DC motor, a positioning mechanism, a synchromesh belt mechanism, a longitudinal mobile platform, an extending and retracting mechanism, a rotary platform and a micro vehicle. The positioning mechanism can locate and fix the robot efficiently. The motion of the robot consists of longitudinal and transverse motion. The extending and retracting mechanism can push and pull the micro vehicle, which carries a micro CMOS camera. The pose of the camera, mounted on the movable platform of a micro 3-CSR parallel mechanism, can be adjusted by three bias-two-way SMA actuators. The control system consists of two levels of computers, and the user interface was developed in VB. A fuzzy control method is used to control the motion of the longitudinal mobile platform, and a grey evaluation method is applied to evaluate the status of inspected areas from the acquired images. Experimental results indicate that the robot meets the examination requirements.
28

Kvale, Karin F., Samar Khatiwala, Heiner Dietze, Iris Kriest and Andreas Oschlies. "Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers". Geoscientific Model Development 10, no. 6 (June 29, 2017): 2425–45. http://dx.doi.org/10.5194/gmd-10-2425-2017.

Annotation:
Abstract. Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix–vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. 
Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
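The core of the TMM described in the annotation — tracer transport as a sequence of sparse matrix–vector products, with biogeochemical sources and sinks applied between transport steps — can be sketched as follows. This is a minimal illustration, not the paper's code: the grid size, the stand-in random sparse matrix, and the placeholder source term are all hypothetical, whereas a real application uses matrices extracted from the circulation model.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random

n = 1000  # hypothetical number of ocean grid boxes

# Stand-in for a pre-computed transport matrix: identity plus a small sparse
# perturbation. (A real TMM matrix is extracted from the circulation model.)
A = identity(n, format="csr") + 0.001 * sparse_random(
    n, n, density=0.001, format="csr", random_state=42
)

def bgc_sources(c):
    # Placeholder biogeochemical source/sink term (e.g. uptake, remineralization).
    return -0.001 * c

c = np.ones(n)   # initial tracer concentration
dt = 1.0         # offline time step
for _ in range(10):
    # Offline step: transport via a sparse matrix-vector product, then biogeochemistry.
    c = A @ c + dt * bgc_sources(c)
```

The sparse matrix–vector product is the part that parallelizes naturally on distributed-memory machines, which is what gives the offline model its speed-up over the serial online code.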
APA, Harvard, Vancouver, ISO and other citation styles
29

Wang, Shan Shan, and Stephen Mahin. „High-Performance Computer-Aided Optimization of Viscous Dampers for Improving the Seismic Performance of a Tall Steel Building“. Key Engineering Materials 763 (February 2018): 502–9. http://dx.doi.org/10.4028/www.scientific.net/kem.763.502.

The full text of the source
Annotation:
Using fluid viscous dampers (FVDs) has been demonstrated to be an effective method to improve seismic performance of new and existing buildings. In engineering applications, designs of these dampers mainly rely on trial and error, which can be repetitive and labor intensive. To improve this tedious manual process, it is beneficial to explore more formal and automated approaches that rely on recent advances in software applications for nonlinear dynamic analysis, performance-based evaluation, and workflow management, and on the computational power of high-performance, parallel processing computers. The optimization design procedure follows the framework of Performance-Based Earthquake Engineering (PBEE) and uses an automatic tool that incorporates an optimization engine and structural analysis software: the Open System for Earthquake Engineering Simulation (OpenSEES). An existing 35-story steel moment frame is selected as a case-study building for verification of this procedure. The goal of the retrofit design with FVDs is to improve the building’s seismic behavior, with a focus on avoiding collapse under a basic-safety, level-2 earthquake (BSE-2E). The objective of the optimization procedure is to reduce the building’s total loss under a BSE-2E event, and optimal damper patterns are proposed. The efficiency of the optimization procedure is demonstrated and compared with a manual refinement procedure.
APA, Harvard, Vancouver, ISO and other citation styles
30

Zhao, Qingyun, Fuqing Zhang, Teddy Holt, Craig H. Bishop and Qin Xu. „Development of a Mesoscale Ensemble Data Assimilation System at the Naval Research Laboratory“. Weather and Forecasting 28, No. 6 (1 December 2013): 1322–36. http://dx.doi.org/10.1175/waf-d-13-00015.1.

The full text of the source
Annotation:
An ensemble Kalman filter (EnKF) has been adopted and implemented at the Naval Research Laboratory (NRL) for mesoscale and storm-scale data assimilation to study the impact of ensemble assimilation of high-resolution observations, including those from Doppler radars, on storm prediction. The system has been improved during its implementation at NRL to further enhance its capability of assimilating various types of meteorological data. A parallel algorithm was also developed to increase the system’s computational efficiency on multiprocessor computers. The EnKF has been integrated into the NRL mesoscale data assimilation system and extensively tested to ensure that the system works appropriately with new observational data streams and forecast systems. An innovative procedure was developed to evaluate the impact of assimilated observations on ensemble analyses without the need to exclude any observations for independent validation (as required by conventional evaluation based on data-denying experiments). The procedure was employed in this study to examine the impacts of ensemble size and localization on data assimilation, and the results reveal a very interesting relationship between the ensemble size and the localization length scale. All the tests conducted in this study demonstrate the capabilities of the EnKF as a research tool for mesoscale and storm-scale data assimilation with potential operational applications.
APA, Harvard, Vancouver, ISO and other citation styles
31

van Peursen, Wido, and Eep Talstra. „Computer-Assisted Analysis of Parallel Texts in the Bible. The Case of 2 Kings xviii-xix and Its Parallels in Isaiah and Chronicles“. Vetus Testamentum 57, No. 1 (2007): 45–72. http://dx.doi.org/10.1163/15685337x167855.

The full text of the source
Annotation:
In literary-critical and text-historical studies of the Bible, the comparison of parallel texts plays an important role. Starting from the description of the proximity of parallel texts as a continuum from very close to very loose, this article discusses the ways in which the computer can facilitate a comparison of various types of parallel texts. 2 Kings 18-19 and Isaiah 37-38 are taken as an example of two closely related texts. The Kings chapters and their parallels in 2 Chronicles 32 occupy a position at the other side of the continuum. These chapters differ so much that it is sometimes impossible to establish which verses should be considered parallel. The computer-assisted analysis brings to light some striking correspondences that disappear in traditional synopses, such as Ben David's Parallels in the Bible. These observations have an impact on our evaluation of the Chronicler's use of his sources and his literary taste.
APA, Harvard, Vancouver, ISO and other citation styles
32

Kodama, Yuetsu, Shuichi Sakai and Yoshinori Yamaguchi. „Evaluation of parallel execution performance by highly parallel computer EM-4“. Systems and Computers in Japan 24, No. 9 (1993): 32–41. http://dx.doi.org/10.1002/scj.4690240904.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
33

Shkatuliak, Natalia, and Iryna Zadorozhna. „Students’ knowledge test control in Physics“. Scientific bulletin of South Ukrainian National Pedagogical University named after K. D. Ushynsky 2019, No. 4 (129) (26 December 2019): 143–49. http://dx.doi.org/10.24195/2617-6688-2019-4-18.

The full text of the source
Annotation:
Monitoring learners’ educational achievements performs one of the most important functions of learning today. Scientists and methodologists argue that test control of students’ academic achievement meets the requirements of quantitative and objective measurement of learners’ knowledge, skills and abilities. The relevance of the test-based method of controlling learners’ academic achievements is also dictated by the introduction of independent external evaluation (IEE) as a final certification of school leavers. With the emergence of computer classes, the use of tests became available and appropriate. The use of computers in the testing process greatly enhances the benefits of this type of control. This paper is aimed at studying experimentally the impact of controlling testing on enhancing learners’ educational achievements in Physics and on managing learners’ cognitive activity. We prepared a system of test questions on the following topics: “We begin to study Physics”, “Mechanical work. Units of work”, “Power and its units”, “Electric current. Conductor Resistance”, “Serial and Parallel Conductor Connection”, “Ideal Gas Laws”, “Atomic Physics. Spectrums”. The tests were developed to identify the students’ acquisition of the main issues constituting the educational material, as opposed to merely eliciting some amount of information from the learner. In our opinion, it is important for students to master a level of education that becomes the basis for further self-improvement of their own education and of their capacity to overcome the challenges that life poses to them in today’s society. Therefore, the main task while developing the test tasks was the formation of certain subject and life-oriented competences. We conducted a pedagogical experiment to introduce test control of learners’ knowledge in Physics.
A series of lessons was conducted using the test survey in the experimental class, while no tests were used in the control class. Using the example of testing 8th-grade schoolchildren in Physics, it was found that systematic test control of knowledge acquisition in the corresponding Physics lessons allowed the students to demonstrate better knowledge of certain topics at sufficient and high levels. Keywords: testing control, learners’ self-educational competence in Physics, creative thinking.
APA, Harvard, Vancouver, ISO and other citation styles
34

Stpiczyński, Przemysław. „Evaluating recursive filters on distributed memory parallel computers“. Communications in Numerical Methods in Engineering 22, No. 11 (6 April 2006): 1087–95. http://dx.doi.org/10.1002/cnm.867.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
35

Consel, Charles, and Olivier Danvy. „Partial evaluation in parallel“. Lisp and Symbolic Computation 5, No. 4 (December 1993): 327–42. http://dx.doi.org/10.1007/bf01806309.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
36

Ziavras, Sotirios G., Haim Grebel, Anthony T. Chronopoulos and Florent Marcelli. „A new-generation parallel computer and its performance evaluation“. Future Generation Computer Systems 17, No. 3 (November 2000): 315–33. http://dx.doi.org/10.1016/s0167-739x(00)00082-0.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
37

Huang, Kuo-Chan. „Minimizing Waiting Ratio for Dynamic Workload on Parallel Computers“. Parallel Processing Letters 16, No. 04 (December 2006): 441–53. http://dx.doi.org/10.1142/s0129626406002769.

The full text of the source
Annotation:
This paper proposes the waiting ratio as a basis for evaluating various scheduling methods for dynamic workloads consisting of multi-processor jobs on parallel computers. We evaluate commonly used methods, as well as several methods proposed in this paper, through simulation studies. The results indicate that some commonly used methods do not improve the waiting ratios as intuition would suggest, while some methods proposed in this paper improve waiting ratios by more than a factor of 10 for some workload data, promising more reasonable waiting times and better user satisfaction.
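The annotation does not spell out how the waiting ratio is defined; a common reading is a job's waiting time divided by its run time. The sketch below computes that quantity under a plain first-come-first-served, one-job-at-a-time schedule — a deliberately simplified stand-in, since the paper deals with multi-processor jobs and more elaborate scheduling policies; the `Job` class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float   # submission time
    runtime: float   # execution time once started

def fcfs_waiting_ratios(jobs):
    """Waiting ratio = waiting time / run time, under first-come-first-served."""
    t = 0.0
    ratios = []
    for job in sorted(jobs, key=lambda j: j.arrival):
        start = max(t, job.arrival)            # job waits until the machine is free
        ratios.append((start - job.arrival) / job.runtime)
        t = start + job.runtime
    return ratios

# Short jobs queued behind a long one accumulate large waiting ratios,
# which is the kind of unfairness such a metric is meant to expose.
ratios = fcfs_waiting_ratios([Job(0, 4), Job(1, 2), Job(2, 1)])  # [0.0, 1.5, 4.0]
```

Normalizing waiting time by run time, rather than reporting raw waiting time, penalizes schedules that make short jobs wait behind long ones.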
APA, Harvard, Vancouver, ISO and other citation styles
38

Orii, Shigeo. „Metrics for evaluation of parallel efficiency toward highly parallel processing“. Parallel Computing 36, No. 1 (January 2010): 16–25. http://dx.doi.org/10.1016/j.parco.2009.11.003.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

RACCA, R. G., Z. MENG, J. M. OZARD and M. J. WILMUT. „EVALUATION OF MASSIVELY PARALLEL COMPUTING FOR EXHAUSTIVE AND CLUSTERED MATCHED-FIELD PROCESSING“. Journal of Computational Acoustics 04, No. 02 (June 1996): 159–73. http://dx.doi.org/10.1142/s0218396x96000039.

The full text of the source
Annotation:
Many computer algorithms contain an operation that accounts for a substantial portion of the total execution cost in a frequently executed loop. The use of a parallel computer to execute that operation may represent an alternative to a sheer increase in processor speed. The signal processing technique known as matched-field processing (MFP) involves performing identical and independent operations on a potentially huge set of vectors. To investigate a massively parallel approach to MFP and clustered nearest neighbors MFP, algorithms were implemented on a DECmpp 12000 massively parallel computer (from Digital Equipment and MasPar Corporation) with 8192 processors. The execution time for the MFP technique on the MasPar machine was compared with that of MFP on a serial VAX9000–210 equipped with a vector processor. The results showed that the MasPar achieved a speedup factor of at least 17 relative to the VAX9000. The speedup was 3.5 times higher than the ratio of the peak ratings of 600 MFLOPS for the MasPar versus 125 MFLOPS for the VAX9000 with vector processor. The execution speed on the parallel machine represented 64% of its peak rating. This is much better than what is commonly assumed for a parallel machine and was obtained with modest programming effort. An initial implementation of a massively parallel approach to clustered MFP on the MasPar showed a further order of magnitude increase in speed, for an overall speedup factor of 35.
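The speedup arithmetic in the abstract can be checked directly; every number below is taken from the abstract itself, and only the variable names are ours.

```python
measured_speedup = 17.0     # MasPar vs. VAX9000 execution-time ratio ("at least 17")
peak_maspar = 600.0         # MFLOPS, DECmpp 12000 peak rating
peak_vax = 125.0            # MFLOPS, VAX9000-210 with vector processor

peak_ratio = peak_maspar / peak_vax          # 4.8: ratio of the peak ratings
relative = measured_speedup / peak_ratio     # ~3.5: speedup exceeds the peak ratio

sustained_fraction = 0.64                    # reported fraction of MasPar peak achieved
sustained_mflops = sustained_fraction * peak_maspar  # ~384 MFLOPS sustained
```

In other words, the measured speedup of 17 is about 3.5 times larger than the 4.8x gap in peak ratings would predict, consistent with the abstract's point that the parallel machine ran the MFP workload at an unusually high fraction of its peak.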
APA, Harvard, Vancouver, ISO and other citation styles
40

Stpiczyński, Przemysław. „Fast Parallel Algorithm for Polynomial Evaluation“. Parallel Algorithms and Applications 18, No. 4 (December 2003): 209–16. http://dx.doi.org/10.1080/10637190310001633673.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
41

KIPER, AYSE. „PARALLEL POLYNOMIAL EVALUATION BY DECOUPLING ALGORITHM“. Parallel Algorithms and Applications 9, No. 1-2 (January 1996): 145–52. http://dx.doi.org/10.1080/10637199608915570.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
42

Tsai, Jeffrey J. P., Bing Li and Eric Y. T. Juan. „Parallel evaluation of software architecture specifications“. Communications of the ACM 40, No. 1 (January 1997): 83–86. http://dx.doi.org/10.1145/242857.242881.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
43

Luque, Emilio, Remo Suppi and Joan Sorribes. „Simulation of parallel systems: PSEE (Parallel System Evaluation Environment)“. Future Generation Computer Systems 10, No. 2-3 (June 1994): 291–94. http://dx.doi.org/10.1016/0167-739x(94)90031-0.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
44

Uehara, Kiyohiko, and Kaoru Hirota. „A Fast Method for Fuzzy Rules Learning with Derivative-Free Optimization by Formulating Independent Evaluations of Each Fuzzy Rule“. Journal of Advanced Computational Intelligence and Intelligent Informatics 25, No. 2 (20 March 2021): 213–25. http://dx.doi.org/10.20965/jaciii.2021.p0213.

The full text of the source
Annotation:
A method is proposed for evaluating fuzzy rules independently of each other in fuzzy rules learning. The proposed method is named α-FUZZI-ES (α-weight-based fuzzy-rule independent evaluations) in this paper. In α-FUZZI-ES, the evaluation value of a fuzzy system is divided out among the fuzzy rules by using the compatibility degrees of the learning data. By the effective use of α-FUZZI-ES, a method for fast fuzzy rules learning is proposed. This is named α-FUZZI-ES learning (α-FUZZI-ES-based fuzzy rules learning) in this paper. α-FUZZI-ES learning is especially effective when evaluation functions are not differentiable and derivative-based optimization methods cannot be applied to fuzzy rules learning. α-FUZZI-ES learning makes it possible to optimize fuzzy rules independently of each other. This property reduces the dimensionality of the search space in finding the optimum fuzzy rules. Thereby, α-FUZZI-ES learning can attain fast convergence in fuzzy rules optimization. Moreover, α-FUZZI-ES learning can be efficiently performed with hardware in parallel to optimize fuzzy rules independently of each other. Numerical results show that α-FUZZI-ES learning is superior to the exemplary conventional scheme in terms of accuracy and convergence speed when the evaluation function is non-differentiable.
APA, Harvard, Vancouver, ISO and other citation styles
45

Cruz, Henry, Martina Eckert, Juan M. Meneses and J. F. Martínez. „Fast Evaluation of Segmentation Quality with Parallel Computing“. Scientific Programming 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/5767521.

The full text of the source
Annotation:
In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, so any kind of tool that simplifies this work is welcome. To achieve an efficient and more practical handling of a normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®’s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve simultaneous execution and analysis of segmentation algorithms on the one hand, and the evaluation of detection accuracy for nonforested regions, as a study case, on the other hand. As a positive side effect, CPU usage was reduced and processing time was significantly decreased, by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
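The reported 68.54% reduction in processing time can be converted into a conventional speedup factor; the derivation below is ours, not a number stated in the paper.

```python
# A t% reduction in wall-clock time corresponds to a speedup of 1 / (1 - t/100):
reduction = 0.6854                 # reported decrease in processing time vs. sequential
speedup = 1.0 / (1.0 - reduction)  # ~3.18x over one-by-one sequential execution
```

A roughly 3.2x speedup from toolbox-level parallelization is plausible on a typical multi-core workstation, where overheads keep the gain below the core count.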
APA, Harvard, Vancouver, ISO and other citation styles
46

Waite, Martin, Bret Giddings and Simon Lavington. „Parallel associative combinator evaluation II“. Future Generation Computer Systems 8, No. 4 (September 1992): 303–19. http://dx.doi.org/10.1016/0167-739x(92)90065-j.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Baker-Finch, Clem, David J. King and Phil Trinder. „An operational semantics for parallel lazy evaluation“. ACM SIGPLAN Notices 35, No. 9 (September 2000): 162–73. http://dx.doi.org/10.1145/357766.351256.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

Frenkel, Karen A. „Evaluating two massively parallel machines“. Communications of the ACM 29, No. 8 (August 1986): 752–58. http://dx.doi.org/10.1145/6424.6427.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
49

Buker, U., and B. Mertsching. „Parallel Evaluation of Hierarchical Image Databases“. Journal of Parallel and Distributed Computing 31, No. 2 (December 1995): 141–52. http://dx.doi.org/10.1006/jpdc.1995.1152.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Muller, D. E., and F. P. Preparata. „Parallel restructuring and evaluation of expressions“. Journal of Computer and System Sciences 44, No. 1 (February 1992): 43–62. http://dx.doi.org/10.1016/0022-0000(92)90003-2.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles

To the bibliography