To see the other types of publications on this topic, follow the link: Parallel computers Evaluation.

Journal articles on the topic 'Parallel computers Evaluation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Parallel computers Evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Choudhary, A., Wei-Keng Liao, D. Weiner, P. Varshney, R. Linderman, M. Linderman, and R. Brown. "Design, implementation and evaluation of parallel pipelined STAP on parallel computers." IEEE Transactions on Aerospace and Electronic Systems 36, no. 2 (April 2000): 528–48. http://dx.doi.org/10.1109/7.845238.

2

Moorthi, M. Narayana, and R. Manjula. "Performance Evaluation and Analysis of Parallel Computers Workload." International Journal of Grid and Distributed Computing 9, no. 1 (January 31, 2016): 127–34. http://dx.doi.org/10.14257/ijgdc.2016.9.1.13.

3

TOUYAMA, TAKAYOSHI, and SUSUMU HORIGUCHI. "PERFORMANCE EVALUATION OF PRACTICAL PARALLEL COMPUTER MODEL LogPQ." International Journal of Foundations of Computer Science 12, no. 03 (June 2001): 325–40. http://dx.doi.org/10.1142/s0129054101000515.

Abstract:
Today's supercomputers are being replaced by massively parallel computers consisting of large numbers of processing elements, in order to satisfy the continuously increasing demand for computing power. A practical model of parallel computation is needed for developing efficient parallel algorithms on such machines. We have therefore proposed the practical parallel computation model LogPQ, which takes communication queues into account in the LogP model. This paper addresses the performance of a parallel matrix multiplication algorithm using the LogPQ and LogP models. The algorithm is implemented on a Cray T3E, and its parallel performance is compared with that on the older CM-5. The comparison shows that the communication network of the T3E has better buffering behavior than that of the CM-5, so that no extra buffering needs to be provided on the T3E, although a small effect remains for both send and receive buffering. On the other hand, an effect of message size remains, which shows that the overhead and gap must be modeled as proportional to the message size.
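For orientation, LogP (the baseline that LogPQ extends) describes a machine by four parameters: latency L, per-message processor overhead o, gap g (the reciprocal of per-processor bandwidth) and processor count P. A minimal sketch of the standard LogP cost accounting, which LogPQ augments with queueing terms (the queueing terms themselves are the paper's contribution and are not reproduced here):

    \[ T_{\text{one message}} = o_{\text{send}} + L + o_{\text{recv}}, \qquad T_{n\ \text{messages}} = (n-1)\max(g, o) + L + 2o. \]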
4

KRUSCHE, PETER. "EXPERIMENTAL EVALUATION OF BSP PROGRAMMING LIBRARIES." Parallel Processing Letters 18, no. 01 (March 2008): 7–21. http://dx.doi.org/10.1142/s0129626408003193.

Abstract:
The model of bulk-synchronous parallel computation (BSP) helps to implement portable general-purpose algorithms while maintaining predictable performance on different parallel computers. Nevertheless, when programming in 'BSP style', the running time of the implementation of an algorithm can depend heavily on the underlying communication library. In this study, an overview of existing approaches to practical BSP programming in C/C++ or Fortran is given, and benchmarks are run for the two main BSP-like communication libraries, the Oxford BSP Toolset and PUB. Furthermore, a memory-efficient matrix multiplication algorithm is implemented and used to compare their performance on different parallel computers and to evaluate compliance with the predictions of theoretical results.
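For background, BSP's predictable performance comes from its per-superstep cost formula; in standard notation, with machine parameters g (communication throughput) and l (barrier synchronisation cost), a program of S supersteps costs

    \[ T = \sum_{s=1}^{S} \left( w_s + h_s\,g + l \right), \]

where \( w_s \) is the maximum local work and \( h_s \) the maximum number of words sent or received by any processor in superstep s. Benchmarks such as those in the paper effectively measure g and l for each library and machine.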
5

Onbasioglu, E., and Y. Paker. "A comparative workload-based methodology for performance evaluation of parallel computers." Future Generation Computer Systems 12, no. 6 (June 1997): 521–45. http://dx.doi.org/10.1016/s0167-739x(97)83070-1.

6

Furht, B. "A contribution to classification and evaluation of structures for parallel computers." Microprocessing and Microprogramming 25, no. 1-5 (January 1989): 203–8. http://dx.doi.org/10.1016/0165-6074(89)90196-8.

7

BLÖTE, H. W. J. "Statistical Mechanics and Special-Purpose Computers." International Journal of Modern Physics C 02, no. 01 (March 1991): 14–20. http://dx.doi.org/10.1142/s0129183191000032.

Abstract:
A number of special-purpose computers (SPCs) have been built in the last two decades, and more are under construction. In parallel with the evolution of general-purpose computers, the capacity of the fastest SPCs has grown considerably in this period. The increase in speed is partly due to the availability of faster components, but even more important is the introduction of new architectures using pipelining and parallel processing. Apart from becoming faster on average, SPCs have undergone a pronounced diversification, which affects not only their speed but also their versatility and, of course, their cost. An evaluation of SPC performance and costs in comparison with general-purpose supercomputers shows that, under certain circumstances, SPCs can play a very useful role: they enable calculations that would otherwise not be feasible because of excessive costs. However, the effort needed to build even a relatively simple SPC can easily be underestimated.
8

Sueyoshi, Toshinori, Keizo Saisho, and Itsujiro Arita. "Performance evaluation of the binary tree access mechanism in mimd type parallel computers." Systems and Computers in Japan 17, no. 9 (1986): 47–57. http://dx.doi.org/10.1002/scj.4690170906.

9

Nguyen Thu, Thuy. "Parallel iteration of two-step Runge-Kutta methods." Journal of Science Natural Science 66, no. 1 (March 2021): 12–24. http://dx.doi.org/10.18173/2354-1059.2021-0002.

Abstract:
In this paper, we introduce the parallel iteration of two-step Runge-Kutta methods for solving non-stiff initial-value problems for systems of first-order ordinary differential equations (ODEs), y′(t) = f(t, y(t)), for use on parallel computers. Starting with an s-stage implicit two-step Runge-Kutta (TSRK) method of order p, we apply the highly parallel predictor-corrector iteration process in P(EC)^m E mode. In this way, we obtain an explicit two-step Runge-Kutta method that has order p for all m, and that requires s(m+1) right-hand-side evaluations per step, of which each group of s evaluations can be computed in parallel. Through a number of numerical experiments, we show the superiority of the parallel predictor-corrector methods proposed in this paper over both sequential and parallel methods available in the literature.
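Schematically, and suppressing the TSRK coefficients (a generic sketch, not the paper's exact formulas), the P(EC)^m E scheme replaces the implicit stage equations by m fixed-point corrections of a predicted stage vector:

    \[ Y^{(0)} = \text{predictor}, \qquad Y^{(j)} = \Psi\big(f(Y^{(j-1)})\big), \quad j = 1,\dots,m, \]

followed by a final evaluation (E) of \( f(Y^{(m)}) \). Each correction needs the s stage derivatives \( f(Y^{(j-1)}_1),\dots,f(Y^{(j-1)}_s) \), which are mutually independent, so on s processors every sweep costs one f-evaluation of wall-clock time: s(m+1) evaluations per step in total, but only m+1 sequential ones.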
10

Gupta, Anshul, Fred G. Gustavson, Mahesh Joshi, and Sivan Toledo. "The design, implementation, and evaluation of a symmetric banded linear solver for distributed-memory parallel computers." ACM Transactions on Mathematical Software 24, no. 1 (March 1998): 74–101. http://dx.doi.org/10.1145/285861.285865.

11

Fu, Zheng-Qing, John Chrzas, George M. Sheldrick, John Rose, and Bi-Cheng Wang. "A parallel program using SHELXD for quick heavy-atom partial structural solution on high-performance computers." Journal of Applied Crystallography 40, no. 2 (March 12, 2007): 387–90. http://dx.doi.org/10.1107/s0021889807003998.

Abstract:
A parallel algorithm has been designed for SHELXD to solve the heavy-atom partial structures of protein crystals quickly. Based on this algorithm, a program has been developed to run on high-performance multiple-CPU Linux PCs, workstations or clusters. Tests on the 32-CPU Linux cluster at SER-CAT, APS, Argonne National Laboratory, show that the parallelization speeds up the process dramatically, by a factor of roughly the number of CPUs applied, leading to reliable and nearly instant heavy-atom site solutions. This provides the practical opportunity to employ heavy-atom search as an alternative tool for evaluating anomalous scattering data quality during single/multiple-wavelength anomalous diffraction (SAD/MAD) data collection at synchrotron beamlines.
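A speedup of roughly the number of CPUs is characteristic of an embarrassingly parallel search: many independent random trials, keep the best. A minimal sketch of that pattern in Python (the trial function and its scoring are hypothetical placeholders, not the actual program's interface):

    import random
    from multiprocessing import Pool

    def run_trial(seed):
        # Placeholder for one independent dual-space recycling trial;
        # a real trial would invoke the crystallographic search and
        # return its quality score and the heavy-atom sites found.
        rng = random.Random(seed)
        sites = [rng.random() for _ in range(10)]
        score = sum(sites)  # stand-in for the real quality metric
        return score, sites

    if __name__ == "__main__":
        with Pool() as pool:                      # one worker per CPU
            results = pool.map(run_trial, range(1000))
        best_score, best_sites = max(results)     # keep the best-scoring trial
        print(best_score)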
12

Puzyrev, Vladimir, Seid Koric, and Scott Wilkin. "Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems." Computers & Geosciences 89 (April 2016): 79–87. http://dx.doi.org/10.1016/j.cageo.2016.01.009.

13

Aslot, Vishal, and Rudolf Eigenmann. "Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks." Scientific Programming 11, no. 2 (2003): 105–24. http://dx.doi.org/10.1155/2003/401032.

Abstract:
The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.
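A standard lens for "less than ideal speedup" (general background, not the paper's quantitative model) is Amdahl's law: if a fraction f of the runtime is parallelizable, p processors give at most

    \[ S(p) = \frac{1}{(1-f) + f/p}, \]

so even f = 0.95 caps the speedup at 20 regardless of processor count; per-loop measurements like those in the paper locate where the non-parallel fraction lives.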
14

Wiebe, James H. "Teaching Mathematics with Technology: Order of Operations." Arithmetic Teacher 37, no. 3 (November 1989): 36–38. http://dx.doi.org/10.5951/at.37.3.0036.

Abstract:
The NCTM's recently released Curriculum and Evaluation Standards for School Mathematics (1989) recommends that calculators and computers be freely available to elementary school students for solving mathematical problems, exploring patterns and concepts, and investigating realistic applications. Most students, however, need some help in learning to use these tools, especially if they are using them with problems involving more than one operation, as many realistic applications do. They need to know that different calculators or computer software tools use different internal algorithms for finding answers and, thus, may give different answers to the same problems. They need to be taught how to enter multistep problems and evaluate the displayed result, that is, to do parallel mental computations. This article focuses on teaching elementary school students the order in which calculators and computer languages evaluate mathematical expressions.
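The difference at issue can be made concrete. An "algebraic" calculator (and most programming languages) applies precedence rules, while a simple four-function calculator applies each operation to the running total as it is keyed in. A small illustration in Python (the sequential evaluator is a hypothetical model of such a calculator):

    # Algebraic evaluation: multiplication binds tighter than addition.
    print(2 + 3 * 4)  # 14

    # Immediate-execution model: apply each operator as it is keyed in.
    def keyed_in(tokens):
        total = tokens[0]
        for op, value in zip(tokens[1::2], tokens[2::2]):
            total = total + value if op == "+" else total * value
        return total

    print(keyed_in([2, "+", 3, "*", 4]))  # (2 + 3) * 4 = 20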
15

Laybourn, Mark, and John Pascoe. "What happens when quantum computing re-defines the assessment of investment risk?" APPEA Journal 57, no. 2 (2017): 486. http://dx.doi.org/10.1071/aj16140.

Abstract:
The dawn of quantum computing is upon us, and as the world's smartest minds determine how the technology will change our daily lives, we consider how it could help investors in oil and gas projects to make better decisions. The oil and gas industry relies on investment for its survival, and investors expect a return commensurate with the risks of a project. The classical approach to investment evaluation relies on mathematics in which estimated project cash flows are assessed against a cost of capital and an upfront investment. The issue with this approach is that the key assumptions which underpin the project cash flow calculations, such as reserves, production and market prices, are themselves estimates, each of which introduces a degree of risk. If we analysed the financial models of recent oil and gas developments, we would find that the key assumptions which underpinned the projects were vastly different from reality. The crystal ball of investment evaluation would benefit from a more powerful way to optimise estimates and assess risk. A quantum computer offers the ability to perform optimisation calculations not possible with classical computers. The theoretical ability to run infinite parallel processes (as opposed to the sequential processes of classical computers) can fundamentally change the optimisation of estimates. Google and NASA were recently able to solve a highly specialised computing problem with a quantum computer 100 million times faster than with a classical computer. The power to significantly improve estimation optimisations, and thereby reduce risk, will help investors achieve a higher degree of confidence and should see levels of investment increase.
16

Zhamanov, Azamat, Seong-Moo Yoo, Zhulduz Sakhiyeva, and Meirambek Zhaparov. "Implementation and Evaluation of Flipped Classroom as IoT Element into Learning Process of Computer Network Education." International Journal of Information and Communication Technology Education 14, no. 2 (April 2018): 30–47. http://dx.doi.org/10.4018/ijicte.2018040103.

Abstract:
Students nowadays are hard to motivate to study with traditional teaching methods: computers, smartphones, tablets and other smart devices distract their attention. Nevertheless, those smart devices can be used as auxiliary tools for modern teaching methods. In this article, the authors review two popular modern teaching methods: the flipped classroom and gamification. Next, they implement the flipped classroom as an element of IoT (Internet of Things) in the learning process of a computer networks course, using Cisco Networking Academy tools instead of traditional instruction. A survey given to the students shows good feedback. The authors report the impact of the flipped classroom implementation with data obtained from two parallel sections (one a flipped classroom and the other a traditional classroom). The results show that the flipped classroom approach outperforms the traditional classroom approach, with an increase of approximately 20% in the averages of attendance, lab work, quizzes, midterm exams and the final exam.
17

Ballestrero, P., P. Baglietto, and C. Ruggiero. "Molecular dynamics for proteins: Performance evaluation on massively parallel computers based on mesh networks using a space decomposition approach." Journal of Computational Chemistry 17, no. 4 (March 1996): 469–75. http://dx.doi.org/10.1002/(sici)1096-987x(199603)17:4<469::aid-jcc7>3.0.co;2-s.

18

Nunome, Atsushi, Hiroaki Hirata, Haruo Niimi, and Kiyoshi Shibayama. "Performance evaluation of dynamic load balancing scheme with load prediction mechanism using the load growing acceleration for massively parallel computers." Systems and Computers in Japan 35, no. 11 (2004): 69–79. http://dx.doi.org/10.1002/scj.10212.

19

Zaharov, A. I., V. A. Lokhvitskii, D. Yu Starobinets, and A. D. Khomonenko. "Evaluation of the impact of parallel image processing on the operational efficiency of the Earth remote sensing spacecraft control complex." Sovremennye problemy distantsionnogo zondirovaniya Zemli iz kosmosa 16, no. 1 (2019): 61–71. http://dx.doi.org/10.21046/2070-7401-2019-16-1-61-71.

20

YAMAGIWA, SHINICHI, LEONEL SOUSA, KEVIN FERREIRA, KEIICHI AOKI, MASAAKI ONO, and KOICHI WADA. "MAESTRO2: EXPERIMENTAL EVALUATION OF COMMUNICATION PERFORMANCE IMPROVEMENT TECHNIQUES IN THE LINK LAYER." Journal of Interconnection Networks 07, no. 02 (June 2006): 295–318. http://dx.doi.org/10.1142/s0219265906001715.

Abstract:
Cluster computers have become the vehicle of choice for building high-performance computing environments. To fully exploit the computing power of these environments, technologies for high-performance networks and protocols have to be applied, since the communication patterns of parallel applications running on clusters demand low latency and high throughput. Our previous work identified the main drawbacks of conventional network technologies and proposed techniques to address them within a new network solution called Maestro. This paper describes not only the architecture and the evaluation platform of Maestro2, but also the technologies introduced to enhance the performance of the original Maestro framework. Novel techniques are proposed at the link layer and at the switching level, and their design and implementation in Maestro2 are discussed: continuous network burst and out-of-order switching. Moreover, a specialized communication library has been developed to take full advantage of these new techniques for implementing high-speed cluster networks based on the Maestro2 environment. Experimental results clearly show that the newly proposed techniques provide significant improvements, both in terms of latency and throughput, which are essential for efficient cluster computing.
21

Forster, Richárd, and Fülöp Ágnes. "Jet browser model accelerated by GPUs." Acta Universitatis Sapientiae, Informatica 8, no. 2 (December 1, 2016): 171–85. http://dx.doi.org/10.1515/ausi-2016-0008.

Abstract:
In recent decades, experimental particle physics has developed rapidly, thanks in part to the growing capacity of computers, which has made it possible to probe the structure of matter down to the level of the quark-gluon plasma of the strong interaction. Experimental evidence supported the theory by confirming the predicted results. Since its inception, researchers have been interested in track reconstruction. We studied the jet browser model, which was developed for a 4π calorimeter. This method works on the measured data set, which contains the coordinates of the interaction points in the detector space, and it allows the trajectory reconstruction of the final-state particles to be examined. We keep the total energy constant, satisfying the Gauss law. Using GPUs, the evaluation of the model can be drastically accelerated: we were able to achieve up to a 223-fold speedup compared to a CPU-based parallel implementation.
22

Hirahara, Kazuro. "Toward Advanced Earthquake Cycle Simulation." Journal of Disaster Research 4, no. 2 (April 1, 2009): 99–105. http://dx.doi.org/10.20965/jdr.2009.p0099.

Abstract:
Recent earthquake cycle simulations, based on laboratory-derived rate- and state-dependent friction laws and run on super-parallel computers, have successfully reproduced historical earthquake cycles. Earthquake cycle simulation is thus a powerful tool for providing information on the occurrence of the next Nankai megathrust earthquake, if simulation is combined with assimilation of historical data and of the crustal activity data currently observed by networks extending from land to the ocean floor. Present earthquake cycle simulations, however, make simplifying assumptions that differ from the actual, complex situation. Running simulations that relax these simplifications places huge computational demands and is difficult on present supercomputers. Looking toward advanced simulation of Nankai megathrust earthquake cycles on next-generation petaflop supercomputers, we present 1) an evaluation of the effects of the actual medium in earthquake cycle simulation, 2) improvements in deformation data from GPS and InSAR and in the inversion for estimating frictional parameters, and 3) the estimation of the occurrence of large inland earthquakes in southwest Japan and of Nankai megathrust earthquakes.
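For reference, the laboratory-derived friction laws mentioned are usually written in the Dieterich-Ruina rate- and state-dependent form (general background; the paper's exact formulation and parameter choices may differ):

    \[ \mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}, \]

where V is slip velocity, \(\theta\) a state variable and a, b, \(D_c\) laboratory parameters; simulated earthquake cycles emerge when these laws are coupled to elastic stress transfer on the fault.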
23

KALYANARAMAN, ANANTHARAMAN, and SRINIVAS ALURU. "EFFICIENT ALGORITHMS AND SOFTWARE FOR DETECTION OF FULL-LENGTH LTR RETROTRANSPOSONS." Journal of Bioinformatics and Computational Biology 04, no. 02 (April 2006): 197–216. http://dx.doi.org/10.1142/s021972000600203x.

Abstract:
LTR retrotransposons constitute one of the most abundant classes of repetitive elements in eukaryotic genomes. In this paper, we present a new algorithm for detection of full-length LTR retrotransposons in genomic sequences. The algorithm identifies regions in a genomic sequence that show structural characteristics of LTR retrotransposons. Three key components distinguish our algorithm from that of current software — (i) a novel method that preprocesses the entire genomic sequence in linear time and produces high quality pairs of LTR candidates in run-time that is constant per pair, (ii) a thorough alignment-based evaluation of candidate pairs to ensure high quality prediction, and (iii) a robust parameter set encompassing both structural constraints and quality controls providing users with a high degree of flexibility. We implemented our algorithm into a software program called LTR_par, which can be run on both serial and parallel computers. Validation of our software against the yeast genome indicates superior results in both quality and performance when compared to existing software. Additional validations are presented on rice BACs and chimpanzee genome.
24

Zhang, Y., and H. A. Salisch. "APPLICATION OF NEURAL NETWORKS TO THE EVALUATION OF RESERVOIR QUALITY IN A LITHOLOGICALLY COMPLEX FORMATION." APPEA Journal 38, no. 1 (1998): 776. http://dx.doi.org/10.1071/aj97051.

Abstract:
Neural networks are non-algorithmic, analog, distributive and massively parallel information-processing systems that have a number of performance characteristics in common with biological neural networks, or the human brain. Neural networks simulate the nervous systems of living animals, which work differently from conventional computing, in order to analyse, compute and solve complex practical problems with computers. Neural networks are able to discover highly complex relationships between the variables presented to the network. Studies show that neural networks can be used to solve a great number of practical problems that occur in modelling, prediction, assessment, recognition and image processing. In particular, neural networks are suitable for problems where some results are known but the manner in which these results can be achieved is not known (or is difficult to implement), or where the results themselves are not known. An important challenge for geologists, geophysicists and reservoir engineers is to determine petrophysical parameters accurately and to improve reservoir evaluation and description. It is important to be able to obtain realistic values of petrophysical parameters from well logs, because core data are often not available, whether because of borehole conditions or the high cost of coring. In lithologically complex formations, conventional petrophysical evaluation methods cannot be used because of the lithological heterogeneity. This paper presents an application of neural networks to estimate petrophysical parameters from well logs and to evaluate reservoir quality in the Mardie Greensand in the Carnarvon Basin in Western Australia.
25

Lu, Qiao, Silin Li, Tuo Yang, and Chenheng Xu. "An adaptive hybrid XdeepFM based deep Interest network model for click-through rate prediction system." PeerJ Computer Science 7 (September 17, 2021): e716. http://dx.doi.org/10.7717/peerj-cs.716.

Abstract:
Recent advances in communication enable individuals to use phones and computers to access information on the web. E-commerce has seen rapid development; Alibaba, for example, has nearly 1.2 billion customers in China. Click-Through Rate (CTR) forecasting is a primary task in e-commerce advertisement systems. From the traditional logistic regression algorithm to the latest popular deep neural network methods that follow a similar embedding-and-MLP pattern, several algorithms are used to predict CTR. This research proposes a hybrid model combining the Deep Interest Network (DIN) and the eXtreme Deep Factorization Machine (xDeepFM) to perform CTR prediction robustly. The cores of DIN and xDeepFM are attention and feature crossing, respectively. DIN follows an adaptive local activation unit that incorporates the attention mechanism to adaptively learn user interest from historical behaviors related to specific advertisements. xDeepFM further includes a critical part, a Compressed Interaction Network (CIN), aiming to generate feature interactions at a vector-wise level implicitly; the CIN, a plain DNN and a linear part are combined into one unified model to form xDeepFM. The proposed end-to-end hybrid model is a parallel ensemble of models via a multilayer perceptron: DIN and xDeepFM are trained in parallel, and their output is fed into the multilayer perceptron. We used the e-commerce Alibaba dataset, with the focal loss as the loss function, for experimental evaluation, applying online hard example mining (OHEM) in the training process. The experimental results indicate that the proposed hybrid model performs better than the other models.
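The fusion described, two branches whose outputs feed a multilayer perceptron, reduces to a few lines of linear algebra. A generic sketch in Python with made-up dimensions and random weights (an illustration of the parallel-ensemble idea, not the authors' code):

    import numpy as np

    rng = np.random.default_rng(0)

    def din_branch(x, w):      # placeholder for the Deep Interest Network
        return np.tanh(x @ w)

    def xdeepfm_branch(x, w):  # placeholder for xDeepFM (CIN + DNN + linear)
        return np.tanh(x @ w)

    x = rng.normal(size=(4, 16))                  # a batch of 4 feature vectors
    h = np.concatenate([din_branch(x, rng.normal(size=(16, 8))),
                        xdeepfm_branch(x, rng.normal(size=(16, 8)))], axis=1)

    w1, w2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 1))
    z = np.maximum(h @ w1, 0.0)                   # ReLU hidden layer
    p_click = 1.0 / (1.0 + np.exp(-(z @ w2)))     # sigmoid: predicted CTR
    print(p_click.ravel())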
26

Ivetić, Damjan, Željko Vasilić, Miloš Stanić, and Dušan Prodanović. "Speeding up the water distribution network design optimization using the ΔQ method." Journal of Hydroinformatics 18, no. 1 (January 21, 2015): 33–48. http://dx.doi.org/10.2166/hydro.2015.118.

Abstract:
To optimize the design of a water distribution network (WDN), a large number of possible solutions needs to be examined; hence computational efficiency is an important issue. To accelerate the computation, one can use more powerful computers, parallel computing systems with adapted hydraulic solvers, hybrid algorithms, more efficient hydraulic methods or any combination of these techniques. This paper explores the possibility of speeding up optimization using variations of the ΔQ method to solve the network hydraulics. First, the ΔQ method was used inside the evaluation function, where each tested alternative was hydraulically solved and ranked. Then, the convergence criterion was relaxed in order to reduce the computation time. Although the accuracy of the obtained hydraulic results was reduced, these were feasible and interesting solutions. Another modification was tested, where the ΔQ method was used just once to solve the hydraulics of the initial network, and the unknown flow corrections were added to the list of other unknown variables subject to optimization. Two case networks were used for testing, and the results were compared to those obtained using EPANET2. The results show that the use of the ΔQ method in hydraulic computations can significantly accelerate the optimization of WDNs.
27

Min, Li, Ya Nan Zhang, and Zhen Bang Gong. "Development of a Robotic System to Examine Outer Cylinder of Pipes in Steam Generators." Advanced Materials Research 228-229 (April 2011): 38–43. http://dx.doi.org/10.4028/www.scientific.net/amr.228-229.38.

Abstract:
The examination of the outer cylinder of pipes in steam generators is difficult to carry out in the confined space available, so a robotic system has been developed to do the job. The robot contains a base, four rotary balls, a slide guide, two steppers, a DC motor, a positioning mechanism, a synchromesh belt mechanism, a longitudinal mobile platform, an extending and retracting mechanism, a rotary platform and a micro vehicle. The positioning mechanism can locate and fix the robot efficiently. The motion of the robot consists of longitudinal motion and transverse motion. The extending and retracting mechanism can push and pull the micro vehicle, which carries a micro CMOS camera. The pose of the camera, mounted on the movable platform of a micro 3-CSR parallel mechanism, can be adjusted by three bias-two-way SMA actuators. The control system consists of two levels of computers, and the user interface was developed with VB. A fuzzy control method is utilized to control the motion of the longitudinal mobile platform, and a grey evaluation method is applied to evaluate the status of the checked areas according to the obtained images. Experimental results indicate that the robot meets the needs of the examination.
28

Kvale, Karin F., Samar Khatiwala, Heiner Dietze, Iris Kriest, and Andreas Oschlies. "Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers." Geoscientific Model Development 10, no. 6 (June 29, 2017): 2425–45. http://dx.doi.org/10.5194/gmd-10-2425-2017.

Abstract:
Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix–vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
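The "sequence of sparse matrix-vector products" has a compact generic form: with explicit and implicit transport matrices A_e and A_i extracted from the circulation model, a tracer vector c advances one step as c ← A_i (A_e c + Δt q(c)), where q is the biogeochemical source-minus-sink term. A minimal sketch in Python (identity matrices and a toy decay term stand in for real transport matrices and biogeochemistry):

    import numpy as np
    from scipy.sparse import identity

    n = 1000                            # number of wet grid boxes (made-up size)
    A_e = identity(n, format="csr")     # explicit transport matrix (placeholder)
    A_i = identity(n, format="csr")     # implicit transport matrix (placeholder)
    dt = 1200.0                         # tracer time step in seconds (assumed)

    def q(c):
        return -1e-8 * c                # toy source-minus-sink term

    c = np.ones(n)                      # initial tracer field
    for _ in range(1000):               # offline time stepping
        c = A_i @ (A_e @ c + dt * q(c))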
29

Wang, Shan Shan, and Stephen Mahin. "High-Performance Computer-Aided Optimization of Viscous Dampers for Improving the Seismic Performance of a Tall Steel Building." Key Engineering Materials 763 (February 2018): 502–9. http://dx.doi.org/10.4028/www.scientific.net/kem.763.502.

Abstract:
Using fluid viscous dampers (FVDs) has been demonstrated to be an effective method of improving the seismic performance of new and existing buildings. In engineering applications, the design of these dampers relies mainly on trial and error, which can be repetitive and labor-intensive. To improve this tedious manual process, it is beneficial to explore more formal and automated approaches that rely on recent advances in software for nonlinear dynamic analysis, performance-based evaluation and workflow management, and on the computational power of high-performance, parallel-processing computers. The optimization design procedure follows the framework of Performance-Based Earthquake Engineering (PBEE) and uses an automatic tool that incorporates an optimization engine and structural analysis software, the Open System for Earthquake Engineering Simulation (OpenSees). An existing 35-story steel moment frame is selected as a case-study building for verification of this procedure. The goal of the FVD retrofit design is to improve the building's seismic behavior, focusing on avoiding collapse under a basic-safety, level-2 earthquake (BSE-2E). The objective of the optimization procedure is to reduce the building's total loss under a BSE-2E event, and optimal damper patterns are proposed. The efficiency of the optimization procedure is demonstrated and compared with a manual refinement procedure.
30

Zhao, Qingyun, Fuqing Zhang, Teddy Holt, Craig H. Bishop, and Qin Xu. "Development of a Mesoscale Ensemble Data Assimilation System at the Naval Research Laboratory." Weather and Forecasting 28, no. 6 (December 1, 2013): 1322–36. http://dx.doi.org/10.1175/waf-d-13-00015.1.

Abstract:
An ensemble Kalman filter (EnKF) has been adopted and implemented at the Naval Research Laboratory (NRL) for mesoscale and storm-scale data assimilation to study the impact of ensemble assimilation of high-resolution observations, including those from Doppler radars, on storm prediction. The system has been improved during its implementation at NRL to further enhance its capability of assimilating various types of meteorological data. A parallel algorithm was also developed to increase the system's computational efficiency on multiprocessor computers. The EnKF has been integrated into the NRL mesoscale data assimilation system and extensively tested to ensure that the system works appropriately with new observational data streams and forecast systems. An innovative procedure was developed to evaluate the impact of assimilated observations on ensemble analyses without the need to exclude any observations for independent validation (as required by conventional evaluation based on data-denying experiments). The procedure was employed in this study to examine the impacts of ensemble size and localization on data assimilation, and the results reveal a very interesting relationship between the ensemble size and the localization length scale. All the tests conducted in this study demonstrate the capabilities of the EnKF as a research tool for mesoscale and storm-scale data assimilation with potential operational applications.
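For context, the analysis step of a stochastic EnKF updates each ensemble member with a Kalman gain built from the ensemble's own covariance (textbook form; NRL's implementation details are in the paper):

    \[ \mathbf{x}_i^a = \mathbf{x}_i^f + \mathbf{K}\,(\mathbf{y}_i - \mathbf{H}\mathbf{x}_i^f), \qquad \mathbf{K} = \mathbf{P}^f\mathbf{H}^{\mathsf{T}} \left( \mathbf{H}\mathbf{P}^f\mathbf{H}^{\mathsf{T}} + \mathbf{R} \right)^{-1}, \]

where \( \mathbf{P}^f \) is estimated from the ensemble spread (hence the sensitivity to ensemble size) and is tapered by a localization function whose length scale is the other factor examined above; the \( \mathbf{y}_i \) are perturbed observations.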
31

van Peursen, Wido, and Eep Talstra. "Computer-Assisted Analysis of Parallel Texts in the Bible. The Case of 2 Kings xviii-xix and Its Parallels in Isaiah and Chronicles." Vetus Testamentum 57, no. 1 (2007): 45–72. http://dx.doi.org/10.1163/15685337x167855.

Abstract:
In literary-critical and text-historical studies of the Bible, the comparison of parallel texts plays an important role. Starting from the description of the proximity of parallel texts as a continuum from very close to very loose, this article discusses the ways in which the computer can facilitate a comparison of various types of parallel texts. 2 Kings 18-19 and Isaiah 37-38 are taken as an example of two closely related texts. The Kings chapters and their parallels in 2 Chronicles 32 occupy a position at the other end of the continuum. These chapters differ so much that it is sometimes impossible to establish which verses should be considered parallel. The computer-assisted analysis brings to light some striking correspondences that disappear in traditional synopses, such as Ben David's Parallels in the Bible. These observations have an impact on our evaluation of the Chronicler's use of his sources and his literary taste.
32

Kodama, Yuetsu, Shuichi Sakai, and Yoshinori Yamaguchi. "Evaluation of parallel execution performance by highly parallel computer EM-4." Systems and Computers in Japan 24, no. 9 (1993): 32–41. http://dx.doi.org/10.1002/scj.4690240904.

33

Shkatuliak, Natalia, and Iryna Zadorozhna. "Students’ knowledge test control in Physics." Scientific bulletin of South Ukrainian National Pedagogical University named after K. D. Ushynsky 2019, no. 4 (129) (December 26, 2019): 143–49. http://dx.doi.org/10.24195/2617-6688-2019-4-18.

Abstract:
Controlling learners' educational achievements performs a most important function in present-day learning. Scientists and methodologists argue that test control of students' academic achievement meets the requirements of quantitative and objective measurement of learners' knowledge, skills and abilities. The relevance of the test method of controlling learners' academic achievements is also dictated by the introduction of independent external evaluation (IEE) as the final certification of school leavers. With the emergence of computer classes, the use of tests has become available and appropriate, and the use of computers in the testing process greatly enhances the benefits of this type of control. This paper is aimed at studying experimentally the impact of controlling testing on enhancing learners' educational achievements in Physics and on managing learners' cognitive activity. We prepared a system of test questions on the following topics: "We begin to study Physics", "Mechanical work. Units of work", "Power and its units", "Electric current. Conductor resistance", "Serial and parallel conductor connection", "Ideal gas laws", "Atomic Physics. Spectra". The tests were developed to identify the students' acquisition of the main issues constituting the educational material, as opposed to merely eliciting some amount of information from the learner. In our opinion, it is important for students to master a level of education that becomes the basis for further self-improvement of their own education and of their ability to overcome the challenges that life poses to them in today's society. Therefore, the main task in developing the test tasks was the formation of certain subject-related and life-oriented competences. We conducted a pedagogical experiment to introduce test control of learners' knowledge in Physics: a series of lessons was conducted using the test survey in the experimental class, while no tests were used in the control class. Using the example of testing 8th-grade schoolchildren in Physics, it was found that systematic test control of knowledge acquisition in Physics lessons allowed the students to demonstrate better knowledge of certain topics at sufficient and high levels. Keywords: testing control, learners' self-educational competence in Physics, creative thinking.
34

Stpiczyński, Przemysław. "Evaluating recursive filters on distributed memory parallel computers." Communications in Numerical Methods in Engineering 22, no. 11 (April 6, 2006): 1087–95. http://dx.doi.org/10.1002/cnm.867.

35

Consel, Charles, and Olivier Danvy. "Partial evaluation in parallel." Lisp and Symbolic Computation 5, no. 4 (December 1993): 327–42. http://dx.doi.org/10.1007/bf01806309.

36

Ziavras, Sotirios G., Haim Grebel, Anthony T. Chronopoulos, and Florent Marcelli. "A new-generation parallel computer and its performance evaluation." Future Generation Computer Systems 17, no. 3 (November 2000): 315–33. http://dx.doi.org/10.1016/s0167-739x(00)00082-0.

37

Huang, Kuo-Chan. "Minimizing Waiting Ratio for Dynamic Workload on Parallel Computers." Parallel Processing Letters 16, no. 04 (December 2006): 441–53. http://dx.doi.org/10.1142/s0129626406002769.

Abstract:
This paper proposes the waiting ratio as a basis for evaluating various scheduling methods for dynamic workloads consisting of multi-processor jobs on parallel computers. We evaluate commonly used methods, as well as several methods proposed in this paper, through simulation studies. The results indicate that some commonly used methods do not improve the waiting ratio as intuition would suggest, while some of the methods proposed in this paper improve waiting ratios greatly, by more than a factor of 10 for some workload data, promising more reasonable waiting times and better user satisfaction.
38

Orii, Shigeo. "Metrics for evaluation of parallel efficiency toward highly parallel processing." Parallel Computing 36, no. 1 (January 2010): 16–25. http://dx.doi.org/10.1016/j.parco.2009.11.003.

39

RACCA, R. G., Z. MENG, J. M. OZARD, and M. J. WILMUT. "EVALUATION OF MASSIVELY PARALLEL COMPUTING FOR EXHAUSTIVE AND CLUSTERED MATCHED-FIELD PROCESSING." Journal of Computational Acoustics 04, no. 02 (June 1996): 159–73. http://dx.doi.org/10.1142/s0218396x96000039.

Abstract:
Many computer algorithms contain an operation that accounts for a substantial portion of the total execution cost in a frequently executed loop. The use of a parallel computer to execute that operation may represent an alternative to a sheer increase in processor speed. The signal processing technique known as matched-field processing (MFP) involves performing identical and independent operations on a potentially huge set of vectors. To investigate a massively parallel approach to MFP and clustered nearest neighbors MFP, algorithms were implemented on a DECmpp 12000 massively parallel computer (from Digital Equipment and MasPar Corporation) with 8192 processors. The execution time for the MFP technique on the MasPar machine was compared with that of MFP on a serial VAX9000–210 equipped with a vector processor. The results showed that the MasPar achieved a speedup factor of at least 17 relative to the VAX9000. The speedup was 3.5 times higher than the ratio of the peak ratings of 600 MFLOPS for the MasPar versus 125 MFLOPS for the VAX9000 with vector processor. The execution speed on the parallel machine represented 64% of its peak rating. This is much better than what is commonly assumed for a parallel machine and was obtained with modest programming effort. An initial implementation of a massively parallel approach to clustered MFP on the MasPar showed a further order of magnitude increase in speed, for an overall speedup factor of 35.
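The quoted figures are mutually consistent and worth unpacking: the peak-rating ratio is 600/125 = 4.8, so the observed speedup of 17 exceeds it by 17/4.8 ≈ 3.5, as stated; and 64% of the MasPar's 600 MFLOPS peak is about 384 MFLOPS, which implies the serial VAX9000 run sustained roughly 384/17 ≈ 23 MFLOPS, i.e. under a fifth of its own 125 MFLOPS peak.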
40

Stpiczyński, Przemysław. "Fast Parallel Algorithm for Polynomial Evaluation." Parallel Algorithms and Applications 18, no. 4 (December 2003): 209–16. http://dx.doi.org/10.1080/10637190310001633673.

41

KIPER, AYSE. "PARALLEL POLYNOMIAL EVALUATION BY DECOUPLING ALGORITHM." Parallel Algorithms and Applications 9, no. 1-2 (January 1996): 145–52. http://dx.doi.org/10.1080/10637199608915570.

42

Tsai, Jeffrey J. P., Bing Li, and Eric Y. T. Juan. "Parallel evaluation of software architecture specifications." Communications of the ACM 40, no. 1 (January 1997): 83–86. http://dx.doi.org/10.1145/242857.242881.

43

Luque, Emilio, Remo Suppi, and Joan Sorribes. "Simulation of parallel systems: PSEE (Parallel System Evaluation Environment)." Future Generation Computer Systems 10, no. 2-3 (June 1994): 291–94. http://dx.doi.org/10.1016/0167-739x(94)90031-0.

44

Uehara, Kiyohiko, and Kaoru Hirota. "A Fast Method for Fuzzy Rules Learning with Derivative-Free Optimization by Formulating Independent Evaluations of Each Fuzzy Rule." Journal of Advanced Computational Intelligence and Intelligent Informatics 25, no. 2 (March 20, 2021): 213–25. http://dx.doi.org/10.20965/jaciii.2021.p0213.

Abstract:
A method is proposed for evaluating fuzzy rules independently of each other in fuzzy rules learning. The proposed method is named α-FUZZI-ES (α-weight-based fuzzy-rule independent evaluations) in this paper. In α-FUZZI-ES, the evaluation value of a fuzzy system is divided out among the fuzzy rules by using the compatibility degrees of the learning data. By the effective use of α-FUZZI-ES, a method for fast fuzzy rules learning is proposed. This is named α-FUZZI-ES learning (α-FUZZI-ES-based fuzzy rules learning) in this paper. α-FUZZI-ES learning is especially effective when evaluation functions are not differentiable and derivative-based optimization methods cannot be applied to fuzzy rules learning. α-FUZZI-ES learning makes it possible to optimize fuzzy rules independently of each other. This property reduces the dimensionality of the search space in finding the optimum fuzzy rules. Thereby, α-FUZZI-ES learning can attain fast convergence in fuzzy rules optimization. Moreover, α-FUZZI-ES learning can be efficiently performed with hardware in parallel to optimize fuzzy rules independently of each other. Numerical results show that α-FUZZI-ES learning is superior to the exemplary conventional scheme in terms of accuracy and convergence speed when the evaluation function is non-differentiable.
45

Cruz, Henry, Martina Eckert, Juan M. Meneses, and J. F. Martínez. "Fast Evaluation of Segmentation Quality with Parallel Computing." Scientific Programming 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/5767521.

Abstract:
In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, such that any kind of tool that simplifies this work is welcome. To achieve efficient and more practical handling of a normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®'s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve simultaneous execution and analysis of segmentation algorithms on the one hand, and the evaluation of detection accuracy for non-forested regions, as a study case, on the other hand. As a positive side effect, CPU usage was reduced and processing time was significantly decreased, by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
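As a quick check of what that reduction means as a speedup factor: a 68.54% cut in processing time corresponds to \( S = 1/(1 - 0.6854) \approx 3.18 \), i.e. the parallelized system runs roughly 3.2 times faster than the sequential one.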
46

Waite, Martin, Bret Giddings, and Simon Lavington. "Parallel associative combinator evaluation II." Future Generation Computer Systems 8, no. 4 (September 1992): 303–19. http://dx.doi.org/10.1016/0167-739x(92)90065-j.

47

Baker-Finch, Clem, David J. King, and Phil Trinder. "An operational semantics for parallel lazy evaluation." ACM SIGPLAN Notices 35, no. 9 (September 2000): 162–73. http://dx.doi.org/10.1145/357766.351256.

48

Frenkel, Karen A. "Evaluating two massively parallel machines." Communications of the ACM 29, no. 8 (August 1986): 752–58. http://dx.doi.org/10.1145/6424.6427.

49

Buker, U., and B. Mertsching. "Parallel Evaluation of Hierarchical Image Databases." Journal of Parallel and Distributed Computing 31, no. 2 (December 1995): 141–52. http://dx.doi.org/10.1006/jpdc.1995.1152.

50

Muller, D. E., and F. P. Preparata. "Parallel restructuring and evaluation of expressions." Journal of Computer and System Sciences 44, no. 1 (February 1992): 43–62. http://dx.doi.org/10.1016/0022-0000(92)90003-2.
