
Journal articles on the topic 'Optimisations for GPU'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Optimisations for GPU.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Amadio, G., J. Apostolakis, P. Buncic, G. Cosmo, D. Dosaru, A. Gheata, S. Hageboeck, et al. "Offloading electromagnetic shower transport to GPUs." Journal of Physics: Conference Series 2438, no. 1 (February 1, 2023): 012055. http://dx.doi.org/10.1088/1742-6596/2438/1/012055.

Abstract:
Making general particle transport simulation for high-energy physics (HEP) single-instruction-multiple-thread (SIMT) friendly, to take advantage of accelerator hardware, is an important alternative for boosting the throughput of simulation applications. To date, this challenge is not yet resolved, due to difficulties in mapping the complexity of Geant4 components and workflow to the massive parallelism features exposed by graphics processing units (GPU). The AdePT project is one of the R&D initiatives tackling this limitation and exploring GPUs as potential accelerators for offloading some part of the CPU simulation workload. Our main target is to implement a complete electromagnetic shower demonstrator working on the GPU. The project is the first to create a full prototype of a realistic electron, positron, and gamma electromagnetic shower simulation on GPU, implemented as either a standalone application or as an extension of the standard Geant4 CPU workflow. Our prototype currently provides a platform to explore many optimisations and different approaches. We present the most recent results and initial conclusions of our work, using both a standalone GPU performance analysis and a first implementation of a hybrid workflow based on Geant4 on the CPU and AdePT on the GPU.
2

Yao, Shujun, Shuo Zhang, and Wanhua Guo. "Electromagnetic transient parallel simulation optimisation based on GPU." Journal of Engineering 2019, no. 16 (March 1, 2019): 1737–42. http://dx.doi.org/10.1049/joe.2018.8587.

3

Ebrahim, Abdulla, Andrea Bocci, Wael Elmedany, and Hesham Al-Ammal. "Optimising the Configuration of the CMS GPU Reconstruction." EPJ Web of Conferences 295 (2024): 11015. http://dx.doi.org/10.1051/epjconf/202429511015.

Abstract:
Particle track reconstruction for high energy physics experiments like CMS is computationally demanding but can benefit from GPU acceleration if properly tuned. This work develops an autotuning framework to automatically optimise the throughput of GPU-accelerated CUDA kernels in CMSSW. The proposed system navigates the complex parameter space by generating configurations, benchmarking performance, and leveraging multi-fidelity optimisation from simplified applications. The autotuned launch parameters improved CMSSW tracking throughput over the default settings by finding optimised, GPU-specific configurations. The successful application of autotuning to CMSSW demonstrates both performance portability across diverse accelerators and the potential of the methodology to optimise other HEP codebases.
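
The launch-parameter tuning described above can be illustrated outside CMSSW with a small, hedged sketch: assuming a CUDA-capable GPU and the Numba package (this is not the paper's autotuner), it benchmarks a toy SAXPY kernel over a few candidate threads-per-block values and keeps the fastest configuration.

```python
# Minimal autotuning sketch (not the CMSSW framework): time a toy CUDA kernel
# for several launch configurations and report the fastest one.
# Assumes a CUDA-capable GPU and the Numba package.
import time
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)
    if i < x.size:
        out[i] = a * x[i] + y[i]

def benchmark(threads_per_block, n=1 << 22, repeats=20):
    x = cuda.to_device(np.random.rand(n).astype(np.float32))
    y = cuda.to_device(np.random.rand(n).astype(np.float32))
    out = cuda.device_array(n, dtype=np.float32)
    blocks = (n + threads_per_block - 1) // threads_per_block
    saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # warm-up / JIT
    cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(repeats):
        saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)
    cuda.synchronize()
    return (time.perf_counter() - t0) / repeats

if __name__ == "__main__":
    timings = {tpb: benchmark(tpb) for tpb in (64, 128, 256, 512, 1024)}
    best = min(timings, key=timings.get)
    print(f"fastest threads-per-block: {best} ({timings[best] * 1e3:.3f} ms)")
```
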
4

Quan, H., Z. Cui, R. Wang, and Zongjie Cao. "GPU parallel implementation and optimisation of SAR target recognition method." Journal of Engineering 2019, no. 21 (November 1, 2019): 8129–33. http://dx.doi.org/10.1049/joe.2019.0669.

5

Träff, Erik A., Anton Rydahl, Sven Karlsson, Ole Sigmund, and Niels Aage. "Simple and efficient GPU accelerated topology optimisation: Codes and applications." Computer Methods in Applied Mechanics and Engineering 410 (May 2023): 116043. http://dx.doi.org/10.1016/j.cma.2023.116043.

6

Szénási, Sándor. "Solving the inverse heat conduction problem using NVLink capable Power architecture." PeerJ Computer Science 3 (November 20, 2017): e138. http://dx.doi.org/10.7717/peerj-cs.138.

Abstract:
The accurate knowledge of Heat Transfer Coefficients is essential for the design of precise heat transfer operations. The determination of these values requires Inverse Heat Transfer Calculations, which are usually based on heuristic optimisation techniques, like Genetic Algorithms or Particle Swarm Optimisation. The main bottleneck of these heuristics is the high computational demand of the cost function calculation, which is usually based on heat transfer simulations producing the thermal history of the workpiece at given locations. This Direct Heat Transfer Calculation is a well parallelisable process, making it feasible to implement an efficient GPU kernel for this purpose. This paper presents a novel step forward: based on the special requirements of the heuristics solving the inverse problem (executing hundreds of simulations in a parallel fashion at the end of each iteration), it is possible to gain a higher level of parallelism using multiple graphics accelerators. The results show that this implementation (running on 4 GPUs) is about 120 times faster than a traditional CPU implementation using 20 cores. The latest developments of the GPU-based High Power Computations area were also analysed, like the new NVLink connection between the host and the devices, which tries to solve the long time existing data transfer handicap of GPU programming.
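
The bottleneck highlighted above, evaluating hundreds of simulation-based cost functions in parallel at the end of each iteration, can be sketched generically. The snippet below is a hypothetical CPU-only illustration rather than the paper's multi-GPU implementation: a toy cost function stands in for the direct heat transfer simulation, and a process pool evaluates a whole candidate population at once.

```python
# Hypothetical illustration of population-parallel cost evaluation for a
# heuristic optimiser (GA/PSO). The toy cost function stands in for the
# direct heat-transfer simulation that a GPU backend would accelerate.
from concurrent.futures import ProcessPoolExecutor
import random

def simulate_cost(candidate_htc):
    # Placeholder "simulation": distance from an assumed true heat transfer
    # coefficient of 1500 W/(m^2*K); the real cost would come from a solver.
    return (candidate_htc - 1500.0) ** 2

def evaluate_population(population, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_cost, population))

if __name__ == "__main__":
    population = [random.uniform(500.0, 3000.0) for _ in range(200)]
    costs = evaluate_population(population)
    best_cost, best_candidate = min(zip(costs, population))
    print(f"best candidate so far: {best_candidate:.1f} (cost {best_cost:.1f})")
```
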
7

Bitam, Salim, NourEddine Djedi, and Maroua Grid. "GPU-based distributed bee swarm optimisation for dynamic vehicle routing problem." International Journal of Ad Hoc and Ubiquitous Computing 31, no. 3 (2019): 155. http://dx.doi.org/10.1504/ijahuc.2019.10022343.

8

Khemiri, Randa, Hassan Kibeya, Fatma Ezahra Sayadi, Nejmeddine Bahri, Mohamed Atri, and Nouri Masmoudi. "Optimisation of HEVC motion estimation exploiting SAD and SSD GPU-based implementation." IET Image Processing 12, no. 2 (February 1, 2018): 243–53. http://dx.doi.org/10.1049/iet-ipr.2017.0474.

9

Uchida, Akihiro, Yasuaki Ito, and Koji Nakano. "Accelerating ant colony optimisation for the travelling salesman problem on the GPU." International Journal of Parallel, Emergent and Distributed Systems 29, no. 4 (October 8, 2013): 401–20. http://dx.doi.org/10.1080/17445760.2013.842568.

10

Spalding, Myles, Anthony Walsh, and Trent Aland. "Evaluation of a new GPU-enabled VMAT multi-criteria optimisation plan generation algorithm." Medical Dosimetry 45, no. 4 (2020): 368–73. http://dx.doi.org/10.1016/j.meddos.2020.05.007.

11

Meager, Iain, Raja Shahid Ashraf, Christian B. Nielsen, Jenny E. Donaghey, Zhenggang Huang, Hugo Bronstein, James R. Durrant, and Iain McCulloch. "Power conversion efficiency enhancement in diketopyrrolopyrrole based solar cells through polymer fractionation." J. Mater. Chem. C 2, no. 40 (2014): 8593–98. http://dx.doi.org/10.1039/c4tc01594k.

12

He, Xuzhen. "Accelerated linear algebra compiler for computationally efficient numerical models: Success and potential area of improvement." PLOS ONE 18, no. 2 (February 24, 2023): e0282265. http://dx.doi.org/10.1371/journal.pone.0282265.

Abstract:
The recent dramatic progress in machine learning is partially attributed to the availability of high-performant computers and development tools. The accelerated linear algebra (XLA) compiler is one such tool that automatically optimises array operations (mostly fusion to reduce memory operations) and compiles the optimised operations into high-performant programs specific to target computing platforms. Like machine-learning models, numerical models are often expressed in array operations, and thus their performance can be boosted by XLA. This study is the first of its kind to examine the efficiency of XLA for numerical models, and the efficiency is examined stringently by comparing its performance with that of optimal implementations. Two shared-memory computing platforms are examined–the CPU platform and the GPU platform. To obtain optimal implementations, the computing speed and its optimisation are rigorously studied by considering different workloads and the corresponding computer performance. Two simple equations are found to faithfully model the computing speed of numerical models with very few easily measurable parameters. Regarding operation optimisation within XLA, results show that models expressed in low-level operations (e.g., slice, concatenation, and arithmetic operations) are successfully fused while high-level operations (e.g., convolution and roll) are not. Regarding compilation within XLA, results show that for the CPU platform of certain computers and certain simple numerical models on the GPU platform, XLA achieves high efficiency (> 80%) for large problems and acceptable efficiency (10%~80%) for medium-size problems–the gap is from the overhead cost of Python. Unsatisfactory performance is found for the CPU platform of other computers (operations are compiled in a non-optimal way) and for high-dimensional complex models for the GPU platform, where each GPU thread in XLA handles 4 (single precision) or 2 (double precision) output elements–hoping to exploit the high-performant instructions that can read/write 4 or 2 floating-point numbers with one instruction. However, these instructions are rarely used in the generated code for complex models and performance is negatively affected. Therefore, flags should be added to control the compilation for these non-optimal scenarios.
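
As a hedged illustration of the operation fusion discussed above, the sketch below expresses a slice-and-arithmetic update in JAX and compiles it with XLA via jax.jit. It assumes only the jax package and is not the benchmark code from the study.

```python
# Minimal sketch of compiling array operations with XLA through JAX's jit.
# The chained slice and arithmetic operations are the kind of low-level ops
# the study reports XLA fuses well. Assumes the `jax` package.
import jax
import jax.numpy as jnp

def diffuse(u, alpha=0.1):
    # One explicit 1-D diffusion step built from slices and arithmetic.
    return u.at[1:-1].set(u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]))

diffuse_xla = jax.jit(diffuse)      # compiled (and fused) by XLA for CPU or GPU

u = jnp.linspace(0.0, 1.0, 1024)
u = diffuse_xla(u)                  # first call triggers compilation
u = diffuse_xla(u).block_until_ready()
print(float(u.sum()))
```
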
13

Bélanger, C., É. Poulin, S. Aubin, J. A. M. Cunha, and L. Beaulieu. "PP-0150 Commissioning of a GPU-based multi-criteria optimisation algorithm for HDR brachytherapy." Radiotherapy and Oncology 158 (May 2021): S113–S115. http://dx.doi.org/10.1016/s0167-8140(21)06442-2.

14

Huang, Shengjun, and Venkata Dinavahi. "GPU-based parallel real-time volt/var optimisation for distribution network considering distributed generators." IET Generation, Transmission & Distribution 12, no. 20 (November 13, 2018): 4472–81. http://dx.doi.org/10.1049/iet-gtd.2017.1887.

15

Gallio, Elena, Osvaldo Rampado, Elena Gianaria, Silvio Diego Bianchi, and Roberto Ropolo. "A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging." PLOS ONE 10, no. 11 (November 6, 2015): e0141497. http://dx.doi.org/10.1371/journal.pone.0141497.

16

Najam, Shaheryar, and Jameel Ahmed. "Run-time neuro-fuzzy type-2 controller for power optimisation of GP-GPU architecture." IET Circuits, Devices & Systems 14, no. 8 (November 1, 2020): 1253–57. http://dx.doi.org/10.1049/iet-cds.2020.0233.

17

Fluke, Christopher J., David G. Barnes, Benjamin R. Barsdell, and Amr H. Hassan. "Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters." Publications of the Astronomical Society of Australia 28, no. 1 (2011): 15–27. http://dx.doi.org/10.1071/as10019.

Abstract:
General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.
18

Schweiger, Martin. "GPU-Accelerated Finite Element Method for Modelling Light Transport in Diffuse Optical Tomography." International Journal of Biomedical Imaging 2011 (2011): 1–11. http://dx.doi.org/10.1155/2011/403892.

Abstract:
We introduce a GPU-accelerated finite element forward solver for the computation of light transport in scattering media. The forward model is the computationally most expensive component of iterative methods for image reconstruction in diffuse optical tomography, and performance optimisation of the forward solver is therefore crucial for improving the efficiency of the solution of the inverse problem. The GPU forward solver uses a CUDA implementation that evaluates on the graphics hardware the sparse linear system arising in the finite element formulation of the diffusion equation. We present solutions for both time-domain and frequency-domain problems. A comparison with a CPU-based implementation shows significant performance gains of the graphics accelerated solution, with improvements of approximately a factor of 10 for double-precision computations, and factors beyond 20 for single-precision computations. The gains are also shown to be dependent on the mesh complexity, where the largest gains are achieved for high mesh resolutions.
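
The core operation described above, solving the sparse linear system arising from the finite element formulation on the GPU, can be sketched with a plain conjugate-gradient loop. This is not the paper's CUDA solver: it assumes the cupy and scipy packages plus a CUDA-capable GPU, and a diagonally dominant tridiagonal matrix stands in for the finite element system matrix.

```python
# Hedged sketch: conjugate gradients on a sparse SPD system, with the matrix
# and vectors held on the GPU via CuPy. Not the paper's CUDA implementation.
import scipy.sparse as sp
import cupy as cp
from cupyx.scipy.sparse import csr_matrix

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = cp.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = cp.dot(r, r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / cp.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = cp.dot(r, r)
        if cp.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

n = 10000
# A diagonally dominant tridiagonal matrix stands in for a FEM system matrix.
A_cpu = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
A = csr_matrix(A_cpu)                 # copy the sparse system to the GPU
b = cp.ones(n, dtype=cp.float64)
x = conjugate_gradient(A, b)
print(float(cp.linalg.norm(A @ x - b)))   # residual norm
```
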
19

Knox, Stephen T., Sam J. Parkinson, Clarissa Y. P. Wilding, Richard A. Bourne, and Nicholas J. Warren. "Autonomous polymer synthesis delivered by multi-objective closed-loop optimisation." Polymer Chemistry 13, no. 11 (2022): 1576–85. http://dx.doi.org/10.1039/d2py00040g.

Abstract:
An artificially intelligent flow-reactor platform equipped with online NMR and GPC enables autonomous polymerisation optimisation utilising a machine learning algorithm to map the trade-off between monomer conversion and dispersity.
20

Plessen, Mogens Graf. "GPU-accelerated logistics optimisation for biomass production with multiple simultaneous harvesters tours, fields and plants." Biomass and Bioenergy 141 (October 2020): 105650. http://dx.doi.org/10.1016/j.biombioe.2020.105650.

21

Chotirmall, Sanjay Haresh, Gillian Lee, Mary Cosgrave, Ciaran Donegan, and Allan Moore. "Optimisation of dementia management in Irish primary care." International Journal of Geriatric Psychiatry 23, no. 8 (August 2008): 880. http://dx.doi.org/10.1002/gps.2049.

22

Sabuga, Wladimir, and Rob Haines. "Development of 1.6 GPa pressure-measuring multipliers." ACTA IMEKO 3, no. 2 (June 23, 2014): 54. http://dx.doi.org/10.21014/acta_imeko.v3i2.104.

Abstract:
Two 1.6 GPa pressure-measuring multipliers were developed and built. Feasibility analysis of their operation up to 1.6 GPa, parameter optimisation and prediction of their behaviour were performed using Finite Element Analysis (FEA). Their performance and metrological properties were determined experimentally at pressures up to 500 MPa. The experimental and theoretical results are in reasonable agreement. With the results obtained so far, the relative standard uncertainty of the pressure measurement up to 1.6 GPa is expected to be not greater than 2·10⁻⁴. With this new development the range of the pressure calibration service in Europe can be extended up to 1.5 GPa.
23

Guzzi, Francesco, Alessandra Gianoncelli, Fulvio Billè, Sergio Carrato, and George Kourousias. "Automatic Differentiation for Inverse Problems in X-ray Imaging and Microscopy." Life 13, no. 3 (February 23, 2023): 629. http://dx.doi.org/10.3390/life13030629.

Abstract:
Computational techniques allow breaking the limits of traditional imaging methods, such as time restrictions, resolution, and optics flaws. While simple computational methods can be enough for highly controlled microscope setups or just for previews, an increased level of complexity is instead required for advanced setups, acquisition modalities or where uncertainty is high; the need for complex computational methods clashes with rapid design and execution. In all these cases, Automatic Differentiation, one of the subtopics of Artificial Intelligence, may offer a functional solution, but only if a GPU implementation is available. In this paper, we show how a framework built to solve just one optimisation problem can be employed for many different X-ray imaging inverse problems.
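
A minimal sketch of the idea above, using automatic differentiation to solve an inverse problem by gradient-based optimisation, is given below. It recovers two parameters of a toy forward model from noisy synthetic data; it is not the authors' X-ray imaging framework and assumes only the torch package (a GPU is used if available, otherwise the CPU).

```python
# Hedged sketch: automatic differentiation for a tiny inverse problem.
# A toy forward model y = a * exp(-b * t) stands in for the imaging model.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
torch.manual_seed(0)

# Synthetic "measurements" from the forward model, plus noise.
t = torch.linspace(0.0, 4.0, 200, device=device)
true_a, true_b = 3.0, 0.7
y_obs = true_a * torch.exp(-true_b * t) + 0.02 * torch.randn_like(t)

# Unknown parameters, recovered by gradient descent through autodiff.
params = torch.tensor([1.0, 0.1], device=device, requires_grad=True)
optimiser = torch.optim.Adam([params], lr=0.05)

for step in range(2000):
    a, b = params[0], params[1]
    loss = torch.mean((a * torch.exp(-b * t) - y_obs) ** 2)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print(params.detach().cpu().numpy())   # should end up close to [3.0, 0.7]
```
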
24

Aicha, Faten Ben, Faouzi Bouani, and Mekki Ksouri. "Design of GPC closed-loop performances using multi-objective optimisation." International Journal of Modelling, Identification and Control 16, no. 2 (2012): 180. http://dx.doi.org/10.1504/ijmic.2012.047125.

25

Piętak, Kamil, Dominik Żurek, Marcin Pietroń, Andrzej Dymara, and Marek Kisiel-Dorohinicki. "Striving for performance of discrete optimisation via memetic agent-based systems in a hybrid CPU/GPU environment." Journal of Computational Science 31 (February 2019): 151–62. http://dx.doi.org/10.1016/j.jocs.2019.01.007.

26

Wang, Xiebing, Kai Huang, and Alois Knoll. "Performance Optimisation of Parallelized ADAS Applications in FPGA-GPU Heterogeneous Systems: A Case Study With Lane Detection." IEEE Transactions on Intelligent Vehicles 4, no. 4 (December 2019): 519–31. http://dx.doi.org/10.1109/tiv.2019.2938092.

27

Rodriguez, Pedro, and Didier Dumur. "Robustification d'une commande GPC par optimisation convexe du paramètre de Youla." Journal Européen des Systèmes Automatisés 37, no. 1 (January 30, 2003): 109–34. http://dx.doi.org/10.3166/jesa.37.109-134.

28

Ramesh, Vishal Avinash, Ehsan Nikbakht Jarghouyeh, Ahmed Saleh Alraeeini, and Amin Al-Fakih. "Optimisation Investigation and Bond-Slip Behaviour of High Strength PVA-Engineered Geopolymer Composite (EGC) Cured in Ambient Temperatures." Buildings 13, no. 12 (December 4, 2023): 3020. http://dx.doi.org/10.3390/buildings13123020.

Abstract:
Engineered geopolymer composite (EGC) is becoming an uprising product in the civil industry as a substitute and solution for conventional geopolymer concrete (GPC) as GPC exhibits brittleness and has poor cracking resistance. In this paper, we explored high strength engineered geopolymer composite (EGC) made of polyvinyl alcohol (PVA) fibre and without coarse aggregate constituents characterised as high-performance geopolymer concrete. Varying alkaline solution to fly ash ratio (AL/FA) was investigated. Bond-slip behaviour and the mechanical properties, including compressive, tensile, and flexural strengths, were studied. PVA-EGC mix designs in this research was optimised using response surface methodology (RSM). Various parameters, including the amount of ground granulated blast slag (GGBS) and silica fume, were included in the parametric and optimisation study. Based on the RSM study, the use of quadratic studies found the responses to be well-fitted. Next, the optimised mix design was utilised for the casting of all the samples for the mechanical and bond-slip tests in this study. The main parameters of bonding behaviour include multiple embedment lengths (7 d, 10 d, 12 d and 15 d) and various sizes of rebar diameter used for pull-out tests. Moreover, the mechanical properties and bond behaviours of EGC were compared with those of conventional geopolymer concrete (GPC). The compressive strength of EGC and GPC at 28 days were designed to be similar for comparison purposes; however, EGC shows higher early compressive strength on day 1 compared to GPC. In addition, results indicate that EGC has superior mechanical properties and bond performance compared to GPC, where EGC is approximately 9 and 150% higher than GPC in terms of flexural and tensile strength, respectively. Pull-out tests showed that EGC samples exhibited higher ductility, as evidenced by the presence of multiple cracks before any exhibited failure in tension and flexure. Ductile failure modes, such as pull-out failure and pull-out splitting failure, are observed in EGC. In contrast, GPC specimens show brittle failure, such as splitting failure.
29

Hsieh, Hsin-Ta, and Chih-Hsing Chu. "Particle swarm optimisation (PSO)-based tool path planning for 5-axis flank milling accelerated by graphics processing unit (GPU)." International Journal of Computer Integrated Manufacturing 24, no. 7 (July 2011): 676–87. http://dx.doi.org/10.1080/0951192x.2011.570792.

30

Brazhnik, O. I., and A. A. Rudenko. "Pilot test results of GP-200/11x500-m stage chamber pump." Mining Industry (Gornay Promishlennost), no. 6/2020 (December 29, 2020): 53–55. http://dx.doi.org/10.30686/1609-9192-2020-6-53-55.

Abstract:
The article reviews and analyses issues that include optimisation of pumping equipment for efficient drainage of saline water in conditions of underground mines. It describes approaches to design improvement of pumping equipment and shows the economic effect of its implementation based on the results of pilot tests.
31

Langdon, William B., and Oliver Krauss. "Genetic Improvement of Data for Maths Functions." ACM Transactions on Evolutionary Learning and Optimization 1, no. 2 (July 23, 2021): 1–30. http://dx.doi.org/10.1145/3461016.

Abstract:
We use continuous optimisation and manual code changes to evolve up to 1024 Newton-Raphson numerical values embedded in an open source GNU C library glibc square root sqrt to implement a double precision cube root routine cbrt, binary logarithm log2 and reciprocal square root function for C in seconds. The GI inverted square root x^(-1/2) is far more accurate than Quake's InvSqrt, Quare root. GI shows potential for automatically creating mobile or low resource mote smart dust bespoke custom mathematical libraries with new functionality.
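
The Newton-Raphson scheme the abstract refers to can be shown directly; the sketch below is the textbook iteration for a double-precision cube root, not the glibc or genetically improved code.

```python
# Illustrative sketch (not the glibc/GI implementation): a double-precision
# cube root computed with Newton-Raphson iteration on x**3 - a = 0.
def cbrt(a, rel_tol=1e-15):
    if a == 0.0:
        return 0.0
    sign = -1.0 if a < 0.0 else 1.0
    a = abs(a)
    x = a if a >= 1.0 else 1.0                  # crude starting estimate
    while True:
        x_new = (2.0 * x + a / (x * x)) / 3.0   # Newton step
        if abs(x_new - x) <= rel_tol * x_new:
            return sign * x_new
        x = x_new

print(cbrt(27.0))    # ~3.0
print(cbrt(-8.0))    # ~-2.0
```
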
32

Strauss, Johannes M., and Robert T. Dobson. "Evaluation of a second order simulation for Sterling engine design and optimisation." Journal of Energy in Southern Africa 21, no. 2 (May 1, 2010): 17–29. http://dx.doi.org/10.17159/2413-3051/2010/v21i2a3252.

Abstract:
This paper reports on the investigation of the simulation accuracy of a second order Stirling cycle simulation tool as developed by Urieli (2001) and improvements thereof against the known performance of the GPU-3 Stirling engine. The objective of this investigation is to establish a simulation tool to perform preliminary engine design and optimisation. The second order formulation under investigation simulates the engine based on the ideal adiabatic cycle, and parasitic losses are only accounted for afterwards. This approach differs from third order formulations that simulate the engine in a coupled manner incorporating non-idealities during cyclic simulation. While the second order approach is less accurate, it holds the advantage that the degradation of the ideal performance due to the various losses is more clearly defined and offers insight into improving engine performance. It is therefore particularly suitable for preliminary design of engines. Two methods to calculate the performance and efficiency of the data obtained from the ideal adiabatic cycle and the parasitic losses were applied, namely the method used by Urieli and a proposed alternative method. These two methods differ essentially in how the regenerator and pumping losses are accounted for. The overall accuracy of the simulations, especially using the proposed alternative method to calculate the different operational variables, proved to be satisfactory. Although significant inaccuracies occurred for some of the operational variables, the simulated trends in general followed the measurements and it is concluded that this second order Stirling cycle simulation tool using the proposed alternative method to calculate the different operational variables is suitable for preliminary engine design and optimisation.
33

Mpon, Richard, Maurice K. Ndikontar, Hyppolite N. Ntede, J. Noah Ngamveng, Alain Dufresne, Ohandja Ayina, Emmanuel Njungap, and Abel Tame. "Optimisation of Graft Copolymerisation of Fibres from Banana Trunk." E-Journal of Chemistry 9, no. 1 (2012): 373–80. http://dx.doi.org/10.1155/2012/313490.

Abstract:
Sheets from banana trunks were opened out and dried for several weeks in air. Pulp was obtained by the nitric acid process with a yield of 37.7% while fibres were obtained according to the modified standard Japanese method for cellulose in wood for pulp (JIS 8007) with a yield of 65% with respect to oven dried plant material. Single fibre obtained by the JIS method had an average diameter of 11.0 μm and Young's modulus, tensile strength and strain at break of 7.05 GPa, 81.7 MPa and 5.2% respectively. Modification of the fibres was carried out by grafting ethyl acrylate in the presence of ammonium nitrate cerium(IV). Optimisation of the copolymerisation reaction conditions was studied by measuring the rate of conversion, the rate of grafting and the grafting efficiency. The results showed that at low values of ceric ion concentration (0.04 M), at ambient temperature, after three hours and at a concentration of 0.2 M ethyl acrylate, maximum values of the parameters cited were obtained.
34

Richmond, Tutea, Louise Lods, Jany Dandurand, Eric Dantras, Colette Lacabanne, Samuel Malburet, Alain Graillot, Jean-Michel Durand, Edouard Sherwood, and Philippe Ponteins. "Physical and dynamic mechanical properties of continuous bamboo reinforcement/bio-based epoxy composites." Materials Research Express 9, no. 1 (January 1, 2022): 015505. http://dx.doi.org/10.1088/2053-1591/ac4c1a.

Abstract:
Unidirectional bamboo reinforced cardanol-based epoxy composites were prepared by a close mould method. Two morphologies of reinforcements were used in this research: bamboo fibres and bamboo strips. The present article investigates the influence of bamboo reinforcements on the thermal and mechanical properties of the bio-based matrix. Differential Scanning Calorimetry analyses showed that the introduction of bamboo does not modify the physical properties of the matrix. DMA analyses in shear mode showed an improvement of the shear conservative modulus that reaches 1.7 ± 0.1 GPa. This value, which is independent of the morphology of the reinforcements, indicates the existence of physical interactions. The continuity of matter between bamboo strips or bamboo fibres and the matrix observed by SEM confirms this result. Nevertheless, in tensile mode, the improvement of the tensile conservative modulus is specific to the used morphology. Indeed, for bamboo strips composites, it is 7.7 ± 0.8 GPa, while for bamboo fibres composites, it reaches 9.6 ± 0.8 GPa. This result is explained by the optimisation of stress transfer thanks to the specific morphology of bamboo fibres. A significant increase is also observed for the rubbery modulus due to entanglements specific of bamboo reinforcement.
35

Cherop, Peter Tumwet, Sammy Lewis Kiambi, and Paul Musonge. "Modelling and optimisation of oxidative desulphurisation of tyre-derived oil via central composite design approach." Green Processing and Synthesis 8, no. 1 (January 28, 2019): 451–63. http://dx.doi.org/10.1515/gps-2019-0013.

Abstract:
The aim of this study was to apply the central composite design technique to study the interaction of the amount of formic acid (6-12 mL), amount of hydrogen peroxide (6-10 mL), temperature (54-58°C) and reaction time (40-60 min) during the oxidative desulphurisation (ODS) of tyre-derived oil (TDO). The TDO was oxidised at various parametric interactions before being subjected to solvent extraction using acetonitrile. The acetonitrile to oil ratios used during the extraction were 1:1 and 1:2. The content of sulphur before and after desulphurisation was analysed using ICP-AES. The maximum sulphur removal achieved using a 1:1 acetonitrile to oxidised oil ratio was 86.05%, and this was achieved at formic acid amount, hydrogen peroxide amount, temperature and a reaction time of 9 mL, 8 mL, 54°C and 50 min respectively. Analysis of variance (ANOVA) indicated that the reduced cubic model could best predict the sulphur removal for the ODS process. Coefficient of determination (R² = 0.9776), adjusted R² = 0.9254, predicted R² = 0.8356 all indicated that the model was significant. In addition, the p-value of lack of fit (LOF) was 0.8926, an indication of its insignificance relative to pure error.
36

Cao, Yue, Shuchen Guo, Shuai Jiang, Xuan Zhou, Xiaobei Wang, Yunhua Luo, Zhongjun Yu, Zhimin Zhang, and Yunkai Deng. "Parallel Optimisation and Implementation of a Real-Time Back Projection (BP) Algorithm for SAR Based on FPGA." Sensors 22, no. 6 (March 16, 2022): 2292. http://dx.doi.org/10.3390/s22062292.

Abstract:
This study conducts an in-depth evaluation of imaging algorithms and software and hardware architectures to meet the capability requirements of real-time image acquisition systems, such as spaceborne and airborne synthetic aperture radar (SAR) systems. By analysing the principles and models of SAR imaging, this research creatively puts forward the fully parallel processing architecture for the back projection (BP) algorithm based on Field-Programmable Gate Array (FPGA). The processing time consumption has significant advantages compared with existing methods. This article describes the BP imaging algorithm, which stands out with its high processing accuracy and two-dimensional decoupling of distance and azimuth, and analyses the algorithmic flow, operation, and storage requirements. The algorithm is divided into five core operations: range pulse compression, upsampling, oblique distance calculation, data reading, and phase accumulation. The architecture and optimisation of the algorithm are presented, and the optimisation methods are described in detail from the perspective of algorithm flow, fixed-point operation, parallel processing, and distributed storage. Next, the maximum resource utilisation rate of the hardware platform in this study is found to be more than 80%, the system power consumption is 21.073 W, and the processing time efficiency is better than designs with other FPGA, DSP, GPU, and CPU. Finally, the correctness of the processing results is verified using actual data. The experimental results showed that 1.1 s were required to generate an image with a size of 900 × 900 pixels at a 200 MHz clock rate. This technology can solve the multi-mode, multi-resolution, and multi-geometry signal processing problems in an integrated manner, thus laying a foundation for the development of a new, high-performance, SAR system for real-time imaging processing.
37

Tomczak, Tadeusz. "Data-Oriented Language Implementation of the Lattice–Boltzmann Method for Dense and Sparse Geometries." Applied Sciences 11, no. 20 (October 13, 2021): 9495. http://dx.doi.org/10.3390/app11209495.

Abstract:
The performance of lattice–Boltzmann solver implementations usually depends mainly on memory access patterns. Achieving high performance requires then complex code which handles careful data placement and ordering of memory transactions. In this work, we analyse the performance of an implementation based on a new approach called the data-oriented language, which allows the combination of complex memory access patterns with simple source code. As a use case, we present and provide the source code of a solver for D2Q9 lattice and show its performance on GTX Titan Xp GPU for dense and sparse geometries up to 4096² nodes. The obtained results are promising, around 1000 lines of code allowed us to achieve performance in the range of 0.6 to 0.7 of maximum theoretical memory bandwidth (over 2.5 and 5.0 GLUPS for double and single precision, respectively) for meshes of sizes above 1024² nodes, which is close to the current state-of-the-art. However, we also observed relatively high and sometimes difficult to predict overheads, especially for sparse data structures. The additional issue was also a rather long compilation, which extended the time of short simulations, and a lack of access to low-level optimisation mechanisms.
38

Xu, Yiming, Suli Liu, Weijun Meng, Hua Yuan, Weibao Ma, Xiangqian Sun, Jianhong Xu, Bin Tan, and Ping Li. "Continuous sulfonation of hexadecylbenzene in a microreactor." Green Processing and Synthesis 10, no. 1 (January 1, 2021): 219–29. http://dx.doi.org/10.1515/gps-2021-0021.

Abstract:
Heavy alkyl benzene sulfonates are inexpensive surfactants that are extensively used as oil-displacing agents during tertiary oil recovery. Among these, C16–18 heavy alkyl benzene sulfonates possess an excellent ability to reduce the oil-water interface tension. In this study, hexadecylbenzene sulfonic acid (HBSA) was synthesised in a continuous stirred-tank microreactor using a continuous method with 1,2-dichloroethane (EDC) dilution. Post-sulfonation liquid SO3 solution was used as a sulfonating agent for hexadecylbenzene (HDB). The effects of reaction conditions, such as the SO3:HDB molar ratio, sulfonation temperature and sulfonation agent concentration, on the yield and purity of the product were investigated. Optimisation of the reaction process yielded high-quality HBSA samples with a purity exceeding 99 wt%. The continuous sulfonation process significantly enhanced the production and efficiency in the case of a considerably short residence time (10 s) in the reactor, without the need for aging. The results of this study demonstrate significant potential for application in industrial production.
39

Lambert, Jonathan, Rosemary Monahan, and Kevin Casey. "Accidental Choices—How JVM Choice and Associated Build Tools Affect Interpreter Performance." Computers 11, no. 6 (June 14, 2022): 96. http://dx.doi.org/10.3390/computers11060096.

Abstract:
Considering the large number of optimisation techniques that have been integrated into the design of the Java Virtual Machine (JVM) over the last three decades, the Java interpreter continues to persist as a significant bottleneck in the performance of bytecode execution. This paper examines the relationship between Java Runtime Environment (JRE) performance concerning the interpreted execution of Java bytecode and the effect modern compiler selection and integration within the JRE build toolchain has on that performance. We undertook this evaluation relative to a contemporary benchmark suite of application workloads, the Renaissance Benchmark Suite. Our results show that the choice of GNU GCC compiler version used within the JRE build toolchain statistically significantly affects runtime performance. More importantly, not all OpenJDK releases and JRE JVM interpreters are equal. Our results show that OpenJDK JVM interpreter performance is associated with benchmark workload. In addition, in some cases, rolling back to an earlier OpenJDK version and using a more recent GNU GCC compiler within the build toolchain of the JRE can significantly positively impact JRE performance.
40

Temple, PD, M. Harmon, R. Lewis, MC Burstow, B. Temple, and D. Jones. "Optimisation of grease application to railway tracks." Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit 232, no. 5 (October 4, 2017): 1514–27. http://dx.doi.org/10.1177/0954409717734681.

Abstract:
Trackside lubricators are designed to deliver grease to passing wheel flanges to reduce wheel and rail wear on curves. Ensuring that they are set up to deliver sufficient grease for the range of vehicles passing a site can be a challenge. For example, vehicle dynamics modelling and site investigations have shown that the wheels of passenger vehicles do not run as close to the rail face as those of freight vehicles, meaning that they are less likely to contact the grease and lubricate subsequent curves. To investigate the effects of different trackside devices, and the influence of parameters governing grease pickup, including lateral wheel displacement and pump durations, a bespoke test rig was built at the University of Sheffield. The rig used a scaled wheel, a short section of rail and a modern trackside lubricator set-up. Experiments involving different lateral wheel displacements and pumping durations were carried out, in addition to the visualisation of the size of the grease bulb. This showed how a grease bulb grows. It also indicated that a worn profile is likely to require greater wheel displacement to make contact with grease bulbs when compared to a new wheel profile. The experimental results showed that increasing pickup of grease can be expected when an additional component called a GreaseGuide™ was fitted to a regular grease delivery unit (GDU) on the rail. The efficiency of grease pickup was investigated, and test results exploring increasing pump durations have indicated a relationship between pickup and bulb size. To validate the use of the scaled rig, similar tests were carried out using a full-scale test rig. The full-scale results were compared to the experimental results of the scaled wheel rig. This showed that whilst there were differences between the two test rigs in absolute values and anomalous results, overall trends were the same on both test scales. The effect of temperature on bulb size and pumpability of grease was also investigated. This work can be extended further by using the same method to investigate other parameters that affect the lubrication of curves. This can lead to optimised lubricator set-up to ensure that the track is fully lubricated all the time.
41

Szkutnik-Rogoż, Joanna, Jarosław Ziółkowski, Jerzy Małachowski, and Mateusz Oszczypała. "Mathematical Programming and Solution Approaches for Transportation Optimisation in Supply Network." Energies 14, no. 21 (October 26, 2021): 7010. http://dx.doi.org/10.3390/en14217010.

Abstract:
The problem of transport is a special type of mathematical programming designed to search for the optimal distribution network, taking into account the set of suppliers and the set of recipients. This article proposes an innovative approach to solving the transportation problem and devises source codes in GNU Octave (version 3.4.3) to avoid the necessity of carrying out enormous calculations in traditional methods and to minimize transportation costs, fuel consumption, and CO2 emission. The paper presents a numerical example of a solution to the transportation problem using: the northwest corner, the least cost in the matrix, the row minimum, and Vogel’s Approximation Methods (VAM). The joint use of mathematical programming and optimization was applicable to real conditions. The transport was carried out with medium load trucks. Both suppliers and recipients of materials were located geographically within the territory of the Republic of Poland. The presented model was supported by a numerical example with interpretation and visualization of the obtained results. The implementation of the proposed solution enables the user to develop an optimal transport plan for individually defined criteria.
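
One of the construction methods named above, the northwest-corner rule, is simple enough to sketch directly; the supply, demand, and resulting allocation below are invented for illustration and are not the paper's data.

```python
# Hedged sketch of the northwest-corner rule: build an initial basic feasible
# solution to a balanced transportation problem by allocating greedily from
# the top-left cell. The instance below is invented for illustration.
def northwest_corner(supply, demand):
    supply, demand = supply[:], demand[:]        # work on copies
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])          # ship as much as possible
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1                               # supplier exhausted
        else:
            j += 1                               # destination satisfied
    return allocation

supply = [20, 30, 25]        # units available at each supplier
demand = [10, 25, 15, 25]    # units required by each recipient
for row in northwest_corner(supply, demand):
    print(row)
# The transport cost is then sum(cost[i][j] * allocation[i][j]) for a given
# cost matrix; VAM or the least-cost rule usually give better starting plans.
```
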
42

Mahfouf, M., M. F. Abbod, and D. A. Linkens. "Multi-objective genetic optimisation of GPC and SOFLC tuning parameters using a fuzzy-based ranking method." IEE Proceedings - Control Theory and Applications 147, no. 3 (May 1, 2000): 344–54. http://dx.doi.org/10.1049/ip-cta:20000345.

43

Karpenko, A. P., and V. A. Ovchinnikov. "How to Trick a Neural Network? Synthesising Noise to Reduce the Accuracy of Neural Network Image Classification." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 1 (134) (March 2021): 102–19. http://dx.doi.org/10.18698/0236-3933-2021-1-102-119.

Abstract:
The study aims to develop an algorithm and then software to synthesise noise that could be used to attack deep learning neural networks designed to classify images. We present the results of our analysis of methods for conducting this type of attack. The synthesis of attack noise is stated as a problem of multidimensional constrained optimization. The main features of the attack noise synthesis algorithm proposed are as follows: we employ the clip function to take constraints on noise into account; we use the top-1 and top-5 classification error ratings as attack noise efficiency criteria; we train our neural networks using backpropagation and Adam's gradient descent algorithm; stochastic gradient descent is employed to solve the optimisation problem indicated above; neural network training also makes use of the augmentation technique. The software was developed in Python using the Pytorch framework to dynamically differentiate the calculation graph and runs under Ubuntu 18.04 and CentOS 7. Our IDE was Visual Studio Code. We accelerated the computation via CUDA executed on an NVIDIA Titan XP GPU. The paper presents the results of a broad computational experiment in synthesising non-universal and universal attack noise types for eight deep neural networks. We show that the attack algorithm proposed is able to increase the neural network error by eight times.
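
The clipping-constrained noise synthesis described above can be illustrated with a basic FGSM-style sketch. The tiny untrained network, the random input and the noise bound below are all placeholders, and this is not the authors' algorithm or models; it assumes only the torch package.

```python
# Hedged sketch of gradient-based attack noise with an explicit clip.
# The untrained stand-in classifier and random "image" are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                  # stand-in image classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)        # stand-in input image in [0, 1]
label = torch.tensor([3])               # its (assumed) true class
epsilon = 8.0 / 255.0                   # maximum per-pixel noise

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# One gradient-sign step, then clip the noise and the perturbed image.
noise = (epsilon * image.grad.sign()).clamp(-epsilon, epsilon)
adversarial = (image + noise).clamp(0.0, 1.0).detach()

print(model(adversarial).argmax(dim=1))  # prediction on the perturbed image
```
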
44

Cherry, Rebecca, Warna Karunasena, and Allan Manalo. "Mechanical Properties of Low-Stiffness Out-of-Grade Hybrid Pine—Effects of Knots, Resin and Pith." Forests 13, no. 6 (June 13, 2022): 927. http://dx.doi.org/10.3390/f13060927.

Abstract:
Out-of-grade pine timber is an abundant material resource that is underutilised because its mechanical properties are not well understood. Increasing trends toward shorter rotation times and fast-grown plantation pines around the world such as Pinus elliottii × P. caribaea var. hondurensis hybrid (PEE × PCH) mean low-stiffness corewood is becoming a larger portion of this out-of-grade population. This study characterised the modulus and strength properties in bending, compression parallel to grain (CParG) and compression perpendicular to grain (CPerpG), shear and tension strength of low-stiffness out-of-grade PEE × PCH. The effect of resin, knots and pith on these properties were also investigated. The results show that in clear wood, the MOE in bending, CParG, CPerpG and shear modulus are 6.9 GPa, 5.78 GPa, 0.27 GPa and 0.59 GPa, respectively, while strengths are 45.8 MPa, 29.4 MPa, 6.7 MPa, 5.7 MPa, respectively. The tensile strength is 32.4 MPa. Resin significantly increased density 45% higher than clear, but performed similar with the exception of CPerpG MOE and strength which were significantly different. Resin area ratio (RAR) has a moderate correlation with density with an R2 of 0.659 but low to no correlation for mechanical properties. Knots were significantly different to clear for all test types and within a range of 48% to 196%. Knots were high in CPerpG MOE and strength but lower for all other properties and had the largest negative impact on tensile strength. Knot area ratio (KAR) had low to moderate correlation with tension strength and CPerpG MOE with R2 of 0.48 and 0.35, respectively. Pith was within the range of 76% to 121% of non-pith samples for structural performance, some of which were significantly different, and pith samples were higher in density than non-pith. This new information is crucial for the effective establishment of grading rules, design optimisation and utilisation of low-stiffness out-of-grade PEE × PCH as a new material resource in civil engineering applications.
45

Sârbu, Annamaria, and Dumitru Neagoie. "Wi-Fi Jamming Using Software Defined Radio." International conference KNOWLEDGE-BASED ORGANIZATION 26, no. 3 (June 1, 2020): 162–66. http://dx.doi.org/10.2478/kbo-2020-0132.

Abstract:
In this article we present software defined radio (SDR) instrumentation used for interfering or jamming Wi-Fi networks. A Wi-Fi network analyzer application was used together with a low cost, commercially available SDR, HackRF One, to conduct aimed interference on an 802.11 b/g/n network. A GNU Radio flowchart was used to control the radio transceiver (SDR) by emitting a jamming signal aimed towards the targeted client by means of a directional antenna. Various signal bandwidths and distance from the targeted device were tested to characterize the adequate parameters of an effective jamming signal with respect to the calculated signal to noise ratio (SNR). Jamming efficiency was evaluated by means of a Wi-Fi connectivity speed test application installed on the targeted device, in order to measure connectivity degradation if complete jamming was not possible. Results presented suggest that Wi-Fi jamming is possible by means of SDR technology, providing insights on the methodology used and initial optimisation procedures in the test environment.
46

Addonizio, Maria Luisa, and Luigi Fusco. "Adhesion and Barrier Properties Analysis of Silica-Like Thin Layer on Polyethylene Naphthalate Substrates for Thin Film Solar Cells." Advances in Science and Technology 74 (October 2010): 113–18. http://dx.doi.org/10.4028/www.scientific.net/ast.74.113.

Abstract:
In this investigation the surface properties optimisation of a flexible PEN foil to use as substrate for thin film silicon solar cells is presented. The polymer surface, usually hydrophobic and inactive to chemical reactions, can give poor adhesion for films deposited on it. Furthermore, gas desorption from the polymer sometimes causes serious problems to the quality of the devices. To overcome these problems a thin film of silica-like functional material has been developed on polymer foil. Silica-like films were produced by sol-gel process starting from an organic silanes compound (APTMS) as precursor and the solution was deposited by spin-coating. Amorphous silica-like films were obtained with a hydrophilic surface. They were smooth, dense, homogeneous, transparent and exhibited an excellent adhesion to the polymer substrate due to the chemical bond between amine groups of the APTMS with carbonyl bonds in PEN. Physical properties such as elastic modulus and hardness and the UV irradiation effect on structure and surface hydrophilicity of the silica-like coatings have been analysed. A water contact angle of 34° was obtained after UV irradiation. Nanoindentation analysis showed that the silica-like coatings have a hardness and an elastic modulus of up to 2.0 GPa and 13.2 GPa respectively, much higher than those of pure PEN. Oxygen permeability measured on silica-like coated PEN gave a value of 5.7 × 10⁻⁹ cc·m/(m²·s·atm), showing better barrier properties with respect to pure PEN. Strong adhesion, improved mechanical properties and barrier effect of our silica-like coating make the modified PEN substrate suitable to be used in thin film solar cell technology.
47

Bittredge, Oliver, Hany Hassanin, Mahmoud Ahmed El-Sayed, Hossam Mohamed Eldessouky, Naser A. Alsaleh, Nashmi H. Alrasheedi, Khamis Essa, and Mahmoud Ahmadein. "Fabrication and Optimisation of Ti-6Al-4V Lattice-Structured Total Shoulder Implants Using Laser Additive Manufacturing." Materials 15, no. 9 (April 25, 2022): 3095. http://dx.doi.org/10.3390/ma15093095.

Abstract:
This work aimed to study one of the most important challenges in orthopaedic implantations, known as stress shielding of total shoulder implants. This problem arises from the elastic modulus mismatch between the implant and the surrounding tissue, and can result in bone resorption and implant loosening. This objective was addressed by designing and optimising a cellular-based lattice-structured implant to control the stiffness of a humeral implant stem used in shoulder implant applications. This study used a topology lattice-optimisation tool to create different cellular designs that filled the original design of a shoulder implant, and were further analysed using finite element analysis (FEA). A laser powder bed fusion technique was used to fabricate the Ti-6Al-4V test samples, and the obtained material properties were fed to the FEA model. The optimised cellular design was further fabricated using powder bed fusion, and a compression test was carried out to validate the FEA model. The yield strength, elastic modulus, and surface area/volume ratio of the optimised lattice structure, with a strut diameter of 1 mm, length of 5 mm, and 100% lattice percentage in the design space of the implant model were found to be 200 MPa, 5 GPa, and 3.71 mm−1, respectively. The obtained properties indicated that the proposed cellular structure can be effectively applied in total shoulder-replacement surgeries. Ultimately, this approach should lead to improvements in patient mobility, as well as to reducing the need for revision surgeries due to implant loosening.
49

Guerci, B., M. Hanefeld, S. Gentile, R. Aronson, F. Tinahones, C. Roy-duval, E. Souhami, et al. "CAD-28: Optimisation de l'utilisation de l'insuline Glargine par l'ajout de lixisénatide prandial vs l'insuline glulisine une fois par jour (GLU-1), ou l'insuline glulisine 3 fois par jour (GLU-3) : GetGoal-Duo2." Diabetes & Metabolism 42 (March 2016): A31. http://dx.doi.org/10.1016/s1262-3636(16)30124-0.

50

Hartel, Pieter H., Marc Feeley, Martin Alt, Lennart Augustsson, Peter Baumann, Marcel Beemster, Emmanuel Chailloux, et al. "Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark." Journal of Functional Programming 6, no. 4 (July 1996): 621–55. http://dx.doi.org/10.1017/s0956796800001891.

Abstract:
Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.