Journal articles on the topic "Floating point"

Follow this link to see other types of publications on the topic: Floating point.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Check out the top 50 scholarly journal articles on the topic "Floating point".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if the relevant parameters are available in the metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error". Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA". International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic". Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Abstract:
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
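The abstract above notes that the rounding error of some floating-point operations can itself be computed exactly. A minimal sketch of that idea (the standard TwoSum error-free transformation, not code from the cited survey) returns both the rounded sum and the exact error, assuming IEEE-754 binary arithmetic with round-to-nearest:

    #include <stdio.h>

    /* TwoSum: s = fl(a + b) and e is the exact rounding error, so a + b == s + e. */
    static void two_sum(double a, double b, double *s, double *e)
    {
        *s = a + b;
        double bv = *s - a;               /* the part of b actually stored in s */
        *e = (a - (*s - bv)) + (b - bv);
    }

    int main(void)
    {
        double s, e;
        two_sum(1.0, 1e-16, &s, &e);
        /* s is 1.0 (the 1e-16 is lost to rounding); e recovers the lost 1e-16 */
        printf("s = %.17g, e = %.17g\n", s, e);
        return 0;
    }

Here a + b equals s + e exactly, so the low-order part discarded by the addition is preserved in e and can be fed back into a more accurate algorithm.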
4

Blinn, J. F. "Floating-point tricks". IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

5

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS". ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

6

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Abstract:
In this paper, we present the design and implementation of floating-point addition and floating-point multiplication. Among the many existing multipliers, floating-point multiplication and floating-point addition offer high precision and accuracy for the data representation of the image. The project is designed and simulated in Xilinx ISE 14.7 using Verilog. Simulation results show area reduction and delay reduction compared to the conventional method. Keywords: FIR filter, floating-point addition, floating-point multiplication, carry-lookahead adder
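As context for what such a unit computes, the sketch below walks through IEEE-754 single-precision multiplication in software (an illustration under simplifying assumptions, not the Verilog design from this paper): XOR the signs, add the unbiased exponents, multiply the 24-bit significands, then normalize. Rounding, overflow, underflow, and special values are ignored.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified IEEE-754 single-precision multiply for normal, finite inputs.
       Truncates instead of rounding to nearest, so the result may differ from
       a*b in the last bit. Illustrative only. */
    static float fmul_sketch(float a, float b)
    {
        uint32_t ia, ib;
        memcpy(&ia, &a, 4);
        memcpy(&ib, &b, 4);

        uint32_t sign = (ia ^ ib) & 0x80000000u;             /* XOR of the sign bits */
        int32_t  ea   = (int32_t)((ia >> 23) & 0xFF) - 127;   /* unbiased exponents */
        int32_t  eb   = (int32_t)((ib >> 23) & 0xFF) - 127;
        uint64_t ma   = (ia & 0x007FFFFFu) | 0x00800000u;     /* 24-bit significands */
        uint64_t mb   = (ib & 0x007FFFFFu) | 0x00800000u;

        uint64_t prod = ma * mb;             /* 48-bit product */
        int32_t  e    = ea + eb;
        if (prod & (1ull << 47)) {           /* normalize when the product is in [2, 4) */
            prod >>= 1;
            e += 1;
        }
        uint32_t frac = (uint32_t)((prod >> 23) & 0x007FFFFFu);  /* drop hidden bit, truncate */
        uint32_t bits = sign | ((uint32_t)(e + 127) << 23) | frac;

        float r;
        memcpy(&r, &bits, 4);
        return r;
    }

    int main(void)
    {
        float a = 3.5f, b = -1.25f;
        printf("sketch: %g, hardware: %g\n", fmul_sketch(a, b), a * b);
        return 0;
    }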
7

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

8

Sayers, David, and Jeremy Du Croz. "Validating floating-point systems". Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

9

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication". IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

10

Nannarelli, Alberto. "Tunable Floating-Point Adder". IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.

11

Shirayanagi, Kiyoshi. "Floating point Gröbner bases". Mathematics and Computers in Simulation 42, no. 4-6 (November 1996): 509–28. http://dx.doi.org/10.1016/s0378-4754(96)00027-4.

12

Wichmann, Brian. "Improving floating-point programming". Science of Computer Programming 15, no. 2-3 (December 1990): 255–56. http://dx.doi.org/10.1016/0167-6423(90)90092-r.

13

Weiss, S., and A. Goldstein. "Floating point micropipeline performance". Journal of Systems Architecture 45, no. 1 (January 1998): 15–29. http://dx.doi.org/10.1016/s1383-7621(97)00070-2.

14

Advanced Micro Devices. "IEEE floating-point format". Microprocessors and Microsystems 12, no. 1 (January 1988): 13–23. http://dx.doi.org/10.1016/0141-9331(88)90031-2.

15

Espelid, T. O. "On Floating-Point Summation". SIAM Review 37, no. 4 (December 1995): 603–7. http://dx.doi.org/10.1137/1037130.

16

Umemura, Kyoji. "Floating-point number LISP". Software: Practice and Experience 21, no. 10 (October 1991): 1015–26. http://dx.doi.org/10.1002/spe.4380211003.

17

Singamsetti, Mrudula, Sadulla Shaik, and T. Pitchaiah. "Merged Floating Point Multipliers". International Journal of Engineering and Advanced Technology 9, no. 1s5 (December 30, 2019): 178–82. http://dx.doi.org/10.35940/ijeat.a1042.1291s519.

Abstract:
Floating-point multipliers are extensively used in many scientific and signal processing computations. However, the speed and memory requirements of IEEE-754 floating-point multipliers prevent their implementation in many systems, which has made them an active research topic. This research aims to design a new floating-point multiplier that occupies less area, dissipates less power, and reduces computational time (more speed) compared to conventional architectures. After an extensive literature survey, a new architecture was identified, i.e. a resource-sharing Karatsuba-Ofman algorithm, which occupies less area, consumes less power, and increases speed. The design was implemented in MATLAB using DSP block sets; the simulation tool is Xilinx Vivado.
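The Karatsuba-Ofman scheme mentioned in the abstract trades one multiplication for a few additions: an n-bit product is assembled from three half-width products instead of four. A minimal software sketch of the recurrence on 32-bit operands (illustrative only, not the paper's hardware design):

    #include <stdint.h>
    #include <stdio.h>

    /* Karatsuba for two 32-bit operands split into 16-bit halves:
       a*b = (ah*bh) << 32 + ((ah+al)*(bh+bl) - ah*bh - al*bl) << 16 + al*bl,
       i.e. three half-size multiplications instead of four. */
    static uint64_t karatsuba32(uint32_t a, uint32_t b)
    {
        uint32_t ah = a >> 16, al = a & 0xFFFF;
        uint32_t bh = b >> 16, bl = b & 0xFFFF;

        uint64_t hi  = (uint64_t)ah * bh;
        uint64_t lo  = (uint64_t)al * bl;
        uint64_t mid = (uint64_t)(ah + al) * (bh + bl) - hi - lo;

        return (hi << 32) + (mid << 16) + lo;
    }

    int main(void)
    {
        uint32_t a = 0x00C90FDBu, b = 0x00B504F3u;   /* arbitrary 24-bit significands */
        printf("karatsuba: %llu\n", (unsigned long long)karatsuba32(a, b));
        printf("direct:    %llu\n", (unsigned long long)((uint64_t)a * b));
        return 0;
    }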
18

Hockert, Neil, and Katherine Compton. "Improving Floating-Point Performance in Less Area: Fractured Floating Point Units (FFPUs)". Journal of Signal Processing Systems 67, no. 1 (January 11, 2011): 31–46. http://dx.doi.org/10.1007/s11265-010-0561-y.

19

Albert, Anitha Juliette, and Seshasayanan Ramachandran. "NULL Convention Floating Point Multiplier". Scientific World Journal 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/749569.

Abstract:
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
20

Sravani, Chinta, Prasad Janga, and S. SriBindu. "Floating Point Operations Compatible Streaming Elements for FPGA Accelerators". International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 302–9. http://dx.doi.org/10.31142/ijtsrd15853.

21

Ramya Rani, N. "Implementation of Embedded Floating Point Arithmetic Units on FPGA". Applied Mechanics and Materials 550 (May 2014): 126–36. http://dx.doi.org/10.4028/www.scientific.net/amm.550.126.

Abstract:
Floating point arithmetic plays a major role in scientific and embedded computing applications. But the performance of field programmable gate arrays (FPGAs) used for floating point applications is poor due to the complexity of floating point arithmetic. The implementation of floating point units on FPGAs consumes a large amount of resources, and that leads to the development of embedded floating point units in FPGAs. Embedded applications like multimedia, communication and DSP algorithms use floating point arithmetic in processing graphics, Fourier transformation, coding, etc. In this paper, methodologies are presented for the implementation of embedded floating point units on FPGA. The work focuses on achieving high speed of computation and reducing the power needed for evaluating expressions. An application that demands high performance floating point computation can achieve better speed and density by incorporating embedded floating point units. Additionally, this paper describes a comparative study of the design of single precision and double precision pipelined floating point arithmetic units for evaluating expressions. The modules are designed using VHDL simulation in Xilinx software and implemented on VIRTEX and SPARTAN FPGAs.
22

Aruna Mastani, S., and Riyaz Ahamed Shaik. "Inexact Floating Point Adders Analysis". International Journal of Applied Engineering Research 15, no. 11 (November 30, 2020): 1075–80. http://dx.doi.org/10.37622/ijaer/15.11.2020.1075-1080.

23

Meyer, Quirin, Jochen Süßmuth, Gerd Sußner, Marc Stamminger, and Günther Greiner. "On Floating-Point Normal Vectors". Computer Graphics Forum 29, no. 4 (August 26, 2010): 1405–9. http://dx.doi.org/10.1111/j.1467-8659.2010.01737.x.

24

Ghatte, Najib, Shilpa Patil, and Deepak Bhoir. "Floating Point Engine using VHDL". International Journal of Engineering Trends and Technology 8, no. 4 (February 25, 2014): 198–203. http://dx.doi.org/10.14445/22315381/ijett-v8p236.

25

Lange, Marko, and Siegfried M. Rump. "Faithfully Rounded Floating-point Computations". ACM Transactions on Mathematical Software 46, no. 3 (September 25, 2020): 1–20. http://dx.doi.org/10.1145/3290955.

26

Winter, Dik T. "Floating point attributes in Ada". ACM SIGAda Ada Letters XI, no. 7 (September 2, 1991): 244–73. http://dx.doi.org/10.1145/123533.123577.

27

Toronto, Neil, and Jay McCarthy. "Practically Accurate Floating-Point Math". Computing in Science & Engineering 16, no. 4 (July 2014): 80–95. http://dx.doi.org/10.1109/mcse.2014.90.

28

Kadric, Edin, Paul Gurniak, and Andre DeHon. "Accurate Parallel Floating-Point Accumulation". IEEE Transactions on Computers 65, no. 11 (November 1, 2016): 3224–38. http://dx.doi.org/10.1109/tc.2016.2532874.

29

Lam, Michael O., Jeffrey K. Hollingsworth, and G. W. Stewart. "Dynamic floating-point cancellation detection". Parallel Computing 39, no. 3 (March 2013): 146–55. http://dx.doi.org/10.1016/j.parco.2012.08.002.

30

Scheidt, J. K., and C. W. Schelin. "Distributions of floating point numbers". Computing 38, no. 4 (December 1987): 315–24. http://dx.doi.org/10.1007/bf02278709.

31

Gavrielov, Moshe, and Lev Epstein. "The NS32081 Floating-point Unit". IEEE Micro 6, no. 2 (April 1986): 6–12. http://dx.doi.org/10.1109/mm.1986.304737.

32

Groza, V. Z. "High-resolution floating-point ADC". IEEE Transactions on Instrumentation and Measurement 50, no. 6 (2001): 1822–29. http://dx.doi.org/10.1109/19.982987.

33

Nikmehr, H., B. Phillips, and Cheng-Chew Lim. "Fast Decimal Floating-Point Division". IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14, no. 9 (September 2006): 951–61. http://dx.doi.org/10.1109/tvlsi.2006.884047.

34

Serebrenik, Alexander, and Danny De Schreye. "Termination of Floating-Point Computations". Journal of Automated Reasoning 34, no. 2 (December 2005): 141–77. http://dx.doi.org/10.1007/s10817-005-6546-z.

35

Rivera, Joao, Franz Franchetti, and Markus Püschel. "Floating-Point TVPI Abstract Domain". Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 442–66. http://dx.doi.org/10.1145/3656395.

Abstract:
Floating-point arithmetic is natively supported in hardware and the preferred choice when implementing numerical software in scientific or engineering applications. However, such programs are notoriously hard to analyze due to round-off errors and the frequent use of elementary functions such as log, arctan, or sqrt. In this work, we present the Two Variables per Inequality Floating-Point (TVPI-FP) domain, a numerical and constraint-based abstract domain designed for the analysis of floating-point programs. TVPI-FP supports all features of real-world floating-point programs including conditional branches, loops, and elementary functions and it is efficient asymptotically and in practice. Thus it overcomes limitations of prior tools that often are restricted to straight-line programs or require the use of expensive solvers. The key idea is the consistent use of interval arithmetic in inequalities and an associated redesign of all operators. Our extensive experiments show that TVPI-FP is often orders of magnitudes faster than more expressive tools at competitive, or better precision while also providing broader support for realistic programs with loops and conditionals.
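The "consistent use of interval arithmetic in inequalities" mentioned above relies on directed rounding. A rough illustration of sound interval addition in C (my sketch, not the TVPI-FP implementation; it assumes the platform honors fesetround and that the compiler does not reorder floating-point operations, e.g. GCC with -frounding-math; some systems also require linking with -lm):

    #include <fenv.h>
    #include <stdio.h>

    /* Sound interval addition: round the lower bound down and the upper bound up,
       so [lo, hi] always encloses the exact sum of the stored endpoints. */
    typedef struct { double lo, hi; } interval;

    static interval iadd(interval x, interval y)
    {
        interval r;
        int old = fegetround();
        fesetround(FE_DOWNWARD);
        r.lo = x.lo + y.lo;
        fesetround(FE_UPWARD);
        r.hi = x.hi + y.hi;
        fesetround(old);
        return r;
    }

    int main(void)
    {
        interval a = {0.1, 0.1};   /* 0.1 itself is not exactly representable */
        interval b = {0.2, 0.2};
        interval s = iadd(a, b);
        printf("[%.17g, %.17g]\n", s.lo, s.hi);
        return 0;
    }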
36

Hammadi Jassim, Manal. "Floating Point Optimization Using VHDL". Engineering and Technology Journal 27, no. 16 (December 1, 2009): 3023–49. http://dx.doi.org/10.30684/etj.27.16.11.

37

Luckij, Georgi, and Oleksandr Dolholenko. "Development of floating point operating devices". Technology Audit and Production Reserves 5, no. 2(73) (October 31, 2023): 11–17. http://dx.doi.org/10.15587/2706-5448.2023.290127.

Abstract:
The paper shows a well-known approach to the construction of cores in multi-core microprocessors, which is based on the application of a data flow graph-driven calculation model. The architecture of such kernels is based on the application of the reduced instruction set level data flow model proposed by Yale Patt. The object of research is a model of calculations based on data flow management in a multi-core microprocessor. The results of the floating-point multiplier development that can be dynamically reconfigured to handle five different formats of floating-point operands and an approach to the construction of an operating device for addition-subtraction of a sequence of floating-point numbers are presented, for which the law of associativity is fulfilled without additional programming complications. On the basis of the developed circuit of the floating-point multiplier, it is possible to implement various variants of the high-speed multiplier with both fixed and floating points, which may find commercial application. By adding memory elements to each of the multiplier segments, it is possible to get options for building very fast pipeline multipliers. The multiplier scheme has a limitation: the exponent is not evaluated for denormalized operands, but the standard for floating-point arithmetic does not require that denormalized operands be handled. In such cases, the multiplier packs infinity as the result. The implementation of an inter-core operating device of a floating-point adder-subtractor can be considered as a new approach to the practical solution of dynamic planning tasks when performing addition-subtraction operations within the framework of a multi-core microprocessor. The limitations of its implementation are related to the large amount of hardware costs required for implementation. To assess this complexity, an assessment of the value of the bits of its main blocks for various formats of representing floating-point numbers, in accordance with the floating-point standard, was carried out.
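The value of an adder-subtractor for which "the law of associativity is fulfilled" is that ordinary floating-point addition is not associative. A two-line demonstration (mine, not from the paper):

    #include <stdio.h>

    int main(void)
    {
        double a = 1e16, b = -1e16, c = 1.0;
        /* Rounding makes the grouping matter: */
        printf("(a + b) + c = %.17g\n", (a + b) + c);   /* prints 1 */
        printf("a + (b + c) = %.17g\n", a + (b + c));   /* prints 0 */
        return 0;
    }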
38

Yang, Hongru, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, and Bei Zhou. "Detecting Floating-Point Expression Errors Based Improved PSO Algorithm". IET Software 2023 (October 23, 2023): 1–16. http://dx.doi.org/10.1049/2023/6681267.

Abstract:
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that floating-point programs outputs are proper. However, due to the sparsity of floating-point errors, only a limited number of inputs can cause significant floating-point errors, and determining how to detect these inputs and to selecting the appropriate search technique is critical to detecting significant errors. This paper proposes characteristic particle swarm optimization (CPSO) algorithm based on particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide corresponding input. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve the problem; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide the search space generation and improve the search efficiency. This paper selects 28 expressions from the FPBench standard set as test cases, uses PSOED to detect the maximum error of the expressions, and compares them to the current dynamic error detection tools S3FP and Herbie. PSOED detects the maximum error 100% better than S3FP, 68% better than Herbie, and 14% equivalent to Herbie. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.
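As a toy version of the error-measurement problem such tools search over (this is not the PSOED tool), one can evaluate an expression in single precision and compare it against a double-precision reference; error-detection tools then search the input space for inputs that maximize this discrepancy:

    #include <math.h>
    #include <stdio.h>

    /* The expression sqrt(x+1) - sqrt(x) suffers cancellation for large x.
       Compare a float evaluation against a double reference to expose the error. */
    static float  f_float (float x)  { return sqrtf(x + 1.0f) - sqrtf(x); }
    static double f_double(double x) { return sqrt(x + 1.0) - sqrt(x); }

    int main(void)
    {
        float  x   = 1.0e6f;
        double ref = f_double((double)x);
        double got = (double)f_float(x);
        printf("float : %.9g\n", got);
        printf("double: %.17g\n", ref);
        printf("relative error: %.3g\n", fabs(got - ref) / fabs(ref));
        return 0;
    }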
39

Yang, Fengyuan. "Research and Analysis of Floating-Point Adder Principle". Applied and Computational Engineering 8, no. 1 (August 1, 2023): 113–17. http://dx.doi.org/10.54254/2755-2721/8/20230092.

Abstract:
With the development of the times, computers are used more and more widely, and the research and development of adder, as the most basic operation unit, determine the development of the computer field. This paper analyzes the principle of one-bit adder and floating-point adder by literature analysis. One-bit adder is the most basic type of traditional adder, besides bit-by-bit adder, overrun adder and so on. The purpose of this paper is to understand the basic principle of adder, among them, IEEE-754 binary floating point operation is very important. So that the traditional fixed-point adder is the basis of the floating-point adder, which can have a new direction in the future development of floating-point adder optimization. This paper finds that the floating-point adder is one of the most widely used components in signal processing systems today, and therefore, the improvement of the floating-point adder is necessary.
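The abstract stresses that a floating-point adder builds on a fixed-point adder: operands are aligned to a common exponent, their significands are added with integer arithmetic, and the result is renormalized. A rough software sketch of those steps for doubles (illustrative only, not the adder analyzed in the paper; rounding and special values are not handled carefully):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Add two finite doubles the way a floating-point adder does:
       1) align the smaller operand to the larger exponent,
       2) add the significands with a fixed-point (integer) adder,
       3) renormalize. */
    static double fadd_sketch(double a, double b)
    {
        int ea, eb;
        double ma = frexp(a, &ea);   /* a = ma * 2^ea, |ma| in [0.5, 1) */
        double mb = frexp(b, &eb);

        if (ea < eb) {               /* make a the larger-exponent operand */
            double tm = ma; ma = mb; mb = tm;
            int    te = ea; ea = eb; eb = te;
        }

        /* 54-bit fixed-point significands relative to exponent ea */
        int64_t ia = (int64_t)ldexp(ma, 54);
        int64_t ib = (int64_t)ldexp(mb, 54 - (ea - eb));  /* alignment shift */

        int64_t sum = ia + ib;                            /* fixed-point add */
        return ldexp((double)sum, ea - 54);               /* renormalize */
    }

    int main(void)
    {
        double a = 6.5, b = 0.375;
        printf("sketch: %.17g, hardware: %.17g\n", fadd_sketch(a, b), a + b);
        return 0;
    }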
40

Burud, Anand S., and Pradip C. Bhaskar. "Processor Design Using 32 Bit Single Precision Floating Point Unit". International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 198–202. http://dx.doi.org/10.31142/ijtsrd12912.

41

Kurniawan, Wakhid, Hafizd Ardiansyah, Annisa Dwi Oktavianita, and Fitree Tahe. "Integer Representation of Floating-Point Manipulation with Float Twice". IJID (International Journal on Informatics for Development) 9, no. 1 (September 9, 2020): 15. http://dx.doi.org/10.14421/ijid.2020.09103.

Abstract:
In the programming world, understanding floating point is not easy, especially when floating-point and bit-level operations interact. Although there are currently many libraries that simplify the computation process, many programmers today still do not really understand how floating-point manipulation works. Therefore, this paper aims to provide insight into how to manipulate IEEE-754 32-bit floating point with a different representation of the results, namely integers and the code rules of float twice. The method used is a literature review, adopting a float-twice prototype in C. The result of this study is an application that represents floating-point manipulation as integers by adopting a float-twice prototype. The application makes it easy for programmers to determine the type of program data to be developed, especially for programs running on 32-bit floating point (single precision).
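A small illustration of the bit-level manipulation discussed above, assuming "float twice" denotes doubling a value through its integer representation (my reading, not code from the paper): reinterpreting the IEEE-754 32-bit pattern as an integer and adding 1 to the exponent field multiplies a normal float by two.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Double a normal, finite float by integer manipulation of its bit pattern:
       adding 1 << 23 increments the 8-bit exponent field, which multiplies the
       value by 2. Not valid for zero, subnormals, infinities, or NaN. */
    static float float_twice(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* reinterpret the float as an integer */
        bits += 1u << 23;                 /* the exponent field starts at bit 23 */
        memcpy(&f, &bits, sizeof bits);
        return f;
    }

    int main(void)
    {
        printf("%g -> %g\n", 3.14f, float_twice(3.14f));
        printf("%g -> %g\n", -0.75f, float_twice(-0.75f));
        return 0;
    }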
42

Blanton, Marina, Michael T. Goodrich, and Chen Yuan. "Secure and Accurate Summation of Many Floating-Point Numbers". Proceedings on Privacy Enhancing Technologies 2023, no. 3 (July 2023): 432–45. http://dx.doi.org/10.56553/popets-2023-0090.

Abstract:
Motivated by the importance of floating-point computations, we study the problem of securely and accurately summing many floating-point numbers. Prior work has focused on security absent accuracy or accuracy absent security, whereas our approach achieves both of them. Specifically, we show how to implement floating-point superaccumulators using secure multi-party computation techniques, so that a number of participants holding secret shares of floating-point numbers can accurately compute their sum while keeping the individual values private.
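The paper above runs superaccumulators inside secure multi-party computation. As plain, non-secure context for why accurate summation matters, the sketch below uses compensated (Kahan) summation, a standard technique and not the paper's method, to recover digits that naive summation throws away:

    #include <stdio.h>

    /* Kahan compensated summation: carry the rounding error of each addition
       in a correction term c instead of discarding it. */
    static double kahan_sum(const double *x, int n)
    {
        double s = 0.0, c = 0.0;
        for (int i = 0; i < n; i++) {
            double y = x[i] - c;
            double t = s + y;
            c = (t - s) - y;     /* recovers (minus) the rounding error of s + y */
            s = t;
        }
        return s;
    }

    int main(void)
    {
        /* 1e16 followed by ten ones: naive summation loses every 1. */
        double x[11] = {1e16, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
        double naive = 0.0;
        for (int i = 0; i < 11; i++) naive += x[i];
        printf("naive: %.17g\n", naive);
        printf("kahan: %.17g\n", kahan_sum(x, 11));
        return 0;
    }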
43

Zou, Daming, Yuchen Gu, Yuanfeng Shi, MingZhe Wang, Yingfei Xiong, and Zhendong Su. "Oracle-free repair synthesis for floating-point programs". Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 957–85. http://dx.doi.org/10.1145/3563322.

Abstract:
The floating-point representation provides widely-used data types (such as “float” and “double”) for modern numerical software. Numerical errors are inherent due to floating-point’s approximate nature, and pose an important, well-known challenge. It is nontrivial to fix/repair numerical code to reduce numerical errors — it requires either numerical expertise (for manual fixing) or high-precision oracles (for automatic repair); both are difficult requirements. To tackle this challenge, this paper introduces a principled dynamic approach that is fully automated and oracle-free for effectively repairing floating-point errors. The key of our approach is the novel notion of micro-structure that characterizes structural patterns of floating-point errors. We leverage micro-structures’ statistical information on floating-point errors to effectively guide repair synthesis and validation. Compared with existing state-of-the-art repair approaches, our work is fully automatic and has the distinctive benefit of not relying on the difficult to obtain high-precision oracles. Evaluation results on 36 commonly-used numerical programs show that our approach is highly efficient and effective: (1) it is able to synthesize repairs instantaneously, and (2) versus the original programs, the repaired programs have orders of magnitude smaller floating-point errors, while having faster runtime performance.
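As background for what a numerical "repair" looks like (a textbook rewriting, not one synthesized by the paper's approach): the expression 1 - cos(x) cancels catastrophically for small x, while the algebraically equivalent form 2*sin(x/2)^2 does not.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0e-8;

        double original = 1.0 - cos(x);                        /* catastrophic cancellation */
        double repaired = 2.0 * sin(x / 2.0) * sin(x / 2.0);   /* algebraically equal */

        /* The true value is about x*x/2 = 5e-17 for this x. */
        printf("original: %.17g\n", original);
        printf("repaired: %.17g\n", repaired);
        return 0;
    }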
44

Lee, Wonyeol, Rahul Sharma, and Alex Aiken. "Verifying bit-manipulations of floating-point". ACM SIGPLAN Notices 51, no. 6 (August 2016): 70–84. http://dx.doi.org/10.1145/2980983.2908107.

45

Vogt, Christopher J. "Floating point performance of Common Lisp". ACM SIGPLAN Notices 33, no. 9 (September 1998): 103–7. http://dx.doi.org/10.1145/290229.290244.

46

Yamanaka, Naoya, and Shin'ichi Oishi. "Fast quadruple-double floating point format". Nonlinear Theory and Its Applications, IEICE 5, no. 1 (2014): 15–34. http://dx.doi.org/10.1587/nolta.5.15.

47

Lindstrom, Peter. "Fixed-Rate Compressed Floating-Point Arrays". IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (December 31, 2014): 2674–83. http://dx.doi.org/10.1109/tvcg.2014.2346458.

48

Ris, Fred, Ed Barkmeyer, Craig Schaffert, and Peter Farkas. "When floating-point addition isn't commutative". ACM SIGNUM Newsletter 28, no. 1 (January 1993): 8–13. http://dx.doi.org/10.1145/156301.156303.

49

Laakso, T. I., and L. B. Jackson. "Bounds for floating-point roundoff noise". IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 41, no. 6 (June 1994): 424–26. http://dx.doi.org/10.1109/82.300204.

50

Rao, B. D. "Floating point arithmetic and digital filters". IEEE Transactions on Signal Processing 40, no. 1 (1992): 85–95. http://dx.doi.org/10.1109/78.157184.
