Journal articles on the topic 'Floating point'

To see the other types of publications on this topic, follow the link: Floating point.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Floating point.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error." Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA." International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic." Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Full text
Abstract:
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
APA, Harvard, Vancouver, ISO, and other styles
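The abstract above notes that the rounding error of a floating-point operation can itself be computed exactly. A minimal sketch of that idea, using Knuth's TwoSum error-free transformation (this example is ours, not taken from the survey):

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) such that s = fl(a + b) and
    a + b == s + e exactly, for any two binary floats under
    round-to-nearest (no ordering of |a|, |b| required)."""
    s = a + b
    bv = s - a            # the part of s contributed by b
    av = s - bv           # the part of s contributed by a
    e = (a - av) + (b - bv)
    return s, e

s, e = two_sum(0.1, 0.2)
# s is the rounded sum; e recovers the rounding error exactly
```

Carrying the pair (s, e) forward through longer computations is the basis of the extended-precision building blocks the survey describes.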
4

Blinn, J. F. "Floating-point tricks." IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS." ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Full text
Abstract:
In this paper, we present the design and implementation of floating-point addition and floating-point multiplication. There are many multipliers in existence, among which floating-point multiplication and floating-point addition offer high precision and more accuracy for the data representation of the image. This project is designed and simulated in Xilinx ISE 14.7 software using Verilog. Simulation results show area reduction and delay reduction compared to the conventional method. Keywords: FIR Filter, Floating Point Addition, Floating Point Multiplication, Carry Look-Ahead Adder
APA, Harvard, Vancouver, ISO, and other styles
7

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sayers, David, and Jeremy Du Croz. "Validating floating-point systems." Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication." IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nannarelli, Alberto. "Tunable Floating-Point Adder." IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Shirayanagi, Kiyoshi. "Floating point Gröbner bases." Mathematics and Computers in Simulation 42, no. 4-6 (November 1996): 509–28. http://dx.doi.org/10.1016/s0378-4754(96)00027-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Wichmann, Brian. "Improving floating-point programming." Science of Computer Programming 15, no. 2-3 (December 1990): 255–56. http://dx.doi.org/10.1016/0167-6423(90)90092-r.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Weiss, S., and A. Goldstein. "Floating point micropipeline performance." Journal of Systems Architecture 45, no. 1 (January 1998): 15–29. http://dx.doi.org/10.1016/s1383-7621(97)00070-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Advanced Micro Devices. "IEEE floating-point format." Microprocessors and Microsystems 12, no. 1 (January 1988): 13–23. http://dx.doi.org/10.1016/0141-9331(88)90031-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Espelid, T. O. "On Floating-Point Summation." SIAM Review 37, no. 4 (December 1995): 603–7. http://dx.doi.org/10.1137/1037130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Umemura, Kyoji. "Floating-point number LISP." Software: Practice and Experience 21, no. 10 (October 1991): 1015–26. http://dx.doi.org/10.1002/spe.4380211003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Singamsetti, Mrudula, Sadulla Shaik, and T. Pitchaiah. "Merged Floating Point Multipliers." International Journal of Engineering and Advanced Technology 9, no. 1s5 (December 30, 2019): 178–82. http://dx.doi.org/10.35940/ijeat.a1042.1291s519.

Full text
Abstract:
Floating-point multipliers are extensively used in many scientific and signal-processing computations, but the speed and memory requirements of IEEE-754 floating-point multipliers prevent their implementation in many systems. Floating-point multipliers have therefore become an active research topic. This research aims to design a new floating-point multiplier that occupies less area, dissipates less power, and reduces computational time (i.e., is faster) compared to conventional architectures. After an extensive literature survey, a new architecture was identified: a resource-sharing Karatsuba-Ofman algorithm, which occupies less area and power while increasing speed. The design was implemented in MATLAB using DSP block sets, with Xilinx Vivado as the simulation tool.
APA, Harvard, Vancouver, ISO, and other styles
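The Karatsuba-Ofman algorithm that the abstract above refers to replaces four half-width partial products with three. The paper applies it to mantissa multiplier hardware; the recurrence itself can be sketched in software (our illustration of the textbook algorithm, not the paper's design):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with the Karatsuba-Ofman recurrence:
    three half-size multiplications instead of four."""
    if x < 16 or y < 16:
        return x * y                              # base case: native multiply
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)           # split into high/low halves
    yh, yl = y >> n, y & ((1 << n) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll   # both cross terms at once
    return (hh << (2 * n)) + (mid << n) + ll
```

In hardware the saving is one multiplier block per level; resource sharing, as in the paper, goes further by reusing a single multiplier across the three sub-products.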
18

Hockert, Neil, and Katherine Compton. "Improving Floating-Point Performance in Less Area: Fractured Floating Point Units (FFPUs)." Journal of Signal Processing Systems 67, no. 1 (January 11, 2011): 31–46. http://dx.doi.org/10.1007/s11265-010-0561-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Albert, Anitha Juliette, and Seshasayanan Ramachandran. "NULL Convention Floating Point Multiplier." Scientific World Journal 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/749569.

Full text
Abstract:
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
APA, Harvard, Vancouver, ISO, and other styles
20

Sravani, Chinta, Prasad Janga, and S. SriBindu. "Floating Point Operations Compatible Streaming Elements for FPGA Accelerators." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 302–9. http://dx.doi.org/10.31142/ijtsrd15853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ramya Rani, N. "Implementation of Embedded Floating Point Arithmetic Units on FPGA." Applied Mechanics and Materials 550 (May 2014): 126–36. http://dx.doi.org/10.4028/www.scientific.net/amm.550.126.

Full text
Abstract:
Floating point arithmetic plays a major role in scientific and embedded computing applications. But the performance of field programmable gate arrays (FPGAs) used for floating point applications is poor due to the complexity of floating point arithmetic. The implementation of floating point units on FPGAs consumes a large amount of resources, and that has led to the development of embedded floating point units in FPGAs. Embedded applications like multimedia, communication and DSP algorithms use floating point arithmetic in processing graphics, Fourier transformation, coding, etc. In this paper, methodologies are presented for the implementation of embedded floating point units on FPGA. The work focuses on achieving high computation speed and reducing the power needed for evaluating expressions. An application that demands high performance floating point computation can achieve better speed and density by incorporating embedded floating point units. Additionally, this paper describes a comparative study of the design of single precision and double precision pipelined floating point arithmetic units for evaluating expressions. The modules are designed using VHDL simulation in Xilinx software and implemented on VIRTEX and SPARTAN FPGAs.
APA, Harvard, Vancouver, ISO, and other styles
22

Aruna Mastani, S., and Riyaz Ahamed Shaik. "Inexact Floating Point Adders Analysis." International Journal of Applied Engineering Research 15, no. 11 (November 30, 2020): 1075–80. http://dx.doi.org/10.37622/ijaer/15.11.2020.1075-1080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Meyer, Quirin, Jochen Süßmuth, Gerd Sußner, Marc Stamminger, and Günther Greiner. "On Floating-Point Normal Vectors." Computer Graphics Forum 29, no. 4 (August 26, 2010): 1405–9. http://dx.doi.org/10.1111/j.1467-8659.2010.01737.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ghatte, Najib, Shilpa Patil, and Deepak Bhoir. "Floating Point Engine using VHDL." International Journal of Engineering Trends and Technology 8, no. 4 (February 25, 2014): 198–203. http://dx.doi.org/10.14445/22315381/ijett-v8p236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lange, Marko, and Siegfried M. Rump. "Faithfully Rounded Floating-point Computations." ACM Transactions on Mathematical Software 46, no. 3 (September 25, 2020): 1–20. http://dx.doi.org/10.1145/3290955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Winter, Dik T. "Floating point attributes in Ada." ACM SIGAda Ada Letters XI, no. 7 (September 2, 1991): 244–73. http://dx.doi.org/10.1145/123533.123577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Toronto, Neil, and Jay McCarthy. "Practically Accurate Floating-Point Math." Computing in Science & Engineering 16, no. 4 (July 2014): 80–95. http://dx.doi.org/10.1109/mcse.2014.90.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kadric, Edin, Paul Gurniak, and Andre DeHon. "Accurate Parallel Floating-Point Accumulation." IEEE Transactions on Computers 65, no. 11 (November 1, 2016): 3224–38. http://dx.doi.org/10.1109/tc.2016.2532874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lam, Michael O., Jeffrey K. Hollingsworth, and G. W. Stewart. "Dynamic floating-point cancellation detection." Parallel Computing 39, no. 3 (March 2013): 146–55. http://dx.doi.org/10.1016/j.parco.2012.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Scheidt, J. K., and C. W. Schelin. "Distributions of floating point numbers." Computing 38, no. 4 (December 1987): 315–24. http://dx.doi.org/10.1007/bf02278709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Gavrielov, Moshe, and Lev Epstein. "The NS32081 Floating-point Unit." IEEE Micro 6, no. 2 (April 1986): 6–12. http://dx.doi.org/10.1109/mm.1986.304737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Groza, V. Z. "High-resolution floating-point ADC." IEEE Transactions on Instrumentation and Measurement 50, no. 6 (2001): 1822–29. http://dx.doi.org/10.1109/19.982987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Nikmehr, H., B. Phillips, and Cheng-Chew Lim. "Fast Decimal Floating-Point Division." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14, no. 9 (September 2006): 951–61. http://dx.doi.org/10.1109/tvlsi.2006.884047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Serebrenik, Alexander, and Danny De Schreye. "Termination of Floating-Point Computations." Journal of Automated Reasoning 34, no. 2 (December 2005): 141–77. http://dx.doi.org/10.1007/s10817-005-6546-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Rivera, Joao, Franz Franchetti, and Markus Püschel. "Floating-Point TVPI Abstract Domain." Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 442–66. http://dx.doi.org/10.1145/3656395.

Full text
Abstract:
Floating-point arithmetic is natively supported in hardware and the preferred choice when implementing numerical software in scientific or engineering applications. However, such programs are notoriously hard to analyze due to round-off errors and the frequent use of elementary functions such as log, arctan, or sqrt. In this work, we present the Two Variables per Inequality Floating-Point (TVPI-FP) domain, a numerical and constraint-based abstract domain designed for the analysis of floating-point programs. TVPI-FP supports all features of real-world floating-point programs including conditional branches, loops, and elementary functions, and it is efficient asymptotically and in practice. Thus it overcomes limitations of prior tools that often are restricted to straight-line programs or require the use of expensive solvers. The key idea is the consistent use of interval arithmetic in inequalities and an associated redesign of all operators. Our extensive experiments show that TVPI-FP is often orders of magnitude faster than more expressive tools at competitive or better precision, while also providing broader support for realistic programs with loops and conditionals.
APA, Harvard, Vancouver, ISO, and other styles
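The "consistent use of interval arithmetic" that the abstract above describes can be illustrated with a tiny sound interval addition: each bound is computed in floating point and then widened outward by one ulp, so the enclosure survives round-off. This is our sketch of the general idea, not the paper's implementation:

```python
import math

def interval_add(a, b):
    """Add intervals [a0, a1] + [b0, b1] soundly: compute each bound in
    floating point, then round outward by one step (nextafter) to cover
    the at-most-half-ulp rounding error. Overflow edge cases ignored."""
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return lo, hi

lo, hi = interval_add((0.1, 0.1), (0.2, 0.2))
# the true real-number sum is guaranteed to lie inside [lo, hi]
```

Because round-to-nearest errs by at most half an ulp, moving each bound one representable value outward is enough to guarantee enclosure.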
36

Hammadi Jassim, Manal. "Floating Point Optimization Using VHDL." Engineering and Technology Journal 27, no. 16 (December 1, 2009): 3023–49. http://dx.doi.org/10.30684/etj.27.16.11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Luckij, Georgi, and Oleksandr Dolholenko. "Development of floating point operating devices." Technology Audit and Production Reserves 5, no. 2(73) (October 31, 2023): 11–17. http://dx.doi.org/10.15587/2706-5448.2023.290127.

Full text
Abstract:
The paper shows a well-known approach to the construction of cores in multi-core microprocessors, which is based on the application of a data flow graph-driven calculation model. The architecture of such kernels is based on the application of the reduced instruction set level data flow model proposed by Yale Patt. The object of research is a model of calculations based on data flow management in a multi-core microprocessor. The results of the floating-point multiplier development that can be dynamically reconfigured to handle five different formats of floating-point operands and an approach to the construction of an operating device for addition-subtraction of a sequence of floating-point numbers are presented, for which the law of associativity is fulfilled without additional programming complications. On the basis of the developed circuit of the floating-point multiplier, it is possible to implement various variants of the high-speed multiplier with both fixed and floating points, which may find commercial application. By adding memory elements to each of the multiplier segments, it is possible to get options for building very fast pipeline multipliers. The multiplier scheme has a limitation: the exponent is not evaluated for denormalized operands, but the standard for floating-point arithmetic does not require that denormalized operands be handled. In such cases, the multiplier packs infinity as the result. The implementation of an inter-core operating device of a floating-point adder-subtractor can be considered as a new approach to the practical solution of dynamic planning tasks when performing addition-subtraction operations within the framework of a multi-core microprocessor. The limitations of its implementation are related to the large amount of hardware costs required for implementation. 
To assess this complexity, the bit widths of its main blocks were evaluated for the various formats of representing floating-point numbers, in accordance with the floating-point standard.
APA, Harvard, Vancouver, ISO, and other styles
38

Yang, Hongru, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, and Bei Zhou. "Detecting Floating-Point Expression Errors Based Improved PSO Algorithm." IET Software 2023 (October 23, 2023): 1–16. http://dx.doi.org/10.1049/2023/6681267.

Full text
Abstract:
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that floating-point program outputs are correct. However, due to the sparsity of floating-point errors, only a limited number of inputs can cause significant floating-point errors, and determining how to detect these inputs and selecting the appropriate search technique are critical to detecting significant errors. This paper proposes a characteristic particle swarm optimization (CPSO) algorithm based on the particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide the corresponding inputs. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve it; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide search-space generation and improve search efficiency. This paper selects 28 expressions from the FPBench standard set as test cases, uses PSOED to detect the maximum error of the expressions, and compares it to the current dynamic error detection tools S3FP and Herbie. PSOED detects the maximum error better than S3FP in 100% of cases, better than Herbie in 68% of cases, and equivalently to Herbie in 14%. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.
APA, Harvard, Vancouver, ISO, and other styles
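Treating error detection as a search over inputs, as the abstract above describes, presupposes a way to score each candidate input. A common scoring oracle compares the floating-point evaluation of an expression against exact rational arithmetic (our sketch of the scoring idea only; PSOED's actual search strategy is not reproduced here, and `expr` is a hypothetical expression under test):

```python
from fractions import Fraction

def expr(x):
    """Hypothetical expression under test: suffers cancellation for tiny x."""
    return (1.0 + x) - 1.0

def exact(x):
    """Exact rational reference for the same expression."""
    return (Fraction(1) + Fraction(x)) - Fraction(1)

def rel_error(x):
    """Relative error of the float evaluation at input x."""
    ref = exact(x)
    if ref == 0:
        return Fraction(0)
    return abs(Fraction(expr(x)) - ref) / abs(ref)

# a search procedure would maximize rel_error over the input domain;
# tiny inputs expose total cancellation in this expression
```

A heuristic search such as PSO then tries to drive `rel_error` toward its maximum over the input domain.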
39

Yang, Fengyuan. "Research and Analysis of Floating-Point Adder Principle." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 113–17. http://dx.doi.org/10.54254/2755-2721/8/20230092.

Full text
Abstract:
With the development of the times, computers are used more and more widely, and the research and development of the adder, as the most basic operation unit, determines the development of the computer field. This paper analyzes the principles of the one-bit adder and the floating-point adder through literature analysis. The one-bit adder is the most basic building block of traditional adders; beyond it there are the ripple-carry adder, the carry-lookahead adder, and so on. The purpose of this paper is to understand the basic principle of the adder; among these topics, IEEE-754 binary floating-point operation is very important. The traditional fixed-point adder is the basis of the floating-point adder, which can give a new direction to the future development and optimization of floating-point adders. This paper finds that the floating-point adder is one of the most widely used components in signal processing systems today, and therefore, improvement of the floating-point adder is necessary.
APA, Harvard, Vancouver, ISO, and other styles
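The one-bit adder that the abstract above starts from, and the ripple-carry chain built out of it, can be sketched directly (an illustration of the textbook principle, not this paper's contribution):

```python
def full_adder(a, b, cin):
    """One-bit full adder: the sum bit is the XOR of the three inputs,
    the carry-out is their majority function."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x, y, width=8):
    """Chain `width` one-bit adders; each stage's carry feeds the next.
    The final carry-out is discarded, so the result wraps modulo 2**width."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

A floating-point adder wraps a fixed-point adder like this one with exponent alignment, normalization, and rounding stages, which is why the abstract calls the fixed-point adder its basis.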
40

Burud, Anand S., and Pradip C. Bhaskar. "Processor Design Using 32 Bit Single Precision Floating Point Unit." International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 198–202. http://dx.doi.org/10.31142/ijtsrd12912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kurniawan, Wakhid, Hafizd Ardiansyah, Annisa Dwi Oktavianita, and Fitree Tahe. "Integer Representation of Floating-Point Manipulation with Float Twice." IJID (International Journal on Informatics for Development) 9, no. 1 (September 9, 2020): 15. http://dx.doi.org/10.14421/ijid.2020.09103.

Full text
Abstract:
In the programming world, understanding floating point is not easy, especially when floating-point and bit-level interactions are involved. Although there are currently many libraries to simplify the computation process, many programmers today still do not really understand how the floating-point manipulation process works. Therefore, this paper aims to provide insight into how to manipulate IEEE-754 32-bit floating point with a different representation of the results, namely integers, and the code rules of float twice. The method used is a literature review, adopting a float-twice prototype in the C programming language. The result of this study is an application that can be used to represent integers from floating-point manipulation by adopting a float-twice prototype. The application makes it easy for programmers to determine the type of program data to be developed, especially for programs running on 32-bit floating point (single precision).
APA, Harvard, Vancouver, ISO, and other styles
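The "float twice" manipulation that the abstract above adopts doubles a 32-bit float purely through its integer representation: for normal numbers, adding 1 to the 8-bit exponent field multiplies the value by 2. A sketch of this standard exercise (our reconstruction, not the paper's code, using `struct` to reinterpret the bits):

```python
import struct

def float_to_bits(f):
    """Reinterpret a 32-bit float's bytes as an unsigned 32-bit integer."""
    return struct.unpack('>I', struct.pack('>f', f))[0]

def bits_to_float(u):
    """Reinterpret an unsigned 32-bit integer as a 32-bit float."""
    return struct.unpack('>f', struct.pack('>I', u))[0]

def float_twice(f):
    """Return 2*f by manipulating the IEEE-754 single-precision bits."""
    u = float_to_bits(f)
    exp = (u >> 23) & 0xFF
    if exp == 0xFF:                      # infinity or NaN: return unchanged
        return f
    if exp == 0:                         # denormal: shift the fraction left
        return bits_to_float((u & 0x80000000) | ((u & 0x007FFFFF) << 1))
    return bits_to_float(u + (1 << 23))  # normal: bump the exponent field
```

Note that bumping an exponent field of 0xFE yields 0xFF, i.e. infinity, which matches floating-point overflow behavior; the denormal left shift likewise carries naturally into the exponent field when the fraction's top bit is set.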
42

Blanton, Marina, Michael T. Goodrich, and Chen Yuan. "Secure and Accurate Summation of Many Floating-Point Numbers." Proceedings on Privacy Enhancing Technologies 2023, no. 3 (July 2023): 432–45. http://dx.doi.org/10.56553/popets-2023-0090.

Full text
Abstract:
Motivated by the importance of floating-point computations, we study the problem of securely and accurately summing many floating-point numbers. Prior work has focused on security absent accuracy or accuracy absent security, whereas our approach achieves both of them. Specifically, we show how to implement floating-point superaccumulators using secure multi-party computation techniques, so that a number of participants holding secret shares of floating-point numbers can accurately compute their sum while keeping the individual values private.
APA, Harvard, Vancouver, ISO, and other styles
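The accuracy half of the problem in the abstract above, taken without the secure-computation half, is classically addressed by compensated summation. Neumaier's variant, shown here as background (the paper's secret-shared superaccumulator is a different, exact construction), carries the rounding error alongside the running sum:

```python
def neumaier_sum(values):
    """Compensated summation: accumulate the running rounding error in
    `comp` and fold it back into the sum at the end."""
    s, comp = 0.0, 0.0
    for v in values:
        t = s + v
        if abs(s) >= abs(v):
            comp += (s - t) + v   # low-order bits of v were lost in t
        else:
            comp += (v - t) + s   # low-order bits of s were lost in t
        s = t
    return s + comp
```

For example, naive left-to-right addition of [1e16, 1.0, -1e16] returns 0.0, while the compensated version recovers 1.0.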
43

Zou, Daming, Yuchen Gu, Yuanfeng Shi, MingZhe Wang, Yingfei Xiong, and Zhendong Su. "Oracle-free repair synthesis for floating-point programs." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 957–85. http://dx.doi.org/10.1145/3563322.

Full text
Abstract:
The floating-point representation provides widely used data types (such as “float” and “double”) for modern numerical software. Numerical errors are inherent due to floating point's approximate nature, and they pose an important, well-known challenge. It is nontrivial to fix or repair numerical code to reduce numerical errors: doing so requires either numerical expertise (for manual fixing) or high-precision oracles (for automatic repair), both of which are difficult requirements. To tackle this challenge, this paper introduces a principled dynamic approach that is fully automated and oracle-free for effectively repairing floating-point errors. The key to our approach is the novel notion of micro-structure, which characterizes structural patterns of floating-point errors. We leverage micro-structures' statistical information on floating-point errors to effectively guide repair synthesis and validation. Compared with existing state-of-the-art repair approaches, our work is fully automatic and has the distinctive benefit of not relying on difficult-to-obtain high-precision oracles. Evaluation results on 36 commonly used numerical programs show that our approach is highly efficient and effective: (1) it is able to synthesize repairs instantaneously, and (2) compared to the original programs, the repaired programs have orders-of-magnitude smaller floating-point errors, while having faster runtime performance.
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Wonyeol, Rahul Sharma, and Alex Aiken. "Verifying bit-manipulations of floating-point." ACM SIGPLAN Notices 51, no. 6 (August 2016): 70–84. http://dx.doi.org/10.1145/2980983.2908107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Vogt, Christopher J. "Floating point performance of Common Lisp." ACM SIGPLAN Notices 33, no. 9 (September 1998): 103–7. http://dx.doi.org/10.1145/290229.290244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yamanaka, Naoya, and Shin'ichi Oishi. "Fast quadruple-double floating point format." Nonlinear Theory and Its Applications, IEICE 5, no. 1 (2014): 15–34. http://dx.doi.org/10.1587/nolta.5.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lindstrom, Peter. "Fixed-Rate Compressed Floating-Point Arrays." IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (December 31, 2014): 2674–83. http://dx.doi.org/10.1109/tvcg.2014.2346458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Ris, Fred, Ed Barkmeyer, Craig Schaffert, and Peter Farkas. "When floating-point addition isn't commutative." ACM SIGNUM Newsletter 28, no. 1 (January 1993): 8–13. http://dx.doi.org/10.1145/156301.156303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Laakso, T. I., and L. B. Jackson. "Bounds for floating-point roundoff noise." IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 41, no. 6 (June 1994): 424–26. http://dx.doi.org/10.1109/82.300204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Rao, B. D. "Floating point arithmetic and digital filters." IEEE Transactions on Signal Processing 40, no. 1 (1992): 85–95. http://dx.doi.org/10.1109/78.157184.

Full text
APA, Harvard, Vancouver, ISO, and other styles