Journal articles on the topic "Floating point"

Follow this link to see other types of publications on the topic: Floating point.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Floating point".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error". Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA". International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic". Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Abstract
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
4

Blinn, J. F. "Floating-point tricks". IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

5

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS". ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

6

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Abstract
In this paper, we present the design and implementation of floating-point addition and floating-point multiplication. Among the many multipliers in existence, floating-point multiplication and floating-point addition offer high precision and accuracy for the data representation of the image. This project is designed and simulated on Xilinx ISE 14.7 software using Verilog. Simulation results show area reduction and delay reduction as compared to the conventional method. Keywords: FIR Filter, Floating-point Addition, Floating-point Multiplication, Carry Look-Ahead Adder
7

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

8

Sayers, David, and Jeremy Du Croz. "Validating floating-point systems". Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

9

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication". IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

10

Nannarelli, Alberto. "Tunable Floating-Point Adder". IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.

11

Shirayanagi, Kiyoshi. "Floating point Gröbner bases". Mathematics and Computers in Simulation 42, no. 4-6 (November 1996): 509–28. http://dx.doi.org/10.1016/s0378-4754(96)00027-4.

12

Wichmann, Brian. "Improving floating-point programming". Science of Computer Programming 15, no. 2-3 (December 1990): 255–56. http://dx.doi.org/10.1016/0167-6423(90)90092-r.

13

Weiss, S., and A. Goldstein. "Floating point micropipeline performance". Journal of Systems Architecture 45, no. 1 (January 1998): 15–29. http://dx.doi.org/10.1016/s1383-7621(97)00070-2.

14

Advanced Micro Devices. "IEEE floating-point format". Microprocessors and Microsystems 12, no. 1 (January 1988): 13–23. http://dx.doi.org/10.1016/0141-9331(88)90031-2.

15

Espelid, T. O. "On Floating-Point Summation". SIAM Review 37, no. 4 (December 1995): 603–7. http://dx.doi.org/10.1137/1037130.

16

Umemura, Kyoji. "Floating-point number LISP". Software: Practice and Experience 21, no. 10 (October 1991): 1015–26. http://dx.doi.org/10.1002/spe.4380211003.

17

Singamsetti, Mrudula, Sadulla Shaik, and T. Pitchaiah. "Merged Floating Point Multipliers". International Journal of Engineering and Advanced Technology 9, no. 1s5 (December 30, 2019): 178–82. http://dx.doi.org/10.35940/ijeat.a1042.1291s519.

Abstract
Floating point multipliers are extensively used in many scientific and signal processing computations, but the speed and memory requirements of IEEE-754 floating point multipliers prevent their implementation in many systems that demand fast computation. Hence floating point multipliers became one of the research criteria. This research aims to design a new floating point multiplier that occupies less area, dissipates less power and reduces computational time (more speed) when compared to the conventional architectures. After an extensive literature survey, a new architecture was identified, i.e., the resource-sharing Karatsuba-Ofman algorithm, which occupies less area and power while increasing speed. The design was implemented in MATLAB using DSP block sets; the simulator tool is Xilinx Vivado.
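
The Karatsuba-Ofman algorithm named in this abstract trades one full-size multiplication for three half-size ones. A minimal software sketch of the integer version (an illustration of the general algorithm, not the paper's resource-sharing hardware design):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's algorithm:
    three recursive half-size multiplications instead of four."""
    if x < 16 or y < 16:               # base case: fall back to builtin multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)   # split x = xh * 2**n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll  # both cross terms, one multiply
    return (hh << (2 * n)) + (mid << n) + ll
```

The same recurrence applied to the significands is what makes the hardware multiplier cheaper than a full array multiplier.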
18

Hockert, Neil, and Katherine Compton. "Improving Floating-Point Performance in Less Area: Fractured Floating Point Units (FFPUs)". Journal of Signal Processing Systems 67, no. 1 (January 11, 2011): 31–46. http://dx.doi.org/10.1007/s11265-010-0561-y.

19

Albert, Anitha Juliette, and Seshasayanan Ramachandran. "NULL Convention Floating Point Multiplier". Scientific World Journal 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/749569.

Abstract
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
20

Sravani, Chinta, Prasad Janga, and S. SriBindu. "Floating Point Operations Compatible Streaming Elements for FPGA Accelerators". International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 302–9. http://dx.doi.org/10.31142/ijtsrd15853.

21

Ramya Rani, N. "Implementation of Embedded Floating Point Arithmetic Units on FPGA". Applied Mechanics and Materials 550 (May 2014): 126–36. http://dx.doi.org/10.4028/www.scientific.net/amm.550.126.

Abstract
Floating point arithmetic plays a major role in scientific and embedded computing applications. But the performance of field programmable gate arrays (FPGAs) used for floating point applications is poor due to the complexity of floating point arithmetic. The implementation of floating point units on FPGAs consumes a large amount of resources, and that leads to the development of embedded floating point units in FPGAs. Embedded applications like multimedia, communication and DSP algorithms use floating point arithmetic in processing graphics, Fourier transformation, coding, etc. In this paper, methodologies are presented for the implementation of embedded floating point units on FPGA. The work focuses on achieving high speed of computation and reducing the power for evaluating expressions. An application that demands high performance floating point computation can achieve better speed and density by incorporating embedded floating point units. Additionally this paper describes a comparative study of the design of single precision and double precision pipelined floating point arithmetic units for evaluating expressions. The modules are designed using VHDL simulation in Xilinx software and implemented on VIRTEX and SPARTAN FPGAs.
22

Aruna Mastani, S., and Riyaz Ahamed Shaik. "Inexact Floating Point Adders Analysis". International Journal of Applied Engineering Research 15, no. 11 (November 30, 2020): 1075–80. http://dx.doi.org/10.37622/ijaer/15.11.2020.1075-1080.

23

Meyer, Quirin, Jochen Süßmuth, Gerd Sußner, Marc Stamminger, and Günther Greiner. "On Floating-Point Normal Vectors". Computer Graphics Forum 29, no. 4 (August 26, 2010): 1405–9. http://dx.doi.org/10.1111/j.1467-8659.2010.01737.x.

24

Ghatte, Najib, Shilpa Patil, and Deepak Bhoir. "Floating Point Engine using VHDL". International Journal of Engineering Trends and Technology 8, no. 4 (February 25, 2014): 198–203. http://dx.doi.org/10.14445/22315381/ijett-v8p236.

25

Lange, Marko, and Siegfried M. Rump. "Faithfully Rounded Floating-point Computations". ACM Transactions on Mathematical Software 46, no. 3 (September 25, 2020): 1–20. http://dx.doi.org/10.1145/3290955.

26

Winter, Dik T. "Floating point attributes in Ada". ACM SIGAda Ada Letters XI, no. 7 (September 2, 1991): 244–73. http://dx.doi.org/10.1145/123533.123577.

27

Toronto, Neil, and Jay McCarthy. "Practically Accurate Floating-Point Math". Computing in Science & Engineering 16, no. 4 (July 2014): 80–95. http://dx.doi.org/10.1109/mcse.2014.90.

28

Kadric, Edin, Paul Gurniak, and Andre DeHon. "Accurate Parallel Floating-Point Accumulation". IEEE Transactions on Computers 65, no. 11 (November 1, 2016): 3224–38. http://dx.doi.org/10.1109/tc.2016.2532874.

29

Lam, Michael O., Jeffrey K. Hollingsworth, and G. W. Stewart. "Dynamic floating-point cancellation detection". Parallel Computing 39, no. 3 (March 2013): 146–55. http://dx.doi.org/10.1016/j.parco.2012.08.002.

30

Scheidt, J. K., and C. W. Schelin. "Distributions of floating point numbers". Computing 38, no. 4 (December 1987): 315–24. http://dx.doi.org/10.1007/bf02278709.

31

Gavrielov, Moshe, and Lev Epstein. "The NS32081 Floating-point Unit". IEEE Micro 6, no. 2 (April 1986): 6–12. http://dx.doi.org/10.1109/mm.1986.304737.

32

Groza, V. Z. "High-resolution floating-point ADC". IEEE Transactions on Instrumentation and Measurement 50, no. 6 (2001): 1822–29. http://dx.doi.org/10.1109/19.982987.

33

Nikmehr, H., B. Phillips, and Cheng-Chew Lim. "Fast Decimal Floating-Point Division". IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14, no. 9 (September 2006): 951–61. http://dx.doi.org/10.1109/tvlsi.2006.884047.

34

Serebrenik, Alexander, and Danny De Schreye. "Termination of Floating-Point Computations". Journal of Automated Reasoning 34, no. 2 (December 2005): 141–77. http://dx.doi.org/10.1007/s10817-005-6546-z.

35

Rivera, Joao, Franz Franchetti, and Markus Püschel. "Floating-Point TVPI Abstract Domain". Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 442–66. http://dx.doi.org/10.1145/3656395.

Abstract
Floating-point arithmetic is natively supported in hardware and the preferred choice when implementing numerical software in scientific or engineering applications. However, such programs are notoriously hard to analyze due to round-off errors and the frequent use of elementary functions such as log, arctan, or sqrt. In this work, we present the Two Variables per Inequality Floating-Point (TVPI-FP) domain, a numerical and constraint-based abstract domain designed for the analysis of floating-point programs. TVPI-FP supports all features of real-world floating-point programs including conditional branches, loops, and elementary functions, and it is efficient asymptotically and in practice. Thus it overcomes limitations of prior tools that often are restricted to straight-line programs or require the use of expensive solvers. The key idea is the consistent use of interval arithmetic in inequalities and an associated redesign of all operators. Our extensive experiments show that TVPI-FP is often orders of magnitude faster than more expressive tools at competitive or better precision, while also providing broader support for realistic programs with loops and conditionals.
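
The abstract's key idea, consistently rounding interval bounds outward so an enclosure stays sound, can be sketched as follows. Widening each bound by one ulp with math.nextafter is a conservative stand-in for true directed rounding modes, and this is my illustration rather than TVPI-FP's actual implementation:

```python
import math

def interval_add(a: tuple[float, float], b: tuple[float, float]) -> tuple[float, float]:
    """Add intervals [a0, a1] + [b0, b1] with outward rounding: the lower
    bound is nudged one ulp toward -inf and the upper one ulp toward +inf,
    so the result is guaranteed to enclose the exact real-number sum."""
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return (lo, hi)

# A degenerate interval per input still yields a sound enclosure of 0.1 + 0.2:
lo, hi = interval_add((0.1, 0.1), (0.2, 0.2))
```

Redefining every arithmetic operator this way is what lets interval bounds appear safely inside the domain's inequalities.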
36

Hammadi Jassim, Manal. "Floating Point Optimization Using VHDL". Engineering and Technology Journal 27, no. 16 (December 1, 2009): 3023–49. http://dx.doi.org/10.30684/etj.27.16.11.

37

Luckij, Georgi, and Oleksandr Dolholenko. "Development of floating point operating devices". Technology Audit and Production Reserves 5, no. 2(73) (October 31, 2023): 11–17. http://dx.doi.org/10.15587/2706-5448.2023.290127.

Abstract
The paper shows a well-known approach to the construction of cores in multi-core microprocessors, which is based on the application of a data flow graph-driven calculation model. The architecture of such kernels is based on the application of the reduced instruction set level data flow model proposed by Yale Patt. The object of research is a model of calculations based on data flow management in a multi-core microprocessor. The paper presents the results of developing a floating-point multiplier that can be dynamically reconfigured to handle five different formats of floating-point operands, and an approach to the construction of an operating device for addition-subtraction of a sequence of floating-point numbers for which the law of associativity is fulfilled without additional programming complications. On the basis of the developed circuit of the floating-point multiplier, it is possible to implement various variants of a high-speed multiplier with both fixed and floating points, which may find commercial application. By adding memory elements to each of the multiplier segments, it is possible to obtain options for building very fast pipeline multipliers. The multiplier scheme has a limitation: the exponent is not evaluated for denormalized operands, but the standard for floating-point arithmetic does not require that denormalized operands be handled. In such cases, the multiplier packs infinity as the result. The implementation of an inter-core operating device of a floating-point adder-subtractor can be considered as a new approach to the practical solution of dynamic planning tasks when performing addition-subtraction operations within the framework of a multi-core microprocessor. The limitations of its implementation are related to the large amount of hardware required. To assess this complexity, the bit widths of its main blocks were evaluated for the various formats of representing floating-point numbers defined by the floating-point standard.
38

Yang, Hongru, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, and Bei Zhou. "Detecting Floating-Point Expression Errors Based Improved PSO Algorithm". IET Software 2023 (October 23, 2023): 1–16. http://dx.doi.org/10.1049/2023/6681267.

Abstract
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that floating-point programs' outputs are correct. However, due to the sparsity of floating-point errors, only a limited number of inputs can cause significant floating-point errors, and determining how to detect these inputs and selecting the appropriate search technique is critical to detecting significant errors. This paper proposes a characteristic particle swarm optimization (CPSO) algorithm based on the particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide the corresponding inputs. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve it; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide search space generation and improve search efficiency. This paper selects 28 expressions from the FPBench standard set as test cases, uses PSOED to detect the maximum error of the expressions, and compares it to the current dynamic error detection tools S3FP and Herbie. PSOED detects a maximum error 100% better than S3FP, 68% better than Herbie, and 14% equivalent to Herbie. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.
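
Treating error detection as a search problem, as this abstract proposes, means running a heuristic optimizer over the input space. A generic particle swarm optimization sketch (plain PSO, not the paper's CPSO variant; the toy objective stands in for an expression's measured error):

```python
import random

def pso_maximize(f, lo, hi, n_particles=20, iters=60, seed=0):
    """Plain 1-D particle swarm optimization on [lo, hi]: each particle
    tracks its personal best, and the swarm shares a global best."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # personal best positions
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest, gbest_val
```

In an error-detection setting, f(x) would evaluate the floating-point expression at x and return its error against a higher-precision reference.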
39

Yang, Fengyuan. "Research and Analysis of Floating-Point Adder Principle". Applied and Computational Engineering 8, no. 1 (August 1, 2023): 113–17. http://dx.doi.org/10.54254/2755-2721/8/20230092.

Abstract
With the development of the times, computers are used more and more widely, and the research and development of the adder, as the most basic operation unit, determines the development of the computer field. This paper analyzes the principles of the one-bit adder and the floating-point adder by literature analysis. The one-bit adder is the most basic type of traditional adder; besides it there are the bit-by-bit adder, the overrun adder and so on. The purpose of this paper is to understand the basic principles of the adder; among these, IEEE-754 binary floating-point operation is very important. The traditional fixed-point adder is the basis of the floating-point adder, which can open a new direction for the future optimization of the floating-point adder. This paper finds that the floating-point adder is one of the most widely used components in signal processing systems today, and therefore improvement of the floating-point adder is necessary.
40

Burud, Anand S., and Pradip C. Bhaskar. "Processor Design Using 32 Bit Single Precision Floating Point Unit". International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 198–202. http://dx.doi.org/10.31142/ijtsrd12912.

41

Kurniawan, Wakhid, Hafizd Ardiansyah, Annisa Dwi Oktavianita, and Fitree Tahe. "Integer Representation of Floating-Point Manipulation with Float Twice". IJID (International Journal on Informatics for Development) 9, no. 1 (September 9, 2020): 15. http://dx.doi.org/10.14421/ijid.2020.09103.

Abstract
In the programming world, understanding floating point is not easy, especially when floating-point and bit-level interactions are involved. Although there are currently many libraries to simplify the computation process, many programmers today still do not really understand how the floating-point manipulation process works. Therefore, this paper aims to provide insight into how to manipulate IEEE-754 32-bit floating point with a different representation of results, namely integers, and the code rules of float twice. The method used is a literature review, adopting a float-twice prototype using C programming. The result of this study is an application that can be used to represent integers of floating-point manipulation by adopting a float-twice prototype. The application makes it easy for programmers to determine the type of program data to be developed, especially for programs running on 32-bit floating point (single precision).
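
The bit-level manipulation this abstract describes reinterprets an IEEE-754 single-precision float as a 32-bit integer and back. A sketch in Python rather than the paper's C; the simplified float_twice below is my reading of the idea and ignores the zero, denormal, infinity and NaN special cases:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret an IEEE-754 single-precision float as its 32-bit pattern."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_to_float(u: int) -> float:
    """Inverse: reinterpret a 32-bit pattern as a single-precision float."""
    return struct.unpack(">f", struct.pack(">I", u))[0]

def float_twice(x: float) -> float:
    """Double a normalized, finite float by adding 1 to its exponent field
    (no handling of zero, denormals, infinity or NaN)."""
    return bits_to_float(float_to_bits(x) + (1 << 23))

bits = float_to_bits(1.0)        # 0x3F800000: sign 0, biased exponent 127, fraction 0
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF
```

Extracting the sign, exponent and fraction fields this way is the usual first step in any bit-level floating-point experiment.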
42

Blanton, Marina, Michael T. Goodrich, and Chen Yuan. "Secure and Accurate Summation of Many Floating-Point Numbers". Proceedings on Privacy Enhancing Technologies 2023, no. 3 (July 2023): 432–45. http://dx.doi.org/10.56553/popets-2023-0090.

Abstract
Motivated by the importance of floating-point computations, we study the problem of securely and accurately summing many floating-point numbers. Prior work has focused on security absent accuracy or accuracy absent security, whereas our approach achieves both of them. Specifically, we show how to implement floating-point superaccumulators using secure multi-party computation techniques, so that a number of participants holding secret shares of floating-point numbers can accurately compute their sum while keeping the individual values private.
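
Setting the secure multi-party aspect aside, the accuracy half of the problem, summing many floats without accumulating rounding error, can be illustrated with compensated summation and with math.fsum, which returns the correctly rounded sum via exact partial sums (a plain sketch, not the paper's secret-shared superaccumulator):

```python
import math

def kahan_sum(xs):
    """Kahan compensated summation: carry the rounding error of each
    addition forward in a correction term c."""
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y      # the part of y lost when forming t
        s = t
    return s

data = [0.1] * 1000          # the double nearest 0.1, summed 1000 times
naive = sum(data)            # one rounding error per addition accumulates
comp = kahan_sum(data)       # compensated: error stays at a few ulps
exact = math.fsum(data)      # correctly rounded sum of the inputs
```

Plain left-to-right summation drifts away from 100.0 here, while the compensated and exact variants stay on it; the paper's contribution is achieving the exact behaviour on secret-shared data.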
43

Zou, Daming, Yuchen Gu, Yuanfeng Shi, MingZhe Wang, Yingfei Xiong, and Zhendong Su. "Oracle-free repair synthesis for floating-point programs". Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 957–85. http://dx.doi.org/10.1145/3563322.

Abstract
The floating-point representation provides widely-used data types (such as “float” and “double”) for modern numerical software. Numerical errors are inherent due to floating-point’s approximate nature, and pose an important, well-known challenge. It is nontrivial to fix/repair numerical code to reduce numerical errors — it requires either numerical expertise (for manual fixing) or high-precision oracles (for automatic repair); both are difficult requirements. To tackle this challenge, this paper introduces a principled dynamic approach that is fully automated and oracle-free for effectively repairing floating-point errors. The key of our approach is the novel notion of micro-structure that characterizes structural patterns of floating-point errors. We leverage micro-structures’ statistical information on floating-point errors to effectively guide repair synthesis and validation. Compared with existing state-of-the-art repair approaches, our work is fully automatic and has the distinctive benefit of not relying on the difficult to obtain high-precision oracles. Evaluation results on 36 commonly-used numerical programs show that our approach is highly efficient and effective: (1) it is able to synthesize repairs instantaneously, and (2) versus the original programs, the repaired programs have orders of magnitude smaller floating-point errors, while having faster runtime performance.
44

Lee, Wonyeol, Rahul Sharma, and Alex Aiken. "Verifying bit-manipulations of floating-point". ACM SIGPLAN Notices 51, no. 6 (August 2016): 70–84. http://dx.doi.org/10.1145/2980983.2908107.

45

Vogt, Christopher J. "Floating point performance of Common Lisp". ACM SIGPLAN Notices 33, no. 9 (September 1998): 103–7. http://dx.doi.org/10.1145/290229.290244.

46

Yamanaka, Naoya, and Shin'ichi Oishi. "Fast quadruple-double floating point format". Nonlinear Theory and Its Applications, IEICE 5, no. 1 (2014): 15–34. http://dx.doi.org/10.1587/nolta.5.15.

47

Lindstrom, Peter. "Fixed-Rate Compressed Floating-Point Arrays". IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (December 31, 2014): 2674–83. http://dx.doi.org/10.1109/tvcg.2014.2346458.

48

Ris, Fred, Ed Barkmeyer, Craig Schaffert, and Peter Farkas. "When floating-point addition isn't commutative". ACM SIGNUM Newsletter 28, no. 1 (January 1993): 8–13. http://dx.doi.org/10.1145/156301.156303.

49

Laakso, T. I., and L. B. Jackson. "Bounds for floating-point roundoff noise". IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 41, no. 6 (June 1994): 424–26. http://dx.doi.org/10.1109/82.300204.

50

Rao, B. D. "Floating point arithmetic and digital filters". IEEE Transactions on Signal Processing 40, no. 1 (1992): 85–95. http://dx.doi.org/10.1109/78.157184.
