Academic literature on the topic 'Floating point'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Floating point.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Floating point"

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error." Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA." International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic." Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Full text
Abstract:
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
APA, Harvard, Vancouver, ISO, and other styles
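The abstract above notes that the rounding error of some floating-point operations can itself be computed exactly. A minimal illustration of that idea (my sketch, not code from the paper) is Knuth's TwoSum error-free transformation:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: returns (s, e) where s = fl(a + b) and
    s + e == a + b exactly; e is the rounding error of the addition."""
    s = a + b
    z = s - a
    e = (a - (s - z)) + (b - z)
    return s, e

s, e = two_sum(1.0, 1e-16)
print(s, e)  # 1.0 1e-16: the addend lost by rounding is recovered in e
```

Building blocks like this underlie the "extend the accuracy of computations beyond working precision" techniques the survey discusses, such as double-double arithmetic and compensated summation.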
4

Blinn, J. F. "Floating-point tricks." IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS." ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Full text
Abstract:
In this paper, we present the design and implementation of floating-point addition and floating-point multiplication. Among the many existing designs, floating-point multiplication and floating-point addition offer high precision and greater accuracy for the data representation of the image. This project is designed and simulated on Xilinx ISE 14.7 software using Verilog. Simulation results show area and delay reductions compared to the conventional method. Keywords: FIR Filter, Floating point Addition, Floating point Multiplication, Carry Look Ahead Adder
APA, Harvard, Vancouver, ISO, and other styles
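The hardware design in this entry operates on the sign, exponent, and significand fields of its operands. As a rough software illustration of the same decomposition (my sketch, not the paper's circuit), Python's math.frexp and math.ldexp expose those fields, so a multiply can be written as "multiply significands, add exponents, rescale":

```python
import math

def fp_mul(a: float, b: float) -> float:
    """Multiply via field decomposition: a = ma * 2**ea and
    b = mb * 2**eb, so a*b = (ma*mb) * 2**(ea+eb).  For normal,
    non-overflowing results this matches a * b exactly, because the
    only rounding happens in the significand product."""
    ma, ea = math.frexp(a)   # 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    return math.ldexp(ma * mb, ea + eb)

print(fp_mul(0.1, 0.2) == 0.1 * 0.2)  # True
```

A hardware unit does the same steps with fixed-width integer datapaths, plus normalization, rounding, and exception handling.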
7

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sayers, David, and Jeremy Du Croz. "Validating floating-point systems." Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication." IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nannarelli, Alberto. "Tunable Floating-Point Adder." IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Floating point"

1

Skogstrøm, Kristian. "Implementation of Floating-point Coprocessor." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9202.

Full text
Abstract:

This thesis presents the architecture and implementation of a high-performance floating-point coprocessor for Atmel's new microcontroller. The coprocessor architecture is based on a fused multiply-add pipeline developed in the specialization project, TDT4720. This pipeline has been optimized significantly and extended to support negation of all operands and single-precision input and output. New hardware has been designed for the decode/fetch unit, the register file, the compare/convert pipeline and the approximation tables. Division and square root are performed in software using Newton-Raphson iteration. The Verilog RTL implementation has been synthesized at 167 MHz using a 0.18 um standard cell library. The total area of the final implementation is 107 225 gates. The coprocessor has also been synthesized with the CPU. Test programs have been run to verify that the coprocessor works correctly. A complete verification of the floating-point coprocessor, however, has not been performed due to time constraints.

APA, Harvard, Vancouver, ISO, and other styles
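The coprocessor above performs division in software via Newton-Raphson iteration. A hedged sketch of how that iteration refines a reciprocal (illustrative only, not Atmel's code):

```python
def reciprocal(d: float, x0: float, iters: int = 5) -> float:
    """Newton-Raphson for f(x) = 1/x - d gives the division-free
    update x <- x * (2 - d*x).  The relative error roughly squares
    each step, so a crude seed (e.g. from a small lookup table, as in
    the thesis's approximation tables) reaches full double precision
    in a handful of iterations."""
    x = x0
    for _ in range(iters):
        x = x * (2.0 - d * x)
    return x

print(abs(reciprocal(3.0, 0.3) - 1/3) < 1e-15)  # True
```

Quadratic convergence is why a multiply-add pipeline plus a seed table is enough: each pass doubles the number of correct bits, so 53-bit accuracy needs only a few fused multiply-adds.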
2

Zhang, Yiwei. "Biophysically accurate floating point neuroprocessors." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.544427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baidas, Zaher Abdulkarim. "High-level floating-point synthesis." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Duracz, Jan Andrzej. "Verification of floating point programs." Thesis, Aston University, 2010. http://publications.aston.ac.uk/15778/.

Full text
Abstract:
In this thesis we present an approach to automated verification of floating point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
APA, Harvard, Vancouver, ISO, and other styles
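To illustrate the subdivision-based proving the abstract describes, here is a toy interval-arithmetic checker (my sketch: it ignores the outward rounding of endpoints a sound prover requires, and the thesis's function intervals are strictly more powerful than the plain intervals used here):

```python
class Interval:
    """Toy interval arithmetic over floats (no directed rounding)."""
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
    def __mul__(self, other: "Interval") -> "Interval":
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

def certify_upper_bound(f, lo, hi, bound, depth=24):
    """Certify f(x) <= bound on [lo, hi]: evaluate f on an interval
    enclosure and bisect wherever the enclosure is too loose."""
    enc = f(Interval(lo, hi))
    if enc.hi <= bound:
        return True
    if depth == 0:
        return False          # enclosure still too loose: give up
    mid = (lo + hi) / 2.0
    return (certify_upper_bound(f, lo, mid, bound, depth - 1) and
            certify_upper_bound(f, mid, hi, bound, depth - 1))

# x*(1-x) has true maximum 0.25 on [0, 1]; the whole-interval
# enclosure wildly overestimates it, but subdivision tightens the
# enclosures enough to certify the slightly weaker bound 0.2501.
g = lambda x: x * Interval(1.0 - x.hi, 1.0 - x.lo)
print(certify_upper_bound(g, 0.0, 1.0, 0.2501))  # True
```

The "information loss" the abstract mentions is visible here: the single enclosure over [0, 1] yields [0, 1], and only recursive splitting recovers a bound near the true maximum.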
5

Ross, Johan, and Hans Engström. "Voice Codec for Floating Point Processor." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15763.

Full text
Abstract:

As part of an ongoing project at the department of electrical engineering, ISY, at Linköping University, a voice decoder using floating point formats has been the focus of this master thesis. Previous work has been done developing an mp3-decoder using the floating point formats. All is expected to be implemented on a single DSP. The ever-present desire to make things smaller, more efficient and less power consuming is the main reason for this master thesis regarding the use of a floating point format instead of the traditional integer format in a GSM codec. The idea with the low precision floating point format is to be able to reduce the size of the memory. This in turn reduces the size of the total chip area needed and also decreases the power consumption. One main question is whether this can be done with the floating point format without losing too much sound quality of the speech. When using the integer format, one can represent every value in the range, depending on how many bits are being used. When using a floating point format, you can represent larger values using fewer bits compared to the integer format, but you lose representation of some values and have to round the values off. From the tests that have been made with the decoder during this thesis, it has been found that the audible difference between the two formats is very small and can hardly be heard, if at all. The rounding seems to have very little effect on the quality of the sound, and the implementation of the codec has succeeded in reproducing similar sound quality to the GSM standard decoder.

APA, Harvard, Vancouver, ISO, and other styles
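The trade-off the abstract describes, larger range from fewer bits at the cost of gaps between representable values, can be seen directly with a standard 16-bit float (IEEE binary16, used here purely as an illustration; the thesis's low-precision format may differ):

```python
import struct

def round_to_half(x: float) -> float:
    """Round a double to IEEE binary16 and back (struct format 'e'),
    exposing which values the 16-bit float format cannot represent."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# A 16-bit signed integer covers [-32768, 32767] with spacing 1
# everywhere.  binary16 reaches +/-65504, but above 2048 the spacing
# between representable values exceeds 1, so integers get rounded:
print(round_to_half(2048.0))   # 2048.0 (exactly representable)
print(round_to_half(2049.0))   # 2048.0 (rounded: spacing is 2 here)
print(round_to_half(60001.0))  # 60000.0 (spacing is 32 near the top)
```

Whether such rounding is audible in decoded speech is exactly the question the thesis answers empirically.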
6

Englund, Madeleine. "Hybrid Floating-point Units in FPGAs." Thesis, Linköpings universitet, Datorteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86587.

Full text
Abstract:
Floating point numbers are used in many applications that would be well suited to a higher parallelism than that offered in a CPU. In these cases, an FPGA, with its ability to handle multiple calculations simultaneously, could be the solution. Unfortunately, floating point operations implemented in an FPGA are often resource intensive, which means that many developers avoid floating point solutions in FPGAs, or avoid using FPGAs for floating point applications. Here the potential to get less expensive floating point operations by using a higher radix for the floating point numbers and by expanding the existing DSP block in the FPGA is investigated. One of the goals is that the FPGA should be usable both by users who have floating point in their designs and by those who do not. In order to motivate hard floating point blocks in the FPGA, these must not consume too much of the limited resources. This work shows that floating point addition becomes smaller with the use of the higher radix, while multiplication becomes smaller by using the hardware of the DSP block. When both operations are examined at the same time, it turns out that it is possible to get a reduced area, compared to separate floating point units, by utilizing both the DSP block and a higher radix for the floating point numbers.
APA, Harvard, Vancouver, ISO, and other styles
7

Xiao, Yancheng. "Two floating point LLL reduction algorithms." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114503.

Full text
Abstract:
The Lenstra, Lenstra and Lovász (LLL) reduction is the most popular lattice reduction and is a powerful tool for solving many complex problems in mathematics and computer science. The blocking technique casts matrix algorithms in terms of matrix-matrix operations to permit efficient reuse of data in the algorithms. In this thesis, we use the blocking technique to develop two floating point block LLL reduction algorithms, the left-to-right block LLL (LRBLLL) reduction algorithm and the alternating partition block LLL (APBLLL) reduction algorithm, and give the complexity analysis of these two algorithms. We compare these two block LLL reduction algorithms with the original LLL reduction algorithm (in floating point arithmetic) and the partial LLL (PLLL) reduction algorithm in the literature in terms of CPU run time, flops and relative backward errors. The simulation results show that the overall CPU run times of the two block LLL reduction algorithms are faster than the partial LLL reduction algorithm and much faster than the original LLL, even though the two block algorithms cost more flops than the partial LLL reduction algorithm in some cases. The shortcoming of the two block algorithms is that sometimes they may not be as numerically stable as the original and partial LLL reduction algorithms. The parallelization of APBLLL is discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Kupriianova, Olga. "Towards a modern floating-point environment." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066584/document.

Full text
Abstract:
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary, such as exp or log, and special, such as erf or Γ); the second is to provide IEEE754 operations that mix the inputs and the output of different radixes. As the number of various implementations for each mathematical function is large, this work is focused on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for user-specified functions, so it may be considered a black-box function generator. This work contains a novel algorithm for domain splitting and an approach to replace branching on reconstruction by a polynomial. The new domain splitting algorithm produces fewer subdomains, and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the revision of the IEEE754 Standard in 2008 it is possible to mix numbers of different precisions in one operation. However, there is no mechanism that allows users to mix numbers of different radices in one operation. This research starts an examination of mixed-radix arithmetic with the worst-cases search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently set rounding mode and produces correctly rounded results.
APA, Harvard, Vancouver, ISO, and other styles
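The abstract's last result concerns correctly rounded conversion of arbitrary-length decimal strings to binary floating point. CPython's float() constructor already has this property for binary64, which lets us illustrate what the specification demands (this demonstrates the property, not the thesis's algorithm):

```python
from decimal import Decimal

# This 55-digit decimal string is the exact value of the binary64
# number nearest to 1/10, so a correctly rounded parser must map it
# to the very same double that the short literal 0.1 produces.
exact = "0.1000000000000000055511151231257827021181583404541015625"
print(float(exact) == 0.1)  # True: both parse to the same double
print(Decimal(0.1))         # shows that exact binary value in decimal
```

Correct rounding means the result is the representable number nearest the true decimal value regardless of the input's length, which is why arbitrary-length inputs make the problem nontrivial.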
9

Kupriianova, Olga. "Towards a modern floating-point environment." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066584.

Full text
Abstract:
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary, such as exp or log, and special, such as erf or Γ); the second is to provide IEEE754 operations that mix the inputs and the output of different radixes. As the number of various implementations for each mathematical function is large, this work is focused on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for user-specified functions, so it may be considered a black-box function generator. This work contains a novel algorithm for domain splitting and an approach to replace branching on reconstruction by a polynomial. The new domain splitting algorithm produces fewer subdomains, and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the revision of the IEEE754 Standard in 2008 it is possible to mix numbers of different precisions in one operation. However, there is no mechanism that allows users to mix numbers of different radices in one operation. This research starts an examination of mixed-radix arithmetic with the worst-cases search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently set rounding mode and produces correctly rounded results.
APA, Harvard, Vancouver, ISO, and other styles
10

Aamodt, Tor. "Floating-point to fixed-point compilation and embedded architectural support." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58787.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Floating point"

1

Wallis, Peter J. L., ed. Improving floating-point programming. Chichester, England: Wiley, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Muller, Jean-Michel, Nicolas Brunie, Florent de Dinechin, Claude-Pierre Jeannerod, Mioara Joldes, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, and Serge Torres. Handbook of Floating-Point Arithmetic. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76526-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Muller, Jean-Michel, Nicolas Brisebarre, Florent de Dinechin, Claude-Pierre Jeannerod, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, Damien Stehlé, and Serge Torres. Handbook of Floating-Point Arithmetic. Boston: Birkhäuser Boston, 2010. http://dx.doi.org/10.1007/978-0-8176-4705-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

SpringerLink (Online service), ed. Handbook of floating-point arithmetic. Boston: Birkhäuser, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vekaria, R. Bandpass filters using floating point arithmetic. London: University of East London, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Aamodt, Tor. Floating-point to fixed-point compilation and embedded architectural support. Ottawa: National Library of Canada, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87181-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-95513-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Motorola. MC68881/MC68882 floating-point coprocessor user's manual. 2nd ed. Englewood Cliffs, N.J: Prentice Hall, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

IEEE Computer Society. Standards Committee. Working group of the Microprocessor Standards Subcommittee. and American National Standards Institute, eds. IEEE standard for binary floating-point arithmetic. New York, N.Y: Institute of Electrical and Electronics Engineers, Inc, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Floating point"

1

Kneusel, Ronald T. "Floating Point." In Numbers and Computers, 75–111. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17260-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kneusel, Ronald T. "Floating Point." In Numbers and Computers, 81–115. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50508-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lloris Ruiz, Antonio, Encarnación Castillo Morales, Luis Parrilla Roure, Antonio García Ríos, and María José Lloris Meseguer. "Floating Point." In Arithmetic and Algebraic Circuits, 173–220. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67266-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stoyan, Gisbert, and Agnes Baran. "Floating Point Arithmetic." In Compact Textbooks in Mathematics, 1–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44660-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kormanyos, Christopher. "Floating-Point Mathematics." In Real-Time C++, 213–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47810-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Deschamps, Jean-Pierre, Gustavo D. Sutter, and Enrique Cantó. "Floating Point Arithmetic." In Lecture Notes in Electrical Engineering, 305–36. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-2987-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Van Hoey, Jo. "Floating-Point Arithmetic." In Beginning x64 Assembly Programming, 95–100. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5076-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Smith, Stephen. "Floating-Point Operations." In Raspberry Pi Assembly Language Programming, 211–32. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5287-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Smith, Stephen. "Floating-Point Operations." In Programming with 64-Bit ARM Assembly Language, 269–89. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5881-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mihailescu, Marius Iulian, and Stefania Loredana Nita. "Floating-Point Arithmetic." In Pro Cryptography and Cryptanalysis, 109–32. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6367-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Floating point"

1

Zaki, Ahmad M., Mohamed H. El-Shafey, Ayman M. Bahaa-Eldin, and Gamal M. Aly. "Accurate floating-point operation using controlled floating-point precision." In 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PacRim). IEEE, 2011. http://dx.doi.org/10.1109/pacrim.2011.6032978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gemini, Vipin. "Reconfigurable floating point adder." In 2014 1st International Conference on Information Technology, Computer and Electrical Engineering (ICITACEE). IEEE, 2014. http://dx.doi.org/10.1109/icitacee.2014.7065719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Collins, George E., and Werner Krandick. "Multiprecision floating point addition." In Proceedings of the 2000 International Symposium on Symbolic and Algebraic Computation (ISSAC '00). New York, New York, USA: ACM Press, 2000. http://dx.doi.org/10.1145/345542.345585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fandrianto, Jan, and B. Y. Woo. "VLSI floating-point processors." In 1985 IEEE 7th Symposium on Computer Arithmetic (ARITH). IEEE, 1985. http://dx.doi.org/10.1109/arith.1985.6158947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Han, Kyungtae, Alex G. Olson, and Brian L. Evans. "Automatic Floating-Point to Fixed-Point Transformations." In 2006 Fortieth Asilomar Conference on Signals, Systems and Computers. IEEE, 2006. http://dx.doi.org/10.1109/acssc.2006.356588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Marynets, K. "Study of the Antarctic Circumpolar Current via the Shallow Water Large Scale Modelling." In Floating Offshore Energy Devices. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644901731-11.

Full text
Abstract:
This paper proposes a modelling of the Antarctic Circumpolar Current (ACC) by means of a two-point boundary value problem. As the major means of exchange of water between the great ocean basins (Atlantic, Pacific and Indian), the ACC plays a highly important role in the global climate. Despite its importance, it remains one of the most poorly understood components of global ocean circulation. We present some recent results on the existence and uniqueness of solutions of a two-point nonlinear boundary value problem that arises in the modelling of the flow of the ACC (see discussions in [4-9]).
APA, Harvard, Vancouver, ISO, and other styles
7

Rao, R. Prakash, N. Dhanunjaya Rao, K. Naveen, and P. Ramya. "Implementation of the Standard Floating Point MAC Using IEEE 754 Floating Point Adder." In 2018 Second International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2018. http://dx.doi.org/10.1109/iccmc.2018.8487626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lam, Michael O., and Barry L. Rountree. "Floating-Point Shadow Value Analysis." In 2016 5th Workshop on Extreme-Scale Programming Tools (ESPT). IEEE, 2016. http://dx.doi.org/10.1109/espt.2016.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sheikh, Basit Riaz, and Rajit Manohar. "An Asynchronous Floating-Point Multiplier." In 2012 18th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC). IEEE, 2012. http://dx.doi.org/10.1109/async.2012.19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Aarnio, Tomi, Claudio Brunelli, and Timo Viitanen. "Efficient floating-point texture decompression." In 2010 International Symposium on System-on-Chip - SOC. IEEE, 2010. http://dx.doi.org/10.1109/issoc.2010.5625555.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Floating point"

1

Chen, Yirng-An, and Randal E. Bryant. Verification of Floating-Point Adders. Fort Belvoir, VA: Defense Technical Information Center, April 1997. http://dx.doi.org/10.21236/ada346061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kahan, W. Rational Arithmetic in Floating-Point. Fort Belvoir, VA: Defense Technical Information Center, September 1986. http://dx.doi.org/10.21236/ada175190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dally, William J. Micro-Optimization of Floating-Point Operations. Fort Belvoir, VA: Defense Technical Information Center, August 1988. http://dx.doi.org/10.21236/ada202001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hauser, John R. Handling Floating-point Exceptions in Numeric Programs. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada637041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kong, Soonho, Sicun Gao, and Edmund M. Clarke. Floating-point Bugs in Embedded GNU C Library. Fort Belvoir, VA: Defense Technical Information Center, November 2013. http://dx.doi.org/10.21236/ada600185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jensen, Debby. Control Implementation for the SPUR Floating Point Coprocessor. Fort Belvoir, VA: Defense Technical Information Center, August 1987. http://dx.doi.org/10.21236/ada604004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Presuhn, R. Textual Conventions for the Representation of Floating-Point Numbers. RFC Editor, August 2011. http://dx.doi.org/10.17487/rfc6340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Yirng-An, and Randal E. Bryant. PBHD: An Efficient Graph Representation for Floating Point Circuit Verification,. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nagayama, Shinobu, Tsutomu Sasao, and Jon T. Butler. Floating-Point Numeric Function Generators Based on Piecewise-Split EVMDDs. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada547647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nash, J. G. VLSI (Very Large Scale Integration) Floating Point Chip Design Study. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada164198.

Full text
APA, Harvard, Vancouver, ISO, and other styles