Academic literature on the topic "Floating point"

Create an accurate citation in the APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Floating point".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Floating point"

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error". Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA". International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic". Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Abstract
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
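The survey's observation that the rounding error of a floating-point operation can itself be computed exactly is commonly illustrated with Knuth's TwoSum error-free transformation; a minimal sketch of that standard technique (not code from the paper):

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: return (s, e) where s = fl(a + b) and
    a + b = s + e exactly, for any finite IEEE-754 doubles."""
    s = a + b
    bb = s - a                      # the part of b absorbed into s
    e = (a - (s - bb)) + (b - bb)   # what rounding discarded
    return s, e

s, e = two_sum(0.1, 0.2)
# s is the rounded sum; e is the nonzero rounding error of 0.1 + 0.2
```

Accumulating these `e` terms is how compensated summation schemes such as Kahan's extend accuracy beyond working precision, exactly as the abstract describes.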
4

Blinn, J. F. "Floating-point tricks". IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

5

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS". ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

6

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Abstract
Abstract: In this paper, we present the design and implementation of floating-point addition and floating-point multiplication. Among the many existing multipliers, floating-point multiplication and floating-point addition offer high precision and greater accuracy for the data representation of the image. This project is designed and simulated on Xilinx ISE 14.7 software using Verilog. Simulation results show area reduction and delay reduction compared to the conventional method. Keywords: FIR Filter, Floating point Addition, Floating point Multiplication, Carry Look Ahead Adder
7

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

8

Sayers, David, and Jeremy Du Croz. "Validating floating-point systems". Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

9

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication". IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

10

Nannarelli, Alberto. "Tunable Floating-Point Adder". IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.


Theses on the topic "Floating point"

1

Skogstrøm, Kristian. "Implementation of Floating-point Coprocessor". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9202.

Abstract

This thesis presents the architecture and implementation of a high-performance floating-point coprocessor for Atmel's new microcontroller. The coprocessor architecture is based on a fused multiply-add pipeline developed in the specialization project, TDT4720. This pipeline has been optimized significantly and extended to support negation of all operands and single-precision input and output. New hardware has been designed for the decode/fetch unit, the register file, the compare/convert pipeline and the approximation tables. Division and square root are performed in software using Newton-Raphson iteration. The Verilog RTL implementation has been synthesized at 167 MHz using a 0.18 um standard cell library. The total area of the final implementation is 107 225 gates. The coprocessor has also been synthesized with the CPU. Test programs have been run to verify that the coprocessor works correctly. A complete verification of the floating-point coprocessor, however, has not been performed due to time limitations.
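The software division the abstract mentions is typically a Newton-Raphson iteration for the reciprocal. A minimal sketch, using the classic 48/17 - 32/17*d linear seed for a significand scaled into [0.5, 1) (an illustrative assumption, not the coprocessor's actual algorithm):

```python
def reciprocal(d: float, iters: int = 4) -> float:
    """Approximate 1/d for d in [0.5, 1) via Newton-Raphson.
    Each iteration roughly doubles the number of correct bits."""
    assert 0.5 <= d < 1.0
    x = 48.0 / 17.0 - (32.0 / 17.0) * d  # minimax linear seed
    for _ in range(iters):
        x = x * (2.0 - d * x)            # x_{n+1} = x_n * (2 - d * x_n)
    return x
```

A full divider then scales the divisor's significand into [0.5, 1) via its exponent and multiplies the dividend by the reciprocal; square root uses the analogous iteration for 1/sqrt(d).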

2

Zhang, Yiwei. "Biophysically accurate floating point neuroprocessors". Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.544427.

3

Baidas, Zaher Abdulkarim. "High-level floating-point synthesis". Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325049.

4

Duracz, Jan Andrzej. "Verification of floating point programs". Thesis, Aston University, 2010. http://publications.aston.ac.uk/15778/.

Abstract
In this thesis we present an approach to automated verification of floating point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
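The "information loss" of traditional interval arithmetic that the abstract refers to is the dependency problem: evaluating an expression interval-by-interval ignores correlations between occurrences of the same variable. A toy illustration (not the thesis's prover):

```python
Interval = tuple[float, float]

def isub(a: Interval, b: Interval) -> Interval:
    """Interval subtraction: the smallest interval containing x - y
    for all x in a and y in b."""
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print(isub(x, x))  # (-1.0, 1.0), even though x - x is identically 0
```

Function intervals enclose a function rather than just its range, so they can track such dependencies; that is the refinement the prototype prover exploits.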
5

Ross, Johan, and Hans Engström. "Voice Codec for Floating Point Processor". Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15763.

Abstract

As part of an ongoing project at the Department of Electrical Engineering, ISY, at Linköping University, a voice decoder using floating point formats has been the focus of this master thesis. Previous work has been done developing an mp3-decoder using the floating point formats. All is expected to be implemented on a single DSP. The ever-present desire to make things smaller, more efficient and less power consuming is the main reason for this master thesis regarding the use of a floating point format instead of the traditional integer format in a GSM codec. The idea with the low precision floating point format is to be able to reduce the size of the memory. This in turn reduces the size of the total chip area needed and also decreases the power consumption. One main question is whether this can be done with the floating point format without losing too much sound quality of the speech. When using the integer format, one can represent every value in the range, depending on how many bits are being used. When using a floating point format you can represent larger values using fewer bits compared to the integer format, but you lose representation of some values and have to round the values off. From the tests that have been made with the decoder during this thesis, it has been found that the audible difference between the two formats is very small and can hardly be heard, if at all. The rounding seems to have very little effect on the quality of the sound, and the implementation of the codec has succeeded in reproducing similar sound quality to the GSM standard decoder.
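The trade-off described above (a floating point format reaches larger values with fewer bits, at the cost of rounding away some values, while integers are exact throughout their range) can be seen in any IEEE-754 double; a generic illustration, unrelated to the thesis's codec:

```python
n = 2 ** 53  # above this, not every consecutive integer is representable

# The double 2^53 + 1 does not exist, so the addition rounds back to 2^53.
assert float(n) + 1.0 == float(n)

# Exact integer arithmetic distinguishes them, at the cost of wider storage.
assert n + 1 != n
```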

6

Englund, Madeleine. "Hybrid Floating-point Units in FPGAs". Thesis, Linköpings universitet, Datorteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86587.

Abstract
Floating point numbers are used in many applications that would be well suited to a higher parallelism than that offered in a CPU. In these cases, an FPGA, with its ability to handle multiple calculations simultaneously, could be the solution. Unfortunately, floating point operations implemented in an FPGA are often resource intensive, which means that many developers avoid floating point solutions in FPGAs or avoid using FPGAs for floating point applications. Here the potential to get less expensive floating point operations by using a higher radix for the floating point numbers and by expanding the existing DSP block in the FPGA is investigated. One of the goals is that the FPGA should be usable both for users that have floating point in their designs and for those who do not. In order to motivate hard floating point blocks in the FPGA, these must not consume too much of the limited resources. This work shows that the floating point addition becomes smaller with the use of the higher radix, while the multiplication becomes smaller by using the hardware of the DSP block. When both operations are examined at the same time, it turns out that it is possible to get a reduced area, compared to separate floating point units, by utilizing both the DSP block and a higher radix for the floating point numbers.
7

Xiao, Yancheng. "Two floating point LLL reduction algorithms". Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114503.

Abstract
The Lenstra, Lenstra and Lovász (LLL) reduction is the most popular lattice reduction and is a powerful tool for solving many complex problems in mathematics and computer science. The blocking technique casts matrix algorithms in terms of matrix-matrix operations to permit efficient reuse of data in the algorithms. In this thesis, we use the blocking technique to develop two floating point block LLL reduction algorithms, the left-to-right block LLL (LRBLLL) reduction algorithm and the alternating partition block LLL (APBLLL) reduction algorithm, and give the complexity analysis of these two algorithms. We compare these two block LLL reduction algorithms with the original LLL reduction algorithm (in floating point arithmetic) and the partial LLL (PLLL) reduction algorithm in the literature in terms of CPU run time, flops and relative backward errors. The simulation results show that the overall CPU run time of the two block LLL reduction algorithms is faster than the partial LLL reduction algorithm and much faster than the original LLL, even though the two block algorithms cost more flops than the partial LLL reduction algorithm in some cases. The shortcoming of the two block algorithms is that sometimes they may not be as numerically stable as the original and partial LLL reduction algorithms. The parallelization of APBLLL is discussed.
8

Kupriianova, Olga. "Towards a modern floating-point environment". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066584/document.

Abstract
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary such as exp or log and special such as erf or Γ), the second one is to provide IEEE754 operations that mix the inputs and the output of different radices. As the number of various implementations for each mathematical function is large, this work is focused on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for the user-specified functions. So it may be considered as a black-box function generator. This work contains a novel algorithm for domain splitting and an approach to replace branching on reconstruction by a polynomial. This new domain splitting algorithm produces fewer subdomains and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the revision of the IEEE754 Standard in 2008 it is possible to mix numbers of different precisions in one operation. However, there is no mechanism that allows users to mix numbers of different radices in one operation. This research starts an examination of mixed-radix arithmetic with the worst cases search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently-set rounding mode and produces correctly-rounded results.
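Correct rounding of decimal-to-binary conversion, the property the final algorithm guarantees, is what makes round-tripping reliable. CPython's own string-to-float conversion is correctly rounded, so it can illustrate the property (this is not the thesis's algorithm):

```python
# A correctly rounded parser maps every decimal string to the nearest
# binary double, so printing a double and parsing it back must recover
# the double exactly.
x = 0.1 + 0.2
assert float(repr(x)) == x

# Extra trailing digits beyond what the format can distinguish are
# absorbed by the rounding rather than shifting the result.
assert float("0.10000000000000000000001") == 0.1
```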
9

Kupriianova, Olga. "Towards a modern floating-point environment". Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066584.

Abstract
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary such as exp or log and special such as erf or Γ), the second one is to provide IEEE754 operations that mix the inputs and the output of different radices. As the number of various implementations for each mathematical function is large, this work is focused on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for the user-specified functions. So it may be considered as a black-box function generator. This work contains a novel algorithm for domain splitting and an approach to replace branching on reconstruction by a polynomial. This new domain splitting algorithm produces fewer subdomains and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the revision of the IEEE754 Standard in 2008 it is possible to mix numbers of different precisions in one operation. However, there is no mechanism that allows users to mix numbers of different radices in one operation. This research starts an examination of mixed-radix arithmetic with the worst cases search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently-set rounding mode and produces correctly-rounded results.
10

Aamodt, Tor. "Floating-point to fixed-point compilation and embedded architectural support". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58787.pdf.


Books on the topic "Floating point"

1

Wallis, Peter J. L., ed. Improving floating-point programming. Chichester, England: Wiley, 1990.

2

Muller, Jean-Michel, Nicolas Brunie, Florent de Dinechin, Claude-Pierre Jeannerod, Mioara Joldes, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, and Serge Torres. Handbook of Floating-Point Arithmetic. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76526-6.

3

Muller, Jean-Michel, Nicolas Brisebarre, Florent de Dinechin, Claude-Pierre Jeannerod, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, Damien Stehlé, and Serge Torres. Handbook of Floating-Point Arithmetic. Boston: Birkhäuser Boston, 2010. http://dx.doi.org/10.1007/978-0-8176-4705-6.

4

SpringerLink (Online service), ed. Handbook of floating-point arithmetic. Boston: Birkhäuser, 2010.

5

Vekaria, R. Bandpass filters using floating point arithmetic. London: University of East London, 1994.

6

Aamodt, Tor. Floating-point to fixed-point compilation and embedded architectural support. Ottawa: National Library of Canada, 2001.

7

Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87181-9.

8

Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-95513-1.

9

Motorola. MC68881/MC68882 floating-point coprocessor user's manual. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1989.

10

IEEE Computer Society. Standards Committee. Working group of the Microprocessor Standards Subcommittee, and American National Standards Institute, eds. IEEE standard for binary floating-point arithmetic. New York, N.Y.: Institute of Electrical and Electronics Engineers, Inc., 1985.


Book chapters on the topic "Floating point"

1

Kneusel, Ronald T. "Floating Point". In Numbers and Computers, 75–111. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17260-6_3.

2

Kneusel, Ronald T. "Floating Point". In Numbers and Computers, 81–115. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50508-4_3.

3

Lloris Ruiz, Antonio, Encarnación Castillo Morales, Luis Parrilla Roure, Antonio García Ríos, and María José Lloris Meseguer. "Floating Point". In Arithmetic and Algebraic Circuits, 173–220. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67266-9_4.

4

Stoyan, Gisbert, and Agnes Baran. "Floating Point Arithmetic". In Compact Textbooks in Mathematics, 1–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44660-8_1.

5

Kormanyos, Christopher. "Floating-Point Mathematics". In Real-Time C++, 213–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47810-3_12.

6

Deschamps, Jean-Pierre, Gustavo D. Sutter, and Enrique Cantó. "Floating Point Arithmetic". In Lecture Notes in Electrical Engineering, 305–36. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-2987-2_12.

7

Van Hoey, Jo. "Floating-Point Arithmetic". In Beginning x64 Assembly Programming, 95–100. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5076-1_11.

8

Smith, Stephen. "Floating-Point Operations". In Raspberry Pi Assembly Language Programming, 211–32. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5287-1_11.

9

Smith, Stephen. "Floating-Point Operations". In Programming with 64-Bit ARM Assembly Language, 269–89. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5881-1_12.

10

Mihailescu, Marius Iulian, and Stefania Loredana Nita. "Floating-Point Arithmetic". In Pro Cryptography and Cryptanalysis, 109–32. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6367-9_4.


Conference papers on the topic "Floating point"

1

Zaki, Ahmad M., Mohamed H. El-Shafey, Ayman M. Bahaa-Eldin, and Gamal M. Aly. "Accurate floating-point operation using controlled floating-point precision". In 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PacRim). IEEE, 2011. http://dx.doi.org/10.1109/pacrim.2011.6032978.

2

Gemini, Vipin. "Reconfigurable floating point adder". In 2014 1st International Conference on Information Technology, Computer and Electrical Engineering (ICITACEE). IEEE, 2014. http://dx.doi.org/10.1109/icitacee.2014.7065719.

3

Collins, George E., and Werner Krandick. "Multiprecision floating point addition". In the 2000 international symposium. New York, New York, USA: ACM Press, 2000. http://dx.doi.org/10.1145/345542.345585.

4

Fandrianto, Jan, and B. Y. Woo. "VLSI floating-point processors". In 1985 IEEE 7th Symposium on Computer Arithmetic (ARITH). IEEE, 1985. http://dx.doi.org/10.1109/arith.1985.6158947.

5

Han, Kyungtae, Alex G. Olson, and Brian L. Evans. "Automatic Floating-Point to Fixed-Point Transformations". In 2006 Fortieth Asilomar Conference on Signals, Systems and Computers. IEEE, 2006. http://dx.doi.org/10.1109/acssc.2006.356588.

6

Marynets, K. "Study of the Antarctic Circumpolar Current via the Shallow Water Large Scale Modelling". In Floating Offshore Energy Devices. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644901731-11.

Abstract
Abstract. This paper proposes a modelling of the Antarctic Circumpolar Current (ACC) by means of a two-point boundary value problem. As the major means of exchange of water between the great ocean basins (Atlantic, Pacific and Indian), the ACC plays a highly important role in the global climate. Despite its importance, it remains one of the most poorly understood components of global ocean circulation. We present some recent results on the existence and uniqueness of solutions of a two-point nonlinear boundary value problem that arises in the modeling of the flow of the ACC (see discussions in [4-9]).
7

Rao, R. Prakash, N. Dhanunjaya Rao, K. Naveen, and P. Ramya. "IMPLEMENTATION OF THE STANDARD FLOATING POINT MAC USING IEEE 754 FLOATING POINT ADDER". In 2018 Second International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2018. http://dx.doi.org/10.1109/iccmc.2018.8487626.

8

Lam, Michael O., and Barry L. Rountree. "Floating-Point Shadow Value Analysis". In 2016 5th Workshop on Extreme-Scale Programming Tools (ESPT). IEEE, 2016. http://dx.doi.org/10.1109/espt.2016.007.

9

Sheikh, Basit Riaz, and Rajit Manohar. "An Asynchronous Floating-Point Multiplier". In 2012 18th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC). IEEE, 2012. http://dx.doi.org/10.1109/async.2012.19.

10

Aarnio, Tomi, Claudio Brunelli, and Timo Viitanen. "Efficient floating-point texture decompression". In 2010 International Symposium on System-on-Chip - SOC. IEEE, 2010. http://dx.doi.org/10.1109/issoc.2010.5625555.


Reports on the topic "Floating point"

1

Chen, Yirng-An, and Randal E. Bryant. Verification of Floating-Point Adders. Fort Belvoir, VA: Defense Technical Information Center, April 1997. http://dx.doi.org/10.21236/ada346061.

2

Kahan, W. Rational Arithmetic in Floating-Point. Fort Belvoir, VA: Defense Technical Information Center, September 1986. http://dx.doi.org/10.21236/ada175190.

3

Dally, William J. Micro-Optimization of Floating-Point Operations. Fort Belvoir, VA: Defense Technical Information Center, August 1988. http://dx.doi.org/10.21236/ada202001.

4

Hauser, John R. Handling Floating-point Exceptions in Numeric Programs. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada637041.

5

Kong, Soonho, Sicun Gao, and Edmund M. Clarke. Floating-point Bugs in Embedded GNU C Library. Fort Belvoir, VA: Defense Technical Information Center, November 2013. http://dx.doi.org/10.21236/ada600185.

6

Jensen, Debby. Control Implementation for the SPUR Floating Point Coprocessor. Fort Belvoir, VA: Defense Technical Information Center, August 1987. http://dx.doi.org/10.21236/ada604004.

7

Presuhn, R. Textual Conventions for the Representation of Floating-Point Numbers. RFC Editor, August 2011. http://dx.doi.org/10.17487/rfc6340.

8

Chen, Yirng-An, and Randal E. Bryant. PBHD: An Efficient Graph Representation for Floating Point Circuit Verification. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327995.

9

Nagayama, Shinobu, Tsutomu Sasao, and Jon T. Butler. Floating-Point Numeric Function Generators Based on Piecewise-Split EVMDDs. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada547647.

10

Nash, J. G. VLSI (Very Large Scale Integration) Floating Point Chip Design Study. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada164198.

