Academic literature on the topic "Floating point"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Floating point".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Floating point"
Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error". Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.
Somasekhar, M. "Floating Point Operations in PipeRench CGRA". International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.
Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic". Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.
Blinn, J. F. "Floating-point tricks". IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.
Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS". ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.
Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.
Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.
Sayers, David, and Jeremy du Croz. "Validating floating-point systems". Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.
Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication". IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.
Nannarelli, Alberto. "Tunable Floating-Point Adder". IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.
Texto completoTesis sobre el tema "Floating point"
Skogstrøm, Kristian. "Implementation of Floating-point Coprocessor". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9202.
This thesis presents the architecture and implementation of a high-performance floating-point coprocessor for Atmel's new microcontroller. The coprocessor architecture is based on a fused multiply-add pipeline developed in the specialization project, TDT4720. This pipeline has been optimized significantly and extended to support negation of all operands and single-precision input and output. New hardware has been designed for the decode/fetch unit, the register file, the compare/convert pipeline, and the approximation tables. Division and square root are performed in software using Newton-Raphson iteration. The Verilog RTL implementation has been synthesized at 167 MHz using a 0.18 um standard cell library. The total area of the final implementation is 107 225 gates. The coprocessor has also been synthesized with the CPU. Test programs have been run to verify that the coprocessor works correctly. A complete verification of the floating-point coprocessor, however, has not been performed due to time limitations.
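The abstract's approach of computing division in software via Newton-Raphson iteration is a standard technique in FPUs without a hardware divider. A minimal sketch for positive divisors, assuming a table-free linear seed in place of the thesis's approximation tables (the seed constants are illustrative, not taken from the thesis):

```python
import math

def nr_reciprocal(a, x, iterations=5):
    # Newton-Raphson for f(x) = 1/x - a: the update x <- x*(2 - a*x)
    # roughly doubles the number of correct bits per iteration.
    for _ in range(iterations):
        x = x * (2.0 - a * x)
    return x

def nr_divide(n, d):
    # Assumes d > 0 for brevity. Normalize d = m * 2**e with m in
    # [0.5, 1), so a single seed formula covers all inputs.
    m, e = math.frexp(d)
    # Linear minimax seed for 1/m on [0.5, 1); real hardware would
    # look the seed up in a small approximation table instead.
    seed = 48.0 / 17.0 - (32.0 / 17.0) * m
    return n * math.ldexp(nr_reciprocal(m, seed), -e)
```

With this seed, four to five iterations suffice to reach double-precision accuracy, e.g. `nr_divide(1.0, 3.0)` converges to 1/3.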
Zhang, Yiwei. "Biophysically accurate floating point neuroprocessors". Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.544427.
Baidas, Zaher Abdulkarim. "High-level floating-point synthesis". Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325049.
Duracz, Jan Andrzej. "Verification of floating point programs". Thesis, Aston University, 2010. http://publications.aston.ac.uk/15778/.
Ross, Johan, and Hans Engström. "Voice Codec for Floating Point Processor". Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15763.
Texto completoAs part of an ongoing project at the department of electrical engineering, ISY, at Linköping University, a voice decoder using floating point formats has been the focus of this master thesis. Previous work has been done developing an mp3-decoder using the floating point formats. All is expected to be implemented on a single DSP.The ever present desire to make things smaller, more efficient and less power consuming are the main reasons for this master thesis regarding the use of a floating point format instead of the traditional integer format in a GSM codec. The idea with the low precision floating point format is to be able to reduce the size of the memory. This in turn reduces the size of the total chip area needed and also decreases the power consumption.One main question is if this can be done with the floating point format without losing too much sound quality of the speech. When using the integer format, one can represent every value in the range depending on how many bits are being used. When using a floating point format you can represent larger values using fewer bits compared to the integer format but you lose representation of some values and have to round the values off.From the tests that have been made with the decoder during this thesis, it has been found that the audible difference between the two formats is very small and can hardly be heard, if at all. The rounding seems to have very little effect on the quality of the sound and the implementation of the codec has succeeded in reproducing similar sound quality to the GSM standard decoder.
Englund, Madeleine. "Hybrid Floating-point Units in FPGAs". Thesis, Linköpings universitet, Datorteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86587.
Xiao, Yancheng. "Two floating point LLL reduction algorithms". Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114503.
Texto completoLe Lenstra, Lenstra et réduction Lovasz (LLL) est la réduction de réseaux plus populaire et il est un outil puissant pour résoudre de nombreux problèmes complexes en mathématiques et en informatique. La technique bloc LLL bloquante reformule les algorithmes en termes de matrice-matrice opérations de permettre la réutilisation efficace des données dans les algorithmes bloc LLL. Dans cette thèse, nous utilisons la technique de blocage de développer les deux algorithmes de réduction bloc LLL en points flottants, l'algorithme de réduction bloc LLL de la gauche vers la droite (LRBLLL) et l'algorithme de réduction bloc LLL en partition alternative (APBLLL), et donner a l'analyse de la complexité des ces deux algorithmes. Nous comparons ces deux algorithmes de réduction bloc LLL avec l'algorithme de réduction LLL original (en arithmétique au point flottant) et l'algorithme de réduction LLL partielle (PLLL) dans la littérature en termes de temps d'exécution CPU, flops et les erreurs de l'arrière par rapport. Les résultats des simulations montrent que les temps d'exécution CPU pour les deux algorithmes de réduction blocs LLL sont plus rapides que l'algorithme de réduction LLL partielle et beaucoup plus rapide que la réduction LLL originale, même si les deux algorithmes par bloc coûtent plus de flops que l'algorithme de réduction LLL partielle dans certains cas. L'inconvénient de ces deux algorithmes par blocs, c'est que parfois, ils peuvent n'être pas aussi stable numériquement que les algorithmes originaux et les algorithmes de réduction LLL partielle. Le parallélisation de APBLLL est discutée.
Kupriianova, Olga. "Towards a modern floating-point environment". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066584/document.
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary, such as exp or log, and special, such as erf or Γ); the second is to provide IEEE754 operations that mix the inputs and the output of different radixes. As the number of possible implementations of each mathematical function is large, this work focuses on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for user-specified functions, so it may be considered a black-box function generator. This work contains a novel algorithm for domain splitting and an approach that replaces branching on reconstruction by a polynomial. The new domain splitting algorithm produces fewer subdomains, and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the 2008 revision of the IEEE754 Standard it has been possible to mix numbers of different precisions in one operation; however, there is no mechanism that allows users to mix numbers of different radices in one operation. This research begins an examination of mixed-radix arithmetic with a worst-case search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently-set rounding mode and produces correctly-rounded results.
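The last contribution, converting a decimal character sequence of arbitrary length to a correctly-rounded binary floating-point number, can be sketched in a few lines of Python. This uses exact rational arithmetic rather than the thesis's algorithm, relying on the fact that CPython's integer true division is itself correctly rounded to nearest-even:

```python
from fractions import Fraction

def decimal_to_double(s):
    # Fraction(s) captures the exact value of the decimal string,
    # however many digits it has; correctly-rounded int/int division
    # then yields the nearest binary64 value.
    q = Fraction(s)
    return q.numerator / q.denominator

# Digits far beyond the ~17 a double can distinguish still influence
# the rounding correctly, matching the built-in parser:
s = "0.1" + "0" * 100 + "1"
assert decimal_to_double(s) == float(s)
```

This is a reference sketch, not an efficient algorithm: the intermediate integers grow with the length of the input string, which is exactly the cost that dedicated conversion algorithms avoid.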
Aamodt, Tor. "Floating-point to fixed-point compilation and embedded architectural support". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58787.pdf.
Books on the topic "Floating point"
Wallis, Peter J. L., ed. Improving floating-point programming. Chichester, England: Wiley, 1990.
Muller, Jean-Michel, Nicolas Brunie, Florent de Dinechin, Claude-Pierre Jeannerod, Mioara Joldes, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, and Serge Torres. Handbook of Floating-Point Arithmetic. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76526-6.
Muller, Jean-Michel, Nicolas Brisebarre, Florent de Dinechin, Claude-Pierre Jeannerod, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, Damien Stehlé, and Serge Torres. Handbook of Floating-Point Arithmetic. Boston: Birkhäuser Boston, 2010. http://dx.doi.org/10.1007/978-0-8176-4705-6.
SpringerLink (Online service), ed. Handbook of floating-point arithmetic. Boston: Birkhäuser, 2010.
Vekaria, R. Bandpass filters using floating point arithmetic. London: University of East London, 1994.
Aamodt, Tor. Floating-point to fixed-point compilation and embedded architectural support. Ottawa: National Library of Canada, 2001.
Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87181-9.
Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-95513-1.
Motorola. MC68881/MC68882 floating-point coprocessor user's manual. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1989.
IEEE Computer Society Standards Committee, Working Group of the Microprocessor Standards Subcommittee, and American National Standards Institute, eds. IEEE standard for binary floating-point arithmetic. New York, N.Y.: Institute of Electrical and Electronics Engineers, Inc., 1985.
Buscar texto completoCapítulos de libros sobre el tema "Floating point"
Kneusel, Ronald T. "Floating Point". In Numbers and Computers, 75–111. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17260-6_3.
Kneusel, Ronald T. "Floating Point". In Numbers and Computers, 81–115. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50508-4_3.
Lloris Ruiz, Antonio, Encarnación Castillo Morales, Luis Parrilla Roure, Antonio García Ríos, and María José Lloris Meseguer. "Floating Point". In Arithmetic and Algebraic Circuits, 173–220. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67266-9_4.
Stoyan, Gisbert, and Agnes Baran. "Floating Point Arithmetic". In Compact Textbooks in Mathematics, 1–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44660-8_1.
Kormanyos, Christopher. "Floating-Point Mathematics". In Real-Time C++, 213–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47810-3_12.
Deschamps, Jean-Pierre, Gustavo D. Sutter, and Enrique Cantó. "Floating Point Arithmetic". In Lecture Notes in Electrical Engineering, 305–36. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-2987-2_12.
Van Hoey, Jo. "Floating-Point Arithmetic". In Beginning x64 Assembly Programming, 95–100. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5076-1_11.
Smith, Stephen. "Floating-Point Operations". In Raspberry Pi Assembly Language Programming, 211–32. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5287-1_11.
Smith, Stephen. "Floating-Point Operations". In Programming with 64-Bit ARM Assembly Language, 269–89. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5881-1_12.
Mihailescu, Marius Iulian, and Stefania Loredana Nita. "Floating-Point Arithmetic". In Pro Cryptography and Cryptanalysis, 109–32. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6367-9_4.
Texto completoActas de conferencias sobre el tema "Floating point"
Zaki, Ahmad M., Mohamed H. El-Shafey, Ayman M. Bahaa-Eldin, and Gamal M. Aly. "Accurate floating-point operation using controlled floating-point precision". In 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PacRim). IEEE, 2011. http://dx.doi.org/10.1109/pacrim.2011.6032978.
Gemini, Vipin. "Reconfigurable floating point adder". In 2014 1st International Conference on Information Technology, Computer and Electrical Engineering (ICITACEE). IEEE, 2014. http://dx.doi.org/10.1109/icitacee.2014.7065719.
Collins, George E., and Werner Krandick. "Multiprecision floating point addition". In The 2000 International Symposium. New York, New York, USA: ACM Press, 2000. http://dx.doi.org/10.1145/345542.345585.
Fandrianto, Jan, and B. Y. Woo. "VLSI floating-point processors". In 1985 IEEE 7th Symposium on Computer Arithmetic (ARITH). IEEE, 1985. http://dx.doi.org/10.1109/arith.1985.6158947.
Han, Kyungtae, Alex G. Olson, and Brian L. Evans. "Automatic Floating-Point to Fixed-Point Transformations". In 2006 Fortieth Asilomar Conference on Signals, Systems and Computers. IEEE, 2006. http://dx.doi.org/10.1109/acssc.2006.356588.
Marynets, K. "Study of the Antarctic Circumpolar Current via the Shallow Water Large Scale Modelling". In Floating Offshore Energy Devices. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644901731-11.
Rao, R. Prakash, N. Dhanunjaya Rao, K. Naveen, and P. Ramya. "Implementation of the Standard Floating Point MAC Using IEEE 754 Floating Point Adder". In 2018 Second International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2018. http://dx.doi.org/10.1109/iccmc.2018.8487626.
Lam, Michael O., and Barry L. Rountree. "Floating-Point Shadow Value Analysis". In 2016 5th Workshop on Extreme-Scale Programming Tools (ESPT). IEEE, 2016. http://dx.doi.org/10.1109/espt.2016.007.
Sheikh, Basit Riaz, and Rajit Manohar. "An Asynchronous Floating-Point Multiplier". In 2012 18th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC). IEEE, 2012. http://dx.doi.org/10.1109/async.2012.19.
Aarnio, Tomi, Claudio Brunelli, and Timo Viitanen. "Efficient floating-point texture decompression". In 2010 International Symposium on System-on-Chip - SOC. IEEE, 2010. http://dx.doi.org/10.1109/issoc.2010.5625555.
Texto completoInformes sobre el tema "Floating point"
Chen, Yirng-An, and Randal E. Bryant. Verification of Floating-Point Adders. Fort Belvoir, VA: Defense Technical Information Center, April 1997. http://dx.doi.org/10.21236/ada346061.
Kahan, W. Rational Arithmetic in Floating-Point. Fort Belvoir, VA: Defense Technical Information Center, September 1986. http://dx.doi.org/10.21236/ada175190.
Dally, William J. Micro-Optimization of Floating-Point Operations. Fort Belvoir, VA: Defense Technical Information Center, August 1988. http://dx.doi.org/10.21236/ada202001.
Hauser, John R. Handling Floating-point Exceptions in Numeric Programs. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada637041.
Kong, Soonho, Sicun Gao, and Edmund M. Clarke. Floating-point Bugs in Embedded GNU C Library. Fort Belvoir, VA: Defense Technical Information Center, November 2013. http://dx.doi.org/10.21236/ada600185.
Jensen, Debby. Control Implementation for the SPUR Floating Point Coprocessor. Fort Belvoir, VA: Defense Technical Information Center, August 1987. http://dx.doi.org/10.21236/ada604004.
Presuhn, R. Textual Conventions for the Representation of Floating-Point Numbers. RFC Editor, August 2011. http://dx.doi.org/10.17487/rfc6340.
Chen, Yirng-An, and Randal E. Bryant. PBHD: An Efficient Graph Representation for Floating Point Circuit Verification. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327995.
Nagayama, Shinobu, Tsutomu Sasao, and Jon T. Butler. Floating-Point Numeric Function Generators Based on Piecewise-Split EVMDDs. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada547647.
Nash, J. G. VLSI (Very Large Scale Integration) Floating Point Chip Design Study. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada164198.