Ready-made bibliography on the topic "Floating point"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Floating point".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the online abstract of the work, if the relevant details are available in the metadata.

Journal articles on the topic "Floating point"

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error". Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA". International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic". Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Abstract:
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
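The survey's remark that one can compute the exact rounding error of some floating-point operations refers to error-free transformations such as Knuth's TwoSum; a minimal Python sketch (illustrative, not code from the paper):

```python
from fractions import Fraction

def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: returns (s, t) with s = fl(a + b) and t the exact
    rounding error, so that a + b == s + t holds exactly."""
    s = a + b
    b_virtual = s - a          # the part of b that made it into s
    a_virtual = s - b_virtual  # the part of a that made it into s
    t = (a - a_virtual) + (b - b_virtual)
    return s, t

s, t = two_sum(0.1, 0.2)
# s + t reproduces the exact mathematical sum of the two doubles:
assert Fraction(s) + Fraction(t) == Fraction(0.1) + Fraction(0.2)
```

The pair (s, t) is the building block of compensated summation and double-double arithmetic, which is how accuracy can be extended beyond working precision as the survey describes.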
4

Blinn, J. F. "Floating-point tricks". IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

5

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS". ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

6

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Abstract:
In this paper, we present the design and implementation of floating-point addition and floating-point multiplication. Among existing multiplier designs, floating-point multiplication and floating-point addition offer high precision and greater accuracy for the data representation of the image. This project is designed and simulated in Verilog on Xilinx ISE 14.7. Simulation results show area reduction and delay reduction compared to the conventional method.
Keywords: FIR Filter, Floating point Addition, Floating point Multiplication, Carry Look Ahead Adder
7

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

8

Sayers, David, and Jeremy du Croz. "Validating floating-point systems". Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

9

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication". IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

10

Nannarelli, Alberto. "Tunable Floating-Point Adder". IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.


Doctoral dissertations on the topic "Floating point"

1

Skogstrøm, Kristian. "Implementation of Floating-point Coprocessor". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9202.

Abstract:

This thesis presents the architecture and implementation of a high-performance floating-point coprocessor for Atmel's new microcontroller. The coprocessor architecture is based on a fused multiply-add pipeline developed in the specialization project, TDT4720. This pipeline has been optimized significantly and extended to support negation of all operands and single-precision input and output. New hardware has been designed for the decode/fetch unit, the register file, the compare/convert pipeline and the approximation tables. Division and square root are performed in software using Newton-Raphson iteration. The Verilog RTL implementation has been synthesized at 167 MHz using a 0.18 um standard cell library. The total area of the final implementation is 107 225 gates. The coprocessor has also been synthesized with the CPU. Test programs have been run to verify that the coprocessor works correctly. A complete verification of the floating-point coprocessor, however, has not been performed due to time limitations.
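The Newton-Raphson scheme mentioned for software division refines a reciprocal estimate with x ← x·(2 − d·x), roughly doubling the number of correct digits per step; a toy Python sketch of the idea (illustrative only, not the thesis' coprocessor code):

```python
def newton_reciprocal(d: float, x0: float, iters: int = 6) -> float:
    """Approximate 1/d by Newton-Raphson iteration x <- x * (2 - d * x).
    Converges quadratically when the seed satisfies |1 - d*x0| < 1."""
    x = x0
    for _ in range(iters):
        x = x * (2.0 - d * x)
    return x

# a / b is then computed as a * (1/b); hardware typically seeds x0
# from a small lookup table, which is what the approximation tables above do.
approx = newton_reciprocal(3.0, 0.3)
assert abs(approx - 1.0 / 3.0) < 1e-15
```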

2

Zhang, Yiwei. "Biophysically accurate floating point neuroprocessors". Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.544427.

3

Baidas, Zaher Abdulkarim. "High-level floating-point synthesis". Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325049.

4

Duracz, Jan Andrzej. "Verification of floating point programs". Thesis, Aston University, 2010. http://publications.aston.ac.uk/15778/.

Abstract:
In this thesis we present an approach to automated verification of floating point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
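For a sense of how interval methods enclose floating-point rounding error, here is a toy outward-rounded interval addition in Python (a deliberate simplification; the function intervals used in the thesis are far more general):

```python
import math
from fractions import Fraction

def widen(lo: float, hi: float) -> tuple[float, float]:
    # Outward rounding: step each endpoint one float away from the interval,
    # which over-approximates the at-most-half-ulp rounding of the additions.
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def interval_add(a: tuple[float, float], b: tuple[float, float]) -> tuple[float, float]:
    """Add two intervals so the result provably contains every real sum."""
    return widen(a[0] + b[0], a[1] + b[1])

lo, hi = interval_add((0.1, 0.1), (0.2, 0.2))
# The exact real sum of the two doubles is guaranteed to lie inside [lo, hi]:
assert Fraction(lo) <= Fraction(0.1) + Fraction(0.2) <= Fraction(hi)
```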
5

Ross, Johan, and Hans Engström. "Voice Codec for Floating Point Processor". Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15763.

Abstract:

As part of an ongoing project at the department of electrical engineering, ISY, at Linköping University, a voice decoder using floating point formats has been the focus of this master thesis. Previous work has been done developing an mp3-decoder using the floating point formats. All is expected to be implemented on a single DSP. The ever present desire to make things smaller, more efficient and less power consuming are the main reasons for this master thesis regarding the use of a floating point format instead of the traditional integer format in a GSM codec. The idea with the low precision floating point format is to be able to reduce the size of the memory. This in turn reduces the size of the total chip area needed and also decreases the power consumption. One main question is if this can be done with the floating point format without losing too much sound quality of the speech. When using the integer format, one can represent every value in the range depending on how many bits are being used. When using a floating point format you can represent larger values using fewer bits compared to the integer format, but you lose representation of some values and have to round the values off. From the tests that have been made with the decoder during this thesis, it has been found that the audible difference between the two formats is very small and can hardly be heard, if at all. The rounding seems to have very little effect on the quality of the sound, and the implementation of the codec has succeeded in reproducing similar sound quality to the GSM standard decoder.
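The range-versus-rounding trade-off described above can be demonstrated with IEEE half precision (binary16) as a stand-in for a low-precision floating format; this sketch is illustrative and not tied to the actual codec:

```python
import struct

def to_half(x: float) -> float:
    """Round-trip through IEEE binary16 (struct format 'e'): 1 sign bit,
    5 exponent bits, 10 significand bits -- wide range, coarse spacing."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# A 16-bit two's-complement integer stops at 32767; half precision reaches
# 65504 -- at the price of rounding: near 60000 the float spacing is 32,
# so neighbouring values collapse onto the same representable number.
assert to_half(65504.0) == 65504.0              # the largest finite binary16
assert to_half(60001.0) == to_half(60000.0)     # both round to the same value
```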

6

Englund, Madeleine. "Hybrid Floating-point Units in FPGAs". Thesis, Linköpings universitet, Datorteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86587.

Abstract:
Floating point numbers are used in many applications that would be well suited to a higher parallelism than that offered in a CPU. In these cases, an FPGA, with its ability to handle multiple calculations simultaneously, could be the solution. Unfortunately, floating point operations implemented in an FPGA are often resource intensive, which means that many developers avoid floating point solutions in FPGAs, or avoid using FPGAs for floating point applications. Here, the potential to get less expensive floating point operations by using a higher radix for the floating point numbers and by using and expanding the existing DSP block in the FPGA is investigated. One of the goals is that the FPGA should be usable both for users who have floating point in their designs and for those who do not. In order to motivate hard floating point blocks in the FPGA, these must not consume too much of the limited resources. This work shows that floating point addition becomes smaller with the use of the higher radix, while multiplication becomes smaller by using the hardware of the DSP block. When both operations are examined at the same time, it turns out that it is possible to get a reduced area, compared to separate floating point units, by utilizing both the DSP block and the higher radix for the floating point numbers.
7

Xiao, Yancheng. "Two floating point LLL reduction algorithms". Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114503.

Abstract:
The Lenstra, Lenstra and Lovász (LLL) reduction is the most popular lattice reduction and is a powerful tool for solving many complex problems in mathematics and computer science. The blocking technique casts matrix algorithms in terms of matrix-matrix operations to permit efficient reuse of data in the algorithms. In this thesis, we use the blocking technique to develop two floating point block LLL reduction algorithms, the left-to-right block LLL (LRBLLL) reduction algorithm and the alternating partition block LLL (APBLLL) reduction algorithm, and give the complexity analysis of these two algorithms. We compare these two block LLL reduction algorithms with the original LLL reduction algorithm (in floating point arithmetic) and the partial LLL (PLLL) reduction algorithm in the literature in terms of CPU run time, flops and relative backward errors. The simulation results show that the overall CPU run time of the two block LLL reduction algorithms is faster than the partial LLL reduction algorithm and much faster than the original LLL, even though the two block algorithms cost more flops than the partial LLL reduction algorithm in some cases. The shortcoming of the two block algorithms is that sometimes they may not be as numerically stable as the original and partial LLL reduction algorithms. The parallelization of APBLLL is discussed.
8

Kupriianova, Olga. "Towards a modern floating-point environment". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066584/document.

Abstract:
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary, such as exp or log, and special, such as erf or Γ); the second is to provide IEEE754 operations that mix the inputs and the output of different radixes. As the number of various implementations for each mathematical function is large, this work is focused on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for the user-specified functions. So it may be considered as a black-box function generator. This work contains a novel algorithm for domain splitting and an approach to replace branching on reconstruction by a polynomial. This new domain splitting algorithm produces fewer subdomains, and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the revision of the IEEE754 Standard in 2008 it has been possible to mix numbers of different precisions in one operation. However, there is no mechanism that allows users to mix numbers of different radices in one operation. This research starts an examination of mixed-radix arithmetic with a worst-case search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently-set rounding mode and produces correctly-rounded results.
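Correctly-rounded decimal-to-binary conversion, which the thesis extends to arbitrary-length inputs, can be checked against exact rational arithmetic; a Python sketch using the built-in float(), which is itself correctly rounded in CPython (illustrative only):

```python
import math
from fractions import Fraction

def is_correctly_rounded(dec: str) -> bool:
    """True if float(dec) is a binary64 value nearest the exact decimal:
    neither neighbouring double may be strictly closer to the exact value."""
    x = float(dec)                 # the conversion under test
    exact = Fraction(dec)          # the exact decimal value as a rational
    err = abs(Fraction(x) - exact)
    below = math.nextafter(x, -math.inf)
    above = math.nextafter(x, math.inf)
    return err <= abs(Fraction(below) - exact) and err <= abs(Fraction(above) - exact)

assert is_correctly_rounded("0.1")
# A value just above the midpoint between 2^53 and 2^53 + 2 must round up:
assert is_correctly_rounded("9007199254740993.0000000000000001")
```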
9

Kupriianova, Olga. "Towards a modern floating-point environment". Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066584.

Abstract:
This work investigates two ways of enlarging the current floating-point environment. The first is to support several implementation versions of each mathematical function (elementary, such as exp or log, and special, such as erf or Γ); the second is to provide IEEE754 operations that mix the inputs and the output of different radixes. As the number of various implementations for each mathematical function is large, this work is focused on code generation. Our code generator supports a huge variety of functions: it generates parametrized implementations for the user-specified functions. So it may be considered as a black-box function generator. This work contains a novel algorithm for domain splitting and an approach to replace branching on reconstruction by a polynomial. This new domain splitting algorithm produces fewer subdomains, and the polynomial degrees on adjacent subdomains do not change much. To produce vectorizable implementations, if-else statements on the reconstruction step have to be avoided. Since the revision of the IEEE754 Standard in 2008 it has been possible to mix numbers of different precisions in one operation. However, there is no mechanism that allows users to mix numbers of different radices in one operation. This research starts an examination of mixed-radix arithmetic with a worst-case search for FMA. A novel algorithm to convert a decimal character sequence of arbitrary length to a binary floating-point number is presented. It is independent of the currently-set rounding mode and produces correctly-rounded results.
10

Aamodt, Tor. "Floating-point to fixed-point compilation and embedded architectural support". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58787.pdf.


Books on the topic "Floating point"

1

Wallis, Peter J. L., ed. Improving Floating-point Programming. Chichester, England: Wiley, 1990.

2

Muller, Jean-Michel, Nicolas Brunie, Florent de Dinechin, Claude-Pierre Jeannerod, Mioara Joldes, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, and Serge Torres. Handbook of Floating-Point Arithmetic. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76526-6.

3

Muller, Jean-Michel, Nicolas Brisebarre, Florent de Dinechin, Claude-Pierre Jeannerod, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, Damien Stehlé, and Serge Torres. Handbook of Floating-Point Arithmetic. Boston: Birkhäuser Boston, 2010. http://dx.doi.org/10.1007/978-0-8176-4705-6.

4

SpringerLink (Online service), ed. Handbook of Floating-Point Arithmetic. Boston: Birkhäuser, 2010.

5

Vekaria, R. Bandpass Filters Using Floating Point Arithmetic. London: University of East London, 1994.

6

Aamodt, Tor. Floating-point to fixed-point compilation and embedded architectural support. Ottawa: National Library of Canada, 2001.

7

Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87181-9.

8

Russinoff, David M. Formal Verification of Floating-Point Hardware Design. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-95513-1.

9

Motorola. MC68881/MC68882 Floating-Point Coprocessor User's Manual. 2nd ed. Englewood Cliffs, N.J.: Prentice Hall, 1989.

10

IEEE Computer Society, Standards Committee, Working Group of the Microprocessor Standards Subcommittee, and American National Standards Institute, eds. IEEE Standard for Binary Floating-Point Arithmetic. New York, N.Y.: Institute of Electrical and Electronics Engineers, Inc., 1985.


Book chapters on the topic "Floating point"

1

Kneusel, Ronald T. "Floating Point". In Numbers and Computers, 75–111. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17260-6_3.

2

Kneusel, Ronald T. "Floating Point". In Numbers and Computers, 81–115. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50508-4_3.

3

Lloris Ruiz, Antonio, Encarnación Castillo Morales, Luis Parrilla Roure, Antonio García Ríos, and María José Lloris Meseguer. "Floating Point". In Arithmetic and Algebraic Circuits, 173–220. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67266-9_4.

4

Stoyan, Gisbert, and Agnes Baran. "Floating Point Arithmetic". In Compact Textbooks in Mathematics, 1–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44660-8_1.

5

Kormanyos, Christopher. "Floating-Point Mathematics". In Real-Time C++, 213–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47810-3_12.

6

Deschamps, Jean-Pierre, Gustavo D. Sutter, and Enrique Cantó. "Floating Point Arithmetic". In Lecture Notes in Electrical Engineering, 305–36. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-2987-2_12.

7

Van Hoey, Jo. "Floating-Point Arithmetic". In Beginning x64 Assembly Programming, 95–100. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5076-1_11.

8

Smith, Stephen. "Floating-Point Operations". In Raspberry Pi Assembly Language Programming, 211–32. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5287-1_11.

9

Smith, Stephen. "Floating-Point Operations". In Programming with 64-Bit ARM Assembly Language, 269–89. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5881-1_12.

10

Mihailescu, Marius Iulian, and Stefania Loredana Nita. "Floating-Point Arithmetic". In Pro Cryptography and Cryptanalysis, 109–32. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6367-9_4.


Conference papers on the topic "Floating point"

1

Zaki, Ahmad M., Mohamed H. El-Shafey, Ayman M. Bahaa-Eldin, and Gamal M. Aly. "Accurate floating-point operation using controlled floating-point precision". In 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PacRim). IEEE, 2011. http://dx.doi.org/10.1109/pacrim.2011.6032978.

2

Gemini, Vipin. "Reconfigurable floating point adder". In 2014 1st International Conference on Information Technology, Computer and Electrical Engineering (ICITACEE). IEEE, 2014. http://dx.doi.org/10.1109/icitacee.2014.7065719.

3

Collins, George E., and Werner Krandick. "Multiprecision floating point addition". In Proceedings of the 2000 International Symposium. New York, New York, USA: ACM Press, 2000. http://dx.doi.org/10.1145/345542.345585.

4

Fandrianto, Jan, and B. Y. Woo. "VLSI floating-point processors". In 1985 IEEE 7th Symposium on Computer Arithmetic (ARITH). IEEE, 1985. http://dx.doi.org/10.1109/arith.1985.6158947.

5

Han, Kyungtae, Alex G. Olson, and Brian L. Evans. "Automatic Floating-Point to Fixed-Point Transformations". In 2006 Fortieth Asilomar Conference on Signals, Systems and Computers. IEEE, 2006. http://dx.doi.org/10.1109/acssc.2006.356588.

6

Marynets, K. "Study of the Antarctic Circumpolar Current via the Shallow Water Large Scale Modelling". In Floating Offshore Energy Devices. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644901731-11.

Abstract:
This paper proposes a modelling of the Antarctic Circumpolar Current (ACC) by means of a two-point boundary value problem. As the major means of exchange of water between the great ocean basins (Atlantic, Pacific and Indian), the ACC plays a highly important role in the global climate. Despite its importance, it remains one of the most poorly understood components of global ocean circulation. We present some recent results on the existence and uniqueness of solutions of a two-point nonlinear boundary value problem that arises in the modeling of the flow of the ACC (see discussions in [4-9]).
7

Rao, R. Prakash, N. Dhanunjaya Rao, K. Naveen, and P. Ramya. "IMPLEMENTATION OF THE STANDARD FLOATING POINT MAC USING IEEE 754 FLOATING POINT ADDER". In 2018 Second International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2018. http://dx.doi.org/10.1109/iccmc.2018.8487626.

8

Lam, Michael O., and Barry L. Rountree. "Floating-Point Shadow Value Analysis". In 2016 5th Workshop on Extreme-Scale Programming Tools (ESPT). IEEE, 2016. http://dx.doi.org/10.1109/espt.2016.007.

9

Sheikh, Basit Riaz, and Rajit Manohar. "An Asynchronous Floating-Point Multiplier". In 2012 18th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC). IEEE, 2012. http://dx.doi.org/10.1109/async.2012.19.

10

Aarnio, Tomi, Claudio Brunelli, and Timo Viitanen. "Efficient floating-point texture decompression". In 2010 International Symposium on System-on-Chip - SOC. IEEE, 2010. http://dx.doi.org/10.1109/issoc.2010.5625555.


Organizational reports on the topic "Floating point"

1

Chen, Yirng-An, and Randal E. Bryant. Verification of Floating-Point Adders. Fort Belvoir, VA: Defense Technical Information Center, April 1997. http://dx.doi.org/10.21236/ada346061.

2

Kahan, W. Rational Arithmetic in Floating-Point. Fort Belvoir, VA: Defense Technical Information Center, September 1986. http://dx.doi.org/10.21236/ada175190.

3

Dally, William J. Micro-Optimization of Floating-Point Operations. Fort Belvoir, VA: Defense Technical Information Center, August 1988. http://dx.doi.org/10.21236/ada202001.

4

Hauser, John R. Handling Floating-point Exceptions in Numeric Programs. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada637041.

5

Kong, Soonho, Sicun Gao, and Edmund M. Clarke. Floating-point Bugs in Embedded GNU C Library. Fort Belvoir, VA: Defense Technical Information Center, November 2013. http://dx.doi.org/10.21236/ada600185.

6

Jensen, Debby. Control Implementation for the SPUR Floating Point Coprocessor. Fort Belvoir, VA: Defense Technical Information Center, August 1987. http://dx.doi.org/10.21236/ada604004.

7

Presuhn, R. Textual Conventions for the Representation of Floating-Point Numbers. RFC Editor, August 2011. http://dx.doi.org/10.17487/rfc6340.

8

Chen, Yirng-An, and Randal E. Bryant. PBHD: An Efficient Graph Representation for Floating Point Circuit Verification. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327995.

9

Nagayama, Shinobu, Tsutomu Sasao, and Jon T. Butler. Floating-Point Numeric Function Generators Based on Piecewise-Split EVMDDs. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada547647.

10

Nash, J. G. VLSI (Very Large Scale Integration) Floating Point Chip Design Study. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada164198.
