Academic literature on the topic 'Error Transformations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Error Transformations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Error Transformations"

1

Chowdhary, Sangeeta, and Santosh Nagarakatte. "Fast shadow execution for debugging numerical errors using error free transformations." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 1845–72. http://dx.doi.org/10.1145/3563353.

Abstract:
This paper proposes EFTSanitizer, a fast shadow execution framework for detecting and debugging numerical errors during late stages of testing, especially for long-running applications. Any shadow execution framework needs an oracle to compare against the floating point (FP) execution. This paper makes a case for using error-free transformations, which are sequences of operations that compute the error of a primitive operation with existing hardware-supported FP operations, as an oracle for shadow execution. Although the error of a single correctly rounded FP operation is bounded, the accumulation of errors across operations can result in exceptions, slow convergence, and even crashes. To ease the job of debugging such errors, EFTSanitizer provides a directed acyclic graph (DAG) that highlights the propagation of errors that result in exceptions or crashes. Unlike prior work, DAGs produced by EFTSanitizer include operations that span various function calls while keeping memory usage bounded. To enable the use of such shadow execution tools with long-running applications, EFTSanitizer also supports starting the shadow execution at an arbitrary point in the dynamic execution, which we call selective shadow execution. EFTSanitizer is an order of magnitude faster than prior state-of-the-art shadow execution tools such as FPSanitizer and Herbgrind. We have discovered new numerical errors and debugged them using EFTSanitizer.
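As background for the abstract above, the snippet below is a minimal Python sketch of one such error-free transformation, Knuth's TwoSum, which recovers the rounding error of a single floating-point addition using only ordinary hardware FP operations. It illustrates the kind of oracle EFTSanitizer builds on; it is not the tool's actual implementation.

```python
def two_sum(a: float, b: float):
    """Error-free transformation of addition (Knuth's TwoSum).

    Returns (s, e) where s = fl(a + b) is the rounded sum and
    a + b = s + e holds exactly in real arithmetic."""
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

s, e = two_sum(1e16, 2.5)
print(s, e)  # e is the rounding error of the addition; in exact arithmetic s + e equals 1e16 + 2.5
```
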
2

Freeman, J. M., and D. G. Ford. "Automated error analysis of serial manipulators and servo heads." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 217, no. 9 (September 1, 2003): 1077–84. http://dx.doi.org/10.1243/095440603322407308.

Abstract:
This paper presents a general mathematical treatment of serial manipulators, an important example of which is the servo head. The paper includes validation by application to the angle head via comparison with the previously known transformations and a new application to the error analysis of the angle head. The usual approach to the error analysis of a servo head is to develop a geometrical model by elementary geometrical considerations using trigonometric relationships and various simplifying assumptions. This approach is very error prone, difficult to verify and extremely time consuming. The techniques described here constitute matrix methods that have been programmed in a general way to derive automatically the analytical equations relating the angles of rotation of the head and alignment errors in the head to the position of the tool and errors in that position. The approach is to use rotation and transformation matrices to evaluate the influence of the various errors such as offsets and angular errors. A general approach to the sign convention and notation for angular errors is presented in an attempt to reduce the possibility of errors of definition.
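As a toy illustration of this matrix-based approach (not the authors' automated derivation), the Python sketch below pushes an assumed small angular alignment error and an assumed offset error through a simple rotation-plus-offset chain and reports the resulting tool-tip position error; all numerical values are made up for the example.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

theta, offset = np.deg2rad(30.0), 200.0        # nominal head angle and tool offset (assumed, mm)
d_theta, d_offset = np.deg2rad(0.01), 0.02     # assumed angular and offset errors

nominal = rot_z(theta) @ trans(offset, 0.0, 0.0)
actual = rot_z(theta + d_theta) @ trans(offset + d_offset, 0.0, 0.0)

tip = np.array([0.0, 0.0, 0.0, 1.0])           # tool tip in its local frame
print((actual - nominal) @ tip)                # resulting tool-tip position error
```
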
3

Eckert, R. Stephen, Raymond J. Carroll, and Naisyin Wang. "Transformations to Additivity in Measurement Error Models." Biometrics 53, no. 1 (March 1997): 262. http://dx.doi.org/10.2307/2533112.

4

Yeon Fuh Jiang and Yu Ping Lin. "Error analysis of quaternion transformations (inertial navigation)." IEEE Transactions on Aerospace and Electronic Systems 27, no. 4 (July 1991): 634–39. http://dx.doi.org/10.1109/7.85036.

5

Simpson, R. B. "Anisotropic mesh transformations and optimal error control." Applied Numerical Mathematics 14, no. 1-3 (April 1994): 183–98. http://dx.doi.org/10.1016/0168-9274(94)90025-6.

6

Kapus-Kolar, Monika. "Error-preserving local transformations on communication protocols." Software Testing, Verification and Reliability 23, no. 1 (January 21, 2011): 3–25. http://dx.doi.org/10.1002/stvr.449.

7

Yuan, Sihan, and Daniel J. Eisenstein. "Decorrelating the errors of the galaxy correlation function with compact transformation matrices." Monthly Notices of the Royal Astronomical Society 486, no. 1 (March 27, 2019): 708–24. http://dx.doi.org/10.1093/mnras/stz899.

Abstract:
Covariance matrix estimation is a persistent challenge for cosmology, often requiring a large number of synthetic mock catalogues. The off-diagonal components of the covariance matrix also make it difficult to show representative error bars on the 2-point correlation function (2PCF) since errors computed from the diagonal values of the covariance matrix greatly underestimate the uncertainties. We develop a routine for decorrelating the projected and anisotropic 2PCF with simple and scale-compact transformations on the 2PCF. These transformation matrices are modelled after the Cholesky decomposition and the symmetric square root of the Fisher matrix. Using mock catalogues, we show that the transformed projected and anisotropic 2PCF recover the same structure as the original 2PCF while producing largely decorrelated error bars. Specifically, we propose simple Cholesky-based transformation matrices that suppress the off-diagonal covariances on the projected 2PCF by ~95 per cent and those on the anisotropic 2PCF by ~87 per cent. These transformations also serve as highly regularized models of the Fisher matrix, compressing the degrees of freedom so that one can fit for the Fisher matrix with a much smaller number of mocks.
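The following toy numpy sketch shows why a transformation built from the Cholesky factor of the Fisher matrix decorrelates the error bars, using a small synthetic covariance matrix; it is not the authors' scale-compact construction, just the underlying linear-algebra idea.

```python
import numpy as np

def decorrelation_transform(cov):
    """Cholesky-based decorrelation: with F = cov^{-1} = L L^T and T = L^T,
    the transformed statistic T @ xi has covariance T @ cov @ T^T = I."""
    fisher = np.linalg.inv(cov)
    L = np.linalg.cholesky(fisher)
    return L.T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
cov = A @ A.T + 4.0 * np.eye(4)          # a toy positive-definite covariance of a binned 2PCF
T = decorrelation_transform(cov)
print(np.round(T @ cov @ T.T, 10))       # ~ identity matrix: off-diagonal covariances are removed
```
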
8

Yao, Lihui, Peng Lin, Jingxiang Gao, and Chao Liu. "Robust Prediction Algorithm Based on a General EIV Model for Multiframe Transformation." Mathematical Problems in Engineering 2019 (February 11, 2019): 1–10. http://dx.doi.org/10.1155/2019/5173956.

Abstract:
In modern geodesy, there are cases in which the target frame is unique and there is more than one source frame. Helmert transformations, which are extensively used to solve for transformation parameters, can be solved separately between the target frame and each of the source frames. However, this is not globally optimal, even though each transformation is locally optimal on its own. Additionally, it generates the problem of multiple solutions at the non-common stations of the target frame. Moreover, least-squares solutions can be distorted when gross errors exist in the observations. Thus, in this paper, Helmert transformations among three frames, that is, one target frame and two source frames, are studied as an example. A robust prediction algorithm based on the general errors-in-variables prediction algorithm and robust estimation is derived in detail and is applied to achieve multiframe total transformation. Furthermore, simulation experiments were conducted and the results validated the superiority of the proposed total transformation method over classical separate approaches.
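For orientation, the sketch below fits a planar (four-parameter) Helmert similarity transformation between a single source frame and a target frame by ordinary least squares; this is the classical separate solution the abstract contrasts with, not the robust multi-frame errors-in-variables prediction the paper proposes, and the data and function names are illustrative assumptions.

```python
import numpy as np

def helmert_2d(src, dst):
    """Least-squares fit of x' = a*x - b*y + tx, y' = b*x + a*y + ty,
    i.e. a planar Helmert (similarity) transformation with parameters [a, b, tx, ty]."""
    A = np.zeros((2 * len(src), 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0   # x' equations
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1],  src[:, 0], 1.0   # y' equations
    return np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]

rng = np.random.default_rng(1)
src = rng.standard_normal((6, 2))                         # common stations in the source frame
a, b, tx, ty = 1.0001, 0.0002, 5.0, -3.0                  # assumed true parameters
dst = np.column_stack([a * src[:, 0] - b * src[:, 1] + tx,
                       b * src[:, 0] + a * src[:, 1] + ty])
dst += 1e-5 * rng.standard_normal(dst.shape)              # small observation noise
print(helmert_2d(src, dst))                               # recovers ~[a, b, tx, ty]
```
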
9

Kondratiev, Gennadii V. "Natural Transformations in Statistics." International Frontier Science Letters 6 (December 2015): 1–5. http://dx.doi.org/10.18052/www.scipress.com/ifsl.6.1.

Abstract:
The old idea of internal uniform regularity of empirical data is discussed within the framework of category theory. A new concept and technique of statistical analysis is introduced. It is independent of, and fully compatible with, the classical probabilistic approach. The absence of a model in the natural approach to statistics eliminates model error and allows the approach to be used in all areas with poor models. The remaining error is fully determined by the incompleteness of the data and is always uniformly small by construction of the data extension.
10

Chen, An Mei, Haw-minn Lu, and Robert Hecht-Nielsen. "On the Geometry of Feedforward Neural Network Error Surfaces." Neural Computation 5, no. 6 (November 1993): 910–27. http://dx.doi.org/10.1162/neco.1993.5.6.910.

Abstract:
Many feedforward neural network architectures have the property that their overall input-output function is unchanged by certain weight permutations and sign flips. In this paper, the geometric structure of these equioutput weight space transformations is explored for the case of multilayer perceptron networks with tanh activation functions (similar results hold for many other types of neural networks). It is shown that these transformations form an algebraic group isomorphic to a direct product of Weyl groups. Results concerning the root spaces of the Lie algebras associated with these Weyl groups are then used to derive sets of simple equations for minimal sufficient search sets in weight space. These sets, which take the geometric forms of a wedge and a cone, occupy only a minute fraction of the volume of weight space. A separate analysis shows that large numbers of copies of a network performance function optimum weight vector are created by the action of the equioutput transformation group and that these copies all lie on the same sphere. Some implications of these results for learning are discussed.
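The sign-flip part of these equioutput transformations is easy to check numerically. The short Python sketch below (with arbitrary random weights, purely for illustration) flips the sign of one hidden unit's incoming weights and bias, compensates on its outgoing weights, and confirms that a one-hidden-layer tanh network produces identical outputs.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer tanh network."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))
W1, b1 = rng.standard_normal((3, 4)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((4, 2)), rng.standard_normal(2)

# Equioutput transformation: because tanh is odd, negating everything into and
# out of hidden unit 0 leaves the network's input-output map unchanged.
W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
W1f[:, 0] *= -1.0
b1f[0] *= -1.0
W2f[0, :] *= -1.0

print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1f, b1f, W2f, b2)))  # True
```
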

Dissertations / Theses on the topic "Error Transformations"

1

Gul, Yusuf. "Entanglement Transformations And Quantum Error Correction." PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610773/index.pdf.

Abstract:
The main subject of this thesis is the investigation of the transformations of pure multipartite entangled states having Schmidt rank 2 by using only local operations assisted with classical communications (LOCC). A new parameterization is used for describing the entangled state of p particles distributed to p distant, spatially separated persons. Product, bipartite and truly multipartite states are identified in this new parametrization. Moreover, alternative parameterizations of local operations carried out by each party are provided. For the case of a deterministic transformation to a truly multipartite final state, one can find an analytic expression that determines whether such a transformation is possible. In this case, a chain of measurements by each party for carrying out the transformation is found. It can also be seen that, under deterministic LOCC transformations, there are some quantities that remain invariant. For the purpose of applying the results of this thesis in the context of the quantum information and computation, brief reviews of the entanglement purification, measurement based quantum computation and quantum codes are given.
2

Suh, Sangwook. "Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42716.

Abstract:
The purpose of this research is to present a low-power wireless communication receiver with enhanced performance by relieving the system complexity and performance degradation imposed by a quantization process. With an overwhelming demand for more reliable communication systems, the complexity required for modern communication systems has increased accordingly. A byproduct of this increase in complexity is a commensurate increase in the power consumption of the systems. Since Shannon's era, the mainstream methodology for ensuring the high reliability of communication systems has been based on the principle that the information signals flowing through the system are represented in digits. Consequently, the system itself has been heavily driven to be implemented with digital circuits, which is generally beneficial over analog implementations when digitally stored information is locally accessible, such as in memory systems. However, in communication systems, a receiver does not have direct access to the originally transmitted information. Since the received signals from a noisy channel are already continuous values with continuous probability distributions, we suggest a mixed-signal system in which the received continuous signals are directly fed into the analog demodulator and the subsequent soft-decision Viterbi decoder without any quantization involved. In this way, we claim that the redundant system complexity caused by the quantization process is eliminated, thus giving better power efficiency in wireless communication systems, especially for battery-powered mobile devices. This is also beneficial from a performance perspective, as it takes full advantage of the soft information flowing through the system.
3

TAKEDA, Kazuya, Norihide KITAOKA, and Makoto SAKAI. "Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria." Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/14970.

4

Lau, Buon Kiong. "Applications of Adaptive Antennas in Third-Generation Mobile Communications Systems." Thesis, Curtin University, 2002. http://hdl.handle.net/20.500.11937/2019.

Abstract:
Adaptive antenna systems (AAS's) are traditionally of interest only in radar and sonar applications. However, since the onset of the explosive growth in demand for wireless communications during the 1990's, researchers are giving increasing attention to the use of AAS technology to overcome practical challenges in providing the service. The main benefit of the technology lies in its ability to exploit the spatial domain, on top of the temporal and frequency domains, to improve on transceiver performance. This thesis presents a unified study on two classes of preprocessing techniques for uniform circular arrays (UCA's). UCA's are of interest because of their natural ability to provide a full azimuth (i.e. 360°) coverage found in typical scenarios for sensor array applications, such as radar, sonar and wireless communications. The two classes of preprocessing techniques studied are the Davies transformation and the interpolated array transformations. These techniques yield a mathematically more convenient form - the Vandermonde form - for the array steering vector via a linear transformation. The Vandermonde form is useful for different applications such as direction-of-arrival (DOA) estimation and optimum or minimum variance distortionless response (MVDR) beamforming in correlated signal environments and beampattern synthesis. A novel interpolated array transformation is proposed to overcome limitations in the existing interpolated array transformations. A disadvantage of the two classes of preprocessing techniques for UCA's with omnidirectional elements is the lack of robustness in the transformed array steering vector to array imperfections under certain conditions. In order to mitigate the robustness problem, optimisation problems are formulated to modify the transformation matrices. Suitable optimisation techniques are then applied to obtain more robust transformations. The improved transformations are shown to improve robustness but at the cost of larger transformation errors. The benefits of the robustification procedure are most apparent in DOA estimation. In addition to the algorithm level studies, the thesis also investigates the use of AAS technology with respect to two different third generation (3G) mobile communications systems: Enhanced Data rates for Global Evolution (EDGE) and Wideband Code Division Multiple Access (WCDMA). EDGE, or more generally GSM/EDGE Radio Access Network (GERAN), is the evolution of the widely successful GSM system to provide 3G mobile services in the existing radio spectrum. It builds on the TDMA technology of GSM and relies on improved coding and higher order modulation schemes to provide packet-based services at high data rates. WCDMA, on the other hand, is based on CDMA technology and is specially designed and streamlined for 3G mobile services. For WCDMA, a single-user approach to DOA estimation which utilises the user spreading code and the pulse-shaped chip waveform is proposed. It is shown that the proposed approach produces promising performance improvements. The studies with EDGE are concerned with the evaluation of a simple AAS at the system and link levels. Results from the system and link level simulations are presented to demonstrate the effectiveness of AAS technology in the new mobile communications system.
Finally, it is noted that the WCDMA and EDGE link level simulations employ the newly developed COST259 directional channel model, which is capable of producing accurate channel realisations of macrocell environments for the evaluation of AAS's.
5

Lehmann, Rüdiger. "Ein automatisches Verfahren für geodätische Berechnungen." Hochschule für Technik und Wirtschaft Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-188715.

Abstract:
This manuscript grew out of lectures on geodetic computations (Geodätische Berechnungen) at the Hochschule für Technik und Wirtschaft Dresden. Since this course takes place in the first or second semester, no methods of higher mathematics are used yet. The range of topics is therefore largely restricted to elementary computations in the plane. Only in Chapter 7 are some methods of vector calculus employed.
6

Mannem, Narender Reddy. "Adaptive Data Rate Multicarrier Direct Sequence Spread Spectrum in Rayleigh Fading Channel." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125782227.

7

Lau, Buon Kiong. "Applications of Adaptive Antennas in Third-Generation Mobile Communications Systems." Curtin University of Technology, Australian Telecommunications Research Institute, 2002. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=12983.

Abstract:
Adaptive antenna systems (AAS's) are traditionally of interest only in radar and sonar applications. However, since the onset of the explosive growth in demand for wireless communications during the 1990's, researchers are giving increasing attention to the use of AAS technology to overcome practical challenges in providing the service. The main benefit of the technology lies in its ability to exploit the spatial domain, on top of the temporal and frequency domains, to improve on transceiver performance. This thesis presents a unified study on two classes of preprocessing techniques for uniform circular arrays (UCA's). UCA's are of interest because of their natural ability to provide a full azimuth (i.e. 360°) coverage found in typical scenarios for sensor array applications, such as radar, sonar and wireless communications. The two classes of preprocessing techniques studied are the Davies transformation and the interpolated array transformations. These techniques yield a mathematically more convenient form - the Vandermonde form - for the array steering vector via a linear transformation. The Vandermonde form is useful for different applications such as direction-of-arrival (DOA) estimation and optimum or minimum variance distortionless response (MVDR) beamforming in correlated signal environments and beampattern synthesis. A novel interpolated array transformation is proposed to overcome limitations in the existing interpolated array transformations. A disadvantage of the two classes of preprocessing techniques for UCA's with omnidirectional elements is the lack of robustness in the transformed array steering vector to array imperfections under certain conditions. In order to mitigate the robustness problem, optimisation problems are formulated to modify the transformation matrices.
Suitable optimisation techniques are then applied to obtain more robust transformations. The improved transformations are shown to improve robustness but at the cost of larger transformation errors. The benefits of the robustification procedure are most apparent in DOA estimation. In addition to the algorithm level studies, the thesis also investigates the use of AAS technology with respect to two different third generation (3G) mobile communications systems: Enhanced Data rates for Global Evolution (EDGE) and Wideband Code Division Multiple Access (WCDMA). EDGE, or more generally GSM/EDGE Radio Access Network (GERAN), is the evolution of the widely successful GSM system to provide 3G mobile services in the existing radio spectrum. It builds on the TDMA technology of GSM and relies on improved coding and higher order modulation schemes to provide packet-based services at high data rates. WCDMA, on the other hand, is based on CDMA technology and is specially designed and streamlined for 3G mobile services. For WCDMA, a single-user approach to DOA estimation which utilises the user spreading code and the pulse-shaped chip waveform is proposed. It is shown that the proposed approach produces promising performance improvements. The studies with EDGE are concerned with the evaluation of a simple AAS at the system and link levels.
Results from the system and link level simulations are presented to demonstrate the effectiveness of AAS technology in the new mobile communications system. Finally, it is noted that the WCDMA and EDGE link level simulations employ the newly developed COST259 directional channel model, which is capable of producing accurate channel realisations of macrocell environments for the evaluation of AAS's.
8

Halbach, Till. "Error-robust coding and transformation of compressed hybered hybrid video streams for packet-switched wireless networks." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-136.

Abstract:

This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.

In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264 with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder decoder framework is designed for an arbitrary number of quality layers, hereby also enabling region-of-interest coding. After that, the performance of the new system is exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to perform superior to other systems.

The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions. The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source channel coding scheme to the channel conditions. An almost constant image quality is achieved, even in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is only small, and that the codes are assigned irregularly to all channel packets.

A double-layer constellation of the framework clearly outperforms other schemes with a substantial margin.

Keywords — Digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection

9

Nestler, Franziska. "Automated Parameter Tuning based on RMS Errors for nonequispaced FFTs." Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-160989.

Abstract:
In this paper we study the error behavior of the well known fast Fourier transform for nonequispaced data (NFFT) with respect to the L2-norm. We compare the arising errors for different window functions and show that the accuracy of the algorithm can be significantly improved by modifying the shape of the window function. Based on the considered error estimates for different window functions we are able to state an easy and efficient method to tune the involved parameters automatically. The numerical examples show that the optimal parameters depend on the given Fourier coefficients, which are assumed not to be of a random structure or roughly of the same magnitude but rather subject to a certain decrease.
10

Dridi, Marwa. "Sur les méthodes rapides de résolution de systèmes de Toeplitz bandes." Thesis, Littoral, 2016. http://www.theses.fr/2016DUNK0402/document.

Abstract:
This thesis aims to design new fast algorithms for numerical computation via Toeplitz matrices. First, we introduce a fast algorithm to compute the inverse of a triangular Toeplitz matrix with real and/or complex entries, based on polynomial interpolation techniques. This algorithm, which requires only two FFTs of size 2n, is clearly effective compared to its predecessors. A numerical accuracy and error analysis is also considered, and numerical examples are given to illustrate the effectiveness of the method. In addition, we introduce a fast algorithm for solving a banded Toeplitz linear system. This approach is based on extending the given matrix with several rows on the top and several columns on the right, and assigning zeros and some nonzero constants in each of these rows and columns in such a way that the augmented matrix has a lower triangular Toeplitz structure. The stability of the algorithm is discussed and its performance is shown by numerical experiments. Finally, we address the solution of banded block Toeplitz systems with banded Toeplitz blocks. This is essential for connecting our algorithms to applications such as image restoration, a key area in applied mathematics.
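As a baseline for comparison (not the extension-based algorithm developed in the thesis), SciPy's Levinson-type Toeplitz solver can already be used on small banded Toeplitz systems; the example data below are arbitrary.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

col = np.array([4.0, 1.0, 0.0, 0.0, 0.0])        # first column: one nonzero band below the diagonal
row = np.array([4.0, 2.0, 1.0, 0.0, 0.0])        # first row: two nonzero bands above the diagonal
b = np.arange(1.0, 6.0)

x = solve_toeplitz((col, row), b)                 # fast Levinson-type solve of T x = b
print(np.allclose(toeplitz(col, row) @ x, b))     # True
```
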

Books on the topic "Error Transformations"

1

Jerri, Abdul J. Integral and discrete transforms with applications and error analysis. New York: M. Dekker, 1992.

2

Jerri, Abdul J. Integral and discrete transforms with applications and error analysis. New York: M. Dekker, 1992.

3

Gilsinn, David. Updating a turning center error model by singular value decomposition. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology, 2001.

4

Memphis State University. Center for Research on Women., Research Clearinghouse and Curriculum Integration Project on Women of Color and Southern Women., and Workshop on Women in the Curriculum (2nd : 1986 : Memphis State University, Center for Research on Women), eds. Conceptual errors across the curriculum: Towards a transformation of the tradition. Memphis, TN: Research Clearinghouse and Curriculum Integration Project, Center for Research on Women, Memphis State University, 1986.

5

Quality and safety for transformational nursing: Core competencies. Boston: Pearson, 2013.

6

Muth, Lorant A. An iterative technique to correct probe position errors in planar near-field to far-field transformations. Boulder, CO: U.S. Dept. of Commerce, National Institute of Standards & Technology, 1988.

7

Muth, Lorant A. An iterative technique to correct probe position errors in planar near-field to far-field transformations. Boulder, CO: U.S. Dept. of Commerce, National Institute of Standards & Technology, 1988.

8

Ritalin nation: Rapid-fire culture and the transformation of human consciousness. New York: W.W. Norton, 1999.

9

Ritalin nation: Rapid-fire culture and the transformation of human consciousness. New York: W.W. Norton, 2000.

10

United States. National Aeronautics and Space Administration, ed. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer: Final report. [Washington, D.C.]: National Aeronautics and Space Administration, 1985.


Book chapters on the topic "Error Transformations"

1

Engel, Michael, and Peter Marwedel. "Soft Error Handling for Embedded Systems using Compiler-OS Interaction." In Dependable Embedded Systems, 33–55. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_2.

Abstract:
Advancing semiconductor technologies increasingly fail to provide expected gains in cost and energy reductions due to reaching the physical limits of Moore’s Law and Dennard scaling. Instead, shrinking semiconductor feature sizes increase a circuit’s susceptibility to soft errors. In order to ensure reliable operation, a significant hardware overhead would be required. The FEHLER project (Flexible Error Handling for Embedded Real-Time Systems) introduces error semantics into the software development process which provide a system with information about the criticality of a given data object to soft errors. Using this information, the overhead required for error correction can be reduced significantly for many applications, since only errors affecting critical data have to be corrected. In this chapter, the fundamental components of FEHLER that cooperate at design and runtime of an embedded system are presented. These include static compiler analyses and transformations as well as a fault-aware microkernel. Using examples of typical embedded applications, the efficiency of the approach is demonstrated, along with an extension towards approximate computing systems.
2

Manoukian, Manouk V., and George A. Constantinides. "Accurate Floating Point Arithmetic through Hardware Error-Free Transformations." In Lecture Notes in Computer Science, 94–101. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19475-7_11.

3

Guo, Kaizhong, and Shiyong Liu. "Transformations on Universe of Discourse and Rules in Error Systems." In Studies in Systems, Decision and Control, 221–50. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40760-5_7.

4

Kriebel, Florian, Kuan-Hsun Chen, Semeen Rehman, Jörg Henkel, Jian-Jia Chen, and Muhammad Shafique. "Dependable Software Generation and Execution on Embedded Systems." In Dependable Embedded Systems, 139–60. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_6.

Abstract:
For generating and executing dependable software, the effects of hardware layer faults at the software layer have to be accurately analyzed and modeled. This requires relevant information from the hardware and software layers, as well as an in-depth analysis of how an application’s outputs are affected by errors, and quantifying the error masking and error propagation on the software layer. Based on this analysis, techniques for generating dependable software can be proposed, e.g., by different dependability-aware compiler-based software transformations or selective instruction protection. Beside functional aspects, timing also plays an important role, as oftentimes tasks have to be finished before a certain deadline to provide useful information, especially in real-time systems. Both aspects are jointly taken into account by the run-time system software which decides—with the help of offline and online-generated data—for multiple concurrently executing applications how to protect and when to execute which application task to optimize for dependability and timing correctness. This is achieved for example by selecting appropriate application versions and protection levels for single and multi-core systems—for example using redundant multithreading (RMT) in different modes—under tolerable performance overhead constraints.
5

Grabe, Michael. "Transformation." In Generalized Gaussian Error Calculus, 237–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03305-6_25.

6

Krämer, Walter, and Michael Zimmer. "Fast (Parallel) Dense Linear System Solvers in C-XSC Using Error Free Transformations and BLAS." In Numerical Validation in Current Hardware Architectures, 230–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01591-5_15.

7

Liu, Shiyong, and Kaizhong Guo. "Transformation Connectives in Error Logic." In Error Logic: Paving Pathways for Intelligent Error Identification and Management, 73–465. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-00820-7_4.

8

Chen, Jenn-Yeu, and Bernard J. Baars. "General and Specific Factors in “Transformational Errors”." In Experimental Slips and Human Error, 217–33. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4899-1164-3_9.

9

Kang, Yicheng, and Peihua Qiu. "Nonparametric Deconvolution by Fourier Transformation and Other Related Approaches." In Handbook of Measurement Error Models, 221–44. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781315101279-11.

10

Gvindadze, Dimitri. "The Margin of Error." In The Transformation of Georgia from 2004 to 2012, 177–81. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18264-8_20.


Conference papers on the topic "Error Transformations"

1

Li, Yan, and Ronald Briggs. "Scalable and Error Tolerant Automated Georeferencing under Affine Transformations." In IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2008. http://dx.doi.org/10.1109/igarss.2008.4780070.

2

Moon, Sang-Ku, Yong-Mo Moon, Sridhar Kota, and Robert G. Landers. "Screw Theory Based Metrology for Design and Error Compensation of Machine Tools." In ASME 2001 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/detc2001/dac-21083.

Abstract:
The paper presents a generalized mathematical framework for computation and compensation of tool tip errors in multi-axis machine tools using screw theory. In contrast to conventional Denavit–Hartenberg notation, screw theory offers several advantages including: (i) modeling of complex machine tool configurations with rotational axes, (ii) tractability of error propagation, which simplifies solution of inverse kinematics and subsequent error-compensation procedures, and (iii) functional representation of error screws in a global reference frame rather than cumbersome coordinate transformations of local reference frames. A kinestatic filtering technique [11,12] is adopted for evaluating the compensatability of errors and the Jacobian is used for error compensation. The methodology is illustrated using a five-axis machine tool with two rotational axes.
3

Smith, Monty J. "Transformations for reducing steady state error due to uncertain disturbance biases." In Automation (MED 2011). IEEE, 2011. http://dx.doi.org/10.1109/med.2011.5983159.

4

Oishi, Shin'ichi. "Error free transformations of floating point numbers and its applications to constructing efficient error free numerical algorithms." In the 2009 conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1577190.1577193.

5

Domanski, M., and K. Rakowski. "Error accumulation in multiple cycles of still image compression and color transformations." In 2005 International Conference on Image Processing. IEEE, 2005. http://dx.doi.org/10.1109/icip.2005.1529874.

6

Christensen, Heidi, Børge Lindberg, and Ove Andersen. "Noise robustness of heterogeneous features employing minimum classification error feature space transformations." In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-590.

7

Xiong, C. H., Y. Rong, R. P. Koganti, M. J. Zaluzek, and N. Wang. "Geometric Variation Prediction for Automotive Aluminum Welded Assemblies." In ASME 2002 International Mechanical Engineering Congress and Exposition. ASMEDC, 2002. http://dx.doi.org/10.1115/imece2002-39417.

Abstract:
Assembly error prediction is one of the key problems in quality control. The objective of this paper is to develop a statistical error analysis model for assembly, to derive measures for controlling the geometric variations in assemblies with multiple assembly stations, and to provide a statistical tolerance prediction/distribution toolkit integrated with a CAD system for responding quickly to market opportunities with reduced manufacturing costs and improved quality. First, homogeneous transformations are used to describe the location and orientation of assembly features, parts and other related surfaces. The desired location and orientation, and the related fixturing configuration (including locator position and orientation), are automatically extracted from CAD models. The location and orientation errors are represented with differential transformations. Then the statistical error prediction model is formulated and the related algorithms are integrated with the CAD system so that the complex geometric information can be directly accessed. In the prediction model, the manufacturing process (joining) error, induced by heat deformation in welding, is taken into account. Finally, case studies are presented to verify the prediction algorithms. The proposed model has the following characteristics: 1) a variety of error elements from both design and process aspects are taken into account; 2) assembly error prediction and control can be handled for multiple assembly stations and multiple fixtures in each station; and 3) the technical approach is integrated with the CAD system.
8

Hidaka, Soichiro. "Trace-based Error Prevention and Detection of Consistency Maintenance via Bidirectional Graph Transformations." In 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C). IEEE, 2020. http://dx.doi.org/10.1109/qrs-c51114.2020.00025.

9

Hagege, Rami, and Joseph M. Francos. "A random sets framework for error analysis in estimating geometric transformations — a first order analysis." In 2008 International Symposium on Information Theory and Its Applications (ISITA). IEEE, 2008. http://dx.doi.org/10.1109/isita.2008.4895442.

10

Bonney, Matthew S., Daniel C. Kammer, and Matthew R. W. Brake. "Fully Parameterized Reduced Order Models Using Hyper-Dual Numbers and Component Mode Synthesis." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46029.

Abstract:
The uncertainty of a system is usually quantified with the use of sampling methods such as Monte-Carlo or Latin hypercube sampling. These sampling methods require many computations of the model and may include re-meshing. The re-solving and re-meshing of the model is a very large computational burden. One way to greatly reduce this computational burden is to use a parameterized reduced order model. This is a model that contains the sensitivities of the desired results with respect to changing parameters such as Young’s modulus. The typical method of computing these sensitivities is the finite difference technique, which gives an approximation that is subject to truncation error and subtractive cancellation due to the precision of the computer. One way of eliminating this error is to use hyper-dual numbers, which are able to generate exact sensitivities that are not subject to the precision of the computer. This paper uses the concept of hyper-dual numbers to parameterize a system that is composed of two substructures in the form of Craig-Bampton substructure representations, and combines them using component mode synthesis. The synthesis transformations using other techniques require the use of a nominal transformation, while this approach allows for exact transformations when a perturbation is applied. This paper presents this technique for a planar motion frame and compares the use and accuracy of the approach against the true full system. This work lays the groundwork for performing component mode synthesis using hyper-dual numbers.
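To make 'exact sensitivities' concrete, here is a minimal, illustrative hyper-dual number class in Python (only addition and multiplication are implemented, and the polynomial is a stand-in, not the paper's Craig-Bampton model); evaluating a function at x + e1 + e2 yields the value, first derivative, and second derivative with no truncation or cancellation error.

```python
class HyperDual:
    """Hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with e1**2 = e2**2 = 0."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

    __rmul__ = __mul__

def f(x):
    """Stand-in parameter-dependent quantity, e.g. a stiffness k(E) = 3*E**2 + 2*E."""
    return 3 * x * x + 2 * x

y = f(HyperDual(2.0, 1.0, 1.0, 0.0))
print(y.a, y.b, y.d)   # 16.0 14.0 6.0 -> f(2), f'(2), f''(2) recovered exactly
```
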

Reports on the topic "Error Transformations"

1

Kapelyushnyi, Anatolyi. TRANSFORMATION OF FORMS OF DEGREES OF COMPARISON OF ADJECTIVES IN LIVE TELEVISION BROADCASTING. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11105.

Abstract:
The article analyzes the transformation of forms of degrees of comparison of adjectives in live television broadcasting. Particular attention is paid to the specific properties of the different forms of degrees of comparison, to errors in the speech of television journalists associated with non-compliance with linguistic norms, and to ways of avoiding these errors, with appropriate recommendations made to television journalists. The main method is observation of the speech of live television journalists; the study also uses comparative analysis of theoretical positions from the work of individual linguists and journalism scholars, as well as of texts spoken on air by journalists. The objective is to trace these transformations and to develop a considered attitude towards them in research on the language of the media and in journalistic practice, so as to support positive trends in the development of television broadcasting and to give recommendations for overcoming negative ones. Improving live broadcasting, in particular work on deepening journalists' language skills, will help to encourage reasonable and expedient transformations of individual grammatical phenomena and categories while slowing down, and in general stopping, unreasonable and inexpedient ones. This applies above all to attempts to transform the forms of degrees of comparison of adjectives, which explains the importance of the results achieved in this study.
2

Brill, Eric. A Report of Recent Progress in Transformation-Based Error-Driven Learning. Fort Belvoir, VA: Defense Technical Information Center, January 1994. http://dx.doi.org/10.21236/ada460636.

3

Muth, Lorant A. An iterative technique to correct probe position errors in planar near-field to far-field transformations. Gaithersburg, MD: National Bureau of Standards, 1988. http://dx.doi.org/10.6028/nist.tn.1323.

