
Journal articles on the topic 'Error Transformations'


Consult the top 50 journal articles for your research on the topic 'Error Transformations.'


You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chowdhary, Sangeeta, and Santosh Nagarakatte. "Fast shadow execution for debugging numerical errors using error free transformations." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 1845–72. http://dx.doi.org/10.1145/3563353.

Abstract:
This paper proposes EFTSanitizer, a fast shadow execution framework for detecting and debugging numerical errors during the late stages of testing, especially for long-running applications. Any shadow execution framework needs an oracle to compare against the floating point (FP) execution. This paper makes a case for using error-free transformations, which are sequences of operations that compute the error of a primitive operation with existing hardware-supported FP operations, as an oracle for shadow execution. Although the error of a single correctly rounded FP operation is bounded, the accumulation of errors across operations can result in exceptions, slow convergences, and even crashes. To ease the job of debugging such errors, EFTSanitizer provides a directed acyclic graph (DAG) that highlights the propagation of errors that result in exceptions or crashes. Unlike prior work, DAGs produced by EFTSanitizer include operations that span various function calls while keeping the memory usage bounded. To enable the use of such shadow execution tools with long-running applications, EFTSanitizer also supports starting the shadow execution at an arbitrary point in the dynamic execution, which we call selective shadow execution. EFTSanitizer is an order of magnitude faster than prior state-of-the-art shadow execution tools such as FPSanitizer and Herbgrind. We have discovered new numerical errors and debugged them using EFTSanitizer.
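For readers new to the underlying technique, here is a minimal sketch of an error-free transformation; it is the classical TwoSum algorithm of Knuth, not EFTSanitizer's implementation, and the function name is ours.

```python
def two_sum(a: float, b: float):
    """Error-free transformation of addition (Knuth's TwoSum).

    Returns (s, e) with s = fl(a + b) and a + b = s + e exactly,
    using only hardware-supported floating-point operations.
    """
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

s, e = two_sum(1.0, 1e-16)  # s == 1.0, e == 1e-16: the rounding error is recovered exactly
```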
2

Freeman, J. M., and D. G. Ford. "Automated error analysis of serial manipulators and servo heads." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 217, no. 9 (September 1, 2003): 1077–84. http://dx.doi.org/10.1243/095440603322407308.

Abstract:
This paper presents a general mathematical treatment of serial manipulators, an important example of which is the servo head. The paper includes validation by application to the angle head via comparison with the previously known transformations, and a new application to the error analysis of the angle head. The usual approach to the error analysis of a servo head is to develop a geometrical model by elementary geometrical considerations using trigonometric relationships and various simplifying assumptions. This approach is very error-prone, difficult to verify and extremely time-consuming. The techniques described here constitute matrix methods that have been programmed in a general way to derive automatically the analytical equations relating the angles of rotation of the head and alignment errors in the head to the position of the tool and errors in that position. The approach is to use rotation and transformation matrices to evaluate the influence of the various errors such as offsets and angular errors. A general approach to the sign convention and notation for angular errors is presented in an attempt to reduce the possibility of errors of definition.
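As a toy illustration of the matrix approach described above (our sketch, not the authors' program), a small angular alignment error can be inserted as an extra rotation and propagated to the tool position:

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """Rotation matrix about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

tool = np.array([100.0, 0.0, 0.0])            # tool offset in mm (arbitrary)
nominal = rot_z(np.deg2rad(30.0)) @ tool      # ideal head rotation

eps = np.deg2rad(10.0 / 3600.0)               # 10 arcsec angular alignment error
actual = rot_z(np.deg2rad(30.0) + eps) @ tool

print(actual - nominal)                       # positional error induced at the tool
```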
3

Eckert, R. Stephen, Raymond J. Carroll, and Naisyin Wang. "Transformations to Additivity in Measurement Error Models." Biometrics 53, no. 1 (March 1997): 262. http://dx.doi.org/10.2307/2533112.

4

Jiang, Yeon Fuh, and Yu Ping Lin. "Error analysis of quaternion transformations (inertial navigation)." IEEE Transactions on Aerospace and Electronic Systems 27, no. 4 (July 1991): 634–39. http://dx.doi.org/10.1109/7.85036.

5

Simpson, R. B. "Anisotropic mesh transformations and optimal error control." Applied Numerical Mathematics 14, no. 1-3 (April 1994): 183–98. http://dx.doi.org/10.1016/0168-9274(94)90025-6.

6

Kapus-Kolar, Monika. "Error-preserving local transformations on communication protocols." Software Testing, Verification and Reliability 23, no. 1 (January 21, 2011): 3–25. http://dx.doi.org/10.1002/stvr.449.

7

Yuan, Sihan, and Daniel J. Eisenstein. "Decorrelating the errors of the galaxy correlation function with compact transformation matrices." Monthly Notices of the Royal Astronomical Society 486, no. 1 (March 27, 2019): 708–24. http://dx.doi.org/10.1093/mnras/stz899.

Abstract:
Covariance matrix estimation is a persistent challenge for cosmology, often requiring a large number of synthetic mock catalogues. The off-diagonal components of the covariance matrix also make it difficult to show representative error bars on the 2-point correlation function (2PCF), since errors computed from the diagonal values of the covariance matrix greatly underestimate the uncertainties. We develop a routine for decorrelating the projected and anisotropic 2PCF with simple and scale-compact transformations on the 2PCF. These transformation matrices are modelled after the Cholesky decomposition and the symmetric square root of the Fisher matrix. Using mock catalogues, we show that the transformed projected and anisotropic 2PCF recover the same structure as the original 2PCF while producing largely decorrelated error bars. Specifically, we propose simple Cholesky-based transformation matrices that suppress the off-diagonal covariances on the projected 2PCF by ~95 per cent and those on the anisotropic 2PCF by ~87 per cent. These transformations also serve as highly regularized models of the Fisher matrix, compressing the degrees of freedom so that one can fit for the Fisher matrix with a much smaller number of mocks.
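The core idea of a Cholesky-based decorrelating transformation can be reproduced in a few lines (a toy sketch of ours; the paper's matrices are additionally regularized to stay scale-compact):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
C = A @ A.T + 4.0 * np.eye(4)   # toy covariance with off-diagonal structure

L = np.linalg.cholesky(C)       # C = L @ L.T
T = np.linalg.inv(L)            # Cholesky-based decorrelating transformation

C_new = T @ C @ T.T             # covariance of the transformed data vector T @ d
print(np.round(C_new, 10))      # identity: the transformed errors are uncorrelated
```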
8

Yao, Lihui, Peng Lin, Jingxiang Gao, and Chao Liu. "Robust Prediction Algorithm Based on a General EIV Model for Multiframe Transformation." Mathematical Problems in Engineering 2019 (February 11, 2019): 1–10. http://dx.doi.org/10.1155/2019/5173956.

Abstract:
In modern geodesy, there are cases in which the target frame is unique and there is more than one source frame. Helmert transformations, which are extensively used to solve for transformation parameters, can be solved separately between the target frame and each source frame. However, this is not globally optimal, even though each transformation is locally optimal on its own. It also creates the problem of multiple solutions at the noncommon stations of the target frame. Moreover, least squares solutions can be distorted when gross errors exist in the observations. Thus, in this paper, Helmert transformations among three frames, that is, one target frame and two source frames, are studied as an example. A robust prediction algorithm based on the general errors-in-variables prediction algorithm and robust estimation is derived in detail and applied to achieve multiframe total transformation. Furthermore, simulation experiments were conducted and the results validated the superiority of the proposed total transformation method over classical separate approaches.
9

Kondratiev, Gennadii V. "Natural Transformations in Statistics." International Frontier Science Letters 6 (December 2015): 1–5. http://dx.doi.org/10.18052/www.scipress.com/ifsl.6.1.

Abstract:
The old idea of internal uniform regularity of empirical data is discussed within the framework of category theory. A new concept and technique of statistical analysis is introduced. It is independent of, and fully compatible with, the classical probabilistic approach. The absence of a model in the natural approach to statistics eliminates model error and allows the approach to be used in all areas with poor models. The remaining error is fully determined by the incompleteness of the data and is always uniformly small by the construction of the data extension.
10

Chen, An Mei, Haw-minn Lu, and Robert Hecht-Nielsen. "On the Geometry of Feedforward Neural Network Error Surfaces." Neural Computation 5, no. 6 (November 1993): 910–27. http://dx.doi.org/10.1162/neco.1993.5.6.910.

Abstract:
Many feedforward neural network architectures have the property that their overall input-output function is unchanged by certain weight permutations and sign flips. In this paper, the geometric structure of these equioutput weight space transformations is explored for the case of multilayer perceptron networks with tanh activation functions (similar results hold for many other types of neural networks). It is shown that these transformations form an algebraic group isomorphic to a direct product of Weyl groups. Results concerning the root spaces of the Lie algebras associated with these Weyl groups are then used to derive sets of simple equations for minimal sufficient search sets in weight space. These sets, which take the geometric forms of a wedge and a cone, occupy only a minute fraction of the volume of weight space. A separate analysis shows that large numbers of copies of a network performance function optimum weight vector are created by the action of the equioutput transformation group and that these copies all lie on the same sphere. Some implications of these results for learning are discussed.
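The sign-flip symmetry is easy to verify numerically; below is a minimal check of our own construction (names and sizes arbitrary), relying only on tanh being an odd function:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))   # input -> hidden weights
W2 = rng.normal(size=(1, 5))   # hidden -> output weights

def net(x, W1, W2):
    return W2 @ np.tanh(W1 @ x)

x = rng.normal(size=3)

# Equioutput transformation: flip the sign of hidden unit 2's fan-in and fan-out.
W1f, W2f = W1.copy(), W2.copy()
W1f[2, :] *= -1.0
W2f[:, 2] *= -1.0

print(np.allclose(net(x, W1, W2), net(x, W1f, W2f)))  # True: the output is unchanged
```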
11

Lee, Eric S., James N. MacGregor, Alex Bavelas, Louise Mirlin, et al. "The effects of error transformations on classification performance." Journal of Experimental Psychology: Learning, Memory, and Cognition 14, no. 1 (1988): 66–74. http://dx.doi.org/10.1037/0278-7393.14.1.66.

12

Raudys, Aistis, and Edvinas Goldstein. "Forecasting Detrended Volatility Risk and Financial Price Series Using LSTM Neural Networks and XGBoost Regressor." Journal of Risk and Financial Management 15, no. 12 (December 13, 2022): 602. http://dx.doi.org/10.3390/jrfm15120602.

Abstract:
It is common practice to employ returns, price differences or log returns for financial risk estimation and time series forecasting. In De Prado's 2018 book, it was argued that by using returns we lose the memory of a time series. In order to verify this statement, we examined the differences between fractional differencing and logarithmic transformations and their impact on data memory. We employed LSTM (long short-term memory) recurrent neural networks and an XGBoost regressor on the data using those transformations. We forecasted risk (volatility) and price value and compared the results of all models using original, unmodified prices. The models showed that, on average, a logarithmic transformation achieved better volatility predictions in terms of mean squared error and accuracy, and it was also the most promising transformation in terms of profitability. Our results thus run counter to Marco Lopez de Prado's suggestion, as we achieved the most accurate volatility predictions using a logarithmic transformation instead of fractional differencing.
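A minimal sketch of the transformations being compared (toy prices and our variable names; the paper's fractional differencing step is omitted):

```python
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1])

simple_returns = np.diff(prices) / prices[:-1]  # plain returns
log_returns = np.diff(np.log(prices))           # logarithmic transformation

# Standard deviation of log returns as a simple volatility (risk) proxy.
volatility = log_returns.std(ddof=1)
```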
13

Pan, Fang Yu, Jian Yin, and Ming Li. "Accuracy Calibration of 5-Axis Machine Tool Based on a Laser Approach." Applied Mechanics and Materials 263-266 (December 2012): 680–85. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.680.

Abstract:
To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration procedure are presented in this paper. The error model is built from Denavit-Hartenberg matrices and homogeneous transformations, which establish the relationship between the cutting tool and the workpiece. Accuracy calibration is difficult to achieve, but with a laser approach the errors can be displayed accurately, which benefits later compensation.
14

BAHSOUN, WAEL, and PAWEŁ GÓRA. "INVARIANT DENSITIES FOR POSITION-DEPENDENT RANDOM MAPS ON THE REAL LINE: EXISTENCE, APPROXIMATION AND ERROR BOUNDS." Stochastics and Dynamics 06, no. 02 (June 2006): 155–72. http://dx.doi.org/10.1142/s0219493706001682.

Abstract:
A random map is a discrete-time dynamical system in which a transformation is randomly selected from a collection of transformations according to a probability function and applied to the process. In this note, we study random maps with position-dependent probabilities on ℝ. This means that the random map under consideration consists of transformations which are piecewise monotonic with a countable number of branches from ℝ into itself, together with a probability function which is position dependent. We prove the existence of absolutely continuous invariant probability measures and construct a method for approximating their densities. An explicit quantitative bound on the approximation error is given.
15

Iqbal, Karry, Muhammad Kalim, and Adnan Khan. "Applications of Karry-Kalim-Adnan Transformation (KKAT) to Mechanics and Electrical Circuits." Journal of Function Spaces 2022 (July 4, 2022): 1–11. http://dx.doi.org/10.1155/2022/8722534.

Abstract:
In this paper, we have used the new integral transformation known as the Karry-Kalim-Adnan transformation (KKAT) to solve ordinary linear differential equations as well as partial differential equations. We have used KKAT to solve problems of engineering and the sciences, especially electrical and mechanical problems. We have also established a comparison between KKAT and existing transformations, and we have determined the KKAT of the error function and the complementary error function. The fundamental purpose of this research paper is to transform a given problem into an easier form in order to obtain its solution.
16

di Giacomo, Benedito, César Augusto Galvão de Morais, Vagner Augusto de Souza, and Luiz Carlos Neves. "Modeling Errors in Coordinate Measuring Machines and Machine Tools Using Homogeneous Transformation Matrices (HTM)." Advanced Materials Research 1025-1026 (September 2014): 56–59. http://dx.doi.org/10.4028/www.scientific.net/amr.1025-1026.56.

Abstract:
Coordinate Measuring Machines (CMMs) provide accurate and repeatable measurements, so they are considered equipment with potential for application in industrial environments, specifically in inspection processes. However, as with machine tools, knowledge of the errors in a CMM is needed and allows techniques of error compensation to be applied. This study aimed to develop a mathematical model of the kinematic errors of a bridge-type CMM in the X, Y and Z directions. Modeling of the errors was accomplished using coordinate transformations applied to rigid body kinematics; the method of homogeneous transformations was used for the development of the model. The position and angular errors for the three axes of the CMM, in addition to errors related to the absence of orthogonality between them, were formulated. This study leads to the conclusion that error modeling applied to a CMM, allied with calibration, can evaluate the metrological performance of equipment that moves on guides, so the technique can be used for error budget analysis in machines.
17

Jaros, Rene, Radek Martinek, and Lukas Danys. "Comparison of Different Electrocardiography with Vectorcardiography Transformations." Sensors 19, no. 14 (July 11, 2019): 3072. http://dx.doi.org/10.3390/s19143072.

Abstract:
This paper deals with transformations from electrocardiographic (ECG) to vectorcardiographic (VCG) leads. VCG provides better sensitivity, for example for the detection of myocardial infarction, ischemia, and hypertrophy. However, in clinical practice, measurement of VCG is not usually used because it requires additional electrodes placed on the patient's body. Instead, mathematical transformations are used for deriving VCG from the 12-lead ECG. In this work, the Kors quasi-orthogonal transformation, inverse Dower transformation, Kors regression transformation, and linear regression-based transformations for deriving the P wave (PLSV) and QRS complex (QLSV) are implemented and compared. These transformation methods had not been compared before, which is why we selected them for this paper. The methods were compared on data from the Physikalisch-Technische Bundesanstalt (PTB) database, and their accuracy was evaluated using the mean squared error (MSE) and the correlation coefficient (R) between the derived and directly measured Frank leads. Based on the statistical analysis, the Kors regression transformation was significantly more accurate for the derivation of the X and Y leads than the others. For the Z lead, there were no statistically significant differences in the medians between the Kors regression transformation and the PLSV and QLSV methods. This paper thus thoroughly compares multiple VCG transformation methods against the conventional Frank orthogonal lead system used in clinical practice.
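Each derived-lead method is a fixed linear map from the independent ECG leads to the three Frank leads, scored by MSE and R; the sketch below uses random data and a placeholder matrix, not the published Kors or Dower coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
ecg = rng.normal(size=(8, 1000))     # 8 independent ECG leads x 1000 samples (synthetic)
frank = rng.normal(size=(3, 1000))   # directly measured Frank X, Y, Z leads (synthetic)

M = rng.normal(size=(3, 8))          # placeholder transformation matrix
derived = M @ ecg                    # derived vectorcardiographic leads

for i, name in enumerate("XYZ"):
    mse = np.mean((derived[i] - frank[i]) ** 2)
    r = np.corrcoef(derived[i], frank[i])[0, 1]
    print(f"lead {name}: MSE = {mse:.3f}, R = {r:.3f}")
```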
18

Crawford, J. Douglas, and Daniel Guitton. "Visual-Motor Transformations Required for Accurate and Kinematically Correct Saccades." Journal of Neurophysiology 78, no. 3 (September 1, 1997): 1447–67. http://dx.doi.org/10.1152/jn.1997.78.3.1447.

Abstract:
The goal of this study was to identify and model the three-dimensional (3-D) geometric transformations required for accurate saccades to distant visual targets from arbitrary initial eye positions. In abstract 2-D models, target displacement in space, retinal error (RE), and saccade vectors are trivially interchangeable. However, in real 3-D space, RE is a nontrivial function of objective target displacement and 3-D eye position. To determine the physiological implications of this, a visuomotor "lookup table" was modeled by mapping the horizontal/vertical components of RE onto the corresponding vector components of eye displacement in Listing's plane. This provided the motor error (ME) command for a 3-D displacement-feedback loop. The output of this loop controlled an oculomotor plant that mechanically implemented the position-dependent saccade axis tilts required for Listing's law. This model correctly maintained Listing's law but was unable to correct torsional position deviations from Listing's plane. Moreover, the model also generated systematic errors in saccade direction (as a function of eye position components orthogonal to RE), predicting errors in final gaze direction of up to 25° in the oculomotor range. Plant modifications could not solve these problems, because the intrinsic oculomotor input-output geometry forced a fixed visuomotor mapping to choose between either accuracy or Listing's law. This was reflected internally by a sensorimotor divergence between input-defined visual displacement signals (inherently 2-D and defined in reference to the eye) and output-defined motor displacement signals (inherently 3-D and defined in reference to the head). These problems were solved by rotating RE by estimated 3-D eye position (i.e., a reference frame transformation), inputting the result into a 2-D–to–3-D "Listing's law operator," and then finally subtracting initial 3-D eye position to yield the correct ME. This model was accurate and upheld Listing's law from all initial positions. Moreover, it suggested specific experiments to invasively distinguish visual and motor displacement codes, predicting a systematic position dependence in the directional tuning of RE versus a fixed-vector tuning in ME. We conclude that visual and motor displacement spaces are geometrically distinct such that a fixed visual-motor mapping will produce systematic and measurable behavioral errors. To avoid these errors, the brain would need to implement both a 3-D position-dependent reference frame transformation and a nontrivial 2-D–to–3-D transformation. Furthermore, our simulations provide new experimental paradigms to invasively identify the physiological progression of these spatial transformations by reexamining the position-dependent geometry of displacement code directions in the superior colliculus, cerebellum, and various cortical visuomotor areas.
19

Flanders, Martha, Stephen I. Helms Tillery, and John F. Soechting. "Early stages in a sensorimotor transformation." Behavioral and Brain Sciences 15, no. 2 (June 1992): 309–20. http://dx.doi.org/10.1017/s0140525x00068813.

Abstract:
We present a model for several early stages of the sensorimotor transformations involved in targeted arm movement. In psychophysical experiments, human subjects pointed to the remembered locations of randomly placed targets in three-dimensional space. They made consistent errors in distance, and from these errors stages in the sensorimotor transformation were deduced. When subjects attempted to move the right index finger to a virtual target they consistently undershot the distance of the more distal targets. Other experiments indicated that the error was in the sensorimotor transformation rather than in the perception of distance. The error was most consistent when evaluated using a spherical coordinate system based at the right shoulder, indicating that the neural representation of target parameters is transformed from a retinocentric representation to a shoulder-centered representation. According to the model, the error in distance results from the neural implementation of a linear approximation in the algorithm to transform shoulder-centered target parameters into a set of arm orientations appropriate for placing the finger on the target. The transformation to final arm orientations places visually derived information into a frame of reference where it can readily be combined with kinesthetically derived information about initial arm orientations. The combination of these representations of initial and final arm orientations could give rise to the representation of movement direction recorded in the motor cortex by Georgopoulos and his colleagues. Later stages, such as the transformation from kinematic (position) to dynamic (force) parameters, or to levels of muscle activation, are beyond the scope of the present model.
20

Ozaki, Katsuhisa, and Takeshi Ogita. "The Essentials of verified numerical computations, rounding error analyses, interval arithmetic, and error-free transformations." Nonlinear Theory and Its Applications, IEICE 11, no. 3 (2020): 279–302. http://dx.doi.org/10.1587/nolta.11.279.

21

Wang, Xing-Yuan, and Li-Xian Zou. "FRACTAL IMAGE COMPRESSION BASED ON MATCHING ERROR THRESHOLD." Fractals 17, no. 01 (March 2009): 109–15. http://dx.doi.org/10.1142/s0218348x09004247.

Abstract:
This paper proposes a fractal image encoding algorithm based on a matching error threshold. First, the authors set up two kick-out conditions to reduce the capacity of the codebook, and then set a matching threshold when searching for the best matching blocks, which greatly shortens the runtime. Meanwhile, the authors discard the isometric transformations mentioned in most of the literature, because their use only increases the computational complexity; the same or even better reconstructed image can be achieved by reducing the sliding step used to produce domain blocks. Experimental results indicate that the proposed algorithm both shortens the encoding time greatly and achieves the same or better reconstructed image quality compared with the basic fractal encoding algorithm with full search.
22

Hapgood, M. A. "Space physics coordinate transformations: the role of precession." Annales Geophysicae 13, no. 7 (July 31, 1995): 713–16. http://dx.doi.org/10.1007/s00585-995-0713-8.

Abstract:
Raw data on spacecraft orbits and attitude are usually supplied in "inertial" coordinates. The normal geocentric inertial coordinate system changes slowly in time owing to the effects of astronomical precession and the nutation of the Earth's rotation axis. However, only precession produces a change that is significant compared with the errors in determining spacecraft position. We show that the transformations specified by Russell (1971) and Hapgood (1992) are strictly correct only if the epoch-of-date inertial system is used. We provide a simple formula for estimating the error in the calculated position if the inertial system for some other epoch is used. We also provide a formula for correcting inertial coordinates to the epoch-of-date from the standard fixed epoch of J2000.0.
23

Chen, Chu-Chih. "Rank transformations when covariables are measured with error or mismodelled." Communications in Statistics - Theory and Methods 26, no. 12 (January 1997): 2967–82. http://dx.doi.org/10.1080/03610929708832088.

24

Zavala, Virginia. "Correcting whose errors? The principle of error correction from an ethnographic lens." Language in Society 47, no. 3 (June 2018): 377–80. http://dx.doi.org/10.1017/s0047404518000337.

Abstract:
Labovian sociolinguistics constitutes an important paradigm that brings to the forefront issues of social justice in linguistics and asks about the debt the scholar has towards the community once s/he gets information from it. Nevertheless, as many scholars have discussed, and even though this paradigm has focused on changing society for the better, it has serious limitations on how it conceptualizes the relationship between language and society. Based on critical race theory and language ideologies, Lewis powerfully contributes to this discussion by critiquing the principle of error correction (PEC) proposed by Labov as a particular way of conceptualizing social change. As Lewis points out at the end of the article, this principle reflects an ‘earlier era’ and needs to be reconsidered in light of the significant transformations not only in the study of language in society developed in recent decades but also in critical theory and humanities in general.
25

Ligas, Marcin. "Partially error-affected point-wise weighted closed-form solution to the similarity transformation and its variants." Journal of Applied Geodesy 14, no. 2 (April 26, 2020): 231–39. http://dx.doi.org/10.1515/jag-2019-0067.

Abstract:
The paper presents a closed-form solution to the point-wise weighted similarity transformation and its variants in the least squares framework under two estimation scenarios. In the first scenario a target system is subject to random errors, whilst in the second one a source system is considered to be erroneous. These transformation models will be named asymmetric, in contrast to the symmetrical one solved under the errors-in-variables model where both systems are contaminated by random errors. The entire derivation is based on Procrustes Analysis. The formulas presented herein hold for both 2D and 3D transformations without any modification. The solution uses a polar decomposition to recover the rotation matrix.
26

McIntyre, J., F. Stratta, J. Droulez, and F. Lacquaniti. "Analysis of Pointing Errors Reveals Properties of Data Representations and Coordinate Transformations Within the Central Nervous System." Neural Computation 12, no. 12 (December 1, 2000): 2823–55. http://dx.doi.org/10.1162/089976600300014746.

Abstract:
The execution of a simple pointing task invokes a chain of processing that includes visual acquisition of the target, coordination of multimodal proprioceptive signals, and ultimately the generation of a motor command that will drive the finger to the desired target location. These processes in the sensorimotor chain can be described in terms of internal representations of the target or limb positions and coordinate transformations between different internal reference frames. In this article we first describe how different types of error analysis can be used to identify properties of the internal representations and coordinate transformations within the central nervous system. We then describe a series of experiments in which subjects pointed to remembered 3D visual targets under two lighting conditions (dim light and total darkness) and after two different memory delays (0.5 and 5.0 s) and report results in terms of variable error, constant error, and local distortion. Finally, we present a set of simulations to help explain the patterns of errors produced in this pointing task. These analyses and experiments provide insight into the structure of the underlying sensorimotor processes employed by the central nervous system.
27

Yang, Guilin, I.-Ming Chen, Song Huat Yeo, and Wee Kiat Lim. "Simultaneous base and tool calibration for self-calibrated parallel robots." Robotica 20, no. 4 (June 24, 2002): 367–74. http://dx.doi.org/10.1017/s0263574702004101.

Abstract:
In this paper, we focus on the base and tool calibration of a self-calibrated parallel robot. After the self-calibration of a parallel robot using the built-in sensors in its passive joints, its kinematic transformation from the robot base to the mobile platform frame can be computed with sufficient accuracy. The base and tool calibration, hence, is to identify the kinematic errors in the fixed transformations from the world frame to the robot base frame and from the mobile platform frame to the tool (end-effector) frame in order to improve the absolute positioning accuracy of the robot. Using mathematical tools from group theory and differential geometry, a simultaneous base and tool calibration model is formulated. Since the kinematic errors in a kinematic transformation can be represented by a twist, i.e. an element of se(3), the resultant calibration model is simple, explicit and geometrically meaningful. A least-squares algorithm is employed to iteratively identify the error parameters. The simulation example shows that all the preset kinematic errors can be fully recovered within three to four iterations.
28

Raillon, Loïc, and Christian Ghiaus. "Study of Error Propagation in the Transformations of Dynamic Thermal Models of Buildings." Journal of Control Science and Engineering 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/5636145.

Abstract:
The dynamic behaviour of a system may be described by models of different forms: thermal (RC) networks, state-space representations, transfer functions, and ARX models. These models, which describe the same process, are used in design, simulation, optimal predictive control, parameter identification, fault detection and diagnosis, and so on. Since several forms are available, it is useful to know which one is the most suitable, by estimating how reliably each model can be transformed back into a physical model, which is represented by a thermal network. A procedure for studying the errors by Monte Carlo simulation, together with factor prioritization, is exemplified on a simple but representative thermal model of a building. The analysis of the propagation of errors, and of their influence on parameter estimation, shows that the transformation from state-space representation to transfer function is more robust than the other way around. Therefore, if only one model is chosen, the state-space representation is preferable.
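The model transformations studied here can be reproduced with standard tooling; below is a sketch with arbitrary parameter values (not the paper's building model) using scipy:

```python
import numpy as np
from scipy import signal

# A two-state thermal (RC-network-like) model in state-space form:
# dx/dt = A x + B u,  y = C x + D u.
A = np.array([[-0.02, 0.01], [0.01, -0.03]])
B = np.array([[0.01], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)       # state-space -> transfer function
A2, B2, C2, D2 = signal.tf2ss(num, den)   # transfer function -> (a) state-space realization
```

Note that tf2ss returns only one of infinitely many realizations, which is one way to see why the reverse direction is the more fragile transformation.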
29

Ahrens, William H., Darrell J. Cox, and Girish Budhwar. "Use of the Arcsine and Square Root Transformations for Subjectively Determined Percentage Data." Weed Science 38, no. 4-5 (September 1990): 452–58. http://dx.doi.org/10.1017/s0043174500056824.

Abstract:
The arcsine and square root transformations were tested on 82 weed control data sets and 62 winter wheat winter survival data sets to determine effects on normality of the error terms, homogeneity of variance, and additivity of the model. Transformations appeared to correct deficiencies in these three parameters in the majority of data sets, but had adverse effects in certain other data sets. Performing the recommended transformation in conjunction with omitting treatments having identical replicate observations provided a high percentage of correction of non-normality, heterogeneity of variance, and nonadditivity. The arcsine transformation, not generally recommended for data sets having values from 0 to 20% or 80 to 100%, was as effective in correcting non-normality, heterogeneity of variance, and nonadditivity in these data sets as was the recommended square root transformation. A majority of data sets showed differences between transformed and nontransformed data in mean separations determined using LSD (0.05), although most of these differences were minor and had little effect on interpretation of results.
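Both transformations are one-liners; here is a sketch with made-up percentage data (the +0.5 offset in the square root form is one common convention for data containing zeros):

```python
import numpy as np

pct = np.array([2.0, 15.0, 50.0, 88.0, 99.0])   # subjectively scored percentages

arcsine = np.degrees(np.arcsin(np.sqrt(pct / 100.0)))  # arcsine (angular) transformation
sqrt_t = np.sqrt(pct + 0.5)                            # square root transformation
```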
30

Feng, Youyang, Qing Wang, and Hao Zhang. "Total Least-Squares Iterative Closest Point Algorithm Based on Lie Algebra." Applied Sciences 9, no. 24 (December 7, 2019): 5352. http://dx.doi.org/10.3390/app9245352.

Abstract:
In geodetic surveying, input data from two coordinate frames are needed to compute rigid transformations. A common solution is a least-squares algorithm based on a Gauss–Markov model, called iterative closest point (ICP). However, the error in the ICP algorithm exists only in the target coordinates, and the algorithm does not consider the source model's error. A total least-squares (TLS) algorithm based on an errors-in-variables (EIV) model is proposed to solve this problem. Previous total least-squares ICP algorithms used a Euler angle parameterization, which is easily affected by the gimbal lock problem. Lie algebra is more suitable than Euler angles for interpolation during an iterative optimization process. In this paper, Lie algebra is used to parameterize the rotation matrix, and we re-derive the TLS algorithm based on a Gauss–Helmert model (GHM) using Lie algebra. We present two TLS-ICP models based on Lie algebra. Our method is more robust than previous TLS algorithms, and it suits all kinds of transformation matrices.
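The parameterization step, mapping a rotation vector in so(3) to a rotation matrix via the exponential map (Rodrigues' formula), can be sketched as follows; this is not the authors' GHM/TLS solver, only the gimbal-lock-free rotation parameterization it relies on:

```python
import numpy as np

def exp_so3(phi: np.ndarray) -> np.ndarray:
    """Exponential map from so(3) (a rotation vector) to SO(3), via Rodrigues' formula."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = exp_so3(np.array([0.1, -0.2, 0.3]))
print(np.allclose(R @ R.T, np.eye(3)))  # True: a proper rotation for any rotation vector
```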
31

Blahodyr, Liudmyla. "PREVENTION OF STUDENTS’ MISTAKES DURING THE STUDY OF FRACTIONAL RATIONAL EXPRESSIONS IN THE NEW UKRAINIAN SCHOOL." Collection of Scientific Papers of Uman State Pedagogical University, no. 4 (December 29, 2021): 6–13. http://dx.doi.org/10.31499/2307-4906.4.2021.250117.

Abstract:
Among the semantic lines of the school course in algebra, the line of expressions and their transformations is essential. Fluent execution of the main types of transformations of whole and fractional, rational and irrational expressions is a prerequisite for successfully mastering the other semantic lines. Therefore, securing strong knowledge and skills in identical transformations of expressions should be the subject of constant attention of the mathematics teacher. The article considers typical mistakes made by students while studying the semantic line of expressions and their transformations in the algebra course of institutions providing basic secondary education, specifically in the topic "Fractional rational expressions". The most widespread mathematical errors of schoolchildren are analysed, together with the psychological and pedagogical preconditions of their occurrence. A method for organizing the preventive activity of the mathematics teacher is offered (by preventive activity we understand activity initiated by the need to prevent students' mathematical mistakes and to correct those already made, having found out the reasons for their occurrence). Preventive activities should be organized as a process of interaction between teachers and students, during which specially selected methods, first, reveal the origin of errors and, second, organize work to prevent and correct them. The main task in forming students' preventive activity is to develop their ability to adhere independently to all its structural components. The result of such activity largely depends on how well the teacher understands the structure of students' mental activity in specific learning conditions and is able to take into account the objective patterns of learning the material and the psychological and pedagogical patterns of perception and memory. The effectiveness of the proposed method was tested by the author in the research project "Methodical system of analysis and prevention of mathematical errors in the study of algebra in primary school". Keywords: algebra; fractional rational expressions; typical errors; students; preventive activities; method; mathematics teacher; educational process; error prevention.
32

Zahorian, S. A., and A. J. Jagharghi. "Minimum mean-square error transformations of categorical data to target positions." IEEE Transactions on Signal Processing 40, no. 1 (1992): 13–23. http://dx.doi.org/10.1109/78.157177.

33

Zamani, Behzad, Ahmad Akbari, Babak Nasersharif, and Azarakhsh Jalalvand. "Optimized discriminative transformations for speech features based on minimum classification error." Pattern Recognition Letters 32, no. 7 (May 2011): 948–55. http://dx.doi.org/10.1016/j.patrec.2011.01.017.

34

Short, T. M. "RIGOROUS ERROR PROPAGATION UNDER CONDITIONS OF BIASED, THREE-DIMENSIONAL, ORTHOGONAL TRANSFORMATIONS." Survey Review 32, no. 250 (October 1993): 244–48. http://dx.doi.org/10.1179/sre.1993.32.250.244.

35

Costamagna, E. "Error-masking phenomena during numerical computation of Schwarz-Christoffel conformal transformations." Microwave and Optical Technology Letters 20, no. 4 (February 20, 1999): 223–25. http://dx.doi.org/10.1002/(sici)1098-2760(19990220)20:4<223::aid-mop1>3.0.co;2-8.

36

Likassa, Habte Tadesse, Yu Xia, and Butte Gotu. "An Efficient New Robust PCA Method for Joint Image Alignment and Reconstruction via the L2,1 Norms and Affine Transformation." Scientific Programming 2022 (August 2, 2022): 1–15. http://dx.doi.org/10.1155/2022/5682492.

Abstract:
In this study, an effective robust PCA method is developed for joint image alignment and recovery via L2,1 norms and affine transformations. To alleviate the potential impacts of outliers, heavy sparse noise, occlusions, and illumination changes, the L2,1 norms are considered together with affine transformations. The determination of the parameters involved and the updating of the affine transformations are arranged in the form of a constrained convex optimization problem. To reduce the computational load, we further decompose the error into sparse error and Gaussian noise; additionally, the alternating direction method of multipliers (ADMM) is used to develop a new set of recursive equations that iteratively update the optimization parameters and the affine transformations. The convergence of the derived updating equations is explained as well. Simulations illustrate that the new method is superior to the baseline works in terms of precision on several public databases.
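The role of the L2,1 norm can be illustrated with its proximal operator (column-wise soft thresholding), the basic update that ADMM-based robust PCA schemes of this kind apply repeatedly; this sketch is ours and is not the authors' full algorithm:

```python
import numpy as np

def l21_norm(X: np.ndarray) -> float:
    """Sum of the Euclidean norms of the columns of X."""
    return float(np.sum(np.linalg.norm(X, axis=0)))

def prox_l21(X: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of tau * ||.||_{2,1}: shrink each column toward zero."""
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

E = np.random.default_rng(0).normal(size=(4, 6))
E_shrunk = prox_l21(E, tau=1.5)   # columns whose norm is <= 1.5 are zeroed out entirely
```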
37

Buciuman, Cella Flavia, and Valeria Vacarescu. "Research on Determining the Pose Errors in a Robot Welding Cell Using Theodolits." Applied Mechanics and Materials 162 (March 2012): 435–44. http://dx.doi.org/10.4028/www.scientific.net/amm.162.435.

Abstract:
In this article, the authors develop a method for determining the component errors of a robotic welding cell, using coordinate transformations between the reference systems attached to the cell's components. Errors are expressed as matrices, as is usual in the mathematical formulations used in robotics. The mathematical modeling covers both the positioning error and the orientation error; for this purpose, a test cube is used. The experiments presented in the paper address only the positioning error (AP). The method is used for measuring the pose errors of a welding cell using two theodolites. It is based on measuring the coordinates of a target characteristic point (TCP), the corner of a test cube, relative to a coordinate system attached to one of the two theodolites, and belongs to the class of non-contact measurement methods. The main principles of the method and its theoretical basis are validated by experiments conducted in a welding cell with a CLOOS-Romat robot in the Robotics Laboratory of the University "Politehnica" of Timisoara.
38

Saidel, Gerald M., and Erin H. Liu. "Model Transformations to Evaluate Transient Thermal Responses at a Tissue Surface." Journal of Biomechanical Engineering 123, no. 4 (February 27, 2001): 370–72. http://dx.doi.org/10.1115/1.1385844.

Abstract:
For a spatially distributed model describing the transient temperature response of a thermistor-tissue system, Wei et al. [J. Biomech. Eng., 117:74–85, 1995] obtained an approximate transformation for fast analysis of the temperature response at the tissue surface. This approximate transformation reduces the model to a single ordinary differential equation. Here, we present an exact transformation that yields a single differential-integral equation. Numerical solutions from the approximate and exact transformations were compared to evaluate the differences with several sets of parameter values. The maximum difference between the exact and approximate solutions did not exceed 15 percent and occurred for only a short time interval. The root-mean-square error of the approximate solution was no more than 5 percent and within the level of experimental noise. Under the experimental conditions used by Wei et al., the approximate transformation is justified for estimating model parameters from transient thermal responses.
39

Buscemi, Francesco, David Sutter, and Marco Tomamichel. "An information-theoretic treatment of quantum dichotomies." Quantum 3 (December 9, 2019): 209. http://dx.doi.org/10.22331/q-2019-12-09-209.

Abstract:
Given two pairs of quantum states, we want to decide if there exists a quantum channel that transforms one pair into the other. The theory of quantum statistical comparison and quantum relative majorization provides necessary and sufficient conditions for such a transformation to exist, but such conditions are typically difficult to check in practice. Here, by building upon work by Keiji Matsumoto, we relax the problem by allowing for small errors in one of the transformations. In this way, a simple sufficient condition can be formulated in terms of one-shot relative entropies of the two pairs. In the asymptotic setting where we consider sequences of state pairs, under some mild convergence conditions, this implies that the quantum relative entropy is the only relevant quantity deciding when a pairwise state transformation is possible. More precisely, if the relative entropy of the initial state pair is strictly larger compared to the relative entropy of the target state pair, then a transformation with exponentially vanishing error is possible. On the other hand, if the relative entropy of the target state is strictly larger, then any such transformation will have an error converging exponentially to one. As an immediate consequence, we show that the rate at which pairs of states can be transformed into each other is given by the ratio of their relative entropies. We discuss applications to the resource theories of athermality and coherence, where our results imply an exponential strong converse for general state interconversion.
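For small examples, the deciding quantity is directly computable; a sketch follows (our helper function, valid when the support of rho is contained in that of sigma):

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Quantum relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)], in nats."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

rho = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
print(relative_entropy(rho, sigma))   # positive unless rho == sigma
```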
40

Glenn, D. Michael. "Statistical Analysis of Root Count Data." HortScience 30, no. 4 (July 1995): 907A—907. http://dx.doi.org/10.21273/hortsci.30.4.907a.

Abstract:
The minirhizotron approach for studying the dynamics of root systems is gaining acceptance; however, problems have arisen in the analysis of data. The purposes of this study were to determine if analysis of variance (ANOVA) was appropriate for root count data, and to evaluate transformation procedures to utilize ANOVA. In peach, apple, and strawberry root count data, the variance of treatment means was positively correlated with the mean, violating assumptions of ANOVA. A transformation based on Taylor's power law as a first approximation, followed by a trial and error approach, developed transformations that reduced the correlation of variance and mean.
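A sketch of the described procedure under our reading of it: fit Taylor's power law (variance ≈ a · mean^b across treatments) and apply the implied variance-stabilizing power transformation; the data below are invented:

```python
import numpy as np

# Replicate root counts per treatment, with variance growing with the mean.
counts = [np.array([2, 3, 1, 4]),
          np.array([8, 12, 6, 10]),
          np.array([25, 32, 18, 40])]

means = np.array([c.mean() for c in counts])
variances = np.array([c.var(ddof=1) for c in counts])

# Taylor's power law: log(variance) = log(a) + b * log(mean).
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)

p = 1.0 - b / 2.0   # implied power; if p is near 0, a log transformation is conventional
transformed = [c.astype(float) ** p for c in counts]
```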
41

Jaskierny, Leszek. "REVIEW OF THE DATA MODELING STANDARDS AND DATA MODEL TRANSFORMATION TECHNIQUES." Applied Computer Science 14, no. 4 (December 30, 2018): 93–108. http://dx.doi.org/10.35784/acs-2018-32.

Abstract:
Manual data transformations that result in high error rates are a big problem in complex integration and data warehouse projects, resulting in poor quality of data and delays in deployment to production. Automation of data transformations can be easily verified by humans; the ability to learn from past decisions allows the creation of metadata that can be leveraged in future mappings. Significant improvement of the quality of data transformations can be achieved, when at least one of the models used in transformation is already analyzed and understood. Over recent decades, particular industries have defined data models that are widely adopted in commercial and open source solutions. Those models (often industry standards, accepted by ISO or other organizations) can be leveraged to increase reuse in integration projects resulting in a) lower project costs and b) faster delivery to production. The goal of this article is to provide a comprehensive review of the practical applications of standardization of data formats. Using use cases from the Financial Services Industry as examples, the author tries to identify the motivations and common elements of particular data formats, and how they can be leveraged in order to automate process of data transformations between the models.
42

Ferraro, Maria Brigida. "On the Generalization Performance of a Regression Model with Imprecise Elements." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 25, no. 05 (September 4, 2017): 723–40. http://dx.doi.org/10.1142/s0218488517500313.

Abstract:
A linear regression model for imprecise random variables is considered. The imprecision of a random element has been formalized by means of the LR fuzzy random variable, characterized by a center, a left and a right spread. In order to avoid the non-negativity conditions, the spreads are transformed by means of two invertible functions. To analyze the generalization performance of that model, an appropriate prediction error is introduced and estimated by means of a bootstrap procedure. Furthermore, since the choice of response transformations could affect the inferential procedures, a computational proposal is introduced for choosing, from a family of parametric link functions (the Box-Cox family), the transformation parameters that minimize the prediction error of the model.
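A sketch of the Box-Cox step with synthetic spreads; note that scipy selects the transformation parameter by maximum likelihood, whereas the paper chooses it by minimizing a bootstrap prediction error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spreads = rng.lognormal(mean=0.0, sigma=0.7, size=200)   # positive spreads (synthetic)

# Box-Cox family: y -> (y**lmbda - 1) / lmbda, with the log transform at lmbda = 0.
transformed, lmbda = stats.boxcox(spreads)
print(lmbda)
```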
43

Kronawitter, Stefan, Sebastian Kuckuk, Harald Köstler, and Christian Lengauer. "Automatic Data Layout Transformations in the ExaStencils Code Generator." Parallel Processing Letters 28, no. 03 (September 2018): 1850009. http://dx.doi.org/10.1142/s0129626418500093.

Abstract:
Performance optimizations should focus not only on the computations of an application, but also on the internal data layout. A well-known problem is whether a struct of arrays or an array of structs results in higher performance for a particular application. Even though the switch from one to the other is fairly simple to implement, testing both transformations can become laborious and error-prone. Additionally, there are more complex data layout transformations, such as color splitting for multi-color kernels in the domain of stencil codes, that are difficult to apply manually. As a remedy, we propose new flexible layout transformation statements for our domain-specific language ExaSlang that support arbitrary affine transformations. Since our code generator applies them automatically to the generated code, these statements enable simple adaptation of the data layout without the need for any other modifications of the application code. This constitutes a big advance in the ease of testing and evaluating different memory layout schemes in order to identify the best one.
44

Tsui, K. M., and S. C. Chan. "Error Analysis and Efficient Realization of the Multiplier-Less FFT-Like Transformation (ML-FFT) and Related Sinusoidal Transformations." Journal of VLSI signal processing systems for signal, image and video technology 44, no. 1-2 (May 27, 2006): 97–115. http://dx.doi.org/10.1007/s11265-006-7510-9.

45

Sekara, Tomislav, and Milic Stojic. "Application of the α-approximation for discretization of analogue systems." Facta universitatis - series: Electronics and Energetics 18, no. 3 (2005): 571–86. http://dx.doi.org/10.2298/fuee0503571s.

Abstract:
A method for the discretization of analogue systems using the α-approximation is presented, along with a generalization of some of the existing transformation methods. A comparative analysis, through corresponding examples involving several known discretization methods, is carried out. It is demonstrated that applying the α-approximation reduces the discretization error compared with other approximation methods. The frequency characteristics of the discrete system obtained by these transformations are approximately equal to those of the original analogue system in the basic frequency range.
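scipy exposes the same idea through its generalized bilinear transform, whose alpha parameter interpolates between the familiar discretization rules (alpha = 0 forward Euler, 0.5 Tustin, 1 backward Euler); the first-order system below is an arbitrary example, not one from the paper:

```python
from scipy import signal

analog = ([1.0], [1.0, 1.0])   # G(s) = 1 / (s + 1)
dt = 0.1

for alpha in (0.0, 0.5, 1.0):
    num_d, den_d, _ = signal.cont2discrete(analog, dt, method='gbt', alpha=alpha)
    print(alpha, num_d, den_d)
```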
46

Blaylock, James R., and David M. Smallwood. "Box-Cox Transformations and a Heteroscedastic Error Variance: Import Demand Equations Revisited." International Statistical Review / Revue Internationale de Statistique 53, no. 1 (April 1985): 91. http://dx.doi.org/10.2307/1402882.

47

Baltus, C. "Truncation error bounds for the composition of limit-periodic linear fractional transformations." Journal of Computational and Applied Mathematics 46, no. 3 (June 1993): 395–404. http://dx.doi.org/10.1016/0377-0427(93)90035-a.

48

Li, Yanlei, Youmin Hu, Bo Wu, and Jikai Fan. "Thermal Error Modelling of the Spindle Using Data Transformation and Adaptive Neurofuzzy Inference System." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/130253.

Abstract:
This paper proposes a new method for predicting spindle deformation based on temperature data. The method introduces the adaptive neurofuzzy inference system (ANFIS), a neurofuzzy modeling approach that integrates kernel and geometrical transformations. By utilizing data transformation, the number of ANFIS rules can be effectively reduced and the predictive model structure can be simplified. To build the predictive model, we first map the original temperature data to a feature space with Gaussian kernels. We then process the mapped data with a geometrical transformation so that the data gather in a square region. Finally, the transformed data are used as input to train the ANFIS. A verification experiment is conducted to evaluate the performance of the proposed method: six Pt100 thermal resistances are used to monitor the spindle temperature, and a laser displacement sensor is used to detect the spindle deformation. Experimental results show that the proposed method can precisely predict the spindle deformation and greatly improve the thermal performance of the spindle. Compared with back propagation (BP) networks, the proposed method is more suitable for the complex working conditions found in practical applications.
49

Orr, Jon. "Function Transformations and the Desmos Activity Builder." Mathematics Teacher 110, no. 7 (March 2017): 549–51. http://dx.doi.org/10.5951/mathteacher.110.7.0549.

Abstract:
In my classroom, the Desmos® calculator has been a game-changer for student understanding of relationships between graphs and algebraic representations of functions. We use this beautifully designed website every day. Lately, the Desmos class activity site (https://teacher.desmos.com) and the new Activity Builder tool have proven especially useful by allowing me to provide my students with opportunities to struggle productively, create, error-check, and think deeply to learn mathematics. Activity Builder is the easiest tool I have found for setting up and running an activity that promotes these ideas in your classroom.
50

Çömez, Doğan, and Mrinal Kanti Roychowdhury. "Quantization for Infinite Affine Transformations." Fractal and Fractional 6, no. 5 (April 25, 2022): 239. http://dx.doi.org/10.3390/fractalfract6050239.

Abstract:
Quantization for a probability distribution refers to the idea of estimating a given probability measure by a discrete probability measure supported on a finite set. In this article, we consider a probability distribution generated by an infinite system of affine transformations {S_ij} on R^2 with associated probabilities {p_ij} such that p_ij > 0 for all i, j ∈ N and ∑_{i,j=1}^∞ p_ij = 1. For such a probability measure P, the optimal sets of n-means and the nth quantization error are calculated for every natural number n. It is shown that the distribution of such a probability measure is the same as that of the direct product of the Cantor distribution. In addition, it is proved that the quantization dimension D(P) exists and is finite, whereas the D(P)-dimensional quantization coefficient does not exist; the D(P)-dimensional lower and upper quantization coefficients lie in the closed interval [1/12, 5/4].