Journal articles on the topic 'Data weighting function'

To see the other types of publications on this topic, follow the link: Data weighting function.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Data weighting function.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Dombi, József, and Tamás Jónás. "Towards a general class of parametric probability weighting functions." Soft Computing 24, no. 21 (September 24, 2020): 15967–77. http://dx.doi.org/10.1007/s00500-020-05335-3.

Full text
Abstract:
Abstract In this study, we present a novel methodology that can be used to generate parametric probability weighting functions, which play an important role in behavioral economics, by making use of the Dombi modifier operator of continuous-valued logic. Namely, we will show that the modifier operator satisfies the requirements for a probability weighting function. Next, we will demonstrate that the application of the modifier operator can be treated as a general approach to create parametric probability weighting functions including the most important ones such as the Prelec and the Ostaszewski, Green and Myerson (Lattimore, Baker and Witte) probability weighting function families. Also, we will show that the asymptotic probability weighting function induced by the inverse of the so-called epsilon function is none other than the Prelec probability weighting function. Furthermore, we will prove that, by using the modifier operator, other probability weighting functions can be generated from the dual generator functions and from transformed generator functions. Finally, we will show how the modifier operator can be used to generate strictly convex (or concave) probability weighting functions and introduce a method for fitting a generated probability weighting function to empirical data.
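
For readers unfamiliar with the Prelec family mentioned in this abstract, here is a minimal sketch of the standard Prelec probability weighting function; the parameter values and the fitting procedure are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prelec_weight(p, alpha=0.65, beta=1.0):
    """Prelec probability weighting w(p) = exp(-beta * (-ln p)^alpha).

    alpha in (0, 1) gives the familiar inverse-S shape; beta shifts the elevation.
    The parameter values here are illustrative, not estimates from the paper.
    """
    p = np.asarray(p, dtype=float)
    return np.exp(-beta * (-np.log(p)) ** alpha)

probs = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(prelec_weight(probs))  # small probabilities are overweighted, large ones underweighted
```
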
APA, Harvard, Vancouver, ISO, and other styles
2

Ying Han, Pang, Andrew Teoh Beng Jin, and Lim Heng Siong. "Eigenvector Weighting Function in Face Recognition." Discrete Dynamics in Nature and Society 2011 (2011): 1–15. http://dx.doi.org/10.1155/2011/521935.

Full text
Abstract:
Graph-based subspace learning is a class of dimensionality reduction techniques used in face recognition. The technique reveals, via a linear projection, the local manifold structure of face data that is hidden in the image space. However, real-world face data may be too complex to measure due to both external imaging noise and the intra-class variations of the face images. Hence, features extracted by the graph-based technique could be noisy. An appropriate weight should be imposed on the data features for better data discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace attributed to imaging noise. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two. Experiments on the FERET and FRGC databases are conducted to show the promising performance of the proposed technique.
APA, Harvard, Vancouver, ISO, and other styles
3

Lu, Guangyin, Dongxing Zhang, Shujin Cao, Yihuai Deng, Gang Xu, Yihu Liu, Ziqiang Zhu, and Peng Chen. "Spherical Planting Inversion of GRAIL Data." Applied Sciences 13, no. 5 (March 6, 2023): 3332. http://dx.doi.org/10.3390/app13053332.

Full text
Abstract:
In large-scale potential field data inversion, constructing the kernel matrix is time consuming and requires large amounts of memory. Therefore, a spherical planting inversion of Gravity Recovery and Interior Laboratory (GRAIL) data is proposed using the L1-norm in conjunction with tesseroids. Spherical planting inversion, however, depends strongly on the correct density contrast, location, and number of the seeds; otherwise, anomalous sources produced by different seeds can intrude on one another. Hence, a weighting function was introduced to limit the influence area of the seeds and yield robust solutions; however, it is challenging to set customized parameters for each seed, especially when a large number of seeds is used or the gravity anomaly data are complex. We therefore employed the “shape-of-anomaly” data-misfit function in conjunction with a new seed weighting function to improve the spherical planting inversion. The proposed seed weighting function is constructed from the covariance matrix of the given gravity data and avoids manually setting customized parameters for each seed. The results of synthetic tests and field data show that spherical planting inversion requires less computer memory than traditional inversion. Furthermore, the proposed seed weighting function can effectively limit the seed influence area. The result of spherical planting inversion indicates that the crustal thickness of Mare Crisium is about 0 km because the Crisium impact may have removed all crust from parts of the basin.
APA, Harvard, Vancouver, ISO, and other styles
4

Bedini, L., S. Fossi, and R. Reggiannini. "Generalised crosscorrelator with data-estimated weighting function: a simulation analysis." IEE Proceedings F Communications, Radar and Signal Processing 133, no. 2 (1986): 195. http://dx.doi.org/10.1049/ip-f-1.1986.0030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

KELLER, ANNETTE, and FRANK KLAWONN. "FUZZY CLUSTERING WITH WEIGHTING OF DATA VARIABLES." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 08, no. 06 (December 2000): 735–46. http://dx.doi.org/10.1142/s0218488500000538.

Full text
Abstract:
We introduce an objective function-based fuzzy clustering technique that assigns one influence parameter to each single data variable for each cluster. Our method is not only suited to detecting structures or groups of data that are not uniformly distributed over the structure's single domains, but also gives information about the influence of individual variables on the detected groups. In addition, our approach can be seen as a generalization of the well-known fuzzy c-means clustering algorithm.
APA, Harvard, Vancouver, ISO, and other styles
6

Bedini, L., S. Fossi, and R. Reggiannini. "Erratum: Generalised crosscorrelator with data-estimated weighting function: a simulation analysis." IEE Proceedings F Communications, Radar and Signal Processing 133, no. 3 (1986): 231. http://dx.doi.org/10.1049/ip-f-1.1986.0039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jiang, Ray, Shangtong Zhang, Veronica Chelu, Adam White, and Hado van Hasselt. "Learning Expected Emphatic Traces for Deep RL." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 7015–23. http://dx.doi.org/10.1609/aaai.v36i6.20660.

Full text
Abstract:
Off-policy sampling and experience replay are key for improving sample efficiency and scaling model-free temporal difference learning methods. When combined with function approximation, such as neural networks, they form what is known as the deadly triad, which is potentially unstable. Recently, it has been shown that stability and good performance at scale can be achieved by combining emphatic weightings and multi-step updates. This approach, however, is generally limited to sampling complete trajectories in order to compute the required emphatic weighting. In this paper we investigate how to combine emphatic weightings with non-sequential, off-line data sampled from a replay buffer. We develop a multi-step emphatic weighting that can be combined with replay, and a time-reversed n-step TD learning algorithm to learn the required emphatic weighting. We show that these state weightings reduce variance compared with prior approaches, while providing convergence guarantees. We tested the approach at scale on Atari 2600 video games, and observed that the new X-ETD(n) agent improved over baseline agents, highlighting both the scalability and broad applicability of our approach.
APA, Harvard, Vancouver, ISO, and other styles
8

Blahak, Ulrich. "An Approximation to the Effective Beam Weighting Function for Scanning Meteorological Radars with an Axisymmetric Antenna Pattern." Journal of Atmospheric and Oceanic Technology 25, no. 7 (July 1, 2008): 1182–96. http://dx.doi.org/10.1175/2007jtecha1010.1.

Full text
Abstract:
Abstract To obtain statistically stable reflectivity measurements by meteorological radars, it is common practice to average over several consecutive pulses during which the antenna rotates at a certain angular velocity. Taking into account the antenna’s continuous motion, the measured reflectivity is determined by an effective beam weighting function, which is different from a single-pulse weighting function—a fact that is widely ignored in applications involving beam weighting. In this paper, the effective beam weighting function is investigated in detail. The theoretical derivation shows that the effective weighting function is essentially a simple moving sum of single-beam weighting functions. Assuming a Gaussian shape of a single pulse, a simple and easy-to-use parameterization of the effective beam weighting function is arrived at, which depends only on the single beamwidth and the ratio of the single beamwidth to the rotational angular averaging interval. The derived relation is formulated in the “radar system” (i.e., the spherical coordinate system consisting of azimuth and elevation angles) that is often applied in practice. Formulas for the “beam system” (two orthogonal angles relative to the beam axis) are also presented. The final parameterization should be applicable to almost all meteorological radars and might be used (i) in specialized radar data analyses (with ground-based or satellite radars) and (ii) for radar forward operators to calculate simulated radar parameters from the results of NWP models.
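
As a rough numerical illustration of the "moving sum of single-beam weighting functions" idea described above, the sketch below averages Gaussian single-pulse patterns over the rotational averaging interval; the Gaussian pattern constant, the discretization, and the example numbers are assumptions, not the paper's parameterization.

```python
import numpy as np

def single_pulse_weight(phi, beamwidth):
    """Gaussian approximation to the two-way antenna power pattern in azimuth,
    normalized to 1 at the beam centre. The constant 8*ln(2) is an assumed
    choice for a two-way pattern, not the paper's exact convention."""
    return np.exp(-8.0 * np.log(2.0) * (phi / beamwidth) ** 2)

def effective_weight(phi, beamwidth, averaging_interval, n_pulses=64):
    """Effective azimuthal weight: mean of single-pulse patterns whose centres
    sweep uniformly across the rotational averaging interval."""
    centres = np.linspace(-averaging_interval / 2.0, averaging_interval / 2.0, n_pulses)
    return np.mean([single_pulse_weight(phi - c, beamwidth) for c in centres], axis=0)

phi = np.linspace(-3.0, 3.0, 301)                       # azimuth offset from ray centre, degrees
w_eff = effective_weight(phi, beamwidth=1.0, averaging_interval=1.0)
```
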
APA, Harvard, Vancouver, ISO, and other styles
9

Nie, Lichao, Zhao Ma, Bin Liu, Zhenhao Xu, Wei Zhou, Chengkun Wang, Junyang Shao, and Xin Yin. "A Weighting Function-Based Method for Resistivity Inversion in Subsurface Investigations." Journal of Environmental and Engineering Geophysics 25, no. 1 (March 2020): 129–38. http://dx.doi.org/10.2113/jeeg19-029.

Full text
Abstract:
There is a high demand for high detection accuracy and resolution with respect to anomalous bodies due to the increased development of underground spaces. This study focused on the weighted inversion of observed data from individual array types of electrical resistivity tomography (ERT), and developed an improved method of applying a data weighting function to the geoelectrical inversion procedure. In this method, a weighting factor acting as an observed-data weighting term was introduced into the objective function. For individual arrays, the sensitivity decreases with increasing electrode interval. Therefore, the Jacobian matrices were computed for the observed data of individual arrays to determine the value of the weighting factor, and the weighting factor was calculated automatically during inversion. In this work, 2D combined inversion of ERT data from four-electrode Alfa-type arrays is examined. The effectiveness of the weighted inversion method was demonstrated using various synthetic and real data examples. The results indicated that the inversion method based on the observed-data weighting function could improve the contribution of observed data with depth information to the objective function. It was shown that the combined weighted inversion method can be a feasible tool for improving positioning accuracy and resolution when imaging deep anomalous bodies in the subsurface.
APA, Harvard, Vancouver, ISO, and other styles
10

Vitale, Andrea, and Maurizio Fedi. "Self-constrained inversion of potential fields through a 3D depth weighting." GEOPHYSICS 85, no. 6 (November 1, 2020): G143–G156. http://dx.doi.org/10.1190/geo2019-0812.1.

Full text
Abstract:
A new method for inversion of potential fields is developed using a depth-weighting function specifically designed for fields related to complex source distributions. Such a weighting function is determined from an analysis of the field that precedes the inversion itself. The algorithm is self-consistent, meaning that the weighting used in the inversion is directly deduced from the scaling properties of the field. Hence, the algorithm is based on two steps: (1) estimation of the local homogeneity degree of the field in a 3D domain of the harmonic region and (2) inversion of the data using a specific weighting function with a 3D variable exponent. A multiscale data set is first formed by upward continuation of the original data. Local homogeneity and a multihomogeneous model are then assumed, and a system built on the scaling function is solved at each point of the multiscale data set, yielding a multiscale set of local-homogeneity degrees of the field. Then, the estimated homogeneity degree is associated with the model weighting function in the source volume. Tests on synthetic data show that the generalization of the depth weighting to a 3D function and the proposed two-step algorithm have great potential to improve the quality of the solution. The gravity field of a polyhedron is inverted, yielding a realistic reconstruction of the whole body, including the bottom surface. The inversion of a real aeromagnetic data set from the Mt. Vulture area also yields a good and geologically consistent reconstruction of the complex source distribution.
APA, Harvard, Vancouver, ISO, and other styles
11

Delavar, Arash Ghorban Niya, and Zahra Jafari. "One Method to Reduce Data Classification Using Weighting Technique in SVM +." Modern Applied Science 10, no. 9 (July 21, 2016): 245. http://dx.doi.org/10.5539/mas.v10n9p245.

Full text
Abstract:
SVM is a learning algorithm used to analyze data and recognize patterns. An important open issue, however, is that replicated data and its real-time processing have not been handled correctly. For this reason, in this paper we provide a method, DCSVM+, that reduces the data classification effort using a weighting technique in SVM+. With respect to the parameters of SVM+, the proposed method has the optimum response time. By observing the data volume and density parameters, we are able to classify the interval sizes such that, in the investigated case study, this classification reduces the running time of the SVM+ algorithm. By providing an objective function for the proposed method, we are also able to reduce replicated data in SVM+ by integrating the parameters with the data classification. Finally, we provide a threshold detector (TD) for DCSVM+ that, with respect to the competency function, reduces the processing time and increases the data processing speed. The proposed algorithm with the weighting technique for SVM+ is thus optimized in terms of efficiency.
APA, Harvard, Vancouver, ISO, and other styles
12

Komamizu, Takahiro, Yasuhiro Ogawa, and Katsuhiko Toyama. "An Ensemble Framework of Multi-ratio Undersampling-based Imbalanced Classification." Journal of Data Intelligence 2, no. 1 (March 2021): 30–46. http://dx.doi.org/10.26421/jdi2.1-2.

Full text
Abstract:
Class imbalance is commonly observed in real-world data, and it is problematic in that it degrades classification performance due to biased supervision. Undersampling is an effective resampling approach to the class imbalance. The conventional undersampling-based approaches involve a single fixed sampling ratio. However, different sampling ratios have different preferences toward classes. In this paper, an undersampling-based ensemble framework, MUEnsemble, is proposed. This framework involves weak classifiers of different sampling ratios, and it allows for a flexible design for weighting weak classifiers in different sampling ratios. To demonstrate the principle of the design, in this paper, a uniform weighting function and a Gaussian weighting function are presented. An extensive experimental evaluation shows that MUEnsemble outperforms undersampling-based and oversampling-based state-of-the-art methods in terms of recall, gmean, F-measure, and ROC-AUC metrics. Also, the evaluation showcases that the Gaussian weighting function is superior to the uniform weighting function. This indicates that the Gaussian weighting function can capture the different preferences of sampling ratios toward classes. An investigation into the effects of the parameters of the Gaussian weighting function shows that the parameters of this function can be chosen in terms of recall, which is preferred in many real-world applications.
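
A hedged sketch of what Gaussian weighting of weak classifiers across sampling ratios can look like; the ratio grid, the parameters mu and sigma, and the score-averaging combiner are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def gaussian_classifier_weights(sampling_ratios, mu, sigma):
    """Weight each undersampling ratio by a Gaussian bump centred at mu.
    mu and sigma are tuning parameters (illustrative; the paper's exact
    parameterization may differ)."""
    w = np.exp(-0.5 * ((np.asarray(sampling_ratios) - mu) / sigma) ** 2)
    return w / w.sum()

def ensemble_score(scores_per_ratio, weights):
    """Weighted average of the per-ratio weak-classifier scores for one sample."""
    return float(np.dot(weights, scores_per_ratio))

ratios = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # minority:majority sampling ratios (assumed grid)
weights = gaussian_classifier_weights(ratios, mu=1.0, sigma=1.0)
print(ensemble_score(np.array([0.9, 0.8, 0.6, 0.4, 0.3]), weights))
```
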
APA, Harvard, Vancouver, ISO, and other styles
13

Ramadhan, Riza F., and Robert Kurniawan. "PEMODELAN DATA KEMATIAN BAYI DENGAN GEOGRAPHICALLY WEIGHTED NEGATIVE BINOMIAL REGRESSION." MEDIA STATISTIKA 9, no. 2 (January 24, 2017): 95. http://dx.doi.org/10.14710/medstat.9.2.95-106.

Full text
Abstract:
The overdispersion phenomenon and the influence of location or spatial aspects on the data are handled using Geographically Weighted Negative Binomial Regression (GWNBR). GWNBR is the best solution for forming a regression analysis that is specific to each observation's location. The analysis results in parameter values that differ from one observation location to another. Selection of the weighting matrix is done before the GWNBR modeling, since different weightings result in different models. Thus, this study aims to investigate which weighting produces the best-fitting model for infant mortality data: fixed kernel Gaussian, fixed kernel Bisquare, adaptive kernel Gaussian, or adaptive kernel Bisquare in GWNBR modeling. The study region covers all districts/municipalities in Java because the observations there are more numerous and have more diverse characteristics. The study shows that, out of the four kernel functions, the best-fitting model for infant mortality data in Java in 2012 is produced by the fixed kernel Gaussian function. Besides that, GWNBR with the fixed kernel Gaussian also shows better results than Poisson regression and negative binomial regression for modeling infant mortality data, based on the AIC and Deviance values. Keywords: GWNBR, infant mortality, fixed gaussian, fixed bisquare, adaptive gaussian, adaptive bisquare.
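
For reference, the fixed Gaussian and bisquare kernel weights commonly used in geographically weighted models (with bandwidth b and distance d_ij between regression point i and observation j), together with the adaptive-bandwidth idea, can be sketched as follows; this is the textbook form of these kernels, not code from the paper.

```python
import numpy as np

def fixed_gaussian(d, b):
    """Fixed Gaussian kernel: w_ij = exp(-(1/2) * (d_ij / b)^2)."""
    return np.exp(-0.5 * (d / b) ** 2)

def fixed_bisquare(d, b):
    """Fixed bisquare kernel: w_ij = (1 - (d_ij / b)^2)^2 for d_ij < b, else 0."""
    w = (1.0 - (d / b) ** 2) ** 2
    return np.where(d < b, w, 0.0)

def adaptive_bandwidths(dist_matrix, k):
    """Adaptive bandwidth: for each regression point, the distance to its k-th
    nearest neighbour (the 'adaptive' kernel variants plug this into the kernels above)."""
    return np.sort(dist_matrix, axis=1)[:, k]

# d: distances from one regression location to all observations (assumed units: km)
d = np.array([0.0, 10.0, 25.0, 60.0])
print(fixed_gaussian(d, b=30.0), fixed_bisquare(d, b=30.0))
```
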
APA, Harvard, Vancouver, ISO, and other styles
14

Pérez‐Flores, M. A., S. Méndez‐Delgado, and E. Gómez‐Treviño. "Imaging low‐frequency and dc electromagnetic fields using a simple linear approximation." GEOPHYSICS 66, no. 4 (July 2001): 1067–81. http://dx.doi.org/10.1190/1.1487054.

Full text
Abstract:
We consider that all types of electromagnetic measurements represent weighted averages of the subsurface electrical conductivity distribution, and that to each type of measurement there corresponds a different weighting function. We use this concept for the quantitative interpretation of dc resistivity, magnetometric resistivity, and low‐frequency electric and magnetic measurements at low induction numbers. In all three cases the corresponding inverse problems are nonlinear because the weighting functions depend on the unknown conductivity distribution. We use linear approximations that adapt to the data and do not require reference resistivity values. The problem is formulated numerically as a solution of a system of linear equations. The unknown conductivity values are obtained by minimizing an objective function that includes the quadratic norm of the residuals as well as the spatial derivatives of the unknowns. We also apply constraints through the use of quadratic programming. The final product is the flattest model that is compatible with the data under the assumption of the given weighting functions. This approximate inversion or imaging technique produces reasonably good results for low and moderate conductivity contrasts. We present the results of inverting jointly and individually different data sets using synthetic and field data.
APA, Harvard, Vancouver, ISO, and other styles
15

Togo, Hidetoshi, Kohei Asanuma, Tatsushi Nishi, and Ziang Liu. "Machine Learning and Inverse Optimization for Estimation of Weighting Factors in Multi-Objective Production Scheduling Problems." Applied Sciences 12, no. 19 (September 21, 2022): 9472. http://dx.doi.org/10.3390/app12199472.

Full text
Abstract:
In recent years, scheduling optimization has been utilized in production systems. To construct a suitable mathematical model of a production scheduling problem, modeling techniques that can automatically select an appropriate objective function from historical data are necessary. This paper presents two methods to estimate weighting factors of the objective function in the scheduling problem from historical data, given the information of operation time and setup costs. We propose a machine learning-based method, and an inverse optimization-based method using the input/output data of the scheduling problems when the weighting factors of the objective function are unknown. These two methods are applied to a multi-objective parallel machine scheduling problem and a real-world chemical batch plant scheduling problem. The results of the estimation accuracy evaluation show that the proposed methods for estimating the weighting factors of the objective function are effective.
APA, Harvard, Vancouver, ISO, and other styles
16

Bookout, Paul Stanley. "Statistically Generated Weighted Curve Fit of Residual Functions for Modal Analysis of Structures." Shock and Vibration 4, no. 4 (1997): 211–22. http://dx.doi.org/10.1155/1997/587328.

Full text
Abstract:
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing of large structures in a free-free environment to measure the effects of higher-order modes and stiffness at distinct degree-of-freedom interfaces. Due to present limitations on damping estimates in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of irregular data, although it should be a smooth curve of second-order polynomial form. A weighting function for the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure, which is used to predict constrained mode shapes.
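
A minimal sketch of a neighbourhood-variance-based weighting combined with a weighted second-order polynomial fit, in the spirit of the abstract; the window size, the inverse-standard-deviation weight form, and the synthetic residual-function data are assumptions.

```python
import numpy as np

def local_inverse_std_weights(y, half_window=2, eps=1e-12):
    """Weight each point by the inverse of the standard deviation of its neighbours,
    so irregular regions of the residual function are down-weighted
    (np.polyfit expects weights of roughly 1/sigma). Window size is an assumption."""
    n = len(y)
    w = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        w[i] = 1.0 / (np.std(y[lo:hi]) + eps)
    return w / w.max()

freq = np.linspace(5.0, 25.0, 50)                        # frequency axis (illustrative)
resid = 1e-6 + 2e-8 * freq ** 2 + np.random.default_rng(0).normal(0.0, 5e-8, freq.size)
weights = local_inverse_std_weights(resid)
coeffs = np.polyfit(freq, resid, deg=2, w=weights)       # weighted second-order polynomial fit
```
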
APA, Harvard, Vancouver, ISO, and other styles
17

Mascitti, Matías. "La función conjetural del Derecho reforzada por los algoritmos en la era de big data." IUS ET SCIENTIA 6, no. 2 (2020): 162–85. http://dx.doi.org/10.12795/ietscientia.2020.i02.11.

Full text
Abstract:
Through this paper, we aim to illustrate the increase in the power of the predictive function of Law that will be generated by the use of intelligent integral legal search engines (IILSE). They will allow a more effective strategic conjectural analysis by virtue of the sociological, psychological, normative and axiological information that they will provide to the legal operator for decision making. To this end, we use an interdisciplinary perspective on the analysis of Law, highlighting the advancement of artificial intelligence systems in a transparency society where data is a valuable asset. The IILSE, integrated with an efficient natural language and with algorithms created to obtain the aforementioned interdisciplinary information, will be a valuable aid instrument for: greater linguistic precision, normative interpretation, weighting of legal principles, prediction of judicial sentences, democratization of Law, and a significant decrease in the differences in the practical effects in force between the traditions of Civil Law and Common Law.
APA, Harvard, Vancouver, ISO, and other styles
18

XIAO, GUANGHUA, and WEI PAN. "GENE FUNCTION PREDICTION BY A COMBINED ANALYSIS OF GENE EXPRESSION DATA AND PROTEIN-PROTEIN INTERACTION DATA." Journal of Bioinformatics and Computational Biology 03, no. 06 (December 2005): 1371–89. http://dx.doi.org/10.1142/s0219720005001612.

Full text
Abstract:
Prediction of biological functions of genes is an important issue in basic biology research and has applications in drug discovery and gene therapy. Previous studies have shown that either gene expression data or protein-protein interaction data alone can be used for predicting gene functions. In particular, clustering gene expression profiles has been widely used for gene function prediction. In this paper, we first propose a new method for gene function prediction using protein-protein interaction data, which will facilitate combining prediction results based on clustering gene expression profiles. We then propose a new method to combine the prediction results based on either source of data by weighting the evidence provided by each. Using protein-protein interaction data downloaded from the GRID database and published gene expression profiles from 300 microarray experiments for the yeast S. cerevisiae, we show that this new combined analysis provides improved predictive performance over that of using either data source alone in a cross-validated analysis of the MIPS gene annotations. Finally, we propose a logistic regression method that is flexible enough to combine information from any number of data sources while maintaining computational feasibility.
APA, Harvard, Vancouver, ISO, and other styles
19

Warsito, Budi, Hasbi Yasin, Dwi Ispriyanti, and Arief Rachman Hakim. "The Step Construction of Geographically Weighted Panel Regression in Air Polluter Standard Index (APSI) Data." E3S Web of Conferences 73 (2018): 12006. http://dx.doi.org/10.1051/e3sconf/20187312006.

Full text
Abstract:
Geographically Weighted Panel Regression (GWPR) is a local linear regression model that combines the GWR model and the panel data regression model while considering spatial effects, especially the spatial heterogeneity problem. This article focuses on the soft computation of the GWPR model using the Fixed Effect Model (FEM). Parameter estimation in GWPR is obtained by Weighted Least Squares (WLS) methods, and the resulting model differs from one location to another. This study compares the fixed-effect GWPR model under several weighting functions. The best model is determined based on the largest coefficient of determination (R2) value. In this study, the model is applied to the Air Polluter Standard Index (APSI) in Surabaya City, East Java. The results indicate that the Fixed Effect GWPR model with a fixed exponential kernel weighting function is the best model to describe the APSI because it has the smallest AIC.
APA, Harvard, Vancouver, ISO, and other styles
20

Coldewey-Egbers, M., M. Weber, L. N. Lamsal, R. de Beek, M. Buchwitz, and J. P. Burrows. "Total ozone retrieval from GOME UV spectral data using the weighting function DOAS approach." Atmospheric Chemistry and Physics Discussions 4, no. 4 (August 31, 2004): 4915–44. http://dx.doi.org/10.5194/acpd-4-4915-2004.

Full text
Abstract:
Abstract. A new algorithm approach called Weighting Function Differential Optical Absorption Spectroscopy (WFDOAS) is presented which has been developed to retrieve total ozone columns from nadir observations of the Global Ozone Monitoring Experiment. By fitting the vertically integrated ozone weighting function rather than ozone cross-section to the sun-normalized radiances, a direct retrieval of vertical column amounts is possible. The new WFDOAS approach takes into account the slant path wavelength modulation that is usually neglected in the standard DOAS approach using single airmass factors. This paper focuses on the algorithm description and error analysis, while in a companion paper by Weber et al. (2004) a detailed validation with groundbased measurements is presented. For the first time several auxiliary quantities directly derived from the GOME spectral range such as cloud-top-height and cloud fraction (O2-A band) and effective albedo using the Lambertian Equivalent Reflectivity (LER) near 377 nm are used in combination as input to the ozone retrieval. In addition the varying ozone dependent contribution to the Raman correction in scattered light known as Ring effect has been included. Detailed investigations have been performed concerning the influence of the molecular ozone filling-in as part of the Ring effect. The precision of the total ozone retrieval is estimated to be better than 3% for solar zenith angles below 80°.
APA, Harvard, Vancouver, ISO, and other styles
21

Coldewey-Egbers, M., M. Weber, L. N. Lamsal, R. de Beek, M. Buchwitz, and J. P. Burrows. "Total ozone retrieval from GOME UV spectral data using the weighting function DOAS approach." Atmospheric Chemistry and Physics 5, no. 4 (March 29, 2005): 1015–25. http://dx.doi.org/10.5194/acp-5-1015-2005.

Full text
Abstract:
Abstract. A new algorithm approach called Weighting Function Differential Optical Absorption Spectroscopy (WFDOAS) is presented which has been developed to retrieve total ozone columns from nadir observations of the Global Ozone Monitoring Experiment. By fitting the vertically integrated ozone weighting function rather than ozone cross-section to the sun-normalized radiances, a direct retrieval of vertical column amounts is possible. The new WFDOAS approach takes into account the slant path wavelength modulation that is usually neglected in the standard DOAS approach using single airmass factors. This paper focuses on the algorithm description and error analysis, while in a companion paper by Weber et al. (2004) a detailed validation with groundbased measurements is presented. For the first time several auxiliary quantities directly derived from the GOME spectral range such as cloud-top-height and cloud fraction (O2-A band) and effective albedo using the Lambertian Equivalent Reflectivity (LER) near 377nm are used in combination as input to the ozone retrieval. In addition the varying ozone dependent contribution to the Raman correction in scattered light known as Ring effect has been included. The molecular ozone filling-in that is accounted for in the new algorithm has the largest contribution to the improved total ozone results from WFDOAS compared to the operational product. The precision of the total ozone retrieval is estimated to be better than 3% for solar zenith angles below 80°.
APA, Harvard, Vancouver, ISO, and other styles
22

Cella, Federico, and Maurizio Fedi. "Inversion of potential field data using the structural index as weighting function rate decay." Geophysical Prospecting 60, no. 2 (July 4, 2011): 313–36. http://dx.doi.org/10.1111/j.1365-2478.2011.00974.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Wei, Xiang-Gen Xia, Chuanjiang He, Zemin Ren, and Jian Lu. "A new weighting scheme for arc based circle cone-beam CT reconstruction." Journal of X-Ray Science and Technology 30, no. 1 (January 22, 2022): 145–63. http://dx.doi.org/10.3233/xst-211000.

Full text
Abstract:
In this paper, we present an arc based fan-beam computed tomography (CT) reconstruction algorithm by applying Katsevich’s helical CT image reconstruction formula to 2D fan-beam CT scanning data. Specifically, we propose a new weighting function to deal with the redundant data. Our weighting function ϖ ( x _ , λ ) is an average of two characteristic functions, where each characteristic function indicates whether the projection data of the scanning angle contributes to the intensity of the pixel x _ . In fact, for every pixel x _ , our method uses the projection data of two scanning angle intervals to reconstruct its intensity, where one interval contains the starting angle and another contains the end angle. Each interval corresponds to a characteristic function. By extending the fan-beam algorithm to the circle cone-beam geometry, we also obtain a new circle cone-beam CT reconstruction algorithm. To verify the effectiveness of our method, the simulated experiments are performed for 2D fan-beam geometry with straight line detectors and 3D circle cone-beam geometry with flat-plan detectors, where the simulated sinograms are generated by the open-source software “ASTRA toolbox.” We compare our method with the other existing algorithms. Our experimental results show that our new method yields the lowest root-mean-square-error (RMSE) and the highest structural-similarity (SSIM) for both reconstructed 2D and 3D fan-beam CT images.
APA, Harvard, Vancouver, ISO, and other styles
24

Choquet, C. G., G. D. Ferroni, L. G. Leduc, and Joseph A. Robinson. "Statistical considerations associated with the use of the heterotrophic activity method for estimating Vmax and K′ for aquatic environments." Canadian Journal of Microbiology 34, no. 3 (March 1, 1988): 272–76. http://dx.doi.org/10.1139/m88-050.

Full text
Abstract:
This article deals with whether linear or nonlinear, ordinary least-squares regression analysis is superior in the estimation of the kinetic parameters Vmax and K′ from data sets generated by the Wright–Hobbie heterotrophic activity method. An analysis of variance showed that weighting of data was theoretically required for the optimal estimation of Vmax and K′ for both models. However, the magnitude of the parameter estimates was largely independent of the weighting function employed. Monte-Carlo analysis also indicated that the nature of the weighting function did not appreciably affect the parameter estimates, and more importantly, that there was not a strong statistical justification for the use of the nonlinear model over the linear model in the estimation of Vmax and K′.
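
For orientation, a weighted nonlinear least-squares fit of a Michaelis-Menten-type saturation model (used here only as a stand-in; the exact Wright-Hobbie formulation is not reproduced) might look like the sketch below, where the data and the error model defining the weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(s, vmax, k):
    """Michaelis-Menten-type saturation model: v = Vmax * S / (K' + S)."""
    return vmax * s / (k + s)

# Illustrative substrate additions and uptake rates; real data would come from
# heterotrophic-activity incubations.
s = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
v = np.array([0.9, 1.6, 2.5, 3.6, 4.2, 4.6])
sigma = 0.05 * v + 0.02        # assumed error model; this is what defines the weighting (w ~ 1/sigma)

popt, pcov = curve_fit(mm_rate, s, v, p0=[5.0, 2.0], sigma=sigma, absolute_sigma=True)
vmax_hat, k_prime_hat = popt
```
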
APA, Harvard, Vancouver, ISO, and other styles
25

Luo, Yi, Yuchun Eugene Wang, Nasher M. AlBinHassan, and Mohammed N. Alfaraj. "Computation of dips and azimuths with weighted structural tensor approach." GEOPHYSICS 71, no. 5 (September 2006): V119–V121. http://dx.doi.org/10.1190/1.2235591.

Full text
Abstract:
The structural tensor method can be used to compute dips and azimuths (i.e., orientation) encased in seismic data. However, this method may produce erratic and uninterpretable orientations when noisy data are encountered. To overcome this difficulty, we incorporate a data-adaptive weighting function to reformulate the gradient structural tensor. In our experiment, the squared instantaneous power is adopted as the weight factor; this can simplify the computation when the instantaneous phase is used as input. The real data examples illustrate that such a weighting function can produce more interpretable and spatially consistent orientations than conventional approaches.
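
Read literally, the weighted gradient structural tensor described above can be written as follows (a hedged transcription; the symbols are chosen here rather than taken from the paper):

\[
\mathbf{T}(\mathbf{x}) \;=\; \sum_{\mathbf{y}\,\in\,W(\mathbf{x})} w(\mathbf{y})\; \nabla d(\mathbf{y})\,\nabla d(\mathbf{y})^{\mathsf{T}},
\qquad w(\mathbf{y}) \;=\; \big(a^{2}(\mathbf{y})\big)^{2},
\]

where d is the input attribute (e.g., the instantaneous phase), a is the instantaneous amplitude so that w is the squared instantaneous power, and W(x) is a local analysis window; dip and azimuth then follow from the eigenvectors of T.
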
APA, Harvard, Vancouver, ISO, and other styles
26

Fedi, Maurizio, and Mark Pilkington. "Understanding imaging methods for potential field data." GEOPHYSICS 77, no. 1 (January 2012): G13—G24. http://dx.doi.org/10.1190/geo2011-0078.1.

Full text
Abstract:
Several noniterative, imaging methods for potential field data have been proposed that provide an estimate of the 3D magnetization/density distribution within the subsurface or that produce images of quantities related or proportional to such distributions. They have been derived in various ways, using generalized linear inversion, Wiener filtering, wavelet and depth from extreme points (DEXP) transformations, crosscorrelation, and migration. We demonstrated that the resulting images from each of these approaches are equivalent to an upward continuation of the data, weighted by a (possibly) depth-dependent function. Source distributions or related quantities imaged by all of these methods are smeared, diffuse versions of the true distributions; but owing to the stability of upward continuation, resolution may be substantially increased by coupling derivative and upward continuation operators. These imaging techniques appeared most effective in the case of isolated, compact, and depth-limited sources. Because all the approaches were noniterative, computationally fast, and in some cases, produced a fit to the data, they did provide a quick, but approximate picture of physical property distributions. We have found that inherent or explicit depth-weighting is necessary to image sources at their correct depths, and that the best scaling law or weighting function has to be physically based, for instance, using the theory of homogeneous fields. A major advantage of these techniques was their speed, efficiently providing a basis for further detailed, follow-up modelling.
APA, Harvard, Vancouver, ISO, and other styles
27

Krasuski, Kamil, Dariusz Popielarczyk, Adam Ciećko, and Janusz Ćwiklak. "A New Strategy for Improving the Accuracy of Aircraft Positioning Using DGPS Technique in Aerial Navigation." Energies 14, no. 15 (July 22, 2021): 4431. http://dx.doi.org/10.3390/en14154431.

Full text
Abstract:
In this paper a new mathematical algorithm is proposed to improve the accuracy of DGPS (Differential GPS) positioning using several GNSS (Global Navigation Satellite System) reference stations. The new mathematical algorithm is based on a weighting scheme with the following three criteria: weighting as a function of baseline (vector) length, weighting as a function of vector length error, and weighting as a function of the number of tracked GPS (Global Positioning System) satellites for a single baseline. The algorithm of the test method takes into account the linear combination of the weighting coefficients and relates the position errors determined for single baselines. The calculation uses a weighting scheme for three independent baselines denoted as (1A, 2A, 3A). The proposed research method makes it possible to determine the resultant position errors for the ellipsoidal BLh coordinates of the aircraft and significantly improves the accuracy of DGPS positioning. The analysis and evaluation of the new research methodology was checked for data from two flight experiments carried out in Mielec and Dęblin. Based on the calculations performed, it was found that in the flight experiment in Mielec, application of the new research methodology improved DGPS positioning accuracy from 55 to 94% for all the BLh components. In turn, in the flight experiment in Dęblin, the accuracy of DGPS positioning improved by 63–91%. The study shows that the highest DGPS positioning accuracy is obtained when using weighting criterion II, the inverse of the square of the vector length error.
APA, Harvard, Vancouver, ISO, and other styles
28

Keating, Pierre B. "Weighted Euler deconvolution of gravity data." GEOPHYSICS 63, no. 5 (September 1998): 1595–603. http://dx.doi.org/10.1190/1.1444456.

Full text
Abstract:
Euler deconvolution is used for rapid interpretation of magnetic and gravity data. It is particularly good at delineating contacts and rapid depth estimation. The quality of the depth estimation depends mostly on the choice of the proper structural index and adequate sampling of the data. The structural index is a function of the geometry of the causative bodies. For gravity surveys, station distribution is in general irregular, and the gravity field is aliased. This results in erroneous depth estimates. By weighting the Euler equations by an error function proportional to station accuracies and the interstation distance, it is possible to reject solutions resulting from aliasing of the field and less accurate measurements. The technique is demonstrated on Bouguer anomaly data from the Charlevoix region in eastern Canada.
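
For context, the underlying Euler homogeneity equation (the standard form from the Euler deconvolution literature, not re-derived from this paper) is

\[
(x - x_0)\,\frac{\partial T}{\partial x} \;+\; (y - y_0)\,\frac{\partial T}{\partial y} \;+\; (z - z_0)\,\frac{\partial T}{\partial z} \;=\; N\,\big(B - T\big),
\]

where (x_0, y_0, z_0) is the source position, N is the structural index, T is the observed field, and B is the background level. In the weighted variant described above, this system is solved by least squares in each data window with per-equation weights built from the station accuracies and the interstation distance, so that solutions driven by aliased or less accurate measurements can be rejected.
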
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Shizhe, Zongji Li, Pingbo Wang, and Huadong Chen. "Optimization Algorithm for Delay Estimation Based on Singular Value Decomposition and Improved GCC-PHAT Weighting." Sensors 22, no. 19 (September 24, 2022): 7254. http://dx.doi.org/10.3390/s22197254.

Full text
Abstract:
The accuracy of time delay estimation seriously affects the accuracy of sound source localization. In order to improve the accuracy of time delay estimation under the condition of low SNR, a delay estimation optimization algorithm based on singular value decomposition and improved GCC-PHAT weighting (GCC-PHAT-ργ weighting) is proposed. Firstly, the acoustic signal collected by the acoustic sensor array is subjected to singular value decomposition and noise reduction processing to improve the signal-to-noise ratio of the signal; then, the cross-correlation operation is performed, and the cross-correlation function is processed by the GCC-PHAT-ργ weighting method to obtain the cross-power spectrum; finally, the inverse transformation is performed to obtain the generalized correlation time domain function, and the peak detection is performed to obtain the delay difference. The experiment was carried out in a large outdoor pool, and the experimental data were processed to compare the time delay estimation performance of three methods: GCC-PHAT weighting, SVD-GCC-PHAT weighting (meaning: GCC-PHAT weighting based on singular value decomposition) and SVD-GCC-PHAT-ργ weighting (meaning: GCC-PHAT-ργ weighting based on singular value decomposition). The results show that the delay estimation optimization algorithm based on SVD-GCC-PHAT-ργ improves the delay estimation accuracy by at least 37.95% compared with the other two methods. The new optimization algorithm has good delay estimation performance.
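
A minimal sketch of the classical GCC-PHAT baseline that the paper builds on (the SVD denoising step and the proposed ρ/γ modification are not reproduced here; the signal lengths and test signal are illustrative):

```python
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    """Classical GCC-PHAT delay estimator."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    G = X1 * np.conj(X2)
    G_phat = G / (np.abs(G) + 1e-15)            # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(G_phat, n=n)              # generalized cross-correlation in the time domain
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift    # peak detection gives the lag in samples
    return shift / fs

fs = 48_000
rng = np.random.default_rng(0)
x1 = rng.standard_normal(4800)
x2 = np.roll(x1, 24)                             # x2 lags x1 by 24 samples (0.5 ms)
print(abs(gcc_phat(x1, x2, fs)))                 # ~0.0005 s; the sign depends on the reference convention
```
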
APA, Harvard, Vancouver, ISO, and other styles
30

Atkinson, MaryAnne, and Scott Jones. "Decision Makers' Ability To Identify Unusual Costs And Implications For Alternative Estimation Procedures." Journal of Applied Business Research (JABR) 7, no. 4 (October 18, 2011): 36. http://dx.doi.org/10.19030/jabr.v7i4.6201.

Full text
Abstract:
This paper reports the results of an experiment in which individuals visually fitted a cost function to data. The inclusion or omission of unusual data points within the data set was experimentally manipulated. The results indicate that individuals omit outliers from their visual fits, but do not omit influential points. The evidence also suggests that the weighting rule used by individuals is more robust than the weighting rule used in the ordinary least squares criterion.
APA, Harvard, Vancouver, ISO, and other styles
31

Qu, Ming Cheng, Xiang Hu Wu, Ming Hong Liao, and Xiao Zong Yang. "A Novel Resource Selection Model for Data Grid Based on QoS." Advanced Materials Research 186 (January 2011): 203–9. http://dx.doi.org/10.4028/www.scientific.net/amr.186.203.

Full text
Abstract:
Parallel data transmission based on multiple copies is a key strategy for increasing transmission speed and ensuring the QoS of a data grid. Currently, how to select replica nodes so as to achieve multiple optimization objectives is an important problem to be solved. To satisfy the transmission time constraint of a service request, a model is proposed. Both the network load and the service's tolerance are taken into account by putting forward a tolerance function for the service request, taking the network load and tolerance functions as optimization objectives at the same time, and introducing a weighting factor to balance the two functions. In experiments, various parameters of the model were examined, and good decision-making results were achieved. This shows that the model is correct and effective.
APA, Harvard, Vancouver, ISO, and other styles
32

Mardalena, Selvi, Purhadi Purhadi, Jerry Dwi Trijoyo Purnomo, and Dedy Dwi Prastyo. "The Geographically Weighted Multivariate Poisson Inverse Gaussian Regression Model and Its Applications." Applied Sciences 12, no. 9 (April 21, 2022): 4199. http://dx.doi.org/10.3390/app12094199.

Full text
Abstract:
This study aims to develop a method for multivariate spatially overdispersed count data with a mixed Poisson distribution, namely the Geographically Weighted Multivariate Poisson Inverse Gaussian Regression (GWMPIGR) model. The parameters of the GWMPIGR model are estimated locally using the maximum likelihood estimation (MLE) method while considering spatial effects. Therefore, the significance of the regression parameters differs for each location. In this study, four GWMPIGR models are evaluated based on the exposure variable and the spatial weighting function. We compare the performance of those four models in a real-world application using data on the number of infant, under-5 and maternal deaths in East Java in 2019 with five predictor variables. The GWMPIGR models use either one exposure variable or three exposure variables. Compared to the fixed kernel Gaussian weighting function, the GWMPIGR model with the fixed kernel bisquare weighting function and one exposure variable has a better fit based on the AICc value. Furthermore, according to the best GWMPIGR model, several regional groups are formed based on the predictors that significantly affected each event in East Java in 2019.
APA, Harvard, Vancouver, ISO, and other styles
33

Ebrahimi, Saleh, Amin Roshandel Kahoo, Yangkang Chen, and Milton Porsani. "A high-resolution weighted AB semblance for dealing with amplitude-variation-with-offset phenomenon." GEOPHYSICS 82, no. 2 (March 1, 2017): V85–V93. http://dx.doi.org/10.1190/geo2016-0047.1.

Full text
Abstract:
Velocity analysis is an essential step in seismic reflection data processing. The conventional and fastest method to estimate how velocity changes with increasing depth is to calculate semblance coefficients. Traditional semblance has two problems: low time and velocity resolution, and an inability to handle the amplitude-variation-with-offset (AVO) phenomenon. Although a method known as AB semblance can arrive at peak velocities in areas with an AVO anomaly, it has a lower velocity resolution than conventional semblance. We have developed a weighted AB semblance method that can handle both problems simultaneously. We have developed two new weighting functions to weight the AB semblance and enhance the resolution of velocity spectra in the time and velocity directions. In this way, we increase the time and velocity resolution while eliminating the AVO problem. The first weighting function is defined based on the ratio between the first and second singular values of the time window to improve the resolution of velocity spectra in the velocity direction. The second weighting function is based on the position of the seismic wavelet in the time window, thus enhancing the resolution of velocity spectra in the time direction. We use synthetic and field data examples to show the superior performance of our approach over the traditional one.
APA, Harvard, Vancouver, ISO, and other styles
34

Kang, Seokho. "k-Nearest Neighbor Learning with Graph Neural Networks." Mathematics 9, no. 8 (April 10, 2021): 830. http://dx.doi.org/10.3390/math9080830.

Full text
Abstract:
k-nearest neighbor (kNN) is a widely used learning algorithm for supervised learning tasks. In practice, the main challenge when using kNN is its high sensitivity to its hyperparameter setting, including the number of nearest neighbors k, the distance function, and the weighting function. To improve the robustness to hyperparameters, this study presents a novel kNN learning method based on a graph neural network, named kNNGNN. Given training data, the method learns a task-specific kNN rule in an end-to-end fashion by means of a graph neural network that takes the kNN graph of an instance to predict the label of the instance. The distance and weighting functions are implicitly embedded within the graph neural network. For a query instance, the prediction is obtained by performing a kNN search from the training data to create a kNN graph and passing it through the graph neural network. The effectiveness of the proposed method is demonstrated using various benchmark datasets for classification and regression tasks.
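
As a point of reference, the conventional distance-weighted kNN rule that kNNGNN generalizes can be sketched as follows; Euclidean distance and inverse-distance weights are just one common choice, and the graph neural network itself is not sketched here.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5, eps=1e-12):
    """Plain distance-weighted kNN regression baseline."""
    d = np.linalg.norm(X_train - x_query, axis=1)        # distance function: Euclidean
    idx = np.argsort(d)[:k]                              # the k nearest neighbours
    w = 1.0 / (d[idx] + eps)                             # weighting function: inverse distance
    w /= w.sum()
    return float(np.dot(w, y_train[idx]))                # weighted average of neighbour targets

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(100)
print(knn_predict(X, y, x_query=np.zeros(3), k=5))
```
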
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Yaoguo, and Douglas W. Oldenburg. "3-D inversion of magnetic data." GEOPHYSICS 61, no. 2 (March 1996): 394–408. http://dx.doi.org/10.1190/1.1443968.

Full text
Abstract:
We present a method for inverting surface magnetic data to recover 3-D susceptibility models. To allow the maximum flexibility for the model to represent geologically realistic structures, we discretize the 3-D model region into a set of rectangular cells, each having a constant susceptibility. The number of cells is generally far greater than the number of the data available, and thus we solve an underdetermined problem. Solutions are obtained by minimizing a global objective function composed of the model objective function and data misfit. The algorithm can incorporate a priori information into the model objective function by using one or more appropriate weighting functions. The model for inversion can be either susceptibility or its logarithm. If susceptibility is chosen, a positivity constraint is imposed to reduce the nonuniqueness and to maintain physical realizability. Our algorithm assumes that there is no remanent magnetization and that the magnetic data are produced by induced magnetization only. All minimizations are carried out with a subspace approach where only a small number of search vectors is used at each iteration. This obviates the need to solve a large system of equations directly, and hence earth models with many cells can be solved on a deskside workstation. The algorithm is tested on synthetic examples and on a field data set.
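
For context, the depth weighting commonly quoted for this family of inversions takes the form below; this is the general form from the potential-field inversion literature, stated here as an assumption rather than re-derived from the abstract.

\[
w(z) \;=\; \frac{1}{(z + z_0)^{\beta/2}},
\]

with the exponent β typically taken near 3 for magnetic data (to counteract the roughly 1/r³ decay of the dipole kernel) and near 2 for gravity data, and z_0 a small offset that depends on cell size and observation height. The weighting enters the model objective function, which is minimized jointly with the data misfit.
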
APA, Harvard, Vancouver, ISO, and other styles
36

Dučinskas, Kęstutis, Marta Karaliutė, and Laura Šaltytė-Vaisiauskė. "Spatially Weighted Bayesian Classification of Spatio-Temporal Areal Data Based on Gaussian-Hidden Markov Models." Mathematics 11, no. 2 (January 9, 2023): 347. http://dx.doi.org/10.3390/math11020347.

Full text
Abstract:
This article is concerned with an original approach to generative classification of spatiotemporal areal (or lattice) data based on implementing spatial weighting in Hidden Markov Models (HMMs). In the framework of this approach, the data model at each areal unit is specified by conditionally independent Gaussian observations and a first-order Markov chain for the labels, and is called a local HMM. The proposed classification is based on a modification of the conventional HMM through the implementation of spatially weighted estimators of local HMM parameters. We focus on classification rules based on the Bayes discriminant function (BDF) with plugged-in spatially weighted parameter estimators obtained from the labeled training sample. For each local HMM, the estimators of regression coefficients and variances and two types of transition probabilities are used at two levels (higher and lower) of spatial weighting. The average accuracy rate (ACC) and balanced accuracy rate (BAC), computed from confusion matrices evaluated from a test sample, are used as performance measures of the classifiers. The proposed methodology is illustrated for simulated data and for a real dataset, i.e., annual death rate data collected by the Institute of Hygiene of the Republic of Lithuania from 60 municipalities in the period from 2001 to 2019. A critical comparison of the proposed classifiers is made. The experimental results showed that classifiers based on the HMM with the higher level of spatial weighting in most cases have an advantage in spatial-temporal consistency and classification accuracy over those with the lower level of spatial weighting.
APA, Harvard, Vancouver, ISO, and other styles
37

Amundsen, Lasse, Tage Røsten, Johan O. A. Robertsson, and Ed Kragh. "Rough-sea deghosting of streamer seismic data using pressure gradient approximations." GEOPHYSICS 70, no. 1 (January 2005): V1–V9. http://dx.doi.org/10.1190/1.1852892.

Full text
Abstract:
A new method is presented for seismic deghosting of towed streamer data acquired in rough seas. The deghosting scheme combines pressure recordings along one or several cables with an estimate of the vertical pressure gradient (or the vertical component of the particle velocity). The estimation of the vertical pressure gradient requires continuous elevation measurements of the wave height directly above the receivers. The vertical pressure gradient estimate is obtained by spatially weighting the pressure field. Each spatial weight generally is the product of two weight functions. The first is a function of partial derivatives acting solely along the horizontal Cartesian coordinates. It can be implemented by finite-difference or Fourier derivative operations. The second is a function of the vertical Cartesian coordinate and accounts for the varying sea state. This weight can be changed from one receiver to the next, making the deghosting a local process. Integrated with the measured pressure field, the estimate of the vertical pressure gradient also enables other seismic processing opportunities beyond deghosting.
APA, Harvard, Vancouver, ISO, and other styles
38

Aliu, Muftih Alwi, Fahrezal Zubedi, Lailany Yahya, and Franky Alfrits Oroh. "The Comparison of Kernel Weighting Functions in Geographically Weighted Logistic Regression in Modeling Poverty in Indonesia." Jurnal Matematika, Statistika dan Komputasi 18, no. 3 (May 15, 2022): 362–84. http://dx.doi.org/10.20956/j.v18i3.19567.

Full text
Abstract:
Indonesia is a developing country that is facing poverty. The percentage of the poor population in Indonesia in 2020 increased by 0.97 percent from 2019. A suitable analysis for addressing poverty in Indonesia is one that accounts for regional effects, namely Geographically Weighted Logistic Regression (GWLR). This study aimed to compare the Fixed Gaussian Kernel, Fixed Tricube Kernel, and Fixed Bisquare Kernel weighting functions in the GWLR model when modeling poverty in Indonesia in 2020. The best model can determine the significant factors that affected poverty in Indonesia in 2020. This study used data on the percentage of the poor population and the factors affecting it, namely the Open Unemployment Rate, Human Development Index, and Total Population in 34 provinces in Indonesia. This study indicates that the GWLR model with the Fixed Gaussian Kernel weighting function is the best for modeling poverty in Indonesia in 2020 based on the smallest Akaike Information Criterion Corrected (AICc) value. The GWLR model with the Fixed Gaussian Kernel weighting function shows the Open Unemployment Rate as a significant factor affecting poverty in Indonesia in 2020 in 10 provinces, namely Aceh, North Sumatra, West Sumatra, Riau, Jambi, South Sumatra, Bengkulu, Lampung, DKI Jakarta, and Banten.
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Dong, and Ruola Ning. "FDK Half-Scan with a Heuristic Weighting Scheme on a Flat Panel Detector-Based Cone Beam CT (FDKHSCW)." International Journal of Biomedical Imaging 2006 (2006): 1–8. http://dx.doi.org/10.1155/ijbi/2006/83983.

Full text
Abstract:
A cone beam circular half-scan scheme is becoming an attractive imaging method in cone beam CT since it improves the temporal resolution. Traditionally, the redundant data in the circular half-scan range are weighted by a weighting function that depends on the central scanning plane; the FDK algorithm is then applied to the weighted projection data for reconstruction. However, this scheme still suffers from the attenuation coefficient drop inherent in FDK when the cone angle becomes large. A new heuristic cone beam geometry-dependent weighting scheme is proposed based on the idea that there is less redundancy for the projection data away from the central scanning plane. The performance of the FDKHSCW scheme is evaluated by comparing it to the FDK full-scan (FDKFS) scheme and the traditional FDK half-scan scheme with Parker's fan beam weighting function (FDKHSFW). Computer simulation is employed and conducted on a 3D Shepp-Logan phantom. The result illustrates that FDKHSCW corrects the attenuation coefficient drop in the off-scanning plane associated with FDKFS and FDKHSFW while maintaining the same spatial resolution.
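
For reference, the Parker fan-beam weights referred to above (used in FDKHSFW) are usually written as follows; this is quoted from the general fan-beam CT literature, and the paper's heuristic cone-angle-dependent modification is not reproduced here.

\[
w_P(\beta,\gamma) \;=\;
\begin{cases}
\sin^2\!\left(\dfrac{\pi}{4}\,\dfrac{\beta}{\gamma_m-\gamma}\right), & 0 \le \beta < 2(\gamma_m-\gamma),\\[6pt]
1, & 2(\gamma_m-\gamma) \le \beta < \pi-2\gamma,\\[6pt]
\sin^2\!\left(\dfrac{\pi}{4}\,\dfrac{\pi+2\gamma_m-\beta}{\gamma_m+\gamma}\right), & \pi-2\gamma \le \beta \le \pi+2\gamma_m,
\end{cases}
\]

where β is the source (projection) angle over the half-scan range, γ is the fan angle of the ray, and γ_m is the half fan angle of the detector.
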
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Yikai, Yong Peng, Hongyu Bian, Yuan Ge, Feiwei Qin, and Wanzeng Kong. "Auto-weighted concept factorization for joint feature map and data representation learning." Journal of Intelligent & Fuzzy Systems 41, no. 1 (August 11, 2021): 69–81. http://dx.doi.org/10.3233/jifs-200298.

Full text
Abstract:
Concept factorization (CF) is an effective matrix factorization model which has been widely used in many applications. In CF, a linear combination of data points serves as the dictionary, based on which CF can be performed both in the original feature space and in the reproducing kernel Hilbert space (RKHS). Conventional CF treats each dimension of the feature vector equally during data reconstruction, which runs counter to the common sense that different features have different discriminative abilities and therefore contribute differently to pattern recognition. In this paper, we introduce an auto-weighting variable into the conventional CF objective function to adaptively learn the contributions of different features and propose a new model termed Auto-Weighted Concept Factorization (AWCF). In AWCF, on one hand, feature importance can be quantitatively measured by the auto-weighting variable, with features of better discriminative ability assigned larger weights; on the other hand, we obtain a more efficient data representation of the semantic information. The detailed optimization procedure for the AWCF objective function is derived, and its complexity and convergence are analyzed. Experiments are conducted on both synthetic and representative benchmark data sets, and the clustering results demonstrate the effectiveness of AWCF in comparison with related models.
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Yaoguo, and Douglas W. Oldenburg. "3-D inversion of gravity data." GEOPHYSICS 63, no. 1 (January 1998): 109–19. http://dx.doi.org/10.1190/1.1444302.

Full text
Abstract:
We present two methods for inverting surface gravity data to recover a 3-D distribution of density contrast. In the first method, we transform the gravity data into pseudomagnetic data via Poisson’s relation and carry out the inversion using a 3-D magnetic inversion algorithm. In the second, we invert the gravity data directly to recover a minimum structure model. In both approaches, the earth is modeled by using a large number of rectangular cells of constant density, and the final density distribution is obtained by minimizing a model objective function subject to fitting the observed data. The model objective function has the flexibility to incorporate prior information and thus the constructed model not only fits the data but also agrees with additional geophysical and geological constraints. We apply a depth weighting in the objective function to counteract the natural decay of the kernels so that the inversion yields depth information. Applications of the algorithms to synthetic and field data produce density models representative of true structures. Our results have shown that the inversion of gravity data with a properly designed objective function can yield geologically meaningful information.
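The depth weighting referred to in this abstract is commonly written as a power of depth chosen to mimic the decay of the gravity kernel. A minimal sketch under that assumption is shown below; the exponent and reference depth are typical textbook choices, not values quoted from the paper.

```python
import numpy as np

def depth_weight(z, z0=1.0, beta=2.0):
    """Depth weighting w(z) = (z + z0)^(-beta/2).
    With beta around 2, the weight counteracts the roughly 1/z^2 decay of the
    gravity kernel, so deep cells are not automatically pushed to zero density
    by the minimum-structure objective function."""
    return (z + z0) ** (-beta / 2.0)

# toy usage: relative weights for cell depths of 10, 100 and 500 m
z = np.array([10.0, 100.0, 500.0])
w = depth_weight(z, z0=5.0)
print(w / w.max())
```

In a minimum-structure inversion of this kind, each cell's density contrast is multiplied by such a weight inside the model objective function before the norm is minimized subject to fitting the observed data.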
APA, Harvard, Vancouver, ISO, and other styles
42

Zahariah, Siti, Habshah Midi, and Mohd Shafie Mustafa. "An Improvised SIMPLS Estimator Based on MRCD-PCA Weighting Function and Its Application to Real Data." Symmetry 13, no. 11 (November 19, 2021): 2211. http://dx.doi.org/10.3390/sym13112211.

Full text
Abstract:
Multicollinearity often occurs when two or more predictor variables are correlated, especially for high-dimensional data (HDD) where p>>n. The statistically inspired modification of the partial least squares (SIMPLS) is a very popular technique for solving a partial least squares regression problem due to its efficiency, speed, and ease of understanding. The execution of SIMPLS is based on the empirical covariance matrix of the explanatory and response variables. Nevertheless, SIMPLS is easily affected by outliers. To rectify this problem, a robust iteratively reweighted SIMPLS (RWSIMPLS) was introduced. Nonetheless, it is still not very efficient, as the RWSIMPLS algorithm is based on a weighting function that does not specify any method for identifying high leverage points (HLPs), i.e., outlying observations in the X-direction. HLPs have the most detrimental effect on the computed values of various estimates, which results in misleading conclusions about the fitted regression model. Hence, their effects need to be reduced by assigning them smaller weights. As a solution to this problem, we propose an improvised SIMPLS based on a new weight function obtained from the MRCD-PCA diagnostic method for identifying HLPs in HDD and name this method MRCD-PCA-RWSIMPLS. A new MRCD-PCA-RWSIMPLS diagnostic plot is also established for classifying observations into four groups: regular observations, vertical outliers, and good and bad leverage points. Numerical examples and Monte Carlo simulations signify that MRCD-PCA-RWSIMPLS offers substantial improvements over SIMPLS and RWSIMPLS. The proposed diagnostic plot is able to classify observations into the correct groups. On the contrary, the SIMPLS and RWSIMPLS plots fail to classify observations into the correct groups and show masking and swamping effects.
APA, Harvard, Vancouver, ISO, and other styles
43

Chan, Keith K. H., Frieder Langenbucher, and Milo Gibaldi. "Evaluation of In Vivo Drug Release by Numerical Deconvolution Using Oral Solution Data as Weighting Function." Journal of Pharmaceutical Sciences 76, no. 6 (June 1987): 446–50. http://dx.doi.org/10.1002/jps.2600760607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Muhammad Jauhar Vikri and Roihatur Rohmah. "Penerapan Fungsi Exponential Pada Pembobotan Fungsi Jarak Euclidean Algoritma K-Nearest Neighbor." Generation Journal 6, no. 2 (September 1, 2022): 57–64. http://dx.doi.org/10.29407/gj.v6i2.18070.

Full text
Abstract:
k-Nearest Neighbor (k-NN) is a popular classification algorithm that is widely used to solve classification problems because it is simple, easy to explain, and easy to implement. However, k-NN has a weakness: its classification results are strongly influenced by the scale of the input data and by the Euclidean distance, which treats all attributes equally rather than according to their relevance, and this degrades classification accuracy. One way to improve the classification accuracy of the k-NN algorithm is to weight its features when measuring the Euclidean distance. Here, an exponential function is applied to weight the features in the Euclidean distance measurement of the k-NN algorithm. The performance of k-NN with exponential feature weighting is evaluated experimentally using standard data mining methodology and compared with the original k-NN and with previously published weighted k-NN methods. The nearest-neighbor decision is made with k=5. After the experiments, the proposed algorithm was compared with the k-NN, Wk-NN, and DWk-NN algorithms. Overall, the comparison yielded average accuracies of 85.87% for k-NN, 86.98% for Wk-NN, 88.19% for DWk-NN, and 90.17% for k-NN with exponential feature weighting.
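The abstract does not spell out the exact exponential weighting formula, so the sketch below only illustrates the general idea of a feature-weighted Euclidean distance inside k-NN, with weights produced by an exponential function of per-feature relevance scores. The relevance scores and the decay constant are hypothetical; only k=5 is taken from the abstract.

```python
import numpy as np
from collections import Counter

def exponential_feature_weights(relevance, alpha=1.0):
    """Hypothetical weighting: exponentially emphasize more relevant features."""
    return np.exp(alpha * np.asarray(relevance, dtype=float))

def weighted_euclidean(a, b, w):
    """Euclidean distance with a per-feature weight on each squared difference."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

def knn_predict(X_train, y_train, x_query, w, k=5):
    """Plain majority vote over the k nearest neighbors under the weighted distance."""
    d = np.array([weighted_euclidean(x, x_query, w) for x in X_train])
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy usage: 2 features, the first assumed twice as relevant as the second
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = (X[:, 0] > 0).astype(int)
w = exponential_feature_weights([2.0, 1.0])
print(knn_predict(X, y, np.array([0.3, -0.1]), w, k=5))
```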
APA, Harvard, Vancouver, ISO, and other styles
45

Rius, Jordi. "XLENS, a direct methods program based on the modulus sum function: Its application to powder data." Powder Diffraction 14, no. 4 (December 1999): 267–73. http://dx.doi.org/10.1017/s0885715600010654.

Full text
Abstract:
XLENS is a traditional direct methods program working exclusively in reciprocal space. The distinctive feature of XLENS is the use of the modulus sum function as the target function for phase refinement. Due to its efficiency, robustness, and lack of need for weighting schemes, this function is especially well suited for treating powder diffraction data. The mathematical basis as well as the significance of the most important control parameters of the program are described here. To illustrate how XLENS works, three different examples are shown. Due to its simplicity, the modulus sum function can easily be combined with real-space filtering procedures to produce even more efficient crystal-structure-solving strategies.
APA, Harvard, Vancouver, ISO, and other styles
46

Altunkaynak, A. "Estimation of streamflow by slope Regional Dependency Function." Hydrology and Earth System Sciences Discussions 5, no. 2 (April 11, 2008): 1003–20. http://dx.doi.org/10.5194/hessd-5-1003-2008.

Full text
Abstract:
Abstract. Kriging is one of the most developed methodologies in regional variable modeling. However, one of its drawbacks is that the influence radius cannot be determined by this method. At what distance, and to what degree, a pivot station is influenced by adjacent sites is a problem often encountered in practical applications. Regional weighting functions obtained from available data consist of several broken lines. Each line has a different slope, which represents the similarity and the contribution of adjacent stations as a weighting coefficient. The approach in this study is called the Slope Regional Dependency Function (SRDF). The main idea of this approach is to express the variability in value differences [γ(d)] and distances together. The originally proposed SRDF and the Trigonometric Point Cumulative Semi-Variogram (TPCSV) methods are used to predict streamflow. The TPCSV and Point Cumulative Semi-Variogram (PCSV) approaches are also compared with each other. The prediction performance of all three methods stays below 10% relative error, which is acceptable for engineering applications. It is shown that SRDF outperforms PCSV and TPCSV by a wide margin. It can be used for missing-data completion, determination of measurement site locations, calculation of the influence radius, and determination of regional variable potential. The proposed method is applied to 38 streamflow measurement sites located in the Mississippi River basin.
APA, Harvard, Vancouver, ISO, and other styles
47

Altunkaynak, A. "Estimation of streamflow by slope regional dependency function." Hydrology and Earth System Sciences 12, no. 4 (August 25, 2008): 1121–27. http://dx.doi.org/10.5194/hess-12-1121-2008.

Full text
Abstract:
Abstract. Kriging is one of the most developed methodologies in regional variable modeling. However, one of its drawbacks is that the influence radius cannot be determined by this method. At what distance, and to what degree, a pivot station is influenced by adjacent sites is a problem often encountered in practical applications. Regional weighting functions obtained from available data consist of several broken lines. Each line has a different slope, which represents the similarity and the contribution of adjacent stations as a weighting coefficient. The approach in this study is called the Slope Regional Dependency Function (SRDF). The main idea of this approach is to express the variability in value differences γ and distances together. The originally proposed SRDF and the Trigonometric Point Cumulative Semi-Variogram (TPCSV) methods are used to predict streamflow. The TPCSV and Point Cumulative Semi-Variogram (PCSV) approaches are also compared with each other. The prediction performance of all three methods shows a relative error of less than 10%, which is acceptable for most engineering applications. It is shown that SRDF outperforms PCSV and TPCSV by a wide margin. It can be used for missing-data completion, determination of measurement site locations, calculation of the influence radius, and determination of regional variable potential. The proposed method is applied to 38 streamflow measurement sites located in the Mississippi River basin.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Wenting, Yulin He, Liheng Ma, and Joshua Zhexue Huang. "Latent Feature Group Learning for High-Dimensional Data Clustering." Information 10, no. 6 (June 10, 2019): 208. http://dx.doi.org/10.3390/info10060208.

Full text
Abstract:
In this paper, we propose a latent feature group learning (LFGL) algorithm to discover feature grouping structures and subspace clusters for high-dimensional data. The feature grouping structures, which are learned analytically, can enhance the accuracy and efficiency of high-dimensional data clustering. In the LFGL algorithm, a Darwinian evolutionary process is used to explore the optimal feature grouping structures, which are coded as chromosomes in a genetic algorithm. The feature grouping weighting k-means algorithm is used as the fitness function to evaluate the chromosomes, or feature grouping structures, in each generation of the evolution. To better handle the diverse densities of clusters in high-dimensional data, the original feature grouping weighting k-means is revised to use a mass-based dissimilarity measure rather than the Euclidean distance, and the feature weights are optimized as a nonnegative matrix factorization problem under an orthogonality constraint on the feature weight matrix. The genetic operations of mutation and crossover generate the new chromosomes for the next generation. In comparison with well-known clustering algorithms, the LFGL algorithm produced encouraging experimental results on real-world datasets, demonstrating its better performance when clustering high-dimensional data.
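The feature grouping weighting k-means used as the fitness function assigns weights at both the group level and the individual feature level. A minimal, purely illustrative sketch of such a group- and feature-weighted distance is shown below; it ignores the mass-based dissimilarity and the NMF optimization described in the abstract, and all weights are assumed given rather than learned.

```python
import numpy as np

def group_weighted_distance(x, center, groups, group_w, feature_w):
    """Distance in which each feature's squared difference is scaled by its own
    weight and by the weight of the group it belongs to.
    groups    : array giving the group index of every feature
    group_w   : one weight per feature group
    feature_w : one weight per feature
    (LFGL learns both sets of weights; here they are fixed inputs.)"""
    diff2 = (x - center) ** 2
    return float(np.sum(group_w[groups] * feature_w * diff2))

# toy usage: 4 features split into 2 groups
groups = np.array([0, 0, 1, 1])
gw = np.array([0.7, 0.3])
fw = np.array([0.4, 0.6, 0.5, 0.5])
print(group_weighted_distance(np.array([1.0, 2.0, 0.0, 1.0]),
                              np.array([0.5, 1.5, 0.5, 0.5]),
                              groups, gw, fw))
```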
APA, Harvard, Vancouver, ISO, and other styles
49

Silva, Anderson Rodrigo da, Ana Paula Alencastro Silva, and Lauro Joaquim Tiago Neto. "A new local stochastic method for predicting data with spatial heterogeneity." Acta Scientiarum. Agronomy 43 (November 5, 2020): e49947. http://dx.doi.org/10.4025/actasciagron.v43i1.49947.

Full text
Abstract:
Spatial data (e.g., phytopathogenic data) do not always meet assumptions such as stationarity, isotropy, and Gaussian distribution, thereby requiring complex spatial methods and models. Some deterministic, assumption-free methods such as inverse distance weighting can also be applied to predict spatial data, but their output is limited to graphical solutions (mapping). We adapted a computer-based prediction method called the Circular Variable Radius Moving Window (CVRMW), which is based on two others: moving window kriging (MWK) and inverse squared-distance weighting (ISDW). The algorithm is developed to meet an objective function that minimizes the index of variation of the spatial observations inside the moving window. Code in the R language is presented and thoroughly described. The outputs include the range of the spatial dependence, as the radius calculated at every target location, and the standard error of the predicted values, mapped to provide a useful tool for spatial exploratory analysis. The method does not make any assumptions about the spatial process, and it is an alternative for dealing with spatial heterogeneity.
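The inverse squared-distance weighting component that the method builds on has a simple closed form. The Python sketch below shows it in isolation, outside any moving-window or variable-radius logic (which the paper's R code implements); the sample coordinates and values are arbitrary.

```python
import numpy as np

def idw_predict(x0, coords, values, power=2.0):
    """Inverse distance weighting prediction at location x0.
    power = 2 gives the inverse squared-distance weights used by ISDW."""
    d = np.linalg.norm(coords - x0, axis=1)
    if np.any(d == 0):                      # exact hit: return the observed value
        return float(values[d == 0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# toy usage: predict at the origin from three observations
coords = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
values = np.array([10.0, 20.0, 5.0])
print(idw_predict(np.array([0.0, 0.0]), coords, values))
```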
APA, Harvard, Vancouver, ISO, and other styles
50

Nurcahyani, Nita, Budi Pratikno, and Supriyanto Supriyanto. "FUNGSI PEMBOBOT TUKEY BISQUARE DAN WELSCH PADA REGRESI ROBUST ESTIMASI-S." Jurnal Ilmiah Matematika dan Pendidikan Matematika 14, no. 2 (December 28, 2022): 133. http://dx.doi.org/10.20884/1.jmp.2022.14.2.6668.

Full text
Abstract:
This research studied robust regression S-estimation with the Tukey Bisquare and Welsch weighting functions. Because the data contain outliers, robust regression was used to handle them. The purpose is to determine which of the two, Tukey Bisquare or Welsch, gives the better model; the mean square error (MSE) and the adjusted R² are used as criteria. The data used for the application are the 2021 human development index (HDI) for Papua. The results showed that the robust S-estimation regression models with the Tukey Bisquare and Welsch weights are similar (close). However, the adjusted R² of the Welsch weighting function is greater than that of the Tukey Bisquare, and the MSE of the Welsch is smaller than that of the Tukey Bisquare, so the Welsch weighting is recommended as slightly better for robust S-estimation.
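The two weighting functions compared in this abstract have standard forms; a minimal sketch is given below. The tuning constants are the usual defaults quoted for M-estimation efficiency, not values reported in the paper.

```python
import numpy as np

def tukey_bisquare_weight(u, c=4.685):
    """Tukey bisquare weight: residuals beyond c (standardized) get zero weight."""
    r = np.abs(u) / c
    return np.where(r <= 1.0, (1.0 - r ** 2) ** 2, 0.0)

def welsch_weight(u, c=2.985):
    """Welsch weight: smooth exponential down-weighting, never exactly zero."""
    return np.exp(-(u / c) ** 2)

# toy usage: weights for standardized residuals of 0, 2 and 5
u = np.array([0.0, 2.0, 5.0])
print(tukey_bisquare_weight(u))
print(welsch_weight(u))
```

The practical difference is visible in the toy output: the bisquare weight cuts extreme residuals to exactly zero, while the Welsch weight only shrinks them, which is one reason the two fits can be close yet not identical.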
APA, Harvard, Vancouver, ISO, and other styles