
Journal articles on the topic 'Weighted Differences Method'



Consult the top 50 journal articles for your research on the topic 'Weighted Differences Method.'




1

Løseth, Lars O., Lasse Amundsen, and Arne J. Jenssen. "A solution to the airwave-removal problem in shallow-water marine EM." GEOPHYSICS 75, no. 5 (September 2010): A37–A42. http://dx.doi.org/10.1190/1.3475359.

Abstract:
A new method is presented for removing the airwave in shallow-water electromagnetic (EM) data. By using pairs of receivers or pairs of source points along a towline and forming weighted differences of an EM field component, the airwave is significantly attenuated. The weights are related to the geometric spreading of the airwave. Specifically, the horizontal electric and magnetic field components are weighted with the horizontal offset to the power of three before their differences are calculated. One can also first calculate difference data without weights in the common-source domain to remove correlated noise and then use the resulting data to perform a second-order difference with weights. In this case, the weights should equal the horizontal offset to the power of four. Moreover, the weighted difference data can be integrated to obtain an estimate of the physical field void of the airwave. Additionally, the airwave contribution to the physical field can be estimated. The theory behind the method is presented and demonstrated on a real controlled-source EM (CSEM) data set.
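As a concrete illustration of the first-order scheme described in this abstract, the following minimal Python sketch (synthetic toy values, not real CSEM data; the function name and geometry are assumptions) weights each horizontal field sample by the horizontal offset cubed and then differences adjacent receivers along the towline:

    import numpy as np

    def weighted_airwave_difference(offsets, fields, power=3):
        # offsets: horizontal source-receiver offsets along the towline (m)
        # fields:  one horizontal E or B field component at those offsets
        # power:   geometric-spreading exponent (3 for first-order weighted
        #          differences; 4 for the second-order scheme in the abstract)
        weighted = offsets**power * fields
        # Differences between adjacent receiver pairs; the airwave term,
        # whose spreading the weights mirror, largely cancels in each pair.
        return weighted[1:] - weighted[:-1]

    # Toy usage: a spreading-like airwave term plus a slower subsurface term
    offsets = np.linspace(1000.0, 5000.0, 9)
    fields = 1e-6 / offsets**3 + 1e-12 * np.exp(-offsets / 2000.0)
    print(weighted_airwave_difference(offsets, fields))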
2

Setyawan, Agus, Florentina Yuni Arini, and Isa Akhlis. "Comparative Analysis of Simple Additive Weighting Method and Weighted Product Method to New Employee Recruitment Decision Support System (DSS) at PT. Warta Media Nusantara." Scientific Journal of Informatics 4, no. 1 (May 10, 2017): 34–42. http://dx.doi.org/10.15294/sji.v4i1.8458.

Abstract:
The decision-making process for recruiting new employees at PT. Warta Media Nusantara, which uses criterion values from interviews, a field test, a psychological test, and a medical check-up, requires Multi Attribute Decision Making (MADM) as an auxiliary method for deciding which prospective employees should be accepted by the company. Among the various MADM methods, the Simple Additive Weighting (SAW) method and the Weighted Product (WP) method are the most commonly used, so in this study both were applied to the DSS and analyzed for differences in the results obtained and in the execution time required by each method. The study found differences in the candidates' rank order and in the execution time of each method. The differences in rank order are due to the effects of alternative values, criterion weights, and the calculation method. The WP method provides more rigorous results than the SAW method, while the SAW method executes faster because its calculation process is simpler than that of the WP method.
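For readers unfamiliar with the two methods, this minimal Python sketch applies the textbook SAW and WP scoring rules to a hypothetical candidate matrix (the scores and weights below are invented for illustration, not the paper's data):

    import numpy as np

    # Rows: candidates; columns: interview, field test, psychological test,
    # medical check-up (all treated as benefit criteria here)
    X = np.array([[80., 75., 70., 90.],
                  [85., 70., 80., 85.],
                  [78., 88., 72., 80.]])
    w = np.array([0.35, 0.25, 0.25, 0.15])  # criterion weights, sum to 1

    # SAW: normalize each benefit column by its maximum, then weighted sum
    saw_scores = (X / X.max(axis=0)) @ w

    # WP: product of attribute values raised to their weights, normalized
    wp_s = np.prod(X ** w, axis=1)
    wp_scores = wp_s / wp_s.sum()

    print("SAW ranking:", np.argsort(-saw_scores))
    print("WP  ranking:", np.argsort(-wp_scores))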
3

Abolhasani Foroughi, Amin, Roohollah Salahi, Alireza Nikseresht, Hora Heidari, Masoume Nazeri, and Ali Khorsand. "Comparison of diffusion-weighted imaging and enhanced T1-weighted sequencing in patients with multiple sclerosis." Neuroradiology Journal 30, no. 4 (April 28, 2017): 347–51. http://dx.doi.org/10.1177/1971400916678224.

Abstract:
Introduction: The purpose of this study was to assess whether demographic parameters, brain anatomical regions, and contrast enhancement differ between multiple sclerosis (MS) patients with increased-diffusion lesions (ID group) and those with diffusion restriction (DR group). Method: An MRI protocol comprising T1- and T2-weighted sequences with and without gadolinium (Gd), a sagittal three-dimensional FLAIR sequence, DWI, and ADC maps was prospectively performed in 126 MS patients from January to December 2015. The investigation evaluated differences in demographic, cord and brain regional, technical, and positive or negative Gd contrast imaging parameters between the ID and DR groups. Statistical analysis was performed using SPSS. Results: A total of 9.6% of patients showed DR. In the DR group, 66.6% of the patients showed contrast enhancement of plaques, whereas 29.2% of the ID group showed enhancement of plaques. The most prevalent category was non-enhanced plaques in the ID group, followed by Gd-enhanced plaques in the ID group. Significantly more patients were in the ID group (90.4%) than in the DR group (9.6%). Of the 40 patients with Gd-enhanced plaques, 80.5% were from the ID group and 19.5% from the DR group. Conclusion: MRI of the brain, unlike that of the cord, demonstrates a significant difference in Gd enhancement between the two groups (p < 0.05). No significant difference was seen between the ID and DR groups in demographic, cord and brain regional, or technical parameters, EDSS, disease duration, or attack rate (p > 0.05).
4

Pusdiktasari, Zerlita Fahdha, Rahma Fitriani, and Eni Sumarminingsih. "An Improved Weighted Median Algorithm for Spatial Outliers Detection." ComTech: Computer, Mathematics and Engineering Applications 13, no. 2 (November 25, 2022): 111–21. http://dx.doi.org/10.21512/comtech.v13i2.7821.

Abstract:
A spatial outlier is an object that deviates significantly from its surrounding neighbors. The median algorithm is one of the spatial outlier detection methods and is robust, but it assumes that all spatial objects have the same characteristics. The Average Difference Algorithm (AvgDiff), meanwhile, accommodates differences in spatial characteristics but does not use a statistical test to determine whether an object is an outlier. This research developed an improved version of the median algorithm and AvgDiff, called the Weighted Median Algorithm (WMA), which combines the advantages of the two methods. From the median algorithm, WMA adopts the median and statistical-test concepts; from AvgDiff, it adopts the use of differences in objects' spatial characteristics as weights. The two advantages are combined by calculating WMA's neighborhood score as a weighted median. A simulation was conducted to analyze the accuracy of the method. The results confirm that when objects have heterogeneous spatial characteristics, WMA performs better than the median algorithm. The accuracy of WMA is not much higher than that of AvgDiff, but WMA can prevent a serious false-detection problem. The methods were applied to Covid-19 incidence rate data in East Java.
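The weighted-median building block at the core of WMA can be sketched as follows; this is a generic weighted median over hypothetical neighbor values (the statistical test and the paper's exact weighting are omitted):

    import numpy as np

    def weighted_median(values, weights):
        # Sort values and take the first one at which the cumulative
        # weight reaches half of the total weight.
        order = np.argsort(values)
        v = np.asarray(values, float)[order]
        w = np.asarray(weights, float)[order]
        cum = np.cumsum(w)
        return v[np.searchsorted(cum, 0.5 * w.sum())]

    # Neighborhood score of an object: weighted median of its neighbors'
    # attribute values, weighted by differences in spatial characteristics
    neighbor_values = [4.1, 3.8, 9.5, 4.4]
    weights = [1.0, 1.2, 0.3, 1.1]
    print(weighted_median(neighbor_values, weights))  # -> 4.1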
5

Shi, X., L. Lu, S. Yang, G. Huang, and Z. Zhao. "Object-oriented change detection based on weighted polarimetric scattering differences on POLSAR images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W4 (June 26, 2015): 149–54. http://dx.doi.org/10.5194/isprsarchives-xl-7-w4-149-2015.

Abstract:
For wide application of change detection with SAR imagery, current processing technologies and methods are mostly pixel-based, and pixel-based technologies have difficulty exploiting the spatial characteristics of images and the topological relations of objects. Object-oriented technology takes objects as the processing unit, which takes advantage of the shape and texture information of the image and can greatly improve the efficiency and reliability of change detection. Recently, with the development of polarimetric synthetic aperture radar (PolSAR), more backscattering features at different polarization states have become available for object-oriented change detection. In this paper, an object-oriented strategy is employed. Considering that different targets, or different states of the same target, exhibit different backscattering characteristics depending on the polarization state, an object-oriented change detection method based on the weighted polarimetric scattering difference of PolSAR images is proposed. The method operates on objects generated by generalized statistical region merging (GSRM) segmentation. The merit of the GSRM method is that segmentation is executed on the polarimetric coherency matrix, which takes full advantage of polarimetric backscattering features. The measurement of polarimetric scattering difference is then constructed by combining the correlation of the covariance matrices with the difference in scattering power. By analysing the effects of the covariance-matrix correlation and the scattering-power difference on the polarimetric scattering difference, a weighting is used to balance the influence of the two parts, so that more reasonable weights can be chosen to decrease the false-alarm rate. The effectiveness of the proposed algorithm is tested by detecting the growth of crops with two temporal RADARSAT-2 fully polarimetric PolSAR datasets. First, objects are produced by the GSRM algorithm based on the coherency matrix in preprocessing. Then, the corresponding patches are extracted in the two temporal images to measure the differences between objects. To detect changes in patches, a difference map is created by means of the weighted polarimetric scattering difference. Finally, the change detection result is obtained by thresholding. The experiments show that this approach is feasible and effective, and that a reasonable choice of weights can improve detection accuracy significantly.
6

Zare, Mohsen, Maxime Norval, Julie Bodin, and Yves Roquelaure. "Differences between risk situations identified using a self-reported questionnaire and an observational method." Work 68, no. 3 (March 26, 2021): 759–69. http://dx.doi.org/10.3233/wor-203409.

Abstract:
Background: This study investigates work-related musculoskeletal disorder risk estimation by ergonomic methods frequently used in the field. Objective: To identify differences in risk estimation between an in-house observational method and a self-reported questionnaire, and to evaluate the complementary aspects of these methods. Methods: A sample of 15 operators working on assembly workstations was selected from a truck manufacturing plant. The risk assessment of these workstations (28 scenarios) was performed with both the observational method and the self-reported questionnaire. Agreement between the two methods in identifying risk situations was measured with the weighted kappa coefficient. Results: The observational method and the self-reported questionnaire, deployed on the same activity, estimated different risk situations. Conclusion: This analysis does not show that one tool is more powerful than the other, but it does show the likelihood of differing risk estimates. The complementary effect of each method might be considered in further investigations of musculoskeletal risk factors.
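A weighted kappa of the kind used in this study can be computed with scikit-learn; the ordinal risk ratings below are invented for illustration:

    from sklearn.metrics import cohen_kappa_score

    # Risk levels (1 = low, 2 = medium, 3 = high) assigned to the same
    # scenarios by the observational method and the questionnaire (toy data)
    observational = [1, 2, 2, 3, 1, 2, 3, 3, 2, 1, 2, 2, 3, 1]
    questionnaire = [1, 2, 3, 3, 2, 2, 3, 2, 2, 1, 1, 2, 3, 2]

    # Linear weighting: a 1-vs-3 disagreement counts more than a 1-vs-2
    print(cohen_kappa_score(observational, questionnaire, weights="linear"))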
7

Ashpreet and Mantosh Biswas. "Impulse Noise Detection and Removal Method Based on Modified Weighted Median." International Journal of Software Innovation 8, no. 2 (April 2020): 38–53. http://dx.doi.org/10.4018/ijsi.2020040103.

Abstract:
Impulse noise generally occurs because of bit errors during image acquisition and transmission. Median filtering is a well-known impulse noise removal method, and many modified median filters have been proposed in recent decades to improve noise suppression and detail preservation; these, however, have their own deficiencies in identifying and restoring noise pixels. After analyzing the causes of these deficiencies, such as reduced noise-detection and noise-removal accuracy, this article proposes a modified weighted median filter for color images corrupted by salt-and-pepper noise. In this method, a pixel is classified as either a "noise free pixel" or a "noise pixel" in a noise-detection step that checks whether the center pixel of the current filtering window takes an extreme value (0 or 255 for an 8-bit image). Directional differences and the number of "good" pixels in the current filtering window are then used to correct the detected noise pixels. Simulation results on the test images show the proposed method improves on state-of-the-art de-noising methods in terms of PSNR and SSIM, with pictorial comparative analysis.
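A stripped-down sketch of the detect-then-filter idea: pixels at the extreme values are flagged as noise and replaced by the median of the noise-free pixels in their window (the directional differences and weighting of the full method are omitted):

    import numpy as np

    def remove_salt_and_pepper(img, win=1):
        out = img.astype(np.float64)
        noisy = (img == 0) | (img == 255)      # noise-detection step
        for i, j in zip(*np.nonzero(noisy)):
            block = img[max(i - win, 0):i + win + 1,
                        max(j - win, 0):j + win + 1]
            good = block[(block != 0) & (block != 255)]  # "good" pixels
            if good.size:
                out[i, j] = np.median(good)
        return out.astype(img.dtype)

    # Toy usage: corrupt a random image with 10% salt-and-pepper noise
    rng = np.random.default_rng(0)
    clean = rng.integers(50, 200, size=(32, 32), dtype=np.uint8)
    noisy = clean.copy()
    mask = rng.random(clean.shape) < 0.1
    noisy[mask] = rng.choice(np.array([0, 255], dtype=np.uint8), size=mask.sum())
    restored = remove_salt_and_pepper(noisy)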
8

Salimi, Negin, and Jafar Rezaei. "Measuring efficiency of university-industry Ph.D. projects using best worst method." Scientometrics 109, no. 3 (September 17, 2016): 1911–38. http://dx.doi.org/10.1007/s11192-016-2121-0.

Abstract:
A collaborative Ph.D. project, carried out by a doctoral candidate, is a type of collaboration between university and industry. Due to the importance of such projects, researchers have considered different ways to evaluate their success, with a focus on the outputs of these projects. However, what has been neglected is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. A ratio of the weighted sum of outputs over the weighted sum of inputs identifies the efficiency of a Ph.D. project. The weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs were collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights were identified using a new MCDM method called the Best Worst Method (BWM). Because there may be differences in the opinions of Ph.D. candidates and supervisors on weighing the inputs and outputs, data for BWM were collected from both groups. Interestingly, there are differences in the level of efficiency from the two perspectives because of the weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis divulges the most contributing inputs and outputs.
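The efficiency measure itself is a simple ratio of weighted sums; a minimal sketch with hypothetical inputs, outputs, and BWM-style weights (invented numbers, not the study's data):

    import numpy as np

    inputs = np.array([4.0, 2.5, 3.0])         # e.g. funding, supervision, duration
    input_weights = np.array([0.5, 0.3, 0.2])  # BWM-derived weights (assumed)
    outputs = np.array([3.0, 1.0, 2.0])        # e.g. papers, patents, other outputs
    output_weights = np.array([0.6, 0.1, 0.3])

    # Efficiency = weighted sum of outputs / weighted sum of inputs
    efficiency = (output_weights @ outputs) / (input_weights @ inputs)
    print(f"project efficiency: {efficiency:.3f}")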
9

Gipson, Geoffrey T., Kay S. Tatsuoka, Brian C. Sweatman, and Susan C. Connor. "Weighted least-squares deconvolution method for discovery of group differences between complex biofluid 1H NMR spectra." Journal of Magnetic Resonance 183, no. 2 (December 2006): 269–77. http://dx.doi.org/10.1016/j.jmr.2006.09.003.

10

Baumann, Philipp Sebastian, Leila Cammoun, Philippe Conus, Kim Quang Do, Pierre Marquet, Djalel Meskaldji, Reto Meuli, Jean-Philippe Thiran, and Patric Hagmann. "High b-value diffusion-weighted imaging: A sensitive method to reveal white matter differences in schizophrenia." Psychiatry Research: Neuroimaging 201, no. 2 (February 2012): 144–51. http://dx.doi.org/10.1016/j.pscychresns.2011.08.003.

11

Holliday, Charles L., Ryan Martin, and John A. Grant. "Comparing the Efficacy of Kneeling Stress Radiographs and Weighted Gravity Stress Radiographs to Assess Posterior Cruciate Ligament Insufficiency." American Journal of Sports Medicine 49, no. 4 (February 18, 2021): 1017–22. http://dx.doi.org/10.1177/0363546520988114.

Abstract:
Background: Kneeling posterior cruciate ligament (PCL) stress radiographs are commonly used to evaluate PCL laxity. Patients, however, report significant pain, and the method’s reproducibility may be challenged due to its dependence on patient body weight distribution to produce posterior tibial displacement. Weighted gravity stress radiography may offer better reproducibility and comfort than the kneeling technique, but its efficacy has not been studied. Hypothesis: Weighted gravity PCL stress radiographs will be more comfortable and produce similar measurements of side-to-side difference in posterior tibial displacement when compared with the kneeling technique. Study Design: Cohort study (diagnosis); Level of evidence, 3. Methods: A total of 40 patients with nonoperatively or >6 months postoperatively treated PCL injuries (isolated or multiligamentous) underwent bilateral stress radiographs. Weighted gravity and kneeling stress radiographs were acquired, in random order, for each patient, as well as side-to-side difference in posterior tibial displacement between each knee, patient-reported visual analog scale knee pain (100 mm), time to acquire the images, and patient preference for technique. Paired t tests were used to compare the side-to-side difference, pain score, and time to complete the radiographs. Results: There was no difference between the 2 radiographic methods in the mean side-to-side difference (gravity: 6.45 ± 4.61 mm, kneeling: 6.82 ± 4.60 mm; P = .72), time required to acquire radiographs (kneeling: 307.3 ± 140.5 seconds, gravity: 318.7 ± 151.1 seconds; P = .073), or number of radiographs taken to obtain acceptable images (kneeling: 3.6 ± 1.6, gravity: 3.7 ± 1.7; P = .73). Patients reported significantly less knee pain during the weighted gravity views (kneeling: 31.8 ± 26.6, gravity: 4.0 ± 12.0; P < .0001). Of the patients, 88% preferred the weighted gravity method. Conclusion: Weighted gravity stress radiographs produce similar side-to-side differences in posterior tibial translation compared with the kneeling stress technique, but do not rely on patient weightbearing and provide significantly better patient comfort. Clinicians should therefore consider the use of weighted gravity stress radiographs in clinical practice to minimize the pain associated with stress radiography while allowing for accurate decision making.
12

Wright, Karen C., Patricia Harnden, Sue Moss, Dan M. Berney, and Jane Melia. "A practical application of analysing weighted kappa for panels of experts and EQA schemes in pathology." Journal of Clinical Pathology 64, no. 3 (January 7, 2011): 257–60. http://dx.doi.org/10.1136/jcp.2010.086330.

Abstract:
Background: Kappa statistics are frequently used to analyse observer agreement for panels of experts and External Quality Assurance (EQA) schemes and generally treat all disagreements as total disagreement. However, the differences between ordered categories may not be of equal importance (e.g., the difference between grades 1 vs 2 compared with 1 vs 3). Weighted kappa can be used to adjust for this when comparing a small number of readers, but this has not as yet been applied to the large number of readers typical of a national EQA scheme. Aim: To develop and validate a method for applying weighted kappa to a large number of readers within the context of a real dataset: the UK National Urological Pathology EQA Scheme for prostatic biopsies. Methods: Data on Gleason grade recorded by 19 expert readers were extracted from the fixed text responses of 20 cancer cases from four circulations of the EQA scheme. Composite kappa, currently used to compute an unweighted kappa for large numbers of readers, was compared with the mean kappa for all pairwise combinations of readers. Weighted kappa generalised for multiple readers was compared with the newly developed 'pairwise-weighted' kappa. Results: For unweighted analyses, the median increase from composite to pairwise kappa was 0.006 (range −0.005 to +0.052). The difference between the pairwise-weighted kappa and generalised weighted kappa for multiple readers never exceeded ±0.01. Conclusion: Pairwise-weighted kappa is a suitable and highly accurate approximation to weighted kappa for multiple readers.
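As the name suggests, pairwise-weighted kappa averages the weighted kappa over all pairwise combinations of readers; a minimal sketch with toy ordinal grades:

    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    def pairwise_weighted_kappa(readers, weights="linear"):
        # readers: one list of ordinal grades per reader, same cases throughout
        kappas = [cohen_kappa_score(readers[a], readers[b], weights=weights)
                  for a, b in combinations(range(len(readers)), 2)]
        return sum(kappas) / len(kappas)

    # Toy grades from three readers on six cases
    r1 = [3, 4, 4, 5, 3, 4]
    r2 = [3, 4, 5, 5, 3, 3]
    r3 = [4, 4, 4, 5, 3, 4]
    print(pairwise_weighted_kappa([r1, r2, r3]))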
13

Wang, Zhengbin, Anxi Yu, Ben Zhang, Zhen Dong, and Xing Chen. "A Fast Registration Method for Optical and SAR Images Based on SRAWG Feature Description." Remote Sensing 14, no. 19 (October 10, 2022): 5060. http://dx.doi.org/10.3390/rs14195060.

Abstract:
Due to differences between synthetic aperture radar (SAR) and optical imaging modes, there is a considerable degree of nonlinear intensity difference (NID) and geometric difference between the two kinds of images, and SAR images are also affected by strong multiplicative speckle noise. These phenomena make the registration of optical and SAR images a challenging task. With the development of remote sensing technology, optical and SAR images equipped with sensor positioning parameters can be roughly registered according to geographic coordinates in advance. However, owing to the inaccuracy of the sensor parameters, the relative positioning error can still be as large as tens or even hundreds of pixels. This paper proposes a fast co-registration method comprising 3D dense feature description based on a single-scale Sobel operator and the ratio of exponentially weighted averages (ROEWA) combined with the angle-weighted gradient (SRAWG), overlapping template merging, and non-maxima-suppressed template search. To describe the structural features of the images more accurately, the single-scale Sobel and ROEWA operators are used to calculate the gradients of the optical and SAR images, respectively. On this basis, the 3 × 3 neighborhood angle-weighted gradients of each pixel are fused to form a pixel-wise 3D dense feature description. To address the repeated feature description in overlapping templates and the multi-peak problem on the search surface, the method adopts a template search strategy of overlapping template merging and non-maximum suppression. Registration results on seven pairs of test images show that the proposed method has significant advantages over state-of-the-art methods in terms of overall registration accuracy and efficiency.
14

Yang, Guangyu, Daolin Xu, Haicheng Zhang, and Shuyan Xia. "Weighted recurrence network for characterizing continuous dynamical systems." Modern Physics Letters B 35, no. 21 (June 12, 2021): 2150361. http://dx.doi.org/10.1142/s0217984921503619.

Abstract:
Recurrence network (RN) is a powerful tool for the analysis of complex dynamical systems. It integrates complex network theory with the idea of recurrence of a trajectory, i.e. whether two state vectors are close neighbors in a phase space. However, the differences in proximity between connected state vectors are not considered in the RN construction. Here, we propose a weighted state vector recurrence network method which assigns weights to network links based on the proximity of the two connected state vectors. On this basis, we further propose a weighted data segment recurrence network that takes continuous data segments as nodes for the analysis of noisy time series. The feasibility of the proposed methods is illustrated on the Lorenz system. Finally, an application to five types of EEG recordings demonstrates the potential of the proposed methods for the study of real-world data.
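A generic sketch of the core idea, links weighted by the proximity of the connected state vectors, is given below; the linear weighting function is an assumption for illustration, not the paper's exact choice:

    import numpy as np

    def weighted_recurrence_network(states, eps):
        # Pairwise distances between state vectors; two states are linked
        # when closer than eps, with weight growing as they get closer.
        d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=2)
        A = np.where(d <= eps, 1.0 - d / eps, 0.0)
        np.fill_diagonal(A, 0.0)   # no self-links
        return A

    # Delay-embedded toy time series as state vectors
    t = np.linspace(0, 8 * np.pi, 200)
    x = np.sin(t)
    states = np.stack([x[:-10], x[5:-5], x[10:]], axis=1)
    print(weighted_recurrence_network(states, eps=0.3).shape)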
15

Hu, Shaoxiang, Zhiwu Liao, Dan Sun, and Wufan Chen. "A Numerical Method for Preserving Curve Edges in Nonlinear Anisotropic Smoothing." Mathematical Problems in Engineering 2011 (2011): 1–14. http://dx.doi.org/10.1155/2011/186507.

Abstract:
We focus on nonlinearity in images and propose a new method which can preserve curve edges in image smoothing using nonlinear anisotropic diffusion (NAD). Unlike existing methods, which diffuse only among the spatial variants, the new method performs diffusion among both the time variants and the spatial variants, and is named time and space nonlinear anisotropic diffusion (TSNAD). That is, not only are the differences of the spatial variants estimated from nearby spatial points, but the differences of the time variants are also approximated by the weighted time differences of nearby points, according to the differences in gray levels between them and the point under consideration. Since the time differences of nearby points using NAD can find more points with similar gray levels, which form a curve belt around the center pixel on a curve edge, TSNAD provides satisfactory smoothing results while preserving curve edges. Experiments on digital images also demonstrate the ability of TSNAD to preserve curve edges.
16

Liu, Hui, Qing Xu, Guowang Jin, and Hongmin Zhang. "MIMO Antenna Polynomial Weighted Average Design Method of Downward-Looking Array SAR." International Journal of Antennas and Propagation 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/3029847.

Abstract:
A MIMO antenna polynomial weighted average design method for downward-looking array SAR is proposed in this paper from the perspective of surveying and mapping, in order to solve the ill-posed problem that an equivalent virtual array can be implemented by a variety of physical transmitter-receiver arrays for a bistatic MIMO linear array. For the wave band, resolution, elevation precision, and working height required by surveying and mapping applications, the lengths of the equivalent virtual array and the actual physical array that meet the needs of large-scale topographic mapping are solved. The array numbers and position vectors of MIMO downward-looking array SAR for a real aerial mapping platform are then optimized. Simulation experiments and comparisons were conducted on this design. The results prove the rationality and effectiveness of this array configuration by comparing the differences between the 3D imaging results and the original simulation scene, computing the mean and standard deviation of the elevation reconstruction error while eliminating the influence of shadow areas, and computing the probability that the elevation reconstruction error lies within half a resolution cell over the whole scene and over individual building areas.
17

Zheng, En Wei, and Xian Jun Wang. "An Effective Regularization Method for Image Super Resolution." Advanced Materials Research 219-220 (March 2011): 1411–14. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.1411.

Abstract:
In this paper, we propose a new super resolution (SR) reconstruction method to handle license plate numbers of vehicles in real traffic videos. SR reconstruction schemes based on regularization have recently been demonstrated to be effective, because SR reconstruction is an ill-posed problem. Working within this promising framework, the residual data (RD) term is weighted according to the differences among the observed LR images in the SR reconstruction model. Moreover, the L1 norm is used to measure the RD term in order to improve the robustness of our method. Experiments show the proposed method improves the subjective visual quality of the high-resolution images.
18

Yao, Yunhan, and Ke Zhang. "An Improved Self-Born Weighted Least Square Method for Cylindricity Error Evaluation." Applied Sciences 12, no. 23 (December 1, 2022): 12319. http://dx.doi.org/10.3390/app122312319.

Abstract:
In order to improve the stability of evaluation results and the gross-error resistance of the algorithm, given the widespread gross errors in geometric error evaluation, an improved self-born weighted least square method (ISWLS) is proposed in this paper. First, the nonlinear cylindrical axis model is linearized to establish the error equations of the observed values. We use the conditional equations of the independent observations as valid information to derive the weights of the observations. The weights are subjected to least-squares iteration to calculate the error values and equation parameters, while the ordinal numbers of the independent sets of equations among the observation equations are updated several times. By updating the ordinal-number information of the conditional equations, the influence of gross-error data on the solution is minimized. A series of experiments shows that the algorithm has strong resistance to gross errors and a shorter running time. Based on the cylindricity error evaluation results, the uncertainty of the cylindricity error was estimated by the Guide to the Expression of Uncertainty in Measurement (GUM) method and the Monte Carlo method (MCM). Experiments show that the uncertainty results of the MCM verify those assessed by the GUM method, which proves that the results of the ISWLS method are effective and robust.
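The broad pattern, deriving observation weights from the data and iterating the least-squares solution so that suspected gross errors lose influence, can be sketched with a standard iteratively reweighted least-squares loop (a generic robust scheme under assumed notation, not the paper's exact self-born weighting):

    import numpy as np

    def irls(A, b, iters=20, eps=1e-6):
        # Start from ordinary least squares, then repeatedly down-weight
        # observations with large residuals (candidate gross errors).
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(iters):
            r = b - A @ x
            w = 1.0 / np.maximum(np.abs(r), eps)
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        return x

    # Linearized observation equations with one injected gross error
    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true + 0.01 * rng.normal(size=30)
    b[5] += 5.0
    print(irls(A, b))   # close to x_true despite the gross error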
19

Gempesaw, C. M., A. M. Tambe, R. M. Nayga, and U. C. Toensmeyer. "The Single Index Market Model in Agriculture." Northeastern Journal of Agricultural and Resource Economics 17, no. 2 (October 1988): 147–55. http://dx.doi.org/10.1017/s0899367x00001781.

Abstract:
This study illustrates the differences in empirical results due to data measurements and estimating procedures when applying the single index market model in agriculture. Gross and net return betas along with systematic and unsystematic risk proportions are estimated and found to be different. The stochastic coefficients model is used to show the difference in beta-risk estimates compared with the traditional fixed coefficients OLS procedure. A third estimating technique, weighted least squares/Prais Winsten method, is also proposed.
20

Patel, Vrushang, Sheela Ramanna, Ketan Kotecha, and Rahee Walambe. "Short Text Classification with Tolerance-Based Soft Computing Method." Algorithms 15, no. 8 (July 30, 2022): 267. http://dx.doi.org/10.3390/a15080267.

Abstract:
Text classification aims to assign labels to textual units such as documents, sentences and paragraphs. Applications of text classification include sentiment classification and news categorization. In this paper, we present a soft computing technique-based algorithm (TSC) to classify the sentiment polarities of tweets as well as news categories from text. The TSC algorithm is a supervised learning method based on tolerance near sets. Near sets theory is a more recent soft computing methodology inspired by rough sets: instead of the set approximation operators used by rough sets to induce tolerance classes, the tolerance classes are induced directly from the feature vectors using a tolerance-level parameter and a distance function. The proposed TSC algorithm takes advantage of recent advances in efficient feature extraction and vector generation from pre-trained bidirectional transformer encoders for creating tolerance classes. Experiments were performed on ten well-researched datasets including both short and long text, using both pre-trained SBERT and TF-IDF vectors. Results with transformer-based vectors demonstrate that TSC outperforms five well-known machine learning algorithms on four datasets and is comparable on all the other datasets in terms of weighted F1, Precision and Recall scores. The highest AUC-ROC (area under the receiver operating characteristic curve) score was obtained on two datasets and was comparable on six others, and the highest AUC-PRC (area under the precision-recall curve) score was obtained on one dataset and was comparable on four others. Additionally, a Wilcoxon signed-ranks test on the weighted F1-scores of TSC and the other classifiers showed significant differences in most comparisons.
21

Zhang, Xiuyuan, Paul A. McDermott, John W. Fantuzzo, and Vivian L. Gadsden. "Longitudinal Stability of IRT and Equivalent-Groups Linear and Equipercentile Equating." Psychological Reports 113, no. 1 (August 2013): 291–313. http://dx.doi.org/10.2466/03.10.pr0.113x11z6.

Abstract:
A multiscale criterion-referenced test featuring two presumably equivalent forms (A and B) was administered to 1,667 Head Start children at each of four points over an academic year. Using a randomly equivalent groups design, three equating methods were applied: common-item IRT equating using concurrent calibration, linear transformation, and equipercentile transformation. The methods were compared by examining mean score differences, weighted mean squared differences, and Kolmogorov's D statistics for each subscale. The results indicated that, over time, the IRT equating method and the conventional equating methods exhibited different patterns of discrepancy between the two test forms. IRT equating yielded marginally smaller form-to-form mean score differences and generated slightly fewer distributional discrepancies between Forms A and B than both linear and equipercentile equating. However, the results were mixed, indicating that more studies are needed to provide additional information on the relative merits and weaknesses of each approach.
22

Edler, R., M. Abd Rahim, D. Wertheim, and D. Greenhill. "The Use of Facial Anthropometrics in Aesthetic Assessment." Cleft Palate-Craniofacial Journal 47, no. 1 (January 2010): 48–57. http://dx.doi.org/10.1597/08-218.1.

Abstract:
Objective: To identify a valid method for scoring facial aesthetics, correlating clinicians' assessments with measurements taken from 3D facial photographs. Design: Album assessment of facial images using a visual analog scale of attractiveness, with the scores ranked. Facial ratios obtained following digitization of the images were compared with control-group mean data from the Farkas growth study, and the differences from the matched means were calculated. Ranked scores were compared according to whether surface as well as caliper (shortest-distance) measurements were included and whether the ranked scores, representing difference from the mean, were weighted. Patients, Participants: Eleven facial images were presented; seven clinicians provided clinical judgment. Main Outcome Measures: Correlation coefficients between mean attractiveness rankings and summed differences between patients' facial proportions and matched control data from the Farkas growth study. Results: The highest correlation was obtained using the difference from the mean of the Farkas proportion index (r = −.76, p = .006), followed by the standard deviation (r = −.65, p = .032), using data in which the difference is weighted according to the magnitude of the linear measurement involved. Repeatability of the clinicians' assessments and of operator digitization was validated. Conclusions: There appears to be potential in using the mean as the aesthetic ideal in aesthetic assessment, and perhaps as an objective means of outcome assessment after facial surgery. The most appropriate scoring method would seem to include both surface and caliper measurements and incorporate weighting.
23

Ren, Jian, Yang Gao, and Can Bian. "Multiple Criteria Decision Making Based on Discrete Linguistic Stochastic Variables." Mathematical Problems in Engineering 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/546508.

Abstract:
For solving discrete linguistic stochastic multiple criteria decision making problems with incomplete information, a new decision making method based on the differences between superiorities and inferiorities is proposed. According to the two basic parameters, the possible outcome and the state probability, the superior decision matrix and the inferior decision matrix of the alternative set under each criterion are first worked out. Then, from the differences between the elements at the corresponding locations of these matrices, the dominant decision matrices are formed. Subsequently, with the help of the weight vector of the criterion set, the weighted integrated dominant decision matrix of the alternative set is built, and the sum of the weighted integrated dominant indices of each alternative is calculated. Thus the ranking of the alternatives is obtained. Finally, a numerical example is given; the result shows the superiority of the method.
24

Rhee, Jinyoung, Gregory J. Carbone, and James Hussey. "Drought Index Mapping at Different Spatial Units." Journal of Hydrometeorology 9, no. 6 (December 1, 2008): 1523–34. http://dx.doi.org/10.1175/2008jhm983.1.

Abstract:
This paper investigates the influence of spatial interpolation and aggregation of data to depict drought at different spatial units relevant to and often required for drought management. Four different methods for drought index mapping were explored, and comparisons were made between two spatial operation methods (simple unweighted average versus spatial interpolation plus aggregation) and two calculation procedures (whether spatial operations are performed before or after the calculations of drought index values). Deterministic interpolation methods including Thiessen polygons, inverse distance weighted, and thin-plate splines as well as a stochastic and geostatistical interpolation method of ordinary kriging were compared for the two methods that use interpolation. The inverse distance weighted method was chosen based on the cross-validation error. After obtaining drought index values for different spatial units using each method in turn, differences in the empirical binned frequency distributions were tested between the methods and spatial units. The two methods using interpolation and aggregation introduced fewer errors in cross validation than the two simple unweighted average methods. Whereas the method performing spatial interpolation and aggregation before calculating drought index values generally provided consistent drought information between various spatial units, the method performing spatial interpolation and aggregation after calculating drought index values reduced errors related to the calculations of precipitation data.
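Since inverse distance weighting was the interpolator chosen here, a minimal sketch of IDW follows (hypothetical station coordinates and precipitation values):

    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0):
        # Each query point gets a distance-weighted average of station values.
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)        # guard against zero distance
        w = 1.0 / d**power
        return (w @ values) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    precip = np.array([55.0, 42.0, 61.0, 47.0])   # monthly precipitation (mm)
    grid = np.array([[5.0, 5.0], [2.0, 8.0]])
    print(idw(stations, precip, grid))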
25

Fishbein, Evan, and R. Timothy Patterson. "Error-weighted maximum likelihood (EWML): a new statistically based method to cluster quantitative micropaleontological data." Journal of Paleontology 67, no. 3 (May 1993): 475–86. http://dx.doi.org/10.1017/s0022336000036921.

Abstract:
The advent of readily available computer-based clustering packages has created some controversy in the micropaleontological community concerning the use and interpretation of computer-based biofacies discrimination. This is because dramatically different results can be obtained depending on methodology. The analysis of various clustering techniques reveals that, in most instances, no statistical hypothesis is contained in the clustering model and no basis exists for accepting one biofacies partitioning over another. Furthermore, most techniques do not consider standard error in species abundances and generate results that are not statistically relevant. When many rare species are present, statistically insignificant differences in rare species can accumulate and overshadow the significant differences in the major species, leading to biofacies containing members having little in common. A statistically based “error-weighted maximum likelihood” (EWML) clustering method is described that determines biofacies by assuming that samples from a common biofacies are normally distributed. Species variability is weighted to be inversely proportional to measurement uncertainty. The method has been applied to samples collected from the Fraser River Delta marsh and shows that five distinct biofacies can be resolved in the data. Similar results were obtained from readily available packages when the data set was preprocessed to reduce the number of degrees of freedom. Based on the sample results from the new algorithm, and on tests using a representative micropaleontological data set, a more conventional iterative processing method is recommended. This method, although not statistical in nature, produces similar results to EWML (not commercially available yet) with readily available analysis packages. Finally, some of the more common clustering techniques are discussed and strategies for their proper utilization are recommended.
26

Bautista, Pinky A., and Yukako Yagi. "Multispectral Enhancement Method to Increase the Visual Differences of Tissue Structures in Stained Histopathology Images." Analytical Cellular Pathology 35, no. 5-6 (2012): 407–20. http://dx.doi.org/10.1155/2012/501742.

Abstract:
In this paper we proposed a multispectral enhancement scheme in which the spectral colors of the stained tissue-structure of interest and its background can be independently modified by the user to further improve their visualization and color discrimination. The colors of the background objects are modified by transforming their N-band spectra through an N×N transformation matrix, which is derived by mapping the representative samples of their original spectra to the spectra of their target colors using the least mean square method. On the other hand, the color of the tissue structure of interest is modified by modulating the transformed spectra with the sum of the pixel’s spectral residual-errors at specific bands weighted through an N×N weighting matrix; the spectral error is derived by taking the difference between the pixel’s original spectrum and its reconstructed spectrum using the first M dominant principal component vectors in principal component analysis. Promising results were obtained on the visualization of the collagen fiber and the non-collagen tissue structures, e.g., nuclei, cytoplasm and red blood cells (RBC), in a hematoxylin and eosin (H&E) stained image.
27

Wang, Aili, Chengyang Liu, Dong Xue, Haibin Wu, Yuxiao Zhang, and Meihong Liu. "Hyperspectral Image Classification Based on Cross-Scene Adaptive Learning." Symmetry 13, no. 10 (October 5, 2021): 1878. http://dx.doi.org/10.3390/sym13101878.

Abstract:
Aiming at few-shot classification of hyperspectral remote sensing images, this paper proposes a classification method based on cross-scene adaptive learning. First, based on unsupervised domain adaptation, cross-scene knowledge transfer learning is carried out to reduce the differences between the source and target scenes. At the same time, depthwise over-parameterized convolution is used in the deep embedding model to improve the convergence speed and feature extraction ability. Second, two symmetrical subnetworks are designed in the model to further reduce the differences between the scenes. Then, the Manhattan distance is learned in the Manhattan metric space in order to reduce the computational cost of the model. Finally, a weighted K-nearest neighbor classifier is introduced, in which weighted Manhattan metric distances are assigned to the clustered samples to improve the handling of imbalanced hyperspectral image data. The effectiveness of the proposed algorithm is verified on the Pavia and Indiana hyperspectral datasets, with overall classification accuracies of 90.90% and 65.01%, respectively. Compared with six other hyperspectral image classification methods, the proposed cross-scene method has better classification accuracy.
28

Byvaltsev, V. A., I. A. Stepanov, and A. I. Kichigin. "The Value of Diffusion-weighted Magnetic Resonance Imaging in the Preoperative Evaluation of the Grade of Brain Gliomas." Journal of radiology and nuclear medicine 100, no. 2 (May 22, 2019): 102–10. http://dx.doi.org/10.20862/0042-4676-2019-100-2-102-110.

Abstract:
Objective. To compare the values obtained for the measured diffusion coefficient (MDC) of brain gliomas with cell density and the Ki-67 proliferative activity index, and to study whether diffusion-weighted MRI (DW-MRI) can be used in the preoperative evaluation of the grade of glial tumors. Material and methods. Diffusion-weighted images of 39 patients with brain gliomas were studied. MDC, cell density, and the Ki-67 proliferative activity index were calculated for each tumor, and the correlation between MDC values, cell density, and the Ki-67 proliferative activity index was analyzed. Results. Comparison of the mean values for MDC revealed a significant difference between grade I–II and grade III–IV tumors. There were statistically significant differences in the mean Ki-67 index between different grades of gliomas. Evaluation of the correlation between MDC and the Ki-67 proliferative activity index demonstrated moderate and strong inverse correlations for low- and high-grade tumors, respectively. Conclusion. DW-MRI together with MDC calculation can be used as an additional noninvasive method for the preoperative estimation of the grade and proliferative potential of brain gliomas.
29

Rao, Junmin, Jing Mu, Fanming Li, and Shijian Liu. "Infrared Small Target Detection Based on Weighted Local Coefficient of Variation Measure." Sensors 22, no. 9 (May 2, 2022): 3462. http://dx.doi.org/10.3390/s22093462.

Abstract:
Robust infrared (IR) small target detection is critical for infrared search and track (IRST) systems and is a challenging task for complicated backgrounds. Current algorithms have poor performance on complex backgrounds, and there is a high false alarm rate or even missed detection. To address this problem, a weighted local coefficient of variation (WLCV) is proposed for IR small target detection. This method consists of three stages. First, the preprocessing stage can enhance the original IR image and extract potential targets. Second, the detection stage consists of a background suppression module (BSM) and a local coefficient of variation (LCV) module. BSM uses a special three-layer window that combines the anisotropy of the target and differences in the grayscale distribution. LCV exploits the discrete statistical properties of the target grayscale. The weighted advantages of the two modules complement each other and greatly improve the effect of small target enhancement and background suppression. Finally, the weighted saliency map is subjected to adaptive threshold segmentation to extract the true target for detection. The experimental results show that the proposed method is more robust to different target sizes and background types than other methods and has a higher detection accuracy.
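The local coefficient of variation underlying the LCV module is simply the window standard deviation divided by the window mean; a naive single-window sketch (the paper's three-layer window and weighting are not reproduced):

    import numpy as np

    def local_cv(img, win=1):
        # CV = std / mean over a sliding window; a small bright target on a
        # smooth background produces a locally high CV.
        img = img.astype(np.float64)
        h, w = img.shape
        cv = np.zeros_like(img)
        for i in range(h):
            for j in range(w):
                block = img[max(i - win, 0):i + win + 1,
                            max(j - win, 0):j + win + 1]
                m = block.mean()
                cv[i, j] = block.std() / m if m > 0 else 0.0
        return cv

    toy = np.full((16, 16), 40.0)
    toy[8, 8] = 200.0                     # a one-pixel "target"
    print(local_cv(toy).max())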
30

Yin, Shouqiang, Jing Li, Jiaxin Liang, Kejing Jia, Zhen Yang, and Yuan Wang. "Optimization of the Weighted Linear Combination Method for Agricultural Land Suitability Evaluation Considering Current Land Use and Regional Differences." Sustainability 12, no. 23 (December 4, 2020): 10134. http://dx.doi.org/10.3390/su122310134.

Abstract:
This study was aimed at optimizing the weighted linear combination method (WLC) for agricultural land suitability evaluation (ALSE) through indicator selection, weight determination, and classification of overall suitability scores in Handan, China. Handan is a representative research area with distinct agricultural advantages and regional differences in land use, where the expansion of construction land has led to a rapid decrease of agricultural land in recent years. Natural factors (topography, climate, soil conditions, and vegetation cover) and socioeconomic factors (land use and spatial accessibility) were selected to establish a more comprehensive evaluation system. The index weight was calculated by the mutual information between index suitability and current land use. The consistency index was used to identify the boundary value dividing the overall suitability score into a suitable category and unsuitable category in each sub-region. The results demonstrated that the optimized WLC-ALSE model outperformed the comparison models using conventional methods in terms of the consistency between the evaluation results and current land use. Owing to the increasing limitations of topography, soil conditions, spatial accessibility, and land use, the proportions of suitable land in Zone 1, Zone 2, and Zone 3 were 77.4%, 67.5%, and 30.9%, respectively. The agricultural land unsuitable for agriculture (14.5%) was less than non-agricultural land suitable for agriculture (7.4%), indicating that agricultural land had low growth potential in Handan. Finally, specific recommendations were made to improve agricultural land suitability, alleviate land use conflicts, and further optimize the model. The results can provide effective guidance for WLC-ALSE and land use decision-making for sustainable agriculture.
31

Lamouroux, Raphaël, Jérémie Gressier, and Gilles Grondin. "A High-Order Compact Limiter Based on Spatially Weighted Projections for the Spectral Volume and the Spectral Differences Method." Journal of Scientific Computing 67, no. 1 (September 2, 2015): 375–403. http://dx.doi.org/10.1007/s10915-015-0084-6.

32

Yang, Hu, Longjiang Tang, Huizhong Zhu, Aigong Xu, and Bo Li. "A Concise Method for Calibrating the Offset of GPS Precise Satellite Orbit." Remote Sensing 15, no. 1 (December 20, 2022): 8. http://dx.doi.org/10.3390/rs15010008.

Abstract:
A set of Global Navigation Satellite System (GNSS) satellite orbits and clock offsets is an essential prerequisite for precise applications. However, abrupt changes in accuracy at the boundaries are prevalent in products provided by international GNSS services, which decreases orbit interpolation precision near the daily boundary; this effect shows up as degraded accuracy and fluctuations in subsequent applications. In this study, time-weighted and equal-weighted calibration methods were applied to adjacent Global Positioning System (GPS) satellite orbits, and the orbit variations were then used to correct the clock offsets to ensure their consistency. The calibration method is evaluated through the accuracy and smoothness of post-processing kinematic precise point positioning (PPP) and low Earth orbit (LEO) precise orbit determination (POD) near the day boundary. Across a variety of scientific applications, the results indicate that the proposed calibration method can effectively reduce the excessive differences near the day boundary between adjacent days. Near the boundary, the maximum improvements for post-processing kinematic PPP, dynamic LEO precise orbits, and kinematic LEO precise orbits are 41.5%, 9.4%, and 20.5%, respectively.
33

Christine, Christine, and Halim Agung. "Sistem Penilaian Karakteristik Anjing Menggunakan Metode Weighted Product." Creative Information Technology Journal 5, no. 1 (May 24, 2019): 71. http://dx.doi.org/10.24076/citec.2017v5i1.127.

Abstract:
According to a pet-population survey by the UK pet food manufacturers' association (PFMA), the dog was the most populous pet in 2016. With so many breeds and characteristic differences, not everyone can choose the breed of dog that suits their situation and circumstances. A dog characteristic scoring system was therefore built to help users select a breed that fits their situation. The system uses the weighted product method, because this method is more efficient than other methods for solving MADM (Multi Attribute Decision Making) problems, and it is based on dog data registered with the American Kennel Club. The variables in the system include group category, suitable environment, energy level, shedding, grooming difficulty, obedience level, dog size, and barking frequency. The scoring system matched manual calculations in 20 of 30 tests, so the match rate of the weighted product method is 67%. The study concludes that the weighted product method is efficient enough to be applied in a dog characteristic scoring system. Keywords: Dog Breed Recommendation, Weighted Product, Choosing a Dog, Characteristics, Scoring System.
34

Tan, Gao Shan, Li Yan Zhang, Sheng Lan Liu, and Qing Lan Shi. "A Differentiated Registration Method for Complex Surfaces." Applied Mechanics and Materials 457-458 (October 2013): 1635–38. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.1635.

Abstract:
Different error requirements on different surfaces are necessary in many situations. If the registration between the measured dataset and the nominal surface is performed ignoring these differences, an inaccurate result can occur because of the low-precision surfaces; an improper registration may even lead to a false verdict. Differentiated registration is therefore of significance in precision engineering. In this paper, a new registration method that takes the different error requirements into consideration is proposed. First, surfaces are classified by error requirement. Then, appropriate weight factors are iteratively assigned to these surfaces. Based on the relationship between error and weight factors, the algorithm controls the error distribution by optimizing the weighted least squares. Our method has great flexibility in distributing the error, which makes it suitable for engineering applications. An experiment demonstrates its validity.
35

Cho, Eun, Jin Hwa Lee, Hye Jin Baek, Ji Young Ha, Kyeong Hwa Ryu, Sung Eun Park, Jin Il Moon, Sung-Min Gho, and Tetsuya Wakayama. "Clinical Feasibility of Reduced Field-of-View Diffusion-Weighted Magnetic Resonance Imaging with Computed Diffusion-Weighted Imaging Technique in Breast Cancer Patients." Diagnostics 10, no. 8 (July 30, 2020): 538. http://dx.doi.org/10.3390/diagnostics10080538.

Full text
Abstract:
Background: We evaluated the feasibility of the reduced field-of-view (rFOV) diffusion-weighted imaging (DWI) with computed DWI technique by comparison and analysis of the inter-method agreement among acquired rFOV DWI (rFOVA), rFOV DWI with computed DWI technique (rFOVS), and dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) in patients with breast cancer. Methods: A total of 130 patients with biopsy-proven breast cancers who underwent breast MRI from April 2017 to December 2017 were included in this study. The rFOVS were reformatted by calculation of the apparent diffusion coefficient curve obtained from rFOVA b = 0 s/mm2 and b = 500 s/mm2. Visual assessment of the image quality of rFOVA b = 1000 s/mm2, rFOVS, and DCE MRI was performed using a four-point grading system. Morphologic analyses of the index cancer was performed on rFOVA, rFOVS, and DCE MRI. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and contrast of tumor-to-parenchyma (TPC) were calculated. Results: Image quality scores with rFOVA, rFOVS, and DCE MRI were not significantly different (p = 0.357). Lesion analysis of shape, margin, and size of the index cancer also did not show significant differences among the three sequences (p = 0.858, p = 0.242, and p = 0.858, respectively). SNR, CNR, and TPC of DCE MRI were significantly higher than those of rFOVA and rFOVS (p < 0.001, p = 0.001, and p = 0.016, respectively). Significant differences were not found between the SNR, CNR, and TPC of rFOVA and those of rFOVS (p > 0.999, p > 0.999, and p > 0.999, respectively). Conclusion: The rFOVA and rFOVS showed nearly equivalent levels of image quality required for morphological analysis of the tumors and for lesion conspicuity compared with DCE MRI.
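The computed-DWI step rests on the monoexponential model S(b) = S0 · exp(−b · ADC): an ADC map is fitted from the two acquired b-values and used to extrapolate a synthetic high-b image. A minimal sketch with hypothetical pixel intensities (the clinical reconstruction pipeline may differ):

```python
import numpy as np

# Hypothetical signal intensities from the two acquired b-values.
s0 = np.array([[1200.0, 950.0], [1100.0, 1000.0]])    # b = 0 s/mm^2
s500 = np.array([[700.0, 600.0], [640.0, 610.0]])     # b = 500 s/mm^2

adc = np.log(s0 / s500) / 500.0          # per-voxel ADC from the two points
s1000 = s0 * np.exp(-1000.0 * adc)       # synthesized b = 1000 s/mm^2 image
```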
APA, Harvard, Vancouver, ISO, and other styles
36

Parkar, Anagha P., Miraude E. A. P. M. Adriaensen, Søren Vindfeld, and Eirik Solheim. "The Anatomic Centers of the Femoral and Tibial Insertions of the Anterior Cruciate Ligament: A Systematic Review of Imaging and Cadaveric Studies Reporting Normal Center Locations." American Journal of Sports Medicine 45, no. 9 (November 29, 2016): 2180–88. http://dx.doi.org/10.1177/0363546516673984.

Full text
Abstract:
Background: The anterior cruciate ligament (ACL) is regularly reconstructed if knee joint function is impaired. Anatomic graft tunnel placement, often assessed with varying measurement methods, in the femur and tibia is considered important for an optimal clinical outcome. A consensus on the exact location of the femoral and tibial footprint centers is lacking. Purpose: To systematically review the literature regarding anatomic centers of the femoral and tibial ACL footprints and assess the mean, median, and percentiles of normal centers. Study Design: Systematic review. Methods: A systematic literature search was performed in the PubMed/Medline database in November 2015. Search terms were the following: “ACL” and “insertion anatomy” or “anatomic footprint” or “radiographic landmarks” or “quadrant methods” or “tunnel placement” or “cadaveric femoral” or “cadaveric tibial.” English-language articles that reported the location of the ACL footprint according to the Bernard and Hertel grid in the femur and the Stäubli and Rauschning method in the tibia were included. Weighted means, weighted medians, and weighted 5th and 95th percentiles were calculated. Results: The initial search yielded 1393 articles. After applying the inclusion and exclusion criteria, 16 studies with measurements on cadaveric specimens or a healthy population were reviewed. The weighted mean of the femoral insertion center based on measurements in 218 knees was 29% in the deep-shallow (DS) direction and 35% in the high-low (HL) direction. The weighted median was 26% for DS and 34% for HL. The weighted 5th and 95th percentiles for DS were 24% and 37%, respectively, and for HL were 28% and 43%, respectively. The weighted mean of the tibial insertion center in the anterior-posterior direction based on measurements in 300 knees was 42%, and the weighted median was 44%; the 5th and 95th percentiles were 39% and 46%, respectively. Conclusion: Our results show slight differences between the weighted means and medians in the femoral and tibial insertion centers. We recommend the use of the 5th and 95th percentiles when considering postoperative placement to be “in or out of the anatomic range.”
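Weighted summaries of this kind weight each study by its sample size (here, knees). The sketch below computes a weighted mean and weighted percentiles from hypothetical per-study values, using one common interpolation convention for weighted percentiles:

```python
import numpy as np

def weighted_percentile(values, weights, q):
    """q-th percentile of `values` with per-study weights (e.g., knees per study)."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = (np.cumsum(w) - 0.5 * w) / w.sum()   # midpoints of the weighted CDF
    return float(np.interp(q / 100.0, cdf, v))

ds = [26.0, 29.0, 31.0, 24.0, 33.0]   # hypothetical per-study DS centers (%)
n = [40.0, 25.0, 60.0, 18.0, 75.0]    # hypothetical knees per study
wmean = np.average(ds, weights=n)
wmedian = weighted_percentile(ds, n, 50)
p5, p95 = weighted_percentile(ds, n, 5), weighted_percentile(ds, n, 95)
```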
APA, Harvard, Vancouver, ISO, and other styles
37

Schnoor, Henning, and Wilhelm Hasselbring. "Comparing Static and Dynamic Weighted Software Coupling Metrics." Computers 9, no. 2 (March 30, 2020): 24. http://dx.doi.org/10.3390/computers9020024.

Full text
Abstract:
Coupling metrics that count the number of inter-module connections in a software system are an established way to measure internal software quality with respect to modularity. In addition to static metrics, which are obtained from the source or compiled code of a program, dynamic metrics use runtime data gathered, e.g., by monitoring a system in production. Dynamic metrics have been used to improve the accuracy of static metrics for object-oriented software. We study weighted dynamic coupling that takes into account how often a connection (e.g., a method call) is executed during a system’s run. We investigate the correlation between dynamic weighted metrics and their static counterparts. To compare the different metrics, we use data collected from four different experiments, each monitoring production use of a commercial software system over a period of four weeks. We observe an unexpected level of correlation between the static and the weighted dynamic metrics, and we reveal differences between class- and package-level analyses.
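The distinction is easy to state in code: a static-style metric counts each distinct inter-module dependency once, whereas the weighted dynamic metric counts how often each dependency is executed. A toy sketch with a hypothetical call log:

```python
from collections import Counter

# Hypothetical monitoring log of executed inter-module calls (caller, callee).
calls = [("ui", "billing"), ("billing", "db"), ("billing", "db"),
         ("ui", "db"), ("billing", "db"), ("ui", "billing")]

unweighted_edges = len(set(calls))   # static-style count of distinct couplings
weighted_edges = Counter(calls)      # dynamic weights: executions per coupling
print(unweighted_edges, weighted_edges.most_common(1))
```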
APA, Harvard, Vancouver, ISO, and other styles
38

Stein, David, and Ralf Michael. "Determinants affecting the decision of German dentists to set-up an office." SCENTIA International Economic Review 1, no. 2 (April 19, 2022): 215–41. http://dx.doi.org/10.52514/sier.v1i2.27.

Full text
Abstract:
Given the threat to full-coverage dental care, it is reasonable to find a method to identify the crucial determinants affecting dentists’ decision to set up a practice (office). METHODS: A systematic literature review was conducted for the years 2010-2020 using the databases Elsevier-Scopus, Springer-SpringerLink and MedLine-PubMed. The factors found in the literature were compared pairwise within the framework of a pilot study, weighted using the Analytic Hierarchy Process (AHP), and verified for consistency. RESULTS: Nine factors were identified and weighted as follows: quality of life in the private environment (24.12%), environment for the family (20.26%), infrastructure (14.80%), location of the practice (9.72%), real income (7.51%), support programs (6.86%), professional cooperations (6.39%), dentist density (5.51%), funding conditions (4.82%). The consistency check resulted in CR = 0.06, which is considered consistent. CONCLUSION: The AHP method is a way to analyse relevant factors for the setting-up of dental practices. The AHP can be used as an approximation method for further analysis of business types and socio-demographic differences within the dental profession.
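For illustration, AHP derives the factor weights from the principal eigenvector of the pairwise-comparison matrix and checks consistency via the consistency ratio CR. A sketch on a small hypothetical 3 × 3 matrix (the study compared nine factors):

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix on Saaty's 1-9 scale.
a = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(a)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # priority weights of the factors
n = a.shape[0]
ci = (eigvals[k].real - n) / (n - 1)    # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
cr = ci / ri                            # CR < 0.10 is conventionally consistent
print(np.round(w, 3), round(cr, 3))
```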
APA, Harvard, Vancouver, ISO, and other styles
39

Zhao, Hai Jun. "A Video Image Segmentation Technology Based on Adaptive Thresholding Algorithm." Applied Mechanics and Materials 380-384 (August 2013): 1189–92. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1189.

Full text
Abstract:
Image segmentation is a key step in image processing and image analysis and occupies an important position in image engineering. In this paper, an adaptive, multi-objective image segmentation method based on maximum between-class variance is proposed. Concretely, the method adaptively determines the optimal number of thresholds using the idea of variance decomposition, while calculating the weighted ratio of within-class difference to between-class difference for each candidate segmentation. By comparing these ratios, the optimal number of targets in the image can be obtained. The experimental results show that the segmented sub-images are relatively clear and the differences between classes are obvious.
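The single-threshold core of any maximum between-class variance scheme is Otsu's criterion; the adaptive multi-threshold method described above builds on it through variance decomposition. A minimal sketch of the single-threshold case:

```python
import numpy as np

def otsu_threshold(gray):
    """Gray level maximizing the between-class variance sigma_b^2(t)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))       # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

gray = np.random.default_rng(0).integers(0, 256, size=(64, 64))
t = otsu_threshold(gray)
binary = gray > t                            # foreground / background split
```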
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Yong, Fuzhong Weng, Yong Han, and Quanhua Liu. "Planck-Weighted Transmittance and Correction of Solar Reflection for Broadband Infrared Satellite Channels." Journal of Atmospheric and Oceanic Technology 29, no. 3 (March 1, 2012): 382–96. http://dx.doi.org/10.1175/jtech-d-11-00102.1.

Full text
Abstract:
The line-by-line radiative transfer model (LBLRTM) is used to derive the channel transmittances. The channel transmittance from a level to the top of the atmosphere can be approximated by three methods: Planck-weighted transmittance 1 (PW1), Planck-weighted transmittance 2 (PW2), and non-Planck-weighted transmittance (ORD). The PW1 method accounts for the radiance variation across the instrument’s spectral response function (SRF) with the Planck function calculated at the atmospheric layer temperature, whereas the PW2 method accounts for the variation based on the temperatures at the interfaces between atmospheric layers. For channels with broad SRFs, the brightness temperatures (BTs) derived from ORD are less accurate than those from either PW1 or PW2. Furthermore, the BTs from PW1 are more accurate than those from PW2, and the BT differences between PW1 and PW2 increase with atmospheric optical thickness. When the band correction is larger than 1, the PW1 method should be used to account for the Planck radiance variation across the instrument’s SRF. When considering the solar contribution in daytime, a correction for solar reflection has been made for near-infrared broadband channels (~3.7 μm) when using PW1 transmittance. The solar transmittance is predicted using explanatory variables such as the PW1 transmittance, the secant of the zenith angle, and the surface temperature. With this correction, the errors can be significantly reduced.
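For reference, the distinction comes down to where the Planck function B enters the channel average. Under the usual definitions (a sketch; the operational discretization in LBLRTM-based training differs), with φ(ν) the SRF:

```latex
\tau_{\mathrm{PW}}(z) =
  \frac{\int \phi(\nu)\, B(\nu, T)\, \tau(\nu, z)\, d\nu}
       {\int \phi(\nu)\, B(\nu, T)\, d\nu},
\qquad
\tau_{\mathrm{ORD}}(z) =
  \frac{\int \phi(\nu)\, \tau(\nu, z)\, d\nu}
       {\int \phi(\nu)\, d\nu}
```

PW1 evaluates B at the layer-mean temperature, PW2 at the temperatures of the layer interfaces, and ORD omits B altogether.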
APA, Harvard, Vancouver, ISO, and other styles
41

Nakov, Natalija, Jasmina Tonic-Ribarska, Aneta Dimitrovska, and Rumenka Petkovska. "Statistical approach for selection of regression model during validation of bioanalytical method." Macedonian Pharmaceutical Bulletin 60, no. 01 (2014): 19–25. http://dx.doi.org/10.33320/maced.pharm.bull.2014.60.01.003.

Full text
Abstract:
The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration ranges frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear versus quadratic regression models, because only slight differences between the errors (presented through the relative residuals, RR) were obtained. The significance of the differences in the RR was estimated using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple non-parametric statistical test provides statistical confirmation of the choice of an adequate regression model.
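As a sketch of the statistical step (hypothetical residuals; note that SciPy's wilcoxon implements the paired signed-rank test, applied here to matched residuals from the two fits):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical absolute relative residuals (%) from the same calibration runs,
# fitted once with a weighted linear and once with a weighted quadratic model.
rr_linear = np.array([2.1, 3.8, 1.5, 4.2, 2.9, 3.1, 1.9, 2.6])
rr_quadratic = np.array([1.8, 3.5, 1.6, 3.0, 2.2, 2.8, 1.7, 2.1])

stat, p = wilcoxon(rr_linear, rr_quadratic)   # H0: median RR difference is zero
print(f"W = {stat:.1f}, p = {p:.3f}")         # small p favors the quadratic fit
```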
APA, Harvard, Vancouver, ISO, and other styles
42

Abed, Suzan, Basil Al-Najjar, and Clare Roberts. "Measuring annual report narratives disclosure." Managerial Auditing Journal 31, no. 4/5 (April 4, 2016): 338–61. http://dx.doi.org/10.1108/maj-09-2014-1101.

Full text
Abstract:
Purpose This paper aims to investigate empirically the common alternative methods of measuring annual report narratives. Five alternative methods are employed, a weighted and un-weighted disclosure index and three textual coding systems, measuring the amount of space devoted to relevant disclosures. Design/methodology/approach The authors investigate the forward-looking voluntary disclosures of 30 UK non-financial companies. They employ descriptive analysis, correlation matrix, mean comparison t-test, rankings and multiple regression analysis of disclosure measures against determinants of corporate voluntary reporting. Findings The results reveal that while the alternative methods of forward-looking voluntary disclosure are highly correlated, important significant differences do nevertheless emerge. In particular, it appears important to measure volume rather than simply the existence or non-existence of each type of disclosure. Overall, we detect that the optimal method is content analysis by text-unit rather than by sentence. Originality/value This paper contributes to the extant literature in forward-looking disclosure by reporting important differences among alternative content analyses. However, the decision regarding whether this should be a computerised or a manual content analysis appears not to be driven by differences in the resulting measures. Rather, the choice is the outcome of a trade-off between the time involved in setting up coding rules for computerised analysis versus the time saved undertaking the analysis itself.
APA, Harvard, Vancouver, ISO, and other styles
43

Fesenko, V. V. "Analysis of Price Differences for Transfer Prices Auditing in the Controlled Import Transactions." Statistics of Ukraine 82, no. 3 (September 4, 2018): 83–88. http://dx.doi.org/10.31767/su.3(82)2018.03.10.

Full text
Abstract:
The article’s objective is to justify and develop a method for analyzing price differences in the controlled foreign economic transactions of enterprises, for further auditing and analysis of transfer prices in foreign economic transactions with related parties. Actual transfer prices in a controlled transaction may differ from planned ones or from the estimated market range of prices determined by normative requirements for reports on controlled transactions. A detailed analysis is made of differences in estimated prices, which may either occur occasionally or be formed purposefully by the management staff of an enterprise when performing controlled import transactions. The economically reasonable ratio of prices in import transactions is determined from the perspective of management control for under-pricing purposes, and from the perspective of minimizing tax liabilities for over-pricing purposes. The proposed method for analyzing price differences in controlled foreign economic transactions is built on the differences between an actual price and its planned, intra-group, minimal, average weighted, and maximal levels, thus enabling the determination of six types of differences, including the optimization difference, the anti-group difference, the difference by market potential, and the difference by the effectiveness of purchases. Identification and analysis of the calculated price differences will contribute to constructing a system for recording and analyzing the reasons and effects of price distortions, which can be used for audit (internal and external) and analysis of the effectiveness of such transactions. The proposed method can help internal auditors identify price differences that are exposed to the risk of essential distortion due to the probability of management impact, which enables audit procedures to focus on accounting and reporting segments with an increased risk of distortion. This method can also be used as an analytical procedure in independent or tax audits of import transactions, testing prices for over-pricing (in order to understate the taxation base) or under-pricing (in order to reduce the amount of customs payments).
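In code terms, the analysis amounts to comparing the actual price against each reference level and flagging material deviations; a toy sketch with hypothetical prices and an arbitrary materiality threshold (the labels follow the article only loosely):

```python
# Hypothetical reference levels for one controlled import transaction (per unit).
actual, planned, intragroup = 102.0, 95.0, 98.0
market_min, market_wavg, market_max = 90.0, 97.5, 105.0

differences = {
    "vs_planned": actual - planned,
    "vs_intragroup": actual - intragroup,           # anti-group difference
    "vs_market_min": actual - market_min,           # purchasing-effectiveness side
    "vs_market_weighted_avg": actual - market_wavg,
    "vs_market_max": actual - market_max,           # market-potential side
}
flagged = {k: d for k, d in differences.items() if abs(d) > 5.0}  # audit focus
print(flagged)
```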
APA, Harvard, Vancouver, ISO, and other styles
44

Gregg, Mary, Somnath Datta, and Doug Lorenz. "Variance estimation in tests of clustered categorical data with informative cluster size." Statistical Methods in Medical Research 29, no. 11 (June 8, 2020): 3396–408. http://dx.doi.org/10.1177/0962280220928572.

Full text
Abstract:
In the analysis of clustered data, inverse cluster size weighting has been shown to be resistant to the potentially biasing effects of informative cluster size, where the number of observations within a cluster is associated with the outcome variable of interest. The method of inverse cluster size reweighting has been implemented to establish clustered data analogues of common tests for independent data, but the method has yet to be extended to tests of categorical data. Many variance estimators have been implemented across established cluster-weighted tests, but potential effects of differing methods on test performance has not previously been explored. Here, we develop cluster-weighted estimators of marginal proportions that remain unbiased under informativeness, and derive analogues of three popular tests for clustered categorical data, the one-sample proportion, goodness of fit, and independence chi square tests. We construct these tests using several variance estimators and show substantial differences in the performance of cluster-weighted tests based on variance estimation technique, with variance estimators constructed under the null hypothesis maintaining size closest to nominal. We illustrate the proposed tests through an application to a data set of functional measures from patients with spinal cord injuries participating in a rehabilitation program.
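The estimator at the heart of this approach gives every cluster the same total weight, p̂ = (1/m) Σ_i (1/n_i) Σ_j Y_ij, so that informative cluster size cannot bias the marginal proportion. A minimal sketch with hypothetical clusters:

```python
import numpy as np

def icw_proportion(clusters):
    """Inverse-cluster-size weighted marginal proportion: each cluster
    contributes its within-cluster mean with equal weight, so large
    clusters cannot dominate under informative cluster size."""
    return float(np.mean([np.mean(y) for y in clusters]))

# Hypothetical binary outcomes; the big cluster has a much higher success rate.
clusters = [[1, 1, 1, 1, 1, 1, 1, 1, 0, 1], [0, 1], [0, 0, 1], [1, 0]]
print(icw_proportion(clusters))                              # cluster-weighted
print(np.mean(np.concatenate([np.asarray(c) for c in clusters])))  # pooled, biased
```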
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Qisong, Hao Zhang, Zhichao Lai, Youhai Xu, Shuai Yao, and Jun Tao. "An Enhanced Data-Driven Array Shape Estimation Method Using Passive Underwater Acoustic Data." Remote Sensing 13, no. 9 (May 2, 2021): 1773. http://dx.doi.org/10.3390/rs13091773.

Full text
Abstract:
Beamforming-based signal enhancement technologies in passive sonar array processing often perform poorly due to array distortion caused by rapid tactical maneuvers of the towed platform, oceanic currents, hydrodynamic effects, etc. In this paper, an enhanced data-driven array shape estimation formulation using passive underwater acoustic data is proposed. Beamforming based on a hypothetically ideal array is first employed to detect narrow-band components from sources of opportunity, and the corresponding phases of these detected narrow-band components are then extracted to acquire time-delay differences. Next, a weighted outlier-robust Kalman smoother is proposed to acquire enhanced estimates of the time-delay differences, exploiting the slowly changing time-delay differences in the hydrophone array and the diverse signal-to-interference-and-noise ratios of the multiple narrow-band components; its Cramer–Rao lower bound is also provided. Finally, the hydrophone array shape is estimated from the estimated time-delay differences. The proposed formulation fully exploits directional radiated noise signals from distant underwater acoustic targets as sources of opportunity for real-time array shape estimation, and thus requires neither the number nor the directions of the sources to be known in advance. The effectiveness of the proposed method is validated on simulations and real experimental data.
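As a much-simplified stand-in for the paper's weighted outlier-robust Kalman smoother, the sketch below runs a scalar random-walk Kalman filter with a Rauch–Tung–Striebel backward pass, letting per-sample measurement variances play the role of the SINR-derived weights (all values hypothetical):

```python
import numpy as np

def rts_smooth(z, r, q=1e-4):
    """Scalar random-walk Kalman filter + RTS smoother.
    z: noisy time-delay differences; r: per-sample measurement variances,
    set large for low-SINR narrow-band components (the 'weights')."""
    n = len(z)
    xp, pp = np.zeros(n), np.zeros(n)     # predicted mean / variance
    xf, pf = np.zeros(n), np.zeros(n)     # filtered mean / variance
    x, p = z[0], r[0]
    for k in range(n):
        xp[k], pp[k] = x, p + q                   # predict (random walk)
        g = pp[k] / (pp[k] + r[k])                # Kalman gain
        x = xp[k] + g * (z[k] - xp[k])            # measurement update
        p = (1.0 - g) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):                # backward RTS pass
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs

rng = np.random.default_rng(2)
true_delay = np.cumsum(rng.normal(0.0, 0.01, 200))   # slowly drifting delay
r = np.where(rng.random(200) < 0.1, 4.0, 0.05)       # occasional low-SINR samples
z = true_delay + rng.normal(0.0, np.sqrt(r))
smoothed = rts_smooth(z, r)
```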
APA, Harvard, Vancouver, ISO, and other styles
46

Vershinina, Ksenia, Vadim Dorokhov, Daniil Romanov, Galina Nyashina, and Geniy Kuznetsov. "Multi-Criteria Efficiency Analysis of Using Waste-Based Fuel Mixtures in the Power Industries of China, Japan, and Russia." Applied Sciences 10, no. 7 (April 3, 2020): 2460. http://dx.doi.org/10.3390/app10072460.

Full text
Abstract:
This paper presents the results of analyzing the efficiency of the following five fuel types: dry coal, wet coal processing waste, coal–water slurry, and two waste-derived slurries. In the calculations, we employed 16 criteria related to the energy industry, economy, social aspects, safety at plants, and environmental protection. We used experimental data obtained from the combustion of the fuels under study at three heating temperatures (700 °C, 800 °C, and 900 °C). Three countries with a high share of fossil fuels in their energy industries were analyzed: Japan, China, and Russia. The total performance indicator was calculated using three multiple-criteria decision analysis techniques (weighted sum method, weighted product method, and analytic hierarchy process). The choice of weight coefficients was confirmed for each method. We found that coal and coal–water slurry had the lowest integral efficiency indicators (0.016–0.535 and 0.045–0.566, respectively). The maximum effect was achieved when using waste-derived slurry with used turbine oil (0.190–0.800) and coal processing waste (0.535–0.907). The difference in the efficiency indicator of the same fuel in different countries was on average 3%–60%; with changes in temperature, the difference in efficiency was 5%–20%; and when changing the calculation procedure, the difference was 10%–90%.
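Once the decision matrix is normalized, the two simpler of the three techniques reduce to one line each; a sketch with hypothetical normalized scores and weights (the study used 16 criteria and also applied AHP):

```python
import numpy as np

# Hypothetical benefit-normalized scores: rows = 5 fuels, columns = 4 criteria.
r = np.array([[0.2, 0.9, 0.6, 0.4],
              [0.5, 0.7, 0.8, 0.6],
              [0.3, 0.8, 0.5, 0.5],
              [0.9, 0.4, 0.7, 0.8],
              [0.8, 0.6, 0.9, 0.9]])
w = np.array([0.4, 0.3, 0.2, 0.1])       # criterion weights, summing to 1

wsm = r @ w                              # weighted sum method score per fuel
wpm = np.prod(r ** w, axis=1)            # weighted product method score per fuel
print(np.argsort(wsm)[::-1], np.argsort(wpm)[::-1])   # rankings, best first
```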
APA, Harvard, Vancouver, ISO, and other styles
47

Zhou, Zhi-Jie, Feng-Dong Zhao, Xiang-Qian Fang, Xing Zhao, and Shun-Wu Fan. "Meta-analysis of instrumented posterior interbody fusion versus instrumented posterolateral fusion in the lumbar spine." Journal of Neurosurgery: Spine 15, no. 3 (September 2011): 295–310. http://dx.doi.org/10.3171/2011.4.spine10330.

Full text
Abstract:
Object The authors compared the effectiveness of instrumented posterior lumbar interbody fusion (iPLIF) and instrumented posterolateral fusion (iPLF) for the treatment of low-back pain (LBP) due to degenerative lumbar disease. Methods Relevant randomized controlled trials (RCTs) and comparative observational studies through December 2009 were identified using a retrieval strategy of sensitive and specific searches. The study design, participant characteristics, interventions, follow-up rate and period, and outcomes were abstracted after the assessment of methodological quality of the trials. Analyses were performed following the method guidelines of the Cochrane Back Review Group. Results Nine studies were identified—3 RCTs and 6 comparative observational studies. No significant difference was found between the 2 fusion procedures in the global assessment of clinical outcome (OR 1.51, 95% CI 0.71–3.22, p = 0.29) and complication rate (OR 0.55, 95% CI 0.16–1.86, p = 0.34). Both techniques were effective in reducing pain and improving functional disability, as well as restoring intervertebral disc height. Instrumented PLIF was more effective in achieving solid fusion (OR 2.60, 95% CI 1.35–5.00, p = 0.004), a lower reoperation rate (OR 0.20, 95% CI 0.03–1.29, p = 0.09), and better restoration of segmental angle and lumbar lordotic angle than iPLF. There were no significant differences between the fusion methods regarding blood loss (weighted mean difference –179.63, 95% CI –516.42 to 157.15, p = 0.30), and operating time (weighted mean difference 8.03, 95% CI –45.46 to 61.53, p = 0.77). Conclusions The authors' analysis provided moderate-quality evidence that iPLIF has the advantages of higher fusion rate and better restoration of spinal alignment over iPLF. No significant differences were identified between iPLIF and iPLF concerning clinical outcome, complication rate, operating time, and blood loss.
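Pooled estimates such as the weighted mean differences above are conventionally obtained by inverse-variance weighting; a fixed-effect sketch with hypothetical per-study values (not those of the reviewed trials):

```python
import numpy as np

d = np.array([-220.0, -150.0, -170.0])   # hypothetical per-study mean differences (mL)
se = np.array([40.0, 55.0, 35.0])        # hypothetical standard errors

w = 1.0 / se**2                          # inverse-variance weights
wmd = np.sum(w * d) / np.sum(w)          # pooled weighted mean difference
se_pooled = np.sqrt(1.0 / np.sum(w))
ci95 = (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)
print(round(wmd, 1), tuple(round(c, 1) for c in ci95))
```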
APA, Harvard, Vancouver, ISO, and other styles
48

Okazawa, Hidehiko, Hiroshi Yamauchi, Kanji Sugimoto, Masaaki Takahashi, Hiroshi Toyoda, Yoshihiko Kishibe, and Hideo Shio. "Quantitative Comparison of the Bolus and Steady-State Methods for Measurement of Cerebral Perfusion and Oxygen Metabolism: Positron Emission Tomography Study Using 15O-Gas and Water." Journal of Cerebral Blood Flow & Metabolism 21, no. 7 (July 2001): 793–803. http://dx.doi.org/10.1097/00004647-200107000-00004.

Full text
Abstract:
To evaluate a new simplified bolus method for measurement of cerebral perfusion and metabolism, the parametric images with that method were compared with those obtained from the conventional steady-state method with 15O-gas. The new method also provided images of arterial blood volume (V0), which is a different parameter from cerebral blood volume (CBV) obtained using a C15O technique. Seven healthy volunteers and 10 patients with occlusive cerebrovascular diseases underwent positron emission tomography (PET) scans with both methods. Three-weighted integration was applied to calculate regional cerebral blood flow (rCBF) and regional cerebral metabolic rate of oxygen (rCMRO2) in the bolus method. Global and regional CBF and CMRO2 in volunteers were compared between the two methods and used as control data. Regional values in patients also were evaluated to observe differences between the bilateral hemispheres. Both rCBF and rCMRO2 were linearly well correlated between the two methods, although global difference in CMRO2 was significant. The difference in each parametric image except for V0 was significant between the bilateral hemispheres in patients. The bolus method can simplify oxygen metabolism studies and yield parametric images comparable with those with the steady-state method, and can allow for evaluation of V0 simultaneously. Increase in CBV without a change in V0 suggested the increase might mainly be caused by venous dilatation in the ischemic regions.
APA, Harvard, Vancouver, ISO, and other styles
49

Fuse, Yuma, Hidehiko Yoshimatsu, Ryo Karakawa, and Tomoyuki Yano. "Deep Fat Saving Elevation of the Superficial Circumflex Iliac Artery Perforator Flap." Medicina 58, no. 5 (May 18, 2022): 670. http://dx.doi.org/10.3390/medicina58050670.

Full text
Abstract:
Background and Objectives: Prolonged drain stay and lymphorrhea are often problems at the donor site of the superficial circumflex iliac artery perforator (SCIP) flap. This study aimed to introduce a novel technique of the SCIP flap elevation: Deep Fat Saving (DFS) technique. Materials and Methods: Thirty-two patients who underwent the SCIP flap transfer were divided based on the flap-elevated layer: above the deep fascia or the Camper fascia saving the deep fat. The duration of drain stay and the rates of flap survival and donor-site complications were compared between the groups. The inverse probability weighting (IPW) method was conducted to balance confounders. Results: By IPW, two balanced pseudo-populations were created: DFS = 33.9 and Conventional = 31.3. There were no significant differences in the rate of flap survival (DFS: 100% verses Conventional: 95.8%, p = 0.32) and donor site complications (DFS: 2.4% versus Conventional: 1.3%, p = 0.68, respectively). The duration of drain stay was shorter in the DFS group (weighted median: 6 versus 8 days; weighted difference: −1.6 days (95% confidence interval: −2.8 to −0.4), p = 0.01). Conclusions: An SCIP flap can be reliably harvested using the Deep Fat Saving technique.
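Inverse probability weighting reweights each patient by the inverse of the estimated probability of the treatment actually received, creating the balanced pseudo-populations mentioned above. A minimal sketch with simulated data and a hypothetical logistic propensity model (not the study's covariates):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))                 # hypothetical confounders
treated = rng.integers(0, 2, size=32)        # 1 = Deep Fat Saving technique
drain_days = rng.poisson(7, size=32).astype(float)

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))   # IPW weights

dfs = treated == 1
diff = (np.average(drain_days[dfs], weights=w[dfs])
        - np.average(drain_days[~dfs], weights=w[~dfs]))  # weighted difference
print(round(diff, 2))
```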
APA, Harvard, Vancouver, ISO, and other styles
50

Shen, Xin, Lisheng Wei, and Shaoyu Tang. "Dermoscopic Image Classification Method Using an Ensemble of Fine-Tuned Convolutional Neural Networks." Sensors 22, no. 11 (May 30, 2022): 4147. http://dx.doi.org/10.3390/s22114147.

Full text
Abstract:
To address the problems of large intra-class differences, small inter-class differences, low contrast, and small, unbalanced datasets in dermoscopic images, this paper proposes a dermoscopic image classification method based on an ensemble of fine-tuned convolutional neural networks. By reconstructing the fully connected layers of the three pretrained models Xception, ResNet50, and Vgg-16, performing transfer learning, and fine-tuning the three pretrained models on the official ISIC 2016 Challenge skin dataset, we integrated the outputs of the three base models using a weighted fusion ensemble strategy to obtain a final prediction of whether a dermoscopic image indicates malignancy. The experimental results show that the accuracy of the ensemble model is 86.91%, the precision is 85.67%, the recall is 84.03%, and the F1-score is 84.84%; all four evaluation metrics are better than those of the three base models and of some classical methods, proving the effectiveness and feasibility of the proposed method.
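A minimal sketch of the weighted fusion step, assuming the three base models already output class probabilities and that the fusion weights were chosen on validation data (all values hypothetical):

```python
import numpy as np

def weighted_fusion(probs_by_model, weights):
    """Weighted average of per-model class probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(probs_by_model), axes=1)

# Hypothetical softmax outputs of Xception, ResNet50, Vgg-16 on four images.
p_xcep = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p_res = np.array([[0.8, 0.2], [0.5, 0.5], [0.3, 0.7], [0.6, 0.4]])
p_vgg = np.array([[0.7, 0.3], [0.3, 0.7], [0.4, 0.6], [0.8, 0.2]])

fused = weighted_fusion([p_xcep, p_res, p_vgg], [0.4, 0.35, 0.25])
pred = fused.argmax(axis=1)        # 0 = benign, 1 = malignant
```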
APA, Harvard, Vancouver, ISO, and other styles