A collection of scholarly literature on the topic "Goodness metric"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Goodness metric".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Goodness metric"

1

Adebolu, Ibukun O., Hirokazu Masui, and Mengu Cho. "Quantitative Evaluation of SRS Similarity for Aerospace Testing Applications." Shock and Vibration 2021 (February 8, 2021): 1–10. http://dx.doi.org/10.1155/2021/6655878.

Full text of the source
Abstract:
The similarity between a shock response spectrum (SRS) and a target shock specification is essential in evaluating the success of a qualification test of a space component. Qualification testing facilities often utilize shock response databases for rapid testing. Traditionally, the comparison of two shocks (SRS) depends on visual evaluation, which is, at best, subjective. This paper compares five different quantitative methods for evaluating shock response similarity. This work aims to find the most suitable metric for retrieving an SRS from a pyroshock database. The five methods are the SRS difference, mean acceleration difference, average SRS ratio, dimensionless SRS coefficients, and mean square goodness-of-fit method. None of the similarity metrics account for the sign of the deviation between the target SRS and database SRS, making it challenging to satisfy the criteria for a good shock test. In this work, we propose a metric (the weighted distance) for retrieving the SRS most similar to a target SRS specification from a shock database. The weighted distance outperforms the mean square goodness-of-fit and other metrics in database SRS retrieval for rapid qualification testing.
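The abstract does not give the exact formulas behind these similarity metrics, so the following is only a minimal Python sketch of the general idea: score each database SRS against the target with a mean-square deviation computed on a dB scale (an assumed definition) and retrieve the best-scoring record. The data, function names, and the dB-based formula are hypothetical, not the paper's weighted distance.

```python
import numpy as np

def mean_square_deviation_db(target_srs, candidate_srs):
    """Mean-square deviation between two SRS curves, computed on a dB scale.

    Both inputs are acceleration SRS values sampled at the same natural
    frequencies. The dB conversion and averaging are assumptions; the paper's
    exact goodness-of-fit and weighted-distance definitions may differ.
    """
    diff_db = 20.0 * np.log10(candidate_srs / target_srs)
    return float(np.mean(diff_db ** 2))

def retrieve_closest_srs(target_srs, database):
    """Return the key of the database SRS with the smallest deviation."""
    scores = {name: mean_square_deviation_db(target_srs, srs)
              for name, srs in database.items()}
    return min(scores, key=scores.get)

# Toy usage with synthetic SRS curves (hypothetical data).
freqs = np.logspace(2, 4, 50)                 # 100 Hz .. 10 kHz
target = 10.0 * (freqs / 100.0) ** 0.8
db = {"shot_A": target * 1.1, "shot_B": target * 2.5}
print(retrieve_closest_srs(target, db))       # -> "shot_A"
```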
APA, Harvard, Vancouver, ISO, and other styles
2

Franco, Manuel, Juana María Vivo, Manuel Quesada-Martínez, Astrid Duque-Ramos, and Jesualdo Tomás Fernández-Breis. "Evaluation of ontology structural metrics based on public repository data." Briefings in Bioinformatics 21, no. 2 (February 4, 2019): 473–85. http://dx.doi.org/10.1093/bib/bbz009.

Full text of the source
Abstract:
The development and application of biological ontologies have increased significantly in recent years. These ontologies can be retrieved from different repositories, which do not provide much information about quality aspects of the ontologies. In the past years, some ontology structural metrics have been proposed, but their validity as measurement instruments has not been sufficiently studied to date. In this work, we evaluate a set of reproducible and objective ontology structural metrics. Given the lack of standard methods for this purpose, we have applied an evaluation method based on the stability and goodness of the classifications of ontologies produced by each metric on an ontology corpus. The evaluation has been done using ontology repositories as corpora. More concretely, we have used 119 ontologies from the OBO Foundry repository and 78 ontologies from AgroPortal. First, we study the correlations between the metrics. Second, we study whether the clusters for a given metric are stable and have a good structure. The results show that the existing correlations are not biasing the evaluation, there are no metrics generating unstable clusterings, and all the metrics evaluated provide at least a reasonable clustering structure. Furthermore, our work permits us to review and suggest the most reliable ontology structural metrics in terms of the stability and goodness of their classifications. Availability: http://sele.inf.um.es/ontology-metrics
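As a rough illustration of evaluating a metric by the clustering structure it induces, the sketch below clusters hypothetical values of one structural metric and reports a silhouette score as a simple "goodness of clustering" indicator. The data, the choice of k-means with three clusters, and the use of the silhouette coefficient are assumptions; the paper's stability analysis is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical values of one structural metric for 119 ontologies.
metric_values = rng.lognormal(mean=1.0, sigma=0.6, size=119).reshape(-1, 1)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(metric_values)
# Silhouette as one simple indicator of clustering goodness; the paper's
# stability analysis (e.g., resampling the corpus) is not reproduced here.
print(round(silhouette_score(metric_values, km.labels_), 3))
```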
APA, Harvard, Vancouver, ISO, and other styles
3

CHAN, VICTOR K. Y., W. ERIC WONG, and T. F. XIE. "A STATISTICAL METHODOLOGY TO SIMPLIFY SOFTWARE METRIC MODELS CONSTRUCTED USING INCOMPLETE DATA SAMPLES." International Journal of Software Engineering and Knowledge Engineering 17, no. 06 (December 2007): 689–707. http://dx.doi.org/10.1142/s0218194007003495.

Full text of the source
Abstract:
Software metric models predict the target software metric(s), e.g., the development work effort or defect rates, for any future software project based on the project's predictor software metric(s), e.g., the project team size. Obviously, the construction of such a software metric model makes use of a data sample of such metrics from analogous past projects. However, incomplete data often appear in such data samples. Moreover, the decision on whether a particular predictor metric should be included is most likely based on an intuitive or experience-based assumption that the predictor metric has a statistically significant impact on the target metric. However, this assumption is usually not verifiable "retrospectively" after the model is constructed, leading to redundant predictor metric(s) and/or unnecessary predictor metric complexity. To solve all these problems, we derived a methodology consisting of the k-nearest neighbors (k-NN) imputation method, statistical hypothesis testing, and a "goodness-of-fit" criterion. This methodology was tested on software effort metric models and software quality metric models, the latter of which usually suffer from far more serious incomplete data. This paper documents this methodology and the tests on these two types of software metric models.
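A minimal sketch of the first ingredient of this methodology, k-NN imputation of incomplete software metric data, using scikit-learn's KNNImputer. The project records and the choice of k = 2 are hypothetical; the hypothesis-testing and goodness-of-fit steps of the methodology are not shown.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Hypothetical project records: [team size, KLOC, effort]; NaN marks missing values.
X = np.array([
    [ 5.0,  10.0,   12.0],
    [ 8.0,  np.nan, 20.0],
    [12.0,  55.0,   np.nan],
    [ 4.0,   8.0,   10.0],
    [ 9.0,  30.0,   25.0],
])

# k-NN imputation fills each missing value from the k most similar projects.
imputer = KNNImputer(n_neighbors=2)
X_complete = imputer.fit_transform(X)
print(np.round(X_complete, 1))
```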
APA, Harvard, Vancouver, ISO, and other styles
4

Kang, Tae-Ho, Ashish Sharma, and Lucy Marshall. "Assessing Goodness of Fit for Verifying Probabilistic Forecasts." Forecasting 3, no. 4 (October 27, 2021): 763–73. http://dx.doi.org/10.3390/forecast3040047.

Full text of the source
Abstract:
The verification of probabilistic forecasts in hydro-climatology is integral to their development, use, and adoption. We propose here a means of utilizing goodness-of-fit measures for verifying the reliability of probabilistic forecasts. The difficulty in measuring the goodness of fit for a probabilistic prediction or forecast is that the predicted probability distributions for a target variable are not stationary in time, meaning only one observation exists to quantify goodness of fit for each prediction issued. Therefore, we suggest an additional dissociation that separates the target information from the other, time-variant part; the target to be verified in this study is the alignment of observations to the predicted probability distribution. For this dissociation, the probability integral transformation is used. To measure the goodness of fit for the predicted probability distributions, this study uses the root mean squared deviation metric. If the observations after the dissociation can be assumed to be independent, the mean square deviation metric becomes a chi-square test statistic, which enables statistically testing the hypothesis regarding whether the observations are from the same population as the predicted probability distributions. An illustration of our proposed rationale is provided using the multi-model ensemble prediction for El Niño–Southern Oscillation.
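A minimal sketch of the verification idea described above: pass each observation through its own predictive CDF (the probability integral transformation) and test the transformed values against uniformity with a chi-square statistic. The Gaussian predictive distributions, the bin count, and the data are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical forecasts: each issue time has its own predictive normal distribution.
mu = rng.uniform(-1, 1, size=200)        # predicted means
sigma = rng.uniform(0.5, 2.0, size=200)  # predicted standard deviations
obs = rng.normal(mu, sigma)              # observations drawn from those forecasts

# Probability integral transform: each observation through its own predictive CDF.
pit = stats.norm.cdf(obs, loc=mu, scale=sigma)

# Chi-square goodness of fit of the PIT values against uniformity,
# assuming the transformed observations are independent.
counts, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
chi2, p_value = stats.chisquare(counts)
print(round(chi2, 2), round(p_value, 3))
```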
APA, Harvard, Vancouver, ISO, and other styles
5

Nadir, Zeeshan, Kristin M. Rice, Michael S. Brown, and Charles A. Bouman. "Testing the Goodness of Model Fit in Tunable Diode Laser Absorption Tomography." Electronic Imaging 2021, no. 15 (January 18, 2021): 291–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.15.coimg-291.

Full text of the source
Abstract:
Tunable diode laser absorption tomography (TDLAT) has emerged as a popular nonintrusive technique for simultaneous sensing of gas concentration and temperature by making light absorbance measurements. A major challenge of TDLAT imaging is that the measurement data are very sparse. Therefore, precise models are required to describe the measurement process (forward model) and the behavior of the gas flow properties (prior model) to get accurate reconstructions. The sparsity of the measurement data makes TDLAT very sensitive to the accuracy of the models and makes it prone to overfitting. Both the forward and prior models can have systematic errors for several reasons. So far, a substantial amount of work has been done by researchers on developing reconstruction methods and formulating forward and prior models. Yet, there has not been significant research on constructing a metric for goodness of model fit that can indicate when there is an inaccuracy in the forward or the prior model. In this paper, we present a metric for goodness of model fit that can be used to indicate whether the models used in the reconstruction are inaccurate. Results show that our metric can reliably quantify the goodness of model fit for sparse data reconstruction problems such as TDLAT.
APA, Harvard, Vancouver, ISO, and other styles
6

Chechile, Richard A. "A vector-based goodness-of-fit metric for interval-scaled data." Communications in Statistics - Theory and Methods 28, no. 2 (January 1999): 277–96. http://dx.doi.org/10.1080/03610929908832298.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Babić, Sladana, Christophe Ley, Lorenzo Ricci, and David Veredas. "TailCoR: A new and simple metric for tail correlations that disentangles the linear and nonlinear dependencies that cause extreme co-movements." PLOS ONE 18, no. 1 (January 3, 2023): e0278599. http://dx.doi.org/10.1371/journal.pone.0278599.

Full text of the source
Abstract:
Economic and financial crises are characterised by unusually large events. These tail events co-move because of linear and/or nonlinear dependencies. We introduce TailCoR, a metric that combines (and disentangles) these linear and nonlinear dependencies. TailCoR between two variables is based on the tail interquantile range of a simple projection. It is dimension-free and, unlike competing metrics, it performs well in small samples and no optimisations are needed. Indeed, TailCoR requires a few lines of coding and it is very fast. A Monte Carlo analysis confirms the goodness of the metric, which is illustrated on a sample of 21 daily financial market indexes across the globe over 20 years. The estimated TailCoRs are in line with financial and economic events, such as the 2008 great financial crisis and the 2020 pandemic.
APA, Harvard, Vancouver, ISO, and other styles
8

Farhang-Mehr, Ali, and Shapour Azarm. "An Information-Theoretic Entropy Metric for Assessing Multi-Objective Optimization Solution Set Quality." Journal of Mechanical Design 125, no. 4 (December 1, 2003): 655–63. http://dx.doi.org/10.1115/1.1623186.

Full text of the source
Abstract:
An entropy-based metric is presented that can be used for assessing the quality of a solution set as obtained from multi-objective optimization techniques. This metric quantifies the “goodness” of a set of solutions in terms of distribution quality over the Pareto frontier. The metric can be used to compare the performance of different multi-objective optimization techniques. In particular, the metric can be used in analysis of multi-objective evolutionary algorithms, wherein the capabilities of such techniques to produce and maintain diversity among different solution points are desired to be compared on a quantitative basis. An engineering test example, the multi-objective design optimization of a speed-reducer, is provided to demonstrate an application of the proposed entropy metric.
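The abstract describes the entropy metric only at a high level, so here is a minimal Python sketch of the underlying idea: quantify how evenly a two-objective solution set is spread by binning the (normalised) objective space and computing the Shannon entropy of the bin counts. The histogram formulation and the toy data are assumptions; the paper's metric is constructed from a density built over the Pareto frontier rather than a plain histogram.

```python
import numpy as np

def distribution_entropy(points, bins=10):
    """Shannon entropy of a 2-D solution set binned over the unit objective space.

    Higher entropy indicates a more even spread of solutions. Objectives are
    assumed to be normalised to [0, 1]; this is a simplified stand-in for the
    paper's influence-function-based entropy metric.
    """
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins,
                                range=[[0.0, 1.0], [0.0, 1.0]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
even = np.column_stack([np.linspace(0, 1, 100), np.linspace(1, 0, 100)])
clumped = rng.normal(0.5, 0.02, size=(100, 2))
print(distribution_entropy(even), ">", distribution_entropy(clumped))
```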
APA, Harvard, Vancouver, ISO, and other styles
9

Graffelman, Jan. "Goodness-of-fit filtering in classical metric multidimensional scaling with large datasets." Journal of Applied Statistics 47, no. 11 (December 17, 2019): 2011–24. http://dx.doi.org/10.1080/02664763.2019.1702929.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Arnastauskaitė, Jurgita, Tomas Ruzgas, and Mindaugas Bražėnas. "An Exhaustive Power Comparison of Normality Tests." Mathematics 9, no. 7 (April 6, 2021): 788. http://dx.doi.org/10.3390/math9070788.

Full text of the source
Abstract:
A goodness-of-fit test is a frequently used tool in modern statistics. However, it is still unclear what the most reliable approach is to check assumptions about data set normality. A particular data set (especially with a small number of observations) only partly describes the process, which leaves many options for the interpretation of its true distribution. As a consequence, many goodness-of-fit statistical tests have been developed, the power of which depends on particular circumstances (i.e., sample size, outliers, etc.). With the aim of developing a more universal goodness-of-fit test, we propose an approach based on an N-metric with our chosen kernel function. To compare the power of 40 normality tests, the goodness-of-fit hypothesis was tested for 15 data distributions with 6 different sample sizes. Based on exhaustive comparative research results, we recommend the use of our test for samples of size n≥118.
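A small sketch of the kind of Monte Carlo power comparison described above, estimating the power of two classical normality tests against a heavy-tailed alternative. The alternative distribution, sample size, and replication count are arbitrary choices, and the paper's N-metric-based test itself is not implemented here.

```python
import numpy as np
from scipy import stats

def empirical_power(test, sampler, n=50, alpha=0.05, reps=2000, seed=3):
    """Fraction of Monte Carlo samples for which the normality test rejects H0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = sampler(rng, n)
        if test(x)[1] < alpha:   # index 1 = p-value for both tests used below
            rejections += 1
    return rejections / reps

# Alternative distribution: Student's t with 3 degrees of freedom (heavy tails).
t3 = lambda rng, n: rng.standard_t(df=3, size=n)

# Two classical tests for comparison; the paper's N-metric test is not shown.
print("Shapiro-Wilk      :", empirical_power(stats.shapiro, t3))
print("D'Agostino-Pearson:", empirical_power(stats.normaltest, t3))
```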
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Goodness metric"

1

Mateu, Figueras Glòria. "Models de distribució sobre el símplex." Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6706.

Full text of the source
Abstract:
Compositional data are vectors whose components represent proportions of some whole, and this is the reason why they are subject to the unit-sum constraint on their components. Therefore, a suitable sample space for compositional data is the unit simplex SD. The modelling of compositional data has a great problem: the lack of enough flexible models.
In the eighties Aitchison developed a methodology to work with compositional data that we have called the MOVE methodology. It is based on the transformation of compositional data from SD to the real space, and the transformed data are modelled by a multivariate normal distribution. The additive logratio transformation gives rise to the additive logistic normal model, which exhibits rich properties. Unfortunately, sometimes a multivariate normal model cannot properly fit the transformed data set, especially when it presents some skewness. Also, the additive logistic normal family is not closed under amalgamation of components.
In 1996 Azzalini and Dalla Valle introduced the skew normal distribution: a family of distributions on the real space, including the multivariate normal distribution, but with an extra parameter which allows the density to have some skewness. Emulating Aitchison, we have combined the logistic normal approach with the skew-normal distribution to define a new class of distributions on the simplex: the additive logistic skew-normal class. We apply it to model compositional data sets when the transformed data present some skewness. We have proved that this class of distributions has good algebraic properties. We have also studied the adequacy of the logistic skew-normal distribution to model amalgamations of additive logistic normal vectors. Simulation studies show that in some cases our distribution can provide a reasonable fit.
A useful tool in the study of the modelling of vectors is the test of goodness of fit. Unfortunately, we do not find in the literature tests of goodness of fit for the skew-normal distribution. Thus, we have developed these kinds of tests and we have completed the work with a power study. We have chosen the R.B. D'Agostino and M.A. Stephens methodology, which consists in computing the difference between the empirical distribution function (computed from the sample) and the theoretical distribution function (skew-normal).
Parallel studies have recently developed the metric space structure of SD. This has suggested to us a new methodology to work with compositional data sets that we have called the STAY approach because it is not based on transformations. The theory of algebra tells us that any D-dimensional real vector space with an inner product has an orthonormal basis, with respect to which the coefficients behave like usual elements in RD. Our suggestion is to apply to these coefficients all the standard methods and results available for real random vectors. Thus, on the coefficients with respect to an orthonormal basis we have defined the normal model in SD and the skew-normal model in SD, and we have compared them with the additive logistic normal and the additive logistic skew-normal models, respectively. From a probabilistic point of view, the laws on SD defined using the STAY methodology are identical to the laws defined using the MOVE methodology. But the STAY methodology has provided some important changes. For example, it has allowed us to express directly over the simplex some basic concepts like the expected value or the variance of a random composition. As we have not found in the literature previous work in this direction, we have started this study with an illustrative example. Over the coefficients with respect to a unitary basis we have defined the normal model on the positive real line and we have compared it with the lognormal model, defined with the logarithmic transformation.
APA, Harvard, Vancouver, ISO, and other styles
2

Bakšajev, Aleksej. "Statistical tests based on N-distances." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100409_082443-70166.

Full text of the source
Abstract:
The thesis is devoted to the application of a new class of probability metrics, N-distances, introduced by Klebanov (Klebanov, 2005; Zinger et al., 1989), to the problems of verification of the classical statistical hypotheses of goodness of fit, homogeneity, symmetry and independence. First of all, a construction of statistics based on N-metrics for testing the mentioned hypotheses is proposed. Then the problem of determining the critical region of the criteria is investigated. The main results of the thesis concern the asymptotic behavior of the test statistics under the null and alternative hypotheses. In the general case, the limiting null distribution of the test statistics proposed in the thesis is established in terms of the distribution of an infinite quadratic form of normal random variables, with coefficients dependent on the eigenvalues and functions of a certain integral operator. It is proved that under the alternative hypothesis the test statistics are asymptotically normal. In the case of the parametric goodness-of-fit hypothesis, particular attention is devoted to normality and exponentiality criteria. For the homogeneity hypothesis, a construction of a multivariate distribution-free two-sample test is proposed. For testing the hypothesis of uniformity on the hypersphere, the S1 and S2 cases are investigated in more detail. In conclusion, a comparison of N-distance tests with some classical criteria is provided. For the simple goodness-of-fit hypothesis in the univariate case, as a measure for... [to full text]
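For the two-sample homogeneity case, an N-distance construction can be read as an energy-distance-type statistic, which is easy to sketch. The Python snippet below is only an illustrative sketch under that reading: it uses the Euclidean kernel, synthetic data, and no critical-value computation, so it is not the thesis's actual test procedure.

```python
import numpy as np

def n_distance_statistic(x, y):
    """Two-sample N-distance (energy-type) statistic with the Euclidean kernel
    L(a, b) = ||a - b||. Larger values indicate a larger difference between the
    two samples' distributions. Inputs are (n, d) arrays; 1-D input is treated
    as d = 1. Critical values (asymptotic or permutation-based) are not shown.
    """
    def as_2d(a):
        a = np.asarray(a, dtype=float)
        return a[:, None] if a.ndim == 1 else a

    def mean_pairwise_dist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))

    x, y = as_2d(x), as_2d(y)
    return (2.0 * mean_pairwise_dist(x, y)
            - mean_pairwise_dist(x, x) - mean_pairwise_dist(y, y))

rng = np.random.default_rng(5)
same = n_distance_statistic(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
shifted = n_distance_statistic(rng.normal(size=(100, 2)),
                               rng.normal(1.0, 1.0, size=(100, 2)))
print(round(same, 3), "<", round(shifted, 3))
```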
APA, Harvard, Vancouver, ISO, and other styles
3

Bakšajev, Aleksej. "Statistinių hipotezių tikrinimas, naudojant N-metrikas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100409_082453-39497.

Full text of the source
Abstract:
The thesis is devoted to the application of a new class of probability metrics, N-distances, introduced by Klebanov (Klebanov, 2005; Zinger et al., 1989), to the problems of verification of the classical statistical hypotheses of goodness of fit, homogeneity, symmetry and independence. First of all, a construction of statistics based on N-metrics for testing the mentioned hypotheses is proposed. Then the problem of determining the critical region of the criteria is investigated. The main results of the thesis concern the asymptotic behavior of the test statistics under the null and alternative hypotheses. In the general case, the limiting null distribution of the test statistics proposed in the thesis is established in terms of the distribution of an infinite quadratic form of normal random variables, with coefficients dependent on the eigenvalues and functions of a certain integral operator. It is proved that under the alternative hypothesis the test statistics are asymptotically normal. In the case of the parametric goodness-of-fit hypothesis, particular attention is devoted to normality and exponentiality criteria. For the homogeneity hypothesis, a construction of a multivariate distribution-free two-sample test is proposed. For testing the hypothesis of uniformity on the hypersphere, the S1 and S2 cases are investigated in more detail. In conclusion, a comparison of N-distance tests with some classical criteria is provided. For the simple goodness-of-fit hypothesis in the univariate case, as a measure for... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Xufei. "Efficient Community Detection." Thesis, 2015. http://hdl.handle.net/1885/16471.

Full text of the source
Abstract:
Given a large network, local community detection aims at finding the community that contains a set of query nodes and also maximises (or minimises) a goodness metric. Furthermore, because obtaining complete network information is inconvenient or impossible in many situations, the detection becomes more challenging. This problem has recently drawn intense research interest. Various goodness metrics have been proposed, and most of them are based on statistical features of community structures, such as internal density or external sparseness. However, these metrics often produce unsatisfactory results, either by including irrelevant subgraphs of high density or by pulling in outliers that happen to match the metric. Furthermore, in a highly overlapping environment such as a social network, unconventional community structures usually cause these metrics to end up with quite trivial detection results. In our work, we take an alternative point of view on the formation of communities, namely as the assembly of nodes with different roles in the structure. With this new viewpoint, we present two metrics that are shown to perform well in traditional and complex environments, respectively. Moreover, realising that any single metric is limited in effectiveness as well as in scope of application, we propose a complete framework for the collaboration of metrics in the field, which also lays a foundation for future innovations. Experimental results collected from the Amazon, DBLP, YouTube and LiveJournal datasets confirm the effectiveness of the metrics.
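As a concrete illustration of what a statistics-based goodness metric looks like (the kind of metric the thesis argues is often insufficient), the sketch below scores a candidate community by its conductance using networkx. The toy graph and the choice of conductance are assumptions for illustration; the thesis's own role-based metrics are not reproduced here.

```python
import networkx as nx

# Toy graph: two dense groups joined by a single bridge edge (hypothetical data).
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3),      # group A + bridge
                  (3, 4), (3, 5), (4, 5)])             # group B

candidate = {3, 4, 5}

# Conductance: cut edges divided by the smaller side's volume. Lower values
# indicate a better-separated candidate community under this classical metric.
print(nx.conductance(G, candidate))
```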
APA, Harvard, Vancouver, ISO, and other styles
5

Van, Hoepen Wilhelmina Adriana. "Exploring algorithms to score control points in metrogaine events." Diss., 2018. http://hdl.handle.net/10500/24532.

Full text of the source
Abstract:
Metrogaining is an urban outdoor navigational sport that uses a street map to which scored control points have been added. The objective is to collect maximum score points within a set time by visiting a subset of the scored control points. There is currently no metrogaining scoring standard, only guidelines on how to allocate scores. Accordingly, scoring approaches were explored to create new score sets by using scoring algorithms based on a simple relationship between the score of, and the number of visits to, a control point. A spread model, which was developed to evaluate the score sets, generated a range of routes by solving a range of orienteering problems, which belong to the class of NP-hard combinatorial optimisation problems. From these generated routes, the visit frequency of each control point was determined. Using the visit frequencies, test statistics were subsequently adapted to test the goodness of scoring for each score set. The findings indicate that the score-visits relationship is not a simple one, as the number of visits to a control point depends not only on its score but also on the scores of the surrounding control points. As a result, the scoring algorithms explored were unable to cope with the complex scoring process uncovered.
Decision Sciences
M. Sc. (Operations Research)
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Goodness metric"

1

Paegelow, Martin, and David García-Álvarez. "Advanced Pattern Analysis to Validate Land Use Cover Maps." In Land Use Cover Datasets and Validation Tools, 229–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90998-7_12.

Full text of the source
Abstract:
In this chapter we explore pattern analysis for categorical LUC maps as a means of validating land use cover maps, land change and land change simulations. In addition to those described in Chap. "Spatial Metrics to Validate Land Use Cover Maps", we present three complementary methods and techniques: a Goodness of Fit metric to measure the agreement between two maps in terms of pattern (Map Curves), a focus on changes on pattern borders as a method for validating on-border processes, and a technique quantifying the magnitude of distance error. Map Curves (Sect. 1) offers a universal pattern-based index, called Goodness of Fit (GOF), which measures the spatial concordance between categorical rasters or vector layers. Complementary to this pattern validation metric, the following Sect. 2 focuses specifically on the changes that take place on pattern borders. This enables changes to be divided into those that take place on the borders of existing features and those that form new, disconnected features. Bringing this chapter on landscape patterns to a close, Sect. 3 presents a technique for quantifying allocation errors in simulation maps and, more precisely, the minimum distance between the allocation errors in simulation maps and the nearest patch belonging to the same category on the reference map. The comparison between a raster-based and a vector-based approach brings us back to the differences in measurement inherent in the representation of entities in raster and vector mode. These techniques are applied to two datasets. Section 1 uses the Asturias Central Area database, where CORINE maps are compared to SIOSE maps and simulation outputs. For their part, the techniques described in Sects. 2 and 3 are applied to the Ariège Valley database. CORINE maps for 2000 and 2018 are used as reference maps in comparisons with simulated land covers.
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, Shi, Yuhui Shi, and Quande Qin. "Population Diversity of Particle Swarm Optimizer Solving Single- and Multi-Objective Problems." In Emerging Research on Swarm Intelligence and Algorithm Optimization, 71–98. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6328-2.ch004.

Full text of the source
Abstract:
Premature convergence occurs in swarm intelligence algorithms searching for optima. A swarm intelligence algorithm has two kinds of abilities: exploration of new possibilities and exploitation of old certainties. The exploration ability means that an algorithm can explore more search places to increase the possibility that the algorithm can find good enough solutions. In contrast, the exploitation ability means that an algorithm focuses on the refinement of found promising areas. An algorithm should have a balance between exploration and exploitation, that is, the allocation of computational resources should be optimized to ensure that an algorithm can find good enough solutions effectively. The diversity measures the distribution of individuals' information. From the observation of the distribution and diversity change, the degree of exploration and exploitation can be obtained. Another issue in multiobjective optimization is the solution metric. Pareto domination is utilized to compare two solutions; however, solutions are almost all Pareto non-dominated for multiobjective problems with more than ten objectives. In this chapter, the authors analyze the population diversity of a particle swarm optimizer for solving both single-objective and multiobjective problems. The population diversity of solutions is used to measure the goodness of a set of solutions. This metric may guide the search in problems with numerous objectives. Adaptive optimization algorithms can be designed through controlling the balance between exploration and exploitation.
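A minimal sketch of a position-based population diversity measure of the kind discussed above: the mean distance of particles to the swarm centroid, which shrinks as the swarm shifts from exploration to exploitation. The exact diversity definitions analysed in the chapter may differ, and the swarms below are synthetic.

```python
import numpy as np

def population_diversity(positions):
    """Mean Euclidean distance of particles to the swarm centroid.

    A simple position-based diversity measure; the chapter discusses several
    variants, which may be defined differently.
    """
    centroid = positions.mean(axis=0)
    return float(np.linalg.norm(positions - centroid, axis=1).mean())

rng = np.random.default_rng(4)
early_swarm = rng.uniform(-5, 5, size=(40, 10))   # exploring widely
late_swarm = rng.normal(0.0, 0.1, size=(40, 10))  # converged / exploiting
print(population_diversity(early_swarm), ">", population_diversity(late_swarm))
```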
APA, Harvard, Vancouver, ISO, and other styles
3

"Appendix A: Goodness of fit metrics." In Radar Sea Clutter: Modelling and target detection, 329–31. Institution of Engineering and Technology, 2021. http://dx.doi.org/10.1049/sbra530e_appendixa.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Babbar, Jannat Kaur, Keshav Jindal, Parigya Jain, and Payal. "Deduction of Edge Signs in Bitcoin Alpha Social Network Modelled as a Signed Graph." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220742.

Full text of the source
Abstract:
Signed graphs have a wide array of applications in the social networking domain, as the industry and platforms rely heavily on forming trust links between users so as to smooth out the process of interacting virtually. Signed graphs facilitate this process by providing a mode of representing such networks as graphs whose edges carry a positive/negative sign that helps in defining the nature of the relationship between the nodes (which can represent the users of the platform or any respective representatives of the platform) of the graph. In this paper we deal with the Bitcoin Alpha platform, which is now coming into notice due to cryptocurrency's rising popularity. Trading online can be risky, and thus the entire platform is functional on the principle of trust/distrust between such anonymous users. We have attempted to formulate the social network of the Bitcoin Alpha platform as a signed graph and predict the links to establish trust/distrust between any two users in the entire graph using concepts of balanced and unbalanced graph theories, and fairness and goodness measures of vertices. Fairness of a user denotes how reliable the ratings given by that particular user to others are, whereas goodness measures how likeable or trustworthy a particular user of the website is. Using these metrics, we have attempted to solve this problem.
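The chapter builds on fairness and goodness scores of vertices. The sketch below follows the commonly cited iterative fairness-goodness formulation for weighted signed rating networks (ratings scaled to [-1, 1]); the exact variant, data, and stopping rule used in the chapter are assumptions here.

```python
from collections import defaultdict

def fairness_goodness(edges, iters=20):
    """Iterative fairness/goodness scores on a weighted signed graph.

    `edges` is a list of (u, v, w) with ratings w scaled to [-1, 1]. Goodness of
    a node is the fairness-weighted mean of ratings it receives; fairness of a
    node measures how close its given ratings are to the receivers' goodness.
    """
    out_edges, in_edges = defaultdict(list), defaultdict(list)
    nodes = set()
    for u, v, w in edges:
        out_edges[u].append((v, w))
        in_edges[v].append((u, w))
        nodes.update((u, v))

    fairness = {n: 1.0 for n in nodes}
    goodness = {n: 0.0 for n in nodes}
    for _ in range(iters):
        for v in nodes:
            if in_edges[v]:
                goodness[v] = sum(fairness[u] * w for u, w in in_edges[v]) / len(in_edges[v])
        for u in nodes:
            if out_edges[u]:
                err = sum(abs(w - goodness[v]) for v, w in out_edges[u])
                fairness[u] = 1.0 - err / (2.0 * len(out_edges[u]))
    return fairness, goodness

# Toy ratings: node "c" consistently rates against the consensus.
ratings = [("a", "b", 0.8), ("c", "b", -0.9), ("a", "d", 0.7),
           ("c", "d", -0.8), ("b", "d", 0.6)]
f, g = fairness_goodness(ratings)
print({k: round(v, 2) for k, v in f.items()})
print({k: round(v, 2) for k, v in g.items()})
```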
APA, Harvard, Vancouver, ISO, and other styles
5

Alessi, Lucia, Carsten Detken, and Silviu Oprică. "On the Evaluation of Early Warning Models for Financial Crises." In Advances in Finance, Accounting, and Economics, 80–96. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9484-2.ch004.

Full text of the source
Abstract:
Early Warning Models (EWMs) are back on the policy agenda. In particular, accurate models are increasingly needed for financial stability and macro-prudential policy purposes. However, owing to the alleged poor out-of-sample performance of the first generation of EWMs developed in the 1990s, the economic profession remains largely unconvinced about the ability of EWMs to play any important role in the prediction of future financial crises. The authors argue that a lot of progress has been made recently in the literature and that one key factor behind the prevailing skepticism relates to the basic evaluation metrics (e.g. the noise-to-signal ratio) traditionally used to evaluate the predictive performance of EWMs, and in turn to select benchmark models. This chapter provides an overview of methodologies (e.g. the (partial) Area Under the Receiver Operating Characteristic curve and the (standardized) Usefulness measure) better suited to measuring the goodness of EWMs and constructing optimal monitoring tools.
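A minimal sketch of the two kinds of evaluation contrasted above: the threshold-free AUROC versus the traditional noise-to-signal ratio at a single threshold. The signals, crisis outcomes, and the 0.5 threshold are made up for illustration; the chapter's partial AUROC and standardized Usefulness measure are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical early-warning signals and crisis outcomes for 12 country-quarters.
signal = np.array([0.9, 0.8, 0.7, 0.65, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.15, 0.1])
crisis = np.array([1,   1,   0,   1,    0,   0,   1,   0,    0,   0,   0,    0  ])

# Area under the ROC curve: threshold-free summary of the model's ranking ability.
print("AUROC:", round(roc_auc_score(crisis, signal), 3))

# Traditional noise-to-signal ratio at one fixed alarm threshold, for comparison.
alarm = signal >= 0.5
fp_rate = (alarm & (crisis == 0)).sum() / (crisis == 0).sum()
tp_rate = (alarm & (crisis == 1)).sum() / (crisis == 1).sum()
print("Noise-to-signal at 0.5:", round(fp_rate / tp_rate, 3))
```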
APA, Harvard, Vancouver, ISO, and other styles
6

D'Rosario, Michael, and John Zeleznikow. "Compliance with International Soft Law." In Natural Language Processing, 49–64. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0951-7.ch004.

Full text of the source
Abstract:
The present article considers the importance of legal system origin in compliance with 'international soft law,' or normative provisions contained in non-binding texts. The study considers key economic and governance metrics on national acceptance and implementation of the first Basle accord. Employing a data set of 70 countries, the present study considers the role of market forces and bilateral and multi-lateral pressures on implementation of soft law. Little is known about the role of legal system structure-related variables as factors moderating the implementation of multi-lateral agreements and international soft law, such as the 1988 accord. The present study extends upon research within the extant literature by employing a novel estimation method, a neural network modelling technique based on a multi-layer perceptron artificial neural network (MPANN). Consistent with earlier studies, the article identifies a significant and positive effect associated with democratic systems and the implementation of the Basle accord. However, extending upon traditional estimation techniques, the study identifies the significance of savings rates and government effectiveness in determining implementation. Notably, the method is able to achieve a superior goodness of fit and predictive accuracy in determining implementation.
APA, Harvard, Vancouver, ISO, and other styles
7

Capps, Oral. "Forecasting Weekly Shipments of Hass Avocados from Mexico to the United States Using Econometric and Vector Autoregression Models." In Business, Management and Economics. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.107316.

Full text of the source
Abstract:
Domestic production cannot meet the U.S. demand for avocados, satisfying only 10% of the national demand. Due to year-round production and longer shelf-life, the Hass variety of avocados accounts for about 85% of avocados consumed in the United States and roughly 95% of total avocado imports, primarily from Mexico. Using weekly data over the period July 3, 2011, to October 24, 2021, econometric and vector autoregression models are estimated for the seven main shipment sizes of Hass avocados from Mexico to the United States. Both types of models discern the impacts of inflation-adjusted and exchange-rate-adjusted prices per box as well as U.S. disposable income, holidays and events, and seasonality on the level of Hass avocado shipments by size. In general, these impacts are robust across the respective models by shipment size. These types of models also mimic the variability in the level of shipments by size quite well based on goodness-of-fit metrics. Based on absolute percent error, these models provide reasonably accurate forecasts of the level of Hass avocado shipments from Mexico by size over a time horizon of 13 weeks. However, neither type of model provides better forecast performance universally across all avocado shipment sizes.
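A small sketch of the kind of out-of-sample accuracy check mentioned above (absolute percent error over a 13-week horizon). The shipment figures and forecasts below are invented for illustration; they are not from the chapter.

```python
import numpy as np

def mean_absolute_percent_error(actual, forecast):
    """MAPE over a forecast horizon, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

# Hypothetical 13-week holdout of weekly shipments (boxes) vs. model forecasts.
actual   = [410, 395, 430, 455, 470, 460, 480, 500, 515, 505, 490, 475, 465]
forecast = [400, 405, 420, 450, 480, 455, 470, 510, 520, 500, 485, 470, 455]
print(round(mean_absolute_percent_error(actual, forecast), 2), "% MAPE")
```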
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Goodness metric"

1

Ruiz, Maritza, and Van P. Carey. "An Exergy-Based Metric for Evaluating Solar Thermal Absorber Technologies for Gas Heating." In ASME/JSME 2011 8th Thermal Engineering Joint Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/ajtec2011-44354.

Full text of the source
Abstract:
The energy conversion effectiveness of the central receiver absorber in concentrating solar thermal power systems is dictated primarily by heat losses, material temperature limits, and pumping power losses. To deliver concentrated solar energy to a gas for process heat applications or gas cycle power generation, there are a wide variety of compact heat exchanger finned surfaces that could be used to enhance the convective transfer of absorbed solar energy to the gas stream flowing through the absorber. In such circumstances, a key design objective for the absorber is to maximize the heat transfer thermodynamic performance while minimizing the pumping power necessary to drive the gas flow through the fin matrix. This paper explores the use of different performance metrics to quantify the combined heat transfer, thermodynamic and pressure loss effectiveness of enhanced fins surfaces used in solar thermal absorbers for gas heating. Previously defined heat exchanger performance metrics, such as the “goodness factor”, are considered, and we develop and explore the use of a new metric, the “loss factor”, for determining the preferred enhanced fin matrix surfaces for concentrated solar absorbers. The loss factor, defined as the normalized exergy loss in the receiver, can be used for nondimensional analysis of the desirable qualities in an optimized solar receiver design. In comparison to previous goodness factor methods, the loss factor metric has the advantage that it quantifies the trade-off between trying to maximize the solar exergy transferred to the gas (high heat transfer rate and delivery at high temperature) and minimizing the pumping exergy loss. In this study, the loss factor is used to compare current solar receiver designs, and designs that use a variety of available plate-finned compact heat transfer surfaces with known Colburn factor (j) and friction factor (f) characteristics. These examples demonstrate how the loss factor metric can be used to design and optimize novel solar central receiver systems, and they indicate fin matrix surfaces that are particularly attractive for this type of application.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Xingxing, Zhenfeng Zhu, Yao Zhao, and Deqiang Kong. "Self-Supervised Deep Low-Rank Assignment Model for Prototype Selection." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/436.

Full text of the source
Abstract:
Prototype selection is a promising technique for removing redundancy and irrelevance from large-scale data. Here, we consider it as a task assignment problem, which refers to assigning each element of a source set to one representative, i.e., prototype. However, due to outliers and the uncertain distribution of the source, the selected prototypes are generally less representative and interesting. To alleviate this issue, we develop in this paper a Self-supervised Deep Low-rank Assignment model (SDLA). By dynamically integrating a low-rank assignment model with deep representation learning, our model effectively ensures the goodness-of-exemplar and goodness-of-discrimination of selected prototypes. Specifically, on the basis of a denoising autoencoder, dissimilarity metrics on the source are continuously self-refined in the embedding space with weak supervision from the selected prototypes, thus preserving categorical similarity. Conversely, working in this metric space, similar samples tend to select the same prototypes through the designed low-rank assignment model. Experimental results on applications like text clustering and image classification (using prototypes) demonstrate that our method is considerably superior to the state-of-the-art methods in prototype selection.
APA, Harvard, Vancouver, ISO, and other styles
3

Farhang-Mehr, Ali, and Shapour Azarm. "On the Entropy of Multi-Objective Design Optimization Solution Sets." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/dac-34122.

Full text of the source
Abstract:
In this paper, an entropy-based metric is presented for quality assessment of non-dominated solution sets obtained from a multiobjective optimization technique. This metric quantifies the ‘goodness’ of a solution set in terms of its distribution quality over the Pareto-optimal frontier. Therefore, it can be useful in comparison studies of different multi-objective optimization techniques, such as Multi-Objective Genetic Algorithms (MOGAs), wherein the capabilities of such techniques to produce and maintain diversity among different solution points are desired to be compared on a quantitative basis. An engineering test example, the multiobjective design optimization of a speed-reducer, is presented in order to demonstrate an application of the proposed entropy metric.
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Wei, and Chenhao Yuan. "A Probabilistic-Based Design Model for Achieving Flexibility in Design." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/dtm-3882.

Full text of the source
Abstract:
In the early stages of product development, the transformation between design requirements and design solutions often involves uncertainties when specifying the desired target value for the performance expressed in design requirements. Additionally, to provide flexibility for later development, the design solution obtained is desired to be a range rather than a single solution. Our primary focus in this paper is on developing a probabilistic-based design model as a basis for providing the flexibility that allows designs to be readily adapted to changing conditions. This is obtained by developing a range of design solutions which meet a ranged set of design requirements. Meanwhile, designers are allowed to specify the varying degree of desirability of a ranged set of design requirements based on their preferences. The Design Preference Index (DPI) is introduced as a design metric to measure the goodness of flexible designs. Providing the foundation to our work are the probabilistic representations of design performance, the application of robust design concept, and the utilization of the compromise Decision Support Problem (DSP) as a multi-objective decision model. A two-bar structural design is used as an example to demonstrate our approach. Our focus in this paper is on introducing the probabilistic-based design model and not on the results of the example problem, per se.
APA, Harvard, Vancouver, ISO, and other styles
5

Papa, Lia Maria, and Saverio D’Auria. "Rilievo e modellazione digitale: un percorso critico per la valorizzazione del Castello di Ischia." In FORTMED2020 - Defensive Architecture of the Mediterranean. Valencia: Universitat Politàcnica de València, 2020. http://dx.doi.org/10.4995/fortmed2020.2020.11343.

Full text of the source
Abstract:
Survey and digital modeling: a critical approach for the enhancement of the Castle of Ischia
The development and dissemination of ICT have also influenced the cultural heritage sector. In the last decade, for this reason and not only, the way of doing scientific research, documentation and enhancement has quickly changed, to such a degree as to question the real benefits brought by the digitization and virtualization of historical buildings in the fields just mentioned. The paper is part of a research field linked to the analysis of the metric and formal reliability of 3D integrated surveying and digital modeling of architectural heritage, and to the critical evaluation of photogrammetry and laser scanning as tools for deep knowledge and dissemination of the object of study. In particular, the focus is on the majestic Aragonese Castle of Ischia –whose origins date back to the fifth century BC, located in the eastern part of the island, on an area of over 5 hectares– and on the ruins of the Cathedral of the Assumption, of the fourteenth century, preserved inside. Starting from particular analyses and coming to general methodological considerations, the research had different objectives: comparing qualitatively and quantitatively the goodness of image-based and range-based surveys of single environments, determining the spatial location of some volumes compared to others, and analyzing the "Castle-Cathedral system" through the development of virtual reality applications built from photo-modeling and laser scanning at the urban scale.
APA, Harvard, Vancouver, ISO, and other styles
6

Lazar, Mihael, and Aleš Hladnik. "Improved reconstruction of the reflectance spectra from RGB readings using two instead of one digital camera." In 11th International Symposium on Graphic Engineering and Design. University of Novi Sad, Faculty of technical sciences, Department of graphic engineering and design, 2022. http://dx.doi.org/10.24867/grid-2022-p96.

Full text of the source
Abstract:
The colour of an observed object can be described in many different manners, and the description by its reflectance provides an unambiguous colour representation. The reflectance description can be acquired by expensive multispectral cameras or, e.g., with time-sequential multispectral illumination. In our experiment, we propose that under the condition of constant and uniform illumination, the reflectance can be deduced from the object's RGB camera readouts, captured alongside a set of colour patches with known spectral characteristics. Translation from a colour description in RGB space into reflectance spectra, independent of illuminant and camera sensor characteristics, was performed with the help of an artificial neural network (ANN). In our study, we proposed the hypothesis that the ANN's reflectance reconstruction performance can be enhanced by employing richer learning datasets using the RGB input sets of two cameras instead of just one. The additional second-camera information would be adequate only if the equivalent channels of the cameras used are linearly independent. A quantitative measure of nonlinearity (QMoN), a metric primarily developed for use in chemistry, was employed to estimate the degree of nonlinearity. Additional attention was paid to the ANN training, structure and learning set sizes. Two ANN training algorithms were utilised: a faster, GPU-executed standard backpropagation and an order-of-magnitude slower, CPU-based Levenberg-Marquardt training algorithm with significantly better convergence. The number of neurons in the hidden ANN layer varied from the size of the input layer to a number greater than the number in the output layer. The complete set of colour samples was divided into five learning sets of different sizes, with the smaller sets being subsets of the larger ones. To assess the performance of the resulting ANNs, the mean squared error, the goodness of fit and colour differences calculated from original and reconstructed reflectances assuming several standard illuminants were compared. A noticeable improvement in reflectance reconstruction performance was found by using two cameras, even though the cameras' equivalent channels exhibited only small degrees of nonlinearity.
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Jin, and Shapour Azarm. "Metrics for Quality Assessment of a Multiobjective Design Optimization Solution Set." In ASME 2000 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/detc2000/dac-14233.

Full text of the source
Abstract:
In this paper, several new set quality metrics are introduced that can be used to evaluate the 'goodness' of an observed Pareto solution set. These metrics, which are formulated in closed form and geometrically illustrated, include coverage difference, Pareto spread, accuracy of an observed Pareto frontier, number of distinct choices and cluster. The metrics should enable a designer either to monitor the quality of an observed Pareto solution set as obtained by a multiobjective optimization method, or to compare the quality of observed Pareto solution sets as reported by different multiobjective optimization methods. A vibrating platform example is used to demonstrate the calculation of these metrics for an observed Pareto solution set.
APA, Harvard, Vancouver, ISO, and other styles
8

Hirode, Kartheek, and Jami J. Shah. "Metrics for Evaluating Machining Process Plans." In ASME 1999 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/detc99/dfm-8931.

Full text of the source
Abstract:
Selection of the best plan from alternative process plans, and/or plan improvement, requires that we have a set of evaluation metrics. In this paper, we explore the issues related to measures of goodness of process plans at different levels. Several evaluation metrics are proposed, such as feasibility, accuracy, consistency, operation time and setups. These measures can not only help in plan selection but also in pinpointing the flaws in the process plans. Using the proposed metrics, a structured feedback mechanism and a refinement framework could be developed to arrive at the best process plan. Evaluation is done at the operation and sequence levels, and could be extended to other levels. The measures take into account shapes, sizes, feature relations and all ANSI Y14.5M tolerances in the evaluation of process plans. The proposed measures are also relevant to DFM, since the inability to produce a satisfactory process plan implies difficult-to-satisfy design specifications.
APA, Harvard, Vancouver, ISO, and other styles
9

Shah, Jami J., and George Runger. "Misuse of Information-Theoretic Dispersion Measures as Design Complexity Metrics." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48295.

Full text of the source
Abstract:
Complexity is defined as a quality of an object with many interwoven elements, aspects, details, or attributes that makes the whole object difficult to understand in a collective sense. Many measures of design complexity have been proposed in the literature. Of these, the most popular are information-theoretic metrics, such as Information Content based on Suh's Axiomatic Theory and Entropy based on Shannon's Information Theory. In this paper we will show that not only do these metrics not provide common-sense measures of complexity, but they also do not possess proper mathematical properties. At best, they are geared towards measuring a design's goodness of fit rather than its complexity. It is hoped that this paper will generate some debate on strongly held beliefs in the design theory community.
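For reference, Suh's Information Content reduces to a simple formula, sketched below for hypothetical probabilities of satisfying a functional requirement. It only illustrates the metric under discussion, not the authors' critique of it.

```python
import math

def information_content(prob_success):
    """Suh's information content I = log2(1 / p), where p is the probability
    that the system range falls inside the design range (the success probability)."""
    return math.log2(1.0 / prob_success)

# Two hypothetical functional requirements with different success probabilities.
for p in (0.95, 0.50):
    print(f"p = {p:.2f} -> I = {information_content(p):.3f} bits")
```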
APA, Harvard, Vancouver, ISO, and other styles
10

Sharma, N., and R. R. Rhinehart. "Autonomous creation of process cause and effect relationships: metrics for evaluation of the goodness of linguistic rules." In Proceedings of the 2004 American Control Conference. IEEE, 2004. http://dx.doi.org/10.23919/acc.2004.1384021.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
