Academic literature on the topic 'Model quality indices'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Model quality indices.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Model quality indices"

1

Castillo Canalejo, Ana Ma, and Juan Antonio Jimber del Río. "Quality, satisfaction and loyalty indices." Journal of Place Management and Development 11, no. 4 (October 8, 2018): 428–46. http://dx.doi.org/10.1108/jpmd-05-2017-0040.

Abstract:
Purpose The main purpose of this research was to develop a universal model to evaluate the perceived value of tourism services and satisfaction with, and loyalty to, destinations from the consumers’ perspective and demonstrated the model’s applicability in this context. Design/methodology/approach Using the structural equation model, cause and effect relationships were identified between the proposed model’s constructs, and indices of quality, satisfaction and loyalty among tourists were estimated. This system was applied to a large set of data collected with a structured questionnaire distributed to tourists visiting the city of Seville through a non-probabilistic sampling by intentional quotas method. In total, 922 valid surveys were obtained. Findings The indices show that tourists who visit Seville report a high level of loyalty to, and satisfaction with, this place because of the perceived quality of a variety of services. It is observed that the perceived quality index is much higher (17.95 per cent) than the expected quality index, so the quality of the service received by the tourist during his/her visit to Seville is described as excellent. Research limitations/implications Regarding this study’s limitations, other variables could have been included that influence tourist satisfaction, such as the climate, the effect of advertising medium, the prices and the emotional components. In addition, surveying tourists’ expectations before their visit is virtually impossible, as is surveying the same tourists again about their perceived value and satisfaction after their visit. Future lines of research could focus on the intersection of information between tourism offer and demand, providing information about an appropriate balance in specific markets. The proposed model can also be applied to other tourism places that are similar to Seville’s tourism offer, allowing useful comparisons and identification of critical points and ways to improve customer satisfaction continuously. Practical implications By establishing indices of expected and perceived quality and satisfaction and loyalty among tourists, tourism authorities and different economic agents involved in this sector can receive objective information about the results and quality of tourism services. Tourism managers, thus, can set objectives for improvements and competitiveness, as well as building and maintaining customer loyalty. At the same time, these indices allow comparisons with other organisations and places. By facilitating greater transparency in the measurement of quality and satisfaction, service providers connected to tourism can create a platform on which to articulate clearly their contributions to interested parties and local communities. Social implications These results constitute strategies and findings that any tourism place has to consider in the planning and development of its products. Therefore the model can help to encourage a long-term market perspective among tourism sector regulators, investors and agencies. With the information obtained with this model, areas needing improvement can be identified and the appropriate procedures can be put into practice to improve the tourism offer, adjusting it to meet travellers’ needs according to their motivations to travel to the destination. Residents also can benefit from these measures, as their quality of life will improve through upgrades of the city’s tourism facilities. 
Originality/value The unique contribution of the present study lies in how the indices or indicators of quality of, satisfaction with and loyalty to destinations among tourists are easily measured by applying structural equation modelling. A new approach to measure satisfaction, loyalty and quality is used based on a scale from 0 to 100, and the index results are very useful for comparing different tourist places.
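For orientation, such 0-to-100 indices are commonly derived from SEM results by rescaling the weighted item means of each latent construct. The formula below is a generic (ECSI-style) illustration of that transformation, not necessarily the exact computation used in the article; the symbols (item means, weights, scale endpoints) are assumptions introduced here.

\[
\text{Index} \;=\; 100 \times \frac{\sum_i w_i\,(\bar{x}_i - 1)}{(p - 1)\sum_i w_i},
\]

where \(\bar{x}_i\) is the mean response to item i on a 1-to-p scale and \(w_i\) is the item's estimated weight in the construct; an index of 0 corresponds to all respondents choosing the scale minimum and 100 to all choosing the maximum.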
2

Dolgonosov, B. M., K. A. Korchagin, and E. M. Messineva. "Model of fluctuations in bacteriological indices of water quality." Water Resources 33, no. 6 (December 2006): 637–50. http://dx.doi.org/10.1134/s0097807806060054.

3

Ayriev, R. S., and M. A. Kudryashov. "QUALITY INDICES OF PUBLIC TRANSPORTATION SERVICES." World of Transport and Transportation 16, no. 4 (August 28, 2018): 140–49. http://dx.doi.org/10.30932/1992-3252-2018-16-4-11.

Abstract:
The authors’ study was devoted to a methodology for estimating the quality and ranking of transport services provided to townspeople in urban passenger and baggage transit, particularly using road vehicles and land electric transport vehicles. The experience of the entities of the Russian Federation, the United States and the European Union in relation to standardization of the quality of transport services is cited. A mathematical model of the quality of transport services and methods of its integral assessment are considered. Based on the general analysis of the methodology for assessing transport services, the authors draw conclusions on its adaptability to the city of Moscow. A systematic approach to obtaining the initial data necessary for calculation of quality indicators is proposed. Keywords: quality of transport service, urban passenger transport, social standard of transport service, quality indicators.
4

Aldrees, Ali, Mohsin Ali Khan, Muhammad Atiq Ur Rehman Tariq, Abdeliazim Mustafa Mohamed, Ane Wai Man Ng, and Abubakr Taha Bakheit Taha. "Multi-Expression Programming (MEP): Water Quality Assessment Using Water Quality Indices." Water 14, no. 6 (March 17, 2022): 947. http://dx.doi.org/10.3390/w14060947.

Abstract:
Water contamination is a worldwide problem that threatens public health, environmental protection, and agricultural productivity. The distinctive attributes of machine learning (ML)-based modelling can provide in-depth understanding of increasing water quality challenges. This study presents the development of a multi-expression programming (MEP) based predictive model for water quality parameters, i.e., electrical conductivity (EC) and total dissolved solids (TDS) in the upper Indus River at two different outlet locations, using 360 readings collected on a monthly basis. The optimized MEP models were assessed using different statistical measures, i.e., coefficient of determination (R2), root-mean-square error (RMSE), mean-absolute error (MAE), root-mean-square-logarithmic error (RMSLE) and mean-absolute-percent error (MAPE). The results show that the R2 in the testing phase (subjected to unseen data) for the EC-MEP and TDS-MEP models is above 0.90, i.e., 0.9674 and 0.9725, respectively, reflecting high accuracy and generalized performance. The error measures are also quite low. According to the MAPE statistics, both MEP models show an “excellent” performance in all three stages. In comparison with traditional non-linear regression models (NLRMs), the developed machine learning models have good generalization capabilities. The sensitivity analysis of the developed MEP models with regard to the significance of each input on the forecasted water quality parameters suggests that Cl and HCO3 have substantial impacts on the predictions of the MEP models (EC and TDS), with a sensitivity index above 0.90, while the influence of Na is less prominent. The results of this research suggest that the development of intelligent models for EC and TDS is cost-effective and viable for the evaluation and monitoring of river water quality.
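As a quick reference for the error measures named in this abstract, the snippet below gives generic Python implementations of R2, RMSE, MAE, RMSLE and MAPE. It is an illustration only, not the authors' MEP code, and the example values are invented.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Generic goodness-of-fit measures: R2, RMSE, MAE, RMSLE, MAPE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAE": np.mean(np.abs(resid)),
        "RMSLE": np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2)),
        "MAPE": 100.0 * np.mean(np.abs(resid / y_true)),
    }

# Hypothetical EC readings (uS/cm) versus model predictions:
print(regression_metrics([250.0, 310.0, 290.0, 330.0], [245.0, 318.0, 301.0, 325.0]))
```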
5

Rahmouni, Ali, Moufida Touhami, and Tahar Benaissa. "Fukui Indices as QSAR Model Descriptors." International Journal of Chemoinformatics and Chemical Engineering 6, no. 2 (July 2017): 31–44. http://dx.doi.org/10.4018/ijcce.2017070103.

Abstract:
This article describes quantitative structure–activity relationship (QSAR) models of 1-[2-hydroxyethoxy-methyl]-6-(phenylthio) thymine inhibition of the human immunodeficiency virus (HIV-1) reverse transcriptase (RT), developed using the multiple linear regression method. These studies were performed using 60 compounds with the help of quantum descriptors such as ionization potential, electron affinity, softness, global electrophilicity index and Fukui functions. These indices were obtained at the DFT/B3LYP level of quantum calculation. The statistical quality of the QSAR models was assessed using the statistical parameter R2. Good agreement between experimental and calculated log1/EC50 values of anti-HIV activity was obtained. Four QSAR models are presented, and the best one uses nine molecular quantum descriptors.
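The sketch below illustrates the general shape of such a multiple-linear-regression QSAR model in Python (scikit-learn). The descriptor matrix and activity values are invented placeholders, and with only a handful of compounds the fit is trivially perfect, whereas the article works with 60 compounds.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Rows: compounds; columns: quantum descriptors (e.g. ionization potential,
# electron affinity, softness, electrophilicity index, a Fukui-function value).
X = np.array([
    [8.9, 1.1, 0.31, 1.6, 0.12],
    [9.2, 0.9, 0.28, 1.4, 0.10],
    [8.5, 1.3, 0.35, 1.9, 0.15],
    [8.7, 1.2, 0.33, 1.7, 0.13],
])
y = np.array([6.8, 6.1, 7.4, 7.0])   # hypothetical log(1/EC50) activities

qsar = LinearRegression().fit(X, y)
print("R2 =", r2_score(y, qsar.predict(X)))
print("coefficients:", qsar.coef_)
```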
6

Chen, You Wen, and Tian You Chai. "Optimal Control for Quality Indices of Heat Furnace." Advanced Materials Research 201-203 (February 2011): 1748–51. http://dx.doi.org/10.4028/www.scientific.net/amr.201-203.1748.

Abstract:
The heat furnace is one of the most important parts of the iron and steel industry, which is one of the basic industries. The control of furnace quality indices has a direct impact on iron and steel quality and energy consumption in an iron and steel manufacturing process. Due to dramatic changes of exhaust gas in combustion, this multi-variable process becomes time-varying, and also has inherent nonlinearities, couplings among the variables, large inertias and time delays. Therefore, manual operations are still widely used in quality indices control. In this paper, an optimization control method is proposed for the control of furnace quality indices. The optimization consists of a materials temperature calculation model, an optimization objective, an ideal materials heating model and a furnace heating model. The optimization algorithm consists of the maximum principle, the simulated annealing algorithm, the iterative algorithm and sensitivity analysis. Finally, the quality indices were controlled on-line by lower-layer loop control. The proposed optimization control method has been successfully applied to several steel plants. The industrial applications show the effectiveness of the proposed method.
7

Sen, Sedat, and Laine Bradshaw. "Comparison of Relative Fit Indices for Diagnostic Model Selection." Applied Psychological Measurement 41, no. 6 (March 8, 2017): 422–38. http://dx.doi.org/10.1177/0146621617695521.

Abstract:
The purpose of this study was to thoroughly examine the performance of three information-based fit indices—Akaike’s Information Criterion (AIC), Bayesian Information Criterion (BIC), and sample-size-adjusted BIC (SABIC)—using the log-linear cognitive diagnosis model and a set of well-known item response theory (IRT) models. Two simulation studies were conducted to examine the extent to which relative fit indices can identify the generating model under a variety of data conditions and model misspecifications. Generally, indices performed better when item quality was stronger. When the IRT was the generating model, all three indices correctly selected the IRT model for all replications. When the true model was a diagnostic classification model, for all three fit indices, the multidimensional IRT model was incorrectly selected as frequently as 70% of the replications. The results of this study identify situations for researchers where commonly used—and typically well-performing—fit indices may not be appropriate to compare models for selection.
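The three information criteria compared in this article have the standard definitions below (given here for orientation, not reproduced from the article): L is the maximized likelihood, k the number of free model parameters, and n the sample size, and the candidate model with the smallest value is preferred.

\[
\mathrm{AIC} = -2\ln L + 2k, \qquad
\mathrm{BIC} = -2\ln L + k\ln n, \qquad
\mathrm{SABIC} = -2\ln L + k\ln\!\left(\frac{n+2}{24}\right).
\]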
8

Aćimović, Milica, Lato Pezo, Tijana Zeremski, Biljana Lončar, Ana Marjanović Jeromela, Jovana Stanković Jeremic, Mirjana Cvetković, Vladimir Sikora, and Maja Ignjatov. "Weather Conditions Influence on Hyssop Essential Oil Quality." Processes 9, no. 7 (July 2, 2021): 1152. http://dx.doi.org/10.3390/pr9071152.

Abstract:
This paper is a study of the chemical composition of Hyssopus officinalis ssp. officinalis grown during three years (2017–2019) at the Institute of Field and Vegetable Crops Novi Sad (Vojvodina Province, Serbia). Furthermore, comparisons with ISO standards during the years were also investigated, as well as a prediction model of retention indices of compounds from the essential oils. An essential oil obtained by hydrodistillation and analysed by GC-FID and GC-MS was isopinocamphone chemotype. The gathered information about the volatile compounds from H. officinalis was used to classify the samples using the unrooted cluster tree. The correlation analysis was applied to investigate the similarity of different samples, according to GC-MS data. The quantitative structure–retention relationship (QSRR) was also employed to predict the retention indices of the identified compounds. A total of 74 experimentally obtained retention indices were used to build a prediction model. The coefficient of determination for the training cycle was 0.910, indicating that this model could be used for the prediction of retention indices for H. officinalis essential oil compounds.
9

Arfan, Yopy, and Dwita Sutjiningsih. "Development of correlation-regression model between land use change and water quality indices in Ciliwung watershed." MATEC Web of Conferences 192 (2018): 02047. http://dx.doi.org/10.1051/matecconf/201819202047.

Abstract:
Urbanization and industrialization lead to the change of land cover from pervious to impervious. This can cause environmental problems such as water quality degradation, which affects human health and water ecosystems. The study aimed to develop a correlation-regression model between impervious cover in the Ciliwung watershed and water quality indices in the Ciliwung river. The correlation-regression model can be used to predict changes in the status of Ciliwung river water quality due to impervious cover changes. The water quality indices were assessed using the CCME-WQI, NSF-WQI, and STORET methods within the period 2005-2016. Monitoring locations from the most upstream to downstream are Atta’awun, Katulampa, Kedung Halang, Pondok Rajeg, Panus Bridge, Kelapa Dua, Condet, Kalibata, MT Haryono and Manggarai. Impervious cover data for each water quality monitoring location were processed using ArcGIS software. The significance of the correlation between the percentage of impervious cover and the water quality indices was tested using the Pearson correlation method. The correlation test shows a significant, strong inverse relationship between impervious cover and water quality indices. The regression test yields a trend line between impervious cover change and water quality indices that can be used to predict the change of water quality status in the Ciliwung River.
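For illustration, the snippet below shows how such a Pearson correlation test and trend line can be computed with SciPy. The impervious-cover percentages and WQI values are invented, not the Ciliwung monitoring data.

```python
import numpy as np
from scipy import stats

impervious_pct = np.array([5.0, 12.0, 20.0, 35.0, 48.0, 60.0, 72.0, 85.0])  # % impervious cover
wqi = np.array([88.0, 81.0, 74.0, 65.0, 55.0, 47.0, 40.0, 33.0])            # water quality index

r, p_value = stats.pearsonr(impervious_pct, wqi)    # correlation significance test
fit = stats.linregress(impervious_pct, wqi)         # trend line: WQI = a + b * cover

print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
print(f"trend line: WQI = {fit.intercept:.1f} + {fit.slope:.2f} * impervious%")
```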
10

Gossen, Tatiana, Michael Kotzyba, and Andreas Nürnberger. "Graph clusterings with overlaps: Adapted quality indices and a generation model." Neurocomputing 123 (January 2014): 13–22. http://dx.doi.org/10.1016/j.neucom.2012.09.046.


Dissertations / Theses on the topic "Model quality indices"

1

Lehmann, Adam Clay. "AN ANALYSIS OF RELATIONSHIPS BETWEEN MODELED HYDROLOGIC/SEDIMENT LOADS AND INDICES OF IN-STREAM PHYSICAL HABITAT QUALITY IN HEADWATER STREAMS OF SOUTHWEST OHIO." Miami University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=miami1292959248.

2

BORDOGNA, ANNALISA. "Predicting the binding modes of protein complexes: new strategies for molecular docking." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19617.

Abstract:
Docking approaches and homology modelling procedures have recently become protagonists in the structural prediction of protein complexes. In recent years this has raised the need to develop ever more reliable tools and to define guidelines for obtaining accurate predictions of those structures. These goals are particularly relevant when only poorly accurate information about the proteins or the interaction (e.g. no interface indication for protein-protein docking, or protein models instead of experimental structures) is available. In this thesis, several strategies based on combining different computational techniques are proposed to overcome the limitations of the sampling stage of ligand- and protein-protein docking approaches and to broaden the possibility of predicting the structure of protein complexes. In particular, in the field of protein-protein docking algorithms, the combination of two existing docking methods (HADDOCK and ZDOCK) was proposed, yielding a method (called ZADDOCK) able to overcome the limitations of the single strategies used and to combine their strengths. Moreover, both for ligand- and protein-protein docking, an analysis of the relationships between docking results and the quality of homology models was performed to assess the possibility of an a priori prediction of the accuracy of docking results based on the evaluation of model quality indices. The results obtained for ZADDOCK show, on average, a very good performance of the method, which produced reliable predictions without the need for any interface data to guide the sampling step. This allows its employment in the study of complexes for which no experimental information is available and bioinformatics interface prediction fails. Moreover, an accurate description of the intermolecular interactions occurring in protein complexes was obtained, which is key information to drive subsequent experimental work. The quality of the ZADDOCK results indicates that the strategy of combining different computational techniques is a very promising avenue for the development of new docking approaches. As for the use of homology models in docking calculations, this work demonstrates the possibility of predicting a priori the accuracy of ligand-protein docking results on the basis of model quality indices, through a strategy based on the comparison with homologous proteins. Moreover, the results found for protein-protein docking lay the basis for the future development of a general strategy for predicting docking accuracy on protein models.
3

Raut, Yogendra Y. "Sustainable Bioenergy Feedstock Production Using Long-Term (1999-2014) Conservation Reserve Program Land." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu148344789416295.

4

Nébouy, David. "Printing quality assessment by image processing and color prediction models." Thesis, Saint-Etienne, 2015. http://www.theses.fr/2015STET4018/document.

Abstract:
Printing, though an old technique for surface coloration, has progressed considerably in recent decades, especially thanks to the digital revolution. Professionals who want to meet their clients' demands regarding the quality of the visual rendering thus want to know to which extent human observers are sensitive to the degradation of an image. Such questions regarding the perceived quality of a reproduced image can be split into two different topics: the printing quality, as the capacity of a printing system to accurately reproduce an original digital image, and the printed image quality, which results from both the reproduction quality and the quality of the original image itself. The first concept relies on a physical analysis of the way the original image is deteriorated when transferred onto the support, and we propose to couple it with a sensorial analysis, which aims at assessing perceptual attributes by giving them a value on a certain scale, determined with respect to reference samples classified by a set of observers. The second concept includes the degradation due to the printing plus the perceived quality of the original image, which is not in the scope of this work. In this report, we focus on the printing quality concept. Our approach first consists in the definition of several printing quality indices, based on measurable criteria assessed with “objective” image processing algorithms and optical models applied to a printed-then-scanned image. PhD work carried out at the Hubert Curien Laboratory.
5

Kamari, Halaleh. "Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.

Abstract:
In this work, the problem of estimating a meta-model of a complex model, denoted m, is considered. The model m depends on d input variables X1, ..., Xd that are independent and have a known law. The meta-model, denoted f∗, approximates the Hoeffding decomposition of m and allows its Sobol indices to be estimated. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f^, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, allows both to select and to estimate the terms in the Hoeffding decomposition, and therefore to select the Sobol indices that are non-zero and estimate them. It makes it possible to estimate even high-order Sobol indices, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds of the empirical L2 risk and the L2 risk of the estimator f^, that is, upper bounds with respect to the L2-norm and the empirical L2-norm for the distance between the model m and its estimation f^ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package can be applied to a known model that is calculable at all points or to an unknown regression model. In order to optimize the execution time and the storage memory, all of the functions of the RKHSMetaMod package, except for one function that is written in R, are written using the C++ libraries GSL and Eigen. These functions are then interfaced with the R environment in order to propose a user-friendly package. The performance of the package functions, in terms of the predictive quality of the estimator and the estimation of the Sobol indices, is validated by a simulation study.
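Schematically, and only as a sketch consistent with this description (the tuning parameters γ and μ and the group notation are generic, not the thesis' exact notation), the RKHS ridge group sparse estimator and the resulting Sobol index estimates can be written as

\[
\hat f \;=\; \operatorname*{arg\,min}_{f=\sum_{v} f_v \in \mathcal{H}} \;
\frac{1}{n}\sum_{i=1}^{n}\bigl(Y_i - f(X_i)\bigr)^{2}
\;+\; \gamma \sum_{v} \|f_v\|_{\mathcal{H}_v}
\;+\; \mu \sum_{v} \|f_v\|_{n},
\qquad
\widehat{S}_v \;=\; \frac{\|\hat f_v\|_n^{2}}{\sum_{w} \|\hat f_w\|_n^{2}},
\]

where the sums run over the groups v of the Hoeffding decomposition; groups whose estimated norms are shrunk to zero correspond to Sobol indices estimated as zero.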
6

Hadlich, Janaina Conte [UNESP]. "Características do crescimento animal, do tecido muscular esquelético e da maciez da carne de bovinos nelore e mestiços no modelo biológico superprecoce." Universidade Estadual Paulista (UNESP), 2007. http://hdl.handle.net/11449/104128.

Abstract:
The present work was conducted with the objective of studying the growth pattern of muscular tissue, by characterization of the muscle fibers during animal development, and its consequences for the final meat tenderness of Nellore cattle. Twenty animals of the Nellore breed were feedlot-finished in the 'superprecoce' biological model. During growth, samples were collected by biopsy to analyze the composition of the muscular tissue in the living animal and its influence on postmortem meat quality. The area and diameter of the SO, FOG and FG fibers (slow oxidative, fast oxidative-glycolytic and fast glycolytic) showed correlations (P<0.01) among them, demonstrating that growth is an event which occurs concomitantly for the different fiber types. In relation to the meat quality characteristics, a positive correlation was found between the area and diameter of the FG fibers and shear force values (rp=0.68, P<0.05; rp=0.81, P<0.01, respectively), and negative correlations were found between the area and diameter of the FG fibers and the myofibrillar fragmentation index, both measured in meat after 48 hours of cooling (rp= -0.75, P<0.01; rp= -0.58, P>0.05, respectively). In this way we can infer that the size of the muscle fiber can negatively influence meat quality characteristics, especially tenderness.
7

Hadlich, Janaina Conte 1976. "Características do crescimento animal, do tecido muscular esquelético e da maciez da carne de bovinos nelore e mestiços no modelo biológico superprecoce /." Botucatu : [s.n.], 2007. http://hdl.handle.net/11449/104128.

Abstract:
Advisor: Luis Artur Loyola Chardulo
Committee member: Maurício Medeiros Cabral
Committee member: Saulo da Luz e Silva
Committee member: Henrique Nunes de Oliveira
Committee member: Paulo Roberto Rodrigues Ramos
The present work was conducted with the objective of studying the growth pattern of muscular tissue, by characterization of the muscle fibers during animal development, and its consequences for the final meat tenderness of Nellore cattle. Twenty animals of the Nellore breed were feedlot-finished in the 'superprecoce' biological model. During growth, samples were collected by biopsy to analyze the composition of the muscular tissue in the living animal and its influence on postmortem meat quality. The area and diameter of the SO, FOG and FG fibers (slow oxidative, fast oxidative-glycolytic and fast glycolytic) showed correlations (P<0.01) among them, demonstrating that growth is an event which occurs concomitantly for the different fiber types. In relation to the meat quality characteristics, a positive correlation was found between the area and diameter of the FG fibers and shear force values (rp=0.68, P<0.05; rp=0.81, P<0.01, respectively), and negative correlations were found between the area and diameter of the FG fibers and the myofibrillar fragmentation index, both measured in meat after 48 hours of cooling (rp= -0.75, P<0.01; rp= -0.58, P>0.05, respectively). In this way we can infer that the size of the muscle fiber can negatively influence meat quality characteristics, especially tenderness.
Doctorate
8

Borke, Lukas. "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA." Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18307.

Abstract:
With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved into a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers, promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic-driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
9

Achuo, George. "Partner satisfaction and renewal likelihood in consumer supported agriculture (CSA) : a case study of The Equiterre CSA network." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=19555.

10

Rudolph, Johnny. "A benchmarking model for harmonic distortion in a power system / Johnny Rudolph." Thesis, 2011. http://hdl.handle.net/10394/11077.

Abstract:
The present power system is loaded with sophisticated energy conversion technologies like solid state converters. With the rapid advance in semiconductor technology, power electronics have provided new devices that are highly efficient and reliable. These devices are inherently non-linear, which causes the current to deviate from sinusoidal conditions. This phenomenon is known as harmonic current distortion. Multiple consumers are connected to the utility at the point of common coupling. Harmonic currents are then transmitted into the distribution system by various solid state users and this could lead to voltage distortion. Harmonic distortion is just one of the power quality fields and is not desirable in a power system. Distortion levels could cause multiple problems in the form of additional heating, increased power losses and even failing of sensitive equipment. Utility companies like Eskom have power quality monitors on various points in their distribution system. Data measurements are taken at a single point of delivery during certain time intervals and stored on a database. Multiple harmonic measurements will not be able to describe distortion patterns of the whole distribution system. Analysis must be done on this information to translate it to useful managerial information. The aim of this project is to develop a benchmarking methodology that could aid the supply industry with useful information to effectively manage harmonic distortion in a distribution system. The methodology will implement distortion indexes set forth by the Electrical Power Research Institute [3], which will describe distortion levels in a qualitative and quantitative way. Harmonic measurements of the past two years will be used to test the methodology. The information is obtained from Eskom’s database and will benchmark the North-West Province distribution network [40]. This proposed methodology will aim to aid institutions like NERSA to establish a reliable power quality management system.
Thesis (M.Ing. (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2012
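For context, the most common distortion indices in this area are the total harmonic distortion of current and voltage; the standard definitions below are given for orientation and are not quoted from the thesis.

\[
\mathrm{THD}_I = \frac{\sqrt{\sum_{h=2}^{H} I_h^{2}}}{I_1} \times 100\%,
\qquad
\mathrm{THD}_V = \frac{\sqrt{\sum_{h=2}^{H} V_h^{2}}}{V_1} \times 100\%,
\]

where \(I_h\) and \(V_h\) are the RMS magnitudes of the h-th harmonic, \(I_1\) and \(V_1\) are the fundamental components, and H is the highest harmonic order considered.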

Books on the topic "Model quality indices"

1

Trajtenberg, Manuel. Product innovations, price indices and the (mis)measurement of economic performance. Cambridge, MA: National Bureau of Economic Research, 1990.

2

Service, United States Indian Health. Evaluation of a quality assurance model for public health nursing: Appendices. [Rockville, Md.?]: The Service, 1990.

3

Armknecht, Paul A. Price imputation and other techniques for dealing with missing observations, seasonality and quality change in price indices. [Washington, D.C.]: International Monetary Fund, Statistics Department, 1999.

4

Kamenskaya, Valentina, and Leonid Tomanov. The fractal-chaotic properties of cognitive processes: age. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1053569.

Abstract:
The monograph reviews the literature on the nature of stochastic processes and their role in the work of the brain and in human behavior. It is established that real cognitive processes and mental functions are associated with the procedural side of external events and with the stochastic properties of the internal dynamics of brain systems, in the form of fluctuations of their parameters, including cardiac rhythm generation and sensorimotor reactions. It is experimentally shown that the dynamics of the measured physiological processes lies in the range from a chaotic regime to a weakly deterministic, fractal mode. The fractal mode determines the maximum order and organization of the homeostasis of cognitive processes and states, as well as the high adaptive ability of body systems with fractal properties. Fractal-chaotic dynamics is a useful property for examining actual physiological and psychological systems: it provides a unique numerical identification of the order and randomness of the processes through the calculation of fractal indices. The monograph presents the results of many years of experimental studies of the stochastic properties of sensorimotor reactions, as well as the stochastic properties of heart rate, in children, teenagers and adults across age groups, during speech activity and the perception of different kinds of music with its own frequency-spectral structure. It is intended for undergraduates, graduate students and researchers who carry out research and development in cognitive psychology and neuroscience.
5

Fisher, Franklin M., and Karl Shell. Economic Theory of Price Indices: Two Essays on the Effects of Taste, Quality, and Technological Change. Elsevier Science & Technology Books, 2014.

6

Malawey, Victoria. A Blaze of Light in Every Word. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190052201.001.0001.

Abstract:
A Blaze of Light in Every Word presents a conceptual model for analyzing vocal delivery in popular song recordings focused on three overlapping areas of inquiry: pitch, prosody, and quality. The domain of pitch, which refers to listeners’ perceptions of frequency, considers range, tessitura, intonation, and registration. Prosody, the pacing and flow of delivery, comprises phrasing, metric placement, motility, embellishment, and consonantal articulation. Qualitative elements include timbre, phonation, onset, resonance, clarity, paralinguistic effects, and loudness. Intersecting all three domains is the area of technological mediation, which considers how external technologies, such as layering, overdubbing, pitch modification, recording transmission, compression, reverb, spatial placement, delay, and other electronic effects, impact voice in recorded music. Though the book focuses primarily on the sonic and material aspects of vocal delivery, it situates these aspects among broader cultural, philosophical, and anthropological approaches to voice with the goal to better understand the relationship between sonic content and its signification. Drawing upon transcription and spectrographic analysis as the primary means of representation, as well as modes of analysis, this book features in-depth analyses of a wide array of popular song recordings spanning genres from indie rock to hip-hop to death metal, develops analytical tools for understanding how individual dimensions make singing voices both complex and unique, and synthesizes how multiple aspects interact to better understand the multidimensionality of singing voices.
7

Sobczyk, Eugeniusz Jacek. Uciążliwość eksploatacji złóż węgla kamiennego wynikająca z warunków geologicznych i górniczych. Instytut Gospodarki Surowcami Mineralnymi i Energią PAN, 2022. http://dx.doi.org/10.33223/onermin/0222.

Abstract:
Hard coal mining is characterised by features that pose numerous challenges to its current operations and cause strategic and operational problems in planning its development. The most important of these include the high capital intensity of mining investment projects and the dynamically changing environment in which the sector operates, while the long-term role of the sector is dependent on factors originating at both national and international level. At the same time, the conditions for coal mining are deteriorating, the resources more readily available in active mines are being exhausted, mining depths are increasing, temperature levels in pits are rising, transport routes for staff and materials are getting longer, effective working time is decreasing, natural hazards are increasing, and seams with an increasing content of waste rock are being mined. The mining industry is currently in a very difficult situation, both in technical (mining) and economic terms. It cannot be ignored, however, that the difficult financial situation of Polish mining companies is largely exacerbated by their high operating costs. The cost of obtaining coal and its price are two key elements that determine the level of efficiency of Polish mines. This situation could be improved by streamlining the planning processes. This would involve striving for production planning that is as predictable as possible and, on the other hand, economically efficient. In this respect, it is helpful to plan the production from operating longwalls with full awareness of the complexity of geological and mining conditions and the resulting economic consequences. The constraints on increasing the efficiency of the mining process are due to the technical potential of the mining process, organisational factors and, above all, geological and mining conditions. The main objective of the monograph is to identify relations between geological and mining parameters and the level of longwall mining costs, and their daily output. In view of the above, it was assumed that it was possible to present the relationship between the costs of longwall mining and the daily coal output from a longwall as a function of onerous geological and mining factors. The monograph presents two models of onerous geological and mining conditions, including natural hazards, deposit (seam) parameters, mining (technical) parameters and environmental factors. The models were used to calculate two onerousness indicators, Wue and WUt, which synthetically define the level of impact of onerous geological and mining conditions on the mining process in relation to: —— operating costs at longwall faces – indicator WUe, —— daily longwall mining output – indicator WUt. In the next research step, the analysis of direct relationships of selected geological and mining factors with longwall costs and the mining output level was conducted. For this purpose, two statistical models were built for the following dependent variables: unit operating cost (Model 1) and daily longwall mining output (Model 2). The models served two additional sub-objectives: interpretation of the influence of independent variables on dependent variables and point forecasting. The models were also used for forecasting purposes. Statistical models were built on the basis of historical production results of selected seven Polish mines. On the basis of variability of geological and mining conditions at 120 longwalls, the influence of individual parameters on longwall mining between 2010 and 2019 was determined. 
The identified relationships made it possible to formulate numerical forecast of unit production cost and daily longwall mining output in relation to the level of expected onerousness. The projection period was assumed to be 2020–2030. On this basis, an opinion was formulated on the forecast of the expected unit production costs and the output of the 259 longwalls planned to be mined at these mines. A procedure scheme was developed using the following methods: 1) Analytic Hierarchy Process (AHP) – mathematical multi-criteria decision-making method, 2) comparative multivariate analysis, 3) regression analysis, 4) Monte Carlo simulation. The utilitarian purpose of the monograph is to provide the research community with the concept of building models that can be used to solve real decision-making problems during longwall planning in hard coal mines. The layout of the monograph, consisting of an introduction, eight main sections and a conclusion, follows the objectives set out above. Section One presents the methodology used to assess the impact of onerous geological and mining conditions on the mining process. Multi-Criteria Decision Analysis (MCDA) is reviewed and basic definitions used in the following part of the paper are introduced. The section includes a description of AHP which was used in the presented analysis. Individual factors resulting from natural hazards, from the geological structure of the deposit (seam), from limitations caused by technical requirements, from the impact of mining on the environment, which affect the mining process, are described exhaustively in Section Two. Sections Three and Four present the construction of two hierarchical models of geological and mining conditions onerousness: the first in the context of extraction costs and the second in relation to daily longwall mining. The procedure for valuing the importance of their components by a group of experts (pairwise comparison of criteria and sub-criteria on the basis of Saaty’s 9-point comparison scale) is presented. The AHP method is very sensitive to even small changes in the value of the comparison matrix. In order to determine the stability of the valuation of both onerousness models, a sensitivity analysis was carried out, which is described in detail in Section Five. Section Six is devoted to the issue of constructing aggregate indices, WUe and WUt, which synthetically measure the impact of onerous geological and mining conditions on the mining process in individual longwalls and allow for a linear ordering of longwalls according to increasing levels of onerousness. Section Seven opens the research part of the work, which analyses the results of the developed models and indicators in individual mines. A detailed analysis is presented of the assessment of the impact of onerous mining conditions on mining costs in selected seams of the analysed mines, and in the case of the impact of onerous mining on daily longwall mining output, the variability of this process in individual fields (lots) of the mines is characterised. Section Eight presents the regression equations for the dependence of the costs and level of extraction on the aggregated onerousness indicators, WUe and WUt. The regression models f(KJC_N) and f(W) developed in this way are used to forecast the unit mining costs and daily output of the designed longwalls in the context of diversified geological and mining conditions. The use of regression models is of great practical importance. 
It makes it possible to approximate unit costs and daily output for newly designed longwall workings. The use of this knowledge may significantly improve the quality of planning processes and the effectiveness of the mining process.
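As a small illustration of the AHP step described above, the snippet below derives priority weights from a Saaty-style pairwise comparison matrix with the principal-eigenvector method and checks consistency. The 3x3 matrix and its criteria are invented, not taken from the monograph.

```python
import numpy as np

# Pairwise comparisons of three hypothetical criteria on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```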

Book chapters on the topic "Model quality indices"

1

Rankenhohn, Florian, Tido Strauß, and Paul Wermter. "Dianchi Shallow Lake Management." In Terrestrial Environmental Sciences, 69–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80234-9_3.

Abstract:
Lake Dianchi in the Chinese province of Yunnan is a shallow lake that has suffered from algae blooms for years due to high pollution. We conducted a thorough survey of the water quality of the northern part of the lake, called Caohai. This study was intended as the basis for the system understanding of the shallow lake of Caohai. The study consisted of two steps. First, we collected available environmental, hydrological and pollution data from Kunming authorities and other sources. It was possible to parameterise a lake model based on the preliminary data set. It supported first estimations of management scenarios. But these first and quick answers came with a relevant vagueness. Relevant monitoring data was still missing, like P release from lake-internal sediment. Because data uncertainty causes model uncertainty, and model uncertainty causes planning and management uncertainties, we recommended and conducted a thorough sediment and river pollution monitoring campaign in 2017. Examination of the sediment phosphorus release and additional measurements of N and P were crucial for the improvement of the shallow lake model of Caohai. In May 2018 we presented and discussed the results of the StoLaM shallow lake model of Caohai and the outcomes of a set of management scenarios. The StoLaM shallow lake model for Caohai used in SINOWATER indicates that sediment dredging could contribute to the control of algae by limitation of phosphorus, but sediment management can only produce sustainable effects when the overall nutrient input, and especially the phosphorus input from the inflows, is reduced significantly.
2

van Rooij, Wilbert, Iulie Aslaksen, Isak Henrik Eira, Philip Burgess, and Per Arild Garnåsjordet. "Loss of Reindeer Grazing Land in Finnmark, Norway, and Effects on Biodiversity: GLOBIO3 as Decision Support Tool at Arctic Local Level." In Reindeer Husbandry, 223–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17625-8_9.

Abstract:
Competing land use and climate change are threats to the pasture land of Sámi reindeer herding. Reindeer pastures are exposed to the development of infrastructure, hydropower, mineral exploration, recreational cabin areas, and wind power. Land use conflicts are exacerbated under climate policy with wind power plants in reindeer herding areas. Projected developments and climate change impacts challenge the adaptive capacity of reindeer herders and the resilience of reindeer herding. Analysis of biodiversity loss by the GLOBIO3 model is suggested as a tool for decision support, in consultation with Sámi reindeer owners, taking into account traditional knowledge of reindeer herding. The GLOBIO3 analysis for Sámi reindeer herding land in Finnmark indicates that in 2011, compared to an intact situation, about 50% of the biodiversity of reindeer calving grounds had been lost, and it is expected to be reduced by another 10% in the scenario for 2030. Reindeer owners in Finnmark said that they expect biodiversity loss to have implications for the quality and extent of suitable grazing areas. Especially the quality of the calving grounds is essential for reindeer herding. An important lesson from the dialogue with reindeer owners is that even highly impacted areas should not be considered as lost, and thus be opened to further development, as they are still important for seasonal reindeer migration and grazing at certain times of the year. The chapter presents research on methods development and traditional knowledge in the context of Sámi reindeer herders in Finnmark, and highlights innovative tools to engage rightsholders and stakeholders in the Arctic in development planning processes.
APA, Harvard, Vancouver, ISO, and other styles
3

Mishra, Prakash Chandra, and Anil Kumar Giri. "Prediction of Water Quality Indices by Using Artificial Neural Network Models." In Handbook of Research on Manufacturing Process Modeling and Optimization Strategies, 418–29. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2440-3.ch020.

Full text
Abstract:
Conventionally, fixed techniques are used for predicting future time-series data; adaptive techniques were subsequently introduced to improve these forecasts. The adaptive techniques are essentially based on ANN and fuzzy logic. It is observed that even these techniques perform poorly when the available input data set is small or when there is an abrupt change in the input data. In this paper, the proposed hybrid technique is based on data farming for intermediate data generation and on an ANN model for better learning and forecasting. The performance of the proposed model has been tested with actual data pertaining to water quality indices of various water samples collected from different sources.
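The sketch below shows the general shape of such an ANN regression model for a water quality index. It is a minimal illustration only: scikit-learn's MLPRegressor is used as a stand-in for the chapter's network, and the feature names and synthetic data are assumptions, not the authors' data set.

```python
# Minimal sketch of an ANN model predicting a water quality index (WQI) from
# measured parameters. scikit-learn is assumed; the feature names and synthetic
# data are purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical inputs: pH, dissolved oxygen, BOD, turbidity, conductivity.
X = rng.normal(size=(200, 5))
y = 50 + 8 * X[:, 1] - 6 * X[:, 2] - 3 * X[:, 3] + rng.normal(scale=2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```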
APA, Harvard, Vancouver, ISO, and other styles
4

Roy, Kunal, and Rudra Narayan Das. "The “ETA” Indices in QSAR/QSPR/QSTR Research." In Pharmaceutical Sciences, 978–1011. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1762-7.ch038.

Full text
Abstract:
Descriptors are one of the most essential components of predictive Quantitative Structure-Activity/Property/Toxicity Relationship (QSAR/QSPR/QSTR) modeling analysis, as they encode chemical information of molecules in the form of quantitative numbers, which are used to develop mathematical correlation models. The quality of a predictive model not only depends on good modeling statistics, but also on the extraction of chemical features. A significant amount of research since the beginning of QSAR analysis paradigm has led to the introduction of a large number of predictor variables or descriptors. The Extended Topochemical Atom (ETA) indices, developed by the authors' group, successfully address the aspects of molecular topology, electronic information, and different types of bonded interactions, and have been extensively employed for the modeling of different types of activity/property and toxicity endpoints. This chapter provides explicit information regarding the basis, algorithm, and applicability of the ETA indices for a predictive modeling paradigm.
APA, Harvard, Vancouver, ISO, and other styles
5

Roy, Kunal, and Rudra Narayan Das. "The “ETA” Indices in QSAR/QSPR/QSTR Research." In Quantitative Structure-Activity Relationships in Drug Design, Predictive Toxicology, and Risk Assessment, 48–83. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8136-1.ch002.

Full text
Abstract:
Descriptors are one of the most essential components of predictive Quantitative Structure-Activity/Property/Toxicity Relationship (QSAR/QSPR/QSTR) modeling analysis, as they encode chemical information of molecules in the form of quantitative numbers, which are used to develop mathematical correlation models. The quality of a predictive model not only depends on good modeling statistics, but also on the extraction of chemical features. A significant amount of research since the beginning of QSAR analysis paradigm has led to the introduction of a large number of predictor variables or descriptors. The Extended Topochemical Atom (ETA) indices, developed by the authors' group, successfully address the aspects of molecular topology, electronic information, and different types of bonded interactions, and have been extensively employed for the modeling of different types of activity/property and toxicity endpoints. This chapter provides explicit information regarding the basis, algorithm, and applicability of the ETA indices for a predictive modeling paradigm.
APA, Harvard, Vancouver, ISO, and other styles
6

Saktioto, Toto, Yoli Zairmi, Sopya Erlinda, and Velia Veriyanti. "Simulation of Birefringence and Polarization Mode Dispersion Characteristics in Various Commercial Single Mode Fibers." In Application of Optical Fiber in Engineering. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.94127.

Full text
Abstract:
Single-mode optical fiber for long-haul communication has developed rapidly, and several efforts have been made to reduce and control the attenuation and absorption of the propagating signal. However, fiber parameters are still affected by internal and external factors that result in birefringence and polarization mode dispersion, such as bending power losses, signal widening and increasing wavelengths. Because this interference is difficult to demonstrate experimentally over very long fibers, a numerical simulation is set up from the perspective of twisted fiber disorder as a function of wavelength and fiber geometry. The simulation evaluates various refractive indices, fiber radii and source wavelengths. The quality of the fiber under interference can be identified from the twisted power loss values for different twist radii. The model indicates the greatest power losses occurring as a function of radius, refractive indices and wavelength. The results show that the normalized frequency value plays an important role in determining the effectiveness of the optical fiber performance and the stability of power delivery. Increasing the wavelength can cause the fibers to experience birefringence and polarization mode dispersion at telecommunication wavelengths.
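The normalized frequency mentioned in the abstract has a standard definition, V = (2πa/λ)·sqrt(n1² − n2²), with the single-mode condition V < 2.405 for a step-index fiber. The sketch below computes V for a few wavelengths; the core radius and refractive indices are illustrative assumptions, not values from the chapter.

```python
# Minimal sketch of the normalized frequency (V number):
# V = (2 * pi * a / lambda) * sqrt(n1^2 - n2^2); a step-index fiber is single-mode
# when V < ~2.405. Core radius and refractive indices below are illustrative.
import math

def v_number(core_radius_um, wavelength_um, n_core, n_cladding):
    numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)
    return 2 * math.pi * core_radius_um / wavelength_um * numerical_aperture

for wavelength in (0.85, 1.31, 1.55):  # micrometres
    v = v_number(core_radius_um=4.0, wavelength_um=wavelength,
                 n_core=1.4500, n_cladding=1.4447)
    print(f"lambda = {wavelength} um: V = {v:.3f}, single-mode = {v < 2.405}")
```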
APA, Harvard, Vancouver, ISO, and other styles
7

Wong, Ho Yin, and Anthony Perrone. "The Antecedents of Word-of-Mouth Behaviour." In Strategic Marketing in Fragile Economic Conditions, 185–202. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6232-2.ch010.

Full text
Abstract:
The aim of this study is to undertake empirical research investigating the nature and magnitude of the determinants of word-of-mouth behaviour from the point of view of service performance and post-purchase perceptions. A quantitative study was undertaken. A theoretical model linking service quality issues and word-of-mouth behaviour was developed and tested using structural equation modelling of 280 surveyed participants at various day spa locations. All major fit indices from structural equation modelling methods show satisfactory results for the measurement and structural models. The results confirm significant relationships between the constructs in the model. While the quality of the product, customer service, and servicescape atmosphere lead to customer satisfaction, it is servicescape atmosphere and customer satisfaction that drive word-of-mouth behaviour. The results of this study provide insights to aid service providers and marketing professionals in the service industry in understanding that delivering high-quality service, providing an accommodating environment, and instilling feelings of satisfaction in their customers are likely to lead to positive word-of-mouth referrals. One major limitation is that the survey was conducted within one industry in one country. The major value of this chapter is the establishment of the role of service quality in word-of-mouth behaviour. This research provides empirical results of the impacts of service performance and post-purchase perceptions on word-of-mouth behaviour.
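The "fit indices" the abstract relies on are routinely computed from model and baseline chi-square statistics. The sketch below shows the textbook formulas for CFI, TLI and RMSEA; the chi-square values, degrees of freedom and sample size are hypothetical and are not taken from this study.

```python
# Minimal sketch of common SEM approximate-fit indices (CFI, TLI, RMSEA) computed
# from model and baseline (null) chi-square statistics. All numbers are hypothetical.
import math

def fit_indices(chi2_model, df_model, chi2_null, df_null, n_obs):
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    cfi = 1.0 - d_model / max(d_null, d_model, 1e-12)
    tli = ((chi2_null / df_null) - (chi2_model / df_model)) / ((chi2_null / df_null) - 1.0)
    rmsea = math.sqrt(d_model / (df_model * (n_obs - 1)))
    return cfi, tli, rmsea

cfi, tli, rmsea = fit_indices(chi2_model=412.3, df_model=180,
                              chi2_null=3150.0, df_null=210, n_obs=280)
print(f"CFI={cfi:.3f}  TLI={tli:.3f}  RMSEA={rmsea:.3f}")
```

Values such as CFI and TLI above roughly 0.90 and RMSEA below about 0.08 are the conventional thresholds behind a "satisfactory fit" statement.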
APA, Harvard, Vancouver, ISO, and other styles
8

Relich, Marcin, and Jana Šujanová. "Identifying the Key Success Factors of Innovation for Improving the New Product Development Process." In Advances in Business Strategy and Competitive Advantage, 303–19. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8348-8.ch018.

Full text
Abstract:
This chapter is concerned with the identification of success factors in product innovation. The critical success factors are identified on the basis of a project management software database and questionnaires concerning communication and quality management in new product development. The model for measuring innovation includes indicators connected with fields such as research and development, purchasing and materials management, manufacturing, sales and marketing, and communication. The proposed methodology enables objective indices and subjective judgments to be merged with the use of fuzzy logic. Artificial neural networks are used to identify the relationships between product success and project environment parameters.
APA, Harvard, Vancouver, ISO, and other styles
9

Drabick, Deborah A. G., and Jill Rabinowitz. "Heterogeneity in Dementia and Mild Cognitive Impairment." In Vascular Disease, Alzheimer's Disease, and Mild Cognitive Impairment, 129–45. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190634230.003.0007.

Full text
Abstract:
Mild cognitive impairment (MCI) and dementia such as Alzheimer’s disease and vascular dementia are heterogeneous conditions that are associated with a chronic course and impairments across a multitude of neurobiological and neuropsychological domains. Until relatively recently, much research has relied on variable-centered techniques (e.g., structural equation modeling, factor analysis, regression) to delineate and study these conditions. This chapter presents evidence of the potential benefits of using person-centered procedures (e.g., latent class analysis) for identifying more homogeneous subgroups of individuals with MCI or dementia that may have distinct correlates, courses, and potential responses to interventions. The research reviewed in this chapter indicates that these strategies permit clinicians and investigators to (a) identify subgroups of individuals who differ in the frequency and/or quality of signs and symptoms, correlates, course, or outcomes, and (b) externally validate and provide support for the predictive validity of these subgroups. Steps for conducting latent class/profile analysis are presented, as well as indices used in selecting the best-fitting model. Implications for assessment, intervention, and future research are provided.
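The chapter's point about indices for selecting the best-fitting latent class model can be illustrated with an information criterion such as BIC. The sketch below uses scikit-learn's GaussianMixture as a simple stand-in for a latent profile model; the synthetic scores and the choice of candidate class counts are assumptions for illustration only.

```python
# Minimal sketch of choosing the number of latent subgroups with BIC, using
# GaussianMixture as a stand-in for a latent class/profile model. Synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two hypothetical subgroups of neuropsychological scores (e.g. memory, attention).
group_a = rng.normal(loc=[0.0, 0.0], scale=0.6, size=(120, 2))
group_b = rng.normal(loc=[2.5, 1.5], scale=0.6, size=(80, 2))
scores = np.vstack([group_a, group_b])

bic_by_k = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    bic_by_k[k] = gmm.bic(scores)

best_k = min(bic_by_k, key=bic_by_k.get)
print("BIC by number of classes:", {k: round(v, 1) for k, v in bic_by_k.items()})
print("Best-fitting number of classes:", best_k)
```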
APA, Harvard, Vancouver, ISO, and other styles
10

Sahu, Nitin Kumar, Atul Kumar Sahu, and Anoop Kumar Sahu. "Fuzzy-AHP." In Theoretical and Practical Advancements for Fuzzy System Integration, 97–125. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1848-8.ch005.

Full text
Abstract:
Logistics activities are performed in order to balance the operational chains of firms. The selection of a Third Party Logistics (3PL) provider is a challenging task for each organization and involves various factors and attributes. The presented methodology aids decision makers in effectively choosing the appropriate Third Party Logistics (3PL) network. In this work, the authors explore fuzzy set theory and present a fuzzy AHP model to help the managers of organizations deal with Third Party Logistics (3PL) decision-making problems. The overall performance of the defined Third Party Logistics (3PL) service providers is greatly influenced by many significant parameters: quality, reliability, service assurance, shipment cost, customer relationship, etc. The authors consider various significant parameters representing first-level indices: service level, financial security capabilities, location, global presence, relationship management, and client fulfillment. These parameters have chains of various sub-parameters, represented as second-level indices, whose importance affects the judgment of the decision makers. Previous studies have constrained their work to first-level indices and have not considered second-level indices, which are a crucial part of today's practical decision-making process. The authors treat this issue as a research gap and transform it into a research agenda. They apply AHP (Analytic Hierarchy Process) accompanied by fuzzy set theory to solve industrial logistics problems. The objective of the chapter is to propose a fuzzy-based AHP method for solving benchmarking problems (preference ordering of defined alternatives under criteria). The presented method helps the managers of firms decide on the best Third Party Logistics (3PL) service provider, and a numerical illustration is provided to validate the application of the method.
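The crisp AHP step underlying such a model can be sketched briefly: a priority vector is derived from a pairwise comparison matrix and its consistency is checked. The sketch below is not the authors' fuzzy AHP extension, and the criteria and comparison values are hypothetical.

```python
# Minimal sketch of the crisp AHP step: priority vector from a pairwise comparison
# matrix plus the consistency ratio. Not the fuzzy AHP of the chapter; values are
# hypothetical.
import numpy as np

# Pairwise comparisons of three hypothetical criteria: service level, cost, reliability.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
idx = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, idx].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
lambda_max = eigenvalues.real[idx]
ci = (lambda_max - n) / (n - 1)               # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
cr = ci / ri                                  # consistency ratio (acceptable if < 0.1)

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

In a fuzzy AHP, the crisp comparison values above are replaced by fuzzy numbers and the weights are obtained from the fuzzified judgments before ranking the 3PL alternatives.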
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Model quality indices"

1

Zhang, Lihua, Shuaidong Jia, Tao Wang, and Hongbo Cao. "Estimating quality indices of a digital depth model for navigational safety." In 2015 2nd IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services (ICSDM). IEEE, 2015. http://dx.doi.org/10.1109/icsdm.2015.7298039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Harris, Salil, Aniruddha Sinha, and Sudarshan Kumar. "Model Order Identification of Combustion Instability Using Lipschitz Indices." In ASME 2019 Gas Turbine India Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gtindia2019-2694.

Full text
Abstract:
Abstract Gas turbine combustors employing lean premixed combustion are prone to combustion instability. Combustion instability, if unchecked, will have deleterious effects on the combustor and hence needs to be controlled. Active control methods are preferred to obtain better off-design performance. The effectiveness of active control methods depends on the quality of the controller, which in turn depends on the quality of the model. In the present work, an input-output model structure is chosen in which the output of the system at the current instant is modelled as a nonlinear function of delayed inputs and outputs. As there are infinite possibilities for representing nonlinear functions, all parameters in the model structure, such as the time delay between input and output, the number of delayed input and output terms and the appropriate form of the nonlinear function, can be obtained only iteratively. However, prior knowledge of the delay and of the number of delayed inputs and outputs reduces the computational intensity. To this end, the present work utilizes the method of Lipschitz indices to obtain the number of delayed inputs and outputs.
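The sketch below shows the general Lipschitz-quotient idea commonly used for this purpose (in the spirit of He and Asada's formulation): for each candidate number of lags, pairwise quotients of output differences over regressor distances are computed and summarised into an index, and the index stops dropping once enough lags are included. The synthetic system, the index definition details and the stopping interpretation are assumptions; the paper's exact procedure may differ.

```python
# Minimal sketch of Lipschitz indices for picking the number of delayed inputs and
# outputs (model order). Synthetic data; the paper's exact formulation may differ.
import numpy as np

def lipschitz_index(X, y, p=10):
    """Geometric mean of the p largest scaled Lipschitz quotients for regressors X."""
    n_samples, n_regressors = X.shape
    quotients = []
    for i in range(n_samples):
        for j in range(i + 1, n_samples):
            dx = np.linalg.norm(X[i] - X[j])
            if dx > 1e-12:
                quotients.append(max(abs(y[i] - y[j]) / dx, 1e-12))
    largest = np.sort(quotients)[-p:]
    return float(np.exp(np.mean(np.log(np.sqrt(n_regressors) * largest))))

# Synthetic second-order system: y[k] depends on y[k-1], y[k-2] and u[k-1].
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 2] + 0.8 * u[k - 1] + 0.1 * y[k - 1] ** 2

for order in range(1, 5):
    X = np.column_stack([y[order - d - 1: len(y) - d - 1] for d in range(order)] +
                        [u[order - d - 1: len(u) - d - 1] for d in range(order)])
    print(order, round(lipschitz_index(X, y[order:]), 3))
```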
APA, Harvard, Vancouver, ISO, and other styles
3

da Rosa, M. A., G. Bolacell, I. Costa, D. Calado, and D. Issicaba. "Impact evaluation of the network geometric model on power quality indices using probabilistic techniques." In 2016 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS). IEEE, 2016. http://dx.doi.org/10.1109/pmaps.2016.7764215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rui, Bai, Tong Shaocheng, Zhang Jian, and Chai Tianyou. "Prediction model of quality indices based-on RBF neural network in the raw slurry blending process." In 2009 IEEE International Conference on Automation and Logistics (ICAL). IEEE, 2009. http://dx.doi.org/10.1109/ical.2009.5262778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ih, Jeong-Guon, Su-Won Jang, Cheol-Ho Jeong, Youn-Young Jeung, and Kye-Sup Jun. "A Study on the Sound Quality Evaluation Model of the Air Cleaner." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-41115.

Full text
Abstract:
In operating the air cleaner for a long time, people in a quiet enclosed space expect a calm sound at low operational levels for routine cleaning of the air; in contrast, a powerful, yet not annoying, sound is expected at high operational levels for immediate cleaning of pollutants. In this context, it is important to evaluate and design the air cleaner noise to satisfy such contradictory expectations from customers. In this study, a model for evaluating air cleaner sound quality was developed based on objective and subjective analyses. Sound signals from various air cleaners were recorded and edited by increasing or decreasing the loudness in three wide specific-loudness bands: 20–400 Hz (0–3.8 Bark), 400–1250 Hz (3.8–10 Bark), and 1.25k–12.5k Hz (10–22.8 Bark). Subjective tests using the edited sounds were conducted by the semantic differential method (SDM) and the method of successive intervals (MSI). The SDM test for 7 adjective pairs was conducted to find the relation between subjective feeling and frequency bands. Two major feelings, performance and annoyance, were factored out from the principal component analysis. We found that the performance feeling was related to both low and high frequency bands, whereas the annoyance feeling was related to high frequency bands. The MSI test using the 7 scales was conducted to derive the sound quality index expressing the severity of each perceptive descriptor. Annoyance and performance indices of air cleaners were modeled from the subjective responses of the juries and the measured sound quality metrics: loudness, sharpness, roughness, and fluctuation strength. A multiple regression method was employed to generate the sound quality evaluation models. Using the developed indices, the sound quality of the measured data was evaluated and compared with the subjective data. The difference between predicted and tested scores was less than 0.5 point.
APA, Harvard, Vancouver, ISO, and other styles
6

Singh, Sultan, and Anil Kumar. "Validation of an Indian Rail Vehicle Model Using Ride Indices From Oscillation Test Trials." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-70218.

Full text
Abstract:
Abstract For any passenger train, safety and comfort are more critical than efficient and economical travel. Due to wheel-rail interaction at high speed, high-amplitude vibrations occur that deteriorate the ride comfort of the passengers. In this paper, a multi-body dynamic model is developed using Adams/VI-Rail software. Actual parameters of the track and the LHB coach are used to simulate the rail vehicle model. Sperling’s ride index (Wz) method determines the ride index values at different speeds with random track irregularity. The proposed multi-body dynamic model is analysed, and the results are compared with oscillation test trials performed and reported by the Research Design and Standards Organisation (RDSO). The obtained results were in good agreement and within permissible values. The validated model can be extended to study improvements in ride quality and ride comfort by introducing a semi-active suspension system based on an MR damper.
APA, Harvard, Vancouver, ISO, and other styles
7

Ahmadian, Mehdi, and Emmanuel Blanchard. "Non-Dimensional Analysis of the Performance of Semiactive Vehicle Suspensions." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35689.

Full text
Abstract:
An analytical study that evaluates the response characteristics of a two-degree-of-freedom quarter-car model using passive and semi-active dampers is provided as an extension to the results published by Chalasani for active suspensions. The behavior of a semi-actively suspended vehicle is evaluated using the hybrid control policy and compared to the behavior of a passively suspended vehicle. The relationship between vibration isolation, suspension deflection, and road-holding is studied for the quarter-car model. Three performance indices are used as measures of vibration isolation (which can be seen as a comfort index), suspension travel requirements, and road-holding quality. These indices are based on the mean square responses to a white noise velocity input for three motion variables: the vertical acceleration of the sprung mass, the deflection of the suspension, and the deflection of the tire, respectively. The results of this study indicate that the hybrid control policy yields better comfort than a passive suspension, without reducing the road-holding quality or increasing the suspension displacement for typical passenger cars.
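One common formulation of the hybrid policy referenced above blends skyhook and groundhook switching logic into a single commanded damper force. The sketch below illustrates that idea for a quarter-car damper; the switching rules, gain and blending factor alpha are illustrative assumptions rather than the paper's exact control law.

```python
# Minimal sketch of a hybrid semi-active policy (blend of skyhook and groundhook
# logic) for a quarter-car damper. Gains and switching rules are illustrative.
def hybrid_damper_force(v_sprung, v_unsprung, alpha=0.5, gain=1500.0):
    """Return the commanded semi-active damper force [N] for given velocities [m/s]."""
    v_rel = v_sprung - v_unsprung
    # Skyhook term: damp sprung-mass motion when it moves with the relative velocity.
    sigma_sky = v_sprung if v_sprung * v_rel > 0 else 0.0
    # Groundhook term: damp unsprung-mass motion when it opposes the relative velocity.
    sigma_gnd = -v_unsprung if -v_unsprung * v_rel > 0 else 0.0
    return gain * (alpha * sigma_sky + (1.0 - alpha) * sigma_gnd)

# Example: sprung mass moving up faster than the wheel -> the skyhook term dominates.
print(hybrid_damper_force(v_sprung=0.3, v_unsprung=0.1, alpha=0.6))
```

Setting alpha to 1 recovers pure skyhook (comfort-oriented) behaviour and alpha to 0 recovers pure groundhook (road-holding-oriented) behaviour, which is why the blend can trade off the three performance indices discussed in the abstract.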
APA, Harvard, Vancouver, ISO, and other styles
8

Gutium, Tatiana. "Approaches to Measurement of Well-being: Case of the Republic of Moldova." In International Conference Innovative Business Management & Global Entrepreneurship. LUMEN Publishing, 2020. http://dx.doi.org/10.18662/lumproc/ibmage2020/20.

Full text
Abstract:
The development strategy of a modern state is oriented towards ensuring economic growth, increasing the well-being of citizens and reducing the level of poverty. The COVID-19 pandemic had a negative impact on national economies, including the economy of the Republic of Moldova. That is why assessing well-being, identifying impact factors and elaborating recommendations for increasing well-being have become topical. Contemporary approaches to quantifying well-being focus on both the economic and social spheres. This study identifies the weaknesses and strengths of well-being indices and analyses the dynamics of two composite welfare indices. In the research process, different factors were identified and their influence on the well-being of citizens and on living standards was estimated. Applying correlation and regression analysis in the software Eviews 9, two multifactorial linear regression models were developed: a model of well-being and a model of the living standard of the population of the Republic of Moldova. Based on the analysis of the pillars of the Legatum Prosperity Index and the components of the Social Progress Index, priority sectors were identified, such as health care, education, economic quality, enterprise conditions and environmental quality. At present, it is necessary to promote strategies to ensure sustainable economic growth, which will inevitably lead to an increase in the well-being of the local population.
APA, Harvard, Vancouver, ISO, and other styles
9

Osborn, Mark, and LiJie Yu. "Decision Support for Remote Monitoring and Diagnostics of Aircraft Engine Using Influence Diagrams." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-28331.

Full text
Abstract:
FAA regulations require the monitoring of all commercial aircraft engines to ensure airworthiness. In addition, monitoring engine performance and resolving identified issues in a timely manner provides economic advantages to engine owners by reducing operational costs and avoiding secondary damage. Various remote monitoring and diagnostics service providers exist in the marketplace. However, a common understanding among most of them is that, given limited time and information, it is an extremely difficult task to make quick and optimized decisions. Difficulties arise from the fact that an aircraft engine is a complex system and demands considerable expertise to diagnose, but also from the uncertainty in estimating an engine’s true physical state because of measurement and process noise. Therefore, it is often difficult to decide what action to take in order to achieve the most desirable outcome. In this paper, a cost-sensitive engine diagnostic and decision-making methodology is described. Diagnostic tool performance at various decision thresholds is estimated over a large set of validated historical cases to evaluate sensitivity, specificity and other quality indices. These quality indices and a set of cost functions are utilized in influence diagrams to derive the optimized decision model in order to minimize costs given the uncertain engine condition and noisy parametric data.
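The core of such a cost-sensitive approach can be sketched very simply: for each alerting threshold, combine the estimated sensitivity and specificity with cost functions and a prior fault probability, then pick the threshold with the lowest expected cost. The threshold table, prior and cost figures below are hypothetical, and the paper's influence-diagram formulation is richer than this sketch.

```python
# Minimal sketch of cost-sensitive threshold selection from sensitivity/specificity
# and cost functions. All numbers are hypothetical.
# (sensitivity, specificity) estimated from historical cases at each threshold.
performance_by_threshold = {
    0.2: (0.95, 0.70),
    0.4: (0.90, 0.85),
    0.6: (0.80, 0.93),
    0.8: (0.60, 0.98),
}

P_FAULT = 0.05               # prior probability that the engine condition is degraded
COST_MISSED = 500_000.0      # cost of a missed fault (e.g. secondary damage)
COST_FALSE_ALARM = 20_000.0  # cost of an unnecessary shop visit

def expected_cost(sensitivity, specificity):
    missed = P_FAULT * (1.0 - sensitivity) * COST_MISSED
    false_alarm = (1.0 - P_FAULT) * (1.0 - specificity) * COST_FALSE_ALARM
    return missed + false_alarm

costs = {t: expected_cost(se, sp) for t, (se, sp) in performance_by_threshold.items()}
best = min(costs, key=costs.get)
print({t: round(c) for t, c in costs.items()}, "-> best threshold:", best)
```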
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Vu, Dinh Phung, Trung Le, and Hung Bui. "Discriminative Bayesian Nonparametric Clustering." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/355.

Full text
Abstract:
We propose a general framework for discriminative Bayesian nonparametric clustering to promote the inter-discrimination among the learned clusters in a fully Bayesian nonparametric (BNP) manner. Our method combines existing BNP clustering and discriminative models by enforcing latent cluster indices to be consistent with the predicted labels resulted from probabilistic discriminative model. This formulation results in a well-defined generative process wherein we can use either logistic regression or SVM for discrimination. Using the proposed framework, we develop two novel discriminative BNP variants: the discriminative Dirichlet process mixtures, and the discriminative-state infinite HMMs for sequential data. We develop efficient data-augmentation Gibbs samplers for posterior inference. Extensive experiments in image clustering and dynamic location clustering demonstrate that by encouraging discrimination between induced clusters, our model enhances the quality of clustering in comparison with the traditional generative BNP models.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Model quality indices"

1

Saltus, Christina, Todd Swannack, and S. McKay. Geospatial Suitability Indices Toolbox (GSI Toolbox). Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41881.

Full text
Abstract:
Habitat suitability models are widely adopted in ecosystem management and restoration, where these index models are used to assess environmental impacts and benefits based on the quantity and quality of a given habitat. Many spatially distributed ecological processes require application of suitability models within a geographic information system (GIS). Here, we present a geospatial toolbox for assessing habitat suitability. The Geospatial Suitability Indices (GSI) toolbox was developed in ArcGIS Pro 2.7 using the Python® 3.7 programming language and is available for use on the local desktop in the Windows 10 environment. Two main tools comprise the GSI toolbox. First, the Suitability Index Calculator tool uses thematic or continuous geospatial raster layers to calculate parameter suitability indices based on user-specified habitat relationships. Second, the Overall Suitability Index Calculator combines multiple parameter suitability indices into one overarching index using one or more options, including: arithmetic mean, weighted arithmetic mean, geometric mean, and minimum limiting factor. The resultant output is a raster layer representing habitat suitability values from 0.0 to 1.0, where zero is unsuitable habitat and one is ideal suitability. This report documents the model purpose and development as well as provides a user’s guide for the GSI toolbox.
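The aggregation options named in the abstract (arithmetic mean, weighted arithmetic mean, geometric mean, minimum limiting factor) can be illustrated without the GIS machinery by applying them to a plain list of parameter suitability indices for one cell. The sketch below is an illustration of those formulas only, not the toolbox's raster implementation; the index values and weights are made up.

```python
# Minimal sketch of the overall-suitability aggregation options, applied to one
# cell's parameter suitability indices rather than raster layers. Values are
# illustrative.
from math import prod

def overall_suitability(indices, method="arithmetic", weights=None):
    if method == "arithmetic":
        return sum(indices) / len(indices)
    if method == "weighted":
        return sum(w * s for w, s in zip(weights, indices)) / sum(weights)
    if method == "geometric":
        return prod(indices) ** (1.0 / len(indices))
    if method == "minimum":
        return min(indices)
    raise ValueError(f"unknown method: {method}")

cell = [0.8, 0.6, 0.9]          # parameter suitability indices for one raster cell
for m in ("arithmetic", "weighted", "geometric", "minimum"):
    print(m, round(overall_suitability(cell, m, weights=[2, 1, 1]), 3))
```

The minimum-limiting-factor option is the most conservative, since a single poor parameter index caps the overall suitability of the cell.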
APA, Harvard, Vancouver, ISO, and other styles
2

Saltus, Christina, S. McKay, and Todd Swannack. Geospatial suitability indices (GSI) toolbox : user's guide. Engineer Research and Development Center (U.S.), August 2022. http://dx.doi.org/10.21079/11681/45128.

Full text
Abstract:
Habitat suitability models have been widely adopted in ecosystem management and restoration to assess environmental impacts and benefits according to the quantity and quality of a given habitat. Many spatially distributed ecological processes require application of suitability models within a geographic information system (GIS). This technical report presents a geospatial toolbox for assessing habitat suitability. The geospatial suitability indices (GSI) toolbox was developed in ArcGIS Pro 2.7 using the Python 3.7 programming language and is available for use on the local desktop in the Windows 10 environment. Two main tools comprise the GSI toolbox. First, the suitability index (SIC) calculator tool uses thematic or continuous geospatial raster layers to calculate parameter suitability indices using user-specified habitat relationships. Second, the overall suitability index calculator (OSIC) combines multiple parameter suitability indices into one overarching index using one or more options, including arithmetic mean, weighted arithmetic mean, geometric mean, and minimum limiting factor. The result is a raster layer representing habitat suitability values from 0.0–1.0, where zero (0) is unsuitable habitat and one (1) is ideal suitability. This report documents the model purpose and development and provides a user’s guide for the GSI toolbox.
APA, Harvard, Vancouver, ISO, and other styles
3

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

Full text
Abstract:
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on spatial epidemiology of cholera, malaria and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component. The report identifies four methods used to develop risk maps, i) observational, ii) index-based, iii) associative modelling and iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term for early warning systems) or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. In addition, they typically do not account for gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics. The report recommends that future WASH-related disease risk mapping research: - Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting health and socioeconomic outcomes of infection. -Employ jointly considered unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths. -Develop and apply collaborative approaches with public health officials, development organizations and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner. 
-Enhance identification of vulnerable populations by further disaggregating risk estimates and accounting for demographic and behavioural variables and using novel data sources such as big data and citizen science. This review is the first to focus solely on WASH-related disease risk mapping and modelling. The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report’s main target audience is modellers, public health authorities and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).
APA, Harvard, Vancouver, ISO, and other styles
4

Galili, Naftali, Roger P. Rohrbach, Itzhak Shmulevich, Yoram Fuchs, and Giora Zauberman. Non-Destructive Quality Sensing of High-Value Agricultural Commodities Through Response Analysis. United States Department of Agriculture, October 1994. http://dx.doi.org/10.32747/1994.7570549.bard.

Full text
Abstract:
The objectives of this project were to develop nondestructive methods for detection of internal properties and firmness of fruits and vegetables. One method was based on a soft piezoelectric film transducer developed at the Technion, for analysis of fruit response to low-energy excitation. The second method was a dot-matrix piezoelectric transducer of North Carolina State University, developed for contact-pressure analysis of fruit during impact. Two research teams, one in Israel and the other in North Carolina, coordinated their research effort according to the specific objectives of the project, to develop and apply the two complementary methods for quality control of agricultural commodities. In Israel: An improved firmness testing system was developed and tested with tropical fruits. The new system included an instrumented fruit-bed of three flexible piezoelectric sensors and miniature electromagnetic hammers, which served as fruit support and low-energy excitation device, respectively. Resonant frequencies were detected for determination of a firmness index. Two new acoustic parameters were developed for evaluation of fruit firmness and maturity: a damping ratio and a centroid of the frequency response. Experiments were performed with avocado and mango fruits. The internal damping ratio, which may indicate fruit ripeness, increased monotonically with time, while resonant frequencies and firmness indices decreased with time. Fruit samples were tested daily by a destructive penetration test. A fairly high correlation was found in tropical fruits between the penetration force and the new acoustic parameters; a lower correlation was found between this parameter and the conventional firmness index. Improved table-top firmness testing units, Firmalon, with a data-logging system and on-line data analysis capacity have been built. The new device was used for the full-scale experiments in the next two years, ahead of the original program and BARD timetable. Close cooperation was initiated with local industry for development of both off-line and on-line sorting and quality control of more agricultural commodities. Firmalon units were produced and operated in major packaging houses in Israel, Belgium and Washington State, on mango and avocado, apples, pears, tomatoes, melons and some other fruits, to gain field experience with the new method. The accumulated experimental data from all these activities is still being analyzed, to improve firmness sorting criteria and shelf-life predicting curves for the different fruits. The test program in commercial CA storage facilities in Washington State included seven apple varieties (Fuji, Braeburn, Gala, Granny Smith, Jonagold, Red Delicious and Golden Delicious) and the D'Anjou pear variety. FI master-curves could be developed for the Braeburn, Gala, Granny Smith and Jonagold apples. These fruits showed a steady ripening process during the test period. Yet, more work should be conducted to reduce scattering of the data and to determine the confidence limits of the method. The nearly constant FI in Red Delicious and the fluctuations of FI in the Fuji apples should be re-examined. Three sets of experiments were performed with Flandria tomatoes. Despite the complex structure of the tomatoes, the acoustic method could be used for firmness evaluation and to follow the ripening evolution with time. Close agreement was achieved between the auction expert evaluation and that of the nondestructive acoustic test, where a firmness index of 4.0 or more indicated grade-A tomatoes.
More work is being performed to refine the sorting algorithm and to develop a general ripening scale for automatic grading of tomatoes for the fresh fruit market. Galia melons were tested in Israel, in simulated export conditions. It was concluded that the Firmalon is capable of detecting the ripening of melons nondestructively and of sorting out the defective fruits from the export shipment. The cooperation with local industry resulted in the development of an automatic on-line prototype of the acoustic sensor that may be incorporated into the export quality control system for melons. More interesting is the development of the remote firmness sensing method for sealed CA cool-rooms, where most of the full-year fruit yield is stored for off-season consumption. Hundreds of ripening monitor systems have been installed in major fruit storage facilities and are now being evaluated by the consumers. If successful, the new method may cause a major change in long-term fruit storage technology. More uses of the acoustic test method have been considered, for monitoring fruit maturity and harvest time, testing fruit samples or each individual fruit when entering the storage facilities, packaging house and auction, and in the supermarket. This approach may result in a full line of equipment for nondestructive quality control of fruits and vegetables, from the orchard or the greenhouse, through the entire sorting, grading and storage process, up to the consumer table. The developed technology offers a tool to determine the maturity of the fruits nondestructively by monitoring their acoustic response to mechanical impulse on the tree. A special device was built and preliminarily tested in mango fruit. More development is needed to develop a portable, hand-operated sensing method for this purpose. In North Carolina: An analysis method based on an Auto-Regressive (AR) model was developed for detecting the first resonance of fruit from their response to mechanical impulse. The algorithm included a routine that detects the first resonant frequency from as many sensors as possible. Experiments on Red Delicious apples were performed and their firmness was determined. The AR method allowed the detection of the first resonance. The method could be fast enough to be utilized in a real-time sorting machine. Yet, further study is needed to improve the search algorithm of the method. An impact contact-pressure measurement system and a Neural Network (NN) identification method were developed to investigate the relationships between surface pressure distributions on selected fruits and their respective internal textural qualities. A piezoelectric dot-matrix pressure transducer was developed for the purpose of acquiring time-sampled pressure profiles during impact. The acquired data was transferred into a personal computer and accurate visualizations of the animated data were presented. A preliminary test with 10 apples was performed. Measurements were made by the contact-pressure transducer in two different positions. Complementary measurements were made on the same apples by using the Firmalon and Magness Taylor (MT) testers. A three-layer neural network was designed. 2/3 of the contact-pressure data were used as training input data and the corresponding MT data as training target data. The remaining data were used as NN checking data. Six samples randomly chosen from the ten measured samples and their corresponding Firmalon values were used as the NN training and target data, respectively.
The remaining four samples' data were input to the NN. The NN results were consistent with the Firmness Tester values. Thus, if more training data were obtained, the output should be more accurate. In addition, the Firmness Tester values do not agree with the MT firmness tester values. The NN method developed in this study appears to be a useful tool to emulate the MT Firmness test results without destroying the apple samples. To get a more accurate estimation of MT firmness, a much larger training data set is required. When the larger sensitive area of the pressure sensor being developed in this project becomes available, the entire contact 'shape' will provide additional information and the neural network results would be more accurate. It has been shown that the impact information can be utilized in the determination of internal quality factors of fruit. Until now,
APA, Harvard, Vancouver, ISO, and other styles
5

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one in Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criteria was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. 
Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information. An improved model for color sorting which is stable and does not require recalibration for each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles
6

Weinberg, Zwi G., Adegbola Adesogan, Itzhak Mizrahi, Shlomo Sela, Kwnag Jeong, and Diwakar Vyas. effect of selected lactic acid bacteria on the microbial composition and on the survival of pathogens in the rumen in context with their probiotic effects on ruminants. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598162.bard.

Full text
Abstract:
This research project was performed in the context of the apparent probiotic effect of selected lactic acid bacteria (LAB) silage inoculants on the performance of ruminants (improved feed intake, faster live-weight gain, higher milk yields and improved feed efficiency). The overall objective was to find out how LAB affect ruminant performance. The project included several “chapters” as follows: 1. The effect of LAB silage inoculants on the survival of detrimental bacteria in rumen fluid, an in vitro study (Weinberg et al., The Volcani Center). An in vitro model was developed to study the interaction between selected LAB and an E. coli strain tagged with green fluorescent protein (GFP) in buffered RF. Results indicated that both LAB inoculants and E. coli survived in the RF for several days; neither LAB inoculants nor LAB-treated silages affected the survival of E. coli in rumen fluid in vitro. 2. The effect of feeding baled wheat silages treated with or without three selected LAB silage inoculants on the performance of high-lactating cows (Weinberg et al., The Volcani Center). Treatments included control (no additive), Lactobacillus buchneri 40788 (LB), Lactobacillus plantarum MTD1 40027 (LP) and Pediococcus pentosaceus 30168 (PP), each applied at 10⁶ cfu/g FM. The silages were included in the TMR of 32 high-milking Holstein cows in a controlled feeding experiment. All baled silages were of good quality. The LB silage had the numerically highest acetic acid content and was the most stable upon aerobic exposure. The cows fed the LB silages had the highest daily milk yields, percent milk fat and protein. 3. The microbiome of baled wheat silages and changes during ensiling of wheat and corn (Sela et al., The Volcani Center). The bacterial community of the baled silages was composed mainly of two genera, dominated by Lactobacillus and Clostridium_sensu_stricto_12, with 300 other genera at very low abundance. The fungal community was composed mainly of two genera, dominated by Candida and Monascus, with 20 other genera at very low abundance. In addition, changes in the microbiome during ensiling of wheat and corn with and without the addition of L. plantarum MTD1 were studied in mini-silos. Overall, 236 bacterial genera were identified in the fresh corn, but after 3 months Lactobacillus outnumbered all other species by acquiring 95% of relative abundance. The wheat silage samples are still under analysis. 4. The effect of applying LAB inoculants at ensiling on survival of E. coli O157:H7 in alfalfa and corn silages (Adesogan et al., University of Florida). E. coli (10⁵ cfu/g) was applied to fresh alfalfa and corn at ensiling with or without L. plantarum or L. buchneri. The pathogen was added again after about 3 months at the beginning of an aerobic exposure period. The inoculants resulted in a faster decrease in pH compared with the control (no additives) or E. coli alone and, therefore, the pathogen was eliminated faster from these silages. After aerobic exposure the pathogen was not detected in the LAB-treated silages, whereas it was still present in the E. coli alone samples. 5. The effect of feeding corn silage treated with or without L. buchneri on shedding of E. coli O157:H7 by dairy cows (Adesogan et al., UFL). Five hundred cows from the dairy herd of the University of Florida were screened for E. coli shedding, out of which 14 low and 13 high shedders were selected. These cows were fed a total mixed ration (TMR) which was inoculated with E. coli O157:H7 for 21 days.
The TMR included corn silage treated with or without L. buchneri. The inoculated silages were more stable upon aerobic exposure than the control silages; the silage inoculant had no significant effect on any milk or cow blood parameters. However, the silage inoculant tended to reduce shedding of E. coli regardless of whether the cows were high or low shedders (p = 0.06). 6. The effect of feeding baled wheat silages treated with or without three selected LAB silage inoculants on the rumen microbiome (Mizrahi et al., BGU). Rumen fluid was sampled throughout the feeding experiment in which inoculated wheat silages were included in the rations. Microbial DNA was subsequently purified from each sample and the 16S rRNA was sequenced, thus obtaining an overview of the microbiome and its dynamic changes for each experimental treatment. We observed an increase in OTU richness in the group which received the baled silage inoculated with Lactobacillus plantarum (LP). In contrast, the group fed Lactobacillus buchneri (LB)-inoculated silage showed a significant decrease in richness. Lower OTU richness was recently associated in lactating cows with higher performance (Ben Shabat et al., 2016). No significant clustering could be observed between the different inoculation treatments and the control in non-metric multidimensional scaling, suggesting that the effect of the treatments is not the result of an overall modulation of the microbiome composition but possibly the result of more discrete interactions. Phylum-level analysis of composition also indicates that no broad changes in taxa identity and composition occurred under any treatment. A more discrete modulation could be observed in the fold change of several taxonomic groups (genus-level analysis), unique to each treatment, before and after the treatment. Of particular interest is the LB-treated group, in which several taxa significantly decreased in abundance.
APA, Harvard, Vancouver, ISO, and other styles
7

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. 
These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective.

Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence to suggest that screening participation invokes a false sense of assurance in smokers or reduces motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit.

Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia:
1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program:
   a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry
   b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants
   c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions.
Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into an LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)

Question 4: What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors, including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies (five modelling studies, one discrete choice experiment and seven articles) that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies, one from Australia and one from New Zealand, reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.

Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design.
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.
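As an editorial illustration (not drawn from the review itself), the cost-effectiveness and cost–utility modelling referred to in Question 4 typically reports an incremental cost-effectiveness ratio (ICER): the extra cost per quality-adjusted life year (QALY) gained by screening relative to no screening. The figures in this Python sketch are placeholders, not results from any cited study.

```python
def icer(cost_with_screening: float, cost_without_screening: float,
         qalys_with_screening: float, qalys_without_screening: float) -> float:
    """Incremental cost-effectiveness ratio: additional cost per additional QALY
    gained by the screening strategy relative to the comparator."""
    return (cost_with_screening - cost_without_screening) / (
        qalys_with_screening - qalys_without_screening)

# Placeholder numbers: screening costs $2,400 more per person and yields 0.03 extra QALYs,
# i.e. roughly $80,000 per QALY gained, which would then be compared against a
# willingness-to-pay threshold to judge cost-effectiveness.
print(round(icer(5400.0, 3000.0, 1.53, 1.50)))  # 80000
```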
APA, Harvard, Vancouver, ISO, and other styles
8

Remediating Troubled Waters: Total Maximum Daily Load Development and Implementation. American Society of Civil Engineers, October 2022. http://dx.doi.org/10.1061/infographic.000008.

Full text
Abstract:
The 1972 US Clean Water Act and its amendments define the total maximum daily load (TMDL). A TMDL indicates the maximum amount of a given pollutant that a waterbody can receive and still meet water quality standards.
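As a brief, hedged illustration (not part of the ASCE abstract): under the US EPA's conventional formulation, a TMDL is apportioned as the sum of wasteload allocations (point sources), load allocations (nonpoint sources and natural background) and a margin of safety. The Python sketch below, with hypothetical numbers, simply checks an allocation against that budget.

```python
def allocation_within_tmdl(tmdl: float,
                           wasteload_allocations: list,
                           load_allocations: list,
                           margin_of_safety: float) -> bool:
    """Check that point-source (WLA) and nonpoint-source (LA) allocations plus a
    margin of safety do not exceed the total maximum daily load (all in kg/day)."""
    return sum(wasteload_allocations) + sum(load_allocations) + margin_of_safety <= tmdl

# Hypothetical example: a 100 kg/day TMDL split between two permitted dischargers,
# diffuse runoff, and a 10 kg/day margin of safety.
print(allocation_within_tmdl(100.0, [30.0, 25.0], [35.0], 10.0))  # True
```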
APA, Harvard, Vancouver, ISO, and other styles