Academic literature on the topic 'Reproducibility and Representativeness'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reproducibility and Representativeness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Reproducibility and Representativeness"

1. Samar, Thaer, Alejandro Bellogín, and Arjen P. de Vries. "The strange case of reproducibility versus representativeness in contextual suggestion test collections." Information Retrieval Journal 19, no. 3 (December 28, 2015): 230–55. http://dx.doi.org/10.1007/s10791-015-9276-9.

2. Bozkurt, Selen, Eli M. Cahan, Martin G. Seneviratne, Ran Sun, Juan A. Lossio-Ventura, John P. A. Ioannidis, and Tina Hernandez-Boussard. "Reporting of demographic data and representativeness in machine learning models using electronic health records." Journal of the American Medical Informatics Association 27, no. 12 (September 16, 2020): 1878–84. http://dx.doi.org/10.1093/jamia/ocaa164.

Abstract:
Objective: The development of machine learning (ML) algorithms to address a variety of issues faced in clinical practice has increased rapidly. However, questions have arisen regarding biases in their development that can affect their applicability in specific populations. We sought to evaluate whether studies developing ML models from electronic health record (EHR) data report sufficient demographic data on the study populations to demonstrate representativeness and reproducibility. Materials and Methods: We searched PubMed for articles applying ML models to improve clinical decision-making using EHR data. We limited our search to papers published between 2015 and 2019. Results: Across the 164 studies reviewed, demographic variables were inconsistently reported and/or included as model inputs. Race/ethnicity was not reported in 64% of studies; gender and age were not reported in 24% and 21% of studies, respectively. Socioeconomic status of the population was not reported in 92% of studies. Studies that mentioned these variables often did not report whether they were included as model inputs. Few models (12%) were validated using external populations. Few studies (17%) open-sourced their code. Populations in the ML studies included higher proportions of White and Black subjects, yet fewer Hispanic subjects, compared to the general US population. Discussion: The demographic characteristics of study populations are poorly reported in the ML literature based on EHR data. Demographic representativeness in training data and model transparency are necessary to ensure that ML models are deployed in an equitable and reproducible manner. Wider adoption of reporting guidelines is warranted to improve representativeness and reproducibility.

3. Araújo dos Santos, Ana Carolina, Gabriel Antônio Batista Nascimento, Natália Barbosa da Silva, Victor Laurent Sampaio Ferreira, Yan Valdez Santos Rodrigues, Ana Lúcia Barbosa, Ewerton Emmanuel Da Silva Calixto, and Fernando Luiz Pellegrini Pessoa. "Mathematical Modeling of the Extraction Process Essential Oils Schinus terebinthifolius Raddi Using Supercritical Fluids." Journal of Bioengineering and Technology Applied to Health 2, no. 4 (February 4, 2020): 130–35. http://dx.doi.org/10.34178/jbth.v2i4.91.

Abstract:
Schinus terebinthifolius Raddi is a plant rich in nutrients that is used medicinally and industrially. Supercritical oil extraction from S. terebinthifolius can yield higher value-added products. Mathematical models (Sovová and Esquível) are used to describe the behavior of supercritical extractions. This study compares the two models in terms of yield under conditions of 223 bar and 50 °C. We observed that the model proposed by Sovová provided good reproducibility and representativeness.
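
For orientation, Sovová's model describes distinct extraction periods governed by broken and intact plant cells, while the Esquível model is a simple empirical yield curve. A commonly cited form of the Esquível equation, quoted here from the general supercritical-extraction literature rather than from this paper, is:

```latex
% Esquível empirical model: the cumulative extraction yield e(t)
% approaches the limiting yield e_lim hyperbolically; b is a fitted
% parameter with units of time.
e(t) = e_{\lim}\,\frac{t}{b + t}
```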

4. Makowski, Dominique, An Shu Te, Tam Pham, Zen Juen Lau, and S. H. Annabel Chen. "The Structure of Chaos: An Empirical Comparison of Fractal Physiology Complexity Indices Using NeuroKit2." Entropy 24, no. 8 (July 27, 2022): 1036. http://dx.doi.org/10.3390/e24081036.

Abstract:
Complexity quantification, through entropy, information theory, and fractal dimension indices, is gaining renewed traction in psychophysiology, as new measures with promising qualities emerge from computational and mathematical advances. Unfortunately, few studies compare the relationships and objective performance of the plethora of existing metrics, in turn hindering reproducibility, replicability, consistency, and clarity in the field. Using the NeuroKit2 Python software, we computed a list of 112 (predominantly used) complexity indices on signals varying in their characteristics (noise, length, and frequency spectrum). We then systematically compared the indices by their computational weight, their representativeness of a multidimensional space of latent dimensions, and their empirical proximity to other indices. Based on these considerations, we propose that a selection of 12 indices, together representing 85.97% of the total variance of all indices, might offer a parsimonious and complementary choice with regard to the quantification of the complexity of time series. Our selection includes CWPEn, Line Length (LL), BubbEn, MSWPEn, MFDFA (Max), Hjorth Complexity, SVDEn, MFDFA (Width), MFDFA (Mean), MFDFA (Peak), MFDFA (Fluctuation), and AttEn. Elements of consideration for alternative subsets are discussed, and the data, analysis scripts, and code for the figures are open-source.
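
Several of the selected indices can be computed directly with NeuroKit2. Below is a minimal sketch, not the paper's analysis script; the function names follow the NeuroKit2 API as documented, but exact signatures and return shapes (most complexity functions return a value plus an info dictionary) should be verified against the installed version.

```python
import neurokit2 as nk

# Simulate a 10 s test signal with two frequency components plus noise
signal = nk.signal_simulate(duration=10, frequency=[5, 12], noise=0.1)

# Most NeuroKit2 complexity functions return (value, info_dict)
svden, _ = nk.entropy_svd(signal, delay=1, dimension=2)      # SVDEn
sampen, _ = nk.entropy_sample(signal, delay=1, dimension=2)  # for comparison
hjorth, _ = nk.complexity_hjorth(signal)                     # Hjorth Complexity
mfdfa, _ = nk.fractal_dfa(signal, multifractal=True)         # MFDFA features

print(f"SVDEn={svden:.3f}  SampEn={sampen:.3f}  Hjorth={hjorth:.3f}")
```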

5. Kibugu, James, Raymond Mdachi, Leonard Munga, David Mburu, Thomas Whitaker, Thu P. Huynh, Delia Grace, and Johanna F. Lindahl. "Improved Sample Selection and Preparation Methods for Sampling Plans Used to Facilitate Rapid and Reliable Estimation of Aflatoxin in Chicken Feed." Toxins 13, no. 3 (March 16, 2021): 216. http://dx.doi.org/10.3390/toxins13030216.

Abstract:
Aflatoxin B1 (AFB1), a toxic fungal metabolite associated with human and animal diseases, is a natural contaminant encountered in agricultural commodities, food, and feed. The heterogeneity of AFB1 makes risk estimation a challenge. To overcome this, novel sample selection, preparation, and extraction steps were designed for representative sampling of chicken feed. Accuracy, precision, limits of detection and quantification, linearity, robustness, and ruggedness were used as performance criteria to validate this modification, with the Horwitz function used to evaluate precision. A modified sampling protocol that ensured representativeness is documented, including sample selection, sampling tools, random procedures, the minimum size of field-collected aggregate samples (primary sampling), and procedures for mass reduction to a 2 kg laboratory sample (secondary sampling), a 25 g test portion (tertiary sampling), and 1.3 g analytical samples (quaternary sampling). The improved coning and quartering procedure described herein (for secondary and tertiary sampling) has acceptable precision, with a Horwitz ratio (HorRat = 0.3) suitable for splitting 25 g feed aliquots from laboratory samples (tertiary sampling). The water slurrying innovation (quaternary sampling) increased aflatoxin extraction efficiency to 95.1% through reduction of both bias (−4.95) and variability of recovery (1.2–1.4), and improved both intra-laboratory precision (HorRat = 1.2–1.5) and within-laboratory reproducibility (HorRat = 0.9–1.3). Optimal extraction conditions are documented. The improved procedure showed satisfactory performance, good field applicability, and reduced sample analysis turnaround time.
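
For readers unfamiliar with the criterion: the Horwitz ratio compares the observed relative standard deviation with the value predicted by the Horwitz function at the analyte's concentration. The standard form, stated here from general analytical-chemistry usage rather than quoted from the paper, is:

```latex
% Horwitz-predicted relative standard deviation, with C the analyte
% concentration expressed as a dimensionless mass fraction
\mathrm{PRSD}(\%) = 2^{\,1 - 0.5\log_{10} C} \approx 2\,C^{-0.1505}
% Horwitz ratio: values near 1 indicate precision typical of collaborative
% method-performance studies; HorRat <= 2 is conventionally acceptable
\mathrm{HorRat} = \frac{\mathrm{RSD}_{\mathrm{observed}}(\%)}{\mathrm{PRSD}(\%)}
```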

6. Didona, Diego, Nikolas Ioannou, Radu Stoica, and Kornilios Kourtis. "Toward a better understanding and evaluation of tree structures on flash SSDs." Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 364–77. http://dx.doi.org/10.14778/3430915.3430926.

Abstract:
Solid-state drives (SSDs) are extensively used to deploy persistent data stores, as they provide low latency random access, high write throughput, high data density, and low cost. Tree-based data structures are widely used to build persistent data stores, and indeed they lie at the backbone of many of the data management systems used in production and research today. We show that benchmarking a persistent tree-based data structure on an SSD is a complex process, which may easily incur subtle pitfalls that can lead to an inaccurate performance assessment. At a high-level, these pitfalls stem from the interaction of complex software running on complex hardware. On the one hand, tree structures implement internal operations that have non-trivial effects on performance. On the other hand, SSDs employ firmware logic to deal with the idiosyncrasies of the underlying flash memory, which are well known to also lead to complex performance dynamics. We identify seven benchmarking pitfalls using RocksDB and WiredTiger, two widespread implementations of an LSM-Tree and a B+Tree, respectively. We show that such pitfalls can lead to incorrect measurements of key performance indicators, hinder the reproducibility and the representativeness of the results, and lead to suboptimal deployments in production environments. We also provide guidelines on how to avoid these pitfalls to obtain more reliable performance measurements, and to perform more thorough and fair comparisons among different design points.
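
The pitfalls themselves are enumerated in the paper. As one illustration of the kind of guideline the authors advocate, here is a minimal sketch of deferring measurement until throughput reaches steady state, a common flash-SSD benchmarking precaution; the windowing scheme, threshold, and function names are illustrative assumptions, not the paper's protocol.

```python
import random
import statistics

def measure_at_steady_state(measure_window, n_windows=5, cv_threshold=0.05,
                            max_windows=200):
    """Call measure_window() (ops/s over one interval) repeatedly; return the
    last n_windows measurements once their coefficient of variation drops
    below cv_threshold, i.e., once throughput has stabilized."""
    history = []
    for _ in range(max_windows):
        history.append(measure_window())
        recent = history[-n_windows:]
        if len(recent) == n_windows:
            cv = statistics.stdev(recent) / statistics.mean(recent)
            if cv < cv_threshold:
                return recent
    raise RuntimeError("throughput never stabilized; precondition the device")

# Toy stand-in for a real workload: throughput decays toward steady state
ticks = [0]
def fake_window():
    ticks[0] += 1
    return 10_000 / (1 + 5 / ticks[0]) + random.gauss(0, 50)

print(measure_at_steady_state(fake_window))
```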

7. Aboulela, Amr, Matthieu Peyre Lavigne, Amaury Buvignier, Marlène Fourré, Maud Schiettekatte, Tony Pons, Cédric Patapy, et al. "Laboratory Test to Evaluate the Resistance of Cementitious Materials to Biodeterioration in Sewer Network Conditions." Materials 14, no. 3 (February 2, 2021): 686. http://dx.doi.org/10.3390/ma14030686.

Abstract:
The biodeterioration of cementitious materials in sewer networks has become a major economic, ecological, and public health issue. Establishing a suitable standardized test is essential if sustainable construction materials are to be developed and qualified for sewerage environments. Since purely chemical tests have proven not to be representative of the actual deterioration phenomena in real sewer conditions, a biological test, named the Biogenic Acid Concrete (BAC) test, was developed at the University of Toulouse to reproduce the biological reactions involved in the process of concrete biodeterioration in sewers. The test consists of trickling a solution containing a safe reduced sulfur source onto the surface of cementitious substrates previously covered with a high-diversity microbial consortium. In these conditions, a sulfur-oxidizing metabolism naturally develops in the biofilm and leads to the production of biogenic sulfuric acid on the surface of the material. The representativeness of the test in terms of deterioration mechanisms has been validated in previous studies. A wide range of cementitious materials have been exposed to the biodeterioration test over half a decade. On the basis of this large database and the expertise gained, the purpose of this paper is (i) to propose a simple and robust performance criterion for the test (standardized leached calcium as a function of sulfate produced by the biofilm), and (ii) to demonstrate the repeatability, reproducibility, and discriminability of the test method. In only a 3-month period, the test was able to highlight the differences in performance between common cement-based materials (CEM I, CEM III, and CEM V) and special calcium aluminate cement (CAC) binders with different natures of aggregate (natural silica and synthetic calcium aluminate). The proposed performance indicator (relative standardized leached calcium) allowed the materials to be classified according to their resistance to biogenic acid attack in sewer conditions. The repeatability of the test was confirmed using three different specimens of the same material within the same experiment, and the reproducibility of the results was demonstrated by standardizing the results against a reference material from 5 different test campaigns. Furthermore, developing post-test processing and calculation methods constituted a first step toward a standardized test protocol.
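
The paper defines the exact normalization; as a hypothetical reading of the indicator, stated as an assumption rather than the authors' formula, the relative standardized leached calcium can be understood as calcium leached per unit of biogenic sulfate, normalized against the same ratio for the reference material:

```latex
% Hypothetical form of the relative standardized leached calcium (rSLC):
% leached calcium per unit of sulfate produced by the biofilm, normalized
% by the same quantity measured on a reference material in each campaign
\mathrm{rSLC} =
  \frac{\left( m_{\mathrm{Ca}} / m_{\mathrm{SO_4}} \right)_{\mathrm{material}}}
       {\left( m_{\mathrm{Ca}} / m_{\mathrm{SO_4}} \right)_{\mathrm{reference}}}
```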

8. Mpouam, Serge Eugene, Jean Pierre Kilekoung Mingoas, Mohamed Moctar Mouliom Mouiche, Jean Marc Kameni Feussom, and Claude Saegerman. "Critical Systematic Review of Zoonoses and Transboundary Animal Diseases’ Prioritization in Africa." Pathogens 10, no. 8 (August 3, 2021): 976. http://dx.doi.org/10.3390/pathogens10080976.

Abstract:
Background: Disease prioritization aims to enhance resource use efficiency in human and animal health systems' preparedness for, and response to, the most important problems, so as to optimize beneficial outcomes. In sub-Saharan Africa (SSA), several prioritizations of zoonoses and transboundary animal diseases (TADs) have been implemented at different scales to characterize potential disease impacts. Method and principal findings: In this systematic review, we analyze the methodologies used, their outcomes, and their relevance, discussing the criteria required to align decision-makers' perceptions of impacts with those of other stakeholders across the different prioritizations in SSA. In general, the sectorial representativeness of stakeholders in processes implemented with the support of international partners differed only slightly, with local stakeholders often absent. Whatever the tool used, the prioritized zoonoses were broadly similar, owing to the structured way these tools assess decision-makers' preferences through value trade-offs between criteria while ensuring transparency and reproducibility. However, processes that involved field practitioners and farmers produced different outcomes from those restricted to decision makers and experts: the latter were more sensitive to infectious TADs, while the former raised parasitic disease constraints. In this context, multicriteria decision analysis (MCDA)-based prioritizations of zoonoses and TADs involving a balanced participation of stakeholders might help bridge these divergences, whatever the scale. Conclusion and significance: Prioritization processes were important steps toward building and harmonizing technical laboratory and surveillance networks to coordinate projects addressing priority zoonoses and TADs at the country and/or sub-regional level. Those processes should be enhanced.
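
As a hedged illustration of the weighted-sum scoring at the heart of most MCDA-based prioritizations: the diseases, criteria, weights, and scores below are invented for illustration and do not come from the review.

```python
# Toy MCDA prioritization: weighted sum of stakeholder-assigned scores.
# All names and numbers are hypothetical.
criteria_weights = {"human_burden": 0.4, "animal_burden": 0.3,
                    "economic_impact": 0.2, "control_feasibility": 0.1}

disease_scores = {  # each criterion scored 0-5 by stakeholders
    "Rabies":        {"human_burden": 5, "animal_burden": 3,
                      "economic_impact": 2, "control_feasibility": 4},
    "FMD":           {"human_burden": 0, "animal_burden": 5,
                      "economic_impact": 5, "control_feasibility": 3},
    "Cysticercosis": {"human_burden": 3, "animal_burden": 2,
                      "economic_impact": 3, "control_feasibility": 2},
}

# Rank diseases by their weighted total score, highest first
ranking = sorted(
    ((sum(criteria_weights[c] * scores[c] for c in criteria_weights), name)
     for name, scores in disease_scores.items()),
    reverse=True,
)
for score, disease in ranking:
    print(f"{disease}: {score:.2f}")
```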

9. Lüning, Sebastian, and Philipp Lengsfeld. "How Reliable Are Global Temperature Reconstructions of the Common Era?" Earth 3, no. 1 (March 3, 2022): 401–8. http://dx.doi.org/10.3390/earth3010024.

Abstract:
Global mean annual temperature has increased by more than 1 °C during the past 150 years, as documented by thermometer measurements. Such observational data are, unfortunately, not available for the pre-industrial period of the Common Era (CE), for which the climate development is reconstructed using various types of palaeoclimatological proxies. In this analysis, we compared seven prominent hemispheric and global temperature reconstructions for the past 2000 years (T2k), which differed from each other in some segments by more than 0.5 °C. Whilst some T2k show negligible pre-industrial climate variability (“hockey sticks”), others suggest significant temperature fluctuations. We discuss possible sources of error and highlight three criteria that need to be considered to increase the quality and stability of future T2k reconstructions. Temperature proxy series are to be thoroughly validated with regard to (1) reproducibility, (2) seasonal stability, and (3) areal representativeness. The T2k represent key calibration data for climate models. The models need to first reproduce the reconstructed pre-industrial climate history before being validated and cleared for climate projections of the future. Precise attribution of modern warming to anthropogenic and natural causes will not be possible until T2k composites stabilize and are truly representative of a well-defined region and season. The discrepancies between the different T2k reconstructions translate directly into a major challenge with regard to the political interpretation of the climate change risk profile. As a rule of thumb, the larger/smaller the pre-industrial temperature changes, the higher/lower the natural contribution to the current warm period (CWP) will likely be, thus reducing/increasing the CO2 climate sensitivity and the expected warming until 2100.

10. Ternström, Sten, and Peter Pabon. "Voice Maps as a Tool for Understanding and Dealing with Variability in the Voice." Applied Sciences 12, no. 22 (November 9, 2022): 11353. http://dx.doi.org/10.3390/app122211353.

Abstract:
Individual acoustic and other physical metrics of vocal status have long struggled to prove their worth as clinical evidence. While combinations of metrics or “features” are now being intensely explored using data analytics methods, there is a risk that explainability and insight will suffer. The voice mapping paradigm discards the temporal dimension of vocal productions and uses fundamental frequency (fo) and sound pressure level (SPL) as independent control variables to implement a dense grid of measurement points over a relevant voice range. Such mapping visualizes how most physical voice metrics are greatly affected by fo and SPL, and more so individually than has been generally recognized. It is demonstrated that if fo and SPL are not controlled for during task elicitation, repeated measurements will generate “elicitation noise”, which can easily be large enough to obscure the effect of an intervention. It is observed that, although a given metric’s dependencies on fo and SPL often are complex and/or non-linear, they tend to be systematic and reproducible in any given individual. Once such personal trends are accounted for, ordinary voice metrics can be used to assess vocal status. The momentary value of any given metric needs to be interpreted in the context of the individual’s voice range, and voice mapping makes this possible. Examples are given of how voice mapping can be used to quantify voice variability, to eliminate elicitation noise, to improve the reproducibility and representativeness of already established metrics of the voice, and to assess reliably even subtle effects of interventions. Understanding variability at this level of detail will shed more light on the interdependent mechanisms of voice production, and facilitate progress toward more reliable objective assessments of voices across therapy or training.
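
A minimal sketch of the mapping idea, not the authors' software: bin repeated metric samples over a dense fo × SPL grid and average within each cell, so any metric reading is interpreted relative to where in the individual's voice range it was produced. The grid bounds, cell sizes, and the simulated metric are illustrative assumptions.

```python
import numpy as np

# Simulated measurements: fundamental frequency fo (Hz), sound pressure
# level SPL (dB), and some voice metric sampled across a voice range
rng = np.random.default_rng(0)
fo = rng.uniform(100, 400, 5000)    # Hz
spl = rng.uniform(55, 90, 5000)     # dB
metric = 10 + 0.01 * (fo - 100) + 0.05 * (spl - 55) + rng.normal(0, 0.5, 5000)

# Voice map: mean metric value per (fo, SPL) cell of a dense grid
fo_edges = np.linspace(100, 400, 31)
spl_edges = np.linspace(55, 90, 36)
sums, _, _ = np.histogram2d(fo, spl, [fo_edges, spl_edges], weights=metric)
counts, _, _ = np.histogram2d(fo, spl, [fo_edges, spl_edges])
voice_map = np.divide(sums, counts,
                      out=np.full_like(sums, np.nan), where=counts > 0)

print(voice_map.shape)  # (30, 35): one mean value per grid cell
```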