
Journal articles on the topic 'Reproducibility of scenario'



Consult the top 50 journal articles for your research on the topic 'Reproducibility of scenario.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Simkus, Andrea, Frank PA Coolen, Tahani Coolen-Maturi, Natasha A. Karp, and Claus Bendtsen. "Statistical reproducibility for pairwise t-tests in pharmaceutical research." Statistical Methods in Medical Research 31, no. 4 (December 2, 2021): 673–88. http://dx.doi.org/10.1177/09622802211041765.

Abstract:
This paper investigates statistical reproducibility of the t-test. We formulate reproducibility as a predictive inference problem and apply the nonparametric predictive inference method. Within our research framework, statistical reproducibility provides inference on the probability that the same test outcome would be reached if the test were repeated under identical conditions. We present a nonparametric predictive inference algorithm to calculate the reproducibility of the t-test and then use simulations to explore reproducibility under both the null and alternative hypotheses. We then apply nonparametric predictive inference reproducibility to a real-life scenario of a preclinical experiment, which involves multiple pairwise comparisons of test groups, where different groups are given a different concentration of a drug. The aim of the experiment is to decide which concentration of the drug is most effective. In both the simulations and the application scenario, we study the relationship between reproducibility and two test statistics, Cohen's d and the p-value. We also compare the reproducibility of the t-test with the reproducibility of the Wilcoxon Mann–Whitney test. Finally, we examine reproducibility for the final decision of choosing a particular dose in the multiple pairwise comparisons scenario. This paper presents advances on the topic of test reproducibility with relevance for tests used in pharmaceutical research.
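A minimal illustration of the reproducibility idea discussed above (not the paper's nonparametric predictive inference algorithm): the Python sketch below estimates how often a repeated two-sample t-test would reach the same reject/fail-to-reject decision, using a parametric bootstrap, and also reports Cohen's d. The data, sample sizes, and significance level are invented for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def cohens_d(a, b):
    # Cohen's d with a pooled standard deviation
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def ttest_reproducibility(a, b, alpha=0.05, n_rep=10000):
    # Fraction of simulated repetitions that reach the original test decision
    original_reject = stats.ttest_ind(a, b).pvalue < alpha
    same = 0
    for _ in range(n_rep):
        a_new = rng.normal(a.mean(), a.std(ddof=1), len(a))
        b_new = rng.normal(b.mean(), b.std(ddof=1), len(b))
        if (stats.ttest_ind(a_new, b_new).pvalue < alpha) == original_reject:
            same += 1
    return same / n_rep

control = rng.normal(10.0, 2.0, 12)
treated = rng.normal(11.5, 2.0, 12)
print("Cohen's d:", round(cohens_d(control, treated), 2))
print("estimated reproducibility:", ttest_reproducibility(control, treated))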
2

Fary, Camdon, Dean McKenzie, and Richard de Steiger. "Reproducibility of an Intraoperative Pressure Sensor in Total Knee Replacement." Sensors 21, no. 22 (November 18, 2021): 7679. http://dx.doi.org/10.3390/s21227679.

Abstract:
Appropriate soft tissue tension in total knee replacement (TKR) is an important factor for a successful outcome. The purpose of our study was to assess both the reproducibility of a modern intraoperative pressure sensor (IOP) and whether a surgeon could unconsciously influence the measurement. A consecutive series of 80 TKRs was assessed with an IOP between January 2018 and December 2020. In the first scenario, two blinded sequential measurements in 48 patients were taken; in a second scenario, an initial blinded measurement and a subsequent unblinded measurement in 32 patients were taken while looking at the sensor monitor screen. Reproducibility was assessed by intraclass correlation coefficients (ICCs). In the first scenario, the ICC ranged from 0.83 to 0.90, and in the second scenario it ranged from 0.80 to 0.90. All ICCs were 0.80 or higher, indicating reproducibility when using an IOP and that a surgeon may not unconsciously influence the measurement. The use of a modern IOP to measure soft tissue tension in TKRs is a reproducible technique. A surgeon observing the measurements while performing IOP may not significantly influence the result. An IOP gives additional information that the surgeon can use to optimize outcomes in TKR.
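For readers who want to reproduce this kind of agreement analysis, the sketch below computes a two-way intraclass correlation coefficient (the common ICC(2,1), absolute-agreement, single-measurement form) directly from a ratings matrix. The abstract does not state which ICC variant was used, so the choice of formula and the numbers are assumptions for illustration only.

import numpy as np

def icc_2_1(x):
    # Two-way random-effects, absolute-agreement, single-measurement ICC
    x = np.asarray(x, dtype=float)
    n, k = x.shape                                          # n subjects (knees), k repeated readings
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()     # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()     # between readings
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# five knees, two sequential pressure readings each (made-up values)
readings = [[38, 40], [55, 54], [47, 50], [62, 60], [41, 43]]
print("ICC(2,1):", round(icc_2_1(readings), 3))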
3

Hearn, Jeff. "Sexualities, organizations and organization sexualities: Future scenarios and the impact of socio-technologies (a transnational perspective from the global ‘north’)." Organization 21, no. 3 (February 24, 2014): 400–420. http://dx.doi.org/10.1177/1350508413519764.

Abstract:
The article opens by briefly reviewing studies of sexuality in and around organizations from the 1970s. These studies showed considerable theoretical, empirical and conceptual development, as in the concept of organization sexuality. Building on this, the article’s first task is to analyse alternative future scenarios for organization sexualities, by way of changing intersections of gender, sexuality and organizational forms. Possible gendered future scenarios are outlined based on, first, gender equality/inequality and, second, gender similarity/difference between women, men and further genders: hyper-patriarchy scenario—men and women becoming more divergent, with greater inequality; late capitalist gender scenario—genders becoming more convergent, with greater inequality; bi-polar scenario—men and women becoming more divergent, with greater equality; postgender scenario—genders becoming more convergent, with greater equality. Somewhat similar scenarios for organization sexualities are elaborated in terms of gender/sexual equality and inequality and sexual/gender similarity and difference: heteropatriarchies scenario—greater sexual/gender difference and greater sexual or sexual/gender inequality; late capitalist sexual scenario—greater sexual/gender similarity and greater sexual or gender/sexual inequality; sexual differentiation scenario—greater sexual/gender difference and greater sexual or sexual/gender equality; sexual blurring scenario—greater sexual/gender similarity and greater sexual or sexual/gender equality. The article’s second task is to address the impact of globalizations and transnationalizations, specifically information and communication technologies and other socio-technologies, on future scenarios of organization sexualities. The characteristic affordances of ICTs—technological control, virtual reproducibility, conditional communality, unfinished undecidability—are mapped onto the four scenarios above and the implications outlined.
4

Shu, Lele, Paul Ullrich, Xianhong Meng, Christopher Duffy, Hao Chen, and Zhaoguo Li. "rSHUD v2.0: advancing the Simulator for Hydrologic Unstructured Domains and unstructured hydrological modeling in the R environment." Geoscientific Model Development 17, no. 2 (January 19, 2024): 497–527. http://dx.doi.org/10.5194/gmd-17-497-2024.

Abstract:
Hydrological modeling is a crucial component in hydrology research, particularly for projecting future scenarios. However, achieving reproducibility and automation in distributed hydrological modeling research for modeling, simulation, and analysis is challenging. This paper introduces rSHUD v2.0, an innovative, open-source toolkit developed in the R environment to enhance the deployment and analysis of the Simulator for Hydrologic Unstructured Domains (SHUD). The SHUD is an integrated surface–subsurface hydrological model that employs a finite-volume method to simulate hydrological processes at various scales. The rSHUD toolkit includes pre- and post-processing tools, facilitating reproducibility and automation in hydrological modeling. The utility of rSHUD is demonstrated through case studies of the Shale Hills Critical Zone Observatory in the USA and the Waerma watershed in China. The rSHUD toolkit's ability to quickly and automatically deploy models while ensuring reproducibility has facilitated the implementation of the Global Hydrological Data Cloud (https://ghdc.ac.cn, last access: 1 September 2023), a platform for automatic data processing and model deployment. This work represents a significant advancement in hydrological modeling, with implications for future scenario projections and spatial analysis.
5

Hafizi, Hamed, and Ali Arda Sorman. "Integrating Meteorological Forcing from Ground Observations and MSWX Dataset for Streamflow Prediction under Multiple Parameterization Scenarios." Water 14, no. 17 (September 1, 2022): 2721. http://dx.doi.org/10.3390/w14172721.

Abstract:
Precipitation and near-surface air temperatures are significant meteorological forcings for streamflow prediction, where most basins are partially or fully data-scarce in many parts of the world. This study aims to evaluate the consistency of MSWXv100-based precipitation, temperatures, and estimated potential evapotranspiration (PET) by direct comparison with observed measurements and by utilizing an independent combination of the MSWXv100 dataset and observed data for streamflow prediction under four distinct scenarios considering model parameter and output uncertainties. Initially, the model is calibrated/validated entirely based on observed data (Scenario 1), whereas for the second calibration/validation the observed precipitation is replaced by MSWXv100 precipitation while the daily observed temperature and PET remain unchanged (Scenario 2). Furthermore, the model calibration/validation is done by considering observed precipitation and MSWXv100-based temperature and PET (Scenario 3), and finally, the model is calibrated/validated entirely based on the MSWXv100 dataset (Scenario 4). The Kling–Gupta Efficiency (KGE) and its components (correlation, ratio of bias, and variability ratio) are utilized for direct comparison, and the Hanssen–Kuiper (HK) skill score is employed to evaluate the detectability strength of MSWXv100 precipitation for different precipitation intensities. Moreover, the hydrologic utility of the MSWXv100 dataset under the four distinct scenarios is tested by exploiting a conceptual rainfall-runoff model under KGE and Nash–Sutcliffe Efficiency (NSE) metrics. The results indicate that each scenario depicts high streamflow reproducibility and that, regardless of the other meteorological forcings, utilizing observed precipitation as one of the model inputs (Scenarios 1 and 3) shows better model performance (KGE = 0.85) than using MSWXv100-based precipitation (Scenarios 2 and 4, KGE = 0.78–0.80).
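To make the evaluation metrics named above concrete, the short sketch below implements the Nash–Sutcliffe Efficiency and one common formulation of the Kling–Gupta Efficiency (the study may use the modified KGE with a coefficient-of-variation ratio); the observed and simulated values are invented.

import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 minus error variance over observed variance
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def kge(obs, sim):
    # Kling-Gupta Efficiency (2009 form) from its three components
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]              # correlation component
    beta = sim.mean() / obs.mean()               # bias ratio
    alpha = sim.std(ddof=1) / obs.std(ddof=1)    # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

observed  = [3.1, 4.0, 6.5, 9.8, 7.2, 5.0]
simulated = [2.8, 4.4, 6.0, 9.1, 7.9, 5.3]
print(f"NSE = {nse(observed, simulated):.2f}, KGE = {kge(observed, simulated):.2f}")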
6

Mazzorana, B., J. Hübl, and S. Fuchs. "Improving risk assessment by defining consistent and reliable system scenarios." Natural Hazards and Earth System Sciences 9, no. 1 (February 17, 2009): 145–59. http://dx.doi.org/10.5194/nhess-9-145-2009.

Abstract:
During the entire procedure of risk assessment for hydrologic hazards, the selection of consistent and reliable scenarios, constructed in a strictly systematic way, is fundamental for the quality and reproducibility of the results. However, subjective assumptions on relevant impact variables such as sediment transport intensity on the system loading side and weak point response mechanisms repeatedly cause biases in the results, and consequently affect transparency and required quality standards. Furthermore, the system response of mitigation measures to extreme event loadings represents another key variable in hazard assessment, as well as the integral risk management including intervention planning. Formative Scenario Analysis, as a supplement to conventional risk assessment methods, is a technique to construct well-defined sets of assumptions to gain insight into a specific case and the potential system behaviour. Through two case studies, carried out (1) to analyse sediment transport dynamics in a torrent section equipped with control measures, and (2) to identify hazards induced by woody debris transport at hydraulic weak points, the applicability of the Formative Scenario Analysis technique is presented. It is argued that during scenario planning in general and with respect to integral risk management in particular, Formative Scenario Analysis allows for the development of reliable and reproducible scenarios in order to design more specifically an application framework for the sustainable assessment of natural hazards impact. The overall aim is to optimise the hazard mapping and zoning procedure by methodologically integrating quantitative and qualitative knowledge.
7

Refaee, Turkey, Zohaib Salahuddin, Yousif Widaatalla, Sergey Primakov, Henry C. Woodruff, Roland Hustinx, Felix M. Mottaghy, Abdalla Ibrahim, and Philippe Lambin. "CT Reconstruction Kernels and the Effect of Pre- and Post-Processing on the Reproducibility of Handcrafted Radiomic Features." Journal of Personalized Medicine 12, no. 4 (March 31, 2022): 553. http://dx.doi.org/10.3390/jpm12040553.

Abstract:
Handcrafted radiomics features (HRFs) are quantitative features extracted from medical images to decode biological information and improve clinical decision making. Despite the potential of the field, limitations have been identified, the most important of which is currently the sensitivity of HRFs to variations in image acquisition and reconstruction parameters. In this study, we investigated the use of Reconstruction Kernel Normalization (RKN) and ComBat harmonization to improve the reproducibility of HRFs across scans acquired with different reconstruction kernels. A set of phantom scans (n = 28) acquired on five different scanner models was analyzed. HRFs were extracted from the original scans, and scans were harmonized using the RKN method. ComBat harmonization was applied to both sets of HRFs. The reproducibility of HRFs was assessed using the concordance correlation coefficient. The difference in the number of reproducible HRFs in each scenario was assessed using McNemar’s test. The majority of HRFs were found to be sensitive to variations in the reconstruction kernels; only six HRFs were robust to such variations. The use of RKN resulted in a significant increment in the number of reproducible HRFs in 19 out of the 67 investigated scenarios (28.4%), while the ComBat technique resulted in a significant increment in 36 (53.7%) scenarios. The combination of methods resulted in a significant increment in 53 (79.1%) scenarios compared to the HRFs extracted from original images. Since the benefit of applying the harmonization methods depended on the data being harmonized, reproducibility analysis is recommended before performing radiomics analysis. For future radiomics studies incorporating images acquired with similar image acquisition and reconstruction parameters, except for the reconstruction kernels, we recommend the systematic use of the pre- and post-processing approaches (respectively, RKN and ComBat).
8

Tripathi, Veenu, and Stefano Caizzone. "Virtual Validation of In-Flight GNSS Signal Reception during Jamming for Aeronautics Applications." Aerospace 11, no. 3 (March 5, 2024): 204. http://dx.doi.org/10.3390/aerospace11030204.

Abstract:
Accurate navigation is a crucial asset for safe aviation operation. The GNSS (Global Navigation Satellite System) is set to play an increasingly important role in aviation but needs to cope with the risk of interference, possibly causing signal disruption and loss of navigation capability. It is crucial, therefore, to evaluate the impact of interference events on the GNSS system on board an aircraft, in order to plan countermeasures. This is currently achieved through expensive and time-consuming flight measurement campaigns. This paper, in contrast, presents a method developed to create a virtual digital twin capable of reconstructing the entire flight scenario (including flight dynamics, the actual antenna, and the impact of installation on the aircraft) and predicting signal and interference reception at airborne level, with clear benefits in terms of reproducibility and ease of use. Through simulations that incorporate jamming scenarios or any other interference scenarios, the effectiveness of the aircraft’s satellite navigation capability in the real environment can be evaluated, providing valuable insights for informed decision-making and system enhancement. By extension, the method shown can provide the ability to predict real-life outcomes even without the need for actual flight, enabling the analysis of different antenna-aircraft configurations in a specific interference scenario.
9

Santana-Perez, Idafen, and María S. Pérez-Hernández. "Towards Reproducibility in Scientific Workflows: An Infrastructure-Based Approach." Scientific Programming 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/243180.

Abstract:
It is commonly agreed that in silico scientific experiments should be executable and repeatable processes. Most of the current approaches for computational experiment conservation and reproducibility have focused so far on two of the main components of the experiment, namely, data and method. In this paper, we propose a new approach that addresses the third cornerstone of experimental reproducibility: the equipment. This work focuses on the equipment of a computational experiment, that is, the set of software and hardware components that are involved in the execution of a scientific workflow. In order to demonstrate the feasibility of our proposal, we describe a use case scenario on the Text Analytics domain and the application of our approach to it. From the original workflow, we document its execution environment, by means of a set of semantic models and a catalogue of resources, and generate an equivalent infrastructure for reexecuting it.
10

Lee, Daeeop, Giha Lee, Seongwon Kim, and Sungho Jung. "Future Runoff Analysis in the Mekong River Basin under a Climate Change Scenario Using Deep Learning." Water 12, no. 6 (May 29, 2020): 1556. http://dx.doi.org/10.3390/w12061556.

Abstract:
In establishing adequate climate change policies regarding water resource development and management, the most essential step is performing a rainfall-runoff analysis. To this end, although several physical models have been developed and tested in many studies, they require a complex grid-based parameterization that uses climate, topography, land-use, and geology data to simulate spatiotemporal runoff. Furthermore, physical rainfall-runoff models also suffer from uncertainty originating from insufficient data quality and quantity, unreliable parameters, and imperfect model structures. As an alternative, this study proposes a rainfall-runoff analysis system for the Kratie station on the Mekong River mainstream using the long short-term memory (LSTM) model, a data-based black-box method. Future runoff variations were simulated by applying a climate change scenario. To assess the applicability of the LSTM model, its result was compared with a runoff analysis using the Soil and Water Assessment Tool (SWAT) model. The following steps (dataset periods in parentheses) were carried out within the SWAT approach: parameter correction (2000–2005), verification (2006–2007), and prediction (2008–2100), while the LSTM model went through the process of training (1980–2005), verification (2006–2007), and prediction (2008–2100). Globally available data were fed into the algorithms, with the exception of the observed discharge and temperature data, which could not be acquired. The bias-corrected Representative Concentration Pathways (RCPs) 4.5 and 8.5 climate change scenarios were used to predict future runoff. When the reproducibility at the Kratie station for the verification period of the two models (2006–2007) was evaluated, the SWAT model showed a Nash–Sutcliffe efficiency (NSE) value of 0.84, while the LSTM model showed higher accuracy (NSE = 0.99). The trend analysis of the runoff prediction for the Kratie station over the 2008–2100 period did not show a statistically significant trend for either scenario or model. However, both models found that the annual mean flow rate in the RCP 8.5 scenario showed greater variability than in the RCP 4.5 scenario. These findings confirm that the LSTM runoff prediction presents a higher reproducibility than that of the SWAT model in simulating runoff variation according to time-series changes. Therefore, the LSTM model, which derives relatively accurate results with a small amount of data, is an effective approach to large-scale hydrologic modeling when only runoff time-series are available.
11

Huppmann, Daniel, Matthew J. Gidden, Zebedee Nicholls, Jonas Hörsch, Robin Lamboll, Paul N. Kishimoto, Thorsten Burandt, et al. "pyam: Analysis and visualisation of integrated assessment and macro-energy scenarios." Open Research Europe 1 (June 28, 2021): 74. http://dx.doi.org/10.12688/openreseurope.13633.1.

Abstract:
The open-source Python package pyam provides a suite of features and methods for the analysis, validation and visualization of reference data and scenario results generated by integrated assessment models, macro-energy tools and other frameworks in the domain of energy transition, climate change mitigation and sustainable development. It bridges the gap between scenario processing and visualisation solutions that are "hard-wired" to specific modelling frameworks and generic data analysis or plotting packages. The package aims to facilitate reproducibility and reliability of scenario processing, validation and analysis by providing well-tested and documented methods for timeseries aggregation, downscaling and unit conversion. It supports various data formats, including sub-annual resolution using continuous time representation and "representative timeslices". The code base is implemented following best practices of collaborative scientific-software development. This manuscript describes the design principles of the package and the types of data which can be handled. The usefulness of pyam is illustrated by highlighting several recent applications.
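A brief usage sketch of the pyam package described above; the class and method names follow the package documentation (IamDataFrame, filter, timeseries), but the exact behaviour should be verified against the installed version, and the scenario values are invented.

import pandas as pd
import pyam

# long-format scenario data with the standard IAMC columns
data = pd.DataFrame([
    ["model_a", "scen_1", "World", "Emissions|CO2", "Mt CO2/yr", 2020, 40000],
    ["model_a", "scen_1", "World", "Emissions|CO2", "Mt CO2/yr", 2030, 35000],
    ["model_a", "scen_2", "World", "Emissions|CO2", "Mt CO2/yr", 2020, 40000],
    ["model_a", "scen_2", "World", "Emissions|CO2", "Mt CO2/yr", 2030, 20000],
], columns=["model", "scenario", "region", "variable", "unit", "year", "value"])

df = pyam.IamDataFrame(data)
# select one scenario and show the data in wide (year-by-column) format
print(df.filter(scenario="scen_2").timeseries())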
12

Grammer, Robert T. "Quantitation & Case-Study-Driven Inquiry to Enhance Yeast Fermentation Studies." American Biology Teacher 74, no. 6 (August 1, 2012): 414–20. http://dx.doi.org/10.1525/abt.2012.74.6.10.

Abstract:
We propose a procedure for the assay of fermentation in yeast in microcentrifuge tubes that is simple and rapid, permitting assay replicates, descriptive statistics, and the preparation of line graphs that indicate reproducibility. Using regression and simple derivatives to determine initial velocities, we suggest methods to compare the effects of experimental variables. This technique is straightforward enough to facilitate design of an inquiry lab based on a scenario that explores modifications to enhance the rate of fermentation.
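The kind of analysis described above (regression plus a simple derivative to estimate initial velocities from replicate time courses) can be sketched in a few lines; the time points and readings below are made up for illustration.

import numpy as np

time_min = np.array([0, 5, 10, 15, 20, 25])         # minutes
replicates = np.array([                              # e.g. gas volume or absorbance per tube
    [0.00, 0.11, 0.24, 0.33, 0.40, 0.45],
    [0.00, 0.10, 0.22, 0.31, 0.39, 0.44],
    [0.00, 0.12, 0.25, 0.34, 0.41, 0.46],
])

initial_velocities = []
for y in replicates:
    c2, c1, c0 = np.polyfit(time_min, y, deg=2)      # y ~ c2*t^2 + c1*t + c0
    initial_velocities.append(c1)                    # dy/dt at t = 0 equals c1

v = np.array(initial_velocities)
print(f"initial velocity: {v.mean():.3f} +/- {v.std(ddof=1):.3f} per min (n={len(v)})")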
13

Álvarez-Alonso, Cristina, María Dolores Pérez-Murcia, Silvia Sánchez-Méndez, Encarnación Martínez-Sabater, Ignacio Irigoyen, Marga López, Isabel Nogués, et al. "Municipal Solid Waste Management in a Decentralized Composting Scenario: Assessment of the Process Reproducibility and Quality of the Obtained Composts." Agronomy 14, no. 1 (December 24, 2023): 54. http://dx.doi.org/10.3390/agronomy14010054.

Abstract:
Over the last several years, the models for organic waste management have changed to implement a circular economy in the productive cycle. In this context, new scenarios have emerged, where the management of different organic waste streams by composting is conducted with decentralized models that manage organic wastes in a more local way. However, in these new models, the standardization of the process control and of the end-product characteristics is necessary to guarantee the quality and agronomic value of the compost obtained, avoiding potential risks for human health and the environment. Thus, the aim of this work was to study two different scenarios of community composting of the organic fraction of separately collected municipal solid waste in order to guarantee the effectiveness and reproducibility of the composting processes and the quality of the composts obtained. For this, the development of the process and the characteristics of the composts at agronomic, hygienic–sanitary and environmental levels were assessed in real conditions and during three cycles of the process. The results obtained show high similarity among the different composting cycles, indicating an important degree of reproducibility among the processes. In addition, the composts obtained showed good sanitary quality, absence of phytotoxicity and low contents of potentially toxic elements, which guarantee their use in agriculture without posing any risk to human health or the environment.
14

Wercelens, Polyane, Waldeyr da Silva, Fernanda Hondo, Klayton Castro, Maria Emília Walter, Aletéia Araújo, Sergio Lifschitz, and Maristela Holanda. "Bioinformatics Workflows With NoSQL Database in Cloud Computing." Evolutionary Bioinformatics 15 (January 2019): 117693431988997. http://dx.doi.org/10.1177/1176934319889974.

Abstract:
Scientific workflows can be understood as arrangements of managed activities executed by different processing entities. Applying workflows to solve problems in Molecular Biology, notably those related to sequence analyses, is a regular Bioinformatics approach. Due to the nature of the raw data and the in silico environment of Molecular Biology experiments, apart from the research subject, 2 practical and closely related problems have been studied: reproducibility and computational environment. When aiming to enhance the reproducibility of Bioinformatics experiments, various aspects should be considered. The reproducibility requirements comprise the data provenance, which enables the acquisition of knowledge about the trajectory of data over a defined workflow, the settings of the programs, and the entire computational environment. Cloud computing is a booming alternative that can provide this computational environment, hiding technical details and delivering a more affordable, accessible, and configurable on-demand environment for researchers. Considering this specific scenario, we proposed a solution to improve the reproducibility of Bioinformatics workflows in a cloud computing environment using both Infrastructure as a Service (IaaS) and Not only SQL (NoSQL) database systems. To meet the goal, we built 3 typical Bioinformatics workflows and ran them on 1 private and 2 public clouds, using different types of NoSQL database systems to persist the provenance data according to the Provenance Data Model (PROV-DM). We present here the results and a guide for the deployment of a cloud environment for Bioinformatics exploring the characteristics of various NoSQL database systems to persist provenance data.
15

Huppmann, Daniel, Matthew J. Gidden, Zebedee Nicholls, Jonas Hörsch, Robin Lamboll, Paul N. Kishimoto, Thorsten Burandt, et al. "pyam: Analysis and visualisation of integrated assessment and macro-energy scenarios." Open Research Europe 1 (September 1, 2021): 74. http://dx.doi.org/10.12688/openreseurope.13633.2.

Abstract:
The open-source Python package pyam provides a suite of features and methods for the analysis, validation and visualization of reference data and scenario results generated by integrated assessment models, macro-energy tools and other frameworks in the domain of energy transition, climate change mitigation and sustainable development. It bridges the gap between scenario processing and visualisation solutions that are "hard-wired" to specific modelling frameworks and generic data analysis or plotting packages. The package aims to facilitate reproducibility and reliability of scenario processing, validation and analysis by providing well-tested and documented methods for working with timeseries data in the context of climate policy and energy systems. It supports various data formats, including sub-annual resolution using continuous time representation and "representative timeslices". The pyam package can be useful for modelers generating scenario results using their own tools as well as researchers and analysts working with existing scenario ensembles such as those supporting the IPCC reports or produced in research projects. It is structured in a way that it can be applied irrespective of a user's domain expertise or level of Python knowledge, supporting experts as well as novice users. The code base is implemented following best practices of collaborative scientific-software development. This manuscript describes the design principles of the package and the types of data which can be handled. The usefulness of pyam is illustrated by highlighting several recent applications.
16

McCarroll, Rachel E., Beth M. Beadle, Danna Fullen, Peter A. Balter, David S. Followill, Francesco C. Stingo, Jinzhong Yang, and Laurence E. Court. "Reproducibility of patient setup in the seated treatment position: A novel treatment chair design." Journal of Applied Clinical Medical Physics 18, no. 1 (January 2017): 223–29. http://dx.doi.org/10.1002/acm2.12024.

Abstract:
Radiotherapy in a seated position may be indicated for patients who are unable to lie on the treatment couch for the duration of treatment, in scenarios where a seated treatment position provides superior anatomical positioning and dose distributions, or for a low-cost system designed using a fixed treatment beam and rotating seated patient. In this study, we report a novel treatment chair that was constructed to allow for three-dimensional imaging and treatment delivery while ensuring robust immobilization, providing reproducibility equivalent to that in the traditional supine position. Five patients undergoing radiation treatment for head-and-neck cancers were enrolled and were set up in the chair, with immobilization devices created, and then imaged with orthogonal X-rays in a scenario that mimicked radiation treatments (without treatment delivery). Six subregions of the acquired images were rigidly registered to evaluate intra- and interfraction displacement and chair construction. Displacements under conditions of simulated image guidance were acquired by first registering one subregion; the residual displacement of other subregions was then measured. Additionally, we administered a patient questionnaire to gain patient feedback and assess comparison to the supine position. Average inter- and intrafraction displacements of all subregions in the seated position were less than 2 and 3 mm, respectively. When image guidance was simulated, L-R and A-P interfraction displacements were reduced by an average of 1 mm, providing setup of comparable quality to supine setups. The enrolled patients, who had no indication for a seated treatment position, reported no preference between the seated and the supine positions. The novel chair design provides acceptable inter- and intrafraction displacement, with reproducibility equivalent to that reported for patients in the supine position. Patient feedback will be incorporated in the refinement of the chair, facilitating treatment of head-and-neck cancer in patients who are unable to lie for the duration of treatment or for use in an economical fixed-beam setup.
17

Akamatsu, Shusuke, Ryo Takata, Atsushi Takahashi, Takahiro Inoue, Michiaki Kubo, Naoyuki Kamatani, Johji Inazawa, et al. "Reproducibility, performance, and clinical utility of a genetic risk prediction model for prostate cancer in Japanese patients." Journal of Clinical Oncology 30, no. 15_suppl (May 20, 2012): 10520. http://dx.doi.org/10.1200/jco.2012.30.15_suppl.10520.

Abstract:
Background: Prostate specific antigen (PSA) is widely used as a diagnostic biomarker for prostate cancer (PC). However, due to its low predictive performance, many patients without PC suffer from the harms of unnecessary prostate needle biopsies. The present study aims to evaluate the reproducibility and performance of a genetic risk prediction model and estimate its utility as a diagnostic biomarker in a clinical scenario. Methods: We created a logistic regression model incorporating 16 SNPs that were significantly associated with PC in a genome-wide association study of the Japanese population. The model was validated by two independent sets of samples comprising 3,294 cases and 6,281 controls. Various cut-offs were evaluated for use in a clinical scenario. Results: The area under the curve (AUC) of the model was 0.679, 0.655, and 0.661 for the samples used to create the model and the two validation sets, respectively. The AUC of the model was not significantly altered in samples with PSA 1-10 ng/ml. 24.2% and 9.7% of the patients had an odds ratio <0.5 (low risk) or >2 (high risk) in the model, and assuming the overall positive rate of prostate needle biopsies to be 20% in the PSA gray zone (PSA 2-10 ng/ml), the positive biopsy rates were 10.7% and 42.4%, respectively, for the two genetic risk groups. Conclusions: The genetic risk prediction model was highly reproducible, and its predictive performance was not influenced by PSA. The model could have the potential to affect clinical decisions when applied to patients with gray-zone PSA, which should be confirmed in future clinical studies.
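An illustrative sketch of a SNP-based risk model of the kind described above: logistic regression on allele counts evaluated with AUC. The data are synthetic, not the 16 SNPs or cohorts from the study, and the package names (scikit-learn, numpy) are assumptions about tooling rather than what the authors used.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_snps = 2000, 16
maf = rng.uniform(0.1, 0.5, n_snps)                  # minor allele frequencies
genotypes = rng.binomial(2, maf, size=(n, n_snps))   # 0/1/2 allele counts per person
true_beta = rng.normal(0, 0.25, n_snps)
risk = genotypes @ true_beta - 1.0
labels = rng.binomial(1, 1 / (1 + np.exp(-risk)))    # case/control status

model = LogisticRegression(max_iter=1000).fit(genotypes[:1400], labels[:1400])
scores = model.predict_proba(genotypes[1400:])[:, 1]
print("validation AUC:", round(roc_auc_score(labels[1400:], scores), 3))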
18

Stauffer, Glenn E., Erik R. Olson, Jerrold L. Belant, Jennifer L. Stenglein, Jennifer L. Price Tack, Timothy R. van Deelen, David M. MacFarland, and Nathan M. Roberts. "Uncertainty and precaution in hunting wolves twice in a year: Reanalysis of Treves and Louchouarn." PLOS ONE 19, no. 6 (June 12, 2024): e0301487. http://dx.doi.org/10.1371/journal.pone.0301487.

Abstract:
Management of wolves is controversial in many jurisdictions where wolves live, which underscores the importance of rigor, transparency, and reproducibility when evaluating outcomes of management actions. Treves and Louchouarn 2022 (hereafter TL) predicted outcomes for various fall 2021 hunting scenarios following Wisconsin’s judicially mandated hunting and trapping season in spring 2021, and concluded that even a zero harvest scenario could result in the wolf population declining below the population goal of 350 wolves specified in the 1999 Wisconsin wolf management plan. TL further concluded that with a fall harvest of > 16 wolves there was a “better than average possibility” that the wolf population size would decline below that 350-wolf threshold. We show that these conclusions are incorrect and that they resulted from mathematical errors and selected parameterizations that were consistently biased in the direction that maximized mortality and minimized reproduction (i.e., positively biased adult mortality, negatively biased pup survival, further halving pup survival to November, negatively biased number of breeding packs, and counting harvested wolves twice among the dead). These errors systematically exaggerated declines in predicted population size and resulted in erroneous conclusions that were not based on the best available or unbiased science. Corrected mathematical calculations and more rigorous parameterization resulted in predicted outcomes for the zero harvest scenario that more closely coincided with the empirical population estimates in 2022 following a judicially prevented fall hunt in 2021. Only in scenarios with simulated harvest of 300 or more wolves did probability of crossing the 350-wolf population threshold exceed zero. TL suggested that proponents of some policy positions bear a greater burden of proof than proponents of other positions to show that “their estimates are accurate, precise, and reproducible”. In their analysis, TL failed to meet this standard that they demanded of others.
19

Fabbri, Rachele, Ludovica Cacopardo, Arti Ahluwalia, and Chiara Magliaro. "Advanced 3D Models of Human Brain Tissue Using Neural Cell Lines: State-of-the-Art and Future Prospects." Cells 12, no. 8 (April 18, 2023): 1181. http://dx.doi.org/10.3390/cells12081181.

Abstract:
Human-relevant three-dimensional (3D) models of cerebral tissue can be invaluable tools to boost our understanding of the cellular mechanisms underlying brain pathophysiology. Nowadays, the accessibility, isolation and harvesting of human neural cells represents a bottleneck for obtaining reproducible and accurate models and gaining insights in the fields of oncology, neurodegenerative diseases and toxicology. In this scenario, given their low cost, ease of culture and reproducibility, neural cell lines constitute a key tool for developing usable and reliable models of the human brain. Here, we review the most recent advances in 3D constructs laden with neural cell lines, highlighting their advantages and limitations and their possible future applications.
20

Son, Weonil, Yunchul Ha, Taeyoung Oh, Seunghoon Woo, Sungwoo Cho, and Jinwoo Yoo. "PG-Based Vehicle-In-the-Loop Simulation for System Development and Consistency Validation." Electronics 11, no. 24 (December 7, 2022): 4073. http://dx.doi.org/10.3390/electronics11244073.

Abstract:
The concern over safety features in autonomous vehicles is increasing due to the rapid development and increasing use of autonomous driving technology. The safety evaluations performed for an autonomous driving system cannot depend only on existing safety verification methods, due to the lack of scenario reproducibility and the dynamic characteristics of the vehicle. Vehicle-In-the-Loop Simulation (VILS) utilizes both real vehicles and virtual simulations for the driving environment to overcome these drawbacks and is a suitable candidate for ensuring reproducibility. However, there may be differences between the behavior of the vehicle in the VILS and vehicle tests due to the implementation level of the virtual environment. This study proposes a novel VILS system that displays consistency with the vehicle tests. The proposed VILS system comprises virtual road generation, synchronization, virtual traffic manager generation, and perception sensor modeling, and implements a virtual driving environment similar to the vehicle test environment. Additionally, the effectiveness of the proposed VILS system and its consistency with the vehicle test is demonstrated using various verification methods. The proposed VILS system can be applied to various speeds, road types, and surrounding environments.
21

Csippa, Benjamin, Dániel Gyürki, Gábor Závodszky, István Szikora, and György Paál. "Hydrodynamic Resistance of Intracranial Flow-Diverter Stents: Measurement Description and Data Evaluation." Cardiovascular Engineering and Technology 11, no. 1 (December 3, 2019): 1–13. http://dx.doi.org/10.1007/s13239-019-00445-y.

Abstract:
Purpose: Intracranial aneurysms are malformations forming bulges on the walls of brain arteries. A flow diverter device is a fine braided wire structure used for the endovascular treatment of brain aneurysms. This work presents a rig and a protocol for the measurement of the hydrodynamic resistance of flow diverter stents. Hydrodynamic resistance is interpreted here as the pressure loss versus volumetric flow rate function through the mesh structure. The difficulty of the measurement is the very low flow rate range and the extreme sensitivity to contamination and disturbances. Methods: Rigorous attention was paid to reproducibility, hence a strict protocol was designed to ensure controlled circumstances and accuracy. Somewhat unusually, the history of the development of the rig, including the pitfalls, was included in the paper. In addition to the hydrodynamic resistance measurements, the geometrical properties—metallic surface area, pore density, deployed and unconstrained length and diameter—of the stent deployment were measured. Results: Based on our evaluation method, a confidence band can be determined for a given deployment scenario. Collectively analysing the hydrodynamic resistance and the geometric indices, a deeper understanding of an implantation can be obtained. Our results suggest that to correctly interpret the hydrodynamic resistance of a scenario, the deployment length has to be considered. To demonstrate the applicability of the measurement, as a pilot study the results of four intracranial flow diverter stents of two types and sizes have been reported in this work. The results of these measurements, even on this small sample size, provide valuable information on differences between stent types and deployment scenarios.
22

Michel, Nicolas, Giovanni Chierchia, Romain Negrel, and Jean-François Bercher. "Learning Representations on the Unit Sphere: Investigating Angular Gaussian and Von Mises-Fisher Distributions for Online Continual Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14350–58. http://dx.doi.org/10.1609/aaai.v38i13.29348.

Abstract:
We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We propose to use the angular Gaussian distribution, which corresponds to a Gaussian projected onto the unit sphere, and derive the associated loss function. We also consider the von Mises-Fisher distribution, which is the conditional of a Gaussian on the unit sphere. The learned representations are pushed toward fixed directions, which are the prior means of the Gaussians, allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning, which is the problem of training neural networks on a continuous data stream, where multiple classification tasks are presented sequentially so that data from past tasks are no longer accessible, and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries and performs well with smaller batch sizes while being computationally efficient. We demonstrate with extensive experiments that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://github.com/Nicolas1203/ocl-fd.
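A minimal numpy sketch of the core ingredient named above: embeddings are projected onto the unit sphere and classified against fixed class directions (the prior means) with a von Mises-Fisher-style loss, i.e. a softmax over kappa times the cosine similarities. This simplification omits the memory buffer and the angular Gaussian variant; see the linked repository for the authors' implementation. All sizes and values here are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

n_classes, dim, kappa = 4, 16, 10.0
class_directions = l2_normalize(rng.normal(size=(n_classes, dim)))   # fixed prior means mu_c

def vmf_style_loss(embeddings, labels):
    z = l2_normalize(embeddings)                  # project representations onto the unit sphere
    logits = kappa * z @ class_directions.T       # kappa * cosine similarity to each class direction
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

embeddings = rng.normal(size=(8, dim))            # raw network outputs for a small batch
labels = rng.integers(0, n_classes, size=8)
print("loss:", round(vmf_style_loss(embeddings, labels), 3))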
23

Lopes, Rônney Pinto, Vivian Dias Baptista Gagliardi, Felipe Torres Pacheco, and Rubens José Gagliardi. "Ischemic stroke with unknown onset of symptoms: current scenario and perspectives for the future." Arquivos de Neuro-Psiquiatria 80, no. 12 (December 2022): 1262–73. http://dx.doi.org/10.1055/s-0042-1755342.

Abstract:
Background: Stroke is a major cause of disability worldwide and a neurological emergency. Intravenous thrombolysis and mechanical thrombectomy are effective in the reperfusion of the parenchyma in distress, but the impossibility of determining the exact time of onset was an important cause of exclusion from treatment until a few years ago. Objectives: To review the clinical and radiological profile of patients with unknown-onset stroke, the imaging methods to guide the reperfusion treatment, and to suggest a protocol for the therapeutic approach. Methods: The different imaging methods were grouped according to current evidence-based treatments. Results: Most studies found no difference between the clinical and imaging characteristics of patients with wake-up stroke and known-onset stroke, suggesting that the ictus, in the first group, occurs just prior to awakening. Regarding the treatment of patients with unknown-onset stroke, four main phase-three trials stand out: WAKE-UP and EXTEND for intravenous thrombolysis, and DAWN and DEFUSE-3 for mechanical thrombectomy. The length of the therapeutic window is based on the diffusion weighted imaging–fluid-attenuated inversion recovery (DWI-FLAIR) mismatch, core-penumbra mismatch, and clinical core mismatch paradigms. The challenges in approaching unknown-onset stroke involve extending the length of the time window, the reproducibility of real-world imaging modalities, and the discovery of new methods and therapies for this condition. Conclusion: The advances in the possibilities for the treatment of ischemic stroke, guided by imaging concepts, have become evident. New studies in this field are essential and needed to structure the health care services for this new scenario.
24

Niero, Giovanni, Filippo Cendron, Mauro Penasa, Massimo De Marchi, Giulio Cozzi, and Martino Cassandro. "Repeatability and Reproducibility of Measures of Bovine Methane Emissions Recorded using a Laser Detector." Animals 10, no. 4 (April 1, 2020): 606. http://dx.doi.org/10.3390/ani10040606.

Abstract:
Methane (CH4) emissions represent a worldwide problem due to their direct involvement in atmospheric warming and climate change. Ruminants are among the major players in the global scenario of CH4 emissions, and CH4 emissions are a problem for feed efficiency since enteric CH4 is eructed to the detriment of milk and meat production. The collection of CH4 phenotypes at the population level is still hampered by costly and time-demanding techniques. In the present study, a laser methane detector was used to assess repeatability and reproducibility of CH4 phenotypes, including mean and aggregate of CH4 records, slope of the linear equation modelling the aggregate function, and mean and number of CH4 peak records. Five repeated measurements were performed on a commercial farm on three Simmental heifers, and the same protocol was repeated over a period of three days. Methane emission phenotypes expressed as parts per million per linear meter (ppm × m) were not normally distributed and, thus, they were log-transformed to reach normality. Repeatability and reproducibility were calculated as the relative standard deviation of five measurements within the same day and 15 measurements across three days, respectively. All phenotypes showed higher repeatability and reproducibility for log-transformed data compared with data expressed as ppm × m. The linear equation modelling the aggregate function highlighted a very high coefficient of determination (≥0.99), which suggests that daily CH4 emissions might be derived using this approach. The number of CH4 peaks was particularly diverse across animals and is therefore a potential candidate to discriminate between high- and low-emitting animals. Results of this study suggest that the laser methane detector is a promising tool to measure bovine CH4 emissions in field conditions.
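The repeatability and reproducibility definitions used above (relative standard deviation of within-day and across-day measurements) can be reproduced in a few lines of Python; the measurement values below are invented, and the log-scale variant mirrors the log-transformation mentioned in the abstract.

import numpy as np

def rsd(x):
    # relative standard deviation (coefficient of variation)
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

# five within-day measurements (e.g. mean CH4 in ppm x m) over three days
day_measurements = np.array([
    [41, 55, 38, 60, 47],
    [52, 44, 58, 39, 49],
    [45, 61, 42, 50, 37],
])

within_day = [rsd(day) for day in day_measurements]    # repeatability, one value per day
across_days = rsd(day_measurements.ravel())            # reproducibility over all 15 records
print("repeatability (per day):", np.round(within_day, 3))
print("reproducibility:", round(across_days, 3))
print("reproducibility, log scale:", round(rsd(np.log(day_measurements.ravel())), 3))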
25

Wang, Lujia, Hyunsoo Yoon, Andrea Hawkins-Daarud, Kyle Singleton, Kamala Clark-Swanson, Bernard Bendok, Maciej Mrugala, et al. "NIMG-30. REPRODUCIBLE RADIOMIC MAPPING OF TUMOR CELL DENSITY BY MACHINE LEARNING AND DOMAIN ADAPTATION." Neuro-Oncology 21, Supplement_6 (November 2019): vi167. http://dx.doi.org/10.1093/neuonc/noz175.700.

Abstract:
BACKGROUND: An important challenge in radiomics research is reproducibility. Images are collected on different image scanners and protocols, which introduces significant variability even for the same type of image across institutions. In the present proof-of-concept study, we address the reproducibility issue by using domain adaptation – an algorithm that transforms the radiomic features of each new patient to align with the distribution of features formed by the patient samples in a training set. METHOD: Our dataset included 18 patients in training with a total of 82 biopsy samples. The pathological tumor cell density was available for each sample. Radiomic (statistical + texture) features were extracted from the region of six image contrasts locally matched with each biopsy sample. A Gaussian Process (GP) classifier was built to predict tumor cell density using radiomic features. Another 6 patients were used to test the training model. These patients had a total of 31 biopsy samples. The images of each test patient were purposely normalized using a different approach, i.e., using the CSF instead of the whole brain as the reference. This was to mimic the practical scenario of image source discrepancy between different patients. Domain adaptation was applied to each test patient. RESULTS: Among the 18 training patients, the leave-one-patient-out cross-validation accuracy was 0.81 AUC, 0.78 sensitivity, and 0.83 specificity. When the trained model was applied to the 6 test patients (purposely normalized using a different approach than that of the training data), the accuracy dramatically reduced to 0.39 AUC, 0.08 sensitivity, and 0.61 specificity. After using domain adaptation, the accuracy improved to 0.68 AUC, 0.62 sensitivity, and 0.72 specificity. CONCLUSION: We provide candidate enabling tools to address reproducibility in radiomics models by using domain adaptation algorithms to account for discrepancy of the images between different patients.
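A deliberately simple stand-in for the domain-adaptation step described above (the study's algorithm is more involved): align each test patient's radiomic features to the training distribution by matching per-feature mean and spread before applying the trained classifier. Feature counts and values are invented.

import numpy as np

def align_to_training(test_features, train_features):
    # shift/scale each feature column of one test patient's samples so its
    # mean and standard deviation match those seen in the training set
    mu_tr, sd_tr = train_features.mean(axis=0), train_features.std(axis=0) + 1e-12
    mu_te, sd_te = test_features.mean(axis=0), test_features.std(axis=0) + 1e-12
    return (test_features - mu_te) / sd_te * sd_tr + mu_tr

rng = np.random.default_rng(3)
train = rng.normal(0.0, 1.0, size=(82, 10))     # 82 training biopsy samples, 10 features
test = rng.normal(0.5, 2.0, size=(6, 10))       # one test patient with a shifted feature scale
aligned = align_to_training(test, train)
print("mean before:", round(test.mean(), 2), "mean after:", round(aligned.mean(), 2))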
26

Kiamanesh, Bahareh, Ali Behravan, and Roman Obermaisser. "Realistic Simulation of Sensor/Actuator Faults for a Dependability Evaluation of Demand-Controlled Ventilation and Heating Systems." Energies 15, no. 8 (April 14, 2022): 2878. http://dx.doi.org/10.3390/en15082878.

Abstract:
In the development of fault-tolerant systems, simulation is a common technique used to obtain insights into performance and dependability because it saves time and avoids the risks of testing the behavior of real-world systems in the presence of faults. Fault injection in a simulation offers a high controllability and observability, and thus is ideal for an early dependability analysis and fault-tolerance evaluation. Heating, ventilation, and air conditioning (HVAC) systems in critical infrastructures, such as airports and hospitals, are safety-relevant systems, which not only determine energy consumption, system efficiency, and occupancy comfort but also play an essential role in emergency scenarios (e.g., fires, biological hazards). Hence, fault injection serves as a practical and essential solution to assess dependability in different fault scenarios of HVAC systems. Hence, in this paper, we present a simulation-based fault injection framework with a combination of two techniques, simulator command and simulation code modification, which are applied to fault injector blocks as saboteurs and an automated fault injector algorithm to automatically activate fault cases with certain fault attributes. The proposed fault injection framework supports a comprehensive range of faults and various fault attributes, including fault persistence, fault type, fault location, fault duration, and fault interarrival time. This framework considers noise in a demand-controlled ventilation (DCV) and heating system as a type of HVAC system since it has been demonstrated that any fault injection scenario is accompanied by some impacts on energy consumption, occupancy comfort, and a fire risk. It also supports the reproducibility for a set of specific fault scenarios or random fault injection scenarios. The system model was implemented and simulated in Matlab/Simulink, and fault injector blocks were developed by Stateflow diagrams. An experimental evaluation serves as the assessment of the presented fault injection framework with a defined example of fault scenarios. The results of the evaluation show the correctness, system behavior, accuracy, and other parameters of the system, such as the heater energy consumption and heater duty cycle of the fault injection framework in the presence of different fault cases. In conclusion, the present paper provides a novel simulation-based fault injection framework, which combines simulator command techniques and simulation code modifications for a realistic and automatic fault injection with comprehensive coverage of various fault types and a consideration of noise and uncertainty, allowing for reproducibility of the results. The outputs achieved from the fault injection framework can be applied to fault-tolerant studies in other application domains.
27

Hinz, Matthias, Nico Lehmann, Kevin Melcher, Norman Aye, Vanja Radić, Herbert Wagner, and Marco Taubert. "Reliability of Perceptual-Cognitive Skills in a Complex, Laboratory-Based Team-Sport Setting." Applied Sciences 11, no. 11 (June 3, 2021): 5203. http://dx.doi.org/10.3390/app11115203.

Abstract:
The temporal occlusion paradigm is often used in anticipation and decision-making research in sports. Although it is considered as a valid measurement tool, evidence of its reproducibility is lacking but required for future cross-sectional and repeated-measures designs. Moreover, only a few studies on decision making in real-world environments exist. Here, we aimed at (a) implementing a temporal occlusion test with multi-dimensional motor response characteristics, and (b) assessing intra- and inter-session item reliability. Temporally occluded videos of attack sequences in a team handball scenario were created and combined with the SpeedCourt® contact plate system. Participants were instructed to perform pre-specified defensive actions in response to the video stimuli presented on a life-size projection screen. The intra- and inter-session (after at least 24 h) reproducibility of subjects’ motor responses were analyzed. Significant Cohen’s (0.44–0.54) and Fleiss’ (0.33–0.51) kappa statistics revealed moderate agreement of motor responses with the majority of attack situations in both intra- and inter-session analyses. Participants made faster choices with more visual information about the opponents’ unfolding action. Our findings indicate reliable decisions in a complex, near-game test environment for team handball players. The test provides a foundation for future temporal occlusion studies, including recommendations for new explanatory approaches in cognition research.
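The agreement statistic used above can be computed with standard tooling; the sketch below shows Cohen's kappa between a participant's chosen defensive actions across two sessions and against a reference response per clip. The labels are invented, and scikit-learn is an assumed tool rather than the one used in the study.

from sklearn.metrics import cohen_kappa_score

reference = ["block", "step_out", "block", "tackle", "step_out", "block", "tackle", "block"]
session_1 = ["block", "step_out", "tackle", "tackle", "step_out", "block", "tackle", "block"]
session_2 = ["block", "block",    "tackle", "tackle", "step_out", "block", "block",  "block"]

print("session 1 vs reference:", round(cohen_kappa_score(reference, session_1), 2))
print("session 1 vs session 2:", round(cohen_kappa_score(session_1, session_2), 2))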
28

Vigliar, Elena, Umberto Malapelle, Francesca Bono, Nicola Fusco, Diego Cortinovis, Emanuele Valtorta, Alexiadis Spyridon, et al. "The Reproducibility of the Immunohistochemical PD-L1 Testing in Non-Small-Cell Lung Cancer: A Multicentric Italian Experience." BioMed Research International 2019 (April 14, 2019): 1–7. http://dx.doi.org/10.1155/2019/6832909.

Abstract:
An important harmonization effort was made by the scientific community to standardize both the preanalytical and interpretative phases of programmed death-ligand 1 (PD-L1) immunohistochemical (IHC) testing in non-small-cell lung cancer (NSCLC). This analysis is crucial for the selection of patients with advanced-stage tumors eligible for treatment with pembrolizumab and potentially with other anti-PD-1/PD-L1 checkpoint inhibitors. This multicentric retrospective study evaluated the reproducibility of PD-L1 testing in the Italian scenario for both closed and open platforms. In the evaluation of the well-known gold-standard combinations (Agilent 22C3 PharmDx on Dako Autostainer versus Roche’s Ventana SP263 on BenchMark), the results confirmed the literature data and showed complete overlap between the two methods. With regard to performance on open platforms, the combination of 22C3 with Dako Omnis or BenchMark generally obtained good results, while the 28-8 clone seemed to be associated with worse scores.
29

Martinez, Juana M. Plasencia, Jose M. Garcia Santos, Maria L. Paredes Martinez, and Ana Moreno Pastor. "Carotid intima-media thickness and hemodynamic parameters: reproducibility of manual measurements with Doppler ultrasound." Medical Ultrasonography 17, no. 2 (June 1, 2015): 167. http://dx.doi.org/10.11152/mu.2013.2066.172.ci-m.

Full text
Abstract:
Aims: To evaluate the carotid ultrasound intra- and interobserver agreements in a common clinical scenario when making manual measurements of the intima-media thickness (IMT) and peak systolic (PSV) and end diastolic (EDV) velocities in the common (CCA) and the internal carotid (ICA) arteries. Material and methods: Three experienced operators performed two time-point carotid ultrasounds in 21 patients with cardiovascular risk factors. Each operator measured the CCA IMT freehand three consecutive times in each examination. The CCA and ICA hemodynamic parameters were acquired just once. For our purpose we took the average (IMTmean) and maximum (IMTmax) IMT values. Quantitative variables were analyzed with Student’s t-test and ANOVA. Agreements were evaluated with the Intraclass Correlation Coefficient (ICC). Results: IMTmean intraobserver agreement was better on the left (ICC: 0.930-0.851-0.916, operators 1-2-3) than on the right (ICC: 0.789-0.580-0.673, operators 1-2-3). IMTmax agreements (left ICC: 0.821-0.723-0.853, operators 1-2-3; right ICC: 0.669-0.421-0.480, operators 1-2-3) were lower and more variable. Interobserver agreements for IMTmean (ICC: 0.852-0.860; first-second ultrasound) and IMTmax (ICC: 0.859-0.835; first-second ultrasound) were excellent on the left, but fair-good and more variable on the right (IMTmean: ICC: 0.680-0.809; first-second ultrasound; IMTmax: ICC: 0.694-0.799; first-second ultrasound). Intraobserver agreements were fair-moderate for PSVs and good-excellent for EDVs. Interobserver agreements were good-excellent for both PSVs and EDVs. Overall, 95% confidence intervals were narrower for the left IMTmean and CCA velocities. Conclusions: Intra- and interobserver agreements in carotid ultrasound are variable. In order to improve carotid IMT agreements, IMTmean is preferable over IMTmax.
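As a rough illustration of how an intraclass correlation coefficient can be obtained from repeated measurements, the sketch below computes a two-way consistency ICC, ICC(3,1), from a subjects-by-raters matrix using the standard ANOVA mean squares. The IMT values (in mm) are invented; this is not the study's dataset or code.

# ICC(3,1) from a subjects x raters matrix of invented IMT measurements.
import numpy as np

ratings = np.array([[0.62, 0.65, 0.63],
                    [0.81, 0.79, 0.84],
                    [0.55, 0.58, 0.56],
                    [0.90, 0.93, 0.91],
                    [0.70, 0.68, 0.72]])   # 5 subjects, 3 operators

n, k = ratings.shape
grand_mean = ratings.mean()
subject_means = ratings.mean(axis=1)
rater_means = ratings.mean(axis=0)

ss_subjects = k * ((subject_means - grand_mean) ** 2).sum()
ss_raters = n * ((rater_means - grand_mean) ** 2).sum()
ss_total = ((ratings - grand_mean) ** 2).sum()
ss_error = ss_total - ss_subjects - ss_raters

ms_subjects = ss_subjects / (n - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

icc_3_1 = (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)
print(f"ICC(3,1) = {icc_3_1:.3f}")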
APA, Harvard, Vancouver, ISO, and other styles
30

Staziaki, Pedro Vinícius, Rutuparna Sarangi, Ujas N. Parikh, Jeffrey G. Brooks, Christina Alexandra LeBedis, and Kitt Shaffer. "An Objective Structured Clinical Examination for Medical Student Radiology Clerkships: Reproducibility Study." JMIR Medical Education 6, no. 1 (May 6, 2020): e15444. http://dx.doi.org/10.2196/15444.

Full text
Abstract:
Background: Objective structured clinical examinations (OSCEs) are a useful method to evaluate medical students’ performance in the clerkship years. OSCEs are designed to assess skills and knowledge in a standardized clinical setting and through use of a preset standard grading sheet, so that clinical knowledge can be evaluated at a high level and in a reproducible way. Objective: This study aimed to present our OSCE assessment tool designed specifically for radiology clerkship medical students, which we called the objective structured radiology examination (OSRE), with the intent to advance the assessment of clerkship medical students by providing an objective, structured, reproducible, and low-cost method to evaluate medical students’ radiology knowledge, and to assess the reproducibility of this assessment tool. Methods: We designed 9 different OSRE cases for radiology clerkship classes with participating third- and fourth-year medical students. Each examination comprises 1 to 3 images, a clinical scenario, and structured questions, along with a standardized scoring sheet that allows for an objective and low-cost assessment. Each medical student completed 3 of 9 random examination cases during their rotation. To evaluate the reproducibility of our scoring sheet assessment tool, we used 5 examiners to grade the same students. Reproducibility for each case and consistency for each grader were assessed with a two-way mixed effects intraclass correlation coefficient (ICC). An ICC below 0.4 was deemed poor to fair, an ICC of 0.41 to 0.60 was moderate, an ICC of 0.6 to 0.8 was substantial, and an ICC greater than 0.8 was almost perfect. We also assessed the correlation of scores with the students’ clinical experience using a linear regression model and compared mean grades between third- and fourth-year students. Results: A total of 181 students (156 third- and 25 fourth-year students) were included in the study for a full academic year. Moreover, 6 of 9 cases demonstrated average ICCs of more than 0.6 (substantial correlation), and the average ICCs ranged from 0.36 to 0.80 (P<.001 for all the cases). The average ICC for each grader was more than 0.60 (substantial correlation). The average grade among the third-year students was 11.9 (SD 4.9), compared with 12.8 (SD 5) among the fourth-year students (P=.005). There was no correlation between clinical experience and OSRE grade (−0.02; P=.48), adjusting for the medical school year. Conclusions: Our OSRE is a reproducible assessment tool, with most of our OSRE cases showing substantial correlation, except for 3 cases. No expertise in radiology is needed to grade these examinations using our scoring sheet. There was no correlation between scores and the clinical experience of the medical students tested.
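The qualitative bands quoted in the abstract can be expressed as a small lookup, shown below as a sketch; the thresholds follow the abstract's wording and the example values are arbitrary.

# Mapping an ICC value to the qualitative bands used in the abstract above.
def interpret_icc(icc: float) -> str:
    if icc <= 0.40:
        return "poor to fair"
    if icc <= 0.60:
        return "moderate"
    if icc <= 0.80:
        return "substantial"
    return "almost perfect"

for value in (0.36, 0.55, 0.72, 0.85):
    print(value, "->", interpret_icc(value))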
APA, Harvard, Vancouver, ISO, and other styles
31

Mangampadath, Abhilash. "Introspection into the Guna Wisdom of Ayurveda–A Review From the Clinical Perspective." International Research Journal of Ayurveda & Yoga 05, no. 02 (2022): 155–60. http://dx.doi.org/10.47223/irjay.2022.5228.

Full text
Abstract:
The clinical scenario in Ayurveda is currently positioned at a juncture where there is immense demand for standardization and development. Researchers are searching for methods to tackle the issues of individual variation and lack of reproducibility when it comes to the systematic practice of Ayurveda. Under these circumstances, the guna spectrum in Ayurveda, often neglected as a philosophical area without direct clinical applications, needs to be revived so that the quintessence of Ayurvedic clinical decision making is translated into the research arena. For this purpose, a thorough analysis of the concepts and their in-depth meaning has to be considered. Critical introspection into the current status and its background realities, with a focus on the systematic narration of Ayurveda, is also needed, such as a classification of gunas based upon their clinical importance and therapeutic potential. This article tries to incorporate this aspect into the science of Ayurveda in light of the clinical and philosophical understanding, making it more comprehensive and clinically friendly.
APA, Harvard, Vancouver, ISO, and other styles
32

Balboni, Andrea, Laura Gallina, Alessandra Palladini, Santino Prosperi, and Mara Battilani. "A Real-Time PCR Assay for Bat SARS-Like Coronavirus Detection and Its Application to Italian Greater Horseshoe Bat Faecal Sample Surveys." Scientific World Journal 2012 (2012): 1–8. http://dx.doi.org/10.1100/2012/989514.

Full text
Abstract:
Bats are a source of coronaviruses closely related to the severe acute respiratory syndrome (SARS) virus. Numerous studies have been carried out to identify new bat viruses related to SARS-coronavirus (bat-SARS-like CoVs) using a reverse transcription-polymerase chain reaction assay. However, a qualitative PCR could underestimate the prevalence of infection, affecting the epidemiological evaluation of bats in viral ecology. In this work, a SYBR Green real-time PCR assay was developed for diagnosing infection with SARS-related coronaviruses from bat guano and was applied as a screening tool in a survey carried out on 45 greater horseshoe bats (Rhinolophus ferrumequinum) sampled in Italy in 2009. The assay showed high sensitivity and reproducibility. Its application to bat screening resulted in a prevalence of 42%. This method could be suitable as a screening tool in epidemiological surveys of bat-SARS-like CoVs and, consequently, for obtaining a more realistic picture of viral prevalence in the population.
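A quick sketch of the prevalence arithmetic behind a figure like the 42% reported above, with a Wilson 95% confidence interval. The count of 19 positives out of 45 bats is an assumption consistent with the reported percentage, not a figure taken from the paper.

# Point prevalence and Wilson 95% confidence interval for 19/45 positives (assumed counts).
from math import sqrt

positives, n, z = 19, 45, 1.96
p = positives / n

denom = 1 + z**2 / n
centre = (p + z**2 / (2 * n)) / denom
half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom

print(f"prevalence = {p:.1%}, 95% CI {centre - half_width:.1%} to {centre + half_width:.1%}")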
APA, Harvard, Vancouver, ISO, and other styles
33

Nigon, Tyler, Gabriel Dias Paiao, David J. Mulla, Fabián G. Fernández, and Ce Yang. "The Influence of Aerial Hyperspectral Image Processing Workflow on Nitrogen Uptake Prediction Accuracy in Maize." Remote Sensing 14, no. 1 (December 29, 2021): 132. http://dx.doi.org/10.3390/rs14010132.

Full text
Abstract:
A meticulous image processing workflow is oftentimes required to derive quality image data from high-resolution, unmanned aerial systems. There are many subjective decisions to be made during image processing, but the effects of those decisions on prediction model accuracy have never been reported. This study introduced a framework for quantifying the effects of image processing methods on model accuracy. A demonstration of this framework was performed using high-resolution hyperspectral imagery (<10 cm pixel size) for predicting maize nitrogen uptake in the early to mid-vegetative developmental stages (V6–V14). Two supervised regression learning estimators (Lasso and partial least squares) were trained to make predictions from hyperspectral imagery. Data for this use case were collected from three experiments over two years (2018–2019) in southern Minnesota, USA (four site-years). The image processing steps that were evaluated include (i) reflectance conversion, (ii) cropping, (iii) spectral clipping, (iv) spectral smoothing, (v) binning, and (vi) segmentation. In total, 648 image processing workflow scenarios were evaluated, and results were analyzed to understand the influence of each image processing step on the cross-validated root mean squared error (RMSE) of the estimators. A sensitivity analysis revealed that the segmentation step was the most influential image processing step on the final estimator error. Across all workflow scenarios, the RMSE of predicted nitrogen uptake ranged from 14.3 to 19.8 kg ha−1 (relative RMSE ranged from 26.5% to 36.5%), a 38.5% increase in error from the lowest to the highest error workflow scenario. The framework introduced demonstrates the sensitivity and extent to which image processing affects prediction accuracy. It allows remote sensing analysts to improve model performance while providing data-driven justification to improve the reproducibility and objectivity of their work, similar to the benefits of hyperparameter tuning in machine learning applications.
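The study compares cross-validated RMSE across workflow scenarios for Lasso and partial least squares estimators. The sketch below shows one way such a comparison could be set up with scikit-learn; the synthetic arrays stand in for the hyperspectral features and nitrogen uptake values and are not the study's data or code.

# Cross-validated RMSE for Lasso and PLS regression on synthetic spectral features.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 50))                               # 120 plots x 50 spectral features
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 120)      # synthetic nitrogen uptake response

for name, est in [("Lasso", Lasso(alpha=0.01, max_iter=10000)),
                  ("PLS", PLSRegression(n_components=5))]:
    scores = cross_val_score(est, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name}: cross-validated RMSE = {-scores.mean():.3f}")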
APA, Harvard, Vancouver, ISO, and other styles
34

Pathak, Ashis, Harman Brar, and Abhishek Puri. "SDPS-11 IMPACT OF MOLECULAR MARKERS USING FISH TECHNIQUE ON GOS/SURVIVAL IN HIGH GRADE GLIOMAS IN INDIAN SCENARIO." Neuro-Oncology Advances 5, Supplement_3 (August 1, 2023): iii19. http://dx.doi.org/10.1093/noajnl/vdad070.071.

Full text
Abstract:
Aims: Molecular markers (MGMT, IDH and 1p/19q, MIB-1 index, TP53) are used for present-day diagnosis of high-grade glial tumours. This study evaluates outcome parameters [e.g., GOS, progression-free (PFS) and overall survival (OS)] in treated high-grade gliomas in the Indian context using the FISH technique for molecular characterisation. Materials and Methods: This study used FISH techniques for IDH, MGMT and other IHC markers prospectively on 10 patients with high-grade gliomas, after safe maximal resection. Outcomes were noted for PFS and OS. A GOS of 1-3 (considered unfavourable) or 4-5 (favourable) was documented at every 3-month follow-up for 2 years. The standardised detection of IDH (by FISH) and methylation profiles of tumours is the strength of the study. Results: Most (80%) patients had a GOS > 13 postoperatively, and none were less than 9. There was no statistically significant effect of gender, extent of resection or tumour location. IDH-1 positivity was recorded in 10%, MGMT methylation in 40%, and intermediate to low MIB-1 index in 83.3%. TP53 mutations were noted in 30% of the cohort, who had adverse outcomes. The median PFS was 8.60 months. Mean survival was 12.70 months, with no molecular marker having statistically significant effects on PFS or OS. A GOS of 13-15 conferred the best OS in contrast to a GOS of 9-12 (16.2 months versus 7.5 months); (p = 0.001). Conclusion: GOS is an easy-to-use, validated scoring system with broad applicability and high reproducibility as a quantitative benchmark for clinical assessment. Since IHC lacks standardisation, FISH, despite being costlier, was used as a diagnostic benchmark. Further research on the impact of genomic profiling and extent of resection, especially in geriatric age groups, is needed to formulate an algorithm for optimal management.
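Median PFS and OS figures of the kind quoted above are typically derived from a Kaplan-Meier estimate. A minimal sketch with the lifelines package is shown below; the follow-up times (months) and event flags are invented and do not correspond to the study cohort.

# Kaplan-Meier estimate of median survival on invented follow-up data (lifelines assumed installed).
from lifelines import KaplanMeierFitter

months = [3.1, 5.4, 8.6, 9.0, 10.2, 12.7, 14.5, 16.2, 18.0, 24.0]
event_observed = [1, 1, 1, 1, 1, 1, 1, 0, 1, 0]   # 1 = progression/death observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=event_observed)
print("median survival (months):", kmf.median_survival_time_)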
APA, Harvard, Vancouver, ISO, and other styles
35

Ginés Clavero, Jonatan, Francisco Martín Rico, Francisco J. Rodríguez-Lera, José Miguel Guerrero Hernandéz, and Vicente Matellán Olivera. "Impact of decision-making system in social navigation." Multimedia Tools and Applications 81, no. 3 (January 2022): 3459–81. http://dx.doi.org/10.1007/s11042-021-11454-2.

Full text
Abstract:
Facing human activity-aware navigation with a cognitive architecture raises several difficulties in integrating the components and orchestrating behaviors and skills to perform social tasks. In a real-world scenario, the navigation system should not consider individuals merely as obstacles. It is necessary to offer a specific and dynamic representation of people to enhance the HRI experience. The robot’s behaviors must be modifiable by humans, directly or indirectly. In this paper, we integrate our human representation framework into a cognitive architecture so that people who interact with the robot can modify its behavior, not only through the interaction but also through their culture or the social context. The human representation framework represents and distributes the proxemic zones’ information in a standard way, through a cost map. We have evaluated the influence of the decision-making system on human-aware navigation and how a local planner may be decisive in this navigation. The material developed during this research can be found in a public repository (https://github.com/IntelligentRoboticsLabs/social_navigation2_WAF), along with instructions to facilitate the reproducibility of the results.
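As a rough sketch of the cost-map idea mentioned above, the snippet below encodes a person's proxemic zone as a Gaussian cost layer on a 2D grid. Grid size, resolution and the 0-254 cost scale are illustrative assumptions, not the framework's actual parameters or API.

# Encoding a person's proxemic zone as a Gaussian cost layer on a 2D grid (illustrative values).
import numpy as np

resolution = 0.05                              # metres per cell
size = 200                                     # 10 m x 10 m grid
person_x, person_y, sigma = 5.0, 5.0, 0.8      # person position and proxemic spread (m)

xs = np.arange(size) * resolution
ys = np.arange(size) * resolution
gx, gy = np.meshgrid(xs, ys)

cost = 254 * np.exp(-((gx - person_x) ** 2 + (gy - person_y) ** 2) / (2 * sigma ** 2))
costmap = cost.astype(np.uint8)                # 0-254, as in typical navigation cost maps
print("peak cost at the person's position:", costmap.max())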
APA, Harvard, Vancouver, ISO, and other styles
36

Brito, Amanda Rodrigues, Mayra do Nascimento Melo, Casandra Genoveva Rosales Martins Ponce de Leon, and Laiane Medeiros Ribeiro. "Cenário simulado com brinquedo terapêutico: uma ferramenta para educação em saúde." Revista Recien - Revista Científica de Enfermagem 12, no. 40 (December 19, 2022): 200–209. http://dx.doi.org/10.24276/rrecien2022.12.40.200-209.

Full text
Abstract:
To develop a simulated scenario with a therapeutic toy for use in health education and to validate it. This is a methodological development study, validated remotely through a structured form. The Content Validity Index was calculated and the comments of the six participating experts were analyzed. The scenario "Therapeutic toy for health education", intended as a means to guide the family member about tracheostomy, together with the checklist of expected behaviors, the Content Validity Index of 0.93 and the experts’ comments, pointed to the reproducibility of the simulated scenario created. The scenario was validated, and its use is expected to stimulate the use of the therapeutic toy so that the technique is disseminated by nursing professionals. Descriptors: Validation Study, Nursing, Simulation Technique, Pediatrics, Play and Playthings.
APA, Harvard, Vancouver, ISO, and other styles
37

Ma, Li, Erich Peterson, Mathew Steliga, Jason Muesse, Katy Marino, Konstantinos Arnaoutakis, Ikjae Shin, and Donald J. Johann. "Abstract 5038: Applying reproducible genomic data science methods for the analysis of a rare tumor type." Cancer Research 82, no. 12_Supplement (June 15, 2022): 5038. http://dx.doi.org/10.1158/1538-7445.am2022-5038.

Full text
Abstract:
Background: The new and emerging discipline of data science demands reproducibility, which is vital in science and presents a significant challenge for high-throughput genomics. To further complicate matters, large and complex projects require collaboration by multiple investigators examining and analyzing the massive data of multiple genomic modalities from different perspectives. Today, researchers are rarely able to reproduce published genomic studies for a variety of reasons, for example: i) differences between versions of software used, ii) lack of detail regarding software parameters, iii) lack of data access, and iv) source code not provided. Here we combine our data infrastructure approach with a molecular infrastructure and apply it to the exploration of a multimodality genomic analysis of a patient with a pulmonary pneumocytoma. Methods: Open-source methods are utilized, including the SQLite database together with R and Python packages and custom code. Results: Our approach is able to generate a wide variety of plots and tables for the purposes of exploratory data analysis (EDA) and/or other user-specific analyses, such as finding differentially expressed genes (DEGs). In addition to traditional EDA plots, the R library RCircos is used to visualize multiple NGS studies (e.g., 7) in a single plot. Differential gene expression (DGE) analysis takes normalized RNA-based read count data and performs a statistical analysis to find quantitative changes in expression levels between different experimental groups. A DGE analysis report is routinely generated. An abbreviated copy number variation (CNV) report derived from an ultra-low-pass whole genome (tumor/germline) NGS approach is also generated. A Python/Jupyter Notebook utilizing a library from scikit-learn is used to generate a clustergram plot. This approach is used as part of finding the optimal number of clusters for a K-Means analysis. RNA-seq data normalized across three sample types using DESeq2 were used in this example. Finally, advanced pathway analysis is performed for the identification of activated and deactivated molecular pathways. Conclusion: The next evolution in oncology research and cancer care is being driven by data science. In the field of genomic data science, accuracy and reproducibility remain a considerable challenge due to the sheer size, complexity, and dynamic nature of the experimental data, plus the relative inventiveness of the quantitative biology approaches. The accuracy and reproducibility challenge does not just block the path to new scientific discoveries; more importantly, it may lead to a scenario where critical findings used for medical decision making are found to be incorrect. Our approach has been developed to meet the unmet need of improving accuracy and reproducibility in genomic data science. Specific findings related to the rare pneumocytoma tumor will be presented. Citation Format: Li Ma, Erich Peterson, Mathew Steliga, Jason Muesse, Katy Marino, Konstantinos Arnaoutakis, Ikjae Shin, Donald J. Johann. Applying reproducible genomic data science methods for the analysis of a rare tumor type [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5038.
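The abstract mentions using scikit-learn to find an optimal number of clusters for a K-Means analysis. The sketch below shows one common way of doing that (silhouette scores over candidate cluster counts) on synthetic expression-like data; it is an illustration, not the authors' notebook.

# Choosing the number of K-Means clusters with silhouette scores on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
expression = np.vstack([rng.normal(loc, 1.0, size=(40, 30)) for loc in (0.0, 3.0, 6.0)])

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expression)
    print(k, "clusters -> silhouette", round(silhouette_score(expression, labels), 3))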
APA, Harvard, Vancouver, ISO, and other styles
38

Silva, Thiago de Medeiros Silveira, Aneuri Souza De Amorim, Mario Cesar Viegas Balthar, Avelino Dos Santos, Rodrigo Carneiro Curzio, Domingos D'Oliveira Cardoso, Wallace Vallory Nunes, and Raphael Rocha França. "Methodology for checking the stability of the Cesium-137 irradiator system of the Radiation Monitors Calibration Laboratory (LabCal) at IDQBRN / Metodologia para verificar a estabilidade do sistema de irradiação Césio-137 do Laboratório de Calibração de Monitores de Radiação (LabCal) no IDQBRN." Brazilian Journal of Development 8, no. 1 (January 12, 2022): 2813–31. http://dx.doi.org/10.34117/bjdv8n1-186.

Full text
Abstract:
The provision to the Brazilian Army of equipment that delivers reliable and safe measurements leads to the need for a metrology study of the calibration system used in the Radiation Monitor Calibration Laboratory (LabCal) of the Institute of Chemical, Biological, Radiological and Nuclear Defense (IDQBRN). In order to verify the stability of dosimetry with Cesium-137, the ambient dose equivalent rate, H*(10), was experimentally obtained for certain distance settings and lead attenuators on different dates in order to compare them. To this end, the distance between the Cesium-137 source and the ionization chamber was varied between 1000, 2000 and 3000 mm, without an attenuator and with 15 mm and 32 mm lead attenuators. In this work, the reproducibility of the system was analyzed by comparing scenarios with equal distance and attenuator but different test dates. For this purpose, a correction for the radioactive decay of the source up to a reference date was used for comparison. Furthermore, the stability of the LabCal Cesium-137 system was verified in the light of the relevant standards for exposures without attenuators and with selected attenuators at the tested distances.
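A decay correction of the kind described above follows the usual exponential-decay law. The sketch below refers a measured dose-equivalent rate back to a reference date; the half-life is an approximate textbook value and the measured rate and elapsed time are assumptions for illustration only.

# Referring a measured H*(10) rate back to a reference date using Cs-137 decay (illustrative numbers).
from math import exp, log

half_life_years = 30.1          # approximate Cs-137 half-life
elapsed_years = 1.5             # time between reference date and measurement
measured_rate = 95.0            # measured rate at the later date (uSv/h)

decay_factor = exp(-log(2) * elapsed_years / half_life_years)
rate_at_reference = measured_rate / decay_factor
print(f"rate referred to the reference date: {rate_at_reference:.1f} uSv/h")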
APA, Harvard, Vancouver, ISO, and other styles
39

Ncube, Mthokozisi, and Akpofure E. Taigbenu. "Assessment of apparent losses due to meter inaccuracy using an alternative, validated methodology." Water Supply 19, no. 4 (October 29, 2018): 1212–20. http://dx.doi.org/10.2166/ws.2018.178.

Full text
Abstract:
Despite wide acceptance of the IWA water balance as the basis of managing water losses, experience suggests that there are difficulties with its application. For apparent losses assessment, the traditional approach of deriving consumption profiles and testing water meters exceeds the resources of many utilities. While a few studies have explored alternative methodologies, these have largely not been validated and are susceptible to reproducibility and interpretation difficulties. This paper introduces an improved comparative billing analysis method that combines data preparation techniques, clustering analysis and classical regression analysis on monthly billing data of a water utility in Johannesburg, South Africa. Using the method, an average estimate of apparent losses due to metering errors of 8.2% was found against the best-case scenario of 9.4% using field investigations and laboratory tests, which also measure meter under-registration that the proposed methodology does not cater for. The validated results were possible at a fraction of the cost and effort, while also providing better insight into the underlying consumption patterns. The results show that data-driven discovery processes are viable alternatives for improved assessment and management of water losses.
APA, Harvard, Vancouver, ISO, and other styles
40

Heggy, Essam, Zane Sharkawy, and Abotalib Z. Abotalib. "Reply to Comment on ‘Egypt’s water budget deficit and suggested mitigation policies for the Grand Ethiopian Renaissance Dam filling scenarios’ by Kevin Wheeler et al." Environmental Research Letters 17, no. 12 (November 29, 2022): 128001. http://dx.doi.org/10.1088/1748-9326/ac9c1b.

Full text
Abstract:
We thank Wheeler et al for positively confirming our results’ reproducibility; however, we show herein that their critique misrepresents the aim, approach, and interpretations reported in Heggy et al (2021 Environ. Res. Lett. 16 074022), which remain valid. The reply herein demonstrates that Wheeler et al incorrectly interpreted Heggy et al’s (2021 Environ. Res. Lett. 16 074022) estimates of the median unmitigated total water budget deficit for Egypt of 31 BCM yr−1 to be entirely caused by GERD. The comment overlooks the fact that this estimated value is the sum of Egypt’s existing intrinsic deficit (18.5 BCM yr−1), the initial reservoir seepage (2.5 BCM yr−1), and the median dam impoundment (9.5 BCM yr−1) under different GERD filling scenarios ranging from 2.5 to 29.6 years, as shown in figure 2 and section 3.1 in Heggy et al (2021 Environ. Res. Lett. 16 074022). Consequently, our evaluation of the deficit was mistakenly deemed exaggerated, as were the socioeconomic impacts that rely on its estimate. These misinterpretations led to inappropriate comparisons of the results of the unmitigated total water budget deficit under the shortest filling scenario in Heggy et al (2021 Environ. Res. Lett. 16 074022) with longer ones from other studies that focus exclusively on GERD impoundment and assess the economic impacts of water shortage after applying several suggested mitigations that are not yet formally agreed upon, implemented, or budgeted. Instead, Heggy et al (2021 Environ. Res. Lett. 16 074022) provided a holistic evaluation of the current status of the total water budget deficit in Egypt (including intrinsic and GERD components) and its equivalent economic representation to support decision-makers in better implementing the fourth statement of the declaration of principles between the Nile’s riparian countries. The suggestion that the results of the unmitigated scenarios in Heggy et al (2021 Environ. Res. Lett. 16 074022) should match those of the mitigated ones cited in Wheeler et al is erroneous from both hydrological and policy perspectives.
APA, Harvard, Vancouver, ISO, and other styles
41

Dora, Amy V., Tara Vijayan, and Christopher J. Graber. "1138. Works Well Enough? Program Directors’ Perceptions of the Effectiveness and Transparency of Competency-Based Evaluations in Assessing Infectious Diseases Fellow Performance." Open Forum Infectious Diseases 7, Supplement_1 (October 1, 2020): S597—S598. http://dx.doi.org/10.1093/ofid/ofaa439.1324.

Full text
Abstract:
Background: In July 2015, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Internal Medicine (ABIM) jointly outlined an approach to assessing fellow performance using milestone-based core competencies for incorporation into standardized evaluation templates of trainee performance. Limited data exist regarding the clarity, effectiveness, and reproducibility of competency-based evaluations of infectious diseases fellows. Methods: From March to May 2019, program directors of ACGME-accredited infectious diseases fellowship programs were invited to complete a Qualtrics-based survey of program characteristics and evaluation methods, including a trainee vignette to gauge evaluation reproducibility. Completed surveys were analyzed with descriptive statistics. Results: Forty-three program directors initiated the survey, but 29 completed it. Seventeen (59%) were men, 19 (66%) were on a teaching service for over 8 weeks a year, and 19 (66%) had fewer than four first-year fellows in their program. Most respondents agreed the competencies lacking the most clarity were systems-based practice (17/29, 58%) and practice-based improvement (16/29, 55%). Eighteen (62%) were at least “somewhat satisfied” with their institution’s assessment tool, and 19 (66%) reported it was at least “moderately effective” in identifying academic deficiencies. Responses rating fellow performance from the vignette ranged from 1.5 to 4 on the standard milestone-based competency scale of 1-5 with 0.5 increments (median 3). For the same scenario using a qualitative ordinal scale, 66% (19/29) categorized the fellow as “early first year” and 34% (9/29) as “advanced first year.” Respondents offered a wide range of comments on milestone-based competencies, including “it works well enough” and “the process seems bloated and educratic.” Conclusion: Clarity is needed on how to evaluate specific core competencies in infectious diseases, particularly systems-based practice and practice-based improvement. Describing anchoring milestones and evaluating fellows according to stage in fellowship (i.e., early first-year fellow) can help standardize responses. Further exploration of ways to improve the evaluation process is warranted. Disclosures All Authors: No reported disclosures
APA, Harvard, Vancouver, ISO, and other styles
42

Olarte, Carlos Mario, Mauricio Zuluaga, Adriana Guzman, Julian Camacho, Pieralessandro Lasalvia, Nathaly Garzón-Orjuela, Laura Prieto, et al. "Analysis of the experience of the geriatric fracture program in two institutions in Colombia: a reproducible model?" Colombia Medica 52, no. 3 (November 19, 2021): e2034524. http://dx.doi.org/10.25100/cm.v52i3.4524.

Full text
Abstract:
Background: Hip fracture is a major cause of morbidity and mortality. Geriatric fracture programs promise to improve the quality of care, improve health outcomes and reduce costs. Objective: To describe the results of the Geriatric fracture program implementation in two Colombian institutions. These results could then be compared to other published experiences to assess the reproducibility of the program. Methods: A retrospective descriptive study of the patients treated under the Geriatric fracture programs in two institutions in Colombia was carried out. The information from each institution was collected from the initial year of program implementation until 2018. Demographic characteristics, length of stay, hospitalization complications, readmissions and mortality were described. Consumption of healthcare resources was defined using base cases determined with local experts, and costs were estimated using standard methods. Results: 475 patients were included in the Geriatric fracture programs in the two institutions. We observed an increase in the number of patients during the programs. The length of stay decreased by between 8.5% and 26.1%, as did the proportion of total complications, with delirium showing the greatest reduction. A similar pattern was seen for first-year mortality (from 10.9% to 4.7% in one institution and from 11.4% to 5.1% in the other), in-hospital deaths and readmissions. Estimates of the costs of stay and complications showed reductions in all scenarios, varying between 22% and 68.3% depending on the sensitivity scenario. Conclusions: The present study presents the experience of two institutions that implemented the Geriatric fracture program, with an increase in the number of patients treated and reductions in the length of hospital stay, the proportion of complications, readmissions, mortality, and estimated costs. These results are similar between the two institutions and to other published implementations. This suggests that the geriatric fracture program may be implemented with reproducible results.
APA, Harvard, Vancouver, ISO, and other styles
43

Mason, Anna Elizabeth, and Murali Varma. "Histopathology reporting for personalised medicine: focus on clinical utility." Journal of Clinical Pathology 75, no. 8 (July 19, 2022): 525–28. http://dx.doi.org/10.1136/jclinpath-2022-208185.

Full text
Abstract:
Histopathology guidelines generally focus on standardised collection of data items to facilitate completeness and reproducibility of histopathology reporting. A data item is categorised as either core (mandatory) or non-core (recommended but not mandatory), irrespective of the clinical scenario. However, a data item that is critical for patient management in one clinical setting may have little clinical significance in another setting. A diagnosis of limited extent Gleason score 3+3=6 prostate cancer is critical in a patient being investigated for raised serum prostate-specific antigen but would be clinically irrelevant in a repeat biopsy from a patient on an active surveillance protocol. We outline an alternative approach that is focused on the clinical utility of the data items and the requirements of personalised medicine. While all core data items are required to be reported, understanding how these parameters are used to guide patient management will enable pathologists to focus time and resources on the critical aspects of an individual case. Detailed immunohistochemical workup and obtaining a second opinion would not be necessary if resolution of the differential diagnosis is of limited clinical significance. We also highlight some challenges encountered when adopting this approach and suggest some solutions that could positively impact histopathology reporting and patient care.
APA, Harvard, Vancouver, ISO, and other styles
44

Toutsop, Borel, Benjamin Ducharne, Mickael Lallart, Laurent Morel, and Pierre Tsafack. "Characterization of Tensile Stress-Dependent Directional Magnetic Incremental Permeability in Iron-Cobalt Magnetic Sheet: Towards Internal Stress Estimation through Non-Destructive Testing." Sensors 22, no. 16 (August 21, 2022): 6296. http://dx.doi.org/10.3390/s22166296.

Full text
Abstract:
Iron-Cobalt ferromagnetic alloys are promoted for electrical energy conversion in aeronautic applications, but their high magnetostrictive coefficients may result in undesired behaviors. Internal stresses can be tuned to limit magnetostriction but must be adequately assessed in a non-destructive way during production. For this, directional magnetic incremental permeability is proposed in this work. For academic purposes, internal stresses have been replaced by a homogeneous external stress, which is easier to control using a traction/compression test bench and results in similar effects. Tests have been limited to tensile stress stimuli, the worst-case scenario for magnetic stress observation on materials with a positive magnetostriction coefficient. Hysteresis cycles have been reconstructed from the incremental permeability measurement for stability and reproducibility of the measured quantities. The directionality of the sensor provides an additional degree of freedom in the observation of the magnetic response. The study reveals that an angle of π/2 between the DC (Hsurf DC) and AC (Hsurf AC) magnetic excitations, with a flux density Ba at Hsurf DC = 10 kA·m−1, constitutes the ideal experimental situation and yields the parameter most highly correlated with a homogeneous imposed tensile stress. Magnetic incremental permeability is linked to the magnetic domain wall bulging magnetization mechanism; this study thus provides insights for understanding such a mechanism.
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Kun, Xinkai Xia, Fan Zhang, Huanzhou Ma, Shengbo Sang, Qiang Zhang, and Jianlong Ji. "Implementation of a Sponge-Based Flexible Electronic Skin for Safe Human–Robot Interaction." Micromachines 13, no. 8 (August 19, 2022): 1344. http://dx.doi.org/10.3390/mi13081344.

Full text
Abstract:
In current industrial production, robots have increasingly been taking the place of manual workers. With the improvements in production efficiency, accidents that involve operators occur frequently. In this study, a flexible sensor system was designed to improve the security performance of a collaborative robot. The flexible sensors, which were made by adsorbing graphene into a sponge, could accurately convert the pressure on a contact surface into a numerical signal. Ecoflex was selected as the substrate material for our sensing array so as to enable the sensors to better adapt to the sensing application scenario of the robot arm. A 3D-printed mold was used to prepare the flexible substrate of the sensors, which made the positioning of each part within the sensors more accurate and ensured the uniformity of the sensing array. The sensing unit showed a correspondence between the input force and the output resistance in the range of 0–5 N. Our stability and reproducibility experiments indicated that the sensors had good stability. In addition, a tactile acquisition system was designed to sample the tactile data from the sensor array. Our interaction experiment results showed that the proposed electronic skin could provide an efficient approach for secure human–robot interaction.
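A force-to-resistance correspondence over a 0-5 N range is typically turned into readings through a calibration curve. The sketch below fits a simple polynomial calibration and inverts it for a new sample; the calibration points and kOhm values are invented, not measurements from the paper.

# Fitting an invented resistance-to-force calibration curve and reading back a force estimate.
import numpy as np

force_n = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
resistance_kohm = np.array([120.0, 95.0, 78.0, 64.0, 55.0, 48.0])

coeffs = np.polyfit(resistance_kohm, force_n, deg=2)   # quadratic resistance -> force map
read_force = np.polyval(coeffs, 70.0)                  # hypothetical new reading of 70 kOhm
print(f"estimated force: {read_force:.2f} N")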
APA, Harvard, Vancouver, ISO, and other styles
46

Cerroni, Rocco, Daniele Pietrucci, Adelaide Teofani, Giovanni Chillemi, Claudio Liguori, Mariangela Pierantozzi, Valeria Unida, Sidorela Selmani, Nicola Biagio Mercuri, and Alessandro Stefani. "Not just a Snapshot: An Italian Longitudinal Evaluation of Stability of Gut Microbiota Findings in Parkinson’s Disease." Brain Sciences 12, no. 6 (June 4, 2022): 739. http://dx.doi.org/10.3390/brainsci12060739.

Full text
Abstract:
Most research has analyzed gut-microbiota alterations in Parkinson’s disease (PD) through cross-sectional studies, as single snapshots, without considering the time factor to either confirm methods and findings or observe longitudinal variations. In this study, we introduce the time factor by comparing gut-microbiota composition in 18 PD patients and 13 healthy controls (HC) at baseline and at least 1 year later, also considering PD clinical features. PD patients and HC underwent fecal sampling at baseline and at a follow-up appointment. Fecal samples underwent sequencing and 16S rRNA amplicon analysis. Patients’ clinical features were evaluated with the Hoehn & Yahr (H&Y) staging scale and the Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS) Part III. Results demonstrated stability of microbiota findings in both PD patients and HC over a period of 14 months: both alpha and beta diversity were maintained in PD patients and HC over the observation period. In addition, differences in microbiota composition between PD patients and HC remained stable over the same period. Moreover, during this period, patients did not experience any worsening of either staging or motor impairment. Our findings, highlighting the stability and reproducibility of the method, correlate clinical and microbiota stability over time and open the scenario to more extensive longitudinal evaluations.
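Alpha diversity comparisons of the kind described above are often based on the Shannon index computed per sample. The sketch below shows that calculation on an invented vector of taxon counts; it is not the study's pipeline.

# Shannon alpha diversity from a per-sample vector of taxon read counts (invented counts).
import numpy as np

counts = np.array([120, 80, 40, 25, 10, 5])        # reads per taxon in one sample
proportions = counts / counts.sum()
shannon = -np.sum(proportions * np.log(proportions))
print(f"Shannon index: {shannon:.3f}")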
APA, Harvard, Vancouver, ISO, and other styles
47

Carrillo-Ávila, José Antonio, Purificación Catalina, and Rocío Aguilar-Quesada. "Quality Control of Cell Lines Using DNA as Target." DNA 2, no. 1 (February 16, 2022): 44–55. http://dx.doi.org/10.3390/dna2010004.

Full text
Abstract:
Cell lines are widely used pre-clinical models for biomedical research. The accessibility and relative simplicity of the facilities necessary for the use of cell lines, along with the large number of potential applications, encourage many researchers to choose this model. However, access to cell lines from an unreliable source or through interlaboratory exchange results in uncontrolled cell lines of uncertain quality. Furthermore, the possibility of using cell lines as an endless resource through multiple passages can contribute to this uncontrolled scenario, the main consequence of which is a lack of reproducibility between research results. Different initiatives have emerged to promote best practices regarding the use of cell lines and to minimize the effect on the scientific results reported, including comprehensive quality control in the frame of Good Cell Culture Practice (GCCP). Cell banks, research infrastructures for the professional distribution of biological material of high and known quality and origin, are committed to these initiatives. Many of the quality controls used to test different attributes of cell lines are based on DNA. This review describes quality control protocols for cell lines whose target molecule is DNA and details their scope, purpose and functionality.
APA, Harvard, Vancouver, ISO, and other styles
48

Meindl, Bernhard, and Matthias Templ. "Feedback-Based Integration of the Whole Process of Data Anonymization in a Graphical Interface." Algorithms 12, no. 9 (September 10, 2019): 191. http://dx.doi.org/10.3390/a12090191.

Full text
Abstract:
The interactive, web-based point-and-click application presented in this article allows data to be anonymized without any knowledge of a programming language. Anonymization is important in data mining, but creating safe, anonymized data is by no means a trivial task. Both the methodological issues and the know-how of subject matter specialists should be taken into account when anonymizing data. Even though specialized software such as sdcMicro exists, it is often difficult for non-experts in a particular piece of software, and those without programming skills, to actually anonymize datasets without an appropriate app. The presented app is not restricted to applying disclosure limitation techniques but rather facilitates the entire anonymization process. This interface allows data to be uploaded to the system and modified, and an object defining the disclosure scenario to be created. Once such a statistical disclosure control (SDC) problem has been defined, users can apply anonymization techniques to this object and get instant feedback on the impact on risk and data utility after SDC methods have been applied. Additional features, such as an undo button and the possibility to export the anonymized dataset or the required code for reproducibility reasons, as well as its interactive features, make it convenient for both experts and non-experts in R (the free software environment for statistical computing and graphics) to protect a dataset using this app.
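The risk feedback mentioned above is often based on sample frequencies of quasi-identifier combinations. The sketch below is only a conceptual illustration of such a k-anonymity-style check; the records are invented and this is not the sdcMicro or app implementation.

# A simple sample-frequency (k-anonymity style) risk check on quasi-identifier combinations.
from collections import Counter

records = [
    ("female", "1980-1989", "urban"),
    ("female", "1980-1989", "urban"),
    ("male", "1970-1979", "rural"),
    ("male", "1990-1999", "urban"),
]

k = 2
key_counts = Counter(records)
risky = [rec for rec, count in key_counts.items() if count < k]
print(f"{len(risky)} quasi-identifier combinations violate {k}-anonymity:", risky)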
APA, Harvard, Vancouver, ISO, and other styles
49

Stallone, Giovanni, and Giuseppe Grandaliano. "To discard or not to discard: transplantation and the art of scoring." Clinical Kidney Journal 12, no. 4 (April 16, 2019): 564–68. http://dx.doi.org/10.1093/ckj/sfz032.

Full text
Abstract:
The growing gap between inadequate supply and constantly high demand for kidney transplantation observed in the last two decades has led to exploring the possibility of using organs from older donors with an increasing number of comorbidities. The main issue in this scenario is to identify transplantable organs and to allocate them to the most suitable recipients. A great number of clinical investigations have proposed acceptance/allocation criteria to reduce the discard rate of these kidneys and to improve their outcome, including histological features at the time of transplant. Despite the widespread use of several histological scoring systems, there is no consensus on their value in predicting allograft survival, and there is established evidence that histological analysis is the most common reason to discard expanded criteria donor kidneys. To overcome this issue, a clinical scoring system, the Kidney Donor Profile Index (KDPI), was developed on the basis of easily accessible donor features. The KDPI score, adopted in the new US allocation procedure, has good reproducibility but presents several limitations, as also suggested in this issue of Clinical Kidney Journal. This observation should stimulate the search for novel scores combining clinical, histological and molecular features in an attempt to improve the decision process.
APA, Harvard, Vancouver, ISO, and other styles
50

Rendina-Ruedy, Elizabeth, Kelsey D. Hembree, Angela Sasaki, McKale R. Davis, Stan A. Lightfoot, Stephen L. Clarke, Edralin A. Lucas, and Brenda J. Smith. "A Comparative Study of the Metabolic and Skeletal Response of C57BL/6J and C57BL/6N Mice in a Diet-Induced Model of Type 2 Diabetes." Journal of Nutrition and Metabolism 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/758080.

Full text
Abstract:
Type 2 diabetes mellitus (T2DM) represents a complex clinical scenario of altered energy metabolism and increased fracture incidence. The C57BL/6 mouse model of diet-induced obesity has been used to study the mechanisms by which altered glucose homeostasis affects bone mass and quality, but genetic variations in substrains of C57BL/6 may have confounded data interpretation. This study investigated the long-term metabolic and skeletal responses of two commonly used C57BL/6 substrains to a high-fat (HF) diet. Male C57BL/6J, C57BL/6N, and the negative control strain, C3H/HeJ, mice were fed a control or HF diet for 24 wks. C57BL/6N mice on a HF diet demonstrated an increase in plasma insulin and blood glucose as early as 4 wk, whereas these responses were delayed in the C57BL/6J mice. The C57BL/6N mice exhibited more severe hepatic steatosis and inflammation. Only the C57BL/6N mice lost significant trabecular bone in response to the high-fat diet. The C3H/HeJ mice were protected from bone loss. The data show that C57BL/6J and C57BL/6N mice differ in their metabolic and skeletal response when fed a HF diet. These substrain differences should be considered when designing experiments and are likely to have implications for data interpretation and reproducibility.
APA, Harvard, Vancouver, ISO, and other styles