Selected scientific literature on the topic "Reproducibility of scenario"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Reproducibility of scenario".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online when it is available in the metadata.

Journal articles on the topic "Reproducibility of scenario"

1. Simkus, Andrea, Frank PA Coolen, Tahani Coolen-Maturi, Natasha A. Karp, and Claus Bendtsen. "Statistical reproducibility for pairwise t-tests in pharmaceutical research". Statistical Methods in Medical Research 31, no. 4 (December 2, 2021): 673–88. http://dx.doi.org/10.1177/09622802211041765.

Abstract:
This paper investigates statistical reproducibility of the t-test. We formulate reproducibility as a predictive inference problem and apply the nonparametric predictive inference method. Within our research framework, statistical reproducibility provides inference on the probability that the same test outcome would be reached, if the test were repeated under identical conditions. We present a nonparametric predictive inference algorithm to calculate the reproducibility of the t-test and then use simulations to explore the reproducibility both under the null and alternative hypotheses. We then apply nonparametric predictive inference reproducibility to a real-life scenario of a preclinical experiment, which involves multiple pairwise comparisons of test groups, where different groups are given a different concentration of a drug. The aim of the experiment is to decide the concentration of the drug which is most effective. In both simulations and the application scenario, we study the relationship between reproducibility and two test statistics, Cohen's d and the p-value. We also compare the reproducibility of the t-test with the reproducibility of the Wilcoxon Mann–Whitney test. Finally, we examine reproducibility for the final decision of choosing a particular dose in the multiple pairwise comparisons scenario. This paper presents advances on the topic of test reproducibility with relevance for tests used in pharmaceutical research.
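
As a quick companion to the abstract, the sketch below computes the two test statistics it mentions, a pairwise t-test p-value and Cohen's d, for two invented dose groups. It is not the authors' nonparametric predictive inference algorithm, only standard calculations assumed here for illustration.

```python
# Minimal sketch: p-value and Cohen's d for one pairwise comparison.
# NOT the NPI reproducibility algorithm from the paper; the data are invented.
import numpy as np
from scipy import stats

dose_a = np.array([4.1, 5.0, 4.7, 5.3, 4.9, 5.6])
dose_b = np.array([5.8, 6.1, 5.5, 6.4, 6.0, 6.7])

t_stat, p_value = stats.ttest_ind(dose_a, dose_b)   # two-sided pairwise t-test

# Cohen's d with a pooled standard deviation.
n_a, n_b = len(dose_a), len(dose_b)
pooled_sd = np.sqrt(((n_a - 1) * dose_a.var(ddof=1) + (n_b - 1) * dose_b.var(ddof=1))
                    / (n_a + n_b - 2))
cohens_d = (dose_b.mean() - dose_a.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```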

2. Fary, Camdon, Dean McKenzie, and Richard de Steiger. "Reproducibility of an Intraoperative Pressure Sensor in Total Knee Replacement". Sensors 21, no. 22 (November 18, 2021): 7679. http://dx.doi.org/10.3390/s21227679.

Abstract:
Appropriate soft tissue tension in total knee replacement (TKR) is an important factor for a successful outcome. The purpose of our study was to assess both the reproducibility of a modern intraoperative pressure sensor (IOP) and whether a surgeon could unconsciously influence the measurement. A consecutive series of 80 TKRs were assessed with an IOP between January 2018 and December 2020. In the first scenario, two blinded sequential measurements in 48 patients were taken; in a second scenario, an initial blinded measurement and a subsequent unblinded measurement in 32 patients were taken while looking at the sensor monitor screen. Reproducibility was assessed by intraclass correlation coefficients (ICCs). In the first scenario, the ICC ranged from 0.83 to 0.90, and in the second scenario it ranged from 0.80 to 0.90. All ICCs were 0.80 or higher, indicating reproducibility using an IOP and that a surgeon may not unconsciously influence the measurement. The use of a modern IOP to measure soft tissue tension in TKRs is a reproducible technique. A surgeon observing the measurements while performing IOP may not significantly influence the result. An IOP gives additional information that the surgeon can use to optimize outcomes in TKR.
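
For orientation, the agreement statistic reported here can be computed as in the following sketch, which implements a two-way random-effects, single-measurement ICC (Shrout and Fleiss ICC(2,1)) on invented paired readings; it is not the study's data or analysis code.

```python
# Hedged sketch: ICC(2,1) for n subjects each measured k times. Values invented.
import numpy as np

# rows = knees (subjects), columns = first and second blinded IOP reading
x = np.array([[22.0, 24.0],
              [35.0, 33.0],
              [18.0, 20.0],
              [27.0, 29.0],
              [40.0, 38.0]])
n, k = x.shape
grand = x.mean()

ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between readings
ss_err = np.sum((x - x.mean(axis=1, keepdims=True)
                   - x.mean(axis=0, keepdims=True) + grand) ** 2)
ms_err = ss_err / ((n - 1) * (k - 1))

icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc_2_1:.2f}")
```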

3. Hearn, Jeff. "Sexualities, organizations and organization sexualities: Future scenarios and the impact of socio-technologies (a transnational perspective from the global ‘north’)". Organization 21, no. 3 (February 24, 2014): 400–420. http://dx.doi.org/10.1177/1350508413519764.

Abstract:
The article opens by briefly reviewing studies of sexuality in and around organizations from the 1970s. These studies showed considerable theoretical, empirical and conceptual development, as in the concept of organization sexuality. Building on this, the article’s first task is to analyse alternative future scenarios for organization sexualities, by way of changing intersections of gender, sexuality and organizational forms. Possible gendered future scenarios are outlined based on, first, gender equality/inequality and, second, gender similarity/difference between women, men and further genders: hyper-patriarchy scenario—men and women becoming more divergent, with greater inequality; late capitalist gender scenario—genders becoming more convergent, with greater inequality; bi-polar scenario—men and women becoming more divergent, with greater equality; postgender scenario—genders becoming more convergent, with greater equality. Somewhat similar scenarios for organization sexualities are elaborated in terms of gender/sexual equality and inequality and sexual/gender similarity and difference: heteropatriarchies scenario—greater sexual/gender difference and greater sexual or sexual/gender inequality; late capitalist sexual scenario—greater sexual/gender similarity and greater sexual or gender/sexual inequality; sexual differentiation scenario—greater sexual/gender difference and greater sexual or sexual/gender equality; sexual blurring scenario—greater sexual/gender similarity and greater sexual or sexual/gender equality. The article’s second task is to address the impact of globalizations and transnationalizations, specifically information and communication technologies and other socio-technologies, for future scenarios of organization sexualities. The characteristic affordances of ICTs—technological control, virtual reproducibility, conditional communality, unfinished undecidability—are mapped onto the four scenarios above and the implications outlined.

4. Shu, Lele, Paul Ullrich, Xianhong Meng, Christopher Duffy, Hao Chen, and Zhaoguo Li. "rSHUD v2.0: advancing the Simulator for Hydrologic Unstructured Domains and unstructured hydrological modeling in the R environment". Geoscientific Model Development 17, no. 2 (January 19, 2024): 497–527. http://dx.doi.org/10.5194/gmd-17-497-2024.

Abstract:
Abstract. Hydrological modeling is a crucial component in hydrology research, particularly for projecting future scenarios. However, achieving reproducibility and automation in distributed hydrological modeling research for modeling, simulation, and analysis is challenging. This paper introduces rSHUD v2.0, an innovative, open-source toolkit developed in the R environment to enhance the deployment and analysis of the Simulator for Hydrologic Unstructured Domains (SHUD). The SHUD is an integrated surface–subsurface hydrological model that employs a finite-volume method to simulate hydrological processes at various scales. The rSHUD toolkit includes pre- and post-processing tools, facilitating reproducibility and automation in hydrological modeling. The utility of rSHUD is demonstrated through case studies of the Shale Hills Critical Zone Observatory in the USA and the Waerma watershed in China. The rSHUD toolkit's ability to quickly and automatically deploy models while ensuring reproducibility has facilitated the implementation of the Global Hydrological Data Cloud (https://ghdc.ac.cn, last access: 1 September 2023), a platform for automatic data processing and model deployment. This work represents a significant advancement in hydrological modeling, with implications for future scenario projections and spatial analysis.

5. Hafizi, Hamed, and Ali Arda Sorman. "Integrating Meteorological Forcing from Ground Observations and MSWX Dataset for Streamflow Prediction under Multiple Parameterization Scenarios". Water 14, no. 17 (September 1, 2022): 2721. http://dx.doi.org/10.3390/w14172721.

Abstract:
Precipitation and near-surface air temperatures are significant meteorological forcing for streamflow prediction where most basins are partially or fully data-scarce in many parts of the world. This study aims to evaluate the consistency of MSWXv100-based precipitation, temperatures, and estimated potential evapotranspiration (PET) by direct comparison with observed measurements and by utilizing an independent combination of the MSWXv100 dataset and observed data for streamflow prediction under four distinct scenarios considering model parameter and output uncertainties. Initially, the model is calibrated/validated entirely based on observed data (Scenario 1), where for the second calibration/validation, the observed precipitation is replaced by MSWXv100 precipitation and the daily observed temperature and PET remained unchanged (Scenario 2). Furthermore, the model calibration/validation is done by considering observed precipitation and MSWXv100-based temperature and PET (Scenario 3), and finally, the model is calibrated/validated entirely based on the MSWXv100 dataset (Scenario 4). The Kling–Gupta Efficiency (KGE) and its components (correlation, ratio of bias, and variability ratio) are utilized for direct comparison, and the Hanssen–Kuiper (HK) skill score is employed to evaluate the detectability strength of MSWXv100 precipitation for different precipitation intensities. Moreover, the hydrologic utility of the MSWXv100 dataset under four distinct scenarios is tested by exploiting a conceptual rainfall-runoff model under KGE and Nash–Sutcliffe Efficiency (NSE) metrics. The results indicate that each scenario depicts high streamflow reproducibility where, regardless of other meteorological forcing, utilizing observed precipitation (Scenarios 1 and 3) as one of the model inputs shows better model performance (KGE = 0.85) than MSWXv100-based precipitation (Scenarios 2 and 4, KGE = 0.78–0.80).
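
The skill scores named in the abstract are standard and easy to reproduce. The sketch below computes NSE and the modified Kling-Gupta efficiency with its three components on placeholder series; it is an illustration under those assumptions, not the study's evaluation code.

```python
# Hedged sketch: Nash-Sutcliffe efficiency and modified Kling-Gupta efficiency
# (correlation r, bias ratio beta, variability ratio gamma). Data are placeholders.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()                                            # bias ratio
    gamma = (sim.std(ddof=1) / sim.mean()) / (obs.std(ddof=1) / obs.mean())   # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2), r, beta, gamma

observed  = [12.0, 18.5, 30.2, 22.1, 15.4, 11.8]   # placeholder daily streamflow
simulated = [13.1, 17.0, 28.5, 24.0, 16.2, 10.9]
print("NSE =", round(nse(observed, simulated), 3))
print("KGE, r, beta, gamma =", [round(v, 3) for v in kge(observed, simulated)])
```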

6. Mazzorana, B., J. Hübl, and S. Fuchs. "Improving risk assessment by defining consistent and reliable system scenarios". Natural Hazards and Earth System Sciences 9, no. 1 (February 17, 2009): 145–59. http://dx.doi.org/10.5194/nhess-9-145-2009.

Abstract:
Abstract. During the entire procedure of risk assessment for hydrologic hazards, the selection of consistent and reliable scenarios, constructed in a strictly systematic way, is fundamental for the quality and reproducibility of the results. However, subjective assumptions on relevant impact variables such as sediment transport intensity on the system loading side and weak point response mechanisms repeatedly cause biases in the results, and consequently affect transparency and required quality standards. Furthermore, the system response of mitigation measures to extreme event loadings represents another key variable in hazard assessment, as well as the integral risk management including intervention planning. Formative Scenario Analysis, as a supplement to conventional risk assessment methods, is a technique to construct well-defined sets of assumptions to gain insight into a specific case and the potential system behaviour. By two case studies, carried out (1) to analyse sediment transport dynamics in a torrent section equipped with control measures, and (2) to identify hazards induced by woody debris transport at hydraulic weak points, the applicability of the Formative Scenario Analysis technique is presented. It is argued that during scenario planning in general and with respect to integral risk management in particular, Formative Scenario Analysis allows for the development of reliable and reproducible scenarios in order to design more specifically an application framework for the sustainable assessment of natural hazards impact. The overall aim is to optimise the hazard mapping and zoning procedure by methodologically integrating quantitative and qualitative knowledge.

7. Refaee, Turkey, Zohaib Salahuddin, Yousif Widaatalla, Sergey Primakov, Henry C. Woodruff, Roland Hustinx, Felix M. Mottaghy, Abdalla Ibrahim, and Philippe Lambin. "CT Reconstruction Kernels and the Effect of Pre- and Post-Processing on the Reproducibility of Handcrafted Radiomic Features". Journal of Personalized Medicine 12, no. 4 (March 31, 2022): 553. http://dx.doi.org/10.3390/jpm12040553.

Abstract:
Handcrafted radiomics features (HRFs) are quantitative features extracted from medical images to decode biological information to improve clinical decision making. Despite the potential of the field, limitations have been identified. The most important identified limitation, currently, is the sensitivity of HRF to variations in image acquisition and reconstruction parameters. In this study, we investigated the use of Reconstruction Kernel Normalization (RKN) and ComBat harmonization to improve the reproducibility of HRFs across scans acquired with different reconstruction kernels. A set of phantom scans (n = 28) acquired on five different scanner models was analyzed. HRFs were extracted from the original scans, and scans were harmonized using the RKN method. ComBat harmonization was applied on both sets of HRFs. The reproducibility of HRFs was assessed using the concordance correlation coefficient. The difference in the number of reproducible HRFs in each scenario was assessed using McNemar’s test. The majority of HRFs were found to be sensitive to variations in the reconstruction kernels, and only six HRFs were found to be robust with respect to variations in reconstruction kernels. The use of RKN resulted in a significant increment in the number of reproducible HRFs in 19 out of the 67 investigated scenarios (28.4%), while the ComBat technique resulted in a significant increment in 36 (53.7%) scenarios. The combination of methods resulted in a significant increment in 53 (79.1%) scenarios compared to the HRFs extracted from original images. Since the benefit of applying the harmonization methods depended on the data being harmonized, reproducibility analysis is recommended before performing radiomics analysis. For future radiomics studies incorporating images acquired with similar image acquisition and reconstruction parameters, except for the reconstruction kernels, we recommend the systematic use of the pre- and post-processing approaches (respectively, RKN and ComBat).
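
For context, the reproducibility statistic used in this study, the concordance correlation coefficient, can be computed as in this sketch (Lin's formulation); the two feature vectors are invented and do not come from the phantom dataset.

```python
# Hedged sketch: Lin's concordance correlation coefficient (CCC) between a
# feature measured on two reconstruction kernels. Values are invented.
import numpy as np

def concordance_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov_xy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * cov_xy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

kernel_soft  = [101.2, 98.7, 110.5, 95.3, 120.8]
kernel_sharp = [103.0, 97.1, 114.2, 96.8, 118.0]
print(f"CCC = {concordance_ccc(kernel_soft, kernel_sharp):.3f}")
```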

8. Tripathi, Veenu, and Stefano Caizzone. "Virtual Validation of In-Flight GNSS Signal Reception during Jamming for Aeronautics Applications". Aerospace 11, no. 3 (March 5, 2024): 204. http://dx.doi.org/10.3390/aerospace11030204.

Abstract:
Accurate navigation is a crucial asset for safe aviation operation. The GNSS (Global Navigation Satellite System) is set to play an ever more important role in aviation but needs to cope with the risk of interference, possibly causing signal disruption and loss of navigation capability. It is crucial, therefore, to evaluate the impact of interference events on the GNSS system on board an aircraft, in order to plan countermeasures. This is currently obtained through expensive and time-consuming flight measurement campaigns. This paper shows, on the other hand, a method developed to create a virtual digital twin capable of reconstructing the entire flight scenario (including flight dynamics, actual antenna, and impact of installation on aircraft) and predicting the signal and interference reception at airborne level, with clear benefits in terms of reproducibility and ease of use. Through simulations that incorporate jamming scenarios or any other interference scenarios, the effectiveness of the aircraft’s satellite navigation capability in the real environment can be evaluated, providing valuable insights for informed decision-making and system enhancement. By extension, the method shown can provide the ability to predict real-life outcomes even without the need for actual flight, enabling the analysis of different antenna-aircraft configurations in a specific interference scenario.

9. Santana-Perez, Idafen, and María S. Pérez-Hernández. "Towards Reproducibility in Scientific Workflows: An Infrastructure-Based Approach". Scientific Programming 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/243180.

Abstract:
It is commonly agreed that in silico scientific experiments should be executable and repeatable processes. Most of the current approaches for computational experiment conservation and reproducibility have focused so far on two of the main components of the experiment, namely, data and method. In this paper, we propose a new approach that addresses the third cornerstone of experimental reproducibility: the equipment. This work focuses on the equipment of a computational experiment, that is, the set of software and hardware components that are involved in the execution of a scientific workflow. In order to demonstrate the feasibility of our proposal, we describe a use case scenario on the Text Analytics domain and the application of our approach to it. From the original workflow, we document its execution environment, by means of a set of semantic models and a catalogue of resources, and generate an equivalent infrastructure for reexecuting it.
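
The "equipment" dimension the paper targets can be documented in very simple terms, as in the sketch below, which only snapshots the Python interpreter, platform, and installed package versions to a JSON file; it illustrates the idea of capturing the execution environment and is not the authors' semantic models or resource catalogue.

```python
# Hedged sketch: record the software side of an experiment's "equipment" so the
# execution environment can be compared or re-created later. Output name is arbitrary.
import json
import platform
import sys
from importlib import metadata

snapshot = {
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "packages": sorted(f"{d.metadata['Name']}=={d.version}"
                       for d in metadata.distributions()),
}
with open("environment_snapshot.json", "w") as fh:
    json.dump(snapshot, fh, indent=2)
```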

10. Lee, Daeeop, Giha Lee, Seongwon Kim, and Sungho Jung. "Future Runoff Analysis in the Mekong River Basin under a Climate Change Scenario Using Deep Learning". Water 12, no. 6 (May 29, 2020): 1556. http://dx.doi.org/10.3390/w12061556.

Abstract:
In establishing adequate climate change policies regarding water resource development and management, the most essential step is performing a rainfall-runoff analysis. To this end, although several physical models have been developed and tested in many studies, they require a complex grid-based parameterization that uses climate, topography, land-use, and geology data to simulate spatiotemporal runoff. Furthermore, physical rainfall-runoff models also suffer from uncertainty originating from insufficient data quality and quantity, unreliable parameters, and imperfect model structures. As an alternative, this study proposes a rainfall-runoff analysis system for the Kratie station on the Mekong River mainstream using the long short-term memory (LSTM) model, a data-based black-box method. Future runoff variations were simulated by applying a climate change scenario. To assess the applicability of the LSTM model, its result was compared with a runoff analysis using the Soil and Water Assessment Tool (SWAT) model. The following steps (dataset periods in parentheses) were carried out within the SWAT approach: parameter correction (2000–2005), verification (2006–2007), and prediction (2008–2100), while the LSTM model went through the process of training (1980–2005), verification (2006–2007), and prediction (2008–2100). Globally available data were fed into the algorithms, with the exception of the observed discharge and temperature data, which could not be acquired. The bias-corrected Representative Concentration Pathways (RCPs) 4.5 and 8.5 climate change scenarios were used to predict future runoff. When the reproducibility at the Kratie station for the verification period of the two models (2006–2007) was evaluated, the SWAT model showed a Nash–Sutcliffe efficiency (NSE) value of 0.84, while the LSTM model showed a higher accuracy, NSE = 0.99. The trend analysis of the runoff prediction for the Kratie station over the 2008–2100 period did not show a statistically significant trend for either scenario or model. However, both models found that the annual mean flow rate in the RCP 8.5 scenario showed greater variability than in the RCP 4.5 scenario. These findings confirm that the LSTM runoff prediction presents a higher reproducibility than that of the SWAT model in simulating runoff variation according to time-series changes. Therefore, the LSTM model, which derives relatively accurate results with a small amount of data, is an effective approach to large-scale hydrologic modeling when only runoff time-series are available.
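
As a rough, hedged sketch of the data-driven side of this comparison (not the authors' model, hyperparameters, or data), a minimal LSTM that maps a window of past forcing and runoff values to the next day's runoff could look like this:

```python
# Minimal illustrative sketch only: window length, layer sizes, and the synthetic
# data below are assumptions, not the study's setup.
import numpy as np
import tensorflow as tf

LOOKBACK, N_FEATURES = 30, 3          # e.g. precipitation, temperature, past runoff

def make_windows(series, lookback):
    """Slice a (time, features) array into (samples, lookback, features) windows."""
    X, y = [], []
    for t in range(lookback, len(series)):
        X.append(series[t - lookback:t])
        y.append(series[t, -1])       # predict the last column (runoff)
    return np.asarray(X, dtype="float32"), np.asarray(y, dtype="float32")

# Synthetic placeholder data standing in for a multi-decade daily record.
data = np.random.rand(5000, N_FEATURES).astype("float32")
X, y = make_windows(data, LOOKBACK)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.1)
```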

Theses / dissertations on the topic "Reproducibility of scenario"

1. Casseau, Christophe. "Accompagnement à l’exécution des notebooks Jupyter en milieu éducatif". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0102.

Abstract:
Notebooks have become essential tools in the field of data science. Initiated in the 1980s with software such as Mathematica and inspired by Knuth's concept of literate programming, their popularity was solidified with the Jupyter project in 2014. They have transformed how scientists communicate their ideas by combining executable code from a wide variety of programming languages, visualizations, and textual explanations in a single interactive document. They have also gained popularity in the educational world, for example with the CANDYCE program launched by the French government in 2021. This program encourages the use of the Jupyter environment in teaching digital sciences at all levels, from primary to higher education, by offering educational notebooks that are at the heart of this thesis. In this educational context, despite their undeniable advantages, notebooks also present significant challenges, particularly in terms of reproducibility and execution model. Indeed, educational notebooks embed a pedagogical activity containing textual instructions guiding students through the different tasks to be completed. The teacher then attempts to reproduce the students' results by following a predominantly linear order. The reproducibility of results is a promise of notebooks, but several studies have revealed difficulties in achieving this goal, necessitating the development of approaches to support users in creating reproducible notebooks. Additionally, the flexible execution model of notebooks allows students to execute code cells in a different order than intended by the instructor, potentially leading to errors and/or misleading results. In this thesis, we address these two challenges: the reproducibility of results and the execution of educational notebooks. Our goal is to propose two language-agnostic approaches to assist students i) towards result reproducibility in a top-down linear execution model and ii) in the execution of a notebook containing a scenario, i.e., instructions related to its execution. To tackle these challenges, we have developed tools directly integrated into the JupyterLab environment: NORM and MOON. Through experiments conducted with students from C.P.G.E. and first-year university courses, these tools have demonstrated a significant improvement on both challenges without hindering student learning.
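
A very small illustration of the linear, top-down execution model discussed in the thesis is given below; it only checks whether a notebook's code cells were last executed in increasing order. It is not the NORM or MOON tooling, and the file name is a placeholder.

```python
# Hedged sketch: was the notebook last run strictly top to bottom?
import nbformat

def executed_top_down(path):
    nb = nbformat.read(path, as_version=4)
    counts = [c.execution_count for c in nb.cells
              if c.cell_type == "code" and c.execution_count is not None]
    # In a top-down run the execution counters increase monotonically.
    return all(a < b for a, b in zip(counts, counts[1:]))

print(executed_top_down("student_notebook.ipynb"))   # placeholder file name
```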

2. Baccarin, A. "LA STIFFNESS EPATICA NELLE MALATTIE EPATICHE CRONICHE: NUOVI SCENARI A CONFRONTO". Doctoral thesis, Università degli Studi di Milano, 2016. http://hdl.handle.net/2434/357498.

Abstract:
Background and aim. Liver stiffness (LS) measured by transient elastography (TE) accurately predicts the severity of chronic liver disease (CLD). Point quantification shear-wave elastography (ElastPQ®-pSWE) is a newly developed technique to measure LS, incorporated into a conventional ultrasound system. We evaluated the feasibility, reproducibility and diagnostic accuracy of both techniques in consecutively recruited CLD patients who concomitantly underwent a liver biopsy. Methods. Over a two-year period, 186 CLD patients (116 males, 53 years, 132 viral hepatitis) consecutively underwent ElastPQ®-pSWE (10 valid measurements) blindly performed by two raters, whereas TE was performed by one single operator. Interobserver agreement for ElastPQ®-pSWE was analyzed by the intraclass correlation coefficient (ICC) and correlated with histological liver fibrosis by METAVIR. Main determinants of ElastPQ®-pSWE were investigated by a linear regression model. Results. 372 (100%) reliable measurements were obtained by ElastPQ®-pSWE and 184 by TE (2 failures, 99%). LS was 8.1±4.5 kPa by ElastPQ®-pSWE with the first rater and 8.0±4.2 with the second one vs 8.8±3.6 kPa by TE. Overall, the ElastPQ®-pSWE ICC was 0.89 (95% CI 0.85-0.91) and was not influenced by age, sex, BMI or liver enzymes. However, the ICC increased with time: 0.86 (95% CI 0.81-0.90) in the first year vs 0.92 (95% CI 0.87-0.95) in the second year. Liver fibrosis was the only independent determinant of LS on ElastPQ®-pSWE. AUROCs for diagnosing F≥2, F≥3 and F=4 were 0.77, 0.85 and 0.88 for ElastPQ®-pSWE vs 0.81, 0.88 and 0.94 for TE. However, the ElastPQ®-pSWE AUROCs after one year of training were 0.86, 0.94 and 0.91. Conclusions. ElastPQ®-pSWE reliably and reproducibly evaluates LS, matching TE for accuracy after a learning curve of one year.
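
For context, diagnostic accuracy figures of this kind (AUROCs) can be reproduced from stiffness values and a binary fibrosis label as in the sketch below; the eight data points are invented, not study data.

```python
# Hedged sketch: AUROC for discriminating advanced fibrosis (METAVIR F>=3)
# from liver stiffness measurements. All values are invented.
from sklearn.metrics import roc_auc_score

stiffness_kpa = [4.2, 5.1, 6.0, 7.4, 8.8, 9.5, 11.2, 14.0]
advanced_fibrosis = [0, 0, 0, 0, 1, 1, 1, 1]
print(f"AUROC = {roc_auc_score(advanced_fibrosis, stiffness_kpa):.2f}")
```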

Books on the topic "Reproducibility of scenario"

1. Alger, Bradley E. Defense of the Scientific Hypothesis. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190881481.001.0001.

Abstract:
This book explains and defends the scientific hypothesis. Explanation is needed to counteract the misinformation and misunderstanding about the hypothesis that even scientists have concerning its nature and place in the tapestry of modern science. A survey revealed that most biological scientists receive little or no formal training in scientific thinking. Defense is needed because the hypothesis is under attack by critics who claim it is irrelevant to science. Defense is important, too, because the hypothesis is perhaps the major element in scientific thinking, and familiarity with it is necessary for an understanding of modern science and scientific thinking. The public needs to understand the hypothesis in order to appreciate and evaluate scientific controversies (e.g., global climate change, vaccine safety, etc.). The first chapters thoroughly describe and analyze in elementary terms the scientific hypothesis and examine various kinds of science. Following chapters that review the hypothesis in the context of the Reproducibility Crisis and present survey data, two chapters assess cognitive matters that affect the hypothesis. In a series of chapters, the book makes practical and policy recommendations for teaching and learning about the hypothesis. The final chapter considers two possible futures for the hypothesis in science as the Big Data revolution looms: in one scenario, the hypothesis is displaced by the Big Data Mindset that forgoes understanding in favor of correlation and prediction. In the other, robotic science incorporates the hypothesis into mechanized laboratories guided by artificial intelligence. An epilogue envisions a third way—the Centaur Scientist, a symbiotic relationship of human scientists and computers.

Book chapters on the topic "Reproducibility of scenario"

1. Aliferis, Constantin, and Gyorgy Simon. "Overfitting, Underfitting and General Model Overconfidence and Under-Performance Pitfalls and Best Practices in Machine Learning and AI". In Health Informatics, 477–524. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-39355-6_10.

Abstract:
Avoiding overfitted and underfitted analyses and models (OF, UF) is critical for ensuring as high a generalization performance as possible and is of profound importance for the success of ML/AI modeling. In modern ML/AI practice, OF/UF typically interact with error estimation procedures and model selection, as well as with sampling and reporting biases, and thus need to be considered together in context. The more general situations of overconfidence (OC) about models and/or under-performing (UP) models can occur in many subtle and not so subtle ways, especially in the presence of high-dimensional data, modest or small sample sizes, powerful learners and imperfect data designs. Because over- and under-confidence about models are closely related to model complexity, model selection, error estimation and sampling (as part of data design), we connect these concepts with the material of the chapters "An Appraisal and Operating Characteristics of Major ML Methods Applicable in Healthcare and Health Science," "Data Design," and "Evaluation". These concepts are also closely related to statistical significance and scientific reproducibility. We examine several common scenarios where overconfidence in model performance and/or model under-performance occurs, as well as detailed practices for preventing, testing and correcting them.
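
A compact way to see the overconfidence phenomenon the chapter analyzes is to compare apparent (training) performance with a resampled estimate, as in this sketch; it is a generic illustration on synthetic high-dimensional, small-sample data, not the chapter's own example.

```python
# Hedged sketch: a large gap between training accuracy and cross-validated
# accuracy is a simple symptom of overfitting / model overconfidence.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)   # small sample, high dimension
model = RandomForestClassifier(random_state=0).fit(X, y)
train_acc = model.score(X, y)
cv_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
print(f"train accuracy {train_acc:.2f} vs cross-validated accuracy {cv_acc:.2f}")
```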

2. Schmitt, S., S. Stephan, B. Kirsch, J. C. Aurich, H. M. Urbassek, and H. Hasse. "Molecular Dynamics Simulation of Cutting Processes: The Influence of Cutting Fluids at the Atomistic Scale". In Proceedings of the 3rd Conference on Physical Modeling for Virtual Manufacturing Systems and Processes, 260–80. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-35779-4_14.

Abstract:
Molecular dynamics simulations are an attractive tool for studying the fundamental mechanisms of lubricated machining processes on the atomistic scale, as it is not possible to access the small contact zone experimentally. Molecular dynamics simulations provide direct access to atomistic process properties of the contact zone of machining processes. In this work, lubricated machining processes were investigated, consisting of a workpiece, a tool, and a cutting fluid. The tool was fully immersed in the cutting fluid. Both a simple model system and real substance systems were investigated. Using the simplified and generic model system, the influence of different process parameters and molecular interaction parameters was systematically studied. The real substance systems were used to represent specific real-world scenarios. The simulation results reveal that the fluid mainly influences the starting phase of an atomistic-level cutting process by reducing the coefficient of friction in this phase compared to a dry case. After this starting phase of the lateral movement, the actual contact zone is mostly dry. For high-pressure contacts, a tribofilm is formed between the workpiece and the cutting fluid, i.e. a significant amount of fluid particles is imprinted into the workpiece crystal structure. The presence of a cutting fluid significantly reduces the heat impact on the workpiece. Moreover, the cutting velocity is found to practically not influence the coefficient of friction, but it significantly influences the dissipation and, therefore, the temperature in the contact zone. Finally, the reproducibility of the simulation method was assessed by studying replica sets of simulations of the model system.

3. Bausell, R. Barker. "A (Very) Few Concluding Thoughts". In The Problem with Science, 261–70. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197536537.003.0012.

Abstract:
In this chapter, educational recommendations for future scientists are suggested followed by possible scenarios that may characterize the future of the reproducibility initiatives discussed in previous chapters. One such scenario, while quite pessimistic, is not without historical precedent. Namely, that the entire movement may turn out to be little more than a publishing opportunity for methodologically oriented scientists—soon replaced by something else and forgotten by most—thereby allowing it to be reprised a few decades later under a different name by different academics. Alternately, and more optimistically, the procedural and statistical behaviors discussed here will receive an increased emphasis in the scientific curricula accompanied by a sea change in actual scientific practice and its culture—thereby producing a substantial reduction in the prevalence of avoidable false-positive scientific results. And indeed recent evidence does appear to suggest that the reproducibility initiatives instituted by the dedicated cadre of methodologically oriented scientists chronicled in this book have indeed begun the process of making substantive improvements in the quality and veracity of scientific inquiry itself.

4. Thomas, Robert H., and Rupa Bessant. "History taking". In The Pocketbook for PACES. Oxford University Press, 2012. http://dx.doi.org/10.1093/oso/9780199574186.003.0012.

Abstract:
Welcome to station 2, history taking. We would like to start by saying that having worked very hard to pass the written papers you deserve to be sitting this prestigious exam and PACES is an opportunity to perform as a clinician and formally demonstrate your unique clinical skills that make you an excellent physician. History taking is traditionally an area of the PACES exam that candidates find very difficult. Taking a good history from a patient is the most fundamental skill of a training physician and PACES candidates often feel that their general daily work practice is adequate preparation for this station. This is not the case, despite most candidates being well skilled in clerking a patient from A&E or clinic where few limitations apply. Unfortunately, faced with strict time constraints, exam anxiety and often deliberately unusual scenarios, poorly prepared PACES candidates may demonstrate a fundamental weakness in what should be a basic part of physician training. Do not devalue your chance of success in the PACES examination by poor preparation or anxiety. The practice needed in order to succeed in the history taking station should not be underestimated. As well as running out of time, a candidate’s main concern is the broad range of scenarios that can be chosen as an examination topic and this can easily unsettle and panic even the best prepared. This chapter offers a structured approach that will help to alleviate this fear. The stopwatch begins once a candidate has been handed the scenario (usually in the form of a letter addressed to the candidate from another healthcare practitioner).
• 5 minutes private preparation – candidate then called into the exam room.
• 14 minutes with the patient being observed by examiners.
• 1 minute of personal reflection time.
• 5 minutes for questions from the examiners.
The initial preparation period is of huge importance and when sitting PACES, we would suggest using the structured template in Fig. 4.1 which can be adapted to any scenario given to a PACES candidate in the exam. We would encourage you to memorize this template for reproducibility during this preparation time.

5. Kumar Jain, Sachin, Rakhi Khabiya, Akanksha Dwivedi, Priyanka Soni, and Vishal Soni. "Approaches and Challenges in Developing Quality Control Parameters for Herbal Drugs". In New Avenues in Drug Discovery and Bioactive Natural Products, 54–82. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815136326123020005.

Abstract:
Herbs have been used as medicines since ancient times around the world. In the present scenario, awareness and acceptance of herbal medicines have risen tremendously due to their easy availability and few or no side effects. Unfortunately, due to the lack of stringent regulatory guidelines for herbal drugs, standard quality degradation may be associated with these herbal medicines through either intentional or unintentional adulterations, spurious drugs, the substitution of drugs with other drugs, etc. Hence, it becomes mandatory to control the quality standards of herbal medicines as they are being used for the betterment of human health. Improvements in various domains of herbal medicine have helped developed countries, such as the USA, UK, Australia and European countries, adopt this ancient and enriched medicinal system, leading to the "Herbal Renaissance". Herbal medicines, however, are associated with a number of shortcomings, such as quality assurance, safety, efficacy, purity, lack of appropriate standardization parameters, lack of accepted research methodology and toxicity studies. Despite the availability of numerous traditional quality control methods (e.g., thermal methods, HPTLC, HPLC, SFC) for herbal medicines, owing to these lacunae, there is a need for newer approaches to fostering the quality parameters of herbal drugs. Chromatographic and spectral fingerprinting, DNA fingerprinting and metabolomics can be used as newer approaches to the authentication and standardisation of medicinal botanicals. Currently, computational in-silico techniques for the standardization of phytochemicals are in trend because of advantages such as lower time consumption, speed, and improved efficiency of the entire process with excellent reproducibility.

6. Zeug, Gunter, and Dominik Brunner. "Disaster Management and Virtual Globes". In Geographic Information Systems, 1587–603. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch095.

Abstract:
Today, the added value of geoinformation for crisis management is well known and accepted. However, experience shows that disaster management units at local administrative levels in the developing world often lack the use of Geographic Information Systems for analysing spatial interrelations and making their own maps. Various studies mention the shortage of financial resources, human capacity, and adequate knowledge as reasons for that. In recent years, publicly available virtual globes like Google Earth™, Microsoft® Bing™ Maps 3D or NASA World Wind have enjoyed great popularity. The accessibility of worldwide high resolution satellite data, their intuitive user interface, and the ability to integrate one's own data support this success. In this chapter, the potential of these new geospatial technologies for supporting disaster preparedness and response is demonstrated, using the example of Google Earth™. Possibilities for the integration of data layers from third parties, the digitization of own layers, as well as the analytical capacities are examined. Furthermore, a printing module is presented, which supports the production of paper maps based on data previously collected and edited in Google Earth™. The efficiency of the proposed approach is demonstrated for a disaster management scenario in Legazpi, a Philippine city exposed to several natural hazards due to its vicinity to Mayon volcano and the annually occurring typhoons in the region. With this research, current technological trends in geospatial technologies are taken up and investigated for their potential for professional use. Moreover, it is demonstrated that, by using freely available software, general constraints for using GIS in developing countries can be overcome. Most importantly, the approach presented guarantees low cost for implementation and reproducibility, which is essential for its application in developing countries.

Conference papers on the topic "Reproducibility of scenario"

1. Carvalho, Lucas, Joana Malaverri, and Claudia Medeiros. "Implementing W2Share: Supporting Reproducibility and Quality Assessment in eScience". In XI Brazilian e-Science Workshop. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/bresci.2017.9916.

Abstract:
An open problem in the scientific community is that of supporting reproducibility and quality assessment of scientific experiments. Solutions need to be able to help scientists reproduce experimental procedures in a reliable manner and, at the same time, to provide mechanisms for documenting the experiments to enhance integrity and transparency. Moreover, solutions need to incorporate features that allow the assessment of the procedures, data used and results of those experiments. In this context, we designed W2Share, a framework to meet these requirements. This paper introduces our first implementation of W2Share, which moreover guides scientists in a step-by-step process to ensure reproducibility based on a script-to-workflow conversion strategy. W2Share also incorporates features that allow annotating experiments with quality information. We validate our prototype using a real-world scenario in Bioinformatics.

2. Rudder, Sarah, and Daniel Herber. "Importance of Ontologies for Systems Engineering (SE) and Human Factors Engineering (HFE) Integration". In 6th International Conference on Human Systems Engineering and Design Future Trends and Applications (IHSED 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005544.

Abstract:
Due to the volatility of the sales markets, a pure sales forecast and technology calendar are not sufficient for the development of a product strategy. In a product strategy, far-reaching decisions are made as to which product ideas should be pursued further, considering limited development budgets and available expertise within a company. To this end, the prospects of success of alternative product ideas must be carefully weighed up against each other. The consistency-based scenario technique provides an excellent basis for this. The consistency-based scenario technique is characterised by anticipating contradiction-free - so called consistent - developments of key factors in the entrepreneurial business environment. As nobody knows which developments and disruptive events will actually occur in future, probabilities are not taken into account in the consistency-based scenario technique. Instead, the aim is to anticipate all conceivable future developments so that opportunities can be seized early and challenges are overcome more easily.The success of the consistency-based scenario technique is based not only on expertise in the factors influencing future developments, their interactions and consistencies, but also on the appropriate aggregation of the large number of possible developments - so-called raw scenarios - into holistic alternative visions of the future. Typically, three alternative future scenarios are developed, among those preferably one positive and one negative. This development of alternative visions of the future can be characterized by quality key figures. In this contribution, a new set of key figures is proposed that supports the selection of appropriate future scenarios as a starting point for the selection of promising product ideas. This reduces the scope for interpretation used in generating future scenarios and increases the quality of the resulting scenarios. The basis for discussion is thus objectified. The set of key figures comprises Four quality metrics. Metric one (M1) measures the normal distribution of the raw scenarios calculated by the algorithms. Metric two (M2) requires the fulfilment of a high Consistency sum in order to generate contradiction-free scenarios. Metric three (M3), Heterogeneity, ensures sufficient differentiability of the scenarios. Metric four (M4) measures the calculated Reproducibility of the final result achieved iteratively. The tool should be able to reproduce and create the scenarios at any time using the specific database. The quality metrics are used iteratively to achieve an overarching optimum of focusing, the normal distribution, heterogeneity, consistency and reproducibility. The scenario technique is used in an internationally operating company in the mechanical and plant engineering sector as a supporting method in the area of business analytics for strategy alignment. The proposed results are used to develop highly qualitative scenarios in eight workshops and three iterations with managing directors and to incorporate these into the future direction of the strategy. The metrics presented are used to arrive at reliable future scenarios for the product strategy.
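
To make the consistency-sum idea concrete, the sketch below enumerates raw scenarios from a tiny pairwise consistency matrix and ranks them; the factors, projections, and ratings are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of the consistency-based scenario technique: rank raw scenarios
# by the sum of their pairwise consistency ratings. All inputs are invented.
from itertools import combinations, product

projections = {                      # one list of projections per key factor
    "market":     ["growth", "stagnation"],
    "technology": ["disruptive", "incremental"],
}
# Pairwise consistency ratings (1 = contradiction ... 5 = strong mutual fit).
ratings = {
    frozenset({("market", "growth"),     ("technology", "disruptive")}): 5,
    frozenset({("market", "growth"),     ("technology", "incremental")}): 3,
    frozenset({("market", "stagnation"), ("technology", "disruptive")}): 2,
    frozenset({("market", "stagnation"), ("technology", "incremental")}): 4,
}

def consistency_sum(scenario):
    """Sum of pairwise ratings over all factor pairs of one raw scenario."""
    return sum(ratings[frozenset({a, b})] for a, b in combinations(scenario, 2))

raw_scenarios = [tuple(zip(projections, combo))
                 for combo in product(*projections.values())]
for s in sorted(raw_scenarios, key=consistency_sum, reverse=True):
    print(consistency_sum(s), dict(s))
```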

3. Carvalho, Luiz Paulo, Lucas Murakami, José Antonio Suzano, Jonice Oliveira, Kate Revoredo, and Flávia Maria Santoro. "Ethics: What is the Research Scenario in the Brazilian Conference BRACIS?" In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/eniac.2022.227590.

Abstract:
Artificial Intelligence (AI) presents many ethical dilemmas, such as explainability, bias, military uses, surveillance capitalism, employment, and jobs. In the scientific context, AI can lead us to a crisis of reproducibility spread across several areas of knowledge and guide mathematicians to solve high complexity problems. Both companies and government forward their guidelines, recommendations, and materials combining Ethics and AI. In this paper, we investigate the involvement of the Brazilian academic-scientific community with moral or ethical aspects through its publications, covering the Brazilian Conference on Intelligent Systems (BRACIS) as the most prominent Brazilian AI conference. Through a Literature Systematic Review method, we answer the main research question: what is the panorama of the explicit occurrence of ethical aspects in the BRACIS, ENIAC, and STIL conference papers? The results indicate a low occurrence of ethical aspects and increasing behavior over the years. Ethical deliberation was fruitful, constructive, and critical among these few occurrences. Whether in the Brazilian or international context, there are spaces to be filled and open opportunities for exploration along this path.

4. Pinto, Aline C., Beatriz S. Silva, Priscilla R. M. Carmo, Raphael L. A. Lima, Larisse S. P. Amorim, Rubio T. C. Viana, Daniel H. Dalip, and Poliana A. C. Oliveira. "WebFeatures: A Web Tool to Extract Features from Collaborative Content". In Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/webmedia_estendido.2020.13071.

Abstract:
The production of collaborative web content has grown in recent years. Thus, exploring the quality of these data repositories has also become relevant. This work proposes to develop a tool called WebFeature. Such a system allows one to manage, extract, and share quality-related feature sets from text, graphs and article reviews. To accomplish this, different types of metrics were implemented based on the structure, style, and readability of the texts. In order to evaluate WebFeature's applicability, we presented a scenario with its main functionalities (creation of a feature set, extraction of features from a known dataset, and publishing of the feature set). Our demonstration shows that this framework can be useful for extracting features automatically, supporting quality prediction of collaborative content, analyzing text characterization, and improving research reproducibility.
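
The kind of structure- and readability-oriented features described can be illustrated in a few lines; this is a generic sketch, not the WebFeature implementation or its metric set.

```python
# Hedged sketch: simple structural and readability features from raw text.
import re

def text_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "n_sentences": len(sentences),
        "n_words": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(text_features("Collaborative content varies in quality. Features help predict it."))
```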

5. Fadda, T. "ADS-B driven implementation of an augmented reality airport control tower platform". In Aeronautics and Astronautics. Materials Research Forum LLC, 2023. http://dx.doi.org/10.21741/9781644902813-164.

Abstract:
This paper describes a real-world implementation of the solutions developed within the SESAR DTT Solution 97.1-EXE-002 project, which tested in a simulated scenario the use of Augmented Reality (AR) to assist airport control tower operators (TWR). Following a user-centred design methodology, the requirements of a real-world live AR platform are joined with design concepts validated in previous projects, namely the Tracking Labels, the weather interface, and a low-visibility overlay, all used to increase TWR situational awareness, performance and reactivity while reducing workload. The designed AR platform performs the live tracking and visualization of real aircraft and surveillance information in the airport traffic zone. It is based on three key processes: the transmission of an ADS-B data flow to a Microsoft™ HoloLens 2, the registration process of the AR platform, and the rendering of a real-time tracking system and other surveillance overlays. The concept was first validated with the help of a TWR, preceding a technical validation to ensure the repeatability and reproducibility of the results. The results allow for defining new guidelines for deployment in a control tower environment.

6. Pezzola, Marco Ezio, Elisabetta Leo, Niccolò Taroni, Simone Calamari, and Federico Cheli. "Innovative whole Vehicle-In the-Loop approach for Advanced Rider Assistance Systems calibration and verification: application to self-braking Adaptive Cruise Control". In The Evolving Scholar - BMD 2023, 5th Edition, 2023. http://dx.doi.org/10.59490/6491cea96173c4e52306e06f.

Abstract:
On board motorcycles’ control logics are arising in number and complexity. It follows how on road testing, for correct response verification, becomes danger and time consuming. E.g. the mandatory ABS system, that shall be tested on both the high and the low road friction with installation of outriggers to prevent falling (UNECE Reg. No. 78). The testing complexity induces the acceptance criteria to be mostly limited to the subjective feeling of the tester. With the more challenging Cornering-ABS it is almost not possible to test due to limited availability of steering lanes with dedicated road frictions, where calibration and verification can be successfully accomplished. It is becoming harder and harder to objectify systems’ performances with reproducible and repeatable metric. This becomes even harder when dealing with several interacting control logics. The latest Advanced Rider Assistance System (ARAS)s, for example, such as the self-braking, radar-based Adaptive Cruise Control (ACC): there is a lack of real scenarios on which to execute calibration and verification tests and, even when available, the dangerousness in execution increases, and the subjective final assessment falters, bringing the rider’s psychophysical capabilities to the limits (N.Valsecchi, 2020). With multiple demanding control logics’ performances and state dependencies, the needed time for on-road calibration amplifies in duration and mileage; moreover, weather conditions and proving ground availability may frustrate the calibration and verification results. The possibility of doing most of the work in-door, in safe and repeatable conditions, despite adverse weather, already exists exploiting the HIL approach, that nowadays is becoming more and more popular. But still some limitations occur, when attempting to make multiple control logics working together, with real systems in the loop operating as on the real riding conditions. It may result more efficient to have the human in the loop able to ride the real vehicle, fully connected to a real time (RT) computer, reproducing road scenarios in the simulation environment. Human reactions and scenarios-dependent behavior can be realistically reproduced. Driven by the above motivations, the innovative idea to connect the whole motorbike in the RT simulations loop enables this investigation capability and allows to reduce the on-road riding risks to properly verify the behavior of all the involved systems operating together and include the real human response as well. The whole Vehicle-In the-Loop proposed in this work allows testing in full safe, manned or unmanned riding modes, through the exploitation of the automation suite, decoupling systems complexity and, finally, executing hard to replicate on-road scenarios otherwise. The final scope of the work is to stress the ARAS ACC implemented on a state-of-the art motorcycle, investigating the performances of the system when running the typical public road scenarios. To achieve the scope, the target motorbike has been fully connected to the Real Time PC, cheating the ECUs, now fed by simulated PWM wheels speed encoders and IMU signals computed in the virtual environment. The original radar has been by-passed and object-injection has been established in order to make the system believe the existence of traffic. The High-Fidelity vehicle model has been implemented, including proper tires models characterized on the target surfaces (D.Vivenzi, 2019)(E. Leo et Al. 2019). 
Parametrizable use cases have been implemented, enabling to test on-road realistic critical situations with moving traffic objects. More in details, a forward car driving at constant speed has been implemented as traffic object; the ego motorcycle, riding at higher speed, approaches the forward car. Different scenarios have been then analyzed. Scenario#1_the platooning: the logic capability to detect the forward vehicle, compute the time-to-collision and decelerate the ego vehicle till the platooning condition; repetitions in different ACC user-modes (e.g. very-short, medium, very long relative distance); scenario#2_the emergency brake: the real rider operates the throttle, reducing the safety relative distance; the self-braking logic activates only while releasing of the throttle to re-establish the safety distance, avoiding the front collision (if/when possible); emergency signs shall promptly alert the rider; scenario#3_the µ-drop: while self-braking to achieve the platooning condition, the road friction µ drops while braking, at a given speed, activating the ABS logic; the two logics interaction is observed and the implemented hierarchical criteria are chosen and verified (e.g. ACC promptly disconnection). Despite the scenario variability and complexity, both the repeatability and reproducibility in testing execution have been guaranteed, maintaining the same boundaries conditions (initial conditions, environmental conditions, road conditions, unevenness and tire-to-surface friction characteristics), allowing selective sensitivity execution and control logics parameters setting. Between the most relevant results, the possibility to tune the brake pressures, front and rear, in order to achieve the target vehicle deceleration and the relative distance and velocity with respect to the vehicle in front; the verification of the vehicle response against expectations; the interaction between ACC and ABS. Finally, the possibility to monitor how the logic behaves in case of functional faults (BS ISO 26262, ed. 2020). The separation of the effects allows to simplify and speed the analysis, confirming the effectiveness in performances improvement.
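
As a back-of-the-envelope companion to the platooning and emergency-brake scenarios, the sketch below computes time-to-collision and the constant deceleration needed to match the lead vehicle's speed at a chosen gap; it is simple point-mass kinematics with invented numbers, not the ACC logic under test.

```python
# Hedged sketch: basic longitudinal kinematics behind an ACC-style scenario.
def time_to_collision(gap_m, ego_speed_ms, lead_speed_ms):
    closing = ego_speed_ms - lead_speed_ms
    return gap_m / closing if closing > 0 else float("inf")

def required_deceleration(gap_m, target_gap_m, ego_speed_ms, lead_speed_ms):
    """Constant deceleration that brings the ego to the lead speed exactly at the target gap."""
    closing = ego_speed_ms - lead_speed_ms
    usable_gap = gap_m - target_gap_m
    return (closing ** 2) / (2 * usable_gap) if closing > 0 and usable_gap > 0 else 0.0

print(time_to_collision(40.0, 30.0, 20.0))            # seconds until contact
print(required_deceleration(40.0, 15.0, 30.0, 20.0))  # m/s^2 to restore the gap
```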
ABNT, Harvard, Vancouver, APA, etc. styles
7

Cornwall, Rachelle Christine, Daniel Dima Shkorin, Rodrigo Alberto Guzman, Jalal Rojdi El-Majzoub, Mahrous Sadek El-Sedawy, Giftan Febrian Pribadi, Yunhui Deng, and Mohammed Abdulrahman Alhur. "Unlocking Opportunities for Gas Lift Well Surveillance - Building the Framework for Consolidated Data Capture and Processing". In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/208003-ms.

Full text of the source
Abstract:
Abstract Gas lift operations are highly dependent on data quality and team competence to operate the asset efficiently. Traditional methods for gas lift well surveillance and diagnostics rely on wireline services, a method with growing constraints in adapting to constantly evolving well and operational challenges. The Well Intervention-less Tracer Surveillance System (WITSS) provides a cost-effective, comprehensive approach to well surveillance without relying on tools entering the well, resulting in reduced HSE risks and no associated deferred production. This paper describes a pilot implementation to evaluate the adequacy and accuracy of this technology in the context of ADNOC Onshore gas lift producers. The objective is to evaluate its performance against conventional-method data sets and to assess the reproducibility of data where no reference existed. The 10-well pilot included both accessible and obstructed wells. Data from the custom-designed modular portable kit used for executing the surveillance activities were analyzed and compared against conventional flowing gradient surveys, with full data consumption in well models for comprehensive nodal analysis and opportunity identification. For this pilot, ten wells were surveyed twice using the WITSS method. Results were compared to traditional methods acquired through wireline surveys for accessible wells, and against established multi-phase flow correlations for obstructed wells. The pilot confirmed that the WITSS method is as accurate as conventional gauge measurements in mapping pressure and temperature profiles in gas-lifted wells. The WITSS method provided additional insight into gas consumption, based on the assessment of total gas lift utilization per well, and allowed comprehensive model calibration and well performance definition. It also identified potential integrity issues by detecting primary injection at the designed stations and secondary, unwanted injection sites. Continuous compositional analysis of both injected and produced gas streams provided additional verification for analyzing gas lift injection performance. It also highlighted a change in fluid composition, opening discussions for a material selection review of the assets. Production uplift identified in 50% of the wells was compliant with the reservoir management strategy. The value proposals of flow stabilization through gas lift valve re-calibrations and replacements, adjustment of injection flow rate and further controls on injection pressure management are in the process of being implemented. A full-field scale-up scenario is under preparation.
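As a hypothetical illustration of how a tracer-based pressure profile could be compared against a wireline flowing-gradient survey, the sketch below interpolates both profiles onto a common depth grid and reports the mean absolute deviation; the depth and pressure values are invented and are not results from the pilot.

```python
# Hedged sketch (illustrative data, not the pilot's results): compare a tracer-based
# pressure profile against a wireline flowing-gradient survey on a common depth grid.
import numpy as np

# depth [m] vs. flowing pressure [bar], invented example values
wireline = np.array([[500, 35.0], [1000, 62.0], [1500, 91.0], [2000, 121.0]])
tracer_survey = np.array([[600, 40.5], [1100, 67.8], [1600, 97.1], [1900, 115.4]])

def profile_deviation(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Mean absolute pressure deviation [bar] over the overlapping depth range."""
    lo = max(reference[:, 0].min(), candidate[:, 0].min())
    hi = min(reference[:, 0].max(), candidate[:, 0].max())
    grid = np.linspace(lo, hi, 50)
    ref_p = np.interp(grid, reference[:, 0], reference[:, 1])
    cand_p = np.interp(grid, candidate[:, 0], candidate[:, 1])
    return float(np.mean(np.abs(ref_p - cand_p)))

print(f"mean deviation: {profile_deviation(wireline, tracer_survey):.2f} bar")
```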
ABNT, Harvard, Vancouver, APA, etc. styles
8

Wehner, William, Menno Lauret, Eugenio Schuster, John R. Ferron, Chris Holcomb, Tim C. Luce, David A. Humphreys, Michael L. Walker, Ben G. Penaflor, and Robert D. Johnson. "Predictive control of the tokamak q profile to facilitate reproducibility of high-qmin steady-state scenarios at DIII-D". In 2016 IEEE Conference on Control Applications (CCA). IEEE, 2016. http://dx.doi.org/10.1109/cca.2016.7587900.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
9

Evangelista, Lucas Gabriel Coimbra, and Elloá B. Guedes. "Computer-Aided Tuberculosis Detection from Chest X-Ray Images with Convolutional Neural Networks". In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4444.

Full text of the source
Abstract:
Diagnosing tuberculosis is crucial for proper treatment, since it is one of the top 10 causes of death worldwide. Considering a computer-aided approach based on intelligent pattern recognition on chest X-rays with Convolutional Neural Networks, this work presents the proposition, training and test results of 9 different architectures to address this task, as well as two ensembles. The highest performance verified reaches an accuracy of 88.76%, surpassing human experts on similar data, as previously reported in the literature. The experimental data come from public medical datasets and comprise real-world examples from patients of different ages and physical characteristics, which favours reproducibility and application in practical scenarios.
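For illustration only, a minimal convolutional baseline for binary chest X-ray classification is sketched below in Keras; it is not one of the nine architectures evaluated in the paper, and the input size and hyperparameters are assumptions.

```python
# Minimal CNN baseline for binary (TB / non-TB) chest X-ray classification.
# Illustrative only: input size, depth and hyperparameters are assumed, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers

def build_baseline(input_shape=(256, 256, 1)) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # probability of tuberculosis
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_baseline()
model.summary()
```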
ABNT, Harvard, Vancouver, APA, etc. styles
10

Cortès, Sven, Christian Dettmann, Philipp Boche, Niclas Schneider, and Alexander Heubuch. "Impact of control types of a chassis dynamometer on the reproduction of real world driving scenarios". In FISITA World Congress 2021. FISITA, 2021. http://dx.doi.org/10.46720/f2021-epv-079.

Full text of the source
Abstract:
The introduction of the Worldwide Harmonized Light Vehicle Test Procedure (WLTP) and Real Driving Emissions (RDE) test requirements for the certification of passenger cars requires mobile emission measurements during real driving cycles in addition to the common emission tests on the chassis dynamometer. Due to the randomness of traffic and environmental conditions, it is not possible to repeat a real driving cycle with the same results in order to investigate issues such as the application of engine and transmission control units. This represents a big challenge for the manufacturers' research and development departments. It is necessary to know the relevant influences from the real operation of the vehicle in order to reproduce real driving cycles on a test bench. The selection of an adequate validation environment is based on defined target values with which the requirements for accuracy and reproducibility of the test bench can be evaluated. In the classic load simulation of real driving on a chassis dynamometer, the driving resistance coefficients are determined in coastdown tests and then adjusted on the chassis dynamometer. Subsequently, the height profile must be mapped to account for the track topology. Determining the elevation profile with the necessary accuracy via barometric altitude, GPS measurement technology or map material is very complex. An alternative to this procedure is described by the control modes v-alpha, n-alpha, F-v and F-n. For example, in v-alpha control mode the roller is controlled to a defined vehicle speed, independent of the currently applied traction force. At the same time, the accelerator pedal angle alpha of the test object is adjusted to the same speed. Since both control systems follow time-based setpoint values and no driving resistance simulation runs in parallel, the v-alpha control mode does not depend on knowledge of the driving resistance coefficients. The same applies to the other control modes mentioned above. These control modes are particularly suitable for the improvement of data states in which the driving performance remains unaffected, for example for the application of emission behaviour. This paper deals with the necessary extensions to chassis dynamometers, as the selected validation environment, to be able to use these control modes, and classifies them within the IPEK XiL framework. The measured variables needed to compare the control modes with each other and with the classic load simulation are recorded in a specially constructed test vehicle, both in real driving and in test bench operation. The results for speed, load and tractive force progression, as well as the energy flow within the vehicle, are then examined with regard to equality, accuracy and reproducibility.
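To make the contrast between the classic load simulation and the v-alpha control mode concrete, the sketch below sets the two setpoint computations side by side; the coastdown coefficients, vehicle mass, grade and speeds are assumed illustrative values, not measurements from the paper.

```python
# Hedged sketch (assumed coefficients): classic road-load simulation computes the
# roller force from coastdown coefficients and grade, while v-alpha mode simply
# tracks a time-based speed setpoint and needs no road-load coefficients.
G = 9.81          # gravitational acceleration [m/s^2]

def road_load_force(v: float, f0=120.0, f1=0.8, f2=0.035,
                    mass=1500.0, grade=0.02) -> float:
    """Classic load simulation: resistance force [N] at speed v [m/s] on a given grade."""
    return f0 + f1 * v + f2 * v**2 + mass * G * grade

def v_alpha_speed_error(v_actual: float, v_setpoint: float) -> float:
    """v-alpha mode: the roller controller only tracks the speed setpoint;
    a second loop adjusts the accelerator pedal angle alpha toward the same target."""
    return v_setpoint - v_actual

v = 100 / 3.6  # 100 km/h in m/s
print(f"road-load force: {road_load_force(v):.0f} N")
print(f"speed error to correct: {v_alpha_speed_error(v, 27.0):.2f} m/s")
```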
ABNT, Harvard, Vancouver, APA, etc. styles
