Journal articles on the topic 'Weakly calibrated'

Consult the top 50 journal articles for your research on the topic 'Weakly calibrated.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1. Li, Guo-dong, Guo-hui Tian, Hong-jun Wang, and Jian-qin Yin. "Euclidean epipolar rectification frame of weakly calibrated stereo pairs." Optics and Precision Engineering 22, no. 7 (2014): 1955–61. http://dx.doi.org/10.3788/ope.20142207.1955.

2. Jarusirisawad, Songkran, Takahide Hosokawa, and Hideo Saito. "Diminished reality using plane-sweep algorithm with weakly-calibrated cameras." Progress in Informatics, no. 7 (March 2010): 11. http://dx.doi.org/10.2201/niipi.2010.7.3.

3. Robert, L., and O. D. Faugeras. "Relative 3D positioning and 3D convex hull computation from a weakly calibrated stereo pair." Image and Vision Computing 13, no. 3 (April 1995): 189–96. http://dx.doi.org/10.1016/0262-8856(95)90839-z.

4. Chen, Shengyong, and Y. F. Li. "Finding Optimal Focusing Distance and Edge Blur Distribution for Weakly Calibrated 3-D Vision." IEEE Transactions on Industrial Informatics 9, no. 3 (August 2013): 1680–87. http://dx.doi.org/10.1109/tii.2012.2221471.

5. Ralis, S. J., B. Vikramaditya, and B. J. Nelson. "Micropositioning of a weakly calibrated microassembly system using coarse-to-fine visual servoing strategies." IEEE Transactions on Electronics Packaging Manufacturing 23, no. 2 (April 2000): 123–31. http://dx.doi.org/10.1109/6104.846935.

6. Bian, Houqin, and Jianbo Su. "Feature matching based on geometric constraints in weakly calibrated stereo views of curved scenes." Journal of Systems Engineering and Electronics 19, no. 3 (June 2008): 562–70. http://dx.doi.org/10.1016/s1004-4132(08)60121-8.

7. Andreassen, L. M., M. Huss, K. Melvold, H. Elvehøy, and S. H. Winsvold. "Ice thickness measurements and volume estimates for glaciers in Norway." Journal of Glaciology 61, no. 228 (2015): 763–75. http://dx.doi.org/10.3189/2015jog14j161.

Abstract:
Glacier volume and ice thickness distribution are important variables for water resource management in Norway and the assessment of future glacier changes. We present a detailed assessment of thickness distribution and total glacier volume for mainland Norway based on data and modelling. Glacier outlines from a Landsat-derived inventory from 1999 to 2006 covering an area of 2692 ± 81 km² were used as input. We compiled a rich set of ice thickness observations collected over the past 30 years. Altogether, interpolated ice thickness measurements were available for 870 km² (32%) of the current glacier area of Norway, with a total ice volume of 134 ± 23 km³. Results indicate that mean ice thickness is similar for all larger ice caps, and weakly correlates with their total area. Ice thickness data were used to calibrate a physically based distributed model for estimating the ice thickness of unmeasured glaciers. The results were also used to calibrate volume–area scaling relations. The calibrated total volume estimates for all Norwegian glaciers ranged from 257 to 300 km³.
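The volume–area scaling mentioned above is commonly written V = c·A^γ. As a purely illustrative sketch of such a calibration (the data and fitted values below are invented, not the paper's):

```python
# Hypothetical sketch: fitting a volume-area scaling relation V = c * A**gamma
# to measured glacier areas and volumes by least squares in log-log space.
import numpy as np

def fit_volume_area(areas_km2, volumes_km3):
    gamma, log_c = np.polyfit(np.log(areas_km2), np.log(volumes_km3), 1)
    return np.exp(log_c), gamma

# Made-up calibration data (illustrative only):
areas = np.array([1.2, 5.0, 36.0, 205.0])      # km^2
volumes = np.array([0.04, 0.30, 3.10, 30.0])   # km^3
c, gamma = fit_volume_area(areas, volumes)
print(f"c = {c:.4f}, gamma = {gamma:.2f}")
print("predicted volume of a 10 km^2 glacier:", c * 10.0 ** gamma, "km^3")
```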
8. Ma, Meiyi, John Stankovic, Ezio Bartocci, and Lu Feng. "Predictive Monitoring with Logic-Calibrated Uncertainty for Cyber-Physical Systems." ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–25. http://dx.doi.org/10.1145/3477032.

Abstract:
Predictive monitoring—making predictions about future states and monitoring if the predicted states satisfy requirements—offers a promising paradigm in supporting the decision making of Cyber-Physical Systems (CPS). Existing works of predictive monitoring mostly focus on monitoring individual predictions rather than sequential predictions. We develop a novel approach for monitoring sequential predictions generated from Bayesian Recurrent Neural Networks (RNNs) that can capture the inherent uncertainty in CPS, drawing on insights from our study of real-world CPS datasets. We propose a new logic named Signal Temporal Logic with Uncertainty (STL-U) to monitor a flowpipe containing an infinite set of uncertain sequences predicted by Bayesian RNNs. We define STL-U strong and weak satisfaction semantics based on whether all or some sequences contained in a flowpipe satisfy the requirement. We also develop methods to compute the range of confidence levels under which a flowpipe is guaranteed to strongly (weakly) satisfy an STL-U formula. Furthermore, we develop novel criteria that leverage STL-U monitoring results to calibrate the uncertainty estimation in Bayesian RNNs. Finally, we evaluate the proposed approach via experiments with real-world CPS datasets and a simulated smart city case study, which show very encouraging results, with the STL-U based predictive monitoring approach outperforming baselines.
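As a reading aid only: for the simplest requirement "always x ≥ threshold" and a flowpipe represented as per-step confidence intervals, the strong/weak semantics described above reduce to envelope checks. This is my own minimal sketch, not the authors' STL-U implementation:

```python
# Flowpipe as per-step confidence intervals [lo_t, hi_t] from a Bayesian RNN.

def strong_satisfies(flowpipe, threshold):
    # Strong satisfaction: every sequence inside the flowpipe meets the bound,
    # i.e. even the lower envelope stays above the threshold.
    return all(lo >= threshold for lo, _ in flowpipe)

def weak_satisfies(flowpipe, threshold):
    # Weak satisfaction: some sequence inside the flowpipe meets the bound,
    # i.e. the upper envelope stays above the threshold.
    return all(hi >= threshold for _, hi in flowpipe)

flowpipe = [(0.8, 1.4), (0.9, 1.6), (0.7, 1.2)]
print(strong_satisfies(flowpipe, 0.75))  # False: step 3 may dip to 0.70
print(weak_satisfies(flowpipe, 0.75))    # True: some trajectory stays above
```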
9. Liu, Yuzhuo, Hangting Chen, Jian Wang, Pei Wang, and Pengyuan Zhang. "Confidence Learning for Semi-Supervised Acoustic Event Detection." Applied Sciences 11, no. 18 (September 15, 2021): 8581. http://dx.doi.org/10.3390/app11188581.

Abstract:
In recent years, the involvement of synthetic strongly labeled data, weakly labeled data, and unlabeled data has drawn much research attention in semi-supervised acoustic event detection (SAED). The classic self-training method carries out predictions for unlabeled data and then selects predictions with high probabilities as pseudo-labels for retraining. Such models have shown their effectiveness in SAED. However, probabilities are poorly calibrated confidence estimates, and samples with low probabilities are ignored. Hence, we introduce a confidence-based semi-supervised acoustic event detection (C-SAED) framework. The C-SAED method learns confidence deliberately and retrains all data distinctly by applying confidence as weights. Additionally, we apply a power pooling function whose coefficient can be trained automatically and use weakly labeled data more efficiently. The experimental results demonstrate that the generated confidence is proportional to the accuracy of the predictions. Our C-SAED framework achieves a relative error rate reduction of 34% in contrast to the baseline model.
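A power pooling function of the kind mentioned (one whose exponent can be trained) is often written as a weighted frame aggregation; the exact form below is an assumption for illustration, not necessarily the paper's:

```python
import numpy as np

def power_pool(frame_probs, n=2.0):
    # Aggregate frame-level event probabilities into one clip-level score.
    # n = 0 reduces to mean pooling; large n approaches max pooling.
    x = np.asarray(frame_probs, dtype=float)
    return float((x ** (n + 1)).sum() / (x ** n).sum())

frames = [0.10, 0.20, 0.90, 0.15]
for n in (0.0, 2.0, 10.0):
    print(n, round(power_pool(frames, n), 3))
```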
10. Papachristou, Christos, and Anastasios N. Delopoulos. "A method for the evaluation of projective geometric consistency in weakly calibrated stereo with application to point matching." Computer Vision and Image Understanding 119 (February 2014): 81–101. http://dx.doi.org/10.1016/j.cviu.2013.12.004.

11. Sakai, Takahiro, and L. G. Redekopp. "A parametric study of the generation and degeneration of wind-forced long internal waves in narrow lakes." Journal of Fluid Mechanics 645 (February 4, 2010): 315–44. http://dx.doi.org/10.1017/s0022112009992746.

Abstract:
The generation and energy downscaling of wind-forced long internal waves in strongly stratified small-to-medium sized narrow lakes are studied. A two-layer nonlinear model with forcing and damping is employed. Even though the wave field is fundamentally bidirectional in nature, a domain folding technique is employed to simulate the leading-order internal wave field in terms of a weakly nonlinear, weakly dispersive model equation of Korteweg–de Vries type. Parametric effects of wind forcing and environmental conditions, including variable topography and variable basin width, are examined. Energy downscaling from basin-scale waves to shorter scales is quantified by means of a time evolution of the wave energy spectra. It is demonstrated that an internal wave resonance is possible when repetitive wind-forcing events arise with a frequency near the linear seiche frequency. An attempt is made to apply the model to describe the shoaling of long waves on sloping endwall boundaries. Modelling of the energy loss and energy reflection during a shoaling event is calibrated against laboratory experiments.
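The "weakly nonlinear, weakly dispersive model equation of Korteweg–de Vries type" presumably has the generic forced-damped form below; the coefficients and the forcing/damping terms are schematic assumptions on my part, not the paper's exact equation:

```latex
% Generic forced-damped KdV-type evolution equation (schematic):
\begin{equation}
  \eta_t + c_0\,\eta_x + \alpha\,\eta\,\eta_x + \beta\,\eta_{xxx}
    = F(x,t) - \gamma\,\eta
\end{equation}
% \eta(x,t): interface displacement;  c_0: linear long-wave speed;
% \alpha, \beta: nonlinear and dispersive coefficients set by the stratification;
% F(x,t): wind forcing;  \gamma: damping coefficient.
```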
12. Grishnyaev, Evgeny, Aleksandr Dolgov, and Sergey Polosatkin. "The Computer Program for Statistical Modelling of Fast Neutron Scattering in a Cryogenic Detector of Weakly Interacting Particles." Siberian Journal of Physics 8, no. 3 (October 1, 2013): 39–46. http://dx.doi.org/10.54362/1818-7919-2013-8-3-39-46.

Abstract:
The paper describes the «Scattronix» code, designed for modeling the spectra of recoil nuclei in a cryogenic avalanche dark matter detector calibrated with a monoenergetic neutron flux. «Scattronix» performs direct Monte Carlo modeling of fast-neutron transport in the active medium of the detector. The features of the problem being solved (rare collisions, dominance of elastic scattering) allow a significant simplification of the code structure and a consequent performance improvement in comparison with conventional codes for neutron-media interaction modeling. The physical basics of neutron scattering on 40Ar nuclei are briefly considered, the algorithm implemented in the code is described, and an example of modeled spectra is given and compared with an analytical estimate of the spectral line width.
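The elastic-scattering kinematics underlying such a code fit in a few lines. A toy sketch (assuming isotropic center-of-mass scattering and an illustrative neutron energy; the real code uses measured cross sections and angular distributions):

```python
import random

A = 40          # mass number of the 40Ar target nucleus
E_N = 2.45e6    # incident neutron energy in eV (illustrative value)

def sample_recoil_energy(e_n, a=A):
    # Recoil energy for elastic scattering:
    # E_r = E_n * 2A * (1 - cos theta_cm) / (1 + A)^2
    cos_cm = random.uniform(-1.0, 1.0)   # isotropic in the CM frame
    return e_n * 2.0 * a * (1.0 - cos_cm) / (1.0 + a) ** 2

recoils = [sample_recoil_energy(E_N) for _ in range(100_000)]
print("max recoil (keV):", max(recoils) / 1e3)  # ~4A/(1+A)^2 * E_n, i.e. ~233 keV
```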
13. Zhao, Xiao Fan, Andreas Wimmer, and Michael F. Zaeh. "Experimental and simulative investigation of welding sequences on thermally induced distortions in wire arc additive manufacturing." Rapid Prototyping Journal 29, no. 11 (March 15, 2023): 53–63. http://dx.doi.org/10.1108/rpj-07-2022-0244.

Abstract:
Purpose The purpose of this paper is to demonstrate the impact of the welding sequence on the substrate plate distortion during the wire and arc additive manufacturing (WAAM) process. This paper also aims to show the capability of finite element simulations in the prediction of those thermally induced distortions. Design/methodology/approach An experiment was conducted in which solid aluminum blocks were manufactured using two different welding sequences. The distortion of the substrates was measured at predefined positions and converted into bending and torsion values. Subsequently, a weakly coupled thermo-mechanical finite element model was created using the Abaqus simulation software. The model was calibrated and validated with data gathered from the experiments. Findings The results of this paper showed that the welding sequence of a part significantly affects the formation of thermally induced distortions of the final part. The calibrated simulation model was able to capture the different distortion behavior attributed to the welding sequences. Originality/value Within this work, a simulation model was developed capable of predicting the distortion of WAAM parts in advance. The findings of this paper can be used to improve the design of WAAM welding sequences while avoiding high experimental efforts.
14. Chen, Xipeng, Pengxu Wei, and Liang Lin. "Deductive Learning for Weakly-Supervised 3D Human Pose Estimation via Uncalibrated Cameras." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1089–96. http://dx.doi.org/10.1609/aaai.v35i2.16194.

Abstract:
Without prohibitive and laborious 3D annotations, weakly-supervised 3D human pose methods mainly employ the model regularization with geometric projection consistency or geometry estimation from multi-view images. Nevertheless, those approaches explicitly need known parameters of calibrated cameras, exhibiting a limited model generalization in various realistic scenarios. To mitigate this issue, in this paper, we propose a Deductive Weakly-Supervised Learning (DWSL) for 3D human pose machine. Our DWSL firstly learns latent representations on depth and camera pose for 3D pose reconstruction. Since weak supervision usually causes ill-conditioned learning or inferior estimation, our DWSL introduces deductive reasoning to make an inference for the human pose from a view to another and develops a reconstruction loss to demonstrate what the model learns and infers is reliable. This learning by deduction strategy employs the view-transform demonstration and structural rules derived from depth, geometry and angle constraints, which improves the reliability of the model training with weak supervision. On three 3D human pose benchmarks, we conduct extensive experiments to evaluate our proposed method, which achieves superior performance in comparison with state-of-the-art weak-supervised methods. Particularly, our model shows an appealing potential for learning from 2D data captured in dynamic outdoor scenes, which demonstrates promising robustness and generalization in realistic scenarios. Our code is publicly available at https://github.com/Xipeng-Chen/DWSL-3D-pose.
15. Veselý, Jakub, and Vít Šmilauer. "Hygro-mechanical model for concrete pavement with long-term drying analysis." Acta Polytechnica CTU Proceedings 40 (July 24, 2023): 104–10. http://dx.doi.org/10.14311/app.2023.40.0104.

Abstract:
Concrete pavements are subjected to the combination of moisture transport, heat transport and traffic loading. A hygro-mechanical 3D finite element model was created in OOFEM software in order to analyse the stress field and deformed shape from a long-term non-uniform drying. The model uses a staggered approach, solving moisture transfer weakly coupled with MPS viscoelastic model for ageing concrete creep and shrinkage. Moisture transport and mechanical sub-models are calibrated with lab experiments, long-term monitoring on D1 highway and data from 40 year old highway pavement. The slab geometry is 3.5×5.0×0.29 m, resting on elastic Winkler-Pasternak foundation. The validation covers autogenous and drying strain on the slab. The models predict drying-induced tensile stress up to 3.3 MPa, inducing additional loading on the slab, uncaptured by current design methods.
16. Martin, P., and J. Belley. "O/H Abundances in the Ringed Galaxy NGC 4736: Mixing Processes in the Interstellar Medium." International Astronomical Union Colloquium 157 (1996): 111–13. http://dx.doi.org/10.1017/s0252921100049563.

Abstract:
Imaging spectrophotometry in the main nebular lines has been performed on 65 H II regions in the ringed galaxy NGC 4736. O/H abundances were derived using the line ratios [O III]/Hβ and [N II]/[O III] calibrated by Edmunds & Pagel (1984). We show that the O/H scatter in the resonance ring of star forming regions is small, no greater than normally expected in the well-mixed ISM of disks of gas-rich galaxies. The global O/H gradient (−0.046 dex/kpc) in the disk of NGC 4736 is shallower than gradients of normal spirals but comparable to gradients observed in weakly barred spirals. This last result could indicate that radial mixing is or was present in NGC 4736. The oval distortion in the central regions can be responsible for this homogenization but it is also possible that a strong bar was present in the past.
17. Cha, Junuk, Muhammad Saqlain, Changhwa Lee, Seongyeong Lee, Seungeun Lee, Donguk Kim, Won-Hee Park, and Seungryul Baek. "Towards Single 2D Image-Level Self-Supervision for 3D Human Pose and Shape Estimation." Applied Sciences 11, no. 20 (October 18, 2021): 9724. http://dx.doi.org/10.3390/app11209724.

Abstract:
Three-dimensional human pose and shape estimation is an important problem in the computer vision community, with numerous applications such as augmented reality, virtual reality, human computer interaction, and so on. However, training accurate 3D human pose and shape estimators based on deep learning approaches requires a large number of images and corresponding 3D ground-truth pose pairs, which are costly to collect. To relieve this constraint, various types of weakly or self-supervised pose estimation approaches have been proposed. Nevertheless, these methods still involve supervision signals, which require effort to collect, such as unpaired large-scale 3D ground truth data, a small subset of 3D labeled data, video priors, and so on. Often, they require installing equipment such as a calibrated multi-camera system to acquire strong multi-view priors. In this paper, we propose a self-supervised learning framework for 3D human pose and shape estimation that does not require other forms of supervision signals while using only single 2D images. Our framework inputs single 2D images, estimates human 3D meshes in the intermediate layers, and is trained to solve four types of self-supervision tasks (i.e., three image manipulation tasks and one neural rendering task) whose ground-truths are all based on the single 2D images themselves. Through experiments, we demonstrate the effectiveness of our approach on 3D human pose benchmark datasets (i.e., Human3.6M, 3DPW, and LSP), where we present the new state-of-the-art among weakly/self-supervised methods.
18. Šutalo, Ilija D., Kurt Liffman, Michael M. D. Lawrence-Brown, and James B. Semmens. "Experimental Force Measurements on a Bifurcated Endoluminal Stent Graft Model: Comparison with Theory." Vascular 13, no. 2 (March 1, 2005): 98–106. http://dx.doi.org/10.1258/rsmvasc.13.2.98.

Abstract:
The goal of this study was to experimentally validate a steady-state mathematical model, which can be used to compute the forces acting on a bifurcated endoluminal stent graft. To accomplish this task, an acrylic model of a bifurcated graft was used for the force measurements. The graft model was connected to the inlet piping with a flexible rubber membrane that allowed the graft model to move. This allowed us to measure the force owing to the movement of the graft model with a calibrated load cell. Steady-state blood flow was assumed, and the working fluid was water. The experimental data were found to be consistent with the results from a previously published mathematical model: the graft force is strongly dependent on the proximal or inlet pressure and the inlet area. The force tends to be weakly dependent on flow rate. More research work will be required to determine whether the steady-state force model examined in this article provides a realistic determination of the forces on an endoluminal stent graft that is subject to pulsatile blood flow.
19. Oliver, Tim, Michelle Leonard, Juliet Lee, Akira Ishihara, and Ken Jacobson. "Video-microscopic measurement of cell-substratum traction forces generated by locomoting keratocytes." Proceedings, annual meeting, Electron Microscopy Society of America 51 (August 1, 1993): 188–89. http://dx.doi.org/10.1017/s0424820100146783.

Abstract:
We are using video-enhanced light microscopy to investigate the pattern and magnitude of forces that fish keratocytes exert on flexible silicone rubber substrata. Our goal is a clearer understanding of the way molecular motors acting through the cytoskeleton co-ordinate their efforts into locomotion at cell velocities up to 1 μm/sec. Cell traction forces were previously observed as wrinkles (Fig. 1) in strong silicone rubber films by Harris (1). These forces are now measurable by two independent means. In the first of these assays, weakly crosslinked films are made, into which latex beads have been embedded (Fig. 2). These films report local cell-mediated traction forces as bead displacements in the plane of the film (Fig. 3), which recover when the applied force is released. Calibrated flexible glass microneedles are then used to reproduce the translation of individual beads. We estimate the force required to distort these films to be 0.5 mdyne/μm of bead movement. Video-frame analysis of bead trajectories is providing data on the relative localisation, dissipation and kinetics of traction forces.
20. Balkman, Geoffrey S., Alyssa M. Bamer, Phillip M. Stevens, Eric L. Weber, Sara J. Morgan, Rana Salem, Dagmar Amtmann, and Brian J. Hafner. "Development and initial validation of the Orthotic Patient-Reported Outcomes—Mobility (OPRO-M): An item bank for evaluating mobility of people who use lower-limb orthoses." PLOS ONE 18, no. 11 (November 2, 2023): e0293848. http://dx.doi.org/10.1371/journal.pone.0293848.

Abstract:
Lower limb orthoses (LLOs) are externally-applied leg braces that are designed to improve or maintain mobility in people with a variety of health conditions that affect lower limb function. Clinicians and researchers are therefore often motivated to measure LLO users’ mobility to select or assess the effectiveness of these devices. Patient-reported outcome measures (PROMs) can provide insights into important aspects of a LLO user’s mobility for these purposes. However, few PROMs are available to measure mobility of LLO users. Those few that exist have issues that may limit their clinical or scientific utility. The objective of this study was to create a population-specific item bank for measuring mobility of LLO users. Previously-developed candidate items were administered in a cross-sectional study to a large national sample of LLO users. Responses from study participants (n = 1036) were calibrated to a graded response statistical model using Item Response Theory methods. A set of 39 items was found to be unidimensional, locally independent, and function without bias due to characteristics unrelated to mobility. The set of final calibrated items, termed the Orthotic Patient-Reported Outcomes—Mobility (OPRO-M) item bank, was evaluated for initial evidence of convergent, divergent, and known groups construct validity. OPRO-M was strongly correlated with existing PROMs designed to measure aspects of physical function. Conversely, OPRO-M was weakly correlated with PROMs that measured unrelated constructs, like sleep disturbance and depression. OPRO-M also showed an ability to differentiate groups with expected mobility differences. Two fixed-length short forms were created from the OPRO-M item bank. Items on the short forms were selected based on statistical and clinical criteria. Collectively, results from this study indicate that OPRO-M can effectively measure mobility of LLO users, and OPRO-M short forms can now be recommended for use in routine clinical practice and research studies.
21. Leavitt, P. R., and D. L. Findlay. "Comparison of Fossil Pigments with 20 Years of Phytoplankton Data from Eutrophic Lake 227, Experimental Lakes Area, Ontario." Canadian Journal of Fisheries and Aquatic Sciences 51, no. 10 (October 1, 1994): 2286–99. http://dx.doi.org/10.1139/f94-232.

Abstract:
Fossil pigments from annually laminated sediments were calibrated with coeval phytoplankton data (1970–1989) from experimentally eutrophied Lake 227 in the Experimental Lakes Area, Ontario. Concentrations of ubiquitous pigments (β-carotene, pheophytin a) were correlated to total algal biomass standing crop (r = 0.56–0.65; P < 0.01) during the ice-free seasons, but not to carbon fixation or water-column chlorophyll (Chl). Indicator pigments were correlated to ice-free season algal biomass for cyanobacteria (echinenone, aphanizophyll) and chlorophytes (lutein–zeaxanthin, pheophytin b) (r = 0.53–0.55, P < 0.05), weakly correlated for cryptophytes (alloxanthin, α-carotene; r = 0.32–0.40, P < 0.10), but were uncorrelated for chrysophytes and diatoms (fucoxanthin, Chl c) or dinoflagellates (peridinin). Premanipulation concentrations of fossil pigments (nmol pigment∙(g organic matter)−1) from green algae and filamentous cyanobacteria (myxoxanthophyll) increased 4- to 10-fold in response to eutrophication of Lake 227. N2-fixing cyanobacteria (recorded as aphanizophyll) replaced chlorophytes after the nitrogen additions decreased threefold in 1975. In contrast, accumulation rates of pigments (nmol pigment∙m−2∙yr−1) were rarely correlated with algal standing crop or production and were less satisfactory than fossil concentrations for the purpose of detecting changes in phytoplankton community composition.
22. Thorsteinsdottir, Bjorg, LaTonya J. Hickson, Rachel Giblon, Atieh Pajouhi, Natalie Connell, Megan Branda, Amrit K. Vasdev, et al. "Validation of prognostic indices for short term mortality in an incident dialysis population of older adults >75." PLOS ONE 16, no. 1 (January 20, 2021): e0244081. http://dx.doi.org/10.1371/journal.pone.0244081.

Abstract:
Rationale and objective: Prognosis provides critical knowledge for shared decision making between patients and clinicians. While several prognostic indices for mortality in dialysis patients have been developed, their performance among elderly patients initiating dialysis is unknown, despite great need for reliable prognostication in that context. We assessed the performance of 6 previously validated prognostic indices to predict 3- and/or 6-month mortality in a cohort of elderly incident dialysis patients. Study design: Validation study of prognostic indices using retrospective cohort data. Indices were compared using the concordance (“c”)-statistic, i.e. area under the receiver operating characteristic curve (ROC). Calibration, sensitivity, specificity, positive and negative predictive values were also calculated. Setting & participants: Incident elderly (age ≥75 years; n = 349) dialysis patients at a tertiary referral center. Established predictors: Variables for six validated prognostic indices for short-term (3- and 6-month) mortality prediction (Foley, NCI, REIN, updated REIN, Thamer, and Wick) were extracted from the electronic medical record. The indices were individually applied as per each index's specifications to predict 3- and/or 6-month mortality. Results: In our cohort of 349 patients, mean age was 81.5±4.4 years, 66% were male, and median survival was 351 days. The c-statistic for the risk prediction indices ranged from 0.57 to 0.73. The Wick (ROC 0.73; 0.68, 0.78) and Foley (0.67; 0.61, 0.73) indices performed best. The Foley index was weakly calibrated with poor overall model fit (p < 0.01) and overestimated mortality risk, while the Wick index was relatively well-calibrated but underestimated mortality risk. Limitations: Small sample size, use of secondary data, need for imputation, homogeneous population. Conclusion: Most predictive indices for mortality performed moderately in our incident dialysis population. The Wick and Foley indices were the best performing, but had issues with under- and over-calibration. More accurate indices for predicting survival in older patients with kidney failure are needed.
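For reference, the concordance ("c") statistic used to compare the indices can be computed directly as the probability that a patient who died was assigned a higher predicted risk than one who survived (ties counted as one half); a minimal sketch with invented data:

```python
def c_statistic(risks, died):
    events = [r for r, d in zip(risks, died) if d]
    nonevents = [r for r, d in zip(risks, died) if not d]
    pairs = len(events) * len(nonevents)
    concordant = sum(1.0 if re > rn else 0.5 if re == rn else 0.0
                     for re in events for rn in nonevents)
    return concordant / pairs

risks = [0.9, 0.4, 0.7, 0.2, 0.6]          # predicted 6-month mortality risk
died  = [True, False, True, False, False]  # observed outcome
print(c_statistic(risks, died))            # 1.0: perfectly ranked toy data
```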
23. Majdalani, Joseph, James Barron, and William K. Van Moorhem. "Inception of Turbulence in the Stokes Boundary Layer Over a Transpiring Wall." Journal of Fluids Engineering 124, no. 3 (August 19, 2002): 678–84. http://dx.doi.org/10.1115/1.1490375.

Abstract:
In this work, the onset of turbulence inside a rectangular chamber is investigated, with and without side-wall injection, in the presence of an oscillatory pressure gradient. Two techniques are used to define the transition from laminar to turbulent regimes: statistical analysis and flow visualization. Calibrated hot film anemometry and a computer data acquisition system are used to record and analyze acoustical flow data. Four classifications of flow regimes are reported: (a) laminar, (b) distorted laminar, (c) weakly turbulent, and (d) conditionally turbulent. Despite numerous attempts to promote turbulence, a fully turbulent flow does not develop at any of the driving frequencies tested. Statistical measurements reveal that a periodic drop in standard deviation of axial velocity fluctuations always occurs, indicating relaminarization within each cycle. Transition between flow regimes is assessed from the standard deviation of velocity data correlated as a function of the acoustic Reynolds number ReA. Under predominantly laminar conditions, the standard deviation is found to vary approximately with the square of the acoustic Reynolds number. Under turbulent conditions, the standard deviation becomes almost directly proportional to the acoustic Reynolds number. Inception of turbulence in the oscillatory flow with side-wall injection is found to be reproducible at the same critical value of ReA≅200.
24. Banks, William E., Anaïs Vignoles, Jessica Lacarrière, André Morala, and Laurent Klaric. "A Hierarchical Bayesian Examination of the Chronological Relationship between the Noaillian and Rayssian Phases of the French Middle Gravettian." Quaternary 7, no. 2 (June 12, 2024): 26. http://dx.doi.org/10.3390/quat7020026.

Abstract:
Issues of chronology are central to inferences pertaining to relationships between both contemporaneous and successive prehistoric typo-technological entities (i.e., archaeological cultures), culture–environment relationships, and ultimately the mechanisms at play behind cultural changes observed through time in the archaeological record. We refine the chronology of Upper Paleolithic archaeological cultures between 35–18 calibrated kiloanni before the present in present-day France by incorporating recently published radiocarbon data along with new 14C ages that we obtained from several Gravettian archaeological contexts. We present the results of a Bayesian age model that includes these new radiometric data and that, more importantly, separates Gravettian contexts in regions north of the Garonne River into two successive cultural phases: The Northern Noaillian and the Rayssian, respectively. This new age model places the beginning of the Noaillian during Greenland Stadial 5.2. The appearance of contexts containing assemblages associated with the Rayssian lithic technical system occurs immediately prior to the termination of Greenland Interstadial 5.1, and it is present throughout Heinrich Event 3 (GS-5.1) and into the following GI-4 climatic amelioration. Despite the Rayssian’s initial appearance during the brief and relatively weakly expressed Greenland Interstadial 5.1, its duration suggests that Rayssian lithic technology was well-suited to the environmental conditions of Greenland Stadial 5.1.
25. Pantillon, Florian, Peter Knippertz, John H. Marsham, and Cathryn E. Birch. "A Parameterization of Convective Dust Storms for Models with Mass-Flux Convection Schemes." Journal of the Atmospheric Sciences 72, no. 6 (May 27, 2015): 2545–61. http://dx.doi.org/10.1175/jas-d-14-0341.1.

Abstract:
Cold pool outflows, generated by downdrafts from moist convection, can generate strong winds and therefore uplift of mineral dust. These so-called haboob convective dust storms occur over all major dust source areas worldwide and contribute substantially to emissions in northern Africa, the world’s largest source. Most large-scale models lack convective dust storms because they do not resolve moist convection, relying instead on convection schemes. The authors suggest a parameterization of convective dust storms to account for their contribution in such large-scale models. The parameterization is based on a simple conceptual model, in which the downdraft mass flux from the convection scheme spreads out radially in a cylindrical cold pool. The parameterization is tested with a set of Met Office Unified Model runs for June and July 2006 over West Africa. It is calibrated with a convection-permitting run and applied to a convection-parameterized run. The parameterization successfully produces the extensive area of dust-generating winds from cold pool outflows over the southern Sahara. However, this area extends farther to the east and dust-generating winds occur earlier in the day than in the convection-permitting run. These biases are caused by biases in the convection scheme. It is found that the location and timing of dust-generating winds are weakly sensitive to the parameters of the conceptual model. The results demonstrate that a simple parameterization has the potential to correct a major and long-standing limitation in global dust models.
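The conceptual model described (a downdraft mass flux spreading radially in a cylindrical cold pool) implies an outflow speed from mass conservation, v(r) = M / (2πr·h·ρ). A back-of-envelope sketch with invented numbers, not the paper's calibrated values:

```python
import math

def outflow_speed(mass_flux, radius, depth, rho=1.1):
    # Steady cylindrical outflow: M = 2*pi*r*h*rho*v  =>  v = M/(2*pi*r*h*rho)
    return mass_flux / (2.0 * math.pi * radius * depth * rho)

M = 5.0e8    # downdraft mass flux from the convection scheme, kg/s (made up)
h = 1000.0   # assumed cold-pool depth, m
for r_km in (10, 20, 50):
    print(f"r = {r_km} km: v ~ {outflow_speed(M, r_km * 1e3, h):.1f} m/s")
```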
26. Barber, Katelyn A., and Gretchen L. Mullendore. "The Importance of Convective Stage on Out-of-Cloud Convectively Induced Turbulence from High-Resolution Simulations." Monthly Weather Review 148, no. 11 (November 2020): 4587–605. http://dx.doi.org/10.1175/mwr-d-20-0065.1.

Abstract:
Turbulence (clear-air, mountain wave, convectively induced) is an aviation hazard that is a challenge to forecast due to the coarse resolution utilized in operational weather models. Turbulence indices are commonly used to aid pilots in avoiding turbulence, but these indices have been designed and calibrated for midlatitude clear-air turbulence prediction (e.g., the Ellrod index). A significant limitation with current convectively induced turbulence (CIT) prediction is the lack of storm stage dependency. In this study, six high-resolution simulations of tropical oceanic and midlatitude continental convection are performed to characterize the turbulent environment near various convective types during the developing and mature stages. Second-order structure functions, a diagnostic commonly used to identify turbulence in turbulence prediction systems, are used to characterize the probability of turbulence for various convective types. Turbulence likelihood was found to be independent of region (i.e., tropical vs midlatitude) but dependent on convective stage. The probability of turbulence increased near developing convection for the majority of cases. Additional analysis of static stability and vertical wind shear, indicators of turbulence potential, showed that the convective environment near developing convection was more favorable for turbulence production than mature convection. Near developing convection, static stability decreased and vertical wind shear increased. Vertical wind shear near mature and developing convection was found to be weakly correlated to turbulence intensity in both the tropics and the midlatitudes. This study emphasizes the need for turbulence avoidance guidelines for the aviation community that are dependent on convective stage.
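The second-order structure function named above is D2(s) = ⟨(u(x+s) − u(x))²⟩ along a wind record; a generic sketch, not the authors' exact diagnostic:

```python
import numpy as np

def structure_function(u, max_lag):
    # Second-order structure function D2 for lags 1..max_lag (grid units).
    u = np.asarray(u, dtype=float)
    return np.array([np.mean((u[lag:] - u[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
wind = np.cumsum(rng.normal(size=500))      # synthetic correlated wind series
print(structure_function(wind, max_lag=5))  # grows with lag for rough signals
```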
27. Dimberg, Axel, Ulrica Alström, Elisabeth Ståhle, and Christina Christersson. "Higher Preoperative Plasma Thrombin Potential in Patients Undergoing Surgery for Aortic Stenosis Compared to Surgery for Stable Coronary Artery Disease." Clinical and Applied Thrombosis/Hemostasis 24, no. 8 (May 16, 2018): 1282–90. http://dx.doi.org/10.1177/1076029618776374.

Abstract:
Aortic stenosis (AS) and coronary artery disease (CAD) influence the coagulation system, potentially affecting hemostasis during cardiac surgery. Our aim was to evaluate 2 preoperative global hemostasis assays, plasma thrombin potential and thromboelastometry, in patients with severe aortic valve stenosis compared to patients with CAD. A secondary aim was to test whether the assays were associated with postoperative bleeding. Calibrated automated thrombogram (CAT) in platelet-poor plasma and rotational thromboelastometry (ROTEM) in whole blood were analyzed in patients scheduled for elective surgery due to severe AS (n = 103) and stable CAD (n = 68). Patients with AS displayed higher plasma thrombin potential, both thrombin peak with median 252 nmol/L (interquartile range 187-319) and endogenous thrombin potential (ETP) with median 1552 nmol/L/min (interquartile range 1340-1838), when compared to patients with CAD, where thrombin peak was median 174 nmol/L (interquartile range 147-229) and ETP median 1247 nmol/L/min (interquartile range 1034-1448; both P < .001). Differences persisted after adjustment for age, gender, comorbidity, and antithrombotic treatment. Differences observed in thromboelastometry between the groups did not persist after adjustment for baseline characteristics. Bleeding amount showed no relationship with plasma thrombin potential but was weakly related to thromboelastometry (R2 = .064, P = .001). Patients with AS exhibited preoperatively increased plasma thrombin potential compared to patients with CAD. Plasma thrombin potential was not predictive for postoperative bleeding in patients scheduled for elective surgery.
28. Hsu, N. C., R. Gautam, A. M. Sayer, C. Bettenhausen, C. Li, M. J. Jeong, S. C. Tsay, and B. N. Holben. "Global and regional trends of aerosol optical depth over land and ocean using SeaWiFS measurements from 1997 to 2010." Atmospheric Chemistry and Physics Discussions 12, no. 3 (March 29, 2012): 8465–501. http://dx.doi.org/10.5194/acpd-12-8465-2012.

Abstract:
Both sensor calibration and the satellite retrieval algorithm play an important role in the ability to accurately determine long-term trends from satellite data. Owing to the unprecedented accuracy and long-term stability of its radiometric calibration, the SeaWiFS measurements exhibit minimal uncertainty with respect to sensor calibration. In this study, we take advantage of this well-calibrated set of measurements by applying a newly-developed aerosol optical depth (AOD) retrieval algorithm over land and ocean to investigate the distribution of AOD, and to identify emerging patterns and trends in global and regional aerosol loading during its 13-yr mission. Our results indicate that the averaged AOD trend over global ocean is weakly positive from 1998 to 2010 and comparable to that observed by MODIS but opposite in sign to that observed by AVHRR during overlapping years. On a smaller scale, different trends are detected for different regions. For example, large upward trends are found over the Arabian Peninsula that indicate a strengthening of the seasonal cycle of dust emission and transport processes over the whole region as well as over downwind oceanic regions. In contrast, a negative-neutral tendency is observed over the desert/arid Saharan region as well as in the associated dust outflow over the North Atlantic. Additionally, we found decreasing trends over the Eastern US and Europe, and increasing trends over countries such as China and India that are experiencing rapid economic development. In general, these results are consistent with those derived from ground-based AERONET measurements.
29. Lupiola, Jagoba, Javier F. Bárcena, Javier García-Alba, and Andrés García. "A Dynamic Estuarine Classification of the Vertical Structure Based on the Water Column Density Slope and the Potential Energy Anomaly." Water 15, no. 18 (September 18, 2023): 3294. http://dx.doi.org/10.3390/w15183294.

Abstract:
The aim of this work is to develop a new classification of estuaries according to their vertical structure, examining the advantages and disadvantages of the existing classifications. For this purpose, we reviewed the main classifications, finding that most of them analyze the entire estuary as a single water body without considering the spatiotemporal variability of the mixing zone in estuaries. Furthermore, the proposed classifications require the calculation of parameters that are not easily measurable, such as tidal current amplitude. Thus, we developed a new classification based on density profile slopes of the water column, which has been correlated to the potential energy anomaly. To test this classification, the proposed method was applied to the Suances estuary (Spain) during the year 2020 and to analyze the potential estuarine modifications under four climate change projections (RCP 4.5 and 8.5 for the years 2050 and 2100). To carry out this study, a calibrated and validated high-resolution horizontal and vertical 3D model was used. The application showed a high variability in the vertical structure of the estuary due to the tide and river. According to the proposed classification, the well-mixed category was predominant in the estuary in 2020 and tended to grow in the climate change projections. As a result, the fully mixed and weakly stratified mixing classes were reduced in the projected scenarios due to changes in external forcing, such as reduced river flow and sea level rise. Furthermore, areas classified as stratified tended to move upstream of the estuary.
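The potential energy anomaly the classification is correlated against is usually the Simpson–Hunter quantity φ = (g/H)∫(ρ̄ − ρ)z dz, the energy per unit volume needed to mix the water column. A minimal sketch under that assumption:

```python
import numpy as np

def trapz(y, x):
    # Explicit trapezoidal rule, to keep the sketch self-contained.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def potential_energy_anomaly(z, rho, g=9.81):
    # z: height (m, negative downward, increasing toward the surface);
    # rho: density (kg/m^3). Returns phi (J/m^3): 0 when fully mixed,
    # increasingly positive with stratification.
    H = z[-1] - z[0]
    rho_mean = trapz(rho, z) / H
    return (g / H) * trapz((rho_mean - rho) * z, z)

z = np.linspace(-10.0, 0.0, 101)            # 10 m water column
rho = np.where(z > -3.0, 1010.0, 1025.0)    # fresher 3 m surface layer
print(potential_energy_anomaly(z, rho))     # > 0: stratified
```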
30. Milenković, M., W. Karel, C. Ressl, and N. Pfeifer. "A Comparison of UAV and TLS Data for Soil Roughness Assessment." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-5 (June 6, 2016): 145–52. http://dx.doi.org/10.5194/isprsannals-iii-5-145-2016.

Abstract:
Soil roughness represents fine-scale surface geometry which figures in many geophysical models. While static photogrammetric techniques (terrestrial images and laser scanning) have been recently proposed as a new source for deriving roughness heights, there is still a need to overcome acquisition scale and viewing geometry issues. By contrast to the static techniques, images taken from unmanned aerial vehicles (UAV) can maintain near-nadir looking geometry over scales of several agricultural fields. This paper presents a pilot study on high-resolution soil roughness reconstruction and assessment from UAV images over an agricultural plot. As a reference method, terrestrial laser scanning (TLS) was applied on a 10 m x 1.5 m subplot. The UAV images were self-calibrated and oriented within a bundle adjustment, and processed further up to a dense-matched digital surface model (DSM). The analysis of the UAV- and TLS-DSMs was performed in the spatial domain based on the surface autocorrelation function and the correlation length, and in the frequency domain based on the roughness spectrum and the surface fractal dimension (spectral slope). The TLS- and UAV-DSM differences were found to be under ±1 cm, while the UAV DSM showed a systematic pattern below this scale, which was explained by weakly tied sub-blocks of the bundle block. The results also confirmed that the existing TLS methods lead to roughness assessment up to 5 mm resolution. However, for our UAV data, this was not possible to achieve, though it was shown that for spatial scales of 12 cm and larger, both methods appear to be usable. Additionally, this paper suggests a method to propagate measurement errors to the correlation length.
31. Milenković, M., W. Karel, C. Ressl, and N. Pfeifer. "A Comparison of UAV and TLS Data for Soil Roughness Assessment." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-5 (June 6, 2016): 145–52. http://dx.doi.org/10.5194/isprs-annals-iii-5-145-2016.

Abstract:
Soil roughness represents fine-scale surface geometry which figures in many geophysical models. While static photogrammetric techniques (terrestrial images and laser scanning) have been recently proposed as a new source for deriving roughness heights, there is still a need to overcome acquisition scale and viewing geometry issues. By contrast to the static techniques, images taken from unmanned aerial vehicles (UAV) can maintain near-nadir looking geometry over scales of several agricultural fields. This paper presents a pilot study on high-resolution soil roughness reconstruction and assessment from UAV images over an agricultural plot. As a reference method, terrestrial laser scanning (TLS) was applied on a 10 m x 1.5 m subplot. The UAV images were self-calibrated and oriented within a bundle adjustment, and processed further up to a dense-matched digital surface model (DSM). The analysis of the UAV- and TLS-DSMs was performed in the spatial domain based on the surface autocorrelation function and the correlation length, and in the frequency domain based on the roughness spectrum and the surface fractal dimension (spectral slope). The TLS- and UAV-DSM differences were found to be under ±1 cm, while the UAV DSM showed a systematic pattern below this scale, which was explained by weakly tied sub-blocks of the bundle block. The results also confirmed that the existing TLS methods lead to roughness assessment up to 5 mm resolution. However, for our UAV data, this was not possible to achieve, though it was shown that for spatial scales of 12 cm and larger, both methods appear to be usable. Additionally, this paper suggests a method to propagate measurement errors to the correlation length.
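One common way to obtain the correlation length named above is the lag at which the normalized surface autocorrelation first drops below 1/e; the estimator below follows that generic convention, not necessarily the paper's exact one:

```python
import numpy as np

def correlation_length(heights, dx):
    h = np.asarray(heights, dtype=float) - np.mean(heights)
    acf = np.correlate(h, h, mode="full")[h.size - 1:]
    acf = acf / acf[0]                       # normalize: acf[0] == 1
    below = np.where(acf < 1.0 / np.e)[0]
    return below[0] * dx if below.size else float("inf")

rng = np.random.default_rng(1)
profile = np.convolve(rng.normal(size=2000), np.ones(25) / 25, mode="same")
print(correlation_length(profile, dx=0.005), "m")  # 5 mm grid, illustrative
```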
32. Hutchinson, D. R., C. F. Michael Lewis, and G. E. Hund. "Regional Stratigraphic Framework of Surficial Sediments and Bedrock Beneath Lake Ontario." Géographie physique et Quaternaire 47, no. 3 (November 23, 2007): 337–52. http://dx.doi.org/10.7202/032962ar.

Abstract:
Approximately 2550 km of single-channel high-resolution seismic reflection profiles have been interpreted and calibrated with lithological and geochronological information from four representative piston cores and one grab sample to provide a regional stratigraphic framework for the subbottom deposits of Lake Ontario. Five units overlying Paleozoic bedrock were identified and mapped. These are classified as informal units and represent, from oldest to youngest: (A) subglacial till (?) deposited by the Port Huron ice at the end of the Wisconsin glaciation; (B) an ice-marginal (?) unit confined to the western part of the lake that was probably deposited during retreat of the Port Huron ice shortly after 13 ka; (C) a regionally extensive unit of laminated glacio-lacustrine clay that accumulated until about 11 ka; (D) a weakly laminated to more massive lake clay deposited during a period of reduced water supply and rising water levels after the drawdown of the high-level glacial lakes (Iroquois and successors); and (E) modern lake clay less than 10 m thick that began accumulating around 6-8 ka with the subsequent return of upper Great Lakes drainage through the Ontario basin. Seismic reflections also define the configuration of the bedrock surface and pre-glacial stream valleys incised in the bedrock surface. Several anomalous bottom and subbottom features in the surficial sediments are mapped, such as discontinuous and offset reflections, furrows, gas pockets, and areas of large subbottom relief. None of these features appear to be spatially correlative with the diffuse seismicity that characterizes the lake area or with deeper structures such as Paleozoic bedrock faults or crustal-penetrating faults in the Precambrian basement.
33. Witherspoon, L. R., S. E. Shuler, G. F. Joseph, E. F. Baird, H. R. Neely, and R. E. Sonnemaker. "Immunoassays for Quantifying Choriogonadotropin Compared for Assay Performance and Clinical Application." Clinical Chemistry 38, no. 6 (June 1, 1992): 887–94. http://dx.doi.org/10.1093/clinchem/38.6.887.

Abstract:
We examined calibration and accuracy, precision, sensitivity, specificity, and "hook" effects for recently revised automated choriogonadotropin (hCG) immunoassay systems (Baxter-Dade Stratus II, Abbott IMx intact hCG and total beta hCG) and compared them with a widely used immunoradiometric assay (Hybritech). We estimated hCG in pregnant women, women with trophoblastic disease, nonpregnant young and menopausal women, normal men, and men with testicular tumors. We found clinically unimportant differences in calibration (all calibrated to the 3rd International Standard). Detection of hCG by all four assays was limited by their responses in serum from nonpregnant women and men. Precision within-run was best for the automated instruments, but all four assays had similar between-run precision. The Hybritech, Stratus, and IMx intact assays are specific for intact hCG. The IMx total beta assay quantifies both free beta subunit and beta subunit present in intact hCG. There is a clinically important hook effect in the Hybritech assay but not the Stratus or IMx assays (to 1.2 × 10^6 int. units/L). Results for pregnant women were similar by all four assays. We measured "hCG" to 8 int. units/L in menopausal women, which weakly correlated with concentrations of lutropin and follitropin and was, in part, explained by crossreactivity. There was no sample-probe carryover in either instrument. We found the IMx diluting module as well as results at the extremes of the IMx calibration curves (less than 10, 800-1200 int. units/L) unreliable but encountered no such problems with the Stratus system. Both automated systems involve batch analyzers with limited throughput but provide hCG concentration estimates much more quickly than the Hybritech assay can.
34. Dhakal, S., N. P. Bhandary, R. Yatabe, and N. Kinoshita. "Numerical and analytical investigation towards performance enhancement of a newly developed rockfall protective cable-net structure." Natural Hazards and Earth System Sciences 12, no. 4 (April 20, 2012): 1135–49. http://dx.doi.org/10.5194/nhess-12-1135-2012.

Abstract:
In a previous companion paper, we presented a three-tier modelling of a particular type of rockfall protective cable-net structure (barrier), developed newly in Japan. Therein, we developed a three-dimensional, Finite Element based, nonlinear numerical model, calibrated/back-calculated and verified with the element- and structure-level physical tests. Moreover, using a very simple, lumped-mass, single-degree-of-freedom, equivalently linear analytical model, a global-displacement-predictive correlation was devised by modifying the basic equation – obtained by combining the principles of conservation of linear momentum and energy – based on the back-analysis of the tests on the numerical model. In this paper, we use the developed models to explore the performance enhancement potential of the structure in terms of (a) the control of global displacement – possibly the major performance criterion for the proposed structure owing to a narrow space available in the targeted site, and (b) the increase in energy dissipation by the existing U-bolt-type Friction-brake Devices – which are identified to have performed weakly when integrated into the structure. A set of parametric investigations has revealed correlations to achieve the first objective in terms of the structure's mass, particularly by manipulating the wire-net's characteristics, and has additionally disclosed the effects of the impacting-block's parameters. Towards achieving the second objective, another set of parametric investigations has led to a proposal of a few innovative improvements in the constitutive behaviour (model) of the studied brake device (dissipator), in addition to an important recommendation of careful handling of the device based on the identified potential flaw.
35. Tillman, Megan Taylor, Blakesley Burkhart, Stephanie Tonnesen, Simeon Bird, Greg L. Bryan, Daniel Anglés-Alcázar, Romeel Davé, and Shy Genel. "Efficient Long-range Active Galactic Nuclei (AGNs) Feedback Affects the Low-redshift Lyα Forest." Astrophysical Journal Letters 945, no. 1 (March 1, 2023): L17. http://dx.doi.org/10.3847/2041-8213/acb7f1.

Abstract:
Active galactic nuclei (AGNs) feedback models are generally calibrated to reproduce galaxy observables such as the stellar mass function and the bimodality in galaxy colors. We use variations of the AGN feedback implementations in the IllustrisTNG (TNG) and Simba cosmological hydrodynamic simulations to show that the low-redshift Lyα forest can provide constraints on the impact of AGN feedback. We show that TNG overpredicts the number density of absorbers at column densities NHI < 10^14 cm^−2 compared to data from the Cosmic Origins Spectrograph (in agreement with previous work), and we demonstrate explicitly that its kinetic feedback mode, which is primarily responsible for galaxy quenching, has a negligible impact on the column density distribution (CDD) of absorbers. In contrast, we show that the fiducial Simba model, which includes AGN jet feedback, is the preferred fit to the observed CDD of the z = 0.1 Lyα forest across 5 orders of magnitude in column density. We show that the Simba results with jets produce a quantitatively better fit to the observational data than the Simba results without jets, even when the ultraviolet background is left as a free parameter. AGN jets in Simba are high speed, collimated, weakly interacting with the interstellar medium (via brief hydrodynamic decoupling), and heated to the halo virial temperature. Collectively these properties result in stronger long-range impacts on the intergalactic medium when compared to TNG’s kinetic feedback mode, which drives isotropic winds with lower velocities at the galactic radius. Our results suggest that the low-redshift Lyα forest provides plausible evidence for long-range AGN jet feedback.
36. Sayer, Andrew M., Luca Lelli, Brian Cairns, Bastiaan van Diedenhoven, Amir Ibrahim, Kirk D. Knobelspiesse, Sergey Korkin, and P. Jeremy Werdell. "The CHROMA cloud-top pressure retrieval algorithm for the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission." Atmospheric Measurement Techniques 16, no. 4 (February 23, 2023): 969–96. http://dx.doi.org/10.5194/amt-16-969-2023.

Abstract:
This paper provides the theoretical basis and simulated retrievals for the Cloud Height Retrieval from O2 Molecular Absorption (CHROMA) algorithm. Simulations are performed for the Ocean Color Instrument (OCI), which is the primary payload on the forthcoming NASA Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, and the Ocean Land Colour Instrument (OLCI) currently flying on the Sentinel 3 satellites. CHROMA is a Bayesian approach which simultaneously retrieves cloud optical thickness (COT), cloud-top pressure and height (CTP and CTH respectively), and (with a significant prior constraint) surface albedo. Simulated retrievals suggest that the sensor and algorithm should be able to meet the PACE mission goal for CTP error, which is ±60 mb for 65 % of opaque (COT ≥3) single-layer clouds on global average. CHROMA will provide pixel-level uncertainty estimates, which are demonstrated to have skill at telling low-error situations from high-error ones. CTP uncertainty estimates are well-calibrated in magnitude, although COT uncertainty is overestimated relative to observed errors. OLCI performance is found to be slightly better than OCI overall, demonstrating that it is a suitable proxy for the latter in advance of PACE's launch. CTP error is only weakly sensitive to correct cloud phase identification or assumed ice crystal habit/roughness. As with other similar algorithms, for simulated retrievals of multi-layer systems consisting of optically thin cirrus clouds above liquid clouds, retrieved height tends to be underestimated because the satellite signal is dominated by the optically thicker lower layer. Total (liquid plus ice) COT also becomes underestimated in these situations. However, retrieved CTP becomes closer to that of the upper ice layer for ice COT ≈3 or higher.
37. Jee, Inh, Sherry H. Suyu, Eiichiro Komatsu, Christopher D. Fassnacht, Stefan Hilbert, and Léon V. E. Koopmans. "A measurement of the Hubble constant from angular diameter distances to two gravitational lenses." Science 365, no. 6458 (September 12, 2019): 1134–38. http://dx.doi.org/10.1126/science.aat7371.

Full text
Abstract:
The local expansion rate of the Universe is parametrized by the Hubble constant, H0, the ratio between recession velocity and distance. Different techniques lead to inconsistent estimates of H0. Observations of Type Ia supernovae (SNe) can be used to measure H0, but this requires an external calibrator to convert relative distances to absolute ones. We use the angular diameter distance to strong gravitational lenses as a suitable calibrator, which is only weakly sensitive to cosmological assumptions. We determine the angular diameter distances to two gravitational lenses, 810 (+160/−130) and 1230 (+180/−150) megaparsecs, at redshifts z = 0.295 and 0.6304. Using these absolute distances to calibrate 740 previously measured relative distances to SNe, we measure the Hubble constant to be H0 = 82.4 (+8.4/−8.3) kilometers per second per megaparsec.
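The calibration step relies on distance duality, d_L = (1 + z)^2 d_A, which converts angular diameter distances to the luminosity distances on which the SN scale is built, independently of cosmological parameters. A minimal sketch applying it to the two quoted lens distances (central values only, uncertainties omitted):

    # Distance duality: d_L = (1 + z)^2 * d_A
    lenses = {0.295: 810.0, 0.6304: 1230.0}   # z -> d_A in Mpc
    for z, d_A in lenses.items():
        d_L = (1.0 + z) ** 2 * d_A
        print(f"z = {z}: d_A = {d_A:.0f} Mpc -> d_L = {d_L:.0f} Mpc")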
APA, Harvard, Vancouver, ISO, and other styles
38

Adane, Girma Berhe, Birtukan Abebe Hirpa, Belay Manjur Gebru, Cholho Song, and Woo-Kyun Lee. "Integrating Satellite Rainfall Estimates with Hydrological Water Balance Model: Rainfall-Runoff Modeling in Awash River Basin, Ethiopia." Water 13, no. 6 (March 15, 2021): 800. http://dx.doi.org/10.3390/w13060800.

Full text
Abstract:
Hydrologic models play an indispensable role in managing the scarce water resources of a region, and in developing countries the availability and distribution of data are challenging. This research aimed to integrate and compare the satellite rainfall products, namely the Tropical Rainfall Measuring Mission (TRMM 3B43v7) and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), with a GR2M hydrological water balance model over the diversified terrain of the Awash River Basin in Ethiopia. Nash–Sutcliffe efficiency (NSE), percent bias (PBIAS), coefficient of determination (R2), root mean square error (RMSE), and Pearson correlation coefficient (PCC) were used to evaluate the satellite rainfall products and the hydrologic model performance in the basin. The satellite rainfall estimates of both products showed a higher PCC (above 0.86) with areal observed rainfall in the Uplands, the Western highlands, and the Lower sub-basins. However, the association was weaker in the Upper valley and the Eastern catchments of the basin, ranging from 0.45 to 0.65. Assimilating the satellite rainfall products into the GR2M model showed that 80% of the calibrated and 60% of the validated watersheds in the basin had a low magnitude of PBIAS (<±10), which resulted in better accuracy in flow simulation. Poor performance of the GR2M model, with higher PBIAS (≥±25), was observed only in the Melka Kuntire (TRMM 3B43v7 and PERSIANN-CDR), Mojo (PERSIANN-CDR), Metehara (all rainfall data sets), and Kessem (TRMM 3B43v7) watersheds. Therefore, integrating these satellite rainfall data with hydrological data generally appeared to be useful, particularly in this data-scarce basin. However, validation with ground observations is required for effective water resources planning and management in the basin. Furthermore, it is recommended to bias-correct satellite rainfall products in watersheds where they perform poorly (higher PBIAS) before assimilating them into the hydrologic model.
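The evaluation statistics named in this abstract are standard and straightforward to reproduce. The self-contained sketch below computes them; obs and sim are hypothetical observed and simulated monthly flow series, and treating R2 as the squared Pearson correlation is our own convention, not necessarily the paper's:

    # NSE, PBIAS, RMSE, PCC (and R2 as PCC squared) for paired flow series
    import numpy as np

    def skill_scores(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        err = sim - obs
        nse = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
        pbias = 100.0 * err.sum() / obs.sum()     # |PBIAS| < 10 counts as very good here
        rmse = np.sqrt(np.mean(err**2))
        pcc = np.corrcoef(obs, sim)[0, 1]         # Pearson correlation
        return {"NSE": nse, "PBIAS": pbias, "RMSE": rmse, "PCC": pcc, "R2": pcc**2}

    scores = skill_scores([3.1, 5.2, 8.4, 2.0], [2.8, 5.6, 7.9, 2.3])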
APA, Harvard, Vancouver, ISO, and other styles
39

Keller, I. E., A. V. Kazantsev, D. S. Dudin, G. L. Permyakov, and D. N. Trushnikov. "MODELING OF THE DISTRIBUTION OF RESIDUAL POROSITY OF A METAL PRODUCT IN ADDITIVE MANUFACTURING WITH LAYER-BY-LAYER FORGING." Problems of Strength and Plasticity 84, no. 2 (2022): 247–58. http://dx.doi.org/10.32326/1814-9146-2022-84-2-247-258.

Full text
Abstract:
Porosity, which occurs in products made of aluminum-magnesium alloys synthesized by wire-arc surfacing, significantly worsens fatigue strength. The developed technology of hybrid additive manufacturing with layer-by-layer forging of each deposited layer of material is able to minimize the porosity of the product. To select rational parameters for the technological process, the evolution of the porosity distribution over the cross-section of a linear element after a single forging with a pneumatic hammer is investigated. A numerical model of the process is constructed in the LS-DYNA® package, where the Gurson–Tvergaard–Needleman relations are taken as the constitutive equations for plastic deformation of the material and the evolution of porosity. To determine the parameters of the Johnson–Cook hardening law, tests of the AMg6 alloy were performed over a wide range of strain rates. The impact of the pneumatic hammer in the numerical model was calibrated using an accelerometric and strain-gauged steel target, as well as by the distortions of the cross-section of a forged bar made of AMg6 alloy. Calculations with the model are compared with experimental data, for which two linear segments were made by additive manufacturing with and without layer-by-layer forging; from their cross-sections, slots processed for pore visualization were prepared. With this method of pressure treatment, the decrease in porosity in the boundary layer of the workpiece mainly depends on the accumulated plastic deformations and is only weakly sensitive to the particular stress state. The model makes it possible to predict the size of the area under the hammer head, depending on the processing mode, within which porosity is eliminated by forging. The use of such modes will ensure the manufacture of products without residual porosity in additive manufacturing by surfacing with layer-by-layer forging.
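For orientation, the Johnson–Cook hardening law whose parameters the authors calibrated has the generic form σ = (A + B εp^n)(1 + C ln(ε̇/ε̇0))(1 − T*^m), with T* the homologous temperature. A sketch with placeholder coefficients (not the calibrated AMg6 values from the paper):

    # Johnson-Cook flow stress: strain hardening * rate sensitivity * thermal softening
    import math

    def johnson_cook(eps_p, eps_rate, T, A, B, n, C, m,
                     eps_rate0=1.0, T_room=293.0, T_melt=900.0):
        T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
        return (A + B * eps_p**n) \
             * (1.0 + C * math.log(eps_rate / eps_rate0)) \
             * (1.0 - T_star**m)

    # Illustrative call with made-up coefficients (Pa, 1/s, K)
    sigma = johnson_cook(0.05, 100.0, 350.0, A=150e6, B=300e6, n=0.3, C=0.01, m=1.0)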
APA, Harvard, Vancouver, ISO, and other styles
40

Law, Charles J., Sage Crystian, Richard Teague, Karin I. Öberg, Evan A. Rich, Sean M. Andrews, Jaehan Bae, et al. "CO Line Emission Surfaces and Vertical Structure in Midinclination Protoplanetary Disks." Astrophysical Journal 932, no. 2 (June 1, 2022): 114. http://dx.doi.org/10.3847/1538-4357/ac6c02.

Full text
Abstract:
Abstract. High spatial resolution CO observations of midinclination (≈30°–75°) protoplanetary disks offer an opportunity to study the vertical distribution of CO emission and temperature. The asymmetry of line emission relative to the disk major axis allows for a direct mapping of the emission height above the midplane, and for optically thick, spatially resolved emission in LTE, the intensity is a measure of the local gas temperature. Our analysis of Atacama Large Millimeter/submillimeter Array archival data yields CO emission surfaces, dynamically constrained stellar host masses, and disk atmosphere gas temperatures for the disks around the following: HD 142666, MY Lup, V4046 Sgr, HD 100546, GW Lup, WaOph 6, DoAr 25, Sz 91, CI Tau, and DM Tau. These sources span a wide range in stellar masses (0.50–2.10 M⊙), ages (∼0.3–23 Myr), and CO gas radial emission extents (≈200–1000 au). This sample nearly triples the number of disks with mapped emission surfaces and confirms the wide diversity in line emitting heights (z/r ≈ 0.1 to ≳0.5) hinted at in previous studies. We compute the radial and vertical CO gas temperature distributions for each disk. A few disks show local temperature dips or enhancements, some of which correspond to dust substructures or the proposed locations of embedded planets. Several emission surfaces also show vertical substructures, which all align with rings and gaps in the millimeter dust. Combining our sample with literature sources, we find that CO line emitting heights weakly decline with stellar mass and gas temperature, which, despite large scatter, is consistent with simple scaling relations. We also observe a correlation between CO emission height and disk size, which is due to the flared structure of disks. Overall, CO emission surfaces trace ≈2–5× gas pressure scale heights (Hg) and could potentially be calibrated as empirical tracers of Hg.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Hao, Di Zhu, Shiqi Li, Robert A. Cheke, Sanyi Tang, and Weike Zhou. "Home quarantine or centralized quarantine? A mathematical modelling study on the COVID-19 epidemic in Guangzhou in 2021." Mathematical Biosciences and Engineering 19, no. 9 (2022): 9060–78. http://dx.doi.org/10.3934/mbe.2022421.

Full text
Abstract:
Several outbreaks of COVID-19 caused by imported cases have occurred in China following the successful control of the outbreak in early 2020. In order to avoid recurrences of such local outbreaks, it is important to devise an efficient control and prevention strategy. In this paper, we developed a stochastic discrete model of the COVID-19 epidemic in Guangzhou in 2021 to compare the effectiveness of centralized quarantine and compulsory home quarantine measures. The model was calibrated using the daily reported cases and newly centralized quarantined cases. The estimated results showed that the home quarantine measure increased the accuracy of contact tracing. The estimated basic reproduction number was lower than that in 2020, even with a much more transmissible variant, demonstrating the effectiveness of the vaccines and normalized control interventions. Sensitivity analysis indicated that a sufficiently implemented contact tracing and centralized quarantine strategy in the initial stage would contain the epidemic faster and with fewer infections, even with a weakly implemented compulsory home quarantine measure. However, if the accuracy of the contact tracing was insufficient, then early implementation of compulsory home quarantine with strict contact tracing, screening, and testing interventions on the key individuals would shorten the epidemic duration and reduce the total number of infected cases. In particular, 94 infections would have been avoided if the home quarantine measure had been implemented 3 days earlier, and an extra 190 infections would have arisen if it had been implemented 3 days later. The study suggested that more attention should be paid to the precise control strategy during the initial stage of the epidemic; otherwise the key group-based control measure should be implemented strictly.
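A minimal discrete-time stochastic sketch in the spirit of the model described here: new infections are drawn binomially each day, and a traced fraction q is moved into quarantine before becoming infectious. All structure and parameter values are illustrative, not the calibrated Guangzhou model:

    # Toy stochastic S-I-Q-R recursion with contact tracing into quarantine
    import numpy as np

    def simulate(days=60, N=18_700_000, I0=5, beta=0.4, q=0.6, gamma=0.2, seed=1):
        rng = np.random.default_rng(seed)
        S, I, Q, R = N - I0, I0, 0, 0
        history = []
        for _ in range(days):
            p_inf = 1.0 - np.exp(-beta * I / N)    # per-capita infection probability
            new_inf = rng.binomial(S, p_inf)
            traced = rng.binomial(new_inf, q)      # contact-traced -> quarantine
            recov = rng.binomial(I, gamma)
            S -= new_inf
            I += (new_inf - traced) - recov
            Q += traced
            R += recov
            history.append((S, I, Q, R))
        return history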
APA, Harvard, Vancouver, ISO, and other styles
42

Lesur, Vincent, Benoît Heumez, Abdelkader Telali, Xavier Lalanne, and Anatoly Soloviev. "Estimating error statistics for Chambon-la-Forêt observatory definitive data." Annales Geophysicae 35, no. 4 (August 17, 2017): 939–52. http://dx.doi.org/10.5194/angeo-35-939-2017.

Full text
Abstract:
Abstract. We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set up as a large, weakly non-linear inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparison of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. The obtained baselines therefore do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily-to-weekly measurement cadence recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and timescales of less than a day.
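The AR1 prior on the baselines has the simple recursion b_t = φ b_{t−1} + ε_t with ε_t ~ N(0, σ²). A minimal simulation sketch with illustrative parameter values (not the values fitted to CLF data):

    # Simulate an AR(1) baseline prior; phi near 1 gives slowly wandering baselines
    import numpy as np

    def ar1(n, phi=0.98, sigma=0.05, seed=0):
        rng = np.random.default_rng(seed)
        b = np.zeros(n)
        for t in range(1, n):
            b[t] = phi * b[t - 1] + rng.normal(0.0, sigma)
        return b

    baseline = ar1(365)   # one hypothetical year of daily baseline values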
APA, Harvard, Vancouver, ISO, and other styles
43

Arbogast, Alan F., Randall J. Schaetzl, Joseph P. Hupy, and Edward C. Hansen. "The Holland Paleosol: an informal pedostratigraphic unit in the coastal dunes of southeastern Lake Michigan." Canadian Journal of Earth Sciences 41, no. 11 (November 1, 2004): 1385–400. http://dx.doi.org/10.1139/e04-071.

Full text
Abstract:
A very prominent buried soil crops out in coastal sand dunes along an ~200 km section of the southeastern shore of Lake Michigan. This study is the first to investigate the character of this soil (informally described here as the Holland Paleosol), focusing on six sites from Indiana Dunes National Lakeshore north to Montague, Michigan. Most dunes in this region are large (>40 m high) and contain numerous buried soils that indicate periods of reduced sand supply and concomitant stabilization. Most of these soils are buried in the lower part of the dunes and are thin Entisols. The soil described here, in contrast, is relatively well developed, is buried in the upper part of many dunes, and formed by podzolization under forest vegetation. Radiocarbon dates indicate that this soil formed between ~3000 and 300 calibrated years BP. Pedons of the Holland Paleosol range in development from thick Entisols (Regosols) with A–Bw–BC–C horizonation to weakly developed Spodosols (Podzols) with A–E–Bs–Bw–BC–C profiles. Many profiles have overthickened and (or) stratified A horizons, indicative of slow and episodic burial. Differences in development are mainly due to paleolandscape position and variations in paleoclimate among the sites. The Holland Paleosol is significant because it represents a relatively long period of landscape stability in coastal dunes over a broad (200 km) area. This period of stability was concurrent with numerous fluctuations in Lake Michigan. Given the general sensitivity of coastal dunes to prehistoric lake-level fluctuations, the soil may reflect a time when the lake shore was farther west than it is today. The Holland Paleosol would probably qualify as a formal pedostratigraphic unit if it were buried by a formal lithostratigraphic or allostratigraphic unit.
APA, Harvard, Vancouver, ISO, and other styles
44

Hsu, N. C., R. Gautam, A. M. Sayer, C. Bettenhausen, C. Li, M. J. Jeong, S. C. Tsay, and B. N. Holben. "Global and regional trends of aerosol optical depth over land and ocean using SeaWiFS measurements from 1997 to 2010." Atmospheric Chemistry and Physics 12, no. 17 (September 10, 2012): 8037–53. http://dx.doi.org/10.5194/acp-12-8037-2012.

Full text
Abstract:
Abstract. Both sensor calibration and satellite retrieval algorithm play an important role in the ability to determine accurately long-term trends from satellite data. Owing to the unprecedented accuracy and long-term stability of its radiometric calibration, SeaWiFS measurements exhibit minimal uncertainty with respect to sensor calibration. In this study, we take advantage of this well-calibrated set of measurements by applying a newly-developed aerosol optical depth (AOD) retrieval algorithm over land and ocean to investigate the distribution of AOD, and to identify emerging patterns and trends in global and regional aerosol loading during its 13-yr mission. Our correlation analysis between climatic indices (such as ENSO) and AOD suggests strong relationships for Saharan dust export as well as biomass-burning activity in the tropics, associated with large-scale feedbacks. The results also indicate that the averaged AOD trend over global ocean is weakly positive from 1998 to 2010 and comparable to that observed by MODIS but opposite in sign to that observed by AVHRR during overlapping years. On regional scales, distinct tendencies are found for different regions associated with natural and anthropogenic aerosol emission and transport. For example, large upward trends are found over the Arabian Peninsula that indicate a strengthening of the seasonal cycle of dust emission and transport processes over the whole region as well as over downwind oceanic regions. In contrast, a negative-neutral tendency is observed over the desert/arid Saharan region as well as in the associated dust outflow over the north Atlantic. Additionally, we found decreasing trends over the eastern US and Europe, and increasing trends over countries such as China and India that are experiencing rapid economic development. In general, these results are consistent with those derived from ground-based AERONET measurements.
APA, Harvard, Vancouver, ISO, and other styles
45

Buttignol, Thomaz E. T., Matteo Colombo, and Marco di Prisco. "Load Induced Thermal Strains in Steel Fibre Reinforced Concrete Subjected to Uniaxial Compression." Key Engineering Materials 711 (September 2016): 525–32. http://dx.doi.org/10.4028/www.scientific.net/kem.711.525.

Full text
Abstract:
The effect of fibre reinforcement on Load Induced Thermal Strains (LITS) has received little attention up to now, and creep has become a key research topic only in the last few years. A semi-empirical model is presented that takes into account both the thermo-mechanical damage associated with coarse aggregates and the thermo-chemical damage induced in the matrix, calibrated on the basis of the main results on plain concrete available in the scientific literature. Tests in uniaxial compression on 11-year-old Fibre Reinforced Concrete (FRC) cylinders have been investigated and compared with the model to highlight fibre effects, if any. The uniaxial compressive strength of the SFRC at 28 days was 75 MPa; after 11 years, the specimens showed a compressive strength exceeding 110 MPa. A strong increase of SLS residual strength was observed in post-cracking tension due to the long aging, while ULS residual strengths increased only weakly. The cylindrical specimens were exposed to a maximum temperature of 200°C or 400°C and loaded with two load thresholds corresponding to 20% and 40% of the compressive strength measured at 28 days of aging, i.e. about 12.5% and 25% of the strength of the 11-year-old specimens. Two paths were investigated: pre-heated specimens brought up to 200°C or 400°C and then loaded with a compression stress equal to 0.2fc,28 or 0.4fc,28; and pre-loaded specimens brought up to 0.2fc,28 or 0.4fc,28 and then heated up to 200°C or 400°C. The duration of each test did not exceed 12 hours. Two main fibre effects were observed: a significant reduction of irreversible strains when the specimens were loaded and then heated and cooled, and a different evolution of LITS passing from 200°C to 400°C, characterized by a significant reduction of the expected deformation.
APA, Harvard, Vancouver, ISO, and other styles
46

Cheng, Ziyu, Xianmin Wang, and Jing Li. "ProMatch: Semi-Supervised Learning with Prototype Consistency." Mathematics 11, no. 16 (August 16, 2023): 3537. http://dx.doi.org/10.3390/math11163537.

Full text
Abstract:
Recent state-of-the-art semi-supervised learning (SSL) methods have made significant advancements by combining consistency regularization and pseudo-labeling in a joint learning paradigm. The core concept of these methods is to identify consistency targets (pseudo-labels) by selecting predicted distributions with high confidence from weakly augmented unlabeled samples. However, they often face the problem of erroneous yet highly confident pseudo-labels, which can lead to noisy training. This issue arises for two main reasons: (1) when the model is poorly calibrated, the prediction for a single sample may be overconfident and incorrect, and (2) propagating pseudo-labels from unlabeled samples can result in error accumulation due to the margin between the pseudo-label and the ground-truth label. To address this problem, we propose a novel consistency criterion called Prototype Consistency (PC) to improve the reliability of pseudo-labeling by leveraging the prototype similarities between labeled and unlabeled samples. First, we instantiate semantic-prototypes (centers of embeddings) and prediction-prototypes (centers of predictions) for each category using memory buffers that store the features of labeled examples. Second, for a given unlabeled sample, we determine the most similar semantic-prototype and prediction-prototype by assessing the similarities between the features of the unlabeled sample and the prototypes of the labeled samples. Finally, instead of using the prediction of the unlabeled sample as the pseudo-label, we select the most similar prediction-prototype as the consistency target, as long as the predicted category of the most similar prediction-prototype, the ground-truth category of the most similar semantic-prototype, and the ground-truth category of the most similar prediction-prototype are equivalent. By combining the PC approach with the techniques developed by the MixMatch family, our proposed ProMatch framework demonstrates significant performance improvements compared to previous algorithms on datasets such as CIFAR-10, CIFAR-100, SVHN, and Mini-ImageNet.
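A sketch of the PC selection step as described in this abstract (not the authors' released code); the prototype arrays, label arrays, and the choice of cosine similarity are our own assumptions:

    # Select a consistency target for one unlabeled sample from class prototypes.
    # sem_protos, pred_protos: (C, d) arrays of per-class prototype features and
    # predictions; proto_labels: ground-truth class of each prototype row.
    import numpy as np

    def pc_target(feat_u, pred_u, sem_protos, pred_protos, proto_labels):
        sim_sem = sem_protos @ feat_u / (np.linalg.norm(sem_protos, axis=1)
                                         * np.linalg.norm(feat_u) + 1e-8)
        sim_prd = pred_protos @ pred_u / (np.linalg.norm(pred_protos, axis=1)
                                          * np.linalg.norm(pred_u) + 1e-8)
        k_sem, k_prd = np.argmax(sim_sem), np.argmax(sim_prd)
        # Accept the prediction-prototype as the target only when the three
        # category votes named in the abstract agree.
        if proto_labels[k_sem] == proto_labels[k_prd] == np.argmax(pred_protos[k_prd]):
            return pred_protos[k_prd]
        return None   # no reliable target: skip this sample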
APA, Harvard, Vancouver, ISO, and other styles
47

SVETLOV, N. M. "SYSTEM DYNAMICS MODEL OF REGIONAL GRAIN MARKETS." Izvestiâ Timirâzevskoj selʹskohozâjstvennoj akademii, no. 3 (2021): 88–105. http://dx.doi.org/10.26897/0021-342x-2021-3-88-105.

Full text
Abstract:
The paper develops a methodology for modeling regional markets for field crops, taking grain markets as a case. It proposes and tests a new combination of model assumptions that solves the problem of reproducing the actually observed stability of product supply to the consumer in a context of predominantly market regulators. A basic model of grain market functioning is developed, describing in continuous time the chain of commodity flows linking producers and consumers. Interregional transportation is taken into account. This is a prototype of a future model, which should be calibrated on actual data and include markets for grain processing products, as well as the possibility of simulating economic policy instruments. The model is based on the principle of market fundamentalism when modeling the volume of grain production, making it depend solely on prices. The same principle, with reservations, is applied in the modeling of exports and interregional transportation. Consumption modeling combines the opposing principles of market fundamentalism, which guides consumers, and the market skepticism of resellers, the counterparties of consumers. Resellers, when deciding on the volume of supplies, neglect prices: they are guided by the size and dynamics of the grain stock. Computer simulations show that the set of assumptions underlying the model provides, with an appropriate selection of parameters, the necessary dynamic properties: in all regions, relative stability of consumption and prices during the season is secured, and short-term fluctuations in domestic prices effectively direct grain transportation to regions in need. Moreover, the volume of grain exports correlates only weakly with the domestic price of grain, which is typical of the real Russian grain market. The ultimate aim of the study, of which the algorithm presented in the paper is one stage, is to create tools for analyzing the interaction effects of economic policy measures applied in different sections of the production and processing chains of various field crops over periods shorter than a year.
APA, Harvard, Vancouver, ISO, and other styles
48

Al-shawarby, Sherine, and Mai El Mossallamy. "Monetary-fiscal policies interactions and optimal rules in Egypt." Review of Economics and Political Science 4, no. 2 (June 5, 2019): 138–57. http://dx.doi.org/10.1108/reps-03-2019-0033.

Full text
Abstract:
Purpose This paper aims to estimate a New Keynesian small open economy dynamic stochastic general equilibrium (DSGE) model for Egypt using Bayesian techniques and data for the period FY2004/2005:Q1-FY2015/2016:Q4 to assess monetary and fiscal policy interactions and their impact on economic stabilization. Outcomes of monetary and fiscal authority commitment to policy instruments, interest rate, government spending and taxes, are evaluated using Taylor-type and optimal simple rules. Design/methodology/approach The study extends the stylized micro-founded small open economy New Keynesian DSGE model, proposed by Lubik and Schorfheide (2007), by explicitly introducing fiscal policy behavior into the model (Fragetta and Kirsanova, 2010 and Çebi, 2011). The model is calibrated using quarterly data for Egypt on key macroeconomic variables during FY2004/2005:Q1-FY2015/2016:Q4, and Bayesian methods are used in estimation. Findings The results show that monetary and fiscal policy instruments in Egypt contribute to economic stability through their effects on inflation, output and debt stock. The monetary policy Taylor rule estimates reveal that the Central Bank of Egypt (CBE) attaches significant importance to anti-inflationary policy and (to a lesser extent) to output targeting, but responds weakly to nominal exchange rate variations. CBE decisions are significantly influenced by interest rate smoothing. Egyptian fiscal policy has an important role in output and government debt stabilization. Additionally, the fiscal authority chooses pro-cyclical government spending and counter-cyclical tax policies for output stabilization. Again, past values of the fiscal instruments are influential in the evolution of the future fiscal policy-making process. Originality/value Few studies have examined the interaction between monetary and fiscal policy in Egypt within a unified framework. The present paper integrates monetary and fiscal policy analysis within a unified dynamic general equilibrium open economy rational expectations framework. Without such a framework, it would not be easy to jointly analyze monetary and fiscal transmission mechanisms for output, inflation and debt, to contrast the outcome of the monetary and fiscal authorities' commitment to a simple Taylor instrument rule with optimal policy outcomes, or to assess the behavior of monetary and fiscal agents in macroeconomic stabilization within an active/passive policy decisions framework.
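The Taylor-type rule with interest rate smoothing referred to here has the generic form i_t = ρ i_{t−1} + (1 − ρ)(ψ_π π_t + ψ_y y_t + ψ_e Δe_t). A sketch with placeholder coefficients (not the Bayesian estimates from the paper):

    # One step of a smoothed Taylor-type rule: the policy rate is a weighted
    # average of its own lag and a target that responds to inflation, the
    # output gap, and nominal depreciation.
    def taylor_rate(i_prev, infl, output_gap, depreciation,
                    rho=0.8, psi_pi=1.5, psi_y=0.2, psi_e=0.05):
        target = psi_pi * infl + psi_y * output_gap + psi_e * depreciation
        return rho * i_prev + (1.0 - rho) * target

    # e.g. taylor_rate(i_prev=0.10, infl=0.08, output_gap=0.01, depreciation=0.02)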
APA, Harvard, Vancouver, ISO, and other styles
49

GRUE, JOHN, ATLE JENSEN, PER-OLAV RUSÅS, and J. KRISTIAN SVEEN. "Properties of large-amplitude internal waves." Journal of Fluid Mechanics 380 (February 10, 1999): 257–78. http://dx.doi.org/10.1017/s0022112098003528.

Full text
Abstract:
Properties of solitary waves propagating in a two-layer fluid are investigated by comparing experiments and theory. In the experiments the velocity field induced by the waves, the propagation speed, and the wave shape are quite accurately measured using particle tracking velocimetry (PTV) and image analysis. The experiments are calibrated with a layer of fresh water above a layer of brine. The depth of the brine is 4.13 times the depth of the fresh water. Theoretical results are given for this depth ratio and, in addition, in a few examples for larger ratios, up to 100:1. The wave amplitudes in the experiments range from a small value up to almost maximal amplitude. The thickness of the pycnocline is in the range of approximately 0.13–0.26 times the depth of the thinner layer. Solitary waves are generated by releasing a volume of fresh water trapped behind a gate. By careful adjustment of the length and depth of the initial volume we always generate a single solitary wave, even for very large volumes. The experiments are very repeatable and the recording technique is very accurate. The error in the measured velocities, non-dimensionalized by the linear long-wave speed, is less than about 7–8% in all cases. The experiments are compared with a fully nonlinear interface model and weakly nonlinear Korteweg–de Vries (KdV) theory. The fully nonlinear model compares excellently with the experiments for all quantities measured. This is true for the whole amplitude range, even for a pycnocline which is not very sharp. The KdV theory is relevant for small wave amplitudes but exhibits a systematic deviation from the experiments and the fully nonlinear theory for wave amplitudes exceeding about 0.4 times the depth of the thinner layer. In the experiments with the largest waves, rolls develop behind the maximal displacement of the wave due to the Kelvin–Helmholtz instability. The recordings enable evaluation of the local Richardson number due to the flow in the pycnocline. We find that stability or instability of the flow occurs in approximate agreement with the theorem of Miles and Howard.
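The weakly nonlinear KdV comparison uses the classic solitary-wave profile η(x, t) = a sech²((x − ct)/L), where the amplitude-dependent speed c and width L follow from the two-layer KdV coefficients. The sketch below takes c and L as illustrative inputs rather than deriving them from the layer depths and densities:

    # Evaluate a KdV sech^2 solitary-wave interface displacement
    import numpy as np

    def kdv_profile(x, t, a, c, L):
        return a / np.cosh((x - c * t) / L) ** 2

    # Illustrative values only: amplitude a, speed c, width L in lab units
    x = np.linspace(-10.0, 10.0, 401)
    eta = kdv_profile(x, t=0.0, a=0.3, c=1.1, L=2.0)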
APA, Harvard, Vancouver, ISO, and other styles
50

Štancar, Ž., K. K. Kirov, F. Auriemma, H. T. Kim, M. Poradziński, R. Sharma, R. Lorenzini, et al. "Overview of interpretive modelling of fusion performance in JET DTE2 discharges with TRANSP." Nuclear Fusion 63, no. 12 (November 6, 2023): 126058. http://dx.doi.org/10.1088/1741-4326/ad0310.

Full text
Abstract:
Abstract. In this paper we present an overview of interpretive modelling of a database of JET-ILW 2021 D-T discharges using the TRANSP code. The main aim is to assess our capability of computationally reproducing the fusion performance of various D-T plasma scenarios using different external heating schemes and D-T mixtures, and to understand the mechanisms driving performance. We find that interpretive simulations confirm a general power-law relationship between increasing external heating power and fusion output, which is supported by absolutely calibrated neutron yield measurements. A comparison of measured and computed D-T neutron rates shows that the discrepancy of the calculations depends on the absolute neutron yield. The calculations are found to agree well with measurements for higher-performing discharges with external heating power above ∼20 MW, while low-neutron shots display an average discrepancy of around +40% compared to measured neutron yields. A similar trend is found for the ratio between thermal and beam-target fusion, where larger discrepancies are seen in shots with dominant beam-driven performance. We compare the observations to studies of JET-ILW D discharges, finding that on average the fusion performance is well modelled over a range of heating powers, although an increased unsystematic deviation for lower-performing shots is observed. The ratio between thermal and beam-induced D-T fusion is found to increase weakly with growing external heating power, with a maximum value of ≳ 1 achieved in a baseline scenario experiment. An evaluation of the computational uncertainty of the fusion power shows a strong dependence on the plasma scenario type and fusion drive characteristics, varying between ±25% and ±35%. D-T fusion alpha simulations show that the ratio between volume-integrated electron and ion heating from alphas is ≲ 10 for the majority of the analysed discharges. Alphas are computed to contribute between ∼15% and 40% of the total electron heating in the core of the highest-performing D-T discharges. An alternative workflow to TRANSP was employed to model the JET D-T plasmas with the highest fusion yield and a dominant non-thermal fusion component, because of the scenario's use of fundamental radio-frequency heating of a large minority; this heating is calculated to have contributed ∼10% of the total fusion power.
APA, Harvard, Vancouver, ISO, and other styles