Journal articles on the topic 'Average Threshold Crossing'

Consult the top 50 journal articles for your research on the topic 'Average Threshold Crossing.'

1

Xue, Xiao, Maximilian Russ, Nodar Samkharadze, Brennan Undseth, Amir Sammak, Giordano Scappucci, and Lieven M. K. Vandersypen. "Quantum logic with spin qubits crossing the surface code threshold." Nature 601, no. 7893 (January 19, 2022): 343–47. http://dx.doi.org/10.1038/s41586-021-04273-w.

Abstract:
High-fidelity control of quantum bits is paramount for the reliable execution of quantum algorithms and for achieving fault tolerance—the ability to correct errors faster than they occur [1]. The central requirement for fault tolerance is expressed in terms of an error threshold. Whereas the actual threshold depends on many details, a common target is the approximately 1% error threshold of the well-known surface code [2,3]. Reaching two-qubit gate fidelities above 99% has been a long-standing major goal for semiconductor spin qubits. These qubits are promising for scaling, as they can leverage advanced semiconductor technology [4]. Here we report a spin-based quantum processor in silicon with single-qubit and two-qubit gate fidelities, all of which are above 99.5%, extracted from gate-set tomography. The average single-qubit gate fidelities remain above 99% when including crosstalk and idling errors on the neighbouring qubit. Using this high-fidelity gate set, we execute the demanding task of calculating molecular ground-state energies using a variational quantum eigensolver algorithm [5]. Having surpassed the 99% barrier for the two-qubit gate fidelity, semiconductor qubits are well positioned on the path to fault tolerance and to possible applications in the era of noisy intermediate-scale quantum devices.
2

Yanyo, L. C., and F. N. Kelley. "Effect of Chain Length Distribution on the Tearing Energy of Silicone Elastomers." Rubber Chemistry and Technology 60, no. 1 (March 1, 1987): 78–88. http://dx.doi.org/10.5254/1.3536123.

Abstract:
The tearing energies of two end-linked PDMS networks with the same average molecular weight between crosslinks were compared: one with a monomodal distribution of chain lengths and one a bimodal mixture of very short and rather long chains. The bimodal network exhibited higher tearing strengths than the monomodal network under the same experimental conditions; at threshold conditions, its tearing energy was 70% higher than the threshold strength of the monomodal network. A rederivation of the Lake and Thomas theory for the threshold tearing strength that includes a bimodal probability distribution of chain lengths is shown to predict the observed behavior. The strength increase of the bimodal networks is attributed to the long chains, which increase the energy required for fracture, while the large number of short chains maintains the same number of chains crossing the fracture plane as in a monomodal network of the same crosslink density.
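The Lake–Thomas argument invoked in this abstract admits a compact statement; the notation below (Σ, n, U, φ) is generic textbook shorthand, not the authors' own, and the bimodal extension is a sketch of the idea rather than the paper's rederivation.

```latex
% Lake-Thomas threshold tearing energy: every bond of a chain crossing
% the fracture plane must be loaded to near-dissociation before rupture,
\[
  T_0 \;\approx\; \Sigma \, n \, U ,
\]
% where \Sigma is the areal density of chains crossing the plane,
% n the number of bonds per chain, and U the bond rupture energy.
% For a bimodal network with short chains (n_s) and long chains (n_l)
% in number fractions \phi and 1-\phi, averaging n at fixed \Sigma gives
\[
  T_0^{\mathrm{bi}} \;\approx\; \Sigma \bigl[ \phi\, n_s + (1-\phi)\, n_l \bigr] U ,
\]
% so a modest fraction of long chains raises T_0 while the many short
% chains keep \Sigma (set by the crosslink density) unchanged.
```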
3

Yacoub, M. D., C. R. C. M. da Silva, and J. E. Vargas B. "Level crossing rate and average fade duration for pure selection and threshold selection diversity-combining systems." International Journal of Communication Systems 14, no. 10 (2001): 897–907. http://dx.doi.org/10.1002/dac.514.

4

Mascio, Paola Di, and Laura Moretti. "Hourly Capacity of a Two Crossing Runway Airport." Infrastructures 5, no. 12 (December 4, 2020): 111. http://dx.doi.org/10.3390/infrastructures5120111.

Abstract:
Interest in airport capacity has grown in recent years at the international level, because maximizing capacity ensures the best performance of the infrastructure. However, infrastructure, procedural, and human-factor constraints must be considered to ensure a safe and regular flow of flights. This paper analyzes the capacity of an airport with two crossing runways. Fast-time simulation was used to model the baseline scenario (current traffic volume and composition) and six operational scenarios; in each scenario, traffic was increased up to double the current volume. The results, in terms of average delay and throughput, were analyzed to identify the best-performing operational layout and the one most suitable for managing increasing hourly movements within a threshold delay of 10 min. The results refer to the specific layout examined, and all input data were provided by the airport management body; the results are reliable, and the approach could be applied to other airports.
5

Chesher, Andrew, and Adam M. Rosen. "What Do Instrumental Variable Models Deliver with Discrete Dependent Variables?" American Economic Review 103, no. 3 (May 1, 2013): 557–62. http://dx.doi.org/10.1257/aer.103.3.557.

Abstract:
We compare nonparametric instrumental variables (IV) models with linear models and 2SLS methods when dependent variables are discrete. A 2SLS method can deliver a consistent estimator of a Local Average Treatment Effect (LATE) but is not informative about other treatment effect parameters. The IV models set identify a range of interesting structural and treatment effect parameters. We give set identification results for a counterfactual probability and an Average Treatment Effect in an IV binary threshold-crossing model. We illustrate using data on female employment and family size (employed by Joshua Angrist and William Evans (1998)) and compare with their LATE estimates.
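For orientation, a binary threshold-crossing model with a binary endogenous treatment D and instrument Z can be written as follows; the notation is generic and not taken from the paper.

```latex
% Latent-index (threshold-crossing) model for a binary outcome:
\[
  Y \;=\; \mathbf{1}\{\, g(X) + \beta D \;\ge\; U \,\}, \qquad U \;\perp\; Z \mid X ,
\]
% with D endogenous. The average treatment effect at X = x,
\[
  \mathrm{ATE}(x) \;=\; \Pr\{\, g(x) + \beta \ge U \,\} \;-\; \Pr\{\, g(x) \ge U \,\},
\]
% is generally only set-identified: the instrument restricts, but does
% not pin down, the distribution of the latent variable U.
```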
6

Steffen, Will, Johan Rockström, Katherine Richardson, Timothy M. Lenton, Carl Folke, Diana Liverman, Colin P. Summerhayes, et al. "Trajectories of the Earth System in the Anthropocene." Proceedings of the National Academy of Sciences 115, no. 33 (August 6, 2018): 8252–59. http://dx.doi.org/10.1073/pnas.1810141115.

Abstract:
We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be. If the threshold is crossed, the resulting trajectory would likely cause serious disruptions to ecosystems, society, and economies. Collective human action is required to steer the Earth System away from a potential threshold and stabilize it in a habitable interglacial-like state. Such action entails stewardship of the entire Earth System—biosphere, climate, and societies—and could include decarbonization of the global economy, enhancement of biosphere carbon sinks, behavioral changes, technological innovations, new governance arrangements, and transformed social values.
7

Shi, Xue Chao, Zhen Zhong Sun, and Sheng Lin Lu. "A Novel Method of Sub-Pixel Linear Edge Detection Based on First Derivative Approach." Advanced Materials Research 139-141 (October 2010): 2107–11. http://dx.doi.org/10.4028/www.scientific.net/amr.139-141.2107.

Abstract:
In this paper, a novel algorithm is proposed to detect linear edges. The image gradient is acquired by Sobel or Prewitt filters, and logical addition is applied to enhance image contrast. A statistical method is employed to process the gradient data: the gradient is projected in the horizontal and vertical directions to compute the average gradient value. Multi-level B-spline interpolation is employed to smooth the gradient data. Finally, edge coordinates are computed precisely from the number and spacing of extreme points. Experimental results demonstrate the validity of the algorithm, whose precision and accuracy reach the sub-pixel level. The proposed approach combines the merits of the zero-crossing and threshold methods, and it is robust, convenient, and efficient for detecting linear edges in industrial environments.
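A minimal sketch of the projection idea described above, assuming a near-vertical edge: Sobel gradient, column-wise averaging of the gradient field into a 1-D profile, B-spline smoothing, and a sub-pixel extremum. Function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage, interpolate

def subpixel_vertical_edge(image: np.ndarray) -> float:
    """Locate a near-vertical linear edge with sub-pixel precision."""
    # Horizontal Sobel gradient; averaging down the columns projects the
    # 2-D gradient field onto a 1-D profile across the edge.
    gx = ndimage.sobel(image.astype(float), axis=1)
    profile = np.abs(gx).mean(axis=0)

    # Cubic B-spline smoothing so the extremum is read off a continuous
    # curve rather than the pixel grid.
    x = np.arange(profile.size)
    spline = interpolate.UnivariateSpline(x, profile, k=3, s=float(profile.size))

    # Dense resampling: the argmax of the smoothed curve gives the
    # sub-pixel column of the edge.
    xf = np.linspace(0, profile.size - 1, profile.size * 100)
    return float(xf[np.argmax(spline(xf))])
```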
8

Jiang, Nan, and Ting Liu. "An Improved Speech Segmentation and Clustering Algorithm Based on SOM and K-Means." Mathematical Problems in Engineering 2020 (September 12, 2020): 1–19. http://dx.doi.org/10.1155/2020/3608286.

Abstract:
This paper studies the segmentation and clustering of speaker speech. To improve the accuracy of speech endpoint detection, the traditional double-threshold short-time average zero-crossing rate is replaced by a better spectral centroid feature, the local maxima of the statistical feature-sequence histogram are used to select the threshold, and a new speech endpoint detection algorithm is proposed. Compared with the traditional double-threshold algorithm, it effectively improves detection accuracy and noise robustness at low SNR. The conventional k-means clustering algorithm requires the number of clusters in advance and is strongly affected by the choice of initial cluster centers, while the self-organizing neural network algorithm converges slowly and cannot provide accurate clustering information. An improved k-means speaker clustering algorithm based on a self-organizing neural network is therefore proposed: the number of clusters is predicted from the winning pattern of the competitive neurons in the trained network, and the neuron weights are used as the initial cluster centers of the k-means algorithm. Experimental results on multiperson mixed-speech segmentation show that the proposed algorithm effectively improves the accuracy of speech clustering and makes up for the shortcomings of the k-means and self-organizing neural network algorithms.
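For concreteness, the two frame-level features contrasted above (the short-time average zero-crossing rate being replaced, and the spectral centroid replacing it) can be sketched as below; frame sizes and the histogram rule for picking the threshold are placeholder assumptions, not the paper's exact settings.

```python
import numpy as np

def frame_features(signal, sr, frame=400, hop=160):
    """Short-time average zero-crossing rate and spectral centroid."""
    zcr, centroid = [], []
    for start in range(0, len(signal) - frame, hop):
        w = signal[start:start + frame]
        # Zero-crossing rate: fraction of adjacent samples with opposite sign.
        signs = np.signbit(w).astype(np.int8)
        zcr.append(np.abs(np.diff(signs)).mean())
        # Spectral centroid: magnitude-weighted mean frequency of the frame.
        mag = np.abs(np.fft.rfft(w))
        freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
        centroid.append(float((freqs * mag).sum() / (mag.sum() + 1e-12)))
    return np.array(zcr), np.array(centroid)

def histogram_threshold(feature, bins=50):
    """Threshold from local maxima of the feature histogram: midpoint
    between the two most populated bins (a simple stand-in for the
    local-maxima selection described in the abstract)."""
    hist, edges = np.histogram(feature, bins=bins)
    peaks = np.sort(np.argsort(hist)[-2:])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return 0.5 * (centers[peaks[0]] + centers[peaks[1]])
```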
9

Chakravarthy, Murali, Sharmila Sengupta, Sanjeev Singh, Neeta Munshi, Tency Jose, and Vatsal Chhaya. "Incidence Rates of Healthcare-associated Infections in Hospitals: A Multicenter, Pooled Patient Data Analysis in India." International Journal of Research Foundation of Hospital and Healthcare Administration 3, no. 2 (2015): 86–90. http://dx.doi.org/10.5005/jp-journals-10035-1042.

Abstract:
Aim: The aim of this study was to collect multicenter data on healthcare-associated infections (HAIs) to assess the infection control scenario in India in the context of the CDC/NHSN and INICC databases. Materials and methods: Four National Accreditation Board for Hospitals and Health Care Providers (NABH) accredited hospitals were selected on a random basis, and raw data on healthcare-associated infections (number of days and number of infections in all intensive care patients) were obtained as per the CDC-NHSN definitions and formulas. Three major device-related infections were considered for analysis, based on the prevalence of HAIs and discussions with subject matter experts. Nodal champions from each hospital were trained, and a common data collection sheet for surveillance in accordance with CDC-NHSN was created. The pooled means for HAI rates and the average of the pooled means were calculated using data from the four hospitals and compared with CDC/NHSN and International Nosocomial Infection Control Consortium (INICC) percentiles of HAI rates. Results: The Indian pooled mean HAI rates for all infections were above the CDC/NHSN percentile threshold but below the INICC percentile. Ventilator-associated pneumonia (VAP) was of prime concern, crossing the P90 line of the CDC/NHSN threshold; no HAI rate was within the P25 limit. Conclusion: Indian HAI rates were higher when mapped against CDC thresholds, which underscores the need for adherence to more standardized, evidence-based protocols to bring HAIs within CDC/NHSN thresholds. However, the four hospitals had better HAI rates than the pooled INICC database. How to cite this article: Singh S, Chakravarthy M, Sengupta S, Munshi N, Jose T, Chhaya V. Incidence Rates of Healthcare-associated Infections in Hospitals: A Multicenter, Pooled Patient Data Analysis in India. Int J Res Foundation Hosp Healthc Adm 2015;3(2):86-90.
10

Lu, Naiwei, Mohammad Noori, and Yang Liu. "First-passage probability of the deflection of a cable-stayed bridge under long-term site-specific traffic loading." Advances in Mechanical Engineering 9, no. 1 (January 2017): 168781401668727. http://dx.doi.org/10.1177/1687814016687271.

Abstract:
Long-span bridges suffer from higher traffic loads and the simultaneous presence of multiple vehicles, which, in conjunction with steady traffic growth, may pose a threat to bridge safety. This study presents a methodology for evaluating the first-passage probability of long-span bridges subject to stochastic heavy traffic loading. Initially, the stochastic heavy traffic loading was simulated based on long-term weigh-in-motion measurements of a highway bridge in China. A computational framework was presented integrating Rice's level-crossing theory and the first-passage criterion, and its effectiveness was demonstrated through a case study of a cable-stayed bridge. Numerical results show that upper-tail fitting of the up-crossing rate is an appropriate description of the probability characteristics of the extreme traffic load effects of long-span bridges. Growth in average daily truck traffic increases the probability of exceedance due to an intensive heavy traffic flow and results in a higher first-passage probability, but this increasing trend weakens as the traffic volume continues to grow. Since sustained growth of the gross vehicle weight has a constant impact on the probability of failure, setting a reasonable threshold overload ratio is an effective traffic-management scheme to ensure bridge serviceability.
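The two standard ingredients such a framework combines are Rice's level-crossing rate and the Poisson approximation to the first-passage probability; in generic notation:

```latex
% Rice's formula: mean up-crossing rate of level b by a stationary
% process X(t), with f the joint density of (X, \dot{X}):
\[
  \nu^{+}(b) \;=\; \int_{0}^{\infty} \dot{x}\, f_{X\dot{X}}(b, \dot{x})\, \mathrm{d}\dot{x} .
\]
% Treating up-crossings of a high threshold as a Poisson stream, the
% first-passage probability over a reference period T is approximately
\[
  P_f(T) \;\approx\; 1 - \exp\!\bigl[ -\nu^{+}(b)\, T \bigr] .
\]
```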
11

Stefanov, Lachezar G., and Svilen E. Neykov. "Determination of Anaerobic Threshold by a new approach through the incremental exercise using proportion in HR and Ve changes in rowers." Pedagogy of Physical Culture and Sports 25, no. 2 (April 30, 2021): 89–97. http://dx.doi.org/10.15561/26649837.2021.0203.

Abstract:
Background and Study Aim: The aim of this research is to create a non-invasive approach, easy to apply in practice, for determining the anaerobic threshold based only on measurement of pulmonary ventilation and heart rate. It uses the proportions by which these variables change during a maximal incremental test. Material and Methods: Twenty athletes from the national rowing team of Bulgaria, with an average age of 17.5 years, were tested. Participants performed a one-time graded incremental exercise test to exhaustion on a rowing ergometer. The proposed approach detects the power at which one curve (obtained from the differences in the percentage changes of heart rate and pulmonary ventilation) crosses the other (obtained from pulmonary ventilation in percentages); the crossing point corresponds to the anaerobic threshold. This approach was compared with two methods that determine the lactate threshold by blood lactate measurement. Results: The Shapiro-Wilk test indicated that the heart-rate samples of the compared methods have a normal or close-to-normal distribution. Fisher's F-test demonstrated that the standard deviations of the samples do not differ significantly pairwise at α = 0.05. The Bland-Altman analysis showed that 95% of all measurement data points lie within the confidence interval limits for each comparison between the new approach and the two methods. Conclusions: The proposed approach is non-invasive and can easily be applied in field conditions without gas-analysing devices. In addition, it is reliable, reproducible, and comparable to the accepted "gold standard" methods for determining the anaerobic threshold at the 95% level of statistical significance.
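The crossing rule described above reduces to locating where the (HR% − VE%) difference curve meets the VE% curve over the incremental workloads; a minimal sketch, with array names and the linear interpolation being illustrative assumptions:

```python
import numpy as np

def anaerobic_threshold_power(power, hr_pct, ve_pct):
    """Power at which the (HR% - VE%) curve crosses the VE% curve,
    following the crossing-point rule described in the abstract."""
    gap = (hr_pct - ve_pct) - ve_pct          # sign change marks the crossing
    idx = np.flatnonzero(np.diff(np.sign(gap)))
    if idx.size == 0:
        raise ValueError("curves do not cross over this power range")
    i = idx[0]
    # Linear interpolation between the two bracketing workloads.
    t = gap[i] / (gap[i] - gap[i + 1])
    return power[i] + t * (power[i + 1] - power[i])
```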
12

Kline, Patrick, and Christopher R. Walters. "On Heckits, LATE, and Numerical Equivalence." Econometrica 87, no. 2 (2019): 677–96. http://dx.doi.org/10.3982/ecta15444.

Abstract:
Structural econometric methods are often criticized for being sensitive to functional form assumptions. We study parametric estimators of the local average treatment effect (LATE) derived from a widely used class of latent threshold crossing models and show they yield LATE estimates algebraically equivalent to the instrumental variables (IV) estimator. Our leading example is Heckman's (1979) two‐step (“Heckit”) control function estimator which, with two‐sided non‐compliance, can be used to compute estimates of a variety of causal parameters. Equivalence with IV is established for a semiparametric family of control function estimators and shown to hold at interior solutions for a class of maximum likelihood estimators. Our results suggest differences between structural and IV estimates often stem from disagreements about the target parameter rather than from functional form assumptions per se. In cases where equivalence fails, reporting structural estimates of LATE alongside IV provides a simple means of assessing the credibility of structural extrapolation exercises.
13

Yildiz, Neşe. "ESTIMATION OF BINARY CHOICE MODELS WITH LINEAR INDEX AND DUMMY ENDOGENOUS VARIABLES." Econometric Theory 29, no. 2 (March 28, 2013): 354–92. http://dx.doi.org/10.1017/s0266466612000436.

Abstract:
This paper presents computationally simple estimators for the index coefficients in a binary choice model with a binary endogenous regressor that do not rely on distributional assumptions or large-support conditions and that are root-n consistent and asymptotically normal. We develop a multistep method for estimating the parameters in a triangular, linear index, threshold-crossing model with two equations. Such an econometric model might be used in testing for moral hazard while allowing for asymmetric information in insurance markets. In outlining this new estimation method, two contributions are made. The first is a novel "matching" estimator for the coefficient on the binary endogenous variable in the outcome equation. Second, in order to establish the asymptotic properties of the proposed estimators for the coefficients of the exogenous regressors in the outcome equation, the results of Powell, Stock, and Stoker (1989, Econometrica 57, 1403–1430) are extended to cover the case where the average derivative estimation requires a first-step semiparametric procedure.
14

Wang, Shuchao, Quan Zhou, Ruijin Liao, Lai Xing, Nengcheng Wu, and Qian Jiang. "The Impact of Cross-Linking Effect on the Space Charge Characteristics of Cross-Linked Polyethylene with Different Degrees of Cross-Linking under Strong Direct Current Electric Field." Polymers 11, no. 7 (July 4, 2019): 1149. http://dx.doi.org/10.3390/polym11071149.

Abstract:
Cross-linked polyethylene (XLPE), obtained by the cross-linking reaction of polyethylene (PE), greatly enhances the mechanical and other properties of PE, which makes XLPE widely applied in electric power engineering. However, space charges can distort the distribution of the electric field strength in XLPE insulation, which can shorten the service life of the insulation materials. The space charge characteristics of XLPE under strong direct current (DC) electric fields have therefore been a focus of scholars and engineers worldwide. This article studies the impact of the cross-linking effect on the space charge characteristics of XLPE with different degrees of cross-linking. Samples were prepared using dicumyl peroxide (DCP) as the cross-linking agent and low-density polyethylene (LDPE) as the base material, and the space charge distribution was measured by the pulsed electro-acoustic (PEA) method. The average charge density was introduced as a characteristic parameter to quantitatively analyze the impact of the cross-linking effect on the space charge characteristics of XLPE with different degrees of cross-linking, and the effect is also explained from a microscopic point of view. Several important conclusions are obtained. For instance, the cross-linking effect significantly increases the threshold electric field strength of XLPE; as the content of cross-linking agent increases, the threshold electric field strength first increases and then decreases, reaching its maximum at a cross-linking agent content of 1.0% or 2.1%. The cross-linking effect also introduces negative charge traps into the LDPE and increases the densities of the deeper charge traps. In addition, analysis of the average charge density yields a theoretical model of the average charge decay, namely $Q(t) = Q_0 + \alpha e^{-t/\beta}$, which is very effective for explaining the dissipation characteristics (further conclusions can be found in the conclusion section of the article).
15

Sun, Rui, Xiao Ming Yuan, Long Wei Chen, and Zhen Zhong Cao. "Comparative Analysis of Time-Frequency Curves of Seismic Records on Liquefied and Non-Liquefied Sites." Advanced Materials Research 243-249 (May 2011): 824–31. http://dx.doi.org/10.4028/www.scientific.net/amr.243-249.824.

Abstract:
The frequency content of ground motion records on liquefied and non-liquefied sites differs. A calculation method for the frequency decreasing ratio is given here, and a division line between liquefied and non-liquefied sites is proposed. The zero-crossing method is employed to analyze the time-frequency curves of acceleration. Non-liquefied sites include both soft sites and ordinary non-liquefied sites. The results show: (1) the concept and calculation method of the frequency decreasing ratio proposed in this paper can describe the time-frequency characteristics and regularities of liquefied and non-liquefied sites; (2) before the peak ground acceleration (PGA), the difference between the average frequencies of acceleration on liquefied and non-liquefied sites is not obvious, and the average frequency of acceleration on soft sites is smaller than that on either liquefied or non-liquefied sites; (3) after the PGA, the average frequency of acceleration on ordinary non-liquefied sites is the highest of the three site types, that of soft sites is intermediate, and that of liquefied sites is the smallest; (4) if the absolute change of the time-frequency is used as the criterion, soft sites and liquefied sites can be confused; (5) the threshold value of the frequency decreasing ratio between liquefied and non-liquefied sites is 0.5, which can correctly distinguish liquefied sites, non-liquefied sites, and soft sites.
16

Du, Yanping, Lizhi Peng, Shuihai Dou, Xianyang Su, and Xiaona Ren. "Research on Personalized Book Recommendation Based on Improved Similarity Calculation and Data Filling Collaborative Filtering Algorithm." Computational Intelligence and Neuroscience 2022 (September 17, 2022): 1–11. http://dx.doi.org/10.1155/2022/1900209.

Abstract:
Purpose/Significance: This paper addresses the inaccurate recommendations caused by data sparseness and cold start in traditional collaborative filtering-based personalized book recommendation, and proposes a collaborative filtering recommendation algorithm that improves both the similarity calculation and the filling of missing data. Method/Process: Considering the influence of the users' commonly rated book collection on the similarity calculation, the average rating of all books is used as a threshold, and a common-rating weight is introduced into the user similarity calculation. For data filling, users' basic attributes such as age and gender are encoded according to their average ratings, Euclidean distances are computed, and hierarchical clustering is applied to the users. The Slope One algorithm is then used to calculate filling values from the m most similar users, weighted by degree of similarity, to obtain the final filling value and thereby improve the data filling method. Result/Conclusion: Experiments were carried out in Python on the Book-Crossing dataset. The experimental results show that the improved collaborative filtering algorithm significantly improves the accuracy and quality of book recommendation.
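A sketch of the common-rating weight grafted onto a plain cosine similarity; the exact weighting formula below is an illustrative guess at the idea (global average rating as the threshold), not the paper's formula.

```python
import numpy as np

def weighted_user_similarity(ra, rb, global_mean):
    """Cosine similarity over co-rated books, damped by a common-rating
    weight (share of co-rated books that both users rate above the
    global average rating, which serves as the threshold)."""
    co = ~np.isnan(ra) & ~np.isnan(rb)        # books rated by both users
    if not co.any():
        return 0.0
    a, b = ra[co], rb[co]
    cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    # Weight damps similarities that rest on few or low co-ratings.
    weight = np.mean((a > global_mean) & (b > global_mean))
    return cos * (0.5 + 0.5 * weight)
```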
17

Osborn, H. P., M. Ansdell, Y. Ioannou, M. Sasdelli, D. Angerhausen, D. Caldwell, J. M. Jenkins, C. Räissi, and J. C. Smith. "Rapid classification of TESS planet candidates with convolutional neural networks." Astronomy & Astrophysics 633 (January 2020): A53. http://dx.doi.org/10.1051/0004-6361/201935345.

Abstract:
Aims. Accurately and rapidly classifying exoplanet candidates from transit surveys is a goal of growing importance as the data rates from space-based survey missions increase. This is especially true for the NASA TESS mission, which generates thousands of new candidates each month. Here we created the first deep-learning model capable of classifying TESS planet candidates. Methods. We adapted an existing neural network model and then trained and tested this updated model on four sectors of high-fidelity, pixel-level TESS simulation data created using the Lilith simulator and processed using the full TESS pipeline. With the caveat that direct transfer of the model to real data will not perform as accurately, we also applied this model to four sectors of TESS candidates. Results. We find our model performs very well on our simulated data, with 97% average precision and 92% accuracy on planets in the two-class model. This accuracy is boosted by another ~4% if planets found at the wrong periods are included. We also performed three-class and four-class classification of planets, blended and target eclipsing binaries, and non-astrophysical false positives, which have slightly lower average precision and planet accuracies but are useful for follow-up decisions. When applied to real TESS data, 61% of threshold crossing events (TCEs) coincident with currently published TESS objects of interest are recovered as planets, 4% more are suggested to be eclipsing binaries, and we propose a further 200 TCEs as planet candidates.
18

Salcedo-Bosch, Andreu, Francesc Rocadenbosch, Miguel A. Gutiérrez-Antuñano, and Jordi Tiana-Alsina. "Estimation of Wave Period from Pitch and Roll of a Lidar Buoy." Sensors 21, no. 4 (February 12, 2021): 1310. http://dx.doi.org/10.3390/s21041310.

Abstract:
This work proposes a new wave-period estimation method (the L-dB method) based on power spectral density (PSD) estimation from the pitch-and-roll motional time series of a Doppler wind lidar buoy, under the assumptions of small angles (±22 deg) and slow yaw drifts (1 min), and neglecting translational motion. We revisit the buoy's simplified two-degrees-of-freedom (2-DoF) motional model and formulate the PSD associated with the eigenaxis tilt of the lidar buoy, which is modelled as a complex-valued random process. From this, we present the L-dB method, which estimates the wave period as the average wavelength associated with the cutoff frequency span over which the spectral components drop off L decibels from the peak level. In the framework of the IJmuiden campaign (North Sea, 29 March–17 June 2015), the L-dB method is compared against the most common oceanographic wave-period estimation methods, using a Triaxys™ buoy as reference. Parametric analysis showed good agreement (correlation coefficient ρ = 0.86, root-mean-square error (RMSE) = 0.46 s, and mean difference MD = 0.02 s) between the proposed L-dB method and the oceanographic zero-crossing method when the threshold L was set at 8 dB.
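A minimal sketch of the L-dB rule, assuming pitch and roll have already been combined into the complex tilt series the authors describe; the welch parameters are placeholders.

```python
import numpy as np
from scipy.signal import welch

def ldb_wave_period(pitch, roll, fs, L=8.0):
    """Wave period from the span of frequencies whose PSD lies within
    L dB of the spectral peak (the L-dB method of the abstract)."""
    tilt = pitch + 1j * roll                   # complex tilt process
    f, psd = welch(tilt, fs=fs, nperseg=512, return_onesided=False)
    f = np.abs(f)
    # Keep the cutoff span: components no more than L dB below the peak.
    keep = (f > 0) & (psd >= psd.max() * 10 ** (-L / 10))
    # Average period associated with the retained frequency span.
    return float(np.mean(1.0 / f[keep]))
```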
19

Rossi, Fabio, Paolo Motto Ros, Ricardo Maximiliano Rosales, and Danilo Demarchi. "Embedded Bio-Mimetic System for Functional Electrical Stimulation Controlled by Event-Driven sEMG." Sensors 20, no. 5 (March 10, 2020): 1535. http://dx.doi.org/10.3390/s20051535.

Abstract:
The analysis of the surface ElectroMyoGraphic (sEMG) signal for controlling Functional Electrical Stimulation (FES) therapy is widely accepted as an active rehabilitation technique for the restoration of neuro-muscular disorders. Portability and real-time functionality are major concerns, and, among others, two correlated challenges are the development of an embedded system and the implementation of lightweight signal processing approaches. In this respect, the event-driven nature of the Average Threshold Crossing (ATC) technique, given its high correlation with muscle force and the sparsity of its representation, could be an optimal solution. In this paper we present an embedded ATC-FES control system equipped with multi-platform software featuring an easy-to-use Graphical User Interface (GUI). The system was first characterized and validated by analyzing CPU and memory usage in different operating conditions and by measuring the system latency (fulfilling the real-time requirements with a 140 ms FES definition process). We also confirmed system effectiveness by testing it on 11 healthy subjects: the similarity between the voluntary movement and the stimulated one was evaluated by computing the cross-correlation coefficient between the angular signals acquired during limb motion. We obtained high correlation values of 0.87 ± 0.07 and 0.93 ± 0.02 for the elbow flexion and knee extension exercises, respectively, proving good stimulation performance in realistic therapy scenarios.
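Since Average Threshold Crossing is the topic of this listing, a minimal sketch may be useful: ATC reduces the sEMG stream to per-window counts of threshold-crossing events, whose average tracks muscle activation. The window length and threshold below are illustrative, not the paper's settings.

```python
import numpy as np

def atc(semg, fs, threshold, window_s=0.13):
    """Average Threshold Crossing: count rising-edge crossings of a fixed
    threshold in consecutive windows; the mean count is an event-driven,
    low-bandwidth proxy for muscle activation."""
    n = int(window_s * fs)                     # samples per window
    above = semg > threshold
    # Rising edges: samples above threshold whose predecessor was below.
    events = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    n_windows = len(semg) // n
    counts = np.bincount(events // n, minlength=n_windows)[:n_windows]
    return counts.mean(), counts
```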
20

Bouhram, M., B. Klecker, G. Paschmann, S. Haaland, H. Hasegawa, A. Blagau, H. Rème, J. A. Sauvaud, L. M. Kistler, and A. Balogh. "Survey of energetic O+ ions near the dayside mid-latitude magnetopause with Cluster." Annales Geophysicae 23, no. 4 (June 3, 2005): 1281–94. http://dx.doi.org/10.5194/angeo-23-1281-2005.

Abstract:
Since December 2000, the Cluster satellites have been conducting detailed measurements of the magnetospheric boundaries and have confirmed the unambiguous presence of ions of terrestrial origin (e.g. O+) in regions adjacent to the dayside, mid-latitude magnetopause. In the present paper, we focus on the statistical properties of the O+ ion component at energies ranging from 30 eV up to 40 keV, using three years of ion data at solar maximum from the Cluster Ion Spectrometry (CIS) experiment aboard two Cluster spacecraft. The O+ density decreases on average by a factor of 6, from 0.041 to 7×10⁻³ cm⁻³, when crossing the magnetopause from the magnetosphere to the magnetosheath, but depends on several parameters, such as the geomagnetic activity or the modified disturbed storm time index (Dst*), and on location. The O+ density is significantly higher on the dusk side than on the dawn side, which is consistent with the view that these ions originate mainly from the plasma sheet. A remarkable finding is that inward of the magnetopause, O+ is the dominant contributor to the mass density 30% of the time on the dusk side, in comparison to 3% on the dawn side and 4% near noon. On an event basis in the dusk flank of the magnetopause, we point out that O+ ions, when dominating the mass composition, lower the threshold for generating the Kelvin-Helmholtz instability, which may allow plasma exchange between the magnetosheath and the plasma sheet. We also discuss the effect of a substantial O+ ion component when present in a reconnection region.
21

Lopez, J. R., R. A. Ghanbari, and A. Terzic. "A KATP channel opener protects cardiomyocytes from Ca2+ waves: a laser confocal microscopy study." American Journal of Physiology-Heart and Circulatory Physiology 270, no. 4 (April 1, 1996): H1384—H1389. http://dx.doi.org/10.1152/ajpheart.1996.270.4.h1384.

Abstract:
Laser confocal microscopy was used to visualize intracellular spatiotemporal Ca2+ patterns in single guinea pig ventricular myocytes loaded with the Ca2+ indicator fluo 3-acetoxymethyl ester (fluo 3-AM) and exposed to moderately elevated extracellular K+ to induce partial membrane depolarization. Analysis of K+-induced intracellular Ca2+ elevation revealed three distinct paradigms: 1) diffuse, nonoscillatory Ca2+ elevation across the myocyte; 2) localized Ca2+ elevation in anatomically restricted areas (Ca2+ sparks); and 3) regenerative frontal propagations of Ca2+ that traversed the length of the cell (Ca2+ waves). The first two patterns were more frequently observed when the extracellular K+ concentration was raised to 8 mM. Ca2+ waves became more common when the extracellular K+ concentration was increased to 16 mM, suggesting that a minimum threshold of increase in intracellular Ca2+ is necessary for the organization of Ca2+ waves. The velocity of propagation was typically approximately 60 microns/s, with an average frequency of one wave per second crossing a given point in the cell. Wave propagation resulted in spatial and temporal oscillations in cytosolic and nuclear Ca2+ concentration. Treating cardiac cells with aprikalim, a potassium channel-opening drug, prevented 16 mM K+ (but not 32 mM K+) from inducing an increase in Ca2+ concentration and from generating Ca2+ waves. In cardiomyocytes treated with glyburide, a selective antagonist of ATP-sensitive K+ channels, aprikalim failed to prevent 16 mM K+ from inducing Ca2+ waves. In summary, moderate hyperkalemia induces distinct, nonuniform patterns of intracellular Ca2+ elevation in ventricular cells, which can be prevented by a potassium channel-opening drug through a glyburide-sensitive mechanism.
22

Хорошун, Г. М. "Метод стиснення даних дифракційних та інтерференційних зображень" [A method of data compression for diffraction and interference images]. Open Information and Computer Integrated Technologies, no. 88 (November 6, 2020): 134–40. http://dx.doi.org/10.32620/oikit.2020.88.13.

Abstract:
The paper considers diffraction and interference images obtained by numerical simulation and experimentally in solving fundamental and applied problems of photonics. Such images are structures with a special intensity distribution formed by the initial field and the optical system. To increase the speed of image data processing, a compression method with further implementation in databases is developed. The method for compressing diffraction and interference images is based on intensity quantization. An algorithm for image quantization has been developed: target intensity values are determined, which set the quantization levels, together with data visualization techniques that determine the threshold values for these levels. The algorithm also includes image segmentation by the minimum size of a topological object. The vicinity of a topological object is defined under the conditions of a visual registration form and must not cross other regions. The topological objects of the diffraction field are the maxima, minima, and zeros of intensity; in the interference pattern the topological objects are the maxima, minima, and the regions of band splitting. Important parameters are the average intensity of the whole image, which highlights its overall structure, and the average intensity of a local segment. Compressing 8-bit grayscale image data showed that 2 bits of color depth are enough for an interference image and 3 bits are enough for a diffraction image. The quantization differences between diffraction images and interference patterns are shown, and data compression ratios are calculated. The obtained results and recommendations can be applied in various fields of medicine, biology, and pharmacy that use laser technology, as well as in the development of IT tools for identifying topological objects in a light field, optical image processing, and decision support in optical problems.
23

Tuzubekova, Madina, Gulnar Kazizova, Inara Sarybaeva, and Gulzhakhan Zhunussova. "Social policy of the state and the role of state programs in solving the problems of social protection of the population." Economic Consultant 37, no. 1 (March 1, 2022): 61–71. http://dx.doi.org/10.46224/ecoc.2022.1.5.

Abstract:
Introduction: In a market economy, state development is often accompanied by a drop in the standard of living of the population, a deterioration in the demographic situation in the country, and a reduction in average life expectancy, which makes it necessary to improve the effectiveness of the system of social protection of citizens. When developing social policy, the question of social priorities arises as one of the most important: those social tasks that are currently most urgent for society and require priority solutions, such as social protection of the working and non-working population, pension provision, and social support for low-income segments of the population and the unemployed. Materials and methods: In preparing the article, the authors used conference materials and, in the review of sources, modern foreign periodicals. Results: World practice of social insurance shows that the greatest social effect is achieved with three-channel formation of insurance funds from the contributions of employees, employers, and the state. This method of financing ensures joint responsibility of the participants for the prevention and compensation of risks, and joint management and control of the funds. For compulsory social insurance, the most reasonable distribution is considered to be one in which the largest part of the financing falls on the employer (50%), with 20-40% on the state and 10-30% on the employee. Conclusion: Based on the analysis of international experience, we can conclude that the most effective and comprehensive social protection systems usually include the following main elements: state benefits, compulsory social insurance, funded pensions, and social assistance. Since the success of fundamental and profound social reforms depends on public recognition of their justice, the social policy pursued by the state should be based on modern social indicators and criteria adequate to the new economic principles. The latter determine the threshold values of indicators of social activity and social security; crossing these thresholds is unacceptable, as it is fraught with negative social consequences that would make further progress in the economic reform of society impossible.
24

Joseph, Sheeba M., Christopher Cheng, Matthew J. Solomito, and J. Lee Pace. "LATERAL TROCHLEAR INCLINATION IN CHILDREN AND ADOLESCENTS: MODIFIED MEASUREMENT TECHNIQUE TO CHARACTERIZE PATELLAR INSTABILITY." Orthopaedic Journal of Sports Medicine 7, no. 3_suppl (March 1, 2019): 2325967119S0014. http://dx.doi.org/10.1177/2325967119s00146.

Abstract:
Background: Patellar instability (PI) is relatively rare but occurs most often in younger patients with underlying pathoanatomy. Trochlear dysplasia (TD) is one of many identified PI risk factors, but consensus is lacking on ideal radiographic measurements. The Dejour classification of TD on lateral radiographs is widely accepted but has suboptimal intra and interrater reliability and does not allow quantification of TD. Lateral trochlear inclination (LTI) measured on the most proximal axial magnetic resonance image (MRI) of the trochlear chondral surface is another described measurement of TD. LTI has historically been described with reference to the posterior aspect of the femur at the same axial level at which the proximal trochlea is measured. However, given the transitional anatomy of the distal femur, the LTI may be better represented by referencing the axis of the fully formed posterior femoral condyles. The posterior condyles represent a true axis of rotation that serves as an internal reference for knee motion and is clearly visible on MRI. We hypothesized that modified LTI measurements (LTI) referencing the posterior condylar axis would differ from the apparent LTI (ALTI) in a pediatric and adolescent population. We also hypothesized that the LTI would have stronger intra and inter reliability than the ALTI measurement and Dejour classification. Lastly, we hypothesized that the most proximal level of the trochlea would adequately represent overall proximal trochlear morphology. This is clinically relevant because dysplasia is most severe on the proximal trochlea and normalizes distally towards the intercondylar notch. Methods: Patients aged 9 to 18 years treated for PI at our tertiary referral center between January 2014 and August 2017 were identified. The Dejour classification was determined on lateral knee radiographs. The ALTI was measured as previously described on axial MRI images (Figure 1A). The LTI (also referred to as LTI #1) was measured on the same MRI image with respect to the angle of the posterior condyles (Figure 1B-C). The LTI was measured again in this fashion at the three subsequent, consecutive axial levels (LTI#2, LTI#3, LTI#4) to capture the first 12 mm of the proximal trochlea. The average of these measurements (LTI-avg) was calculated for each patient. All measurements were performed by two independent observers. A cohort of 30 patients were randomly selected for reliability analysis which was performed twice by three independent observers at least two weeks apart. Inter- and intra-rater correlation coefficients were calculated on this subgroup. Regression analysis was performed on the entire cohort. Results: Sixty-five patients met inclusion criteria for this study, and thirty patients were randomly selected for reliability analysis. Inter- and intra-rater reliability for ALTI showed good agreement but trended towards more variability than the inter- and intra-rater reliability for LTI#1 which had near perfect agreement (Table 1). Inter- and intra-rater reliability for all subsequent LTI measurements and LTI-avg had high or near perfect agreement (Table 2). The Dejour classification had poor to moderate inter-rater and good to near perfect intra-rater reliability. The crossing sign was the most reliable radiographic feature (Table 3). In the entire cohort of 65 patients, the average ALTI (9.2+/-12.6 degrees) was 7.0+/-3.4 degrees greater (less dysplastic) than the average LTI #1 (4.2+/-11.9 degrees) (p = 0.013). 
Referencing the 11-degree LTI threshold value for trochlear dysplasia reported in the literature, the ALTI was below 11 degrees in 60% of our PI patients, indicating dysplasia, while the LTI was below 11 degrees in 71% of our PI patients. Regression analysis demonstrated statistically significant positive correlations between LTI#1 and LTI#2 (r=0.88, beta=0.81, p<0.0001), LTI#1 and LTI#3 (r=0.67, beta=0.54, p<0.0001), LTI#1 and LTI#4 (r=0.65, beta=0.43, p<0.001), and LTI#1 and LTI-avg (r=0.91, beta=0.70, p<0.0001). Conclusion: LTI has higher intra- and interrater reliability when performed with reference to the posterior condyles compared to the historical measurement (ALTI) and the Dejour classification. The significant and strong correlation between LTI#1 and the subsequent LTI measures, as well as LTI-avg, shows that 90% of TD is represented on the first, most proximal axial image, which thus provides an appropriate, reliable, and quantifiable measurement of TD in children and adolescents with PI. The significant difference found between LTI and ALTI shows that the historical measurement appears to underestimate dysplasia. Thus, previously described threshold values should be re-examined using this new technique to appropriately characterize trochlear dysplasia in patients with patellar instability, as this can have implications for treatment algorithms for these patients.
25

Faradiba, Faradiba. "Tingkat Kebisingan di Sekolah Sekitar Perlintasan Kereta Api" [Noise levels at schools near a railway crossing]. Prosiding SNFA (Seminar Nasional Fisika dan Aplikasinya) 2 (November 28, 2017): 62. http://dx.doi.org/10.20961/prosidingsnfa.v2i0.16366.

Abstract:
Noise is sound that causes discomfort, and railway activity is one of its sources. The noise generated has a considerable negative impact on the surrounding environment, especially on schools. This research uses a descriptive analysis method with a cross-sectional approach. The study location is a school directly beside a railway crossing, SMA Negeri 37 Jakarta. Noise level data were collected using an Android-based sound level meter application. The data were measured as the instantaneous sound pressure level over 5 minutes, Leq (5 minutes), at each of 5 measurement points. The measurements at SMA Negeri 37 Jakarta gave an average noise level across the 5 measurement points of 70.50 dB. This figure exceeds the threshold of Kep-48 MNLH/11/1996, which sets a maximum of 55.00 dB for school environments. Noise control efforts are needed at the school to minimize the negative impact, because the higher the noise intensity, the greater the negative impact, especially for the students at the school.
26

Ziyad, Jawad, Kalifa Goïta, Ramata Magagi, Fabien Blarel, and Frédéric Frappart. "Improving the Estimation of Water Level over Freshwater Ice Cover using Altimetry Satellite Active and Passive Observations." Remote Sensing 12, no. 6 (March 17, 2020): 967. http://dx.doi.org/10.3390/rs12060967.

Abstract:
Owing to its 10-day temporal resolution and its polar orbit allowing several crossings over large lakes, the US National Aeronautics and Space Administration (NASA) and French Centre National d'Etudes Spatiales (CNES) missions, including Topex/Poseidon and Jason-1/2/3, have demonstrated strong capabilities for the continuous and long-term monitoring (starting in 1992) of large and medium-sized water bodies. However, the presence of heterogeneous targets in the altimeter footprint, such as ice cover in boreal areas, remains a major obstacle to obtaining water-level estimates over subarctic lakes with an accuracy similar to that achieved by satellite altimetry over other inland water bodies (i.e. R ≥ 0.9 and RMSE ≤ 10 to 20 cm when compared to in-situ water stages). In this study, we aim to automatically identify the Jason-2 altimetry measurements corresponding to open water, ice, and the water-ice transition, in order to improve water-level estimates during freeze and thaw periods using only the open-water point measurements. Four Canadian lakes were selected to analyze active (waveform parameters) and passive (brightness temperature) microwave data acquired by the Jason-2 radar altimetry mission: Great Slave Lake, Lake Athabasca, Lake Winnipeg, and Lake of the Woods. To determine lake surface states, the Ku-band backscattering coefficient and peakiness derived from the radar altimeter waveform, and the brightness temperatures at 18.7 and 37 GHz measured by the microwave radiometer, all contained in the Jason-2 geophysical data records (GDR), were used in two different unsupervised classification techniques to define thresholds discriminating open-water from ice measurements. The K-means technique provided better results than hierarchical clustering, based on silhouette criteria and the Calinski-Harabasz index. The discrimination thresholds between ice and water were validated against the Normalized Difference Snow Index (NDSI) snow cover products of the MODIS satellite. Using the open-water threshold improved water-level estimation compared to in-situ water stages, especially in the presence of ice: for the four lakes, the Pearson coefficient (r) increased on average from about 0.8 without the thresholds to more than 0.90, and the unbiased RMSE was generally lower than 20 cm when the open-water threshold was used, compared to more than 22 cm over smaller lakes without the thresholds.
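A sketch of the unsupervised surface-state labelling step described above, using scikit-learn's KMeans on the four GDR quantities named in the abstract; the column order, scaling, and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def label_surface_state(sigma0, peakiness, tb18, tb37, k=2):
    """Cluster Jason-2 measurements into surface-state classes from the
    Ku-band backscattering coefficient, waveform peakiness and 18.7/37 GHz
    brightness temperatures (k=3 would add a water-ice transition class)."""
    X = np.column_stack([sigma0, peakiness, tb18, tb37])
    X = StandardScaler().fit_transform(X)      # put features on one scale
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
```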
27

Salohub, А. M. "INFLUENCE OF GENOTYPIC AND PARATHIPIC FACTORS ON THE TRAITS OF MILK PRODUCTION OF COWS UKRAINIAN RED-AND-WHITE DAIRY BREED." Animal Breeding and Genetics 57 (April 24, 2019): 126–35. http://dx.doi.org/10.31073/abg.57.15.

Abstract:
The research studied the influence of genotypic and paratypical factors on the milk production traits of cows of the Ukrainian Red-and-White dairy breed at LLC "Mena-Avangard", Chernihiv region. Within the pedigree herd, four groups of crossbred animals with different conditional shares of Holstein heredity were studied: I, < 62.5%; II, 62.6–75.0%; III, 75.0–87.5%; and IV, 87.6% and above. The results for the crossbred groups showed a reliable influence of the heredity of the improving breed on milk yield and milk fat output over the evaluated lactation. Milk yield grew with each increase in the heredity share of the improver breed across the crossbred groups. Comparing the group with Holstein blood < 62.5% with counterparts at 62.6–75.0%, the advantage was 301 kg in favor of the latter, a reliable difference at P < 0.01. The next increase in blood share, to 75.0–87.5%, led to a corresponding increase in yield of 262 kg (P < 0.01), and animals with 87.6% Holstein heredity dominated the previous group with a highly significant difference of 345 kg of milk (P < 0.001). An increase of Holstein blood by 25% resulted in an increase in the milk yield of first-calf cows of 908 kg (P < 0.001). The fat content in milk decreased by only 0.05% across this increase in blood share, a change not confirmed as reliable, whereas milk fat output increased with a highly significant difference of 31.9 kg (P < 0.001). One-factor dispersion analysis found that the milk yield and milk fat output of first-lactation cows of the Ukrainian Red-and-White dairy breed depend on paratypical factors (year and season of birth, and year and season of first calving) by 5.5–6.2% and 4.7–9.2%, respectively, as confirmed with high reliability by Fisher's criterion. Milk yield and milk fat in the first lactation depended significantly, by 25.3% and 15.8% respectively, on the conditional share of Holstein heredity. However, the highest degrees of influence on the yield and milk fat of first-calf cows were obtained for the comprehensive selection index of the cow's dam (57.1 and 44.7%), the average breeding value of the dam for milk yield (64.4 and 45.4%), and for milk fat (53.5 and 38.9%). The standardized breeding value of the sire for milk yield and milk fat also influenced milk production indicators with high reliability (0.283 and 0.178). The dispersion of yield and milk fat of first-calf cows attributable to the heredity of the sire lines used is 15.7 and 10.9%, respectively, with Fisher's criterion exceeding the third threshold level of reliability (P < 0.001). Thus, the influence of Holstein heredity, the selection indices of the dams and sires of cows, and the sire lines on milk yield and milk fat output indicates the possibility of effective breeding of the studied dairy cattle by selecting ancestors with high selection indices and pedigree value.
28

Han, Yan, Ye Liu, Peng Hu, CS Cai, Guoji Xu, and Jiaying Huang. "Effect of unsteady aerodynamic loads on driving safety and comfort of trains running on bridges." Advances in Structural Engineering 23, no. 13 (June 5, 2020): 2898–910. http://dx.doi.org/10.1177/1369433220924794.

Abstract:
In order to investigate the effects of unsteady aerodynamic loads on the driving safety and comfort of trains running on bridges, a three-dimensional multi-body model of the train–track–bridge system was established, and the dynamic responses of the coupled system were calculated by combining the finite element software ANSYS with the multi-body dynamics software SIMPACK. The driving safety and comfort of a train running on a bridge under steady and unsteady aerodynamic loads were compared and analyzed, and the effects of different crosswind speeds on driving safety under unsteady aerodynamic loads were studied. It is found that the driving safety and comfort index values of a train at speeds of 200–300 km/h without wind loads are smaller (meaning safer) than those of a train under wind loads. When the average crosswind speed is 20 m/s, the driving safety assessment results are better, and the comfort assessment results more conservative, when the unsteady aerodynamic loads are considered than in the steady-wind-load case. When the average crosswind speed is below 10 m/s and the train speed is 250 km/h, the driving safety and comfort of the train on the bridge meet the requirements, and the level of stability can reach "good" or above. Through the analysis of driving safety under different crosswind speeds, threshold values for safe driving were obtained, which can provide a better basis for the safe operation of trains on bridges.
APA, Harvard, Vancouver, ISO, and other styles
29

Frech, Michael, Frank Holzäpfel, Arnold Tafferner, and Thomas Gerz. "High-Resolution Weather Database for the Terminal Area of Frankfurt Airport." Journal of Applied Meteorology and Climatology 46, no. 11 (November 1, 2007): 1913–32. http://dx.doi.org/10.1175/2007jamc1513.1.

Full text
Abstract:
Abstract A 1-yr meteorological dataset for the terminal area of Frankfurt Airport in Germany has been generated with a numerical weather prediction system to provide a synthetic though realistic database for the evaluation of new operational aircraft arrival procedures and their associated risks. The comparison of the 1-yr dataset with a local surface wind climatology indicates that the main climatological features are recovered. A subset of 40 days is validated against measurements from a sound detection and range/radio acoustic sounding system (SODAR/RASS) taken at Frankfurt Airport. The RMS errors of wind speed and direction are between 1.5 m s−1 at the surface and 2 m s−1 at 300 m and 40°, respectively. The frequency distribution of meteorological parameters, such as the wind component perpendicular to the glide path, shear, and thermal stratification, show good agreement with observations. The magnitude of the turbulent energy dissipation rate near the surface is systematically overestimated, whereas above 100 m the authors find on average a slight underestimation. The analysis of the database with respect to crosswind conditions along the glide path indicates only a time fraction of 12% for which the crosswind is above a threshold of 2 m s−1. A similar result is obtained using a grid point near the airport that mimics a wind profiler, which suggests that in a majority of cases a wind profiler appears sufficient to cover the expected crosswind conditions along the glide path. A simple parameterization to account for the crosswind variability along the glide path is proposed.
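The crosswind statistic used here is the projection of the horizontal wind onto the axis perpendicular to the glide path. A minimal sketch of that calculation (wind direction in meteorological convention; the runway heading is an assumed example value, not taken from the paper):

```python
import numpy as np

def crosswind(speed_ms, wind_dir_deg, runway_heading_deg):
    """Magnitude of the wind component perpendicular to the glide path."""
    angle = np.radians(wind_dir_deg - runway_heading_deg)
    return abs(speed_ms * np.sin(angle))

# Example: 6 m/s wind from 290 degrees against an assumed 250-degree approach axis
cw = crosswind(6.0, 290.0, 250.0)
print(f"crosswind = {cw:.2f} m/s; exceeds 2 m/s threshold: {cw > 2.0}")
```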
APA, Harvard, Vancouver, ISO, and other styles
30

Regaudie-de-Gioux, A., R. Vaquer-Sunyer, and C. M. Duarte. "Patterns in planktonic metabolism in the Mediterranean Sea." Biogeosciences Discussions 6, no. 4 (August 31, 2009): 8569–88. http://dx.doi.org/10.5194/bgd-6-8569-2009.

Full text
Abstract:
Abstract. Planktonic gross community production (GPP), net community production (NCP) and community respiration (CR) across the Mediterranean Sea was examined in two cruises, THRESHOLDS 2006 and 2007, each crossing the Mediterranean from West to East to test for consistent variation along this longitudinal gradient. GPP averaged 2.4±0.4 mmol O2 m−3 d−1, CR averaged 3.8±0.5 mmol O2 m−3 d−1, and NCP averaged −0.8±0.6 mmol O2 m−3 d−1 across the studied sections, indicative of a tendency for a net heterotrophic metabolism, prevalent across studied sections of the Mediterranean Sea as reflected in 70% of negative NCP estimates. The median P/R ratio was 0.58, also indicating a strong prevalence of heterotrophic communities (P/R<1) along the studied sections of the Mediterranean Sea. The communities tended to be net heterotrophic (i.e. P/R<1) at GPP less than 3.5 mmol O2 m−3 d−1. Although the Western Mediterranean supports a higher gross primary production than the Eastern basin does, it also supported a higher community respiration. The net heterotrophic nature of the studied sections of the Mediterranean Sea indicates that allochthonous carbon should be important to subsidise planktonic metabolism, and that the planktonic communities in the Mediterranean Sea acted as CO2 sources to the atmosphere during the study.
APA, Harvard, Vancouver, ISO, and other styles
31

Regaudie-de-Gioux, A., R. Vaquer-Sunyer, and C. M. Duarte. "Patterns in planktonic metabolism in the Mediterranean Sea." Biogeosciences 6, no. 12 (December 17, 2009): 3081–89. http://dx.doi.org/10.5194/bg-6-3081-2009.

Full text
Abstract:
Abstract. Planktonic gross community production (GPP), net community production (NCP) and community respiration (CR) across the Mediterranean Sea was examined in two cruises, Thresholds 2006 and 2007, each crossing the Mediterranean from West to East to test for consistent variation along this longitudinal gradient in late spring to early summer. GPP averaged 2.4±0.4 mmol O2 m−3 d−1, CR averaged 3.8±0.5 mmol O2 m−3 d−1, and NCP averaged −0.8±0.6 mmol O2 m−3 d−1 across the studied sections, indicative of a tendency for a net heterotrophic metabolism in late spring to early summer, prevalent across studied sections of the Mediterranean Sea as reflected in 70% of negative NCP estimates. The median P/R ratio was 0.6, also indicating a strong prevalence of heterotrophic communities (P/R<1) along the studied sections of the Mediterranean Sea. The communities tended to be net heterotrophic (i.e. P/R<1) at GPP less than 2.8 mmol O2 m−3 d−1. The Western Mediterranean tended to support a higher gross primary production and community respiration than the Eastern basin did, but these differences were not statistically significant (t-test, p>0.05). The net heterotrophy of the studied sections of the Mediterranean Sea indicates that allochthonous carbon should be important to subsidise planktonic metabolism during the late spring.
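The bookkeeping behind both versions of this study reduces to NCP = GPP − CR and the ratio P/R = GPP/CR, with net heterotrophy flagged when P/R < 1. A minimal sketch over hypothetical station values:

```python
import numpy as np

# Hypothetical volumetric rates (mmol O2 m^-3 d^-1) at four stations
gpp = np.array([2.1, 3.9, 1.5, 2.6])  # gross community production
cr = np.array([3.6, 3.2, 4.1, 3.9])   # community respiration

ncp = gpp - cr   # net community production
pr = gpp / cr    # production-to-respiration ratio
print("median P/R:", np.median(pr))
print("fraction of negative NCP estimates:", np.mean(ncp < 0))
```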
APA, Harvard, Vancouver, ISO, and other styles
32

Galiatsatou, Panagiota, and Panayotis Prinos. "REDUCING UNCERTAINTY IN EXTREME WAVES AND STORM SURGES USING A COMBINED EXTREME VALUE MODEL AND WAVELETS." Coastal Engineering Proceedings 1, no. 33 (December 14, 2012): 6. http://dx.doi.org/10.9753/icce.v33.management.6.

Full text
Abstract:
In the present study the wavelet transform is combined with non-stationary statistical models for extreme value analysis, to provide more reliable and more accurate return level estimates. The continuous wavelet transform is first used to detect the significant "periodicities" of the wave height and storm surge signals under study, by means of the wavelet global and scale-averaged power spectra, and then to reconstruct the part of each time series represented by these significant and prominent features. A non-stationary point process is utilized to model the extremes. A time-varying threshold with a period of one year and an approximately uniform crossing rate throughout the year is used. The reconstructed part of the series variability, representing the significant non-stationarities of each signal, is incorporated in both the location and the scale parameters of the point process model, together with selected harmonic functions, formulating a number of candidate extreme value models. The quality of the fitted models is assessed by means of the Akaike Information Criterion, as well as by diagnostic quantile plots. The models which incorporate the reconstructed part of the wavelet transform in their location parameter, as a separate component without any scaling coefficient, result in narrower return level confidence intervals and therefore tend to reduce uncertainty in extrapolated extremes.
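A time-varying threshold with an approximately uniform crossing rate can be built by taking, for each position in the annual cycle, a fixed empirical quantile of all observations falling near that day of year. A minimal sketch under that interpretation (the paper's exact construction may differ), assuming a pandas Series of daily wave heights indexed by date:

```python
import numpy as np
import pandas as pd

def seasonal_threshold(series, quantile=0.95, half_window=15):
    """Day-of-year threshold: the `quantile` of all observations within
    +/- half_window days of each calendar day, pooled across years, so
    that roughly the same fraction of values exceeds it year-round."""
    doy = series.index.dayofyear.to_numpy()
    values = series.to_numpy()
    thr = np.empty(366)
    for d in range(1, 367):
        dist = np.abs(doy - d)
        dist = np.minimum(dist, 366 - dist)  # circular day-of-year distance
        thr[d - 1] = np.quantile(values[dist <= half_window], quantile)
    return pd.Series(thr[doy - 1], index=series.index)

dates = pd.date_range("2000-01-01", "2009-12-31", freq="D")
rng = np.random.default_rng(0)
hs = pd.Series(1.5 + np.sin(2 * np.pi * dates.dayofyear / 365.25)
               + rng.gamma(2.0, 0.3, len(dates)), index=dates)
u = seasonal_threshold(hs)
print("overall exceedance rate:", (hs > u).mean())  # close to 0.05 by design
```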
APA, Harvard, Vancouver, ISO, and other styles
33

Yanyo, L. C., and F. N. Kelley. "Effect of Network Chain Length on the Tearing Energy Master Curves of Poly(Dimethyldiphenyl Siloxane)." Rubber Chemistry and Technology 61, no. 1 (March 1, 1988): 100–118. http://dx.doi.org/10.5254/1.3536169.

Full text
Abstract:
Abstract Five different molecular weights of vinyl-terminated poly(dimethyldiphenyl siloxane) were endlinked with tetrakis(dimethylsiloxy)silane to produce networks of various crosslink densities. A hydrosilation endlinking reaction was chosen, since this method provides the advantage of easily controlling the network chain length and network morphology. Tear energies were determined using a modified trouser-tear test piece at several rates and temperatures. All of the tear energy data for each crosslink density could be shifted to a single master curve with the same WLF coefficients, C1=6 and C2=108 K. The threshold tear energy of PDMDPS is different from unsubstituted PDMS because the average molecular weight per backbone atom and density are increased by the presence of phenyl groups. Each of the tear-energy master curves of PDMDPS shift to a single curve with Mc1/2, except in regions near the glass transition. Deviations of the curves are observed to coincide with increases in the crack-tip diameter with increasing molecular weight between crosslinks. In the lower transition region of the tear-energy master curve, the loss function is determined by the hysteresis of the material. Near the glass transition, the effect of the strain energy and strain-energy distribution, as well as the hysteresis, must be considered. Deviations of the master curves with crosslink density can also be explained from the changes in the loss function for each curve due to the differences in strain and strain distribution for different crack-tip diameters which were visually evident. Andrews' representation of the loss function of the tear energy is known to be dependent on hysteresis, strain energy, and the strain-energy distribution near the crack tip. Modifying his relation, the tear energy was shown to increase with hysteresis as long as the strain energy and strain-energy distribution remained constant. The strain energy distribution was also shown to increase directly with the molecular weight between crosslinks of the network.
APA, Harvard, Vancouver, ISO, and other styles
34

Yan, C., A. T. T. Kan, W. Wang, F. Yan, L. Wang, and M. B. B. Tomson. "Sorption Study of γ-AlO(OH) Nanoparticle-Crosslinked Polymeric Scale Inhibitors and Their Improved Squeeze Performance in Porous Media." SPE Journal 19, no. 04 (January 30, 2014): 687–94. http://dx.doi.org/10.2118/164086-pa.

Full text
Abstract:
Summary Polymeric scale inhibitors are widely used in the oil and gas field because of their enhanced thermal stability and better environmental compatibility. However, the squeeze efficiency of such threshold inhibitors, not only polymeric scale inhibitors but also phosphonates, is typically poor in conventional squeeze treatment. In this research, nanoparticle (NP)-crosslinked polymeric scale inhibitors were developed for scale control. Nearly monodispersed boehmite [γ-AlO(OH)] NPs with average size of 2.8 nm were synthesized and used to crosslink sulfonated polycarboxylic acid (SPCA). Crosslinked AlO(OH)-SPCA nanoinhibitors were produced and developed to increase the retention of SPCA in formations by converting liquid-phase polymeric scale inhibitors into a viscous gel. Study of sorption of SPCA onto AlO(OH) NPs under different pHs with and without assistance of Ca2+ was discussed. In addition, study of sorption of various types of scale inhibitors [SPCA; phosphino-polycarboxylic acid (PPCA); and diethylenetriaminepentatakis(methylene phosphonic acid) (DTPMP)] onto AlO(OH) NPs was performed. Squeeze simulation of neat 3% SPCA, AlO(OH) (3%)-SPCA (3%) NPs, and AlO(OH) (3%)-SPCA (3%)-Ca NPs was investigated. The results showed that the addition of Ca2+ ions improves the squeeze performance of SPCA, and the normalized squeeze life (NSL) of such material (8,952 bbl/kg) was improved by a factor greater than 60 compared with that of SPCA alone (152 bbl/kg).
APA, Harvard, Vancouver, ISO, and other styles
35

Matrosov, Sergey Y. "Characteristics of Landfalling Atmospheric Rivers Inferred from Satellite Observations over the Eastern North Pacific Ocean." Monthly Weather Review 141, no. 11 (October 25, 2013): 3757–68. http://dx.doi.org/10.1175/mwr-d-12-00324.1.

Full text
Abstract:
Abstract Narrow elongated regions of moisture transport known as atmospheric rivers (ARs), which affect the West Coast of North America, were simultaneously observed over the eastern North Pacific Ocean by the polar-orbiting CloudSat and Aqua satellites. The presence, location, and extent of precipitation regions associated with ARs and their properties were retrieved from measurements taken at 265 satellite crossings of AR formations during the three consecutive cool seasons of the 2006–09 period. Novel independent retrievals of AR mean rain rate, precipitation regime types, and precipitation ice region properties from satellite measurements were performed. Relations between widths of precipitation bands and AR thicknesses (as defined by the integrated water vapor threshold of 20 mm) were quantified. Precipitation regime partitioning indicated that “cold” precipitation with a significant amount of melting precipitating ice and “warm” rainfall conditions with limited or no ice in the atmospheric column were observed, on average, with similar frequencies, though the cold rainfall fraction had an increasing trend as AR temperature decreased. Rain rates were generally higher for the cold precipitation regime. Precipitating ice cloud and rainfall retrievals indicated a significant correlation between the total ice amounts and the resultant rain rate. Observationally based statistical relations were derived between the boundaries of AR precipitation regions and integrated water vapor amounts and between the total content of precipitating ice and rain rate. No statistically significant differences of AR properties were found for three different cool seasons, which were characterized by differing phases of El Niño–Southern Oscillation.
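The 20 mm integrated water vapour criterion makes AR extent along a satellite crossing a simple run-length problem: find the widest contiguous stretch of the transect where IWV stays at or above the threshold. A minimal sketch with an assumed along-track sample spacing (hypothetical values):

```python
import numpy as np

def ar_width(iwv_mm, spacing_km, threshold=20.0):
    """Width of the longest contiguous run with IWV >= threshold."""
    best = run = 0
    for above in iwv_mm >= threshold:
        run = run + 1 if above else 0
        best = max(best, run)
    return best * spacing_km

iwv = np.array([8, 12, 19, 24, 31, 35, 28, 22, 16, 9], dtype=float)
print(f"AR width ~ {ar_width(iwv, spacing_km=14.0):.0f} km")
```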
APA, Harvard, Vancouver, ISO, and other styles
36

Schwenk, Michael, Linda L. Garland, Gurtej Singh Grewal, Dustin Holloway, Amy Muchna, Jane Mohler, and Bijan Najafi. "Wearable sensor-based balance training in older adult cancer patients with chemotherapy-induced neuropathy: A randomized controlled trial." Journal of Clinical Oncology 33, no. 29_suppl (October 10, 2015): 195. http://dx.doi.org/10.1200/jco.2015.33.29_suppl.195.

Full text
Abstract:
195 Background: Chemotherapy-induced peripheral neuropathy (CIPN) can affect lower extremity joint proprioception, leading to balance deficits and increased fall risk. This study evaluated the effect of a sensor-based exercise program on improving postural control in cancer patients with CIPN. Methods: Twenty-two cancer patients (age 70.3±8.7 years) with objectively confirmed CIPN (vibration perception threshold test, VPT > 25 Volt) were randomized to twice-weekly, 4-week sensor-based training including weight shifting and virtual obstacle crossing with real-time visual feedback of the lower extremities through wearable wireless sensors (intervention group, IG, n = 11) or no intervention (control group, CG, n = 11). Outcome measures included changes in sway of ankle, hip, and center of mass (CoM) in both medio-lateral (ML) and anterior-posterior (AP) directions during 30 seconds of standing in a feet-closed (FC) position (both feet next to each other) with eyes open (EO) and eyes closed (EC), and in a semi-tandem position (big toe by arch of other foot) with EO, at baseline and post-intervention. All assessments were made using validated wearable sensors. Results: VPT score averaged 49.6 ± 26.7 Volt. Post intervention, sway of hip, ankle and CoM (ML) were significantly reduced in the IG compared to the CG during the FC position (p = .010-.022). During the more challenging semi-tandem position, all sway parameters except ankle were significantly reduced (p = .008-.039). Effect sizes were moderate to large (eta squared = .255-.388). Conclusions: This randomized controlled study using a novel wireless sensor-based training demonstrated improvements in balance in CIPN patients by measures of balance that have been related to fall risk. We speculate that the sensor-based training with real-time visual joint position feedback provided participants with enhanced information about joint movements and motor error in order to compensate for deteriorated/lost lower extremity joint proprioception from CIPN. This balance training system can be easily translated to an in-home setting and may decrease fall risk and thus improve cancer patients' quality of life. Clinical trial information: NCT02043834.
APA, Harvard, Vancouver, ISO, and other styles
37

CATRAKIS, HARIS J., ROBERTO C. AGUIRRE, JESUS RUIZ-PLANCARTE, ROBERT D. THAYNE, BRENDA A. McDONALD, and JOSHUA W. HEARN. "Large-scale dynamics in turbulent mixing and the three-dimensional space–time behaviour of outer fluid interfaces." Journal of Fluid Mechanics 471 (November 5, 2002): 381–408. http://dx.doi.org/10.1017/s0022112002002240.

Full text
Abstract:
Experiments have been conducted to investigate turbulent mixing and the dynamics of outer fluid interfaces, i.e. the interfaces between mixed fluid and pure ambient fluid. A novel six-foot-diameter octagonal-tank flow facility was developed to enable the optical imaging of fluid interfaces above the mixing transition, corresponding to fully developed turbulence. Approximately 1000³ whole-field three-dimensional space–time measurements of the concentration field were recorded using laser-induced-fluorescence digital-imaging techniques in turbulent jets at a Reynolds number of Re ∼ 20 000, Schmidt number of Sc ∼ 2000, and downstream distance of ∼ 500 nozzle diameters. Multiple large-scale regions of spatially nearly uniform-concentration fluid are evident in instantaneous visualizations, in agreement with previous findings above the mixing transition. The ensemble-averaged probability density function of concentration is found to exhibit linear dependence over a wide range of concentration thresholds. This can be accounted for in terms of the dynamics of large-scale well-mixed regions. Visualization of the three-dimensional space–time concentration field indicates that molecular mixing of entrained pure ambient fluid is dynamically initiated and accomplished in the vicinity of the unsteady large scales. Examination of the outer interfaces shows that they are dynamically confined primarily near the instantaneous large-scale boundaries of the flow. This behaviour is quantified in terms of the probability density of the location of the outer interfaces relative to the flow centreline and the probability of pure ambient fluid as a function of distance from the centreline. The current measurements show that the dynamics of outer interfaces above the mixing transition is significantly different from the behaviour below the transition, where previous studies have shown that unmixed ambient fluid can extend across a wide range of transverse locations in the flow interior. The present observations of dynamical confinement of the outer interfaces to the unsteady large scales, and considerations of entrainment, suggest that the mechanism responsible for this behaviour must be the coupling of large-scale flow dynamics with the presence of small-scale structures internal to the large-scale structures, above the mixing transition. The dynamics and structure of the outer interfaces across the entire range of space–time scales are quantified in terms of a distribution of generalized level-crossing scales. The outer-interface behaviour determines the mixing efficiency of the flow, i.e. fraction of mixed fluid. The present findings indicate that the large-scale dynamics of the outer interfaces above the mixing transition provides the dominant contribution to the mixing efficiency. This suggests a new way to quantify the mixing efficiency of turbulent flows at high Reynolds numbers.
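The generalized level-crossing scales quantified in this paper can be illustrated in one dimension: threshold a concentration profile and record the distances between successive crossings of that level. A minimal sketch on a synthetic smoothed-noise signal (the actual analysis is three-dimensional and resolves the full range of scales):

```python
import numpy as np

def crossing_scales(c, level, dx):
    """Distances between successive crossings of the level by c(x)."""
    below = c < level
    idx = np.flatnonzero(below[1:] != below[:-1])  # samples just before a crossing
    return np.diff(idx) * dx

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 2001)
c = np.convolve(rng.normal(size=x.size), np.ones(50) / 50.0, mode="same")
scales = crossing_scales(c, level=0.0, dx=x[1] - x[0])
print(f"mean crossing scale = {scales.mean():.3f} (over {scales.size} intervals)")
```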
APA, Harvard, Vancouver, ISO, and other styles
38

Tidwell, V. C., and J. L. Wilson. "Heterogeneity, Permeability Patterns, and Permeability Upscaling: Physical Characterization of a Block of Massillon Sandstone Exhibiting Nested Scales of Heterogeneity." SPE Reservoir Evaluation & Engineering 3, no. 04 (August 1, 2000): 283–91. http://dx.doi.org/10.2118/65282-pa.

Full text
Abstract:
Summary Over 75,000 permeability measurements were collected from a meter-scale block of Massillon sandstone, characterized by conspicuous crossbedding that forms two distinct nested scales of heterogeneity. With the aid of a gas minipermeameter, spatially exhaustive fields of permeability data were acquired at each of five different sample supports (i.e., sample volumes) from each block face. These data provide a unique opportunity to physically investigate the relationship between the multiscale cross-stratified attributes of the sandstone and the corresponding statistical characteristics of the permeability. These data also provide quantitative physical information concerning the permeability upscaling of a complex heterogeneous medium. Here, a portion of the data taken from a single block face cut normal to stratification is analyzed. The results indicate a strong relationship between the calculated summary statistics and the cross-stratified structural features visibly evident in the sandstone sample. Specifically, the permeability fields and semivariograms are characterized by two nested scales of heterogeneity, including a large-scale structure defined by the cross-stratified sets (delineated by distinct bounding surfaces) and a small-scale structure defined by the low-angle cross-stratification within each set. The permeability data also provide clear evidence of upscaling. That is, each calculated summary statistic exhibits distinct and consistent trends with increasing sample support. Among these trends are an increasing mean, decreasing variance, and an increasing semivariogram range. The results also clearly indicate that the different scales of heterogeneity upscale differently, with the small-scale structure being preferentially filtered from the data while the large-scale structure is preserved. Finally, the statistical and upscaling characteristics of individual cross-stratified sets were found to be very similar because of their shared depositional environment; however, some differences were noted that are likely the result of minor variations in the sediment load and/or flow conditions between depositional events. Introduction Geologic materials are inherently heterogeneous because of the depositional and diagenetic processes responsible for their formation. These heterogeneities often impose considerable influence on the performance of hydrocarbon bearing reservoirs. Unfortunately, quantitative characterization and integration of reservoir heterogeneity into predictive models are complicated by two challenging problems. First, the quantity of porous media observed and/or sampled is generally a minute fraction of the reservoir under investigation. This gives rise to the need for models to predict material characteristics at unsampled locations. The second problem stems from technological constraints that often limit the measurement of material properties to sample supports (i.e., sample volumes) much smaller than can be accommodated in current predictive models. This disparity in support requires that measured data be averaged or upscaled to yield effective properties at the desired scale of analysis. The concept of using "soft" geologic information to supplement often sparse "hard" physical data has received considerable attention.1,2 Successful application of this approach requires that some relationship be established between the difficult to measure material property (e.g., permeability) and that of a more easily observable feature of the geologic material.
For example, Davis et al.3 correlated architectural-element mapping with the geostatistical characteristics of a fluvial/interfluvial formation in central New Mexico; Jordan and Pryor4 related permeability controls and reservoir productivity to six hierarchical levels of sand heterogeneity in a fluvial meander belt system; while Istok et al.5 found a strong correlation between hydraulic property measurements and visual trends in the degree of welding of ash flow tuffs at Yucca Mountain, Nevada. Phillips and Wilson6 mapped regions where the permeability exceeds some specified cutoff value and related their dimensions to the correlation length scale by means of threshold-crossing theory. Also, Journel and Alabert7 proposed a spatial connectivity model based on an indicator formalism and conditioned on geologic maps of observable, spatially connected, high-permeability features. The description and quantification of heterogeneity is necessarily related to the issue of scale. It is often assumed that geologic heterogeneity is structured according to a discrete and disparate hierarchy of scales. For example, the hierarchical models proposed by Dagan8 and by Haldorsen9 conveniently classify heterogeneities according to the pore, laboratory, formation, and regional scales. This assumed disparity in scales allows parameter variations occurring at scales smaller than the modeled flow/transport process to be spatially averaged to form effective media properties,10–14 while large-scale variations are treated as a simple deterministic trend.2,15 However, natural media are not always characterized by a large disparity in scales as assumed above;16 but rather, an infinite number of scales may coexist,17–20 leading to a fractal geometry or continuous hierarchy of scales.21
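The nested heterogeneity scales described above appear as structure in the empirical semivariogram, γ(h) = ½·E[(k(x+h) − k(x))²]. A minimal one-dimensional sketch for a regularly spaced transect of log-permeability values (synthetic data, not the Massillon measurements):

```python
import numpy as np

def semivariogram(values, max_lag):
    """Empirical semivariogram gamma(h) for a regularly spaced transect."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

rng = np.random.default_rng(2)
logk = np.cumsum(rng.normal(0.0, 0.05, 500))  # spatially correlated toy transect
gam = semivariogram(logk, max_lag=50)
print("gamma at lags 1, 10, 50:", gam[0], gam[9], gam[49])
```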
APA, Harvard, Vancouver, ISO, and other styles
39

Sanyal, Amit, James M. Heun, Jessica Sweeney, and Clemens Janssen. "Mobile-Health Tool to Improve Care of Patients with Hematological Malignancies." Blood 136, Supplement 1 (November 5, 2020): 35–36. http://dx.doi.org/10.1182/blood-2020-143405.

Full text
Abstract:
INTRODUCTION Adverse effects are common during treatment of hematological malignancies. Treatment toxicities can impact quality of life [1], impose financial hardship and cause cancer-related distress [2]. Symptom monitoring using electronic technology can facilitate early detection of complications [3], reduce symptom burden [4] and cost of care [5], and improve survival [6]. Cancer treatment also increases the risk of mortality from infections such as coronavirus disease 2019 (COVID-19), and routine screening has been recommended [7]. METHODS We developed an application that periodically delivers toxicity questionnaires to patients during treatment. Based on NCI PRO-CTCAE™, the questions are delivered through SMS or e-mail. Patient responses crossing prespecified thresholds trigger automated alerts on a dashboard, resulting in additional interventions as needed. Nature of and time to intervention are tracked. Patient experience is measured using a Likert scale and free-text box. Centers for Disease Control recommended COVID-19 screening questions were incorporated. Finally, a distress thermometer for cancer distress screening has recently been added. The app was offered to patients with hematological cancers in a community-based cancer center. RESULTS Since introduction in April 2020, we have enrolled 37 patients. 9 patients had chronic lymphocytic leukemia, 6 diffuse large B cell, 5 mantle cell, 4 Hodgkin's and 3 follicular lymphoma. 2 each had chronic myelogenous leukemia, multiple myeloma and Richter's syndrome. 1 each had hairy cell leukemia, acute myelogenous leukemia and T cell lymphoma. Median age was 64 years (range 24-85). Patient experience has been favorable: on a scale of 1-5, 85.5% rated the experience as 3 or higher. Median patient engagement, calculated by dividing the number of form completions by the number of days enrolled, was 34.2% (0.9-66.2%). The symptom tracker captured 536 responses. Fatigue (153), no symptoms (152), shortness of breath (57), nausea/vomiting/diarrhea (46) and numbness/tingling (28) were the most common response categories. Of 1107 completed check-ins, 75 triggered flags. There were 2 hospitalizations for neutropenic fever, with the remainder managed as outpatients. Average time between patient-generated response and provider intervention was 90.9 minutes; 88% of follow-ups were completed within 1 business day. The COVID-19 screening module captured 1096 responses, of which 988 were no symptoms. All positive responses (44 diarrhea, 39 cough, 23 shortness of breath and 2 fever) were false positives. The distress thermometer, implemented a week before data cut-off, captured 2 responses, 1 in the physical and 1 in the psychological domain. CONCLUSION We demonstrate the feasibility of electronic capture of treatment toxicities and offer proof of concept that a mobile app can be used for infection screening. Additionally, the quick response time by the care team indicated a high adoption rate. REFERENCES 1. Doorduijn J, B.I., Holt B, Steijaert M, Uyl-de Groot C, Sonneveld P. Self-reported quality of life in elderly patients with aggressive non-Hodgkin's lymphoma treated with CHOP chemotherapy. European Journal of Hematology 2005; 75(2): 116-123. 2. Troy JD, L.S., Samsa GP, Feliciano J, Richhariya A, LeBlanc TW. Patient-reported distress in Hodgkin lymphoma across the survivorship continuum. Supportive Care Cancer 2019; 27(7): 2453-2462. 3. Stover AM, H.S., Deal AM, Stricker CT, Bennett AV, Carr PM, Jansen J, Kottschade LA, Dueck AC, Basch EM. Methods for alerting clinicians to concerning symptom questionnaire responses during cancer care: approaches from two randomized trials (STAR, AFT-39 PRO-TECT). Journal of Clinical Oncology 2018; 36(30 supplement): 158. 4. Mooney KH, B.S., Wong B, Whisenant M, Donaldson G. Automated home monitoring and management of patient-reported symptoms during chemotherapy: results of the symptom care at home RCT. Cancer Medicine 2017; 6(3): 537-546. 5. Barkley R, S.M.-J., Wang J, Blau S, Page RD. Reducing cancer costs through symptom management and triage pathways. Journal of Oncology Practice 2019; 15(2): e91-e97. 6. Denis F, B.E., Septans AL, Urban T, Dueck AC, Letellier C. Two-year survival comparing web-based symptom monitoring vs routine surveillance following treatment for lung cancer. JAMA 2019; 321(3): 306-307. 7. ASCO Special Report: A guide to cancer care delivery during the COVID-19 pandemic. 2020, ASCO: Alexandria, VA. Disclosures Janssen: wellbe Inc.: Current Employment.
APA, Harvard, Vancouver, ISO, and other styles
40

Kahale, Lara A., Assem M. Khamis, Batoul Diab, Yaping Chang, Luciane Cruz Lopes, Arnav Agarwal, Ling Li, et al. "Potential impact of missing outcome data on treatment effects in systematic reviews: imputation study." BMJ, August 26, 2020, m2898. http://dx.doi.org/10.1136/bmj.m2898.

Full text
Abstract:
Abstract Objective: To assess the risk of bias associated with missing outcome data in systematic reviews. Design: Imputation study. Setting: Systematic reviews. Population: 100 systematic reviews that included a group-level meta-analysis with a statistically significant effect on a patient-important dichotomous efficacy outcome. Main outcome measures: Median percentage change in the relative effect estimate when applying each of the following assumptions: four commonly discussed but implausible assumptions (best case scenario, none had the event, all had the event, and worst case scenario) and four plausible assumptions for missing data based on the informative missingness odds ratio (IMOR) approach (IMOR 1.5 (least stringent), IMOR 2, IMOR 3, IMOR 5 (most stringent)); percentage of meta-analyses that crossed the threshold of the null effect for each method; and percentage of meta-analyses that qualitatively changed direction of effect for each method. Sensitivity analyses based on the eight different methods of handling missing data were conducted. Results: 100 systematic reviews with 653 randomised controlled trials were included. When applying the implausible but commonly discussed assumptions, the median change in the relative effect estimate varied from 0% to 30.4%. The percentage of meta-analyses crossing the threshold of the null effect varied from 1% (best case scenario) to 60% (worst case scenario), and 26% changed direction with the worst case scenario. When applying the plausible assumptions, the median percentage change in relative effect estimate varied from 1.4% to 7.0%. The percentage of meta-analyses crossing the threshold of the null effect varied from 6% (IMOR 1.5) to 22% (IMOR 5), and 2% changed direction with the most stringent assumption (IMOR 5). Conclusion: Even when applying plausible assumptions to the outcomes of participants with definite missing data, the average change in the pooled relative effect estimate is substantive, and almost a quarter (22%) of meta-analyses crossed the threshold of the null effect. Systematic review authors should present the potential impact of missing outcome data on their effect estimates and use this to inform their overall GRADE (grading of recommendations assessment, development, and evaluation) ratings of risk of bias and their interpretation of the results.
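The IMOR approach ties the odds of the event among participants lost to follow-up to the odds among those observed. A minimal sketch of the imputation for a single trial arm (the review performs this within each meta-analysis; counts here are hypothetical):

```python
def imputed_events(events, followed, missing, imor):
    """Expected total events in an arm when the odds of the event among the
    `missing` participants are `imor` times the odds among those followed up."""
    p_obs = events / followed
    odds_missing = imor * p_obs / (1.0 - p_obs)
    p_missing = odds_missing / (1.0 + odds_missing)
    return events + missing * p_missing

# Example arm: 30/100 events observed, 20 participants missing, IMOR = 2
print(f"adjusted event count: {imputed_events(30, 100, 20, 2.0):.1f}")  # ~39.2
```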
APA, Harvard, Vancouver, ISO, and other styles
41

Alizon, Samuel, Christian Selinger, Mircea T. Sofonea, Stéphanie Haim-Boukobza, Jean-Marc Giannoli, Laetitia Ninove, Sylvie Pillet, et al. "Epidemiological and clinical insights from SARS-CoV-2 RT-PCR crossing threshold values, France, January to November 2020." Eurosurveillance 27, no. 6 (February 10, 2022). http://dx.doi.org/10.2807/1560-7917.es.2022.27.6.2100406.

Full text
Abstract:
Background The COVID-19 pandemic has led to an unprecedented daily use of RT-PCR tests. These tests are interpreted qualitatively for diagnosis, and the relevance of the test result intensity, i.e. the number of quantification cycles (Cq), is debated because of strong potential biases. Aim We explored the possibility to use Cq values from SARS-CoV-2 screening tests to better understand the spread of an epidemic and to better understand the biology of the infection. Methods We used linear regression models to analyse a large database of 793,479 Cq values from tests performed on more than 2 million samples between 21 January and 30 November 2020, i.e. the first two pandemic waves. We performed time series analysis using autoregressive integrated moving average (ARIMA) models to estimate whether Cq data information improves short-term predictions of epidemiological dynamics. Results Although we found that the Cq values varied depending on the testing laboratory or the assay used, we detected strong significant trends associated with patient age, number of days after symptoms onset or the state of the epidemic (the temporal reproduction number) at the time of the test. Furthermore, knowing the quartiles of the Cq distribution greatly reduced the error in predicting the temporal reproduction number of the COVID-19 epidemic. Conclusion Our results suggest that Cq values of screening tests performed in the general population generate testable hypotheses and help improve short-term predictions for epidemic surveillance.
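The time-series step can be reproduced in outline: fit an ARIMA model for the reproduction number with the daily Cq quartiles as exogenous regressors and compare forecasts against a hold-out window. A minimal sketch with statsmodels on synthetic data (the model order and the synthetic Cq-Rt link are placeholders, not the authors' specification):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 200
rt = 1.0 + 0.3 * np.sin(np.arange(n) / 20.0) + rng.normal(0, 0.05, n)
cq = pd.DataFrame({  # synthetic daily Cq quartiles, loosely anti-correlated with Rt
    "q25": 22 - 2 * rt + rng.normal(0, 0.3, n),
    "q50": 26 - 2 * rt + rng.normal(0, 0.3, n),
    "q75": 30 - 2 * rt + rng.normal(0, 0.3, n),
})

fit = ARIMA(rt[:-14], exog=cq[:-14], order=(2, 0, 1)).fit()
pred = fit.forecast(steps=14, exog=cq[-14:])
print("14-day-ahead RMSE:", np.sqrt(np.mean((pred - rt[-14:]) ** 2)))
```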
APA, Harvard, Vancouver, ISO, and other styles
42

Fiori, Robyn, Vickal V. Kumar, David H. Boteler, and Michael B. Terkildsen. "Occurrence rate and duration of space weather impacts to high frequency radio communication used by aviation." Journal of Space Weather and Space Climate, May 20, 2022. http://dx.doi.org/10.1051/swsc/2022017.

Full text
Abstract:
High frequency (HF) radio wave propagation is sensitive to space weather induced ionospheric disturbances that result from enhanced photoionization and energetic particle precipitation. Recognizing the potential risk to HF radio communication systems used by the aviation industry, as well as potential impacts to GNSS navigation and the risk of elevated radiation levels, the International Civil Aviation Organization (ICAO) initiated development of a space weather advisory service. For HF systems, this service specifically identifies shortwave fadeout, auroral absorption, polar cap absorption, and post storm maximum useable frequency depression (PSD) as phenomena impacting HF radio communication, and specifies moderate and severe event thresholds to describe event severity. This paper examines the occurrence rate and duration of events crossing the moderate and severe thresholds. Shortwave fadeout was evaluated based on thresholds in the solar X-ray flux. Analysis of 40 years of solar X-ray flux data showed that moderate and severe level solar X-ray flares were observed, on average, 123 and 5 times per 11-year solar cycle, respectively. The mean event duration was 68 minutes for moderate level events and 132 minutes for severe level events. Auroral absorption events crossed the moderate threshold for 40 events per solar cycle, with a mean event duration of 5.1 hours. The severe threshold was crossed for 3 events per solar cycle, with a mean event duration of 12 hours. Polar cap absorption had the longest mean duration, at ~8 hours for moderate events and 1.6 days for severe events; on average, 24 moderate and 13 severe events were observed per solar cycle. Moderate and severe thresholds for shortwave fadeout, auroral absorption, and polar cap absorption were used to determine the expected impacts on HF radio communication. Results for polar cap absorption and shortwave fadeout were consistent with each other, but the expected impact for auroral absorption was shown to be 2-3 times higher. Analysis of 22 years of ionosonde data showed moderate and severe PSD events occurred, on average, 200 and 56 times per 11-year solar cycle, respectively. The mean event duration was 5.5 hours for moderate level events and 8.5 hours for severe level events. During solar cycles 22 and 23, HF radio communication was expected to experience moderate or severe impacts due to the ionospheric disturbances caused by space weather a maximum of 163 and 78 days per year, respectively, due to the combined effect of absorption and PSD. The distribution of events is highly non-uniform with respect to the solar cycle: 70% of moderate or severe events were observed during solar maximum compared to solar minimum.
APA, Harvard, Vancouver, ISO, and other styles
43

Lindinger, Michael Ivan. "Ground Flaxseed – How Safe is it for Companion Animals and for Us?" Veterinary Science Research 1, no. 1 (September 30, 2019). http://dx.doi.org/10.30564/vsr.v1i1.1158.

Full text
Abstract:
EFSA released the 89-page Scientific Opinion "Evaluation of the health risks related to the presence of cyanogenic glycosides in foods other than raw apricot kernels". This opinion, and the ensuing media coverage, left uncertainty in the minds of consumers, feed and supplement manufacturers, and flaxseed producers about how much ground flaxseed can safely be consumed without crossing the threshold of cyanide toxicity. This editorial updates the science and tries to bring clarity to the questions "how much flaxseed can I safely feed my dog, cat or horse on a daily basis?" and "how much can I safely eat?" The great majority of ground flaxseed products have a cyanogenic glycoside content of less than 200 mg / kg seed. For a person consuming 30 grams of such flaxseed, the average peak blood cyanide concentration will be about 5 µmole / L, much less than the toxic threshold value of 20 to 40 µmole / L favoured by EFSA. Thus, as much as 120 grams of crushed / ground flaxseed can be consumed by a 70 kg adult before a toxic threshold of 40 µmole / L is reached (up to 1.7 grams ground flaxseed / kg body weight). The toxic threshold of cyanide for dogs is 2- to 4-fold greater than for humans, and unknown for cats and horses. The daily serving amount for dogs and cats is about 0.23 grams / kg body mass, which will result in blood cyanide well below the toxic threshold. The highest recommended daily serving amount for horses is 454 grams per day, or 0.8 to 2 grams per kg body mass depending on the mass of the horse. This amount for horses should not be exceeded.
APA, Harvard, Vancouver, ISO, and other styles
44

Sanz-Leon, Paula, Nathan J. Stevenson, Robyn M. Stuart, Romesh G. Abeysuriya, James C. Pang, Stephen B. Lambert, Cliff C. Kerr, and James A. Roberts. "Risk of sustained SARS-CoV-2 transmission in Queensland, Australia." Scientific Reports 12, no. 1 (April 15, 2022). http://dx.doi.org/10.1038/s41598-022-10349-y.

Full text
Abstract:
Abstract We used the agent-based model Covasim to assess the risk of sustained community transmission of SARS-CoV-2/COVID-19 in Queensland (Australia) in the presence of high-transmission variants of the virus. The model was calibrated using the demographics, policies, and interventions implemented in the state. Then, using the calibrated model, we simulated possible epidemic trajectories that could eventuate due to leakage of infected cases with high-transmission variants during a period without recorded cases of locally acquired infections, known in Australian settings as "zero community transmission". We also examined how the threat of new variants reduces given a range of vaccination levels. Specifically, the model calibration covered the first-wave period from early March 2020 to May 2020. Predicted epidemic trajectories were simulated from early February 2021 to late March 2021. Our simulations showed that one infected agent with the ancestral (A.2.2) variant has a 14% chance of crossing a threshold of sustained community transmission (SCT) (i.e., > 5 infections per day for more than 3 days in a row), assuming no change in the prevailing preventative and counteracting policies. However, one agent carrying the alpha (B.1.1.7) variant has a 43% chance of crossing the same threshold, a threefold increase with respect to the ancestral strain, while one agent carrying the delta (B.1.617.2) variant has a 60% chance of crossing the same threshold, a fourfold increase with respect to the ancestral strain. The delta variant is 50% more likely to trigger SCT than the alpha variant. Doubling the average number of daily tests from ∼ 6,000 to 12,000 results in a decrease of this SCT probability from 43 to 33% for the alpha variant. However, if the delta variant is circulating, we would need an average of 100,000 daily tests to achieve a similar decrease in SCT risk. Further, achieving full vaccination coverage of 70% of the adult population, with a vaccine with 70% effectiveness against infection, would decrease the probability of SCT from a single seed of alpha from 43 to 20%, on par with the ancestral strain in a naive population. In contrast, for the same vaccine coverage and effectiveness, the probability of SCT from a single seed of delta would decrease from 62 to 48%, a risk slightly above that of the alpha variant in a naive population. Our results demonstrate that the introduction of even a small number of people infected with high-transmission variants dramatically increases the probability of sustained community transmission in Queensland. Until very high vaccine coverage is achieved, a swift implementation of policies and interventions, together with high quarantine adherence rates, will be required to minimise the probability of sustained community transmission.
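The paper's sustained-community-transmission criterion (more than 5 infections per day, more than 3 days in a row) is straightforward to apply to any simulated trajectory. A minimal sketch estimating an SCT probability with a toy branching process (Covasim itself is an agent-based model; this only illustrates the threshold test):

```python
import numpy as np

rng = np.random.default_rng(4)

def has_sct(daily_infections, level=5, min_run=3):
    """True if infections exceed `level` on more than `min_run` consecutive days."""
    run = best = 0
    for x in daily_infections:
        run = run + 1 if x > level else 0
        best = max(best, run)
    return best > min_run

def toy_outbreak(r_eff, horizon=60):
    cases, trace = 1.0, []
    for _ in range(horizon):
        cases = rng.poisson(cases * r_eff)
        trace.append(cases)
        if cases == 0:
            break
    return trace

prob = np.mean([has_sct(toy_outbreak(r_eff=1.3)) for _ in range(2000)])
print(f"P(SCT) for this toy setting: {prob:.2f}")
```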
APA, Harvard, Vancouver, ISO, and other styles
45

Khasawneh, Mohammad Ali, Aslam Ali Al-Omari, and Rabea Al-Jarazi. "Effect of On-Street Parking and Pedestrian Crossing on Through Traffic in Jordan." Open Transportation Journal 17, no. 1 (January 13, 2023). http://dx.doi.org/10.2174/18744478-v17-e230113-2022-42.

Full text
Abstract:
Background: With the rapid increase in population and in automobile use around the world, especially in developing countries such as Jordan, related traffic problems have become more and more complex. Parking is one of the major problems created by increasing road traffic. Both on-street parking and pedestrian crossing are important components of the urban transport system. On-street parking is an important component of the parking system: because it occupies roadway resources, it can significantly impact traffic performance and safety. The failure to provide an adequate number of parking areas within urban central business districts (CBDs) and the lack of off-street facilities in urban neighborhood commercial areas both result in increased on-street parking and disturbance of the traffic stream. The management of on-street parking is one of the main parameters in traffic management. In developing countries, as the number of vehicles and parking demand have increased significantly in recent years, on-street parking-related concerns are no longer confined to the city center; they extend throughout the whole urban region. Furthermore, the correlation between on-street parking and traffic safety is still a controversial issue. The findings of this study help to develop a better understanding of both on-street parking and pedestrian crossing at mid-block locations and help to plan and design proper transportation facilities on urban streets. The model of this study can give quantitative results regarding the influence of on-street parking on through traffic operations. The findings can support municipal engineering policy decisions for on-street parking hour permits. As delay on a street segment and on the road network can be calculated, policy makers can set a maximum tolerable threshold in the delay profile to decide when street parking is permitted, on which network path, and for how many vehicles. A time-varying parking fee policy can be proposed according to the actual delay caused by parked vehicles across the entire network. Many engineers are concerned about the increase in the number of accidents associated with on-street parking. In addition to these issues concerning the relationship of on-street parking with through traffic movements, the pedestrian is one of the important components of the urban transportation system and is vulnerable at unprotected mid-block locations under high traffic conditions. At unprotected mid-block locations, some vehicles may yield to pedestrians who are already at the crosswalk location. Moreover, because of poor construction of separated facilities and roadside development, especially in developing countries, pedestrians usually cross the road at unprotected mid-block locations under high traffic conditions. Furthermore, the use of most major streets in developing countries such as Jordan is not properly monitored and managed, especially with regard to on-street parking and pedestrian crossing, thereby reducing the capacity of roadways and probably causing accidents. This research could provide insights for transportation agencies as it relates to traffic operations and safety, especially in developing countries. Objective: 1. Investigate the effect of on-street parking on through traffic in terms of delay. 2. Develop a model that can estimate the effect of on-street parking on through traffic. 3. Investigate the effect of pedestrian crossing on travel time. Method: Data were collected using a video camera for a total of 37 street segments from three major cities in Jordan during off-peak hours, and dummy regression was used for analysis. Results and Discussion: Results revealed that on-street parking maneuvers (in/out) and pedestrian forced gaps can significantly influence through traffic characteristics of the Jordanian traffic network. Traffic volume was found to fall gradually, while average delay time was found to rise gradually, with an increase in the number of parking maneuvers (in/out). The average percent reduction in maximum traffic volume at one lane was 28.7%, more than that at two lanes (18.6%), under high parking maneuver conditions. The average delay time caused by heavy vehicles, angle parking, streets without a median, and streets in central business district (CBD) commercial areas was more than that caused by passenger vehicles, parallel parking, streets with a median, and streets in non-CBD commercial areas by 27.8%, 31.7%, 29.7%, and 10.0%, respectively. Vehicular speeds dropped significantly, by 32.0% on average, with pedestrian crossing compared to no pedestrian crossing at the mid-block location. A prediction model using dummy regression was proposed and shown to be able to predict average delay time. Conclusion: 1. On-street parking maneuvers can significantly influence the maximum traffic volume of street segments in Jordan. 2. Traffic volume falls gradually with an increase in the number of parking maneuvers (in/out). 3. The average percent reduction in maximum traffic volume at one lane (28.7%) was more than that at two lanes (18.6%) where parking maneuvers are frequent. Of all the factors that can impede the flow of traffic on the roadway, on-street parking can be considered the most important. The reduction of roadway width by shop owners who sometimes occupy one lane of the road also affects traffic operation and reduces traffic volume. 4. Average delay time rises gradually with an increase in the number of parking maneuvers (in/out). 5. The average delay time caused by heavy vehicles, angle parking, streets without a median, and streets in CBD commercial areas was more than that caused by passenger vehicles, parallel parking, streets with a median, and streets in non-CBD commercial areas by 27.8%, 31.7%, 29.7%, and 10.0%, respectively. On-street parking maneuvers may also contribute to temporary bottlenecks in moving traffic, which may cause further operational problems such as congestion and accidents. 6. Vehicular speeds were implicitly affected by pedestrian crossing compared to locations without pedestrian crossing; there was a significant drop in speed, reaching an average of 32.0%. 7. Compared with the effects of all the independent variables, the number of parking maneuvers has the greatest influence on the estimated average delay time. 8. Prediction models using dummy regression were proposed and shown to be able to predict average delay time if the provided inputs are within the data range used in developing the models.
APA, Harvard, Vancouver, ISO, and other styles
46

Tang, Oushan, Haoliang Zhou, Caidi Yuan, Yinhong Cheng, and Jin Lv. "Effect of implantation site of the His bundle pacing leads on pacing parameters: a single-center experience." BMC Cardiovascular Disorders 21, no. 1 (February 24, 2021). http://dx.doi.org/10.1186/s12872-020-01842-1.

Full text
Abstract:
Abstract Background His bundle pacing (HBP) is a promising approach to achieve physiological pacing, but its efficacy and long-term effects require further validation. In the current study, we aimed to investigate the effect of the HBP lead location on pacing parameters. Methods 2D echocardiography imaging was performed after successful implantation, according to which the patients were divided into group A (His lead tips at the atrial side) and group B (His lead tips at the ventricular side). The capture thresholds, sensing values, and H-V intervals of the two groups were compared. Results Thirteen patients were in group A and 16 patients in group B. The average capture thresholds during the operation, 1 month after, and 1 year after were 1.20 ± 0.34, 0.69 ± 0.29, and 0.92 ± 0.80 V/0.5 ms for group A and 1.14 ± 0.43, 0.81 ± 0.39, and 0.98 ± 0.59 V/0.5 ms for group B, respectively. The difference between the two groups was not significant. The threshold values in both groups decreased significantly at 1 month and increased slightly at 1 year. The sensing values of group A were 1.87 ± 0.82, 1.95 ± 0.76, and 1.88 ± 0.75 mV, while those of group B were 4.53 ± 1.37, 4.69 ± 1.38, and 4.59 ± 1.42 mV. The difference among the three time points was not significant; however, the sensing values in group A were consistently and significantly lower than those in group B. The H-V interval in group A was significantly longer than that in group B. Conclusions The implantation site of HBP leads has a significant effect on sensing values, in that His leads crossing the tricuspid annulus toward the ventricle are associated with higher sensing values compared with a more proximal location. Meanwhile, lead location has no evident effect on capture thresholds, which improve significantly shortly after the operation.
APA, Harvard, Vancouver, ISO, and other styles
47

Podmetin, P., T. Y. Burak, I. N. Kochanov, and A. L. Kaledin. "P5747 The effect of additional intracoronary papaverine administration during hyperemia with intravenous adenosine triphosphate infusion, on fractional flow reserve values." European Heart Journal 40, Supplement_1 (October 1, 2019). http://dx.doi.org/10.1093/eurheartj/ehz746.0687.

Full text
Abstract:
Abstract Introduction Fractional flow reserve (FFR) measurement requires the achievement of steady-state maximum hyperemia. One of the main agents used for the induction of hyperemia is adenosine triphosphate (ATP). But, in some cases, hyperemia may be insufficient, which leads to an underestimation of the true value of the FFR. Purpose To determine the effect of additional intracoronary papaverine administration during hyperemia with intravenous ATP infusion on FFR values in a group of patients with borderline values. Methods A total of 165 measurements of FFR were performed in 119 patients. Intravenous infusion of ATP 140 μg/kg/min was used in all patients. In the group of patients with borderline FFR values (28 pts, 0.79–0.86), during achieved ongoing hyperemia, papaverine was additionally administered intracoronary (20 mg for LCA and 12 mg for RCA) with a reassessment of FFR values. The changes in FFR values and hemodynamic parameters were determined. Results The average FFR during hyperemia with ATP was 0.82±0.02. After additional administration of papaverine, a significant decrease in the mean FFR to 0.79±0.03 (p<0.001) occurred. A decrease in the FFR value by 0.03 or more was noted in 12 patients (43%), a decrease by 0.01–0.02 in 12 patients (43%), and no change in 4 patients (14%). In 15 patients (53%), the change in FFR led to a crossing of the threshold value of 0.80 and a change in treatment strategy. With intravenous infusion of ATP, systolic blood pressure (BP) decreased by 10.6% (132 vs 118 mm Hg, p<0.001) and mean BP by 12% (101 vs 88 mm Hg, p<0.001) compared with baseline. With additional administration of papaverine, systolic BP decreased by a further 12% (to 104 mm Hg, p<0.001) and mean BP by a further 11% (to 78 mm Hg, p<0.001). Changes in blood pressure (rest / ATP / papaverine): systolic BP 132±22 / 118±23 / 104±19 mm Hg; mean BP 100±13 / 88±14 / 79±11 mm Hg. [Figure: Changes in the values of FFR.] Conclusions Additional intracoronary papaverine administration during hyperemia with intravenous adenosine triphosphate infusion led to a decrease in FFR values of 0.02 or more in 68% of cases. This may indicate insufficiency of the initial hyperemia induced by a single vasodilator and require a combination of pharmacological agents to achieve lower and more accurate values. At borderline values of FFR obtained by induction of hyperemia with a single hyperemic agent (ATP), the additional administration of a second hyperemic agent (papaverine) led in 53% of cases to a crossing of the threshold value of 0.80 and a change in treatment strategy.
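The clinically decisive step here is whether the remeasured FFR crosses the 0.80 cut-off. A minimal sketch of that bookkeeping over paired measurements (values hypothetical, within the borderline 0.79–0.86 band):

```python
# (ATP-only FFR, FFR after added intracoronary papaverine) per lesion
pairs = [(0.84, 0.80), (0.82, 0.79), (0.81, 0.81), (0.83, 0.78), (0.80, 0.77)]

CUTOFF = 0.80  # values at or below typically change treatment strategy

crossed = sum(1 for atp, pap in pairs if atp > CUTOFF >= pap)
dropped = sum(1 for atp, pap in pairs if atp - pap >= 0.02)
print(f"crossed 0.80 after papaverine: {crossed}/{len(pairs)}")
print(f"decrease of 0.02 or more: {dropped}/{len(pairs)}")
```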
APA, Harvard, Vancouver, ISO, and other styles
48

Procop, Gary W., Marion Tuohy, Christine Ramsey, Daniel D. Rhoads, Brian P. Rubin, and Richard Figler. "Asymptomatic Patient Testing After 10:1 Pooling Using the Xpert Xpress SARS-CoV-2 Assay." American Journal of Clinical Pathology, January 5, 2021. http://dx.doi.org/10.1093/ajcp/aqaa273.

Full text
Abstract:
Abstract Objectives Pool testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) preserves testing resources at the risk of missing specimens through specimen dilution. Methods To determine whether SARS-CoV-2 specimens would be missed after 10:1 pooling, we identified 10 specimens with midrange (ie, 25-34 cycles) and 10 with late (ie, >34-45 cycles) crossing threshold (Ct) values and tested these both neat and after 10:1 pooling. Final test results and Ct changes were compared. Results Overall, 17 of 20 specimens that contained SARS-CoV-2 were detected after 10:1 pooling with the Xpert Xpress SARS-CoV-2 Assay (Cepheid), rendering an 85% positive percentage of agreement. All 10 of 10 specimens with an undiluted Ct in the mid-Ct range were detected after 10:1 pooling, in contrast to 7 of 10 with an undiluted Ct in the late-Ct range. The overall Ct difference between the neat testing and the 10:1 pool was 2.9 cycles for the N2 gene target and 3 cycles for the E gene target. The N2 gene reaction was more sensitive than the E gene reaction, detecting 16 of 20 positive specimens after 10:1 pooling compared with 9 of 20 specimens. Conclusions An 85% positive percentage of agreement was achieved, with only specimens with low viral loads being missed following 10:1 pooling. The average impact on both reverse transcription polymerase chain reactions within this assay was about 3 cycles.
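The ~3-cycle shift is close to what ideal 10-fold dilution predicts: with near-perfect PCR efficiency the template doubles each cycle, so recovering a 10-fold loss costs log2(10) ≈ 3.32 cycles. A quick check of that arithmetic (the efficiency assumption is ours, not the paper's):

```python
import math

def expected_ct_shift(dilution_factor, base=2.0):
    """Extra cycles needed to offset a dilution, assuming the target
    multiplies by `base` each cycle (base 2 = 100% efficiency)."""
    return math.log(dilution_factor, base)

print(f"ideal 10:1 pooling shift: {expected_ct_shift(10):.2f} cycles")
# The observed shifts (~2.9 for N2, ~3.0 for E) sit just under this ideal value.
```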
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Han, Bin Wang, Dandan Liu, and Stephen Grigg. "Investigation of delta-t mapping source location technique based on the arrival of A0 modes." e-Journal of Nondestructive Testing 28, no. 1 (January 2023). http://dx.doi.org/10.58286/27617.

Full text
Abstract:
The main assumptions made for traditional Time of Arrival (TOA) source location techniques are a constant wave speed and a straight wave path between the acoustic emission (AE) source and the sensors. However, because of material inhomogeneity and structural complexity, wave velocities may vary in different directions and a direct wave path may be very difficult to achieve. To address these problems, researchers developed the delta-T mapping source location technique, which has been shown to have good accuracy for source location in aerospace structures because complex geometric features are accounted for when training the mapping data and accurate wave speed data are not required. However, this technique relies on identifying the arrival time of the S0 (extensional) Lamb mode, which may not be distinguishable from background noise in the case of low-amplitude AE sources such as corrosion or large source-to-sensor distances. As shown in Figure 1, source location accuracy in a corrosion test on a simple plate was greatly improved after the velocity of the A0 (flexural) Lamb mode was used in the source location algorithm. Therefore, identifying the arrival time of A0 modes, and building delta-T maps from it, is a more feasible way to solve the problem. Experiments have been carried out with an artificial AE source on a thin complex steel plate. A threshold crossing method based on wavelet coefficients was used to estimate the arrival time of A0 modes. The average error (3.9 mm) of the source locations predicted by delta-T mapping based on A0 arrival was larger than that of S0 arrival (1.8 mm), which can be explained by the difficulty of accurately identifying the arrival of A0 modes.
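A minimal sketch of the arrival-picking idea described above: compute wavelet coefficients at a scale tuned to the A0 band and take the first sample whose magnitude crosses a noise-derived threshold. The Morlet wavelet, single-scale choice, and k-sigma threshold rule here are assumptions for illustration, not the authors' exact settings:

```python
import numpy as np
import pywt  # PyWavelets

def a0_arrival_time(signal: np.ndarray, fs: float,
                    scale: float = 32.0, k: float = 5.0) -> float:
    """Estimate the A0 arrival as the first time the wavelet-coefficient
    magnitude at one scale exceeds k times the pre-trigger noise level."""
    coefs, _ = pywt.cwt(signal, scales=[scale], wavelet="morl",
                        sampling_period=1.0 / fs)
    env = np.abs(coefs[0])
    noise = env[: len(env) // 10]        # assume the first 10% is pre-arrival noise
    threshold = noise.mean() + k * noise.std()
    idx = np.argmax(env > threshold)     # first index over threshold (0 if none)
    return idx / fs

# Usage (hypothetical waveform sampled at 1 MHz):
# t0 = a0_arrival_time(waveform, fs=1e6)
```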
APA, Harvard, Vancouver, ISO, and other styles
50

"Real-Time Anomaly Detection using Average Level-Crossing Rate." International Journal of Innovative Technology and Exploring Engineering 9, no. 4 (February 10, 2020): 2693–98. http://dx.doi.org/10.35940/ijitee.d1863.029420.

Full text
Abstract:
Vibration data collected from piezoelectric sensors serve as a means for detecting faults in machines that have rotating parts. In traditional condition monitoring systems, the sensor output, sampled at the Nyquist rate, is stored for fault analysis. The massive amount of data makes the analysis very difficult, and standard methods adopt very complex procedures for anomaly detection. The proposed system works on the analog output of the sensor and does not require conventional steps such as sampling, feature extraction, classification, or computation of the spectrum. It is a simple system that performs real-time detection of anomalies in the bearing of a machine using vibration signals. Faults in machines usually increase the frequency of the vibration data, and in some situations the amplitude of the signal also changes. An increase in amplitude or frequency leads to a corresponding increase in the level-crossing rate, a parameter indicating the rate of change of a signal. Based on the percentage increase in the average value of the level-crossing rate (ALCR), a suitable warning signal can be issued. The method does not require data from a faulty machine to set the thresholds. The proposed algorithm has been tested with standard data sets. There is a clear distinction between the ALCR values of normal and faulty machines, which has been used to release accurate indications about the fault. If the noise conditions do not vary much, pre-processing of the input signal is not needed. The vibration signals acquired with faulty bearings have ALCR values ranging from 3.48 times to 10.71 times the average ALCR value obtained with normal bearings. Hence the proposed system offers bearing fault detection with 100% accuracy.
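A minimal digital analogue of the ALCR scheme just described (the paper operates directly on the analog sensor output, so the sampling, window length, reference level, and alarm factor below are assumptions): count how often the signal crosses a reference level per second, average that rate over windows, and flag a fault when it rises well above the healthy baseline.

```python
import numpy as np

def level_crossing_rate(x: np.ndarray, fs: float, level: float = 0.0) -> float:
    """Crossings of `level` per second, counted as sign changes of (x - level)."""
    s = np.signbit(x - level)
    crossings = np.count_nonzero(s[1:] != s[:-1])
    return crossings * fs / len(x)

def alcr(x: np.ndarray, fs: float, window_s: float = 1.0,
         level: float = 0.0) -> float:
    """Average level-crossing rate over non-overlapping fixed-length windows."""
    n = int(window_s * fs)
    rates = [level_crossing_rate(x[i:i + n], fs, level)
             for i in range(0, len(x) - n + 1, n)]
    return float(np.mean(rates))

def is_anomalous(x: np.ndarray, fs: float, baseline_alcr: float,
                 factor: float = 3.0) -> bool:
    """Flag a fault when ALCR exceeds `factor` times the healthy baseline
    (the paper reports faulty-bearing ALCR at 3.48x-10.71x normal)."""
    return alcr(x, fs) > factor * baseline_alcr
```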
APA, Harvard, Vancouver, ISO, and other styles