Journal articles on the topic 'Summary parametric filter sensitivity'

To see the other types of publications on this topic, follow the link: Summary parametric filter sensitivity.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Summary parametric filter sensitivity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Xiang, Feifei, Zhongke Xiang, and Wenming Cheng. "Structure Optimization of Air Filter Based on Parametric Sensitivity." Wuhan University Journal of Natural Sciences 24, no. 3 (May 14, 2019): 271–76. http://dx.doi.org/10.1007/s11859-019-1396-4.

2

Lian, J., B. He, and J. Hori. "Cortical Potential Imaging of Brain Electrical Activity by Means of Parametric Projection Filter." Methods of Information in Medicine 43, no. 01 (2004): 66–69. http://dx.doi.org/10.1055/s-0038-1633837.

Abstract:
Summary Objectives: The objective of this study was to explore suitable spatial filters for inverse estimation of cortical potentials from the scalp electroencephalogram. The effect of incorporating noise covariance into inverse procedures was examined by computer simulations and tested in a human experiment. Methods: The parametric projection filter, which allows inverse estimation in the presence of information on the noise, was applied to an inhomogeneous three-concentric-sphere model under various noise conditions in order to estimate the cortical potentials from the scalp potentials. The method for determining the optimum regularization parameter, which can be applied to parametric inverse techniques, is also discussed. Results: A human visual evoked potential experiment was carried out to examine the performance of the proposed restoration method. The parametric projection filter gave a more localized inverse solution of the cortical potential distribution than the truncated SVD and Tikhonov regularization. Conclusion: The present simulation results suggest that incorporating information on the noise covariance allows better estimation of cortical potentials than inverse solutions without knowledge of the noise covariance, when the correlation between the signal and noise is low.
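Since the abstract compares the parametric projection filter against Tikhonov regularization, a minimal Python sketch of the Tikhonov baseline may help fix ideas. The forward matrix, dimensions, and noise level below are invented for illustration; the paper's filter additionally incorporates the noise covariance, which this baseline does not.

```python
import numpy as np

def tikhonov_inverse(A, y, lam):
    """Tikhonov-regularized estimate of x in y = A x + noise:
    x_hat = (A^T A + lam * I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# toy forward model: 64 scalp electrodes, 128 cortical nodes (invented)
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = rng.standard_normal(128)
y = A @ x_true + 0.1 * rng.standard_normal(64)
x_hat = tikhonov_inverse(A, y, lam=1.0)  # lam plays the regularization-parameter role
```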
3

Agrawal, Anil K., and Zhou Xu. "Parametric Sensitivity of Ground Motion Pulse Filter for Response Control of Base-Isolated Buildings." Journal of Earthquake Engineering 13, no. 4 (May 4, 2009): 407–25. http://dx.doi.org/10.1080/13632460902837694.

4

Bärenbold, Oliver, Amadou Garba, Daniel G. Colley, Fiona M. Fleming, Rufin K. Assaré, Edridah M. Tukahebwa, Biruck Kebede, et al. "Estimating true prevalence of Schistosoma mansoni from population summary measures based on the Kato-Katz diagnostic technique." PLOS Neglected Tropical Diseases 15, no. 4 (April 5, 2021): e0009310. http://dx.doi.org/10.1371/journal.pntd.0009310.

Abstract:
Background The prevalence of Schistosoma mansoni infection is usually assessed by the Kato-Katz diagnostic technique. However, Kato-Katz thick smears have low sensitivity, especially for light infections. Egg count models fitted on individual-level data can adjust for the infection intensity-dependent sensitivity and estimate the ‘true’ prevalence in a population. However, application of these models is complex and there is a need for adjustments that can be made without modeling expertise. This study provides estimates of the ‘true’ S. mansoni prevalence from population summary measures of observed prevalence and infection intensity, using extensive simulations parametrized with data from different settings in sub-Saharan Africa. Methodology An individual-level egg count model was applied to Kato-Katz data to determine the S. mansoni infection intensity-dependent sensitivity for various sampling schemes. Observations in populations with varying forces of transmission were simulated, using standard assumptions about the distribution of worms and their mating behavior. Summary measures such as the geometric mean infection, the arithmetic mean infection, and the observed prevalence of the simulations were calculated, and parametric statistical models were fitted to the summary measures for each sampling scheme. For validation, the simulation-based estimates were compared with an observational dataset not used to inform the simulation. Principal findings Overall, the sensitivity of Kato-Katz in a population varies with the mean infection intensity. Using a parametric model, which takes into account different sampling schemes varying from a single Kato-Katz slide to triplicate slides over three days, both geometric and arithmetic mean infection intensities improve the estimation of sensitivity. The relation between observed and ‘true’ prevalence is remarkably linear, and triplicate slides per day on three consecutive days ensure close to perfect sensitivity. Conclusions/significance Estimation of the ‘true’ S. mansoni prevalence is improved by taking into account the geometric or arithmetic mean infection intensity in a population. We supply parametric functions and corresponding parameter estimates for calculating the ‘true’ prevalence under sampling schemes of up to three days with triplicate Kato-Katz thick smears per day.
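The abstract's parametric observed-to-'true' prevalence mapping can be illustrated with a toy Python sketch. The quadratic functional form, parameter values, and data below are assumptions for illustration only; the paper supplies the actual functions and fitted parameters per sampling scheme.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# hypothetical near-linear form for the observed-vs-true prevalence relation
def observed_from_true(p_true, a, b):
    return a * p_true + b * p_true**2

p_true = np.linspace(0.0, 0.8, 50)
p_obs = observed_from_true(p_true, 0.7, 0.2)      # stand-in for simulation output
(a, b), _ = curve_fit(observed_from_true, p_true, p_obs)

# invert the fitted relation: the 'true' prevalence consistent with an
# observed prevalence of 35% under this (hypothetical) sampling scheme
p_est = brentq(lambda p: observed_from_true(p, a, b) - 0.35, 0.0, 1.0)
print(round(p_est, 3))
```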
5

Xu, Fugang, Mengren Xuan, Zixiang Ben, Wenjuan Shang, and Guangran Ma. "Surface enhanced Raman scattering analysis with filter-based enhancement substrates: A mini review." Reviews in Analytical Chemistry 40, no. 1 (January 1, 2021): 75–92. http://dx.doi.org/10.1515/revac-2021-0126.

Abstract:
Abstract Surface-enhanced Raman scattering (SERS) is a powerful analytical tool with high sensitivity, unique specificity, and promising applications in various branches of analytical chemistry. Alongside the ingenious enhancement substrates fabricated for laboratory research, the development of simple, flexible, and cost-effective substrates is also of great importance for promoting the application of SERS in practical analysis. Recently, paper and filter membranes used as supports for flexible SERS substrates have received considerable attention. Paper-based SERS substrates have been reviewed, but no summary of filter-based SERS substrates is available. Compared with paper, filter membranes have unique advantages in mechanical robustness, diverse composition, and tunable pore size. These characteristics give filter-based substrates great advantages for practical SERS analysis, including simple and low-cost substrate preparation and high efficiency in preconcentration, separation, and detection procedures. Filter-based substrates have therefore shown great promise for SERS analysis in environmental monitoring and food safety, with high sensitivity and efficiency. As more and more work has emerged, it is necessary to summarize the state of this research topic. Here, research on filter-based SERS analysis over the past eight years is summarized. A short introduction presents the background, and the brief history of filter-based substrates is then introduced. After that, the preparation of filter-based substrates and the role of the filter are summarized. Then, the application of filter-based SERS substrates in analysis is presented. Finally, the challenges and perspectives on this topic are discussed.
6

Heidrich, G., C. O. Sahlmann, U. Siefker, H. Luig, C. Werner, E. Brunner, J. Meller, and M. Schünemann. "Improvement of tomographic reconstruction in bone SPECT." Nuklearmedizin 45, no. 01 (2006): 35–40. http://dx.doi.org/10.1055/s-0038-1623932.

Abstract:
Summary Aim: To compare iterative reconstruction and filtered backprojection in the reconstruction of bone SPECT for the diagnosis of skeletal metastases. Patients, methods: 47 consecutive patients (vertebral segments: n = 435) with suspected malignancy of the vertebral column were examined by bone scintigraphy and MRI (maximal interval between the two procedures ± 5 weeks). The SPECT data were reconstructed with an iterative algorithm (ISA) and with filtered backprojection. We defined semiquantitative criteria in order to assess the quality of the tomograms. Conventional reconstruction was performed both with a Wiener filter and with a low-pass filter. Iterative reconstruction was performed with the ISA algorithm. The clinical evaluation of the different reconstruction algorithms was performed with MRI as the gold standard. Results: Sensitivity (%): 87.3 (ISA), 86.4 (low-pass), 79.7 (Wiener); specificity (%): 95.3 (ISA), 95 (low-pass), 85.4 (Wiener). The sensitivity of iteratively reconstructed SPECT and low-pass reconstructed SPECT was significantly higher (p < 0.05) than the sensitivity of SPECT reconstructed with the Wiener filter. The specificity of the iterative ISA reconstruction and of low-pass reconstructed SPECT was significantly higher than that of the SPECT data reconstructed with the Wiener filter. ISA was significantly superior to the Wiener SPECT with respect to all quality criteria. Iterative reconstruction was significantly superior to the low-pass SPECT with respect to 2 of 3 criteria. In addition, the Wiener SPECT was significantly inferior to the low-pass SPECT with respect to 2 of 3 criteria. Conclusion: In our series, the iterative algorithm ISA was the method of choice for the reconstruction of bone SPECT data. In comparison with conventional algorithms, ISA offers significantly higher tomogram quality and yields high diagnostic accuracy.
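For readers unfamiliar with the two reconstruction families being compared, here is a minimal scikit-image sketch (assuming a recent version that accepts filter_name). SART stands in for the proprietary ISA algorithm and the hann window stands in for a low-pass filter; neither is the paper's exact method.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# filtered backprojection with two window choices (analogous to the
# low-pass vs. Wiener comparison; scikit-image offers ramp/hann windows,
# not a Wiener window)
fbp_ramp = iradon(sinogram, theta=theta, filter_name='ramp')
fbp_hann = iradon(sinogram, theta=theta, filter_name='hann')

# iterative reconstruction: two SART sweeps as a stand-in for ISA
sart = iradon_sart(sinogram, theta=theta)
sart = iradon_sart(sinogram, theta=theta, image=sart)
```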
7

Uchida, Ricardo R., Cristina M. Del-Ben, David Araújo, Geraldo Busatto-Filho, Fábio L. S. Duran, José A. S. Crippa, and Frederico G. Graeff. "Correlation between voxel based morphometry and manual volumetry in magnetic resonance images of the human brain." Anais da Academia Brasileira de Ciências 80, no. 1 (March 2008): 149–56. http://dx.doi.org/10.1590/s0001-37652008000100010.

Abstract:
This is a comparative study between manual volumetry (MV) and voxel based morphometry (VBM) as methods of evaluating the volume of brain structures in magnetic resonance images. The volumes of the hippocampus and the amygdala of 16 panic disorder patients and 16 healthy controls measured through MV were correlated with the volumes of gray matter estimated by optimized modulated VBM. The chosen structures are composed almost exclusively of gray matter. Using a 4 mm Gaussian filter, statistically significant clusters were found bilaterally in the hippocampus and in the right amygdala in the statistical parametric map correlating with the respective manual volume. With the conventional 12 mm filter, a significant correlation was found only for the right hippocampus. Therefore, narrow filters increase the sensitivity of the correlation procedure, especially when small brain structures are analyzed. The two techniques seem to consistently measure structural volume.
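The effect of the filter width discussed above can be reproduced with a short Python sketch. The FWHM-to-sigma conversion is standard; the image dimensions and data are stand-ins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_mm=1.0):
    """Gaussian-smooth a 3-D image with kernel width given as FWHM in mm."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    return gaussian_filter(volume, sigma=sigma)

rng = np.random.default_rng(1)
gm_map = rng.random((91, 109, 91))          # stand-in gray-matter map
narrow = smooth_fwhm(gm_map, fwhm_mm=4.0)   # 4 mm filter used in the study
wide = smooth_fwhm(gm_map, fwhm_mm=12.0)    # conventional 12 mm filter
```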
8

Tobar, ME, DG Blair, EN Ivanov, F. van Kann, NP Linthorne, PJ Turner, and IS Heng. "The University of Western Australia's Resonant-bar Gravitational Wave Experiment." Australian Journal of Physics 48, no. 6 (1995): 1007. http://dx.doi.org/10.1071/ph951007.

Abstract:
The cryogenic resonant-mass gravitational radiation antenna at the University of Western Australia (UWA) has obtained a noise temperature of <2 mK using a zero order predictor filter. This corresponds to a 1 ms burst strain sensitivity of 7 × 10⁻¹⁹. The antenna has been in continuous operation since August 1993. The antenna operates at about 5 K and consists of a 1.5 tonne niobium bar with a 710 Hz fundamental frequency, and a closely tuned secondary mass of 0.45 kg effective mass. The vibrational state of the secondary mass is continuously monitored by a 9.5 GHz superconducting parametric transducer. This paper presents the current design and operation of the detector. From a two-mode model we show how we calibrate, characterise and theoretically determine the sensitivity of our detector. Experimental results confirm the theory.
9

Eiselt, M., C. Schelenz, H. Witte, and K. Schwab. "Time-variant Parametric Estimation of Transient Quadratic Phase Couplings during Electroencephalographic Burst Activity." Methods of Information in Medicine 44, no. 03 (2005): 374–83. http://dx.doi.org/10.1055/s-0038-1633980.

Abstract:
Summary Objectives: Electroencephalographic burst activity characteristic of the burst-suppression pattern (BSP) in sedated patients and of the burst-interburst pattern (BIP) in the quiet sleep of healthy neonates has similar linear and non-linear signal properties. Strong interrelations between a slow frequency component and rhythmic, spindle-like activities with higher frequencies have been identified in previous studies. Time-varying characteristics of BSP and BIP prevent a definite pattern-related analysis. A continuous estimation of the bispectrum is essential to analyze these patterns. Parametric bispectral approaches provide this opportunity. Methods: The adaptation of an AR model leads to a parametric bispectrum by using the transfer function of the estimated AR filter. Time-variant parametric bispectral approaches require an estimation of AR parameters which considers higher order moments to preserve phase information. Accordingly, a time-variant parametric estimation of the bispectrum was introduced. Data driven simulations were performed to provide optimal parameters. BSP (12 patients) and BIP (6 neonates) were analyzed using this novel approach. Results: Significant differences in the time course of burst pattern during BSP and burst-like pattern before the onset of BSP could be shown. A rhythmic quadratic phase coupling (period 10 sec) was identified during BIP in all neonates. Conclusion: Quadratic phase couplings during BSP increase over the time course, depending on the depth of sedation. The visually detected burst activity in BIP is only the temporarily observable EEG correlate of a hidden neural process. Time-variant bispectral approaches offer the possibility of a better characterization of underlying neural processes, leading to improved diagnostic tools for clinical routine.
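A time-invariant toy version of the parametric (AR-based) bispectrum can be sketched as follows. The paper's contribution is the time-variant estimation of the AR parameters from higher-order moments, which this fragment does not attempt; the AR(2) coefficients are invented.

```python
import numpy as np

def ar_transfer(a, f):
    """H(f) = 1 / A(e^{-j2πf}) for AR coefficients a = [1, a1, ..., ap];
    f is any array of normalized frequencies (cycles/sample)."""
    f = np.asarray(f, dtype=float)
    k = np.arange(len(a))
    A = np.tensordot(np.exp(-2j * np.pi * f[..., None] * k), a, axes=([-1], [0]))
    return 1.0 / A

def parametric_bispectrum(a, freqs):
    """|B(f1, f2)| ∝ |H(f1) H(f2) conj(H(f1 + f2))| for an AR model driven
    by non-Gaussian noise (the third-cumulant scale factor is dropped)."""
    f1, f2 = np.meshgrid(freqs, freqs, indexing="ij")
    return np.abs(ar_transfer(a, f1) * ar_transfer(a, f2)
                  * np.conj(ar_transfer(a, f1 + f2)))

# stable AR(2) resonant near 0.1 cycles/sample (illustrative only)
a = np.array([1.0, -1.8 * np.cos(2 * np.pi * 0.1), 0.81])
B = parametric_bispectrum(a, np.linspace(0.0, 0.25, 64))
```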
10

Miwa, T., T. Ohshima, B. He, and J. Hori. "Cortical Dipole Imaging of Movement-related Potentials by Means of Parametric Inverse Filters Incorporating with Signal and Noise Covariance." Methods of Information in Medicine 46, no. 02 (2007): 242–46. http://dx.doi.org/10.1055/s-0038-1625415.

Abstract:
Summary Objective: The objective of this study is to explore suitable spatial filters for inverse estimation of cortical equivalent dipole layer imaging from the scalp electroencephalogram. We utilize cortical dipole source imaging to locate the possible generators of scalp-measured movement-related potentials (MRPs) in humans. Methods: The effects of incorporating signal and noise covariance into inverse procedures were examined by computer simulations and an experimental study. The parametric projection filter (PPF) and parametric Wiener filter (PWF) were applied to an inhomogeneous three-sphere head model under various noise conditions. Results: The present simulation results suggest that the PWF, incorporating signal information, provides better cortical dipole layer imaging results than the PPF and Tikhonov regularization under conditions of moderate and high correlation between signal and noise distributions. On the other hand, the PPF performs better than the other inverse filters under conditions of low correlation between signal and noise distributions. The proposed methods were applied to self-paced MRPs in order to identify the anatomic substrate locations of neural generators. The dipole layer distributions estimated by means of the PPF are well localized as compared with blurred scalp potential maps and the dipole layer distribution estimated by Tikhonov regularization. The proposed methods demonstrated that the contralateral premotor cortex was preponderantly activated in relation to movement performance. Conclusions: In cortical dipole source imaging, the PWF performs better especially when the correlation between the signal and noise is high. The proposed inverse method is applicable to human MRP experiments if the signal and noise covariances can be obtained.
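One common covariance-weighted inverse of the kind discussed here is the Wiener-type estimator sketched below. The forward model and covariances are toy assumptions, and the paper's PPF/PWF additionally carry a tunable regularization parameter.

```python
import numpy as np

def wiener_inverse(A, y, Rs, Rn):
    """Wiener-type inverse estimate using a signal covariance Rs (source
    space) and noise covariance Rn (sensor space):
        x_hat = Rs A^T (A Rs A^T + Rn)^{-1} y"""
    G = A @ Rs @ A.T + Rn
    return Rs @ A.T @ np.linalg.solve(G, y)

rng = np.random.default_rng(2)
A = rng.standard_normal((32, 100))   # dipole-layer-to-scalp forward model (toy)
Rs = np.eye(100)                     # assumed signal covariance
Rn = 0.01 * np.eye(32)               # assumed noise covariance
y = rng.standard_normal(32)          # measured scalp potentials (toy)
x_hat = wiener_inverse(A, y, Rs, Rn)
```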
11

Singalandapuram Mahadevan, Boopathi, John H. Johnson, and Mahdi Shahbakhti. "Development of a Kalman filter estimator for simulation and control of particulate matter distribution of a diesel catalyzed particulate filter." International Journal of Engine Research 21, no. 5 (July 17, 2018): 866–84. http://dx.doi.org/10.1177/1468087418785855.

Abstract:
The knowledge of the temperature and particulate matter mass distribution is essential for monitoring the performance and durability of a catalyzed particulate filter. A catalyzed particulate filter model was developed, and it showed capability to accurately predict temperature and particulate matter mass distribution and pressure drop across the catalyzed particulate filter. However, the high-fidelity model is computationally demanding. Therefore, a reduced order multi-zone particulate filter model was developed to reduce computational complexity with an acceptable level of accuracy. In order to develop a reduced order model, a parametric study was carried out to determine the number of zones necessary for aftertreatment control applications. The catalyzed particulate filter model was further reduced by carrying out a sensitivity study of the selected model assumptions. The reduced order multi-zone particulate filter model with 5 × 5 zones was selected to develop a catalyzed particulate filter state estimator considering its computational time and accuracy. Next, a Kalman filter–based catalyzed particulate filter estimator was developed to estimate unknown states of the catalyzed particulate filter such as temperature and particulate matter mass distribution and pressure drop (ΔP) using the sensor inputs to the engine electronic control unit and the reduced order multi-zone particulate filter model. A diesel oxidation catalyst estimator was also integrated with the catalyzed particulate filter estimator in order to provide estimates of diesel oxidation catalyst outlet concentrations of NO2 and hydrocarbons and inlet temperature for the catalyzed particulate filter estimator. The combined diesel oxidation catalyst–catalyzed particulate filter estimator was validated for an active regeneration experiment. The validation results for catalyzed particulate filter temperature distribution showed that the root mean square temperature error by using the diesel oxidation catalyst–catalyzed particulate filter estimator is within 3.2 °C compared to the experimental data. Similarly, the ΔP estimator closely simulated the measured total ΔP, and the estimated cake pressure drop error is within 0.2 kPa compared to the high-fidelity catalyzed particulate filter model.
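The estimator described above is built around the standard linear Kalman filter recursion, a generic sketch of which follows. The state, input, and measurement interpretations in the comments are illustrative, not the paper's exact 5 × 5-zone model.

```python
import numpy as np

def kalman_step(x, P, u, z, F, B, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : state estimate and covariance (e.g. zone temperatures, PM masses)
    u    : known input (e.g. engine-out conditions from the ECU)
    z    : measurement (e.g. sensor temperatures, pressure drop)"""
    # predict with the (reduced-order) process model
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    # update with the new measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy dimensions: 4 states, 1 input, 2 measurements (invented)
n, m = 4, 2
F, B, H = np.eye(n), np.eye(n)[:, :1], np.eye(n)[:m]
Q, R = 1e-3 * np.eye(n), 1e-2 * np.eye(m)
x, P = np.zeros(n), np.eye(n)
x, P = kalman_step(x, P, np.array([0.1]), np.array([0.5, 0.4]), F, B, H, Q, R)
```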
12

Gelder, Edgar de, Marc van de Wal, Carsten Scherer, Camile Hol, and Okko Bosgra. "Nominal and Robust Feedforward Design With Time Domain Constraints Applied to a Wafer Stage." Journal of Dynamic Systems, Measurement, and Control 128, no. 2 (April 4, 2005): 204–15. http://dx.doi.org/10.1115/1.2192821.

Abstract:
A new method is proposed to design a feedforward controller for electromechanical servo systems. The settling time is minimized by iteratively solving a linear programming problem. A bound on the amplitude of the feedforward control signal can be imposed and the McMillan degree of the controller can be fixed a priori. We choose Laguerre basis functions for the feedforward filter. Since finding the optimal pole location is very difficult, we present a computationally cheap method to determine the pole location that works well in practice. Furthermore, we show how the method can account for plant and/or reference signal uncertainty. Uncertainty in servo systems can usually be modeled by additive norm-bounded dynamic uncertainty. We will show that, because the feedforward controller is designed for a finite-time interval, we can replace the dynamic uncertainty set by a parametric one. This allows us to design a robust feedforward controller by solving an LMI problem, under the assumption that the transfer functions of the plant, sensitivity, and process sensitivity depend affinely on the uncertainty. If the uncertainty set is a finite set, which is usually the case for uncertainty in the reference profiles, the feedforward design problem reduces to a linear program. These classes of uncertainty sets are well suited to describe variations in the plant and in the reference profile of a wafer stage, which is important for the practical application of the filter. Experimental results for a wafer stage demonstrate the performance improvement compared to a standard inertia feedforward filter.
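The linear-programming idea (minimize worst-case tracking error subject to an amplitude bound on the feedforward signal) can be sketched with scipy's linprog. A polynomial basis stands in for the paper's Laguerre basis, and all signals are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Minimax fit of feedforward basis-function coefficients under an
# amplitude bound, posed as a linear program.
N, n_basis, u_max = 200, 8, 0.95
t_grid = np.linspace(0.0, 1.0, N)
Phi = np.vander(t_grid, n_basis, increasing=True)   # basis responses (columns)
d = np.sin(np.pi * t_grid)                          # target feedforward signal

# variables: [c (n_basis), t]; minimize t subject to
#   |Phi c - d| <= t    (worst-case tracking error)
#   |Phi c|     <= u_max (control-amplitude bound)
obj = np.r_[np.zeros(n_basis), 1.0]
ones, zeros = np.ones((N, 1)), np.zeros((N, 1))
A_ub = np.vstack([
    np.hstack([ Phi, -ones]),
    np.hstack([-Phi, -ones]),
    np.hstack([ Phi, zeros]),
    np.hstack([-Phi, zeros]),
])
b_ub = np.r_[d, -d, np.full(N, u_max), np.full(N, u_max)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n_basis + 1))
c, worst_err = res.x[:-1], res.x[-1]
```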
13

Piza, D. M., and S. N. Romanenko. "ADVANCED GRAM-SCHMIDT METHOD FOR RADAR SIGNAL PROCESSING." Radio Electronics, Computer Science, Control, no. 4 (January 5, 2022): 26–33. http://dx.doi.org/10.15588/1607-3274-2021-4-3.

Abstract:
Context. When protecting radar stations from active noise interference acting along the side lobes of the antenna directional pattern, spatial filtering of signals is used, realized with spatially separated antennas. In this case, the difference in the directions of reception of the useful signal and the interference makes it possible to form the optimal values of the weighting coefficients of the adaptive spatial filters to suppress the interference. However, if the interfering source moves into the main beam region, the spatial differences between the wanted signal and the interference are reduced. This leads to significant distortion of the main antenna radiation pattern. As a result, the accuracy of measuring the angular coordinates deteriorates, as does the sensitivity of the radar receiver. The article proposes a structural-parametric method for adapting a spatial filter, which ensures effective operation of the radar when active noise interference acts both from the direction of the side lobes and from the direction of the main beam. Goal. Improving the efficiency of the radar when the active noise interference source is shifted from the direction of the side lobes to the direction of the main beam. Method. The proposed method makes it possible, through structural adaptation of the multichannel spatial filter, to eliminate distortion of the main beam of the radiation pattern of the radar antenna and to ensure its operation under conditions of possible interference from the main beam. Structural adaptation of the spatial filter is realized by continuously analyzing the weighting coefficients of the compensation blocks. Results. The structural diagram of the multichannel Gram-Schmidt spatial filter with structural-parametric adaptation, as well as the structural diagram of the compensation block, has been improved. Simulation confirmed the possibility of eliminating distortions of the radiation pattern of the main antenna under possible active noise interference along the main beam of the radiation pattern of the radar. Conclusions. The scientific novelty of the work consists in the improvement of the signal-processing algorithm for spatial filtering both when active noise interference acts from the direction of the side lobes and when the interference source is shifted to the direction of the main beam of the radar. The practical novelty of the work lies in the development of a structural diagram and a mathematical model of an improved spatial filter with structural-parametric adaptation.
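A single Gram-Schmidt cancellation stage of the classical (non-structurally-adaptive) kind can be sketched as follows; the paper's contribution, monitoring the compensation-block weights to adapt the filter structure, is not shown. All signals are synthetic.

```python
import numpy as np

def gram_schmidt_cancel(main, aux):
    """Cancel correlated interference in the main channel by
    orthogonalizing it against auxiliary (compensation) channels,
    one Gram-Schmidt stage per auxiliary antenna."""
    out = main.astype(complex)
    for a in aux:
        a = a.astype(complex)
        w = np.vdot(a, out) / np.vdot(a, a)   # adaptive weight
        out = out - w * a                     # subtract projected interference
    return out

rng = np.random.default_rng(4)
n = 1000
jam = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # noise interference
signal = np.exp(2j * np.pi * 0.05 * np.arange(n))           # useful return
main = signal + 0.9 * jam + 0.1 * rng.standard_normal(n)
aux1 = 0.8 * jam + 0.1 * rng.standard_normal(n)             # aux channel sees jammer
cleaned = gram_schmidt_cancel(main, [aux1])
```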
14

Sun, Lu, Xiuling Shan, Qihu Dong, Chong Wu, Mei Shan, Hongxia Guo, and Rui Lu. "Ultrasonic Elastography Combined with Human Papilloma Virus Detection Based on Intelligent Denoising Algorithm in Diagnosis of Cervical Intraepithelial Neoplasia." Computational and Mathematical Methods in Medicine 2021 (December 26, 2021): 1–7. http://dx.doi.org/10.1155/2021/8066133.

Abstract:
The aim of this research was to study the application of ultrasonic elastography combined with human papilloma virus (HPV) detection based on a bilateral-filter intelligent denoising algorithm in the diagnosis of cervical intraepithelial neoplasia (CIN), and to provide a theoretical basis for the clinical diagnosis and treatment of CIN. In this study, 100 patients with cervical lesions were selected as research subjects and randomly divided into a control group and an experimental group, with 50 cases in each group. Patients in both groups were diagnosed by ultrasonic elastography combined with HPV detection. The experimental group used images denoised and optimized with the bilateral-filter intelligent denoising algorithm, while the control group did not, and the differences between them were analyzed and compared. The diagnostic effects of the two groups were compared. The three accuracy rates of the experimental group were 95%, 95%, and 98%; the three sensitivity rates were 96%, 92%, and 94%; and the three specificity rates were 99%, 97%, and 98%. In the control group, the three accuracy rates were 84%, 86%, and 84%; the three sensitivity rates were 88%, 84%, and 86%; and the three specificity rates were 81%, 83%, and 88%. The accuracy, sensitivity, and specificity of the experimental group were significantly higher than those of the control group, and the difference was statistically significant (P < 0.05). In summary, the bilateral-filter intelligent denoising algorithm has a good denoising effect on ultrasonic elastography. Ultrasonic images processed by the algorithm, combined with HPV detection, provide a better diagnosis of CIN.
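A minimal sketch of bilateral-filter denoising with OpenCV is shown below. The parameter values and the synthetic stand-in image are assumptions; the paper's "intelligent" variant presumably tunes such parameters adaptively.

```python
import numpy as np
import cv2

# speckle-like noisy stand-in image (uint8, as bilateralFilter expects)
rng = np.random.default_rng(5)
img = (rng.random((256, 256)) * 255).astype(np.uint8)

# the kernel weight combines spatial closeness (sigmaSpace) and intensity
# similarity (sigmaColor), smoothing speckle while preserving edges
den = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
```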
15

Sondergaard, Thomas, and Pierre F. J. Lermusiaux. "Data Assimilation with Gaussian Mixture Models Using the Dynamically Orthogonal Field Equations. Part II: Applications." Monthly Weather Review 141, no. 6 (June 1, 2013): 1761–85. http://dx.doi.org/10.1175/mwr-d-11-00296.1.

Abstract:
Abstract The properties and capabilities of the Gaussian Mixture Model–Dynamically Orthogonal filter (GMM-DO) are assessed and exemplified by applications to two dynamical systems: 1) the double well diffusion and 2) sudden expansion flows; both of which admit far-from-Gaussian statistics. The former test case, or twin experiment, validates the use of the Expectation-Maximization (EM) algorithm and Bayesian Information Criterion with GMMs in a filtering context; the latter further exemplifies its ability to efficiently handle state vectors of nontrivial dimensionality and dynamics with jets and eddies. For each test case, qualitative and quantitative comparisons are made with contemporary filters. The sensitivity to input parameters is illustrated and discussed. Properties of the filter are examined and its estimates are described, including the equation-based and adaptive prediction of the probability densities; the evolution of the mean field, stochastic subspace modes, and stochastic coefficients; the fitting of GMMs; and the efficient and analytical Bayesian updates at assimilation times and the corresponding data impacts. The advantages of respecting nonlinear dynamics and preserving non-Gaussian statistics are brought to light. For realistic test cases admitting complex distributions and with sparse or noisy measurements, the GMM-DO filter is shown to fundamentally improve the filtering skill, outperforming simpler schemes invoking the Gaussian parametric distribution.
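The EM-plus-BIC fitting of Gaussian mixtures that the GMM-DO filter relies on can be illustrated with scikit-learn; the toy two-cluster data below stand in for subspace coefficients of the double-well test case.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# fit GMMs of increasing complexity with EM and pick the mixture
# complexity by the Bayesian Information Criterion
rng = np.random.default_rng(6)
X = np.vstack([
    rng.normal(-2.0, 0.5, size=(300, 2)),   # one well of a double-well-like density
    rng.normal(+2.0, 0.5, size=(300, 2)),   # the other well
])
models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(X))
print(best.n_components, best.means_)
```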
16

Reddy, Gangireddy Navitha, Chenkual Laltanpuii, and Rajesh Sonti. "Review on in vivo profiling of drug metabolites with LC-MS/MS in the past decade." Bioanalysis 13, no. 22 (November 2021): 1697–722. http://dx.doi.org/10.4155/bio-2021-0144.

Abstract:
Metabolite profiling is an indispensable part of drug discovery and development, enabling a comprehensive understanding of a drug's metabolic behavior. Liquid chromatography-mass spectrometry facilitates metabolite profiling by reducing sample complexity and providing high sensitivity. This review discusses in vivo metabolite profiling involving LC-MS/MS and the utilization of QTOF and QQQ mass analyzers, with a particular emphasis on the mass filter. Further, a summary of sample extraction procedures for in vivo biological matrices such as plasma, urine, feces, serum, and hair is outlined. Toward the end, we present 15 case studies in biological matrices and their LC-MS/MS conditions to illustrate metabolic disposition.
17

Balmert, Lauren C., Ruosha Li, Limin Peng, and Jong-Hyeon Jeong. "Quantile regression on inactivity time." Statistical Methods in Medical Research 30, no. 5 (March 20, 2021): 1332–46. http://dx.doi.org/10.1177/0962280221995977.

Abstract:
The inactivity time, or lost lifespan specifically for mortality data, concerns time from occurrence of an event of interest to the current time point and has recently emerged as a new summary measure for cumulative information inherent in time-to-event data. This summary measure provides several benefits over the traditional methods, including more straightforward interpretation yet less sensitivity to heavy censoring. However, there exists no systematic modeling approach to inferring the quantile inactivity time in the literature. In this paper, we propose a semi-parametric regression method for the quantiles of the inactivity time distribution under right censoring. The consistency and asymptotic normality of the regression parameters are established. To avoid estimation of the probability density function of the inactivity time distribution under censoring, we propose a computationally efficient method for estimating the variance–covariance matrix of the regression coefficient estimates. Simulation results are presented to validate the finite sample properties of the proposed estimators and test statistics. The proposed method is illustrated with a real dataset from a clinical trial on breast cancer.
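For orientation, plain (uncensored) quantile regression is readily available in statsmodels, as sketched below; the paper's estimator additionally handles right censoring, which this sketch does not. The covariate and data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
age = rng.uniform(40, 80, n)
inactivity = 2.0 + 0.05 * age + rng.exponential(1.0, n)  # synthetic lost lifespan
df = pd.DataFrame({"inactivity": inactivity, "age": age})

fit = smf.quantreg("inactivity ~ age", df).fit(q=0.5)    # median regression
print(fit.params)
```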
18

Davarpanah, Afshin. "Parametric Study of Polymer-Nanoparticles-Assisted Injectivity Performance for Axisymmetric Two-Phase Flow in EOR Processes." Nanomaterials 10, no. 9 (September 12, 2020): 1818. http://dx.doi.org/10.3390/nano10091818.

Abstract:
Among the wide range of enhanced oil-recovery techniques, polymer flooding has been selected by petroleum industries due to the simplicity and lower cost of its operational performance. The reason for this selection is the mobility reduction of the water phase, which facilitates the forward movement of oil. The objective of this comprehensive study is to develop a mathematical model for the simultaneous injection of polymer and migration of nanoparticles in order to calculate an oil-recovery factor. A sensitivity analysis is then provided to consider the significant influence of formation rheological characteristics in the form of type curves. To achieve this, we concentrated on the governing mathematical equations for the recovery factor and compared each parameter to make the differences explicit. The results of this extensive study make it evident that higher values of the mobility ratio, polymer concentration, and formation-damage coefficient lead to a higher recovery factor. The reason is that the external filter cake is formed during this period, and the subsequent injection of polymer solution produces higher sweep efficiency and a higher recovery factor.
19

Broz, Pavel, Daniel Rajdl, Jaroslav Novak, Milan Hromadka, Jaroslav Racek, Ladislav Trefil, and Vaclav Zeman. "High-Sensitivity Troponins after a Standardized 2-Hour Treadmill Run." Journal of Medical Biochemistry 37, no. 3 (July 1, 2018): 364–72. http://dx.doi.org/10.1515/jomb-2017-0055.

Abstract:
Summary The aim of this study was to examine high-sensitivity troponin T and I (hsTnT and hsTnI) after a treadmill run under laboratory conditions and to find a possible connection with echocardiographic, laboratory and other assessed parameters. Nineteen trained men underwent a standardized 2-hour-long treadmill run. Concentrations of hsTnT and hsTnI were assessed before the run, 60, 120 and 180 minutes after the start and 24 hours after the run. Changes in troponins were tested using non-parametric analysis of variance (ANOVA). The multiple linear regression model was used to find the explanatory variables for hsTnT and hsTnI changes. Values of troponins were evaluated using the 0h/1h algorithm. Changes in hsTnT and hsTnI levels were statistically significant (p<0.0001 and p<0.0001, respectively). In a multiple regression model (adjusted R2: 0.60, p=0.005 for hsTnT and adjusted R2: 0.60, p=0.005 for hsTnI), changes in both troponins can be explained by relative left ventricular (LV) wall thickness, training volume, body temperature after the run and creatinine changes. According to the 0h/1h algorithm, none of the runners was evaluated as negative. Relative LV wall thickness, creatinine changes, training volume and body temperature after the run can predict changes in hsTnT and hsTnI levels. When medical attention is needed after physical exercise, hsTn levels should be tested only when clinical suspicion and the patient’s history indicate a high probability of myocardial damage.
20

Labhart, Thomas, Jürgen Petzold, and Hansruedi Helbling. "Spatial integration in polarization-sensitive interneurones of crickets: a survey of evidence, mechanisms and benefits." Journal of Experimental Biology 204, no. 14 (July 15, 2001): 2423–30. http://dx.doi.org/10.1242/jeb.204.14.2423.

Abstract:
SUMMARY Many insects exploit the polarization pattern of the sky for compass orientation in navigation or cruising-course control. Polarization-sensitive neurones (POL1-neurones) in the polarization vision pathway of the cricket visual system have wide visual fields of approximately 60° diameter, i.e. these neurones integrate information over a large area of the sky. This results from two different mechanisms. (i) Optical integration; polarization vision is mediated by a group of specialized ommatidia at the dorsal rim of the eye. These ommatidia lack screening pigment, contain a wide rhabdom and have poor lens optics. As a result, the angular sensitivity of the polarization-sensitive photoreceptors is very wide (median approximately 20°). (ii) Neural integration; each POL1-neurone receives input from a large number of dorsal rim photoreceptors with diverging optical axes. Spatial integration in POL1-neurones acts as a spatial low-pass filter. It improves the quality of the celestial polarization signal by filtering out cloud-induced local disturbances in the polarization pattern and increases sensitivity.
21

Szkudlarek, Marcin, Lene Terslev, Richard J. Wakefield, Marina Backhaus, Peter V. Balint, George A. W. Bruyn, Emilio Filippucci, et al. "Summary Findings of a Systematic Literature Review of the Ultrasound Assessment of Bone Erosions in Rheumatoid Arthritis." Journal of Rheumatology 43, no. 1 (December 1, 2015): 12–21. http://dx.doi.org/10.3899/jrheum.141416.

Abstract:
Objective. Bone erosions in rheumatoid arthritis (RA) have been studied in an increasing amount of research. Both earlier and present classification criteria of RA contain erosions as a significant classification component. Ultrasound (US) can detect bone changes in accessible surfaces. Therefore, the study group performed a systematic literature review of assessment of RA bone erosions with US. Methods. A systematic search of PubMed and Embase was performed. Data on the definitions of RA bone erosions, their size, scoring, relation to synovitis, comparators, and elements of the OMERACT (Outcome Measures in Rheumatology Clinical Trials) filter were collected and analyzed. Results. The selection process identified 58 original research papers. The assessed joints were most frequently metacarpophalangeal (MCP; 41 papers), proximal interphalangeal (19 papers), and metatarsophalangeal joints (MTP; 18 papers). The OMERACT definition of RA bone erosion on US was used most often (17 papers). Second and fifth MCP and fifth MTP were recommended as target joints. Conventional radiography was the most frequently used comparator (27 papers), then magnetic resonance imaging (17 papers) and computed tomography (5 papers). Reliability of assessment was presented in 20 papers and sensitivity to change in 11 papers. Conclusion. This paper presents results of a systematic literature review of bone erosion assessment in RA with US. The survey suggests that US can be a helpful adjunct to the existing methods of imaging bone erosions in RA. It analyzes definitions, scoring systems, used comparators, and elements of the OMERACT filter. It also presents recommendations for a future research agenda based on the results of the review.
22

Segou, M., N. Voulgaris, and K. Makropoulos. "ON THE SENSITIVITY OF GROUND MOTION PREDICTION EQUATIONS IN GREECE." Bulletin of the Geological Society of Greece 43, no. 4 (January 25, 2017): 2163. http://dx.doi.org/10.12681/bgsg.11407.

Abstract:
Ground motion prediction equations, widely known as attenuation relations, are common input for probabilistic and deterministic seismic hazard studies. The construction of a ground motion model to describe such a complex phenomenon as the effects of seismic wave propagation is highly dependent on a number of parameters. The quality and the distribution of strong motion data, which are the original input for the calculation of any ground motion model, can be thought of as one of the main factors that heavily influence the form of ground motion prediction equations. The selected processing scheme, involving significant choices about a series of adjustments and filter specifications implemented to remove low- and high-frequency noise, is related to the credibility of the calculated ground motion parameters such as the spectral ordinates. Once a set of response variables for a number of predictors is available, the researcher’s interest turns to the mathematical definition of the ground motion model, in terms of selecting the appropriate parameters and determining their coefficients in the equation. Another significant part involves the selection of the optimum solver in order to achieve high-confidence coefficients and a computationally inexpensive solution. Each method should be evaluated through statistics, but the researcher should bear in mind that residual analysis and statistical errors, although they can adequately represent the efficiency of the mathematical equations, do not always indicate where further improvement efforts should lie. The scope of this paper is to point out the multi-parametric nature of the construction of ground motion prediction equations and how each of the aforementioned development stages influences the credibility of the proposed attenuation relations.
23

MacIver, Claire L., Ayisha Al Busaidi, Balaji Ganeshan, John A. Maynard, Stephen Wastling, Harpreet Hyare, Sebastian Brandner, et al. "Filtration-Histogram Based Magnetic Resonance Texture Analysis (MRTA) for the Distinction of Primary Central Nervous System Lymphoma and Glioblastoma." Journal of Personalized Medicine 11, no. 9 (August 31, 2021): 876. http://dx.doi.org/10.3390/jpm11090876.

Abstract:
Primary central nervous system lymphoma (PCNSL) has variable imaging appearances, which overlap with those of glioblastoma (GBM), thereby necessitating invasive tissue diagnosis. We aimed to investigate whether a rapid filtration histogram analysis of clinical MRI data supports the distinction of PCNSL from GBM. Ninety tumours (PCNSL n = 48, GBM n = 42) were analysed using pre-treatment MRI sequences (T1-weighted contrast-enhanced (T1CE), T2-weighted (T2), and apparent diffusion coefficient maps (ADC)). The segmentations were completed with proprietary texture analysis software (TexRAD version 3.3). Filtered (five filter sizes SSF = 2–6 mm) and unfiltered (SSF = 0) histogram parameters were compared using Mann-Whitney U non-parametric testing, with receiver operating characteristic (ROC) derived area under the curve (AUC) analysis for significant results. Across all (n = 90) tumours, the optimal algorithm performance was achieved using an unfiltered ADC mean and the mean of positive pixels (MPP), with a sensitivity of 83.8%, specificity of 8.9%, and AUC of 0.88. For subgroup analysis with >1/3 necrosis masses, ADC permitted the identification of PCNSL with a sensitivity of 96.9% and specificity of 100%. For T1CE-derived regions, the distinction was less accurate, with a sensitivity of 71.4%, specificity of 77.1%, and AUC of 0.779. A role may exist for cross-sectional texture analysis without complex machine learning models to differentiate PCNSL from GBM. ADC appears the most suitable sequence, especially for necrotic lesion distinction.
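The statistical pipeline described (Mann-Whitney U comparison of a histogram parameter followed by ROC/AUC analysis) is easy to sketch with scipy and scikit-learn; the feature values below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

# compare a texture feature (e.g. ADC mean or MPP) between tumour groups
rng = np.random.default_rng(8)
feature_pcnsl = rng.normal(0.70, 0.10, 48)   # hypothetical per-tumour values
feature_gbm = rng.normal(0.95, 0.15, 42)

u_stat, p_value = mannwhitneyu(feature_pcnsl, feature_gbm)

labels = np.r_[np.ones(48), np.zeros(42)]    # 1 = PCNSL
scores = -np.r_[feature_pcnsl, feature_gbm]  # lower feature -> PCNSL
auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
```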
24

Gu, Si-Chun, Qing Ye, and Can-Xing Yuan. "Metabolic pattern analysis of 18F-FDG PET as a marker for Parkinson’s disease: a systematic review and meta-analysis." Reviews in the Neurosciences 30, no. 7 (October 25, 2019): 743–56. http://dx.doi.org/10.1515/revneuro-2018-0061.

Abstract:
Abstract A large number of articles have assessed the diagnostic accuracy of the metabolic pattern analysis of [18F]fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) in Parkinson’s disease (PD); however, different studies involved small samples with various controls and methods, leading to discrepant conclusions. This study aims to consolidate the available observational studies and provide a comprehensive evaluation of the clinical utility of 18F-FDG PET for PD. The methods included a systematic literature search and a hierarchical summary receiver operating characteristic approach. Sensitivity analyses according to different pattern analysis methods (statistical parametric mapping versus scaled subprofile modeling/principal component analysis) and control population [healthy controls (HCs) versus atypical parkinsonian disorder (APD) patients] were performed to verify the consistency of the main results. Additional analyses for multiple system atrophy (MSA) and progressive supranuclear palsy (PSP) were conducted. Fifteen studies comprising 1446 subjects (660 PD patients, 499 APD patients, and 287 HCs) were included. The overall diagnostic accuracy of 18F-FDG in differentiating PD from APDs and HCs was quite high, with a pooled sensitivity of 0.88 [95% confidence interval (95% CI), 0.85–0.91] and a pooled specificity of 0.92 (95% CI, 0.89–0.94), with sensitivity analyses indicating statistically consistent results. Additional analyses showed an overall sensitivity and specificity of 0.87 (95% CI, 0.76–0.94) and 0.93 (95% CI, 0.89–0.96) for MSA and 0.91 (95% CI, 0.78–0.95) and 0.96 (95% CI, 0.92–0.98) for PSP. Our study suggests that the metabolic pattern analysis of 18F-FDG PET has high diagnostic accuracy in the differential diagnosis of parkinsonian disorders.
25

Duarte-García, Alí, Ying Ying Leung, Laura C. Coates, Dorcas Beaton, Robin Christensen, Ethan T. Craig, Maarten de Wit, et al. "Endorsement of the 66/68 Joint Count for the Measurement of Musculoskeletal Disease Activity: OMERACT 2018 Psoriatic Arthritis Workshop Report." Journal of Rheumatology 46, no. 8 (February 15, 2019): 996–1005. http://dx.doi.org/10.3899/jrheum.181089.

Abstract:
Objective. The Psoriatic Arthritis (PsA) Core Domain Set for randomized controlled trials and longitudinal observational studies has recently been updated. The joint counts are central to the measurement of the peripheral arthritis component of the musculoskeletal (MSK) disease activity domain. We report the Outcome Measures in Rheumatology (OMERACT) 2018 meeting’s approaches to seek endorsement of the 66/68 swollen and tender joint count (SJC66/TJC68) for inclusion in the PsA Core Outcome Measurement Set (COS). Methods. Using the OMERACT Filter 2.1 Instrument Selection Process, the SJC66/TJC68 was assessed for (1) domain match, (2) feasibility, (3) numerical sense (construct validity), and (4) discrimination (test-retest reliability, longitudinal construct validity, sensitivity in clinical trials, and thresholds of meaning). A protocol was designed to assess the measurement properties of the SJC66/TJC68 joint count. The results were summarized in a Summary of Measurement Properties table developed by OMERACT. OMERACT members discussed and voted on whether the strength of the evidence supported that the SJC66/TJC68 had passed the OMERACT Filter as an outcome measurement instrument for the PsA COS. Results. OMERACT delegates endorsed the use of the SJC66/TJC68 for the measurement of the peripheral arthritis component of the MSK disease activity domain. Among patient research partners, 100% voted for a “green” endorsement, whereas among the group of other stakeholders, 88% voted for a “green” endorsement. Conclusion. The SJC66/TJC68 is the first outcome measurement instrument fully endorsed using the OMERACT Filter 2.1 and the first instrument fully endorsed within the PsA COS.
26

Carvalho, P. T. C., S. L. E. F. da Silva, E. F. Duarte, R. Brossier, G. Corso, and J. M. de Araújo. "Full waveform inversion based on the non-parametric estimate of the probability distribution of the residuals." Geophysical Journal International 229, no. 1 (October 27, 2021): 35–55. http://dx.doi.org/10.1093/gji/ggab441.

Abstract:
SUMMARY In an attempt to overcome the difficulties of full waveform inversion (FWI), several alternative objective functions have been proposed over the last few years. Many of them are based on the assumption that the residuals (differences between modelled and observed seismic data) follow specific probability distributions when, in fact, the true probability distribution is unknown. This leads FWI to converge to an incorrect probability distribution if the assumed probability distribution is different from the real one and, consequently, may lead the FWI to biased models of the subsurface. In this work, we propose an objective function which does not force the residuals to follow a specific probability distribution. Instead, we propose to use the non-parametric kernel density estimation (KDE) technique (which imposes the fewest possible assumptions about the residuals) to explore the probability distribution that may be more suitable. As evidenced by the results obtained in a synthetic model and in a typical P-wave velocity model of the Brazilian pre-salt fields, the proposed FWI reveals a greater potential to overcome adverse situations (such as cycle-skipping) and also a lower sensitivity to noise in the observed data than conventional L2- and L1-norm objective functions, thus making it possible to obtain more accurate models of the subsurface. This greater potential is also illustrated by the smoother and less sinuous shape of the proposed objective function, with fewer local minima compared with the conventional objective functions.
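One much-simplified reading of a KDE-based residual objective is sketched below: score residuals by their log-likelihood under their own Gaussian-kernel density estimate, instead of assuming an L2 (Gaussian) or L1 (Laplace) residual law. This is an assumption-laden toy, not the authors' exact objective.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_log_objective(residuals):
    """Negative log-likelihood of the residuals under their own
    non-parametric (Gaussian-kernel) density estimate."""
    kde = gaussian_kde(residuals)
    return -np.sum(np.log(kde(residuals) + 1e-300))

rng = np.random.default_rng(9)
residuals = rng.standard_t(df=3, size=1000)  # heavy-tailed residuals (synthetic)
print(kde_log_objective(residuals))
```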
27

Mayr, Verena, Mirko Hirschl, Peter Klein-Weigel, Luka Girardi, and Michael Kundi. "A randomized cross-over trial in patients suspected of PAD on diagnostic accuracy of ankle-brachial index by Doppler-based versus four-point oscillometry based measurements." Vasa 48, no. 6 (November 1, 2019): 516–22. http://dx.doi.org/10.1024/0301-1526/a000808.

Abstract:
Summary. Background: For diagnosis of peripheral arterial occlusive disease (PAD), a Doppler-based ankle-brachial-index (dABI) is recommended as the first non-invasive measurement. Due to limitations of dABI, oscillometry might be used as an alternative. The aim of our study was to investigate whether a semi-automatic, four-point oscillometric device provides comparable diagnostic accuracy. Furthermore, time requirements and patient preferences were evaluated. Patients and methods: 286 patients were recruited for the study; 140 without and 146 with PAD. The Doppler-based (dABI) and oscillometric (oABI and pulse wave index – PWI) measurements were performed on the same day in a randomized cross-over design. Specificity and sensitivity against verified PAD diagnosis were computed and compared by McNemar tests. ROC analyses were performed and areas under the curve were compared by non-parametric methods. Results: oABI had significantly lower sensitivity (65.8%, 95% CI: 59.2%–71.9%) compared to dABI (87.3%, CI: 81.9–91.3%) but significantly higher specificity (79.7%, 74.7–83.9% vs. 67.0%, 61.3–72.2%). PWI had a comparable sensitivity to dABI. The combination of oABI and PWI had the highest sensitivity (88.8%, 85.7–91.4%). ROC analysis revealed that PWI had the largest area under the curve, but no significant differences between oABI and dABI were observed. Time requirement for oABI was significantly shorter by about 5 min and significantly more patients would prefer oABI for future testing. Conclusions: Semi-automatic oABI measurements using the AngER-device provide comparable diagnostic results to the conventional Doppler method while PWI performed best. The time saved by oscillometry could be important, especially in high volume centers and epidemiologic studies.
28

Elahi, Siavash Hakim, and Behnam Jafarpour. "Dynamic Fracture Characterization From Tracer-Test and Flow-Rate Data With Ensemble Kalman Filter." SPE Journal 23, no. 02 (February 5, 2018): 449–66. http://dx.doi.org/10.2118/189449-pa.

Abstract:
Summary Hydraulic fracturing is performed to enable production from low-permeability and organic-rich shale-oil/gas reservoirs by stimulating the rock to increase its permeability. Characterization and imaging of hydraulically induced fractures is critical for accurate prediction of production and of the stimulated reservoir volume (SRV). Recorded tracer concentrations during flowback and historical production data can reveal important information about fracture and matrix properties, including fracture geometry, hydraulic conductivity, and natural-fracture density. However, the complexity and uncertainty in fracture and reservoir descriptions, coupled with data limitations, complicate the estimation of these properties. In this paper, tracer-test and production data are used for dynamic characterization of important parameters of hydraulically fractured reservoirs, including matrix permeability and porosity, planar-fracture half-length and hydraulic conductivity, discrete-fracture-network (DFN) density and conductivity, and fracture-closing (conductivity-decline) rate during production. The ensemble Kalman filter (EnKF) is used to update uncertain model parameters by sequentially assimilating first the tracer-test data and then the production data. The results indicate that the tracer-test and production data have complementary information for estimating fracture half-length and conductivity, with the former being more sensitive to hydraulic conductivity and the latter being more affected by fracture half-length. For characterization of DFN, a stochastic representation is adopted and the parameters of the stochastic model along with matrix and hydraulic-fracture properties are updated. Numerical examples are presented to investigate the sensitivity of the observed production and tracer-test data to fracture and matrix properties and to evaluate the EnKF performance in estimating these parameters.
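The core EnKF analysis step used for this kind of assimilation can be sketched generically. The perturbed-observation form below is one standard variant, with a linear observation operator standing in for the tracer/production forward model; all dimensions and data are toy assumptions.

```python
import numpy as np

def enkf_update(E, y_obs, H, R, seed=0):
    """Stochastic (perturbed-observation) EnKF analysis step.
    E     : (n, N) ensemble of parameter/state vectors
    y_obs : (m,) observed data (e.g. tracer concentrations, flow rates)
    H     : (m, n) linear observation operator (toy stand-in)
    R     : (m, m) observation-error covariance"""
    n, N = E.shape
    rng = np.random.default_rng(seed)
    X = E - E.mean(axis=1, keepdims=True)          # ensemble anomalies
    Y = H @ X
    C = (Y @ Y.T) / (N - 1) + R                    # innovation covariance
    K = (X @ Y.T) / (N - 1) @ np.linalg.inv(C)     # gain from ensemble statistics
    D = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, N).T
    return E + K @ (D - H @ E)

# toy problem: 100 uncertain parameters, 50 realizations, 3 observations
rng = np.random.default_rng(11)
E = rng.normal(2.0, 0.5, size=(100, 50))
H = np.eye(3, 100)
R = 0.05 * np.eye(3)
E_post = enkf_update(E, np.array([1.8, 2.2, 2.0]), H, R)
```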
29

Freudenberg, J., H. Boriss, and D. Hasenclever. "Comparison of Preprocessing Procedures for Oligo-nucleotide Micro-arrays by Parametric Bootstrap Simulation of Spike-in Experiments." Methods of Information in Medicine 43, no. 05 (2004): 434–38. http://dx.doi.org/10.1055/s-0038-1633893.

Abstract:
Summary Objective: Due to the scarcity of calibration data for micro-array experiments, simulation methods are employed to assess preprocessing procedures. Here we analyze several procedures’ robustness against increasing numbers of differentially expressed genes and varying proportions of up-regulation. Methods: Raw probe data from oligo-nucleotide micro-arrays are assumed to be approximately multivariate normally distributed on the log scale. Chips can be simulated from a multivariate normal distribution with mean and variance-covariance matrix estimated from a real raw data set. A chip effect induces strong positive correlations. In reverse, sampling from a normal distribution with a strongly correlated variance-covariance matrix generates data exhibiting a chip effect. No explicit model of the chip effect is needed. Differences can be artificially spiked in according to a given distribution of effect sizes. Thirty preprocessing procedures combining background correction, normalization, perfect match correction, and summarization methods available from the BioConductor project were compared. Results: In the symmetrical setting “50% differentially expressed genes, 50% of which up-regulated”, background correction reduces bias but inflates low-intensity probe variance as well as the mean squared error of the estimates. Any normalization reduces variance and increases sensitivity, with no clear winner. Asymmetry between up- and down-regulation causes bias in the effect-size estimate of non-differentially expressed genes. This markedly inflates the false positive discovery rate. Variance stabilizing normalization (VSN) behaved best. Conclusion: A simple parametric bootstrap was used to simulate oligo-nucleotide micro-array raw data. Current normalization methods inflate the false positive rate when many genes show an effect in the same direction.
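The simulation recipe in the abstract (estimate a log-scale mean and covariance from real chips, sample new chips from the fitted multivariate normal, then spike in differences) translates almost directly into numpy; all data below are synthetic stand-ins.

```python
import numpy as np

# parametric bootstrap of chips: strong positive correlations in the
# estimated covariance reproduce the chip effect without modeling it
# explicitly (30 stand-in chips x 20 probes)
rng = np.random.default_rng(12)
real = rng.normal(8.0, 1.0, size=(30, 20))
real += rng.normal(0.0, 0.5, size=(30, 1))        # per-chip offset -> correlations
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

sim = rng.multivariate_normal(mu, cov, size=100)  # 100 simulated chips
sim[:50, :5] += 1.0                               # spike in up-regulated genes
```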
30

Ling, Yun, Jiapei Qiu, and Jun Liu. "Coronary Artery Magnetic Resonance Angiography Combined with Computed Tomography Angiography in Diagnosis of Coronary Heart Disease by Reconstruction Algorithm." Contrast Media & Molecular Imaging 2022 (March 23, 2022): 1–9. http://dx.doi.org/10.1155/2022/8628668.

Abstract:
This research aimed to discuss the diagnostic effect of coronary artery magnetic resonance angiography (MRA) combined with computed tomography angiography (CTA) based on the back-projection filter reconstruction (BPFR) algorithm in coronary heart disease (CHD), and its role in the diagnosis of coronary artery disease (CAD). Sixty patients with CHD were selected and randomly assigned to group A (MRA examination), group B (CTA examination), and group C (MRA + CTA), with 20 cases in each group. Taking the diagnostic results of coronary angiography as the gold standard, the MRA and CTA images were reconstructed using the BPFR algorithm, with a filter function added to address image sharpness. In addition, the iterative reconstruction algorithm and the Fourier transform analysis method were introduced for comparison. As a result, the image clarity and resolution obtained by the BPFR algorithm were better than those obtained by the Fourier transform analytical method and the iterative reconstruction algorithm. The accuracy of group C for the diagnosis of mild, moderate, and severe coronary stenosis was 94.02%, 96.13%, and 98.01%, respectively, significantly higher than that of group B (87.5%, 90.2%, and 88.4%) and group A (83.4%, 89.1%, and 91.5%) (P < 0.05). The sensitivity and specificity for the diagnosis of noncalcified plaque in group C were 87.9% and 89.2%, respectively, and the sensitivity and specificity for the diagnosis of calcified plaque were 84.5% and 78.4%, respectively, significantly higher than those in groups A and B (P < 0.05). In summary, the BPFR algorithm had good denoising and artifact-removal effects on coronary MRA and CTA images. The combined detection of reconstructed MRA and CTA images had high diagnostic value for CHD.
APA, Harvard, Vancouver, ISO, and other styles
31

Wen, Xian-Huan, and Wen H. Chen. "Real-Time Reservoir Model Updating Using Ensemble Kalman Filter With Confirming Option." SPE Journal 11, no. 04 (December 1, 2006): 431–42. http://dx.doi.org/10.2118/92991-pa.

Full text
Abstract:
Summary The ensemble Kalman Filter technique (EnKF) has been reported to be very efficient for real-time updating of reservoir models to match the most current production data. Using EnKF, an ensemble of reservoir models assimilating the most current observations of production data is always available. Thus, the estimations of reservoir model parameters, and their associated uncertainty, as well as the forecasts are always up-to-date. In this paper, we apply the EnKF for continuously updating an ensemble of permeability models to match real-time multiphase production data. We improve the previous EnKF by adding a confirming option (i.e., the flow equations are re-solved from the previous assimilating step to the current step using the updated current permeability models). By doing so, we ensure that the updated static and dynamic parameters are always consistent with the flow equations at the current step. However, it also creates some inconsistency between the static and dynamic parameters at the previous step where the confirming starts. Nevertheless, we show that, with the confirming approach, the filter shows better performance for the particular example investigated. We also investigate the sensitivity of using a different number of realizations in the EnKF. Our results show that a relatively large number of realizations are needed to obtain stable results, particularly for the reliable assessment of uncertainty. The sensitivity of using different covariance functions is also investigated. The efficiency and robustness of the EnKF is demonstrated using an example. By assimilating more production data, new features of heterogeneity in the reservoir model can be revealed with reduced uncertainty, resulting in more accurate predictions of reservoir production. Introduction The reliability of reservoir models could increase as more data are included in their construction. Traditionally, static (hard and soft) data, such as geological, geophysical, and well log/core data are incorporated into reservoir geological models through conditional geostatistical simulation (Deutsch and Journel 1998). Dynamic production data, such as historical measurements of reservoir production, account for the majority of reservoir data collected during the production phase. These data are directly related to the recovery process and to the response variables that form the basis for reservoir management decisions. Incorporation of dynamic data is typically done through a history-matching process. Traditionally, history matching adjusts model variables (such as permeability, porosity, and transmissibility) so that the flow simulation results using the adjusted parameters match the observations. It usually requires repeated flow simulations. Both manual and (semi-) automatic history-matching processes are available in the industry (Chen et al. 1974; He et al. 1996; Landa and Horne 1997; Milliken and Emanuel 1998; Vasco et al. 1998; Wen et al. 1998a, 1998b; Roggero and Hu 1998; Agarwal and Blunt 2003; Caers 2003; Cheng et al. 2004). Automatic history matching is usually formulated in the form of a minimization problem in which the mismatch between measurements and computed values is minimized (Tarantola 1987; Sun 1994). Gradient-based methods are widely employed for such minimization problems, which require the computation of sensitivity coefficients (Li et al. 2003; Wen et al. 2003; Gao and Reynolds 2006). 
In the recent decade, automatic history matching has been a very active research area with significant progress reported (Cheng et al. 2004; Gao and Reynolds 2006; Wen et al. 1997). However, most approaches are either limited to small and simple reservoir models or are computationally too intensive for practical applications. Under the framework of traditional history matching, the assessment of uncertainty is usually through a repeated history-matching process with different initial models, which makes the process even more CPU-demanding. In addition, the traditional history-matching methods are not designed in such a fashion that allows for continuous model updating. When new production data are available and are required to be incorporated, the history-matching process has to be repeated using all measured data. These limit the efficiency and applicability of the traditional automatic history-matching techniques.
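For readers unfamiliar with the EnKF machinery discussed above, one analysis step can be written in a few lines. The NumPy sketch below uses standard EnKF assumptions with a linear observation operator; the paper's confirming option additionally re-solves the flow equations from the previous assimilation step with the updated permeabilities, which requires a reservoir simulator and is therefore not shown.

```python
import numpy as np

def enkf_analysis(X, d_obs, H, R, rng):
    """One EnKF analysis step. X: (n_state, n_ens) ensemble of states and
    parameters (e.g. permeabilities); d_obs: observed production data;
    H: linear observation operator; R: observation-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # predicted-data anomalies
    Pxy = A @ HA.T / (n_ens - 1)                 # cross-covariance
    Pyy = HA @ HA.T / (n_ens - 1) + R            # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    # perturb observations so the analysis ensemble keeps the correct spread
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    return X + K @ (D - HX)
```

The abstract's sensitivity finding, that a relatively large ensemble is needed for stable uncertainty estimates, corresponds to the sampling error in the `Pxy` and `Pyy` estimates above, which shrinks only as the ensemble grows.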
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, A., D. Leparoux, O. Abraham, and M. Le Feuvre. "Frequency derivative of Rayleigh wave phase velocity for fundamental mode dispersion inversion: parametric study and experimental application." Geophysical Journal International 224, no. 1 (September 4, 2020): 649–68. http://dx.doi.org/10.1093/gji/ggaa417.

Full text
Abstract:
SUMMARY Monitoring the small variations of a medium is increasingly important in subsurface geophysics due to climate change. Classical seismic surface wave dispersion methods are of limited use for quantitative estimation of these small variations when the variation ratio is smaller than 10 per cent, especially in the case of variations in deep media. Based on these findings, we propose to study the contributions of the Rayleigh wave phase velocity derivative with respect to frequency. More precisely, in the first step of assessing its feasibility, we analyse the effects of the phase velocity derivative on the inversion of the fundamental mode in the simple case of a two-layer model. The behaviour of the phase velocity derivative is first analysed qualitatively: the dispersion curves of phase velocity, group velocity and the phase velocity derivative are calculated theoretically for several series of media with small variations. It is shown that the phase velocity derivatives are more sensitive to variations of the medium. The sensitivity curves are then calculated for the phase velocity, the group velocity and the phase velocity derivative to perform quantitative analyses. Compared to the phase and group velocities, the phase velocity derivative is sensitive to variations of the shallow layer and the deep layer shear wave velocity in the same wavelength (frequency) range. Numerical data are used and processed to obtain dispersion curves to test the feasibility of the phase velocity derivative in the inversion. The inversion results of the phase velocity derivative are compared with those of phase and group velocities and show improved estimations for small variations (variation ratio less than 5 per cent) of deep layer shear wave velocities. The study concludes with laboratory experiments using two reduced-scale resin-epoxy models. The two models differ in the deep layer, where the variation ratio is estimated as 16.4 ± 1.1 per cent for the phase velocity inversion and 17.1 ± 0.3 per cent for the phase velocity derivative. The latter is closer to the reference value 17 per cent, with a smaller error.
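The quantity at the heart of this paper, the frequency derivative of the Rayleigh-wave phase velocity, can be approximated from a picked dispersion curve by finite differences. The sketch below is a generic NumPy illustration, not the authors' processing chain; the group-velocity relation U = c / (1 − (f/c)·dc/df) is included for comparison, since the paper compares all three quantities.

```python
import numpy as np

def dispersion_derivatives(freqs, c_phase):
    """Finite-difference dc/df from a fundamental-mode dispersion curve,
    plus the group velocity implied by the same curve:
    U = c / (1 - (f/c) * dc/df)."""
    dc_df = np.gradient(c_phase, freqs)
    u_group = c_phase / (1.0 - (freqs / c_phase) * dc_df)
    return dc_df, u_group

# toy normally dispersive curve: phase velocity decreasing with frequency
f = np.linspace(5.0, 50.0, 46)            # Hz
c = 800.0 + 400.0 / (1.0 + (f / 15.0) ** 2)   # m/s
dc_df, u = dispersion_derivatives(f, c)
```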
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Aiqiong, and Qiuxiang Chen. "Segmentation Algorithm-Based Safety Analysis of Cardiac Computed Tomography Angiography to Evaluate Doctor-Nurse-Patient Integrated Nursing Management for Cardiac Interventional Surgery." Computational and Mathematical Methods in Medicine 2022 (May 4, 2022): 1–9. http://dx.doi.org/10.1155/2022/2148566.

Full text
Abstract:
To analyze in depth the influence of doctor-nurse-patient integrated nursing management on cardiac interventional surgery, 120 patients with coronary heart disease undergoing cardiac interventional therapy were selected as the subjects and randomly divided into two groups, with 60 cases in each group. The experimental group received doctor-nurse-patient integrated nursing, while the control group received routine nursing. The Hessian matrix enhanced filter segmentation algorithm was used to process the cardiac computed tomography angiography (CTA) images of patients to assess the algorithm performance and the safety of the nursing methods. The results showed that the Jaccard, Dice, sensitivity, and specificity of cardiac CTA images of patients with coronary heart disease processed by the Hessian matrix enhanced filter segmentation algorithm were 0.86, 0.93, 0.94, and 0.95, respectively; the disease self-management ability score and quality of life score of patients in the experimental group after nursing intervention were significantly better than those before nursing intervention, with significant differences (P < 0.05). The number of cases with adverse vascular events in the experimental group was 3, markedly lower than that in the control group (15 cases). The diagnostic accuracy of the two groups of patients after segmentation algorithm processing was 0.87 and 0.88, respectively, clearly superior to the diagnostic accuracy of conventional CTA (0.58 and 0.61). In summary, cardiac CTA evaluation of doctor-nurse-patient integrated nursing management for cardiac interventional surgery based on the segmentation algorithm had good safety and is worthy of further promotion in clinical cardiac interventional surgery.
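The four segmentation metrics reported above (Jaccard 0.86, Dice 0.93, sensitivity 0.94, specificity 0.95) are all derived from the same 2×2 overlap counts between a predicted and a reference mask; a minimal NumPy helper (names are ours) is:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Jaccard, Dice, sensitivity and specificity for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    return {
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```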
APA, Harvard, Vancouver, ISO, and other styles
34

Prevost, Paoline, Kristel Chanard, Luce Fleitout, Eric Calais, Damian Walwer, Tonie van Dam, and Michael Ghil. "Data-adaptive spatio-temporal filtering of GRACE data." Geophysical Journal International 219, no. 3 (September 19, 2019): 2034–55. http://dx.doi.org/10.1093/gji/ggz409.

Full text
Abstract:
SUMMARY Measurements of the spatio-temporal variations of Earth’s gravity field from the Gravity Recovery and Climate Experiment (GRACE) mission have led to new insights into large spatial mass redistribution at secular, seasonal and subseasonal timescales. GRACE solutions from various processing centres, while adopting different processing strategies, result in rather coherent estimates. However, these solutions also exhibit random as well as systematic errors, with specific spatial patterns in the latter. In order to dampen the noise and enhance the geophysical signals in the GRACE data, we propose an approach based on a data-driven spatio-temporal filter, namely the Multichannel Singular Spectrum Analysis (M-SSA). M-SSA is a data-adaptive, multivariate, and non-parametric method that simultaneously exploits the spatial and temporal correlations of geophysical fields to extract common modes of variability. We perform an M-SSA analysis on 13 yr of GRACE spherical harmonics solutions from five different processing centres in a simultaneous setup. We show that the method allows us to extract common modes of variability between solutions, while removing solution-specific spatio-temporal errors that arise from the processing strategies. In particular, the method efficiently filters out the spurious north–south stripes, which are caused in all likelihood by aliasing, due to the imperfect geophysical correction models and low-frequency noise in measurements. Comparison of the M-SSA GRACE solution with mass concentration (mascons) solutions shows that, while the former remains noisier, it does retrieve geophysical signals masked by the mascons regularization procedure.
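M-SSA itself reduces to an eigendecomposition of a lag-augmented covariance matrix. The bare-bones sketch below is a strong simplification of the method as applied in the paper, with generic channels standing in for the five processing centres' series:

```python
import numpy as np

def mssa(X, M):
    """Minimal M-SSA. X: (n_time, n_channels) multichannel series;
    M: window length. Returns eigenvalues (variance captured by each
    mode) and principal components of the lag-augmented covariance."""
    N, L = X.shape
    K = N - M + 1
    # augmented trajectory matrix: K windows, each stacking M lags
    # of all channels side by side
    emb = np.hstack([X[i:i + K, :] for i in range(M)])   # (K, M*L)
    C = emb.T @ emb / K
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]                      # leading modes first
    return evals[order], emb @ evecs[:, order]
```

Common modes of variability across solutions appear as leading eigenvalues shared by all channels, while solution-specific errors load on trailing modes, which is the filtering principle the abstract describes.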
APA, Harvard, Vancouver, ISO, and other styles
35

Glegola, M., P. Ditmar, R. G. G. Hanea, F. C. C. Vossepoel, R. Arts, and R. Klees. "Gravimetric Monitoring of Water Influx Into a Gas Reservoir: A Numerical Study Based on the Ensemble Kalman Filter." SPE Journal 17, no. 01 (October 5, 2011): 163–76. http://dx.doi.org/10.2118/149578-pa.

Full text
Abstract:
Summary Water influx into gas fields can reduce recovery factors by 10–40%. Therefore, information about the magnitude and spatial distribution of water influx is essential for efficient management of waterdrive gas reservoirs. Modern geophysical techniques such as gravimetry may provide a direct measure of mass redistribution below the surface, yielding additional and valuable information for reservoir monitoring. In this paper, we investigate the added value of gravimetric observations for water-influx monitoring into a gas field. For this purpose, we use data assimilation with the ensemble Kalman filter (EnKF) method. To understand better the limitations of the gravimetric technique, a sensitivity study is performed. For a simplified gas-reservoir model, we assimilate the synthetic gravity measurements and estimate reservoir permeability. The updated reservoir model is used to predict the water-front position. We consider a number of possible scenarios, making various assumptions on the level of gravity measurement noise and on the distance from the gravity observation network to the reservoir formation. The results show that with increasing gravimetric noise and/or distance, the updated model permeability becomes smoother and its variance higher. Finally, we investigate the effect of a combined assimilation of gravity and production data. In the case when only production observations are used, the permeability estimates far from the wells can be erroneous, despite a very accurate history match of the data. In the case when both production and gravity data are combined within a single data assimilation framework, we obtain a considerably improved estimation of the reservoir permeability and an improved understanding of the subsurface mass flow. These results illustrate the complementarity of both types of measurements, and more generally, the experiments show clearly the added value of gravity data for monitoring water influx into a gas field.
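The gravimetric observations assimilated here are, to first order, sums of point-mass contributions. The following textbook point-mass sketch (not the authors' forward model) makes the noise/distance trade-off in the abstract concrete, since the signal decays with the squared distance to the reservoir:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change(dm, cells_xy, depth, station_xy):
    """Vertical gravity change at a surface station caused by mass changes
    dm (kg) in reservoir cells at horizontal positions cells_xy (m) and a
    common depth (m). Point-mass approximation."""
    dx = cells_xy[:, 0] - station_xy[0]
    dy = cells_xy[:, 1] - station_xy[1]
    r3 = (dx**2 + dy**2 + depth**2) ** 1.5
    return np.sum(G * dm * depth / r3)   # m/s^2; 1 microGal = 1e-8 m/s^2

# water influx of 1e9 kg into one cell, 2 km below the station
dg = gravity_change(np.array([1e9]), np.array([[0.0, 0.0]]), 2000.0, (0.0, 0.0))
print(f"{dg / 1e-8:.2f} microGal")   # ~1.67 microGal
```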
APA, Harvard, Vancouver, ISO, and other styles
36

Bish, Melanie, Jason Fletcher, Cameron Knott, John Stephenson, and George Mnatzaganian. "Application of Accelerated Time Models to Compare Performance of Two Comorbidity-adjusting Methods with APACHE II in Predicting Short-term Mortality Among the Critically Ill." Methods of Information in Medicine 57, no. 01/02 (February 2018): 81–88. http://dx.doi.org/10.3414/me17-01-0097.

Full text
Abstract:
Summary Objective: This study aimed to compare the abilities of the Charlson Index and Elixhauser comorbidities with those of the chronic health components of the Acute Physiology and Chronic Health Evaluation (APACHE II) in predicting in-hospital 30-day mortality among adult critically ill patients treated inside and outside of the Intensive Care Unit (ICU). Methods: A total of 701 critically ill patients, identified in a prevalence study design on four randomly selected days in five acute care hospitals, were followed up from the date of becoming critically ill for 30 days or until death, whichever occurred first. Multiple data sources including administrative, clinical, pathology, microbiology and laboratory patient records captured the presence of acute and chronic illnesses. The exponential, Gompertz, Weibull, and log-logistic distributions were assessed as candidate parametric distributions available for the modelling of survival data. Of these, the log-logistic distribution provided the best fit and was used to construct a series of parametric survival models. Results: Of the 701 patients identified in the initial prevalence study, 637 (90.9%) had complete data for all fields used to calculate APACHE II score. Controlling for age, sex and Acute Physiology Score (APS), the chronic health components of the APACHE II score, as a group, were better predictors of survival than Elixhauser comorbidities and Charlson Index. Of the APACHE II chronic health components, only the relatively uncommon conditions of liver failure (3.4%) and immunodeficiency (9.6%) were statistically associated with inferior patient survival, with acceleration factors of 0.35 (95% CI 0.17, 0.72) for liver failure, and 0.42 (95% CI 0.26, 0.72) for immunodeficiency. Sensitivity analyses on an imputed dataset that also included the 64 individuals with imputed APACHE II score showed identical results. Conclusion: Our study suggests that, in acute critical illness, most co-existing comorbidities are not major determinants of short-term survival, indicating that observed variations in ICU patient 30-day mortality may not be confounded by lack of adjustment to pre-existing comorbidities.
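As a sketch of the modelling approach, a log-logistic accelerated failure time model can be fitted with the `lifelines` package (a library choice for illustration; the paper does not state its software). The synthetic data below are entirely hypothetical, with coefficients chosen so the implied acceleration factors roughly match the reported 0.35 and 0.42:

```python
import numpy as np
import pandas as pd
from lifelines import LogLogisticAFTFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "liver_failure": rng.binomial(1, 0.05, n),
    "immunodeficiency": rng.binomial(1, 0.10, n),
})
# synthetic survival times: covariates rescale (accelerate) time;
# exp(-1.05) ~ 0.35 and exp(-0.87) ~ 0.42 mimic the reported factors
scale = np.exp(3.3 - 1.05 * df["liver_failure"] - 0.87 * df["immunodeficiency"])
t = scale * rng.standard_gamma(1.0, n)       # hypothetical event times, days
df["died"] = (t <= 30).astype(int)           # in-hospital 30-day mortality
df["time"] = np.minimum(t, 30.0)             # administrative censoring at day 30

aft = LogLogisticAFTFitter().fit(df, duration_col="time", event_col="died")
print(np.exp(aft.summary.loc["alpha_", "coef"]))   # acceleration factors
```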
APA, Harvard, Vancouver, ISO, and other styles
37

Ferretti, Luca, Chandana Tennakoon, Adrian Silesian, Graham Freimanis, and Paolo Ribeca. "SiNPle: Fast and Sensitive Variant Calling for Deep Sequencing Data." Genes 10, no. 8 (July 25, 2019): 561. http://dx.doi.org/10.3390/genes10080561.

Full text
Abstract:
Current high-throughput sequencing technologies can generate sequence data and provide information on the genetic composition of samples at very high coverage. Deep sequencing approaches enable the detection of rare variants in heterogeneous samples, such as viral quasi-species, but also have the undesired effect of amplifying sequencing errors and artefacts. Distinguishing real variants from such noise is not straightforward. Variant callers that can handle pooled samples can struggle at extremely high read depths, while at lower depths sensitivity is often sacrificed to specificity. In this paper, we propose SiNPle (Simplified Inference of Novel Polymorphisms from Large coveragE), a fast and effective software for variant calling. SiNPle is based on a simplified Bayesian approach to compute the posterior probability that a variant is not generated by sequencing errors or PCR artefacts. The Bayesian model takes into consideration individual base qualities as well as their distribution, the baseline error rates during both the sequencing and the PCR stage, the prior distribution of variant frequencies and their strandedness. Our approach leads to an approximate but extremely fast computation of posterior probabilities even for very high coverage data, since the expression for the posterior distribution is a simple analytical formula in terms of summary statistics for the variants appearing at each site in the genome. These statistics can be used to filter out putative SNPs and indels according to the required level of sensitivity. We tested SiNPle on several simulated and real-life viral datasets to show that it is faster and more sensitive than existing methods. The source code for SiNPle is freely available to download and compile, or as a Conda/Bioconda package.
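The abstract's "simple analytical formula in terms of summary statistics" invites a toy illustration. The function below is a simplified stand-in for that idea, a Bayesian posterior for "real variant vs. sequencing/PCR noise" computed from Phred qualities, and is not SiNPle's actual model:

```python
import numpy as np
from scipy.stats import binom

def variant_posterior(alt_quals, ref_count, prior=1e-3, pcr_error=1e-5):
    """Toy posterior probability that a site carries a real variant.
    alt_quals: Phred qualities of variant-supporting bases;
    ref_count: number of reference-supporting reads."""
    # per-base error probability from Phred scores, plus a PCR error floor
    p_err = np.mean(10.0 ** (-np.asarray(alt_quals) / 10.0)) + pcr_error
    k = len(alt_quals)
    n = k + ref_count
    like_noise = binom.pmf(k, n, p_err)             # all alt reads are errors
    like_real = binom.pmf(k, n, max(k / n, p_err))  # variant at observed freq
    return prior * like_real / (prior * like_real + (1 - prior) * like_noise)

# 5 supporting reads of quality 30 out of 1000 reads total
print(variant_posterior([30] * 5, ref_count=995))
```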
APA, Harvard, Vancouver, ISO, and other styles
38

Gao, Amy, and Michael S. Triantafyllou. "Independent caudal fin actuation enables high energy extraction and control in two-dimensional fish-like group swimming." Journal of Fluid Mechanics 850 (July 4, 2018): 304–35. http://dx.doi.org/10.1017/jfm.2018.456.

Full text
Abstract:
We study through numerical simulation the optimal hydrodynamic interactions and basic vorticity control mechanisms for two fish-like bodies swimming in tandem. We show that for a fish swimming in the wake of an upstream fish, using independent pitch control of its caudal fin, in addition to optimized body motion, results in reduction of the energy needed for self-propulsion by more than 50 %, providing a quasi-propulsive efficiency of 90 %, up from 60 % without independent caudal fin control. Such high efficiency is found over a narrow parametric range and is possible only when the caudal fin is allowed to pitch independently from the motion of the main body. We identify the vorticity control mechanisms employed by the body and tail to achieve this remarkable performance through thrust augmentation and destructive interference with the upstream fish-generated vortices. A high sensitivity of the propulsive performance to small variations in caudal fin parameters is found, underlying the importance of accurate flow sensing and feedback control. We further demonstrate that using lateral line-like flow measurements to drive an unscented Kalman filter, the near-field vortices can be localized within 1 % of the body length, and be used with a phase-lock controller to drive the body and tail undulation of a self-propelled fish, moving within the wake of an upstream fish, to stably reach the optimal gait and fully achieve maximum energy extraction.
APA, Harvard, Vancouver, ISO, and other styles
39

Liang, Lin, Ting Lei, Adam Donald, and Matthew Blyth. "Physics-Driven Machine-Learning-Based Borehole Sonic Interpretation in the Presence of Casing and Drillpipe." SPE Reservoir Evaluation & Engineering 24, no. 02 (February 12, 2021): 310–24. http://dx.doi.org/10.2118/201542-pa.

Full text
Abstract:
Summary Interpretation of sonic data acquired by a logging-while-drilling (LWD) tool or wireline tool in cased holes is complicated by the presence of drillpipe or casing because those steel pipes can act as a strong waveguide. Traditional solutions, which rely on using a frequency bandpass filter or waveform arrival-time separation to filter out the unwanted pipe mode, often fail when formation and pipe signals coexist in the same frequency band or arrival-time range. We hence developed a physics-driven machine-learning-based method to overcome the challenge. In this method, two synthetic databases are generated from a general root-finding mode-search routine on the basis of two assumed models: One is defined as a cemented cased hole for a wireline scenario, and the other is defined as a steel pipe immersed in a fluid-filled borehole for the logging-while-drilling scenario. The synthetic databases are used to train neural network models, which are first used to perform global sensitivity analysis on all relevant model parameters so that the influence of each parameter on the dipole dispersion data can be well understood. A least-squares inversion scheme using the trained model was developed and tested on synthetic cases. The scheme showed good results, and a reasonable uncertainty estimate was made for each parameter. We then extended the application of the trained model to develop a method for automated labeling and extraction of the dipole flexural dispersion mode from other disturbances. The method combines the clustering technique with the neural-network-model-based inversion and an adaptive filter. Testing on field data demonstrates that the new method is superior to traditional methods because it introduces a mechanism from which unwanted pipe mode can be physically filtered out. This novel physics-driven machine-learning-based method improved the interpretation of sonic dipole dispersion data to cope with the challenge brought by the existence of steel pipes. Unlike data-driven machine learning methods, it can provide global service with just one-time offline training. Compared with traditional methods, the new method is more accurate and reliable because the processing is confined by physical laws. This method is less dependent on input parameters; hence, a fully automated solution could be achieved.
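The physics-driven workflow described, train a network on a synthetic dispersion database and then invert measured dispersion data by least squares through the trained model, can be sketched generically. The forward model below is a made-up stand-in for the paper's root-finding mode-search routine, and all parameter names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
freqs = np.linspace(1.0, 10.0, 20)

def forward(theta):
    """Stand-in forward model: a dispersion curve from two hypothetical
    model parameters (baseline velocity and a dispersion term)."""
    v0, disp = theta
    return v0 + disp / freqs

# 1. synthetic database over the parameter space
lo, hi = np.array([1.0, 0.1]), np.array([3.0, 2.0])
X = rng.uniform(lo, hi, size=(2000, 2))
Y = np.array([forward(t) for t in X])

# 2. train a neural-network surrogate of the forward model
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                         random_state=0).fit(X, Y)

# 3. least-squares inversion of noisy "measured" dispersion data
observed = forward([2.0, 1.2]) + rng.normal(0.0, 0.01, freqs.size)
fit = least_squares(lambda t: surrogate.predict(t[None, :])[0] - observed,
                    x0=(lo + hi) / 2, bounds=(lo, hi))
print(fit.x)   # close to (2.0, 1.2) if the surrogate trained well
```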
APA, Harvard, Vancouver, ISO, and other styles
40

Reus, N. J., F. M. Vos, H. G. Lemij, A. M. Vossepoel, and K. A. Vermeer. "Split Bundle Detection in Polarimetric Images of the Human Retinal Nerve Fiber Layer." Methods of Information in Medicine 46, no. 04 (2007): 425–31. http://dx.doi.org/10.1160/me0400.

Full text
Abstract:
Summary Objectives: One method for assessing pathological retinal nerve fiber layer (NFL) appearance is by comparing the NFL to normative values, derived from healthy subjects. These normative values will be more specific when normal physiological differences are taken into account. One common variation is a split bundle. This paper describes a method to automatically detect these split bundles. Methods: The thickness profile along the NFL bundle is described by a non-split and a split bundle model. Based on these two fits, statistics are derived and used as features for two non-parametric classifiers (Parzen density based and k nearest neighbor). Features were selected by forward feature selection. Three hundred and nine superior and 324 inferior bundles were used to train and test this method. Results: The prevalence of split superior bundles was 68% and the split inferior bundles’ prevalence was 13%. The resulting estimated error of the Parzen density-based classifier was 12.5% for the superior bundle and 10.2% for the inferior bundle. The k nearest neighbor classifier errors were 11.7% and 9.2%. Conclusions: The classification error of automated detection of split inferior bundles is not much smaller than its prevalence, thereby limiting the usefulness of separate cut-off values for split and non-split inferior bundles. For superior bundles, however, the classification error was low compared to the prevalence. Application of specific cut-off values, selected by the proposed classification system, may therefore increase the specificity and sensitivity of pathological NFL detection.
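The classification pipeline described, features from model fits, forward feature selection, then a k-nearest-neighbour classifier, maps directly onto scikit-learn primitives. Here is a generic sketch on synthetic stand-in features (not the bundle data):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-ins for statistics of non-split vs. split bundle-model fits
X, y = make_classification(n_samples=309, n_features=12, n_informative=4,
                           random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
# forward feature selection wrapped around the classifier
sfs = SequentialFeatureSelector(knn, n_features_to_select=4,
                                direction="forward").fit(X, y)
error = 1.0 - cross_val_score(knn, sfs.transform(X), y, cv=5).mean()
print(f"estimated classification error: {error:.3f}")
```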
APA, Harvard, Vancouver, ISO, and other styles
41

Zhao, Peidong, and K. E. Gray. "Analytical and Machine-Learning Analysis of Hydraulic Fracture-Induced Natural Fracture Slip." SPE Journal 26, no. 04 (March 8, 2021): 1722–38. http://dx.doi.org/10.2118/205346-pa.

Full text
Abstract:
Summary Stimulated reservoir volume (SRV) is a prime factor controlling well performance in unconventional shale plays. In general, SRV describes the extent of connected conductive fracture networks within the formation. Being a pre-existing weak interface, natural fractures (NFs) are the preferred failure paths. Therefore, the interaction of hydraulic fractures (HFs) and NFs is fundamental to fracture growth in a formation. Field observations of induced fracture systems have suggested complex failure zones occurring in the vicinity of HFs, which makes characterizing the SRV a significant challenge. Thus, this work uses a broad range of subsurface conditions to investigate the near-tip processes and to rank their influences on HF-NF interaction. In this study, a 2D analytical workflow is presented that delineates the potential slip zone (PSZ) induced by an HF. The explicit description of failure modes in the near-tip region explains possible mechanisms of fracture complexity observed in the field. The parametric analysis shows varying influences of HF-NF relative angle, stress state, net pressure, frictional coefficient, and HF length on NF slip. This work analytically proves that an NF at a 30 ± 5° relative angle to an HF has the highest potential to be reactivated, which dominantly depends on the frictional coefficient of the interface. The spatial extension of the PSZ normal to the HF converges as the fracture propagates away and exhibits asymmetry depending on the relative angle. Then a machine-learning (ML) model [random forest (RF) regression] is built to replicate the physics-based model and statistically investigate parametric influences on NF slips. The ML model finds statistical significance of the predicting features in the order of relative angle between HF and NF, fracture gradient, frictional coefficient of the NF, overpressure index, stress differential, formation depth, and net pressure. The ML result is compared with sensitivity analysis and provides a new perspective on HF-NF interaction using statistical measures. The importance of formation depth on HF-NF interaction is stressed in both the physics-based and data-driven models, thus providing insight for field development of stacked resource plays. The proposed concept of PSZ can be used to measure and compare the intensity of HF-NF interactions at various geological settings.
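The data-driven half of this workflow, fitting a random-forest regressor to the physics-based model's outputs and reading off statistical feature importances, looks like this in scikit-learn. The inputs and the slip response below are illustrative stand-ins, with the slip peaking near the 30° relative angle that the analytics identify:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
angle = rng.uniform(0.0, 90.0, n)        # HF-NF relative angle, degrees
friction = rng.uniform(0.2, 0.8, n)      # NF frictional coefficient
stress_diff = rng.uniform(0.0, 20.0, n)  # stress differential, MPa

# stand-in slip response: strongest near 30 degrees, damped by friction
slip = (np.exp(-((angle - 30.0) / 15.0) ** 2) * (0.9 - friction)
        + 0.01 * stress_diff + rng.normal(0.0, 0.02, n))

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(np.column_stack([angle, friction, stress_diff]), slip)
for name, imp in zip(["relative angle", "friction", "stress differential"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```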
APA, Harvard, Vancouver, ISO, and other styles
42

Lu, Weifan, Yijian Zhou, Zeyan Zhao, Han Yue, and Shiyong Zhou. "Aftershock sequence of the 2017 Mw 6.5 Jiuzhaigou, China earthquake monitored by an AsA network and its implication to fault structures and strength." Geophysical Journal International 228, no. 3 (October 29, 2021): 1763–79. http://dx.doi.org/10.1093/gji/ggab443.

Full text
Abstract:
SUMMARY We deployed a seismic network near the source region of the 2017 Mw 6.5 Jiuzhaigou earthquake to monitor aftershock activity and to investigate the local fault structure. An aftershock deployment of Array of small Arrays (AsA) and a Geometric Mean Envelope (GME) algorithm are adopted to enhance detection performance. We also adopt a set of association, relocation and matched-filter techniques to obtain a detailed regional catalogue. 16 742 events are detected and relocated, including 1279 aftershocks following the Mw 4.8 aftershock. We develop a joint inversion algorithm utilizing locations of event clusters and focal mechanisms to determine the geometry of planar faults. Six segments were finally determined, in which three segments are related to the Huya fault reflecting a change in fault dip direction near the main shock hypocentre, while the other segments reflect branches showing orthogonal and conjugate geometries with the Huya fault. Aftershocks were active on branching faults between the Huya and Minjiang faults indicating that the main shock may have ruptured both major faults. We also resolve a fault portion with ‘weak strength’ near the main shock hypocentre, which is characterized by limited coseismic slips, concentrated afterslip, low aftershock activities, high b-value and high sensitivity to stress changes. These phenomena can be explained by fault frictional properties at conditionally stable sliding status, which may be related to the localized high pore-fluid pressure produced by the fluid intrusion.
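The matched-filter step used to densify the catalogue is, at its core, normalized cross-correlation of waveform templates against continuous data with a MAD-based detection threshold. A bare NumPy sketch (not the authors' code) is:

```python
import numpy as np

def matched_filter_detect(trace, template, threshold=8.0):
    """Slide a waveform template over a continuous trace; flag windows
    whose normalized cross-correlation exceeds `threshold` times the MAD
    of the correlation series (a common detection criterion)."""
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    cc = np.empty(len(trace) - m + 1)
    for i in range(len(cc)):
        w = trace[i:i + m]
        cc[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12))
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc > threshold * mad)[0], cc
```

Each template event from the relocated catalogue is scanned against the continuous records, so small aftershocks buried in noise are recovered at a fraction of the cost of full re-picking.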
APA, Harvard, Vancouver, ISO, and other styles
43

Sergeev, Philipp, Sadiksha Adhikari, Juho J. Miettinen, Maiju-Emilia Huppunen, Minna Suvela, Ana Slipicevic, Nina N. Nupponen, Fredrik Lehmann, and Caroline A. Heckman. "Single Cell RNA Sequencing Identifies Potential Molecular Indicators of Response to Melflufen in Multiple Myeloma." Blood 138, Supplement 1 (November 5, 2021): 1194. http://dx.doi.org/10.1182/blood-2021-147191.

Full text
Abstract:
Introduction Melphalan flufenamide (melflufen) is a novel peptide-drug conjugate that targets aminopeptidases and selectively delivers alkylating agents in tumors. Melflufen was recently FDA approved for the treatment of relapsed/refractory multiple myeloma (MM) patients. Considering the challenges in treating this group of patients, and the availability of several new drugs for MM, information that can support treatment selection is urgently needed. To identify potential indicators of response and mechanism of resistance to melflufen, we applied a multiparametric drug sensitivity assay to MM patient samples ex vivo and analyzed the samples by single cell RNA sequencing (scRNAseq). Ex vivo drug testing identified MM samples that were distinctly sensitive or resistant to melflufen, while differential gene expression analysis revealed pathways associated with response. Methods Bone marrow (BM) aspirates from 24 MM patients were obtained after written informed consent following approved protocols in compliance with the Declaration of Helsinki. BM mononuclear cells from 12 newly diagnosed (ND) and 12 relapsed/refractory (RR) patients were used for multi-parametric flow cytometry-based drug sensitivity and resistance testing (DSRT) evaluation to melflufen and melphalan, and for scRNAseq. Based on the results from the DSRT tests and drug sensitivity scores (DSS), we divided the samples into three groups - high sensitivity (HS, DSS > 40 (melflufen) or DSS > 16 (melphalan)), intermediate sensitivity (IS, 31 ≤ DSS ≤ 40 (melflufen) or 10 ≤ DSS ≤ 16 (melphalan)), and low sensitivity (LS, DSS < 31 (melflufen) or DSS < 10 (melphalan)). To identify genes responsible for the general sensitivity to melphalan-based drugs we conducted differential gene expression (DGE) analyses separately for melphalan and melflufen focusing on the plasma cell populations, comparing gene expression between HS and LS samples for both drugs ("HS vs. LS melphalan" and "HS vs. LS for melflufen", respectively). In addition, to explain the increased sensitivity of RR samples, we conducted the DGE analysis for ND vs. RR samples and searched for similarities between these three datasets. Results DSRT data indicated that samples from RRMM patients were significantly more sensitive to melflufen compared to samples from NDMM (Fig. 1A). In addition, we observed that samples with a gain of 1q (+1q) were more sensitive to melflufen while those with deletion of 13q (del13q) appeared to be less sensitive, although these results lacked significance (Fig. 1A). After separating the samples into different drug sensitivity groups (HS, IS, LS), DGE analysis showed significant downregulation of the drug efflux and multidrug resistance protein family member ABCB9 in the melflufen HS group as opposed to the LS group (2.2-fold, p < 0.001). A similar pattern was detected for the melphalan HS vs. LS comparison suggesting that this alteration might be a common indicator of sensitivity to melphalan-based drugs. Furthermore, in the melflufen HS group we observed downregulation of the matrix metallopeptidase inhibitors TIMP1 and TIMP2 (3-fold and 1.6-fold, p < 0.001, respectively), and cathepsin inhibitors CST3 and CSTB (3.2-fold and 1.3-fold, p < 0.001, respectively) (Fig. 1B). This effect was observed in both "ND vs. RR" and "HS vs. LS for melflufen" comparisons, but not for melphalan, suggesting that these changes are associated with disease progression and specific indicators of sensitivity to melflufen.
Moreover, gene set enrichment analysis (GSEA) showed activation of pathways related to protein synthesis, as well as amino acid starvation for malignant and normal cell populations in the HS group. Conclusion In summary, our results indicate that melflufen is more active in RRMM compared to NDMM. In addition, samples from MM patients with +1q, which is considered an indicator of high-risk disease, tended to be more sensitive to melflufen. Based on differential GSEA and pathway enrichment, several synergizing mechanisms could potentially explain the higher sensitivity to melflufen, such as decreased drug efflux and increased drug uptake. Although these results indicate potential indicators of response and mechanisms of drug efficacy, further validation of these findings is required using data from melflufen-treated patients. Figure 1. Disclosures Slipicevic: Oncopeptides AB: Current Employment. Nupponen: Oncopeptides AB: Consultancy. Lehmann: Oncopeptides AB: Current Employment. Heckman: Orion Pharma: Research Funding; Oncopeptides: Consultancy, Research Funding; Novartis: Research Funding; Celgene/BMS: Research Funding; Kronos Bio, Inc.: Research Funding.
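The group assignment used throughout the analysis follows directly from the stated DSS cut-offs and can be written as a one-screen helper (names are ours):

```python
def sensitivity_group(dss: float, drug: str) -> str:
    """Map a drug sensitivity score (DSS) to the study's response groups.
    HS: DSS > 40 (melflufen) / > 16 (melphalan); IS: 31-40 / 10-16; else LS."""
    hs_cut, is_cut = {"melflufen": (40.0, 31.0), "melphalan": (16.0, 10.0)}[drug]
    if dss > hs_cut:
        return "HS"
    if dss >= is_cut:
        return "IS"
    return "LS"

assert sensitivity_group(42, "melflufen") == "HS"
assert sensitivity_group(12, "melphalan") == "IS"
```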
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Ning, and Dean S. Oliver. "Critical Evaluation of the Ensemble Kalman Filter on History Matching of Geologic Facies." SPE Reservoir Evaluation & Engineering 8, no. 06 (December 1, 2005): 470–77. http://dx.doi.org/10.2118/92867-pa.

Full text
Abstract:
Summary The objective of this paper is to compare the performance of the ensemble Kalman filter (EnKF) to the performance of a gradient-based minimization method for the problem of estimation of facies boundaries in history matching. The EnKF is a Monte Carlo method for data assimilation that uses an ensemble of reservoir models to represent and update the covariance of variables. In several published studies, it outperforms traditional history-matching algorithms in adaptability and efficiency. Because of the approximate nature of the EnKF, the realizations from one ensemble tend to underestimate uncertainty, especially for problems that are highly nonlinear. In this paper, the distributions of reservoir-model realizations from 20 independent ensembles are compared with the distributions from 20 randomized-maximum-likelihood (RML) realizations for a 2D waterflood model with one injector and four producers. RML is a gradient-based sampling method that generates one reservoir realization in each minimization of the objective function. It is an approximate sampling method, but its sampling properties are similar to the Markov-chain Monte Carlo (McMC) method on highly nonlinear problems and are relatively more efficient than McMC. Despite the nonlinear relationship between the data (such as production rates and facies observations) and the model variables, the EnKF was effective at history matching the production data. We find that the computational effort to generate 20 independent realizations was similar for the two methods, although the complexity of the code is substantially less for the EnKF. Introduction Several questions regarding the use of the EnKF for history matching are addressed in this paper. The most important is a comparison of the efficiency with a gradient-based method for a history-matching problem with known facies properties but unknown boundary locations. Secondly, the EnKF and a gradient-based method are unlikely to give identical estimates of model variables, so it is also important to know if one method generates better realizations. Finally, because there is often a desire to use the history-matched realizations to quantify uncertainty, it is important to determine if one of the methods is more efficient at generating independent realizations. Gradient-based history matching can be performed in several ways (e.g., assimilating data in batch or sequentially); a variety of minimization algorithms can be used (e.g., conjugate gradient or quasi-Newton); and several different methods for computing the gradient are available (e.g., adjoint or sensitivity equations). In this paper, we use what we believe is the most efficient of the traditional gradient-based methods: an adjoint method to compute the gradient of the squared data mismatch and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method to compute the direction of the change. The remaining choice is whether to incorporate all data at once or sequentially. Simultaneous, or batch, inversion of all data is clearly a well-established history-matching procedure. Although data from wells or sensors may arrive nearly continuously, the practice of updating reservoir models as the data arrive is not common. 
There are several reasons that make sequential assimilation of data difficult for large, nonlinear models: the covariance for all model variables must be updated as new data are assimilated, but the covariance matrix is very large; the covariance may not be a good measure of uncertainty for nonlinear problems; and the sensitivity of a datum to changes in values of model variables is expensive to compute. Bayesian updating in general is described by Woodbury. Modifying a method described by Tarantola, Oliver evaluated the possibility of using a sequential assimilation approach for transient flow in porous media. He found that the results from sequential assimilation could be almost as good as those from batch assimilation if the order of the data was carefully selected. The problem was quite small, however, and an extension to large models was impractical. Although a sequential method has the advantage of generating a sequence of history-matched models that may all be useful at the time they are generated, our comparisons of efficiency will be based primarily on the effort required to assimilate all the data. If the intermediate predictions are needed (as they would be for control of a reservoir), the comparison provided here will underestimate the value of the sequential assimilation. A secondary objective of history matching is often to assess the uncertainty in the predictions of future reservoir performance or in the estimates of reservoir properties such as permeability, porosity, or saturation. In general, uncertainty is estimated from an examination of a moderate number of conditional simulations of the prediction or properties. Unless the realizations are generated fairly carefully and the sample is sufficiently large, however, the estimate of uncertainty could be quite poor. Two large comparative studies of the ability of Monte Carlo methods to quantify uncertainty in history matching have been carried out, one in groundwater and one in petroleum. Neither was conclusive, partly because of the small sample size. Liu and Oliver used a smaller reservoir model (fewer variables), but a much larger sample size. They found that the method that minimizes an objective function containing a model mismatch part and a data mismatch part, with noise added to observations, created realizations that were distributed nearly the same as realizations from McMC. The EnKF is a Monte Carlo method for updating reservoir models. It solves several problems with the application of the Kalman filter to large nonlinear problems. It has been applied to reservoir flow problems with generally good results. There has been no examination, however, of the distribution of the members of a single ensemble. The adequacy of the uncertainty estimate is completely unknown. In the first paper on the EnKF, Evensen described how the evolution of the probability density function for the model variables can be approximated by the motion of "particles" or ensemble members in phase space. Any desired statistical quantities can be estimated from the ensemble of points. When the size of the ensemble is relatively small, however, the approximation of the covariance from the ensemble almost certainly contains substantial errors. Houtekamer and Mitchell noted the tendency for a reduction in variance caused by "inbreeding." When the ensemble estimate is used in a Kalman filter, van Leeuwen explained how nonlinearity in the covariance update relation causes growth in the error as additional data are assimilated.
In this paper, the comparison is made using history matching on a truncated plurigaussian model for geologic facies. It provides a difficult history-matching problem with significant nonlinearities that make both the EnKF and the LBFGS method difficult to apply.
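The RML sampler that the EnKF is benchmarked against admits a compact generic sketch (SciPy, with a user-supplied forward model `g`; the paper itself uses an adjoint-computed gradient with LBFGS, which scipy's L-BFGS-B approximates here with finite differences):

```python
import numpy as np
from scipy.optimize import minimize

def rml_realization(g, d_obs, m_prior, C_m, C_d, rng):
    """One randomized-maximum-likelihood realization: perturb the prior
    mean and the data, then minimize the regularized data mismatch."""
    m_pert = rng.multivariate_normal(m_prior, C_m)   # perturbed prior model
    d_pert = rng.multivariate_normal(d_obs, C_d)     # perturbed observations
    Cm_inv = np.linalg.inv(C_m)
    Cd_inv = np.linalg.inv(C_d)

    def objective(m):
        rm = m - m_pert                              # model mismatch
        rd = g(m) - d_pert                           # data mismatch
        return rm @ Cm_inv @ rm + rd @ Cd_inv @ rd

    return minimize(objective, m_prior, method="L-BFGS-B").x
```

Each call yields one conditional realization, so the paper's 20 RML realizations correspond to 20 independent minimizations, which is the cost being compared against one EnKF ensemble.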
APA, Harvard, Vancouver, ISO, and other styles
45

Bathla, Girish, Neetu Soni, Raymondo Endozo, and Balaji Ganeshan. "Magnetic resonance texture analysis utility in differentiating intraparenchymal neurosarcoidosis from primary central nervous system lymphoma: a preliminary analysis." Neuroradiology Journal 32, no. 3 (February 21, 2019): 203–9. http://dx.doi.org/10.1177/1971400919830173.

Full text
Abstract:
Purpose Neurosarcoidosis and primary central nervous system lymphomas, although distinct disease entities, can both have overlapping neuroimaging findings. The purpose of our preliminary study was to assess if magnetic resonance texture analysis can differentiate parenchymal mass-like neurosarcoidosis granulomas from primary central nervous system lymphomas. Methods A total of nine patients were evaluated, four with parenchymal neurosarcoidosis granulomas and five with primary central nervous system lymphomas. Magnetic resonance texture analysis was performed with commercial software using a filtration histogram technique. Texture features of different sizes and variations in signal intensity were extracted at six different spatial scale filters, followed by feature quantification using statistical and histogram parameters, and 36 features were analysed for each sequence (T1-weighted, T2-weighted, fluid-attenuated inversion recovery, diffusion-weighted, apparent diffusion coefficient, T1-post contrast). The non-parametric Mann–Whitney test was used to evaluate the differences between different texture parameters. Results The differences in distribution of entropy on T2-weighted imaging, apparent diffusion coefficient and T1-weighted post-contrast images were statistically significant on all spatial scale filters. Magnetic resonance texture analysis using medium and coarse spatial scale filters was especially useful in discriminating neurosarcoidosis from primary central nervous system lymphomas for mean, mean positive pixels, kurtosis, and skewness on diffusion-weighted imaging (P < 0.004–0.030). At spatial scale filter 5, entropy on T2-weighted imaging (P = 0.001) was the most useful discriminator with a cut-off value of 6.12 (P = 0.001, area under the curve (AUC)-1, sensitivity (Sn)-100%, specificity (Sp)-100%), followed by kurtosis and skewness on diffusion-weighted imaging with cut-off values of −0.565 (P = 0.011, AUC-0.97, Sn-100%, Sp-83%) and −0.365 (P = 0.008, AUC-0.98, Sn-100%, Sp-100%) respectively. Conclusion Filtration histogram-based magnetic resonance texture analysis appears to be a promising modality to distinguish parenchymal neurosarcoidosis granulomas from primary central nervous system lymphomas.
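The filtration-histogram technique pairs a band-pass filter at a chosen spatial scale with first-order histogram statistics. A minimal sketch using a Laplacian-of-Gaussian filter (a common choice for this technique; the commercial software's exact kernel is not specified in the abstract) is:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def filtration_entropy(image, sigma):
    """Band-pass filter an image at one spatial scale (LoG of width sigma),
    then return the entropy of the filtered-intensity histogram."""
    filtered = gaussian_laplace(image.astype(float), sigma=sigma)
    hist, _ = np.histogram(filtered, bins=128)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

# entropy at increasing spatial scale filters (fine to coarse texture)
rng = np.random.default_rng(0)
roi = rng.normal(size=(64, 64))            # stand-in for a tumour ROI
print([round(filtration_entropy(roi, s), 2) for s in (1, 2, 3, 4, 5, 6)])
```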
APA, Harvard, Vancouver, ISO, and other styles
46

Haugen, Vibeke Eilwn J., Geir Naevdal, Lars-Joergen Natvik, Geir Evensen, Aina M. Berg, and Kristin M. Flornes. "History Matching Using the Ensemble Kalman Filter on a North Sea Field Case." SPE Journal 13, no. 04 (December 1, 2008): 382–91. http://dx.doi.org/10.2118/102430-pa.

Full text
Abstract:
Summary This paper applies the ensemble Kalman filter (EnKF) to history match a North Sea field model. This is, as far as we know, one of the first published studies in which the EnKF is applied in a realistic setting using real production data. The reservoir-simulation model has approximately 45,000 active grid cells, and 5 years of production data are assimilated. The estimated parameters consist of the permeability and porosity fields, and the results are compared with a model previously established using a manual history-matching procedure. It was found that the EnKF estimate improved the match to the production data. This study, therefore, supported previous findings when using synthetic models that the EnKF may provide a useful tool for history matching reservoir parameters such as the permeability and porosity fields. Introduction The EnKF developed by Evensen (1994, 2003, 2007) is a statistical method suitable for data assimilation in large-scale nonlinear models. It is a Monte Carlo method, where model uncertainty is represented by an ensemble of realizations. The prediction of the estimate and uncertainty is performed by ensemble integration using the reservoir-simulation model. The method provides error estimates at any time based on information from the ensemble. When production data are available, a variance-minimizing scheme is used to update the realizations. The EnKF provides a general and model-independent formulation and can be used to improve the estimates of both the parameters and variables in the model. The method has previously been applied in a number of applications [e.g., in dynamical ocean models (Haugen and Evensen 2002), in model systems describing the ocean ecosystems (Natvik and Evensen 2003a, 2003b), and in applications within meteorology (Houtekamer et al. 2005)]. This shows that the EnKF is capable of handling different types of complex- and nonlinear-model systems. The method was first introduced into the petroleum industry in studies related to well-flow modeling (Lorentzen et al. 2001, 2003). Nævdal et al. (2002) used the EnKF in a reservoir application to estimate model permeability focusing on a near-well reservoir model. They showed that there could be a great benefit from using the EnKF to improve the model through parameter estimation, and that this could lead to improved predictions. Nævdal et al. (2005) showed promising results estimating the permeability as a continuous field variable in a 2D field-like example. Gu and Oliver (2005) examined the EnKF for combined parameter and state estimation in a standardized reservoir test case. Gao et al. (2006) compared the EnKF with the randomized-maximum-likelihood method and pointed out several similarities between the methods. Liu and Oliver (2005a, 2005b) examined the EnKF for facies estimation in a reservoir-simulation model. This is a highly nonlinear problem where the probability-density function for the petrophysical properties becomes multimodal, and it is not clear how the EnKF can best handle this. A method was proposed in which the facies distribution for each ensemble member is represented by two normal distributed Gaussian fields using a method called truncated pluri-Gaussian simulation (Lantuéjoul 2002). Wen and Chen (2006) provided another discussion on the EnKF for estimation of the permeability field in a 2D reservoir-simulation model and examined the effect of the ensemble size. Lorentzen et al. 
(2005) focused on the sensitivity of the results with respect to the choice of initial ensemble using the PUNQ-S3 model. Skjervheim et al. (2007) used the EnKF to assimilate seismic 4D data. It was shown that the EnKF can handle these large data sets and that a positive impact could be found despite the high noise level in the data. The EnKF has some important advantages when compared to traditional assisted history-matching methods; the result is an ensemble of history-matched models that are all possible model realizations. The data are processed sequentially in time, meaning that new data are easily accounted for when they arrive. The method allows for simultaneous estimation of a huge number of poorly known parameters such as fields of properties defined in each grid cell. By analyzing the EnKF update equations, it is seen that the actual degrees of freedom in the estimation problem are limited to the ensemble size. One is still able to update the most important features of large-scale models. A limitation of the EnKF is the fact that its computations are based on first- and second-order moments, and there are problems that are difficult to handle, particularly when the probability distributions are multimodal (e.g., when representing a bimodal channel facies distribution). This paper considers the use of the EnKF for estimating dynamic and static parameters, focusing on permeability and porosity, in a field model of a StatoilHydro-operated field in the North Sea. The largest uncertainty in the model is expected to be related to the permeability values, especially in the upper part of the reservoir where the uncertainty may be as large as 30%.
APA, Harvard, Vancouver, ISO, and other styles
47

Hartig, F., C. Dislich, T. Wiegand, and A. Huth. "Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model." Biogeosciences 11, no. 4 (February 27, 2014): 1261–72. http://dx.doi.org/10.5194/bg-11-1261-2014.

Full text
Abstract:
Abstract. Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
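The core trick here, replacing an intractable likelihood with a parametric (Gaussian) approximation fitted to simulated summary statistics and dropping it into a standard MCMC sampler, is easy to sketch. This generic NumPy version follows the synthetic-likelihood idea the paper builds on, with a user-supplied `simulate` function standing in for the FORMIND forest model:

```python
import numpy as np

def synthetic_loglik(theta, simulate, obs_stats, n_sim=100):
    """Gaussian (parametric) likelihood approximation: simulate summary-
    statistic vectors at theta, fit a multivariate normal, evaluate data."""
    sims = np.array([simulate(theta) for _ in range(n_sim)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False) + 1e-9 * np.eye(sims.shape[1])
    diff = obs_stats - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

def metropolis(logpost, theta0, step, n_steps, rng):
    """Minimal random-walk Metropolis sampler around the approximation."""
    chain = [np.asarray(theta0, dtype=float)]
    lp = logpost(chain[0])
    for _ in range(n_steps):
        prop = chain[-1] + rng.normal(0.0, step, size=chain[-1].size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)
```

The sensitivity of the estimates to the choice and aggregation of summary statistics, discussed in the abstract, enters this sketch through whatever `simulate` returns as `obs_stats`-compatible output.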
APA, Harvard, Vancouver, ISO, and other styles
48

Greenhalgh, J., A. Bagust, A. Boland, N. Fleeman, C. McLeod, Y. Dundar, C. Proudlove, and R. Shaw. "Cetuximab for the treatment of recurrent and/or metastatic squamous cell carcinoma of the head and neck." Health Technology Assessment 13, Suppl 3 (October 2009): 49–54. http://dx.doi.org/10.3310/hta13suppl3-08.

Full text
Abstract:
This paper presents a summary of the evidence review group (ERG) report into the clinical effectiveness and cost-effectiveness of cetuximab for recurrent and/or metastatic squamous cell carcinoma of the head and neck (SCCHN) based upon a review of the manufacturer’s submission to the National Institute for Health and Clinical Excellence (NICE) as part of the single technology appraisal (STA) process. The submission’s evidence came from a single reasonably high-quality randomised controlled trial (RCT) [EXTREME (Erbitux in First-Line Treatment of Recurrent or Metastatic Head and Neck Cancer); n = 442] comparing cetuximab plus chemotherapy (CTX) with CTX alone. Cetuximab plus CTX had significant effects compared with CTX alone on the primary outcome of overall survival (10.1 versus 7.4 months respectively) and the secondary outcomes of progression-free survival (PFS) (5.6 versus 3.3 months), best overall response to therapy (35.6% versus 19.5%), disease control rate (81.1% versus 60%) and time-to-treatment failure (4.8 versus 3.0 months), but not on duration of response (5.6 months versus 4.7 months). No safety issues with cetuximab arose beyond those already previously documented. The manufacturer developed a two-arm state-transition Markov model to evaluate the cost-effectiveness of cetuximab plus CTX versus CTX alone, using clinical data from the EXTREME trial. The ERG recalculated the base-case cost-effectiveness results taking changes in parameters and assumptions into account. Subgroup and threshold analyses were also explored. The manufacturer reported an incremental cost-effectiveness ratio (ICER) of £121,367 per quality-adjusted life-year (QALY) gained and an incremental cost per life-year gained of £92,226. Univariate sensitivity analysis showed that varying the cost of day-case infusion and the utility values in the stable/response health state of the cetuximab plus CTX arm had the greatest impact on the ICER. Probabilistic sensitivity analysis illustrated that cetuximab plus CTX is unlikely to be cost-effective for patients with recurrent and/or metastatic SCCHN, even at what would usually be considered very high levels of willingness to pay for an additional QALY. With regard to the economic model, the appropriateness and reliability of parametric survival projection beyond the duration of trial data could not be fully explored because of lack of information. The ERG also questioned the appropriateness of economic modelling in this STA as evidence is available only from a single RCT. In conclusion, the ERG considers that patients with metastatic SCCHN were not shown to receive a significant survival benefit from cetuximab plus CTX compared with CTX alone and that even setting a lower price for cetuximab would not strengthen the manufacturer’s case for cost-effectiveness.
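For readers outside health economics, the headline figure is simply the incremental cost-effectiveness ratio. The helper below shows the arithmetic; the incremental cost and QALY values are hypothetical, chosen only to reproduce the reported ratio:

```python
def icer(cost_new, qaly_new, cost_cmp, qaly_cmp):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_cmp) / (qaly_new - qaly_cmp)

# illustrative numbers only: an incremental cost of ~GBP 28,837 over an
# incremental 0.2376 QALYs reproduces the reported ~GBP 121,367 per QALY
print(round(icer(cost_new=40836.8, qaly_new=0.7376,
                 cost_cmp=12000.0, qaly_cmp=0.5000)))
```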
APA, Harvard, Vancouver, ISO, and other styles
49

Álvarez Troncoso, J., C. M. Oñoro López, C. Soto Abánades, E. Ruiz Bravo, L. Blasco Santana, A. Noblejas Mozo, E. Martínez Robles, et al. "AB0333 USEFULNESS OF MULTI-PARAMETRIC EVALUATION INCLUDING MINOR SALIVARY GLAND BIOPSY FOR THE DIFFERENTIAL DIAGNOSIS OF SICCA SYNDROME IN A SPANISH SINGLE-CENTER EXPERIENCE." Annals of the Rheumatic Diseases 80, Suppl 1 (May 19, 2021): 1192.1–1192. http://dx.doi.org/10.1136/annrheumdis-2021-eular.3251.

Full text
Abstract:
Background: Sjögren’s syndrome (SS) is a systemic autoimmune disease characterized by mononuclear cell infiltration of the exocrine glands, which leads to sicca syndrome and systemic manifestations. The minor salivary gland biopsy (MSGB) is undoubtedly important for the classification, diagnosis and prognosis of SS. However, differentiating SS and non-Sjögren’s sicca syndrome (NSS) can be challenging. Objectives: The aim was to evaluate the histological characteristics of MSGB besides focus score (FS) in patients with sicca syndrome and the usefulness of the different clinical, serological and histological parameters to diagnose, classify and describe the prognosis of patients with Sjögren’s syndrome. Methods: Prospective observational single-center study of patients referred for study of sicca syndrome with multi-parametric evaluation from January 2019 to December 2020. A diagnostic protocol based on Schirmer’s test, unstimulated whole salivary flow (UWSF) and minimally invasive MSGB was applied. Patients fulfilling 2016 ACR-EULAR classification criteria were classified as SS. Results: In a cohort of 115 patients with sicca syndrome, 55 (47.8%) were diagnosed with SS. The mean age was 56.9±14.5 years and most of the patients were women (81.7%), with no significant differences between SS and NSS. SS patients were more likely to present a positive Schirmer’s test, positive UWSF, anti-Ro+, FS≥1, antinuclear antibodies (ANA+), rheumatoid factor (RF+) and anti-La+, among others. MSGB was a safe and very effective procedure (only 7% insufficient biopsies) in our cohort. The mean gland size of the MSGB was 5.7±0.37 mm2. Furthermore, it was the individual parameter that most correlated with SS, even more than anti-Ro+, Schirmer’s test and UWSF. Seronegative SS (anti-Ro-) accounted for 47.3%; these patients could not have been diagnosed except by MSGB. Scintigraphy did not help to differentiate SS from NSS, and neither did patient-referred xerostomia or xerophthalmia. The most frequent histological diagnosis was focal lymphocytic sialadenitis (FLS) (81.8%), followed by nonspecific chronic sialadenitis (9.1%). However, only FLS had a correlation with SS. There were no MSGBs labeled normal among the SS patients. Mean FS was 2.22±0.2 (16.7% had FS≥3). The other histological parameters that showed a positive correlation with SS were glandular atrophy (GA), germinal centers (GC), lymphoepithelial lesions (LEL) and lymphoid follicles (LF). FS≥1 is the current histological classification criterion for ACR/EULAR. However, the presence of lymphocytic infiltrates (LI) (although not FS≥1) and FLS were suggestive markers of SS with greater sensitivity (SE) and specificity (SP). FS≥3, GC, LEL and LF were only found in SS and were associated in previous studies with higher risk of lymphoma and systemic disease.

Test (prevalence in SS / NSS, p value, sensitivity, specificity):

Classification criteria
Schirmer’s test: 78.6% / 57.6%, p=0.019, Sn 0.78, Sp 0.42
UWSF: 65.5% / 38.3%, p=0.004, Sn 0.65, Sp 0.62
Anti-Ro+: 52.7% / 6.7%, p<0.001, Sn 0.53, Sp 0.93
FS≥1: 66.7% / 25%, p=0.027, Sn 0.67, Sp 0.75

Non-classification criteria
Anti-La+: 18.5% / 1.8%, p=0.003, Sn 0.43, Sp 0.87
ANA+: 74.5% / 28.3%, p<0.001, Sn 0.75, Sp 0.72
RF+: 38.2% / 10.0%, p=0.001, Sn 0.43, Sp 0.87
Scintigraphy: 49.1% / 38.3%, p=0.245, Sn 0.49, Sp 0.62
Xerostomia: 76.1% / 77.9%, p=0.663, Sn 0.76, Sp 0.20
Xerophthalmia: 74.5% / 83.8%, p=0.302, Sn 0.74, Sp 0.16

Histological characteristics
LI: 92.2% / 27.5%, p<0.001, Sn 0.90, Sp 0.78
FLS: 81.8% / 6.7%, p<0.001, Sn 0.82, Sp 0.93
GA: 75.5% / 50.9%, p=0.010, Sn 0.76, Sp 0.49
GC: 2.0% / –, p=0.310, Sn 0.02, Sp 1.00
LEL: 12.2% / –, p=0.011, Sn 0.11, Sp 1.00
LF: 4.1% / –, p=0.153, Sn 0.04, Sp 1.00

Conclusion: SS is a heterogeneous disease that requires a comprehensive clinical, serological, functional and histological evaluation.
MSGB is a simple, safe, repeatable procedure that provides enormous information. It was the single parameter that best correlated with SS and allowed the diagnosis of seronegative SS. In summary, the use of MSGB is essential not only for the differential diagnosis of sicca syndrome but also as a prognostic marker for SS. References: [1] Bautista-Vargas et al. Autoimmun Rev. 2020 Dec;19(12):102690. Disclosure of Interests: None declared
APA, Harvard, Vancouver, ISO, and other styles
50

Schwenzer, K., K. Brinkbäumer, R. Schmid, U. Szeimies, G. Pöpperl, K. Hahn, and S. Dresel. "[F-18]FDG imaging of head and neck tumors: Comparison of hybrid PET, dedicated PET and CT." Nuklearmedizin 40, no. 05 (2001): 172–73. http://dx.doi.org/10.1055/s-0038-1623883.

Full text
Abstract:
Summary Aim: The aim of the study was to evaluate [F-18] FDG imaging of head and neck tumors using a Hybrid-PET device of the 2nd or 3rd generation. Examinations were compared to dedicated PET and Spiral-CT. Methods: 54 patients suffering from head and neck tumors were examined using dedicated PET and Hybrid-PET after injection of 185-350 MBq [F-18] FDG. Examinations were carried out on the dedicated PET first, followed by a scan on the Hybrid-PET. Dedicated PET was acquired in 3D mode; Hybrid-PET was performed in list mode using an axial filter. Reconstruction of data was performed iteratively on both dedicated PET and Hybrid-PET. All patients received a CT scan in multislice technique. All findings were verified by the gold standard, histology, or, in case of negative histology, by follow-up. Results: Using dedicated PET, the primary or recurrent lesion was correctly diagnosed in 47/48 patients, using Hybrid-PET in 46/48 patients, and using CT in 25/48 patients. Metastatic disease in cervical lymph nodes was diagnosed in 17/18 patients with dedicated PET, in 16/18 patients with Hybrid-PET, and in 15/18 with CT. False positive results with regard to lymph node metastasis were seen in one patient for dedicated PET and Hybrid-PET, respectively, and in 18 patients for CT. In a total of 11 patients, previously unknown metastatic lesions elsewhere in the body were seen with dedicated PET and with Hybrid-PET. Additional malignant disease other than the head and neck tumor was found in 4 patients. Conclusion: Using Hybrid-PET for [F-18] FDG imaging reveals a loss of sensitivity and specificity of about 1-5% as compared to dedicated PET in head and neck tumors. [F-18] FDG PET with both dedicated PET and Hybrid-PET is superior to CT in the diagnosis of primary or recurrent lesions as well as in the assessment of lymph node involvement.
APA, Harvard, Vancouver, ISO, and other styles