Journal articles on the topic 'Frequency analysis procedures'

Consult the top 50 journal articles for your research on the topic 'Frequency analysis procedures.'

1

Vieira, T. M. M., L. F. Oliveira, and J. Nadal. "Estimation procedures affect the center of pressure frequency analysis." Brazilian Journal of Medical and Biological Research 42, no. 7 (July 2009): 665–73. http://dx.doi.org/10.1590/s0100-879x2009000700012.

2

Wazneh, H., F. Chebana, and T. B. M. J. Ouarda. "Optimal depth-based regional frequency analysis." Hydrology and Earth System Sciences 17, no. 6 (June 21, 2013): 2281–96. http://dx.doi.org/10.5194/hess-17-2281-2013.

Abstract:
Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region, which can lead to a loss of some information, and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.
3

Wazneh, H., F. Chebana, and T. B. M. J. Ouarda. "Optimal depth-based regional frequency analysis." Hydrology and Earth System Sciences Discussions 10, no. 1 (January 15, 2013): 519–55. http://dx.doi.org/10.5194/hessd-10-519-2013.

Abstract:
Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region, which can correspond to a loss of some information, and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently. The proposed method is based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g. φ minimizing estimation errors). In order to avoid subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.
4

Nguyen, Denis D., Ryan T. Judd, Terence E. Imbery, and Michael B. Gluth. "Frequency-Specific Analysis of Hearing Outcomes Associated with Ossiculoplasty Versus Stapedotomy." Annals of Otology, Rhinology & Laryngology 130, no. 9 (January 29, 2021): 1010–15. http://dx.doi.org/10.1177/0003489421990164.

Abstract:
Objective: Surgery on the ossicular chain may impact its underlying mechanical properties. This study aims to investigate comparative differences in frequency-specific hearing outcomes for ossiculoplasty versus stapedotomy. Methods: A retrospective chart review was conducted on subjects who underwent ossiculoplasty with partial ossicular replacement prosthesis (PORP) or laser stapedotomy with self-crimping nitinol/fluoroplastic piston, and achieved closure of postoperative pure tone average air-bone gap (PTA-ABG) ≤ 15 dB. 45 PORP and 38 stapedotomy cases were included, with mean length of follow-up of 7.6 months. Results: The mean change in PTA-ABG was similar for the 2 procedures (−17.9 dB vs −18.1 dB, P = .98). Postoperative ABG closure for stapedotomy was superior at 1000 Hz (8.9 dB vs 13.9 dB, P = .0003) and 4000 Hz (11.8 dB vs 18.0 dB, P = .0073). Both procedures also had improved postoperative bone conduction (BC) thresholds at nearly all frequencies, but there was no statistical difference in the change in BC at any particular frequency between the 2 procedures. Conclusion: Both procedures achieved a similar mean change in PTA-ABG. Stapedotomy was superior to PORP at ABG closure at 1000 Hz and at 4000 Hz, with 1000 Hz the most discrepant. The exact mechanism responsible for these changes is unclear, but the specific frequencies affected suggest that differences in each procedure’s respective impact on the native resonant frequency and mass load of the system could be implicated.
5

Saf, Betül. "Application of Index Procedures to Flood Frequency Analysis in Turkey." JAWRA Journal of the American Water Resources Association 44, no. 1 (February 2008): 37–47. http://dx.doi.org/10.1111/j.1752-1688.2007.00136.x.

6

Loganathan, G. V., C. Y. Kuo, and T. C. McCormick. "Frequency Analysis of Low Flows." Hydrology Research 16, no. 2 (April 1, 1985): 105–28. http://dx.doi.org/10.2166/nh.1985.0009.

Abstract:
The transformations (i) SMEMAX (ii) Modified SMEMAX (iii) Power and Probability Distributions (iv) Weibull (α,β,γ) or Extreme value type III (v) Weibull (α,β,0) (vi) Log Pearson Type III (vii) Log Boughton are considered for the low flow analysis. Also, different parameter estimating procedures are considered. Both the Weibull and log Pearson can have positive lower bounds and thus their use in fitting low flow probabilities may not be physically justifiable. A new derivation generalizing the SMEMAX transformation is proposed. A new estimator for the log Boughton distribution is presented. It is found that the Boughton distribution with Cunnane's plotting position provides a good fit to low flows for Virginia streams.
7

Guan, Wei, Zuo Jing Su, and Guo Qing Zhang. "Concise Robust Control for MIMO System Based on Frequency Domain Analysis." Applied Mechanics and Materials 278-280 (January 2013): 1555–60. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.1555.

Abstract:
In this paper, a concise nonlinear robust control scheme based on the frequency domain is proposed. Compared with the arbitrary selection of the weighting function in classical H∞ mixed-sensitivity robust control design procedures, the CGSA method gives a relatively more straightforward and concise design procedure for the MIMO robust control problem. In the simulations, the CGSA is applied to an integrated rudder and fin control loop to indicate that the integrated rudder and fin CGSA control scheme is very feasible for future practical application.
8

Basu, Bidroha, and V. V. Srinivas. "Evaluation of the Index-Flood Approach Related Regional Frequency Analysis Procedures." Journal of Hydrologic Engineering 21, no. 1 (January 2016): 04015052. http://dx.doi.org/10.1061/(asce)he.1943-5584.0001264.

9

Everstine, G. C. "Prediction of Low Frequency Vibrational Frequencies of Submerged Structures." Journal of Vibration and Acoustics 113, no. 2 (April 1, 1991): 187–91. http://dx.doi.org/10.1115/1.2930168.

Abstract:
Practical numerical techniques are described for calculating the low frequency vibrational resonances of general submerged structures. Both finite element and boundary element approaches for calculating fully-coupled added mass matrices are presented and illustrated. The finite element approach is implemented using existing structural analysis capability in NASTRAN. The boundary element approach uses the NASHUA structural acoustics program in combination with NASTRAN to compute the added mass matrix. The two procedures are compared in application to a submerged cylindrical shell with flat end closures. Both procedures proved capable of computing accurate submerged resonances; the more elegant boundary element procedure is easier to use but may be more expensive computationally.
10

Moon, Andrew S., Andrew S. McGee, Harshadkumar A. Patel, Ryan Cone, Gerald McGwin, Sameer Naranje, and Ashish Shah. "A Safety and Cost Analysis of Outpatient Versus Inpatient Hindfoot Fusion Surgery." Foot & Ankle Specialist 12, no. 4 (October 4, 2018): 336–44. http://dx.doi.org/10.1177/1938640018803699.

Abstract:
Background. Hindfoot fusion procedures are increasingly being performed in the outpatient setting. However, the cost savings of these procedures compared with the risks and benefits has not been clearly investigated. The objective of this study was to compare patient characteristics, costs, and short-term complications between inpatient and outpatient procedures. Methods. This was a retrospective review of all patients who underwent inpatient and outpatient hindfoot fusion procedures by a single surgeon, at 1 academic institution, from 2013 to 2017. Data collected included demographics, operative variables, comorbidities, complications, costs, and subsequent reencounters. Results. Of 124 procedures, 34 were inpatient and 90 were outpatient. Between procedural settings, with the numbers available, there was no significant increase in complication rate or frequency of reencounters within 90 days. There were no significant differences in the number of patients with reencounters related to the index procedure within 90 days (P = .43). There were 30 reencounters within 90 days after outpatient surgery versus 4 after inpatient surgery (P = .05). The total number of emergency room visits in the outpatient group within 90 days was significantly higher compared with the inpatient group (P = .04). The average cost for outpatient procedures was US$4159 less than inpatient procedures (P < .0001). Conclusion. Outpatient hindfoot fusion may be a safe alternative to inpatient surgery, with significant overall cost savings and similar rate of short-term complications. On the basis of these findings, we believe that outpatient management is preferable for the majority of patients, but further investigation is warranted. Levels of Evidence: Level III
11

Staudt, Amanda M., Mithun R. Suresh, Jennifer M. Gurney, Jennifer D. Trevino, Krystal K. Valdez-Delgado, Christopher A. VanFosson, Frank K. Butler, Elizabeth A. Mann-Salinas, and Russ S. Kotwal. "Forward Surgical Team Procedural Burden and Non-operative Interventions by the U.S. Military Trauma System in Afghanistan, 2008–2014." Military Medicine 185, no. 5-6 (December 20, 2019): e759-e767. http://dx.doi.org/10.1093/milmed/usz402.

Abstract:
Introduction No published study has reported non-surgical interventions performed by forward surgical teams, and there are no current surgical benchmarks for forward surgical teams. The objective of the study was to describe operative procedures and non-operative interventions received by battlefield casualties and determine the operative procedural burden on the trauma system. Methods This was a retrospective analysis of data from the Joint Trauma System Forward Surgical Team Database using battle and non-battle injured casualties treated in Afghanistan from 2008–2014. Overall procedure frequency, mortality outcome, and survivor morbidity outcome were calculated using operating room procedure codes grouped by the Healthcare Cost and Utilization Project classification. Cumulative attributable burden of procedures was calculated by frequency, mortality, and morbidity. Morbidity and mortality burden were used to rank procedures. Results The study population was comprised of 10,992 casualties, primarily male (97.8%), with a median age (interquartile range) of 25.0 (22.0–30.0) years. Affiliations were non-U.S. military (40.0%), U.S. military (35.1%), and others (25.0%). Injuries were penetrating (65.2%), blunt (32.8%), and burns (2.0%). Casualties included 4.4% who died and 14.9% who lived but had notable morbidity findings. After ranking by contribution to trauma system morbidity and mortality burden, the top 10 of 32 procedure groups accounted for 74.4% of operative care, 77.9% of mortality, and 73.1% of unexpected morbidity findings. These procedure groups included laparotomy, vascular procedures, thoracotomy, debridement, lower and upper gastrointestinal procedures, amputation, and therapeutic procedures on muscles and upper and lower extremity bones. The most common non-operative interventions included X-ray, ultrasound, wound care, catheterization, and intubation. Conclusions Forward surgical team training and performance improvement metrics should focus on optimizing commonly performed operative procedures and non-operative interventions. Operative procedures that were commonly performed, and those associated with higher rates of morbidity and mortality, can set surgical benchmarks and outline training and skillsets needed by forward surgical teams.
12

Saul, Carlos, Guilherme Pereira Lima, Abdon Pacurucu Merchan, and Julio C. Pereira-Lima. "Endoscopic Capsule Retention: Frequency, Causes and Risk Factors Analysis in 244 Consecutive Procedures." Acta Gastroenterológica Latinoamericana 53, no. 2 (June 30, 2023): 164–68. http://dx.doi.org/10.52787/agl.v53i2.307.

Abstract:
Introduction. The endoscopic capsule is central to the study of the small bowel. Retention is its main complication. Objective. To analyze the frequency and risk factors associated with capsule retention. Methods. 244 consecutive examinations were analyzed. The event was defined as "definitive retention" if the capsule remained in the small bowel for 3 weeks after the procedure, and as "temporary retention" if the capsule remained in the small bowel at the end of the procedure but was eliminated spontaneously in the following days. Risk factors associated with retention were inflammatory small bowel strictures, tumours and large diverticula. Results. Of 244 procedures, lesions were found in 164 (67.2%), 130 of which were in the small bowel. There were 5 and 2 patients with definitive and temporary retention, respectively. Forty-four cases had risk factors; in 7 of these (15.9%) there was retention of the endoscopic capsule, with definitive retention in 5 cases. The 2 cases of temporary retention occurred in a Meckel's diverticulum and in a peptic ulcer scar. The 5 cases of definitive retention occurred in 2 patients with Crohn's disease, 2 patients with stenosis related to anti-inflammatory drug use and 1 patient with actinic stenosis. None of the 11 cases of small bowel neoplasia had capsule retention. Conclusions. There was no endoscopic capsule retention in patients without risk factors. Definitive retention was observed in approximately one-tenth of all patients with small bowel risk factors. Recognition of risk factors and their identification prior to the procedure is of utmost importance, especially in patients with suspected inflammatory strictures.
13

Chen, C. Y., S. Armbrust, and C. Llorente. "Random Extreme Wave Analysis of Deepwater Structures." Journal of Offshore Mechanics and Arctic Engineering 111, no. 4 (November 1, 1989): 331–36. http://dx.doi.org/10.1115/1.3257103.

Abstract:
This paper reviews various wave analysis procedures for designing deepwater structures under an extreme seastate. The random wave analysis procedures suitable for fixed stiff platforms and compliant towers are discussed. The random wave analysis procedures are then applied to a 1350-ft water depth fixed platform. The reduction in design force levels due to random waves is indicated by comparing with the conventional regular wave analysis approach. The second harmonic effects due to waves can be easily identified through the dynamic response spectrum which has two peaks occurring at the peak frequency of the input wave spectrum and the natural frequency of the structure. The study also shows that the expected extreme value estimated based on the upcrossing approach agrees well with the snapshot peak response derived from a wave record containing an extreme wave height.
14

Lamont, Gretel S., R. S. Tucker, and G. A. M. Cross. "Analysis of antigen switching rates in Trypanosoma brucei." Parasitology 92, no. 2 (April 1986): 355–67. http://dx.doi.org/10.1017/s003118200006412x.

Abstract:
Previously quoted figures for the frequency of antigen switching in Trypanosoma brucei are based on incorrect assumptions. In order to determine the correct switching frequency, an equation was derived that takes the growth rates of the newly expressed antigen types into consideration, as well as the proportion of switched trypanosomes and the number of generations since the population was antigenically homogeneous. When this equation was applied to published in vitro data, variable values were obtained for the switching frequency in clonal populations originally expressing one antigen type. The calculated most likely switching frequencies ranged from 1.4 × 10⁻⁷ to 3.5 × 10⁻⁶. This variation was probably caused by differences in the growth rates of the new antigen types in the population and failure to detect slow-growing variants. To overcome these problems, an experimental procedure was developed to analyse the switching frequency in vitro. Trypanosomes were cloned and grown in parallel cultures. After an appropriate number of generations, cells expressing the original antigen type were destroyed and, from the proportion of cultures that contained new antigen types, the switching frequency was calculated. The technique minimized subculturing or other procedures that could distort the results. Although the method was optimized for analysing switching frequency, the values differed between experiments, ranging from 2.2 × 10⁻⁷ to 2.6 × 10⁻⁶ for one variant. Possible causes for the variations in switching frequency are discussed.
15

Smutný, Jaroslav, Dušan Janoštík, and Viktor Nohál. "APPLICATION OF UNCONVENTIONAL METHODS FOR FREQUENCY ANALYSIS IN ACOUSTICS." Akustika 36, no. 36 (2020): 25–32. http://dx.doi.org/10.36336/akustika20203625.

Abstract:
The goal of this study is to familiarize a wider professional audience with procedures for processing measured data in the frequency domain that are not yet widely known. The use of the so-called multi-taper method to analyze the acoustic response is described. This transformation belongs to a group of nonparametric methods derived from the discrete Fourier transform, and this study includes its mathematical analysis and description. In addition, the use of the method in a specific application area and recommendations for practice are described.
16

Perli, H. G., G. Hommel, and W. Lehmacher. "Test Procedures in Configural Frequency Analysis (CFA) Controlling the Local and Multiple Level." Biometrical Journal 29, no. 3 (1987): 255–67. http://dx.doi.org/10.1002/bimj.4710290302.

17

B Shankar, B., and D. Jayadevappa. "Ensemble EMD based Time-Frequency Analysis of Continuous Adventitious Signal Processing." International Journal of Engineering & Technology 7, no. 4.10 (October 2, 2018): 896. http://dx.doi.org/10.14419/ijet.v7i4.10.26783.

Abstract:
The importance of lung sound analysis is increasing rapidly. In this paper, we present a new method for the analysis of two classes of lung signals, namely wheezes and crackles. The procedure used in this article is based on an improved Empirical Mode Decomposition (EMD) called Ensemble Empirical Mode Decomposition (EEMD), used to analyze continuous and discontinuous adventitious sounds and compare the results with EMD. These two procedures decompose the lung signals into a set of components, each an Intrinsic Mode Function (IMF). The continuous and discontinuous adventitious sounds present in an asthmatic patient produce a non-stationary and nonlinear signal pattern, which the empirical mode decomposition decomposes. The instantaneous frequency and spectral analysis associated with the two techniques above are applied to the IMFs, and the outcome is presented as a time-frequency distribution to investigate the inbuilt properties of lung sound waves. The Hilbert marginal spectrum has been used to represent the total amplitude and energy contribution of every frequency value. Finally, the EEMD analysis proves better for wheezes, as it solves the mode-mixing issue, and an improvement over the EMD method is observed.
18

Koleková, Y., M. Petronijević, and G. Schmid. "Special dynamic soil-structure analysis procedures demonstrated for two tower-like structures." Slovak Journal of Civil Engineering 18, no. 2 (June 1, 2010): 26–33. http://dx.doi.org/10.2478/v10189-010-0008-2.

Abstract:
Many problems in Earthquake Engineering require modeling the structure as a dynamic system including the sub-grade. A structural engineer is usually familiar with the Finite Element Method but has a problem modeling the sub-grade when its infinite extension and wave propagation are the essential features. If the dynamic equation of a soil-structure system is written in the frequency domain and the variables of the system are total displacements, then the governing equations are given as in statics. The dynamic stiffness matrix of the system is obtained as the sum of the stiffnesses of the structure and sub-grade sub-structures. To illustrate the influence of the sub-grade on the dynamic behavior of the structure, the frequency response of two tower-like structures excited by a seismic harmonic wave field is shown. The sub-grade is modeled as an elastic homogeneous half-space. The structure is modeled as a finite beam element with lumped masses.
19

Ben-Isaac, Eyal, Matthew Keefer, Michelle Thompson, and Vincent J. Wang. "Assessing the Utility of Procedural Training for Pediatrics Residents in General Pediatric Practice." Journal of Graduate Medical Education 5, no. 1 (March 1, 2013): 88–92. http://dx.doi.org/10.4300/jgme-d-11-00255.1.

Abstract:
Background The Accreditation Council for Graduate Medical Education (ACGME) recommends that residents gain broad procedural competence in pediatrics training. There is little recent information regarding practice patterns after graduation. Objective We analyzed reported procedures performed in actual practice by graduates of a general pediatrics residency program. Methods We conducted an online survey from April 2007 to April 2011 of graduates of a single pediatrics program from a large children's hospital. Eligible participants completed general pediatrics residency training between 1992 and 2006. Graduates were asked about the adequacy of their training for each procedure, as well as the frequency of commonly performed procedures in their practice. As the primary analysis, procedures were divided into emergent and urgent procedures. Results Our response rate was 54% (209 of 387). General pediatricians rarely performed emergent procedures, such as endotracheal intubation, intraosseous line placement, thoracostomy, and thoracentesis. Instead, they more commonly performed urgent procedures, such as laceration repair, fracture or dislocation care, bladder catheterization, foreign body removal, and incision and drainage of simple abscesses. Statistically significant differences existed between emergent and urgent procedures (P < .001). Conclusions In a single, large, urban, pediatrics residency, 15 years of graduates who practiced general pediatrics after graduation reported they rarely performed emergent procedures, such as endotracheal intubation, but more often performed urgent procedures, such as laceration repair. These results may have implications for ACGME recommendations regarding the amount and type of procedural training required for general pediatrics residents.
20

Anggraini, Nurul. "An Analysis of Translation Procedures of Noun Phrases in Carlo Collodi’s Novel Entitled “Pinocchio”." LUNAR 2, no. 02 (November 5, 2018): 1–17. http://dx.doi.org/10.36526/ln.v2i02.530.

Abstract:
This research is conducted to identify the noun phrases and their types in Carlo Collodi's novel entitled Pinocchio, as well as the procedures used to translate them in the Indonesian version entitled Pinokio, translated by Wiwin Indiarti. The data of this descriptive qualitative research are noun phrases found in the novel and their translations in Indonesian. Documentation is used to collect the data, and the content analysis method is applied to analyze the data in relation to their contexts. The result of this research shows that there are 5,283 noun phrases in chapters 1 to 36 of the Pinocchio novel. Three types of noun phrase are found: 1) type I, identified as Modifier + Head (M+H), with 2,985 data; 2) type II, identified as Head + Modifier (H+M), with 425 data; and 3) type III, identified as Modifier + Head + Modifier (M+H+M), with 1,873 data. Furthermore, the translator used eleven of the translation procedures proposed by Newmark to translate the noun phrases: 1) Transference, with a frequency of 93 data (1.8%); 2) Naturalization, 102 data (2.1%); 3) Cultural Equivalent, 308 data (6.2%); 4) Functional Equivalent, 97 data (1.9%); 5) Componential Analysis, 773 data (14.7%); 6) Synonymy, 738 data (14.9%); 7) Through-translation, 1,205 data (24.3%); 8) Shift or Transposition, 792 data (17%); 9) Compensation, 438 data (8%); 10) Couplets, 173 data (3.4%); and 11) Modulation, 43 data (1%). It can be concluded that the translator most often used the Through-translation procedure to translate noun phrases into Indonesian. It was used because she wanted to deliver the message of the text as naturally as possible.
21

Demirhan, Osman. "The Genotoxic Effect of Interventional Cardiac Radiologic Procedures on Human Chromosomes." Clinical Medical Reviews and Reports 2, no. 5 (September 8, 2020): 01–10. http://dx.doi.org/10.31579/2690-8794/032.

Abstract:
In recent years, an important part of the ionizing radiation (IR) that humans have been exposed to for diagnostic purposes comes from interventional radiologic procedures. X-rays and contrast media are used in angiography, and patients and staff members are exposed to X-rays during these procedures. While it is known that X-rays cause DNA damage and carcinogenesis, the effect of the contrast agent is still unknown. The aim of this study was to investigate the effect of X-rays and contrast agents on the chromosomes of patients. Peripheral blood samples were taken from 50 patients (30 males, 20 females, aged between 38 and 75 years). Chromosome analysis of peripheral lymphocytes in the 50 patients was performed at 3 different times: before the interventional radiologic procedure, 24 hours after, and 1-3 months after the procedure. Also, chromosome analysis was performed on 17 staff members working during interventional radiological procedures to investigate the effect of X-rays. Standard cytogenetic analysis techniques were used for this study. The frequency of chromosomal aberrations (CAs) was significantly higher in patients 24 hours after the interventional radiologic procedures than before treatment (p=0.000). At the same time, comparison of the CAs after 24 hours with those observed 1-3 months later showed that the CAs were significantly reduced after 1-3 months (p=0.000). We also found that the frequency of CAs was statistically higher in patients exposed to high radiation doses (p=0.042). Compared with the control group (n=30), CAs were found to be significantly higher in workers exposed to radiation. Our findings have shown that the X-rays and contrast agents used in interventional radiology cause chromosomal damage. For this reason, the dose of radiation to be given to the patient must be carefully selected. Due to the potentially high genetic damage in patients with coronary artery disease (CAD), the type and amount of medication to be given and the frequency of radiological diagnostic procedures to be performed should be meticulously adjusted.
22

Jang, David W., Cecily Abraham, Derek D. Cyr, Kristine Schulz, Ralph Abi Hachem, and David L. Witsell. "Balloon Catheter Dilation of the Sinuses: A 2011-2014 MarketScan Analysis." Otolaryngology–Head and Neck Surgery 159, no. 6 (August 7, 2018): 1061–67. http://dx.doi.org/10.1177/0194599818791811.

Abstract:
Objective This study uses a large national claims-based database to analyze recent practice patterns related to balloon catheter dilation (BCD) of the sinuses. Study Design Retrospective study. Setting Academic. Subjects and Methods Patients with chronic rhinosinusitis (CRS) undergoing BCD and functional endoscopic sinus surgery (FESS) from 2011 to 2014 were identified in Truven Health MarketScan Databases with codes from the International Classification of Diseases, Ninth Revision, Clinical Modification and Current Procedural Terminology, Fourth Edition. Prevalence of CRS and frequency of sinus procedures were trended over the study period. Information related to site of service, demographics, and comorbidities was analyzed. Results Although the prevalence of CRS and sinus procedures remained stable over the study period, there was a consistent increase in the annual number of BCD procedures performed in the office. Among BCD procedures, multisinus dilation had the largest increase. A higher proportion of patients undergoing BCD were women, aged ≥65 years, and from the South. There was a higher prevalence of headache disorder and allergic rhinitis in the BCD group, as compared with the FESS and hybrid groups. Conclusion BCD, especially in the office, has risen in popularity since the introduction of Current Procedural Terminology codes in 2011. This study reveals significant differences in demographics and comorbidities between patients undergoing BCD and those undergoing FESS. Such disparities may highlight the need for better-defined indications for use of this technology.
APA, Harvard, Vancouver, ISO, and other styles
23

Stern, Caryn A., Zsolt T. Stockinger, William E. Todd, and Jennifer M. Gurney. "An Analysis of Orthopedic Surgical Procedures Performed During U.S. Combat Operations from 2002 to 2016." Military Medicine 184, no. 11-12 (April 24, 2019): 813–19. http://dx.doi.org/10.1093/milmed/usz093.

Full text
Abstract:
Introduction Orthopedic surgery constitutes 27% of procedures performed for combat injuries. General surgeons may deploy far forward without orthopedic surgeon support. This study examines the type and volume of orthopedic procedures during 15 years of combat operations in Iraq and Afghanistan. Materials and Methods A retrospective analysis of the US Department of Defense Trauma Registry (DoDTR) was performed for all Role 2 and Role 3 facilities from January 2002 to May 2016. The 342 ICD-9-CM orthopedic surgical procedure codes identified were stratified into fifteen categories, with upper and lower extremity subgroups. Data analysis used Stata Version 14 (College Station, TX). Results A total of 51,159 orthopedic procedures were identified. Most (43,611, 85.2%) were reported at Role 3 facilities. More procedures were reported on lower extremities (21,688, 57.9%). Orthopedic caseload was extremely variable throughout the 15-year study period. Conclusions Orthopedic surgical procedures are common on the battlefield. Current dispersed military operations can occur without orthopedic surgeon support; general surgeons therefore become responsible for initial management of all injuries. Debridement of open fracture, fasciotomy, amputation, and external fixation account for two-thirds of combat orthopedic volume; these procedures are no longer a significant part of general surgery training and are uncommonly performed by general/trauma surgeons at US hospitals. Given their frequency in war, expertise in orthopedic procedures by military general surgeons is imperative.
APA, Harvard, Vancouver, ISO, and other styles
24

Hummel, Regina, Daniel Wollschläger, Hans-Jürgen Baldering, Kristin Engelhard, Eva Wittenmeier, Katharina Epp, and Nina Pirlich. "Big data: Airway management at a university hospital over 16 years; a retrospective analysis." PLOS ONE 17, no. 9 (September 20, 2022): e0273549. http://dx.doi.org/10.1371/journal.pone.0273549.

Full text
Abstract:
Purpose Little is known about the current practice of airway management in Germany and its development over the last decades. The present study was, therefore, designed to answer the following questions. Which airway management procedures have been performed over the last 16 years and how has the frequency of these procedures changed over time? Is there a relationship between patient characteristics or surgical specialisation and the type of airway management performed? Methods In the present study, we used our in-house data acquisition and accounting system to retrospectively analyse airway management data for all patients who underwent a surgical or medical procedure with anaesthesiological care at our tertiary care facility over the past 16 years. 340,748 airway management procedures were analysed by type of procedure, medical/surgical specialty, and type of device used. Logistic regression was used to identify trends over time. Results Oral intubation was the most common technique over 16 years (65.7%), followed by supraglottic airway devices (18.1%), nasal intubation (7.5%), mask ventilation (1.6%), tracheal cannula (1.3%), double lumen tube (0.7%), and jet ventilation (0.6%). On average, the odds ratio of using supraglottic airway devices increased by 17.0% per year (OR per year = 1.072, 95% CI = 1.071–1.088) while oral intubation rates decreased. In 2005, supraglottic airway devices were used in about 10% of all airway management procedures. Until 2020, this proportion steadily increased by 27%. Frequency of oral intubation on the other hand decreased and was about 75% in 2005 and 53% in 2020. Over time, second-generation supraglottic airway devices were used more frequently than first-generation supraglottic airway devices. While second-generation devices made up about 9% of all supraglottic airway devices in 2010, in 2020 they represented a proportion of 82%. 
The use of fibreoptic intubation increased over time in otorhinolaryngology and dental, oral, and maxillofacial surgery, but showed no significant trends over the entire 16-year period. Conclusion Our data represent the first large-scale evaluation of airway management procedures over a long time. There was a significant upward trend in the use of supraglottic airway devices, with an increase in the use of second-generation masks while a decrease in oral intubations was observed.
APA, Harvard, Vancouver, ISO, and other styles
25

Kidson, R., and K. S. Richards. "Flood frequency analysis: assumptions and alternatives." Progress in Physical Geography: Earth and Environment 29, no. 3 (September 2005): 392–410. http://dx.doi.org/10.1191/0309133305pp454ra.

Full text
Abstract:
Flood frequency analysis (FFA) is a form of risk analysis, yet a risk analysis of the activity of FFA itself is rarely undertaken. The recent literature of FFA has been characterized by: (1) a proliferation of mathematical models, lacking theoretical hydrologic justification, but used to extrapolate the return periods of floods beyond the gauged record; (2) official mandating of particular models, which has resulted in (3) research focused on increasingly reductionist and statistically sophisticated procedures for parameter fitting to these models from the limited gauged data. These trends have evolved to such a refined state that FFA may be approaching the ‘limits of splitting’; at the very least, the emphasis was shifted early in the history of FFA from predicting and explaining extreme flood events to the more soluble issue of fitting distributions to the bulk of the data. However, recent evidence indicates that the very modelling basis itself may be ripe for revision. Self-similar (power law) models are not only analytically simpler than conventional models, but they also offer a plausible theoretical basis in complexity theory. Of most significance, however, is the empirical evidence for self-similarity in flood behaviour. Self-similarity is difficult to detect in gauged records of limited length; however, one positive aspect of the application of statistics to FFA has been the refinement of techniques for the incorporation of historical and palaeoflood data. It is these data types, even over modest timescales such as 100 years, which offer the best promise for testing alternative models of extreme flood behaviour across a wider range of basins. At stake is the accurate estimation of flood magnitude, used widely for design purposes: the power law model produces far more conservative estimates of return period of large floods compared to conventional models, and deserves closer study.
APA, Harvard, Vancouver, ISO, and other styles
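The abstract above claims that a self-similar (power-law) model yields more conservative design magnitudes for rare floods than a conventional extreme-value model. A minimal numerical sketch of that contrast, with invented calibration quantiles (the values are illustrative, not from the article):

```python
import math

# Calibrate a Gumbel (EV1) model and a self-similar power-law model
# Q(T) = a * T**b to the SAME two quantiles, then extrapolate to T = 100 years.
# The quantile values below are hypothetical, for illustration only.

def gumbel_reduced(T):
    """Gumbel reduced variate y(T) for return period T."""
    return -math.log(-math.log(1 - 1 / T))

T1, Q1 = 2, 100.0     # hypothetical 2-year flood (m^3/s)
T2, Q2 = 20, 200.0    # hypothetical 20-year flood

# Gumbel: Q = u + alpha * y(T), solved from the two calibration points.
alpha = (Q2 - Q1) / (gumbel_reduced(T2) - gumbel_reduced(T1))
u = Q1 - alpha * gumbel_reduced(T1)
q100_gumbel = u + alpha * gumbel_reduced(100)

# Power law: Q = a * T**b through the same two points.
b = math.log(Q2 / Q1) / math.log(T2 / T1)
a = Q1 / T1 ** b
q100_power = a * 100 ** b

# The power-law extrapolation exceeds the Gumbel one, i.e., it gives the
# more conservative estimate of the 100-year flood magnitude.
```

With these calibration points the Gumbel 100-year estimate comes out near 263 m³/s and the power-law estimate near 325 m³/s, illustrating the abstract's point about extrapolation beyond the gauged record.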
26

Mohamed, Osama Ahmed, and Mohamed Sherif Mehana. "Assessment of Accidental Torsion in Building Structures Using Static and Dynamic Analysis Procedures." Applied Sciences 10, no. 16 (August 9, 2020): 5509. http://dx.doi.org/10.3390/app10165509.

Full text
Abstract:
This article presents the findings of a study assessing the increase in a building’s response due to accidental torsion when subjected to seismic forces. Critical stiffness and geometrical parameters that define a building’s torsional response are examined, including: (1) the ratio, Ω, between the uncoupled torsional frequency ωθ and the uncoupled translational frequencies in the direction of ground motion, ωx or ωy, and (2) the floor plan aspect ratio, b/r, which is a function of the floor dimension and radius of gyration. The increased response is assessed on symmetric multi-storey buildings using both static and dynamic analysis methods specified by ASCE-7 and considering parameters affecting the torsional response. It was concluded that static and dynamic analysis procedures predict different accidental torsion responses. Static analysis based on the Equivalent Lateral Force (ELF) method predicts more conservative accidental torsion responses for flexible structures with Ω < 0.7~0.80, while the responses are less conservative for stiffer buildings. The conservatism in the static analysis method is attributed to the response amplification factor, Ax. Floor plans and their lateral support system having frequency ratio Ω = 1 will also have a torsional radius equal to the radius of gyration, and will experience a drop in torsional response relative to more torsionally flexible buildings. This article presents a procedure to overcome the shortcomings of static and dynamic analysis procedures in terms of estimating the accidental torsion response of symmetric building structures.
APA, Harvard, Vancouver, ISO, and other styles
27

Matusiak, Katarzyna, Justyna Wolna, Aleksandra Jung, Leszek Sadowski, and Jolanta Pawlus. "Impact of the Frequency and Type of Procedures Performed in Nuclear Medicine Units on the Expected Radiological Hazard." International Journal of Environmental Research and Public Health 20, no. 6 (March 15, 2023): 5206. http://dx.doi.org/10.3390/ijerph20065206.

Full text
Abstract:
Nuclear medicine procedures play an important role in medical diagnostics and therapy. They involve the use of ionizing radiation, which affects the radiological exposure of all persons involved in their performance. The goal of the study was to estimate the doses associated with the performance of various nuclear medicine procedures in order to optimize workload management. The analysis was performed for 158 myocardial perfusion scintigraphy procedures, 24 bone scintigraphies, 9 thyroid scintigraphies (6 with use of 131I and 3 with 99mTc), 5 parathyroid gland and 5 renal scintigraphies. In this evaluation, two possible locations of the thermoluminescent detectors used for measurements were taken into consideration: in the control room and directly next to the patient. It was shown how the radiological exposure varies depending on the performed procedure. For high-activity procedures, the ambient dose equivalent registered in the control room reached over 50% of the allowed dose limit. For example, the ambient dose equivalent obtained in the control room when performing bone scintigraphy alone was 1.13 ± 0.3 mSv, which is 68% of the calculated dose limit for the examined time span. It was shown that the risk associated with nuclear medicine procedures is influenced not only by the type of procedure but also by the frequency of performance and compliance with the ALARA principle. Myocardial perfusion scintigraphy accounted for 79% of all evaluated procedures. The use of radiation shielding reduced the obtained doses from 14.7 ± 2.1 mSv in the patient’s vicinity to 1.47 ± 0.6 mSv behind the shielding. By comparing the results obtained for the procedures with the dose limits established by the Polish Ministry of Health, it is possible to estimate the optimal division of duties among staff so that everyone receives similar doses.
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, B., T. Vreeland, and T. Aloia. "Multiinstitutional analysis of frequency, risk factors and outcomes for 537 simultaneous pancreatectomy/hepatectomy procedures." HPB 19 (April 2017): S48. http://dx.doi.org/10.1016/j.hpb.2017.02.036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Guan, Fuwang, Hong Xiao, Meiwu Shi, Weidong Yu, and Fumei Wang. "Realization of planar frequency selective fabrics and analysis of transmission characteristics." Textile Research Journal 87, no. 11 (June 16, 2016): 1360–66. http://dx.doi.org/10.1177/0040517516652348.

Full text
Abstract:
Based on traditional frequency selective surfaces (FSSs), the concept of novel frequency selective fabrics (FSFs) is proposed. In this paper, a specific square-loop patch FSF was chosen as an example to illustrate the design procedures, including ANSYS (HFSS) simulation and numerical calculation methods, and a computer-based experiment was then conducted to develop prototypes. Although the simulation, calculation, and experiment results show minor differences, especially in the resonance frequency, they show good consistency overall, which demonstrates that traditional design methods can also apply to 2D FSFs. The experimental transmission curve shows an obvious band-stop response, peaking at -37.12 dB at the resonance frequency of 11.65 GHz, and the -10 dB narrow bandwidth is predicted from 10.85 GHz to 12.55 GHz. To further verify the validity of the design procedures, two complementary cross-shaped FSFs were fabricated through a computer embroidery process, and their experimental transmission curves are complementary as expected, peaking at -26.05 dB and 0 dB at the same resonance frequency of 9.65 GHz, with -10 dB and -0.5 dB narrow bandwidths of 1.07 GHz and 0.41 GHz, respectively. Although many problems need to be solved in further research, this convenient fabrication method and theoretical basis could make relevant work feasible in later studies.
APA, Harvard, Vancouver, ISO, and other styles
30

Poelman, Gaétan, Saeid Hedayatrasa, Joost Segers, Wim Van Paepegem, and Mathias Kersemans. "An Experimental Study on the Defect Detectability of Time- and Frequency-Domain Analyses for Flash Thermography." Applied Sciences 10, no. 22 (November 13, 2020): 8051. http://dx.doi.org/10.3390/app10228051.

Full text
Abstract:
A defect’s detectability in flash thermography is highly dependent on the applied post-processing methodology. The majority of the existing analysis techniques operate either on the time-temperature data or on the frequency-phase data. In this paper, we compare the efficiency of time- and frequency-domain analysis techniques in flash thermography for obtaining good defect detectability. Both single-bin and integrated-bin evaluation procedures are considered: dynamic thermal tomography and thermal signal area for the time-domain approach, and frequency domain tomography and adaptive spectral band integration for the frequency-domain approach. The techniques are applied on various carbon fiber reinforced polymer samples having a range of defect sizes and defect types. The advantages and drawbacks of the different post-processing techniques are evaluated and discussed. The best defect detectability is achieved using the integrated procedure in frequency domain.
APA, Harvard, Vancouver, ISO, and other styles
31

Aminzadeh, Hamed, and Dalton Martini Colombo. "Analysis and Design Procedures of CMOS OTAs Based on Settling Time." Journal of Integrated Circuits and Systems 17, no. 1 (April 30, 2022): 1–11. http://dx.doi.org/10.29292/jics.v17i1.590.

Full text
Abstract:
Analysis of generic single-pole, two-pole, and three-pole operational transconductance amplifiers (OTAs) is carried out based on settling time. The most important design metrics of the open-loop frequency response, such as the stability margins and the gain-bandwidth product (GBW), are related to the settling time of single-, two-, and three-stage OTAs in closed-loop configuration, enabling a design procedure to be presented for each OTA based on settling-time specifications. Transistor-level design examples are provided for each case to validate the described settling-based design strategies.
APA, Harvard, Vancouver, ISO, and other styles
32

Sintoni, Michele, Elena Macrelli, Alberto Bellini, and Claudio Bianchini. "Condition Monitoring of Induction Machines: Quantitative Analysis and Comparison." Sensors 23, no. 2 (January 16, 2023): 1046. http://dx.doi.org/10.3390/s23021046.

Full text
Abstract:
In this paper, a diagnostic procedure for rotor bar faults in induction motors is presented, based on the Hilbert and discrete wavelet transforms. The method is compared with other procedures with the same data, which are based on time–frequency analysis, frequency analysis and time domain. The results show that this method improves the rotor fault detection in transient conditions. Variable speed drive applications are common in industry. However, traditional condition monitoring methods fail in time-varying conditions or with load oscillations. This method is based on the combined use of the Hilbert and discrete wavelet transforms, which compute the energy in a bandwidth corresponding to the maximum fault signature. Theoretical analysis, numerical simulation and experiments are presented, which confirm the enhanced performance of the proposed method with respect to prior solutions, especially in time-varying conditions. The comparison is based on quantitative analysis that helps in choosing the optimal trade-off between performance and (computational) cost.
APA, Harvard, Vancouver, ISO, and other styles
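The envelope-analysis step at the core of the method above can be sketched with an FFT-based analytic signal (the construction underlying the Hilbert transform). The signal and frequencies below are invented stand-ins; a broken rotor bar would modulate the stator current at twice the slip frequency:

```python
import numpy as np

# Minimal sketch: an FFT-based analytic signal (equivalent in effect to
# scipy.signal.hilbert) recovers the amplitude modulation that a rotor fault
# imposes on the stator current. All frequencies here are illustrative.

def envelope(x):
    """Amplitude envelope of a real 1-D signal via the analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0              # keep the Nyquist bin as-is
    return np.abs(np.fft.ifft(X * h))

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# 50 Hz supply current, modulated at 6 Hz (a stand-in for the 2*s*f signature).
current = (1.0 + 0.3 * np.cos(2 * np.pi * 6 * t)) * np.cos(2 * np.pi * 50 * t)

env = envelope(current)
env = env - env.mean()               # drop the DC component of the envelope
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
peak = freqs[np.argmax(spec)]        # dominant envelope frequency
```

Running this, the envelope spectrum peaks at the 6 Hz modulation frequency rather than at the 50 Hz carrier, which is exactly the demodulation property the diagnostic procedure exploits before the wavelet-based energy computation.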
33

Everstine, G. C. "Dynamic Analysis of Fluid-Filled Piping Systems Using Finite Element Techniques." Journal of Pressure Vessel Technology 108, no. 1 (February 1, 1986): 57–61. http://dx.doi.org/10.1115/1.3264752.

Full text
Abstract:
Two finite element procedures are described for predicting the dynamic response of general 3-D fluid-filled elastic piping systems. The first approach, a low-frequency procedure, models each straight pipe or elbow as a sequence of beams. The contained fluid is modeled as a separate coincident sequence of axial members (rods) which are tied to the pipe in the lateral direction. The model includes the pipe hoop strain correction to the fluid sound speed and the flexibility factor correction to the elbow flexibility. The second modeling approach, an intermediate frequency procedure, follows generally the original Zienkiewicz-Newton scheme for coupled fluid-structure problems except that the velocity potential is used as the fundamental fluid unknown to symmetrize the coefficient matrices. From comparisons of the beam model predictions to both experimental data and the 3-D model, the beam model is validated for frequencies up to about two-thirds of the lowest fluid-filled lobar pipe mode. Accurate elbow flexibility factors are seen to be important for effective beam modeling of piping systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Fink, Thomas, Vanessa Sciacca, Sebastian Feickert, Andreas Metzner, Tina Lin, Michael Schlüter, Roland Richard Tilz, et al. "Outcome of cardiac tamponades in interventional electrophysiology." EP Europace 22, no. 8 (June 4, 2020): 1240–51. http://dx.doi.org/10.1093/europace/euaa080.

Full text
Abstract:
Abstract Aims The aim of this study was to analyse tamponades following electrophysiological procedures regarding frequency and mortality in a high-volume centre and to identify independent predictors for severe tamponades. Methods and results We performed a retrospective study on 34 982 consecutive patients undergoing diagnostic electrophysiological studies or catheter ablation of cardiac arrhythmias. The combined endpoint was defined as severe tamponade. Criteria for severe tamponade included surgical repair, repeat pericardiocentesis, cardiopulmonary resuscitation, intrahospital death or death during follow-up, and thrombo-embolic events or complications due to therapeutic management. Multivariate analysis was performed to identify independent predictors for severe tamponade. A total of 226 tamponades were identified. Overall frequency of tamponades was 0.6%. Procedures requiring epicardial approach had the highest rate of tamponades (9.4%). Twenty-nine patients with tamponade underwent surgery (12.8% of all tamponades and 21.4% of tamponades during epicardial procedures). Overall tamponade-related mortality was 0.03% (9 deaths). Fifty-six patients (24.8%) experienced severe tamponade. Independent risk factors for severe tamponades were endocardial ablation of ventricular tachycardia, epicardial approach, balloon device ablation, high aspiration volume during pericardiocentesis and structural heart disease. Conclusion The frequency of tamponades is strongly dependent on the type of procedure performed. Overall tamponade-related mortality was low but significantly higher in patients undergoing epicardial procedures. Surgical backup should be considered for patients undergoing complex ventricular tachycardia ablation and left atrial ablation procedures.
APA, Harvard, Vancouver, ISO, and other styles
35

Hearn, G. E., K. C. Tong, and S. M. Lau. "Sensitivity of Wave Drift Damping Coefficient Predictions to the Hydrodynamic Analysis Models Used in the Added Resistance Gradient Method." Journal of Offshore Mechanics and Arctic Engineering 110, no. 4 (November 1, 1988): 337–47. http://dx.doi.org/10.1115/1.3257071.

Full text
Abstract:
This paper is concerned with the formulation and simplifications of the general fluid structure interaction analysis for an advancing oscillating vessel in waves to provide alternative 3D hydrodynamic models to determine first and second-order wave-induced fluid loadings, and, hence, the prediction of low-frequency wave damping coefficients. Heuristic arguments which lead to the Added Resistance Gradient (ARG) method of calculating low-frequency damping coefficients together with two 3D-based calculation procedures are presented. Predictions of added resistance and motion responses are compared with other published data. The intermediate hydrodynamic coefficient predictions based on 2D and 3D hydrodynamic models are compared. Low-frequency damping coefficient predictions based on the two proposed 3D calculation procedures are compared with experimental measurements and earlier published generalized strip theory values. Assessment of the applicability of the procedures, the result of their application, and further possible generalizations of the methods are discussed.
APA, Harvard, Vancouver, ISO, and other styles
36

Requena, Ana I., Fateh Chebana, and Taha B. M. J. Ouarda. "Heterogeneity measures in hydrological frequency analysis: review and new developments." Hydrology and Earth System Sciences 21, no. 3 (March 21, 2017): 1651–68. http://dx.doi.org/10.5194/hess-21-1651-2017.

Full text
Abstract:
Abstract. Some regional procedures to estimate hydrological quantiles at ungauged sites, such as the index-flood method, require the delineation of homogeneous regions as a basic step for their application. The homogeneity of these delineated regions is usually tested providing a yes/no decision. However, complementary measures that are able to quantify the degree of heterogeneity of a region are needed to compare regions, evaluate the impact of particular sites, and rank the performance of different delineating methods. Well-known existing heterogeneity measures are not well-defined for ranking regions, as they entail drawbacks such as assuming a given probability distribution, providing negative values and being affected by the region size. Therefore, a framework for defining and assessing desirable properties of a heterogeneity measure in the regional hydrological context is needed. In the present study, such a framework is proposed through a four-step procedure based on Monte Carlo simulations. Several heterogeneity measures, some of which commonly known and others which are derived from recent approaches or adapted from other fields, are presented and developed to be assessed. The assumption-free Gini index applied on the at-site L-variation coefficient (L-CV) over a region led to the best results. The measure of the percentage of sites for which the regional L-CV is outside the confidence interval of the at-site L-CV is also found to be relevant, as it leads to more stable results regardless of the regional L-CV value. An illustrative application is also presented for didactical purposes, through which the subjectivity of commonly used criteria to assess the performance of different delineation methods is underlined.
APA, Harvard, Vancouver, ISO, and other styles
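The assumption-free Gini index that the abstract singles out is simple to compute. A minimal sketch, applied to hypothetical at-site L-CV values (the site values are made up for illustration):

```python
# Gini index over at-site L-CV values as a degree-of-heterogeneity measure.
# The regional samples below are invented for illustration.

def gini(values):
    """Gini index: 0 for identical values, growing with heterogeneity."""
    n = len(values)
    mean = sum(values) / n
    # Mean absolute difference over all ordered pairs.
    mad = sum(abs(a - b) for a in values for b in values) / (n * n)
    return mad / (2 * mean)

homogeneous = [0.20, 0.21, 0.19, 0.20]    # similar at-site L-CVs
heterogeneous = [0.10, 0.35, 0.18, 0.40]  # widely scattered L-CVs
```

A more scattered set of at-site L-CVs yields a larger index, so regions can be ranked by `gini(...)` rather than just accepted or rejected by a yes/no homogeneity test.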
37

Sowa, Paweł, and Daria Zychma. "Dynamic Equivalents in Power System Studies: A Review." Energies 15, no. 4 (February 14, 2022): 1396. http://dx.doi.org/10.3390/en15041396.

Full text
Abstract:
In this paper, the available methods and procedures for creating equivalents for the analysis of electromagnetic transients in power systems are presented and discussed. General requirements of power system representation during simulation of electromagnetic transients are shown. The main available procedures are shown, along with an assessment of their advantages and disadvantages. Methods to search for the optimal replacement of structures in time and frequency domains are discussed. Optimization and direct methods in the frequency domain are presented. Each of these methods is discussed with respect to their possible use in determining the structure of the equivalent circuit for the study of electromagnetic phenomena. Methods to reduce a complex power system, as one of the approaches to determining the structure and parameters of the equivalent circuit, are also presented. Contraindications to the search for equivalents in the frequency domain to study electromagnetic transients are discussed. An analysis of methods for the identification of parameters of the equivalents is presented. The latest advances in the search for the structure and parameters of equivalents are presented, particularly the use of artificial neural networks in the process of replacing parts of systems. Finally, the analyses conducted in this study, together with recommendations regarding the choice of the procedure during the search for equivalents for the analysis of electromagnetic transient phenomena, are summarized.
APA, Harvard, Vancouver, ISO, and other styles
38

Badrinathan, Avanti, Anuja L. Sarode, Christine E. Alvarado, Jillian Sinopoli, Jonathan D. Rice, Philip A. Linden, Matthew L. Moorman, and Christopher W. Towe. "Surgical subspecialization is associated with higher rate of rib fracture stabilization: a retrospective database analysis." Trauma Surgery & Acute Care Open 8, no. 1 (April 2023): e000994. http://dx.doi.org/10.1136/tsaco-2022-000994.

Full text
Abstract:
Background Surgical stabilization of rib fractures (SSRF) is performed on only a small subset of patients who meet guideline-recommended indications for surgery. Although previous studies show that provider specialization was associated with SSRF procedural competency, little is known about the impact of provider specialization on SSRF performance frequency. We hypothesize that provider specialization would impact performance of SSRF. Methods The Premier Hospital Database was used to identify adult patients with rib fractures from 2015 and 2019. The outcome of interest was performance of SSRF, defined using International Classification of Diseases—10th Revision Procedure Coding System coding. Patients were categorized as receiving their procedures from a thoracic, general, or orthopedic surgeon. Patients with missing or other provider types were excluded. Multivariate modeling was performed to evaluate the effect of surgical specialization on outcomes of SSRF. Given a priori assumptions that trauma centers may have different practice patterns, a subgroup analysis was performed excluding patients with ‘trauma center’ admissions. Results Among 39 733 patients admitted with rib fractures, 2865 (7.2%) received SSRF. Trauma center admission represented a minority (1034, 36%) of SSRF procedures relative to other admission types (1831, 64%, p=0.15). In a multivariable analysis, thoracic (OR 6.94, 95% CI 5.94–8.11) and orthopedic provider (OR 2.60, 95% CI 2.16–3.14) types were significantly more likely to perform SSRF. In further analyses of trauma center admissions versus non-trauma center admissions, this pattern of SSRF performance was found at non-trauma centers. Conclusion The majority of SSRF procedures in the USA are being performed by general surgeons and at non-trauma centers. ‘Subspecialty’ providers in orthopedics and thoracic surgery are performing fewer total SSRF interventions, but are more likely to perform SSRF, especially at non-trauma centers. Provider specialization as a barrier to SSRF may be related to competence in SSRF procedures and requires further study. Type Therapeutic/care management. Level of evidence IV.
APA, Harvard, Vancouver, ISO, and other styles
39

He, Shi, and Aijun Wang. "Time and Frequency Domain Dynamic Analysis of Offshore Mooring." Journal of Marine Science and Engineering 9, no. 7 (July 19, 2021): 781. http://dx.doi.org/10.3390/jmse9070781.

Full text
Abstract:
The numerical procedures for dynamic analysis of mooring lines in the time domain and frequency domain were developed in this work. The lumped mass method was used to model the mooring lines. In the time domain dynamic analysis, the modified Euler method was used to solve the motion equation of mooring lines. The dynamic analyses of mooring lines under horizontal, vertical, and combined harmonic excitations were carried out. The cases of single-component and multicomponent mooring lines under these excitations were studied, respectively. The case considering the seabed contact was also included. The program was validated by comparing with the results from commercial software, Orcaflex. For the frequency domain dynamic analysis, an improved frame invariant stochastic linearization method was applied to the nonlinear hydrodynamic drag term. The cases of single-component and multicomponent mooring lines were studied. The comparison of results shows that frequency domain results agree well with nonlinear time domain results.
APA, Harvard, Vancouver, ISO, and other styles
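The time-stepping idea in the abstract above can be sketched at its smallest scale: one lumped mass on a linear spring-damper standing in for a mooring segment. "Modified Euler" is taken here as the semi-implicit (velocity-first) variant, which is an assumption on my part since the abstract does not spell out the exact scheme:

```python
# One-node sketch of a lumped-mass time-domain integration. A single mass on a
# spring-damper stands in for a mooring segment; parameters are illustrative.
# Assumption: "modified Euler" = semi-implicit (velocity-first) update.

def step(x, v, dt, m=1.0, k=4.0, c=0.1):
    """Advance position x and velocity v by one time step dt."""
    a = (-k * x - c * v) / m       # restoring + damping acceleration
    v_new = v + a * dt             # update velocity first...
    x_new = x + v_new * dt         # ...then position with the NEW velocity
    return x_new, v_new

x, v = 1.0, 0.0                    # released from rest at unit displacement
for _ in range(20000):             # 20 s of simulated time at dt = 1 ms
    x, v = step(x, v, dt=0.001)
# the displacement oscillates and decays toward zero under damping
```

Updating position with the already-advanced velocity keeps the scheme stable over long simulations at small time steps, which is why this family of integrators is popular for line dynamics; the real procedure in the paper of course couples many such nodes through tension and seabed-contact forces.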
40

Shaterian, Ashkaun, Lohrasb Ross Sayadi, Amanda Anderson, Wendy K. Y. Ng, Gregory R. D. Evans, and Amber Leis. "Characteristics of Secondary Procedures following Digit and Hand Replantation." Journal of Hand and Microsurgery 11, no. 03 (February 25, 2019): 127–33. http://dx.doi.org/10.1055/s-0039-1681981.

Full text
Abstract:
Abstract Introduction Secondary procedures following digit and hand replants are often necessary to optimize functional outcomes. To date, the incidence and characteristics of secondary procedures have yet to be fully defined. Materials and Methods A literature search was performed using the NCBI (National Center for Biotechnology Information) database for studies evaluating secondary procedures following digit and hand replantation/revascularization. Studies were evaluated for frequency and type of secondary procedure following replantation. Descriptive statistical analysis was conducted across the pooled dataset. Results Nineteen studies representing 1,485 replants were included in our analysis. A total of 1,124 secondary procedures were performed on the 1,485 replants. Secondary procedures most commonly addressed tendons (27.1%), bone/joints (16.1%), soft tissue coverage (15.4%), nerve (5.4%), and scar contractures (4.5%). A total of 12.7% of replants resulted in re-amputation (16.7% of secondary procedures). The details of secondary procedures are further described in the article. Conclusion Secondary procedures are often necessary following hand and digit replants. Patients should be informed of the possible need for subsequent surgery, including delayed amputation, to improve hand function. These data improve our understanding of replant outcomes and can help patients better comprehend the decision to undergo replantation.
APA, Harvard, Vancouver, ISO, and other styles
41

Provaznik, Mary Kay, and Rollin H. Hotchkiss. "Analysis of Gauging Station Flood Frequency Estimates in Nebraska Using L-Moments and Region of Influence Methods." Transportation Research Record: Journal of the Transportation Research Board 1647, no. 1 (January 1998): 53–60. http://dx.doi.org/10.3141/1647-07.

Full text
Abstract:
Recent advances in predicting flood magnitude and frequency at streamgauging stations are illustrated using stream flow data from Nebraska. Prediction methods were based on statistical techniques referred to as L-moments and the region of influence method (ROI). L-moments are less sensitive to extremely high or low floods than current procedures and may provide more stable estimates of flood frequency. The ROI method for predicting flood frequency does not depend on fixed hydrologic regions but uses information from all appropriate gauges in the state to form a unique region and frequency estimate for each site. Estimates of the 100-year flood using current procedures showed statistically significant differences from estimates made using a generalized extreme value distribution with L-moments. Differences were due to the treatment of extreme flood events and illustrate the robust character of L-moments. L-moments were less sensitive to extreme floods as expected. Creating regions using the ROI method was found to be sensitive to the selection of basin attributes for assembling sites, but was not sensitive to the number of gauges initially used to create a region, nor the criterion used to eliminate a gauge from a potential region. Statistical tests revealed insignificant differences between ROI estimates of the 100-year flood when compared with estimates using current procedures. The similarity in estimates is attributed to current “filtering” procedures used that reduce the impact of extreme events. The ROI method is viewed as a more objective method of achieving the same result.
APA, Harvard, Vancouver, ISO, and other styles
42

Williams, Jason P., Farzan Sasangohar, S. Camille Peres, Alec Smith, and M. Sam Mannan. "Investigating Written Procedures in Process Safety: Qualitative Data Analysis of Interviews from High Risk Facilities." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1669–70. http://dx.doi.org/10.1177/1541931213601905.

Full text
Abstract:
Socio-technical systems, such as those in the oil and gas, petrochemical, and energy industries, are escalating in complexity, a consequence of increasingly advanced technologies, organizational constructs, and business functions that interact and depend on one another. These dynamic social and technological elements, coupled with the high risk inherent in these systems, have generated conditions that can bring about catastrophic failure and the tragic loss of human life, such as the disaster in Bhopal, India (1984) or the explosion in the Houston Ship Channel near Pasadena, Texas (1989). Historically, the perception of such complexities and the struggle to minimize catastrophic failures observed within the petrochemical industry have been attributed to the inherent variability in people. Therefore, process safety regulations associated with the Clean Air Act Amendments of 1990 and the Occupational Safety and Health Administration (OSHA) require employers to develop written process safety information, or "procedures," which aim to ensure consistency in plant operations and to help workers at the "sharp end" of the system cope with unexpected events (OSHA, 2000). However, investigation reports since then, such as those from the BP Texas City incident of 2005, cite "outdated and ineffective procedures" as significant contributing factors to failure. Evidence from other studies suggests that procedures in complex environments are sometimes misunderstood, outdated, or simply not used (Bullemer & Hajdukiewicz, 2004). While there have been studies on procedural deviations and safety violations (Alper & Karsh, 2009; Jamison & Miller, 2000), employers continue to report a high rate of procedural breakdowns as root causes of incidents (Bates & Holroyd, 2012). This warrants a contemporary, systems-oriented inquiry into process safety and behavior surrounding the use of these documents at the individual (e.g., cognitive), task, cultural, organizational, and environmental levels.
This perspective appreciates the interdependent nature of these interrelated socio-technical elements and should provide insight into the effectiveness of current procedure systems, thereby informing future work in creating and empirically testing mitigation methods to address potential barriers. This research documents one part of a three-part, large-scale project that investigates the issues with procedure forms, usage, adoption, and challenges in a wide range of high-risk industries. As such, the method was framed around first understanding the extent to which these challenges could be generalized between various locations. A grounded theory approach to qualitative data analysis, influenced by the Strauss & Corbin and Charmaz approaches (Bryman, 2015) and facilitated by the analysis software MAXQDA-12, was used to examine 72 semi-structured interviews with operators of varying roles and experiences across 6 countries and an offshore drilling vessel. Findings reaffirm previous research, suggesting that the effectiveness of written procedures is limited by an abundance of outdated procedures plagued by information overload. New findings suggest that the frequency of the task and the experience level of the worker impact workers' procedure use, with participants commenting that the perceived importance of these documents decreases significantly after initial training periods. Other unintended consequences associated with written procedural systems include complications in using the documents around personal protective equipment (PPE) requirements and harsh weather, reactive organizational behavior surrounding changing procedures, and a general disconnect between the users and the writers of these documents.
These problems are only exacerbated as management imposes pressure on personnel to use procedures despite the issues encountered with the documents, inhibiting valuable feedback within their organizations as personnel withhold information for fear of job security and potential punishment (in the form of 20-day suspension programs or termination). Moving forward, research is in progress to identify the interdependencies between environmental, cultural, organizational, task, and personal factors unique to each location. This will provide insight regarding the extent to which procedures may not be generalized, after which a holistic view of procedure use in the industry will be offered. The resulting insight will point to recommendations for the future redesign of procedures' role in promoting safe operations within petrochemical systems. Finally, the third part of this research project will demonstrate the efficacy of using visualizations as tools and methods in qualitative research for modeling complexity in socio-technical systems.
APA, Harvard, Vancouver, ISO, and other styles
43

Bošković, Nikola, Marija Radmilović-Radjenović, and Branislav Radjenović. "Finite Element Analysis of Microwave Tumor Ablation Based on Open-Source Software Components." Mathematics 11, no. 12 (June 10, 2023): 2654. http://dx.doi.org/10.3390/math11122654.

Full text
Abstract:
Microwave ablation is a procedure for treating various types of cancers during which a small needle-like probe inserted into the tumor delivers microwave energy, heating the tissue and effectively producing necrosis of the tumor tissue. Mathematical models of microwave ablation involve the modeling of multiple physical phenomena that occur during the procedure, including electromagnetic wave propagation, heat transfer, and tissue damage. In this study, a complete model of a microwave ablation procedure based on open-source software components is presented. First, the comprehensive procedure of mesh creation for the complete geometric arrangement of the microwave ablation, including a multi-slot coaxial antenna, a real liver tumor taken from a database, and the surrounding liver tissue, is described. It is demonstrated that utilizing smart meshing procedures significantly reduces the usage of computational resources and simulation time. An accurate custom explicit Euler time loop was designed to obtain temperature values and estimate tissue necrosis across the computational domain over the duration of the ablation. The simulation results obtained by solving the electromagnetic field using the finite element method in the frequency domain are presented and analyzed. The simulation was performed for a microwave frequency of 2.45 GHz, and the volumetric distribution of temperature and estimation of cell damage over 600 s are presented.
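The explicit Euler time loop described here can be illustrated with a heavily simplified 1-D sketch (the actual study solves the electromagnetic and thermal fields with finite elements on a 3-D mesh). All parameter values below, including the power deposition and the Arrhenius constants, are illustrative assumptions, not values from the paper:

```python
import math

def simulate(n=101, dx=1e-3, t_end=60.0):
    """1-D explicit Euler sketch: heat diffusion with a localized microwave
    power deposition term, plus an Arrhenius damage integral per node.
    Parameter values are illustrative, not taken from the paper."""
    rho, cp, kth = 1050.0, 3600.0, 0.5         # assumed tissue density, heat capacity, conductivity
    diff = kth / (rho * cp)                    # thermal diffusivity
    dt = 0.4 * dx * dx / diff                  # stable explicit step (below the CFL limit 0.5)
    A, Ea, R = 7.39e39, 2.577e5, 8.314         # literature-style Arrhenius constants for soft tissue
    T = [310.15] * n                           # body temperature, kelvin; ends held fixed
    omega = [0.0] * n                          # accumulated damage integral
    q = [0.0] * n
    q[n // 2] = 5e6                            # assumed power density near the antenna slot, W/m^3
    t = 0.0
    while t < t_end:
        lap = [0.0] * n
        for i in range(1, n - 1):
            lap[i] = (T[i - 1] - 2 * T[i] + T[i + 1]) / (dx * dx)
        for i in range(1, n - 1):
            T[i] += dt * (diff * lap[i] + q[i] / (rho * cp))
            omega[i] += dt * A * math.exp(-Ea / (R * T[i]))   # damage accumulates fast once hot
        t += dt
    return T, omega

T, omega = simulate()
```

The damage integral is the standard Arrhenius formulation: cell death is commonly taken as the region where the integral exceeds 1, which is how a temperature history is turned into a necrosis estimate.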
APA, Harvard, Vancouver, ISO, and other styles
44

Harleman, Donald R. F., William C. Nolan, and Vernon C. Honsinger. "DYNAMIC ANALYSIS OF OFFSHORE STRUCTURES." Coastal Engineering Proceedings 1, no. 8 (January 29, 2011): 28. http://dx.doi.org/10.9753/icce.v8.28.

Full text
Abstract:
Analytical procedures are presented for calculating the dynamic displacements of fixed offshore structures in oscillatory waves. The structure considered has four legs in a square configuration with waves impinging normal to one side; however, the procedures are general and may be applied to other configurations and wave directions. The horizontal displacement of the deck is determined as a function of time by applying vibration theory for a damped spring-mass system subject to a harmonic force. The instantaneous wave force on each leg is composed of a hydrodynamic drag component and an inertial component, as in the usual "statical" wave force analysis. The wave force expression is approximated by a Fourier series, which permits calculation of the platform displacement by superposition of solutions of the equation of motion for the platform. Depending on the ratio of the wave frequency to the natural frequency of the platform, the structural stresses may be considerably higher than those found by methods that neglect the elastic behavior of the structure. The highest wave to be expected in a given locality is not necessarily the critical design wave: maximum displacements and structural stresses may occur for smaller waves whose periods produce a resonant response of the platform. Displacement measurements in a wave tank using a platform constructed of plastic are presented to show the validity of the analytical method. Both small and finite amplitude waves are used over a wide range of frequency ratios. A digital computer program (7090 FORTRAN) is used for the displacement calculation.
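The superposition procedure the abstract describes (expanding the periodic wave force in a Fourier series and summing the steady-state harmonic responses of a damped spring-mass system) can be sketched as follows; all numeric parameter values are hypothetical:

```python
import math

def steady_state_response(m, c, k, force_coeffs, omega1, t):
    """Steady-state deck displacement of a damped spring-mass platform model.
    force_coeffs[n-1] = (a_n, b_n) gives the Fourier term
    a_n*cos(n*omega1*t) + b_n*sin(n*omega1*t) of the wave force; the classical
    harmonic solutions of the equation of motion are superposed, as in the
    procedure described above."""
    x = 0.0
    for n, (a_n, b_n) in enumerate(force_coeffs, start=1):
        w = n * omega1
        F = math.hypot(a_n, b_n)                   # amplitude of this force harmonic
        phase = math.atan2(b_n, a_n)
        denom = math.hypot(k - m * w * w, c * w)   # dynamic stiffness magnitude
        lag = math.atan2(c * w, k - m * w * w)     # response phase lag behind the force
        x += (F / denom) * math.cos(w * t - phase - lag)
    return x
```

With m = 1, c = 2, k = 100 (natural frequency 10 rad/s, damping ratio 0.1), a unit harmonic at resonance produces five times the static deflection F/k, which is exactly why a smaller wave near the platform's natural period, rather than the highest expected wave, can govern the design.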
APA, Harvard, Vancouver, ISO, and other styles
45

Vogel, Richard M., and Charles N. Kroll. "The value of streamflow record augmentation procedures in low-flow and flood-flow frequency analysis." Journal of Hydrology 125, no. 3-4 (July 1991): 259–76. http://dx.doi.org/10.1016/0022-1694(91)90032-d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yaruss, J. Scott. "Real-Time Analysis of Speech Fluency." American Journal of Speech-Language Pathology 7, no. 2 (May 1998): 25–37. http://dx.doi.org/10.1044/1058-0360.0702.25.

Full text
Abstract:
Many authors have suggested that it is possible for clinicians to collect basic data regarding their client's speech fluency on-line, or in real time, while the client is speaking. Unfortunately, the literature contains relatively little in the way of detailed instructions on exactly how such data should be collected. This article provides specific instructions for real-time collection of information about the frequency and types of speech disfluencies produced by individuals who stutter. The paper also outlines procedures for training students and clinicians to use this technique reliably and accurately and proposes tolerance limits for determining whether frequency counts are sufficiently reliable for clinical use.
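As a rough illustration of the kind of frequency count and reliability check the article discusses, the snippet below computes disfluencies per 100 syllables and compares two clinicians' counts against a relative tolerance. The 10% threshold is an assumed placeholder, not the tolerance limit the article proposes:

```python
def disfluency_frequency(n_disfluencies, n_syllables):
    """Disfluencies per 100 syllables, a common clinical fluency metric."""
    return 100.0 * n_disfluencies / n_syllables

def within_tolerance(count_a, count_b, tolerance=0.10):
    """Check whether two observers' frequency counts agree within a relative
    tolerance (the 10% default is an assumption, not the article's limit)."""
    if count_a == count_b:
        return True
    return abs(count_a - count_b) / max(count_a, count_b) <= tolerance
```

A check like this makes the reliability question concrete: a real-time count is usable clinically only if an independent recount would land inside the tolerance band.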
APA, Harvard, Vancouver, ISO, and other styles
47

Kipyatkov, Nikita Yuryevich, and Vladimir Borisovich Dutov. "Prospects of use of integrative indicators of computer processing of EEG in the structure of the express-analysis of neurocognitive status." Pediatrician (St. Petersburg) 5, no. 1 (March 15, 2014): 44–48. http://dx.doi.org/10.17816/ped5144-48.

Full text
Abstract:
The aim of the present study was to determine the set of procedures that most adequately assesses a person's neurocognitive profile under time constraints. The proposed procedure included EEG registration with subsequent computer processing. The study group included 152 adults aged 18 to 65 years who were undergoing occupational selection for professions demanding heightened attention. Processing the EEG fragments with the computer program WinEEG allowed calculation of the index and power spectrum in every frequency range. The examination results of 34 psychoneurological dispensary patients served as the control database. In the study group, EEGs recorded during testing were visually evaluated as free of paroxysmal or focal abnormal activity. Statistical processing of the data obtained in the EEG computer analysis revealed which differences between the two data groups were statistically significant: the indices and power spectra of the theta frequency range in all channels. The indices and power spectra of the alpha frequency range also differed significantly in the occipital and temporal channels. Computer analysis of the EEG makes this method a promising approach for further development of rapid diagnostics of human mental state in psychophysiological screening tests. The proposed procedures may be useful for integrated rapid diagnostics of human mental state.
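The band-wise power computation underlying this kind of quantitative EEG analysis can be sketched with a plain DFT periodogram. Real tools such as WinEEG use FFTs, windowing, and artifact rejection; the band edges below are conventional approximations (theta about 4-8 Hz, alpha about 8-13 Hz), not the study's exact settings:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi) Hz via a plain DFT periodogram.
    O(n^2) pure-Python sketch; fine for short EEG fragments."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# Demo: a pure 10 Hz test signal should put essentially all its power in alpha.
fs, n = 128.0, 256
sig = [math.sin(2 * math.pi * 10.0 * t / fs) for t in range(n)]
alpha_power = band_power(sig, fs, 8.0, 13.0)
theta_power = band_power(sig, fs, 4.0, 8.0)
```

For the unit-amplitude sine at an exact DFT bin, the alpha-band power comes out at about 0.25 while the theta band is numerically zero; a theta/alpha ratio built from such band powers is one form the "integrative indicators" mentioned above can take.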
APA, Harvard, Vancouver, ISO, and other styles
48

Staszewski, W. J. "Analysis of non-linear systems using wavelets." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 214, no. 11 (November 1, 2000): 1339–53. http://dx.doi.org/10.1243/0954406001523317.

Full text
Abstract:
Analysis of non-linear systems is an essential part of engineering structural dynamics. A number of methods have been developed in recent years. Classical Fourier-based methods have been extended to the use of phase plane, combined time-frequency, time-scale approaches and multidimensional spectra. This paper is an attempt to collate in one place some of the recent advances in wavelet analysis for the study of non-linear systems. This includes methods related to system identification based on wavelet ridges and skeletons, damping estimation procedures, wavelet-based frequency response functions, cross-wavelet analysis, self-similar signals, coherent structures and chaos.
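One of the ridge/skeleton identification ideas surveyed here can be sketched directly: compute a Morlet wavelet transform of a damped oscillation, locate the ridge to estimate the frequency, and use the decay of the skeleton amplitude to estimate damping. The signal, its decay rate, and the frequency grid below are synthetic assumptions:

```python
import math

def morlet_cwt_mag(signal, dt, freqs, b, w0=6.0):
    """|CWT| of `signal` at time index b for candidate frequencies, using a
    Morlet wavelet with center frequency w0 (unnormalized magnitudes)."""
    out = []
    for f in freqs:
        a = w0 / (2 * math.pi * f)          # scale matched to frequency f
        re = im = 0.0
        for t in range(len(signal)):
            u = (t - b) * dt / a
            if abs(u) > 5.0:                # Gaussian envelope is effectively compact
                continue
            g = math.exp(-0.5 * u * u) / math.sqrt(a)
            re += signal[t] * g * math.cos(w0 * u)
            im -= signal[t] * g * math.sin(w0 * u)
        out.append(math.hypot(re, im))
    return out

# Synthetic damped oscillation: 1 Hz with assumed decay rate sigma = 0.05 1/s.
dt, f0, sigma = 0.05, 1.0, 0.05
x = [math.exp(-sigma * i * dt) * math.sin(2 * math.pi * f0 * i * dt) for i in range(400)]
freqs = [0.5 + 0.05 * i for i in range(31)]     # candidate frequencies 0.5-2.0 Hz
mags = morlet_cwt_mag(x, dt, freqs, b=200)
ridge_f = freqs[max(range(len(mags)), key=mags.__getitem__)]   # ridge = frequency of max |W|
# Skeleton decay between two interior times gives the damping estimate.
w_lo = morlet_cwt_mag(x, dt, [ridge_f], b=160)[0]
w_hi = morlet_cwt_mag(x, dt, [ridge_f], b=240)[0]
sigma_est = -(math.log(w_hi) - math.log(w_lo)) / ((240 - 160) * dt)
```

The two time points are kept well inside the record so the wavelet support does not spill past the signal ends; near the edges the estimate would be biased, which is the usual cone-of-influence caveat in wavelet-based identification.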
APA, Harvard, Vancouver, ISO, and other styles
49

Abe, Yoshiko, Curtis W. Marean, Peter J. Nilssen, Zelalem Assefa, and Elizabeth C. Stone. "The Analysis of Cutmarks on Archaeofauna: A Review and Critique of Quantification Procedures, and a New Image-Analysis GIS Approach." American Antiquity 67, no. 4 (October 2002): 643–63. http://dx.doi.org/10.2307/1593796.

Full text
Abstract:
Zooarchaeologists utilize a diverse set of approaches for quantifying cutmark frequencies. The least quantitative method for cutmark analysis relies on composite diagrams of cutmarks overlain on drawings of skeletal elements (diagrammatic methods). To date, interpretations of these data have generally relied on qualitative and subjective assessments of cutmark frequency and placement. Many analysts count the number of fragments that have a cutmark, regardless of the number of cutmarks on the fragments (fragment-count data). Others count the number of cutmarks (cutmark-count data). Both can be expressed as simple counts (NISP data) or as counts relative to a more derived measure of skeletal element abundance (MNE data). All of these approaches provide different types of data and are not intercomparable. Several researchers have shown that fragmentation of specimens impacts the frequency of cuts, and we show here that fragmentation impacts all these current approaches in ways that compromise comparative analysis when fragmentation differs between assemblages. We argue that cutmark frequencies from assemblages with differing levels of fragmentation are most effectively made comparable by correcting the frequency of cutmarks by the observed surface area. We present a new method that allows this surface area correction by using the image analysis abilities of GIS. This approach overcomes the fragmentation problem. We illustrate the power of this technique by comparing a highly fragmented archaeological assemblage to an unfragmented experimental collection.
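The surface-area correction itself reduces to a very small computation: cutmark counts are divided by the observed (preserved) bone surface area, measured via GIS in the authors' method, rather than by specimen counts. The assemblage figures below are entirely hypothetical:

```python
def cutmark_density(cutmark_count, observed_area_cm2):
    """Cutmarks per 100 cm^2 of preserved bone surface: the surface-area
    correction that makes differently fragmented assemblages comparable."""
    return 100.0 * cutmark_count / observed_area_cm2

# Hypothetical assemblages: raw fragment- or cutmark-count frequencies would
# diverge because of fragmentation, but area-corrected densities compare directly.
fragmented = {"cutmark_count": 45, "observed_area_cm2": 1500.0}   # many small fragments
complete = {"cutmark_count": 60, "observed_area_cm2": 2000.0}     # whole elements
d_frag = cutmark_density(**fragmented)
d_comp = cutmark_density(**complete)
```

Here the two assemblages show the same butchery intensity per unit of observed surface even though their raw counts differ, which is the comparability the correction is designed to recover.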
APA, Harvard, Vancouver, ISO, and other styles
50

Zemunik, Petra, Jadranka Šepić, Havu Pellikka, Leon Ćatipović, and Ivica Vilibić. "Minute Sea-Level Analysis (MISELA): a high-frequency sea-level analysis global dataset." Earth System Science Data 13, no. 8 (August 24, 2021): 4121–32. http://dx.doi.org/10.5194/essd-13-4121-2021.

Full text
Abstract:
Abstract. Sea-level observations provide information on a variety of processes occurring over different temporal and spatial scales that may contribute to coastal flooding and hazards. However, global research on sea-level extremes is restricted to hourly datasets, which prevent the quantification and analyses of processes occurring at timescales between a few minutes and a few hours. These shorter-period processes, like seiches, meteotsunamis, infragravity and coastal waves, may even dominate in low tidal basins. Therefore, a new global 1 min sea-level dataset – MISELA (Minute Sea-Level Analysis) – has been developed, encompassing quality-checked records of nonseismic sea-level oscillations at tsunami timescales (T<2 h) obtained from 331 tide-gauge sites (https://doi.org/10.14284/456, Zemunik et al., 2021b). This paper describes data quality control procedures applied to the MISELA dataset, world and regional coverage of tide-gauge sites, and lengths of time series. The dataset is appropriate for global, regional or local research of atmospherically induced high-frequency sea-level oscillations, which should be included in the overall sea-level extremes assessments.
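A minimal sketch of isolating high-frequency (T < 2 h) sea-level oscillations from a 1 min record, here by subtracting a centered 2 h moving average from a synthetic tide-plus-seiche signal. This only illustrates the timescale separation; it is not MISELA's actual quality-control or filtering procedure:

```python
import math

def highpass_residual(series, window):
    """Subtract a centered moving average to remove long-period signal (tides)
    and keep the high-frequency residual. `window` is in samples (minutes here)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(series[i] - sum(series[lo:hi]) / (hi - lo))
    return out

# Synthetic 1 min record over one day: 1 m semidiurnal tide plus a 10 cm,
# 30 min seiche (amplitudes and periods are assumed for illustration).
sea = [1.0 * math.sin(2 * math.pi * t / 720) + 0.1 * math.sin(2 * math.pi * t / 30)
       for t in range(24 * 60)]
residual = highpass_residual(sea, window=121)   # ~2 h centered window
```

The 2 h window passes the 30 min seiche almost untouched while suppressing the 12 h tide to a few percent of its amplitude, so the residual is dominated by exactly the tsunami-timescale oscillations the dataset targets.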
APA, Harvard, Vancouver, ISO, and other styles