
Journal articles on the topic 'CALCULATIONS AND SIMULATIONS IN ANOTHER FORM'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'CALCULATIONS AND SIMULATIONS IN ANOTHER FORM.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Silvestrov, P. V., and S. T. Surzhikov. "Numerical Simulation of the HIFiRE-1 Ground Test." Herald of the Bauman Moscow State Technical University. Series Mechanical Engineering, no. 3 (132) (June 2020): 29–46. http://dx.doi.org/10.18698/0236-3941-2020-3-29-46.

Full text
Abstract:
The paper considers the problem of simulating the HIFiRE-1 ground test numerically. The aircraft geometry is represented by either a pointed or a blunted cone combined with a flared cylinder. Our numerical simulation investigated the aerodynamics of two aircraft configurations: one featuring a pointed nose, the other featuring a blunted nose with a radius of 2.5 mm. We used the UST3D software, developed at the Ishlinsky Institute for Problems in Mechanics RAS, to perform our aerodynamic calculations. The software is specifically designed for numerical simulations of the aerodynamics and thermodynamics of high-velocity aircraft. It implements a model of viscous, compressible, thermally conductive gas described by an unsteady spatial system of Navier–Stokes equations solved over unstructured three-dimensional tetrahedral meshes. We compared the numerical simulation results, in the form of the pressure distribution over the tail segment of the aircraft, with the empirical data obtained via ground tests in a wind tunnel. We analysed result convergence as a function of the mesh density used. We used methods of computational aerodynamics to investigate the turbulent flow field over the computational region, from the leading shock wave to the far wake, for various Mach numbers and angles of attack.
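For orientation, the mesh-density convergence check mentioned above is often quantified with Richardson extrapolation and a grid convergence index (GCI). The sketch below illustrates that generic recipe; it is not the authors' procedure, and the three sample values and the refinement ratio are hypothetical.

```python
import math

def grid_convergence(f_coarse, f_medium, f_fine, r=2.0, safety=1.25):
    """Observed order of accuracy, Richardson-extrapolated value and GCI
    from a scalar result computed on three meshes with refinement ratio r."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)   # Richardson extrapolation
    e_fine = abs((f_medium - f_fine) / f_fine)               # relative change on the fine mesh
    gci_fine = safety * e_fine / (r**p - 1.0)                # error band on the fine mesh
    return p, f_extrap, gci_fine

# hypothetical tail-pressure coefficients from coarse, medium and fine tetrahedral meshes
p, f_ex, gci = grid_convergence(0.231, 0.219, 0.215)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_ex:.4f}, GCI ~ {100 * gci:.2f}%")
```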
2

Saeedian, Meysam, Edris Pouresmaeil, Emad Samadaei, Eduardo Manuel Godinho Rodrigues, Radu Godina, and Mousa Marzband. "An Innovative Dual-Boost Nine-Level Inverter with Low-Voltage Rating Switches." Energies 12, no. 2 (January 9, 2019): 207. http://dx.doi.org/10.3390/en12020207.

Full text
Abstract:
This article presents an innovative switched-capacitor-based nine-level inverter employing a single DC input for renewable and sustainable energy applications. The proposed configuration generates a step-up bipolar output voltage without an end-side H-bridge, and the employed capacitors are charged in a self-balancing form. Applying low-voltage-rated switches is another merit of the proposed inverter, which leads to an extensive reduction in total standing voltage. Thereby, switching losses as well as inverter cost are reduced proportionally. Furthermore, a comparative analysis against other state-of-the-art inverters shows that the number of required power electronic devices and the implementation cost are reduced in the proposed structure. The working principle of the proposed circuit, along with its efficiency calculations and thermal modeling, is elaborated in detail. Finally, simulations and experimental tests are conducted to validate the performance of the proposed nine-level topology in power systems.
3

Kurkus-Gruszecka, Michalina, and Piotr Krawczyk. "Comparison of Two Single Stage Low-Pressure Rotary Lobe Expander Geometries in Terms of Operation." Energies 12, no. 23 (November 27, 2019): 4512. http://dx.doi.org/10.3390/en12234512.

Full text
Abstract:
The article presents a computational fluid dynamics (CFD) simulation of a single-stage low-pressure rotary lobe expander and compares the calculated operational parameters with the values obtained from a simulation of a different geometry. Low-pressure rotary lobe expanders are rotary engines that use a compressed gas to produce mechanical energy, which in turn can be converted into another form, e.g., electric energy. Currently, expanders are used in narrow areas, but they have a large potential for energy production from gases with low thermodynamic parameters. The first geometry model was designed on the basis of an industrial device and validated with empirical data. The simulation of the second geometry was based on the validated model in order to estimate the operational parameters of the device. The CFD model included the transient simulation of a compressible fluid in a geometry changing over time and the motion of the rotors around two rotation axes. The numerical model was implemented in the ANSYS CFX software. After obtaining simulation results in the form of parameter monitors for each time step, a number of calculations were performed using code written to analyse the CFD program output files. The article presents the calculation results and a comparison of the geometries in terms of work efficiency. The research indicated that constructing the device on a small scale could cause a significant decrease in this parameter, caused by medium leaks in the expander clearances.
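The efficiency comparison described above ultimately relates the monitored shaft power to an ideal reference expansion. A minimal sketch of one common definition (isentropic efficiency for a perfect gas) is given below; the definition, the monitored quantities and all numbers are assumptions rather than the paper's actual post-processing.

```python
import math

def expander_performance(m_dot, torque, rpm, T_in, p_in, p_out, cp=1005.0, gamma=1.4):
    """Shaft power (W) and isentropic efficiency of a gas expander."""
    omega = 2.0 * math.pi * rpm / 60.0      # rotor speed, rad/s
    w_shaft = torque * omega                # from monitored torque and speed
    # ideal specific work of an isentropic expansion from p_in to p_out (perfect gas)
    w_ideal = cp * T_in * (1.0 - (p_out / p_in) ** ((gamma - 1.0) / gamma))
    return w_shaft, w_shaft / (m_dot * w_ideal)

w, eta = expander_performance(m_dot=0.15, torque=7.5, rpm=3000.0,
                              T_in=300.0, p_in=1.5e5, p_out=1.0e5)
print(f"shaft power ~ {w:.0f} W, isentropic efficiency ~ {eta:.2f}")
```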
4

MELIGA, PHILIPPE, JEAN-MARC CHOMAZ, and DENIS SIPP. "Global mode interaction and pattern selection in the wake of a disk: a weakly nonlinear expansion." Journal of Fluid Mechanics 633 (August 25, 2009): 159–89. http://dx.doi.org/10.1017/s0022112009007290.

Full text
Abstract:
Direct numerical simulations (DNS) of the wake of a circular disk placed normal to a uniform flow show that, as the Reynolds number is increased, the flow undergoes a sequence of successive bifurcations, each state being characterized by specific time and space symmetry breaking or recovering (Fabre, Auguste & Magnaudet, Phys. Fluids, vol. 20 (5), 2008, p. 1). To explain this bifurcation scenario, we investigate the stability of the axisymmetric steady wake in the framework of the global stability theory. Both the direct and adjoint eigenvalue problems are solved. The threshold Reynolds numbers and characteristics of the destabilizing modes agree with the study of Natarajan & Acrivos (J. Fluid Mech., vol. 254, 1993, p. 323): the first destabilization occurs for a stationary mode of azimuthal wavenumber m = 1 at Re_c^A = 116.9, and the second destabilization of the axisymmetric flow occurs for two oscillating modes of azimuthal wavenumbers m = ±1 at Re_c^B = 125.3. Since these critical Reynolds numbers are close to one another, we use a multiple time scale expansion to compute analytically the leading-order equations that describe the nonlinear interaction of these three leading eigenmodes. This set of equations is given by imposing, at third order in the expansion, a Fredholm alternative to avoid any secular term. It turns out to be identical to the normal form predicted by symmetry arguments. However, all coefficients of the normal form are here analytically computed as the scalar product of an adjoint global mode with a resonant third-order forcing term, arising from the second-order base flow modification and harmonics generation. We show that all nonlinear interactions between modes take place in the recirculation bubble, as the contribution to the scalar product of regions located outside the recirculation bubble is zero. The normal form accurately predicts the sequence of bifurcations, the associated thresholds and symmetry properties observed in the DNS calculations.
5

HOROWITZ, C. J. "MULTI-MESSENGER OBSERVATIONS OF NEUTRON-RICH MATTER." International Journal of Modern Physics E 20, no. 10 (October 2011): 2077–100. http://dx.doi.org/10.1142/s0218301311020332.

Full text
Abstract:
At very high densities, electrons react with protons to form neutron-rich matter. This material is central to many fundamental questions in nuclear physics and astrophysics. Moreover, neutron-rich matter is being studied with an extraordinary variety of new tools such as the Facility for Rare Isotope Beams (FRIB) and the Laser Interferometer Gravitational Wave Observatory (LIGO). We describe the Lead Radius Experiment (PREX) that uses parity violating electron scattering to measure the neutron radius in 208Pb. This has important implications for neutron stars and their crusts. We discuss X-ray observations of neutron star radii. These also have important implications for neutron-rich matter. Gravitational waves (GW) open a new window on neutron-rich matter. They come from sources such as neutron star mergers, rotating neutron star mountains, and collective r-mode oscillations. Using large scale molecular dynamics simulations, we find neutron star crust to be very strong. It can support mountains on rotating neutron stars large enough to generate detectable gravitational waves. Finally, neutrinos from core collapse supernovae (SN) provide another, qualitatively different probe of neutron-rich matter. Neutrinos escape from the surface of last scattering known as the neutrino-sphere. This is a low density warm gas of neutron-rich matter. Neutrino-sphere conditions can be simulated in the laboratory with heavy ion collisions. Observations of neutrinos can probe nucleosynthesis in SN. Simulations of SN depend on the equation of state (EOS) of neutron-rich matter. We discuss a new EOS based on virial and relativistic mean field calculations. We believe that combining astronomical observations using photons, GW, and neutrinos, with laboratory experiments on nuclei, heavy ion collisions, and radioactive beams will fundamentally advance our knowledge of compact objects in the heavens, the dense phases of QCD, the origin of the elements, and of neutron-rich matter.
6

Belczynski, K., A. Askar, M. Arca-Sedda, M. Chruslinska, M. Donnari, M. Giersz, M. Benacquista, et al. "The origin of the first neutron star – neutron star merger." Astronomy & Astrophysics 615 (July 2018): A91. http://dx.doi.org/10.1051/0004-6361/201732428.

Full text
Abstract:
The first neutron star-neutron star (NS-NS) merger was discovered on August 17, 2017 through gravitational waves (GW170817) and followed with electromagnetic observations. This merger was detected in an old elliptical galaxy with no recent star formation. We perform a suite of numerical calculations to understand the formation mechanism of this merger. We probe three leading formation mechanisms of double compact objects: classical isolated binary star evolution, dynamical evolution in globular clusters, and nuclear cluster formation to test whether they are likely to produce NS-NS mergers in old host galaxies. Our simulations with optimistic assumptions show current NS-NS merger rates at the level of 10^-2 yr^-1 from binary stars, 5 × 10^-5 yr^-1 from globular clusters, and 10^-5 yr^-1 from nuclear clusters for all local elliptical galaxies (within 100 Mpc^3). These models are thus in tension with the detection of GW170817 with an observed rate of 1.5 (+3.2/-1.2) yr^-1 (per 100 Mpc^3; LIGO/Virgo 90% credible limits). Our results imply that either the detection of GW170817 by LIGO/Virgo at their current sensitivity in an elliptical galaxy is a statistical coincidence; that physics in at least one of our three models is incomplete in the context of the evolution of stars that can form NS-NS mergers; or that another very efficient (unknown) formation channel with a long delay time between star formation and merger is at play.
7

Lanets, Oleksii, Oleksandr Kachur, and Vitaliy Korendiy. "Classical approach to determining the natural frequency of continual subsystem of three-mass inter-resonant vibratory machine." Ukrainian journal of mechanical engineering and materials science 5, no. 3-4 (2019): 77–87. http://dx.doi.org/10.23939/ujmems2019.03-04.077.

Full text
Abstract:
Problem statement. The three-mass vibratory system can be defined by five basic parameters: inertial parameters of the masses and stiffness parameters of two spring sets. Unlike the classical discrete system, the discrete-and-continual one consists of two rigid bodies connected by one spring set, which form the discrete subsystem, and of the reactive mass, considered as a deformable (elastic) body characterized by certain stiffness and inertial parameters that are related to one another. Purpose. The main objective of the paper is to determine the first natural frequency of the continual subsystem of the three-mass discrete-and-continual vibratory machine. Methodology. The investigations use the classical theory of oscillations of straight elastic rods. Findings (results). An engineering technique for determining the first natural frequency of the continual subsystem of the three-mass vibratory machine is developed and verified by means of analytical calculations and numerical simulation. Originality (novelty). The optimal diagram of supporting the continual subsystem (elastic rod) is substantiated. The possibilities of exciting the vibrations of the three-mass discrete-and-continual mechanical system using an eccentric drive are considered. Practical value. The obtained research results and the developed calculation techniques can be used by engineers and designers dealing with various technological and manufacturing equipment that uses a vibratory drive. Scope of further investigations. In further investigations, it is necessary to develop a model of the combined discrete-and-continual system of the three-mass vibratory machine, and to carry out numerical simulation of the system’s motion under different operational conditions.
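As a hedged illustration of the classical rod theory invoked above, the sketch below evaluates the natural frequencies of longitudinal vibration of a uniform free-free elastic rod, f_n = n·c/(2L) with c = sqrt(E/ρ). The boundary conditions and the steel-like properties are assumptions; the paper's actual support diagram and engineering technique differ.

```python
import math

def rod_longitudinal_frequencies(E, rho, L, n_modes=3):
    """Natural frequencies (Hz) of longitudinal vibration of a uniform
    free-free elastic rod: f_n = n * c / (2 * L), with c = sqrt(E / rho)."""
    c = math.sqrt(E / rho)            # longitudinal wave speed
    return [n * c / (2.0 * L) for n in range(1, n_modes + 1)]

# hypothetical steel rod, 0.5 m long
freqs = rod_longitudinal_frequencies(E=2.1e11, rho=7850.0, L=0.5)
print([f"{f:.0f} Hz" for f in freqs])
```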
8

Fan, Dan, and Kueiming Lo. "Recursive Identification for Dynamic Linear Systems from Noisy Input-Output Measurements." Journal of Applied Mathematics 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/318786.

Full text
Abstract:
Errors-in-variables (EIV) models are models with noisy measurements of both the input and the output, and they can be used for system modeling in many engineering applications. However, identification of EIV models is considerably more complicated because of the input noise. This paper focuses on the adaptive identification problem of real-time EIV models. Some derivation errors in an accuracy analysis of the popular Frisch scheme used for EIV identification were pointed out in a recent study. To solve the same modeling problem, a new algorithm is proposed in this paper. A moving average (MA) process is used as a substitute for the joint impact of the mutually independent input and output noises, and the system parameters and the noise properties are then estimated in the time domain and the frequency domain separately. A recursive form of the first-step calculation is constructed to improve the calculation efficiency and the online computation ability. Another advantage of the proposed algorithm is its applicability to different input-process situations. Numerical simulations are given to demonstrate the efficiency and robustness of the new algorithm.
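For context only, the sketch below shows a generic exponentially weighted recursive least-squares update, the kind of recursive form referred to above. It is not the authors' algorithm, which additionally represents the combined input and output noise by an MA process; the example system and noise level are hypothetical.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step: theta is the parameter estimate,
    P the covariance, phi the regressor, y the new output, lam the forgetting factor."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam                    # covariance update
    return theta, P

# identify y[t] = a*y[t-1] + b*u[t-1] + e[t] from streaming data
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
theta, P = np.zeros(2), 1e3 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(500):
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u_prev + 0.05 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print("estimated [a, b] ~", np.round(theta, 3))
```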
9

Pop, L., D. Hanslian, and J. Hošek. "Mapping of extreme wind speed for landscape modelling of the Bohemian Forest, Czech Republic." Natural Hazards and Earth System Sciences Discussions 2, no. 1 (January 17, 2014): 361–84. http://dx.doi.org/10.5194/nhessd-2-361-2014.

Full text
Abstract:
Extreme wind events are among the most damaging weather-related hazards in the Czech Republic, and forestry is heavily affected. In order to successfully run a landscape model dealing with such effects, the spatial distribution of extreme wind speed statistics is needed. The presented method suggests using sector-wise wind field calculations together with extreme value statistics fitted at a reference station. A special algorithm is proposed to provide the data in the form expected by the landscape model, i.e. raster data of annual wind speed maxima. The method is demonstrated on the area of the Bohemian Forest, which represents one of the largest and most compact forested mountain ranges in Central Europe. The reference meteorological station Churáňov is located within the selected domain. Numerical calculations were based on the linear model of the WAsP Engineering methodology. Observations were cleaned of inhomogeneities and classified into convective and non-convective cases using the CAPE index. Due to the disjunct sampling of the synoptic data, appropriate corrections were applied to the observed extremes. Finally, they were fitted with a Gumbel distribution. The output of the numerical simulation is presented for the windiest direction sector. Another map shows the probability that the annual extreme exceeds a required threshold. The method offers a tool for the generation of spatially variable annual maxima of wind speed. It assumes a small, limited model domain containing a reliable wind measurement. We believe that this is a typical setup for applications similar to the one presented in the paper.
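The extreme-value step described above (fitting annual maxima with a Gumbel distribution and reading off return levels) can be sketched as follows. The station data, the crude disjunct-sampling correction factor and the return period are hypothetical; only the fitting recipe is standard.

```python
import numpy as np
from scipy.stats import gumbel_r

annual_maxima = np.array([24.1, 27.3, 22.8, 30.5, 25.9, 28.4, 23.7,
                          26.2, 31.0, 24.8, 27.9, 29.3])      # m/s, hypothetical station maxima
corrected = 1.05 * annual_maxima        # placeholder correction for disjunct synoptic sampling

loc, scale = gumbel_r.fit(corrected)    # Gumbel location and scale parameters
T = 50.0                                # return period, years
x_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
print(f"Gumbel loc = {loc:.1f} m/s, scale = {scale:.1f} m/s, {T:.0f}-year wind ~ {x_T:.1f} m/s")
```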
10

Sweeney, J. "Finite-width correction factors for SEN testing of orthotropic materials in opening mode." Journal of Strain Analysis for Engineering Design 21, no. 2 (April 1, 1986): 99–107. http://dx.doi.org/10.1243/03093247v212099.

Full text
Abstract:
Calculations of finite-width correction factors, Y, for mode I SEN tension testing of orthotropic materials have been made using a J integral implemented on results from finite element analyses. Y depends on the ratio of principal normal compliances S11/S22 and another dimensionless quantity involving the material compliances. Aspect ratio is fixed at 1 and normalized crack depth varies between 0.3 and 0.6. S11/S22 is fixed at four values: 20, 10, 1/10, and 1/20. Ranges of the other dimensionless material parameter relevant to existing materials are chosen. The use of linear interpolation for Y at intermediate values of S11/S22, and the relevance of the results to specimens with aspect ratios in excess of 1, are discussed. Y is presented at each value of S11/S22 in the form of a Chebyshev series in two variables. The results show that, if Y factors already established for isotropic materials are used to calculate stress intensity values for orthotropic specimens, errors in the stress intensity factor can be as high as 50 per cent.
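To illustrate how such a two-variable Chebyshev representation of Y would typically be evaluated and used in K_I = Y·σ·√(πa), a short sketch follows. The coefficient matrix, the variable scalings and the load case are purely illustrative and are not the paper's tabulated data.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

coeffs = np.array([[1.20, 0.15, 0.02],      # hypothetical 2-D Chebyshev coefficients for Y
                   [0.30, 0.05, 0.01],
                   [0.04, 0.01, 0.00]])

def Y_factor(a_over_w, material_param):
    # map the physical ranges (e.g. a/W in [0.3, 0.6]) onto [-1, 1] before evaluating
    x = (a_over_w - 0.45) / 0.15
    y = material_param                      # assumed already scaled to [-1, 1]
    return C.chebval2d(x, y, coeffs)

sigma, a = 40e6, 5e-3                       # applied stress (Pa) and crack depth (m)
Y = Y_factor(0.4, 0.2)
K_I = Y * sigma * np.sqrt(np.pi * a)        # mode I stress intensity factor
print(f"Y ~ {Y:.3f}, K_I ~ {K_I / 1e6:.2f} MPa*sqrt(m)")
```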
11

Maest, Ann, Robert Prucha, and Cameron Wobus. "Hydrologic and Water Quality Modeling of the Pebble Mine Project Pit Lake and Downstream Environment after Mine Closure." Minerals 10, no. 8 (August 18, 2020): 727. http://dx.doi.org/10.3390/min10080727.

Full text
Abstract:
The Pebble Project in Alaska is one of the world’s largest undeveloped copper deposits. The Environmental Impact Statement (EIS) proposes a 20-year open-pit extraction, sulfide flotation, and deposition of separated pyritic tailings and potentially acid-generating waste rock in the pit at closure. The pit will require perpetual pump and treat management. We conducted geochemical and integrated groundwater–surface water modeling and streamflow mixing calculations to examine alternative conceptual models and future mine abandonment leading to failure of the water management scheme 100 years after mine closure. Using EIS source water chemistry and volumes and assuming a well-mixed pit lake, PHREEQC modeling predicts an acidic (pH 3.5) pit lake with elevated copper concentrations (130 mg/L) under post-closure conditions. The results are similar to water quality in the Berkeley Pit in Montana, USA, another porphyry copper deposit pit lake in rocks with low neutralization potential. Integrated groundwater–surface water modeling using MIKE SHE examined the effects of the failure mode for the proposed 20-year and reasonably foreseeable 78-year expansion. Simulations predict that if pumping fails, the 20-year pit lake will irreversibly overtop within 3 to 4 years and mix with the South Fork Koktuli River, which contains salmon spawning and rearing habitat. The 78-year pit lake overtops more rapidly, within 1 year, and discharges into Upper Talarik Creek. Mixing calculations for the 20-year pit show that this spillover would lead to exceedances of Alaska’s copper surface water criteria in the river by a factor of 500–1000 times at 35 miles downstream. The combined modeling efforts show the importance of examining long-term failure modes, especially in areas with high potential impacts to stream ecological services.
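The streamflow mixing calculations mentioned above reduce to a flow-weighted average of the discharge and the receiving stream. The sketch below shows that arithmetic; every number in it is illustrative and none is taken from the study or from Alaska's actual criteria.

```python
def mix_concentration(c_source, q_source, c_stream, q_stream):
    """Concentration after complete mixing of a discharge with streamflow."""
    return (c_source * q_source + c_stream * q_stream) / (q_source + q_stream)

c_mix = mix_concentration(c_source=130.0,   # mg/L Cu in pit-lake overflow (hypothetical)
                          q_source=0.5,     # m^3/s overflow
                          c_stream=0.002,   # mg/L Cu background in the river (hypothetical)
                          q_stream=10.0)    # m^3/s streamflow
criterion = 0.009                           # mg/L, an illustrative copper criterion
print(f"mixed Cu ~ {c_mix:.2f} mg/L, exceedance ~ {c_mix / criterion:.0f}x the criterion")
```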
12

Izgec, Bulent, C. Shah Kabir, Ding Zhu, and A. Rashid Hasan. "Transient Fluid and Heat Flow Modeling in Coupled Wellbore/Reservoir Systems." SPE Reservoir Evaluation & Engineering 10, no. 03 (June 1, 2007): 294–301. http://dx.doi.org/10.2118/102070-pa.

Full text
Abstract:
Summary This paper presents a transient wellbore simulator coupled with a semianalytic temperature model for computing wellbore-fluid-temperature profiles in flowing and shut-in wells. Either an analytic or a numeric reservoir model can be combined with the transient wellbore model for rapid computations of pressure, temperature, and velocity. We verified the simulator with transient data from gas and oil wells, where both surface and downhole data were available. The accuracy of the heat-transfer calculations improved with a variable-earth-temperature model and a newly developed numerical-differentiation scheme. This approach improved the calculated wellbore fluid-temperature profile, which, in turn, increased the accuracy of pressure calculations at both bottomhole and wellhead. The proposed simulator accurately mimics afterflow during surface shut-in by computing the velocity profile at each timestep and its consequent impact on temperature and density profiles in the wellbore. Surrounding formation temperature is updated in every timestep to account for changes in heat-transfer rate between the hotter wellbore fluid and the cooler formation. The optional hybrid numerical-differentiation routine removes the limitations imposed by the constant relaxation-parameter assumption used in previous analytic-temperature models. Both forward and reverse simulations are feasible. Forward simulations entail computing pressure, temperature, and velocity profiles at each wellbore node to allow matching field data gathered at any point in the wellbore. In contrast, reverse simulation allows translating pressures from one point to another in the wellbore, such as wellhead to bottomhole condition. Introduction Modeling of the changing pressure, temperature, and density profiles in the wellbore as a function of time is crucial for the design and analysis of pressure-transient tests, particularly when data are gathered off-bottom or in a deepwater setting, and the identification of potential flow-assurance issues. Other applications of this modeling approach include improving the design of production tubulars and artificial-lift systems, gathering pressure data for continuous reservoir management, and estimating flow rates from multiple producing horizons. A coupled wellbore/reservoir simulator entails simultaneous solution of mass, momentum, and energy balance equations, providing pressure and temperature as a function of depth and time for a predetermined surface flow rate. Almehaideb et al. (1989) studied the effects of multiphase flow and wellbore phase segregation during well testing. They used a fully implicit scheme to couple the wellbore and an isothermal black-oil reservoir model. The wellbore model accounts only for mass and momentum changes with time. Similarly, Winterfeld (1989) showed the simulations of buildup tests for both single and two-phase flows in relation to wellbore storage and phase redistribution. The Fairuzov et al. (2002) model formulation also falls into this category. Miller (1980) developed one of the earliest transient wellbore simulators, which accounts for changes in geothermal-fluid energy while flowing up the wellbore. In this model, mass and momentum equations are combined with the energy equation to yield an expression for pressure. After solving for pressure, density, energy, and velocity are calculated for the new timestep at a well gridblock. Hasan and his coworkers presented wellbore/reservoir simulators for gas (Kabir et al. 1996), oil (Hasan et al. 
1997), and two-phase (Hasan et al. 1998) flows. Their formulation consists of a solution of coupled mass, momentum, and energy equations, all written in finite-difference form, and requires time-consuming separate matrix operations. In all cases, the wellbore model is coupled with an analytic reservoir model. Fan et al. (2000) developed a wellbore simulator for analyzing gas-well buildup tests. Their model uses a finite-difference scheme for heat transfer in the vertical direction. The heat loss from the fluid to the surroundings in the radial direction is represented by an analytical model.
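As a very rough companion to the discussion above, the sketch below integrates a steady, single-phase version of the constant relaxation-parameter idea, dT_f/ds = -(T_f - T_e(z))/A, along the upward flow direction. It is only a schematic; the simulator described in the paper solves coupled transient mass, momentum and energy equations, and all numbers here are hypothetical.

```python
import numpy as np

def wellbore_temperature(T_bh, depth, geo_grad, T_surf_earth, A, n=200):
    """March fluid temperature from bottomhole to surface with a constant
    relaxation parameter A (m): dT_f/ds = -(T_f - T_e(z)) / A."""
    z = np.linspace(depth, 0.0, n)              # measured depth, bottom -> surface
    dz = depth / (n - 1)
    T_f = np.empty(n)
    T_f[0] = T_bh
    for i in range(1, n):
        T_e = T_surf_earth + geo_grad * z[i]    # undisturbed earth temperature at depth z
        T_f[i] = T_f[i - 1] - dz * (T_f[i - 1] - T_e) / A
    return z, T_f

z, T_f = wellbore_temperature(T_bh=120.0, depth=3000.0, geo_grad=0.03,
                              T_surf_earth=15.0, A=1500.0)
print(f"wellhead fluid temperature ~ {T_f[-1]:.1f} degC")
```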
13

Prasetyo, Agung, Rusda Rusda, and Masing Masing. "Analisis Kinerja Relai Arus Lebih pada PLTU Embalut PT. Cahaya Fajar Kaltim Unit 1×60 MW dengan Simulasi." Jurnal Teknik Mesin Sinergi 17, no. 2 (May 4, 2020): 123. http://dx.doi.org/10.31963/sinergi.v17i2.2093.

Full text
Abstract:
The Embalut power plant is one of the power plants that supply electricity in East Kalimantan. The plant, which is operated by PT. Cahaya Fajar Kaltim, has one PLTU unit with a capacity of 2×25 MW and another with a capacity of 1×60 MW. As an electricity company that must maintain the continuity of the electricity supply to customers, a reliable electrical system is necessary. Such a reliable system requires a protection system to detect faults and avoid damage to electrical equipment. A proper protection system should isolate the affected area and prevent blackouts in the other areas. One type of fault that may occur is a short circuit. This study analyzes the performance of the overcurrent relays in the 1×60 MW power plant unit. The analysis was performed with the ETAP 12.6.0 software, which was also used to design the single-line diagrams, to calculate the short-circuit currents and the relay setting currents, and to simulate the coordination of several overcurrent relays in the system. The current and time settings of the overcurrent relays are obtained from manual calculations. The results are then displayed in the form of a characteristic curve. Afterwards, a simulation is performed for a situation in which a three-phase short circuit occurs at the BFWP 1.3, TR AUX.3 and TR 3A.3 buses. The results show that the overcurrent relays work properly and could clear the fault quickly.
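The manual relay-setting calculation mentioned above typically uses a standard inverse-time characteristic such as the IEC 60255 standard inverse, t = TMS·0.14/((I/I_s)^0.02 - 1). The sketch below applies that formula; the pickup currents, time multiplier settings and fault current are hypothetical, not the plant's actual settings.

```python
def standard_inverse_time(i_fault, i_pickup, tms):
    """IEC 60255 standard-inverse operating time: t = TMS * 0.14 / ((I/Is)^0.02 - 1)."""
    m = i_fault / i_pickup
    return tms * 0.14 / (m ** 0.02 - 1.0)

# two relays graded in series for a hypothetical 12 kA three-phase fault
for name, i_pickup, tms in [("feeder relay", 600.0, 0.10),
                            ("incomer relay", 1200.0, 0.25)]:
    t = standard_inverse_time(12_000.0, i_pickup, tms)
    print(f"{name}: operates in {t:.2f} s")
```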
14

Ormel, Chris W., and Beibei Liu. "Catching drifting pebbles." Astronomy & Astrophysics 615 (July 2018): A178. http://dx.doi.org/10.1051/0004-6361/201732562.

Full text
Abstract:
Turbulence plays a key role in the transport of pebble-sized particles. It also affects the ability of pebbles to be accreted by protoplanets because it stirs pebbles out of the disk midplane. In addition, turbulence suppresses pebble accretion once the relative velocities become too high for the settling mechanism to be viable. Following Paper I, we aim to quantify these effects by calculating the pebble accretion efficiency ε using three-body simulations. To model the effect of turbulence on the pebbles, we derive a stochastic equation of motion (SEOM) applicable to stratified disk configurations. In the strong coupling limit (ignoring particle inertia) the limiting form of this equation agrees with previous works. We conduct a parameter study and calculate ε in 3D, varying pebble and gas turbulence properties and accounting for the planet inclination. We find that strong turbulence suppresses pebble accretion through turbulent diffusion, agreeing closely with previous works. Another reduction of ε occurs when the turbulent rms motions are high and the settling mechanism fails. In terms of efficiency, the outer disk regions are more affected by turbulence than the inner regions. At the location of the H2O iceline, planets around low-mass stars achieve much higher efficiencies. Including the results from Paper I, we present a framework to obtain ε under general circumstances.
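To convey the flavour of such stochastic particle models, the sketch below integrates a schematic strong-coupling-limit equation for the vertical motion of pebbles (settling plus turbulent kicks) with the Euler-Maruyama method. It is not the paper's stochastic equation of motion, and the turbulence strength and Stokes number are illustrative.

```python
import numpy as np

# Schematic vertical settling-diffusion of pebbles in a turbulent disk (strong-coupling
# limit): dz = -Omega^2 * tau_s * z * dt + sqrt(2 * D) * dW, integrated with Euler-Maruyama.
rng = np.random.default_rng(1)
omega, H = 1.0, 1.0                 # orbital frequency and gas scale height (code units)
alpha, St = 1e-3, 0.1               # turbulence strength and particle Stokes number (assumed)
tau_s = St / omega                  # particle stopping time
D = alpha * H**2 * omega            # turbulent diffusivity acting on the particles

n_particles, n_steps, dt = 2000, 50_000, 1e-3 / omega
z = np.zeros(n_particles)
for _ in range(n_steps):
    z += -omega**2 * tau_s * z * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

print(f"pebble scale height ~ {z.std():.3f} H "
      f"(analytic settling-diffusion estimate {np.sqrt(D / (omega**2 * tau_s)):.3f} H)")
```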
15

Abrahamsen, Petter, Ragnar Hauge, Knut Heggland, and Petter Mostad. "Estimation of Gross Rock Volume of Filled Geological Structures With Uncertainty Measures." SPE Reservoir Evaluation & Engineering 3, no. 04 (August 1, 2000): 304–9. http://dx.doi.org/10.2118/65419-pa.

Full text
Abstract:
Summary The gross rock volume of a filled structure is uncertain because of uncertainty in the determination of caprock depth and the uncertainty in depth to the hydrocarbon contact determined by the spill point of the caprock. Ignoring this uncertainty might lead to biased volume estimates. This paper reports two procedures to assist with assessing this uncertainty to obtain better estimates. The first is to use conditional simulation techniques to generate realizations of the depth to the caprock. The second procedure is a new fast algorithm that determines the location of the spill point and trapped area of each caprock realization. Taken together, the two procedures determine the thickness and lateral extension of each reservoir realization. Finally, gross rock volume for each realization can be calculated and the volumetric uncertainty can be quantified in terms of expectation, histograms, percentiles, etc., for the whole set of realizations. A synthetic example and an example from the North Sea illustrate the use of these procedures. A method for including knowledge of the spill-point depth for improving depth maps is also presented. Introduction The uncertainty in gross rock volume can be large and even dominate the uncertainty in STOOIP and recoverable reserves.1 To assess this uncertainty Monte Carlo approaches are widely used. These range from simple spreadsheet methods2 to elaborate approaches including stochastic simulation of surface geometry and two-dimensional (2D) or three-dimensional (3D) stochastic simulation of reservoir properties.1 The purpose of this paper is to establish a general method for estimating gross rock volume of filled structures where the uncertainty in the caprock depth is believed to be significant. The key parts are the conditional simulation of caprock depth and the new spill-point detection algorithm. Both are explained in some detail in the following sections. The importance of having a procedure that takes account of the uncertainty in caprock depth is illustrated by two examples. A synthetic example is used to show that the expected volume decreases with higher depth uncertainty for a simplistic anticline. The second example is from real data and shows that the expected volume is larger when considering the uncertainty in caprock depth. This shows that uncertainty in caprock depth influences volumetric estimates, and that every caprock structure needs individual consideration. Construction of Caprock Depth There are two principally different approaches to generating the depth maps for the caprock. Either we use some prediction method for obtaining "the best map," or we can use some Monte Carlo method to obtain a set of realizations of depth maps. The traditional approach is to consider a prediction and use this as the basis for further calculations and decisions. In the presence of uncertainties, calculations based on the prediction can be biased so decisions are made on false assumptions. Considering a set of realizations spanning the uncertainty partly solves this problem, but at the expense of more calculations and a more complicated situation for decision making. In the authors opinion, kriging methods are the best methods for making the depth map prediction. Kriging methods can include flexible trends from seismic data, and they can be calibrated to available well data. 
Alternative gridding algorithms such as splines and triangulation may give very good results but they lack the possibility of efficient estimation of parameters in the algorithms. Another appealing property of kriging methods is that they are closely linked to conditional simulation (Monte Carlo) methods. There is a conditional simulation method corresponding to every choice of kriging method. The kriging and the corresponding simulation method share the same set of model assumptions, input data, and model parameters. Thus, we can prepare a single set of assumptions, data, and parameters and choose either kriging or conditional simulation for investigating different properties of the phenomenon under study. This ensures that results from different investigations are consistent in the sense that they rely on the same assumptions and data. Moreover, the average of a large set of conditionally simulated depth-map realizations will coincide with the predicted map given by kriging. Thus, taking the average of a large set of conditional simulations is an inefficient way of calculating a prediction. Kriging. The most flexible (standard) kriging method is universal kriging. The basic assumption is that we can write, e.g., the depth to the caprock as depth(x, y) = trend(x, y) + residual(x, y). The trend must be linear in unknown coefficients (a, b, c, ...) and must have the form trend(x, y) = a·f1(x, y) + b·f2(x, y) + c·f3(x, y) + ..., where the fi(x, y) are known functions. The residual is assumed to have zero mean and a variogram which must be specified, or if possible estimated. Universal kriging can be formulated as a two-step procedure. First, the trend is fitted to data by general least-squares estimation of the coefficients. Second, the residual is predicted using simple kriging so that the sum of the fitted trend and predicted residual interpolates the observations. The result is a depth surface with large-scale features given by the trend and with small modifications near wells given by the predicted residual. An error map showing the uncertainty in the depth prediction can also be calculated.
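The two-step universal-kriging procedure just described (a generalized-least-squares trend fit followed by simple kriging of the residual) can be condensed into the short sketch below. The planar trend basis, the exponential covariance model and all well data are assumptions made for illustration only.

```python
import numpy as np

def cov(d, sill=100.0, range_m=2000.0):
    """Exponential covariance of the depth residual (m^2), distance d in metres."""
    return sill * np.exp(-d / range_m)

# well observations of caprock depth (x, y in m; depth in m), all hypothetical
xy_obs = np.array([[500.0, 700.0], [1500.0, 400.0], [2500.0, 1800.0], [900.0, 2200.0]])
depth_obs = np.array([2010.0, 2035.0, 2060.0, 2025.0])

def basis(xy):
    """Known trend functions f_i(x, y): here a planar trend 1, x, y."""
    return np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])

F = basis(xy_obs)
d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
Sigma = cov(d) + 1e-6 * np.eye(len(xy_obs))          # residual covariance between wells
Si = np.linalg.inv(Sigma)

# step 1: generalized-least-squares estimate of the trend coefficients a, b, c
beta = np.linalg.solve(F.T @ Si @ F, F.T @ Si @ depth_obs)
resid = depth_obs - F @ beta

# step 2: simple kriging of the residual at a prediction point
xy_new = np.array([[1800.0, 1200.0]])
c0 = cov(np.linalg.norm(xy_obs - xy_new, axis=1))
depth_pred = basis(xy_new) @ beta + (Si @ c0) @ resid
krig_var = cov(0.0) - c0 @ Si @ c0                   # residual prediction variance
print(f"predicted caprock depth ~ {depth_pred[0]:.1f} m "
      f"(+/- {np.sqrt(max(krig_var, 0.0)):.1f} m residual std)")
```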
16

HUNT, J. C. R., N. D. SANDHAM, J. C. VASSILICOS, B. E. LAUNDER, P. A. MONKEWITZ, and G. F. HEWITT. "Developments in turbulence research: a review based on the 1999 Programme of the Isaac Newton Institute, Cambridge." Journal of Fluid Mechanics 436 (June 10, 2001): 353–91. http://dx.doi.org/10.1017/s002211200100430x.

Full text
Abstract:
Recent research is making progress in framing more precisely the basic dynamical and statistical questions about turbulence and in answering them. It is helping both to define the likely limits to current methods for modelling industrial and environmental turbulent flows, and to suggest new approaches to overcome these limitations. Our selective review is based on the themes and new results that emerged from more than 300 presentations during the Programme held in 1999 at the Isaac Newton Institute, Cambridge, UK, and on research reported elsewhere. A general conclusion is that, although turbulence is not a universal state of nature, there are certain statistical measures and kinematic features of the small-scale flow field that occur in most turbulent flows, while the large-scale eddy motions have qualitative similarities within particular types of turbulence defined by the mean flow, initial or boundary conditions, and in some cases, the range of Reynolds numbers involved. The forced transition to turbulence of laminar flows caused by strong external disturbances was shown to be highly dependent on their amplitude, location, and the type of flow. Global and elliptical instabilities explain much of the three-dimensional and sudden nature of the transition phenomena. A review of experimental results shows how the structure of turbulence, especially in shear flows, continues to change as the Reynolds number of the turbulence increases well above about 10^4 in ways that current numerical simulations cannot reproduce. Studies of the dynamics of small eddy structures and their mutual interactions indicate that there is a set of characteristic mechanisms in which vortices develop (vortex stretching, roll-up of instability sheets, formation of vortex tubes) and another set in which they break up (through instabilities and self-destructive interactions). Numerical simulations and theoretical arguments suggest that these often occur sequentially in randomly occurring cycles. The factors that determine the overall spectrum of turbulence were reviewed. For a narrow distribution of eddy scales, the form of the spectrum can be defined by characteristic forms of individual eddies. However, if the distribution covers a wide range of scales (as in elongated eddies in the ‘wall’ layer of turbulent boundary layers), they collectively determine the spectra (as assumed in classical theory). Mathematical analyses of the Navier–Stokes and Euler equations applied to eddy structures lead to certain limits being defined regarding the tendencies of the vorticity field to become infinitely large locally. Approximate solutions for eigen modes and Fourier components reveal striking features of the temporal, near-wall structure such as bursting, and of the very elongated, spatial spectra of sheared inhomogeneous turbulence; but other kinds of eddy concepts are needed in less structured parts of the turbulence. Renormalized perturbation methods can now calculate consistently, and in good agreement with experiment, the evolution of second- and third-order spectra of homogeneous and isotropic turbulence. The fact that these calculations do not explicitly include high-order moments and extreme events, suggests that they may play a minor role in the basic dynamics. 
New methods of approximate numerical simulations of the larger scales of turbulence or ‘very large eddy simulation’ (VLES) based on using statistical models for the smaller scales (as is common in meteorological modelling) enable some turbulent flows with a non-local and non-equilibrium structure, such as impinging or convective flows, to be calculated more efficiently than by using large eddy simulation (LES), and more accurately than by using ‘engineering’ models for statistics at a single point. Generally it is shown that where the turbulence in a fluid volume is changing rapidly and is very inhomogeneous there are flows where even the most complex ‘engineering’ Reynolds stress transport models are only satisfactory with some special adaptation; this may entail the use of transport equations for the third moments or non-universal modelling methods designed explicitly for particular types of flow. LES methods may also need flow-specific corrections for accurate modelling of different types of very high Reynolds number turbulent flow including those near rigid surfaces. This paper is dedicated to the memory of George Batchelor who was the inspiration of so much research in turbulence and who died on 30th March 2000. These results were presented at the last fluid mechanics seminar in DAMTP Cambridge that he attended in November 1999.
17

Jakimowicz, Aleksander. "Fundamental Sources of Economic Complexity." International Journal of Nonlinear Sciences and Numerical Simulation 17, no. 1 (February 1, 2016): 1–13. http://dx.doi.org/10.1515/ijnsns-2014-0085.

Full text
Abstract:
This article analyses the basic sources and types of economic complexity: chaotic attractors and repellers, complexity catastrophes, coexistence of attractors, sensitive dependence on parameters, final state sensitivity, effects of fractal basin boundaries and chaotic saddles. Four nonlinear classic models have been used for this purpose: virtual duopoly model, model of a centrally planned economy, cobweb model with adaptive expectations and the business cycle model. The issue of economic complexity has not been sufficiently dealt with in the literature. Studies of complexity in economics usually focus on identifying the conditions under which deterministic chaos emerges in models as the main form of complexity, while analyses of other forms of complexity are much less frequent. The article has two objectives: methodological and explicative, which are to shed some new light on the issue. The first objective is to make as comprehensive a catalogue of sources of economic complexity as possible; this is to be achieved by the numerical calculations presented in this article. The issue of accumulation of complexity has been emphasized, which is a type of system dynamics which has its roots in coincidence and overlapping of complexity originating in different sources. The second objective involves an explanation of the role which is played in generating complexity by classic laws of economics. It appears that there is another overarching law, which is independent of the type of system or the level of economic analysis, which states that the long-term effect of conventional economic laws is an inevitable increase in the complexity of markets and economies. Therefore, the sources of complexity discussed in this article are called fundamental ones.
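As a small, concrete companion to the models listed above, the sketch below iterates a linear cobweb model with adaptive expectations. The demand and supply coefficients and the expectation weight are illustrative; the article's own analysis uses nonlinear versions of such models, in which chaotic dynamics can arise.

```python
def cobweb_adaptive(a=20.0, b=1.0, c=2.0, d=1.5, w=0.6, p_e0=3.0, n=25):
    """Cobweb model with adaptive expectations: demand D = a - b*p,
    supply S = c + d*p_e, market clearing D = S, and
    p_e(t+1) = p_e(t) + w * (p(t) - p_e(t))."""
    p_e, path = p_e0, []
    for _ in range(n):
        p = (a - c - d * p_e) / b          # clearing price given the expected price
        path.append(p)
        p_e = p_e + w * (p - p_e)          # adaptive-expectations update
    return path

path = cobweb_adaptive()
print("first prices:", [round(p, 2) for p in path[:5]], "-> long-run", round(path[-1], 2))
```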
18

Tabibian, Shadan, and Philippe Lorong. "Study of an Industrial FEM Tool for Line Boring Process of Cylinder Blocks." Key Engineering Materials 651-653 (July 2015): 1171–82. http://dx.doi.org/10.4028/www.scientific.net/kem.651-653.1171.

Full text
Abstract:
The aim of this study is to create a simple simulation tool to predict the form defect of cylinder block bore liners during the rough boring process. Geometrical defect prediction is critical for Process Engineering in order to optimize all machining sequences in the production line and to guarantee the final product according to the norms defined by Design Engineering. Conversely, Process Engineering can suggest a new product design according to geometrical defect predictions. Simulation can significantly reduce the time of Process-Product parameter adjustment (pre-project). In this study a simple static FEM model, based on the cylinder block geometry, is proposed to predict the form defect of the bore liners during the process. The cutting tool is assumed to be a rigid part in this model. The clamping conditions and meshing information are applied to the part in the initial state. Calculation of the cutting force components is performed through the Kienzle cutting law, and the forces are applied to the bore liners by means of a Python script. The Python script runs the calculation automatically in the ABAQUS software. Another Python script is in charge of post-processing the simulation results. The interface of this tool is an Excel sheet which allows us to enter the process parameters and automatically run the FEM calculation. The output Excel file contains the form defect of each bore at three levels of the bore (top, middle and bottom) and at different angular positions. The simulation results show that the clamping conditions play an important role in the bore distortion. Consequently, optimizing the clamping pressure and its location is critical, before cutting parameter adjustment, in the line boring process. Experimental validation is performed in parallel with the simulation. The first correlation between the experimental and simulation results shows that the most influential factor disturbing the correlation is the initial form defect of the rough part due to the casting process. Integration of the casting form defect in the simulation is therefore crucial and should be taken into account in the next studies.
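The Kienzle cutting law referred to above has the standard form F_c = k_c1.1 · b · h^(1 - m_c). The sketch below evaluates it with generic textbook-style coefficients; the study's calibrated parameters and the actual chip geometry in line boring are not reproduced here.

```python
def kienzle_cutting_force(b_mm, h_mm, kc11=1500.0, mc=0.25):
    """Main cutting force (N) for chip width b and chip thickness h in mm;
    kc11 is the specific cutting force (N/mm^2) at b = h = 1 mm."""
    return kc11 * b_mm * h_mm ** (1.0 - mc)

Fc = kienzle_cutting_force(b_mm=2.0, h_mm=0.2)     # hypothetical cut
print(f"cutting force ~ {Fc:.0f} N")
```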
19

Möbus, G., T. Gemming, W. Nüchter, M. Exner, P. Gumbsch, A. Weickenmeier, M. Wilson, and M. Rühle. "Are Common Atom Form Factors in HREM-Simulations Accurate Enough for Quantitative Image Matching?" Microscopy and Microanalysis 3, S2 (August 1997): 1159–60. http://dx.doi.org/10.1017/s143192760001268x.

Full text
Abstract:
1. Introduction: If digital image matching between an experimental HREM image and the simulated image fails, one of the suggested reasons is the “inaccuracy of the atomic form factors” describing the electron scattering in common simulation packages using the free and neutral atom approximation from Hartree-Fock calculations [1]. In detail, three contributions within the usual form factor calculations are mainly missing: (i) the redistribution of charge in ionic crystals, (ii) the accumulation of charge away from atom sites in covalent crystals, and (iii) the inclusion of thermal diffuse scattering (TDS) as well as the correlated vibration of atoms beyond the Einstein approximation within the Debye-Waller factor (DWF) theory. Each of the three effects has been checked separately. 2. Simulations of TDS by Coupling of Molecular Dynamics Time Series to HREM Multislice Calculations: 70 snapshots of a series of structures of NiAl (4.9 × 5.1 × 10 nm) are stored from an equilibrated molecular dynamics simulation with vibration amplitudes corresponding to room temperature.
20

Bahar, A., and M. Kelkar. "Journey From Well Logs/Cores to Integrated Geological and Petrophysical Properties Simulation: A Methodology and Application." SPE Reservoir Evaluation & Engineering 3, no. 05 (October 1, 2000): 444–56. http://dx.doi.org/10.2118/66284-pa.

Full text
Abstract:
Summary Reservoir studies performed in the industry are moving towards an integrated approach. Most data available for this purpose are mainly from well cores and/or well logs. The translation of these data into petrophysical properties, i.e., porosity and permeability, at interwell locations that are consistent with the underlying geological description is a critical process. This paper presents a methodology that can be used to achieve this goal. The method has been applied at several field applications where full reservoir characterization study is conducted. The framework developed starts with a geological interpretation, i.e., facies and petrophysical properties, at well locations. A new technique for evaluating horizontal spatial relationships is provided. The technique uses the average properties of the vertical data to infer the low-frequency characteristics of the horizontal data. Additionally, a correction in calculating the indicator variogram, that is used to capture the facies' spatial relationship, is provided. A new co-simulation technique to generate petrophysical properties consistent with the underlying geological description is also developed. The technique uses conditional simulation tools of geostatistical methodology and has been applied successfully using field data (sandstone and carbonate fields). The simulated geological descriptions match well the geologists' interpretation. All of these techniques are combined into a single user-friendly computer program that works on a personal computer platform. Introduction Reservoir characterization is the process of defining reservoir properties, mainly, porosity and permeability, by integration of many data types. An ultimate goal of reservoir characterization is improved prediction of the future performance of the reservoir. But, before we reach that goal a journey through various processes must come to pass. The more exhaustive the processes, the more accurate the prediction will be. The most important processes in this journey are the incorporation and analysis of available geological information.1–3 The most common data types available for this purpose are in the form of well logs and/or well cores. The translation of these data into petrophysical properties, i.e., porosity and permeability, at interwell locations that are consistent with the underlying geological description is a critical step. The work presented in this paper provides a methodology to achieve this goal. This methodology is based on the geostatistical technique of conditional simulation. The step-by-step procedure starts with the work of the geologist where the isochronal planes across the whole reservoir are determined. This step is followed by the assignment of facies and petrophysical properties at well locations for each isochronal interval. Using these results, spatial analysis of the reservoir attributes, i.e., facies, porosity, and permeability, can be conducted in both vertical and horizontal directions. Due to the nature of how the data are typically distributed, i.e., abundant in the vertical direction but sparse in the horizontal direction, this step is far from a simple task, and practitioners have used various approximations to overcome this problem.4–6 A new technique for evaluating the horizontal spatial relationship is proposed in this work. The technique uses the average properties of the vertical data to infer the low-frequency characteristics of the horizontal data. 
Additionally, a correction in calculating the indicator variogram, that is used to capture the facies spatial relationship, is provided. Once the spatial relationship of the reservoir attributes has been established, the generation of internally consistent facies and petrophysical properties at the gridblock level can be done through a simulation process. Common practice in the industry is to perform conditional simulation of petrophysical properties by adapting a two-stage approach.7–10 In the first stage, the geological description is simulated using a conditional simulation technique such as sequential indicator simulation or Gaussian truncated simulation. In the second stage, petrophysical properties are simulated for each type of geological facies/unit using a conditional simulation technique such as sequential Gaussian simulation or simulated annealing. The simulated petrophysical properties are then filtered using the generated geological simulation to produce the final simulation result. The drawback of this approach is its inefficiency, since it requires several simulations, and hence, intensive computation time. Additionally, the effort to jointly simulate or to co-simulate interdependent attributes such as facies, porosity, and permeability has been discussed by several authors.11–13 The techniques used by these authors have produced useful results. Common disadvantages of these techniques are the requirement of tedious inference and modeling of covariances and cross covariances. Also, a large amount of CPU time is required to solve the numerical problem of a large co-kriging system. Another co-simulation technique that eliminates the requirement of solving the full co-kriging system has been proposed by Almeida.14 The technique is based on a collocated co-kriging and a Markov-type hypothesis. This hypothesis simplifies the inference and modeling of the cross covariances. Since the collocated technique is used, an assumption of a linear relationship among the attributes needs to be applied. The co-simulation technique developed in this work avoids the two-stage approach described above. The technique is based on a combination of simultaneous sequential Gaussian simulations and a conditional distribution technique. Using this technique there is no large co-kriging system to solve and there is no need to assume a relationship among reservoir attributes. The absence of co-kriging from the process also means that the user is free from developing the cross variograms. This improves the practical application of the technique.
21

WITTIG, HARTMUT. "LOW-ENERGY QCD II — STATUS OF LATTICE CALCULATIONS." Modern Physics Letters A 28, no. 25 (August 14, 2013): 1360013. http://dx.doi.org/10.1142/s0217732313600134.

Full text
Abstract:
The current status of lattice calculations is reviewed, with a particular emphasis on the question whether lattice simulations have matured to a stage where there is full interaction with experiment. Particular examples include the hadron spectrum, mesonic form factors and decay constants, the axial charge of the nucleon, and the hadronic vacuum polarization contribution to the muon (g-2).
22

Oerlemans, J., and N. C. Hoogendoorn. "Mass-Balance Gradients and Climatic Change." Journal of Glaciology 35, no. 121 (1989): 399–405. http://dx.doi.org/10.1017/s0022143000009333.

Full text
Abstract:
It is generally assumed that the mass-balance gradient on glaciers is more or less conserved under climatic change. In studies of the dynamic response of glaciers to climatic change, one of the following assumptions is normally made: (i) the mass-balance perturbation is independent of altitude or (ii) the mass-balance profile does not change — it simply shifts up and down. Observational evidence for such an approach is not convincing; on some glaciers the inter-annual changes in mass balance seem to be independent of altitude, on others not at all. Moreover, it is questionable whether inter-annual variation can be “projected” on different climatic states. To see what a physical approach might contribute, we developed an altitude-dependent mass-balance model. It is based on the energy balance of the ice/snow surface, where precipitation is included in a parameterized form and numerical integrations are done through an entire balance year (with a 30 min time step). Atmospheric temperature, snowfall, and atmospheric transmissivity for solar radiation are all dependent on altitude, so a mass-balance profile can be calculated. Slope and exposure of the ice/snow surface are taken into account (and the effects of these parameters studied). In general, the calculations were done for 100 m elevation intervals. Climatological data from the Sonnblick Observatory (Austria; 3106 m a.s.l.) and from Vent (2000 m a.s.l.; Oetztal Alps, Austria) served as input for a number of runs. Simulation of the mass-balance profiles for Hintereisferner (north-easterly exposure) and Kesselwandferner (south-easterly exposure) yields reasonable results. The larger balance gradient on Kesselwandferner is produced by the model, so exposure appears to be an important factor here. Sensitivity of mass-balance profiles to shading effects, different slope, and exposure are systematically studied. Another section deals with the sensitivity to climatic change. Perturbations of air temperature, cloudiness, albedo, and precipitation are imposed to see their effects on the mass-balance profiles. The results clearly show that, in general, mass-balance perturbations depend strongly on altitude. They generally increase down-glacier, and are not always symmetric about the reference state. For typical climatic conditions in the Alps, we found that a 1 K temperature change leads to a change in equilibrium-line altitude of 130 m. Three factors contribute to this large value: turbulent heat flux, longwave radiation from the atmosphere, and the fraction of precipitation falling as snow. Here, the albedo feedback increases the sensitivity in a significant way.
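To show the structure of such an altitude-dependent energy/mass-balance calculation, a deliberately crude toy version follows (daily rather than 30-minute steps, a lumped longwave-plus-turbulent term, and assumed lapse rate, albedos and precipitation). It is not the authors' model and its numbers are placeholders.

```python
import numpy as np

def mass_balance_profile(elevations, t_sea_daily, precip_daily,
                         lapse=-0.0065, snow_thresh=1.5,
                         swin=180.0, albedo_snow=0.7, albedo_ice=0.35,
                         lw_turb_coeff=10.0, lf=334e3, rho_w=1000.0):
    """Annual specific mass balance (m w.e.) at each elevation (m a.s.l.)."""
    balance = np.zeros(len(elevations))
    snow_depth = np.zeros(len(elevations))               # seasonal snow, m w.e.
    for T0, P in zip(t_sea_daily, precip_daily):
        T = T0 + lapse * elevations                      # air temperature at altitude
        snow = np.where(T < snow_thresh, P, 0.0)         # solid precipitation only
        albedo = np.where(snow_depth > 0.0, albedo_snow, albedo_ice)
        q_net = swin * (1.0 - albedo) + lw_turb_coeff * T   # W/m^2, crude energy balance
        melt = np.clip(q_net, 0.0, None) * 86400.0 / (rho_w * lf)   # m w.e. per day
        snow_depth = np.clip(snow_depth + snow - melt, 0.0, None)
        balance += snow - melt
    return balance

elev = np.arange(1800.0, 3601.0, 300.0)
days = np.arange(365)
t_sea = 12.0 + 10.0 * np.sin(2.0 * np.pi * (days - 100) / 365.0)   # hypothetical sea-level T
precip = np.full(365, 0.004)                                        # 4 mm w.e. per day
for z, b in zip(elev, mass_balance_profile(elev, t_sea, precip)):
    print(f"{z:.0f} m: {b:+.2f} m w.e.")
```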
23

LIU, LANBO, HAO XIE, DONALD G. ALBERT, PAUL R. ELLER, and JING-RU C. CHENG. "A SCENARIO STUDY FOR IMPROVING COST-EFFECTIVENESS IN ACOUSTIC TIME-REVERSAL SOURCE RELOCATION IN AN URBAN ENVIRONMENT." Journal of Computational Acoustics 20, no. 02 (June 2012): 1240003. http://dx.doi.org/10.1142/s0218396x12400036.

Full text
Abstract:
Through finite-difference time-domain (FDTD) numerical simulation, we have studied possible observation settings to improve the cost-effectiveness of time-reversal (TR) source relocation in a two-dimensional (2D) urban setting under a number of typical scenarios. All scenario studies were based on the FDTD computation of the acoustic wave field resulting from an impulse source, propagated through an artificial village composed of 15 buildings and a set of sources and receivers, a typical urban setting that has been extensively analyzed in previous studies. The FDTD numerical modeling code can be executed on an off-the-shelf graphics processing unit (GPU), which increases the speed of the time-reversal calculations by a factor of 200. With this approach the computational results lead to some significant conclusions. In general, a single non-line-of-sight (NLOS) receiver is not enough to reliably relocate the source via time reversal. This is particularly true when there is more than one path between the source and this receiver with similar wave-energy travel time. However, when the single sensor is located in an acoustic channel, reverberation inside the waveguide may increase the effective aperture of the single receiver enough to give a good location; equivalently, the waveguide and the single receiver form a "virtual array". It appears that a sensor array with a minimum of three receivers might be the most cost-effective way to carry out TR source relocation in an urban environment. The optimal geometry of such a three-receiver array could be an equilateral triangle. Simple analysis showed that with this setup it is possible to catch sound sources from almost all possible azimuths. Effective source relocation essentially depends on the geometry of the sensing array and its position relative to the scatterers. Generally, adding another single sensor relatively far away from the main array will not improve the results. It is practically useful and achievable to have a sensor array mounted on the outside of a single building, and in these cases successful source relocations were obtained. As stated by fundamental TR theory, increasing the number of scatterers, here the number of buildings, will definitely help to increase the effectiveness of TR source relocation.
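For readers unfamiliar with the method, the sketch below is a minimal 2-D acoustic FDTD kernel (pressure-velocity leapfrog on a staggered grid) of the kind underlying such simulations. The grid, the single rigid block and the source pulse are illustrative and do not reproduce the paper's 15-building village or its GPU implementation.

```python
import numpy as np

nx, ny = 200, 200
dx = 0.5                                  # grid spacing, m
c0, rho = 343.0, 1.2                      # sound speed (m/s) and air density (kg/m^3)
dt = 0.5 * dx / (c0 * np.sqrt(2.0))       # time step well inside the 2-D CFL limit

p = np.zeros((nx, ny))                    # pressure
vx = np.zeros((nx + 1, ny))               # staggered particle velocities
vy = np.zeros((nx, ny + 1))
solid_vx = np.zeros((nx + 1, ny), dtype=bool)   # one rigid "building": zero normal velocity
solid_vx[80:121, 80:100] = True
solid_vy = np.zeros((nx, ny + 1), dtype=bool)
solid_vy[80:120, 80:101] = True

for n in range(600):
    # velocities from the pressure gradient
    vx[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    vy[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    vx[solid_vx] = 0.0
    vy[solid_vy] = 0.0
    # pressure from the velocity divergence
    p -= dt * rho * c0**2 / dx * (vx[1:, :] - vx[:-1, :] + vy[:, 1:] - vy[:, :-1])
    p[30, 30] += np.exp(-((n - 40) / 10.0) ** 2)    # short impulsive source pulse

print("peak |p| on the grid after 600 steps:", float(np.abs(p).max()))
```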
APA, Harvard, Vancouver, ISO, and other styles
24

Oerlemans, J., and N. C. Hoogendoorn. "Mass-Balance Gradients and Climatic Change." Journal of Glaciology 35, no. 121 (1989): 399–405. http://dx.doi.org/10.3189/s0022143000009333.

Full text
Abstract:
It is generally assumed that the mass-balance gradient on glaciers is more or less conserved under climatic change. In studies of the dynamic response of glaciers to climatic change, one of the following assumptions is normally made: (i) the mass-balance perturbation is independent of altitude or (ii) the mass-balance profile does not change — it simply shifts up and down. Observational evidence for such an approach is not convincing; on some glaciers the inter-annual changes in mass balance seem to be independent of altitude, on others not at all. Moreover, it is questionable whether inter-annual variation can be “projected” on different climatic states. To see what a physical approach might contribute, we developed an altitude-dependent mass-balance model. It is based on the energy balance of the ice/snow surface, where precipitation is included in a parameterized form and numerical integrations are done through an entire balance year (with a 30 min time step). Atmospheric temperature, snowfall, and atmospheric transmissivity for solar radiation are all dependent on altitude, so a mass-balance profile can be calculated. Slope and exposure of the ice/snow surface are taken into account (and the effects of these parameters studied). In general, the calculations were done for 100 m elevation intervals. Climatological data from the Sonnblick Observatory (Austria; 3106 m a.s.l.) and from Vent (2000 m a.s.l.; Oetztal Alps, Austria) served as input for a number of runs. Simulation of the mass-balance profiles for Hintereisferner (north-easterly exposure) and Kesselwandferner (south-easterly exposure) yields reasonable results. The larger balance gradient on Kesselwandferner is produced by the model, so exposure appears to be an important factor here. Sensitivity of mass-balance profiles to shading effects, different slope, and exposure is systematically studied. Another section deals with the sensitivity to climatic change. Perturbations of air temperature, cloudiness, albedo, and precipitation are imposed to see their effects on the mass-balance profiles. The results clearly show that, in general, mass-balance perturbations depend strongly on altitude. They generally increase down-glacier, and are not always symmetric about the reference state. For typical climatic conditions in the Alps, we found that a 1 K temperature change leads to a change in equilibrium-line altitude of 130 m. Three factors contribute to this large value: turbulent heat flux, longwave radiation from the atmosphere, and the fraction of precipitation falling as snow. Here, the albedo feedback increases the sensitivity in a significant way.
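For orientation, the snippet below shows the kind of surface-energy-balance bookkeeping such a model rests on. The flux values and the 30 min step are placeholder assumptions for illustration; this is not the authors' parameterization.

```python
# Minimal surface-energy-balance melt step (illustrative, not the paper's model).
RHO_W = 1000.0       # kg/m^3, density of water
L_FUSION = 3.34e5    # J/kg, latent heat of fusion

def melt_water_equivalent(sw_net, lw_net, q_sensible, q_latent, dt):
    """Melt (m w.e.) over dt seconds when the surface is at the melting point."""
    q_melt = sw_net + lw_net + q_sensible + q_latent   # net energy flux, W/m^2
    return max(q_melt, 0.0) * dt / (RHO_W * L_FUSION)

# Example: one 30 min time step with assumed fluxes in W/m^2
print(melt_water_equivalent(sw_net=250.0, lw_net=-60.0,
                            q_sensible=40.0, q_latent=-10.0, dt=1800.0))
```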
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Yaowen, Shutong Yang, and Canglong Wang. "First-principles calculations for point defects in MAX phases Ti2AlN." Modern Physics Letters B 30, no. 09 (April 10, 2016): 1650101. http://dx.doi.org/10.1142/s0217984916501013.

Full text
Abstract:
This paper outlines general physical issues associated with performing computational numerical simulations of primary point defects in the MAX phase Ti2AlN. First-principles solutions are possible owing to the development of computational software and hardware resources. The calculated results are in good agreement with the experimental data. As an important application of our simulations, the results could provide theoretical guidance for future experiments on and applications of Ti2AlN. For example, the N mono-vacancy is the most difficult to form; in contrast, mono-vacancy formation in Ti2AlN is energetically most favorable for the Al atom. The essence of these phenomena is explained by the calculated density of states (DOS).
APA, Harvard, Vancouver, ISO, and other styles
26

Lee, Wook, and Spiridoula Matsika. "Conformational and electronic effects on the formation of anti cyclobutane pyrimidine dimers in G-quadruplex structures." Physical Chemistry Chemical Physics 19, no. 4 (2017): 3325–36. http://dx.doi.org/10.1039/c6cp05604k.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

IWAI, T., C. W. HONG, and P. GREIL. "FAST PARTICLE PAIR DETECTION ALGORITHMS FOR PARTICLE SIMULATIONS." International Journal of Modern Physics C 10, no. 05 (July 1999): 823–37. http://dx.doi.org/10.1142/s0129183199000644.

Full text
Abstract:
New algorithms with O(N) complexity have been developed for fast particle-pair detection in particle simulations such as the discrete element method (DEM) and molecular dynamics (MD). They exhibit robustness against broad particle size distributions when compared with conventional boxing methods. Nearly constant calculation speeds are achieved for particle size distributions ranging from mono-sized to 1:10, whereas the linked-cell method results in calculation times more than 20 times longer. The basic algorithm, level-boxing, uses a variable search range for each particle. The advanced method, multi-level boxing, employs multiple cell layers to reduce the particle size discrepancy. Another method, indexed-level boxing, reduces the size of the cell arrays by introducing a hash procedure to access the cell array, and is effective for sparse particle systems with a large number of particles.
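As a point of reference, a plain uniform-cell (boxing) neighbour search looks roughly like the sketch below; the single cell size equal to the cutoff and the 2D setting are simplifying assumptions. With a broad size distribution the cell size is dictated by the largest particle, which is precisely the inefficiency the level-boxing variants above are designed to avoid.

```python
from collections import defaultdict
from itertools import product
import numpy as np

def find_pairs(positions, cutoff):
    """Return index pairs closer than cutoff using a uniform cell grid (O(N) on average)."""
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple((p // cutoff).astype(int))].append(i)

    pairs = []
    for cell, members in cells.items():
        # Examine this cell and its immediate neighbour cells only.
        for offset in product((-1, 0, 1), repeat=positions.shape[1]):
            other = tuple(c + o for c, o in zip(cell, offset))
            for i in members:
                for j in cells.get(other, ()):
                    if j > i and np.linalg.norm(positions[i] - positions[j]) < cutoff:
                        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
pts = rng.random((500, 2)) * 10.0      # 500 particles in a 10 x 10 box
print(len(find_pairs(pts, cutoff=0.5)))
```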
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Yan, Ning Liu, Chengna Dai, Ruinian Xu, Bin Wu, Gangqiang Yu, and Biaohua Chen. "Mechanistic insight into H2-mediated Ni surface diffusion and deposition to form branched Ni nanocrystals: a theoretical study." Physical Chemistry Chemical Physics 22, no. 41 (2020): 23869–77. http://dx.doi.org/10.1039/d0cp03126g.

Full text
Abstract:
The present work investigates the kinetic role of H2 during Ni surface diffusion and deposition to generate branched Ni nanostructures by employing density functional theory (DFT) calculations and ab initio molecular dynamics (AIMD) simulations.
APA, Harvard, Vancouver, ISO, and other styles
29

Gu, R. J., J. D. Hovanesian, and Y. Y. Hung. "Calculations of Strains and Internal Displacement Fields Using Computerized Tomography." Journal of Applied Mechanics 58, no. 1 (March 1, 1991): 24–27. http://dx.doi.org/10.1115/1.2897159.

Full text
Abstract:
A novel treatment of computerized tomography is developed to calculate strains and displacements at internal nodes from known boundary displacements. The formulation is blended with a finite element numerical scheme. Numerical simulations are performed to verify the accuracy of this new technique using several plane problems whose solutions exist in closed form. Excellent agreement shows the potential of developing the tomographic technique into a hybrid numerical-experimental method for solving mechanics problems.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Shuangxi, and Ping Zhang. "Adsorption and dissociation of H2O monomer on ceria(111): Density functional theory calculations." Modern Physics Letters B 34, no. 24 (June 30, 2020): 2050254. http://dx.doi.org/10.1142/s0217984920502541.

Full text
Abstract:
The adsorption properties of an isolated H2O molecule on stoichiometric and reduced ceria(111) surfaces are theoretically investigated by first-principles calculations and molecular dynamics simulations. We find that the most stable adsorption configurations form two hydrogen bonds between the adsorbate and the substrate. The water molecule is very inert on the stoichiometric surface except at temperatures as high as 600 K. For the reduced surface, we find that the oxygen vacancy enhances the interaction. Moreover, simulations at a low temperature of 100 K confirm that dissociation of water into H and OH species is facilitated there.
APA, Harvard, Vancouver, ISO, and other styles
31

Tryggvason, G., W. J. A. Dahm, and K. Sbeih. "Fine Structure of Vortex Sheet Rollup by Viscous and Inviscid Simulation." Journal of Fluids Engineering 113, no. 1 (March 1, 1991): 31–36. http://dx.doi.org/10.1115/1.2926492.

Full text
Abstract:
Numerical simulations of the large amplitude stage of the Kelvin-Helmholtz instability of a relatively thin vorticity layer are discussed. At high Reynolds number, the effect of viscosity is commonly neglected and the thin layer is modeled as a vortex sheet separating one potential flow region from another. Since such vortex sheets are susceptible to a short wavelength instability, as well as singularity formation, it is necessary to provide an artificial “regularization” for long time calculations. We examine the effect of this regularization by comparing vortex sheet calculations with fully viscous finite difference calculations of the Navier-Stokes equations. In particular, we compare the limiting behavior of the viscous simulations for high Reynolds numbers and small initial layer thickness with the limiting solution for the roll-up of an inviscid vortex sheet. Results show that the inviscid regularization effectively reproduces many of the features associated with the thickness of viscous vorticity layers with increasing Reynolds number, though the simplified dynamics of the inviscid model allows it to accurately simulate only the large scale features of the vorticity field. Our results also show that the limiting solution of zero regularization for the inviscid model and high Reynolds number and zero initial thickness for the viscous simulations appear to be the same.
APA, Harvard, Vancouver, ISO, and other styles
32

Tong, Mingqiong, Qing Wang, Yan Wang, and Guangju Chen. "Structures and energies of the transition between two conformations of the alternate frame folding calbindin-D9k protein: a theoretical study." RSC Advances 5, no. 81 (2015): 65798–810. http://dx.doi.org/10.1039/c5ra11234f.

Full text
Abstract:
We carried out molecular dynamics simulations and energy calculations for the two states of the alternate frame folding (AFF) calbindin-D9k protein and their conformational transition in Ca2+-free form to address their dynamical transition mechanism.
APA, Harvard, Vancouver, ISO, and other styles
33

Dunin-Borkowski, R. E., and J. M. Cowley. "Simulations for imaging with atomic focusers." Acta Crystallographica Section A Foundations of Crystallography 55, no. 2 (March 1, 1999): 119–26. http://dx.doi.org/10.1107/s0108767398006989.

Full text
Abstract:
The basis has been explored for the possible application of the various schemes that have been proposed for making use of the focusing properties of single heavy atoms, or rows of atoms extending through thin crystals in axial directions, for the attainment of ultra-high resolution in electron microscopy. Calculations are reported for the form of 200 keV electron beams channeled along rows of atoms through crystals and propagated in the vacuum beyond the crystals. The conditions for forming beams less than 0.05 nm in diameter have been established. Simulations of images having resolutions of this order are reported for the case that the specimen is placed at the Fourier image position beyond the exit face of a thin crystal and the transmission of the periodic array of ultra-fine beams, translated laterally by tilting the incident beam, may be observable using a conventional transmission-electron-microscopy (TEM) instrument.
APA, Harvard, Vancouver, ISO, and other styles
34

Comte, J. C. "Exact Compact-Like Traveling Kinks and Pulses: Another Way for High Flow Communications." International Journal of Bifurcation and Chaos 13, no. 06 (June 2003): 1565–72. http://dx.doi.org/10.1142/s0218127403007412.

Full text
Abstract:
We show that by suitably choosing the analytical form of a solitary wave solution of discrete ϕ4 models, it is possible to calculate the potential parameters which allow the propagation of compact (kink and pulse) solutions. Our numerical simulations show that narrow kinks and pulses with finite extent can propagate freely. Moreover, our numerical simulations reveal that two successive pulses at a relative distance of two lattice spacings propagate freely, i.e. without interaction, which is of considerable interest in the data transmission field. Finally, an experimental electronic device supporting the propagation of such entities is proposed.
APA, Harvard, Vancouver, ISO, and other styles
35

Miąsik, Przemysław, and Lech Lichołai. "The influence of a thermal bridge in the corner of the walls on the possibility of water vapour condensation." E3S Web of Conferences 49 (2018): 00072. http://dx.doi.org/10.1051/e3sconf/20184900072.

Full text
Abstract:
The article presents an analysis of temperature on an internal wall surface. Simulations of the external wall corner were also carried out, as this is a place where the surface temperature is lower due to the thermal bridge effect. The calculations were performed with the ADINA program, used for numerical simulations of heat transfer through building partitions, and finite element analysis was employed to solve the task. The calculations were performed for five case studies with different corner structures and different methods of insulation. The baseline was a wall with a heat transfer coefficient U = 0.30 W/(m²K). This coefficient was selected for analysis because, in most Polish buildings, the thermal resistance of walls follows the technical norms in force before January 2014. The findings of the numerical simulations were used to determine the maximum relative humidity of the internal air at which water vapour condensation may occur on the internal surface of the corner. The calculations were crucial to making a qualitative assessment of the employed solutions. The findings showed that it is possible to improve the thermal performance of the wall in the corner with an additional layer of thermal insulation, for example in the form of an avant-corps placed within the corner.
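The condensation criterion behind such an assessment can be sketched directly: condensation starts once the corner surface temperature falls to the dew point of the indoor air, so the admissible indoor relative humidity follows from a ratio of saturation vapour pressures. The Magnus coefficients and the example temperatures below are standard textbook values, not data from the article.

```python
import math

def saturation_pressure(temp_c):
    """Saturation vapour pressure in hPa (Magnus formula over water)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def max_indoor_rh(surface_temp_c, air_temp_c):
    """Highest indoor RH (%) before condensation appears on a surface at surface_temp_c."""
    return 100.0 * saturation_pressure(surface_temp_c) / saturation_pressure(air_temp_c)

# Example: corner surface at 12.5 deg C in a room at 20 deg C (illustrative numbers)
print(round(max_indoor_rh(12.5, 20.0), 1))
```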
APA, Harvard, Vancouver, ISO, and other styles
36

Shetgaonkar, Samata E., Shiva Prasad Kollur, Renjith Raveendran Pillai, Karthick Thangavel, Sanja J. Armaković, Stevan Armaković, Chandan Shivamallu, et al. "Investigation of Pharmaceutical Importance of 2H-Pyran-2-One Analogues via Computational Approaches." Symmetry 13, no. 9 (September 3, 2021): 1619. http://dx.doi.org/10.3390/sym13091619.

Full text
Abstract:
Highly functionalized spirocyclic ketals were synthesized through asymmetric oxidative spirocyclization via carbanion-induced ring transformation of 2H-pyran-2-ones with 1,4-cyclohexanedione monoethylene ketal under alkaline conditions. Further acidic hydrolysis of the obtained spirocyclic ketals yields highly substituted 2-tetralone in good yield. Computational analysis based on DFT calculations and MD simulations has been performed in order to predict and understand the global and local reactivity properties of the newly synthesized derivatives. The DFT calculations covered fundamental reactivity descriptors such as the molecular electrostatic potential and average local ionization energies; the nitrogen atom and benzene rings have been recognized as the most important molecular sites from these aspects. Additionally, to predict whether the studied compounds are stable towards the autoxidation mechanism, we have also studied the bond dissociation energies for hydrogen abstraction and identified the derivative which might form potentially genotoxic impurities. Interactions with water, covering both global and local aspects, have been addressed through MD simulations and calculations of interaction energies with water, counting of the formed hydrogen bonds, and radial distribution functions. MD simulations were also used to identify which excipient could be used together with these compounds, and it was established that the polyvinylpyrrolidone polymer could be highly compatible with them from the aspect of the calculated solubility parameters.
APA, Harvard, Vancouver, ISO, and other styles
37

DIAMOND, PHIL, KEONHEE LEE, and YINGHAO HAN. "BISHADOWING AND HYPERBOLICITY." International Journal of Bifurcation and Chaos 12, no. 08 (August 2002): 1779–88. http://dx.doi.org/10.1142/s0218127402005455.

Full text
Abstract:
Shadowing of a dynamical system is often used to justify the validity of computer simulations of the system, and in numerical calculations an inverse form of the shadowing concept is also of some interest. In this paper we characterize the notion of shadowing in terms of stability, and express the notion of hyperbolicity using the concept of inverse shadowing.
APA, Harvard, Vancouver, ISO, and other styles
38

Hogan, S. J., Idith Gruman, and M. Stiassnie. "On the changes in phase speed of one train of water waves in the presence of another." Journal of Fluid Mechanics 192 (July 1988): 97–114. http://dx.doi.org/10.1017/s0022112088001806.

Full text
Abstract:
We present calculations of the change in phase speed of one train of water waves in the presence of another. We use a general method, based on Zakharov's (1968) integral equation. It is shown that the change in phase speed of each wavetrain is directly proportional to the square of the amplitude of the other. This generalizes the work of Longuet-Higgins & Phillips (1962), who considered gravity waves only. In the important case of gravity-capillary waves, we present the correct form of the Zakharov kernel. This is used to find the expressions for the changes in phase speed. These results are then checked using a perturbation method based on that of Longuet-Higgins & Phillips (1962). Agreement to 6 significant digits has been obtained between the calculations based on these two distinct methods. Full numerical results, in the form of polar diagrams over a wide range of wavelengths, away from conditions of triad resonance, are provided.
APA, Harvard, Vancouver, ISO, and other styles
39

Krishnamoorthy, Gautham, Rydell Klosterman, and Dylan Shallbetter. "A Radiative Transfer Modeling Methodology in Gas-Liquid Multiphase Flow Simulations." Journal of Engineering 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/793238.

Full text
Abstract:
A methodology for performing radiative transfer calculations in computational fluid dynamic simulations of gas-liquid multiphase flows is presented. By considering an externally irradiated bubble column photoreactor as our model system, the bubble scattering coefficients were determined through add-on functions by employing as inputs the bubble volume fractions, number densities, and the fractional contribution of each bubble size to the bubble volume from four different multiphase modeling options. The scattering coefficient profiles resulting from the models were significantly different from one another and aligned closely with their predicted gas-phase volume fraction distributions. The impacts of the multiphase modeling option, initial bubble diameter, and gas flow rates on the radiation distribution patterns within the reactor were also examined. An increase in air inlet velocities resulted in an increase in the fraction of larger sized bubbles and their contribution to the scattering coefficient. However, the initial bubble sizes were found to have the strongest impact on the radiation field.
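The abstract does not give the add-on functions themselves, but a generic way of turning binned bubble diameters and gas volume fractions into a scattering coefficient is sketched below; the geometric-limit scattering efficiency Q_s ≈ 2 is an assumption made purely for illustration.

```python
import numpy as np

def bubble_scattering_coefficient(diameters_m, volume_fractions, q_scatter=2.0):
    """Scattering coefficient (1/m) from per-bin bubble diameters and gas volume fractions.

    The number density of each bin follows from its volume fraction,
        n_i = 6 * alpha_i / (pi * d_i**3),
    and its contribution to the coefficient is n_i * Q_s * pi * d_i**2 / 4.
    """
    d = np.asarray(diameters_m, dtype=float)
    alpha = np.asarray(volume_fractions, dtype=float)
    number_density = 6.0 * alpha / (np.pi * d**3)
    return float(np.sum(number_density * q_scatter * np.pi * d**2 / 4.0))

# Example: three size bins (1, 3 and 5 mm) holding 1%, 2% and 1% gas by volume
print(bubble_scattering_coefficient([1e-3, 3e-3, 5e-3], [0.01, 0.02, 0.01]))
```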
APA, Harvard, Vancouver, ISO, and other styles
40

Tomczak, J., Z. Pater, and T. Bulzak. "Thermo-Mechanical Analysis of a Lever Preform Forming from Magnesium Alloy AZ31 / Termomechaniczna Analiza Kształtowania Przedkuwki Dźwigni Ze Stopu Magnezu AZ31." Archives of Metallurgy and Materials 57, no. 4 (December 1, 2012): 1211–18. http://dx.doi.org/10.2478/v10172-012-0135-z.

Full text
Abstract:
This paper presents the results of a numerical analysis of the metal forming process of a lever preform from magnesium alloy AZ31, which will be used as a semi-finished product in the forging process of a lever part. Presently, the lever forging is formed from a semi-finished product in the form of a bar, which is connected with large material losses. Numerical simulations were made for two different metal forming methods: longitudinal forging rolling and cross-wedge rolling. Calculations were conducted based on the finite element method (FEM), applying the commercial software DEFORM-3D. The geometrical models used in the calculations are discussed. The simulations, made under conditions of a three-dimensional state of strain, allowed for determining distributions of strain intensity, temperature, and the cracking criterion, and mainly for determining the possibility of manufacturing a lever preform on the basis of rolling processes. Considering the obtained results of the numerical simulations, the design of tools for rolling the semi-finished products was worked out; these semi-finished products will be used for experimental verification of lever preform forming.
APA, Harvard, Vancouver, ISO, and other styles
41

Mkrtychev, Oleg V., and Anton Y. Savenkov. "Methods of simulating the front of the air shock wave for calculating the industrial structure." Vestnik MGSU, no. 2 (February 2020): 223–34. http://dx.doi.org/10.22227/1997-0935.2020.2.223-234.

Full text
Abstract:
Introduction. The paper considers existing methods of simulating a wide front of an air shock wave for solving problems of shock wave interaction with an installation using gas-dynamic methods. When solving the problem of the air shock wave interaction with an installation in a dynamic setting, it was revealed that, when simulating a wide front of a distant explosion using point explosions, it is possible to obtain an underestimated time of the shock wave action. This results in a downward bias of loads to the installation. Thus, the loads obtained in this case do not correspond to the loads for which it is necessary to carry out the calculation of industrial installations protected from shock waves in accordance with domestic and international regulatory documents. To eliminate this drawback, another approach is proposed. It consists in setting the load on the computational region in the form of a pressure graph with specified parameters of overpressure and exposure time. Materials and methods. The interaction of the shock wave front with the installation is carried out using numerical simulation in a nonlinear dynamic setting using gas-dynamic methods in the LS-DYNA software package. Results. The following analyses were conducted in the scope of the study: an analysis of existing methods of forming the wide shock wave front of the distant explosion and an analysis of the parameters of the shock wave during the formation of the wide shock wave front of the distant explosion by setting the pressure graph with the specified parameters of the overpressure and the exposure time. Conclusions. The result of the analysis of methods for numerical simulation of the interaction of the air shock wave wide front with the installation showed that simulation of the explosion source in the form of volume elements and simulation of the shock wave using the CONWEP function of the LS-DYNA software package have disadvantages. These disadvantages do not allow obtaining the main parameters of the shock wave for the further use. A method for modeling the wide shock wave front is given by setting a pressure graph at the boundary of the computational region with the required overpressure parameters and exposure time.
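One common closed-form choice for such a boundary pressure history is a Friedlander-type pulse parameterized by the peak overpressure and the positive-phase duration. The sketch below uses that form purely as an illustration; the article itself only specifies that a pressure graph with given overpressure and exposure time is applied at the boundary of the computational region.

```python
import numpy as np

def friedlander_overpressure(t, p_max, t_dur, decay_b=1.0):
    """Friedlander-type overpressure history (Pa): p_max*(1 - t/t_dur)*exp(-b*t/t_dur)."""
    t = np.asarray(t, dtype=float)
    p = p_max * (1.0 - t / t_dur) * np.exp(-decay_b * t / t_dur)
    return np.where((t >= 0.0) & (t <= t_dur), p, 0.0)

# Example: 30 kPa peak overpressure with a 0.1 s positive phase (illustrative values)
times = np.linspace(0.0, 0.15, 7)
print(friedlander_overpressure(times, p_max=30e3, t_dur=0.1))
```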
APA, Harvard, Vancouver, ISO, and other styles
42

Dong, Longjun, Xibing Li, and Gongnan Xie. "An Analytical Solution for Acoustic Emission Source Location for Known P Wave Velocity System." Mathematical Problems in Engineering 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/290686.

Full text
Abstract:
This paper presents a three-dimensional analytical solution for acoustic emission source location using time difference of arrival (TDOA) measurements from N receivers, N⩾5. The nonlinear location equations for TDOA are simplified to linear equations, and the direct analytical solution is obtained by solving the linear equations. There are no square-root calculations in the solution equations. The method therefore avoids the problems of existence and multiplicity of solutions induced by the square-root calculations in existing closed-form methods. Simulations are included to study the algorithm's performance and to compare it with the existing technique.
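The linearization idea can be illustrated as follows (a generic least-squares variant written for illustration, not necessarily the exact closed-form solution derived in the paper): expanding d_i^2 - d_1^2 cancels the quadratic terms in the source coordinates, leaving equations that are linear in the source position and in the distance to the reference receiver.

```python
import numpy as np

def locate_tdoa(receivers, tdoa, v):
    """Least-squares source location from TDOAs taken relative to receiver 0.

    receivers: (N, 3) receiver coordinates, N >= 5
    tdoa:      (N-1,) arrival-time differences t_i - t_0, i = 1..N-1
    v:         wave speed
    Unknowns are (x, y, z, d0), where d0 is the source distance to receiver 0.
    """
    r0 = receivers[0]
    A, b = [], []
    for ri, tau in zip(receivers[1:], tdoa):
        # |x - ri|^2 - |x - r0|^2 = (v*tau)^2 + 2*d0*v*tau  rearranges to a linear row:
        A.append(np.concatenate([2.0 * (ri - r0), [2.0 * v * tau]]))
        b.append(np.dot(ri, ri) - np.dot(r0, r0) - (v * tau) ** 2)
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol[:3]

# Synthetic check with 5 receivers and a known source (arbitrary geometry)
rec = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.], [10., 10., 5.]])
src, v = np.array([3.0, 4.0, 2.0]), 5000.0
dists = np.linalg.norm(rec - src, axis=1)
print(locate_tdoa(rec, (dists[1:] - dists[0]) / v, v))
```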
APA, Harvard, Vancouver, ISO, and other styles
43

Peters, Brandon L., Jinxia Deng, and Andrew L. Ferguson. "Free energy calculations of the functional selectivity of 5-HT2B G protein-coupled receptor." PLOS ONE 15, no. 12 (December 9, 2020): e0243313. http://dx.doi.org/10.1371/journal.pone.0243313.

Full text
Abstract:
G Protein-Coupled Receptors (GPCRs) mediate intracellular signaling in response to extracellular ligand binding and are the target of one-third of approved drugs. Ligand binding modulates the GPCR molecular free energy landscape by preferentially stabilizing active or inactive conformations that dictate intracellular protein recruitment and downstream signaling. We perform enhanced sampling molecular dynamics simulations to recover the free energy surfaces of a thermostable mutant of the GPCR serotonin receptor 5-HT2B in the unliganded form and bound to a lysergic acid diethylamide (LSD) agonist and lisuride antagonist. LSD binding imparts a ∼110 kJ/mol driving force for conformational rearrangement into an active state. The lisuride-bound form is structurally similar to the apo form and only ∼24 kJ/mol more stable. This work quantifies ligand-induced conformational specificity and functional selectivity of 5-HT2B and presents a platform for high-throughput virtual screening of ligands and rational engineering of the ligand-bound molecular free energy landscape.
APA, Harvard, Vancouver, ISO, and other styles
44

Janoszek, Tomasz, and Wojciech Masny. "CFD Simulations of Allothermal Steam Gasification Process for Hydrogen Production." Energies 14, no. 6 (March 10, 2021): 1532. http://dx.doi.org/10.3390/en14061532.

Full text
Abstract:
The article presents an experimental laboratory setup used for the empirical determination of the gasification of coal samples in the form of solid rock, cut out in the form of a cylinder. The experimental laboratory setup enabled a series of experiments carried out at 700 °C with steam as the gasification agent. The samples were prepared from a coal seam whose use can be planned in future underground and above-ground gasification experiments. The result of the coal gasification process, using steam as the gasification agent, was a syngas containing hydrogen (H2) at a concentration between 46% and 58%, carbon dioxide (CO2) between 13% and 17%, carbon monoxide (CO) between 7% and 11.5%, and methane (CH4) between 9.6% and 20.1%. The results of the ex-situ experiments were compared with the results of numerical simulations using computational fluid dynamics (CFD) methods. A three-dimensional numerical model of the coal gasification process was developed using Ansys-Fluent software to simulate an ex-situ allothermal coal gasification experiment using low-moisture hard coal under atmospheric conditions. The numerical model includes the mass exchange (flow of the gasification agent), the turbulence description model, heat exchange, the method of simulating the chemical reactions, and the method of mapping the porous medium. Using the construction data of the experimental laboratory setup, a numerical model was developed and its discretization (development of the numerical grid on which the calculations are made) was carried out. Data on the reactor, the supply method, and the parameters maintained during the gasification process were used to define the numerical model in the Ansys-Fluent code. Part of the data was supplemented on the basis of literature sources; where necessary, the literature parameters were converted to the conditions corresponding to the experiment that was carried out. After performing the calculations, the obtained results were compared with the available experimental data. The experimental and simulated results were in good agreement, showing a similar tendency.
APA, Harvard, Vancouver, ISO, and other styles
45

Mazurek, Anna Helena, Łukasz Szeleszczuk, and Tomasz Gubica. "Application of Molecular Dynamics Simulations in the Analysis of Cyclodextrin Complexes." International Journal of Molecular Sciences 22, no. 17 (August 30, 2021): 9422. http://dx.doi.org/10.3390/ijms22179422.

Full text
Abstract:
Cyclodextrins (CDs) are highly valued for their ability to form inclusion complexes via host–guest noncovalent interactions and, thus, to influence other molecular properties. Various molecular modeling methods have found applications in the analysis of those complexes. However, as shown in this review, molecular dynamics (MD) simulations can provide information unobtainable by any other means. It is therefore not surprising that published works on MD simulations in this field have increased rapidly since the early 2010s. This review provides an overview of the successful applications of MD simulations in studies of CD complexes. Information that is crucial for MD simulations, such as the choice of force field, the length of the simulation, or the solvent treatment method, is thoroughly discussed. Therefore, this work can serve as a guide to properly setting up such calculations and analyzing their results.
APA, Harvard, Vancouver, ISO, and other styles
46

PETROVA, N. V., V. D. OSOVSKII, D. YU BALAKIN, YU G. PTUSHINSKII, and I. N. YAKOVKIN. "ABSENCE OF CO DISSOCIATION ON Mo(110): TPD AND DFT STUDY." Surface Review and Letters 17, no. 05n06 (October 2010): 469–75. http://dx.doi.org/10.1142/s0218625x10014314.

Full text
Abstract:
The problem of CO dissociation on Mo(110) has been addressed by means of temperature-programmed desorption (TPD) and density functional theory (DFT) calculations. The TPD spectra show first-order CO desorption, which indicates desorption from a "virgin" state rather than a recombinative form of desorption. The height of the potential barrier for dissociation (2.75 eV), estimated from DFT calculations, substantially exceeds the energy of CO chemisorption (2.1 eV), which makes thermally induced CO dissociation on Mo improbable. Monte Carlo simulations of TPD spectra, performed using the estimated chemisorption energies, are in good agreement with experiment and demonstrate that the two-peak shape of the spectra can be explained without invoking CO dissociation. Thus, the results of the present study finally refute the concept of a dissociative form of CO adsorption on Mo surfaces.
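For context, a first-order desorption peak of the kind referred to above follows from the Polanyi-Wigner rate equation. The sketch below integrates it under a linear temperature ramp; the pre-exponential factor and heating rate are placeholder assumptions, and only the 2.1 eV binding energy is taken from the abstract.

```python
import numpy as np

K_B = 8.617e-5          # eV/K, Boltzmann constant

def first_order_tpd(e_des_ev, nu=1e13, beta=2.0, theta0=1.0,
                    t_start=300.0, t_end=1200.0, dt=0.01):
    """Integrate d(theta)/dt = -nu*theta*exp(-E/kT) with a linear ramp T = t_start + beta*t."""
    n = int((t_end - t_start) / (beta * dt))
    theta = theta0
    temps, rates = np.empty(n), np.empty(n)
    for k in range(n):
        T = t_start + beta * k * dt
        rate = nu * theta * np.exp(-e_des_ev / (K_B * T))
        theta = max(theta - rate * dt, 0.0)
        temps[k], rates[k] = T, rate
    return temps, rates

T, r = first_order_tpd(e_des_ev=2.1)    # 2.1 eV chemisorption energy quoted above
print("simulated desorption peak near", round(T[np.argmax(r)]), "K")
```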
APA, Harvard, Vancouver, ISO, and other styles
47

Pruess, Karsten. "A Practical Method for Modeling Fluid and Heat Flow in Fractured Porous Media." Society of Petroleum Engineers Journal 25, no. 01 (February 1, 1985): 14–26. http://dx.doi.org/10.2118/10509-pa.

Full text
Abstract:
Abstract: A multiple interacting continua (MINC) method is presented, which is applicable for numerical simulation of heat and multiphase fluid flow in multidimensional, fractured porous media. This method is a generalization of the double-porosity concept. The partitioning of the flow domain into computational volume elements is based on the criterion of approximate thermodynamic equilibrium at all times within each element. The thermodynamic conditions in the rock matrix are assumed to be controlled primarily by the distance from the fractures, which leads to the use of nested gridblocks. The MINC concept is implemented through the integral finite difference (IFD) method. No analytical approximations are made for coupling between the fracture and matrix continua. Instead, the transient flow of fluid and heat between matrix and fractures is treated by a numerical method. The geometric parameters needed in simulation are preprocessed from a specification of fracture spacings and apertures and the geometry of the matrix blocks. The numerical implementation of the MINC method is verified by comparison with the analytical solution of Warren and Root. Illustrative applications are given for several geothermal reservoir engineering problems. Introduction: In this paper, we present a numerical method for simulating transient nonisothermal, two-phase flow of water in a fractured porous medium. The method is based on a generalization of a concept originally proposed by Barenblatt et al. and introduced into the petroleum literature by Warren and Root, Odeh, and others in the form of what has been termed the "double-porosity" model. The essence of this approach is that in a fractured porous medium, fractures are characterized by much larger diffusivities (and hence, much smaller response times) than the rock matrix. Therefore, the early system response is dominated by the fractures, while the later response is influenced by the matrix. In seeking to analytically solve such a system, all fractures were grouped into one continuum and all the matrix blocks into another, resulting in two interacting continua coupled through a mass transfer function determined by the size and shape of the blocks, as well as the local difference in potentials between the two continua. Later, Kazemi and Duguid and Lee incorporated the double-porosity concept into a numerical model. For a more detailed description of the concept and its application, see Refs. 6 through 8. Very little work has been done in investigating nonisothermal, two-phase fluid flow in fractured porous media. Moench and coworkers used the discrete fracture approach to study the behavior of fissured, vapor-dominated geothermal reservoirs. The purpose of our work is first to generalize the double-porosity concept into one of many interacting continua. We then incorporate the MINC model into a simulator for nonisothermal transport of a homogeneous two-phase fluid (water and steam) in multidimensional systems. Our approach is considerably broader in scope and more general than any previous models discussed in the literature. The MINC method permits treatment of multiphase fluids with large and variable compressibility and allows for phase transitions with latent heat effects, as well as for coupling between fluid and heat flow. The transient interaction between matrix and fractures is treated in a realistic way.
Although the model can permit alternative formulations for the equation of motion, we shall assume that, macroscopically, each continuum obeys Darcy's law; in particular, we shall use the "cubic law" for the flow of fluids in fractures. While the methodology presented in this paper is generally applicable to multiphase compositional thermal systems, our illustrative calculations were restricted to geothermal reservoir problems. The numerical method chosen to implement the MINC concept is the IFD method. In this method, all thermophysical and thermodynamic properties are represented by averages over explicitly defined finite subdomains, while fluxes of mass or energy across surface segments are evaluated through finite difference approximations. An important aspect of this method is that the geometric quantities required to evaluate the conductance between two communicating volume elements are provided directly as input data rather than having them generated from data on nodal arrangements and nodal coordinates. Thus, a remarkable flexibility is attained by which one can allow a volume element in any one continuum to communicate with another element in its own or any other continuum. Inasmuch as the interaction between volume elements of different continua is handled as a geometric feature, the IFD methodology does not distinguish between the MINC method and the conventional porous-medium type approaches to modeling.
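The geometric preprocessing described above can be pictured with a small calculation: for a cubic matrix block bounded by fractures on all sides, nested shells defined by distance from the fractures have volumes and interface areas that follow directly from the fracture spacing, while the cubic law gives the fracture flow per unit width as q = b^3/(12*mu) * dp/dx. The sketch below is an illustrative reading of that idea with arbitrary shell distances, not the preprocessing code of the paper.

```python
def minc_shells(spacing, distances):
    """Volume fractions and inward interface areas of nested shells in a cubic matrix
    block of side `spacing`; `distances` are increasing cut distances from the fractures."""
    block_volume = spacing ** 3
    bounds = [0.0] + list(distances)
    shells = []
    for d_in, d_out in zip(bounds[:-1], bounds[1:]):
        side_in, side_out = spacing - 2.0 * d_in, spacing - 2.0 * d_out
        volume = side_in ** 3 - side_out ** 3      # matrix material between the two cuts
        inner_area = 6.0 * side_out ** 2           # interface to the next shell inward
        shells.append((volume / block_volume, inner_area))
    shells.append(((spacing - 2.0 * distances[-1]) ** 3 / block_volume, 0.0))  # core
    return shells

# Example: 1 m fracture spacing, shells cut 5 cm and 15 cm from the fractures
for frac, area in minc_shells(1.0, [0.05, 0.15]):
    print(f"volume fraction {frac:.3f}, inner interface area {area:.2f} m^2")
```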
APA, Harvard, Vancouver, ISO, and other styles
48

Pan, D., and R. S. Sharp. "Discrete-Time Control of Fast Non-Linear Motion of Robot Manipulators with Ideal Actuators." Proceedings of the Institution of Mechanical Engineers, Part C: Mechanical Engineering Science 203, no. 4 (July 1989): 283–93. http://dx.doi.org/10.1243/pime_proc_1989_203_115_02.

Full text
Abstract:
Non-linear and coupled dynamic equations of motion automatically generated by computer in algebraic form are piecewise linearized and discretized to form a set of state difference equations by which a mathematical basis is provided for computer control, numerical integration and simulations. Two adaptive control strategies are proposed to drive robot manipulators to follow desired trajectories. The first strategy uses the idea of the combination of feedforward plus feedback control. A complete on-line control scheme is designed in the second strategy. All the calculations, both on-line and off-line, are performed by step-by-step optimization so that a quadratic function is minimized for each step. The control delay associated with time taken for on-line computation is eliminated by a state-and-control prediction strategy. Based on the proposed strategies a general FORTRAN program has been developed for simulations. An example is taken to investigate the tracking ability of robot manipulators in trajectories with different velocity amplitudes and various frequencies by simulations. The tracking ability is also examined by using a random trajectory generator.
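The step-by-step quadratic minimization mentioned above can be illustrated with a generic one-step-ahead law for a linearized, discretized model x_{k+1} = A x_k + B u_k: minimizing a quadratic penalty on the next-step tracking error plus a control penalty has a closed-form solution at every step. This is a simplified sketch under those assumptions, not the paper's adaptive feedforward/feedback or predictive schemes.

```python
import numpy as np

def one_step_quadratic_control(A, B, Q, R, x, x_ref):
    """Minimize (x_next - x_ref)^T Q (x_next - x_ref) + u^T R u for x_next = A x + B u.

    Setting the gradient with respect to u to zero gives
        (B^T Q B + R) u = -B^T Q (A x - x_ref).
    """
    lhs = B.T @ Q @ B + R
    rhs = -B.T @ Q @ (A @ x - x_ref)
    return np.linalg.solve(lhs, rhs)

# Toy double-integrator joint model discretized with step h (illustrative, not a robot model)
h = 0.01
A = np.array([[1.0, h], [0.0, 1.0]])
B = np.array([[0.5 * h * h], [h]])
Q = np.diag([100.0, 1.0])       # weight position error more than velocity error
R = np.array([[1e-4]])          # control effort penalty

x, x_ref = np.array([0.0, 0.0]), np.array([0.2, 0.0])
for _ in range(1000):
    u = one_step_quadratic_control(A, B, Q, R, x, x_ref)
    x = A @ x + B @ u
print("state after 1000 steps (approaches the reference [0.2, 0]):", x)
```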
APA, Harvard, Vancouver, ISO, and other styles
49

Gray, L. J., T. Kaplan, J. D. Richardson, and G. H. Paulino. "Green’s Functions and Boundary Integral Analysis for Exponentially Graded Materials: Heat Conduction." Journal of Applied Mechanics 70, no. 4 (July 1, 2003): 543–49. http://dx.doi.org/10.1115/1.1485753.

Full text
Abstract:
Free space Green’s functions are derived for graded materials in which the thermal conductivity varies exponentially in one coordinate. Closed-form expressions are obtained for the steady-state diffusion equation, in two and three dimensions. The corresponding boundary integral equation formulations for these problems are derived, and the three-dimensional case is solved numerically using a Galerkin approximation. The results of test calculations are in excellent agreement with exact solutions and finite element simulations.
APA, Harvard, Vancouver, ISO, and other styles
50

WINGATE, MATTHEW, CHRISTINE DAVIES, ALAN GRAY, EMEL GULEZ, JUNKO SHIGEMITSU, and G. PETER LEPAGE. "B DECAYS ON THE LATTICE AND RESULTS FOR PHENOMENOLOGY." International Journal of Modern Physics A 20, no. 16 (June 30, 2005): 3651–53. http://dx.doi.org/10.1142/s0217751x05027205.

Full text
Abstract:
Lattice Monte Carlo simulations now include the effects of 2 light sea quarks and 1 strange sea quark through the use of an improved staggered fermion action. Consequently, results important to phenomenology are free of the approximate 10% errors inherent in the quenched approximation. This talk reports on calculations of the B and Bs decay constants and B → πℓν form factors. Accurate determinations of these quantities will lead to tighter constraints on CKM matrix elements.
APA, Harvard, Vancouver, ISO, and other styles