Academic literature on the topic 'Space-time combined correlation integral'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Space-time combined correlation integral.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Space-time combined correlation integral"

1

Fox, M., F. Herman, S. D. Willett, and D. A. May. "A linear inversion method to infer exhumation rates in space and time from thermochronometric data." Earth Surface Dynamics Discussions 1, no. 1 (July 19, 2013): 207–59. http://dx.doi.org/10.5194/esurfd-1-207-2013.

Full text
Abstract:
We present a formal inverse procedure to extract exhumation rates from spatially distributed low temperature thermochronometric data. Our method is based on a Gaussian linear inversion approach in which we define a linear problem relating exhumation rate to thermochronometric age with rates being parameterized as variable in both space and time. The basis of our linear forward model is the fact that the depth to the "closure isotherm" can be described as the integral of exhumation rate, ė, from the cooling age to the present day. For each age, a one-dimensional thermal model is used to calculate a characteristic closure temperature, and is combined with a spectral method to estimate the conductive effects of topography on the underlying isotherms. This approximation to the four-dimensional thermal problem allows us to calculate closure depths for datasets that span large spatial regions. By discretizing the integral expressions into time intervals we express the problem as a single linear system of equations. In addition, we assume that exhumation rates vary smoothly in space, and so can be described through a spatial correlation function. Therefore, exhumation rate history is discretized over a set of time intervals, but is spatially correlated over each time interval. We use an a priori estimate of the model parameters, in order to invert this linear system and obtain the maximum likelihood solution for the exhumation rate history. An estimate of the resolving power of the data is also obtained by computing the a posteriori variance of the parameters, and by analyzing the resolution matrix. Finally, we illustrate our inversion procedure using examples from the literature.
APA, Harvard, Vancouver, ISO, and other styles
2

Malizia, Angela, Mariateresa Fiocchi, Lorenzo Natalucci, Vito Sguera, John B. Stephen, Loredana Bassani, Angela Bazzano, Pietro Ubertini, Elena Pian, and Antony J. Bird. "INTEGRAL View of TeV Sources: A Legacy for the CTA Project." Universe 7, no. 5 (May 7, 2021): 135. http://dx.doi.org/10.3390/universe7050135.

Full text
Abstract:
Investigations that were carried out over the last two decades with novel and more sensitive instrumentation have dramatically improved our knowledge of the more violent physical processes taking place in galactic and extra-galactic Black-Holes, Neutron Stars, Supernova Remnants/Pulsar Wind Nebulae, and other regions of the Universe where relativistic acceleration processes are in place. In particular, simultaneous and/or combined observations with γ-ray satellites and ground based high-energy telescopes, have clarified the scenario of the mechanisms responsible for high energy photon emission by leptonic and hadronic accelerated particles in the presence of magnetic fields. Specifically, the European Space Agency INTEGRAL soft γ-ray observatory has detected more than 1000 sources in the soft γ-ray band, providing accurate positions, light curves and time resolved spectral data for them. Space observations with Fermi-LAT and observations that were carried out from the ground with H.E.S.S., MAGIC, VERITAS, and other telescopes sensitive in the GeV-TeV domain have, at the same time, provided evidence that a substantial fraction of the cosmic sources detected are emitting in the keV to TeV band via Synchrotron-Inverse Compton processes, in particular from stellar galactic BH systems as well as from distant black holes. In this work, employing a spatial cross correlation technique, we compare the INTEGRAL/IBIS and TeV all-sky data in search of secure or likely associations. Although this analysis is based on a subset of the INTEGRAL all-sky observations (1000 orbits), we find that there is a significant correlation: 39 objects (∼20% of the VHE γ-ray catalogue) show emission in both soft γ-ray and TeV wavebands. The full INTEGRAL database, now comprising almost 19 years of public data available, will represent an important legacy that will be useful for the Cherenkov Telescope Array (CTA) and other ground based large projects.
APA, Harvard, Vancouver, ISO, and other styles
3

Fox, M., F. Herman, S. D. Willett, and D. A. May. "A linear inversion method to infer exhumation rates in space and time from thermochronometric data." Earth Surface Dynamics 2, no. 1 (January 28, 2014): 47–65. http://dx.doi.org/10.5194/esurf-2-47-2014.

Full text
Abstract:
We present a formal inverse procedure to extract exhumation rates from spatially distributed low temperature thermochronometric data. Our method is based on a Gaussian linear inversion approach in which we define a linear problem relating exhumation rate to thermochronometric age with rates being parameterized as variable in both space and time. The basis of our linear forward model is the fact that the depth to the "closure isotherm" can be described as the integral of exhumation rate, ė, from the cooling age to the present day. For each age, a one-dimensional thermal model is used to calculate a characteristic closure temperature, and is combined with a spectral method to estimate the conductive effects of topography on the underlying isotherms. This approximation to the four-dimensional thermal problem allows us to calculate closure depths for data sets that span large spatial regions. By discretizing the integral expressions into time intervals we express the problem as a single linear system of equations. In addition, we assume that exhumation rates vary smoothly in space, and so can be described through a spatial correlation function. Therefore, exhumation rate history is discretized over a set of time intervals, but is spatially correlated over each time interval. We use an a priori estimate of the model parameters in order to invert this linear system and obtain the maximum likelihood solution for the exhumation rate history. An estimate of the resolving power of the data is also obtained by computing the a posteriori variance of the parameters and by analyzing the resolution matrix. The method is applicable when data from multiple thermochronometers and elevations/depths are available. However, it is not applicable when there has been burial and reheating. We illustrate our inversion procedure using examples from the literature.
APA, Harvard, Vancouver, ISO, and other styles
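The inversion the two Fox et al. abstracts describe is, at its core, a Gaussian linear inverse problem: each closure depth is the integral of exhumation rate from the cooling age to the present, discretized over time intervals into a linear system solved with an a priori model. A minimal sketch of that machinery follows; all ages, depths, interval widths, and covariances are toy values invented for illustration, and the paper's spatial correlation function is omitted.

```python
import numpy as np

# Toy discretization: closure depth z_i is the integral of exhumation rate
# from the present back to the cooling age t_i, so z = G @ e with
# G[i, j] = overlap of [0, age_i] with time interval j. All numbers are
# illustrative, not from the paper.
dt = 2.0                                 # Myr per time interval (assumed)
ages = np.array([3.0, 5.0, 7.0])         # apparent cooling ages (Myr, toy)
z_obs = np.array([2.1, 3.9, 5.2])        # closure depths (km, toy)
n_int = 4                                # intervals covering 0-8 Myr

G = np.zeros((len(ages), n_int))
for i, a in enumerate(ages):
    for j in range(n_int):
        G[i, j] = max(0.0, min(a, (j + 1) * dt) - j * dt)

# Gaussian prior on rates and data covariance (spatial correlation omitted)
e_prior = np.full(n_int, 0.5)            # a priori rate (km/Myr)
C_m = 0.25 * np.eye(n_int)               # prior covariance
C_d = 0.01 * np.eye(len(ages))           # data covariance

# Maximum a posteriori rates, a posteriori covariance, and resolution matrix
A = G.T @ np.linalg.inv(C_d) @ G + np.linalg.inv(C_m)
e_map = e_prior + np.linalg.solve(A, G.T @ np.linalg.inv(C_d) @ (z_obs - G @ e_prior))
C_post = np.linalg.inv(A)
R = np.eye(n_int) - C_post @ np.linalg.inv(C_m)
```

Diagonal entries of R near 1 flag time intervals the data resolve well, which is the role the resolution-matrix analysis plays in the abstract.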
4

Mishchenko, T. M. "Methodology and Models of Combined Modeling of Electromagnetic Processes in Electric Traction Systems." Science and Transport Progress. Bulletin of Dnipropetrovsk National University of Railway Transport, no. 2(92) (April 15, 2021): 40–49. http://dx.doi.org/10.15802/stp2021/237404.

Full text
Abstract:
Purpose. The main purpose of the work is the development of identification models and a new method of modeling electromagnetic processes in electric traction systems with simultaneous consideration of all of its subsystems, as well as several feeder zones of the electrified section. Methodology. To achieve this purpose, the methods of mathematical modeling, the basics of the theory of random processes and the methodology of their probabilistic-statistical processing, and the methods for solving integral equations and analyzing electric traction circuits in electric traction systems are used. Findings. The requirements that an adequate, stochastic identification model of electric traction devices must meet are established. Fredholm's integral correlation equation of the first kind is solved. The analytical expression of the identification dynamic model of the electric locomotive DE-1 is obtained and its adequacy is checked. The methodology of combined modeling of electromagnetic processes in devices and subsystems of electric traction systems is developed and presented in tabular form. Originality. For the first time, it is proposed to use the pulse transition function as the identification model of a traction substation and of a traction network with electric rolling stock in predictive modeling of electromagnetic and electric power processes in electric traction systems. A new method of combined modeling of electromagnetic and electric power processes in the electric traction system has been developed, with simultaneous consideration of all of its subsystems as well as several inter-substation zones of the electrified section. For the first time, a method of factorizing the correlation functions to solve an integral correlation equation has been proposed, which allows defining a pulse transition function as the identification model of any subsystem of an electric traction system. Practical value. The developed identification models and the method of combined modeling make it possible to predict electromagnetic processes simultaneously in all feeder zones of the electrified section of the electric traction system. The obtained identification model of the electric locomotive DE-1 can be adapted for subsequent use in modeling processes in the traction circuits of electric locomotives of other types. The method of factorization of correlation functions used in solving the Volterra integral correlation equation of the first kind (convolution type) can be adapted to the solution of other integral equations that describe the processes in electric traction systems.
APA, Harvard, Vancouver, ISO, and other styles
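The identification idea in the Mishchenko abstract, recovering a pulse (impulse) transition function from a first-kind integral correlation equation of convolution type, can be sketched in discrete form as a Toeplitz system built from sample correlation functions. All signals, lengths, and the "true" response below are synthetic placeholders, not the paper's data or its factorization method.

```python
import numpy as np

# Synthetic input x, an assumed pulse transition function h_true, and the
# measured response y = x * h_true (discrete convolution).
rng = np.random.default_rng(1)
n, m = 4000, 32
x = rng.standard_normal(n)
h_true = np.exp(-0.2 * np.arange(m))
y = np.convolve(x, h_true)[:n]

def xcorr(a, b, lags):
    """Sample correlation R_ab[k] = mean(a[t] * b[t+k]) for k = 0..lags-1."""
    return np.array([np.mean(a[: len(a) - k] * b[k:]) for k in range(lags)])

R_xx = xcorr(x, x, m)     # input autocorrelation
R_xy = xcorr(x, y, m)     # input-output cross-correlation

# First-kind correlation equation, discretized: sum_j R_xx[|i-j|] h[j] = R_xy[i]
T = np.array([[R_xx[abs(i - j)] for j in range(m)] for i in range(m)])
h_est = np.linalg.solve(T, R_xy)
```

For near-white input the Toeplitz matrix is close to the identity and the cross-correlation itself approximates the pulse transition function; for colored input the solve performs the deconvolution.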
5

Rodionova, Irina A., and Uliana V. Mizerovskaya. "Correlation Between the Level of Socio-economic Development and the Use of the Information and Communication Technologies." Studies of the Industrial Geography Commission of the Polish Geographical Society 22 (June 1, 2013): 93–103. http://dx.doi.org/10.24917/20801653.22.6.

Full text
Abstract:
During recent decades, the rate of structural shifts in the world economy has been especially fast. One of the factors influencing these processes has been the active development of high-tech industries and of information and communication technologies. Over time, the level of informatization of society becomes a defining factor in a country's competitiveness and predefines its ability to integrate into the global economy. The article characterizes the readiness of different countries to move to an innovative path of development based on an analysis of combined rating tables that contain integral indices of society's informatization level. The degree to which an innovative type of economy has been formed can be assessed through the link between the implementation of scientific and technological progress (i.e., through the use of information and communication technologies) and the level of socio-economic development of the world's countries. The current positions held by Russia and Poland according to some integral indices are also analyzed.
APA, Harvard, Vancouver, ISO, and other styles
6

Hu, Heng, Yunchang Cao, Chuang Shi, Yong Lei, Hao Wen, Hong Liang, Manhong Tu, et al. "Analysis of the Precipitable Water Vapor Observation in Yunnan–Guizhou Plateau during the Convective Weather System in Summer." Atmosphere 12, no. 8 (August 23, 2021): 1085. http://dx.doi.org/10.3390/atmos12081085.

Full text
Abstract:
The ERA5 reanalysis dataset of the European Center for Medium-Range Weather Forecasts (ECMWF) in the summers from 2015 to 2020 was used to compare and analyze the features of the precipitable water vapor (PWV) observed by six ground-based Global Navigation Satellite System (GNSS) meteorology (GNSS/MET) stations in the Yunnan–Guizhou Plateau. The correlation coefficients of the two datasets ranged between 0.804 and 0.878, the standard deviations ranged between 4.686 and 7.338 mm, and the monthly average deviations ranged between −4.153 and 9.459 mm, which increased with the altitude of the station. Matching the quality-controlled ground precipitation data with the PWV in time and space revealed that most precipitation occurred when the PWV was between 30 and 65 mm and roughly met the normal distribution. We used the vertical integral of divergence of moisture flux (∇p) and S-band Doppler radar networking products combined with the PWV to study the convergence and divergence process and the water vapor delivery conditions during the deep convective weather process from August 24 to 26, 2020, which can be used to analyze the real-time observation capability and continuity of PWV in small-scale and mesoscale weather processes. Furthermore, the 1 h precipitation and the cloud top temperature (ctt) data at the same site were used to demonstrate the effect of PWV on the transit of convective weather systems from different time–space scales.
APA, Harvard, Vancouver, ISO, and other styles
7

Hacar, A., M. Tafalla, J. Forbrich, J. Alves, S. Meingast, J. Grossschedl, and P. S. Teixeira. "An ALMA study of the Orion Integral Filament." Astronomy & Astrophysics 610 (February 2018): A77. http://dx.doi.org/10.1051/0004-6361/201731894.

Full text
Abstract:
Aim. We have investigated the gas organization within the paradigmatic Integral Shape Filament (ISF) in Orion in order to decipher whether or not all filaments are bundles of fibers. Methods. We combined two new ALMA Cycle 3 mosaics with previous IRAM 30m observations to produce a high-dynamic range N2H+ (1-0) emission map of the ISF tracing its high-density material and velocity structure down to scales of 0.009 pc (or ~2000 AU). Results. From the analysis of the gas kinematics, we identify a total of 55 dense fibers in the central region of the ISF. Independently of their location in the cloud, these fibers are characterized by transonic internal motions, lengths of ~0.15 pc, and masses per unit length close to those expected in hydrostatic equilibrium. The ISF fibers are spatially organized forming a dense bundle with multiple hub-like associations likely shaped by the local gravitational potential. Within this complex network, the ISF fibers show a compact radial emission profile with a median FWHM of 0.035 pc systematically narrower than the previously proposed universal 0.1 pc filament width. Conclusions. Our ALMA observations reveal complex bundles of fibers in the ISF, suggesting strong similarities between the internal substructure of this massive filament and previously studied lower-mass objects. The fibers show identical dynamic properties in both low- and high-mass regions, and their widespread detection in nearby clouds suggests a preferred organizational mechanism of gas in which the physical fiber dimensions (width and length) are self-regulated depending on their intrinsic gas density. Combining these results with previous works in Musca, Taurus, and Perseus, we identify a systematic increase of the surface density of fibers as a function of the total mass per unit length in filamentary clouds. Based on this empirical correlation, we propose a unified star-formation scenario where the observed differences between low- and high-mass clouds, and the origin of clusters, emerge naturally from the initial concentration of fibers.
APA, Harvard, Vancouver, ISO, and other styles
8

Kamraj, Nikita, Murray Brightman, Fiona A. Harrison, Daniel Stern, Javier A. García, Mislav Baloković, Claudio Ricci, et al. "X-Ray Coronal Properties of Swift/BAT-selected Seyfert 1 Active Galactic Nuclei." Astrophysical Journal 927, no. 1 (March 1, 2022): 42. http://dx.doi.org/10.3847/1538-4357/ac45f6.

Full text
Abstract:
The corona is an integral component of active galactic nuclei (AGNs), which produces the bulk of the X-ray emission above 1–2 keV. However, many of its physical properties and the mechanisms powering this emission remain a mystery. In particular, the temperature of the coronal plasma has been difficult to constrain for large samples of AGNs, as constraints require high-quality broadband X-ray spectral coverage extending above 10 keV in order to measure the high-energy cutoff, which constrains the combination of coronal optical depth and temperature. We present constraints on the coronal temperature for a large sample of Seyfert 1 AGNs selected from the Swift/BAT survey using high-quality hard X-ray data from the NuSTAR observatory combined with simultaneous soft X-ray data from Swift/XRT or XMM-Newton. When applying a physically motivated, nonrelativistic disk-reflection model to the X-ray spectra, we find a mean coronal temperature kTe = 84 ± 9 keV. We find no significant correlation between the coronal cutoff energy and accretion parameters such as the Eddington ratio and black hole mass. We also do not find a statistically significant correlation between the X-ray photon index, Γ, and the Eddington ratio. This calls into question the use of such relations to infer properties of supermassive black hole systems.
APA, Harvard, Vancouver, ISO, and other styles
9

Senior, K., J. Kouba, and J. Ray. "Status and Prospects for Combined GPS LOD and VLBI UT1 Measurements." Artificial Satellites 45, no. 2 (January 1, 2010): 57–73. http://dx.doi.org/10.2478/v10018-010-0006-7.

Full text
Abstract:
A Kalman filter was developed to combine VLBI estimates of UT1-TAI with biased length-of-day (LOD) estimates from GPS. The VLBI results are the analyses by the NASA Goddard Space Flight Center group of 24-hr multi-station observing sessions several times per week and of the nearly daily 1-hr single-baseline sessions. Daily GPS LOD estimates from the International GNSS Service (IGS) are combined with the VLBI UT1-TAI by modeling the natural excitation of LOD as the integral of a white noise process (i.e., as a random walk) and the UT1 variations as the integration of LOD, similar to the method described by Morabito et al. (1988). To account for GPS technique errors, which express themselves mostly as temporally correlated biases in the LOD measurements, a Gauss-Markov model has been added to assimilate the IGS data, together with a fortnightly sinusoidal term to capture errors in the IGS treatment of tidal effects. Evaluated against independent atmospheric and oceanic axial angular momentum (AAM + OAM) excitations and compared to other UT1/LOD combinations, ours performs best overall in terms of lowest RMS residual and highest correlation with (AAM + OAM) over sliding intervals down to 3 d. The IERS 05C04 and Bulletin A combinations show strong high-frequency smoothing and other problems. Until modified, the JPL SPACE series suffered in the high frequencies from not including any GPS-based LODs. We find, surprisingly, that further improvements are possible in the Kalman filter combination by selective rejection of some VLBI data. The best combined results are obtained by excluding all the 1-hr single-baseline UT1 data as well as those 24-hr UT1 measurements with formal errors greater than 5 μs (about 18% of the multi-baseline sessions). A rescaling of the VLBI formal errors, rather than rejection, was not an effective strategy. These results suggest that the UT1 errors of the 1-hr and weaker 24-hr VLBI sessions are non-Gaussian and more heterogeneous than expected, possibly due to the diversity of observing geometries used, other neglected systematic effects, or the much shorter observational averaging interval of the single-baseline sessions. UT1 prediction services could benefit from better handling of VLBI inputs together with proper assimilation of IGS LOD products, including use of the Ultra-rapid series, which is updated four times daily with a 15 hr delay.
APA, Harvard, Vancouver, ISO, and other styles
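The combination scheme in the Senior, Kouba, and Ray abstract maps naturally onto a small Kalman filter: LOD excitation modeled as a random walk, UT1 as its integral, and a Gauss-Markov bias absorbing GPS technique errors. A hedged sketch of that state-space structure follows; the noise values, time constants, measurements, and observation schedule are invented, and the paper's fortnightly tidal term is omitted.

```python
import numpy as np

# State x = [UT1, LOD, gps_bias]. Excess LOD decreases UT1; LOD is a random
# walk; the GPS bias is first-order Gauss-Markov. All noise values are assumed.
dt = 1.0                                  # days
tau = 30.0                                # assumed bias correlation time (days)
F = np.array([[1.0, -dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, np.exp(-dt / tau)]])
Q = np.diag([0.0, 1e-3, 1e-4])            # process noise (assumed)
H_vlbi = np.array([[1.0, 0.0, 0.0]])      # VLBI observes UT1 directly
H_gps = np.array([[0.0, 1.0, 1.0]])       # GPS observes LOD plus its bias

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros(3), np.eye(3)
for k in range(30):
    x, P = predict(x, P)
    x, P = update(x, P, np.array([0.1]), H_gps, np.array([[1e-2]]))  # daily GPS LOD
    if k % 3 == 0:                                                   # sparser VLBI UT1
        x, P = update(x, P, np.array([0.0]), H_vlbi, np.array([[1e-3]]))
```

The bias state is what lets daily GPS LOD contribute high-frequency information without its correlated errors leaking into the UT1 estimate, which is the point of the Gauss-Markov term in the abstract.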
10

CELANI, ANTONIO, MARCO MARTINS AFONSO, and ANDREA MAZZINO. "Point-source scalar turbulence." Journal of Fluid Mechanics 583 (July 4, 2007): 189–98. http://dx.doi.org/10.1017/s0022112007006520.

Full text
Abstract:
The statistics of a passive scalar randomly emitted from a point source is investigated analytically for the Kraichnan ensemble. Attention is focused on the two-point equal-time scalar correlation function, a statistical indicator widely used both in experiments and in numerical simulations. The only source of inhomogeneity/anisotropy is the injection mechanism, the advecting velocity being here statistically homogeneous and isotropic. The main question we address is on the possible existence of an inertial range of scales and a consequent scaling behaviour. The question arises from the observation that for a point source the injection scale is formally zero and the standard cascade mechanism cannot thus be taken for granted. We find from first principles that an intrinsic integral scale, whose value depends on the distance from the source, emerges as a result of sweeping effects. For separations smaller than this integral scale a standard forward cascade occurs. This is characterized by a Kolmogorov–Obukhov power-law behaviour as in the homogeneous case, except that the dissipation rate is also dependent on the distance from the source. Finally, we also find that the combined effect of a finite inertial-range extent and of inhomogeneities causes the emergence of subleading anisotropic corrections to the leading isotropic term, that are here quantified and discussed.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Space-time combined correlation integral"

1

Paus, Tomáš. Combining brain imaging with brain stimulation: causality and connectivity. Edited by Charles M. Epstein, Eric M. Wassermann, and Ulf Ziemann. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780198568926.013.0034.

Full text
Abstract:
This article establishes the concept of a methodological approach that combines brain imaging with brain stimulation. Transcranial magnetic stimulation (TMS) is a tool that allows neural activity to be perturbed, in time and space, in a noninvasive manner. This approach allows the study of the brain-behaviour relationship. Under certain circumstances, the influence of one region on another, called effective connectivity, can be measured. Functional connectivity is the extent of correlation in brain activity measured across a number of spatially distinct brain regions. This connectivity analysis can be applied to any dataset acquired with brain-mapping tools, although its interpretation is complex, and the technical complexity of the combined studies needs to be resolved. Future studies may benefit from focusing on neurochemical transmission in specific neural circuits and on the temporal dynamics of cortico-cortical interactions.
APA, Harvard, Vancouver, ISO, and other styles
2

Nikiforov, Konstantin V., Anna K. Aleksandrova, Ella G. Zadorozhnyuk, and Aleksandr S. Stykalin, eds. Transformational Revolutions in the Countries of Central And South-Eastern Europe on their Thirtieth Anniversary. 1989–2019. Institute of Slavic Studies, Russian Academy of Sciences; Nestor-Istoriia, 2021. http://dx.doi.org/10.31168/2712-8342.2021.2.

Full text
Abstract:
This collective monograph validates the relevance of the complex concept of “Transformational Revolutions,” introduced here for the first time into academic circulation, which essentially expands the perspective on revolutionary origins and outcomes in Central and South-Eastern Europe. The authors analyze the prerequisites, course, and results of transformational revolutions in the countries of the region during the thirty-year period of their modern history. The studies describe the features of post-socialist modernization and the domestic and foreign political crises inherent in each country, the pros and cons of their involvement in the processes of European integration, and the benefits of joining NATO. The previously used term, “Velvet” revolution, does not cover the entire set of fundamental transformations in these countries in domestic and foreign policy. The researchers underline the specifics of a democratic political structure combined with a market economy for the countries in the region, with particular emphasis on ideological and political confrontation between the forces of the left and right in the framework of a multiparty system, and characterize the mechanism of changes in power during elections. They portray the correlation of euro-optimism and euro-scepticism in different countries, and their opposition to the dictates of Brussels. The authors emphasize that not only the Soviet perestroika, but also the various versions of revolution in the countries of the region led to the reformatting of the European and even global civilizational space. They reveal that many events of 30 years ago still determine the course of current events in the countries of the region and that transformation processes in these countries may still be incomplete. For the first time, the authors conduct a comparative analysis of the inclusion of the former GDR, as part of a single German state, in the EU and of the divergent processes in the former socialist federations of Czechoslovakia and Yugoslavia. They pay special attention to the relationship between European, regional, and national components in the course of the revolutions, and also to the resulting conflicts. The authors also examine the specifics of the entry of Central European countries, and later the Balkan subregions, into NATO and the EU, and the role played by religious-cultural factors in individual countries. The monograph examines the lessons of Greece's recovery from the financial and economic crisis, as well as Turkey's special Balkan interest in a larger Euro-Asian context. These revolutions are investigated from a comparative historical point of view, with the reasons, processes, and results of the deep changes in the countries of Central and South-Eastern Europe during their 30-year modern history analyzed. In addition, their experience of post-socialist modernization, including the search for and elaboration of optimal models of interaction among themselves as well as with the countries of the East, particularly Russia, and the West, is described, and hindering factors are identified.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Space-time combined correlation integral"

1

De Rubeis, V., V. Loreto, L. Pietronero, and P. Tosi. "Space-time Combined Correlation Between Earthquakes and a New, Self-Consistent Definition of Aftershocks." In Modelling Critical and Catastrophic Phenomena in Geoscience, 259–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-35375-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
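No abstract is available for this chapter, but the space-time combined correlation integral that gives the topic its name is commonly defined as the fraction of event pairs that are close in both space and time. A sketch under that standard definition follows; the chapter's exact formulation may differ, and the catalog below is entirely synthetic.

```python
import numpy as np

# Synthetic "catalog": epicenters on a 100 km square, times over one year
rng = np.random.default_rng(0)
n = 200
xy = rng.uniform(0.0, 100.0, size=(n, 2))   # epicenters (km)
t = np.sort(rng.uniform(0.0, 365.0, n))     # occurrence times (days)

def st_correlation_integral(xy, t, r, tau):
    """Fraction of event pairs with spatial separation < r AND time separation < tau."""
    i, j = np.triu_indices(len(t), k=1)
    d_space = np.linalg.norm(xy[i] - xy[j], axis=1)
    d_time = np.abs(t[i] - t[j])
    return np.mean((d_space < r) & (d_time < tau))

c_local = st_correlation_integral(xy, t, r=10.0, tau=30.0)   # tight windows
c_total = st_correlation_integral(xy, t, r=1e9, tau=1e9)     # all pairs counted
```

Scanning r at fixed tau (and vice versa) and looking at how the scaling of C departs from that of a random catalog is the kind of analysis such chapters use to separate clustered aftershock pairs from the background.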
2

Xia, Sen. "Computation of Optimal Embedding Dimension and Time Lag in Reconstruction of Phase Space Based on Correlation Integral." In Lecture Notes in Electrical Engineering, 351–56. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4844-9_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
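The Xia chapter title refers to the standard phase-space reconstruction workflow: delay-embed a scalar time series, compute the Grassberger-Procaccia correlation integral C(r), and vary the embedding dimension and time lag. A minimal sketch with illustrative parameters (not taken from the chapter) follows.

```python
import numpy as np

def delay_embed(x, m, lag):
    """Reconstruct an m-dimensional phase space from a scalar series."""
    n = len(x) - (m - 1) * lag
    return np.column_stack([x[k * lag : k * lag + n] for k in range(m)])

def correlation_integral(Y, r):
    """Fraction of distinct point pairs closer than r (Grassberger-Procaccia)."""
    i, j = np.triu_indices(len(Y), k=1)
    return np.mean(np.linalg.norm(Y[i] - Y[j], axis=1) < r)

# Toy series: a sampled sine; its limit cycle embeds cleanly in m = 2 when
# the lag is near a quarter period (here ~16 samples for period ~63).
x = np.sin(0.1 * np.arange(2000))
Y = delay_embed(x, m=2, lag=16)

# The log-log slope of C(r) estimates the correlation dimension; its
# saturation as m grows is the usual criterion for the optimal embedding.
rs = np.logspace(-2, 0, 10)
cs = np.array([correlation_integral(Y, r) for r in rs])
slope = np.polyfit(np.log(rs), np.log(np.maximum(cs, 1e-12)), 1)[0]
```

In practice the lag is chosen from the first minimum of the autocorrelation or mutual information, and m is increased until the estimated slope stops changing.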
3

Knott, Jeffrey R., Andrei M. Sarna-Wojcicki, John A. Barron, Elmira Wan, Lynn Heizler, and Priscilla Martinez. "Tephrochronology of the Miocene Monterey and Modelo Formations, California." In Understanding the Monterey Formation and Similar Biosiliceous Units across Space and Time. Geological Society of America, 2022. http://dx.doi.org/10.1130/2022.2556(08).

Full text
Abstract:
ABSTRACT Tuff beds (volcanic ash beds and tuffs) have been known in the Miocene Monterey and Modelo Formations since they were initially described nearly 100 yr ago. Yet, these tephra layers have remained largely ignored. The ages and correlation of the Monterey and Modelo Formations are predominantly based on associated biostratigraphy. Here, we combined tephrochronology and biostratigraphy to provide more precise numerical age control for eight sedimentary sequences of the Monterey and Modelo Formations from Monterey County to Orange County in California. We correlated 38 tephra beds in the Monterey and Modelo Formations to 26 different dated tephra layers found mainly in nonmarine sequences in Nevada, Idaho, and New Mexico. We also present geochemical data for an additional 19 tephra layers in the Monterey and Modelo Formations, for which there are no known correlative tephra layers, and geochemical data for another 11 previously uncharacterized tephra layers in other areas of western North America. Correlated tephra layers range in age from 16 to 7 Ma; 31 tephra layers erupted from volcanic centers of the Snake River Plain, northern Nevada to eastern Idaho; 13 other tephra layers erupted from the Southern Nevada volcanic field; and the eruptive source is unknown for 12 other tephra layers. These tephra layers provide new time-stratigraphic markers for the Monterey and Modelo Formations and for other marine and nonmarine sequences in western North America. We identified tephra deposits of four supereruptions as much as 1200 km from the eruptive sources: Rainier Mesa (Southern Nevada volcanic field) and Cougar Point Tuff XI, Cougar Point Tuff XIII, and McMullen Creek (all Snake River Plain).
APA, Harvard, Vancouver, ISO, and other styles
4

Kennel, Charles F. "Correlation Of Geomagnetic Activity With The Solar Wind." In Convection and Substorms. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195085297.003.0009.

Full text
Abstract:
Even if a steady convection state could exist in principle, the magnetosphere will be rarely in it, since the interplanetary magnetic field is hardly ever stationary over the 2-4 hour convection cycle (Rostoker et al., 1988). Indeed, the hourly average north-south component of the interplanetary field retained the same sign for two consecutive hours only 12.2% of the time during solar cycles 20 and 21 (Hapgood et al., 1991). If only for this reason, we cannot avoid dealing with time-dependent convection. In this section, we take up one method of coping with the issue. Correlation studies take advantage of solar wind variability without ever needing to consider the precise nature of the time-dependent response of the magnetosphere. Though laborious, they are a procedurally straightforward way to test the viscous and reconnection models of convection. Geomagnetic activity, the response of geomagnetic field to currents flowing in the ionosphere and in space, has been monitored in an increasingly systematic way since the beginning of the eighteenth century. Today, a worldwide network of ground stations provides continuous records of the magnetic field at many different locations on the earth’s surface. Before computational data displays enabled large quantities of data to be summarized at a glance, the complex multi-station records were combined into single parameters called geomagnetic indices, which were designed to characterize one aspect or another of geomagnetic activity on a global scale. We will refer frequently to the auroral electrojet (AE) index, which was designed by Davis and Sugiura (1966) as a measure of electrojet activity in the auroral zone. The index is derived from the horizontal, northern component of the geomagnetic perturbation field measured at a number of observatories in the northern hemisphere. The number of observing stations contributing to the index is occasionally indicated in parentheses as AE(12) or AE(32), and so on. 
The maximum and minimum perturbations recorded at any given time at the stations in the AE network are called the AU and AL indices, respectively, for “upper” and “lower.” These provide a measure of the eastward and westward electrojet strengths, respectively. The difference between AU and AL is the AE index.
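The AU/AL/AE construction described above reduces, at each time step, to taking envelopes across stations. A minimal sketch in plain Python; the three-station records below are invented for illustration and are not real observatory data:

```python
def auroral_indices(h_by_station):
    """Compute AU, AL and AE time series from per-station records of the
    horizontal (H-component) geomagnetic perturbation, in nT.

    AU = maximum across stations (eastward-electrojet envelope),
    AL = minimum across stations (westward-electrojet envelope),
    AE = AU - AL.
    """
    n_times = len(h_by_station[0])
    au, al, ae = [], [], []
    for t in range(n_times):
        column = [station[t] for station in h_by_station]
        upper = max(column)   # AU
        lower = min(column)   # AL
        au.append(upper)
        al.append(lower)
        ae.append(upper - lower)  # AE = AU - AL
    return au, al, ae

# Synthetic 3-station, 4-sample record (values in nT, illustrative only)
h = [[50.0, 120.0, 80.0, 10.0],
     [-200.0, -30.0, -150.0, 5.0],
     [10.0, 60.0, -40.0, 0.0]]
au, al, ae = auroral_indices(h)
```

The number of rows in `h` plays the role of the AE(12)/AE(32) station count mentioned in the abstract.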
APA, Harvard, Vancouver, ISO, and other styles
5

Zinn-Justin, Jean. "Perturbative quantum field theory (QFT): Algebraic methods." In Quantum Field Theory and Critical Phenomena, 125–59. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198834625.003.0007.

Full text
Abstract:
This chapter discusses systematically the algebraic properties of perturbation theory in the example of a local, relativistic scalar quantum field theory (QFT). Although only scalar fields are considered, many results can be easily generalized to relativistic fermions. The Euclidean formulation of QFT, based on the density matrix at thermal equilibrium, is studied, mainly in the simpler zero-temperature limit, where all d coordinates, Euclidean time and space, can be treated symmetrically. The discussion is based on field integrals, which define a functional measure. The corresponding expectation values of products of fields, called correlation functions, are analytic continuations to imaginary (Euclidean) time of the vacuum expectation values of time-ordered products of field operators. They also have an interpretation as correlation functions in some models of classical statistical physics, in continuum formulations or, at equal time, of finite-temperature QFT. The field integral, corresponding to an action to which a term linear in the field coupled to an external source J has been added, defines a generating functional Z(J) of field correlation functions. The functional W(J) = ln Z(J) is the generating functional of connected correlation functions, to which only connected Feynman diagrams contribute. In a local field theory, connected correlation functions, as a consequence of locality, have cluster properties. The Legendre transform Γ(φ) of W(J) is the generating functional of vertex functions, to which only one-line irreducible Feynman diagrams, also called one-particle irreducible (1PI), contribute.
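In the notation of the abstract, the chain of generating functionals can be written compactly; the source term and the Legendre transform below are in their standard form:

```latex
\begin{aligned}
Z(J) &= \int [\mathrm{d}\phi]\,
  \exp\!\Big(-S(\phi) + \int \mathrm{d}^d x\, J(x)\,\phi(x)\Big),
& W(J) &= \ln Z(J),\\
\Gamma(\varphi) + W(J) &= \int \mathrm{d}^d x\, J(x)\,\varphi(x),
& \varphi(x) &= \frac{\delta W(J)}{\delta J(x)}.
\end{aligned}
```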
APA, Harvard, Vancouver, ISO, and other styles
6

Sturmer, Daniel M., Patricia H. Cashman, Simon R. Poulson, and James H. Trexler. "Evolution of the Pennsylvanian Ely–Bird Spring Basin: Insights from Carbon Isotope Stratigraphy." In Late Paleozoic and Early Mesozoic Tectonostratigraphy and Biostratigraphy of Western Pangea, 127–48. SEPM (Society for Sedimentary Geology), 2022. http://dx.doi.org/10.2110/sepmsp.113.04.

Full text
Abstract:
Analysis and correlation of strata in ancient basins are commonly difficult due to a lack of high-resolution age control. This study tackled this problem for the latest Mississippian to middle Pennsylvanian Ely–Bird Spring basin. Here, 1095 new carbon isotope analyses combined with existing biostratigraphy at six sections throughout the basin constrain changes in relative sediment accumulation rates in time and space. The Ely–Bird Spring basin contains dominantly shallow-water carbonates exposed in eastern and southern Nevada, western Utah, and southeastern California. It formed as part of the complex late Paleozoic southwestern Laurentian plate margin. However, the detailed evolution of the basin, and hence the tectonic driver(s) of deformation, is poorly understood. The combined isotopic and biostratigraphic data were correlated using the Match-2.3 dynamic programming algorithm. The correlations show a complex picture of sediment accumulation throughout the life of the Ely–Bird Spring basin. Initially, the most rapid sediment accumulation was in the eastern part of the basin. Throughout Morrowan time, the most rapid sediment accumulation migrated to the northwestern part of the basin, culminating in a peak of sediment accumulation in Atokan time. This peak records tectonic loading at the north or northwest margin of the basin. Basin sedimentation was interrupted by early Desmoinesian time in the north by formation of northwest-directed thrust faults, folds, uplift, and an associated unconformity. Deposition continued in the south with a correlative conformity and increased clastic input. The combination of isotopic and biostratigraphic data for correlation is therefore a valuable tool for elucidating temporal basin evolution and can be readily applied to tectonically complex carbonate basins worldwide.
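The Match algorithm used in the study aligns two records by dynamic programming. The sketch below shows only the core idea (plain dynamic time warping of two proxy series); the published Match-2.3 algorithm additionally penalizes implausible accumulation-rate changes, which is not reproduced here, and the series are invented:

```python
def dtw_align(a, b):
    """Minimal dynamic-time-warping alignment of two signals.
    Returns the total alignment cost; zero means a perfect match."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # pointwise mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # stretch record b
                                 cost[i][j - 1],      # stretch record a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# Identical carbon-isotope-like records align at zero cost.
total = dtw_align([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0])
```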
APA, Harvard, Vancouver, ISO, and other styles
7

"Red Snapper: Ecology and Fisheries in the U.S. Gulf of Mexico." In Red Snapper: Ecology and Fisheries in the U.S. Gulf of Mexico, edited by ROBERT J. ALLMAN and GARY R. FITZHUGH. American Fisheries Society, 2007. http://dx.doi.org/10.47886/9781888569971.ch21.

Full text
Abstract:
<em>Abstract.</em>—Red snapper <em>Lutjanus campechanus </em>sagittal otoliths were sampled from U.S. Gulf of Mexico commercial vertical hook and line, longline and recreational landings over a twelve-year period (1991–2002). Our objectives were to examine the empirical age structure of red snapper through space and time, to gauge the relative year-class strength over time, and to compare the impact of strong year-classes upon annual age structure by fishing sector. The recreational fishery selected the youngest fish with a mode at 3 years and a mean age of 3.2 years. The commercial vertical hook and line fishery selected for slightly older fish with a mode of 3 years and a mean age of 4.1 years. The commercial longline fishery selected the oldest individuals, with fish first fully recruited to the fishery by age 5 and a mean age of 7.8 years. Only the commercial longline fishery age distributions were significantly different between the eastern and western Gulf of Mexico. Based on age progressions, strong 1989 and 1995 year-classes were dominant in the landings of the recreational and commercial vertical hook and line fisheries and the 1995 year-class was dominant in the commercial longline landings. A relative year-class index further highlighted these results, and we noted a significant correlation in year-class strength between recreational and commercial vertical hook and line sectors. The year-class index for combined sectors was also significantly correlated between eastern and western Gulf of Mexico with 1989 and 1995 year-classes similarly dominating both regions. An empirical age progression year-class index could be valuable in correlation with early life abundance indices of red snapper and serve to provide inference about the relative error of recruitment data.
APA, Harvard, Vancouver, ISO, and other styles
8

Zubok, Yulia A., and Alexander S. Lyubutov. "Youth in the Socio-Cultural Reality of Russian Society: Semantic Determinants of Self-Regulation." In Russia in Reform: Year-Book [collection of scientific articles], 339–78. Federal Center of Theoretical and Applied Sociology of the Russian Academy of Sciences, Moscow, Russian Federation, 2022. http://dx.doi.org/10.19181/ezheg.2022.13.

Full text
Abstract:
The process of self-regulation is considered as the interaction of young people in the space of their life activity, as a result of which communication structures are formed, representing groups of young people united by common meanings. The meanings are formed in the process of youth interiorization (internalization) of traditional and modern culture and are the result of their interaction with the basic culture and subculture. Inherited initially in an unconscious form, and afterwards transferred to the field of consciousness, manifested in the form of archetypes, mental and modern character traits, the meanings, considered in their interrelation as an ordered integrity, represent the hierarchical structure of the multidimensional semantic space of socio-cultural reality. To study it, the method of structural-taxonomic analysis was applied, which reflected the connections between the elements of the self-regulation mechanism and the semantic characteristics of the life of young people. On the basis of their conjugation, the integral structure of the socio-cultural space of youth is shown, and the main semantic determinants and related models of self-regulation are identified: a model based on the spiritual meanings of a common culture; a hybrid model with a complex layered structure of the “centaur” type; a model focused on basic material values as an existential basis for a calm, prosperous everyday life, achieved in an institutionally approved way — through labor; a model based on traditional family values and at the same time statist orientations; a confrontational model determined by the total distrust of young people, mainly in socio-political institutions; a model of consent and solidarity, due to the opposite attitude towards trust, associated with the same public and political institutions; a modern liberal model of self-regulation of an individualistic type; and a model based on the ideas of imperial patriotism combined with an extreme form of demonstrative nationalism. 
Two models have been identified — “hybrid” and “traditional” — the semantic configurations of which are “root” for six other models. The “hybrid” model is associated with the “spiritual”, “material” and “confrontational” models, and the “traditional” model (traditional family and socio-political values) with the “spiritual” and “trust and solidarity” models, but also with “imperial nationalism”. Nodal taxa have been identified with their corresponding semantic fields, which have connections with other types of self-regulation and are “root”, i.e. meaningful.
APA, Harvard, Vancouver, ISO, and other styles
9

Bartkowiak, Piotr, and Tomasz Nowacki. "Determinanty wpływające na obniżenie wartości nieruchomości mieszkaniowych na rynku wtórnym." In Tendencje rozwoju współczesnego rynku nieruchomości mieszkaniowych, 162–85. Wydawnictwo Uniwersytetu Ekonomicznego w Poznaniu, 2022. http://dx.doi.org/10.18559/978-83-8211-124-8/10.

Full text
Abstract:
Determinants causing the reduction of the value of residential real estate on the secondary market. Purpose: The aim of the study is to determine the impact of noise, treated as a phenomenon accompanying transport infrastructure, on the market value of residential real estate. Design/methodology/approach: The article presents a study of the seasonality of phenomena, which makes it possible to determine their cyclical character, e.g. price jumps on the market taking place over a specific period of time. The analysis of the dynamics of these phenomena makes it possible to show changes in the economic situation, e.g. an increase or decrease in the price per 1 m² of living or usable space. Additionally, the authors have also included a study of the interdependence of phenomena (correlation), which made it possible to determine the interrelationships between the phenomena (or their absence), i.e. the impact (or lack of impact) of noise on the price per 1 m² of flat area in dwellings located in civil or military flight zones. The compilation of the obtained data has been combined with an analysis of the structure of dwellings in terms of their area, the floor on which they were located, the number of rooms, as well as the age of the building. Findings: The conducted research has shown that noise is an important price factor on the housing market. A number of residential real estate offers have confirmed the relationship between falling price and increasing noise, and vice versa: the lower the noise level, the higher the price. However, the amount of research into the effect of noise on prices still seems to be insufficient, which makes it difficult to forecast the impact of the noise level on the future value of dwellings. Therefore, it is problematic to determine the trend of such an impact. 
Originality and value: The noise factor is an important element not only in the decision-making process concerning the purchase of a dwelling, but also during investment activities carried out by developers. Locating an investment in the vicinity of a source of noise may significantly reduce the potential income from the sale of dwellings due to a drop in their value. Noise, which affects human life processes, is indirectly reflected in land and housing prices. The impact of the noise level on the decrease in real estate value is determined by the noise depreciation index (NDI) or the noise sensitivity depreciation index (NSDI). These indices show how a change of 1 dB in the noise level in the vicinity of a property affects its value.
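The NDI/NSDI indices described above express value loss per decibel of excess noise. A minimal sketch of how such an index would be applied; all numeric values here are illustrative and are not taken from the study:

```python
def ndi_adjusted_price(base_price, noise_db, reference_db, ndi_pct_per_db):
    """Apply a noise depreciation index (NDI): the value falls by
    ndi_pct_per_db percent for every dB of noise above reference_db.
    Parameter values in the example below are placeholders."""
    excess_db = max(0.0, noise_db - reference_db)
    return base_price * (1.0 - ndi_pct_per_db / 100.0 * excess_db)

# A dwelling 10 dB above the reference level, with a 0.5 %/dB index
price = ndi_adjusted_price(400_000, 65.0, 55.0, 0.5)
```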
APA, Harvard, Vancouver, ISO, and other styles
10

Biagioli, Francesca. "Neo-Kantianism." In Routledge Encyclopedia of Philosophy. London: Routledge, 2022. http://dx.doi.org/10.4324/9780415249126-dc055-2.

Full text
Abstract:
The term ‘neo-Kantianism’ indicates various attempts at a renewal of Kant’s philosophy in the modern context. It began with the rehabilitation of Kant to overcome the speculative turn of classical German idealism and ground philosophy in the investigation of the conditions of knowledge. In this sense, the origins of neo-Kantianism are sometimes dated back to figures opposing speculative idealism in the early nineteenth-century philosophical landscape, including Johann Friedrich Herbart, Jakob Friedrich Fries, Friedrich Eduard Beneke (Beiser 2014). Philosophers from the next generation sharing the commitment to a Kantian theory of knowledge also include Kuno Fischer, Eduard Zeller, Otto Liebmann, Jürgen Bona Meyer, Friedrich Albert Lange. More specifically, ‘neo-Kantianism’ is used to indicate a philosophical movement developed beginning in the 1870s with the intent to shed light on the basic tenets of Kant’s work and face challenges to traditional philosophy coming from nineteenth-century scientific advancements in the spirit of Kant’s critical philosophy (see, e.g., Köhnke 1991; Patton 2005; Luft 2015). The neo-Kantian movement started with Hermann Cohen’s seminal interpretation of Kant (Cohen 1871a; 1877; 1885), and subsequently flourished in German universities, with two main centres in Marburg, where Cohen was appointed lecturer in 1873, and in South West Germany. The development of experimental methods in nineteenth-century life sciences offered important insights for the theory of knowledge, but also raised the question about the possibility of reducing cognitive processes to physical ones. Kant’s critical philosophy offered a powerful argument against materialism, by limiting the validity of causal explanations to the realm of appearances rather than replacing them with metaphysical explanations. 
In conjunction with the materialism controversy, the 1860s saw a resurgence of interest in classical interpretative issues concerning Kant, including the assumption of a thing in itself, its relation to the sensibility, the status of a priori elements of knowledge. Following a suggestion first made by the physiologist and physicist Hermann von Helmholtz, some of those who argued for a return to Kant believed that Kant’s a priori forms deserved an empirical explanation. In contrast with this, Cohen emphasized that the individuation of a priori elements of knowledge requires a meta-level inquiry into the presuppositions of the sciences, that is, what Kant identified as the transcendental cognition. Cohen took Kant to imply that experience is given in the fact of science, and the transcendental task is to derive the preconditions for the possibility of this fact by regressive analysis. This formulation allowed Cohen to address the controversial issues raised in the Kant scholarship by emphasizing the logical structure of experience, while considering part of Kant’s considerations about the natural sources of knowledge to be a remainder of his reliance on empirical psychology in the precritical period. At the same time, Cohen’s interpretation of Kant set the task of a novel investigation of the historically documented facts of science and culture in the wake of the transcendental method. Cohen’s interpretation set a standard, not only for its contribution to the historical reconstruction of the development of Kant’s thought, but also for the idea of a fruitful correlation between interpretation and philosophical theorizing. In this sense, the revival of Kant’s critical philosophy involved also the idea of a constant renewal of it. 
Over the next decade, other influential interpretations were developed with various theoretical purposes, from the attempt to integrate the Kantian theory of a priori cognition with insights derived from the empiricist theory of knowledge (Riehl 1876; 1879) to the appreciation of Kant’s attempt to account for the application of universal rules of thought outside the domain of the mathematical science of nature in the third Critique (Windelband 1878–80). The Marburg School formed in the wake of Cohen’s characterization of the transcendental method. Its main representatives were Paul Natorp, who became Cohen’s colleague at Marburg in 1885, and Ernst Cassirer, who studied there from 1896 to 1899, and continued to subscribe to the methodology of his Marburg teachers throughout his intellectual career. The South West German School developed around Wilhelm Windelband’s teaching at the universities of Freiburg from 1877 to 1882, Strasbourg from 1882 to 1903, and Heidelberg from 1903 to 1915. Other leading figures of this school were Heinrich Rickert and his student Emil Lask. There were also neo-Kantians who did not strictly belong to a school or combined neo-Kantianism with other philosophical traditions. This includes, for example, Alois Riehl, Jonas Cohn, Richard Hönigswald. Each school focused on some common themes. Marburg neo-Kantians gradually broadened the scope of their research from Kant to the philosophical and scientific roots of what they called a critical or scientific form of idealism, according to which the objects of experience are constructed by scientific concepts. They sought to provide an account for the various spheres of human experience by extending the transcendental inquiry from the fact of science to the facts of culture. South West German neo-Kantians focused on the question concerning the grounds for relating unconditionally valid values to contingent experience. 
This led them to engage in the late nineteenth and early twentieth century debate about the possibility of an autonomous foundation of the human sciences. They pursued the project of a philosophy of culture offering a unitary account of the various human activities from the standpoint of the theory of values. These commonalities notwithstanding, neo-Kantianism was a complex movement, with internal debates and major developments within the same school, as well as connections between different schools and traditions. Neo-Kantianism dominated the philosophical scene until the early 1910s, and remained in the background of the main philosophical innovations in the German-speaking world for the next two decades until the rise of Nazism. It is considered to have made lasting contributions in epistemology, philosophy of science, history of philosophy and philosophy of culture (see, e.g., Luft and Makkreel 2010; De Warren and Staiti 2015; Edgar and Patton 2018; Kinzel and Patton 2021).
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Space-time combined correlation integral"

1

Shah, Neil, Dharmeshkumar M. Agrawal, and Niranajan Pedanekar. "Adding Crowd Noise to Sports Commentary using Generative Models." In Life Improvement in Quality by Ubiquitous Experiences Workshop. Brazilian Computing Society, 2021. http://dx.doi.org/10.5753/lique.2021.15715.

Full text
Abstract:
Crowd noise forms an integral part of a live sports experience. In the post-COVID era, when live audiences are absent, crowd noise needs to be added to the live commentary. This paper exploits the correlation between commentary and crowd noise of a live sports event and presents a method for stylizing sports commentary audio by generating live, stadium-like sound using neural generative models. We use Generative Adversarial Network (GAN)-based architectures such as Cycle-consistent GANs (Cycle-GANs) and Mel-GANs to generate live, stadium-like sound samples given the live commentary. Due to the unavailability of raw commentary sound samples, we use end-to-end time-domain source separation models (SEGAN and Wave-U-Net) to extract commentary sound from combined recordings of the live sound acquired from YouTube highlights of soccer videos. We present a qualitative and subjective user evaluation of the similarity of the generated live sound with the reference live sound.
APA, Harvard, Vancouver, ISO, and other styles
2

Auersperg, Jürgen, Andreas Schubert, Dietmar Vogel, Bernd Michel, and Herbert Reichl. "Fracture and Damage Evaluation in Chip Scale Packages and Flip-Chip-Assemblies by FEA and MicroDAC." In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-0508.

Full text
Abstract:
Abstract The thermo-mechanical reliability of electronic packages such as flip chip assemblies and chip scale packages is one of the most important conditions for adopting these technologies in industrial applications. On the other hand, various kinds of inhomogeneities, localized stresses and thermal mismatch between the silicon die and the substrate lead to interface delaminations, chip cracking and fatigue of solder interconnects. The contribution shows the use of nonlinear finite element simulations with respect to the nonlinear, temperature and rate dependent behavior of the different materials used (metals, polymeric and solder materials) and the combination with experimental investigations. The development and application of failure models (e.g. thermal fatigue, lifetime prediction by Coffin-Manson type equations, integral fracture mechanics approaches such as the J-, Ĵ-, and ΔT*-integrals, and evaluation of critical regions) is explained in detail. Furthermore, the simulation of damage growth in solder interconnects by an automatic adaptive finite element technique is performed. Inherent local damage models allow us to study the correctness of crack and damage models. For this reason, some results have been compared to micrographs from damaged interconnects and to strain measurement results obtained by experimental methods. In particular, the microDAC measurement method is a powerful tool which inspects the displacement fields on the basis of the gray scale correlation method applied to micrographs from scanning electron microscopy, laser scanning microscopy and optical microscopy. The application of those combined investigations should help to better understand the failure mechanisms especially in solder joints and directly support further applications for enhancing the thermo-mechanical reliability of advanced electronic assemblies.
APA, Harvard, Vancouver, ISO, and other styles
3

Bagnoli, K. E., Z. A. Cater-Cyker, C. A. Hay, R. L. Holloman, Y. Hioe, G. Wilkowski, B. C. Rollins, and K. M. Nikbin. "Assessment of Flaws in Non-Stress Relieved Carbon Steel Welds Caused by Hydrogen Attack." In ASME 2021 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/pvp2021-61603.

Full text
Abstract:
Abstract Cracking due to high temperature hydrogen attack (HTHA) has been observed in non-PWHT’d carbon steel process equipment at conditions of temperature and hydrogen partial pressure below the original design limits recommended in API RP 941, necessitating changes to that standard. Consequently, flaw assessment procedures are needed to manage defects detected during inspection, or to establish appropriate inspection frequency. The latter typically involves estimating the time for a detected or postulated crack to reach a critical size. This type of evaluation has been difficult to perform owing to the scarcity of fracture toughness and crack growth rate data for steels in high temperature hydrogen. To address this gap, an experimental program was undertaken to help describe the ductile tearing characteristics of steel removed from service with various levels of HTHA damage. Near full thickness single edge notched tension SEN(T) specimens were machined from field samples and tested using existing “natural” cracks as the starter-crack. This provided insight into the behavior of real flaws subject to constraint conditions closely matching circumferential flaws in piping. Tests of undamaged steel were also performed in hydrogen at conditions designed to produce HTHA and compared with tests run in nitrogen. Crack growth tests obtained from the literature have been used to develop an empirical crack growth law for use in fitness for service assessments. The C* integral was also explored as a parameter for describing crack growth rate due to the strong similarity of HTHA damage to creep. The key results show substantial reduction in tearing resistance resulting from HTHA damage. A crack growth law similar to the Nikbin-Smith-Webster (NSW) model using the C* integral was found to show promise in describing the combined effects of creep and HTHA on crack propagation, although additional testing is needed to validate the correlation.
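An NSW-type power law in C*, like the one explored above, can be integrated step by step to track crack size over time. A minimal sketch; the constants A and q, the C* value, and the time steps are placeholders, not values fitted to the test data:

```python
def crack_growth_history(a0, c_star, A, q, dt, steps):
    """Integrate a power-law (NSW-type) crack growth rate,
    da/dt = A * (C*)**q, with simple forward-Euler steps.
    C* is held constant here purely for illustration."""
    a = a0
    history = [a]
    rate = A * c_star ** q   # constant growth rate under constant C*
    for _ in range(steps):
        a += rate * dt
        history.append(a)
    return history

# Illustrative run: initial crack 1.0 mm, 10 unit time steps
hist = crack_growth_history(a0=1.0, c_star=2.0, A=0.01, q=0.85, dt=1.0, steps=10)
```

In a fitness-for-service assessment one would stop the loop when `a` reaches the critical size, giving the remaining-life estimate the abstract mentions.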
APA, Harvard, Vancouver, ISO, and other styles
4

Stalker, K. T., P. A. Molley, M. B. Sandoval, and S. L. Humphreys. "Acoustooptic processor for real-time synthetic aperture radar Image formation." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.tus2.

Full text
Abstract:
A real-time optical processor has been built to investigate the applicability of optical processing to synthetic aperture radar image formation. By taking advantage of the high processing speed and large time-bandwidth product of acoustooptic devices (AODs) combined with the multichannel correlation capability of CCD detectors used in the time delay and integrate (TDI) mode, a small real-time SAR processor can be built. The required 2-D matched filtering operation is first performed in range using the AOD and then the azimuthal matched filtering is performed using either a fixed or alterable mask in a TDI correlator configuration.1 A processor has been built to investigate the effect of system architecture on image quality and system complexity. The system is described and experimentally obtained PSF and image data are shown. The potential for system miniaturization and ruggedization is also discussed.
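The range compression the AOD performs optically is mathematically a correlation of the received signal with a reference replica; the peak of the correlator output marks the range delay. A toy digital equivalent, with signals invented for the example:

```python
def matched_filter(received, reference):
    """Slide the reference replica over the received signal and
    accumulate the products: a discrete matched-filter correlation."""
    n, m = len(received), len(reference)
    out = []
    for lag in range(n - m + 1):
        acc = sum(received[lag + k] * reference[k] for k in range(m))
        out.append(acc)
    return out

ref = [1.0, -1.0, 1.0]                     # toy coded pulse
rx = [0.0, 0.0, 1.0, -1.0, 1.0, 0.0]       # echo delayed by 2 samples
response = matched_filter(rx, ref)
peak_lag = max(range(len(response)), key=response.__getitem__)
```

In the processor described above, the second (azimuthal) matched filtering is the analogous operation carried out across CCD rows in TDI mode.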
APA, Harvard, Vancouver, ISO, and other styles
5

Rondinella, V. V., T. Wiss, J. P. Hiernaut, and J. Cobos. "Studies on Spent Fuel Alterations During Storage and Radiolysis Effects on Corrosion Behaviour Using Alpha-Doped UO2." In ASME 2003 9th International Conference on Radioactive Waste Management and Environmental Remediation. ASMEDC, 2003. http://dx.doi.org/10.1115/icem2003-4593.

Full text
Abstract:
UO2 containing different fractions of short-lived alpha-emitters, the so-called alpha-doped UO2, simulates the level of activity of spent fuel after different storage times, and can be used to study the effects of radiolysis on the corrosion behaviour of aged spent fuel exposed to groundwater in a geologic repository. Furthermore, the integral over time of the alpha-decay in alpha-doped UO2 can simulate the decay damage accumulated in spent fuel during storage. This allows investigating property modifications occurring to the fuel during storage periods of interest (e.g. in view of spent fuel retrieval or in view of final disposal) within a laboratory-acceptable timescale. Periodical measurements of lattice parameter are performed on high activity alpha-doped UO2 to investigate the build-up of radiation damage and evaluate possible dose rate effects. Additionally, annealing tests combined with He-release measurements using a Knudsen cell and with microstructure examination using TEM are performed to establish a correlation among the annealing of damage in the microstructure (mainly characterized by dislocation loops) and the release behaviour of He. The effects on the microstructure due to the accumulation of He and α-decay damage are of interest as they may considerably affect the mechanical integrity of the fuel rods, by causing e.g. swelling or cracking in the fuel and/or overpressurization of the cladding. Alpha-doped UO2 with specific activities spanning over three orders of magnitude and undoped UO2 were used in static leaching experiments at room temperature in deionized water under nominally anoxic conditions. Under these experimental conditions (single effect studies) a clear dissolution enhancing effect of alpha-radiolysis was observed coupled with the establishment of higher redox potential due to the radiolytic process. An alpha-activity dependence of the dissolution behaviour was observed.
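Lattice-parameter build-up with alpha-decay dose is commonly fitted with a saturating exponential. The abstract does not give the functional form used; the sketch below is one such empirical law, with placeholder parameter values rather than measured ones:

```python
import math

def lattice_swelling(dose, delta_sat, b):
    """Empirical saturation law often used for alpha-decay damage:
    delta_a / a0 = delta_sat * (1 - exp(-b * dose)),
    where dose is cumulative alpha decays per unit volume.
    delta_sat and b are placeholders, not fitted values."""
    return delta_sat * (1.0 - math.exp(-b * dose))

# Illustrative: saturation swelling 0.8 %, rate constant 1e-18 cm^3/decay
low = lattice_swelling(1e18, 0.008, 1e-18)
high = lattice_swelling(1e20, 0.008, 1e-18)
```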
APA, Harvard, Vancouver, ISO, and other styles
6

van Lil, Thorsten, Matthias Voigt, Konrad Vogeler, Christian Wacker, and Uwe Rockstroh. "Probabilistic Analysis of Radial Gear Compressors." In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-69647.

Full text
Abstract:
Integrally geared compressors offer a wide range of applications for probabilistic analysis. The combination of a multi shaft compressor with an integral gear under changing operating conditions creates a lot of design challenges. The gear design needs to meet the requirements of the compression process, like the rotating speed of the pinions or the pinion power. These requirements lead to a specific gear with its specific properties. The examination and verification of the internal correlations between thermodynamics and gear design is one significant objective of the project as a high efficiency of the compression process may be connected with high gear losses, or the other way around. These design challenges ought to be investigated with probabilistic methods, such as the Monte-Carlo-Simulation. With such methods, it is possible to explore a wide design space automatically in order to learn about correlations between probabilistic input and output parameters as well as in order to choose a better design. In a first step of this project, all process steps relevant for designing an integrally geared compressor have been combined to form one single automated algorithm. This algorithm is used for Monte-Carlo-Simulations (MCS) with optimal Latin-Hypercube as the sampling method. On the basis of the MCS results, response surfaces can be created to describe the scatter and the behaviour of the result parameters. Furthermore, response surfaces can be used as meta models for optimization and prediction. This paper seeks to address the use and the performance of response surfaces.
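The sampling step described above can be sketched in its plain form. Note that the project used *optimal* Latin hypercube sampling, which additionally optimizes a space-filling criterion; the sketch below shows only the core stratification property, with invented sample counts:

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Plain Latin hypercube sampling on the unit cube: each dimension
    is split into n_samples equal strata and every stratum is hit
    exactly once, so marginal coverage is uniform by construction."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # random stratum pairing
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

rng = random.Random(42)
pts = latin_hypercube(8, 2, rng)  # 8 designs over 2 input parameters
```

Each row of `pts` would be mapped to physical design parameters (pinion speed, power, ...) and fed through the automated design algorithm; response surfaces are then fitted to the resulting MCS outputs.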
APA, Harvard, Vancouver, ISO, and other styles
7

Korivi, Vamshi M., Su K. Cho, and Amer A. Amer. "Port DOE With Parametric Modeling and CFD." In ASME 2006 2nd Joint U.S.-European Fluids Engineering Summer Meeting Collocated With the 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/fedsm2006-98522.

Full text
Abstract:
Port design is an integral part of a combustion system. For a spark-ignited engine, it affects both in-cylinder charge motion and performance potential: turbulence intensity and air-mixture quality influence dilution tolerance and, as a result, fuel economy, while breathing ability governs wide-open-throttle performance. Traditional approaches rely on experimental techniques to reach a target balance between charge motion and breathing capacity, but such techniques do not necessarily yield an optimized solution. Over the past decade, the continued development of Computational Fluid Dynamics (CFD) tools, Design of Experiments (DOE) and optimization techniques, combined with increased computational power, has led to new methodologies with the potential to deliver optimized solutions. Recent releases of engineering CAD packages, like CATIA V5 and Pro-Engineer, enable both parametric modeling and associative design updates. This paper demonstrates a process for coupling CFD analysis with engineering CAD software using process integration and design optimization (PIDO) software: CATIA V5, the ICEM-CFD meshing tool and the FLUENT-UNS CFD code were integrated under ISIGHT to run through many port designs automatically. The automated coupling was aimed at optimizing the port layout for a given cost function, such as flow restriction or charge motion, subject to manufacturing and packaging constraints. Accomplishing this task requires running the executables of the various software packages through macros and scripts. The integration methodology applied best design practices for an intake port, and numerous numerical experiments were run. The methodology was demonstrated on a V-engine intake port with geometric, manufacturing and packaging constraints.
To validate the methodology, two distinct designs were produced: the first demonstrated high flow at the expense of charge motion, while the other targeted tumble charge motion to the detriment of flow. Both concepts were prototyped and evaluated on a flow bench, and good correlation between simulation and test results was demonstrated. It was concluded that this process could be reliably adopted in a production environment with a reasonable turnaround time.
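The loop the authors automate in ISIGHT (parametric geometry update, remesh, solve, evaluate the cost function against constraints) can be illustrated schematically. The sketch below replaces the CATIA V5/ICEM-CFD/FLUENT-UNS tool chain with a hypothetical analytic stand-in (`run_cfd`; the parameter names and constraint values are invented for illustration), and only the full-factorial DOE structure mirrors the workflow described in the abstract.

```python
import itertools

# Hypothetical stand-in for the real tool chain: in the paper, geometry updates
# go to CATIA V5, meshing to ICEM-CFD and the flow solve to FLUENT-UNS, all
# driven through macros and scripts. A simple analytic function mimics the
# flow-vs-tumble trade-off typical of intake ports.
def run_cfd(port_radius_mm: float, tumble_ramp_deg: float) -> dict:
    flow_coeff = 0.5 + 0.01 * port_radius_mm - 0.004 * tumble_ramp_deg
    tumble_ratio = 0.8 + 0.03 * tumble_ramp_deg - 0.005 * port_radius_mm
    return {"flow": flow_coeff, "tumble": tumble_ratio}

def packaging_ok(port_radius_mm: float) -> bool:
    # Hypothetical packaging constraint: port must fit within the head casting.
    return port_radius_mm <= 24.0

def doe_optimize(radii, ramps, min_tumble):
    """Full-factorial DOE: maximize flow subject to a charge-motion target."""
    best = None
    for r, a in itertools.product(radii, ramps):
        if not packaging_ok(r):
            continue  # violates packaging constraint
        result = run_cfd(r, a)
        if result["tumble"] < min_tumble:
            continue  # insufficient charge motion
        if best is None or result["flow"] > best[2]["flow"]:
            best = (r, a, result)
    return best

best = doe_optimize(radii=[20.0, 22.0, 24.0, 26.0],
                    ramps=[0.0, 5.0, 10.0, 15.0],
                    min_tumble=1.0)
```

In the real process each `run_cfd` call would drive the CAD update, meshing and flow solution through scripts, so the same loop structure applies with the analytic function swapped for the tool chain.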
8

Gallatin, Gregg M. "Functional integral representation of the dynamics of squeezed states." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.thii5.

Abstract:
Functional integrals provide a powerful and useful formalism for deriving both perturbative and nonperturbative results in quantum theory. Here we consider their application to the problem of formulating the dynamics of coherent and squeezed states. We derive the functional integral representation of the coherent state density matrix by combining the closed time path representation of the density matrix in terms of functional integrals in configuration space with the projection of coherent states onto configuration space states. In this formalism the atomic variables can, without loss of generality, be integrated out exactly yielding a single functional integral which describes the combined quantum dynamics of both the atomic and the coherent states.
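For a single harmonic mode, the projection of a coherent state onto configuration-space states used in such constructions is the standard Gaussian overlap (a sketch in the usual conventions, not necessarily the paper's notation):

```latex
\langle x \mid \alpha \rangle
  = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}
    \exp\!\left[-\frac{m\omega}{2\hbar}\,(x - x_\alpha)^2
                + \frac{i}{\hbar}\, p_\alpha \left(x - \frac{x_\alpha}{2}\right)\right],
\qquad
x_\alpha = \sqrt{\tfrac{2\hbar}{m\omega}}\,\operatorname{Re}\alpha,
\quad
p_\alpha = \sqrt{2 m \hbar \omega}\,\operatorname{Im}\alpha .
```

Inserting this overlap at the endpoints of the closed-time-path configuration-space functional integral is what yields the coherent-state density-matrix representation referred to in the abstract.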
9

Bulnes Aguirre, Francisco, Eduardo Hernández Alvarez, and M. C. Juan Carlos Maya Castellanos. "Design of Measurement and Detection Devices of Curvature Through the Synergic Integral Operators of the Mechanics on Light Waves." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-10038.

Abstract:
Starting from a theory of integrals for determining observables of fields and particles in mathematics and mechanics [1–3], and applying a generalization of the principle of minimum action to an entire field of objects, so that a combined action over all the movement trajectories useful in evaluating an observable is obtained through synergic integral operators [1, 4], an electromagnetic device is designed and developed to measure the curvature of objects. The measurement uses the values of these integrals on geodesics and movement trajectories generated by the field, together with the deviations of the waves generated by the device. The most concrete case is a device that measures curvature through light waves and their reflections, using tomography on the surfaces of bodies. As future directions of this research, it is intended to use the device to measure the curvature and torsion of the universe, as well as to detect fields, particles and regions of space-time susceptible to interstellar travel.
10

Martínez, Adrián, Carlos Lledó Ardila, Jordi Gutiérrez Cabello, and Pilar Gil Pons. "Further evidence of the long-term thermospheric density variation using 1U CubeSats." In Symposium on Space Educational Activities (SSAE). Universitat Politècnica de Catalunya, 2022. http://dx.doi.org/10.5821/conference-9788419184405.041.

Abstract:
Faculty members, undergraduate and graduate students of the School of Communication and Aerospace Engineering (Polytechnical University of Catalonia) are participating in a series of studies to determine the thermospheric density. These studies involve planning a space mission, designing and constructing small satellites, and performing the related data analysis. This article presents a method for determining the thermospheric density and summarises the academic context in which we develop our work. Several studies have reported a downtrend in thermospheric density, with relative values ranging from –2% to –7% per decade. Although solar and geomagnetic activity are well known to be the main drivers of thermospheric density variations, this downtrend was attributed to the rise of greenhouse gases. We present an update of this progression, considering the last solar cycle (2009-2021) and using Two-Line Element sets (TLEs) of 1U CubeSats and the spherical ANDE-2 satellites. The TLEs were used to propagate the orbits numerically with SGP4 (Simplified General Perturbations), and the average density between two consecutive TLEs was then computed by integrating the appropriate differential equation. Using the NRLMSISE-00 (Picone 2002) and JB2008 (Bowman 2008) atmospheric models, we calculated an average density deviation per year. We built a comprehensive time series of thermospheric density values ranging from 1967 to the present by merging the Emmert (2015) thermospheric density data with our results computed with both NRLMSISE-00 and JB2008. A linear regression on the combined dataset yields a decreasing trend of –5.1% per decade. We also studied geomagnetic and solar activity to isolate the possible greenhouse-gas effect during the considered period.
Our results show a strong correlation between geomagnetic activity and density deviation near the solar minima, and we propose that the previously reported long-term density deviation could be caused by a poor adjustment of the effects of geomagnetic activity. Finally, we showed that orbital information from small satellites can be used efficiently to assess the evolution of thermospheric density variations. Additional data obtained from future missions (such as the one proposed by our group) will eventually allow a better characterisation of the atmospheric density and help disentangle the possible effects of greenhouse gases on its variations.
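The final regression step described above can be sketched in a few lines. The data here are synthetic (a fabricated yearly deviation series with a built-in trend plus noise), standing in for the merged Emmert (2015) and TLE-derived deviations; only the fitting procedure reflects the paper's workflow.

```python
import numpy as np

# Sketch of the trend-fitting step only: the paper derives mean densities by
# propagating TLEs with SGP4 and normalises them with NRLMSISE-00 or JB2008.
# Here we fabricate yearly density deviations (in percent) with a known
# decadal trend to show how a figure like -5.1 %/decade is extracted.
rng = np.random.default_rng(42)
years = np.arange(1967, 2022)
true_trend_per_decade = -5.1                       # percent per decade
deviation = true_trend_per_decade * (years - years[0]) / 10.0
deviation += rng.normal(0.0, 1.0, years.size)      # model/solar-cycle scatter

# Ordinary least-squares line through the deviation time series.
slope, intercept = np.polyfit(years, deviation, 1)
trend_per_decade = 10.0 * slope                    # percent per decade
```

With real data the scatter term is replaced by the residual solar and geomagnetic variability, which is why the authors examine geomagnetic indices before attributing the trend to greenhouse gases.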
