Journal articles on the topic "Gravitational Time Advancement"

Consult the 28 best journal articles for your research on the topic "Gravitational Time Advancement".


1

Deng, Xue-Mei, and Yi Xie. "Gravitational time advancement under gravity's rainbow". Physics Letters B 772 (September 2017): 152–58. http://dx.doi.org/10.1016/j.physletb.2017.06.036.

2

Bhadra, Arunava, and Kamal K. Nandi. "Gravitational time advancement and its possible detection". General Relativity and Gravitation 42, no. 2 (18 June 2009): 293–302. http://dx.doi.org/10.1007/s10714-009-0842-6.

3

Deng, Xue-Mei. "Probing f(T) gravity with gravitational time advancement". Classical and Quantum Gravity 35, no. 17 (1 August 2018): 175013. http://dx.doi.org/10.1088/1361-6382/aad391.

4

Deng, Xue-Mei. "Effects of a brane world on gravitational time advancement". Modern Physics Letters A 33, no. 19 (21 June 2018): 1850110. http://dx.doi.org/10.1142/s0217732318501109.

Abstract
Solar System tests of a brane world known as the DMPR model were studied in recent works. The correction of the DMPR model to general relativity (GR) in the four-dimensional curved spacetime can be parametrized by a "tidal charge" parameter [Formula: see text]. The parameter [Formula: see text] in this model was obtained and improved as [Formula: see text] by the Earth–Mercury ranging. A new test of the DMPR model based on gravitational time advancement is proposed and investigated in this work. The advancement is a gravitational effect on the round-trip proper time duration of a photon. For ranging a distant spacecraft, it is shown that (1) the "tidal charge" parameter can make the advancement larger or smaller than that of GR, depending on the sign of [Formula: see text]; (2) the superior conjunction (SC) and the inferior conjunction (IC) are both suitable for detecting the advancement; (3) the advancement can be complementary to the classical test of Shapiro time delay for detecting the brane world; and (4) the implementation of optical clocks and planetary laser ranging will provide more insights on the brane world model in the future.
5

Li, Gang, and Xue-Mei Deng. "Testing Photons Coupled to Weyl Tensor with Gravitational Time Advancement". Communications in Theoretical Physics 70, no. 6 (December 2018): 721. http://dx.doi.org/10.1088/0253-6102/70/6/721.

6

Bhadra, Arunava, Ramil N. Izmailov, and Kamal K. Nandi. "On the Possibility of Observing Negative Shapiro-like Delay Using Michelson–Morley-Type Experiments". Universe 9, no. 6 (31 May 2023): 263. http://dx.doi.org/10.3390/universe9060263.

Abstract
The possibility of observing negative Shapiro-like gravitational time delay (or time advancement) due to the Earth’s gravity employing interferometric experiments on the Earth’s surface is discussed. It is suggested that such a measurement may be realized in the near future with the help of modern versions of Michelson–Morley-type experiments.
7

Lu, Xinyi. "Improved path planning method for unmanned aerial vehicles based on artificial potential field". Applied and Computational Engineering 10, no. 1 (25 September 2023): 64–71. http://dx.doi.org/10.54254/2755-2721/10/20230142.

Abstract
With the advancement of unmanned aerial vehicle (UAV) technology and its widespread use in many facets of manufacturing and daily life, the need for UAV mission automation is becoming more and more practical. To improve the automatic obstacle avoidance and path planning performance of UAVs, this paper proposes an optimized route planning algorithm based on the artificial potential field (APF) method that addresses the APF's typical failure modes. The method first pre-plans a trajectory for the UAV using a rapidly-exploring random tree (RRT). The pre-planned path is split into continuous particles, from which intermediate waypoints are generated. When the UAV falls into a local minimum, a nearby waypoint supplies a gravitational (attractive) force that helps it escape. At the same time, taking into account the distance from the UAV to an obstacle and the obstacle's own radius of influence, the method dynamically adjusts the gravitational and repulsive coefficients and sets up a non-gravitational zone around each obstacle, which helps the UAV evade it; a zone of limited repulsive action is also set up to reduce unnecessary turns in the trajectory and so optimize the path. Finally, since single-UAV missions are difficult in many cases, the paper also discusses cooperative flight path planning for multiple UAVs.
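
For readers who want the mechanics behind this approach, here is a minimal Python sketch of the classical APF update that the paper builds on: one attractive term pulling toward the goal and one repulsive term active inside each obstacle's influence zone. The gains k_att and k_rep, the influence radius rho0, and the step size are illustrative assumptions, not parameters from the paper.

    import numpy as np

    def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=5.0, step=0.1):
        # One gradient step on the classical artificial potential field.
        # obstacles: list of (center, radius) pairs; all gains are illustrative.
        force = k_att * (goal - pos)                     # attraction toward the goal
        for center, radius in obstacles:
            diff = pos - center
            dist = np.linalg.norm(diff)
            rho = dist - radius                          # clearance to obstacle surface
            if 0.0 < rho < rho0:                         # repulsion only inside influence zone
                # Khatib-style repulsive gradient pushing away from the obstacle
                force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / dist)
        return pos + step * force / max(np.linalg.norm(force), 1e-9)

The local-minimum trap the paper addresses arises when these two terms cancel; per the abstract, the authors' remedy is an added attractive pull toward RRT-generated intermediate waypoints rather than a modification of the field itself.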
8

Amati, L., P. T. O’Brien, D. Götz, E. Bozzo, A. Santangelo, N. Tanvir, F. Frontera et al. "The THESEUS space mission: science goals, requirements and mission concept". Experimental Astronomy 52, no. 3 (9 November 2021): 183–218. http://dx.doi.org/10.1007/s10686-021-09807-8.

Abstract
THESEUS, one of the two space mission concepts being studied by ESA as candidates for the next M5 mission within its Cosmic Vision programme, aims at fully exploiting Gamma-Ray Bursts (GRBs) to solve key questions about the early Universe, as well as becoming a cornerstone of multi-messenger and time-domain astrophysics. By investigating the first billion years of the Universe through high-redshift GRBs, THESEUS will shed light on the main open issues in modern cosmology, such as the population of primordial low mass and luminosity galaxies, sources and evolution of cosmic re-ionization, SFR and metallicity evolution up to the "cosmic dawn" and across Pop-III stars. At the same time, the mission will provide a substantial advancement of multi-messenger and time-domain astrophysics by enabling the identification, accurate localisation and study of electromagnetic counterparts to sources of gravitational waves and neutrinos, which will be routinely detected in the late '20s and early '30s by the second and third generation Gravitational Wave (GW) interferometers and future neutrino detectors, as well as of all kinds of GRBs and most classes of other X/gamma-ray transient sources. In all these respects, THESEUS will provide great synergies with future large observing facilities in the multi-messenger domain. A Guest Observer programme, comprising Target of Opportunity (ToO) observations, will expand the science return of the mission to include, e.g., solar system minor bodies, exoplanets, and AGN.
9

Nazir, Muhammad Shahzad, Ahmed N. Abdalla, Ahmed Sayed M. Metwally, Muhammad Imran, Patrizia Bocchetta, and Muhammad Sufyan Javed. "Cryogenic-Energy-Storage-Based Optimized Green Growth of an Integrated and Sustainable Energy System". Sustainability 14, no. 9 (28 April 2022): 5301. http://dx.doi.org/10.3390/su14095301.

Abstract
The advancement of cryogenic energy storage (CES) systems has enabled efficient utilization of abandoned wind and solar energy, and the system can be dispatched in the peak hours of regional power load demand to release energy. It can fill the demand gap, which is conducive to the peak regulation of the power system and can further promote the rapid development of new energy. This study optimizes the various types of energy complementary to the CES system using hybrid gravitational search algorithm-local search optimization (hGSA-LS). First, the mathematical model of the energy storage system (ESS), including the CES system, is briefly described. Second, an economic scheduling optimization model of the integrated energy system (IES) is constructed by minimizing the operating cost of the system. Third, the hGSA-LS method is proposed to solve the optimization problem. Simulations show that the hGSA-LS methodology is more efficient, and the results verify the feasibility of CES compared with traditional systems in terms of economic benefits, new energy consumption rate, primary energy saving rate, and carbon emissions under different fluctuations in energy prices. Optimization of the system operation using the proposed hGSA-LS algorithm takes 5.87 s, whereas GA, PSO, and GSA require 12.56, 10.33, and 7.95 s, respectively. Thus, the hGSA-LS algorithm shows comparatively better performance than GA, PSO, and GSA in terms of time.
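
As background for the hybrid method, a minimal sketch of one iteration of the standard gravitational search algorithm (GSA) that hGSA-LS extends is shown below; the local-search hybridization and all tuning constants from the paper are not reproduced, and the decay schedule for G is left to the caller.

    import numpy as np

    def gsa_step(X, fitness, G, V, eps=1e-9):
        # One iteration of standard GSA (Rashedi et al., 2009) for minimization.
        # X: (n, d) agent positions; fitness: (n,) objective values;
        # G: current gravitational "constant" (decays over iterations); V: velocities.
        worst, best = fitness.max(), fitness.min()
        m = (worst - fitness) / (worst - best + eps)     # better agents get heavier mass
        M = m / (m.sum() + eps)
        acc = np.zeros_like(X)
        for i in range(len(X)):
            for j in range(len(X)):
                if i != j:
                    r = np.linalg.norm(X[j] - X[i])
                    # randomized pairwise gravitational pull toward agent j
                    acc[i] += np.random.rand() * G * M[j] * (X[j] - X[i]) / (r + eps)
        V = np.random.rand(*X.shape) * V + acc           # stochastic velocity update
        return X + V, V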
10

Michelini, Maurizio. "Discussion on Fundamental Problems of Physics Hidden in Cosmology". Applied Physics Research 8, no. 5 (30 September 2016): 19. http://dx.doi.org/10.5539/apr.v8n5p19.

Abstract
Astronomers and physicists denounce difficulties in carrying out cosmological research in the midst of pseudoscientific strategies. At present, cosmological modelling follows either some revised Expanding model (i.e. the Accelerating universe) or the Static-Evolving model based on the large and old universe observed in recent decades, which convinced astronomers to abandon the Big bang and the Recession of galaxies (the Doppler interpretation of Hubble's redshift). Research on gravitation is made more difficult by the presence of some misconceptions that hindered the elaboration of a Quantum theory of gravitation. Some years ago we suggested leaving the strategy of quantising Newton's pulling gravitation, since the very nature of quanta is to generate a Quantum Pushing force between particles, due to the mutual shielding effect between two particles immersed in a uniform flux of quanta. Equating the Quantum Pushing force to the measured gravitational force gives us the flux f_o and the other constants of the micro-quanta (see Table 1). This quantum concept in gravitational theory allows one to bypass all the mathematical and physical problems that hindered the old, and not very clear, project of harmonizing Quantum Mechanics and General Relativity. Some papers published in the last decade, particularly in Applied Physics Research, showed that the flux of micro-quanta filling the universe is able to solve the following problems: 1) the generation of the Quantum gravitational force (see par. 6) and the Inertial forces (see par. 5); 2) the homogeneous cosmological redshift (par. 6.2) coming from the collisions of micro-quanta with photons; 3) the cosmic collapse threatening the old Einstein static universe is prevented by the exponential attenuation theoretically required (see par. 6.1) in the extended Quantum Pushing gravity; 4) all photons emitted from luminous stars are exponentially redshifted by collisions with micro-quanta and constitute the CMB, i.e. the millimetre waves uniformly coming from the cosmos, measured in 1964.

These four results brought a revolution in the preceding physics, as described in the Introduction. Whether the hypothesised Big bang happened or not, the enormous stellar radiation emitted in the universe must be taken into account in generating the CMB. Strangely, supporters of the Initial Explosion avoided assessing this contribution, which is in any case required because for at least 13 Giga-years all the stars of the receding galaxies continued their emissions. A calculation (Michelini, 2013) revealed that, assuming everywhere (par. 6.3) the cosmological redshift z = exp(x/L_o) - 1 (where x is the distance of the luminous source and L_o is the mean free path in space of the micro-quanta), the energy of all redshifted photons turns out to be about equal to the measured CMB.

Part 2 shows that the same analytical form of redshift, due to the same dependence on Compton's effect, was obtained (Brynjolfsson, 2005) from the theory of plasma redshift in the Large Universe, but this formulation remained useless due to the difficulty of verifying the electron plasma density in far intergalactic space. It was noticed that the new cosmological redshift gives, at small distances x << L_o, the same result z ≈ x/L_o as Hubble's redshift, valid in the Near Universe, demonstrating that the characteristic length L_o does not vary across the universe. Thanks to the measurements of the Planck satellite, L_o turned out to be about 1.3×10^26. This gave credibility to Static-evolving Cosmology, where the mean free path L_o of the micro-quanta rules not only the cosmological redshift, but also the extended Quantum Pushing Gravity and the CMB cosmic background, showing the unitary structure of space.

Part 3 shows the advancement that the micro-quanta Paradigm introduced in physics by correctly obtaining the relativistic equations of motion (in S.R. this was done on purely kinematical bases) from the dynamical balance of the momentum released to particles through collisions with the flux of micro-quanta. This advancement frees S.R. from the deadly paradoxes that came out along its development. A great advancement was also the establishment of the pulsating Quantum Pushing Gravity in substitution of Newtonian Gravity.

Part 4 shows that the classical theory of the globule "collapse" proposed by Jeans in the early 1900s does not constitute a model of Star Formation. Applying that theory to the Bok globules, discovered in 1945, gives no definite results, since the classical energy balance of the contracting globules is inadequate to obtain the contraction velocity. The Jeans hypothesis of free-fall contraction appears ridiculous. The observational evidence that Bok globules are incubators of stars needs an adequate theoretical model of Star Formation which allows one to calculate the incubation time. The gravitational accretion of galactic gas upon an extinct star was developed to explain the formation of obscure Supermasses in AGNs. The calculated incubation times turned out to be well above the Big bang age of the universe. Accretion can also be adopted for small inert masses (fragments of Supernovae, cosmic dust, planetary nebulae, etc.), giving rise to Star Formation models (par. 9 bis).

In Part 5 it is pointed out that any kind of quanta colliding with particles release some energy, ruled by Compton's equation. Calculating the energy that the micro-quanta of the pulsating Quantum Pushing Gravity due to the Hydrogen nucleus release at regular intervals on the electron, one finds an average power p_e ≈ 2×10^-54 watt. This is not a hypothesis, but a result standing on well-founded Principles of Physics.
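
For the record, the small-distance limit quoted in Part 2 is just the first-order Taylor expansion of the stated redshift law: for x << L_o,

    z = exp(x/L_o) - 1 ≈ (1 + x/L_o) - 1 = x/L_o,

which is why the formula reduces to a linear, Hubble-like relation in the Near Universe.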
11

Stratta, Giulia, and Francesco Pannarale. "Neutron Star Binary Mergers: The Legacy of GW170817 and Future Prospects". Universe 8, no. 9 (2 September 2022): 459. http://dx.doi.org/10.3390/universe8090459.

Abstract
In 2015, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) and Advanced Virgo began observing the Universe in a revolutionary way. Gravitational waves from cosmic sources were detected for the first time, confirming their existence predicted almost one century before, and also directly revealing the existence of black holes in binary systems and characterizing their properties. In 2017, a new revolution was achieved with the first observation of a binary neutron star merger, GW170817, and its associated electromagnetic emission. The combination of the information from gravitational-wave and electromagnetic radiation produced a wealth of results, still growing, spectacularly demonstrating the power of the newly born field of gravitational-wave Multi Messenger Astrophysics. We discuss the discovery of GW170817 in the context of the achievements it brought to Gamma-Ray Burst astrophysics, and we also provide a few examples of advancements in fundamental physics and cosmology. The detection rates of binary neutron star mergers expected in the next decade for third generation gravitational-wave interferometers will open the new perspective of a statistical approach to the study of these multi-messenger sources.
12

Zhong, Zimu. "Principle and State-of-art Observation Scenarios of Black Holes". Highlights in Science, Engineering and Technology 72 (15 December 2023): 129–35. http://dx.doi.org/10.54097/p5450f83.

Abstract
Tracing back to the 17th century, the concept of black holes emerged as "dark stars," celestial bodies whose escape velocity surpasses the speed of light. Schwarzschild's solution to Einstein's equations in 1916 introduced the concept of a singularity and the Schwarzschild radius, defining black holes as objects compressed within this boundary. Two main classification methods for black holes, based on mass and on charge/angular momentum, are discussed. The study explores the principles of detection: gravitational waves, generated by events like black hole mergers, and gravitational lensing, where immense mass bends space and time, distorting light from distant objects. Facilities such as LIGO, Virgo, and KAGRA are introduced, showcasing their contributions in detecting gravitational waves. Optical telescopes like the Hubble Space Telescope and the Event Horizon Telescope play a vital role in visualizing black holes. Recent results include numerous successful black hole detections, testing the validity of General Relativity, and providing precise information on black hole masses and locations. While limitations in sensitivity and resolution persist, the future outlook is promising. Advancements in observatory technology, third-generation gravitational wave detectors, and multi-messenger astronomy collaborations will deepen our understanding of black holes and their role in the cosmos, fueling ongoing exploration of these enigmatic cosmic entities.
13

Shao, Xiaoyun, Zhoujian Cao, Xilong Fan, and Shichao Wu. "Probing the Large-scale Structure of the Universe Through Gravitational Wave Observations". Research in Astronomy and Astrophysics 22, no. 1 (1 January 2022): 015006. http://dx.doi.org/10.1088/1674-4527/ac32b4.

Abstract
The improvements in the sensitivity of the gravitational wave (GW) network enable the detection of several large redshift GW sources by third-generation GW detectors. These advancements provide an independent method to probe the large-scale structure of the universe by using the clustering of the binary black holes (BBHs). The black hole catalogs are complementary to the galaxy catalogs because of large redshifts of GW events, which may imply that BBHs are a better choice than galaxies to probe the large-scale structure of the universe and cosmic evolution over a large redshift range. To probe the large-scale structure, we used the sky position of the BBHs observed by third-generation GW detectors to calculate the angular correlation function and the bias factor of the population of BBHs. This method is also statistically significant as 5000 BBHs are simulated. Moreover, for the third-generation GW detectors, we found that the bias factor can be recovered to within 33% with an observational time of ten years. This method only depends on the GW source-location posteriors; hence, it can be an independent method to reveal the formation mechanisms and origin of the BBH mergers compared to the electromagnetic method.
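
The clustering statistic computed from the BBH sky positions is the two-point angular correlation function. The sketch below uses the simple natural estimator DD/RR - 1 against a random catalog; the estimator choice is an assumption, since the abstract does not specify which one the authors used.

    import numpy as np

    def angular_correlation(theta_bins_deg, data, randoms):
        # w(theta) via the natural estimator DD/RR - 1.
        # data, randoms: (n, 3) arrays of unit vectors toward source positions.
        def pair_hist(p):
            cos = np.clip(p @ p.T, -1.0, 1.0)
            i, j = np.triu_indices(len(p), k=1)          # each unordered pair once
            theta = np.degrees(np.arccos(cos[i, j]))
            return np.histogram(theta, bins=theta_bins_deg)[0].astype(float)
        dd, rr = pair_hist(data), pair_hist(randoms)
        n_d, n_r = len(data), len(randoms)
        norm = (n_r * (n_r - 1.0)) / (n_d * (n_d - 1.0)) # account for catalog sizes
        return norm * dd / np.maximum(rr, 1.0) - 1.0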
14

Graue, A., B. G. Viksund, and B. A. Baldwin. "Reproducible Wettability Alteration of Low-Permeable Outcrop Chalk". SPE Reservoir Evaluation & Engineering 2, no. 2 (1 April 1999): 134–40. http://dx.doi.org/10.2118/55904-pa.

Abstract
A total of 41 chalk core plugs, cut with the same orientation from large blocks of outcrop chalk, have been aged in crude oil at 90 °C for different time periods, in duplicate sets. Different filtration techniques, filtration temperatures and injection temperatures were used for the crude oil. Oil recovery by spontaneous, room temperature imbibition, followed by a waterflood, was used to produce the Amott water index for cores containing aged crude oil and for cores where the aged crude oil was exchanged by fresh crude oil or decane. The main objective was to establish a reproducible method for altering the wettability of outcrop chalk. A secondary objective was to determine the mechanisms involved and the stability of the wettability change. The aging technique was found to be reproducible and could alter wettability in Rørdal chalk selectively, from strongly water-wet to nearly neutral-wet. A consistent change in wettability towards a less water-wet state with increased aging time was observed.

Introduction

Wettability is defined as the tendency of one fluid to spread on, or adhere to, a solid surface in the presence of other immiscible fluids (Ref. 1). Wettability is a major factor controlling the location, flow, and distribution of fluids in a reservoir. Hydrocarbon recovery results from a combination of capillary and viscous forces and gravity. In most chalk reservoirs spontaneous imbibition is the major recovery mechanism. This dominance of capillary forces is due to narrow pore throats, more or less water-wet conditions, and the low permeability of this rock. Thus the wettability is a very important parameter controlling the capillary pressure (Refs. 2, 3). However, wettability is not an explicit parameter in the equations that describe flow in porous media, and its effects are therefore reflected by the change in capillary pressure and relative permeability (Ref. 4). The literature (Refs. 1, 5-7) reports that polar components in the crude oil (CO), like resin and asphaltene groups, may alter the wettability of porous rock. These oil components can adsorb on the rock surface by different mechanisms including polar, acid/base, and ion-binding interactions (Ref. 7). Oil composition, surface rock mineralogy, history of the fluids exposed to the rock surface, pore roughness, water saturation, and water composition (Refs. 1-10) are all critical parameters affecting wettability alteration. However, the influence of each parameter on wettability is not well known and may well involve synergistic interactions. This study was initiated by the need to quantitatively assess the relative importance of the forces acting during waterflooding of fractured chalk blocks at different wettabilities (Ref. 11). A series of experiments was conducted at water-wet conditions which investigated spontaneous axial imbibition in stacked cores and the impact of fractures on hydrocarbon displacement mechanisms for large chalk blocks. During this study a dynamic in situ nuclear tracer imaging technique was used to track the water advancement (Refs. 12, 13). In addition, the scaling of oil recovery by spontaneous imbibition in cores of various lengths and areas of the faces exposed to brine, i.e., one- (1D), two- (2D), and three-dimensional (3D) exposure, and the gravitational effects on vertically stacked cores have been investigated (Refs. 14-18).
To extend this study to relevant reservoir conditions, experiments have been performed to alter the wettability of outcrop chalk and produce a porous rock which mimics the reservoir rock during laboratory studies (Ref. 5). In this paper we describe reproducible ways of altering the wettability in low-permeable outcrop chalk. The methods most often used to measure wettability in core plugs are the Amott test (Ref. 19) and the United States Bureau of Mines (USBM) method (Ref. 20). In this paper the Amott wettability index for water, together with the imbibition rate, will be used to characterize the wettability of the cores. Both the Amott test and the USBM method give only one parameter to characterize the average wettability of the cores, and both methods have serious weaknesses with respect to discriminating between different wettabilities in a certain range of wettability (Ref. 21). Lately a new test has been proposed where the early imbibition rate is used to determine the core wettability (Ref. 9).

Experiment

The Rørdal chalk used in this study was obtained from the Portland cement factory in Ålborg, Denmark. Core data for the outcrop chalk are found in Table 1. The rock formation is of Maastrichtian age and consists mainly of coccolith deposits (Ref. 22) with about 99% calcite and 1% quartz. The brine permeability and porosity for the Rørdal chalk cores ranged from 1-4 mD and 45-48%, respectively. All the core samples were drilled in the same direction from large chalk blocks to obtain analogous material and to ensure the same orientation relative to bedding planes or laminations. The chalk cores were dried at 90 °C for at least seven days before being used. The composition of the brine was 5 wt. % NaCl + 5 wt. % CaCl2. CaCl2 was added to the brine to minimize dissolution of the chalk. Sodium azide, 0.01 wt. %, was added to prevent bacterial growth. The density and viscosity of the brine were g/cm3 and 1.09 cP at 20 °C, respectively. The brine was filtered through a 0.45 µm paper filter membrane. The salts used in the brine were NaCl obtained from Phil Inc. with a purity of 99.5% and CaCl2 obtained from Phil Inc. with a purity of 99.5%. The sodium azide had a purity of 99.5%. The materials were used as received. The physical properties of the fluids are summarized in Table 2. A North Sea stock tank crude oil was used to alter the wettability of the Rørdal chalk cores by aging, i.e., submersing cores in the crude oil at elevated temperature for various lengths of time. The crude oil composition was measured as 0.90 wt. % asphaltenes, 53 wt. % saturated hydrocarbons, 35 wt. % aromatics, and 12 wt. % nitrogen-sulphur-oxygen containing components (NSO). The acid number was measured at 0.094 and the base number at 1.79.
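
For orientation, the Amott water index used throughout is conventionally the fraction of the total recoverable oil that is displaced by spontaneous brine imbibition alone,

    I_w = V_spontaneous / (V_spontaneous + V_forced),

where V_forced is the additional oil produced by the subsequent waterflood; I_w near 1 indicates strongly water-wet rock, and values near 0 correspond to the nearly neutral-wet state that the aging procedure produces.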
15

D'Amboise, Christopher J. L., Michael Neuhauser, Michaela Teich, Andreas Huber, Andreas Kofler, Frank Perzl, Reinhard Fromm, Karl Kleemayr, and Jan-Thomas Fischer. "Flow-Py v1.0: a customizable, open-source simulation tool to estimate runout and intensity of gravitational mass flows". Geoscientific Model Development 15, no. 6 (21 March 2022): 2423–39. http://dx.doi.org/10.5194/gmd-15-2423-2022.

Abstract
Models and simulation tools for gravitational mass flows (GMFs) such as snow avalanches, rockfall, landslides, and debris flows are important for research, education, and practice. In addition to basic simulations and classic applications (e.g., hazard zone mapping), the importance and adaptability of GMF simulation tools for new and advanced applications (e.g., automatic classification of terrain susceptible for GMF initiation or identification of forests with a protective function) are currently driving model developments. In principle, two types of modeling approaches exist: process-based physically motivated and data-based empirically motivated models. The choice for one or the other modeling approach depends on the addressed question, the availability of input data, the required accuracy of the simulation output, and the applied spatial scale. Here we present the computationally inexpensive open-source GMF simulation tool Flow-Py. Flow-Py's model equations are implemented via the Python computer language and based on geometrical relations motivated by the classical data-based runout angle concepts and path routing in three-dimensional terrain. That is, Flow-Py employs a data-based modeling approach to identify process areas and corresponding intensities of GMFs by combining models for routing and stopping, which depend on local terrain and prior movement. The only required input data are a digital elevation model, the positions of starting zones, and a minimum of four model parameters. In addition to the major advantage that the open-source code is freely available for further model development, we illustrate and discuss Flow-Py's key advancements and simulation performance by means of three computational experiments. Implementation and validation. We provide a well-organized and easily adaptable solver and present its application to GMFs on generic topographies. Performance. Flow-Py's performance and low computation time are demonstrated by applying the simulation tool to a case study of snow avalanche modeling on a regional scale. Modularity and expandability. The modular and adaptive Flow-Py development environment allows access to spatial information easily and consistently, which enables, e.g., back-tracking of GMF paths that interact with obstacles to their starting zones. The aim of this contribution is to enable the reader to reproduce and understand the basic concepts of GMF modeling at the level of (1) derivation of model equations and (2) their implementation in the Flow-Py code. Therefore, Flow-Py is an educational, innovative GMF simulation tool that can be applied for basic simulations but also for more sophisticated and custom applications such as identifying forests with a protective function or quantifying effects of forests on snow avalanches, rockfall, landslides, and debris flows.
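
The runout-angle concept at the core of Flow-Py's data-based routing and stopping can be illustrated independently of the tool. The deliberately simplified 1D sketch below is not the Flow-Py API; it merely stops a flow where the terrain profile rises above the "alpha line" drawn down from the release point, and the alpha value used is illustrative.

    import numpy as np

    def runout_stop(s, z, alpha_deg=25.0):
        # Index along a 1D terrain profile where a gravitational mass flow stops.
        # s: cumulative horizontal distance from the release point (m); z: elevation (m).
        # The alpha line drops from the release point at the runout angle alpha;
        # the flow stops at the first profile point lying above that line.
        z_alpha = z[0] - s * np.tan(np.radians(alpha_deg))
        above = np.nonzero(z > z_alpha)[0]               # terrain above the alpha line
        return above[0] if len(above) else len(z) - 1    # stop index (or end of path)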
16

Bajpai, Shrish, Siddiqui Sajida Asif, and Syed Adnan Akhtar. "Electromagnetic Education in India". Comparative Professional Pedagogy 6, no. 2 (1 June 2016): 60–66. http://dx.doi.org/10.1515/rpp-2016-0020.

Abstract
Out of the four fundamental interactions in nature, electromagnetics is one of them, along with gravitation, the strong interaction and the weak interaction. The field of electromagnetics has made much of the modern age possible. Electromagnets are common in day-to-day appliances and are becoming more conventional as the need for technology increases. Electromagnetism has played a vital role in the progress of humankind ever since it has been understood. Electromagnets are found everywhere: in speakers, doorbells, home security systems, anti-shoplifting systems, hard drives, mobiles, microphones, Maglev trains, motors and many other everyday appliances and products. Before diving into the education system, it is necessary to reiterate its importance in the various technologies that have evolved over time. Almost every domain of social life has electromagnetics playing its role, be it the mobile vibrators you depend upon, a water pump, windshield wipers during rain, the power windows of your car, or even the RFID tags that may ease your job during shopping. A flavor of electromagnetics is essential during the primary level of schooling for the student to understand its future prospects and open his/her mind to a broad ocean of ideas. Given the advancements this field can offer, studying it is highly beneficial for a developing country like India. The paper presents the scenario of electromagnetic education in India, its importance, and numerous schemes taken by the government of India to uplift and acquaint people with the importance of EM and its applications.
17

Taheri, Mohammad Ali. "The Cosmic Black Hole". Scientific Journal of Cosmointel 3, TC2EN (24 April 2024): 12–37. http://dx.doi.org/10.61450/joci.v3itc2en.176.

Abstract
Concurrent with advancements in cosmology, various theories have been proposed about the origin of the universe or its ultimate fate. These theories include the Steady State Theory, Oscillating Universe Theory or Cyclic model, Multiverse Theory, String Theory/M-Theory, Quantum Gravity or Loop Quantum Cosmology, The Big Bang Theory, Inflationary Universe Theory, and the No-Boundary Proposal. Among these, the Big Bang theory has been widely accepted by most scientists, with the Inflation theory serving as its complementary addition. On the other hand, theories such as the big rip, flat universe, and big crunch have also addressed the ultimate fate of the cosmos. However, T-Consciousness Cosmology presents a new hypothesis to explain how the universe was born from a black hole named the ‘Cosmic Black Hole,’ or the initial seed of the universe. This hypothesis not only addresses how this particular type of black hole forms, contingent on the reversion of the cosmos according to the Spherical Cosmos model but also details its fundamental differences from known black holes, referred to as “intra-cosmic black holes.” The reversion of the cosmos in the ‘Spherical Cosmos Model,’ is described through a mechanism known as space Rebound, which is distinct from the Big Crunch. The Terminal Edge of the cosmos is defined as the maximum radius at which space mesh is capable of rebound during the universe's volume increase. In the process of the rebound of space to its ultimate extent, objects within the cosmos face complete disintegration and transform into waves called absolute waves. Due to the inherent rotation of the cosmos and its reversion from the Terminal Edge, these waves collide and create gravitational centers in the central regions of the spherical cosmos, forming new types of matter known as light-dark matter, dark-dark matter, and thermal matter. Each of these mentioned materials represents a new type of matter introduced by T-Consciousness Cosmology. Additionally, according to this model, the cosmos contracts into a very tiny point with a final quench, where all types of matter and fundamental forces unite, forming a new type of absolute matter called ‘Taheri Absolute Matter’ (TAM). The spherical cosmos model also divides time into various types: ‘Longitudinal’ and ‘Transverse,’ and unlike the theory of relativity, where time is considered a dimension, it classifies it as one of the types of transverse time introduced as an entropic force. In essence, this type of force (time) acts against the force of gravity and, by disintegrating all types of objects from fundamental to large scale, acts as an agent of stress or tension release from the space mesh.
18

Ghosh, Samrat, Arunava Bhadra, and Amitabha Mukhopadhyay. "Probing dark matter and dark energy through gravitational time advancement". General Relativity and Gravitation 51, no. 4 (April 2019). http://dx.doi.org/10.1007/s10714-019-2538-x.

19

Ghosh, Samrat, and Arunava Bhadra. "Influences of dark energy and dark matter on gravitational time advancement". European Physical Journal C 75, no. 10 (October 2015). http://dx.doi.org/10.1140/epjc/s10052-015-3719-8.

20

Tuleganova, G. Y., R. Kh Karimov, R. N. Izmailov, A. A. Potapov, A. Bhadra, and K. K. Nandi. "Gravitational time advancement effect in Bumblebee gravity for Earth bound systems". European Physical Journal Plus 138, no. 1 (28 January 2023). http://dx.doi.org/10.1140/epjp/s13360-023-03713-y.

21

Allred, Aaron R., Victoria G. Kravets, Nisar Ahmed, and Torin K. Clark. "Modeling orientation perception adaptation to altered gravity environments with memory of past sensorimotor states". Frontiers in Neural Circuits 17 (20 July 2023). http://dx.doi.org/10.3389/fncir.2023.1190582.

Abstract
Transitioning between gravitational environments results in a central reinterpretation of sensory information, producing an adapted sensorimotor state suitable for motor actions and perceptions in the new environment. Critically, this central adaptation is not instantaneous, and complete adaptation may require weeks of prolonged exposure to novel environments. To mitigate risks associated with the lagging time course of adaptation (e.g., spatial orientation misperceptions, alterations in locomotor and postural control, and motion sickness), it is critical that we better understand sensorimotor states during adaptation. Recently, efforts have emerged to model human perception of orientation and self-motion during sensorimotor adaptation to new gravity stimuli. While these nascent computational frameworks are well suited for modeling exposure to novel gravitational stimuli, they have yet to distinguish how the central nervous system (CNS) reinterprets sensory information from familiar environmental stimuli (i.e., readaptation). Here, we present a theoretical framework and resulting computational model of vestibular adaptation to gravity transitions which captures the role of implicit memory. This advancement enables faster readaptation to familiar gravitational stimuli, which has been observed in repeat flyers, by considering vestibular signals dependent on the new gravity environment, through Bayesian inference. The evolution and weighting of hypotheses considered by the CNS is modeled via a Rao-Blackwellized particle filter algorithm. Sensorimotor adaptation learning is facilitated by retaining a memory of past harmonious states, represented by a conditional state transition probability density function, which allows the model to consider previously experienced gravity levels (while also dynamically learning new states) when formulating new alternative hypotheses of gravity. In order to demonstrate our theoretical framework and motivate future experiments, we perform a variety of simulations. These simulations demonstrate the effectiveness of this model and its potential to advance our understanding of transitory states during which central reinterpretation occurs, ultimately mitigating the risks associated with the lagging time course of adaptation to gravitational environments.
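
As a toy illustration of the hypothesis-weighting idea (not the authors' Rao-Blackwellized particle filter), candidate gravity levels retained in memory can be re-weighted by Bayes' rule against a noisy vertical-acceleration measurement; the sensor noise sigma and the measurement value here are assumptions.

    import numpy as np

    def update_gravity_hypotheses(g_levels, priors, measurement, sigma=0.5):
        # Bayesian reweighting of candidate gravity levels (m/s^2) given one
        # noisy measurement, assuming Gaussian sensor noise with std sigma.
        likelihood = np.exp(-0.5 * ((measurement - g_levels) / sigma) ** 2)
        posterior = priors * likelihood
        return posterior / posterior.sum()

    # Earth, Moon, Mars, and microgravity as remembered past states
    g_levels = np.array([9.81, 1.62, 3.71, 0.0])
    priors = np.full(4, 0.25)
    print(update_gravity_hypotheses(g_levels, priors, measurement=3.5))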
22

Gordon, Jacob, Martin Frank, and Chishimba Nathan Mowa. "The Mechanobiology of Cervical Remodeling: Characterization of Select Cytoskeletal and Rho-GTPase Signaling Factors in Mice Cervical Tissue During Pregnancy". FASEB Journal 31, S1 (April 2017). http://dx.doi.org/10.1096/fasebj.31.1_supplement.881.1.

Abstract
During pregnancy the developing fetus exerts an increasing gravitational force as it grows in size. However, the effects and underlying molecular mechanism of this process on the cervix are not well known. Our earlier proteomics studies have provided evidence of the presence of, and changes in the levels of, cytoskeletal mechano-mediators with advancement of pregnancy. In the present study, we use real-time PCR analysis to further characterize the expression of select cytoskeletal genes (Filamin A, Gelsolin, Vimentin, Actinin 1, Caveolin 1, Transgelin, Keratin 1, Profilin 1 and Focal Adhesion Kinase) at day 0, day 11 and day 17 to verify findings of the earlier proteomics study. Additionally, we also investigate the expression levels of Rho-GTPases (RhoA, RhoB and Cdc42), molecules that play cell-signaling roles upstream of proliferation and cytoskeletal protein organization. The mRNA levels of all 12 genes examined increased over the course of pregnancy, except three that decreased. We conclude that the growing fetus influences cervical remodeling, and likely the timing of birth, by manipulating mechano-transduction during pregnancy. Support or Funding Information: Office of Students Research, Appalachian State University.
23

Mendonca da Silveira, Francisco Eugenio. "Adiabatic collapse of non-homogeneous self-gravitating gas cloud". Europhysics Letters, 25 October 2023. http://dx.doi.org/10.1209/0295-5075/ad06ee.

Abstract
In this letter, we find the critical mass of a self-gravitating, spherically symmetric gas cloud, above which the fluid within the bubble collapses. Our analysis starts from a non-homogeneous equilibrium density satisfying the Boltzmann relation. A time scale is defined in terms of the adiabatic index of the gas. Subsequently, a sinusoidal perturbation around equilibrium is considered, leading to a dispersion relation of frequency with wavelength which does not depend on geometrical curvature effects. Such a formulation clearly shows that the collapse occurs much faster than predicted by the well-known Jeans approach. The equilibrium profiles of the density, gravitational field, and potential are obtained as functions of the spherical radius coordinate at marginal instability. Since our theory captures the essential physics of gravitational collapse, it can be used as the starting point for several advancements in galactic dynamics.
24

Ramaswamy, Shreyas, and Mark R. Giovinazzi. "The journey to Proxima Centauri b". Journal of Emerging Investigators, 2024. http://dx.doi.org/10.59720/23-081.

Abstract
In recent years, exoplanets have become one of the most exciting topics in astronomy. While sending humans to a planet light-years away from Earth remains science fiction, recent technological advancements have made the prospect of sending a small probe to an exoplanet not only fascinating but also realistic. In addition to time and ample resources, developing such a probe and guiding it to a world beyond our solar system would require a deep understanding of gravitational dynamics. Here, we considered a simulated journey to Proxima Centauri b to determine the feasibility that a rocket could travel there in the timescale of a human lifetime. We hypothesized that a rocket mission from Earth to Proxima Centauri b (including the sending of data from the rocket back to Earth) could be carried out within a human lifetime and that the gravitational influence of celestial objects would be minimal in affecting the trajectory. We found that a rocket traveling at an average speed of 3 × 10^7 m/s can reach Proxima Centauri b in as little as 50 years, including acceleration and deceleration times. We used VPython to visually simulate the entire trajectory. Our results demonstrated that interstellar travel to other exoplanets is feasible with sufficiently powerful rockets.
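
As a sanity check on the quoted figure: Proxima Centauri is about 4.24 light-years away, roughly 4.0 × 10^16 m, so at the stated average speed the cruise time alone is

    t = d / v ≈ (4.0 × 10^16 m) / (3 × 10^7 m/s) ≈ 1.3 × 10^9 s ≈ 42 years,

consistent with the paper's figure of about 50 years once the acceleration and deceleration phases are included.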
25

Burns, Eric. "Neutron star mergers and how to study them". Living Reviews in Relativity 23, no. 1 (26 November 2020). http://dx.doi.org/10.1007/s41114-020-00028-7.

Abstract
Neutron star mergers are the canonical multimessenger events: they have been observed through photons for half a century, gravitational waves since 2017, and are likely to be sources of neutrinos and cosmic rays. Studies of these events enable unique insights into astrophysics, particles in the ultrarelativistic regime, the heavy element enrichment history through cosmic time, cosmology, dense matter, and fundamental physics. Uncovering this science requires vast observational resources, unparalleled coordination, and advancements in theory and simulation, which are constrained by our current understanding of nuclear, atomic, and astroparticle physics. This review begins with a summary of our current knowledge of these events, the expected observational signatures, and estimated detection rates for the next decade. I then present the key observations necessary to advance our understanding of these sources, followed by the broad science this enables. I close with a discussion on the necessary future capabilities to fully utilize these enigmatic sources to understand our universe.
26

Bhagyashree Patil. "AI in Astronomy". International Journal of Advanced Research in Science, Communication and Technology, 24 July 2023, 476–85. http://dx.doi.org/10.48175/ijarsct-12167.

Abstract
This research paper investigates the transformative influence of Artificial Intelligence (AI) on the field of astronomy, revolutionizing data analysis, celestial object classification, exoplanet discovery, and real-time observations. Over the last decade, astronomers have harnessed the power of AI techniques, including machine learning, deep learning, and data mining, to explore the cosmos in unprecedented ways. The first section of this paper examines how AI has significantly enhanced data processing and analysis capabilities in astronomy. AI algorithms efficiently handle vast amounts of observational data from ground-based telescopes and space missions, enabling astronomers to identify celestial objects and detect subtle signals concealed within complex datasets. Additionally, the integration of AI with adaptive optics systems has improved the quality of observations, enhancing the study of distant galaxies and exoplanets. Moving on, the paper discusses how AI-driven classification models have played a crucial role in categorizing stars, galaxies, and other astronomical entities based on their unique characteristics. These advancements expedite the cataloguing process and enable the identification of rare and novel astronomical phenomena, facilitating comprehensive explorations of the universe. Furthermore, the research investigates how AI contributes to the discovery of exoplanets and the understanding of their potential habitability. AI-based algorithms efficiently analyse light curves and radial velocity data, leading to the detection of exoplanets from extensive surveys. Moreover, AI-driven atmospheric modelling provides valuable insights into the habitability potential of these distant worlds, expanding the search for extraterrestrial life. Finally, the paper turns to AI's role in the real-time discovery of cosmic events such as supernovae, gamma-ray bursts, and gravitational wave sources.
27

"Manipulating the McGinty Equation to Create Stable Micro-Wormholes". International Journal of Theoretical & Computational Physics, 6 de marzo de 2024. http://dx.doi.org/10.47485/2767-3901.1038.

Abstract
Fractal wormholes represent a novel theoretical concept at the intersection of classical physics and quantum mechanics. This article introduces the ΨFractal equation, a theoretical construct that seeks to describe these hypothetical entities. The equation integrates fundamental constants with parameters like mass, charge, and fractal dimension, suggesting intriguing properties and interactions with the cosmos. The McGinty equation, Ψ(x,t) = ΨQFT(x,t) + ΨFractal(x,t,D,m,q,s), can be used to help explain the fractal structure of space-time. The ΨFractal(x,t,D,m,q,s) term in the equation represents the fractal properties of space-time, where D represents the fractal dimension, m represents the mass of the system, q represents the charge, and s represents the spin. The fractal dimension D, for example, can be used to control the size and stability of the wormhole: a higher fractal dimension value would result in a larger and more stable wormhole, while a lower value would result in a smaller and less stable wormhole. The mass of the system, represented by the variable m, can also play a role: a higher mass would result in a more stable wormhole, while a lower mass would result in a less stable one. The charge and spin of the system, represented by the variables q and s, respectively, can likewise affect the stability of the wormhole, with higher values of each giving a more stable wormhole. By manipulating the values of these variables, scientists could create a stable micro-wormhole in a controlled environment. This would lead to advancements in fractal engine technology, which in turn would lead to the development of practical applications for faster-than-light travel, such as faster communication and interstellar exploration. For example, consider the equation for a fractal wormhole, ΨFractal(x,t,D,m,q,s) = [(G * m^2 * D) / (h * q * s)] * e^-(D * m * x^2), where G is the gravitational constant, h is the Planck constant, and x is distance. By manipulating the values of D, m, q and s, scientists can control the size and stability of the wormhole, which in turn can be used for faster-than-light communication and interstellar travel.
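
Taken purely at face value, the quoted closed form can be evaluated numerically. The sketch below plugs in standard values for G and h and arbitrary illustrative values for D, m, q, s, and x; it makes no claim about the physical validity of the expression.

    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    h = 6.626e-34   # Planck constant, J s

    def psi_fractal(x, D, m, q, s):
        # Evaluates the abstract's expression exactly as quoted:
        # [(G * m^2 * D) / (h * q * s)] * e^-(D * m * x^2); all inputs illustrative.
        return (G * m**2 * D) / (h * q * s) * math.exp(-D * m * x**2)

    # Per the abstract's claim, a higher D raises the prefactor but also
    # steepens the spatial decay.
    for D in (1.5, 2.5):
        print(D, psi_fractal(x=1e-3, D=D, m=1e-6, q=1.0, s=0.5))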
28

Gerhard, David. "Three Degrees of “G”s: How an Airbag Deployment Sensor Transformed Video Games, Exercise, and Dance". M/C Journal 16, no. 6 (7 November 2013). http://dx.doi.org/10.5204/mcj.742.

Abstract
Introduction

The accelerometer seems, at first, both advanced and dated, both too complex and not complex enough. It sits in our video game controllers and our smartphones allowing us to move beyond mere button presses into immersive experiences where the motion of the hand is directly translated into the motion on the screen, where our flesh is transformed into the flesh of a superhero. Or at least that was the promise in 2005. Since then, motion control has moved from a promised revitalization of the video game industry to a not-quite-good-enough gimmick that all games use but none use well. Rogers describes the diffusion of innovation, as an invention or technology comes to market, in five phases: First, innovators will take risks with a new invention. Second, early adopters will establish a market and lead opinion. Third, the early majority shows that the product has wide appeal and application. Fourth, the late majority adopt the technology only after their skepticism has been allayed. Finally the laggards adopt the technology only when no other options are present (62). Not every technology makes it through the diffusion, however, and there are many who have never warmed to the accelerometer-controlled video game. Once an innovation has moved into the mainstream, additional waves of innovation may take place, when innovators or early adopters may find new uses for existing technology, and bring these uses into the majority. This is the case with the accelerometer that began as an airbag trigger and today is used for measuring and augmenting human motion, from dance to health (Walter 84). In many ways, gestural control of video games, an augmentation technology, was an interlude in the advancement of motion control.

History

In the early 1920s, bulky proofs-of-concept were produced that manipulated electrical voltage levels based on the movement of a probe, many related to early pressure or force sensors. The relationships between pressure, force, velocity and acceleration are well understood, but development of a tool that could measure one and infer the others was a many-fronted activity. Each of these individual sensors has its own specific application and many are still in use today, as pressure triggers, reaction devices, or other sensor-based interactivity, such as video games (Latulipe et al. 2995) and dance (Chu et al. 184). Over the years, the probes and devices became smaller and more accurate, and eventually migrated to the semiconductor, allowing the measurement of acceleration to take place within an almost inconsequential form-factor. Today, accelerometer chips are in many consumer devices and athletes wear battery-powered wireless accelerometer bracelets that report their every movement in real-time, a concept unimaginable only 20 years ago. One of the significant initial uses for accelerometers was as a sensor for the deployment of airbags in automobiles (Varat and Husher 1). The sensor was placed in the front bumper, detecting quick changes in speed that would indicate a crash. The system was a significant advance in the safety of automobiles, and followed Rogers’ diffusion through to the point where all new cars have airbags as a standard component. Airbags, and the accelerometers which allow them to function fast enough to save lives, are a ubiquitous, commoditized technology that most people take for granted, and served as the primary motivating factor for the mass-production of silicon-based accelerometer chips.
On 14 September 2005, a device was introduced which would fundamentally alter the principal market for accelerometer microchips. The accelerometer was the ADXL335, a small, low-power, 3-axis device capable of measuring up to 3g (1g is the acceleration due to gravity), and the device that used this accelerometer was the Wii remote, also called the Wiimote. Developed by Nintendo and its holding companies, the Wii remote was to be a defining feature of Nintendo’s 7th-generation video game console, in direct competition with the Xbox 360 and the Playstation 3. The Wii remote was so successful that both Microsoft and Sony added motion control to their platforms, in the form of the accelerometer-based “dual shock” controller for the Playstation, and later the Playstation Move controller; as well as an integrated accelerometer in the Xbox 360 controller and the later release of the Microsoft Kinect 3D motion sensing camera. Simultaneously, computer manufacturing companies saw a different, more pedantic use of the accelerometer. The primary storage medium in most computers today is the Hard Disk Drive (HDD), a set of spinning platters of electro-magnetically stored information. Much like a record player, the HDD contains a “head” which sweeps back and forth across the platter, reading and writing data. As computers changed from desktops to laptops, people moved their computers more often, and a problem arose. If the HDD inside a laptop was active when the laptop was moved, the read head might touch the surface of the disk, damaging the HDD and destroying information. Two solutions were implemented: vibration dampening in the manufacturing process, and the use of an accelerometer to detect motion. When the laptop is bumped, or dropped, the hard disk will sense the motion and immediately park the head, saving the disk and the valuable data inside. As a consequence of laptop computers and Wii remotes using accelerometers, the market for these devices began to swing from their use within car airbag systems toward their use in computer systems. And with an accelerometer in every computer, it wasn’t long before clever programmers began to make use of the information coming from the accelerometer for more than just protecting the hard drive. Programs began to appear that would use the accelerometer within a laptop to “lock” it when the user was away, invoking a loud noise like a car alarm to alert passers-by to any potential theft. Other programmers began to use the accelerometer as a gaming input, and this was the beginning of gesture control and the augmentation of human motion. Like laptops, most smartphones and tablets today have accelerometers included among their sensor suite (Brezmes et al. 796). These accelerometers are strictly a user-interface tool, allowing the phone to re-orient its interface based on how the user is holding it, and allowing the user to play games and track health information using the phone. Many other consumer electronic devices use accelerometers, such as digital cameras for image stabilization and landscape/portrait orientation. Allowing a device to know its relative orientation and motion provides a wide range of augmentation possibilities.

The Language of Measuring Motion

When studying accelerometers, their function, and applications, a critical first step is to examine the language used to describe these devices. As the name implies, the accelerometer is a device which measures acceleration; however, our everyday connotation of this term is problematic at best.
The Language of Measuring Motion

When studying accelerometers, their function, and their applications, a critical first step is to examine the language used to describe these devices. As the name implies, an accelerometer is a device that measures acceleration; however, our everyday connotation of this term is problematic at best. In colloquial language, we say "accelerate" when we mean "speed up", but this is, in fact, two connotations removed from the physical property being measured by the device, and we must unwrap these layers of meaning before we can understand what is being measured.

Physicists use the term "accelerate" to mean any change in velocity. It is worth reminding ourselves that velocity, to the physicist, is actually a pair of quantities: a speed coupled with a direction. Given this definition, when an object changes velocity (accelerates), it can be changing its speed, its direction, or both. So a car can be said to be accelerating when speeding up, slowing down, or even turning while maintaining a constant speed. This is why the accelerometer could be used as an airbag sensor in the first place: the airbags should deploy when a car suddenly changes velocity in any direction, whether getting faster (being hit from behind), getting slower (a front-impact crash), or changing direction (being hit from the side). It is because of this ability to measure changes in velocity that accelerometers have come into common usage as laptop drop sensors and video game motion controllers.

But even this understanding of accelerometers is incomplete. Because of the way accelerometers are constructed, they actually measure "proper acceleration" within the context of a relativistic frame of reference. Discussing general relativity is beyond the scope of this paper; it is sufficient to describe such a frame of reference as one in which no forces are felt. A familiar example is being in orbit around the planet, where astronauts (and their equipment) float freely in space. A state of free fall is one in which no forces are felt, and this is the only situation in which an accelerometer reads zero acceleration. Since most of us are not in free fall most of the time, the accelerometers in devices in normal use never read zero proper acceleration, even when apparently sitting still. This is, of course, because of the force due to gravity: an accelerometer sitting on a table experiences 1g of force from the table, acting against the gravitational acceleration. This non-zero reading for a stationary object is the reason that accelerometers can serve a second (and, today, much more common) use: measuring orientation with respect to gravity.

Gravity and Tilt

Accelerometers typically measure forces with respect to three linear dimensions, labeled x, y, and z. These three directions orient along the axes of the accelerometer chip itself, with x and y normally running along the long faces of the device and z often pointing through the face of the device. Relative orientation within a gravity field can easily be inferred, assuming that the only force acting on the device is gravity; in that case, the single force is distributed among the three axes according to the orientation of the device. This is how smartphones and video game controllers are able to offer "tilt" control. When the device is held in a natural position, the software takes the relative values on all three axes as a reference point. When the user tilts the device, the new direction of the gravitational acceleration is compared to the reference value and used to infer the tilt. This can be done hundreds of times a second and can be used to control and augment any aspect of the user experience. If, however, gravity is not the only force present, it becomes more difficult to infer orientation.
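Under that assumption, gravity alone, recovering tilt from a single reading is straightforward trigonometry. The following Python sketch is illustrative only; the function name is invented for this example, and axis conventions for pitch and roll vary between devices.

import math

def tilt_from_gravity(ax, ay, az):
    # Infer pitch and roll (in degrees) from one three-axis reading, in g,
    # assuming gravity is the only force acting on the device.
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax**2 + az**2)))
    return pitch, roll

# Lying flat, gravity falls entirely on the z-axis: no tilt.
print(tilt_from_gravity(0.0, 0.0, 1.0))      # (0.0, 0.0)

# Tilted 45 degrees, gravity splits between the x- and z-axes.
print(tilt_from_gravity(0.707, 0.0, 0.707))  # (~45.0, 0.0)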
Another common use for accelerometers is to measure physical activity, such as counting walking steps. In this case, it is the forces on the accelerometer from each footfall that are interpreted to measure fitness features. Tilt is unreliable in this circumstance because the accelerometer measures gravity and the forces of the footfall together, and it is impossible to separate the two forces from a single measurement.

Velocity and Position

A second common assumption about accelerometers is that, since they measure acceleration (the rate of change of velocity), it should be possible to infer velocity. If the device begins at rest, then any measured acceleration can be interpreted as a change to the velocity in some direction, and the new velocity inferred. Although this is theoretically possible, real-world factors prevent it from being realized. First, the assumption of beginning from a state of rest is not always reasonable, and if we do not know whether the device is moving, knowing its acceleration at any moment will not help us determine its new speed or position. The most important real-world problem, however, is that accelerometers typically show small variations even when the object is at rest, because of noise and small biases in the sensor and in the conversion of its signal into digital readings. In normal operation these small errors are ignored, but when trying to infer velocity or position they quickly accumulate to the point where any inferred velocity or position is unreliable.

A common solution to these problems is the combination of devices. Many new smartphones combine an accelerometer with a gyroscope (a device that measures changes in rotational inertia) to form a sensing system known as an IMU (inertial measurement unit), which makes the readings from each more reliable. In this arrangement, the gyroscope can be used to measure tilt directly (instead of inferring it from gravity), and this tilt information can be subtracted from the accelerometer reading to separate the motion of the device from the force of gravity.
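The scale of the accumulation problem is easy to demonstrate. In the following hypothetical Python simulation (the 0.01g bias and 100Hz sample rate are assumptions chosen for illustration), readings from a perfectly stationary device are double-integrated into an inferred position:

G = 9.81         # m/s^2 per g
BIAS = 0.01 * G  # small constant sensor error, in m/s^2
DT = 0.01        # sample period of a 100 Hz sensor, in s

velocity = 0.0
position = 0.0
for step in range(1, 6001):      # one minute of samples
    measured = 0.0 + BIAS        # the true acceleration is zero
    velocity += measured * DT    # first integration: velocity
    position += velocity * DT    # second integration: position
    if step % 1000 == 0:
        print(f"t = {step * DT:4.0f} s, inferred position = {position:7.2f} m")

After sixty seconds, a bias of only 0.01g has carried the "stationary" device more than 175 m from where it started, because the position error grows with the square of the elapsed time.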
Augmentation Applications in Health, Gaming, and Art

Accelerometer-based devices have been used extensively in healthcare (Ward et al. 582), either using the accelerometer within a smartphone worn in the pocket (Yoshioka et al. 502) or using a standalone accelerometer device such as a wristband or shoe tab (Paradiso and Hu 165). In many cases, these devices have been used to measure specific activities such as swimming, gait (Henriksen et al. 288), and muscular activity (Thompson and Bemben 897), as well as general activity for tracking health (Troiano et al. 181), both in children (Stone et al. 136) and in the elderly (Davis and Fox 581). These simple measurements are the first step in allowing athletes to modify their performance based on past activity. In the past, athletes would pore over recorded video to analyze and improve their performance; with accelerometer devices, they can receive feedback in real time and modify their own behaviour based on these measurements. This augmentation is a competitive advantage, though it could be seen as unfair given the current unequal access to computer and electronic technology, i.e. the digital divide (Buente and Robbin 1743).

When video games were augmented with motion controls, many assumed that this would have a positive impact on health. Physical activity in children is a common concern (Treuth et al. 1259), and there was hope that if children had to move to play games, an activity once considered a problem for health could be turned into an opportunity (Mellecker et al. 343). Unfortunately, the impact of motion-controlled video games on children's activity has been disappointing: although fitness games have been created, it is relatively easy to figure out how to activate the controls with the least possible motion, thereby nullifying any potential benefit.

One of the most interesting applications of accelerometers, in the context of this paper, is their application to dance-based video games (Brezmes et al. 796). In these systems, participants wear devices originally intended for health tracking in order to increase the sensitivity and control options for dance. This has evolved both from the use of accelerometers for gestural control in video games and from their use in measuring and augmenting sport. Researchers and artists have also recently used accelerometers to augment dance systems in many ways (Latulipe et al. 2995), including by combining multiple sensors (Yang et al. 121), as discussed above.

Conclusions

Although more and more people are using accelerometers in their research and art practice, there remains a lack of widespread knowledge about how the devices actually work. This can be seen in the many art installations and sports research studies that do not take full advantage of the capabilities of the accelerometer, or that infer information or data that is unreliable because of the way accelerometers behave. This lack of understanding also limits wider utilization of this powerful device, specifically in the context of augmentation tools. Being able to detect, analyze, and interpret the motion of a body part has significant applications in augmentation that are only starting to be realized. The history of accelerometers is interesting and varied, and it is worthwhile, when exploring new ideas for applications of accelerometers, to be fully aware of previous uses, current trends, and technical limitations. It is clear that applications of accelerometers to the measurement of human motion are increasing, and that many new opportunities exist, especially in the combination of sensors and new software techniques. The real novelty, however, will come from researchers and artists using accelerometers and sensors in novel and unusual ways.

References

Brezmes, Tomas, Juan-Luis Gorricho, and Josep Cotrina. “Activity Recognition from Accelerometer Data on a Mobile Phone.” In Distributed Computing, Artificial Intelligence, Bioinformatics, Soft Computing, and Ambient Assisted Living. Springer, 2009.

Buente, Wayne, and Alice Robbin. “Trends in Internet Information Behavior, 2000-2004.” Journal of the American Society for Information Science and Technology 59.11 (2008).

Chu, Narisa N.Y., Chang-Ming Yang, and Chih-Chung Wu. “Game Interface Using Digital Textile Sensors, Accelerometer and Gyroscope.” IEEE Transactions on Consumer Electronics 58.2 (2012): 184-189.

Davis, Mark G., and Kenneth R. Fox. “Physical Activity Patterns Assessed by Accelerometry in Older People.” European Journal of Applied Physiology 100.5 (2007): 581-589.

Hagstromer, Maria, Pekka Oja, and Michael Sjostrom. “Physical Activity and Inactivity in an Adult Population Assessed by Accelerometry.” Medicine and Science in Sports and Exercise 39.9 (2007): 1502-08.

Henriksen, Marius, H. Lund, R. Moe-Nilssen, H. Bliddal, and B. Danneskiod-Samsøe. “Test–Retest Reliability of Trunk Accelerometric Gait Analysis.” Gait & Posture 19.3 (2004): 288-297.
Latulipe, Celine, David Wilson, Sybil Huskey, Melissa Word, Arthur Carroll, Erin Carroll, Berto Gonzalez, Vikash Singh, Mike Wirth, and Danielle Lottridge. “Exploring the Design Space in Technology-Augmented Dance.” In CHI’10 Extended Abstracts on Human Factors in Computing Systems. ACM, 2010.

Mellecker, Robin R., Lorraine Lanningham-Foster, James A. Levine, and Alison M. McManus. “Energy Intake during Activity Enhanced Video Game Play.” Appetite 55.2 (2010): 343-347.

Paradiso, Joseph A., and Eric Hu. “Expressive Footwear for Computer-Augmented Dance Performance.” In First International Symposium on Wearable Computers. IEEE, 1997.

Rogers, Everett M. Diffusion of Innovations. New York: Free Press of Glencoe, 1962.

Stone, Michelle R., Ann V. Rowlands, and Roger G. Eston. “Relationships between Accelerometer-Assessed Physical Activity and Health in Children: Impact of the Activity-Intensity Classification Method.” The Free Library 1 Mar. 2009.

Thompson, Christian J., and Michael G. Bemben. “Reliability and Comparability of the Accelerometer as a Measure of Muscular Power.” Medicine and Science in Sports and Exercise 31.6 (1999): 897-902.

Treuth, Margarita S., Kathryn Schmitz, Diane J. Catellier, Robert G. McMurray, David M. Murray, M. Joao Almeida, Scott Going, James E. Norman, and Russell Pate. “Defining Accelerometer Thresholds for Activity Intensities in Adolescent Girls.” Medicine and Science in Sports and Exercise 36.7 (2004): 1259-1266.

Troiano, Richard P., David Berrigan, Kevin W. Dodd, Louise C. Masse, Timothy Tilert, Margaret McDowell, et al. “Physical Activity in the United States Measured by Accelerometer.” Medicine and Science in Sports and Exercise 40.1 (2008): 181-88.

Varat, Michael S., and Stein E. Husher. “Vehicle Impact Response Analysis through the Use of Accelerometer Data.” In SAE World Congress, 2000.

Walter, Patrick L. “The History of the Accelerometer.” Sound and Vibration (Mar. 1997): 16-22.

Ward, Dianne S., Kelly R. Evenson, Amber Vaughn, Anne Brown Rodgers, Richard P. Troiano, et al. “Accelerometer Use in Physical Activity: Best Practices and Research Recommendations.” Medicine and Science in Sports and Exercise 37.11 (2005): S582-8.

Yang, Chang-Ming, Jwu-Sheng Hu, Ching-Wen Yang, Chih-Chung Wu, and Narisa Chu. “Dancing Game by Digital Textile Sensor, Accelerometer and Gyroscope.” In IEEE International Games Innovation Conference. IEEE, 2011.

Yoshioka, M., M. Ayabe, T. Yahiro, H. Higuchi, Y. Higaki, J. St-Amand, H. Miyazaki, Y. Yoshitake, M. Shindo, and H. Tanaka. “Long-Period Accelerometer Monitoring Shows the Role of Physical Activity in Overweight and Obesity.” International Journal of Obesity 29.5 (2005): 502-508.