
Journal articles on the topic 'Geo-Located time series'

Consult the top 42 journal articles for your research on the topic 'Geo-Located time series.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Guzmán-Vargas, L., A. Ramírez-Rojas, and F. Angulo-Brown. "Multiscale entropy analysis of electroseismic time series." Natural Hazards and Earth System Sciences 8, no. 4 (August 15, 2008): 855–60. http://dx.doi.org/10.5194/nhess-8-855-2008.

Abstract:
In this work we use the multiscale entropy method to analyse the variability of geo-electric time series monitored at two sites located in Mexico. Our analysis considers the period from January 1995 to December 1995. We systematically calculate the sample entropy of the electroseismic time series. Important differences in the entropy profile across several time scales are observed in records from the same station. In particular, complex behaviour is observed in the vicinity of an M=7.4 earthquake that occurred on 14 September 1995. In addition, we compare the changes in the entropy of the original data with those of their shuffled counterparts.
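For readers unfamiliar with the technique, the sketch below illustrates the general multiscale entropy procedure the abstract refers to: coarse-grain the series at successive scales, compute the sample entropy at each scale, and compare the original series with a shuffled surrogate. It is a minimal, generic implementation with assumed parameters (m = 2, r = 0.15 of the standard deviation) and a synthetic stand-in series, not the authors' code.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy SampEn(m, r) of a 1-D series, with r given as a
    fraction of the series' standard deviation (a common convention)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        # Build all templates of the given length and count pairs whose
        # Chebyshev distance stays below the tolerance (no self-matches).
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < tol)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.15):
    """Coarse-grain the series at each scale (non-overlapping averages)
    and return the sample-entropy profile across scales."""
    x = np.asarray(x, dtype=float)
    profile = []
    for tau in scales:
        n_windows = len(x) // tau
        coarse = x[:n_windows * tau].reshape(n_windows, tau).mean(axis=1)
        profile.append(sample_entropy(coarse, m, r))
    return np.array(profile)

# Example: compare a correlated signal with its shuffled version,
# as the abstract does for the geo-electric records.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=3000))   # stand-in for a geo-electric series
shuffled = rng.permutation(signal)
print(multiscale_entropy(signal))
print(multiscale_entropy(shuffled))
```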
2

Rödel, R., and T. Hoffmann. "Quantifying the efficiency of river regulation." Advances in Geosciences 5 (December 16, 2005): 75–82. http://dx.doi.org/10.5194/adgeo-5-75-2005.

Abstract:
Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff time series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using the linear single-storage model. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for the regulated long-term series of the Luleälven in northern Sweden.
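As a rough guide to the storage argument, the textbook linear single-storage (linear reservoir) relations are sketched below; the authors' exact formulation may differ in detail.

```latex
S = kQ, \qquad \frac{\mathrm{d}S}{\mathrm{d}t} = I - Q
\;\xrightarrow{\,I \approx 0\,}\; Q(t) = Q_0\, e^{-t/k},
\qquad \Delta S \approx k\,\bigl(Q_1 - Q_2\bigr)
```

Here k is the storage (recession) constant, obtainable from the recession limb of the hydrograph, Q_0 is the discharge at the start of the recession, and Q_1 and Q_2 characterise the two flow regimes between which the emptying volume ΔS is evaluated.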
3

O'Dea, Annika, Katherine L. Brodie, and Preston Hartzell. "Continuous Coastal Monitoring with an Automated Terrestrial Lidar Scanner." Journal of Marine Science and Engineering 7, no. 2 (February 7, 2019): 37. http://dx.doi.org/10.3390/jmse7020037.

Abstract:
This paper details the collection, geo-referencing, and data processing algorithms for a fully-automated, permanently deployed terrestrial lidar system for coastal monitoring. The lidar is fixed on a 4-m structure located on a shore-backing dune in Duck, North Carolina. Each hour, the lidar collects a three-dimensional framescan of the nearshore region along with a 30-min two-dimensional linescan time series oriented directly offshore, with a linescan repetition rate of approximately 7 Hz. The data are geo-referenced each hour using a rigorous co-registration process that fits 11 fixed planes to a baseline scan to account for small platform movements, and the residual errors from the fit are used to assess the accuracy of the rectification. This process decreased the mean error (defined as the magnitude of the offset in three planes) over a two-year period by 24.41 cm relative to using a fixed rectification matrix. The automated data processing algorithm then filters and grids the data to generate a dry-beach digital elevation model (DEM) from the framescan along with hourly wave runup, hydrodynamic, and morphologic statistics from the linescan time series. The lidar has collected data semi-continuously since January 2015 (with gaps occurring while the lidar was malfunctioning or being serviced), resulting in an hourly data set spanning four years as of January 2019. Examples of data products and potential applications spanning a range of spatial and temporal scales relevant to coastal processes are discussed.
4

Koltsida, Evgenia, and Andreas Kallioras. "Groundwater flow simulation through the application of the FREEWAT modeling platform." Journal of Hydroinformatics 21, no. 5 (July 10, 2019): 812–33. http://dx.doi.org/10.2166/hydro.2019.040.

Abstract:
FREEWAT is a free and open-source QGIS-integrated platform developed to simulate several hydrological processes by combining the geo-processing and post-processing capabilities of a geographic information system (GIS) with several codes of the well-known USGS MODFLOW 'family'. The FREEWAT platform was applied to the groundwater flow simulation of a coastal aquifer system located in northern Greece. The simulation was conducted using the MODFLOW_2005 code and the Observation Analysis Tool (a FREEWAT module facilitating the integration of time series observations into modeling), while the UCODE_2014 code was used as the main module for sensitivity analysis and parameter estimation. The statistics used include composite scaled sensitivities, parameter correlation coefficients, and leverage. The simulation of the investigated aquifer system was found to be satisfactory, indicating that the simulated level values were slightly greater than the observed values after the optimization.
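Of the diagnostics listed, the composite scaled sensitivity is the least self-explanatory; the expression below is the form commonly used in the MODFLOW/UCODE parameter-estimation literature and is given only as a sketch (the UCODE_2014 implementation may differ in detail).

```latex
\mathrm{css}_j \;=\; \left[ \frac{1}{ND} \sum_{i=1}^{ND}
\left( \frac{\partial y_i'}{\partial b_j}\, b_j\, \omega_i^{1/2} \right)^{2} \right]^{1/2}
```

where the y'_i are the ND simulated equivalents of the observations, b_j is the j-th parameter and ω_i the observation weight; large values of css_j flag parameters that the available observations can actually constrain.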
5

Zhu, Gaoyang, Muzhi Gao, Fanmin Kong, and Kang Li. "Application of Logging While Drilling Tool in Formation Boundary Detection and Geo-steering." Sensors 19, no. 12 (June 19, 2019): 2754. http://dx.doi.org/10.3390/s19122754.

Abstract:
Logging while drilling (LWD) plays a crucial role in geo-steering, as it can determine the formation boundary and resistivity in real time. In this study, an efficient inversion, which can accurately invert formation information in real time on the basis of fast forward modeling, is presented. In forward modeling, Gauss–Legendre quadrature combined with the continued fraction method is used to calculate the response of the LWD instrument in a layered formation. In inversion modeling, the Levenberg–Marquardt (LM) algorithm, combined with a line search based on the Armijo criterion, is used to minimize the cost function, and a constraint algorithm is added to ensure the stability of the inversion. A positive or negative sign is assigned to the distance parameter to indicate whether the LWD instrument is located above or below the formation boundary. We have carried out a series of experiments to verify the accuracy of the inversion. The experimental results suggest that the forward algorithm makes the infinite integral of the Bessel function converge rapidly and accurately obtains the response of the LWD instrument in a layered formation. The inversion can accurately determine the formation resistivity and boundary in real time. This is significant for geological exploration.
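The optimisation loop described here, a damped least-squares update combined with an Armijo backtracking line search and a simple constraint, can be sketched generically as follows. This is an illustrative implementation with a toy exponential-decay residual, not the authors' electromagnetic inversion; residual, jacobian, the damping factor and the box bounds are all stand-ins.

```python
import numpy as np

def lm_armijo(residual, jacobian, x0, lam=1e-2, c1=1e-4, max_iter=50, tol=1e-8,
              lower=None, upper=None):
    """Minimise 0.5*||r(x)||^2 with a Levenberg-Marquardt step and an
    Armijo backtracking line search; optional box constraints keep the
    iterate physically meaningful (e.g. positive resistivities)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r                                    # gradient of the cost
        if np.linalg.norm(g) < tol:
            break
        H = J.T @ J + lam * np.eye(len(x))             # damped normal equations
        step = np.linalg.solve(H, -g)
        cost = 0.5 * r @ r
        t = 1.0
        while True:                                    # Armijo condition
            x_new = x + t * step
            if lower is not None or upper is not None:
                x_new = np.clip(x_new, lower, upper)   # simple constraint handling
            r_new = residual(x_new)
            if 0.5 * r_new @ r_new <= cost + c1 * t * (g @ step) or t < 1e-8:
                break
            t *= 0.5
        x = x_new
    return x

# Toy usage: fit the two parameters of an exponential decay.
t_obs = np.linspace(0, 1, 20)
y_obs = 2.0 * np.exp(-3.0 * t_obs)
res = lambda p: p[0] * np.exp(-p[1] * t_obs) - y_obs
jac = lambda p: np.column_stack([np.exp(-p[1] * t_obs),
                                 -p[0] * t_obs * np.exp(-p[1] * t_obs)])
print(lm_armijo(res, jac, x0=[1.0, 1.0], lower=0.0, upper=10.0))
```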
6

Clapuyt, François, Veerle Vanacker, Fritz Schlunegger, and Kristof Van Oost. "Unravelling earth flow dynamics with 3-D time series derived from UAV-SfM models." Earth Surface Dynamics 5, no. 4 (December 5, 2017): 791–806. http://dx.doi.org/10.5194/esurf-5-791-2017.

Abstract:
Abstract. Accurately assessing geo-hazards and quantifying landslide risks in mountainous environments are gaining importance in the context of the ongoing global warming. For an in-depth understanding of slope failure mechanisms, accurate monitoring of the mass movement topography at high spatial and temporal resolutions remains essential. The choice of the acquisition framework for high-resolution topographic reconstructions will mainly result from the trade-off between the spatial resolution needed and the extent of the study area. Recent advances in the development of unmanned aerial vehicle (UAV)-based image acquisition combined with the structure-from-motion (SfM) algorithm for three-dimensional (3-D) reconstruction make the UAV-SfM framework a competitive alternative to other high-resolution topographic techniques. In this study, we aim at gaining in-depth knowledge of the Schimbrig earthflow located in the foothills of the Central Swiss Alps by monitoring ground surface displacements at very high spatial and temporal resolution using the efficiency of the UAV-SfM framework. We produced distinct topographic datasets for three acquisition dates between 2013 and 2015 in order to conduct a comprehensive 3-D analysis of the landslide. Therefore, we computed (1) the sediment budget of the hillslope, and (2) the horizontal and (3) the three-dimensional surface displacements. The multitemporal UAV-SfM based topographic reconstructions allowed us to quantify rates of sediment redistribution and surface movements. Our data show that the Schimbrig earthflow is very active, with mean annual horizontal displacement ranging between 6 and 9 m. Combination and careful interpretation of high-resolution topographic analyses reveal the internal mechanisms of the earthflow and its complex rotational structure. In addition to variation in horizontal surface movements through time, we interestingly showed that the configuration of nested rotational units changes through time. Although there are major changes in the internal structure of the earthflow in the 2013–2015 period, the sediment budget of the drainage basin is nearly in equilibrium. As a consequence, our data show that the time lag between sediment mobilization by landslides and enhanced sediment fluxes in the river network can be considerable.
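At its core, a sediment budget from repeated UAV-SfM surveys is a DEM-of-difference calculation; the sketch below shows that generic step with assumed values for the grid cell size and the level of detection, not the authors' processing chain.

```python
import numpy as np

def sediment_budget(dem_t0, dem_t1, cell_size, lod=0.1):
    """Difference two co-registered DEMs (elevations in m) and return the
    eroded volume (negative), deposited volume (positive) and their net sum
    in m^3, ignoring elevation changes below a level of detection (lod)."""
    dod = dem_t1 - dem_t0                        # DEM of difference
    dod = np.where(np.abs(dod) < lod, 0.0, dod)  # mask sub-detection noise
    cell_area = cell_size ** 2
    deposition = np.nansum(np.where(dod > 0, dod, 0.0)) * cell_area
    erosion = np.nansum(np.where(dod < 0, dod, 0.0)) * cell_area
    return erosion, deposition, erosion + deposition

# Toy example: two synthetic 1 m-resolution DEMs two years apart.
rng = np.random.default_rng(1)
dem_2013 = rng.normal(1000.0, 0.5, size=(100, 100))
dem_2015 = dem_2013 + rng.normal(0.0, 0.3, size=(100, 100))
print(sediment_budget(dem_2013, dem_2015, cell_size=1.0))
```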
7

Miller, Aaron, Inder Singh, Sarah Pilewski, Vladimir Petrovic, and Philip M. Polgreen. "691. Real-Time Local Influenza Forecasting Using Smartphone-Connected Thermometer Readings." Open Forum Infectious Diseases 5, suppl_1 (November 2018): S249. http://dx.doi.org/10.1093/ofid/ofy210.698.

Abstract:
Background. Information regarding influenza activity can inform clinical and public health activities. However, current surveillance approaches induce a delay in influenza activity reports (typically 1–2 weeks). Recently, we used data from smartphone-connected thermometers to accurately forecast real-time influenza activity at a national level. Because thermometer readings can be geo-located, we used state-level thermometer data to determine whether these data can improve state-level surveillance estimates.
Methods. We used temperature readings collected by the Kinsa smart thermometer and mobile device app to develop state-level forecasting models to predict real-time influenza activity (1–2 weeks in advance of surveillance reports). We used state-reported influenza-like illness (ILI) to represent state influenza activity for 48 US states with sufficient surveillance data. Counts of temperature readings, fever episodes and reported symptoms were computed by week. We developed autoregressive time-series models and evaluated model performance in an adaptive out-of-sample manner. We compared baseline time-series models containing lagged state-reported ILI activity to models incorporating exogenous thermometer readings.
Results. A total of 10,262,212 temperature readings were recorded from October 30, 2015 to March 29, 2018. In nearly all of the 48 states considered, weekly forecasts of ILI activity improved considerably when thermometer readings were incorporated. On average, state-level forecasting accuracy improved by 23.9% compared with baseline time-series models. In many states, such as Pennsylvania, New Mexico, Massachusetts, Virginia, New York and South Carolina, out-of-sample forecast error was reduced by more than 50% when thermometer data were incorporated. In general, forecasts were most accurate in states with the greatest number of device readings. During the 2017–2018 influenza season, the average improvement in forecast accuracy was 24.4%, and thermometer readings improved forecasting accuracy in 41 of 48 states.
Conclusion. Data from smart thermometers accurately track real-time influenza activity at a state level. Local surveillance efforts may be improved by incorporating such information. Such data may also be useful for longer-term local forecasts.
Disclosures. I. Singh, Kinsa Inc.: Board Member, Employee and Shareholder, equity received and Salary. S. Pilewski, Kinsa Inc.: Employee and Shareholder, equity received and Salary. V. Petrovic, Kinsa Inc.: Employee and Shareholder, equity received and Salary.
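The baseline-versus-augmented comparison described in the Methods can be sketched with a standard autoregressive model that accepts exogenous regressors; the example below uses statsmodels' SARIMAX on a synthetic ILI series with a hypothetical thermometer series as the exogenous input, purely to illustrate the setup (the authors' exact model specification is not reproduced here).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical weekly series: state-level ILI activity and counts of fever
# episodes from connected thermometers (both synthetic stand-ins).
rng = np.random.default_rng(2)
weeks = pd.date_range("2016-10-02", periods=120, freq="W")
fever = pd.Series(50 + 30 * np.sin(np.arange(120) / 8.0) + rng.normal(0, 5, 120),
                  index=weeks, name="fever_episodes")
ili = 0.05 * fever.shift(1).bfill() + rng.normal(0, 0.3, 120)

train_ili, test_ili = ili.iloc[:100], ili.iloc[100:]
train_fev, test_fev = fever.iloc[:100], fever.iloc[100:]

# Baseline: autoregressive model of lagged ILI only.
baseline = SARIMAX(train_ili, order=(2, 0, 0)).fit(disp=False)

# Augmented: same AR structure plus thermometer readings as an exogenous
# regressor, mirroring the comparison described in the abstract.
augmented = SARIMAX(train_ili, exog=train_fev, order=(2, 0, 0)).fit(disp=False)

h = len(test_ili)
mae_base = np.mean(np.abs(baseline.forecast(h).values - test_ili.values))
mae_aug = np.mean(np.abs(augmented.forecast(h, exog=test_fev).values - test_ili.values))
print(f"baseline MAE={mae_base:.3f}  with thermometer data MAE={mae_aug:.3f}")
```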
8

Abdur Rehman, Nabeel, Henrik Salje, Moritz U. G. Kraemer, Lakshminarayanan Subramanian, Umar Saif, and Rumi Chunara. "Quantifying the localized relationship between vector containment activities and dengue incidence in a real-world setting: A spatial and time series modelling analysis based on geo-located data from Pakistan." PLOS Neglected Tropical Diseases 14, no. 5 (May 11, 2020): e0008273. http://dx.doi.org/10.1371/journal.pntd.0008273.

9

Trinh, Nghia Quoc, and Krishna Kanta Panthi. "Evaluation of Seismic Events Occurred after Filling and Drawdown of the Reservoir at Song Tranh 2 HPP in Vietnam." Hydro Nepal: Journal of Water, Energy and Environment 15 (October 22, 2014): 16–20. http://dx.doi.org/10.3126/hn.v15i0.11285.

Abstract:
Reservoir-induced earthquakes are a challenging issue for hydropower and have occurred at many sites around the world. However, each event is unique in itself and depends on the geo-tectonics and geo-hydrology of the area in which the event is situated. This article focuses on seismic events at the Song Tranh 2 hydropower project located in Quang Nam province, Vietnam. The construction of the project's 96-meter-high Roller Compacted Concrete (RCC) dam was completed in August 2011. Approximately one year after commissioning, the dam began experiencing a serious leakage problem through the dam body. In addition, a series of earthquakes occurred near the project area and continued for several months. The high intensity and magnitude of the earthquakes caused damage to the project and spread fear among the local people living in the downstream valley. The issues drew significant media attention, and thousands of articles about this project were written within a short time. As a result, the dam authorities have been subject to extreme public pressure. This paper describes the earthquake events and the difficult situation that both the local population and the authorities faced in their aftermath. In addition, we analyze the seismic events qualitatively, using data and information on the water filling and drawdown processes. Our analysis provides insight into these seismic events, as we reconstruct the earthquake scenarios and test a hypothesis of earthquake occurrence. Future earthquake activities are also predicted and compared.
10

Baude, Mike, and Burghard C. Meyer. "Changes of landscape structure and soil production function since the 18th century in north-west saxony." Journal of Environmental Geography 3, no. 1-4 (January 1, 2010): 11–23. http://dx.doi.org/10.14232/jengeo-2010-43779.

Abstract:
The objectives of this paper are (1) to reconstruct time series of the historical and current landscape structures based on historical documents and serial cadastral maps, (2) to analyse the changes in the agricultural production function through the application of historical soil assessments and (3) to analyse the connections between landscape structure and production function in reference to the social and economic driving forces. The case study area is today an intensively used agricultural landscape located near Taucha-Eilenburg (NW Saxony), Germany. Arable landscapes in Germany are changing with increasing dynamics: valuable structures and landscape functions of the traditional and multifunctional landscape have been lost. New landscape structures replaced the traditional ones slowly or sometimes in short time steps. Therefore, this paper focuses on the changes in landscape structures and in the soil production function induced by land use since the 18th century. The changes are analysed on the basis of historical and serial cadastral maps and documents covering four time steps from 1750 to 2005. The historical maps were scanned, geo-referenced and digitised in GIS, which enables quantitative analysis of landscape structure changes at the parcel level. The production function is explicitly reconstructed on the basis of the Prussian real estate taxation of 1864 (Preußische Grundsteuerbonitierung) and the German Soil Taxation (Reichsbodenschätzung) of 1937. Changes observed on the serial cadastral maps were linked with the social and economic driving forces and the soil production function. Moreover, there is a high demand for the development of methodologies to analyse and assess time series of landscape structures, land use and landscape functions in the historical context of landscape development.
11

Danezis, Chris, Miltiadis Chatzinikos, and Christopher Kotsakis. "Linear and Nonlinear Deformation Effects in the Permanent GNSS Network of Cyprus." Sensors 20, no. 6 (March 22, 2020): 1768. http://dx.doi.org/10.3390/s20061768.

Abstract:
Permanent Global Navigation Satellite Systems (GNSS) reference stations are well established as a powerful tool for the estimation of deformation induced by man-made or physical processes. GNSS sensors are successfully used to determine positions and velocities over a specified time period, with unprecedented accuracy, promoting research in many safety-critical areas, such as geophysics and geo-tectonics, tackling problems that torment traditional equipment and providing deformation products with absolute accuracy. Cyprus, being located at the Mediterranean fault, exhibits a very interesting geodynamic regime, which has yet to be investigated thoroughly. Accordingly, this research revolves around the estimation of crustal deformation in Cyprus using GNSS receivers. CYPOS (CYprus POsitioning System), a network of seven permanent GNSS stations has been operating since 2008, under the responsibility of the Department of Lands and Surveys. The continuous flow of positioning data collected over this network, offers the required information to investigate the behavior of the crustal deformation field of Cyprus using GNSS sensors for the first time. This paper presents the results of a multi-year analysis (11/2011–01/2017) of daily GNSS data and provides inferences of linear and nonlinear deforming signals into the position time series of the network stations. Specifically, 3D station velocities and seasonal periodic displacements are jointly estimated and presented via a data stacking approach with respect to the IGb08 reference frame.
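For a single coordinate component, jointly estimating a station velocity and seasonal periodic displacements amounts to a least-squares fit of an offset, a trend and harmonic terms; a minimal sketch with a synthetic daily series is given below (the authors' data-stacking approach and reference-frame handling are not reproduced).

```python
import numpy as np

def fit_trend_and_seasonal(t_years, pos_mm):
    """Least-squares fit of an offset, a linear velocity and annual plus
    semi-annual harmonics to one coordinate component of a GNSS position
    time series (t in decimal years, position in mm)."""
    w = 2.0 * np.pi
    A = np.column_stack([
        np.ones_like(t_years), t_years,                    # offset + velocity
        np.sin(w * t_years), np.cos(w * t_years),          # annual term
        np.sin(2 * w * t_years), np.cos(2 * w * t_years),  # semi-annual term
    ])
    coeffs, *_ = np.linalg.lstsq(A, pos_mm, rcond=None)
    velocity = coeffs[1]                                   # mm/yr
    annual_amp = np.hypot(coeffs[2], coeffs[3])            # mm
    semiannual_amp = np.hypot(coeffs[4], coeffs[5])        # mm
    return velocity, annual_amp, semiannual_amp

# Synthetic daily series over ~5 years: 3 mm/yr trend + 4 mm annual signal.
t = np.arange(0, 5, 1 / 365.25)
rng = np.random.default_rng(3)
up = 3.0 * t + 4.0 * np.sin(2 * np.pi * t + 0.7) + rng.normal(0, 2, t.size)
print(fit_trend_and_seasonal(t, up))
```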
12

Zhao, Zhou, Li, Cao, He, Yu, Li, Elvidge, Cheng, and Zhou. "Applications of Satellite Remote Sensing of Nighttime Light Observations: Advances, Challenges, and Perspectives." Remote Sensing 11, no. 17 (August 21, 2019): 1971. http://dx.doi.org/10.3390/rs11171971.

Abstract:
Nighttime light observations from remote sensing provide us with a timely and spatially explicit measure of human activities, and therefore enable a host of applications such as tracking urbanization and socioeconomic dynamics, evaluating armed conflicts and disasters, investigating fisheries, assessing greenhouse gas emissions and energy use, and analyzing light pollution and health effects. The new and improved sensors, algorithms, and products for nighttime lights, in association with other Earth observations and ancillary data (e.g., geo-located big data), together offer great potential for a deep understanding of human activities and related environmental consequences in a changing world. This paper reviews the advances of nighttime light sensors and products and examines the contributions of nighttime light remote sensing to perceiving the changing world from two aspects (i.e., human activities and environmental changes). Based on the historical review of the advances in nighttime light remote sensing, we summarize the challenges in current nighttime light remote sensing research and propose four strategic directions, including: Improving nighttime light data; developing a long time series of consistent nighttime light data; integrating nighttime light observations with other data and knowledge; and promoting multidisciplinary and interdisciplinary analyses of nighttime light observations.
13

Jin, Cheng, Kai Yu, and Ke Zhang. "Evaluation of MODIS-based Vegetation Restoration After the 2008 Wenchuan Earthquake." E3S Web of Conferences 308 (2021): 02005. http://dx.doi.org/10.1051/e3sconf/202130802005.

Abstract:
Mountainous vegetation recovery after major earthquakes is important for preventing post-seismic soil erosion and geo-hazards. The magnitude 7.9 Wenchuan earthquake struck western Sichuan, China, in 2008, triggering a large number of geological hazards and causing major vegetation damage. The recovery process can be long and fluctuating, and remote sensing has become an important method for monitoring vegetation restoration. This study uses remote sensing data to analyze post-seismic vegetation damage and recovery after the 2008 Wenchuan earthquake through 2020, and to identify the factors affecting the restoration of ecological vegetation. The paper examines the vegetation recovery processes following the 2008 Wenchuan earthquake using 16-day MODIS normalized difference vegetation index time series from 2000 to 2020. The vegetation recovery rate generally increased over the years, and 49.89% of the study area had recovered by 2020. In addition, by combining remote sensing imagery and geographic information data, we found that the heavily affected vegetation areas are mainly located along the southern part of the earthquake surface rupture, where slopes are very steep, mostly over 60 degrees. This makes these areas more likely to experience secondary natural hazards and gives them a fluctuating vegetation recovery rate. It can be concluded that remote sensing is an effective method for monitoring vegetation dynamics over long time series. For post-earthquake soil retention and the protection of ecological vegetation on landslides, particular attention should be paid to areas with slopes steeper than 60 degrees.
14

PANZHIN, A. A., and N. A. PANZHINA. "RESEARCH OF SHORT-PERIOD GEODYNAMICS OF ROCK ARRAY OF KACHKANAR MINING AND PROCESSING PLANT." News of the Tula state university. Sciences of Earth 2, no. 1 (2020): 318–29. http://dx.doi.org/10.46689/2218-5194-2020-2-1-318-329.

Abstract:
This paper presents the methodology and results of a study of the modern short-period geodynamic activity of the rock mass of the Kachkanar mining and processing plant (MPP). The parameters of geodynamic movements are determined through continuous monitoring with satellite geodesy systems over a network of points for several hours or days. The need for studies on the organization of geodynamic monitoring in the quarries of the Kachkanar MPP arises from the peculiarities of the tectonic structure of the enclosing massif, in particular the influence of a series of active tectonic disturbances crossing the quarries and tailings. The manifestations of geodynamic activity, which take the form of trend and cyclical short-period shifts along tectonic disturbances, are associated with deformation processes occurring on the north-western side of the Main Quarry, as well as with a breach of the tailings dam. The role of cyclic geodynamic movements in forming the general stress-strain state of the area where the quarries and tailings are located is shown. At the same time, these geodynamic movements lead to fatigue effects in structures and materials, to "loosening" of the rock massif, to changes in the strength properties of soils at the base of engineering structures, and to the manifestation of the thixotropy effect.
15

Toker, M., E. Çolak, and F. Sunar. "SPATIOTEMPORAL CHANGE ANALYSIS OF THE PROTECTED AREAS: A CASE STUDY – İĞNEADA FLOODPLAIN FORESTS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2021 (June 29, 2021): 735–40. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2021-735-2021.

Abstract:
Protected areas are important land and water ecosystems that host biodiversity and a variety of flora and fauna species. In Turkey, national parks are one type of protected area, managed according to the National Parks Law No. 2873. Among them, the İğneada Floodplain Forests National Park, located in the town of İğneada in the province of Kırklareli, Turkey, was declared a national park in 2007 and is important as a rare ecosystem consisting of wetland, swamp, lakes and coastal sand dunes. Planning of protected areas can be done in a variety of ways, taking into account the balance between protection and use, and should follow policies and guidelines. Today, for the sustainability and effective management of forest ecosystems, remote sensing provides an effective tool for assessing and monitoring ecosystem health at different temporal and spatial scales. In this study, temporal changes in the national park were analyzed with Landsat satellite time series images using two different methods. The first method, the LandTrendr algorithm (Landsat-based Detection of Trends in Disturbance and Recovery) developed for multitemporal satellite data, uses pixel values as input and analyzes them with regression models to capture, label and map the changes. In this context, Landsat time series images were taken at five-year intervals between 1987 and 2007 and at two-year intervals until 2017 for the LandTrendr analysis (i.e., before and after the area's declaration as a national park, respectively). As a second approach, the Google Earth Engine (GEE) cloud-based platform, which facilitates access to high-performance computing resources for processing large long-term data sets, was used to analyze the impact of land cover changes. The results showed that the area was subjected to various pressures (e.g., illegal felling and pollution) until it was declared a national park. Although there was general improvement and recovery after the region was declared a park, the sensitive dynamics of the region require continuous monitoring and protection using geo-information technologies.
16

IEMELIANOV, V. O., M. M. IEVLEV, and O. V. CHUBENKO. "Objects of Antique and Medieval Cultural Heritage as a resource of the geoecosystems of the Northern Black Sea Coast and of the Black Sea Shelf in the area of the Bug Estuary." Geology and Mineral Resources of World Ocean 19, no. 1 (2023): 3–16. http://dx.doi.org/10.15407/gpimo2023.01.003.

Abstract:
This article is a continuation of the author's series of publications devoted to the study of the resource potential of archaeological artifacts — sites of ancient and medieval heritage as anthropogenic components of modern geoecosystems of the Northern Black Sea coast (GESNBSC) and the nearby Black Sea shelf, including the estuaries. These sites are currently partially or fully located in the transitional part of the Black Sea Geoecosystem (BSG) space and the associated geoecological subsystems of estuaries, where modern natural aerial, aquatic and geological environmental subsystems interface with each other. The article presents the results of research, characterization and typification of archaeological sites of the cultural heritage of ancient and medieval times, which are located in the space of geoecosystems that were formed in this region. Among them are the underwater archaeological sites discovered in the space of ancient coastal and littoral paleogeoecosystems of the Bug Estuary and their connection with the most significant archaeological sites that were once formed as part of geoecosystems of the land. A preliminary reconstruction of the borders of the Bug Estuary in ancient and medieval times is made, and the article demonstrates the archaeological resource potential of GESNBSC spaces and the adjacent zone of the Black Sea shelf with estuaries, including the ancient and medieval archaeological sites of cultural heritage, which were created within, and for some time belonged to, the spaces of certain paleo-GESNBSC, but are now located in the space of the modern geoecosystem of the Bug Estuary. The article gives a brief description of the already known, partially submerged archaeological sites of ancient and medieval cultural heritage and of some other interesting underwater archaeological artifacts, and shows the feasibility of expanding their study with reference to the bathymetric and geomorphological features of the boundaries between modern natural BSG environmental subsystems. Such knowledge is important both for solving issues related to a more profound understanding of the causes and consequences of changes in the paleogeoecological and modern geoecological conditions of the shelf of the Azov-Black Sea basin and the GESNBSC, predicting their dynamics and directions of development, and for historical and archaeological reconstructions, in particular to identify new cultural heritage sites. The results of the geoarchaeological (that is, combined geoecological and archaeological) studies published in the article are important not only for understanding the formation and functioning of transitional geoecosystems with their resources, which include archaeological artifacts as anthropogenic components, but also for the development of public culture, the historical memory of the population and the expansion of its historical consciousness. In addition, such knowledge is necessary to consciously and competently address a number of economic problems in the development of communities in the Northern Black Sea region, in particular to attract investment in the development of coastal and underwater tourism and to predict the impact of many geo-environmental processes inherent in the research region under the specific conditions of the search, identification, use and preservation of the historical and archaeological potential of the ancient and medieval cultural heritage located in the space of the modern GESNBSC and BSG as their components.
These anthropogenic components are objective evidence of the impact of natural geo-environmental conditions and their dynamics on the peculiarities of life and migration of the region's inhabitants from ancient times up to the present.
17

Bozzano, Francesca, Carlo Esposito, Paolo Mazzanti, Mauro Patti, and Stefano Scancella. "Imaging Multi-Age Construction Settlement Behaviour by Advanced SAR Interferometry." Remote Sensing 10, no. 7 (July 18, 2018): 1137. http://dx.doi.org/10.3390/rs10071137.

Abstract:
This paper focuses on the application of Advanced Satellite Synthetic Aperture Radar Interferometry (A-DInSAR) to subsidence-related issues, with particular reference to ground settlements due to external loads. Beyond the stratigraphic setting and the geotechnical properties of the subsoil, other relevant boundary conditions strongly influence the reliability of remotely sensed data for quantitative analyses and risk mitigation purposes. Because most of the Persistent Scatterer Interferometry (PSI) measurement points (Persistent Scatterers, PSs) lie on structures and infrastructures, the foundation type and the age of a construction are key factors for a proper interpretation of the time series of ground displacements. To exemplify a methodological approach to evaluate these issues, this paper refers to an analysis carried out in the coastal/deltaic plain west of Rome (Rome and Fiumicino municipalities) affected by subsidence and related damages to structures. This region is characterized by a complex geological setting (alternation of recent deposits with low and high compressibilities) and has been subjected to different urbanisation phases starting in the late 1800s, with a strong acceleration in the last few decades. The results of A-DInSAR analyses conducted from 1992 to 2015 have been interpreted in light of high-resolution geological/geotechnical models, the age of the construction, and the types of foundations of the buildings on which the PSs are located. Collection, interpretation, and processing of geo-thematic data were fundamental to obtain high-resolution models; change detection analyses of the land cover allowed us to classify structures/infrastructures in terms of the construction period. Additional information was collected to define the types of foundations, i.e., shallow versus deep foundations. As a result, we found that only by filtering and partitioning the A-DInSAR datasets on the basis of the above-mentioned boundary conditions can the related time series be considered a proxy of the consolidation process governing the subsidence related to external loads as confirmed by a comparison with results from a physically based back analysis based on Terzaghi’s theory. Therefore, if properly managed, the A-DInSAR data represents a powerful tool for capturing the evolutionary stage of the process for a single building and has potential for forecasting the behaviour of the terrain–foundation–structure combination.
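For reference, the physically based back analysis mentioned at the end rests on Terzaghi's one-dimensional consolidation theory, whose standard form is sketched below; the exact parameterisation used by the authors is not given here.

```latex
T_v = \frac{c_v\, t}{H_{dr}^{2}}, \qquad
U(T_v) = 1 - \sum_{m=0}^{\infty} \frac{2}{M^{2}}\, e^{-M^{2} T_v},
\quad M = \frac{\pi}{2}\,(2m+1), \qquad
s(t) \approx U(T_v)\, s_{\infty}
```

Here c_v is the coefficient of consolidation, H_dr the drainage path length, U the average degree of consolidation and s_∞ the final settlement; the A-DInSAR displacement time series is what gets compared against s(t).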
18

YAITSKAYA, Natalia, Vladimir BRIGIDA, Oksana GAVRINA, and Alexander KOPYLOV. "Photogrammetric assessment of deformation processes on landslide slopes while ensuring sustainable development of Caucasus Territories." Sustainable Development of Mountain Territories 15, no. 3 (September 30, 2023): 558–67. http://dx.doi.org/10.21177/1998-4502-2023-15-3-558-567.

Abstract:
Introduction. The sustainable development of geo-resources is, in modern realities, a priority for ensuring the sustainable development of the territories of the Russian Federation. In the absence of real mechanisms for its implementation, the territories most sensitive to climate change (the subtropical zone of the Caucasus) will be exposed to increased risks from a growing number of dangerous hydrometeorological phenomena. At the same time, improving methods for assessing deformation processes on landslide slopes becomes particularly relevant. Materials and methods. The object of the study was a slope located near the "Winter Theater" (Sochi), on the road surface of the descent to the sea, where a system of cracks was observed. The area under study was photographed point by point with a Nikon D3100 camera, after which the photographs were depixelized to obtain PNG ASCII files. The resulting data arrays were then processed with standard smoothing and three-dimensional interpolation procedures to obtain regression models, which were presented in the gnuplot software. Results. As a result of the completed stage of research, a methodology for determining the deformation processes of slope systems using the photometric method, which is based on the processing of a high-precision raster image (photo) by analytical methods, was adapted and formulated. In our case, there was a series of values only for the line of sites (n = 9) for all pickets (N from 11 to 23). Analysing only this series, we could conclude that ε varies nonlinearly from 12 mm (N = 23 m) to a first local maximum of 20 mm (N = 21 to 22 m), then declines to 17 mm (N = 20 m) and so on, but in general does not exceed 23 mm. When using the photometric method, however, we can observe in certain areas (segments) that the deformation of the asphalt road reaches up to 40 mm. Discussion. Compared to most similar studies (geo-statistical methods for processing satellite images, classical geodetic methods, and geophysical methods), the photometric method for identifying fractured structures and assessing their development is more accurate and less labor-intensive in field experiments. Conclusion. To ensure the sustainable development of the Caucasus territories, it is necessary to significantly improve the quality of monitoring of road surfaces located in foothill and mountain areas (especially in the presence of serpentine roads). As a result of the completed stage of research, a methodology for determining the deformation processes of slope systems using the photometric method was adapted and formulated. Future research should focus on the methodology for transitioning to displacement rates and on considering the response spaces of crack profile dynamics. Resume. The article presents the results of studies of the deformation processes of slope systems in a dangerous landslide zone (the Winter Theater of Sochi) of the Eastern Black Sea region. It has been established that the width of the walls of crack No. 2 on the road section under study increased nonlinearly in 2022 according to a polynomial law, while the absolute values of the deformations vary from 3 to 40 mm. In addition, a methodology for using the photometric method to assess the degree of cracking in slope systems is presented.
The results of the study can be used in the development of a methodology for geoecological monitoring of the condition of road surfaces of highways in mountainous areas. The proposed methodological approaches to processing raster data sets can be used to develop algorithms for managing natural and technical systems in the mining industry.
19

Bakij G., Shibdawa M. A, Kolo A. M., Mahmoud A. A., Gambo N. N., Lubis S., Apagu N. T., and Denji B. K. "Determination of Some Heavy Metals Concentrations in Water and Irrigation Farms along Wulmi River in Pankshin Local Government, Plateau State, Nigeria." International Journal of Plant & Soil Science 35, no. 20 (October 12, 2023): 1374–89. http://dx.doi.org/10.9734/ijpss/2023/v35i203938.

Abstract:
Levels of four heavy metals (Co, Cu, Pb and Cd) and three physico-chemical parameters (pH, temperature, total dissolved solids) were determined in the Wulmi River at five sampling points (S1-S5), spaced 200 m apart, using an atomic absorption spectrophotometer (AAS) and approved standard procedures respectively, with a control site located about 1000 m away from the study area. Sampling was done monthly in the wet season from May to December 2017. The weighted means of the physico-chemical parameters determined at each sampling point in the river were in the range 6.53±0.21 - 6.85±0.17 for pH, 25.35±0.79 °C - 25.92±2.31 °C for temperature, which is within the permissible limit of 30 °C (USEPA, 2002), and 9.43±3.90 mg/l - 26.71±2.75 mg/l for TDS, which is also within the permissible limit of 1000 mg/l (USEPA, 2002). The weighted means of the heavy metal concentrations in water at the sampling points in the river ranged between 0.11±0.07 mg/l - 0.29±0.19 mg/l for copper, 1.17±0.39 mg/l - 1.76±0.31 mg/l for cadmium, 0.08±0.05 mg/l - 0.91±0.03 mg/l for lead, and 1.53±0.39 mg/l - 6.48±3.36 mg/l for cobalt. Soil samples from five irrigation farmlands (F1-F5) around the Wulmi River were also analysed for heavy metal concentrations. The heavy metal concentrations in the soil ranged between 12.27±3.46 μg/g - 28.05±1.99 μg/g for copper, 5.49±3.09 μg/g - 17.92±2.18 μg/g for cadmium, 2.24±0.02 μg/g - 9.85±1.43 μg/g for lead, and 13.48±3.72 μg/g - 27.82±2.65 μg/g for cobalt. Lead and cobalt concentrations in the soils are within the permissible limits set by USEPA (2002) and WHO [1] of 10 μg/g and 50 μg/g respectively. All the metals under investigation have a geo-accumulation input in soils around the Wulmi River, except in irrigation farms 2 and 4, where the geo-accumulation input of Pb is 0.00. Analysis of variance indicates that there are significant differences in pH and in the concentrations of TDS, Cd and Co from one sampling point to another throughout the periods of analysis. The data generated will be used to develop a computer-based time series model, which can be used to predict the concentrations of the heavy metals at these sampling points in the Wulmi River in the near future.
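The geo-accumulation assessment mentioned here is normally computed with Müller's index; the sketch below shows that generic formula, and the background concentration in the example is an assumed value, not a figure from the study.

```python
import numpy as np

def igeo(measured, background):
    """Muller geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn)),
    where Cn is the measured concentration and Bn the geochemical
    background; the factor 1.5 absorbs natural background variation."""
    return np.log2(np.asarray(measured, float) / (1.5 * np.asarray(background, float)))

# Hypothetical example: Pb measured in two farm soils vs. an assumed background.
print(igeo([9.85, 2.24], background=20.0))   # negative values -> class 0 (unpolluted)
```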
20

Liang, Hongwu, Alimujiang Kasimu, Haitao Ma, Yongyu Zhao, Xueling Zhang, and Bohao Wei. "Exploring the Variations and Influencing Factors of Land Surface Temperature in the Urban Agglomeration on the Northern Slope of the Tianshan Mountains." Sustainability 14, no. 17 (August 26, 2022): 10663. http://dx.doi.org/10.3390/su141710663.

Abstract:
Changes in land surface temperature (LST) can have serious impacts on the water cycle and ecological environment evolution, which in turn threaten the sustainability of ecosystems. The urban agglomeration on the northern slopes of the Tianshan Mountains (UANSTM) is located in the arid and semi-arid regions of northwest China, has an extremely fragile ecological environment and is sensitive to climate change. However, studies on the LST of the UANSTM have not received much attention. Therefore, this study explored the spatial distribution pattern, fluctuation characteristics, and influencing factors of the LST of the UANSTM from 2005 to 2021 based on MODIS time series LST data and the geo-detector model with optimal parameters. The results show that the UANSTM is dominated by medium- and high-temperature classes, with high- and extremely high-temperature classes clustered in Turpan City. The daytime and nighttime LST patterns are significantly different, with a typical “daytime cold island and nighttime heat island” feature in the oasis region. During 2005–2021, LST fluctuated greatly in the northwestern part of the UANSTM, with LST showing an increasing trend during both daytime and nighttime, and the warming rate was more intense during daytime than nighttime. The increasing trend of LST in Urumqi, Changji Hui Autonomous Prefecture, Shihezi, and Wujiaqu is very significant and will remain consistent in the future. Precipitation, DEM, and AOD are the most important influencing factors of LST in the UANSTM, where precipitation and DEM are negatively correlated with LST, and AOD is positively correlated with LST. Land cover factors (LULC, NDVI, and NDBSI) are the next most influential, and socioeconomic factors (NTL, GDP, and POP) are the least influential. The results of this study can provide a scientific reference for the conservation and sustainable development of the ecological environment of the UANSTM.
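The geo-detector analysis ranks influencing factors by the q statistic; a minimal version of that calculation is sketched below with synthetic data (the optimal-parameter discretisation step used by the authors is omitted).

```python
import numpy as np

def geodetector_q(y, strata):
    """Factor-detector q statistic: q = 1 - sum_h(N_h * var_h) / (N * var),
    i.e. the share of the variance of y (e.g. LST) explained by a
    categorical explanatory factor given as stratum labels."""
    y, strata = np.asarray(y, float), np.asarray(strata)
    total = len(y) * y.var()
    within = sum((strata == h).sum() * y[strata == h].var() for h in np.unique(strata))
    return 1.0 - within / total

# Toy example: LST explained by a three-class land-cover factor.
rng = np.random.default_rng(4)
landcover = rng.integers(0, 3, 1000)
lst = np.array([30.0, 34.0, 38.0])[landcover] + rng.normal(0, 2, 1000)
print(geodetector_q(lst, landcover))
```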
21

Zerefos, C., D. Balis, M. Tzortziou, A. Bais, K. Tourpali, C. Meleti, G. Bernhard, and J. Herman. "A note on the interannual variations of UV-B erythemal doses and solar irradiance from ground-based and satellite observations." Annales Geophysicae 19, no. 1 (January 31, 2001): 115–20. http://dx.doi.org/10.5194/angeo-19-115-2001.

Abstract:
This study examines three UV-B data sets: ground-based long-term spectral records at Thessaloniki, Greece (40.5° N, 22.9° E) and San Diego, California, USA (32.7° N, 117.2° W), as well as a global data set of daily erythemal dose obtained from the Total Ozone Mapping Spectrometer (TOMS) onboard the Nimbus-7 satellite. Both ground-based stations have long enough records of spectral UV-B measurements to allow independent time series analyses. For a 63° solar zenith angle (SZA) and clear-sky conditions, the quasi-biennial oscillation (QBO) effect in solar irradiance at 305 nm (E305) is about 32% of the annual cycle for both San Diego and Thessaloniki. The effect slightly increases with cloud cover of up to 4/8 and decreases thereafter for cloud cover greater than 4/8. The data reveal that cloudiness cannot offset interannual signals in UV-B records. The observations at San Diego provide an independent confirmation of the widespread nature of the QBO in UV-B, whose amplitude approximately coincides at the two stations studied, both located in the latitude zone 30°–40° N. The peak-to-peak amplitude of the QBO in erythemal dose derived from TOMS/Nimbus-7 data is 6.5% at Thessaloniki. This is similar to the values calculated from ground-based measurements at this station. Based on satellite data, we find that the amplitude of the QBO in the erythemal dose reaches almost 40% of the amplitude of the annual cycle only in the tropics. The ratio of the amplitude of the QBO to that of the annual cycle in erythemal dose decreases towards the extratropics, becoming less than 5% over middle latitudes. Key words: atmospheric composition and structure (geochemical cycles; transmission and scattering of radiation)
22

Rodriguez-Padilla, Isaac, Bruno Castelle, Vincent Marieu, and Denis Morichon. "A Simple and Efficient Image Stabilization Method for Coastal Monitoring Video Systems." Remote Sensing 12, no. 1 (December 24, 2019): 70. http://dx.doi.org/10.3390/rs12010070.

Abstract:
Fixed video camera systems are consistently prone to importune motions over time due to either thermal effects or mechanical factors. Even subtle displacements are mostly overlooked or ignored, although they can lead to large geo-rectification errors. This paper describes a simple and efficient method to stabilize an either continuous or sub-sampled image sequence based on feature matching and sub-pixel cross-correlation techniques. The method requires the presence and identification of different land-sub-image regions containing static recognizable features, such as corners or salient points, referred to as keypoints. A Canny edge detector (CED) is used to locate and extract the boundaries of the features. Keypoints are matched against themselves after computing their two-dimensional displacement with respect to a reference frame. Pairs of keypoints are subsequently used as control points to fit a geometric transformation in order to align the whole frame with the reference image. The stabilization method is applied to five years of daily images collected from a three-camera permanent video system located at Anglet Beach in southwestern France. Azimuth, tilt, and roll deviations are computed for each camera. The three cameras showed motions on a wide range of time scales, with a prominent annual signal in azimuth and tilt deviation. Camera movement amplitude reached up to 10 pixels in azimuth, 30 pixels in tilt, and 0.4° in roll, together with a quasi-steady counter-clockwise trend over the five-year time series. Moreover, camera viewing angle deviations were found to induce large rectification errors of up to 400 m at a distance of 2.5 km from the camera. The mean shoreline apparent position was also affected by an approximately 10–20 m bias during the 2013/2014 outstanding winter period. The stabilization semi-automatic method successfully corrects camera geometry for fixed video monitoring systems and is able to process at least 90% of the frames without user assistance. The use of the CED greatly improves the performance of the cross-correlation algorithm by making it more robust against contrast and brightness variations between frames. The method appears as a promising tool for other coastal imaging applications such as removal of undesired high-frequency movements of cameras equipped in unmanned aerial vehicles (UAVs).
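A stripped-down version of the stabilization idea, Canny edge maps, sub-pixel cross-correlation of fixed land regions and a fitted geometric transformation, might look like the sketch below. It is illustrative only: the keypoint windows, thresholds and the phase-correlation sign convention are assumptions, and the published pipeline differs in its matching and quality-control details.

```python
import cv2
import numpy as np

def stabilize(frame, reference, keypoint_windows):
    """Estimate and remove small camera motions by cross-correlating a few
    fixed land regions (keypoint windows, given as (x, y, w, h)) of a
    grayscale frame against a reference frame, then fitting a similarity
    transform. Sign conventions should be checked against the OpenCV build."""
    src, dst = [], []
    for (x, y, w, h) in keypoint_windows:
        ref_patch = cv2.Canny(reference[y:y + h, x:x + w], 50, 150).astype(np.float32)
        cur_patch = cv2.Canny(frame[y:y + h, x:x + w], 50, 150).astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref_patch, cur_patch)  # sub-pixel shift
        centre = (x + w / 2.0, y + h / 2.0)
        src.append((centre[0] + dx, centre[1] + dy))            # displaced keypoint
        dst.append(centre)                                      # reference position
    M, _ = cv2.estimateAffinePartial2D(np.float32(src), np.float32(dst))
    h_img, w_img = frame.shape[:2]
    return cv2.warpAffine(frame, M, (w_img, h_img))

# Tiny synthetic check: shift a random "reference" image and re-align it.
ref = (np.random.default_rng(5).random((200, 300)) * 255).astype(np.uint8)
moved = np.roll(ref, shift=(1, 2), axis=(0, 1))
out = stabilize(moved, ref, keypoint_windows=[(10, 10, 64, 64), (200, 20, 64, 64),
                                              (120, 120, 64, 64)])
```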
23

Bannari, Abderrazak, and Zahra M. Al-Ali. "Assessing Climate Change Impact on Soil Salinity Dynamics between 1987–2017 in Arid Landscape Using Landsat TM, ETM+ and OLI Data." Remote Sensing 12, no. 17 (August 28, 2020): 2794. http://dx.doi.org/10.3390/rs12172794.

Abstract:
This paper examines the impact of climate change on spatiotemporal soil salinity dynamics during the last 30 years (1987–2017) in an arid landscape. The state of Kuwait, located in the northwest of the Arabian Peninsula, was selected as a pilot study area. To achieve this, a Landsat Operational Land Imager (OLI) image acquired at approximately the same time as a field survey was preprocessed and processed to derive a soil salinity map using a previously developed semi-empirical predictive model (SEPM). During the field survey, 100 geo-referenced soil samples were collected representing different soil salinity classes (non-saline, low, moderate, high, very high and extreme salinity). Laboratory analysis of the soil samples was carried out to measure the electrical conductivity (EC-Lab) in order to validate the selected SEPM. The results were statistically analyzed (p < 0.05) to determine whether the differences between the predicted salinity (EC-Predicted) and the measured ground truth (EC-Lab) are significant. Subsequently, the Landsat time series datasets acquired over the study area with the Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and OLI sensors during the last three decades, over the intervals 1987, 1992, 1998, 2000, 2002, 2006, 2009, 2013, 2016 and 2017, were radiometrically calibrated. Likewise, the datasets were atmospherically and spectrally normalized by applying a semi-empirical line approach (SELA) based on pseudo-invariant targets. Afterwards, a series of soil salinity maps was derived through the application of the SEPM to the image sequence. The trend of salinity changes was statistically tested against climatic variables (temperature and precipitation). The results revealed that the predicted values (EC-Predicted) fit the measured values (EC-Lab) well, with a good index of agreement (D = 0.84), an excellent correlation coefficient (R2 = 0.97) and a low overall root mean square error (RMSE) of 13%. This also demonstrates that the SEPM can be applied to the other, multi-temporally acquired images. For cross-calibration among the Landsat time series datasets, the SELA performed well, with an RMSE ≤ ±5% between all homologous spectral reflectance bands of the considered sensors. This accuracy is considered suitable and fits the calibration standards of the TM, ETM+ and OLI sensors for multi-temporal studies. Moreover, remarkable changes in soil salinity were observed in response to a climate that has warmed by more than 1.1 °C, with a drastic decrease in precipitation over the study area during the last 30 years. Thus, salinized soils have expanded continuously in space and time, and their expansion is significantly correlated with precipitation rates (R2 = 0.73 and D = 0.85).
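The two agreement measures reported for the validation, the root mean square error and Willmott's index of agreement D, are easy to restate; the sketch below shows the standard formulas with hypothetical conductivity values, not data from the study.

```python
import numpy as np

def agreement_stats(predicted, observed):
    """RMSE and Willmott's index of agreement D between predicted and
    observed values (e.g. EC-Predicted vs. EC-Lab)."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    rmse = np.sqrt(np.mean((p - o) ** 2))
    d = 1.0 - np.sum((p - o) ** 2) / np.sum(
        (np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return rmse, d

# Hypothetical example with soil electrical conductivities in dS/m.
print(agreement_stats([1.2, 3.8, 7.9, 15.5], [1.0, 4.0, 8.5, 14.9]))
```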
24

Tian, Haofei, Ganyu Li, Jinyong Choi, Wenlou Luan, Xingtao Cui, Shen Wang, Mengqi Jin, et al. "Petrogenesis and tectonic significance of the Mengjiaping beschtauite in the southern Taihang mountains." Open Geosciences 13, no. 1 (January 1, 2021): 1711–31. http://dx.doi.org/10.1515/geo-2020-0324.

Abstract:
The evolutionary history of the North China Craton has been discussed by many scholars; however, the timing framework of the Trans-North China Block has not been fully agreed upon. Related research has mostly focused on the northern and southern sections of the Trans-North China Block, and in-depth studies on intrusive rocks in the central region are lacking. In this study, we conduct a systematic investigation of the petrography, whole-rock geochemistry, and zircon U–Pb geochronology of the beschtauite intrusion located in the Mengjiaping area of the Southern Taihang Mountains. Our results demonstrate that the dyke intrusion is mainly composed of beschtauite. Laser ablation inductively coupled plasma mass spectrometry zircon U–Pb dating shows that the beschtauite intrusion occurred at ∼1,880 ± 69 Ma. The beschtauite belongs to the I-type granites, the arc tholeiite series, and the calc-alkaline series, with low total alkali, low potassium, and high aluminum contents. The rocks are also enriched in large-ion lithophile elements, relatively depleted in high-field-strength elements, and low in total rare-earth elements. Based on these data, it is suggested that the magmas for the beschtauite intrusion were metasomatized by oceanic slab subduction in the Late Paleoproterozoic. The formation time of the North China Craton basement should therefore be placed after 1,880 Ma.
25

Vinter, Michael. "Kortlægning af marksystemer fra jernalderen – En kildekritisk vurdering af luftfotografiers anvendelighed." Kuml 60, no. 60 (October 31, 2011): 83–114. http://dx.doi.org/10.7146/kuml.v60i60.24511.

Abstract:
Mapping Iron Age field systemsAn assessment of the applicability of aerial photographyThere is little doubt that agriculture constituted the fundamental activity in prehistoric Denmark following its introduction 6000 years ago. Traces of cultivation are, however, almost solely preserved in the form of ard marks on surfaces sealed beneath barrows or layers of aeolian sand. Only one period in prehistory shows coherent traces revealing how field systems were formed and how they fitted into the landscape. During the course of the Late Bronze Age (1000-500 BC), a system of cultivation was introduced over large parts of NW Europe in which the individual fields or plots were separated from one another by low earthen banks and terrace edges or lynchets. These field systems could extend over several hundred hectares.These cultivation systems appear primarily to have been in use between 500 BC and AD 200. Research into prehistoric field systems has a long tradition extending all the way back to the 1920s in England, The Netherlands and Denmark, whereas in NW Germany and on Gotland work took place during the 1970s, with the Baltic Countries being involved in the 1990s. Early research was directed in particular towards mapping the field systems which, at that time, lay untouched in agriculturally marginal areas such as heath and woodland.In Denmark, Gudmund Hatt was a pioneer in this field. During the course of several campaigns, especially during the 1930s, he recorded 120 occurrences of field systems, primarily on the heaths of Northern and Western Jutland. These were published in 1949 in his major work Oldtidsagre (i.e. Prehistoric Fields). His work was continued by Viggo Nielsen who recorded 200 field systems in the forests of Zealand and Bornholm, largely between 1953 and 1963. In the former Aarhus county, the record has subsequently been augmented by a systematic reconnaissance of the forests which took place between 1988 and 1992. Subsequently, this led to the extensive investigations of field systems at Alstrup Krat near Mariager. As early as the 1920s, English researchers were aware of the fact that both ploughed-down and preserved field systems were visible on aerial photographs. However, the method was first applied in Denmark, The Netherlands and NW Germany in the 1970s, leading to a several-fold increase in the number known localities. In Denmark, P.H. Sørensen recorded 447 field systems in the former Viborg and North Jutland counties alone. P. H. Sørensen has published a series of articles dealing with various aspects of aerial photography in relation to ancient field systems. For example, the colour and origin of the various soil marks, the shape and size of the plots, different types of field systems and the relationship with soil type. He has also published several surveys of individual field systems. A significant problem with P.H. Sørensen’s work relates to the very few published plans showing the field systems and to the fact that these are based exclusively on a single series of aerial photographs.The main aim of this article is to demonstrate the potential for mapping field systems on the basis of not one but several series of aerial photographs. This is done through the detailed survey and mapping of three individual field systems and access to a series of data sources with respect to the interpretation of information contained in the aerial photographs. 
These comprise an interpretation of the origin of soil marks of banks and lynchets and an evaluation of the degree to which this interpretation is influenced by subjectivity. It is beyond the scope of this investigation to locate the field systems within a settlement and landscape context. Sources and study area. In order to explore the problems and questions outlined above, three field systems were chosen in the central part of Himmerland: Skørbæk Hede, Gundersted and Store Binderup (fig. 1). This selection took place on the basis of an examination and assessment of almost all recorded field systems in Himmerland evident on several series of aerial photographs. The three field systems chosen are among those best preserved and also the most cohesive. Furthermore, all three have been mapped previously: Skørbæk Hede by Hatt on the basis of field survey, and the two others by P.H. Sørensen on the basis of aerial photographs. This provides the opportunity to evaluate any possible subjectivity in the procedure employed. Hatt makes a distinction between field boundary banks and lynchets. This opens up the possibility of evaluating how the two forms of boundary appear on aerial photographs. At Gundersted, Hatt cut two sections through boundary banks. These, together with sections from others of Hatt's excavations and more recent examples from the investigations at Alsing Krat, form the basis for an investigation of how soil marks arise and develop over time. In this investigation, use has also been made of historical maps in order to reveal the influence of historical cultivation on the presence/absence of soil marks. The earliest maps are from c. 1780. The primary source remains, however, the series of vertical aerial photographs. Access to the latter has become considerably easier in recent years. A large proportion is now accessible via various web portals, and recently an overview became available of the contents of private and public archives. For the purposes of this investigation, use has been made of scanned contact copies of aerial photograph series from 1954, 1961 and 1967. From digital archives, use has been made of aerial photographs from 1979 and 1981 and the orthophoto maps from 2007 and 2008, respectively. Digitalisation and rectification of aerial photographs. Previously, mapping on the basis of aerial photographs was a laborious process involving tracing paper and the transfer of features to topographic maps. The introduction of GIS has, however, eased the process considerably and has also made it easy to compare various map themes such as soil-type, land-use, and digital finds databases. Before mapping can commence, the aerial photograph must be scanned, rectified and geo-referenced. Rectification was carried out using the programme Airphoto, while geo-referencing and drawing in of the features were done in MapInfo. An example is shown in figure 2. Soil marks – how do they originate? In order to understand how the boundary banks and lynchets between plots appear as soil marks on the aerial photographs, it is necessary to examine how these boundaries were built up and also the influences to which they have been exposed from their creation and up until the time when they are visible on aerial photographs. Figures 3 and 4 show sections through two boundary banks at Gundersted. These were carried out by Hatt at the beginning of the 1930s, just prior to the area coming under cultivation again and 20-25 years before the first aerial photographs revealed pale traces of boundary banks. 
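The scan–rectify–geo-reference workflow mentioned above can be sketched in code. The study itself used Airphoto and MapInfo; the GDAL-based Python fragment below is only an illustrative substitute, and the file names, EPSG code and ground control points are hypothetical.

from osgeo import gdal, osr

# Open the scanned contact copy and attach ground control points (GCPs).
src = gdal.Open("skoerbaek_1954_scan.tif")  # hypothetical file name
ds = gdal.GetDriverByName("GTiff").CreateCopy("skoerbaek_1954_gcps.tif", src)

srs = osr.SpatialReference()
srs.ImportFromEPSG(25832)  # ETRS89 / UTM zone 32N, assumed suitable for Jutland

# Each GCP pairs a map coordinate (x, y, z) with a pixel/line position (values hypothetical).
gcps = [
    gdal.GCP(545200.0, 6297800.0, 0.0, 120.5, 340.2),
    gdal.GCP(546100.0, 6297750.0, 0.0, 1880.1, 355.9),
    gdal.GCP(546050.0, 6296900.0, 0.0, 1865.4, 2010.7),
    gdal.GCP(545250.0, 6296950.0, 0.0, 130.8, 1998.3),
]
ds.SetGCPs(gcps, srs.ExportToWkt())

# Rectify (warp) to a north-up GeoTIFF that can be overlaid on historical maps in a GIS.
gdal.Warp("skoerbaek_1954_georef.tif", ds, dstSRS="EPSG:25832", resampleAlg="bilinear")
ds = None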
As the area had not been cultivated since the Iron Age, the stratigraphy is the result of natural soil-formation processes: a podsol has been formed, comprising a heath mor layer uppermost, beneath this a bleached sand layer and an iron pan, and at the base the old cultivation layer and the topsoil core of the boundary bank, consisting of brown and grey sand. Ploughing of the boundary banks will, initially, not result in significant soil marks as the three uppermost layers are of equal thickness along the whole length of the section. A pale soil mark will, however, appear when the boundary bank has been levelled out and the plough begins to turn up material from the light topsoil core. This soil transport can in some instances continue for more than 70 years, but the soil marks will as a consequence also become wide and fragmented. This account of the processes leading to the appearance of the pale soil marks is completely different from the only other theory proposed in this respect, i.e. that of P.H. Sørensen. He describes a development involving three phases, beginning with the ploughing up of the bleached sand horizon which generates a pale soil-colour trace. Later in the development there is a shift to a dark trace, when the material in the topsoil core becomes ploughed up. In the final phase, the trace shifts again to a pale colour, when the plough begins to bring up the subsoil. However, these two sections show neither a bleached sand horizon nor a darker topsoil core. Furthermore, no colour changes have been observed at any of the localities. The fact that the boundary banks are apparent as pale soil marks is not due to ploughing up of the bleached sand layer but of the topsoil bank core. Ploughing down of the other boundary form, the terrace edge or lynchet, as shown in figure 5, will similarly result in the formation of a pale soil-colour trace through material being brought up from the pale topsoil core. P.H. Sørensen was also fully aware of this situation, and it can be confirmed by comparing Hatt's map of the Skørbæk Hede site, where a distinction is made between boundary banks and lynchets, with the soil marks apparent on the aerial photograph series Basic Cover 1954 (fig. 6). Dark vegetation marks and pale erosion marks. Almost all the soil marks that form a basis for the mapping of the three field systems appear pale in relation to the surroundings. There are, however, occasional exceptions to this rule in the form of dark marks in areas of heather heathland and newly-ploughed heath. On the aerial photograph of Skørbæk Hede from 1954, a few dark marks can be seen directly south of Trenddalen (fig. 6) which correspond with the results of Hatt's survey. These lie in an area which was cultivated between 1937 and 1954. In 1961, the area was taken out of cultivation and became covered with small trees. A corresponding phenomenon can be observed to the west of the settlement where the heather heathland was cultivated between 1954 and 1961 (fig. 7). These marks probably arise from the vegetation as a consequence of better growing conditions over the topsoil cores of the boundary banks. The fact that lynchets and boundary banks offer different growing conditions has been documented at Alstrup Krat where it could be seen that in several places anemones grew on the lynchets. 
Differences in the vegetation on the field surfaces and the boundary banks have also been observed on aerial photographs showing the scheduled examples of field systems at Lundby Hede and Øster Lem Hede. The final type of soil-colour trace to be dealt with here comprises the very pale patches that occur on both sides of Trenddalen at Skørbæk Hede and on the western margins of the field system at Gundersted. These could possibly be interpreted as ploughed-up deposits of aeolian sand, but this is not the case. By comparison with the topography and through stereoscopic viewing of the aerial photographs it becomes clear that these features are located on steeply sloping terrain and that they are due to ploughing up of the sandy subsoil. They become both larger and more pronounced with time as more and more subsoil sand is progressively eroded out due to ploughing (figs. 6, 7, 8 and 9). The influence of historic cultivation on soil marks. The fact that Hatt could still see boundary banks and lynchets in the landscape during his investigations in the 1930s was of course due to these areas not having been ploughed since they were abandoned at some time during the Iron Age. The Royal Danish Academy of Sciences and Letters' conceptual map from the end of the 18th century shows that 30% of Himmerland was covered by heath, 42% was cultivated, 21% lay as meadow and bog and only 4% was covered by woodland (fig. 1). By comparing the identified field systems with the heath areas on the maps, an idea can be gained of the duration of cultivation and how it has influenced the soil marks. Correspondingly, by comparing plans showing soil marks with the cultivated area shown on the conceptual map, it is possible to investigate whether cultivation, presumably continuous here since the 12th century, has erased traces of field systems dating from the Early Iron Age. Plates I-III show combined plans of soil marks from boundary banks, lynchets and recorded barrows at the three localities. The ordnance maps from the 1880s have been chosen as a background, showing contour lines, land use and wetland areas, and the cultivated areas have been added from the conceptual map. At both Gundersted and Skørbæk Hede, there are clearly no soil marks in the areas marked as cultivated on the conceptual map. Conversely, the immediately adjacent heath areas show many coherent traces. On this basis, it must be assumed that the field systems from the Early Iron Age also once extended into areas shown as cultivated on the conceptual map but that the long-term cultivation has apparently erased any trace of them. It should, however, be mentioned that Lis Helles Olesen's investigations in NW Jutland only reveal a slight preponderance of field systems located on the old heath areas, so there may well be regional differences. The original total extent of the field systems is of course difficult to assess, but the field system at Store Binderup provides an idea of the order of magnitude. This field system is apparent as a well-defined topographic unit surrounded by wetland areas; the latter are shown on the conceptual map to be completely covered by heath. The field system extends over c. 75% of the cultivable area. In order to examine the influence of modern cultivation on the clarity of the soil marks, plans showing traces of the boundary banks have been compared with a series of historical maps. In general, the soil marks at all three localities appear clearest in areas which were cultivated latest. 
Former heath areas completely lacking in soil marks have probably never been cultivated. The last 50 years of cultivation with large agricultural machinery has had a dramatic effect on the soil marks. On figures 7, 8 and 9, clear evidence of ploughing out can be seen, whereby the soil marks in several places increase from 5 to 9 m in width. The negative effect of long-term cultivation on soil marks documented here only applies to pale soil marks on sandy soils. A number of field systems are apparent as dark soil marks, the visibility of which does not appear to be affected to the same extent by long-term cultivation. These make up only 3% of those recorded by P.H. Sørensen, and no sections through boundary banks are available from any of these field systems. Comparison of maps produced by field survey and from aerial photographs. Every map expresses an interpretation of what has been observed. This also applies of course to both Hatt's mapping of the field systems on the ground in the 1930s and the subsequent mapping conducted on the basis of aerial photographs. Quality and credibility are, however, increased considerably if the features observed can be confirmed by several sources or several researchers, reducing the subjective aspect to a minimum. On figures 10 and 11, the author's plan of Skørbæk Hede based on aerial photographs is compared with the results of Hatt's field survey. There is no doubt whatsoever that the aerial photographs are better able to show the overall extent of the field system. Conversely, the resulting plan is less detailed than Hatt's map. In a few cases, however, sub-divisions of the fields are seen on the aerial photographs which Hatt did not record in his survey (figs. 8-9). In order to investigate subjectivity in the interpretation of the aerial photographs, a comparison has been made between the author's and P.H. Sørensen's plans of the field systems at Gundersted and Store Binderup (figs. 12, 13 and 14). Good agreement can be seen in the interpretation of the soil marks apparent on the aerial photographs of both localities. This suggests that the subjective aspect of the interpretational process is not a major problem. Evaluation of the method's range with respect to studies of the agrarian landscape. Aerial photographs encompass a great research potential relative to studies of the arable landscape during the Late Bronze Age and Early Iron Age. They are the only source available with respect to mapping the morphology and extent of the field systems, with the exception of the few tangible remains which still exist in woodland and on heaths. Field systems are particularly important in a cultural-historical context because they constitute the sole example from prehistory of the appearance of a total integrated cultivation system and how it was adapted to the landscape. The information contained on the aerial photographs, particularly in the form of pale soil marks resulting from the exposure or ploughing-up of the topsoil core of the boundary banks and lynchets, is a credible source relative to the mapping of the morphology and extent of field systems. Comparison between the maps and plans produced by several researchers does not give cause to perceive the interpretation of the information on the aerial photographs as being particularly subjective. 
On the contrary, very good agreement can be seen between these interpretations. In a mapping exercise, use should be made of a number of different series of vertical aerial photographs, as this provides the most detailed picture of the morphology of the field systems. A very significant source of error has been identified which must be taken into account when mapping the extent of the field systems, i.e. cultivation during historical times. In areas that were cultivated prior to the enclosure movement, i.e. in the very great majority of cases presumably since the 12th century, it cannot be expected to find pale soil marks. Long-term cultivation and the consequent mixing of the upper soil layers have erased most traces of boundary banks and lynchets. Renewed cultivation within the last 100-150 years, conversely, appears only to have had a marginal effect on the occurrence of soil marks. As mentioned above there can, however, be marked regional differences in the influence of historical cultivation on the clarity and degree of preservation of the soil marks. This is an aspect it will be interesting to study in more detail in the future. Michael Vinter, Moesgård Museum
APA, Harvard, Vancouver, ISO, and other styles
26

Kosteniuk, Liudmyla. "Features of the hydrological regime and channel processes on the Iltsya river (Chornyy Cheremosh basin)." Scientific Herald of Chernivtsi University. Geography, no. 824 (January 30, 2020): 37–47. http://dx.doi.org/10.31861/geo.2020.824.37-47.

Full text
Abstract:
This publication analyzes the features of the hydrological regime and channel processes of the Iltsya River, based on regular observations and on data collected during an expedition trip in June 2019. The work presents schemes of the basin distribution and geomorphological zoning of the studied object, curves relating water discharge to water level (Q = f(H)), graphs of the course of maximum, average and minimum water levels, and cross-sections at key sites. Channel formation in natural watercourses is a complex and many-sided process which is closely connected with the natural features of the basin territory. The main factors of the natural channel process are the geological structure of the area, sediment flow and grain size. These factors are key; however, channel formation is also influenced by additional factors that are temporary or local in nature. At the same time, the influence of the anthropogenic factor should not be forgotten: for the basins of small rivers its effects can sometimes even dominate, for a time, over the main factors mentioned above. All these factors not only affect the channel process but also interact with each other in complex ways. The geological structure, including the lithology of rocks, as well as the relief of the territory, have a direct impact on the shape of the valley, the longitudinal profile, the composition of channel-forming sediments, and hence the stability of the channel. Water runoff is the main active factor that depends on the hydrological regime of the river and determines its water content and size. The object of our study is the Iltsya river, a small left tributary of the Chornyy Cheremosh, which is characterized by specific geological conditions and therefore differs significantly from other small rivers in the region, including neighboring tributaries of the main river (Chornyy Cheremosh). The second feature of the studied river is that the lower part of its basin is located within the Vorokhta-Putilsky ancient terraced lowlands, while the sources are formed within the Pokutsko-Bukovynian Carpathians, which in turn determines a certain specificity in the formation of its channel. This geological and geomorphological feature of the Iltsya river basin distinguishes it among numerous similar small rivers of this region of the Ukrainian Carpathians. Summarizing all the above, we draw several conclusions. The basin of the Iltsya river is currently little studied, both in terms of hydrological regime and channel processes and in terms of geomorphology, although it is in fact quite interesting and atypical, which raises many questions that require more detailed study. The presence of a long series of observations allows us to identify a general tendency towards gradual incision of the channel in the area of the gauging station, although of low intensity. Visual inspection showed more manifestations of horizontal transformations, associated with the movement of alluvial ridges and erosion of the banks. The most distinctive feature of the Iltsya river basin is the polymorphic formation of its channel system: within the Vorokhta-Putil lowlands the Iltsya and Velykyi Rosysh rivers occupy the wide valley of an ancient Pliocene river, and this has the greatest influence on the nature of the riverbeds within its limits.
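The Q = f(H) relation mentioned above is commonly represented by a power-law rating curve Q = a(H - H0)^b fitted to paired stage and discharge measurements. The sketch below shows one way such a curve might be fitted; the stage-discharge pairs are hypothetical, not measurements from the Iltsya river, and the power-law form is a standard assumption rather than the author's exact method.

import numpy as np
from scipy.optimize import curve_fit

H = np.array([0.42, 0.55, 0.63, 0.78, 0.95, 1.20])   # stage, m (hypothetical)
Q = np.array([0.35, 0.80, 1.20, 2.10, 3.60, 6.50])   # discharge, m^3/s (hypothetical)

def rating(h, a, h0, b):
    # Power-law rating curve; clipping keeps the base positive during fitting.
    return a * np.clip(h - h0, 1e-6, None) ** b

params, _ = curve_fit(rating, H, Q, p0=[5.0, 0.2, 1.5], maxfev=10000)
a, h0, b = params
print(f"Q = {a:.2f} * (H - {h0:.2f})^{b:.2f}")

# Predicted discharge for an observed stage of 1.05 m:
print(f"{rating(1.05, *params):.2f} m^3/s")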
APA, Harvard, Vancouver, ISO, and other styles
27

Cortesi, U., J. C. Lambert, C. De Clercq, G. Bianchini, T. Blumenstock, A. Bracher, E. Castelli, et al. "Geophysical validation of MIPAS-ENVISAT operational ozone data." Atmospheric Chemistry and Physics Discussions 7, no. 3 (May 7, 2007): 5805–939. http://dx.doi.org/10.5194/acpd-7-5805-2007.

Full text
Abstract:
Abstract. The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), on-board the European ENVIronmental SATellite (ENVISAT) launched on 1 March 2002, is a middle infrared Fourier Transform spectrometer measuring the atmospheric emission spectrum in limb sounding geometry. The instrument is capable of retrieving the vertical distribution of temperature and trace gases, aiming at the study of climate and atmospheric chemistry and dynamics, and at applications to data assimilation and weather forecasting. MIPAS operated in its standard observation mode for approximately two years, from July 2002 to March 2004, with scans performed at nominal spectral resolution of 0.025 cm−1 and covering the altitude range from the mesosphere to the upper troposphere with relatively high vertical resolution (about 3 km in the stratosphere). Only reduced spectral resolution measurements have been performed subsequently. MIPAS data were re-processed by ESA using updated versions of the Instrument Processing Facility (IPF v4.61 and v4.62) and provided a complete set of level-2 operational products (geo-located vertical profiles of temperature and volume mixing ratio of H2O, O3, HNO3, CH4, N2O and NO2) with quasi continuous and global coverage in the period of MIPAS full spectral resolution mission. In this paper, we report a detailed description of the validation of MIPAS-ENVISAT operational ozone data, that was based on the comparison between MIPAS v4.61 (and, to a lesser extent, v4.62) O3 VMR profiles and a comprehensive set of correlative data, including observations from ozone sondes, ground-based lidar, FTIR and microwave radiometers, remote-sensing and in situ instruments on-board stratospheric aircraft and balloons, concurrent satellite sensors and ozone fields assimilated by the European Center for Medium-range Weather Forecasting. A coordinated effort was carried out, using common criteria for the selection of individual validation data sets, and similar methods for the comparisons. This enabled merging the individual results from a variety of independent reference measurements of proven quality (i.e., well characterised error budget) into an overall evaluation of MIPAS O3 data quality, having both statistical strength and the widest spatial and temporal coverage. Collocated measurements from ozone sondes and ground-based lidar and microwave radiometers of the Network for the Detection of Atmospheric Composition Change (NDACC) were selected to carry out comparisons with time series of MIPAS O3 partial columns and to identify groups of stations and time periods with a uniform pattern of ozone differences, that were subsequently used for a vertically resolved statistical analysis. The results of the comparison are classified according to synoptic and regional systems and to altitude intervals, showing a generally good agreement within the comparison error bars in the upper and middle stratosphere. Significant differences emerge in the lower stratosphere and are only partly explained by the larger contributions of horizontal and vertical smoothing differences and of collocation errors to the total uncertainty. Further results obtained from a purely statistical analysis of the same data set from NDACC ground-based lidar stations, as well as from additional ozone soundings at middle latitudes and from NDACC ground-based FTIR measurements, confirm the validity of MIPAS O3 profiles down to the lower stratosphere, with evidence of larger discrepancies at the lowest altitudes. 
The validation against O3 VMR profiles using collocated observations performed by other satellite sensors (SAGE II, POAM III, ODIN-SMR, ACE-FTS, HALOE, GOME) and ECMWF assimilated ozone fields leads to consistent results, that are to a great extent compatible with those obtained from the comparison with ground-based measurements. Excellent agreement in the full vertical range of the comparison is shown with respect to collocated ozone data from stratospheric aircraft and balloon instruments, that was mostly obtained in very good spatial and temporal coincidence with MIPAS scans. This might suggest that the larger differences observed in the upper troposphere and lowermost stratosphere with respect to collocated ground-based and satellite O3 data are only partly due to a degradation of MIPAS data quality. They should be rather largely ascribed to the natural variability of these altitude regions and to other components of the comparison errors. By combining the results of this large number of validation data sets we derived a general assessment of MIPAS v4.61 and v4.62 ozone data quality. A clear indication of the validity of MIPAS O3 vertical profiles is obtained for most of the stratosphere, where the mean relative difference with the individual correlative data sets is always lower than 10%. Furthermore, these differences always fall within the combined systematic error (from 1 hPa to 50 hPa) and the standard deviation is fully consistent with the random error of the comparison (from 1 hPa to ~30–40 hPa). A degradation in the quality of the agreement is generally observed in the lower stratosphere and upper troposphere, with biases up to 25% at 100 hPa and standard deviation of the global mean differences up to three times larger than the combined random error in the range 50–100 hPa. The larger differences observed at the bottom end of MIPAS retrieved profiles can be associated, as already noticed, to the effects of stronger atmospheric gradients in the UTLS that are perceived differently by the various measurement techniques. However, further components that may degrade the results of the comparison at lower altitudes can be identified as potentially including cloud contamination, which is likely not to have been fully filtered using the current settings of the MIPAS cloud detection algorithm, and in the linear approximation of the forward model that was used for the climatological estimate of systematic error components. The latter, when affecting systematic contributions with a random variability over the spatial and temporal scales of global averages, might result in an underestimation of the random error of the comparison and add up to other error sources, such as the possible underestimates of the p and T error propagation based on the assumption of a 1 K and 2% uncertainties, respectively, on MIPAS temperature and pressure retrievals. At pressure lower than 1 hPa, only a small fraction of the selected validation data set provides correlative ozone data of adequate quality and it is difficult to derive quantitative conclusions about the performance of MIPAS O3 retrieval for the topmost layers.
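The comparison statistics described above (a mean relative difference checked against the combined systematic error, and the standard deviation of the differences checked against the combined random error) can be illustrated with a minimal sketch; the profile arrays and error figures below are synthetic placeholders, not MIPAS or NDACC data.

import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_levels = 200, 15
reference = 2.0 + rng.normal(0.0, 0.1, (n_pairs, n_levels))              # ppmv, synthetic
satellite = reference * (1.0 + rng.normal(0.02, 0.05, (n_pairs, n_levels)))

rel_diff = 100.0 * (satellite - reference) / reference                   # percent
mean_rel_diff = rel_diff.mean(axis=0)                                    # bias profile
std_rel_diff = rel_diff.std(axis=0, ddof=1)                              # spread profile

sigma_sat, sigma_ref = 4.0, 3.0                                          # assumed random errors, %
combined_random_error = np.hypot(sigma_sat, sigma_ref)

for lev in range(n_levels):
    flag = "consistent" if std_rel_diff[lev] <= combined_random_error else "excess spread"
    print(f"level {lev:2d}: bias {mean_rel_diff[lev]:5.1f}%  std {std_rel_diff[lev]:4.1f}%  ({flag})")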
APA, Harvard, Vancouver, ISO, and other styles
28

Cortesi, U., J. C. Lambert, C. De Clercq, G. Bianchini, T. Blumenstock, A. Bracher, E. Castelli, et al. "Geophysical validation of MIPAS-ENVISAT operational ozone data." Atmospheric Chemistry and Physics 7, no. 18 (September 21, 2007): 4807–67. http://dx.doi.org/10.5194/acp-7-4807-2007.

Full text
Abstract:
Abstract. The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), on-board the European ENVIronmental SATellite (ENVISAT) launched on 1 March 2002, is a middle infrared Fourier Transform spectrometer measuring the atmospheric emission spectrum in limb sounding geometry. The instrument is capable to retrieve the vertical distribution of temperature and trace gases, aiming at the study of climate and atmospheric chemistry and dynamics, and at applications to data assimilation and weather forecasting. MIPAS operated in its standard observation mode for approximately two years, from July 2002 to March 2004, with scans performed at nominal spectral resolution of 0.025 cm−1 and covering the altitude range from the mesosphere to the upper troposphere with relatively high vertical resolution (about 3 km in the stratosphere). Only reduced spectral resolution measurements have been performed subsequently. MIPAS data were re-processed by ESA using updated versions of the Instrument Processing Facility (IPF v4.61 and v4.62) and provided a complete set of level-2 operational products (geo-located vertical profiles of temperature and volume mixing ratio of H2O, O3, HNO3, CH4, N2O and NO2) with quasi continuous and global coverage in the period of MIPAS full spectral resolution mission. In this paper, we report a detailed description of the validation of MIPAS-ENVISAT operational ozone data, that was based on the comparison between MIPAS v4.61 (and, to a lesser extent, v4.62) O3 VMR profiles and a comprehensive set of correlative data, including observations from ozone sondes, ground-based lidar, FTIR and microwave radiometers, remote-sensing and in situ instruments on-board stratospheric aircraft and balloons, concurrent satellite sensors and ozone fields assimilated by the European Center for Medium-range Weather Forecasting. A coordinated effort was carried out, using common criteria for the selection of individual validation data sets, and similar methods for the comparisons. This enabled merging the individual results from a variety of independent reference measurements of proven quality (i.e. well characterized error budget) into an overall evaluation of MIPAS O3 data quality, having both statistical strength and the widest spatial and temporal coverage. Collocated measurements from ozone sondes and ground-based lidar and microwave radiometers of the Network for the Detection Atmospheric Composition Change (NDACC) were selected to carry out comparisons with time series of MIPAS O3 partial columns and to identify groups of stations and time periods with a uniform pattern of ozone differences, that were subsequently used for a vertically resolved statistical analysis. The results of the comparison are classified according to synoptic and regional systems and to altitude intervals, showing a generally good agreement within the comparison error bars in the upper and middle stratosphere. Significant differences emerge in the lower stratosphere and are only partly explained by the larger contributions of horizontal and vertical smoothing differences and of collocation errors to the total uncertainty. Further results obtained from a purely statistical analysis of the same data set from NDACC ground-based lidar stations, as well as from additional ozone soundings at middle latitudes and from NDACC ground-based FTIR measurements, confirm the validity of MIPAS O3 profiles down to the lower stratosphere, with evidence of larger discrepancies at the lowest altitudes. 
The validation against O3 VMR profiles using collocated observations performed by other satellite sensors (SAGE II, POAM III, ODIN-SMR, ACE-FTS, HALOE, GOME) and ECMWF assimilated ozone fields leads to consistent results, that are to a great extent compatible with those obtained from the comparison with ground-based measurements. Excellent agreement in the full vertical range of the comparison is shown with respect to collocated ozone data from stratospheric aircraft and balloon instruments, that was mostly obtained in very good spatial and temporal coincidence with MIPAS scans. This might suggest that the larger differences observed in the upper troposphere and lowermost stratosphere with respect to collocated ground-based and satellite O3 data are only partly due to a degradation of MIPAS data quality. They should be rather largely ascribed to the natural variability of these altitude regions and to other components of the comparison errors. By combining the results of this large number of validation data sets we derived a general assessment of MIPAS v4.61 and v4.62 ozone data quality. A clear indication of the validity of MIPAS O3 vertical profiles is obtained for most of the stratosphere, where the mean relative difference with the individual correlative data sets is always lower than ±10%. Furthermore, these differences always fall within the combined systematic error (from 1 hPa to 50 hPa) and the standard deviation is fully consistent with the random error of the comparison (from 1 hPa to ~30–40 hPa). A degradation in the quality of the agreement is generally observed in the lower stratosphere and upper troposphere, with biases up to 25% at 100 hPa and standard deviation of the global mean differences up to three times larger than the combined random error in the range 50–100 hPa. The larger differences observed at the bottom end of MIPAS retrieved profiles can be associated, as already noticed, to the effects of stronger atmospheric gradients in the UTLS that are perceived differently by the various measurement techniques. However, further components that may degrade the results of the comparison at lower altitudes can be identified as potentially including cloud contamination, which is likely not to have been fully filtered using the current settings of the MIPAS cloud detection algorithm, and in the linear approximation of the forward model that was used for the a priori estimate of systematic error components. The latter, when affecting systematic contributions with a random variability over the spatial and temporal scales of global averages, might result in an underestimation of the random error of the comparison and add up to other error sources, such as the possible underestimates of the p and T error propagation based on the assumption of a 1 K and 2% uncertainties, respectively, on MIPAS temperature and pressure retrievals. At pressure lower than 1 hPa, only a small fraction of the selected validation data set provides correlative ozone data of adequate quality and it is difficult to derive quantitative conclusions about the performance of MIPAS O3 retrieval for the topmost layers.
APA, Harvard, Vancouver, ISO, and other styles
29

KOCJANČIČ, KLEMEN. "REVIEW, ON THE IMPORTANCE OF MILITARY GEOSCIENCE." CONTEMPORARY MILITARY CHALLENGES 2022, no. 24/3 (September 30, 2022): 107–11. http://dx.doi.org/10.33179/bsv.99.svi.11.cmc.24.3.rew.

Full text
Abstract:
In 2022, the Swiss branch of the international publishing house Springer published a book, a collection of papers entitled Military Geoscience: A Multifaceted Approach to the Study of Warfare. It consists of selected contributions by international researchers in the field of military geoscience, presented at the 13th International Conference on Military Geosciences, held in Padua in June 2019. The first paper is by the editors, Aldin Bondesan and Judy Ehlen, and provides a brief overview of the concept of military geoscience as an application of geology and geography to the military domain, and of the historical development of the discipline. It should also be pointed out that the International Conferences on Military Geosciences (ICMG), which organises this biennial international conference, has over the past two decades also covered other aspects, such as conflict archaeology. The publication is further divided into three parts. The first part comprises three contributions covering military geoscience up to the 20th century. The first paper, by Chris Fuhriman and Jason Ridgeway, provides insights into the Battle of Marathon through topography visualisation. The geography of the Marathon field, the valley between Mt. Cotroni and Mt. Agrieliki, allowed the Greek defenders to nullify the advantage of the Persian cavalry and archers, who were unable to develop their full potential. This is followed by a paper by Judy Ehlen, who explores the geological background of the Anglo-British coastal fortification system along the English Channel, focusing on the Portsmouth area of Hampshire. The author thus points out that changes in artillery technology and naval tactics between the 16th and 19th centuries necessitated changes in the construction of coastal fortifications, both in terms of the form of the fortifications and the method of construction, including the choice of basic building materials, as well as the siting of the fortifications in space. The next article is then dedicated to the Monte Baldo Fortress in north-eastern Italy, between Lake Garda and the Adige River. In his article, Francesco Premi analyses the presence of the fortress in the transition area between the Germanic world and the Mediterranean, and the importance of this part of Italy (at the southernmost part of the pre-Alpine mountains) in military history, as reflected in the large number of important military and war relics and monuments. The second part of the book, which is the most comprehensive, focuses on the two World Wars and consists of nine papers. The first paper in this part provides an analysis of the operation of trench warfare training camps in the Aube region of France. The group of authors, Jérôme Brenot, Yves Desfossés, Robin Perarnau, Marc Lozano and Alain Devos, initially note that static warfare training camps have not received much attention so far. Using aerial photography of the region dating from 1948 and surviving World War II photographic material, they identified some 20 sites where soldiers of the Entente forces were trained for front-line service in trenches. Combined archaeological and sociological fieldwork followed, confirming the presence of these camps, both through preserved remains and the collective memory. The second paper in this volume also concerns a survey of trenches, located in the Venezia Tridentina Veneto area in northern Italy. 
The authors Luigi Magnini, Giulia Rovera, Armando De Guio and Giovanni Azzalin thus use digital classification methods and archaeology to determine how Italian and Austro-Hungarian First World War trenches have been preserved or, in case they have disappeared, why this was the case, both from the point of view of the natural features as well as from the anthropological point of view of the restoration of the pre-war settings. The next paper, by Paolo Macini and Paolo Sammuri, analyses the activities of the miners and pioneers of the Italian Corps of Engineers during the First World War, in particular with regard to innovative approaches to underground mine warfare. In the Dolomites, the Italian engineers, using various listening devices, drilling machinery and geophysical methods, developed a system for drilling underground mine chambers, which they intended to use and actually used to destroy parts of Austro-Hungarian positions. The paper by Elena Dai Prà, Nicola Gabellieri and Matteo Boschian Bailo concerns the Italian Army's operations during the First World War. It focuses on the use of tactical maps with emphasis on typological classification, the use of symbols, and digital cartography. The authors thus analysed the tactical maps of the Italian Third Army, which were being constantly updated by plotting the changes in positions and tactical movements of both sides. These changes were examined both in terms of the use of new symbols and the analysis of the movements. This is followed by a geographical presentation of the Italian Army's activities during the First World War. The authors Paolo Plini, Sabina Di Franco and Rosamaria Salvatori have thus collected 21,856 toponyms by analysing documents and maps. The locations were also geolocated to give an overview of the places where the Italian Army operated during the First World War. The analysis initially revealed the complexity of the events on the battlefields, but also that the sources had misidentified the places of operation, as toponyms were misidentified, especially in the case of homonyms. Consequently, the area of operation was misidentified as well. In this respect, the case of Vipava was highlighted, which can refer to both a river and a settlement. The following paper is the first on the Second World War. It is the article by H. A. P. Smith on Italian prisoners of war in South Africa. The author outlines the circumstances in which Italian soldiers arrived to and lived in the southern African continent, and the contribution they made to the local environment and the society, and the remnants of their presence preserved to the present day. In their article, William W. Doe III and Michael R. Czaja analyse the history, geography and significance of Camp Hale in the state of Colorado. In doing so, they focus on the analysis of the military organization and its impact on the local community. Camp Hale was thus the first military installation of the U.S. Army, designated to test and train U.S. soldiers in mountain and alpine warfare. It was here that the U.S. 10th Mountain Division was formed, which concluded its war path on Slovenian soil. The Division's presence in this former camp, which was in military use also after the war until 1965, and in the surrounding area is still visible through numerous monuments. This is followed by a paper by Hermann Häusler, who deals with German military geography and geology on the Eastern Front of the Second World War. 
A good year before the German attack on the Soviet Union, German and Austrian military geologists began an analysis of the topography, population and infrastructure of the European part of the Soviet Union, which led to a series of publications, including maps showing the suitability of the terrain for military operations. During the war, military geological teams then followed the frontline units and carried out geotechnical tasks such as water supply, construction of fortifications, supply of building materials for transport infrastructure, and analysis of the suitability of the terrain for all-terrain driving of tracked and other vehicles. The same author also authored a paper in the next chapter, this time focusing on the activities of German military geologists in the Adriatic area. Similarly to his first contribution, the author presents the work of military geologists in northern Italy and north-western Slovenia. He also focuses on the construction of fortification systems in northern Italy and presents the work of karst hunters in the Operational Zone of the Adriatic Littoral. Part 3 covers the 21st century with five different papers (chapters). The first paper by Alexander K. Stewart deals with the operations of the U.S. Army specialised teams in Afghanistan. These Agribusiness Development Teams (ADTs) carried out a specialised form of counter-guerrilla warfare in which they sought to improve the conditions for the development of local communities through agricultural assistance to the local population. In this way, they were also counteracting support for the Taliban. The author notes that, in the decade after the programme's launch, the project had only a 19% success rate. However, he stresses that such forms of civil-military cooperation should be present in future operations. The next chapter, by Francis A. Galgan, analyses the activities of modern pirates through military-geographical or geological methods. Pirates, who pose a major international security threat, are present in four regions of the world: South and South-East Asia, East Africa and the Gulf of Guinea. Building on the data on pirate attacks between 1997 and 2017, the author shows the temporal and spatial patterns of pirate activities, as well as the influence of the geography of coastal areas on their activities. This is followed by another chapter with a maritime topic. Mark Stephen Blaine discusses the geography of territorial disputes in the South China Sea. Through a presentation of international law, the strategic importance of the sea (sea lanes, natural resources) and the overlapping territorial claims of China, Taiwan, Malaysia, Vietnam and Indonesia, the author shows the increasing level of conflict in the area and calls for the utmost efforts to be made to prevent the outbreak of hostilities or war. M. H. Bulmer's paper analyses the Turkish Armed Forces' activities in Syria from the perspective of military geology. The author focuses on the Kurdish forces' defence projects, which mainly involved the construction of gun trenches, observation towers or points, tunnels and underground facilities, as well as on the Turkish armed forces' actions against this military infrastructure. This involved both mountain and underground warfare activities. While these defensive infrastructures proved to be successful during the guerrilla warfare period, direct Turkish attacks on these installations demonstrated their vulnerability. 
The last chapter deals with the current operational needs and limitations of military geosciences from the perspective of the Austrian Armed Forces. Friedrich Teichmann points out that the global operational interest of states determines the need for accurate geo-data as well as geo-support in case of rapidly evolving requirements. In this context, geoscience must respond to new forms of threats, both asymmetric and cyber, at a time when resources for geospatial services are limited, which also requires greater synergy and an innovative approach to finding solutions among multiple stakeholders. This also includes increased digitisation, including the use of satellite and other space technologies. The number of chapters in the publication illustrates the breadth and depth of military geoscience, as well as the relevance of geoscience to past, present and future conflicts or military operations and missions. The current military operations in Ukraine demonstrate the need to take into account the geo-geological realities of the environment and that terrain remains one of the decisive factors for success on the battlefield, irrespective of the technological developments in military engineering and technology. This can also be an incentive for Slovenian researchers and the Slovenian Armed Forces to increase research activities in the field of military geosciences, especially in view of the rich military and war history in the geographically and geologically diverse territory of Slovenia.
APA, Harvard, Vancouver, ISO, and other styles
30

Marti, Beatrice, Andrey Yakovlev, Dirk Nikolaus Karger, Silvan Ragettli, Aidar Zhumabaev, Abdul Wakil Wakil, and Tobias Siegfried. "CA-discharge: Geo-Located Discharge Time Series for Mountainous Rivers in Central Asia." Scientific Data 10, no. 1 (September 4, 2023). http://dx.doi.org/10.1038/s41597-023-02474-8.

Full text
Abstract:
Abstract. We present a collection of 295 gauge locations in mountainous Central Asia with norm discharge as well as time series of river discharge from 135 of these locations collected from hydrological yearbooks in Central Asia. Time series have monthly, 10-day and daily temporal resolution and are available for different durations. A collection of third-party data allows basin characterization for all gauges. The time series data is validated using standard quality checks. Norm discharge is validated against literature values and by using a water balance approach. The novelty of the data consists in the combination of discharge time series and gauge locations for mountainous rivers in Central Asia, which is not available anywhere else. The geo-located discharge time series can be used for water balance modelling and training of forecast models for river runoff in mountainous Central Asia.
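A rough illustration of the two validation ideas mentioned (computing norm discharge from a series and checking it against a water balance) is sketched below; the file name, column names, catchment area and climate values are hypothetical and are not taken from the published dataset.

import pandas as pd

# Hypothetical monthly discharge series for one gauge.
df = pd.read_csv("gauge_16936_monthly.csv", parse_dates=["date"])
norm_q = df["discharge_m3s"].mean()                                  # norm discharge, m^3/s

# Water-balance plausibility check with assumed annual means: Q ~ (P - ET) * A.
precip_mm, et_mm, area_km2 = 650.0, 280.0, 2440.0
runoff_m3s = (precip_mm - et_mm) / 1000.0 * area_km2 * 1e6 / (365.25 * 86400)

print(f"norm discharge from series: {norm_q:.1f} m^3/s")
print(f"water-balance estimate:     {runoff_m3s:.1f} m^3/s")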
APA, Harvard, Vancouver, ISO, and other styles
31

Kim, Byung-Ho, Khawar Rehman, Yong-Sik Cho, and Seung Ho Hong. "Tsunami waveform forecasting at cooling water intakes of nuclear reactors with deep learning model." Physics of Fluids 35, no. 7 (July 1, 2023). http://dx.doi.org/10.1063/5.0156882.

Full text
Abstract:
The Fukushima nuclear disaster highlights the importance of accurate and fast predictions of tsunami hazard to critical coastal infrastructure to devise mitigation strategies in both long-term and real-time events. Recently, deep learning models allowed us to make accurate and rapid forecasts on high dimensional, non-linear, and non-stationary time series data such as that associated with tsunami waveforms. Thus, this study uses a one-dimensional convolutional neural network (CNN) model to predict waveforms at cooling water intakes of nuclear power plant at Uljin in South Korea. The site is particularly vulnerable to tsunamis originating from the west coast of Japan. Data for the CNN model are generated by numerical simulation of 1107 cases of tsunami propagation initiating from fault locations. The time series data for waveforms were predicted at 13 virtual gauges located in the nearshore region of the study area, 10 of which were classified as observation points and 3 gauges situated at the cooling water intakes were categorized as target locations. The performance assessment of the model's forecasts showed excellent results with rapid predictions. The study highlights two main points: (i) deep learning models can be based on sparse waveform in situ data (such as that recorded by deep-ocean assessment and reporting of tsunamis or any locally operating monitoring stations for ocean waves) or numerically simulated data at only a few points along the dominant wave propagation direction, and (ii) deep learning models are fully capable of accurate and fast predictions of complex geo-hazards that prompt rapid emergency response to coordinate mitigation efforts.
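The abstract describes a one-dimensional CNN that maps waveforms observed at ten nearshore gauges to waveforms at three cooling-water-intake gauges. The PyTorch fragment below is a minimal sketch of that kind of architecture; the layer sizes, kernel widths and the 720-step input length are assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    def __init__(self, n_obs=10, n_target=3):
        super().__init__()
        # Each gauge is treated as one input channel; the output channels are the target gauges.
        self.net = nn.Sequential(
            nn.Conv1d(n_obs, 64, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(64, n_target, kernel_size=1),
        )

    def forward(self, x):            # x: (batch, n_obs, time_steps)
        return self.net(x)           # (batch, n_target, time_steps)

model = WaveformCNN()
x = torch.randn(8, 10, 720)          # e.g. 720 time steps of simulated free-surface elevation
y_hat = model(x)
print(y_hat.shape)                   # torch.Size([8, 3, 720])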
APA, Harvard, Vancouver, ISO, and other styles
32

Miller, Aaron C., Ryan A. Peterson, Inder Singh, Sarah Pilewski, and Philip M. Polgreen. "Improving State-Level Influenza Surveillance by Incorporating Real-Time Smartphone-Connected Thermometer Readings Across Different Geographic Domains." Open Forum Infectious Diseases, October 30, 2019. http://dx.doi.org/10.1093/ofid/ofz455.

Full text
Abstract:
Abstract Background Timely estimates of influenza activity are important for clinical and public health practice. However, traditional surveillance sources may be associated with reporting delays. Smartphone-connected thermometers can capture real-time illness symptoms, and these geo-located readings may help improve state-level forecast accuracy. Methods Temperature recordings were collected from smart thermometers and an associated mobile phone application. Using temperature recordings, we developed forecasting models of real-time state-reported influenza-like illness (ILI) 2 weeks before the availability of published reports. We compared time-series models that incorporated thermometer readings at various levels of spatial aggregation and evaluated out-of-sample model performance in an adaptive manner comparing each model to baseline models without thermometer information. Results More than 12 million temperature readings were recorded from over 500,000 devices from August 30, 2015 to April 15, 2018. Readings were voluntarily reported from anonymous device users, with potentially multiple users for a single device. We developed forecasting models of real-time outpatient ILI for 46 states with sufficient state-reported ILI data. Forecast accuracy improved considerably when information from thermometers was incorporated. On average, thermometer readings reduced the squared error of state-level forecasting by 43% during influenza season and more than 50% in many states. In general, best-performing models tended to result from incorporating thermometer information at multiple levels of spatial aggregation. Conclusion Local forecasts of current influenza activity, measured by outpatient ILI, can be improved by incorporating real-time information from mobile-devices. Information aggregated across neighboring states, regions, and the nation can lead to more reliable forecasts, benefiting local surveillance efforts.
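A toy version of the comparison reported above, forecasting ILI from its own lags with and without a real-time thermometer-derived fever signal and measuring the reduction in out-of-sample squared error, might look like the following; the data are synthetic stand-ins rather than the study's thermometer readings, and the simple linear model is only illustrative of the idea, not the authors' time-series specification.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 150
# Synthetic fever rate from connected thermometers (available in real time).
fever = np.clip(1.0 + 0.5 * np.sin(np.arange(weeks) / 8.0) + rng.normal(0, 0.1, weeks), 0.1, None)
ili = 0.8 * fever + rng.normal(0, 0.1, weeks)        # ILI tracks the fever signal contemporaneously

def build(with_fever):
    rows, targets = [], []
    for t in range(6, weeks):
        lags = ili[t - 5:t - 1]                      # four ILI lags ending 2 weeks before t (reporting delay)
        row = np.r_[lags, fever[t]] if with_fever else lags
        rows.append(row)
        targets.append(ili[t])
    return np.array(rows), np.array(targets)

split = 100
for with_fever in (False, True):
    X, y = build(with_fever)
    model = LinearRegression().fit(X[:split], y[:split])
    sse = ((model.predict(X[split:]) - y[split:]) ** 2).sum()
    print(f"with fever signal = {with_fever}: out-of-sample SSE = {sse:.3f}")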
APA, Harvard, Vancouver, ISO, and other styles
33

Ma, Haiping, Hui Zhang, Minjuan Li, Shanyi Wu, Pengtao Wang, Qian Wang, Jing Zhao, and Zhiqiang Ma. "Characteristics of the present crustal deformation in the Tibetan Plateau and its relationship with strong earthquakes." Open Geosciences 15, no. 1 (January 1, 2023). http://dx.doi.org/10.1515/geo-2022-0387.

Full text
Abstract:
Abstract To study the characteristics of the present crustal movement in the Tibetan Plateau and explore its relationship with strong earthquakes with magnitudes of 8 and above, the velocity field size was analyzed based on global positioning system (GPS) campaign observations, and the time series of site northward displacement and long baselines were discussed using continuous GPS observations. The results show that the velocity field size in the Tibetan Plateau decreases from southwest to north, northeast, and southeast, and the value of the velocity in the west is significantly greater than that in the east at the same latitude. The maximum value is located in the southwest and the minimum value is located in the east. The Wenchuan earthquake is located in the region of abrupt change, where the rate and the direction of the crustal movement are quite different. The crustal deformation extent is large in the region close to the seismic source before the earthquake, reflecting that the regional stress accumulation is fast and the time it requires is relatively short. However, the crustal deformation extent is relatively small in the region away from the seismic source before the earthquake, reflecting that the regional stress accumulation is slow and the time it requires is relatively long. The northward movement became significantly stronger after the Nepal MS 8.1 earthquake; the occurrence of this earthquake may have caused the unlocking of large-scale faults near the seismic source, which further intensified the northeastward subduction and collision of the Indian Plate. The compression of the Indian Plate against the Tibetan Plateau slowed down after the 2008 Wenchuan MS 8.0 earthquake and increased significantly after 2015, which boosted strain accumulation in the Tibetan Plateau; attention needs to be paid continuously to strong earthquake risk in this region.
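As a simple illustration of the kind of time-series quantities discussed (site displacement and baseline length between two GPS stations), the sketch below derives a baseline-length series from daily positions and fits a linear rate; the coordinates are synthetic and the station positions hypothetical.

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(3000) / 365.25                      # roughly eight years of daily solutions

# Synthetic local (east, north, up) positions in metres for two hypothetical sites.
site_a = np.column_stack([0.004 * years, 0.010 * years, np.zeros_like(years)])
site_b = np.column_stack([500000.0 + 0.002 * years, 200000.0 + 0.013 * years, np.zeros_like(years)])

baseline = np.linalg.norm(site_b - site_a, axis=1)
baseline += rng.normal(0.0, 0.003, size=baseline.shape)   # 3 mm daily scatter

rate, intercept = np.polyfit(years, baseline - baseline[0], 1)
print(f"baseline length change rate: {rate * 1000:.2f} mm/yr")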
APA, Harvard, Vancouver, ISO, and other styles
34

Souza, Roberto, Daniel B. Neill, Renato M. Assuncao, and Wagner Meira, Jr. "Identifying High-Risk Areas for Dengue Infection Using Mobility Patterns on Twitter." Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9754.

Full text
Abstract:
Objective: We develop new spatial scan models that use individuals' movement data, rather than a single location per individual, in order to identify areas with a high relative risk of infection by dengue disease. Introduction: Traditionally, surveillance systems for dengue and other infectious diseases locate each individual case by home address, aggregate these locations to small areas, and monitor the number of cases in each area over time. However, human mobility plays a key role in dengue transmission, especially due to the mosquito day-biting habit,1 and relying solely on individuals' residential address as a proxy for dengue infection ignores a multitude of exposures that individuals are subjected to during their daily routines. Residence locations may be a poor indicator of the actual regions where humans and infected vectors tend to interact more, and hence, provide little information for dengue prevention. The increasing availability of geolocated data in online platforms such as Twitter offers a unique opportunity: in addition to identifying diseased individuals based on the textual content, we can also follow them in time and space as they move on the map and model their movement patterns. Comparing the observed mobility patterns for case and control individuals can provide relevant information to detect localized regions with higher risk of dengue infection. Incorporating the mobility of individuals into risk modeling requires the development of new spatial models that can cope with this type of data in a principled way and efficient algorithms to deal with the ever-growing amount of data. We propose new spatial scan models and exploit geo-located data from Twitter to detect geographic clusters of dengue infection risk. Methods: As the spatial tracking of a large sample of infected and non-infected individuals is expensive and raises serious privacy issues, we instead analyze geo-located Twitter data (tweets), which is readily and publicly available. We identify "infected" individuals (cases) as those individuals who have at least one tweet classified as a current, personal experience with dengue. We note that, because of the incubation period and recovery time, infected Twitter users are likely to mention dengue in their tweets days after they are infected, and usually not at the location where the exposure (mosquito bite) occurred. Once we have identified cases and controls based on the textual content of the messages, we then compare the mobility patterns of the two groups. The key aspect of our method is that the input is a series of locations rather than a single location, such as the residence address, for each individual. The number of positions ni composing each mobility pattern can vary substantially between individuals i, and thus simple approaches like counting the total numbers of case and control tweets per location would be biased and inaccurate; moreover, individuals with larger numbers of tweets may be more likely to be identified as a case. Nevertheless, our assumption is that the entire mobility patterns will be informative of the riskier areas if we compare the spatial patterns from infected and non-infected individuals. We have developed two new spatial scan methods (unconditional and conditional spatial logistic models) which correctly account for the multiple, varying number of spatial locations per individual. 
Both models use the proportion of an individual's tweets in each location as an estimate of the proportion of time spent in that location; the estimate is biased by individuals' propensity to tweet in different locations, but is expected to capture the large amounts of time spent at frequently visited locations. Our unconditional model controls the variable contribution of each individual through a non-parametric estimation of the odds of being a case and has a semi-parametric logistic specification. When estimating this offset becomes a complex task, we propose a case-control matching strategy in the conditional model to control for the number of tweets ni. Based on the subset scan approach,3 we search for localized regions where the infection risk is substantially higher than in the rest of the map by maximizing a log-likelihood ratio statistic over subsets of the data. Results: We demonstrate the detection of high-risk clusters for dengue infection using Twitter data we collected in Brazil during the year of 2015, when a strong surge of dengue hit several cities. We apply our method to the cities with highest number of case individuals. There are many points of interest, such as hospitals and parks, inside the detected regions. As those places are non-residential, standard approaches would fail to consider them as potential infection places in the event of a spike in the number of cases. Figure 1 shows the detected regions in the city of Campinas, Brazil. Synthetic and real-world evaluation results demonstrate that our methods work better than either just mapping each individual to their most frequent location (which is a proxy for home address) and running a traditional spatial scan, or scanning using tweet volume as an input. Conclusions: Identifying places where people have higher risk of being infected, rather than focusing on residential address locations, may be key to surveillance for vector-borne diseases such as malaria and dengue, allowing public health officials to focus mitigation actions. The stochasticity of location data is not appropriate for typical spatial cluster detection tools such as the traditional spatial scan statistic.2 Each user is represented by a different number of geographic points and the variability of these numbers is large; traditional approaches can be easily misled if not extended to account for this special structure. Dengue is just one of many infectious diseases with a well-known etiology but a huge number of uncertain and difficult to obtain parameters that quantify factors such as infected mosquito population, likelihood of being bitten by an infected mosquito, and human movement in the mosquito-infested areas. Our methods add to the set of tools that spatial epidemiologists have available to search for spatially localized risk clusters using readily available Twitter data. We expect that our method will also be useful to other public health surveillance problems where movement data can bring relevant information. References: 1. Stoddard ST, et al. The role of human movement in the transmission of vector-borne pathogens. PLoS NTDs. 2009; 3(7): 1–9. 2. Kulldorff M. A spatial scan statistic. Commun Stat Theory Methods. 1997; 26(2): 1481–1496. 3. Neill DB. Fast subset scan for spatial pattern detection. J Royal Stat Soc B. 2012; 74(2): 337–360.
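The scan step described above, maximizing a log-likelihood ratio over candidate regions, can be illustrated with a classical Bernoulli spatial scan over circular regions. This is a simple stand-in for the authors' unconditional and conditional logistic scan models, not their method; the point pattern is synthetic.

import numpy as np

rng = np.random.default_rng(3)
n = 500
xy = rng.uniform(0, 10, size=(n, 2))                  # tweet locations (synthetic)
# Elevated case probability inside a disc centred at (7, 7).
is_case = rng.random(n) < np.where(np.linalg.norm(xy - 7, axis=1) < 1.5, 0.4, 0.1)

def bernoulli_llr(c_in, n_in, c_tot, n_tot):
    # Kulldorff-style Bernoulli log-likelihood ratio for one candidate region.
    c_out, n_out = c_tot - c_in, n_tot - n_in
    def ll(c, m):
        if c in (0, m):
            return 0.0
        p = c / m
        return c * np.log(p) + (m - c) * np.log(1 - p)
    p_all = c_tot / n_tot
    l0 = c_tot * np.log(p_all) + (n_tot - c_tot) * np.log(1 - p_all)
    if c_in / max(n_in, 1) <= c_out / max(n_out, 1):
        return 0.0                                    # only elevated-risk regions are of interest
    return ll(c_in, n_in) + ll(c_out, n_out) - l0

best = (0.0, None)
for center in xy[::10]:                               # candidate centres on a subsample of points
    for radius in (0.5, 1.0, 1.5, 2.0):
        inside = np.linalg.norm(xy - center, axis=1) <= radius
        llr = bernoulli_llr(is_case[inside].sum(), inside.sum(), is_case.sum(), n)
        if llr > best[0]:
            best = (llr, (center, radius))
print("best log-likelihood ratio:", round(best[0], 2), "region:", best[1])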
APA, Harvard, Vancouver, ISO, and other styles
35

Lawrence, Robert. "Locate, Combine, Contradict, Iterate: Serial Strategies for PostInternet Art." M/C Journal 21, no. 1 (March 14, 2018). http://dx.doi.org/10.5204/mcj.1374.

Full text
Abstract:
We (I, Robert Lawrence and, in a rare display of unity, all my online avatars and agents)hereby render and proclaim thisMANIFESTO OF PIECES AND BITS IN SERVICE OF CONTRADICTIONAL AESTHETICSWe start with the simple premise that art has the job of telling us who we are, and that through the modern age doing this job while KEEPING UP with accelerating cultural change has necessitated the invention of something we might call the avant-garde. Along the way there has been an on-again-off-again affair between said avant-garde and technology. We are now in a new phase of the new and the technology under consideration is the Internet.The recent hyperventilating about the term postInternet reflects the artworld’s overdue recognition of the effect of the Internet on the culture at large, and on art as a cultural practice, a market, and a historical process.I propose that we cannot fully understand what the Internet is doing to us through a consideration of what happens on the screen, nor by considering what happens in the physical space we occupy either before or behind the screen. Rather we must critically and creatively fathom the flow of cultural practice between and across these realms. This requires Hybrid art combining both physical and Internet forms.I do not mean to imply that single discipline-based art cannot communicate complexity, but I believe that Internet culture introduces complexities that can only be approached through hybrid practices. And this is especially critical for an art that, in doing the job of “telling us who we are”, wants to address the contradictory ways we now form and promote, or conceal and revise, our multiple identities through online social media profiles inconsistent with our fleshly selves.We need a different way of talking about identity. A history of identity:In the ancient world, individual identity as we understand it did not exist.The renaissance invented the individual.Modernism prioritized and alienated him (sic).Post-Modernism fragmented him/her.The Internet hyper-circulates and amplifies all these modalities, exploding the possibilities of identity.While reducing us to demographic market targets, the Web facilitates mass indulgence in perversely individual interests. The now common act of creating an “online profile” is a regular reiteration of the simple fact that identity is an open-ended hypothesis. We can now live double, or extravagantly multiple, virtual lives. The “me meme” is a ceaseless morph. This is a profound change in how identity was understood just a decade ago. Other historical transformations of identity happened over centuries. This latest and most radical change has occurred in the click of a mouse. Selfhood is now imbued with new complexity, fluidity and amplified contradictions.To fully understand what is actually happening to us, we need an art that engages the variant contracts of the physical and the virtual. We need a Hybrid art that addresses variant temporal and spatial modes of the physical and virtual. We need an art that offers articulations through the ubiquitous web in concert with the distinct perspectives that a physical gallery experience uniquely offers: engagement and removal, reflection and transference. Art that tells us who we are today calls for an aesthetics of contradiction. — Ro Lawrence (and all avatars) 2011, revised 2013, 2015, 2018. The manifesto above grew from an artistic practice beginning in 1998 as I started producing a website for every project that I made in traditional media. 
The Internet work does not just document or promote the project, nor is it “Netart” in the common sense of creative work restricted to a browser window. All of my efforts with the Internet are directly linked to my projects in traditional media and the web components offer parallel aesthetic voices that augment or overtly contradict the reading suggested by the traditional visual components of each project.This hybrid work grew out of a previous decade of transmedia work in video installation and sculpture, where I would create physical contexts for silent video as a way to remove the video image from the seamless flow of broadcast culture. A video image can signify very differently in a physical context that separates it from the flow of mass media and rather reconnects it to lived physical culture. A significant part of the aesthetic pleasure of this kind of work comes from nuances of dissonance arising from contradictory ways viewers had learned to read the object world and the ways we were then still learning to read the electronic image world. This video installation work was about “relocating” the electronic image, but I was also “locating” the electronic image in another sense, within the boundaries of geographic and cultural location. Linking all my projects to specific geographic locations set up contrasts with the spatial ubiquity of electronic media. In 1998 I amplified this contrast with my addition of extensive Internet components with each installation I made.The Way Things Grow (1998) began as an installation of sculptures combining video with segments of birch trees. Each piece in the gallery was linked to a specific geographic location within driving distance of the gallery exhibiting the work. In the years just before this piece I had moved from a practice of text-augmented video installations to the point where I had reduced the text to small printed handouts that featured absurd Scripts for Performance. These text handouts that viewers could take with them suggested that the work was to be completed by the viewer later outside the gallery. This to-be-continued dynamic was the genesis of a serial form in work going forward from then on. Thematic and narrative elements in the work were serialized via possible actions viewers would perform after leaving the gallery. In the installation for The Way Things Grow, there was no text in the gallery at all to suggest interpretations of this series of video sculptures. Even the titles offered no direct textual help. Rather than telling the viewers something about the work before them in the gallery, the title of each piece led the viewer away from the gallery toward serial actions in the specific geographic locations the works referred to. Each piece was titled with an Internet address.Figure 1: Lawrence, Robert, The Way Things Grow, video Installation with web components at http://www.h-e-r-e.com/grow.html, 1998.When people went to the web site for each piece they found only a black page referencing a physical horizon with a long line of text that they could scroll to right for meters. Unlike the determinedly embodied work in the gallery, the web components were disembodied texts floating in a black void, but texts about very specific physical locations.Figure 2: Lawrence, Robert, The Way Things Grow, partial view of webpage at http://www.h-e-r-e.com/growth_variant4.html, 1998.The texts began with the exact longitude and latitude of a geographical site in some way related to birch trees. ... A particularly old or large tree... 
a factory that turned birch trees into popsicle sticks and medical tongue depressors... etc. The website texts included directions to the site, and absurd scripts for performance. In this way the Internet component transformed the suite of sculptures in the gallery to a series of virtual, and possibly actual, events beyond the gallery. These potential narratives that viewers were invited into comprised an open-ended serial structure. The gallery work was formal, minimal, essentialist. On the web it was social, locative, deconstructive. In both locations, it was located. Here follows an excerpt from the website. GROWTH VARIANT #25: North 44:57:58 by West 93:15:56. On the south side of the Hennepin County Government Center is a park with 9 birch trees. These are urban birches, and they display random scratchings, as well as proclamations of affection expressed with pairs of initials and a “+” –both with and without encircling heart symbols. RECOMMENDED PERFORMANCE: Visit these urban birches once each month. Photograph all changes in their bark made by humans. After 20 years compile a document entitled, "Human Mark Making on Urban Birches, a Visual Study of Specific Universalities". Bring it into the Hennepin County Government Center and ask that it be placed in the archives.An Acre of Art (2000) was a collaborative project with sculptor Mark Knierim. Like The Way Things Grow, this new work, commissioned by the Minneapolis Art Institute, played out in the gallery, in a specific geographic location, and online. In the Art Institute was a gallery installation combining sculptures with absurd combinations of physical rural culture fitting contradictorily into an urban "high art" context. One of the pieces, entitled Landscape (2000), was an 18’ chicken coop faced with a gold picture frame. Inside were two bard rock hens and an iMac. The computer was programmed to stream to the Internet live video from the coop, the world’s first video chicken cam. As a work unfolding across a long stretch of time, the web cam video was a serial narrative without determined division into episodes. The gallery works also referenced a specific acre of agricultural land an hour from the Institute. Here we planted a row of dwarf corn at a diagonal to the mid-western American rural geometric grid of farmland. Visitors to the rural site could sit on “rural art furniture,” contemplate the corn growing, and occasionally witness absurd performances. The third stream of the piece was an extensive website, which playfully theorized the rural/urban/art trialectic. Each of the three locations of the work was exploited to provide a richer transmedia interpretation of the project’s themes than any one venue or medium could. Location Sequence is a serial installation begun in 1999. Each installation has completely different physical elements. The only consistent physical element is 72 segments of a 72” collapsible carpenter's ruler evenly spaced to wrap around the gallery walls. Each of the 72 segments of the ruler displays an Internet web address. Reversing the notion of the Internet as a place of rapid change compared to a more enduring physical world, in this case the Internet components do not change with each new episode of the work, while the physical components transform with each new installation. Thematically, all aspects of the work deal with various shades of meaning of the term "location." 
Beginning/Middle/End is a 30-year conceptual serial begun in 2002, presenting a series of site-specific actions, objects, or interventions combined with corresponding web pages that collectively negotiate concepts related to time, location, and narrative. Realizing a 30-year project via the web in this manner is a self-conscious contradiction of the culture of the instantaneous that the Internet manifests and propagates.The installation documented here was completed for a one-night event in 2002 with Szilage Gallery in St Petersburg, Florida. Bricks moulded with the URLs for three web sites were placed in a historic brick road with the intention that they would remain there through a historical time frame. The URLs were also projected in light on a creek parallel to the brick road and seen only for several hours. The corresponding web site components speculate on temporal/narrative structures crossing with geographic features, natural and manufactured.Figure 3: Lawrence, Robert, Beginning/Middle/End, site-specific installation with website in conjunction with 30-year series, http://www.h-e-r-e.com/beginning.html, 2002-32.The most recent instalment was done as part of Conflux Festival in 2014 in collaboration with painter Ld Lawrence. White shapes appeared in various public spaces in downtown Manhattan. Upon closer inspection people realized that they were not painted tags or stickers, but magnetic sheets that could be moved or removed. An optical scan tag hidden on the back of each shape directed to a website which encouraged people to move the objects to other locations and send a geo-located photo to the web site to trace the shape's motion through the world. The work online could trace the serial narrative of the physical installation components following the installation during Conflux Festival. Figure 4: Lawrence, Robert w/Lawrence, Ld, Gravity Ace on the Move, site-specific installation with geo-tracking website at http://www.h-e-r-e.com/gravityace/. Completed for Conflux Festival NYC, 2014, as part of Beginning/Middle/End.Dad's Boots (2003) was a multi-sited sculpture/performance. Three different physical manifestations of the work were installed at the same time in three locations: Shirakawa-go Art Festival in Japan; the Phipps Art Center in Hudson, Wisconsin; and at the Tampa Museum of Art in Florida. Physical components of the work included silent video projection, digital photography, computer key caps, and my father's boots. Each of these three different installations referred back to one web site. Because all these shows were up at the same time, the work was a distributed synchronous serial. In each installation space the title of the work was displayed as an Internet address. At the website was a series of popup texts suggesting performances focused, however absurdly, on reassessing paternal relationships.Figure 5: Lawrence, Robert, Dad’s Boots, simultaneous gallery installation in Florida, Wisconsin and Japan, with website, 2003. Coincidently, beginning the same time as my transmedia physical/Internet art practice, since 1998 I have had a secret other-life as a tango dancer. I came to this practice drawn by the music and the attraction of an after-dark subculture that ran by different rules than the rest of life. While my life as a tanguero was most certainly an escape strategy, I quickly began to see that although tango was different from the rest of the world, it was indeed a part of this world. It had a place and a time and a history. 
Further, it was a fascinating history about the interplays of power, class, wealth, race, and desire. Figure 6: Lawrence, Robert, Tango Intervention, site-specific dance interventions with extensive web components, 2007-12.As Marta Savigliano points out in Tango and the Political Economy of Passion, “Tango is a practice already ready for struggle. It knows about taking sides, positions, risks. It has the experience of domination/resistance from within. …Tango is a language of decolonization. So pick and choose. Improvise... let your feet do the thinking. Be comfortable in your restlessness. Tango” (17). The realization that tango, my sensual escape from critical thought, was actually political came just about the time I was beginning to understand the essential dynamic of contradiction between the physical and Internet streams of my work. Tango Intervention began in 2007. I have now, as of 2018, done tango interventions in over 40 cities. Overall, the project can be seen as a serial performance of contradictions. In each case the physical dance interventions are manifestations of sensual fantasy in public space, and the Internet components recontextualize the public actions as site-specific performances with a political edge, revealing a hidden history or current social situation related to the political economy of tango. These themes are further developed in a series of related digital prints and videos shown here in various formats and contexts.In Tango Panopticon (2009), a “spin off” from the Tango Intervention series, the hidden social issue was the growing video surveillance of public space. The first Tango Panopticon production was Mayday 2009 with people dancing tango under public video surveillance in 15 cities. Mayday 2010 was Tango Panopticon 2.0, with tangointervention.org streaming live cell phone video from 16 simultaneous dance interventions on 4 continents. The public encountered the interventions as a sensual reclaiming of public space. Contradictorily, on the web Tango Panopticon 2.0 became a distributed worldwide action against the growing spectre of video surveillance and the increasing control of public commons. Each intervention team was automatically located on an online map when they started streaming video. Visitors to the website could choose an action from the list of cities or click on the map pins to choose which live video to load into the grid of 6 streaming signals. Visitors to the physical intervention sites could download our free open source software and stream their own videos to tangointervention.org.Figure 7: Lawrence, Robert, Tango Panopticon 2.0, worldwide synchronous dance intervention with live streaming video and extensive web components, 2010.Tango Panopticon also has a life as a serial installation, initially installed as part of the annual conference of “Digital Resources for Humanities and the Arts” at Brunel University, London. All shots in the grid of videos are swish pans from close-ups of surveillance cameras to tango interveners dancing under their gaze. Each ongoing installation in the series physically adapts to the site, and with each installation more lines of video frames are added until the images become too small to read.Figure 8: Lawrence, Robert, Tango Panopticon 2.0 (For Osvaldo), video installation based on worldwide dance intervention series with live streaming video, 2011.My new work Equivalence (in development) is quite didactic in its contradictions between the online and gallery components. 
A series of square prints of clouds in a gallery are titled with web addresses that open with other cloud images and then fade into randomly loading excerpts from the CIA torture manual used at Guantanamo Bay Detention Center. Figure 9: Lawrence, Robert, Equivalence, digital prints, excerpts from CIA Guantanamo Detention Center torture manual, work-in-progress. The gallery images recall Stieglitz’s Equivalents photographs from the early 20th century. Made in the 1920s to 30s, the Equivalents comprise a pivotal change in photographic history, from the early pictorial movement, in which photography tried to imitate painting, to a new artistic approach that embraced features distinct to the photographic medium. Stieglitz’s Equivalents merged photographic realism with abstraction and symbolist undertones of transcendent spirituality. Many of the 20th century masters of photography, from Ansel Adams to Minor White, acknowledged the profound influence these photographs had on them. Several images from the Equivalents series were the first photographic art to be acquired by a major art museum in the US, the Boston Museum of Fine Arts. My series Equivalence serves as the latest episode in a serial art history narrative. Since the “Pictures Generation” movement in the 1970s, photography has cannibalized its history, but perhaps no photographic body of work has been as quoted as Stieglitz’s Equivalents. A partial list includes: John Baldessari’s series Blowing Cigar Smoke to Match Clouds That Are the Same (1973), William Eggleston’s series Wedgwood Blue (1979), John Pfahl’s smoke stack series (1982-89), George Legrady’s Equivalents II (1993), Vik Muniz’s Equivalents (1997), Lisa Oppenheim (2012), and most recently, Berndnaut Smilde’s Nimbus Series, begun in 2012. Over the course of more than four decades each of these series has presented a unique vision, but all rest on Stieglitz’s shoulders. From that position they make choices about how to operate relative to the original Equivalents, ranging from Baldessari and Muniz’s phenomenological playfulness to Eggleston and Smilde’s neo-essentialist approach. My series Equivalence follows along in this serial modernist image franchise. What distinguishes it is that it does not take a single position relative to other Equivalents tribute works. Rather, it exploits its gallery/Internet transmediality to simultaneously assume two contradictory positions. The dissonance of this positioning is one of my main points with the work, and it is in some ways resonant with the contradictions concerning photographic abstraction and representation that Stieglitz engaged in the original Equivalents series almost a century ago. While hanging on the walls of a gallery, Equivalence suggests the same metaphysical intentions as Stieglitz’s Equivalents. Simultaneously, in its manifestation on the Internet, my Equivalence series transcends its implied transcendence and claims a very specific time and place – a small, brutal encampment on the island of Cuba where the United States abandoned any remaining claim to moral authority. In this illegal prison, forgotten lives drag on invisibly, outside of time, like untold serial narratives without resolution and without justice. Partially to balance the political insistence of Equivalence, I am also working on another series that operates with very different modalities.
Following up on the live streaming technology that I developed for my Tango Panopticon public intervention series, I have started Horizon (In Development). Figure 10: Lawrence, Robert, Horizon, worldwide synchronous horizon interventions with live streaming video to Internet, work-in-progress. In Horizon I again use live cell phone video, this time streamed to an infinitely wide web page from live actions around the world done in direct engagement with the horizon line. The performances will begin and automatically come online live at noon in their respective time zone, each added to the growing horizontal line of moving images. As the actions complete, the streamed footage will begin endlessly looping. The project will also stream live during the event to galleries, and then HD footage from the events will be edited and incorporated into video installations. Leading up to this major event day, I will have a series of smaller instalments of the piece, with either live or recorded video. The first of these preliminary versions was completed during the Live Performers Workshop in Rome. Horizon continues to develop, leading to the worldwide synchronous event in 2020. Certainly, artists have always worked in series. However, exploiting the unique temporal dimensions of the Internet, a series of works can develop episodically as a serial work. If that work unfolds with contradictory thematics in its embodied and online forms, it reaches further toward an understanding of the complexities of postInternet culture and identity. References: Savigliano, Marta. Tango and the Political Economy of Passion. Boulder: Westview Press, 1995.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Zhengyu, Qirong Qin, Hu Li, Jiling Zhou, and Jie Wang. "Influence of nappe structure on the Carboniferous volcanic reservoir in the middle of the Hongche Fault Zone, Junggar Basin, China." Open Geosciences 15, no. 1 (January 1, 2023). http://dx.doi.org/10.1515/geo-2022-0591.

Full text
Abstract:
Abstract This work presents an in-depth examination of the Carboniferous volcanic reservoir within the CH471 well area, situated in the central portion of the Hongche fault zone on the northwestern margin of the Junggar Basin. Leveraging seismic data and well connection comparisons, we scrutinize the tectonic evolution model and elucidate the impact of the nappe structure of the Hongche fault zone on the volcanic reservoir. The study has obtained the following understanding: after the formation of Carboniferous volcanic rocks, affected by the Hongche fault structure, a series of structural superpositions from extension to extrusion and finally thrust occurred, resulting in a northwestward tilt of the volcanic rock mass, and a large number of cracks were generated inside the rock mass. At the same time, the top was uplifted and affected by weathering and leaching to form a weathering crust, eventually forming a reservoir. The northern part is located in the edge area of the eruption center, and the rock mass has good stratification. The rock strata have certain constraints on the reservoir distribution, and the reservoir is inclined along the rock mass. The southern part is close to the eruption center and features large volcanic breccia accumulation bodies with strong internal heterogeneity. The reservoir developed mainly in the superposition of the range of control of the weathering crust and dense fracture development, and the rock mass morphology does not control the area. Structure is the key to forming a volcanic rock reservoir, mainly reflected in the following aspects. First, tectonic activity is accompanied by fracture development, and fractures are densely developed in areas with strong activity, which can effectively improve the physical properties of volcanic reservoirs. Second, tectonic activity leads to the strata uplift and weathering denudation, forming a weathering crust. Within the range of control of weathering and leaching, the physical properties of volcanic rocks are improved, and it is easier to form high-quality reservoirs. Third, the distribution of volcanic rock masses is controlled by tectonic activity, which affects the reservoir controlled by the dominant lithology.
APA, Harvard, Vancouver, ISO, and other styles
37

Nandi, Debabrata, Rashmi Ranjan Das, Debasish Sing, Indrajit Bera, Partha Sarathi Mishra, Pramod Chandra Sahu, and Kamal Lochan Mohanta. "A comprehensive investigation on the effect of UHI in Baripada city using geo‐spatial technique and MLTHP model." Environmental Quality Management, April 8, 2024. http://dx.doi.org/10.1002/tqem.22233.

Full text
Abstract:
Abstract: According to the Times of India, Baripada city recorded the highest temperature in the world on April 14, 2023, at 43.5°C. It is located in the south‐west direction of the Similipal Biosphere Reserve. Observing the third highest temperature in the world, we are interested in finding out the reasons affecting climate change. This paper investigates the reasons for the temperature change in Baripada city and its surroundings by considering factors like Normalized Difference Vegetation Index (NDVI), Normalized Multi‐band Drought Index (NMDI), Normalized Difference Water Index (NDWI), Normalized Difference Built‐up Index (NDBI), forest fires, rainfall patterns, wind patterns, and registration of motor vehicles. For this study, data has been collected from USGS (United States Geological Survey), NASA POWER, Forest Survey of India, ArcGIS Online, etc. The study of NDVI shows a significant relationship with the LST. Similarly, NMDI shows an abnormal change in the drought index from 0.092 in 1990 to 0.276 in 2022, and NDWI shows a continuous reduction of the water index of −0.382 in 1990 and −0.176 in 2010, followed by −0.212 in 2022. NDBI values show that the urban built‐up index has increased due to rapid urban settlement removing natural vegetation. The overall forest fire count shows an abnormal rise in the Baripada region between 2011 and 2022, reaching 143. Focusing on all the above factors, an experimental study is finally carried out to confirm the rise in temperature using time‐series analysis and a multi‐layer locally tuned hidden layer perceptron (MLTHP) model to verify the Times of India's report.
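Editor's note: the four spectral indices named above are standard normalized band ratios. The sketch below shows one plausible NumPy implementation under stated assumptions: band names follow Landsat 8 conventions, the McFeeters form of NDWI and the Wang & Qu (2007) form of NMDI are assumed, and the paper's exact band choices, preprocessing, and MLTHP model are not reproduced here.

import numpy as np

def normalized_difference(a, b):
    # (a - b) / (a + b), returning NaN where the denominator is zero.
    num, den = a - b, a + b
    return np.divide(num, den, out=np.full(den.shape, np.nan), where=den != 0)

def spectral_indices(red, green, nir, swir1, swir2):
    # Inputs are 2-D surface-reflectance arrays on a common grid
    # (red ~0.65 um, green ~0.56 um, nir ~0.86 um, swir1 ~1.6 um, swir2 ~2.2 um).
    return {
        "NDVI": normalized_difference(nir, red),             # vegetation vigour
        "NDWI": normalized_difference(green, nir),           # McFeeters water index
        "NDBI": normalized_difference(swir1, nir),           # built-up index
        "NMDI": normalized_difference(nir, swir1 - swir2),   # Wang & Qu drought index
    }

Per-date index maps computed this way can then be aggregated (for example, scene means for 1990, 2010, and 2022) to produce the kind of time series the abstract compares against land surface temperature.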
APA, Harvard, Vancouver, ISO, and other styles
38

Burns, Alex. "The Worldflash of a Coming Future." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2168.

Full text
Abstract:
History is not over and that includes media history. Jay Rosen (Zelizer & Allan 33) The media in their reporting on terrorism tend to be judgmental, inflammatory, and sensationalistic. — Susan D. Moeller (169) In short, we are directed in time, and our relation to the future is different than our relation to the past. All our questions are conditioned by this asymmetry, and all our answers to these questions are equally conditioned by it. Norbert Wiener (44) The Clash of Geopolitical Pundits America’s geo-strategic engagement with the world underwent a dramatic shift in the decade after the Cold War ended. United States military forces undertook a series of humanitarian interventions from northern Iraq (1991) and Somalia (1992) to NATO’s bombing campaign on Kosovo (1999). Wall Street financial speculators embraced market-oriented globalization and technology-based industries (Friedman 1999). Meanwhile the geo-strategic pundits debated several different scenarios at deeper layers of epistemology and macrohistory including the breakdown of nation-states (Kaplan), the ‘clash of civilizations’ along religiopolitical fault-lines (Huntington) and the fashionable ‘end of history’ thesis (Fukuyama). Media theorists expressed this geo-strategic shift in reference to the ‘CNN Effect’: the power of real-time media ‘to provoke major responses from domestic audiences and political elites to both global and national events’ (Robinson 2). This media ecology is often contrasted with ‘Gateholder’ and ‘Manufacturing Consent’ models. The ‘CNN Effect’ privileges humanitarian and non-government organisations whereas the latter models focus upon the conformist mind-sets and shared worldviews of government and policy decision-makers. The September 11 attacks generated an uncertain interdependency between the terrorists, government officials, and favourable media coverage. It provided a test case, as had the humanitarian interventions (Robinson 37) before it, to test the claim by proponents that the ‘CNN Effect’ had policy leverage during critical stress points. The attacks also revived a long-running debate in media circles about the risk factors of global media. McLuhan (1964) and Ballard (1990) had prophesied that the global media would pose a real-time challenge to decision-making processes and that its visual imagery would have unforeseen psychological effects on viewers. Wark (1994) noted that journalists who covered real-time events including the Wall Street crash (1987) and collapse of the Berlin Wall (1989) were traumatised by their ‘virtual’ geographies. The ‘War on Terror’ as 21st Century Myth Three recent books explore how the 1990s humanitarian interventions and the September 11 attacks have remapped this ‘virtual’ territory with all too real consequences. Piers Robinson’s The CNN Effect (2002) critiques the theory and proposes the policy-media interaction model. Barbie Zelizer and Stuart Allan’s anthology Journalism After September 11 (2002) examines how September 11 affected the journalists who covered it and the implications for news values. Sandra Silberstein’s War of Words (2002) uncovers how strategic language framed the U.S. response to September 11. Robinson provides the contextual background; Silberstein contributes the specifics; and Zelizer and Allan surface broader perspectives. These books offer insights into the social construction of the nebulous War on Terror and why certain images and trajectories were chosen at the expense of other possibilities. 
Silberstein locates this world-historical moment in the three-week transition between September 11’s aftermath and the U.S. bombings of Afghanistan’s Taliban regime. Descriptions like the ‘War on Terror’ and ‘Axis of Evil’ framed the U.S. military response, provided a conceptual justification for the bombings, and also brought into being the geo-strategic context for other nations. The crucial element in this process was when U.S. President George W. Bush adopted a pedagogical style for his public speeches, underpinned by the illusions of communal symbols and shared meanings (Silberstein 6-8). Bush’s initial address to the nation on September 11 invoked the ambiguous pronoun ‘we’ to recreate ‘a unified nation, under God’ (Silberstein 4). The 1990s humanitarian interventions had frequently been debated in Daniel Hallin’s sphere of ‘legitimate controversy’; however the grammar used by Bush and his political advisers located the debate in the sphere of ‘consensus’. This brief period of enforced consensus was reinforced by the structural limitations of North American media outlets. September 11 combined ‘tragedy, public danger and a grave threat to national security’, Michael Schudson observed, and in the aftermath North American journalism shifted ‘toward a prose of solidarity rather than a prose of information’ (Zelizer & Allan 41). Debate about why America was hated did not go much beyond Bush’s explanation that ‘they hated our freedoms’ (Silberstein 14). Robert W. McChesney noted that alternatives to the ‘war’ paradigm were rarely mentioned in the mainstream media (Zelizer & Allan 93). A new myth for the 21st century had been unleashed. The Cycle of Integration Propaganda Journalistic prose masked the propaganda of social integration that atomised the individual within a larger collective (Ellul). The War on Terror was constructed by geopolitical pundits as a Manichean battle between ‘an “evil” them and a national us’ (Silberstein 47). But the national crisis made ‘us’ suddenly problematic. Resurgent patriotism focused on the American flag instead of Constitutional rights. Debates about military tribunals and the USA Patriot Act resurrected the dystopian fears of a surveillance society. New York City mayor Rudy Guiliani suddenly became a leadership icon and Time magazine awarded him Person of the Year (Silberstein 92). Guiliani suggested at the Concert for New York on 20 October 2001 that ‘New Yorkers and Americans have been united as never before’ (Silberstein 104). Even the series of Public Service Announcements created by the Ad Council and U.S. advertising agencies succeeded in blurring the lines between cultural tolerance, social inclusion, and social integration (Silberstein 108-16). In this climate the in-depth discussion of alternate options and informed dissent became thought-crimes. The American Council of Trustees and Alumni’s report Defending Civilization: How Our Universities are Failing America (2002), which singled out “blame America first” academics, ignited a firestorm of debate about educational curriculums, interpreting history, and the limits of academic freedom. Silberstein’s perceptive analysis surfaces how ACTA assumed moral authority and collective misunderstandings as justification for its interrogation of internal enemies. The errors she notes included presumed conclusions, hasty generalisations, bifurcated worldviews, and false analogies (Silberstein 133, 135, 139, 141). Op-ed columnists soon exposed ACTA’s gambit as a pre-packaged witch-hunt. 
But newscasters then channel-skipped into military metaphors as the Afghanistan campaign began. In the weeks after the attacks New York City sidewalk traders moved incense and tourist photos to make way for World Trade Center memorabilia and anti-Osama shirts. Chevy and Ford morphed September 11 catchphrases (notably Todd Beamer’s last words “Let’s Roll” on Flight 93) and imagery into car advertising campaigns (Silberstein 124-5). American self-identity was finally reasserted in the face of a domestic recession through this wave of vulgar commercialism. The ‘Simulated’ Fall of Elite Journalism For Columbia University professor James Carey the ‘failure of journalism on September 11’ signaled the ‘collapse of the elites of American journalism’ (Zelizer & Allan 77). Carey traces the rise-and-fall of adversarial and investigative journalism from the Pentagon Papers and Watergate through the intermediation of the press to the myopic self-interest of the 1988 and 1992 Presidential campaigns. Carey’s framing echoes the earlier criticisms of Carl Bernstein and Hunter S. Thompson. However this critique overlooks several complexities. Piers Robinson cites Alison Preston’s insight that diplomacy, geopolitics and elite reportage defines itself through the sense of distance from its subjects. Robinson distinguished between two reportage types: distance framing ‘creates emotional distance’ between the viewers and victims whilst support framing accepts the ‘official policy’ (28). The upsurge in patriotism, the vulgar commercialism, and the mini-cycle of memorabilia and publishing all combined to enhance the support framing of the U.S. federal government. Empathy generated for September 11’s victims was tied to support of military intervention. However this closeness rapidly became the distance framing of the Afghanistan campaign. News coverage recycled the familiar visuals of in-progress bombings and Taliban barbarians. The alternative press, peace movements, and social activists then retaliated against this coverage by reinstating the support framing that revealed structural violence and gave voice to silenced minorities and victims. What really unfolded after September 11 was not the demise of journalism’s elite but rather the renegotiation of reportage boundaries and shared meanings. Journalists scoured the Internet for eyewitness accounts and to interview survivors (Zelizer & Allan 129). The same medium was used by others to spread conspiracy theories and viral rumors that numerology predicted the date September 11 or that the “face of Satan” could be seen in photographs of the World Trade Center (Zelizer & Allan 133). Karim H. Karim notes that the Jihad frame of an “Islamic Peril” was socially constructed by media outlets but then challenged by individual journalists who had learnt ‘to question the essentialist bases of her own socialization and placing herself in the Other’s shoes’ (Zelizer & Allan 112). Other journalists forgot that Jihad and McWorld were not separate but two intertwined worldviews that fed upon each other. The September 11 attacks on the Pentagon and the World Trade Center also had deep symbolic resonances for American sociopolitical ideals that some journalists explored through analysis of myths and metaphors. The Rise of Strategic Geography However these renegotiated boundaries – of new media, multiperspectival frames, and ‘layered’ depth approaches to issues analysis – were essentially minority reports.
The rationalist mode of journalism was soon reasserted through normative appeals to strategic geography. The U.S. networks framed their documentaries on Islam and the Middle East in bluntly realpolitik terms. The documentary “Minefield: The United States and the Muslim World” (ABC, 11 October 2001) made explicit strategic assumptions of ‘the U.S. as “managing” the region’ and ‘a definite tinge of superiority’ (Silberstein 153). ABC and CNN stressed the similarities between the world’s major monotheistic religions and their scriptural doctrines. Both networks limited their coverage of critiques and dissent to internecine schisms within these traditions (Silberstein 158). CNN also created different coverage for its North American and international audiences. The BBC was more cautious in its September 11 coverage and more global in outlook. Three United Kingdom specials – Panorama (Clash of Cultures, BBC1, 21 October 2001), Question Time (Question Time Special, BBC1, 13 September 2001), and “War Without End” (War on Trial, Channel 4, 27 October 2001) – drew upon the British traditions of parliamentary assembly, expert panels, and legal trials as ways to explore the multiple dimensions of the ‘War on Terror’ (Zelizer & Allan 180). These latter debates weren’t value free: the programs sanctioned ‘a tightly controlled and hierarchical agora’ through different containment strategies (Zelizer & Allan 183). Program formats, selected experts and presenters, and editorial/on-screen graphics were factors that pre-empted the viewer’s experience and conclusions. The traditional emphasis of news values on the expert was renewed. These subtle forms of thought-control enabled policy-makers to inform the public whilst inoculating them against terrorist propaganda. However the ‘CNN Effect’ also had counter-offensive capabilities. Osama bin Laden’s videotaped sermons and the al-Jazeera network’s broadcasts undermined the psychological operations maxim that enemies must not gain access to the mindshare of domestic audiences. Ingrid Volkmer recounts how the Los Angeles based National Iranian Television Network used satellite broadcasts to criticize the Iranian leadership and spark public riots (Zelizer & Allan 242). These incidents hint at why the ‘War on Terror’ myth, now unleashed upon the world, may become far more destabilizing to the world system than previous conflicts. Risk Reportage and Mediated Trauma When media analysts were considering the ‘CNN Effect’ a group of social contract theorists including Anthony Giddens, Zygmunt Bauman, and Ulrich Beck were debating, simultaneously, the status of modernity and the ‘unbounded contours’ of globalization. Beck termed this new environment of escalating uncertainties and uninsurable dangers the ‘world risk society’ (Beck). Although they drew upon constructivist and realist traditions Beck and Giddens ‘did not place risk perception at the center of their analysis’ (Zelizer & Allan 203). Instead this was the role of journalist as ‘witness’ to Ballard-style ‘institutionalized disaster areas’. The terrorist attacks on September 11 materialized this risk and obliterated the journalistic norms of detachment and objectivity. The trauma ‘destabilizes a sense of self’ within individuals (Zelizer & Allan 205) and disrupts the image-generating capacity of collective societies. 
Barbie Zelizer found that the press selection of September 11 photos and witnesses re-enacted the ‘Holocaust aesthetic’ created when Allied Forces freed the Nazi internment camps in 1945 (Zelizer & Allan 55-7). The visceral nature of September 11 imagery inverted the trend, from the Gulf War to NATO’s Kosovo bombings, for news outlets to depict war in detached video-game imagery (Zelizer & Allan 253). Coverage of the September 11 attacks and the subsequent Bali bombings (on 12 October 2002) followed a four-part pattern news cycle of assassinations and terrorism (Moeller 164-7). Moeller found that coverage moved from the initial event to a hunt for the perpetrators, public mourning, and finally, a sense of closure ‘when the media reassert the supremacy of the established political and social order’ (167). In both events the shock of the initial devastation was rapidly followed by the arrest of al Qaeda and Jamaah Islamiyah members, the creation and copying of the New York Times ‘Portraits of Grief’ template, and the mediation of trauma by a re-established moral order. News pundits had clearly studied the literature on bereavement and grief cycles (Kubler-Ross). However the neo-noir work culture of some outlets also fueled bitter disputes about how post-traumatic stress affected journalists themselves (Zelizer & Allan 253). Reconfiguring the Future After September 11 the geopolitical pundits, a reactive cycle of integration propaganda, pecking order shifts within journalism elites, strategic language, and mediated trauma all combined to bring a specific future into being. This outcome reflected the ‘media-state relationship’ in which coverage ‘still reflected policy preferences of parts of the U.S. elite foreign-policy-making community’ (Robinson 129). Although Internet media and non-elite analysts embraced Hallin’s ‘sphere of deviance’ there is no clear evidence yet that they have altered the opinions of policy-makers. The geopolitical segue from September 11 into the U.S.-led campaign against Iraq also has disturbing implications for the ‘CNN Effect’. Robinson found that its mythic reputation was overstated and tied to issues of policy certainty that the theory’s proponents often failed to examine. Media coverage molded a ‘domestic constituency ... for policy-makers to take action in Somalia’ (Robinson 62). He found greater support in ‘anecdotal evidence’ that the United Nations Security Council’s ‘safe area’ for Iraqi Kurds was driven by Turkey’s geo-strategic fears of ‘unwanted Kurdish refugees’ (Robinson 71). Media coverage did impact upon policy-makers to create Bosnian ‘safe areas’, however, ‘the Kosovo, Rwanda, and Iraq case studies’ showed that the ‘CNN Effect’ was unlikely as a key factor ‘when policy certainty exists’ (Robinson 118). The clear implication from Robinson’s studies is that empathy framing, humanitarian values, and searing visual imagery won’t be enough to challenge policy-makers. What remains to be done? Fortunately there are some possibilities that straddle the pragmatic, realpolitik and emancipatory approaches. Today’s activists and analysts are also aware of the dangers of ‘unfreedom’ and un-reflective dissent (Fromm). Peter Gabriel’s organisation Witness, which documents human rights abuses, is one benchmark of how to use real-time media and the video camera in an effective way. The domains of anthropology, negotiation studies, neuro-linguistics, and social psychology offer valuable lessons on techniques of non-coercive influence. 
The emancipatory tradition of futures studies offers a rich repertoire of self-awareness exercises, institution rebuilding, and social imaging, and offsets the pragmatic lure of normative scenarios. The final lesson from these books is that activists and analysts must co-adapt as the ‘War on Terror’ mutates into new and terrifying forms. Works Cited Amis, Martin. “Fear and Loathing.” The Guardian (18 Sep. 2001). 1 March 2001 <http://www.guardian.co.uk/Archive/Article/0,4273,4259170,00.php>. Ballard, J.G. The Atrocity Exhibition (rev. ed.). Los Angeles: V/Search Publications, 1990. Beck, Ulrich. World Risk Society. Malden, MA: Polity Press, 1999. Ellul, Jacques. Propaganda: The Formation of Men’s Attitudes. New York: Vintage Books, 1973. Friedman, Thomas. The Lexus and the Olive Tree. New York: Farrar, Straus & Giroux, 1999. Fromm, Erich. Escape from Freedom. New York: Farrar & Rhinehart, 1941. Fukuyama, Francis. The End of History and the Last Man. New York: Free Press, 1992. Huntington, Samuel P. The Clash of Civilizations and the Remaking of World Order. New York: Simon & Schuster, 1996. Kaplan, Robert. The Coming Anarchy: Shattering the Dreams of the Post Cold War. New York: Random House, 2000. Kubler-Ross, Elizabeth. On Death and Dying. London: Tavistock, 1969. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge & Kegan Paul, 1964. Moeller, Susan D. Compassion Fatigue: How the Media Sell Disease, Famine, War, and Death. New York: Routledge, 1999. Robinson, Piers. The CNN Effect: The Myth of News, Foreign Policy and Intervention. New York: Routledge, 2002. Silberstein, Sandra. War of Words: Language, Politics and 9/11. New York: Routledge, 2002. Wark, McKenzie. Virtual Geography: Living with Global Media Events. Bloomington, IN: Indiana UP, 1994. Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: John Wiley & Sons, 1948. Zelizer, Barbie, and Stuart Allan (eds.). Journalism after September 11. New York: Routledge, 2002.
APA, Harvard, Vancouver, ISO, and other styles
39

Pirjalili, Ali, Asal Bidarmaghz, Arman Khoshghalb, and Adrian Russell. "novel experimental technique to evaluate soil thermal conductivity in a transient state." Symposium on Energy Geotechnics 2023, September 28, 2023. http://dx.doi.org/10.59490/seg.2023.519.

Full text
Abstract:
As a result of climate change, heat island effects, and an increase in geothermal applications, soil deposits are being subjected to increasingly variable thermal stress, resulting in changes in soil properties and behaviour. Analysing the variations in soil behaviour caused by thermal loads is crucial due to the substantial impact these changes have on the state of soil stress. A critical step towards this goal is to thoroughly understand the heat transmission process in a soil deposit. Thermal loads are transported to the soil primarily by heat conduction and convection. When there is no water seepage, heat transmission is dominated by conduction [1]. The heat conduction is governed by the soil thermal conductivity, k, which is affected by the thermal conductivities of the soil components (i.e., air, water, and solid particles) [2], as well as other physical parameters such as moisture content and density [1, 2]. Investigating the thermal conductivity in different soils has resulted in the development of various predictive models, including theoretical, empirical, and mathematical models [4]. Also, two main techniques are typically used to assess thermal conductivity in experimental tests: needle probes and thermal cells [3]. This study introduces a novel approach to understanding the influence of affecting parameters on thermal properties in a transient state of a cylindrical soil sample. The proposed method enables the comprehension of spatio-temporal variability of thermal conductivity in soils subjected to temperature fluctuations. Most studies have relied on steady-state assumptions to measure or estimate thermal conductivity contradicting the variable nature of soil thermal properties. However, the existing transient tools and methods suffer from limitations such as a small sample size and very short testing durations [3]. Consequently, these constraints prevent an accurate capture of the extent to which thermal conductivity would vary due to thermally induced variations in physical properties. As a result, reliable thermo-mechanical analysis of soil subjected to heat sources is hindered. To achieve this goal, a state-of-the-art thermal cell was meticulously designed and constructed at the UNSW Geo-Energy Laboratory. This innovative thermal cell enables the monitoring of thermal and physical properties variations in a cylindrical soil sample with a height of 500 mm and a diameter of 20 mm, facilitating the assessment of temporal and spatial variability in thermal properties. The application of thermal loads to the thermal cell is accomplished by utilising a spiral copper pipe wrapped around the outer walls of the cell. To ensure efficient heat transfer, a layer of graphite sheet is applied to the cylinder wall, promoting uniform heat transmission between the cylinder wall and the pipe. The thermal cell and a schematic representation of the sample set-up are illustrated in Figure 1. When the soil sample is prepared within the thermal cell, the thermal loading mechanism is activated to quickly elevate the temperature of the cylinder's outer wall to a target level. Once the thermal load is applied, the temperature gradient between the soil sample (with an initial temperature of T0) and the applied temperature (i.e., Tout) produces a transient, one-directional radial heat flow in the sample resulting in variations in the soil temperature which begin at the cylinder’s wall and progress towards the centre until the system reaches a steady-state condition. 
The resultant temperature distribution through the sample, however, depends on the soil moisture content, initial void ratio, density, etc., and as the heat transfer is governed by thermal conductivity, it could be concluded that thermal conductivity would change spatially and temporally as a result of variations in the mentioned parameters. Thus, to monitor temperature changes, a series of thermal sensors (8 sensors) are located at different distances from the origin of the cylinder (to observe the temperature distribution from the source of the thermal load) and also at different elevations (to check the uniformity of the thermal load through the cylinder height). Finally, an analytical solution is employed to analyse the obtained data and estimate the thermal conductivity of the soil, which would be different for various experimental scenarios (i.e., initial void ratios, density, moisture content, etc.). The analytical solution is developed by extending the solution proposed by [5] to capture changes in thermal conductivity in a soil cylinder exposed to temperature increase. Figure 1a shows typical test results in terms of the temperature variations through the test, obtained under a constant applied temperature of 55 °C for a dry soil sample with an initial void ratio of 0.4. For a constant initial void ratio and density, when the results of various applied dimensionless temperatures (Q) are plotted against dimensionless time (Fourier number, Fo) (Figure 1b), it is observed that the temperature profile at a specific radial distance is independent of the applied target temperature and the resultant temperature gradient while the soil density is constant and the soil is fully dry. Finally, to test the validity of this methodology, the induced temperature variability measured at various depths and radial distances in the soil is compared with the results from an FE heat transfer model, showing good agreement (the comparison is not shown here due to space limitations). It is concluded that the proposed experimental approach is reliable for investigating the spatio-temporal variability of thermal conductivity for a cylindrical soil sample subjected to temperature elevation and, hence, physical parameter variability.
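Editor's note: as a point of reference for the dimensionless plot described above, the sketch below uses the standard definitions of the groups involved. The paper's exact normalization and soil parameters are not given here, so the material values in the example are assumptions rather than measured data.

def thermal_diffusivity(k, rho, cp):
    # alpha = k / (rho * cp), in m^2/s, from conductivity, density, and specific heat.
    return k / (rho * cp)

def fourier_number(alpha, t, radius):
    # Dimensionless time Fo = alpha * t / R^2 for radial conduction in a cylinder.
    return alpha * t / radius ** 2

def dimensionless_temperature(temp, t_initial, t_wall):
    # Theta = (T - T0) / (T_wall - T0): 0 at the initial state, 1 at the heated wall.
    return (temp - t_initial) / (t_wall - t_initial)

# Illustrative values only: a dry sand-like soil in a 20 mm diameter cell (R = 10 mm),
# wall held at 55 degrees C, sensor reading 40 degrees C one hour into the test.
alpha = thermal_diffusivity(k=0.3, rho=1600.0, cp=800.0)   # about 2.3e-7 m^2/s
fo = fourier_number(alpha, t=3600.0, radius=0.010)
theta = dimensionless_temperature(temp=40.0, t_initial=20.0, t_wall=55.0)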
APA, Harvard, Vancouver, ISO, and other styles
40

Piatti-Farnell, Lorna. "“The Blood Never Stops Flowing and the Party Never Ends”: The Originals and the Afterlife of New Orleans as a Vampire City." M/C Journal 20, no. 5 (October 13, 2017). http://dx.doi.org/10.5204/mcj.1314.

Full text
Abstract:
Introduction: As both a historical and cultural entity, the city of New Orleans has long maintained a reputation as a centre for hedonistic and carnivalesque pleasures. Historically, images of mardi gras, jazz, and parties on the shores of the Mississippi have pervaded the cultural vision of the city as a “mecca” for “social life” (Marina 2), and successfully fed its tourism narratives. Simultaneously, however, a different kind of narrative also exists in the historical folds of the city’s urban mythology. Many tales of vampire sightings and supernatural accounts surround the area, and have contributed, over the years, to the establishment and mystification of New Orleans as a ‘vampire city’. This has produced, in turn, its own brand of vampire tourism (Murphy 2015). Mixed with historical rumours and Gothic folklore, the recent narratives of popular culture lie at the centre of the re-imagination of New Orleans as a vampire hub. Taking this idea as a point of departure, this article provides culturally- and historically-informed critical considerations of New Orleans as a ‘vampire city’, especially as portrayed in The Originals (2013-2017), a contemporary television series where vampires are the main protagonists. In the series, the historical narratives of New Orleans become entangled with – and are, at times, almost inseparable from – the fictional chronicles of the vampire in both aesthetic and conceptual terms. The critical connection between urban narratives and vampire representation, as far as New Orleans is concerned, is profoundly entangled with notions of both tourism and fictionalised popular accounts of folklore (Piatti-Farnell 172). In approaching the conceptual relationship between New Orleans as a cultural and historical entity and the vampire — in its folkloristic and imaginative context — the analysis will take a three-pronged approach: firstly, it will consider the historical narrative of tourism for the city of New Orleans; secondly, the city’s connection to vampires and other Gothicised entities will be considered, both historically and narratively; and finally, the analysis will focus on how the connection between New Orleans and Gothic folklore of the vampire is represented in The Originals, with the issue of cultural authenticity being brought into the foreground. A critical footnote must be given to the understanding of the term ‘New Orleans’ in this article as meaning primarily the French Quarter – or, the Vieux Carre – and its various representations. This geographical focus principally owes its existence to the profound cultural significance that the French Quarter has occupied in the history of New Orleans as a city, and, in particular, in its connection to narratives of magic and Gothic folklore, as well as the broader historical and contemporary tourism structures. A History of Tourism: Social historian Kevin Fox Gotham argues that New Orleans as a city has been particularly successful in fabricating a sellable image of itself; tourism, Gotham reminds us, is about “the production of local difference, local cultures, and different local histories that appeal to visitors’ tastes for the exotic and the unique” (“Gentrification” 1100). In these terms, both the history and the socio-cultural ‘feel’ of the city cannot be separated from the visual constructs that accompany it.
Over the decades, New Orleans has fabricated a distinct network of representational patterns for the Vieux Carre in particular, where the deployment of specific images, themes and motifs – which are, in truth, only peripherally tied to the city’s actual social and political history, and owe their creation and realisation more to the success of fictional narratives from film and literature – is employed to “stimulate tourist demands to buy and consume” (Gotham, “Gentrification” 1102). This image of the city as a hedonistic site is well acknowledged, and has to be understood, at least partially, as a conscious construct aimed at the production of an identity for itself, which the city can in turn sell to visitors, both domestically and internationally. New Orleans, Gotham suggests, is a ‘complex and constantly mutating city’, in which “meanings of place and community” are “inexorably intertwined with tourism” (Authentic 5). The view of New Orleans as a site of hedonistic pleasure is something that has been heavily capitalised upon by the tourism industry of the city for decades, if not centuries. A keen look at advertising pamphlets for the city, dating from the late Nineteenth century onwards, provides an overview of thematic selling points that primarily focus on notions of jazz, endless parties and, in particular, nostalgic and distinctly rose-tinted views of the Old South and its glorious plantations (Thomas 7). The decadent view of New Orleans as a centre of carnal pleasures has often been recalled by scholars and lay observers alike; this vision of the city indeed holds deep historical roots, and is entangled with the city’s own economic structures, as well as its acculturated tourism ones. In the late 19th and early 20th century one of the things that New Orleans was very famous for was actually Storyville, the city’s red-light district, sanctioned in 1897 by municipal ordinance. Storyville quickly became a centralised attraction in the heart of New Orleans, so much so that it began being heavily advertised, especially through the publication of the ‘Blue Book’, a resource created for tourists. The Blue Book contained, in alphabetical order, information on all the prostitutes of Storyville. Storyville remained very popular and the most famous attraction in New Orleans until its demolition in 1919. Anthony Stanonis suggests that, in its ability to promote a sellable image for the city, “Storyville meshed with the interests of businessmen in the age before mass tourism” (105). Even after the disappearance of Storyville, New Orleans continued to foster its image as a site of hedonism, a narrative aided by a favourable administration, especially in the 1930s and 1940s. The French Quarter, in particular, “became a tawdry mélange of brothels and gambling dens operating with impunity under lax law enforcement” (Souther 16). The image of the city as a site for pleasures of a worldly nature continued to be deeply rooted in the following decades, and survives today, as visible in the numerous exotic dance parlours located on the famous Bourbon Street. Vampire Tourism Simultaneously, however, a different kind of narrative also exists in the recent historical folds of the city’s urban mythology, where vampires, magic, and voodoo are an unavoidable presence. Many tales of vampire sightings and supernatural accounts surround the area, and have contributed, over the years, to the establishment and mystification of New Orleans as a ‘vampire city’. 
Kenneth Holditch contends that “New Orleans is a city in love with its myths, mysteries and fantasies” (quoted in McKinney 8). In the contemporary era, these qualities are profoundly reflected in the city’s urban tourism image, where the vampire narrative is pushed into the foreground. When in the city, one might be lucky enough to take one of the many ‘vampire tours’ — often coupled with narratives of haunted locations — or visit the vampire bookshop, or even take part in the annual vampire ball. Indeed, the presence of vampires in New Orleans’s contemporary tourism narrative is so pervasive that one might be tempted to assume that it has always occupied a prominent place in the city’s cultural fabric. Nonetheless, this perception is not accurate: the historical evidence from tourism pamphlets for the city does not make any mention of vampire tourism before the 1990s, and even then, the focus on the occult side of New Orleans tended to privilege stories of voodoo and hoodoo — a presence that still survives strongly in the cultural narrative of the city itself (Murphy 91). While the connection between vampires and New Orleans is an undoubtedly recent one, the development and establishment of New Orleans as a vampire city cannot be thought of as a straight line. A number of cultural and historical currents appear to converge in the creation of the city’s vampire mystique. The history and geography of the city here could be an important factor, and a useful starting point; as the site of extreme immigration and ethnic and racial mingling, New Orleans holds a reputation for mystery. The city was, of course, the regrettable site of a huge marketplace for the slave trade, so discussions of political economy could also be important here, although I’ll leave them for another time. As a city, New Orleans has often been described – by novelists, poets, and historians alike – as being somewhat ‘peculiar’. Simone de Beauvoir was known to have remarked that the city is surrounded by a “pearl grey” and “luminous” air (McKinney 1). In similar fashion, Oliver Evans claims the city carries “opalescent hints” (quoted in McKinney 1). New Orleans is famous for its quite thick mist, the result of high humidity levels in the air. To an observing eye, New Orleans seems immersed in an almost otherworldly ‘glow’, which bestows upon its limits an ethereal and mysterious quality (Piatti-Farnell 173). While the intention here is not to suggest that New Orleans is the only city to have mist – especially in the Southern States – one might venture to say that this physical phenomenon, joined with other occurrences and legends, has certainly contributed to the city’s Gothicised image. The geography of the city also makes it sadly famous for floods and their subsequent devastation, which over centuries have wrecked parts of the city irrevocably. New Orleans sits at a less than desirable geographical position: it is no more than 17 feet above sea level, and much of it is at least five feet below (McKinney 5). In spite of its lamentable fame, Hurricane Katrina was not the first devastating geo-meteorological phenomenon to hit and destroy most of New Orleans; one can trace similar hurricane occurrences in 1812 and 1915, which at the time significantly damaged parts of the French Quarter. The geographical position of New Orleans also contributes to the city’s well-known history of diseases such as the plague and tuberculosis – often associated, in previous centuries, with the miasma proper to reclaimed river lands. 
In similar terms, one must not forget New Orleans’s history of devastating fires – primarily in the years 1788, 1794, 1816, 1866 and 1919 – which slowly destroyed the main historical parts of the city, particularly in the Vieux Carre, and to some extent opened the way for regeneration and later gentrification as well. As a result of its troubled and destructive history, Louise McKinney claims that the city ‒ perhaps unlike any other in the United States ‒ hinges on perpetual cycles of destruction and regeneration, continuously showing “the wear and tear of human life” (McKinney 6). It is indeed in this extremely important element that New Orleans finds a conceptual source for its connection to notions of the undead, and the vampire in particular. Historically, one can identify the pervasive use of Gothic terminology to describe New Orleans, even if the descriptions themselves were more attuned to perceptions of the city’s architecture and meteorological conditions than to the recollection of any folklore-inspired narratives of undead creatures. Because of its mutating, and often ill-maintained, historical architecture – especially in the French Quarter – New Orleans has steadily maintained a reputation as a city of “splendid decay” (McKinney 6). This highly lyrical and metaphorical approach plays an important part in building the city as a site of mystery and enchantment. Its decaying outlook functions as an unavoidable sign of how New Orleans continues to absorb, and simultaneously repel, as McKinney puts it, “the effects of its own history” (6). Nonetheless, the history of New Orleans as a cultural entity, especially in terms of tourism, has not been tied to vampires for centuries, as many imagine, and as the city itself insists in its contemporary tourism narratives. Although a lot of folklore has survived around the city in connection to magic and mysticism, for a number of reasons, vampires have not always been in the foreground of its publicised cultural narratives. Mixed with historical rumours and Gothic folklore, the recent narratives of popular culture lie at the centre of the re-imagination of New Orleans as a vampire spot: most scholars claim that it all started with the publication of Anne Rice’s Interview with the Vampire (1976), but evidence actually shows that the vampire narrative for the city of New Orleans did not fully explode until the release of Neil Jordan’s cinematic adaptation of Interview with the Vampire (1994). This film firmly placed New Orleans at the centre of the vampire narrative, indulging in the use of many iconic locations in the city as tied to vampires, and cementing the idea of New Orleans as a vampiric city (Piatti-Farnell 175). The impact of Rice’s work, and its adaptations, has also been picked up by numerous other examples of popular culture, including Charlaine Harris’s Southern Vampire Mysteries series and its well-known television adaptation True Blood. Harris herself states in one of her novels: “New Orleans had been the place to go for vampires and those who wanted to be around them ever since Anne Rice had been proven right about their existence” (2). 
In spite of the fact that popular culture, rather than actual historical evidence, lies at the heart of the city’s cultural relationship with vampires, this does not detract from the fact that vampires themselves – as fabricated figures lying somewhere between folklore, history, and fiction – represent an influential part of New Orleans’s contemporary tourism narrative, building a bridge between historical storytelling, mythologised identities, and consumerism. The Originals: Vampires in the City Indeed, the impact of popular culture in establishing and re-establishing the success of the vampire tourism narrative in New Orleans is undeniable. Contemporary examples continue to capitalise on the visual, cultural, and suggestively historical connection between the city’s landmarks and vampire tales, cementing the notion of New Orleans as a solid entity within the Gothic tourism narrative. One such successful example is The Originals. This television show is a spin-off of The Vampire Diaries, and begins with three vampires, the Mikaelson siblings (Niklaus, Elijah, and Rebekah), returning to the city of New Orleans for the first time since 1919, when they were forced to flee by their vengeful father. In their absence, Niklaus's protégé, Marcel, took charge of the city. The storyline of The Originals focuses on battles within the vampire factions to regain control of the city, and eliminate the hold of other mystical creatures such as werewolves and witches (Anyiwo 175). The central narrative here is that the city belongs to the vampire, and there can be no other real Gothic presence in the Quarter. One can only wonder, even at this embryonic level, how this connection functions in a multifaceted way, extending the critique of the vampire’s relationship to New Orleans from the textual dimension of the TV show to the real-life cultural narrative of the city itself. A large number of the narrative strands in The Originals are tied to the city and its festivals, its celebrations, and its visions of the past, whether historically recorded, or living in the pages of its Gothic folklore. Vampires are actually claimed to have made New Orleans what it is today, and they undoubtedly rule it. As Marcel puts it: “The blood never stops flowing, and the party never ends” (Episode 1, “Always and Forever”). Even the vampiric mantra for New Orleans in The Originals is tied to the city’s existing and long-standing tourism narrative, as “the party never ends” is a reference to one of Bourbon Street’s famous slogans. Indeed, the pictorial influence of the city’s primary landmarks in The Originals is undeniable. In spite of the fact that the inside scenes for The Originals were filmed in a studio, the outside shots in the series reveal a strong connection to the city itself, as viewers are left with no doubt as to the show’s setting. New Orleans is continuously mentioned and put on show – and pervasively referred to as “our city” by the vampires. So much so that New Orleans becomes the centre of the feud between supernatural forces, as the vampires fight witches and werewolves – among others – to maintain control over the city’s historical heart. The French Quarter, in particular, is given renewed life from the ashes of history as the beating heart of the vampire narrative, so much so that it almost becomes a character in its own right, instrumental in constructing the vampire mystique. 
The impact of the vampire on constructing an image for the city of New Orleans is made explicit in The Originals, as the series shows vampires at the centre of the city’s history. Indeed, the show’s narrative goes as far as justifying the French Quarter’s history and even its legends through the vampire metaphor. For instance, the series explains the devastating fire that destroyed the French Opera House in 1919 as the result of a Mikaelson vampire family feud. In similar terms, the vampires of the French Quarter are shown at the heart of the Casquette Girls narrative, a well-known tale from Eighteenth-century colonial New Orleans, where young women were shipped from France to the new Louisiana colony, in order to marry. The young women were said to bring small chests – or casquettes – containing their clothes (Crandle 47). The Originals, however, capitalises on the folkloristic interpretation that perceives the girls’ luggage as coffins potentially containing the undead, a popular version of the tale that can often be heard if taking part in one of the many vampire tours in New Orleans. One can see here how the chronicles of the French Quarter in New Orleans and the presumed narratives of the vampire in the city merge to become one and the same, blurring the lines between history and fiction, and presenting the notion of folklore as a verifiable entity of the everyday (Kirshenblatt-Gimblett 25). It is essential to remember, en passant, that, as far as giving the undead their own historical chronicles in connection to New Orleans is concerned, The Originals is not alone in doing this. Other TV series like American Horror Story have provided Gothicised histories for the city, although in this case more connected to witchcraft, hoodoo, and voodoo than to vampires. What one can see taking place in The Originals is a form of alternate and revisionist history that is reminiscent of several instances of pulp and science fiction from the early 20th century, where the Gothic element lies at the centre not only of the fictional narrative, but also of the re-conceptualisation of historical time and space, as not absolute entities, but as narratives open to interpretation (Singles 103). The re-interpretation here is of course connected to the cultural anxieties that are intrinsic to the Gothic – of changes, shifts, and unwanted returns – and to the vampire as a figure of intersections, signalling the shift between stages of existence. If it is true that, to paraphrase Paul Ricoeur’s famous contention, the past returns to “haunt” us (105), then the history of New Orleans in The Originals is both established and haunted by vampires, a pervasive shadow that provides the city itself with an almost tangible Gothic afterlife. This connection, of course, extends beyond the fictional world of the television series, and finds fertile ground in the cultural narratives that the city constructs for itself. The tourism narrative of New Orleans also lies at the heart of the reconstructive historical imagination, which purposefully re-invents the city as a constructed entity that is, in itself, extremely sellable. The Originals mentions on multiple occasions that certain bars — owned, of course, by vampires — host regular ‘vampire-themed events’, to “keep the tourists happy”. 
The importance of maintaining a steady influx of vampire tourism into the Quarter is made very clear throughout, and the vampires are complicit in fostering it for a number of reasons: not only because it provides them and the city with constant revenue, but also because it brings a continuous source of fresh blood for the vampires to feed on. As Marcel puts it: “Something’s gotta draw in the out-of-towners. Otherwise we’d all go hungry” (Episode 1, “Always and Forever”). New Orleans, it is made clear, is not only portrayed as a vampire hub, but also as a hot spot for vampire tourism; as part of the tourism narratives, the vampires themselves — who commonly feign humanity — actually further ‘pretend’ to be vampires for the tourists, who expect to find vampires in the city. It is made clear in The Originals that vampires often put on a show – and bear in mind, these are vampires who pretend to be human, who pretend to be vampires for the tourists. They channel stereotypes that belong in Gothic novels and films, and that are, as far as the ‘real’ vampires of the series are concerned, mostly fictional. The vampires that are presented to the tourists in The Originals are, inevitably, inauthentic, for the real vampires themselves purposefully portray the vision of vampires put forward by popular culture, together with its own motifs and stereotypes. The vampires happily perform their popular culture role in order to meet the expectations of the tourist. This interaction — which sociologist Dean MacCannell would refer to, when discussing the dynamics of tourism, as “staged authenticity” (591) — is the basis of the appeal, and what continues to bring tourists back, generating profits for vampires and humans alike. Nina Auerbach has persuasively argued that the vampire is often eroticised through its connections to the “self-obsessed” glamour of consumerism that “subordinates history to seductive object” (57). With the issue of authenticity brought into sharp relief, The Originals also foregrounds questions of authenticity in relation to New Orleans’s own vampire tourism narrative, which ostensibly draws on historical narratives of magic, horror, and folklore, and constructs a fictionalised urban tale suited to the tourism trade. The vampires of the French Quarter in The Originals act as the embodiment of the constructed image of New Orleans as the epitome of a vampire tourist destination. Conclusion There is a clear suggestion in The Originals that vampires have evolved from simple creatures of old folklore to ‘products’ that can be sold to expectant tourists. This evolution, as far as popular culture is concerned, is also inevitably tied to the conceptualisation of certain locations as ‘vampiric’, a notion that, in the contemporary era, hinges on intersecting narratives of culture, history, and identity. Within this, New Orleans has successfully constructed an image for itself as a vampire city, exploiting, in a number of ways, the popular and purposefully historicised connection to the undead. In both tourism narratives and popular culture, of which The Originals is an ideal example, New Orleans’s urban image — often sited in constructions and re-constructions, re-birth and decay — is presented as a result of the vampire’s own existence, and thrives in the Gothicised afterlife of imagery, symbolism, and cultural persuasion. 
In these terms, the ‘inauthentic’ vampires of The Originals are an ideal allegory that provides a channelling ground for the issues surrounding the ‘inauthentic’ state of New Orleans as a sellable tourism entity. As both hinge on images of popular representation and desirable symbols, the historical narratives of New Orleans become entangled with — and are, at times, almost inseparable from — the fictional chronicles of the vampire in both aesthetic and conceptual terms. References Anyiwo, U. Melissa. “The Female Vampire in Popular Culture.” Gender in the Vampire Narrative. Eds. Amanda Hobson and U. Melissa Anyiwo. Rotterdam: Sense Publishers, 2016. 173-192. Auerbach, Nina. Our Vampires, Ourselves. Chicago: University of Chicago Press, 1995. Crandle, Marita Woywod. New Orleans Vampires: History and Legend. Stroud: The History Press, 2017. Gotham, Kevin Fox. Authentic New Orleans: Tourism, Culture, and Race in the Big Easy. New York: New York University Press, 2007. ———. “Tourism Gentrification: The Case of New Orleans’ Vieux Carre.” Urban Studies 42.7 (2005): 1099-1121. Harris, Charlaine. All Together Dead. London: Gollancz, 2008. Interview with the Vampire. Dir. Neil Jordan. Geffen Pictures, 1994. Kirshenblatt-Gimblett, Barbara. “Mistaken Dichotomies.” Public Folklore. Eds. Robert Baron and Nick Spitzer. Oxford: University of Mississippi Press, 2007. 28-48. Marina, Peter J. Down and Out in New Orleans: Transgressive Living in the Informal Economy. New York: Columbia University Press, 2017. McKinney, Louise. New Orleans: A Cultural History. Oxford: Oxford University Press, 2006. Murphy, Michael. Fear Dat New Orleans: A Guide to the Voodoo, Vampires, Graveyards & Ghosts of the Crescent City. New York: W.W. Norton & Company, 2015. Piatti-Farnell, Lorna. The Vampire in Contemporary Popular Literature. London: Routledge, 2014. Ricoeur, Paul. Memory, History, Forgetting. Chicago: University of Chicago Press, 2004. Singles, Kathleen. Alternate History: Playing with Contingency and Necessity. Boston: de Gruyter, 2013. Souther, Mark. New Orleans on Parade: Tourism and the Transformation of the Crescent City. Baton Rouge: University of Louisiana Press, 2006. Stanonis, Anthony J. Creating the Big Easy: New Orleans and the Emergence of Modern Tourism, 1918-1945. Athens: University of Georgia Press, 2006. The Originals. Seasons 1-4. CBS/Warner Bros Television. 2013-2017. Thomas, Lynell. Desire and Disaster in New Orleans: Tourism, Race, and Historical Memory. Durham: Duke University Press, 2014.
APA, Harvard, Vancouver, ISO, and other styles
41

Droumeva, Milena. "Curating Everyday Life: Approaches to Documenting Everyday Soundscapes." M/C Journal 18, no. 4 (August 10, 2015). http://dx.doi.org/10.5204/mcj.1009.

Full text
Abstract:
In the last decade, the cell phone’s transformation from a tool for mobile telephony into a multi-modal, computational “smart” media device has engendered a new kind of emplacement, and the ubiquity of technological mediation into the everyday settings of urban life. With it, a new kind of media literacy has become necessary for participation in the networked social publics (Ito; Jenkins et al.). Increasingly, the way we experience our physical environments, make sense of immediate events, and form impressions is through the lens of the camera and through the ear of the microphone, framed by the mediating possibilities of smartphones. Adopting these practices as a kind of new media “grammar” (Burn 29)—a multi-modal language for public and interpersonal communication—offers new perspectives for thinking about the way in which mobile computing technologies allow us to explore our environments and produce new types of cultural knowledge. Living in the Social Multiverse Many of us are concerned about new cultural practices that communication technologies bring about. In her now classic TED talk “Connected but alone?” Sherry Turkle talks about the world of instant communication as having the illusion of control through which we micromanage our immersion in mobile media and split virtual-physical presence. According to Turkle, what we fear is, on the one hand, being caught unprepared in a spontaneous event and, on the other hand, missing out or not documenting or recording events—a phenomenon that Abha Dawesar calls living in the “digital now.” There is, at the same time, a growing number of ways in which mobile computing devices connect us to new dimensions of everyday life and everyday experience: geo-locative services and augmented reality, convergent media and instantaneous participation in the social web. These technological capabilities arguably shift the nature of presence and set the stage for mobile users to communicate the flow of their everyday life through digital storytelling and media production. According to a Digital Insights survey on social media trends (Bennett), more than 500 million tweets are sent per day and 5 Vines tweeted every second; 100 hours of video are uploaded to YouTube every minute; more than 20 billion photos have been shared on Instagram to date; and close to 7 million people actively produce and publish content using social blogging platforms. There are more than 1 billion smartphones in the US alone, and most social media platforms are primarily accessed using mobile devices. The question is: how do we understand the enormity of these statistics as a coherent new media phenomenon and as a predominant form of media production and cultural participation? More importantly, how do mobile technologies re-mediate the way we see, hear, and perceive our surrounding evironment as part of the cultural circuit of capturing, sharing, and communicating with and through media artefacts? Such questions have furnished communication theory even before McLuhan’s famous tagline “the medium is the message”. Much of the discourse around communication technology and the senses has been marked by distinctions between “orality” and “literacy” understood as forms of collective consciousness engendered by technological shifts. 
Leveraging Jonathan Sterne’s critique of this “audio-visual litany”, an exploration of convergent multi-modal technologies allows us to focus instead on practices and techniques of use, considered as both perceptual and cultural constructs that reflect and inform social life. Here in particular, a focus on sound—or aurality—can help provide a fresh new entry point into studying technology and culture. The phenomenon of everyday photography is already well conceptualised as a cultural expression and a practice connected with identity construction and interpersonal communication (Pink, Visual). Much more rarely do we study the act of capturing information using mobile media devices as a multi-sensory practice that entails perceptual techniques as well as aesthetic considerations, and as something that in turn informs our unmediated sensory experience. Daisuke and Ito argue that—in contrast to hobbyist high-quality photographers—users of camera phones redefine the materiality of urban surroundings as “picture-worthy” (or not) and elevate the “mundane into a photographic object.” Indeed, whereas traditionally recordings and photographs hold institutional legitimacy as reliable archival references, the proliferation of portable smart technologies has transformed user-generated content into the gold standard for authentically representing the everyday. Given that visual approaches to studying these phenomena are well underway, this project takes a sound studies perspective, focusing on mediated aural practices in order to explore the way people make sense of their everyday acoustic environments using mobile media. Curation, in this sense, is a metaphor for everyday media production, illuminated by the practice of listening with mobile technology. Everyday Listening with Technology: A Case Study The present conceptualisation of curation emerged out of a participant-driven qualitative case study focused on using mobile media to make sense of urban everyday life. The study comprised 10 participants using iPod Touches (a device equivalent to an iPhone, without the phone part) to produce daily “aural postcards” of their everyday soundscapes and sonic experiences, over the course of two to four weeks. This work was further informed by, and updates, sonic ethnography approaches nascent in the World Soundscape Project, and the field of soundscape studies more broadly. Participants were asked to fill out a questionnaire about their media and technology use, in order to establish their participation in new media culture and correlate that to the documentary styles used in their aural postcards. With regard to capturing sonic material, participants were given open-ended instructions as to content and location, and encouraged to use the full capabilities of the device—that is, to record audio, video, and images, and to use any applications on the device. Specifically, I drew their attention to a recording app (Recorder) and a decibel measurement app (dB), which combines a photo with a static readout of ambient sound levels. One way most participants described the experience of capturing sound in a collection of recordings for a period of time was as making a “digital scrapbook” or a “media diary.” Even though they had recorded individual (often unrelated) soundscapes, almost everyone felt that the final product came together as a stand-alone collection—a kind of gallery of personalised everyday experiences that participants, if anything, wished to further organise, annotate, and flesh out. 
Examples of aural postcard formats used by participants: decibel photographs of everyday environments and a comparison audio recording of rain on a car roof with and without wipers (in the middle). Working with 139 aural postcards comprising more than 250 audio files and 150 photos and videos, the first step in the analysis was to articulate approaches to media documentation in terms of format, modality, and duration as deliberate choices in conversation with dominant media forms that participants regularly consume and are familiar with. Ambient sonic recordings (audio-only) comprised a large chunk of the data, and within this category there were two approaches: the sonic highlight, a short vignette of a given soundscape with minimal or no introduction or voice-over; and the process recording, featuring the entire duration of an unfolding soundscape or event. Live commentaries, similar to the conventions set forth by radio documentaries, represented voice-over entries at the location of the sound event, sometimes stationary and often in motion as the event unfolded. Voice memos described verbal reflections, pre- or post- sound event, with no discernable ambience—that is, participants intended them to serve as reflective devices rather than as part of the event. Finally, a number of participants also used the sound level meter app, which allowed them to generate visual records of the sonic levels of a given environment or location in the form of sound level photographs. Recording as a Way of Listening In their community soundwalking practice, Förnstrom and Taylor refer to recording sound in everyday settings as taking world experience, mediating it through one’s body and one’s memories and translating it into approximate experience. The media artefacts generated by participants as part of this study constitute precisely such ‘approximations’ of everyday life accessed through aural experience and mediated by the technological capabilities of the iPod. Thinking of aural postcards along this technological axis, the act of documenting everyday soundscapes involves participants acting as media producers, ‘framing’ urban everyday life through a mobile documentary rubric. In the process of curating these documentaries, they have to make decisions about the significance and stylistic framing of each entry and the message they wish to communicate. In order to bring the scope of these curatorial decisions into dialogue with established media forms, in this work’s analysis I combine Bill Nichols’s classification of documentary modes in cinema with Karin Bijsterveld’s concept of soundscape ‘staging’ to characterise the various approaches participants took to the multi-modal curation of their everyday (sonic) experience. In her recent book on the staging of urban soundscapes in both creative and documentary/archival media, Bijsterveld describes the representation of sound as particular ‘dramatisations’ that construct different kinds of meanings about urban space and engender different kinds of listening positions. Nichols’s articulation of cinematic documentary modes helps detail ways in which the author’s intentionality is reflected in the styling, design, and presentation of filmic narratives. Michel Chion’s discussion of cinematic listening modes further contextualises the cultural construction of listening that is a central part of both design and experience of media artefacts. 
This conceptual lens is especially relevant to understanding mobile curation of mediated sonic experience as a kind of mobile digital storytelling. Working across all postcards, settings, and formats, the following four themes capture some of the dominant stylistic dimensions of mobile media documentation. The exploratory approach describes a methodology for representing everyday life as a flow, predominantly through ambient recordings of unfolding processes that participants referred to in the final discussion as a ‘turn it on and forget it’ approach to recording. As a stylistic method, the exploratory approach aligns most closely with Nichols’s poetic and observational documentary modes, combining a ‘window to the world’ aesthetic with minimal narration, striving to convey the ‘inner truth’ of phenomenal experience. In terms of listening modes reflected in this approach, exploratory aural postcards most strongly engage causal listening, to use Chion’s framework of cinematic listening modes. By and large, the exploratory approach describes incidental documentaries of routine events: soundscapes that are featured as a result of greater attentiveness and investment in the sonic aspects of everyday life. The entries created using this approach reflect a process of discovering (seeing and hearing) the ordinary as extra-ordinary; re-experiencing sometimes mundane and routine places and activities with a fresh perspective; and actively exploring hidden characteristics, nuances of meaning, and significance. In one such example, a participant explores a new neighborhood while on a work errand. The narrative approach to creating aural postcards stages sound as a springboard for recollecting memories and storytelling through reflecting on associations with other soundscapes, environments, and interactions. Rather than highlighting place, routine, or sound itself, this methodology constructs sound as a window into the identity and inner life of the recordist, mobilising most strongly a semantic listening mode through association and narrative around sound’s meaning in context (Chion 28). This approach combines a subjective narrative development with a participatory aesthetic that draws the listener into the unfolding story. This approach is also performative, in that it stages sound as a deeply subjective experience and approaches the narrative from a personally significant perspective. Most often this type of sound staging was curated using voice memo narratives about a particular sonic experience in conjunction with an ambient sonic highlight, or as a live commentary. Recollections typically emerged from incidental encounters, or in the midst of other observations about sound. In the following example a participant reminisces about the sound of wind, which, interestingly, she did not record: Today I have been listening to the wind. It’s really rainy and windy outside today and it was reminding me how much I like the sound of wind. And you know when I was growing up on the wide prairies, we sure had a lot of wind and sometimes I kind of miss the sound of it… (Participant 1) The aesthetic approach describes instances where the creation of aural postcards was motivated by a reduced listening position (Chion 29)—driven primarily by the qualities and features of the soundscape itself. 
This curatorial practice for staging mediated aural experience combines a largely subjective approach to documenting with an absence of traditional narrative development and an affective and evocative aesthetic. Where the exploratory documentary approach seeks to represent place, routine, environment, and context through sonic characteristics, the aesthetic approach features sound first and foremost, aiming to represent and comment on sound qualities and characteristics in a more ‘authentic’ manner. The media formats most often used in conjunction with this approach were the incidental ambient sonic highlight and the live commentary. In the following example we have the sound of coffee being made as an important domestic ritual where important auditory qualities are foregrounded: That’s the sound of a stovetop percolator which I’ve been using for many years and I pretty much know exactly how long it takes to make a pot of coffee by the sound that it makes. As soon as it starts gurgling I know I have about a minute before it burns. It’s like the coffee calls and I come. (Participant 6) The analytical approach characterises entries that stage mediated aural experience as a way of systematically and inductively investigating everyday phenomena. It is a conceptual and analytical experimental methodology employed to move towards confirming or disproving a ‘hypothesis’ or forming a theory about sonic relations developed in the course of the study. As such, this approach most strongly aligns with Chion’s semantic listening mode, with the addition of the interactive element of analytical inquiry. In this context, sound is treated as a variable to be measured, compared, researched, and theorised about in an explicit attempt to form conclusions about social relationships, personal significance, place, or function. This analytical methodology combines an explicit and critical focus to the process of documenting itself (whether it be measuring decibels or systematically attending to sonic qualities) with a distinctive analytical synthesis that presents as ‘formal discovery’ or even ‘truth.’ In using this approach, participants most often mobilised the format of short sonic highlights and follow-up voice memos. While these aural postcards typically contained sound level photographs (decibel measurement values), in some cases the inquiry and subsequent conclusions were made inductively through sustained observation of a series of soundscapes. The following example is by a participant who exclusively recorded and compared various domestic spaces in terms of sound levels, comparing and contrasting them using voice memos. This is a sound level photograph of his home computer system: So I decided to record sitting next to my computer today just because my computer is loud, so I wanted to see exactly how loud it really was. But I kept the door closed just to be sort of fair, see how quiet it could possibly get. I think it peaked at 75 decibels, and that’s like, I looked up a decibel scale, and apparently a lawn mower is like 90 decibels. (Participant 2) Mediated Curation as a New Media Cultural Practice? One aspect of adopting the metaphor of ‘curation’ towards everyday media production is that it shifts the critical discourse on aesthetic expression from the realm of specialised expertise to general practice (“Everyone’s a photographer”). 
The act of curation is filtered through the aesthetic and technological capabilities of the smartphone, a device that has become co-constitutive of our routine sensorial encounters with the world. Revisiting McLuhan-inspired discourses on communication technologies stages the iPhone not as a device that itself shifts consciousness but as an agent in a media ecology co-constructed by the forces of use and design—a “crystallization of cultural practices” (Sterne). As such, mobile technology is continuously re-crystalised as design ‘constraints’ meet both normative and transgressive user approaches to interacting with everyday life. The concept of ‘social curation’ already exists in commercial discourse for social web marketing (O’Connell; Allton). High-traffic, wide-integration web services such as Digg and Pinterest, as well as older portals such as Reddit, all work on the principles of arranging user-generated, web-aggregated, and re-purposed content around custom themes. From a business perspective, the notion of ‘social curation’ captures, unsurprisingly, only the surface level of consumer behaviour rather than the kinds of values and meaning that this process holds for people. In the more traditional sense, art curation involves aesthetic, pragmatic, epistemological, and communication choices about the subject of (re)presentation, including considerations such as manner of display, intended audience, and affective and phenomenal impact. In his 2012 book tracing the discourse and culture of curating, Paul O’Neill proposes that over the last few decades the role of the curator has shifted from one of arts administrator to important agent in the production of cultural experiences, an influential cultural figure in her own right, independent of artistic content (88). Such discursive shifts in the formulation of ‘curatorship’ can easily be transposed from a specialised to a generalised context of cultural production, in which everyone with the technological means to capture, share, and frame the material and sensory content of everyday life is a curator of sorts. Each of us is an agent with a unique aesthetic and epistemological perspective, regardless of the content we curate. The entire communicative exchange is necessarily located within a nexus of new media practices as an activity that simultaneously frames a cultural construction of sensory experience and serves as a cultural production of the self. To return to the question of listening and a sound studies perspective into mediated cultural practices, technology has not single-handedly changed the way we listen and attend to everyday experience, but it has certainly influenced the range and manner in which we make sense of the sensory ‘everyday’. Unlike acoustic listening, mobile digital technologies prompt us to frame sonic experience in a multi-modal and multi-medial fashion—through the microphone, through the camera, and through the interactive, analytical capabilities of the device itself. Each decision for sensory capture as a curatorial act is both epistemological and aesthetic; it implies value of personal significance and an intention to communicate meaning. The occurrences that are captured constitute impressions, highlights, significant moments, emotions, reflections, experiments, and creative efforts—very different knowledge artefacts from those produced through textual means. 
Framing phenomenal experience—in this case, listening—in this way is, I argue, a core characteristic of a more general type of new media literacy and sensibility: that of multi-modal documenting of sensory materialities, or the curation of everyday life. References Allton, Mike. “5 Cool Content Curation Tools for Social Marketers.” Social Media Today. 15 Apr. 2013. 10 June 2015 ‹http://socialmediatoday.com/mike-allton/1378881/5-cool-content-curation-tools-social-marketers›. Bennett, Shea. “Social Media Stats 2014.” Mediabistro. 9 June 2014. 20 June 2015 ‹http://www.mediabistro.com/alltwitter/social-media-statistics-2014_b57746›. Bijsterveld, Karin, ed. Soundscapes of the Urban Past: Staged Sound as Mediated Cultural Heritage. Bielefeld: Transcript-Verlag, 2013. Burn, Andrew. Making New Media: Creative Production and Digital Literacies. New York, NY: Peter Lang Publishing, 2009. Daisuke, Okabe, and Mizuko Ito. “Camera Phones Changing the Definition of Picture-worthy.” Japan Media Review. 8 Aug. 2015 ‹http://www.dourish.com/classes/ics234cw04/ito3.pdf›. Chion, Michel. Audio-Vision: Sound on Screen. New York, NY: Columbia UP, 1994. Förnstrom, Mikael, and Sean Taylor. “Creative Soundwalks.” Urban Soundscapes and Critical Citizenship Symposium. Limerick, Ireland. 27–29 March 2014. Ito, Mizuko, ed. Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media. Cambridge, MA: The MIT Press, 2010. Jenkins, Henry, Ravi Purushotma, Margaret Weigel, Katie Clinton, and Alice J. Robison. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. White Paper prepared for the MacArthur Foundation, 2006. McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964. Nichols, Bill. Introduction to Documentary. Bloomington & Indianapolis, Indiana: Indiana UP, 2001. Nielsen. “State of the Media – The Social Media Report.” Nielsen 4 Dec. 2012. 12 May 2015 ‹http://www.nielsen.com/us/en/insights/reports/2012/state-of-the-media-the-social-media-report-2012.html›. O’Connell, Judy. “Social Content Curation – A Shift from the Traditional.” 8 Aug. 2011. 11 May 2015 ‹http://judyoconnell.com/2011/08/08/social-content-curation-a-shift-from-the-traditional/›. O’Neill, Paul. The Culture of Curating and the Curating of Culture(s). Cambridge, MA: MIT Press, 2012. Pink, Sarah. Doing Visual Ethnography. London, UK: Sage, 2007. ———. Situating Everyday Life. London, UK: Sage, 2012. Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Durham, NC: Duke UP, 2003. Schafer, R. Murray, ed. World Soundscape Project. European Sound Diary (reprinted). Vancouver: A.R.C. Publications, 1977. Turkle, Sherry. “Connected But Alone?” TED Talk, Feb. 2012. 8 Aug. 2015 ‹http://www.ted.com/talks/sherry_turkle_alone_together?language=en›.
APA, Harvard, Vancouver, ISO, and other styles
42

West, Patrick Leslie, and Cher Coad. "The CCTV Headquarters—Horizontal Skyscraper or Vertical Courtyard? Anomalies of Beijing Architecture, Urbanism, and Globalisation." M/C Journal 23, no. 5 (October 7, 2020). http://dx.doi.org/10.5204/mcj.1680.

Full text
Abstract:
I have decided to launch a campaign against the skyscraper, that hideous, mediocre form of architecture…. Today we only have an empty version of it, only competing in height.— Rem Koolhaas, “Kool Enough for Beijing?”Figure 1: The CCTV Headquarters—A Courtyard in the Air. Cher Coad, 2020.Introduction: An Anomaly within an Anomaly Construction of Beijing’s China Central Television Headquarters (henceforth CCTV Headquarters) began in 2004 and the building was officially completed in 2012. It is a project by the Office for Metropolitan Architecture (OMA) headed by Rem Koolhaas (1944-), who has been called “the coolest, hippest, and most cutting-edge architect on the planet”(“Rem Koolhaas Biography”). The CCTV Headquarters is a distinctive feature of downtown Beijing and is heavily associated in the Western world with 21st-century China. It is often used as the backdrop for reports from the China correspondent for the Australian Broadcasting Corporation (ABC), Bill Birtles. The construction of the CCTV Headquarters, however, was very much an international enterprise. Koolhaas himself is Dutch, and the building was one of the first projects the OMA did outside of America after 9/11. As Koolhaas describes it: we had incredible emphasis on New York for five years, and America for five years, and what we decided to do after September 11 when we realized that, you know, things were going to be different in America: [was] to also orient ourselves eastwards [Koolhaas goes on to describe two projects: the Hermitage Museum, St. Petersburg, Russia and the CCTV Headquarters]. (Rem Koolhaas Interview) Problematically, Koolhaas claims that the building we created for CCTV could never have been conceived by the Chinese and could never have been built by Europeans. It is a hybrid by definition. It was also a partnership, not a foreign imposition…. There was a huge Chinese component from the very beginning. We tried to do a building that conveys that it has emerged from the local situation. (Fraioli 117) Our article reinterprets this reading. We suggest that the OMA’s “incredible emphasis” on America—home of the world’s first skyscraper: the Home Insurance Building built in 1885 in Chicago, Illinois—pivotally spills over into its engagement with China. The emergence of the CCTV Headquarters “from the local situation”, such as it is, is more in spite of Koolhaas’s stated “hybrid” approach than because of it, for what’s missing from his analysis of the CCTV Headquarters’ provenance is the siheyuan or classical Chinese courtyard house. We will argue that the CCTV Headquarters is an anomaly within an anomaly in contemporary Beijing’s urban landscape, to the extent that it turns the typologies of both the (vertical, American) skyscraper and the (horizontal, Chinese) siheyuan on a 90 degree angle. The important point to make here, however, is that these two anomalous elements of the building are not of the same order. While the anomalous re-configuration of the skyscraper typology is clearly part of Koolhaas’s architectural manifesto, it is against his architectural intentionality that the CCTV Headquarters sustains the typology of the siheyuan. This bespeaks the persistent and perhaps functional presence of traditional Chinese architecture and urbanism in the building. Koolhaas’s building contains both starkly evident and more secretive anomalies. Ironically then, there is a certain truth in Koolhaas’s words, beneath the critique we made of it above as an example of American-dominated, homogenising globalisation. 
And the significance of the CCTV Headquarters’ hybridity as both skyscraper and siheyuan can be elaborated through Daniel M. Abramson’s thesis that a consideration of unbuilt architecture has the potential to re-open architecture to its historical conditions. Roberto Schwarz argues that “forms are the abstract of specific social relationships” (53). Drawing on Schwarz’s work and Abramson’s, we conclude that the historical presence—as secretive anomaly—of the siheyuan in the CCTV Headquarters suggests that the building’s formal debt to the siheyuan (more so than to the American skyscraper) may continue to unsettle the “specific social relationship” of Chinese to Western society (Schwarz 53). The site of this unsettlement, we suggest, is data. The CCTV Headquarters might well be the most data-rich site in all of China—it is, after all, a monumental television station. Suggestively, this wealth of airborne data is literally enclosed within the aerial “courtyard”, with its classical Chinese form, of the CCTV Headquarters. This could hardly be irrelevant in the context of the geo-politics of globalised data. The “form of data”, to coin a phrase, radiates through all the social consequences of data flow and usage, and here the form of data is entwined with a form always already saturated with social consequence. The secretive architectural anomaly of Koolhaas’s building is thus a heterotopic space within the broader Western engagement with China, so much of which relates to flows and captures of data. The Ubiquitous Siheyuan or Classical Chinese Courtyard House According to Ying Liu and Adenrele Awotona, “the courtyard house, a residential compound with buildings surrounding a courtyard on four (or sometimes three) sides, has been representative of housing patterns for over one thousand years in China” (248). Liu and Awotona state that “courtyard house patterns could be found in many parts of China, but the most typical forms are those located in the Old City in Beijing, the capital of China for over eight hundred years” (252). In their reading, the siheyuan is a peculiarly elastic architectural typology, whose influence is present as much in the Forbidden City as in the humble family home (252). Prima facie then, it is not surprising that it has also secreted itself within the architectural form of Koolhaas’s creation. It is important to note, however, that while the “most typical forms” of the siheyuan are indeed still to be found in Beijing, the courtyard house is an increasingly uncommon sight in the Chinese capital. An article in the China Daily from 2004 refers to the “few remaining siheyuan” (“Kool Enough for Beijing?”). That said, all is not lost for the siheyuan. Liu and Awotona discuss how the classical form of the courtyard house has been modified to more effectively house current residents in the older parts of Beijing while protecting “the horizontal planning feature of traditional Beijing” (254). “Basic design principles” (255) of the siheyuan have supported “a transition from the traditional single-household courtyard housing form to a contemporary multi-household courtyard housing form” (254). In this process, approaches of “urban renewal [involving] demolition” and “preservation, renovation and rebuilding” have been taken (255). Donia Zhang extends the work of Liu and Awotona in the elaboration of her thesis that “Chinese-Americans interested in building Chinese-style courtyard houses in America are keen to learn about their architectural heritage” (47). 
Zhang’s article concludes with an illustration that shows how the siheyuan may be merged with the typical American suburban dwelling (66). The final thing to emphasise about the siheyuan is what Liu and Awotona describe as its “special introverted quality” (249). The form is saturated with social consequence by virtue of its philosophical undergirding. The coincidence of philosophies of Daoism (including feng-shui) and Confucianism in the architecture and spatiality of the classical Chinese courtyard house makes it an exceedingly odd anomaly of passivity and power (250-51). The courtyard itself has a highly charged role in the management of family, social and cultural life, which, we suggest, survives its transposition into novel architectural environments. Figure 2: The CCTV Headquarters—Looking Up at “The Overhang”. Cher Coad, 2020. The CCTV Headquarters: A New Type of Skyscraper? Rem Koolhaas is not the only architect to interrogate the standard skyscraper typology. In his essay from 1999, “The Architecture of the Future”, Norman Foster argues that “the world’s increasing ecological crisis” (278) is in part a function of “unchecked urban sprawl” (279). A new type of skyscraper, he suggests, might at least ameliorate the sprawl of our cities: the Millennium Tower that we have proposed in Tokyo takes a traditional horizontal city quarter—housing, shops, restaurants, cinemas, museums, sporting facilities, green spaces and public transport networks—and turns it on its side to create a super-tall building with a multiplicity of uses … . It would create a virtually self-sufficient, fully self-sustaining community in the sky. (279) Koolhaas follows suit, arguing that “the actual point of the skyscraper—to increase worker density—has been lost. Skyscrapers are now only momentary points of high density spaced so far apart that they don’t actually increase density at all” (“Kool Enough for Beijing?”). Foster’s solution to urban sprawl is to make the horizontal (an urban segment) vertical; Koolhaas’s is to make the vertical horizontal: “we’ve [OMA] come up with two types: a very low-rise series of buildings, or a single, condensed hyperbuilding. What we’re doing with CCTV is a prototype of the hyperbuilding” (“Kool Enough for Beijing?”). Interestingly, the “low-rise” type mentioned here brings to mind the siheyuan—textual evidence, perhaps, that the siheyuan is always already a silent fellow traveller of the CCTV Headquarters project. The CCTV Headquarters is, even at over 200 metres tall itself, an anomaly of horizontalism amidst Beijing’s pervasive skyscraper verticality. As Paul Goldberger reports, “some Beijingers have taken to calling it Big Shorts”, which again evokes horizontality. This is its most obvious anomaly, and a somewhat melancholy reminder of “the horizontal planning feature of traditional Beijing” now mutilated by skyscrapers (Liu and Awotona 254). In the same gesture, however, with which it lays the skyscraper on its side, Koolhaas’s creation raises into the air the shape of the courtyard of a classical Chinese house. To our knowledge, no one has noticed this before, let alone written about it. It is, to be sure, a genuine courtyard shape—not merely an archway or a bridge with unoccupied space between. Pure building entirely surrounds the vertical courtyard shape formed in the air. Most images of the building provide an orientation that maximises the size of its vertical courtyard. To this extent, the (secret) courtyard shape of the building is hidden in plain sight. 
It is possible, however, to make the courtyard narrow to a mere slit of space, and finally to nothing, by circumnavigating the building. Certain perspectives on the building can even make it look like a more-or-less ordinary skyscraper. But, as a quick Google-image search reveals, such views are rare. What seems to make the building special to people is precisely that part of it that is not building. Furthermore, anyone approaching the CCTV Headquarters with the intention of locating a courtyard typology within its form will be disappointed unless they look to its vertical plane. There is no hint of a courtyard at the base of the building.
Figure 3: The CCTV Headquarters—View from “The Overhang”. Cher Coad, 2020.
Figure 4: The CCTV Headquarters—Looking through the Floor of “The Overhang”. Cher Coad, 2020.
Visiting the CCTV Headquarters: A “Special Introverted Quality?”
In January 2020, we visited the CCTV Headquarters, ostensibly as audience members for a recording of a science spectacular show. Towards the end of the recording, we were granted a quick tour of the building. It is rare for foreigners to gain access to the sections of the building we visited. Taking the lift about 40 floors up, we arrived at the cantilever level—known informally as “the overhang”. Glass discs in the floor allow one to walk out over nothingness, looking down on ant-like pedestrians. Looking down like this was also to peer into the vacant “courtyard” of the building—into a structure “turned or pushed inward on itself”, which is the anatomical definition of “introverted” (Oxford Languages Dictionary). Workers in the building evinced no great affection for it, and certainly nothing of our wide-eyed wonder. Somebody said, “it’s just a place to work”. One of this article’s authors, Patrick West, seemed to feel the overhang almost imperceptibly vibrating beneath him. (Still, he has also experienced this sensation in conventional skyscrapers.) We were told the rumour that the building has started to tilt over dangerously. Being high in the air, but also high on the air, with nothing but air beneath us, felt edgy—somehow special—our own little world. Koolhaas promotes the CCTV Headquarters as (in paraphrase) “its own city, its own community” (“Kool Enough for Beijing?”). This resonated with us on our visit. Conventional skyscrapers fracture any sense of community through their segregated floor-upon-floor verticality; there is never enough room for a little patch of horizontal urbanism to unroll. Within “the overhang”, the CCTV Headquarters felt unlike a standard skyscraper, as if we were in an urban space magically levitated from the streets below. Sure, we had been told by one of the building’s inhabitants that it was “just a place to work”—but compared to the bleak sterility of most skyscraper workplaces, it wasn’t that sterile. The phrase Liu and Awotona use of the siheyuan comes to mind here, as we recall our experience; somehow, we had been inside a different type of building, one with its own “special introverted quality” (249). Special, that is, in the sense of containing just so much of horizontal urbanism as allows the building to retain its introverted quality as “its own city” (“Kool Enough for Beijing?”).
Figure 5: The CCTV Headquarters—View from “The Overhang”. Cher Coad, 2020.
Figure 6: The CCTV Headquarters—Inside “The Overhang”. Cher Coad, 2020.
Unbuilt Architecture: The Visionary and the Contingent
Within the present that it constitutes, built architecture is surrounded by unbuilt architecture at two interfaces: where the past ends; where the future begins. The soupy mix of urbanism continually spawns myriad architectural possibilities, and any given skyscraper is haunted by all the skyscrapers it might have been. History and the past hang heavily from them. Meanwhile, architectural programme or ambition—such as it is—pulls in the other direction: towards an idealised (if not impossible to practically realise) future. Along these lines, Koolhaas and the OMA are plainly a future-directed, as well as self-aware, architectural unit:
at OMA we try to build in the greatest possible tolerance and the least amount of rigidity in terms of embodying one particular moment. We want our buildings to evolve. A building has at least two lives—the one imagined by its maker and the life it lives afterward—and they are never the same. (Fraioli 115)
Koolhaas makes the same point even more starkly with regard to the CCTV Headquarters project through his use of the word “prototype”: “what we’re doing with CCTV is a prototype of the hyperbuilding” (“Kool Enough for Beijing?”). At the same time, however, as the presence of the siheyuan within the architecture of the CCTV Headquarters shows, the work of the OMA cannot escape from the superabundance of history, within which, as Roberto Schwarz claims, “forms are the abstract of specific social relationships” (53). Supporting our contentions here, Daniel M. Abramson notes that
unbuilt architecture implies two sub-categories … the visionary unbuilt, and the contingent … . Visionary schemes invite a forward glance, down one true, vanguard path to a reformed society and discipline. The contingent unbuilts, conversely, invite a backward glance, along multiple routes history might have gone, each with its own likelihood and validity; no privileged truths. (Abramson)
Introducing Abramson’s theory to the example of the CCTV Headquarters, we find that the “visionary unbuilt” lines up with Koolhaas’s thesis that the building is a future-directed “prototype”, while the clearest candidate for the “contingent unbuilt”, we suggest, is the siheyuan. Why? Firstly, the siheyuan is hidden in plain sight, within the framing architecture of the CCTV Headquarters; secondly, it is ubiquitous in Beijing urbanism—little wonder then that it turns up, unannounced, in this Beijing building; thirdly, and related to the second point, the two buildings share a “special introverted quality” (Liu and Awotona 249). “The contingent”, in this case, is the anomaly nestled within the much more blatant “visionary” (or futuristic) anomaly—the hyperbuilding to come—of the Beijing-embedded CCTV Headquarters. Koolhaas’s building’s most fascinating anomaly relates, not to any forecast of the future, but to the subtle persistence of the past—its muted quotation of the ancient siheyuan form. Our article is, in part, a response to Abramson’s invitation to “pursue … the consequences of the unbuilt … [and thus] to open architectural history more fully to history”. We have supplemented Abramson’s idea with Schwarz’s suggestion that “forms are the abstract of specific social relationships” (53).
The anomaly of the siheyuan—alongside that of the hyperbuilding—within the CCTV Headquarters opens the building up (paraphrasing Abramson) to a fuller analysis of its historical positioning within Western and Eastern flows of globalisation (or better, as we are about to suggest, of glocalisation). In parallel, its form (paraphrasing Schwarz) abstracts and re-presents this history’s specific social relationships.
Figure 7: The CCTV Headquarters—A Courtyard of Data. Cher Coad, 2020.
Conclusion: A Courtyard of Data and Tensions of Glocalisation
Koolhaas proposes that the CCTV Headquarters was “a partnership, not a foreign imposition” and that the building “emerged from the local situation” (Fraioli 117). To us, this smacks of Pollyanna globalisation. The CCTV Headquarters is, we suggest, more accurately read as an imposition of the American skyscraper typology, albeit in anomalous form. (One might even argue that the building’s horizontal deviation from the vertical norm reinforces that norm.) Still, amidst a thicket of conventionally vertical skyscrapers, the building’s horizontalism does have the anomalous effect of recalling “the horizontal planning feature of traditional Beijing” (Liu and Awotona 254). Buried within its horizontalism, however, lies a more secretive anomaly in the form of a vertical siheyuan. This anomaly, we contend, motivates a terminological shift from “globalisation” to “glocalisation”, for the latter term better captures the notion of a lack of reconciliation between the “global” and the “local” in the building. Koolhaas’s visionary architectural programme explicitly advances anomaly. The CCTV Headquarters radically reworks the skyscraper typology as the prototype of a hyperbuilding defined by horizontalism. Certainly, such horizontalism recalls the horizontal plane of pre-skyscraper Beijing and, if faintly, that plane’s ubiquitous feature: the classical courtyard house. Simultaneously, however, the siheyuan has a direct if secretive presence within the morphology of the CCTV Headquarters, even as any suggestion of a vertical courtyard is strikingly absent from Koolhaas’s vanguard manifesto. To this extent, the hyperbuilding fits within Abramson’s category of “the visionary unbuilt”, while the siheyuan aligns with Abramson’s “contingent unbuilt” descriptor. The latter is the “might have been” that, largely under the pressure of its ubiquity as Beijing vernacular architecture, “very nearly is”. Drawing on Schwarz’s idea that “forms are the abstract of specific social relationships” (53), we propose that the siheyuan, as an anomalous form of the CCTV Headquarters, is a heterotopic space within the hybrid global harmony (to paraphrase Koolhaas) purportedly represented by the building. In this space thus formed collide the built-up historical and philosophical social intensity of the classical Chinese courtyard house and the intensities of data flows and captures that help constitute the predominantly capitalist and neo-liberalist “social relationship” of China and the Western world—the world of the skyscraper (Schwarz). Within the siheyuan of the CCTV Headquarters, globalised data is literally enveloped by Daoism and Confucianism; it is saturated with the social consequence of local place. The term “glocalisation” is, we suggest, to be preferred here to “globalisation”, because of how it better reflects such vernacular interruptions to the hegemony of globalised space.
Forms delineate social relationships, and data, which both forms and is formed by social relationships, may be formed by architecture as much as anything else within social space. Attention to the unbuilt architectural forms (vanguard and contingent) contained within the CCTV Headquarters reveals layers of anomaly that might, ultimately, point to another form of architecture entirely, in which glocal tensions are not only recognised, but resolved. Here, Abramson’s historical project intersects, in the final analysis, with a worldwide politics.
Figure 8: The CCTV Headquarters—A Sound Stage in Action. Cher Coad, 2020.
References
Abramson, Daniel M. “Stakes of the Unbuilt.” Aggregate Architectural History Collaborative. 20 July 2020. <http://we-aggregate.org/piece/stakes-of-the-unbuilt>.
Foster, Norman. “The Architecture of the Future.” The Architecture Reader: Essential Writings from Vitruvius to the Present. Ed. A. Krista Sykes. New York: George Braziller, 2007: 276-79.
Fraioli, Paul. “The Invention and Reinvention of the City: An Interview with Rem Koolhaas.” Journal of International Affairs 65.2 (Spring/Summer 2012): 113-19.
Goldberger, Paul. “Forbidden Cities: Beijing’s Great New Architecture Is a Mixed Blessing for the City.” The New Yorker—The Sky Line. 23 June 2008. <https://www.newyorker.com/magazine/2008/06/30/forbidden-cities>.
“Kool Enough for Beijing?” China Daily. 2 March 2004. <https://www.chinadaily.com.cn/english/doc/2004-03/02/content_310800.htm>.
Liu, Ying, and Adenrele Awotona. “The Traditional Courtyard House in China: Its Formation and Transition.” Evolving Environmental Ideals—Changing Way of Life, Values and Design Practices: IAPS 14 Conference Proceedings. IAPS. Stockholm, Sweden: Royal Institute of Technology, 1996: 248-60. <https://iaps.architexturez.net/system/files/pdf/1202bm1029.content.pdf>.
Oxford Languages Dictionary.
“Rem Koolhaas Biography.” Encyclopedia of World Biography. 20 July 2020. <https://www.notablebiographies.com/news/Ge-La/Koolhaas-Rem.html>.
“Rem Koolhaas Interview.” Manufacturing Intellect. Canadian Broadcasting Corporation. 2003. <https://www.youtube.com/watch?v=oW187PwSjY0>.
Schwarz, Roberto. Misplaced Ideas: Essays on Brazilian Culture. New York: Verso, 1992.
Zhang, Donia. “Classical Courtyard Houses of Beijing: Architecture as Cultural Artifact.” Space and Communication 1.1 (Dec. 2015): 47-68.