Journal articles on the topic "Pressure Concrete Company"

Consult the top 34 journal articles for your research on the topic "Pressure Concrete Company."

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, where these parameters are available in the metadata.

Browse journal articles across a wide variety of disciplines and compile your bibliography correctly.

1

Martauz, Pavel, Vojtěch Václavík and Branislav Cvopa. "The Properties of Concrete Based on Steel Slag as a By-Product of Metallurgical Production". Key Engineering Materials 838 (April 2020): 10–22. http://dx.doi.org/10.4028/www.scientific.net/kem.838.10.

Annotation:
This article presents the results of research on the use of unstable steel slag with a fraction of 0/8 mm as a 100% substitute for natural aggregate in concrete production. Two types of cement were used for the production of the concrete: Portland cement CEM I 42.5N and hybrid cement H-CEMENT, both produced by the company Považská cementárna, a.s., Ladce. The main objective of this study was to assess the suitable type of binder to be combined with unstable steel slag in the production of a concrete composite. The prepared concrete was used to test the properties of the fresh concrete mix, i.e. its consistency and bulk density. The hardened concrete was used to test the strength and deformation properties, including cube strength after 3, 7, 14, 21, 28 and 90 days, as well as prism strength after 28 days. The static modulus of elasticity was determined using prisms at 28 days of age. Our attention was also focused on determining the leachability class of the concretes based on steel slag with CEM I 42.5N and H-CEMENT. The durability of concrete prepared on the basis of steel slag was tested in an environment with increased temperature and pressure. The results of the strength tests show a difference between the 28-day average cube strengths of concrete using CEM I 42.5N and H-CEMENT (34.6 MPa and 29.1 MPa, respectively), while after 90 days the average cube strength stabilized at about 38 MPa. The average values of the static modulus of elasticity when using CEM I 42.5N and H-CEMENT are almost identical, at 32.5 GPa and 32.8 GPa, respectively. Concrete based on steel slag with CEM I 42.5N and H-CEMENT can be included in leachability class IIb. The results of the durability test of concrete based on steel slag in an environment with increased temperature and pressure confirmed H-CEMENT hybrid cement from the company Považská cementárna, a.s., Ladce, as a suitable binder.
2

TP, Krishna Kumar, M. Ramachandran, Chinnasami Sivaji and Chandrasakar Raja. "Financing practices of Micro and Small Entrepreneurs using WSM MCDM Method". 1, no. 4 (01.12.2022): 18–25. http://dx.doi.org/10.46632/jdaai/1/4/3.

Annotation:
A small or micro enterprise is usually a one-person show; even in partnerships held by a firm or corporation, and in small units, operations are mainly carried out by one of the shareholders or directors. In practice, the others are sleeping partners or directors who essentially help with financing. A company is classified as micro if its paid-up capital is less than or equal to 20,000 Birr; similarly, a company is considered small when its paid-up capital is less than or equal to 500,000 Birr. However, the size of jobs or the number of employees in an MSE does not by itself provide this information. The key differences between micro and small businesses are scale and size: a micro business is a type of small business employing fewer than 10 persons, while small businesses employ up to 500 people. Haksever likewise defines a small business as one having fewer than 500 employees and characterizes it by the following: management is independent, and usually the manager is also the owner. The annotation also refers to the Working Stress Method (WSM) of reinforced concrete design, in which steel and concrete are treated as elastic materials and the relationship between loads and stresses is linear; this traditional method of design is well established.
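Since the title names the Weighted Sum Method (WSM) of multi-criteria decision-making, a minimal sketch of how WSM ranks alternatives may help orient readers. The alternatives, criteria, and weights below are hypothetical illustrations, not data from the article.

```python
# Minimal Weighted Sum Method (WSM) sketch with made-up financing options.
import numpy as np

def wsm_scores(matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Normalize each criterion column to [0, 1], then take the weighted sum.
    All criteria are assumed to be benefit-type (higher is better)."""
    normalized = matrix / matrix.max(axis=0)
    return normalized @ weights

# Hypothetical alternatives (rows) scored on three criteria (columns).
alternatives = ["bank loan", "microfinance", "informal credit"]
matrix = np.array([[7.0, 5.0, 3.0],
                   [6.0, 8.0, 4.0],
                   [4.0, 6.0, 9.0]])
weights = np.array([0.5, 0.3, 0.2])  # criterion weights, summing to 1

for name, score in sorted(zip(alternatives, wsm_scores(matrix, weights)),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```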
3

Pruška, Jan, Miroslav Šedivý and Vojtěch Anderle. "Problems with reinforced concrete industrial floors with regard to subsoil swelling". Acta Polytechnica 64, no. 1 (04.03.2024): 34–41. http://dx.doi.org/10.14311/ap.2024.64.0034.

Annotation:
Most of the problems associated with open cracks in reinforced concrete industrial floors do not arise from technological indiscipline during execution or from exceeding the permitted floor load, but from the geotechnical profile beneath the floor. In the presence of swelling soil in the subsoil, floors can be lifted by several centimeters, creating open cracks. This article describes regression relationships for the prediction of swelling pressure and the deformation of reinforced concrete industrial floors based on indirect measurements. These relationships were obtained by evaluating a large database of measurements carried out by the company GeoTec-GS and the Czech Technical University in Prague, using neural networks, multiple correlation, regression analysis, and sensitivity analysis. The article also presents an up-to-date classification of the risk of surface damage to reinforced concrete floors due to swelling of the subsoil, and an example of its application is given.
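The regression relationships themselves are given in the article; purely as an orientation to the approach, the sketch below fits a linear model predicting swelling pressure from indirect measurements. The feature names, data, and coefficients are synthetic placeholders, not values from the GeoTec-GS/CTU database.

```python
# Illustrative sketch: regressing swelling pressure on indirect soil indices.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical indirect measurements: plasticity index [%], moisture content [%].
X = rng.uniform(low=[20.0, 10.0], high=[60.0, 35.0], size=(50, 2))
# Synthetic target: swelling pressure [kPa] with measurement noise.
y = 8.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(0.0, 15.0, size=50)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted swelling pressure [kPa]:", model.predict([[45.0, 20.0]])[0])
```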
4

Drzymała, Tomasz, Bartosz Zegardło and Piotr Tofilo. "Properties of Concrete Containing Recycled Glass Aggregates Produced of Exploded Lighting Materials". Materials 13, no. 1 (04.01.2020): 226. http://dx.doi.org/10.3390/ma13010226.

Annotation:
The paper presents an analysis of the possibilities of using glass waste from recycled lighting materials as aggregate for cement concrete. The research material was obtained from a company that processes electrical waste. Glass from pre-sorted elements was transported to the laboratory and crushed in a drum crusher. The aggregate obtained in this way was subjected to the basic tests that are carried out for aggregates traditionally used in construction. The specific density of the aggregate, bulk density, absorbability, crushing index, grain shape, texture type and aggregate flatness index were examined. In the next stage of the research, concrete mixtures were made in which crushed aggregate from crushed fluorescent lamps was used as a substitute for gravel aggregate. Mixtures containing 10%, 30%, 50% and 100% recycled aggregate were made, along with a comparative mixture containing only sand and gravel aggregate. Basic tests of both the fresh concrete mix and the hardened concrete were carried out for all the concretes made: the consistency of the fresh mix, the air content of the mix, the density of the hardened concrete, absorbability, water permeability under pressure, and the basic compressive and tensile (flexural) strength tests. The test results showed that the greater the addition of recycled glass aggregate, the less advantageous the properties of the resulting concrete. Microscopic analyses carried out to explain this phenomenon indicated an unfavorable influence of the grain shape of the aggregate obtained this way. Despite this, recycling lighting waste in concrete composites is recommended as a pro-ecological measure; however, attention was drawn to the benefit of using only 30% of said waste by mass in relation to the weight of the traditional aggregate. A composite with this quantity of waste retained the characteristics of cement concrete, which qualifies it for use as structural concrete.
5

Hubova, Olga, Lenka Konecna, Marek Macak and Gabriel Ciglan. "Pedestrian Wind Comfort around the Reinforced-Concrete Atypical Structure". Key Engineering Materials 738 (June 2017): 153–63. http://dx.doi.org/10.4028/www.scientific.net/kem.738.153.

Annotation:
High-rise buildings in built-up areas affect the surrounding pedestrian-level wind environment. Pedestrians exposed to unpleasant strong gusts, especially between buildings, at corners, crossing points, and passages, experience these locations as uncomfortable or even dangerous. A series of parametric wind tunnel studies was carried out to investigate the effects of building width, building height, and the gap width between buildings on the pedestrian-level wind environment. In this article, we discuss the built-up area around a designed reinforced concrete structure of atypical shape – TWIN CITY A1 Bratislava (developer company: HB Reavis). We identified the high-wind-speed areas of discomfort under strong wind conditions experimentally in a BLWT wind tunnel and using CFD simulation. It is important to note that the wind effects applied to reinforced-concrete structures, and those occurring around them, are among the most important input parameters in their design; therefore, they should not be neglected. As mentioned above, this paper deals only with the wind effects occurring around the examined structure. The wind effects applied to the structure – the determination of the external wind pressure coefficients – were discussed in [1].
6

Ovchinnikov, E. S., and I. A. Ovchinnikova. "Clad rolled reinforcing bars". Litiyo i Metallurgiya (FOUNDRY PRODUCTION AND METALLURGY), no. 3 (20.10.2020): 56–58. http://dx.doi.org/10.21122/1683-6065-2020-3-56-58.

Annotation:
Premature destruction of reinforced concrete structures exposed to aggressive environmental influences is a serious problem, from both a technical and an economic point of view. Carbon steel reinforcing bar embedded in concrete is usually not subject to corrosion, owing to the formation of a protective iron-oxide film that passivates the steel under the strongly alkaline conditions in the concrete pores. However, this passivity can be disrupted by chlorides penetrating the concrete, or by carbonation reaching the surface of the reinforcing bar. Then corrosion begins. One solution to this problem is the replacement of conventional steel reinforcement with clad steel during construction. Across the closely spaced interface of two solid metals, atoms diffuse into each other at different rates, at high temperature and under a certain pressure. This creates a metallurgical bond between the two solid metals, the integrity or «strength» of which depends on the «purity» of the interface between the two metals and on the atoms that make up this «transition zone» or bond. The article investigates clad rebar to determine the possibility of its production in mill 320 of OJSC «BSW – Management Company of the Holding «BMC». To study the new type of reinforcing bars, special investigations were conducted to determine the chemical composition, microstructure, and mechanical properties. The main advantages of this type of product are set out in the article.
7

Komarova, S., and M. Abramenkov. "Growth of the region's export potential and increasing its competitive positions by creating a cluster in the leading industry". E-Management 3, no. 2 (29.08.2020): 22–31. http://dx.doi.org/10.26425/2658-3445-2020-2-22-31.

Annotation:
It has been proposed to create and form a cluster of the cement industry in the Mogilev region. The creation of structures allowing regional entities to carry out foreign economic activity under more favorable conditions solves many problems of overcoming the crisis situation. During the analysis of the effectiveness of the foreign economic activity of the Mogilev region, one of the problems identified was low export volumes in the districts of the region, which are significantly lower than in Mogilev and the Mogilev district. Increasing the competitiveness of enterprises' products in the cluster is achieved through the following advantages: access to information, complementarity, attracting internal resources for the rapid introduction of innovations, lower research costs, a low degree of risk in conducting research and development, overcoming competitive pressure, low entry barriers, and availability of the necessary resources and personnel. The creation of a cluster in the cement industry of the Mogilev region of the Republic of Belarus, including enterprises from the Krichevsky and Kostyukovichsky districts, will improve the competitive position of the region. The core of the cluster is the holding company Belarusian Cement Company, the largest holding for the production of cement and reinforced concrete blocks and structures. The cluster will include two enterprises from the city of Kostyukovichi: Tsemagro JV of OJSC Belarusian Cement Plant and OJSC Belarusian Cement Plant. It is also necessary to include the largest enterprises from the city of Krichev, namely OJSC Krichev Cement Slate and OJSC Krichevsky Reinforced Concrete Products Plant. This formation will significantly increase the competitiveness of the enterprises themselves and their products, taking full advantage of the cluster structure, enhance the export potential of the districts and the region as a whole, and contribute to the innovative development of the region.
8

Pang, Jian Yong, and Ling Yan Wang. "Test and Application of Polypropylene Concrete Bar Shell Bolting Shotcrete Support System in Soft Rock Drift". Applied Mechanics and Materials 170-173 (May 2012): 274–78. http://dx.doi.org/10.4028/www.scientific.net/amm.170-173.274.

Annotation:
A polypropylene bar shell bolting and shotcrete support system was studied on the basis of an analysis of the mechanics of shell structures. Its technical feature is a special three-dimensional bar-mat reinforcement used instead of a conventional three-dimensional bar-mat. It has very high mechanical performance and was applied in a difficult drift without profiled bars, anchor bolts, grouting, and so on. A model experiment was carried out in conjunction with an industrial test in a broken laneway of Xishan Coal Power Company. The deformation, failure mode, and development behavior of the support system were examined, and the stress and strain distributions as well as the ultimate load were obtained. The application showed that, without reducing the load-bearing capacity of the support, the system reduces steel consumption and concrete rebound compared with heavy metal supports, has good flexibility, serves well as a yielding pressure-retaining structure in soft rock drifts, and has good prospects for widespread application.
9

Jiang, Dan Ning, Hai Wei Sha, Guo Hui Gong, Jing Song, Hai Tao Xu, Chang Cheng Zhou and Kai Shen. "Fundamental Research and Demonstration Project of Evaporation Treatment of Wastewater from FGD in Flue Gas Duct". Advanced Materials Research 864-867 (December 2013): 434–37. http://dx.doi.org/10.4028/www.scientific.net/amr.864-867.434.

Annotation:
Desulfurization wastewater is a kind of intractable wastewater that comes from the wet desulfurization process, and traditional treatment methods have many shortcomings. Evaporation of desulfurization wastewater in the flue gas duct is a new processing technology, and this approach of evaporating desulfurization wastewater by flue gas is feasible. The feasibility of the demonstration project for treating wastewater with boiler flue gas on the 4# boiler of Changshu Power Company Limited was analyzed; the computational fluid dynamics (CFD) software FLUENT was adopted to simulate the pressure field and velocity field in the outlet flue pipes of the air pre-heater. The concrete arrangement and the number of spray lances were determined by calculation. The temperature of the flue gas was tested after the system was put into operation. The results showed that this technology can achieve zero discharge of desulfurization wastewater without impairing the operation of the electrostatic precipitator. The demonstration project is the first of its kind in the domestic power industry; this study provides experience and references for zero-discharge treatment of power plant desulfurization wastewater.
10

Saeb, Sajjad, José A. Capitán and Alfonso Cobo. "The Effect of Electric Arc Furnace Dust (EAFD) on Improving Characteristics of Conventional Concrete". Buildings 13, no. 6 (14.06.2023): 1526. http://dx.doi.org/10.3390/buildings13061526.

Annotation:
The steel industry is one of the key industries and its use is inevitable in many sectors, including construction. In addition to steel, this industry produces massive amounts of electric arc furnace dust (EAFD), which is classified as hazardous waste. Using this material as an admixture can improve the characteristics of concrete, neutralize potential risks, and benefit the circular economy. Considering the differences in EAFD between different steel companies, which in turn are caused by the type and percentage of input materials, the optimal percentage and specific application of EAFD from the steel companies of each region is unique. In the present study, samples from 11 different sources of EAFD in Khuzestan Steel Company (KSC) were collected. They were then classified into three groups depending on size and origin (fine and coarse, both obtained by filtering those particle sizes directly from furnaces, and a third class obtained in the interior of the steelmaking site close to the material handling (MH) belt conveyors) based on their physical and chemical characteristics. To test the effect of EAFD as an admixture, several conventional concrete samples were prepared by replacing 0% (control), 2%, 5% and 8% of the cement with each EAFD group. Finally, the resulting material was characterized through several tests, namely: (i) compressive strength at 7, 28 and 90 days, (ii) depth of water penetration under pressure and (iii) electrical indication of the concrete's ability to resist chloride ion penetration. The results show that replacing 2% of the cement with MH dust caused the largest improvement in 7-day compressive strength but had a negative effect on water penetration, while the coarse dust had a negative effect in almost all tests except the chloride ion penetration test. The best results were obtained by replacing 2% of the cement with fine EAFD, which showed significant improvements in all tests, as well as in the observed trend of increasing compressive strength over time.
11

Moser, Martin A. "Current Controlling Challenges in Small and Medium-sized Companies on the Specific Example of Falling Profits and Accurate Sales Planning in a Family Business". Gazdaság és Társadalom 12, no. 4 (2019): 31–62. http://dx.doi.org/10.21637/gt.2019.4.02.

Annotation:
The globalization of markets is not only a challenge for large companies. Small and medium-sized companies are also becoming involved in new markets, more than is commonly assumed and statistically recorded. They are required to position themselves more strongly on foreign markets due to increased competitive pressure, insufficient potential on the domestic market, and narrower fields of market activity. Small and medium-sized companies are exposed to only minimal obligations under commercial law, owing to their size, their organizational form, and their lack of reference to the capital market. Since the annual balance sheet is often prepared with tax directives in mind, it offers only a limited reflection of the business profitability of the organization. Detailed internal cost accounting exists only in exceptional cases, and cost monitoring is carried out only on the basis of historical data rather than on a planned-cost basis. This paper deals with a concrete practical question from the field of controlling and the analysis of an existing company based on the organization's financial documents. On the basis of the available results and statements, the objective of this paper is to develop recommendations for action for the supervisory board and then to discuss and evaluate them accordingly.
12

Putra, Rendi Harun, Winda Nur Cahyo and Tieling Zhang. "Re-Engineering The Business Process of Slickline and Electric Line Operation". Spektrum Industri 20, no. 1 (20.04.2022): 49–66. http://dx.doi.org/10.12928/si.v20i1.22.

Annotation:
A strategy to save costs in the oil exploration process is discussed in this paper. The focus is on reducing the cost of slickline and electric line operations in order to maintain business continuity. This conclusion was obtained by comparing the costs incurred during previous operations with the costs incurred at present, by comparing the operational results obtained, and by comparing oil and gas prices both before and after 2018. The concrete step to be taken is a lean-management method, since it suppresses costs. It is implemented through Business Process Re-engineering, which applies the appropriate method steps. These steps implement Value Stream Mapping, whose preparation includes an analysis of the slickline and electric line operation work steps: compiling the product family (the operation work steps), a current state map of both operations, and a design future state map for the proposed combined wire line operation, with the 5S process for completeness. The proposed strategy enables cost reduction by combining slickline and electric line operations in one unit or one service company, reducing the number of workers involved, and making work more efficient by reducing the rig-up and rig-down of PCE (Pressure Control Equipment) and reducing repetition in the well intervention process.
13

Monteiro, Nuno, and Pedro Anunciação. "Security Management Policies and Work Accidents". Economics and Culture 19, no. 1 (01.06.2022): 75–86. http://dx.doi.org/10.2478/jec-2022-0007.

Annotation:
Research purpose. The aim of the research is to demonstrate the impact of an absence of organizational policies and strategies in the field of work security on enterprise competitiveness. The movement of loads in the warehouses of industrial and distribution companies is critical in optimizing the times at which products become available to the market. This is an activity that, from a management standpoint, appears simple and not very complex, duly framed by national and international laws. However, when poorly managed, it can generate significant costs that affect competitiveness and even significantly affect the operational functioning of companies. Knowing that the safety of cargo handling by different equipment presupposes rules and safety policies at different levels, the present study aims to demonstrate the economic impacts based on a real situation in one of the largest handling equipment companies in Portugal. Design / Methodology / Approach. Given the nature and objectives of the study, which seeks to demonstrate that compliance or non-compliance with work security rules and policies has benefits or costs and affects the competitiveness of economic organizations, the work was developed in three phases. The first phase focused on direct observation of safety practices in operational activities. After this observation, in a second phase, we collected and analyzed existing data in the company under study on the number of work accidents recorded in the past. In the last phase, we sought to understand and justify the results with the company's top management. This last phase provided an understanding of the administrators' view of the subject and a confrontation with the associated impacts, not only at the financial level but also at the level of the company's operation. Findings. This study made it possible to show the impact (associated costs) on organizational performance, and that this reality, unfortunately, is often not a concern of top management. As a management issue often relegated to middle management, the study demonstrates the frequent failure to comply with safety rules due to the pressure of daily activities, an increase in the number of accidents as the company's personnel grew, and the fact that this situation can be aggravated by the low degree of control exercised by top management over the existing reality. With this concrete study, it was possible to verify the weak relevance of the topic for the company's administration and its acknowledgment of the difficulty of regularizing the existing situation. The need to review management practices and models in this field became evident. Originality / Value / Practical implications. The relevance of this study made it possible to point out to top management that, in terms of competitiveness, not only are the direct costs of the operation relevant: there are also indirect and opportunity costs, such as accidents or unavailability of equipment, which can compromise the competitiveness of the company. This study also had the advantage of providing management with evidence of the existing reality in the company, which tends to be undervalued or to go unnoticed in the company's day-to-day activities. In addition, a proposal for an improvement plan for the company's safety was made available.
14

Buryan, Petr, Zdeněk Bučko and Petr Mika. "A Complex Use of the Materials Extracted from an Open-Cast Lignite Mine". Archives of Mining Sciences 59, no. 4 (01.12.2014): 1107–18. http://dx.doi.org/10.2478/amsc-2014-0077.

Annotation:
The company Sokolovská uhelná was the largest producer of city gas in the Czech Republic. After its substitution by natural gas, the gasification technology became the basis of electricity production in a combined-cycle power plant with a total output of 400 MW. To allow gasification of the liquid by-products formed during coal gasification, an entrained-flow gasifier capable of also processing alternative liquid fuels has been installed. The concentrated waste gas containing sulphur compounds is conducted to desulphurisation, where highly desired, pure 96% H2SO4 is produced. Briquettable brown coal is crushed, milled and dried, and then passed into briquetting presses, where briquettes, used mainly as household fuel, are pressed without binder under a pressure of 175 MPa. Fine brown coal dust (multidust) is commercially used for heat production in pulverized-coal burners. It forms not only during coal drying, after separation in electrostatic separators, but is also obtained by milling dried coal in a vibratory bar mill. Slag from the boilers of the classical power plant, cinder from the generators, and ashes deposited at the dump are dehydrated and used as a quality bedding material in the construction of roadways in the mines of SUAS. Fly ash is used in the building industry for the partial substitution of cement in concrete. Flue gases, after separation of fly ash, are desulphurized by the wet limestone method, whose main product is gypsum, used, among others, in the building industry. Expanded clays from the overburden of coal seams, which are the raw material for the production of "Liapor" artificial aggregate, are used heavily. This artificial aggregate is characterized by outstanding thermal and acoustic insulating properties.
15

Tugrul Tunc, Esra. "Determination of Fracture Toughness Parameters of Concrete Using Compact Pressure Test". Bitlis Eren University Journal of Science and Technology 7, no. 2 (26.12.2017): 85–92. http://dx.doi.org/10.17678/beuscitech.336026.

16

Tugrul Tunc, Esra. "Determination of fracture toughness parameters of concrete using compact pressure test". Bitlis Eren University Journal of Science and Technology 7, no. 2 (26.12.2017): 85–92. http://dx.doi.org/10.17678/beuscitech.372014.

17

Uzoma, Mathew Shadrack. "Applying Deductions from Navier Stokes Equation to Flow Situations in Gas Pipeline Network System". European Journal of Physical Sciences 1, no. 1 (17.09.2019): 10–18. http://dx.doi.org/10.47672/ejps.402.

Annotation:
Navier Stokes equations are the theoretical equations for pressure-flow-temperature problems in gas pipelines. Other well-known gas equations, such as the Weymouth, Panhandle A, and Modified Panhandle B equations, are employed in gas pipeline design and operational procedures at a level of practical relevance. Attaining optimality in the performance of this system requires a concrete understanding of the theoretical and prevailing practical flow conditions. In this regard, the Navier Stokes mass, momentum, and energy equations were worked upon, subject to certain simplifying assumptions, to deduce expressions for flow velocity and throughput in a gas pipeline network system. This work could also bridge the link among theoretical, operational, and optimal levels of performance in gas pipelines. Purpose: The purpose of this research is to build a measure of practical relevance into gas pipeline operational procedures that would ultimately supply the missing links between theoretical flow equations, such as the Navier Stokes equation, and practical gas pipeline flow equations such as the Weymouth, Panhandle A, and Modified Panhandle B equations, among others. Methodology: The approach entails reducing the Navier Stokes mass, momentum, and energy equations to their appropriate forms under applicable practical conditions. In so doing, flow models are deduced that can be worked on computationally, analytically or numerically, to determine line throughput and flow velocity. The reduced forms of the Navier Stokes velocity and throughput equations are applied to gas pipelines operating in Nigerian terrain: those of ElfTotal Nig. Ltd and Shell Petroleum Development Company (SPDC). This enables comparison of these gas pipelines' operational data with the theoretical results of the Navier Stokes equations reduced to their appropriate forms. Findings: A follow-up paper will employ theoretical and numerical discretization computational methods and compare their results to give a clue as to whether these operating gas pipelines are run at an optimal level of performance. Unique contribution to theory, practice and policy: The reduced forms of the Navier Stokes equations applied to a physical operating gas pipeline network system are considered by the researcher to be an endeavor of academic excellence that would foster a clear-cut understanding of theoretical and practical flow situations. It could also open up a measure of understanding toward pushing a flow to attain optimal conditions in practical, real-life flow situations. Operating gas pipelines optimally would reduce the spread of these capital-intensive assets and facilities and, moreover, conserve our limited reserves for foreign exchange.
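For orientation, one of the practical pipeline equations the annotation names is the Weymouth equation. A common field-unit form, as found in standard gas-pipeline references (this exact form is not quoted from the paper), is

\[ Q = 433.5\,\frac{T_b}{P_b}\left[\frac{P_1^2 - P_2^2}{G\,T_f\,L\,Z}\right]^{1/2} D^{8/3} E \]

where Q is the flow rate [scf/d], T_b and P_b are the base temperature [°R] and pressure [psia], P_1 and P_2 the upstream and downstream pressures [psia], G the gas specific gravity, T_f the average flowing temperature [°R], L the line length [mi], Z the average compressibility factor, D the inside diameter [in], and E a dimensionless pipeline efficiency factor.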
18

Palamarchuk, Oxana, and Tetіana Kuznіetsova. "International business strategies to increase the level of competitiveness of enterprises". University Economic Bulletin, no. 54 (27.09.2022): 45–53. http://dx.doi.org/10.31470/2306-546x-2022-54-45-53.

Annotation:
Relevance of the research topic. The development of market relations has generated significant competitive tension in almost all spheres of business and all forms of entrepreneurship, which in turn implies the need to adjust an enterprise's development strategy to purposefully strengthen its competitive position in the internationalization of business, as well as to create and use sustainable strategic advantages in today's extremely competitive marketing environment. Enterprise management, understanding complex modern conditions, should consider the creation of specific strategic advantages and original international strategies, which will help form a powerful potential for the further industrial and commercial development of the enterprise. Formulation of the problem. A large proportion of Ukrainian companies do not have the in-house experience of market behavior needed to actively compete with corporations in a global environment and with more experienced and successful international companies, which have spent decades perfecting their management skills. More and more Ukrainian companies are experiencing increased competitive pressure. More than 25% of Ukrainian enterprises confirmed that the pressure from foreign companies is particularly strong [9]. Therefore, it is important to study, from both theoretical and practical points of view, the possibility of increasing the international competitiveness of the national economy and its subjects by applying international business strategies, even for those enterprises that are not directly involved in foreign economic activity but have to operate in an internationally competitive market. Selection of unexplored parts of the general problem. There is a large number of publications in this direction of research; however, we consider it expedient to deepen the questions of forming the competitive behavior of economic entities, namely the creation of concrete strategic advantages and original international strategies that will help form a firm's powerful potential for further industrial and commercial development under current management conditions. Setting the problem: the purpose is to investigate the directions of increasing the international competitiveness of enterprises and the use of international business strategies in the current environment and the postwar period. The method or methodology of the research. The article applies a combination of the following methods: generalization, descriptive, abstract-logical, and systematization. Presenting the main material. The formation of modern principles of competitive behavior of Ukrainian enterprises requires a comprehensive study of modern international business strategies and the possibility of their application in business practice. This is associated with the need to create a new competitive attitude and a mechanism for the actual implementation of a certain set of tools to ensure that goods, enterprises, and entire production sectors are internationally competitive. In fact, it is about creating a system of strategic and operational, mutually coordinated decisions at the level of a product, enterprise, industry, and country, aimed at improving the competitive status of domestic enterprises in the international market in modern economic conditions and in the post-war reconstruction period. Field of application of results. The results of the study can be used by scientists studying the problem of enterprise competitiveness management. Modern business strategies in Ukraine are too weak compared to those of neighboring countries in Europe; it is necessary to actively strengthen competitive business strategies, taking into account the European perspective of Ukraine. Conclusions according to the article. A well-thought-out and properly formulated business strategy indicates the most effective and productive way to achieve the goal and offers the business a choice of promising and profitable activities with high demand for its products (services), usually unique and with a dominant market position among competitors. Each company develops its own strategy to ensure its stability and strategic vision for the future based on its goal and market characteristics.
19

Nikolenko, E. G., and I. V. Karyakin. ""Birds and Power Lines" in Russia: the USSR Heritage, Modern Achievements, and Issues". Raptors Conservation, no. 2 (2023): 376–81. http://dx.doi.org/10.19074/1814-8654-2023-2-376-381.

Annotation:
We should begin considering the issue of "Birds and power lines" in Russia with the state plan for the electrification of Russia, adopted in the USSR in 1920. The length of overhead power lines (PLs) increased several dozen times over the following ten years, but they stood on wooden supports, which are pretty much safe for birds. The issue arose in the 1970s, with the adoption of the standard for PLs on grounded reinforced concrete supports with pin insulators. The introduction of this standard led to widespread bird mortality from electric shock. Russia has not only inherited thousands of kilometers of bird-hazardous PLs from the USSR, but also the decisions made on the need to use bird protection devices (BPDs). This was facilitated by the publication of "Birds on Wires" in the Komsomolskaya Pravda newspaper in 1980 (Peskov, 1980), after which research was carried out and the first solutions were proposed (Pererva, Blokhin, 1981; Zvonov, Krivonosov, 1981; Grazhdankin, Pererva, 1982). Starting in 1981, installation of prototypes began: BPDs of the anti-perching and distracting type – "whiskers" and "perches" made of conductive materials, and blank insulators (Selenergoproekt, 1985). However, these structures only increased bird mortality rates, and after four years "whiskers" and "perches" were banned from use (Main Scientific and Technical Directorate..., 1989); in subsequent years power engineers dismantled them across most of Russia where they had been installed. The USSR therefore recognized the problem, yet chose the wrong solution. It is important to note that plastic was not so widespread at the time, and cables were covered with rubber and were very expensive. In the wake of Perestroika, the public environmental movement intensified in the USSR: ideological environmental activists came into the updated state system. Thanks to them, laws and other regulatory documents appeared in Russia which made it possible to manage the situation and take measures to protect birds on a national scale. In 2003, the Ministry of Energy of the Russian Federation directly prohibited the use of PL supports with pin insulators in the habitats of large birds in the Rules for the Construction of Electrical Installations. Despite this, reconstructing many thousands of kilometers of PLs looked like a fantastic task, and the accepted requirements were ignored by organizations subordinate to the Ministry of Energy; moreover, bird-hazardous structures continued to be installed. It was only because of many years of environmental activists' work, which included numerous complaints to the prosecutor's office and dozens of lawsuits, that the problem was noticed and recognized, both in state environmental protection agencies and among energy workers. In 2005–2008, serial production of plastic BPDs was established by several companies in Russia. In those years, it was important to convince power engineers to equip PLs with BPDs. The experience of the Volga region (see Bakka, Kiseleva, this collection) formed the basis of our work in Siberia, in the Altai-Sayan region, an enclave of many rare raptor species, and later in Transbaikal. This was the most active period of detailed research on the issue, covering different geographical locations, natural environments, and the states of different raptor populations. Several groups carried out research simultaneously in different regions of Russia (Dwyer et al., 2023). Dozens of publications appeared: in 2011–2012 alone, 19 scientific articles were published showing the results of studying bird mortality on PLs in 13 administrative regions of the Russian Federation. Ornithological conferences were held to develop recommendations and adopt resolutions to address this issue. A well-drafted complaint to the prosecutor's office, accompanied by PL inspection and bird death reports, almost invariably led to the prosecutor's office issuing an order to eliminate violations. In 2011, following our recommendations, the "Interregional Distribution Grid Company of Siberia" (now a branch of the "Rosseti" PJSC) drew up a 10-year plan for equipping PLs with BPDs in seven regions of Siberia. Thanks to the recommendations, equipment was installed in the highest-priority areas relevant to the conservation of rare bird species' nesting groups. In general, in Siberia (Altai-Sayan region and Transbaikal) more than 10,000 km of PLs have been made safe for birds, and rare raptor mortality has since decreased by 60–70% by the most conservative estimates. Some companies immediately took an environmentally responsible position. The mobile operator "MTS" completely reconstructed its PLs in the Altai-Sayan region to make them safe within three years. The state-owned PLs along the Russian border with Mongolia and China were almost completely replaced with underground cables – and their length is several thousand kilometers! However, as the demand for BPDs rose, manufacturers with cheap and completely ineffective devices appeared. Thus, the product range came to include anti-perching BPDs – plastic spikes that broke under eagle talons, and the like. This issue was solved in 2015, when "Rosseti" PJSC adopted a standard developed by the working group of the Nonprofit Partnership "Elektrosetizolyatsiya" under pressure from activists: STO 34.01-2.2-010-2015 "Bird protection devices for overhead power lines and open substations' switchgears. General technical requirements" (as amended in 2017: STO 34.01-2.2-025-2017 "Bird protection devices for overhead power lines and open substations' switchgears"), which set strict requirements for the quality of plastic and the efficiency of the structures. Similar standards have been adopted by other large industrial companies. Since 2014, the state has pressured environmental activists in Russia: writing letters to the prosecutor's office has become dangerous. However, the last 25 years of effort have made the process more or less autonomous: regulations dictate the need for power engineers to use BPDs, forming permanent state demand, and several BPD manufacturing companies create healthy competition for each other. This gives hope that the use of BPDs, self-insulated wire (SIW), poles without traverses and with composite traverses, as well as underground cable lines, will become the norm. Key issues that remain relevant today: 1. The use of high-quality plastic with a long service life (no less than 10 years) under the conditions of Siberia, where daily temperature swings reach 30 degrees, and resilient to the pressure of eagle talons, which crumble caps made of "weak" plastic in one season; 2. The need to introduce new and/or less common technologies in PL structures: composite traverses, traverse-less supports made of treated wood, which lasts several times longer than untreated wood, and suspended insulators with umbrellas on traverses with a distance of one meter or more between current-carrying and grounded elements; 3. BPDs are a temporary measure; a widespread transition to SIW and underground cable is needed; 4. Design companies still offer bird-hazardous designs with pin insulators to power engineers, firstly because regulatory authorities (for example, Rostekhnadzor) are not interested in solving the problem, and secondly because, thanks to corruption, favorable conditions are created for the implementation of dangerous and low-quality projects, including those that violate Russian legislation.
20

Kume, Vasilika, Ana Tomovska Misoska and Predrag Djordjevic. "Riding the waves of change: story of Brunes Ltd". Emerald Emerging Markets Case Studies 6, no. 2 (14.06.2016): 1–25. http://dx.doi.org/10.1108/eemcs-04-2015-0061.

Annotation:
Subject area: Strategic management, HR management, change management, marketing. Study level/applicability: This case is suitable for undergraduate and master's level students in entrepreneurship, change management, marketing, HR management, and strategic management. Case overview: The Brunes Company was founded in 1994 by Gerond Cela and his brother, with the goal of providing quality products for bathrooms in the then-emerging Albanian market. Over the following years, it grew into one of the biggest wholesale and retail chains in Albania, with a huge portfolio of goods for home refurbishing. The beginnings were very humble. Armed only with a high school diploma in textile trading, born and raised in an ex-communist country without a developed entrepreneurship culture, Gerond set off to Italy, a popular destination for young Albanians looking for an opportunity to escape the pitfalls of the post-communist transitional economy. Gerond recognized the huge gap in the market for imported tiles in his home country, so he began importing quality Italian tiles in 1994. Initially, he did wholesale from his truck, due to the lack of retail stores. He focused on increasing customer satisfaction and built the company name as a trusted provider of quality goods. This strategy brought him less profit, but his long-term goal was to build the company name and establish it as a trusted provider of quality goods. In 1999, he bought 18,000 m2 of land (for 50,000 euro) on the Tirana-Durres highway, 7 km from the city centre, which proved extremely worthwhile in the long run because the price of the land skyrocketed up to ten times during the next decade, owing to the economic development of Albania. In 2004, Gerond and his brother extended their business idea and entered the home furniture market. In 2009, the company expanded further into country towns like Lezha, Saranda and Fier. After two decades of establishing the company as a market leader with approximately 30 per cent market share, Brunes Company is at a crossroads. On one hand, it is pressured by very stiff competition from its main competitor Delta Home, which succeeded in taking 10-15 per cent of the market in just one year. On the other hand, the company has been stagnating for some time without a concrete plan to overcome this problem, and without a clear strategy for the future directions of expansion. To diversify the company's portfolio, Gerond built a factory for tiling accessories, which cost 8 million euros and employs about 30 workers. Expected learning outcomes: Specific objectives of the case are as follows: to portray individuals who became successful primarily through their leadership abilities, and to examine how their experiences and values contributed to the success of their business; to illustrate the impact on operations of an increasingly competitive environment and how this environment affects the need for a change in strategy; to identify the challenges of selling luxury goods in a competitive retail environment; to assist students in thinking critically about diversification strategy; to gain an understanding of how to adapt to change; and to discuss the issues that must be changed (culture, people, technology, values and philosophy of leadership, marketing, business model) in order to grow. Supplementary materials: Teaching notes are available for educators only. Please contact your library to gain login details or email support@emeraldinsight.com to request teaching notes. Subject code: CSS 11: Strategy.
21

Macnar, Kazimierz, Andrzej Gonet and Stanisław Stryczek. "Wybrane zagadnienia geotechniczne posadowienia urządzeń wiertniczych" [Selected geotechnical issues of the foundation of drilling rigs]. Nafta-Gaz 77, no. 5 (May 2021): 313–22. http://dx.doi.org/10.18668/ng.2021.05.04.

Annotation:
This article presents selected geotechnical issues arising in the foundation of drilling rigs for geological works included in the Operation Plan of a company performing such works, with regard to the design and construction of their foundations and a yard. In the layout of a drilling rig, at least two main zones can be distinguished, often requiring separate foundations for individual machines: the zone near the borehole, including crane components, the mast, and the drill pipe drive, and the so-called machine hall zone, including drive units and elements of the mud system. A machine foundation is designed to mount a particular type of machine on it in order to transfer to the ground the static and dynamic loads generated during the machine's operation. In particular, the current legislation, technical literature, and standards were reviewed, especially API Recommended Practice 51R and 4G, Working Platforms for Tracked Plant, and the Eurocode 7 PN-EN 1997-2:2009 standard. The values of the safe bearing capacity of some soils and the magnitudes of the pressures generated by the static and dynamic loads of selected drilling rigs are presented, which can be useful for a preliminary assessment of the location of drilling equipment in the field and the selection of the bearing surface and type of foundations. Typical examples are described of founding drilling rigs in various geotechnical conditions on direct foundations, using prefabricated elements such as reinforced concrete road slabs, wooden slabs, and composite slabs based on HDPE plastic, or on indirect ones using micropiles. The essential elements of the process of geotechnical design of drilling rig foundations and their execution are indicated. According to legal regulations, the form of presentation of geotechnical foundation conditions and the scope of necessary tests should depend on assigning the building structure to the proper geotechnical category, which for practical purposes is tabulated in this article. The design and construction of foundations for drilling rigs should ensure, among other things, that their natural vibrations differ sufficiently from those induced by subassemblies of the rig, that vibration amplitudes are smaller than permissible, and that the foundations of individual machines are adequately separated from each other and from the rest of the facilities (yard). Conclusions on the safe foundation of drilling rigs on the ground are presented, including, among others, ground strengthening and the design, execution, and removal of independent structures such as drilling rig foundations.
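To make the bearing-capacity check mentioned above concrete, here is a minimal first-pass sketch. The loads, slab dimensions, and soil capacity are assumed illustration values, not figures from the article; an actual design must follow the cited standards (API RP 51R, Eurocode 7).

```python
# First-pass check of rig foundation contact pressure vs. safe bearing capacity.
rig_weight_kn = 1200.0         # assumed static load on the foundation [kN]
dynamic_factor = 1.25          # assumed allowance for dynamic loads
slab_area_m2 = 3.0 * 8.0       # e.g. a 3 m x 8 m reinforced concrete road slab

contact_pressure_kpa = rig_weight_kn * dynamic_factor / slab_area_m2
safe_bearing_capacity_kpa = 150.0  # assumed value for a medium-dense soil

print(f"contact pressure: {contact_pressure_kpa:.1f} kPa")
print("OK" if contact_pressure_kpa <= safe_bearing_capacity_kpa
      else "increase bearing area or improve the ground")
```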
22

Hayashi, Haruo. "Long-term Recovery from Recent Disasters in Japan and the United States". Journal of Disaster Research 2, no. 6 (01.12.2007): 413–18. http://dx.doi.org/10.20965/jdr.2007.p0413.

Annotation:
In this issue of the Journal of Disaster Research, we introduce nine papers on societal responses to recent catastrophic disasters, with a special focus on long-term recovery processes in Japan and the United States. As disaster impacts increase, we also find that recovery takes longer and the processes for recovery become more complicated. On January 17th of 1995, a magnitude 7.2 earthquake hit the Hanshin and Awaji regions of Japan, resulting in the largest disaster in Japan in 50 years. In this disaster, which we hereafter call the Kobe earthquake, over 6,000 people were killed and the damage and losses totaled more than 100 billion US dollars. The long-term recovery from the Kobe earthquake disaster took more than ten years to complete. One of the most important responsibilities of disaster researchers has been to scientifically monitor and record the long-term recovery process following this unprecedented disaster and discern the lessons that can be applied to future disasters. The first seven papers in this issue present some of the key lessons our research team learned from studying the long-term recovery following the Kobe earthquake disaster. We have two additional papers that deal with two recent disasters in the United States – the terrorist attacks on the World Trade Center in New York on September 11, 2001, and the devastation of New Orleans by the 2005 Hurricane Katrina and subsequent levee failures. These disasters have raised a number of new research questions about long-term recovery that US researchers are studying because of the unprecedented size and nature of these disasters' impacts. Mr. Mammen's paper reviews the long-term recovery processes observed at and around the World Trade Center site over the last six years. Ms. Johnson's paper provides a detailed account of the protracted reconstruction planning efforts in the city of New Orleans to illustrate a set of sufficient and necessary conditions for successful recovery. All nine papers in this issue share a theoretical framework for long-term recovery processes, which we developed based first upon the lessons learned from the Kobe earthquake and later expanded through observations made following other recent disasters around the world. The following sections provide a brief description of each paper as an introduction to this special issue. 1. The Need for Multiple Recovery Goals. After the 1995 Kobe earthquake, the long-term recovery process began with the formulation of disaster recovery plans by the City of Kobe – the most severely impacted municipality – and an overarching plan by Hyogo Prefecture, which coordinated 20 impacted municipalities; this planning effort took six months. Before the Kobe earthquake, as indicated in Mr. Maki's paper in this issue, Japanese theories about, and approaches to, recovery focused mainly on physical recovery, particularly: the redevelopment plans for destroyed areas; the location and standards for housing and building reconstruction; and the repair and rehabilitation of utility systems. But the lingering problems of some of the recent catastrophes in Japan and elsewhere indicate that there are multiple dimensions of recovery that must be considered. We propose that two other key dimensions are economic recovery and life recovery. The goal of economic recovery is the revitalization of the local disaster-impacted economy, including both major industries and small businesses. The goal of life recovery is the restoration of the livelihoods of disaster victims.
The recovery plans formulated following the 1995 Kobe earthquake, including the City of Kobe's and Hyogo Prefecture's plans, all stressed these two dimensions in addition to physical recovery. The basic structure of both plans is summarized in Fig. 1. Each plan has three elements that work simultaneously. The first and most basic element of recovery is the restoration of damaged infrastructure, which helps both physical recovery and economic recovery. Once homes and workplaces are recovered, life recovery of the impacted people can be achieved as the final goal of recovery. Figure 2 provides a "recovery report card" of the progress made by 2006 – 11 years into Kobe's recovery. Infrastructure was restored in two years, probably the fastest infrastructure restoration ever after such a major disaster; it astonished the world. Within five years, more than 140,000 housing units were constructed using a variety of financial means and ownership patterns, exceeding the number of demolished housing units. Governments at all levels – municipal, prefectural, and national – provided affordable public rental apartments. Private developers, both local and national, also built condominiums and apartments. Disaster victims themselves also invested heavily to reconstruct their homes. Eleven major redevelopment projects were undertaken, and all were completed within 10 years. In sum, the physical recovery following the 1995 Kobe earthquake was extensive and has been viewed as a major success. In contrast, economic recovery and life recovery were still underway more than 13 years later. Before the Kobe earthquake, Japan's policy approaches to recovery assumed that economic recovery and life recovery would be achieved by infusing ample amounts of public funding for physical recovery into the disaster area. Even though the City of Kobe's and Hyogo Prefecture's recovery plans set economic recovery and life recovery as key goals, there was no clear policy guidance to accomplish them. Without a clear articulation of the desired end state, economic recovery programs for both large and small businesses were ill-timed and ill-matched to the needs of businesses trying to recover amidst a prolonged slump in the overall Japanese economy that began in 1997. "Life recovery" programs implemented as part of Kobe's recovery were essentially social welfare programs for low-income and/or senior citizens. 2. Requirements for Successful Physical Recovery. Why was the physical recovery following the 1995 Kobe earthquake so successful in terms of infrastructure restoration, the replacement of damaged housing units, and the completion of urban redevelopment projects? There are at least three key success factors that can be applied to other disaster recovery efforts: 1) citizen participation in recovery planning efforts, 2) strong local leadership, and 3) the establishment of numerical targets for recovery. Citizen participation. As pointed out in the three papers on recovery planning processes by Mr. Maki, Mr. Mammen, and Ms. Johnson, citizen participation is one of the indispensable factors for successful recovery plans. Thousands of citizens participated in planning workshops organized by AmericaSpeaks as part of both the World Trade Center and City of New Orleans recovery planning efforts.
Although no such workshops were held as part of the City of Kobe’s recovery planning process, citizen participation had been part of the City of Kobe’s general plan update that had occurred shortly before the earthquake. The City of Kobe’s recovery plan is, in large part, an adaptation of the 1995-2005 general plan. On January 13, 1995, the City of Kobe formally approved its new 1995-2005 general plan, which had been developed over the course of three years with full citizen participation. City officials responsible for drafting the City of Kobe’s recovery plan later admitted that they were able to prepare the city’s recovery plan in six months because they could build on the preceding three years of citizen-participation planning for the new general plan. Drawing on this lesson, Ojiya City compiled its recovery plan from the recommendations obtained in a series of five stakeholder workshops after the 2004 Niigata Chuetsu earthquake.

Fig. 1. Basic structure of recovery plans from the 1995 Kobe earthquake.

Fig. 2. “Disaster recovery report card” of the progress made by 2006.

Strong leadership

In the aftermath of the Kobe earthquake, local leadership had a defining role in the recovery process. Kobe’s former mayor, Mr. Yukitoshi Sasayama, had originally been hired into the Kobe city government as an urban planner to rebuild Kobe following World War II. He knew the city intimately. When he saw the damage in one area on his way to City Hall right after the earthquake, he knew what levels of damage to expect in other parts of the city. It was he who called for the two-month moratorium on rebuilding in Kobe city on the day of the earthquake. The moratorium provided time for the city to formulate a vision and policies to guide the various levels of government, private investors, and residents in rebuilding. It was quite an unpopular policy when Mayor Sasayama announced it. Citizens expected the city to be focusing on shelters and mass care, not a ban on reconstruction. Based on his experience in rebuilding Kobe following WWII, he was determined not to allow haphazard reconstruction in the city. It took several years before Kobe’s citizens appreciated the moratorium.

Numerical targets

Former Governor Mr. Toshitami Kaihara provided key numerical targets for recovery, which were announced in the prefectural and municipal recovery plans. They were: 1) Hyogo Prefecture would rebuild all the damaged housing units in three years, 2) all the temporary housing would be removed within five years, and 3) physical recovery would be completed in ten years. All of these numerical targets were achieved. Having numerical targets was critical to directing and motivating all the stakeholders, including the national government and its investment, and it proved to be the foundation of Japan’s fundamental approach to recovery following the 1995 earthquake.

3. Economic Recovery as the Prime Goal of Disaster Recovery

In Japan, it is the responsibility of the national government to supply the financial support to restore damaged infrastructure and public facilities in the impacted area as soon as possible. The long-term recovery following the Kobe earthquake was the first time in Japan’s modern history that a major rebuilding effort occurred during a period without strong national economic growth. In contrast, between 1945 and 1990, Japan enjoyed a high level of national economic growth, which helped facilitate the recoveries following WWII and other large fires.
In the first year after the Kobe earthquake, Japan’s national government invested more than US$ 80 billion in recovery. These funds went mainly towards the repair and reconstruction of infrastructure and public facilities. Looking back now, we can see that these investments nearly crushed the local economy. Too much money flowed into the local economy over too short a period of time, and it did not have the “trickle-down” effect that might have been intended. To accomplish the numerical targets for physical recovery, the national government awarded contracts to large companies from Osaka and Tokyo. But these large out-of-town contractors tended to have their own labor and supply chains already intact and did not use local resources and labor, as might have been expected. Essentially, ten years’ worth of housing supply was completed in less than three years, which led to a significant local economic slump. Large amounts of public investment for recovery are not necessarily a panacea for local businesses and local economic recovery, as shown by the following two examples from the Kobe earthquake. A significant national investment was made to rebuild the Port of Kobe to a higher seismic standard, but its foreign export and import trade never recovered to pre-disaster levels. While the Kobe Port was out of operation, both the Yokohama Port and the Osaka Port increased their business, even though many economists had initially predicted that the Kaohsiung Port in Chinese Taipei or the Pusan Port in Korea would capture this business. The business stayed at all of these ports even after the reopening of the Kobe Port. Similarly, the Hanshin Railway was severely damaged, and it took half a year to resume operation, but it never regained its pre-disaster ridership. In this case, two other local railway services, the JR and Hankyu lines, maintained their increased ridership even after the Hanshin Railway resumed operation. As illustrated by these examples, pre-disaster customers who relied on previous economic output could not necessarily afford to wait for local industries to recover and may have had to take their business elsewhere. Our research suggests that the significant recovery investment made by Japan’s national government may have been a disincentive for new economic development in the impacted area. Government may have been the only significant financial risk-taker in the impacted area during the national economic slow-down, but its focus was on restoring what had been lost rather than promoting new or emerging economic development. Thus, there may have been a missed opportunity to provide incentives for, or put pressure on, major businesses and industries to develop new businesses and attract new customers in return for the public investment. The significant recovery investment by Japan’s national government may also have created an over-reliance of individuals on public spending and government support. As indicated in Ms. Karatani’s paper, the individual savings of Kobe’s residents have continued to rise since the earthquake, and the number of individuals on social welfare has decreased below pre-disaster levels.
Based on our research on economic recovery from the Kobe earthquake, at least two lessons emerge: 1) successful economic recovery requires coordination among all three recovery goals – economic, physical, and life recovery; and 2) “recovery indices” are needed to better chart recovery progress in real time and to help ensure that recovery investments are being used effectively.

Economic recovery as the prime goal of recovery

Physical recovery, especially the restoration of infrastructure and public facilities, may be the most direct and socially accepted way of channeling outside financial assistance into an impacted area. However, lessons learned from the Kobe earthquake suggest that the sheer amount of such assistance may not be as effective as it should be. Thus, as shown in Fig. 3, economic recovery should be the top-priority goal among the three and serve as a guiding force for physical recovery and life recovery. Physical recovery can be a powerful facilitator of post-disaster economic development by upgrading social infrastructure and public facilities in line with economic recovery plans. In this way, it is possible to turn a disaster into an opportunity for future sustainable development. Life recovery may also be achieved through a healthy economic recovery that increases tax revenue in the impacted area. In order to achieve this coordination among all three recovery goals, municipalities in the impacted areas should have access to flexible forms of post-disaster financing. The community development block grant program, which has been used after several large disasters in the United States, provides impacted municipalities with a more flexible form of funding and the ability to better determine what to do and when. The participation of key stakeholders is also an indispensable element of success that enables block grant programs to transform local needs into concrete businesses. In sum, an effective economic recovery combines good coordination of national support to restore infrastructure and public facilities with local initiatives that promote community recovery.

Developing recovery indices

Long-term recovery takes time. As Mr. Tatsuki’s paper explains, periodic social survey data indicate that it took ten years before the initial impacts of the Kobe earthquake no longer affected the well-being of disaster victims and the recovery was complete. In order to manage this long-term recovery process effectively, it is important to have indices that visualize the recovery process. In this issue, three papers by Mr. Takashima, Ms. Karatani, and Mr. Kimura define three different kinds of recovery indices that can be used to continually monitor the progress of the recovery. Mr. Takashima focuses on electric power consumption in the impacted area as an index of impact and recovery. Chronological change in electric power consumption can be obtained from the monthly reports of power company branches. Daily estimates can also be made by tracking changes in city lights using the DMSP satellites. Changes in city lights can be a very useful recovery measure, especially in the early stages, since they can be updated daily for anywhere in the world. Ms. Karatani focuses on the chronological patterns of monthly macro-statistics that prefecture and city governments collect as part of their routine monitoring of services and operations.
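To make the notion of a recovery index concrete, here is a minimal sketch of the kind of calculation these indices imply: a monthly indicator (electric power consumption, for example) expressed as a ratio to its pre-disaster baseline. It is purely illustrative; the papers in this issue do not publish code, and the function and variable names are hypothetical.

```python
# Illustrative sketch: a simple recovery index from monthly observations of
# an indicator such as electric power consumption. Post-disaster values are
# normalized against the average of the months preceding the disaster.

def recovery_index(monthly_values, disaster_month, baseline_months=12):
    """Return each post-disaster month's value as a fraction of the
    pre-disaster baseline average."""
    baseline = monthly_values[disaster_month - baseline_months:disaster_month]
    baseline_avg = sum(baseline) / len(baseline)
    return [v / baseline_avg for v in monthly_values[disaster_month:]]

# Example: an indicator that drops sharply after the event, then rebounds
# to previous levels (Ms. Karatani's second pattern, described below).
power = [100, 101, 99, 100, 102, 100, 98, 100, 101, 100, 99, 100,  # baseline year
         60, 70, 78, 85, 90, 94, 97, 99, 100, 101, 100, 100]       # post-disaster
index = recovery_index(power, disaster_month=12)
print(round(index[0], 2), round(index[-1], 2))  # 0.6 just after the event, 1.0 once recovered
```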
For researchers, it is extremely costly and virtually impossible to launch post-disaster projects that collect recovery data continuously for ten years. It is more practical for researchers to utilize data that is already being collected by local governments or other agencies and to use this data to create disaster impact and recovery indices. Ms. Karatani found three basic patterns of disaster impact and recovery in the local government data that she studied: 1) some activities increased soon after the disaster event and then slumped, such as housing construction; 2) some activities dropped sharply for a period of time after the disaster and then rebounded to previous levels, such as grocery consumption; and 3) some activities dropped sharply for a while and never returned to previous levels, such as the Kobe Port and the Hanshin Railway. Mr. Kimura focuses on the psychology of disaster victims. He developed a “recovery and reconstruction calendar” that clarifies the process disaster victims undergo in rebuilding their shattered lives. His work is based on the results of random surveys. Despite differences in disaster size and locality, survey data from the 1995 Kobe earthquake and the 2004 Niigata-ken Chuetsu earthquake indicate that the recovery and reconstruction calendar is highly reliable and stable in clarifying the recovery and reconstruction process.

Fig. 3. Integrated plan of disaster recovery.

4. Life Recovery as the Ultimate Goal of Disaster Recovery

Life recovery starts with the identification of the disaster victims. In Japan, local governments in the impacted area issue a “damage certificate” to disaster victims by household, recording the extent of each household’s housing damage. After the Kobe earthquake, a total of 500,000 certificates were issued. These certificates, in turn, were used by both public and private organizations to determine victims’ eligibility for individual assistance programs. However, about 30% of the victims who received certificates after the Kobe earthquake were dissatisfied with the results of the assessment, which caused long and severe disputes for more than three years. Based on the lessons learned from the Kobe earthquake, Mr. Horie’s paper presents (1) a standardized procedure for building damage assessment and (2) an inspector training system. This system has been adopted as the official building damage assessment system for issuing damage certificates to victims of the 2004 Niigata-ken Chuetsu earthquake, the 2007 Noto-Peninsula earthquake, and the 2007 Niigata-ken Chuetsu-oki earthquake. Personal and family recovery, which we term life recovery, was one of the explicit goals of the recovery plan from the Kobe earthquake, but it was unclear in both recovery theory and practice how this would be measured and accomplished. Now, after studying the recovery in Kobe and other regions, Ms. Tamura’s paper proposes that there are seven elements that define the meaning of life recovery for disaster victims. She recently tested this model in a workshop with Kobe disaster victims. The seven elements and the victims’ rankings are shown in Fig. 4. Regaining housing and restoring social networks were, by far, the top recovery indicators for victims. Restoration of neighborhood character ranked third. Demographic shifts and the redevelopment plans implemented following the Kobe earthquake forced significant neighborhood changes upon many victims.
Next in line were: having a sense of being better prepared and less vulnerable to future disasters; regaining physical and mental health; and restoration of income, job, and the economy. The provision of government assistance also gave victims a sense of life recovery. Mr. Tatsuki’s paper summarizes the results of four random-sample surveys of residents within the most severely impacted areas of Hyogo Prefecture. These surveys have been conducted every two years since 1999. Based on the results of the survey data from 1999, 2001, 2003, and 2005, it is our conclusion that life recovery took ten years for victims in the areas significantly impacted by the Kobe earthquake. A comparison of the two structural equation models of disaster recovery (from 2003 and 2005) in Fig. 5 shows that damage caused by the Kobe earthquake was no longer a determinant of life recovery in the 2005 model, whereas it was still one of the major determinants in the 2003 model, as it had been in 1999 and 2001. This is the first time in the history of disaster research that an entire recovery process has been scientifically described. It can be utilized as a resource and provide benchmarks for monitoring the recovery from future disasters.

Fig. 4. Ethnographical meaning of “life recovery” obtained from the 5th-year review of the Kobe earthquake by the City of Kobe.

Fig. 5. Life recovery models of 2003 and 2005.

5. The Need for an Integrated Recovery Plan

The recovery lessons from Kobe and other regions suggest that we need more integrated recovery plans that use physical recovery as a tool for economic recovery, which in turn helps disaster victims. Furthermore, we believe that economic recovery should be the top priority for recovery, and physical recovery should be regarded as a tool for stimulating economic recovery and upgrading social infrastructure (as shown in Fig. 6). With this approach, disaster recovery can help build the foundation for a long-lasting and sustainable community. Figure 6 proposes a more detailed model for a more holistic recovery process. The ultimate goal of any recovery process should be achieving life recovery for all disaster victims. We believe that to get there, both direct and indirect approaches must be taken. Direct approaches include the provision of funds and goods for victims, for physical and mental health care, and for housing reconstruction. Indirect approaches to life recovery are those which facilitate economic recovery, which itself has both direct and indirect approaches. Direct approaches to economic recovery include subsidies, loans, and tax exemptions. Indirect approaches to economic recovery include, most significantly, direct projects to restore infrastructure and public buildings. More subtle approaches include setting new regulations or deregulating, providing technical support, and creating new businesses. A holistic recovery process needs to strategically combine all of these approaches, and there must be collaborative implementation by all the key stakeholders, including local governments, non-profit and non-governmental organizations (NPOs and NGOs), community-based organizations (CBOs), and the private sector. Therefore, community and stakeholder participation in the planning process is essential to achieve buy-in for the vision and desired outcomes of the recovery plan. Securing the required financial resources is also critical to successful implementation.
In thinking about stakeholders, it is important to differentiate between supporting entities and operating agencies. Supporting entities are the organizations that supply the necessary funding for recovery. Japan’s national government and the U.S. federal government were the prime supporting entities in the recoveries from the 1995 Kobe earthquake and the 2001 World Trade Center attack, respectively. In Taiwan, a major Buddhist organization and the national government were the main supporting entities in the recovery from the 1999 Chi-Chi earthquake. Operating agencies are the organizations that implement the various recovery measures. In Japan, local governments in the impacted area are the operating agencies, while the national government is a supporting entity. In the United States, community development block grants provide an opportunity for many operating agencies to implement various recovery measures. As Mr. Mammen’s paper describes, many NPOs, NGOs, and CBOs, in addition to local governments, have had major roles in implementing various kinds of programs funded by block grants as part of the World Trade Center recovery. No single organization can provide effective help for all kinds of disaster victims, individually or collectively. The needs of disaster victims may conflict with one another because of their diversity, but these divergent needs can be successfully met by a corresponding diversity of operating agencies with responsibility for implementing recovery measures. In a similar context, block grants made to individual households, such as microfinance, have been a vital recovery mechanism for victims in Thailand who suffered from the 2004 Sumatra earthquake and tsunami disaster. Both disaster victims and government officers at all levels strongly supported microfinance, so that disaster victims themselves would become operating agencies for recovery. Empowering individuals in sustainable life recovery is indeed the ultimate goal of recovery.

Fig. 6. A holistic recovery policy model.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Shepelevich, M., und A. Puzan. „STUDYING CRACK RESISTANCE OF REINFORCED-CONCRETE − FIBRE-GLASS COMPOSITE PRESSURE PIPES FOR MICROTUNNELING“. Problemy sovremennogo betona i zhelezobetona, Nr. 11 (23.12.2019). http://dx.doi.org/10.35579/2076-6033-2019-11-06.

Der volle Inhalt der Quelle
Annotation:
The results are given for experimental studies of the crack resistance of reinforced-concrete – fibre-glass composite pressure pipes under a three-way load and internal hydrostatic pressure. The pipes are designed for the construction of pressure pipelines using the trenchless laying (microtunneling) method. The BelNIIS Institute Republican Unitary Enterprise (RUE) developed the design solutions and engineering drawings of the pipes under an order placed by the Steklokompozit Industrial Company, Russia. The pipes are of an integrated design: an external thick-walled reinforced-concrete pipe (the cage) envelops an insert pipe made of fibre-glass composite. Full-scale pipe specimens with an inner diameter of 800 mm and an effective length of 3000 mm were used for the studies. The total wall thickness was 138 mm, including a fibre-glass composite insert 10 mm thick. The pipes were made in a vertical position using the vibroforming method. During the placing of the concrete mixture, the fibre-glass composite insert pipe, joined with a fibre-glass composite sleeve, was used as permanent formwork. The experimental studies were carried out in two stages: (I) two pipe specimens were tested under internal hydrostatic pressure; (II) two pipe specimens, including one that had passed the hydraulic test, were tested under a three-way load (according to GOST 6482). During the tests, the formation and opening of cracks in the longitudinal cross-sections of the pipe wall were recorded. It was found that the strength characteristics of the integrated reinforced-concrete composite pipes provide sufficient load-carrying capacity against both the internal hydraulic pressure and the external (three-way) load. Under the reference loads for crack resistance, no cracks formed in the longitudinal cross-sections of the pipes; under the reference loads for strength, the crack opening width never exceeded 0.05 mm. Moreover, under both the internal pressure and the three-way load, the crack opening widths in the longitudinal cross-sections of the reinforced-concrete cage were significantly (1.5 to 2.5 times) smaller than the corresponding values obtained from pipe design calculations carried out in accordance with the procedures in force.
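The pass/fail criteria reported above reduce to two simple checks: no cracks may form under the reference load for crack resistance, and crack opening widths may not exceed 0.05 mm under the reference load for strength. The sketch below encodes these checks; it is an illustration under those stated assumptions, not code from the study, and all names are hypothetical.

```python
# Illustrative check of the crack-resistance acceptance criteria described
# above. Measured crack opening widths are given in millimetres.

ALLOWED_CRACK_WIDTH_MM = 0.05  # limit under the strength reference load

def passes_crack_resistance(crack_widths_mm):
    """Under the crack-resistance reference load, no cracks may form."""
    return len(crack_widths_mm) == 0

def passes_strength(crack_widths_mm):
    """Under the strength reference load, all openings must stay within the limit."""
    return all(w <= ALLOWED_CRACK_WIDTH_MM for w in crack_widths_mm)

# Example: one specimen, consistent with the results reported above.
print(passes_crack_resistance([]))          # True: no cracks formed
print(passes_strength([0.03, 0.05, 0.02]))  # True: within the 0.05 mm limit
```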
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Cimprich, Alexander, Steven B. Young, Dieuwertje Schrijvers, Anthony Y. Ku, Christian Hagelüken, Patrice Christmann, Roderick Eggert et al. „The role of industrial actors in the circular economy for critical raw materials: a framework with case studies across a range of industries“. Mineral Economics, 21.02.2022. http://dx.doi.org/10.1007/s13563-022-00304-8.

Der volle Inhalt der Quelle
Annotation:
In this article, we explore concrete examples of circularity strategies for critical raw materials (CRMs) in commercial settings. We propose a company-level framework for systematically evaluating circularity strategies (e.g., material recycling, product reuse, and product or component lifetime extension) in specific applications of CRMs from the perspectives of specific industrial actors. This framework is applied in qualitative analyses—informed by relevant literature and expert consultation—of five case studies across a range of industries: (1) rhenium in high-pressure turbine components, (2) platinum group metals in industrial catalysts for chemical processing and oil refining, (3) rare earth permanent magnets in computer hard disk drives, (4) various CRMs in consumer electronics, and (5) helium in magnetic resonance imaging (MRI) machines. Drawing from these case studies, three broader observations can be made about company circularity strategies for CRMs. Firstly, there are multiple, partly competing motivations that influence the adoption of circularity strategies, including cost savings, supply security, and external stakeholder pressure. Secondly, business models and value-chain structure play a major role in the implementation of circularity strategies; business-to-business models appear to be more conducive to circularity than business-to-consumer models. Finally, it is important to distinguish between closed-loop circularity, in which material flows are contained within the “focal” actor’s system boundary, and open-loop circularity, in which material flows cross the system boundary, as the latter has limited potential for mitigating material criticality from the perspective of the focal actor.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

„Mexico“. IMF Staff Country Reports 19, Nr. 336 (05.11.2019). http://dx.doi.org/10.5089/9781513519005.002.

Der volle Inhalt der Quelle
Annotation:
The authorities are committed to very strong policies and policy frameworks. However, policy uncertainty and new priorities have created challenges and have clouded the growth outlook. Large-scale investment projects and social transfers—and a commitment to not raise taxes until after 2021—are yet to be reconciled with the administration’s fiscal targets and the objective of putting public debt on a downward path. Meanwhile, drastic budget cuts for some institutions have raised concern about their impact on human capital. A state-centered energy policy that limits the role of the private sector—putting the onus of stabilizing Pemex (the state-owned oil and gas company) squarely on the government—has imposed further pressure on the budget and has weakened prospects for oil production. Promises to tackle some of Mexico’s salient structural challenges—including corruption, informality and crime—have yet to be followed by concrete policy action.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Kim, Sung-Kee, und Peer Zumbansen. „The Risk of Reverse Convertible Bonds: German Capital Market Law and Investor Protection“. German Law Journal 3, Nr. 12 (Dezember 2002). http://dx.doi.org/10.1017/s2071832200015686.

Der volle Inhalt der Quelle
Annotation:
In times of a continuously expanding proliferation of investment and financing possibilities in the hands of banks, investment funds and individual capital investors, particular attention should be paid to the effects that new financial instruments are likely to have not only on concrete financing and investing modes but also on the further development of legal rules in this field. As the German capital market has been considered unable - at least until the widely marketed Deutsche Telekom IPO - to get rid of its persisting prejudice of being structurally lagging behind other countries’ systems, the legal treatment of emerging financial instruments deserves the greatest attention. The rocket science of new financial instruments challenges law’s aim to rightly assess the real quality of these instruments and to strike an adequate balance between the interests involved against a national policy background and EU demands. While the past few years have been a time of great legislative activity in the field of company and capital market law in Germany, only a closer look at court decisions reveals the true pressure that a fast-moving capital market exerts on traditional legal perceptions. The so-called Aktienanleihe-Decision by the Federal Court of Justice [FCJ] (Bundesgerichtshof - BGH) of 12 March 2002 marks an important step in the ongoing process of Germany’s developing capital market law.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Qasim, Tabarek J., Ahmed A. Moosa und Zainab H. Mahdi. „Evaluation of Microcapsules Filled with Nano Magnesium Oxide for Self-Healing Concrete“. Current Nanomaterials 09 (11.07.2024). http://dx.doi.org/10.2174/0124054615317351240620100017.

Der volle Inhalt der Quelle
Annotation:
Introduction: This study investigates the effect of the addition of nano MgO on the self-healing behavior of concrete.

Method: The nano MgO was added to the concrete mixtures at ratios of 0.3, 0.4, 0.5, and 0.6% by weight of cement. The compressive strength, density, and water absorption were then measured at ages of 7, 14, and 28 days.

Result: The results showed that the best compressive strength and density, and the lowest water absorption, were obtained by mixing 0.4% MgO by weight of cement. SEM and EDX were used to characterize the concrete samples. SEM examination of the concrete samples with 0.4% MgO by weight of cement showed a dense microstructure with no pores and the formation of C-S-H. Microcapsules containing cement with 0.4% nano MgO were prepared using a fluidized bed coating process (pelletization method). The microcapsules were then added to concrete at 7, 10, and 13% of the cement’s weight. Compressive strength, water absorption, density, flexural strength, and splitting tensile strength tests were performed to study the concrete properties. According to the results, the MgO microcapsules proved to be a useful material for the self-healing of cracks.

Fabrication of microcapsules: 1) Polystyrene was dissolved in toluene at a ratio of 1:10 using a magnetic stirrer for 30 minutes at 70 °C and a speed of 80 rpm. 2) 100 g of cement was mixed with 0.4 g of nano MgO using a hand mixer for 10 minutes. 3) To perform the pelletizing process (fluidized bed coating), the polystyrene-toluene solution was placed in a spray gun fixed in the device shown in Figure 1 and pumped at a pressure of 7-8 bar, while the air jet installed at the bottom of the device was operated to fluidize the mixture of cement and nanomaterial. The polystyrene solution was sprayed in fixed 3-second injections to obtain the best powder coverage, forming microcapsules with a homogeneous powder core covered with polystyrene. 4) The microcapsules collected at the top of the device were extracted. 5) The microcapsules were dried in a drying oven for two hours at 60 °C.

Casting and curing of test specimens: The superplasticizer was added to the water and the mixture was mixed for 10 seconds; the nano MgO was then added and the mixture placed in a sonicator (Powersonic 410, Hwashin Technology Company, Korea) for 30 minutes. The mixture was then placed in a mixing bowl, and cement was added gradually with continuous mixing using a homemade electric mixer; sand was added gradually, over a mixing period of 4 minutes. Finally, the microcapsules were added gradually with continuous mixing for 2 minutes. The concrete mixture was poured into three types of molds, with 9 samples for each test: cubic molds (50 × 50 × 50 mm³) for the compressive strength test, cylindrical molds (150 mm long, 50 mm in diameter) for the splitting tensile strength test, and prismatic molds (40 × 40 × 160 mm³) for the flexural strength test; the molds had been prepared and oiled in advance. After pouring, a polyethylene cover was placed over the molds for 24 hours to prevent the mixing water from evaporating. The molds were then opened and the specimens placed in curing basins until the time of testing. Figure 3 shows the mixing and casting.

Conclusion: The microcapsules also improved the concrete’s compressive strength, water absorption, density, flexural strength, and splitting tensile strength. An addition of 10% by weight of cement was selected as the best dosage for enhancing the characteristics required in construction.
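As a quick aid to the dosages reported above, the following sketch computes batch masses from the study’s stated ratios (nano MgO at 0.3 to 0.6% and microcapsules at 7, 10, or 13% by weight of cement). It is illustrative arithmetic only, not code from the paper, and the function name is hypothetical.

```python
# Illustrative batch calculator for the dosages reported in the abstract.

def batch_masses(cement_g, mgo_pct=0.4, capsule_pct=10.0):
    """Return nano-MgO and microcapsule masses (g) for a given cement mass (g)."""
    return {
        "cement_g": cement_g,
        "nano_mgo_g": cement_g * mgo_pct / 100.0,
        "microcapsules_g": cement_g * capsule_pct / 100.0,
    }

# Example: the optimum mix found in the study (0.4% nano MgO, 10% capsules)
# for a 100 g cement batch, matching the 100 g : 0.4 g ratio in step 2 above.
print(batch_masses(100.0))
# {'cement_g': 100.0, 'nano_mgo_g': 0.4, 'microcapsules_g': 10.0}
```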
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Handoko, Widhi, und Purnawan D. Negara. „Rebuilding Land Policy as a Base of Exploitation Right Preventing on the Land for Industrialization: Study on Mapping and Land Using in Social Function“. International Conference of Moslem Society 2 (23.04.2018). http://dx.doi.org/10.24090/icms.2018.3280.

Der volle Inhalt der Quelle
Annotation:
Policy on agrarian affairs and natural resources is closely tied to economic liberalization, that is, to the exploitation of land for industrialization. The pressure of economic flows shapes political alignment with economic power, so the bureaucratic system becomes weak. The normative framework (das Sollen) for industry is regulated by UU No. 3 of 2014 on Industry and by PP No. 142 of 2015 on Industrial Areas. The industrial provisions related to land rights are regulated in PP No. 13 of 1995. According to the agrarian rules (Kep. BPN RI No. 2 of 1999), to attain an industrial permit the applicant must first obtain a location permit, which is issued by the Agrarian Office. A location permit is a license granted to a company to acquire the land necessary for investment, and it can be used as a license to transfer rights to use the land for investment purposes. The government is very confident in the rules it makes and does not focus on the economic impacts of liberalization that arise. The neglect of the environmental impacts of land rights exploitation, whether direct or indirect, has formed a “social system” that interacts within society in the form of a “system of expectations”, producing a highly complex interaction from which the egocentric side of human nature eventually emerges. A mutual relationship exists among the interested parties (with excesses that can be either positive or negative). Ultimately, in the name of the bureaucracy, an economic liberalization is born that subjects land to economic function and market mechanisms, and distances the social function of land rights from the concrete meaning of social justice.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Nile, Richard. „Post Memory Violence“. M/C Journal 23, Nr. 2 (13.05.2020). http://dx.doi.org/10.5204/mcj.1613.

Der volle Inhalt der Quelle
Annotation:
Hundreds of thousands of Australian children were born in the shadow of the Great War, fathered by men who had enlisted between 1914 and 1918. Their lives could be, and often were, hard and unhappy, as Anzac historian Alistair Thomson observed of his father’s childhood in the 1920s and 1930s. David Thomson was the son of returned serviceman Hector Thomson, who spent much of his adult life in and out of repatriation hospitals (257-259) and whose memory was subsequently expunged from Thomson family stories (299-267). These children of trauma fit within a pattern suggested by Marianne Hirsch in her influential essay “The Generation of Postmemory”. According to Hirsch, “postmemory describes the relationship of the second generation to powerful, often traumatic, experiences that preceded their births but that were nevertheless transmitted to them so deeply as to seem to constitute memories in their own right” (n.p.). This article attempts to situate George Johnston’s novel My Brother Jack (1964) within the context of postmemory narratives of violence, which were complicated in Australia by the Anzac legend that occluded any open discussion of the extent of war trauma present within the community, including the children of war.

“God knows what damage” the war “did to me psychologically” (48), ponders Johnston’s protagonist and alter-ego David Meredith in My Brother Jack. Published to acclaim fifty years after the outbreak of the First World War, My Brother Jack became a widely read text that seemingly spoke to the shared cultural memories of a generation which did not know battlefield violence directly but experienced its effects pervasively and vicariously in the aftermath through family life, storytelling, and the memorabilia of war. For these readers, the novel represented more than a work of fiction; it was a touchstone to, and indicative of, their own negotiations of often unspoken post-war trauma.

Meredith, like his creator, is born in 1912. Strictly speaking, therefore, neither is part of the post-war generation. However, they are representative, and therefore indicative, of the post-war “hinge generation” which was expected to assume “guardianship” of the Anzac legend, though often found the narrative logic challenging. They had been “too young for the war to have any direct effect”, and yet “every corner” of their family’s small suburban homes appears to be “impregnated with some gigantic and sombre experience that had taken place thousands of miles away” (17).

According to Johnston’s biographer, Garry Kinnane, the “most teasing puzzle” of George Johnston’s “fictional version of his childhood in My Brother Jack is the monstrous impression he creates of his returned serviceman father, John George Johnston, known to everyone as ‘Pop.’ The first sixty pages are dominated by the tyrannical figure of Jack Meredith senior” (1).

A large man purported to be six foot three inches (1.9 metres) in height and weighing fifteen stone (95 kilograms), the real-life Pop Johnston reputedly stood head and shoulders above the minimum requirement of five foot six inches (1.68 metres) at the time of his enlistment for war in 1914 (Kinnane 4). In his fortieth year, Jack Johnston senior was also around twice the age of the average Australian soldier and among the one in five who were married.

According to Kinnane, Pop Johnston had “survived the ordeal of Gallipoli” in 1915 only to “endure three years of trench warfare in the Somme region”.
While the biographer and the Johnston family may well have held this to be true, the claim is a distortion. There are a few intimations throughout My Brother Jack and its sequel Clean Straw for Nothing (1969) to suggest that George Johnston may have suspected that his father’s wartime service stories had been embellished, though the depicted wartime service of Pop Meredith remains firmly within the narrative arc of the Anzac legend. This has the effect of layering the postmemory violence experienced by David Meredith and, by implication, his creator, George Johnston. Both are expected to be keepers of a lie masquerading as inviolable truth, which further brutalises them.

John George (Pop) Johnston’s First World War military record reveals a different story to the accepted historical account and his fictionalisation in My Brother Jack. He enlisted two and a half months after the landing at Gallipoli, on 12 July 1915, and left for overseas service on 23 November. Not quite the imposing six foot three figure of Kinnane’s biography, he was fractionally under five foot eleven (1.8 metres) and weighed thirteen stone (82.5 kilograms). Assigned to the Fifth Field Engineers on account of his experience as an electric tram fitter, he did not see frontline service at Gallipoli (NAA).

Rather, according to the company’s history, the Fifth Engineers were involved in a range of infrastructure and support work on the Western Front, including the digging and maintenance of trenches, laying duckboard, pontoons and tramlines, removing landmines, building huts, showers and latrines, repairing roads, and laying drains; they built a cinema at Beaulencourt, piers for a “Brigade Swimming Carnival”, and baths at Malhove consisting of a large “galvanised iron building” with a “concrete floor” and “setting tanks capable of bathing 2,000 men per day” (AWM). It is likely that members of the company were also involved in burial details.

Sapper Johnston was hospitalised twice during his service with influenza and saw out most of his war from October 1917 attached to the Army Cookery School (NAA). He returned to Australia in May 1919 on board the HMAT Kildonan Castle which, according to the Sydney Morning Herald, also carried the official war correspondent and creator of the Anzac legend C.E.W. Bean, national poet Banjo Paterson, and “Warrant Officer C G Macartney, the famous Australian cricketer”. The Herald also listed the names of “Returned Officers” and “Decorated Men”, but not Pop Johnston, who had occupied the lower decks with the other returning men (“Soldiers Return”).

Like many of the more than 270,000 returned soldiers, Pop Johnston apparently exhibited observable changes upon his repatriation to Australia: “he was partially deaf”, which was attributed to the “constant barrage of explosions”, while “gas” was suspected to have “left him with a legacy of lung disorders”. Yet, if “anyone offered commiserations” on account of this war legacy, he was quick to “dismiss the subject with the comment that ‘there were plenty worse off’” (Kinnane 6). The assumption is that Pop’s silence is stoic, the product of unspeakable horror and perhaps a symptom of survivor guilt.

An alternative interpretation, suggested by Alistair Thomson in Anzac Memories, is that the experiences of the vast majority of returned soldiers were expected to fit within the master narrative of the Anzac legend in order to be accepted and believed, and that there was no space available to speak truthfully about alternative war service.
Under the pressure of Anzac expectations, a great many composed stories or remained selectively silent (14).

Data gleaned from the official medical history suggest that as many as four out of every five returned servicemen experienced emotional or psychological disturbance related to their war service. However, the two branches of medicine represented by surgeons and physicians in the Repatriation Department—charged with attending to the welfare of returned servicemen—focused on the body rather than the mind and the emotions (Murphy and Nile).

The repatriation records of returned Australian soldiers reveal that there were, indeed, plenty physically worse off than Pop Johnston on account of bodily disfigurement or because they had been somatically compromised. An estimated 30,000 returned servicemen died in the decade after the cessation of hostilities to 1928, bringing the actual number of war dead to around 100,000, while a 1927 official report tabled the medical conditions of a further 72,388 veterans: 28,305 were debilitated by gun and shrapnel wounds; 22,261 were rheumatic or had respiratory diseases; 4,534 were afflicted with eye, ear, nose, or throat complaints; 9,186 had tuberculosis or heart disease; 3,204 were amputees; while only 2,970 were listed as suffering “war neurosis” (“Enlistment”).

Long after the guns had fallen silent and the wounded survivors returned home, the physical effects of war continued to be apparent in homes and hospital wards around the country, while psychological and emotional trauma remained largely undiagnosed and consequently untreated. David Meredith’s attitude towards his able-bodied father is frequently dismissive and openly scathing: “dad, who had been gassed, but not seriously, near Vimy Ridge, went back to his old job at the tramway depot” (9). The narrator-son later considers:

what I realise now, although I never did at the time, is that my father, too, was oppressed by intimidating factors of fear and change. By disillusion and ill-health too. As is so often the case with big, strong, athletic men, he was an extreme hypochondriac, and he had convinced himself that the severe bronchitis which plagued him could only be attributed to German gas he had swallowed at Vimy Ridge. He was too afraid to go to a doctor about it, so he lived with a constant fear that his lungs were decaying, and that he might die at any time, without warning. (42-3)

During the writing of My Brother Jack, the author-son was in chronically poor health and had recently been diagnosed with the romantic malady and poet’s disease of tuberculosis (Lawler), which plagued him throughout his work on the novel. George Johnston believed (correctly, as it turned out) that he was dying of the disease, though he was also an alcoholic and a smoker and had been reluctant to consult a doctor. It is possible, and indeed likely, that he resentfully viewed his condition as an extension of his father—vicariously expressed through the depiction of Pop Meredith, who exhibits hysterical symptoms which his son finds insufferable. David Meredith remains embittered and unforgiving to the very end. Pop Meredith “lived to seventy-three having died, not of German gas, but of a heart attack” (46).

Pop Meredith’s return from the war in 1919 terrifies his seven-year-old son “Davy”, who accompanies the family to the wharf to welcome home a hero. The young boy is unable to recall anything about the father he is about to meet, ostensibly for the first time.
Davy becomes overwhelmed by the crowds and frightened by the “interminable blaring of horns” of the troopships and the “ceaseless roar of shouting”. Dwarfed by the bodies of much larger men, he becomes

too frightened to look up at the hours-long progression of dark, hard faces under wide, turned-up hats seen against bayonets and barrels that are more blue than black ... the really strong image that is preserved now is of the stiff fold and buckle of coarse khaki trousers moving to the rhythm of knees and thighs and the tight spiral curves of puttees and the thick boots hammering, hollowly off the pier planking and thunderous on the asphalt roadway.

Depicted as being small for his age, Davy is overwrought “with a huge and numbing terror” (10).

In the years that follow, the younger Meredith desires emotional stability but is denied it because of the war’s legacy, which manifests in the form of a violent patriarch convinced that his son has been rendered effeminate by the manly absence during vital stages of development. With the return of the father to the household, Davy grows to fear and ultimately despise a man who remains as alien to him as the formerly absent soldier had been during the war:

exactly when, or why, Dad introduced his system of monthly punishments I no longer remember. We always had summary punishment, of course, for offences immediately detected—a cuffing around the ears or a slash with a stick or a strap—but Dad’s new system was to punish for the offences which had escaped his attention. So on the last day of every month Jack and I would be summoned in turn to the bathroom and the door would be locked and each of us would be questioned about the sins which we had committed and which he had not found out about. This interrogation was the merest formality; whether we admitted to crimes or desperately swore our innocence it was just the same; we were punished for the offences which, he said, he knew we must have committed and had to lie about. We then had to take our shirts and singlets off and bend over the enamelled bath-tub while he thrashed us with the razor-strop. In the blind rages of these days he seemed not to care about the strength he possessed nor the injuries he inflicted; more often than not it was the metal end of the strop that was used against our backs. (48)

Ironically, the ritualised brutality appears to be a desperate effort by the old man to compensate for his own emasculation in war and unresolved trauma now that the war has ended. This plays out in complicated fashion in the development of David Meredith in Clean Straw for Nothing, Johnston’s sequel to My Brother Jack.

The imputation is that Pop Meredith practices violence in an attempt to reassert his failed masculinity and reinstate his status as the head of the household. Older son Jack’s beatings cease when, as a more physically able young man, he is able to threaten the aggressor with violent retaliation. This action does not spare the younger, weaker Davy, who remains dominated. “My beatings continued, more ferociously than ever …
They ceased only because one day my father went too far; he lambasted me so savagely that I fell unconscious into the bath-tub, and the welts across my back made by the steel end of the razor-strop had to be treated by a doctor” (53).

Pop Meredith is persistently reminded that he has no corporeal signifiers of war trauma (only a cough); he is surrounded by physically disabled former soldiers who are presumed to be worse off than he is on account of somatic wounding. He becomes “morose, intolerant, bitter and violently bad-tempered”, expressing particular “displeasure and resentment” toward his wife, a trained nurse who has assumed carer responsibilities for homing the injured men: “he had altogether lost patience with her role of Florence Nightingale to the halt and the lame” (40). Their marriage is loveless: “one can only suppose that he must have been darkly and profoundly disturbed by the years-long procession through our house of Mother’s ‘waifs and strays’—those shattered former comrades-in-arms who would have been a constant and sinister reminder of the price of glory” (43); a price he had failed to adequately pay with his uncompromised body intact.

Looking back, a more mature David Meredith attempts to bring order, perspective, and understanding to the “mess of memory and impressions” of his war-affected childhood in an effort to wrest back control over his postmemory violation: “Jack and I must have spent a good part of our boyhood in the fixed belief that grown-up men who were complete were pretty rare beings—complete, that is, in that they had their sight or hearing or all of their limbs” (8). While the father is physically complete, his brooding presence sets the tone for the oppressively “dark experience” within the family home, where all rooms are “inhabited by the jetsam that the Somme and the Marne and the salient at Ypres and the Gallipoli beaches had thrown up” (18). It is not until Davy explores the contents of the “big deep drawer at the bottom of the cedar wardrobe” in his parents’ bedroom that he begins to “sense a form in the shadow” of the “faraway experience” that had been the war. The drawer contains his father’s service revolver and ammunition, battlefield souvenirs and French postcards but, “most important of all, the full set of the Illustrated War News” (19), with photographs of battlefield carnage. These are the equivalent of Hirsch’s photographs of the Holocaust, establishing in Meredith an ontology that links him more realistically to the brutalising past and the source of his ongoing traumatisation (Hirsch). From these, Davy begins to discern something of his father’s torment, but also his good fortune at having survived, and he makes curatorial interventions not by becoming a custodian of abjection like second-generation Holocaust survivors but by disposing of the printed material, leaving behind the artefacts of heroism: the gun, the bullets, the medals and ribbons. The implication is that he has now become complicit in the very narrative that had oppressed him since his father’s return from war.

No one apparently notices, or at least comments on, the removal of the journals, the images of which become linked in the young boy’s mind to an incident outside a “dilapidated narrow-fronted photographer’s studio which had been deserted and padlocked for as long as I could remember”. A number of sun-damaged photographs are still displayed in the window.
Faded to a “ghostly, deathly pallor” and speckled with fly droppings, the photographs had, years earlier, captured young men in uniform before their embarkation for the war. An “agate-eyed” boy from Davy’s school joins in the gazing, saying nothing for a long time until the silence is broken: “all them blokes there is dead, you know” (20).

After the unnamed boy departs with a nonchalant “hoo-roo”, young Davy runs “all the way home, trying not to cry”. He cannot adequately explain the reason for his sudden reaction: “I never after that looked in the window of the photographer’s studio or the second hand shop”. From that day on Davy makes a “long detour” to ensure he never passes the shops again (20-1). Having witnessed images of pre-war undamaged young men in the prime of their youth, he has come face-to-face with the consequences of war, which he is unable to reconcile with the survival and return of his much older father.

The photographs of the young men establish a causal connection to the physically wrecked remnants that have shaped Davy’s childhood. These are the living remains that might otherwise have been the “corpses sprawled in mud or drowned in flooded shell craters” depicted in the Illustrated News. The photographs establish Davy’s connection to the things “propped up our hallway”, to “Bert ‘sobbing’ in the backyard and Gabby Dixon’s face at the dark end of the room”, and, only reluctantly, to the “bronchial cough of my father going off in the dawn light to the tramways depot” (18).

That is to say, Davy has begun to piece together sense from senselessness: his father’s complicity and survival and, by association, his own implicated life and psychological wounding. He has approached the source of his father’s abjection, and also his own, though he continues to be unable to accept and forgive. Like his father—though at a remove—he has been damaged by the legacies of the war and is also its victim.

Ravaged by tuberculosis and alcoholism, George Johnston died in 1970. According to the artist Sidney Nolan, he had for years resembled the ghastly photographs of survivors of the Holocaust (Marr 278). George’s forty-five-year-old alcoholic wife Charmian Clift predeceased him by twelve months, having committed suicide in 1969. Four years later, in 1973, George and Charmian’s twenty-four-year-old daughter Shane also took her own life. Their son Martin drank himself to death and died of organ failure at the age of forty-three in 1990. They are all “dead, you know”.

References

AWM. Fifth Field Company, Australian Engineers. Diaries, AWM4 Sub-class 14/24.

“Enlistment Report.” Reveille, 29 Sep. 1928.

Hirsch, Marianne. “The Generation of Postmemory.” Poetics Today 29.1 (Spring 2008): 103-128. <https://read.dukeupress.edu/poetics-today/article/29/1/103/20954/The-Generation-of-Postmemory>.

Johnston, George. Clean Straw for Nothing. London: Collins, 1969.

———. My Brother Jack. London: Collins, 1964.

Kinnane, Garry. George Johnston: A Biography. Melbourne: Nelson, 1986.

Lawler, Clark. Consumption and Literature: The Making of the Romantic Disease. Basingstoke: Palgrave Macmillan, 2006.

Marr, David, ed. Patrick White Letters. Sydney: Random House, 1994.

Murphy, Ffion, and Richard Nile. “Gallipoli’s Troubled Hearts: Fear, Nerves and Repatriation.” Studies in Western Australian History 32 (2018): 25-38.

NAA. John George Johnston War Service Records. <https://recordsearch.naa.gov.au/SearchNRetrieve/Interface/ViewImage.aspx?B=1830166>.

“Soldiers Return by the Kildonan Castle.” Sydney Morning Herald, 10 May 1919: 18.

Thomson, Alistair. Anzac Memories: Living with the Legend. Clayton: Monash UP, 2013.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Lavis, Anna, und Karin Eli. „Corporeal: Exploring the Material Dynamics of Embodiment“. M/C Journal 19, Nr. 1 (06.04.2016). http://dx.doi.org/10.5204/mcj.1088.

Der volle Inhalt der Quelle
Annotation:
Looked at again and again half consciously by a mind thinking of something else, any object mixes itself so profoundly with the stuff of thought that it loses its actual form and recomposes itself a little differently in an ideal shape which haunts the brain when we least expect it. (Virginia Woolf 38)

From briefcases to drugs, and from boxing rings to tower blocks, this issue of M/C Journal turns its attention to the diverse materialities that make up our social worlds. Across a variety of empirical contexts, the collected papers employ objects, structures, and spaces as lenses onto corporeality, extending and unsettling habitual understandings of what a body is and does. By exploring everyday encounters among bodies and other materialities, the contributors elucidate the material processes through which human corporeality is enacted and imagined, produced and unmade.

That materialities “tell stories” of bodies is an implicit tenet of embodied existence. In biomedical practice, for example, the thermometer assigns a value to a disease process which might already be felt, whereas the blood pressure cuff sets in motion a story of illness that is otherwise hidden or existentially absent. In so doing, such objects recast corporeality, shaping not only experiences of embodied life, but also the very matter of embodiment.

Whilst recognising that objects are “companion[s] in life experience” (Turkle 5), this issue seeks to go beyond a sole focus on embodied experience, and explore the co-constitutive entanglements of embodiment and materiality. The collected papers examine how bodies and the material worlds around them are dialectically forged and shaped. By engaging with a specific object, structure, or space, each paper reflects on embodiment in ways that take account of its myriad material dynamics.

Bodies

How to conceptualise the body and attend to its complex relationships with sociality, identity, and agency has been a central question in many recent strands of thinking across the humanities and social sciences (see Blackman; Shilling). From discussions of embodiment and personhood to an engagement with the affective and material turns, these strands have challenged theoretical emphases on body/mind dualisms that have historically informed much thinking about bodies in Western thought, turning the analytic focus towards the felt experience of embodied being.

Through these explorations of embodiment, the body, as Csordas writes, has emerged as “the existential ground of culture” (135). Inspired by phenomenology, and particularly by the writings of Merleau-Ponty, Csordas has theorised the body as always-already inter-subjective. In constant dynamic interaction with self, others, and the environment, the body is both creative and created, constituting culture while being constituted by it. As such, bodies continuously materialise through sensory experiences of oneself and others, spaces and objects, such that the embodied self is at once both material and social.

The concept of embodiment—as inter-subjective, dynamic, and experientially focussed—is central to this collection of papers. In using the term corporeality, we build on the concept of embodiment in order to interrogate the material makings of bodies. We attend to the ways in which objects, structures, and spaces extend into, and emanate from, embodied experiences and bodily imaginings. Being inherently inter-subjective, bodies are therefore not individual, clearly bounded entities.
Rather, the body is an “infinitely malleable and highly unstable culturally constructed product” (Shilling 78), produced, shaped, and negated by political and social processes. Studies of professional practice—for example, in medicine—have shown how the body is assembled through culturally specific, sometimes contingent, arrangements of knowledges and practices (Berg and Mol). Such arrangements serve to make the body inherently “multiple” (Mol) as well as mutable. A further challenge to entrenched notions of singularity and boundedness has been offered by the “affective turn” (Halley and Clough) in the humanities and social sciences (see also Gregg and Seigworth; Massumi; Stewart). Affect theory is concerned with the felt experiences that comprise and shape our being-in-the-world. It problematises the discursive boundaries among emotive and visceral, cognitive and sensory, experiences. In so doing, the affective turn has sought to theorise inter-subjectivity by engaging with the ways in which bodily capacities arise in relation to other materialities, contexts, and “force-relations” (Seigworth and Gregg 4). In attending to affect, emphasis is placed on the unfinishedness of both human and non-human bodies, showing these to be “perpetual[ly] becoming (always becoming otherwise)” (3, italics in original). Affect theory thereby elucidates that a body is “as much outside itself as in itself” and is “webbed in its relations” (3).
Objects
In parallel to the “affective turn,” a “material turn” across the social sciences has attended to “corporeality as a practical and efficacious series of emergent capacities” which “reveals both the materiality of agency and agentic properties inherent in nature itself” (Coole and Frost 20). This renewed attention to the “stuff” (Miller) of human and non-human environments and bodies has complemented, but also challenged, constructivist theorisations of social life that tend to privilege discourse over materiality. Engaging with the “evocative objects” (Turkle) of everyday life has thereby challenged any assumed distinction between material and social processes. The material turn has, instead, sought to take account of “active processes of materialization of which embodied humans are an integral part, rather than the monotonous repetitions of dead matter from which human subjects are apart” (Coole and Frost 8). Key to this material turn has been a recognition that matter is not lumpen or inert; rather, it is processual, emergent, and always relational. From Bergson, through Deleuze and Guattari, to Bennett and Barad, a focus on the “vitality” of matter has drawn questions about the agency of the animate and inanimate to the fore. Engaging with the agentic capacities of the objects that surround us, the “material turn” recognises human agency as always embedded in networks of human and non-human actors, all of whom shape and reshape each other. This is an idea influentially articulated in Actor-Network-Theory (Latour). In an exposition of Actor-Network-Theory, Latour writes: “Scallops make the fisherman do things just as nets placed in the ocean lure the scallops into attaching themselves to the nets and just as data collectors bring together fishermen and scallops in oceanography” (107, italics in original).
Humans, non-human animals, objects, and spaces are thus always already entangled, their capacities realised and their movements motivated, directed, and moulded by one another in generative processes of responsive action.
Embodied Objects: The Issue
At the intersections of a constructivist and materialist analysis, Alison Bartlett’s paper draws our attention to the ways in which “retro masculinity is materialised and embodied as both a set of values and a set of objects” in Nancy Meyers’s film The Intern. Bartlett engages with the business suit, the briefcase, and the handkerchief that adorn Ben the intern, played by Robert De Niro. Arguing that his “senior white male body” is framed by the depoliticised fetishisation of these objects, Bartlett elucidates how they construct, reinforce, or interrupt the gaze of others. The dynamics of the gaze are also the focus of Anita Howarth’s analysis of food banks in the UK. Howarth suggests that the material spaces of food banks, with their queues of people in dire need, make hunger visible. In so doing, food banks draw hunger from the hidden depths of biological intimacy into public view. Howarth thus calls attention to the ways in which individual bodies may be caught up in circulating cultural and political discursive regimes, in this case ones that define poverty and deservingness. Discursive entanglements also echo through Alexandra Littaye’s paper. Like Bartlett, Littaye focusses on the construction and performance of gender. Autoethnographically reflecting on her experiences as a boxer, Littaye challenges the cultural gendering of boxing in discourse and regulation. To unsettle this gendering, Littaye explores how being punched in the face by male opponents evolved into an experience of camaraderie and respect. She contends that the boxing ring is a unique space in which violence can break down definitions of gendered embodiment. Through the changing meaning of such encounters between another’s hand and the mutable surfaces of her face, Littaye charts how her “body boundaries were profoundly reconfigured” within the space of the boxing ring. This analysis highlights material transformations that bodies undergo—agentially or unagentially—in moments of encounter with other materialities, which is a key theme of the issue. Such material transformation is brought into sharp relief by Fay Dennis’s exploration of drug use, where ways of being emerge through the embodied entanglements of personhood and diamorphine, as the drug both offers and reconfigures bodily boundaries. Dennis draws on an interview with Mya, who has lived experience of drug use, and addiction treatment, in London, UK. Her analysis parses Mya’s discursive construction of “becoming normal” through the everyday use of drugs, highlighting how drugs are implicated in creating Mya’s construction of a “normal” embodied self as a less vulnerable, more productive, being-in-the-world. Moments of material transformation, however, can also incite experiences of embodied extremes. This is elucidated by the issue’s feature paper, in which Roy Brockington and Nela Cicmil offer an autoethnographic study of architectural objects. Focussing on two Brutalist housing developments in London, UK, they write that they “feel small and quite squashable in comparison” to the buildings they traverse. They suggest that the effects of walking within one of these vast concrete entities can be likened to having eaten the cake or drunk the potion from Alice in Wonderland (Carroll).
Like the boxing ring and diamorphine, the buildings “shape the physicality of the bodies interacting within them,” as Brockington and Cicmil put it. That objects, spaces, and structures are therefore intrinsic to, rather than set apart from, the dynamic processes through which human bodies are made or unmade ripples through this collection of papers in diverse ways. While Dennis’s paper focusses on the potentiality of body/object encounters to set in motion mutual processes of becoming, an interest in the vulnerabilities of such processes is shared across the papers. Glimpsed in Howarth’s, as well as in Brockington and Cicmil’s discussions, this vulnerability comes to the fore in Bessie Dernikos and Cathlin Goulding’s analysis of teacher evaluations as textual objects. Drawing on their own experiences of teaching at high school and college levels, Dernikos and Goulding analyse the ways in which teacher evaluations are “anything but dead and lifeless;” they explore how evaluations painfully intervene in or interrupt corporeality, as the words on the page “sink deeply into [one’s] skin.” These words thereby enter into and impress upon bodies, both viscerally and emotionally, their affective power unveiling the agency that imbues a lit screen or a scribbled page. Yet, importantly, this issue also demonstrates how bodies actively forge the objects, spaces, and environments they encounter. Paola Esposito’s paper registers the press of bodies on material worlds by exploring the collective act of walking with golden thread, a project that has since come to be entitled “Walking Threads.” Writing that the thread becomes caught up in “the bumpy path, trees, wind, and passers-by,” Esposito explores how these intensities and forms register on the moving collective of bodies, just as those bodies also press into, and leave traces on, the world around them. That diverse materialities thereby come to be imbued with, or perhaps haunted by, the material and affective traces of (other) bodies, is also shown by the metonymic resonance between Littaye’s face and her coach’s pad: each bears the marks of another’s punch. Likewise, in Bartlett’s analysis of The Intern, Ben is described as having “shaped the building where the floor dips over in the corner” due to the heavy printers he used in his previous, analogue era, job. This sense of the marks or fragments left by the human form perhaps emerges most resonantly in Michael Gantley and James Carney’s paper. Exploring mortuary practices in archaeological context, Gantley and Carney trace the symbolic imprint of culture on the body, and of the body on (material) culture; their paper shows how concepts of the dead body are informed by cultural anxieties and technologies, which in turn shape death rituals. This discussion thereby draws attention to the material, even molecular, traces left by bodies, long after those bodies have ceased to be of substance. The (im)material intermingling of human and non-human bodies that this highlights is also invoked, albeit in a more affective way, by Chris Stover’s analysis of improvisational musical spaces. Through a discussion of “musical-objects-as-bodies,” Stover shows how each performer leaves an imprint on the musical bodies that emerge from transient moments of performance. Writing that “improvised music is a more fruitful starting place for thinking about embodiment and the co-constitutive relationship between performer and sound,” Stover suggests that performers’ bodies and the music “unfold” together.
In so doing, he approaches the subject of bodies beyond the human, probing the blurred intersections among human and non-human (im)materialities. Across the issue, then, the contributors challenge any neat distinction between bodies and objects, showing how diverse materialities “become” together, to borrow from Deleuze and Guattari. This blurring is key to Gantley and Carney’s paper. They write that “in post-mortem rituals, the body—formerly the manipulator of objects—becomes itself the object that is manipulated.” Likewise, Esposito argues that “we generally think of objects and bodies as belonging to different domains—the inanimate and the animate, the lifeless and the living.” Her paper shares with the others a desire to illuminate the transient, situated, and often vulnerable processes through which bodies and (other) materialities are co-produced. Or, as Stover puts it, this issue “problematise[s] where one body stops and the next begins.” Thus, together, the papers explore the many dimensions and materialities of embodiment. In writing corporeality, the contributors engage with a range of theories and various empirical contexts, to interrogate the material dynamics through which bodies processually come into being. The issue thereby problematises taken-for-granted distinctions between bodies and objects. The corporeality that emerges from the collected discussions is striking in its relational and dynamic constitution, in the porosity of (imagined) boundaries between self, space, subjects, and objects. As the papers suggest, corporeal being is realised through and within continuously changing relations among the visceral, affective, and material. Such relations not only make individual bodies, but also implicate socio-political and ecological processes that materialise in structures, technologies, and lived experiences. We offer corporeality, then, as a framework to illuminate the otherwise hidden, politically contingent, becomings of embodied beings.
References
Barad, Karen. “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter.” Signs: Journal of Women in Culture and Society 28 (2003): 801–831.
Bennett, Jane. Vibrant Matter: A Political Ecology of Things. Durham, NC: Duke UP, 2010.
Berg, Marc, and Annemarie Mol, eds. Differences in Medicine: Unraveling Practices, Techniques, and Bodies. Durham, NC: Duke UP, 1998.
Bergson, Henri. Creative Evolution. London: Henry Holt and Company, 1911.
Blackman, Lisa. The Body: Key Concepts. London: Berg, 2008.
Carroll, Lewis. Alice in Wonderland. London: Macmillan, 1865.
Coole, Diana, and Samantha Frost. “Introducing the New Materialisms.” New Materialisms: Ontology, Agency, and Politics. Eds. Diana Coole and Samantha Frost. Durham, NC: Duke UP, 2010. 1-46.
Csordas, Thomas J. “Somatic Modes of Attention.” Cultural Anthropology 8.2 (1993): 135-156.
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. London: Continuum, 2004.
Gregg, Melissa, and Gregory Seigworth, eds. The Affect Theory Reader. Durham, NC: Duke UP, 2010.
Halley, Jean, and Patricia Ticineto Clough. The Affective Turn: Theorizing the Social. Durham, NC: Duke UP, 2007.
Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford UP, 2005.
Massumi, Brian. The Politics of Affect. Cambridge: Polity Press, 2015.
Merleau-Ponty, Maurice. Phenomenology of Perception. Trans. Colin Smith. London: Routledge, 1962.
Miller, Daniel. Stuff. Cambridge: Polity Press, 2010.
Mol, Annemarie. The Body Multiple: Ontology in Medical Practice. Durham, NC: Duke UP, 2002.
Seigworth, Gregory, and Melissa Gregg. “An Inventory of Shimmers.” The Affect Theory Reader. Eds. Melissa Gregg and Gregory Seigworth. Durham, NC: Duke UP, 2010. 1-28.
Shilling, Chris. The Body and Social Theory. Nottingham: SAGE Publications, 2012.
Stewart, Kathleen. Ordinary Affects. Durham, NC: Duke UP, 2007.
Turkle, Sherry. “The Things That Matter.” Evocative Objects: Things We Think With. Ed. Sherry Turkle. Cambridge, MA: MIT Press, 2007.
Woolf, Virginia. Street Haunting. London: Penguin Books, 2005.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Fraim, John. „Friendly Persuasion“. M/C Journal 3, Nr. 1 (01.03.2000). http://dx.doi.org/10.5204/mcj.1825.

Der volle Inhalt der Quelle
Annotation:
"If people don't trust their information, it's not much better than a Marxist-Leninist society." -- Orville Schell Dean, Graduate School of Journalism, UC Berkeley "Most people aren't very discerning. Maybe they need good financial information, but I don't think people know what good information is when you get into culture, society, and politics." -- Steven Brill,Chairman and Editor-in-chief, Brill's Content Once upon a time, not very long ago, advertisements were easy to recognise. They had simple personalities with goals not much more complicated than selling you a bar of soap or a box of cereal. And they possessed the reassuring familiarity of old friends or relatives you've known all your life. They were Pilgrims who smiled at you from Quaker Oats boxes or little tablets named "Speedy" who joyfully danced into a glass of water with the sole purpose of giving up their short life to help lessen your indigestion from overindulgence. Yes, sometimes they could be a little obnoxious but, hey, it was a predictable annoyance. And once, not very long ago, advertisements also knew their place in the landscape of popular culture, their boundaries were the ad space of magazines or the commercial time of television programs. When the ads got too annoying, you could toss the magazine aside or change the TV channel. The ease and quickness of their dispatch had the abruptness of slamming your front door in the face of an old door-to-door salesman. This all began to change around the 1950s when advertisements acquired a more complex and subtle personality and began straying outside of their familiar media neighborhoods. The social observer Vance Packard wrote a best-selling book in the late 50s called The Hidden Persuaders which identified this change in advertising's personality as coming from hanging around Professor Freud's psychoanalysis and learning his hidden, subliminal methods of trickery. Ice cubes in a glass for a liquor ad were no longer seen as simple props to help sell a brand of whiskey but were now subliminal suggestions of female anatomy. The curved fronts of automobiles were more than aesthetic streamlined design features but rather suggestive of a particular feature of the male anatomy. Forgotten by the new subliminal types of ads was the simple salesmanship preached by founders of the ad industry like David Ogilvy and John Caples. The word "sales" became a dirty word and was replaced with modern psychological buzzwords like subliminal persuasion. The Evolution of Subliminal Techniques The book Hidden Persuaders made quite a stir at the time, bringing about congressional hearings and even the introduction of legislation. Prominent motivation researchers Louis Cheskin and Ernest Dichter utilised the new ad methods and were publicly admonished as traitors to their profession. The life of the new subliminal advertising seemed short indeed. Even Vance Packard predicted its coming demise. "Eventually, say by A.D. 2000," he wrote in the preface to the paperback edition of his book, "all this depth manipulation of the psychological variety will seem amusingly old- fashioned". Yet, 40 years later, any half-awake observer of popular culture knows that things haven't exactly worked out the way Packard predicted. In fact what seems old-fashioned today is the belief that ads are those simpletons they once were before the 50s and that products are sold for features and benefits rather than for images. 
Even Vance Packard expresses amazement at the evolution of advertising since the 50s, noting that today ads for watches have nothing to do with watches or that ads for shoes scarcely mention shoes. Packard remarks "it used to be the brand identified the product. In today's advertising the brand is the product". Modern advertising, he notes, has an almost total obsession with images and feelings and an almost total lack of any concrete claims about the product and why anyone should buy it. Packard admits puzzlement. "Commercials seem totally unrelated to selling any product at all". Jeff DeJoseph of the J. Walter Thompson firm underlines Packard's comments. "We are just trying to convey a sensory impression of the brand, and we're out of there". Subliminal advertising techniques have today infiltrated the heart of corporate America. As Ruth Shalit notes in her article "The Return of the Hidden Persuaders" from the 27 September 1999 issue of Salon magazine, "far from being consigned to the maverick fringe, the new psycho-persuaders of corporate America have colonized the marketing departments of mainstream conglomerates. At companies like Kraft, Coca-Cola, Proctor & Gamble and Daimler-Chrysler, the most sought-after consultants hail not from McKinsey & Company, but from brand consultancies with names like Archetype Discoveries, PsychoLogics and Semiotic Solutions". Shalit notes a growing number of CEOs have become convinced they cannot sell their brands until they first explore the "Jungian substrata of four-wheel drive; unlock the discourse codes of female power sweating; or deconstruct the sexual politics of bologna". The result, as Shalit observes, is a "charmingly retro school of brand psychoanalysis, which holds that all advertising is simply a variation on the themes of the Oedipus complex, the death instinct, or toilet training, and that the goal of effective communications should be to compensate the consumer for the fact that he was insufficiently nursed as an infant, has taken corporate America by storm".
The Growing Ubiquity of Advertising
Yet pervasive as the subliminal techniques of advertising have become, the emerging power of modern advertising ultimately centres around "where" it is rather than "what" it is or "how" it works. The power of modern advertising is within this growing ubiquity or "everywhereness" of advertising rather than the technology and methodology of advertising. The ultimate power of advertising will be arrived at when ads cannot be distinguished from their background environment. When this happens, the environment will become a great continuous ad. In the process, ads have wandered away from their well-known hangouts in magazines and TV shows. Like alien-infected pod-people of early science fiction movies, they have stumbled out of these familiar media playgrounds and suddenly sprouted up everywhere. The ubiquity of advertising is not being driven by corporations searching for new ways to sell products but by media searching for new ways to make money. Traditionally, media made money by selling subscriptions and advertising space. But these two key income sources are quickly drying up in the new world of online media. Journalist Mike France wisely takes notice of this change in an important article "Journalism's Online Credibility Gap" from the 11 October 1999 issue of Business Week.
France notes that subscription fees have not worked because "Web surfers are used to getting content for free, and they have been reluctant to shell out any money for it". Advertising sales and their Internet incarnation in banner ads have also been a failure so far, France observes, because companies don't like paying a flat fee for online advertising since it's difficult to track the effectiveness of their marketing dollars. Instead, they only want to pay for actual sales leads, which can be easily monitored on the Web as readers click from site to site. Faced with the above situation, media companies have gone on the prowl for new ways to make money. This search underpins the emerging ubiquity of advertising: the fact that it is increasingly appearing everywhere. In the process, traditional boundaries between advertising and other societal institutions are being overrun by these media forces on the prowl for new "territory" to exploit. That time when advertisements knew their place in the landscape of popular culture and confined themselves to just magazines or TV commercials is a fading memory. And today, as each of us is bombarded by thousands of ads each day, it is impossible to "slam" the door and keep them out of our house as we could once slam the door in the face of the old door-to-door salesmen. Of course you can find them on the matchbook cover of your favorite bar, on t-shirts sold at some roadside tourist trap or on those logo baseball caps you always pick up at trade shows. But now they have got a little more personal and stare at you over urinals in the men's room. They have even wedged themselves onto the narrow little bars at the check-out counter conveyer belts of supermarkets or onto the handles of gasoline pumps at filling stations. The list goes on and on. (No, this article is not an ad.)
Advertising and Entertainment
In advertising's march to ubiquity, two major boundaries have been crossed. They are crucial boundaries which greatly enhance advertising's search for the invisibility of ubiquity. Yet they are also largely invisible themselves. These are the boundaries separating advertising from entertainment and those separating advertising from journalism. The incursion of advertising into entertainment is a result of the increasing merger of business and entertainment, a phenomenon pointed out in best-selling business books like Michael Wolf's Entertainment Economy and Joseph Pine's The Experience Economy. Wolf, a consultant for Viacom, Newscorp, and other media heavyweights, argues business is becoming synonymous with entertainment: "we have come to expect that we will be entertained all the time. Products and brands that deliver on this expectation are succeeding. Products that do not will disappear". And, in The Experience Economy, Pine notes the increasing need for businesses to provide entertaining experiences. "Those businesses that relegate themselves to the diminishing world of goods and services will be rendered irrelevant. To avoid this fate, you must learn to stage a rich, compelling experience". Yet entertainment, whether provided by businesses or the traditional entertainment industry, is increasingly weighted down with the "baggage" of advertising. In a large sense, entertainment is a form of new media that carries ads. Increasingly, this seems to be the overriding purpose of entertainment. Once, not long ago, when ads were simple and confined, entertainment was also simple and its purpose was to entertain rather than to sell.
There was money enough in packed movie houses or full theme parks to make a healthy profit. But all this has changed with advertising's ubiquity. Like media corporations searching for new revenue streams, the entertainment industry has responded to flat growth by finding new ways to squeeze money out of entertainment content. Films now feature products in paid-for scenes and most forms of entertainment use product tie-ins to other areas such as retail stores or fast-food restaurants. Also popular with the entertainment industry is what might be termed the "versioning" of entertainment products into various sub-species where entertainment content is transformed into other media so it can be sold more than once. A film may not make a profit on just the theatrical release but there is a good chance it doesn't matter because it stands to make a profit in video rentals.
Advertising and Journalism
The merger of advertising and entertainment goes a long way towards a world of ubiquitous advertising. Yet the merger of advertising and journalism is the real "promised land" in the evolution of ubiquitous advertising. This fundamental shift in the way news media make money provides the final frontier to be conquered by advertising. As Mike France observes in Business Week, this merger "could potentially change the way they cover the news. The more the press gets in the business of hawking products, the harder it will be to criticize those goods -- and the companies making them". Of course, there is that persistent myth, perpetuated by news organisations, that they attempt to preserve editorial independence by keeping the institutions they cover and their advertisers at arm's length. But this is proving more and more difficult, particularly for online media. Observers like France have pointed out a number of reasons for this. One is the growth of ads in news media that look more like editorial content than ads. While long-standing ethical rules bar magazines and newspapers from printing advertisements that look like editorial copy, these rules become fuzzy for many online publications. Another reason making it difficult to separate advertising from journalism is the growing merger and consolidation of media corporations. Fewer and fewer corporations control more and more entertainment, news and ultimately advertising. It becomes difficult for a journalist to criticise a product when it has a connection to the large media conglomerate the journalist works for. Traditionally, it has been rare for media corporations to make direct investments in the corporations they cover. However, as Mike France notes, CNBC crossed this line when it acquired a stake in Archipelago in September 1999. CNBC, which runs a business-news Website, acquired a 12.4% stake in Archipelago Holdings, an electronic communications network for trading stock. Long-term plans are likely to include allowing visitors to cnbc.com to link directly to Archipelago. That means CNBC could be in the awkward position of both providing coverage of online trading and profiting from it. France adds that other business news outlets, such as Dow Jones (DJ), Reuters, and Bloomberg, already have indirect ties to their own electronic stock-trading networks. And, in news organisations, a popular method of cutting down on the expense of paying journalists for content is the growing practice of accepting advertiser-written content or "sponsored edit" stories.
The resulting confusion for readers violates the spirit of a long-standing American Society of Magazine Editors (ASME) rule prohibiting advertisements with "an editorial appearance". But as France notes, this practice is thriving online. This change happens in ever so subtle ways. "A bit of puffery inserted here," notes France, "a negative adjective deleted there -- it doesn't take a lot to turn a review or story about, say, smart phones, into something approaching highbrow ad copy". He offers an example in forbes.com, whose Microsoft ads could easily be mistaken for staff-written articles. Media critic James Fallows points out that consumers have been swift to discipline sites that are caught acting unethically and using "sponsored edits". He notes that when it was revealed that amazon.com was taking fees of up to $10,000 for books that it labelled as "destined for greatness", its customers were outraged, and the company quickly agreed to disclose future promotional payments. Unfortunately, though, the lesson episodes like these teach online companies like Amazon centres around more effective ways to be less "revealing" rather than abstention from the practice of "sponsored edits". France reminds us that journalism is built on trust. In the age of the Internet, though, trust is quickly becoming an elusive quality. He writes "as magazines, newspapers, radio stations, and television networks rush to colonize the Internet, the Great Wall between content and commerce is beginning to erode". In the end, he ponders whether there is an irrevocable conflict between e-commerce and ethical journalism. When you can't trust journalists to be ethical, just who can you trust?
Transaction Fees & Affiliate Programs - Advertising's Final Promised Land?
The engine driving the growing ubiquity of advertising, though, is not the increasing merger of advertising with other industries (like entertainment and journalism) but rather a new business model of online commerce and Internet technology called transaction fees. This emerging and potentially dominant Internet e-commerce technology provides for the ability to track transactions electronically on Websites and to garner transaction fees. Through these fees, many media Websites take a percentage of payment through online product sales. In effect, a media site becomes one pervasive direct mail ad for every product mentioned on its site. This of course puts them in a much closer economic partnership with advertisers than is the case with traditional fixed-rate ads where there is little connection between product sales and the advertising media carrying them. Transaction fees are the new online version of direct marketing, and the emerging Internet technology for their application is one of the great economic driving forces of the entire Internet commerce apparatus. The promise of transaction fees is that a number of people, besides product manufacturers and advertisers, might gain a percentage of profit from selling products via hypertext links. Once upon a time, the manufacturer of a product was the one that gained (or lost) from marketing it. Now, however, there is the possibility that journalists, news organisations and entertainment companies might also gain from marketing via transaction fees. The spread of transaction fees outside media into the general population provides an even greater boost to the growing ubiquity of advertising. This is done through the handmaiden of media transaction fees: "affiliate programs" for the general populace.
Through the growing magic of Internet technology, it becomes possible for all of us to earn money through affiliate program links to products and transaction fee percentages in the sale of these products. Given this scenario, it is not surprising that advertisers will increasingly pressure media Websites to support themselves with e-commerce transaction fees. Charles Li, Senior Analyst for New Media at Forrester Research, estimates that by the year 2003, media sites will receive $25 billion in revenue from transaction fees, compared with $17 billion from ads and $5 billion from subscriptions. The possibility is great that all media will become like great direct response advertisements taking a transaction fee percentage for anything sold on their sites. And there is the more dangerous possibility that all of us will become the new "promised land" for a ubiquitous advertising. All of us will have some cut in selling somebody else's product. When this happens and there is a direct economic incentive for all of us to say nice things about products, what is the need and importance of subliminal techniques and methods creating advertising based on images which try to trick us into buying things?
A Society Without Critics?
It is for these reasons that criticism and straight news are becoming an increasingly endangered species. Everyone has to eat, but what happens when one can no longer make meal money by criticising current culture? Cultural critics become a dying breed. There is no money in criticism because it is based around disconnection rather than connection to products. No links to products or Websites are involved here. Critics are becoming lonely icebergs floating in the middle of a cyber-sea of transaction fees, watching everyone else (except themselves) make money on transaction fees. The subliminal focus of the current consultancies is little more than a repackaging of an old theme discovered long ago by Vance Packard. But the growing "everywhereness" and "everyoneness" of modern advertising through transaction fees may mark the beginning of a revolutionary new era. Everyone might become their own "brand", a point well made in Tom Peters's article "The Brand Called You". Media critic James Fallows is somewhat optimistic that there still may remain "niche" markets for truthful information and honest cultural criticism. He suggests that surely people looking for mortgages, voting for a politician, or trying to decide what movie to see will continue to need unbiased information to help them make decisions. But one must ask what happens when a number of people have some "affiliate" relationship with suggesting particular movies, politicians or mortgages? Orville Schell, dean of the Graduate School of Journalism at the University of California at Berkeley, has summarised this growing ubiquity of advertising in a rather simple and elegant manner, saying "at a certain point, people won't be able to differentiate between what's trustworthy and what isn't". Over the long run, this loss of credibility could have a corrosive effect on society in general -- especially given the media's importance as a political, cultural, and economic watchdog. Schell warns, "if people don't trust their information, it's not much better than a Marxist-Leninist society". Yet, will we be able to realise this simple fact when we all become types of Marxists and Leninists? Still, there is the great challenge to America to learn how to utilise transaction fees in a democratic manner.
In effect, this would combine the technological promise of the new economy with that old promise, and perhaps even myth, of a democratic America. America stands on the verge of a great threshold and challenge in the growing ubiquity of advertising. In a way, as with most great opportunities or threats, this challenge centres on a peculiar paradox. On the one hand, there is the promise of the emerging Internet business model centred around the technology of transaction fees. At the same time, there is the threat posed by transaction fees to America's democratic society in the early years of the new millennium. Yes, once upon a time, not very long ago, advertisements were easy to recognise and also knew their place in the landscape of popular culture. Their greatest, yet silent, evolution (especially in the age of the Internet) has really been in their spread into all areas of culture rather than in methods of trickery and deceit. Now, it is more difficult to slam that front door in the face of that old door-to-door salesman. Or toss that magazine and its ad aside, or switch off commercials on television. We have become that door-to-door salesman, that magazine ad, that television commercial. The current cultural landscape takes on some of the characteristics of the theme of that old science fiction movie The Invasion of the Body Snatchers. A current advertising campaign from RJ Reynolds has a humorous take on the current zeitgeist fad of alien abduction with copy reading "if aliens are smart enough to travel through space then why do they keep abducting the dumbest people on earth?" One might add that when Americans allow advertising to travel through all our space, perhaps we all become the dumbest people on earth, abducted by a new alien culture so far away from a simplistic nostalgia of yesterday. (Please press below for your links to a world of fantastic products which can make a new you.)
References
Brill, Steven. Quoted by Mike France in "Journalism's Online Credibility Gap." Business Week 11 Oct. 1999.
France, Mike. "Journalism's Online Credibility Gap." Business Week 11 Oct. 1999. <http://www.businessweek.com/1999/99_41/b3650163.htm>.
Packard, Vance. The Hidden Persuaders. Out of Print, 1957.
Pine, Joseph, and James Gilmore. The Experience Economy. Harvard Business School P, 1999.
Shalit, Ruth. "The Return of the Hidden Persuaders." Salon Magazine 27 Sep. 1999. <http://www.salon.com/media/col/shal/1999/09/27/persuaders/index.php>.
Schell, Orville. Quoted by Mike France in "Journalism's Online Credibility Gap." Business Week 11 Oct. 1999.
Wolf, Michael. Entertainment Economy. Times Books, 1999.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Ingham, Valerie. „Decisions on Fire“. M/C Journal 10, Nr. 3 (01.06.2007). http://dx.doi.org/10.5204/mcj.2667.

Der volle Inhalt der Quelle
Annotation:
Introduction
Decision making on the fireground is a complex activity reflected in the cultural image of fire in contemporary Western societies, the expertise of firefighters and the public demand for response to fire. The split-second decisions that must be made by incident commanders on the fireground demonstrate that the dominant models of rational, logical argument and naturalistic decision making are incapable of dealing with this complexity. Twelve senior-ranking Australian fire officers participated in the investigation, from which I propose that fireground incident commanders are relying on aesthetic awareness and somatic responses, similar to those of an artist, and that due to the often ineffable nature of their responses these sources of information are usually unacknowledged. As a result I have developed my own theory of decision making on the fireground, termed ‘Multimodal Decision Making’, which is distinguished from formal rationality and informal sense-based rationality in that it approaches art, science and practice as a complex and irreducible whole.
Fire – Complex Decision Making
The complex reality of a fireground incident is not effectively explained by decision making models based on logic. These models understand decision making in terms of a rational choice between various options (Dowie) and tend to oversimplify decision making. Grouped together they are commonly described as ‘traditional’. Recent research and the development of an alternative understanding, termed naturalistic decision making, has demonstrated that under the pressures of an emergency situation there is just not time enough to weigh up alternatives (Flin, Salas, Strub and Martin; Klein). Naturalistic decision making draws on the cognitive sciences to explain how incident commanders make decisions when they are not using probability theory or rational logic (Montgomery, Lipshitz and Brehmer). Although I appreciate various aspects of the naturalistic models of decision making (Cannon-Bowers and Salas; Flin and Arbuthnot; Zsambok and Klein), the problem for me is that the research has been conducted from a cognitive task analysis perspective where typically each decision has been broken down into its supposed constituent parts, analysed and then reassembled. I understand this process to be counterproductive to appreciating complex and interrelated decision making. I propose an alternative explanation which I call Multimodal Decision Making. Multimodal Decision Making recognises that probability theory or rational logic does not adequately explain how incident commanders balance feelings of contradictory information in parallel, and by the very clash or strangeness of the juxtaposition, see a way forward. This is reasoning by similarity rather than calculation. I suggest that the mechanistic rational processes do not necessarily disappear, but that they are assimilated into a dynamic, as opposed to inflexible and rigid, approach to decision making. The following excerpt from a country Inspector is provided to illustrate the role of aesthetic awareness and somatic perception in fireground decision making.
The Trembling Voice
Early one morning a country Inspector is called out to a factory fire in a town normally one hour’s drive away. It takes him 40 minutes to drive to the fire, and on the way he busies himself receiving two updates from the communications centre and talking by radio to the first arriving officer at the incident. Nothing the first arriving officer said was unusual or alarming.
What was alarming, said the Inspector, was the very slight tremor in the first arriving officer’s voice. It contained a hint of fear.
…so I got the message from the first pump that was on the scene. I could hear in his voice that he was quivering, so I thought ‘I am not too sure if he is comfortable, I’d better get him some help’ so I rang up the communications centre, and I said ‘Listen, I know you have got these two trucks coming from A., you’ve got the rural fire service’, I said ‘you need to send U. up now…I may have waited another 10 or 15 minutes before I said ‘Ok you better get G. there’ – it’s only another 40km maybe, I said ‘get them on the road as well.’
V – This is all while you are in the car?
All while I am in the car driving to the incident, I am building a mental picture of what’s happening, and from hearing his voice, I felt that he was maybe not in control because of the quivering in it.
V – Did you know him well already?
Yeah I knew him sort of well enough… I could just tell, he sounded like he was in trouble…I felt once I arrived, he more or less – I could feel a weight come off his shoulders, ‘You’re here now, I don’t have to deal with this anymore, its all yours.’
The Inspector deduced the incident was possibly more serious than the communications centre had so far anticipated. He organised backup appliances, and these decisions, maintained the Inspector, were prompted by the “quivering” in the officer’s voice. On arrival he saw immediately that his call for backup was indeed necessary, because the fire was moving out of control with the possibility of spreading. Although the Inspector in this incident was not physically present, he relied on his aesthetic awareness and somatic perception to inform his decision making. He would have been justified if he acted only on the basis of incoming communications, which were presented in scientifically measurable terms: “factory well alight, two appliances in attendance…” and so on; nothing out of the ordinary, a straightforward incident. In fact, what he responded to was not the information he received as a verbal message, but rather the slight tremor in the first arriving officer’s voice. That is, the Inspector’s aesthetic awareness and somatic perception informed his decision to call for backup, overriding the word-information contained in the verbal report.
Fire – Complex Cultural Image
Fire is a complex object in itself and in a threatening context, such as the engulfment of an inhabited building, creates a complex environment which in turn, for me as a researcher, requires a complex method of inquiry. As a result I have been obliged to draw on theories of art and art criticism as part of my own method of enquiry and I have adopted Eisner and Powell’s application of aesthetics: It may be that somatic forms of knowledge – the use of the physical body as a source of information – play an important role in enabling scientists to make judgements about alternative courses of action or directions to pursue. It might be that qualitative cues are difficult to articulate, indeed clues that may themselves be ineffable, are critical for doing productive scientific work. (134) That is, sometimes the physical body is used as a source of information, and sometimes it is difficult to express in words how this happens. The following incident illustrates the importance of somatic awareness in decision making from an Inspector’s perspective.
A Smell of Petrol
In this incident a country Inspector was called to a row of factory units.
The smell of petrol had been coming and going over a period of 18 months, but now in the toilet of one shop it had become unbearable. The Inspector set his crew to work with a device that detects levels of petrol in the air, which he called a ‘sniffer’. When the ‘sniffer’ did not register a high value for petrol, the Inspector considered the machine to be faulty and trusted his own sense of smell, and that of his crew, over the ‘sniffer’. Decisions in this incident were informed by somatic response to the situation. In the Smell of Petrol, the Inspector considered his nose a more reliable source of information than a mechanised ‘sniffer’.
Burning Ears
Continuing the theme of mechanisation and technology, personal protective equipment, one Inspector informed me, has become so effective that firefighters are able to move much deeper into a fire than ever before. The new technology comes with a price. Previously firefighters perceived the sensation of their ears burning to be a warning sign. This somatic response has now been effectively curtailed. Technology in the form of increasing personal protective equipment, complex communication systems and sophisticated firefighting equipment is usually understood as increasing the opportunity to prevent and control an incident. Perhaps an alternative perspective could be that increasingly sophisticated technology is replacing somatic response, with dangerous implications. Somatic awareness is developed within a cultural context. On the fireground, I understand the cultural context to be the image, as a fire is a moving, alive image demanding an immediate response. An arsonist may look for a fire to spiral out of control, enjoying the spectacle of an entire building being engulfed and spreading to the next office block. What is it that firefighters are looking for? What do they see? What directs their attention? Firefighters invariably see what they have been trained to see – smoke escaping from under the eaves, melting rubber between clip-lock walls, cracks in structural concrete, the colour and density of the smoke and so on. Their perception of signs, indicating their appreciation of the situation, and the way they attend to these signs – looking for them, gauging and measuring their progress and acting in response – are intensified by time pressure and by the imperative, and the means, to do something. This is in sharp contrast to an arsonist or even the general public watching the fire’s progress on the TV news. The ability to comprehend and act on the visual is called aesthetics in the discipline of art criticism. I use the words ‘aesthetic awareness’ to mean the way an activity of perception is organised and informed by unspoken, but shared, principles for recognising fire features and characteristics; being able to share these principles helps with the building of an identity of expertise. In firefighting, as in other emergency service work, an aesthetic appreciation of the scene is technically termed situational awareness (Banbury and Tremblay; Craig; Endsley and Garland) and sometimes colloquially known as a size-up. This is when incident commanders appraise the fireground and, on the basis of their judgement, make decisions involving, for example, the placement of personnel and resources, calling for backup and so on.
It is at this stage that the expertise of the incident commander is foregrounded, and I suggest that a linear approach to decision making does not fully explain the complexities involved when a small input or adjustment can lead to very dramatic consequences. In fact, a small input leading to dramatic consequences is likely to indicate a non-linear system (Lewin). In a non-linear dynamic system, such as a fireground, some things may appear random, but they are governed by known equations. Pink heralds a visual and non-linear approach, “perhaps some of the problems we face when we write linear texts with words as our only tool can be resolved by thinking of anthropology and its representations as not solely verbal, but also visual and not simply linear but multilinear” (Pink 10). With linear thinking there is a beginning and an end, which leads naturally to the supposition of cause and effect. This is because there is no looping back into the whole; it is as if there are many beginnings, leading to a fragmented sort of perception. Language shapes the way we perceive issues by virtue of the words we have to create our impressions with. Unfortunately, English and Western languages in general are not equipped for a multimodal communication. We are, by the structure of our language, almost squeezed into the position of talking linearly in terms of cause and effect for understanding what is happening.
Fire – Complex Experience
Creative decision making occurs when the person has a deep knowledge of the discipline. Great flashes of insight rarely come to the inexperienced mind. People who don’t understand rhythm, melody and harmony will not be able to compose complex pieces of music. Creative and innovative decision making on the fireground will not be possible without prior experience regarding how various materials react on combustion, the structure of the organisational hierarchy, crew configurations and the nature of the fire being fought. There is beginner’s luck of course, but this will not be a consistent approach to an otherwise fearful and dangerous situation, because knowing what to expect means feeling less danger and less fear, freeing up more energy to respond creatively. For example, consider a junior firefighter trembling in fear prior to their first incident, compared with an experienced firefighter who feels anticipation and exhilaration. We live in a world of specialisation and expert opinion, even if there is a certain cynicism creeping in over what makes someone an expert. Taylorism has ultimately produced people with high technical skills in one area and a lack of ability to see the whole picture (Konzelmann, Forrant and Wilkinson). As a counterbalance there is a current push towards multi-skilling and flattened hierarchies. For firefighting organisations this creates an interesting challenge. On the one hand there is a concentration on highly technical skill development which involves acknowledging the importance of team work; on the other, the demands of a time critical situation in which the imperative is to act quickly and decisively for the best possible outcome. Ultimate decision making responsibility lies with the incident commander, who must be able to negotiate the complexity of the scene in its entirety, balancing competing demands rather than focusing solely on one aspect. The ease with which incident commanders move through the decision making process, perceiving the situation, looking at the fire and sizing it up, is not reliant on eyesight alone.
It involves their ability to adjust, reframe, and move through the incident without losing their bearings, no matter how or where they are physically situated in relation to the fire. Seeing does not involve only eyesight; it sums up the experience of becoming so familiar and integrated with the aspects of fire behaviour that expert incident commanders do not lose their bearings in the process of changing their physical location. Often they rely on incoming intelligence to develop a three-dimensional perspective of the fireground. They have a multimodal perspective, a holistic vista, because their sensory relationship with the fire is so thorough and extensive. Just looking at the fire, for the incident commander, is not just looking at the fire; it is an aesthetic experience in which there is a shared standard for recognising what is happening, if not what should be done to mitigate it. Participating in the knowledge of these standards, these ways of seeing, is recognised as part of the identity of the group member. Nelson (97), who specialised in visually reading the man-made environment, wrote “we see what we are looking for, what we have been trained to see by habit or tradition.” Firefighters are known and respected within their cultural context by their depth of understanding of these shared standards. These shared standards may or may not be a reflection of the ideal or organisationally endorsed standard operating guidelines. I suggest that a heightened situational awareness and consequent decision making may be a visible indication of contribution and inclusion within the cultural practices of firefighting. Thus seeing involves not only eyesight, but also being a part of a cultural context; for example, interpreting individualised body movements and gestures. Standard operating guidelines place rules and constraints on incident commanders. These guidelines provide a hierarchy of needs, and prescribe recommended approaches for various fireground contingencies. This does not mean that incident commanders are not creative. “Play and art without rules is uninteresting. Absolute liberty is boring” (Karlqvist 111). Within the context of the fireground, creative experience is deliberate as opposed to random. The creation of innovative approaches does not happen in a vacuum; rather it is the result of playing with the rules, stretching them, moving and testing them. It is essential to maintain common operating guidelines, or rules, because they form a stock body of common knowledge, but it is also essential to break the rules and play around with them. Karlqvist (112) writes “mastery reveals itself as breaking rules. The secret of creativity hinges on this insight, to know the right moment when you can go too far”. There are experts who are trained to be mechanical, and there are experts, such as the incident commanders I interviewed, who integrate and sometimes override the mechanical list of rules. Multimodal Decision Making is not primarily about an objective representation of the ‘truth’, but rather the unpredictable and complex conditions which incident commanders must negotiate.
Conclusion
When dealing with a complex and dynamic system, cause and effect are not sufficient explanation for what is happening. Instead of linear progression we are looking at a feedback or circular system, in which a small act may produce a larger reaction. Decision making on the fireground is a complex and difficult activity.
Its complexity stems from uncertain variables which include the immediate threat to life and property, the safety of the crew, trapped victims, the observing public, the perceptions reported by the media and the statutory obligations that motivate firefighters to their tasks, all of which are intricately interwoven. This melting pot of variable contingencies creates a complex working environment which I suggest is negotiated by a little acknowledged ability to integrate somatic and aesthetic awareness into decision making in time critical situations.
References
Banbury, Simon, and Sebastian Tremblay, eds. A Cognitive Approach to Situational Awareness: Theory and Application. Hampshire, England: Ashgate, 2004.
Cannon-Bowers, Janis, and Eduardo Salas. Making Decisions under Stress. Washington: American Psychological Association, 1998.
Craig, Peter. Situational Awareness: Controlling Pilot Error. New York: McGraw-Hill, 2001.
Dowie, Jack. “Clinical Decision Analysis: Background and Introduction.” In Analysing How We Reach Clinical Decisions, eds. H. Llewellyn and A. Hopkins. London: Royal College of Physicians, 1993.
Eisner, Elliot, and Kimberly Powell. “Art in Science?” Curriculum Inquiry 32.2 (2002): 131-159.
Endsley, Mica, and Daniel Garland, eds. Situational Awareness Analysis and Measurement. New Jersey: Lawrence Erlbaum Associates, 2000.
Flin, Rhona, and Kevin Arbuthnot. Incident Command: Tales from the Hot Seat. England: Ashgate, 2002.
Flin, Rhona, Eduardo Salas, M. Strub, and L. Martin, eds. Decision Making under Stress. England: Ashgate, 1997.
Karlqvist, Ake. “Creativity: Some Historical Footnotes from Art and Science.” In The Complexity of Creativity, eds. Ake Andersson and Nils-Eric Sahlin. Dordrecht: Kluwer, 1997.
Klein, Gary. Sources of Power. Massachusetts: Massachusetts Institute of Technology, 1998.
Konzelmann, Suzanne, Robert Forrant, and Frank Wilkinson. “Work Systems, Corporate Strategies and Global Markets: Creative Shop Floors or ‘a Barge Mentality’?” Industrial Relations Journal 35.3 (2004).
Lewin, Roger. Complexity: Life at the Edge of Chaos. 2nd ed. Chicago: U of Chicago P, 1999.
Montgomery, Henry, Raanan Lipshitz, and Berndt Brehmer, eds. How Professionals Make Decisions. New Jersey: Lawrence Erlbaum, 2005.
Nelson, George. How to See: A Guide to Reading Our Manmade Environment. Boston: Little, Brown and Company, 1977.
Pink, Sarah. “Introduction: Situating Visual Research.” In Working Images, eds. Sarah Pink, Laszlo Kurti, and Ana Isabel Afonso. New York: Routledge, 2004.
Zsambok, Caroline, and Gary Klein. Naturalistic Decision Making. New Jersey: Lawrence Erlbaum, 1997.
"Decisions on Fire." M/C Journal 10.3 (2007). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0706/06-ingham.php>. APA Style Ingham, V. (Jun. 2007) "Decisions on Fire," M/C Journal, 10(3). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0706/06-ingham.php>.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Charman, Suw, und Michael Holloway. „Copyright in a Collaborative Age“. M/C Journal 9, Nr. 2 (01.05.2006). http://dx.doi.org/10.5204/mcj.2598.

Der volle Inhalt der Quelle
Annotation:
The Internet has connected people and cultures in a way that, just ten years ago, was unimaginable. Because of the net, materials once scarce are now ubiquitous. Indeed, never before in human history have so many people had so much access to such a wide variety of cultural material, yet far from heralding a new cultural nirvana, we are facing a creative lock-down. Over the last hundred years, copyright term has been extended time and again by a creative industry eager to hold on to the exclusive rights to its most lucrative materials. Previously, these rights guaranteed a steady income because the industry controlled supply and, in many cases, manufactured demand. But now culture has moved from being physical artefacts that can be sold or performances that can be experienced to being collections of 1s and 0s that can be easily copied and exchanged. People are revelling in the opportunity to acquire and experience music, movies, TV, books, photos, essays and other materials that they would otherwise have missed out on; and they are picking up the creative ball and running with it, making their own versions, remixes, mash-ups and derivative works. More importantly than that, people are producing and sharing their own cultural resources, publishing their own original photos, movies, music, writing. You name it, somewhere someone is making it, just for the love of it. Whilst the creative industries are using copyright law in every way they can to prosecute, shut down, and scare people away from even legitimate uses of cultural materials, the law itself is becoming increasingly inadequate. It can no longer deal with society’s demands and expectations, nor can it cope with modern forms of collaboration facilitated by technologies that the lawmakers could never have anticipated. Understanding Copyright Copyright is a complex area of law and even a seemingly simple task like determining whether a work is in or out of copyright can be a difficult calculation, as illustrated by flowcharts from Tim Padfield of the National Archives, examining the British system, and from Bromberg & Sunstein LLP, covering American works. Despite the complexity, understanding copyright is essential in our burgeoning knowledge economies. It is becoming increasingly clear that sharing knowledge, skills and expertise is of great importance not just within companies but also within communities and for individuals. There are many tools available today that allow people to work, synchronously or asynchronously, on creative endeavours via the Web, including: ccMixter, a community music site that helps people find material to remix; YouTube, which hosts movies; and JumpCut, which allows people to share and remix their movies. These tools are being developed because of the increasing number of cultural movements toward the appropriation and reuse of culture that are encouraging people to get involved. These movements vary in their constituencies and foci, and include the student movement FreeCulture.org, the Free Software Foundation, and the UK-based Remix Commons. Even big business has acknowledged the importance of cultural exchange and development, with Apple using the tagline ‘Rip. Mix. Burn.’ for its controversial 2001 advertising campaign. But creators—the writers, musicians, film-makers and remixers—frequently lose themselves in the maze of copyright legislation, a maze complicated by the international aspect of modern collaboration. 
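The difficulty those flowcharts capture can be hinted at with a toy calculation. The following is a minimal sketch, not legal advice, assuming only the single simplest UK rule (a published literary work by a known author is protected for 70 years from the end of the calendar year of the author's death); the real flowcharts branch further on work type, publication status, anonymity, and jurisdiction, which is exactly the point.

```python
# A toy illustration of why copyright-term determination invites flowcharts:
# even the simplest UK rule for published literary works needs date arithmetic,
# and real cases branch on many more factors than this function considers.
from datetime import date

def uk_literary_term_expired(author_death_year: int, on: date | None = None) -> bool:
    """True if the simplified life-plus-70 term has run out on the given date."""
    on = on or date.today()
    last_protected_year = author_death_year + 70  # protected to 31 Dec of this year
    return on.year > last_protected_year

# An author who died in 1950 entered the public domain on 1 January 2021.
print(uk_literary_term_expired(1950, on=date(2021, 1, 1)))  # True
```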
Understanding of copyright law is at such a low ebb because current legislation is too complex and, in parts, out of step with modern technology and expectations. Creators have neither the time nor the motivation to learn more—they tend to ignore potential issues and continue labouring under any misapprehensions they have acquired along the way. The authors believe that there is an urgent need for review, modernisation and simplification of intellectual property laws. Indeed, in the UK, intellectual property is currently being examined by a Treasury-level review led by Andrew Gowers. The Gowers Review is, at the time of writing, accepting submissions from interested parties and is due to report in the Autumn of 2006. Internationally, however, the situation is likely to remain difficult, so creators must grasp the nettle, educate themselves about copyright, and ensure that they understand the legal ramifications of collaboration, publication and reuse. What Is Collaboration? Wikipedia, a free online encyclopaedia created and maintained by unpaid volunteers, defines collaboration as “all processes wherein people work together—applying both to the work of individuals as well as larger collectives and societies” (Wikipedia, “Collaboration”). These varied practices are some of our most common and basic tendencies and apply in almost every sphere of human behaviour; working together with others might be described as an instinctive, pragmatic or social urge. We know we are collaborating when we work in teams with colleagues or brainstorm an idea with a friend, but there are many less familiar examples of collaboration, such as taking part in a Mexican wave or standing in a queue. In creative works, the law expects collaborators to obtain permission to reuse work created by others before they embark upon that reuse. Yet this distinction between ‘my’ work and ‘your’ work is entirely a legal and social construct, as opposed to an absolute fact of human nature, and new technologies are blurring the boundaries between what is ‘mine’ and what is ‘yours’ whilst new cultural movements posit a third position, ‘ours’. Yochai Benkler coined the term ‘commons-based peer production’ (Benkler, Coase’s Penguin; The Wealth of Networks) to describe collaborative efforts, such as free and open-source software or projects such as Wikipedia itself, which are based on sharing information. Benkler posits this particular example of collaboration as an alternative model for economic development, in contrast to the ‘firm’ and the ‘market’. Benkler’s notion sits uncomfortably with the individualistic precepts of originality which dominate IP policy, but with examples of commons-based peer production on the increase, it cannot be ignored when considering how new technologies and ways of working interact with existing and future copyright legislation. The Development of Collaboration When we think of collaboration we frequently imagine academics working together on a research paper, or musicians jamming together to write a new song. In academia, researchers working on a project are expected to write papers for publication in journals on a regular basis. The motto ‘publish or die’ is well known to anyone who has worked in academic circles—publishing papers is the lifeblood of the academic career, forming the basis of a researcher’s status within the academic community and providing data and theses for other researchers to test and build upon. 
In these circumstances, copyright is often assigned by the authors to a journal and, because there is no direct commercial outcome for the authors, conflicts regarding copyright tend to be restricted to issues such as reuse and reproduction. Within the creative industries, however, the focus of the collaboration is to derive commercial benefit from the work, so copyright issues, such as division of fees and royalties, plagiarism, and rights for reuse, carry direct financial stakes and hence are much more vigorously pursued. All of these issues are commonly discussed, documented and well understood. Less well understood is the interaction between copyright and the types of collaboration that the Internet has facilitated over the last decade. Copyright and Wikis Ten years ago, Ward Cunningham invented the ‘wiki’—a Web page which could be edited in situ by anyone with a browser. A wiki allows multiple users to read and edit the same page and, in many cases, those users are either anonymous or identified only by a nickname. The most famous example of a wiki is Wikipedia, which was started by Jimmy Wales in 2001 and now has over a million articles and over 1.2 million registered users (Wikipedia, “Wikipedia Statistics”). The culture of online wiki collaboration is a gestalt—the whole is greater than the sum of the parts and the collaborators see the overall success of the project as more important than their contribution to it. The majority of wiki software records every single edit to every page, creating a perfect audit trail of who changed which page and when. Because copyright is granted for the expression of an idea, in theory, this comprehensive edit history would allow users to assert copyright over their contributions, but in practice it is not possible to delineate clearly between different people’s contributions and, even if it were possible, it would simply create a thicket of rights which could never be untangled. In most cases, wiki users do not wish to assert copyright and are not interested in financial gain, but when wikis are set up to provide a source of information for reuse, copyright licensing becomes an issue. In the UK, it is not possible to dedicate a piece of work to the public domain, nor can you waive your copyright in a work. When a copyright holder wishes to license their work, they can only assign that licence to another person or a legal entity such as a company. This is because in the UK, the public domain is formed of the ‘leftovers’ of intellectual property—works for which copyright has expired or those aspects of creative works which do not qualify for protection. It cannot be formally added to, although it certainly can be reduced by, for example, extension of copyright term which removes work from the public domain by re-copyrighting previously unprotected material. So the question becomes, to whom does the content of a wiki belong? At this point traditional copyright doctrines are of little use. The concept of individuals owning their original contribution falls down when contributions become so entangled that it’s impossible to split one person’s work from another. In a corporate context, individuals have often signed an employment contract in which they assign copyright in all their work to their employer, so all material created individually or through collaboration is owned by the company. 
But in the public sphere, there is no employer, there is no single entity to own the copyright (the group of contributors not being in itself a legal entity), and therefore no single entity to give permission to those who wish to reuse the content. One possible answer would be if all contributors assigned their copyright to an individual, such as the owner of the wiki, who could then grant permission for reuse. But online communities are fluid, with people joining and leaving as the mood takes them, and concepts of ownership are not as straightforward as in the offline world. Instead, authors who wished to achieve the equivalent of assigning rights to the public domain would have to publish a free licence to ‘the world’ granting permission to do any act otherwise restricted by copyright in the work. Drafting such a licence so that it is legally binding is, however, beyond the skills of most and could be done effectively only by an expert in copyright. The majority of creative people, however, do not have the budget to hire a copyright lawyer, and pro bono resources are few and far between. Copyright and Blogs Blogs are a clearer-cut case. Blog posts are usually written by one person, even if the blog that they are contributing to has multiple authors. Copyright therefore resides clearly with the author. Even if the blog has a copyright notice at the bottom—© A.N. Other Entity—unless there has been an explicit or implied agreement to transfer rights from the writer to the blog owner, copyright resides with the originator. Simply putting a copyright notice on a blog does not constitute such an agreement. Equally, copyright in blog comments resides with the commenter, not the site owner. This mirrors the position with personal letters—the copyright in a letter resides with the letter writer, not the recipient, and owning letters does not constitute a right to publish them. Obviously, by clicking the ‘submit’ button, commenters have themselves decided to publish, but it should be remembered that that action does not transfer copyright to the blog owner without specific agreement from the commenter. Copyright and Musical Collaboration Musical collaboration is generally accepted by legal systems, at least in terms of recording (duets, groups and orchestras) and writing (partnerships). The practice of sampling—taking a snippet of a recording for use in a new work—has, however, changed the nature of collaboration, shaking up the recording industry and causing a legal furore. Musicians have been borrowing directly from each other since time immemorial and the student of classical music can point to many examples of composers ‘quoting’ each other’s melodies in their own work. Folk musicians too have been borrowing words and music from each other for centuries. But sampling in its modern form goes back to the musique concrète movement of the 1940s, when musicians used portions of other recordings in their own new compositions. The practice developed through the 50s and 60s, with The Beatles’ “Revolution 9” (from The White Album) drawing heavily from samples of orchestral and other recordings along with speech incorporated live from a radio playing in the studio at the time. Contemporary examples of sampling are too common to pick highlights, but Paul D. Miller, a.k.a. DJ Spooky ‘that Subliminal Kid’, has written an analysis of what he calls ‘Rhythm Science’ which examines the phenomenon. To begin with, sampling was ignored as it was rare and commercially insignificant. 
But once rap artists started to make significant amounts of money using samples, legal action was taken by originators claiming copyright infringement. Notable cases of illegal sampling were “Pump Up the Volume” by M/A/R/R/S in 1987 and Vanilla Ice’s use of Queen/David Bowie’s “Under Pressure” in the early 90s. Where once artists would use a sample and sort out the legal mess afterwards, such high-profile litigation has forced artists to secure permission for (or ‘clear’) their samples before use, and record companies will now refuse to release any song with uncleared samples. As software and technology progress further, so sampling progresses along with them. Indeed, sampling has now spawned mash-ups, where two or more songs are combined to create a musical hybrid. Instead of using just a portion of a song in a new composition which may be predominantly original, mash-ups often use no original material and rely instead upon mixing together tracks creatively, often juxtaposing musical styles or lyrics in a humorous manner. One of the most illuminating examples of a mash-up is DJ Food’s Raiding the 20th Century, which itself gives a history of sampling and mash-ups using samples from over 160 sources, including other mash-ups. Mash-ups are almost always illegal, and this illegality drives mash-up artists underground. Yet, despite the fact that good mash-ups can spread like wildfire on the Internet, bringing new interest to old and jaded tracks and, potentially, new income to artists whose work had been forgotten, this form of musical expression is aggressively demonised by the industry. Given the opportunity, the industry will instead prosecute for infringement. But clearing rights is a complex and expensive procedure well beyond the reach of the average mash-up artist. First, you must identify the owner of the sound recording, a task easier said than done. The name of the rights holder may not be included in the original recording’s packaging, and as rights regularly change hands when an artist’s contract expires or when a record label is sold, any indication as to the rights holder’s identity may be out of date. Online musical databases such as AllMusic can be of some use, but in the case of older or obscure recordings, it may not be possible to locate the rights holder at all. Works where there is no identifiable rights holder are called ‘orphaned works’, and the longer the term of copyright, the more works are orphaned. Once you know who the rights holder is, you can negotiate terms for your proposed usage. Standard fees are extremely high, especially in the US, and typically discourage use. This convoluted legal culture is an anachronism in desperate need of reform: sampling has produced some of the most culturally interesting and financially valuable recordings of the past thirty years, and so should be supported rather than marginalised. Unless the legal culture develops an acceptance for these practices, the associated financial and cultural benefits for society will not be realised. The irony is that there is already a successful model for simplifying licensing. If a musician wishes to record a cover version of a song, then royalty terms are set by law and there is no need to seek permission. In this case, the lawmakers have recognised the social and cultural benefit of cover versions and created a workable solution to the permissions problem. There is no logical reason why a similar system could not be put in place for sampling. 
Alternatives to Traditional Copyright Copyright, in its default structure, is a disabling force. It says that you may not do anything with my work without my permission and forces creators wishing to make a derivative work to contact me in order to obtain that permission in writing. This ‘permissions society’ has become the norm, but it is clear that it is not beneficial to society to hide away so much of our culture behind copyright, far beyond the reach of the individual creator. Fortunately there are fast-growing alternatives which simplify whilst encouraging creativity. Creative Commons is a global movement started by academic lawyers in the US who thought to write a set of more flexible copyright licences for creative works. These licences enable creators to precisely tailor restrictions imposed on subsequent users of their work, prompting the tag-line ‘some rights reserved’. Creators decide if they will allow redistribution, commercial or non-commercial re-use, or require attribution, and can combine these permissions in whichever way they see fit. They may also choose to authorise others to sample their works. Built upon the foundation of copyright law, Creative Commons licences now apply to some 53 million works world-wide (Doctorow), and operate in over 60 jurisdictions. Their success is testament to the fact that collaboration and sharing is a fundamental part of human nature, and treating cultural output as property to be locked away goes against the grain for many people. Creative Commons is now also helping scientists to share not just the results of their research, but also data and samples so that others can easily replicate experiments and verify or refute results. They have thus created Science Commons in an attempt to free up data and resources from unnecessary private control. Scientists have been sharing their work via personal Web pages and other Websites for many years, and additional tools which allow them to benefit from network effects are to be welcomed. Another example of functioning alternative practices is the Remix Commons, a grassroots network spreading across the UK that facilitates artistic collaboration. Their Website is a forum for exchange of cultural materials, providing a space for creators to both locate and present work for possible remixing. Any artistic practice which can reasonably be rendered online is welcomed in their broad church. The network’s rapid expansion is in part attributable to its developers’ understanding of the need for tangible, practicable examples of a social movement, as embodied by their ‘free culture’ workshops. Collaboration, Copyright and the Future There has never been a better time to collaborate. The Internet is providing us with ways to work together that were unimaginable even just a decade ago, and high broadband penetration means that exchanging large amounts of data is not only feasible, but also getting easier and easier. It is possible now to work with other artists, writers and scientists around the world without ever physically meeting. To think that the Internet may one day contain the sum of human knowledge is to underestimate its potential. The Internet is not just a repository; it is a mechanism for new discoveries, for expanding our knowledge, and for making links between people that would previously have been impossible. Copyright law has, in general, failed to keep up with the amazing progress shown by technology and human ingenuity. 
It is time that the lawmakers learnt how to collaborate with the collaborators in order to bring copyright up to date. References Apple. “Rip. Mix. Burn.” Advertisement. 28 Apr. 2006 <http://www.theapplecollection.com/Collection/AppleMovies/mov/concert_144a.html>. Benkler, Yochai. Coase’s Penguin. Yale Law School, 1 Dec. 2002. 14 Apr. 2006 <http://www.benkler.org/CoasesPenguin.html>. ———. The Wealth of Networks. New Haven: Yale UP, 2006. Bromberg & Sunstein LLP. Flowchart for Determining when US Copyrights in Fixed Works Expire. 14 Apr. 2006 <http://www.bromsun.com/practices/copyright-portfolio-development/flowchart.htm>. DJ Food. Raiding the 20th Century. 14 Apr. 2006 <http://www.ubu.com/sound/dj_food.html>. Doctorow, Cory. “Yahoo Finds 53 Million Creative Commons Licensed Works Online.” BoingBoing 5 Oct. 2005. 14 Apr. 2006 <http://www.boingboing.net/2005/10/05/yahoo_finds_53_milli.html>. Miller, Paul D. Rhythm Science. Cambridge, Mass.: MIT Press, 2004. Padfield, Tim. “Duration of Copyright.” The National Archives. 14 Apr. 2006 <http://www.kingston.ac.uk/library/copyright/documents/DurationofCopyrightFlowchartbyTimPadfieldofTheNationalArchives_002.pdf>. Wikipedia. “Collaboration.” 14 Apr. 2006 <http://en.wikipedia.org/wiki/Collaboration>. ———. “Wikipedia Statistics.” 14 Apr. 2006 <http://en.wikipedia.org/wiki/Special:Statistics>. Citation reference for this article MLA Style Charman, Suw, and Michael Holloway. "Copyright in a Collaborative Age." M/C Journal 9.2 (2006). <http://journal.media-culture.org.au/0605/02-charmanholloway.php>. APA Style Charman, S., and M. Holloway. (May 2006) "Copyright in a Collaborative Age," M/C Journal, 9(2). Retrieved from <http://journal.media-culture.org.au/0605/02-charmanholloway.php>.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Malmstedt, Johan. „Formatted Sound“. M/C Journal 27, Nr. 2 (16.04.2024). http://dx.doi.org/10.5204/mcj.3028.

Der volle Inhalt der Quelle
Annotation:
Locating the Format What is format radio? At a glance, the answer might seem simple. In common parlance, the concept is often presented as a pejorative counterpart to non-commercial broadcasting. However, to render the concept synonymous with commercial broadcasting neglects its historical specificity. Previous research has demonstrated the nuanced factors at stake in the selection of music at specific broadcasting stations (Ahlkvist and Fisher 301-325). Beyond economic structures and conditions, however, remains the matter of whether we can posit an aesthetic expression that epitomises format radio. Previous research has focussed predominantly on semantic source materials and theoretical propositions to home in on the question. However, my wager is that the signal content itself can help us reveal something about the nature of formats. To pursue such a task, the Swedish case offers a promising possibility. Swedish media archives provide the opportunity to study how, and if, formatting tendencies can be detected beyond the realm of commercial broadcasting. Unlike many of its global counterparts, Sweden maintained a public service radio monopoly until the late twentieth century. Throughout the 1980s, experiments were conducted with regional radio, and by 1993 full commercial licensing was permitted. The result is a rare situation where the entire flow of daily Public Service Broadcasting (PSB) can be studied from the 1980s onwards, which in turn allows for large-scale analysis of the relationship between organisation structure and content stylistics before and after the introduction of commercial broadcasting. Broadcasting in Sweden was during this time maintained by the company Sveriges Radio (SR), and had been in more or less the same form since the early 1920s. The organisation was not directly owned by the state, but financed through the so-called broadcasting licence, which in turn depended upon goals set by the government. The overall ambition was similar to the usual PSB objectives: the guiding documents for broadcasting would thus be inclined to emphasise values like objectivity and diversity – yet little is said about aesthetics (Banerjee and Seneviratne 10). This arrangement was the setting for regular conflicts throughout Swedish radio history, in which the demands of the audience squared off against the ideals of broadcasters. Yet in the last decade of the century, the media environment was about to change in an unprecedented way. During these years, conceptual and economic tension between commercial and public service radio reached new heights, in turn forcing the matter of formats onto the agenda. Research by Stjernstedt and Forsman, while not exclusively focussed on radio formats, addresses the theme within the broader framework of commercial Swedish radio. Their findings, along with those of Hedman and Jauert, suggest the influence of commercial formats on Swedish radio prior to the formal introduction of commercial broadcasting. This allows for an interesting epistemic possibility: if their propositions are correct, we can study the character of radio formats without them being intermixed with the scope of commercial broadcasting. The following analysis thus attempts to track the actual changes in the content, which could reveal the influence of format radio on PSB. 
For critics like David Hendy and Wolfgang Hagen, the format comes down to a question of self-similarity: a “Programmierung von Selbstverständlichkeit”, as Hagen critically dubs it (333), which supposedly induces a certain superficial uniqueness of music stations despite a fundamental sameness in their content. For this reason, the analysis is focussed on repeated similarities in the musical content. Methodological Approach Given the significant role ascribed to music in the theorisation of radio formatting, this aspect appears to be a potentially apt focus for constructing an experimental analysis. This is also practical as it allows for the translation and application of established spectral analysis methods from the realm of music analysis. In the context of audio data, spectral analysis refers to a process where the frequency components of a sound signal are decomposed and examined as numerical values. This method is crucial in audio engineering to understand the underlying structure and composition of sound and can assist in identifying specific patterns or anomalies in audio data. The analysis uses a sampling approach, concentrating on full-day broadcasts from music channels P2 and P3 in order to capture the general channel characteristics. These were the two formal music channels of SR at the time, with P3, the popular music channel, being directed at a younger audience, and P2 offering a more diverse mixture of classical and world music. The data, selected randomly from five weekdays across each of the years 1988, 1991, 1994, and 1999, are examined for indicators of self-similarity and format structuring. The hypothesis guiding this study suggests that format radio, shaped by technical standards and theoretical principles, exhibits a degree of radiophonic self-similarity. This proposition is explored quantitatively, applying statistical methods to the spectral data to assess similarities. The analysis itself is executed by combining a set of Python libraries. Initially, a segmentation model from The National Audiovisual Institute of France is used to isolate the musical content from the broadcasts. Then, the audio analysis library librosa converts these segments into spectral data, providing insights into timbre, key, and tempo. The results are presented using Principal Component Analysis (PCA) visualisations. These visualisations employ dimension reduction to represent the relationships between audio segments in a Euclidean space, with the proximity of nodes indicating similarity between data points. Even though the PCA method has limitations, well-discussed in the methodological literature, it grants a certain insight into the structure of the dataset (Jolliffe and Cadima). In essence, the method uses two-dimensional space to plot the relationship between n-dimensional data objects. The implication is that the distance between each entry entails something about the general similarity between the features of these data points. Critical scholarship, like that of Johanna Drucker, has brought attention to the epistemological tendencies inherent in these standardised statistical methods. The PCA method was also at the centre of discussions concerning the general credibility of computational literary studies in 2019, and I here follow Andrew Piper’s conclusion that the limitations are less a reason to abandon ship and more an encouragement to remain “skeptical” (11). Thus, it is a call for complementary and comparative approaches. 
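As a rough illustration of the pipeline just described, the following Python sketch combines the kinds of tools the study names: an INA segmentation model, librosa, and a PCA. Here inaSpeechSegmenter is assumed to be the INA model in question, and the file name, sampling rate, and feature choices are illustrative guesses rather than the author's actual code.

```python
# A minimal sketch of the described pipeline: segment a broadcast day, extract
# tempo / key / timbre per musical segment, then project to two dimensions.
import numpy as np
import librosa
from inaSpeechSegmenter import Segmenter
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def music_segments(path):
    """(start, stop) pairs for the stretches the segmenter labels as music."""
    return [(start, stop) for label, start, stop in Segmenter()(path)
            if label == "music"]

def segment_features(path, start, stop):
    """Tempo, a crude key estimate, and a single summary timbre value."""
    y, sr = librosa.load(path, sr=22050, offset=start, duration=stop - start)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    key = int(librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1).argmax())  # 0..11
    timbre = float(np.median(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)))
    return [float(tempo), key, timbre]

# One feature row per musical segment of a broadcast day, then a 2-D projection
# whose distances reflect overall feature similarity, as in the PCA figures.
day = "p3_broadcast_day.wav"  # placeholder file name
features = np.array([segment_features(day, s, e) for s, e in music_segments(day)])
coords = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(features))
```

The standardisation step before the PCA is one design choice among several; without it, the raw tempo values would dominate the projection simply because of their larger numerical range.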
In order to do this, my analysis also employs audio recognition tools to compare the feature elements of the data set. By comparing both these different levels of analysis, across yearly samples from the decade, the study traces the evolution of musical styles and formats, potentially relating them to wider technological, cultural, or sociopolitical shifts. Results Fig. 1: The distribution of observations from the years 1988, 1991, 1994, and 1999, arranged from the top right to the bottom left. For each year, around 1,000 songs from each channel were studied. Data from 1988 indicate that the popular music channel P3 had a somewhat broader distribution spectrum in its content than its more eclectic counterpart. Initially, this may seem counterintuitive, as one might expect the expanding range of classical and world music of P2 to exhibit greater sonic variety than a channel playing mainstream and popular music. A possible explanation for these results can be found in prior research. Musicologist Alf Björnberg has provided a detailed account of the musical content of Sveriges Radio. His narrative emphasises that P3 underwent a significant content shift during the 1980s. Throughout this decade, the channel began to accommodate artists and genres that had been marginalised in the expanding landscape of broadcast media for decades (320). While P2 was not without variation during this era, P3 experienced a more notable change, moving towards avant-garde rock and experimental electronic music. This shift reflects a broader trend in the radio landscape, where traditional boundaries of genre and style were increasingly blurred, allowing for more diverse and experimental sounds to emerge in mainstream channels. The visualisation of these data not only highlights these historical shifts but also provides a quantitative basis for understanding the evolution of musical trends and preferences within the radio broadcasting domain. Björnberg’s study, predominantly centred on playlists and textual documents, mainly focusses on the period up to the end of the 1980s. Briefly summarising the 1990s, Björnberg concludes that 'the effects of commercial competition were most noticeable for P3' (326), especially during the early years of this competition. The data reveal a surprising trend in this regard too. At first glance, it might appear that P3 is reducing its musical variety, which could be interpreted as a response to the new, more rigidly formatted radio landscape. However, this perception is an illusion created by the rescaling of graphs to accommodate P2's far more drastic content expansion. When calculating the average distance between points in P3's data, it becomes evident that the variation remains within the same range, with a fluctuation of only 0.4. This figure contrasts with P2's distance, which increases from 34.2 to 46.7 between 1988 and 1994. Instead of P3 reducing its musical variety, it is P2 that broadens its musical offerings. This can be seen as a response to the identified issue. Sveriges Radio addressed the new competitive situation with two seemingly contradictory initiatives: seeking a unique sonic identity while emphasising "diversity with a quality signifier" (Björnberg). The developments in P2's data can be viewed as a concrete expression of this ambition to counter the entry of format radio with increased variation. 
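The "average distance between points" invoked here can be read as the mean pairwise Euclidean distance within a year's feature matrix. A minimal sketch of that measure follows; `features_by_year` is a placeholder filled with random data, so the printed numbers demonstrate only the mechanics, not the study's results.

```python
# A sketch of the spread measure discussed above: the mean pairwise Euclidean
# distance over a year's feature vectors. Random placeholder data stands in
# for the study's per-year (n_songs, n_features) matrices.
import numpy as np
from scipy.spatial.distance import pdist

def mean_pairwise_distance(features: np.ndarray) -> float:
    """Average Euclidean distance over all unordered pairs of rows."""
    return float(pdist(features, metric="euclidean").mean())

rng = np.random.default_rng(0)
features_by_year = {year: rng.normal(size=(1000, 3)) for year in (1988, 1991, 1994, 1999)}
for year, feats in sorted(features_by_year.items()):
    print(year, round(mean_pairwise_distance(feats), 1))
```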
These findings underscore the complexities and adaptive strategies in the broadcasting landscape, demonstrating how public broadcasters like Sveriges Radio navigated the challenges posed by commercial competition. By expanding and diversifying their musical content, channels like P2 showed a commitment to maintaining their relevance and appeal in a rapidly changing media environment. This study not only sheds light on the historical trajectory of Swedish radio but also offers broader insights into the dynamics of cultural adaptation and change within the media industry. However, it is crucial to understand what the analysis actually signifies – it is not an absolute statement about the variation in music, but rather an analysis based on a number of specific measures. The assessment of the musical content, as mentioned earlier, is based on the combined factors of tempo, key, and a simplified measure of timbre. These metrics are analytically recognised methods for categorising musical content and have been used in previous research to address genre variation (Bogdanov et al.). However, this does not necessarily mean that they provide a comprehensive understanding of how the music actually sounded. In the graphs above, the timbre data is represented by a median value. To more accurately capture the variation in the sound profile, it might be more appropriate to analyse a broader spectrum of frequency values. In the following graphs, values are compared over time across 13 different frequency bands, based on the first 30 seconds of each song. This refined approach allows for a more nuanced understanding of the sonic domain. By examining a wider range of frequency values, the analysis can potentially reveal subtler shifts in the musical characteristics of the radio channels over time. This method acknowledges that while tempo, key, and timbre are significant, they are only part of a more complex auditory picture. By broadening the scope of analysis to include more detailed frequency data, a richer and more textured picture of the evolution of musical content on these radio stations emerges. This approach offers deeper insights into the intricate ways in which radio programming responds to and reflects broader musical trends and listener preferences. Fig. 2: The distribution of observations from 1988, 1991, 1994, and 1999, from the top right to the bottom left. This approach to measuring audio content results in a less dramatic visualisation, in which neither channel any longer clearly dominates the pace of development; instead, both channels undergo a similar process of differentiation. What we observe here is a form of channel profiling, where both channels progressively establish a more distinct sound profile. According to the definition by Hendy and Hagen, this type of channel characterisation can be considered in terms of formatting. What is revealed is an increase in channel self-similarity. However, this empirical examination of audio material suggests a nuanced understanding of formatting. On the one hand, both channels, particularly P2, expand certain aspects of their music. Simultaneously, the sound profile becomes more distinctly framed for each channel. Understanding this stylistic evolution compels us to reflect on what self-similarity means on a perceptual level. While self-similarity as a mathematical concept does not require comparative data points from another source, this value means little for the listener's perceptual experience. 
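The comparison across 13 frequency bands over the first 30 seconds of each song suggests an MFCC-style representation, though the article does not spell out the exact transform. The following sketch therefore treats MFCCs as a plausible stand-in, with placeholder file names.

```python
# A sketch of the refined sound-profile measure: 13 band values summarising
# the first 30 seconds of each song. MFCCs are an assumption standing in for
# the study's unspecified 13 frequency bands.
import numpy as np
import librosa

def band_profile(path: str, n_bands: int = 13, seconds: float = 30.0) -> np.ndarray:
    """Mean value per band over the opening seconds of one song."""
    y, sr = librosa.load(path, sr=22050, duration=seconds)
    bands = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_bands)  # shape (13, n_frames)
    return bands.mean(axis=1)                                 # shape (13,)

# One row per song; stacking per channel and year yields matrices of the kind
# summarised in Fig. 2.
profiles = np.vstack([band_profile(p) for p in ["song_a.wav", "song_b.wav"]])
```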
The content coherence of a channel, in terms of experience, depends on a contrasting example. This contrast is precisely what the two music channels within Sveriges Radio offered during this period. Their musical profiles became clearer by sonically contrasting with each other. Under this broader channel similarity, certain characteristics, such as tempo and key, appear to have been able to vary more freely. Nonetheless, this profiling represents a type of complexity reduction – an increased predictability within the format's constraints. These conclusions offer an indication of how radio channels adapt and refine their identities over time, responding to both internal objectives and external competitive pressures. They underscore the dynamic nature of radio broadcasting, where channels continually evolve their formats to maintain relevance and listener engagement. The nuanced understanding of formatting and self-similarity provides valuable insights into the strategic decisions made by broadcasters in shaping their auditory content. Björnberg echoes Hagen and Hendy’s tendency to primarily criticise the lack of creativity in radio formats. However, his focus is specifically on a certain format – the 'adult contemporary' (Björnberg). Against this backdrop, a comparative study of the audio content of commercial alternatives would indeed be interesting. Unfortunately, due to the scarcity of preserved material, compiling a proportionate dataset for such a study is challenging. However, we can still contemplate the general content of 'adult contemporary' music. One speculative approach to addressing this question is to examine the instruments used in the music to see if they align with specific format descriptions. Such an analysis could provide further insight into how the PSB style is changing under the pressure of commercial competition. Fig. 3: Percentage distribution of instruments in sub-segments from the music content of P2 and P3 in the sample data from 1988. The results offer a potential explanation for the questions raised by the initial graphs in this study. While P2 shows some variation, there remains a, perhaps expected, focus on string instruments and the piano. P3, on the other hand, displays a wide mix of content. Notably, there is a relatively high presence of the accordion – an instrument that is as loathed as it is loved within the Swedish context. The instrument belongs to a longer tradition of popular music, encapsulating both certain folk music traditions and ‘dansband’ tunes. Already by the onset of broadcasting, the accordion split the audiences right down the middle (Hadenius 76), and by the late 1960s, it was proclaimed a “dead” instrument (Björnberg 257). Here, my results highlight a certain perseverance of the instrument, speaking to the resilience of this sound. While the accordion may seem peripheral to the 1990s debates about radio formats, this example serves as a reminder of both the persistence of stylistic questions and their emotional charge. Therefore, it is instructive to study the instrument distribution in the final year of the sample data. Fig. 4: Percentage distribution of instruments in sub-segments from the music content of P2 and P3 in the sample data from 1999. The historical development has several interesting tendencies. Whilst the general distribution of instruments on P2 remains similar, P3 has witnessed significant changes. The previous sample displayed a wide variety of sounds with high distribution, albeit with guitar at the top. 
The data from 1999 have developed in a more guitar-centred direction. While this provides a certain analytical depth to the previous stages of analysis, it might also give a clue to the more general question of format radio. The results demonstrate a tendency towards a clearer channel identity, with a more unified sound. Extending the interpretation, we can also consider the results within the scope of international research. Eric Weisbard, alongside other researchers like Saesha Senger, has extensively mapped the content of the top charts during the final decades of the twentieth century, revealing a clear direction towards synth music and guitar-driven rock during the 80s and 90s. Since we cannot study the actual content of commercial broadcasting in Sweden during this time, such historical references remain a promising second-degree comparison. In this perspective, P3 partly mirrors global trends, illustrating the station's responsiveness to changing listener preferences and the dynamic nature of music consumption. We are here engaged in classical historical work, piecing together fragments of data from different types of sources. Nevertheless, the results indicate how old traditions and global trends intermingle in the construction of a national format: a soundscape where accordions and guitars reverberate in parallel. Summary The investigation into SR’s channels P2 and P3 during the 1980s and 1990s reveals a nuanced understanding of radio formatting and its implications. Both channels exhibit idiosyncratic approaches that blend various musical styles to develop distinct channel identities within the context of format radio. While the channels have moved towards more predictable and structured formats, reminiscent of commercial radio, this has not led to an overall homogenisation of content. Instead, each channel has developed a unique version of formatting and maintained its distinct identity while incorporating elements of structure and predictability. Finally, this matter runs up against an epistemological question which has fascinated sound scholars for some time. To create format radio is to create a sound in time that feels uniform. Philosophers and psychologists have long debated whether sounds can be understood at all as continuous perceptual phenomena, and what it means for a sound to be 'the same' over time (for example, Moles). If sonic uniformity is a scientific challenge already on the time scale of the second, then radiophonic flows introduce whole new complications and questions. Here it is no longer the similarity between two tones in flow, but day-long broadcast products to be understood under the same channel identity. This requires a shaping of sound at a completely different scale. The empirical study of such challenges has only begun. References Ahlkvist, Jarl A., and Gene Fisher. "And the Hits Just Keep on Coming: Music Programming Standardization in Commercial Radio." Poetics 27.5-6 (2000): 301–325. Banerjee, Indrajit, and Kalinga Seneviratne. Public Service Broadcasting: A Best Practices Sourcebook. Asian Media Information and Communication Centre, 2005. Björnberg, Alf. Skval och harmoni: musik i radio och TV 1925-1995. Stockholm: Etermedierna i Sverige, 1998. Bogdanov, D., J. Serrà, N. Wack, and P. Herrera. "From Low-Level to High-Level: Comparative Study of Music Similarity Measures." Proceedings of the IEEE International Symposium on Multimedia (ISM'09), International Workshop on Advances in Music Information Research (AdMIRe'09). 2009. 453-458. 
Doukhan, David, et al. "An Open-Source Speaker Gender Detection Framework for Monitoring Gender Equality." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. Drucker, Johanna. Visualization and Interpretation: Humanistic Approaches to Display. Cambridge: MIT P, 2020. Forsman, Michael. Lokalradio och kommersiell radio 1975-2010: en mediehistorisk studie av produktion och konkurrens. Diss. Stockholms Universitet, 2011. Hadenius, Stig. Kampen om monopolet: Sveriges radio och TV under 1900-talet. Stockholm: Prisma, 1998. Hagen, Wolfgang. Das Radio: Zur Geschichte und Theorie des Hörfunks – Deutschland/USA. Paderborn: Brill Fink, 2005. Hedman, L. "Radio." In Mediesverige 2003, ed. U. Carlsson. Gothenburg: Nordicom, 2002. Hendy, David. Radio in the Global Age. Cambridge: Polity, 2003. Jauert, Per. "Policy Development in Danish Radio Broadcasting 1980-2002: Layers, Scenarios and the Public Service Remit." In New Articulations of the Public Service Remit, eds. Gregory Ferrell Lowe and Taisto Hujanen. Gothenburg: Nordicom, 2003. Jolliffe, Ian T., and Jorge Cadima. "Principal Component Analysis: A Review and Recent Developments." Philosophical Transactions of the Royal Society A 374 (2016). McFee, B., et al. “Librosa: Audio and Music Signal Analysis in Python.” Proceedings of the 14th Python in Science Conference. 2015. 18-25. Moles, Abraham. Information Theory and Esthetic Perception. Trans. Joel E. Cohen. Urbana: U of Illinois P, 1966. Piper, Andrew. “Do We Know What We Are Doing?” Journal of Cultural Analytics 5.1 (2019). <https://doi.org/10.22148/001c.11826>. Stjernstedt, Fredrik. Från radiofabrik till mediehus: medieförändring och medieproduktion på MTG-radio. Örebro Universitet, 2013. Weisbard, Eric. Top 40 Democracy: The Rival Mainstreams of American Music. Chicago: U of Chicago P, 2014.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
