
Journal articles on the topic "Portfolio management Australia Econometric models"



Consult the top 32 journal articles for your research on the topic "Portfolio management Australia Econometric models".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Yong, Jaime, and Anh Khoi Pham. "The long-term linkages between direct and indirect property in Australia." Journal of Property Investment & Finance 33, no. 4 (July 6, 2015): 374–92. http://dx.doi.org/10.1108/jpif-01-2015-0005.

Abstract:
Purpose – Investment in Australia’s property market, whether directly or indirectly through Australian real estate investment trusts (A-REITs), has grown remarkably since the 1990s. The degree of segregation between the property market and other financial assets, such as shares and bonds, can influence the diversification benefits within multi-asset portfolios. This raises the question of whether direct and indirect property investments are substitutable. Establishing how information transmits between asset classes and impacts the predictability of returns is of interest to investors. The paper aims to discuss these issues. Design/methodology/approach – The authors study the linkages between direct and indirect Australian property sectors from 1985 to 2013, with shares and bonds. This paper employs an Autoregressive Fractionally Integrated Moving Average (ARFIMA) process to de-smooth a valuation-based direct property index. The authors establish directional lead-lag relationships between markets using bi-variate Granger causality tests. Johansen cointegration tests are carried out to examine how direct and indirect property markets adjust to an equilibrium long-term relationship and short-term deviations from such a relationship with other asset classes. Findings – The authors find the use of appraisal-based property data creates a smoothing bias which masks the extent of how information is transmitted between the indirect property sector, stock and bond markets, and influences returns. The authors demonstrate that an ARFIMA process accounting for a smoothing bias up to lags of four quarters can overcome the overstatement of the smoothing bias from traditional AR models, after individually appraised constituent properties are aggregated into an overall index. The results show that direct property adjusts to information transmitted from market-traded A-REITs and stocks. Practical implications – The study shows direct property investments and A-REITs are substitutable in a multi-asset portfolio in the long and short term. Originality/value – The authors apply an ARFIMA(p,d,q) model to de-smooth Australian property returns, as proposed by Bond and Hwang (2007). The authors expect the findings will contribute to the discussion on whether direct property and REITs are substitutes in a multi-asset portfolio.
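The abstract's two inferential steps, bi-variate Granger causality and Johansen cointegration, map directly onto statsmodels. The sketch below is a minimal illustration rather than the authors' code; the file name, column names and lag orders are assumptions.

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Hypothetical quarterly return data; column names are assumptions
returns = pd.read_csv("quarterly_returns.csv")

# Bi-variate Granger causality: does the A-REIT series lead de-smoothed
# direct property? (the test checks whether column 2 Granger-causes column 1)
grangercausalitytests(returns[["direct_property", "areit"]], maxlag=4)

# Johansen test for a long-run equilibrium across the four asset classes
jres = coint_johansen(returns[["direct_property", "areit", "stocks", "bonds"]],
                      det_order=0, k_ar_diff=4)
print(jres.lr1)  # trace statistics
print(jres.cvt)  # 90/95/99% critical values
```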
2

Reddy, Wejendra. "Evaluation of Australian industry superannuation fund performance; asset allocation to property." Journal of Property Investment & Finance 34, no. 4 (July 4, 2016): 301–20. http://dx.doi.org/10.1108/jpif-12-2015-0084.

Abstract:
Purpose – Property is a key investment asset class that offers considerable benefits in a mixed-asset portfolio. Previous studies have concluded that property allocation should be within the 10-30 per cent range. However, there seems to be wide variation in theory and practice. Historical Australian superannuation data shows that the level of allocation to the property asset class in institutional portfolios has remained constant in recent decades, restricted to 10 per cent or lower. This is seen by many in the property profession as a subjective measure that needs further investigation. The purpose of this paper is to compare the performance of the AU$431 billion industry superannuation funds’ strategic balanced portfolio against ten different passive and active investment strategies. Design/methodology/approach – The analysis used 20 years (1995-2015) of quarterly data covering seven benchmark asset classes, namely: Australian equities, international equities, Australian fixed income, international fixed income, property, cash and alternatives. The 11 different asset allocation models are constructed within the modern portfolio theory framework utilising Australian ten-year bonds as the risk-free rate. The Sharpe ratio is used as the key risk-adjusted return performance measure. Findings – The ten different asset allocation models perform as well as the industry fund strategic approach. The empirical results show that there is scope to increase the property allocation level from its current 10 per cent to 23 per cent. Upon excluding unconstrained strategies, the recommended allocation to property for industry funds is 19 per cent (12 per cent direct and 7 per cent listed). This higher allocation is backed by improved risk-adjusted return performance. Research limitations/implications – The constrained optimal, tactical and dynamic models are limited to asset weight, no short selling and turnover parameters. Other institutional constraints that can be added to the portfolio optimisation problem include transaction costs, taxation, liquidity and tracking error constraints. Practical implications – The 11 different asset allocation models developed to evaluate the property allocation component in industry superannuation fund portfolios will attract fund managers to explore alternative strategies (passive and active) where risk-adjusted returns can be improved, compared to the common strategic approach, with increased allocation to property assets. Originality/value – The research presents a unique perspective of investigating the optimal allocation to property assets within the context of active investment strategies, such as tactical and dynamic models, whereas previous studies have focused mainly on passive investment strategies. The investigation of these models effectively contributes to the transfer of broader finance and investment market theories and practice to the property discipline.
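As a rough, hedged illustration of the framework described above (not the paper's data), the sketch below finds maximum-Sharpe weights for seven asset classes under the fully-invested, no-short-selling constraints mentioned in the abstract; every number is an invented placeholder.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative quarterly means and a constant-correlation covariance matrix
# for the seven benchmark asset classes named in the abstract (invented values)
mu = np.array([0.020, 0.022, 0.012, 0.011, 0.018, 0.008, 0.016])
sig = np.array([0.080, 0.090, 0.030, 0.030, 0.050, 0.005, 0.060])
corr = np.full((7, 7), 0.3)
np.fill_diagonal(corr, 1.0)
cov = np.outer(sig, sig) * corr
rf = 0.012  # stand-in for the ten-year bond (risk-free) rate, quarterly

def neg_sharpe(w):
    # Negative Sharpe ratio, the objective to minimise
    return -((w @ mu - rf) / np.sqrt(w @ cov @ w))

n = len(mu)
res = minimize(neg_sharpe, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,                       # no short selling
               constraints=[{"type": "eq",
                             "fun": lambda w: w.sum() - 1}])  # fully invested
print(res.x.round(3))  # compare the property weight with the 10 per cent norm
```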
3

Shah, Rohan, and Phani R. Jammalamadaka. "Optimal Portfolio Strategy for Risk Management in Toll Road Forecasts and Investments." Transportation Research Record: Journal of the Transportation Research Board 2670, no. 1 (January 2017): 83–94. http://dx.doi.org/10.3141/2670-11.

Abstract:
The study leveraged modern portfolio theory and stochastic time series models to develop a risk management strategy for future traffic projections along brownfield toll facilities. Uncertainty in future traffic forecasts may raise concerns about performance reliability and revenue potential. Historical time series traffic data from brownfield corridors were used for developing econometric forecast estimates, and Monte Carlo simulation was used to quantify a priori risks or variance to develop optimal forecasts by using mean-variance optimization strategies. Numerical analysis is presented with historical toll transactions along the Massachusetts Turnpike system. Suggested diversification strategies were found to achieve better long-term forecast efficiencies with improved trade-offs between anticipated risks and returns. Planner and agency forecast performance expectations and risk propensity are thus jointly captured.
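A toy version of the simulation step is easy to sketch: simulate growth-shock paths for annual toll transactions and read forecast risk off the simulated distribution. All parameters below are invented, not the Massachusetts Turnpike figures.

```python
import numpy as np

rng = np.random.default_rng(0)
years, n_sims = 10, 5_000
base = 1.0e6                        # current annual toll transactions (assumed)
growth_mu, growth_sd = 0.02, 0.03   # assumed drift and shock volatility

# Monte Carlo paths of compounded annual growth shocks
shocks = rng.normal(growth_mu, growth_sd, size=(n_sims, years))
paths = base * np.cumprod(1.0 + shocks, axis=1)

final = paths[:, -1]
print("mean 10-year forecast:", final.mean())
print("forecast risk (std):", final.std())
print("5% downside scenario:", np.percentile(final, 5))
```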
4

Brdyś, Mietek A., Marcin T. Brdyś, and Sebastian M. Maciejewski. "Adaptive predictions of the euro/złoty currency exchange rate using state space wavelet networks and forecast combinations." International Journal of Applied Mathematics and Computer Science 26, no. 1 (March 1, 2016): 161–73. http://dx.doi.org/10.1515/amcs-2016-0011.

Abstract:
The paper considers the forecasting of the euro/Polish złoty (EUR/PLN) spot exchange rate by applying state space wavelet network and econometric forecast combination models. Both prediction methods are applied to produce one-trading-day-ahead forecasts of the EUR/PLN exchange rate. The paper presents the general state space wavelet network and forecast combination models as well as their underlying principles. The state space wavelet network model is, in contrast to econometric forecast combinations, a non-parametric prediction technique which does not make any distributional assumptions regarding the underlying input variables. Both methods can be used as forecasting tools in portfolio investment management, asset valuation, IT security and integrated business risk intelligence in volatile market conditions.
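The forecast-combination side of the paper can be illustrated with a standard inverse-MSE weighting scheme; the error and forecast values below are placeholders, and the component models are assumed to be fitted elsewhere.

```python
import numpy as np

# In-sample one-day-ahead errors of three candidate models (invented values)
errors = np.array([[ 0.004, -0.002,  0.003],
                   [ 0.001,  0.005, -0.004],
                   [-0.003,  0.002,  0.001]])   # rows: days, columns: models
mse = (errors ** 2).mean(axis=0)
w = (1 / mse) / (1 / mse).sum()                 # inverse-MSE combination weights

forecasts = np.array([4.231, 4.244, 4.238])     # next-day EUR/PLN from each model
print("weights:", w.round(3), "combined forecast:", w @ forecasts)
```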
5

Ogorelkova, Natalya Vladimirovna, and Irina Mikhaylovna Reutova. "Factors of the efficiency of managing portfolio pension reserves of non-state pension funds." Scientific Bulletin: Finance, Banking, Investment, no. 3 (52) (2021): 22–30. http://dx.doi.org/10.37279/2312-5330-2020-3-22-30.

Abstract:
The article is devoted to the consideration of approaches to, and the assessment of, the efficiency of management of investment portfolios of non-state pension funds. This article is a logical continuation of previously conducted research on assessing the effectiveness of pension savings management and contains an analysis of the effectiveness of the second component of the investment portfolios of non-state pension funds (NPFs): pension reserves. The article examines the factors influencing the efficiency of managing the portfolios of pension reserves of non-state pension funds on the basis of statistical data on 28 NPFs for 2013–2018. The factors chosen were the volumes and growth rates of the funds attracted from pension reserves, the share of pension reserves in the economies of scale of non-state pension funds, the presence of risk strategies (the share of shares and investment units), and the amount of remuneration of management companies. The aim of the study is to assess the influence of the selected factors on the efficiency of managing the portfolio of pension reserves of NPFs based on the construction of econometric models. The construction of one-factor and multi-factor econometric models confirms that the effectiveness of the portfolios of pension reserves of NPFs, measured by the Sharpe ratio, does not depend on the size of attracted pension reserves per insured person, on the share occupied by NPFs in the non-state pension market, or on the remuneration paid by non-state pension funds to management companies. An influence of the chosen investment strategy and of the growth rate of pension reserves on the efficiency of managing the pension reserves of NPFs is revealed.
6

Kucukkocaoglu, Guray, and M. Ayhan Altintas. "Using non-performing loan ratios as default rates in the estimation of credit losses and macroeconomic credit risk stress testing: A case from Turkey." Risk Governance and Control: Financial Markets and Institutions 6, no. 1 (2016): 52–63. http://dx.doi.org/10.22495/rgcv6i1art6.

Abstract:
In this study, inspired by the Credit Portfolio View approach, we develop an econometric credit risk model to estimate credit loss distributions of the Turkish banking system under baseline and stress macro scenarios, by substituting non-performing loan (NPL) ratios for default rates. Since customer-number-based historical default rates are not available for the whole Turkish banking system’s credit portfolio, we used NPL ratios as the dependent variable instead of default rates, a common practice in many countries where historical default rates are not available. Although there are many problems in using NPL ratios as default rates, such as underestimating portfolio losses as a result of totally non-homogeneous total credit portfolios and the transfer of non-performing loans from banks’ balance sheets to asset management companies, our aim is to underline and limit some ignored problems of using accounting-based NPL ratios as default rates in macroeconomic credit risk modeling. The developed models confirm the strong statistical relationship between the systematic component of credit risk and macroeconomic variables in Turkey. Stress test results are also consistent with past experience.
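A minimal sketch of the modeling step described, in the spirit of the Credit Portfolio View approach: logit-transform the NPL ratio, regress it on macro drivers, then invert the logit under a stress scenario. The data file, variable names and scenario values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("turkey_macro_npl.csv")  # hypothetical quarterly data
# Logit transform so the fitted NPL ratio always stays inside (0, 1)
y = np.log(df["npl_ratio"] / (1 - df["npl_ratio"]))
X = sm.add_constant(df[["gdp_growth", "policy_rate", "fx_change"]])
model = sm.OLS(y, X).fit()

# Invert the logit under an assumed adverse macro scenario
stress = pd.DataFrame({"const": [1.0], "gdp_growth": [-0.05],
                       "policy_rate": [0.25], "fx_change": [0.30]})
npl_stressed = 1 / (1 + np.exp(-model.predict(stress)))
print(npl_stressed)  # implied NPL ratio under stress
```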
7

Zagaglia, Paolo. "International diversification for portfolios of European fixed-income mutual funds." Managerial Finance 43, no. 2 (February 13, 2017): 242–62. http://dx.doi.org/10.1108/mf-01-2015-0026.

Abstract:
Purpose – The purpose of this paper is to study the scope for country diversification in international portfolios of mutual funds for the “core” EMU countries. The author uses a sample of daily returns for country indices of French, German and Italian funds to investigate the quest for international diversification. The author focuses on fixed-income mutual funds during the period of financial market turmoil since 2007. Design/methodology/approach – The author computes optimal portfolio allocations from both unconstrained and constrained mean-variance frameworks that take as input the out-of-sample forecasts for the conditional mean, volatility and correlation of country-level indices of fund returns. The author also applies a portfolio allocation model based on utility maximization with learning about the time-varying conditional moments, and compares the out-of-sample forecasting performance of 12 multivariate volatility models. Findings – The author finds that there is a “core” EMU country for the mutual fund industry as well: optimal portfolios allocate the largest portfolio weight to German funds, with Italian funds assigned a lower weight in comparison to French funds. This result is remarkably robust across competing forecasting models and optimal allocation strategies. It is also consistent with the findings from a utility-maximization model that incorporates learning about time-varying conditional moments. Originality/value – This is the first study on optimal country-level diversification for a mutual fund investor focused on European countries in the fixed-income space for the turmoil period. The author uses a large array of econometric models that capture the salient features of a period characterized by large changes in volatility and correlation, and compares the performance of different optimal asset allocation models.
8

Jacobs Jr., Michael. "Supervisory requirements and expectations for portfolio level counterparty credit risk measurement and management." Journal of Financial Regulation and Compliance 22, no. 3 (July 8, 2014): 252–70. http://dx.doi.org/10.1108/jfrc-10-2013-0034.

Abstract:
Purpose – This study aims to survey supervisory requirements and expectations for counterparty credit risk (CCR). Design/methodology/approach – In this paper, a survey of CCR including the following elements has been performed. First, various concepts in CCR measurement and management, including prevalent practices, definitions and conceptual issues, have been introduced. Then, various supervisory requirements and expectations with respect to CCR have been summarized. This study has multiple areas of relevance and may be extended in various ways. Risk managers, traders and regulators may find this to be a valuable reference. Directions for future research could include empirical analysis, the development of a theoretical framework and a comparative analysis of systems for analyzing and regulating CCR. Findings – Some of the thoughts regarding the concept of risk are considered and surveyed, and it is then considered how these apply to CCR. A classical dichotomy exists in the literature, the earliest exposition of which is credited to Knight (1921), who defines uncertainty as the case where it is not possible to measure a probability distribution, or the distribution is unknown. This is contrasted with the situation where either the probability distribution is known, or is knowable through repeated experimentation. Arguably, in economics and finance (and more broadly in the social or natural as opposed to the physical or mathematical sciences), the former is the more realistic scenario being contended with (e.g. a fair vs a loaded die, or a die with an unknown number of sides). The authors are forced to rely upon empirical data to estimate loss distributions, but this is complicated by changing economic conditions, which invalidate the forecasts that our econometric models generate. Originality/value – This is one of few studies of CCR regulations that is so far-reaching.
9

Shirur, Srinivas. "Are Managers Measuring the Financial Risk in the Right Manner? An Exploratory Study." Vikalpa: The Journal for Decision Makers 38, no. 2 (April 2013): 81–94. http://dx.doi.org/10.1177/0256090920130205.

Abstract:
The basic problem with corporate finance is that it deals with fundamental analysis issues while the tools used are those applicable to technical analysis. That is the reason why finance managers often arrive at wrong decisions, which snowball into issues like the subprime crisis. Initially, the Markowitz model was used to calculate risk for portfolio management. It gave importance only to systematic risk, as unsystematic risk could be avoided through diversification. Later on, the CAPM model was developed for corporate finance and project finance for the calculation of risk. Finance models dealing with risk management are applicable only for a short period, and that too for an average of a large number of companies. The approach of applying risk measurement techniques suitable for portfolio management to corporate finance is not correct. Even the econometric techniques applied to validate the calculation of risk for portfolio management should be different from those applied for corporate finance. The present article analyses the problems of applying such risk measurement techniques for corporate finance purposes. A company faces mainly two types of risk: liquidity risk and bankruptcy risk. In case a company suffers from a bankruptcy threat (which may or may not lead to actual bankruptcy), i.e., the possibility of closure due to losses, there will be two possibilities: the company may move with the market index in normal times while it may come down suddenly with the index and not bounce back (a kink in the beta curve), as in the case of MTNL and Jet Airlines; or there may be a sudden bankruptcy threat, as in the case of Satyam. The latter case does not allow investors to react. However, corporate managers will have to take account of the first possibility of bankruptcy risk, which cannot be ignored by assuming beta to be constant. This paper examines three companies, Mastek, Jet Airlines, and MTNL, in this category. The author suggests that instead of segregating risk into systematic and unsystematic risk, it should be segregated into bankruptcy and liquidity risk. In this way, unsystematic risk is also priced while determining the value of a company.
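The "kink in the beta curve" argument can be probed empirically with a rolling CAPM beta; the input file and column names in this sketch are assumptions.

```python
import pandas as pd

df = pd.read_csv("returns.csv", parse_dates=["date"], index_col="date")
# assumed columns: 'stock' (e.g. a troubled firm) and 'market' (index returns)

window = 60  # trading days
beta = (df["stock"].rolling(window).cov(df["market"])
        / df["market"].rolling(window).var())
print(beta.dropna().describe())  # a wide spread hints that beta is not constant
```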
10

Duppati, Geeta, and Mengying Zhu. "Oil prices changes and volatility in sector stock returns: Evidence from Australia, New Zealand, China, Germany and Norway." Corporate Ownership and Control 13, no. 2 (2016): 351–70. http://dx.doi.org/10.22495/cocv13i2clp4.

Abstract:
The paper examines the exposure of sectoral stock returns to oil price changes in Australia, China, Germany, New Zealand and Norway over the period 2000-2015, using weekly data drawn from DataStream. The issue of volatility has important implications for the theory of finance and, as is well known, accurate volatility forecasts are important in a variety of settings, including option and other derivatives pricing, portfolio and risk management (e.g. in the calculation of hedge ratios and Value-at-Risk measures), and trading strategies (David and Ruiz, 2009). This study adopts GARCH and EGARCH models to understand the relationship between returns and volatility. The findings using GARCH (EGARCH) models suggest that in the case of Germany eight (nine) out of ten sector returns can be explained by the volatility of past oil prices, while in the case of Australia six (seven) out of ten sector returns are sensitive to oil price changes, the exceptions being Industrials, Consumer Goods, Health Care and Utilities. In China and New Zealand, five sectors are found to be sensitive to oil price changes, and in Norway three sectors, namely Oil & Gas, Consumer Services and Financials. Secondly, this paper also investigated the exposure of stock returns to oil price changes using market index data as a proxy in GARCH and EGARCH models. The results indicated that stock returns are sensitive to oil price changes and show leverage effects for all five countries. Further, the findings also suggest that sectors with more constituents are likely to show leverage effects, and vice versa. The results have implications for market participants making informed decisions about better portfolio diversification to minimize risk and add value to their stocks.
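A hedged sketch of the reported model fits using the `arch` package: GARCH(1,1) and EGARCH(1,1) with the oil return as an exogenous regressor in the mean equation. The data file, column names and exact specification are assumptions, not the authors' setup.

```python
import pandas as pd
from arch import arch_model

df = pd.read_csv("sector_oil_weekly.csv", parse_dates=["date"], index_col="date")
ret = 100 * df["sector_return"]   # per cent returns keep the optimiser stable
oil = df[["oil_return"]]

# GARCH(1,1) and EGARCH(1,1) with the oil return in the mean equation
garch = arch_model(ret, x=oil, mean="LS", vol="GARCH", p=1, q=1).fit(disp="off")
egarch = arch_model(ret, x=oil, mean="LS", vol="EGARCH",
                    p=1, o=1, q=1).fit(disp="off")
print(garch.summary())
print(egarch.params)  # a significant o-term signals asymmetric (leverage) effects
```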
11

Asongu, Simplice A. "Linkages between investment flows and financial development." African Journal of Economic and Management Studies 5, no. 3 (August 26, 2014): 269–99. http://dx.doi.org/10.1108/ajems-05-2012-0036.

Abstract:
Purpose – The purpose of this paper is to introduce previously missing financial components (efficiency, activity and size) into the assessment of the finance-investment nexus. Design/methodology/approach – Vector autoregressive models, in the perspectives of a vector error correction model and short-run Granger causality, are employed. Optimally specified econometric methods are used, as opposed to the purely discretionary model specifications of the mainstream literature. Findings – Three main findings are established: first, while finance-led investment elasticities are positive, investment elasticities of finance are negative; second, except for Guinea-Bissau, Mozambique and Togo, finance does not seem to engender portfolio investment; and finally, contrary to the mainstream literature, financial efficiency appears to impact investment more than financial depth. Practical implications – Four policy implications result: first, extreme caution is needed in the use of single-equation analysis for economic forecasts; second, financial development leads more to investment flows than the other way round; third, financial allocation efficiency is more relevant as a means of attracting investment flows than financial depth; and finally, the somewhat heterogeneous character of the findings also points to shortcomings in blanket policies that are not contingent on country-specific trends in the finance-investment nexus. Originality/value – First, contrary to the mainstream approach, the paper uses four measures of financial intermediary development (depth, efficiency, activity and size) as well as four types of investment flows (domestic, foreign, portfolio and total). Second, the chosen investment and financial indicators are derived from preliminary robust correlation analyses of the broadest macroeconomic data set available on investment and financial intermediary flows.
12

Robertson, M. J., G. J. Rebetzke, and R. M. Norton. "Assessing the place and role of crop simulation modelling in Australia." Crop and Pasture Science 66, no. 9 (2015): 877. http://dx.doi.org/10.1071/cp14361.

Abstract:
Computer-based crop simulation models (CSMs) are well entrenched as tools for a wide variety of research, development and extension applications. Despite this, critics remain and there are perceptions that CSMs have not contributed to impacts on-farm or in the research community, particularly with plant breeding. This study reviewed the literature, interviewed 45 stakeholders (modellers, institutional representatives and clients of modelling), and analysed the industry-funded project portfolio to ascertain the current state of use of CSMs in the grains industry in Australia, including scientific progress, impacts and development needs. We found that CSMs in Australia are widely used, with ~100 active and independent users, ~15 model developers, and at any one time ~10 postgraduate students, chiefly across six public research institutions. The dominant platform used is APSIM (Agricultural Production Systems Simulator). It is widely used in the agronomic domain. Several cases were documented where CSM use had a demonstrable impact on farm and research practice. The updating of both plant and soil process routines in the models has slowed and even stalled in recent years, and scientific limitations to future use were identified: the soil–plant nitrogen cycle, root growth and function, soil surface water and residue dynamics, impact of temperature extremes on plant function, and up-to-date cultivar parameter sets. There was a widespread appreciation of and optimism for the potential of CSMs to assist with plant-breeding activities, such as environmental characterisation, trait assessment, and design of plant-breeding programs. However, we found little evidence of models or model output being used by plant breeders in Australia, despite significant impacts that have emerged recently in larger international breeding programs. Closer cooperation between geneticists, physiologists and breeders will allow gene-based approaches to characterise and parameterise cultivars in CSMs, demonstrated by recent progress with phenology in wheat. This will give models the ability to deal with a wider range of potential genotype × environment × management scenarios.
13

Ma, Le, Chunlu Liu, and Anthony Mills. "Construction labor productivity convergence: a conditional frontier approach." Engineering, Construction and Architectural Management 23, no. 3 (May 16, 2016): 283–301. http://dx.doi.org/10.1108/ecam-03-2015-0040.

Abstract:
Purpose – Understanding and simulating construction activities is a vital issue from a macro-perspective, since construction is an important contributor to economic development. Although the construction labor productivity frontier has attracted much research effort, its temporal and regional characteristics have not yet been explored. The purpose of this paper is to investigate the long-run equilibrium and dynamics within construction development under a conditional frontier context. Design/methodology/approach – Analogous to the simplified production function, this research adopts conditional frontier theory to investigate the convergence of construction labor productivity across regions and over time. Error correction models are implemented to identify the long-run equilibrium and dynamics of construction labor productivity against three types of convergence hypotheses, while a panel regression method is used to capture the regional heterogeneity. The developed models are applied to investigate and simulate construction labor productivity in the Australian states and territories. Findings – The results suggest that construction labor productivity in Australia should converge to stable frontiers in a long-run perspective. The dynamics of the productivity are mainly caused by the technology utilization efficiency levels of the local construction industry, while the influences of changes in technology level and capital deepening appear limited. Five regional clusters of Australian construction labor productivity are suggested by the simulation results: New South Wales; Australian Capital Territory; Northern Territory, Queensland and Western Australia; South Australia; and Tasmania and Victoria. Originality/value – Three types of frontiers of construction labor productivity are proposed. An econometric approach is developed to identify the convergence frontier of construction labor productivity across regions over time. The specified model can provide accurate predictions of construction labor productivity.
14

Panda, Ajaya Kumar, Swagatika Nanda, Vipul Kumar Singh, and Satish Kumar. "Evidence of leverage effects and volatility spillover among exchange rates of selected emerging and growth leading economies." Journal of Financial Economic Policy 11, no. 2 (May 7, 2019): 174–92. http://dx.doi.org/10.1108/jfep-03-2018-0042.

Abstract:
Purpose – The purpose of this study is to examine the evidence of leverage effects on the conditional volatility of exchange rates due to asymmetric innovations, and its spillover effects among the exchange rates of selected emerging and growth-leading economies. Design/methodology/approach – The empirical analysis uses the sign bias test and asymmetric generalized autoregressive conditional heteroskedasticity (GARCH) models to capture the leverage effects on the conditional volatility of exchange rates, and a multivariate GARCH (MGARCH) model to address volatility spillovers among the studied exchange rates. Findings – The study finds a substantial impact of asymmetric innovations (news) on the conditional volatility of exchange rates, with the Russian Ruble showing a significant leverage effect followed by the Indian Rupee. The exchange rates depict significant mean spillover effects, where the Rupee, Peso and Ruble are strongly connected; the Real, Rupiah and Lira are moderately connected; and the Yuan is the least connected exchange rate within the sample. The study also finds the assimilation of information in foreign exchanges and increased spillover effects in the post-2008 period. Practical implications – The results have implications for international investment and asset management. Portfolio managers could use this research to optimize their international portfolios. Policymakers such as central banks may find the study useful for monitoring and designing intervention strategies in foreign exchange markets, keeping an eye on the nature of movements among these exchange rates. Originality/value – This is one of the few empirical research studies that aim to explore the leverage effects on exchange rates and their volatility spillovers among seven emerging and growth-leading economies using advanced econometric methodologies.
15

Saadaoui, Amir, Kais Saidi, and Mohamed Kriaa. "Transmission of shocks between bond and oil markets." Managerial Finance 46, no. 10 (May 26, 2020): 1231–46. http://dx.doi.org/10.1108/mf-11-2019-0554.

Abstract:
Purpose – This paper aims at looking into the transmission of shocks between the bond and oil markets using bivariate GARCH (BEKK and DCC) models. As many financial assets are exchanged on the basis of these index returns, it is essential for financial market participants to understand the mechanism of volatility transmission through time and across these series for the purpose of making optimal portfolio allocation decisions. The outcomes reveal an important volatility transmission between sovereign bond and oil indices, with great sensitivity during and after the subprime crisis period. Design/methodology/approach – In this context, we propose our hypotheses. Indeed, our study aims to see whether the financial crisis was responsible for the sharp drop in oil prices from October 2008. To this end, we suggest an empirical study of the shock transmission between the bond and oil markets, using BEKK-GARCH and DCC models. To our knowledge, this is the first paper using the BEKK-GARCH and DCC models to study shock transmission between sovereign bond and oil indices. Findings – We have noticed that in the event of a disruption in the bond market, oil prices respond to these shocks in the short term. It has also been emphasized, however, that this relationship is exacerbated as the period extends. This leads us to conclude that the financial market situation affects the oil price only throughout the crisis period, and that this situation is causally significant only in the event of a severe crisis, such as those of subprime and sovereign debt. Originality/value – The global financial system has been going through an acute crisis since mid-2007. This crisis, which initially occurred only in the US real estate market, progressively affected the global financial system and became a general economic crisis. The objective of this work is to analyze the effects of this financial market disturbance on oil prices based on econometric models.
16

Surmann, Markus, Wolfgang Andreas Brunauer, and Sven Bienert. "The energy efficiency of corporate real estate assets." Journal of Corporate Real Estate 18, no. 2 (May 9, 2016): 68–101. http://dx.doi.org/10.1108/jcre-12-2015-0045.

Abstract:
Purpose – On the basis of corporate wholesale and hypermarket stores, this study aims to investigate the relationship between energy consumption, physical building characteristics and operational sales performance, and the impact of energy management on corporate environmental performance. Design/methodology/approach – A unique dataset from METRO GROUP covering 19 European countries is analyzed in a sophisticated econometric approach for the timeframe from January 2011 until December 2014. Multiple regression models are applied to the panel to explain the electricity consumption of the corporate assets on a monthly basis and the total energy consumption on an annual basis. Using Generalized Additive Models to model nonlinear covariate effects, the authors decompose the response variables into the implicit contributions of building characteristics, operational sales performance and energy management attributes, controlling for outdoor weather conditions and spatial-temporal effects. Findings – METRO GROUP’s wholesale and hypermarket stores show significant reductions in electricity and total energy consumption over the analyzed timeframe. Due to the implemented energy consumption and carbon emission reduction targets, the influence of energy management measures, such as the identification of stores with the lowest energy performance, was found to contribute toward a more efficient corporate environmental performance. Originality/value – In the context of the corporate responsibility/sustainability of wholesale, hypermarket and retail corporations, the energy efficiency of, and reduction of carbon emissions from, corporate real estate assets are of emerging interest. Besides the insights about the energy efficiency of corporate real estate assets, the role of energy management in contributing to a more efficient corporate environmental performance had not yet been investigated for a large European wholesale and hypermarket portfolio.
17

Koki, Constandina, Stefanos Leonardos, and Georgios Piliouras. "Do Cryptocurrency Prices Camouflage Latent Economic Effects? A Bayesian Hidden Markov Approach." Future Internet 12, no. 3 (March 21, 2020): 59. http://dx.doi.org/10.3390/fi12030059.

Abstract:
We study the Bitcoin and Ether price series from a financial perspective. Specifically, we use two econometric models to perform a two-layer analysis of the correlation and prediction of Bitcoin and Ether price series with traditional assets. In the first part of this study, we model the probability of positive returns via a Bayesian logistic model. Even though the fitting performance of the logistic model is poor, we find that traditional assets can explain some of the variability of the price returns. Along with the fact that standard models fail to capture the statistical and econometric attributes—such as extreme variability and heteroskedasticity—of cryptocurrencies, this motivates us to apply a novel Non-Homogeneous Hidden Markov model to these series. In particular, we model Bitcoin and Ether prices via the non-homogeneous Pólya-Gamma Hidden Markov (NHPG) model, since it has been shown to outperform its counterparts on conventional financial data. The transition probabilities of the underlying hidden process are modeled via a logistic link, whereas the observed series follow a mixture of normal regressions conditionally on the hidden process. Our results show that the NHPG algorithm has good in-sample performance and captures the heteroskedasticity of both series. It identifies frequent changes between the two states of the underlying Markov process. In what constitutes the most important implication of our study, we show that there exist linear correlations between the covariates and the ETH and BTC series. However, only the ETH series is affected non-linearly by a subset of the accounted covariates. Finally, we conclude that the large number of significant predictors, along with the weak predictability performance of the algorithm, backs up earlier findings that cryptocurrencies are unlike any other financial assets and that predicting cryptocurrency price series is still a challenging task. These findings can be useful to investors, policy makers and traders for portfolio allocation, risk management and trading strategies.
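The first analysis layer can be imitated with a plain (non-Bayesian) logistic regression as a stand-in for the paper's Bayesian logistic model; the covariate names and data file are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("crypto_daily.csv")    # hypothetical daily data
y = (df["btc_return"] > 0).astype(int)  # indicator of a positive Bitcoin return
X = sm.add_constant(df[["sp500_return", "gold_return", "usd_index_change"]])
logit = sm.Logit(y, X).fit()
print(logit.summary())  # a low pseudo R-squared would echo the paper's finding
```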
18

Ma, Le, Richard Reed, and Jian Liang. "Separating owner-occupier and investor demands for housing in the Australian states." Journal of Property Investment & Finance 37, no. 2 (March 4, 2019): 215–32. http://dx.doi.org/10.1108/jpif-07-2018-0045.

Abstract:
Purpose – There has been declining home ownership and increased acceptance of long-term renting in many western countries, including Australia; this has created a problem when examining housing markets, as there is dual demand from both owner-occupiers and investors. The purpose of this paper is to examine the long-run relationship between house prices, housing supply and demand, and to estimate the effects of the two types of demand (i.e. owner-occupier and investor) on house prices. Design/methodology/approach – Econometric techniques for cointegration with vector error correction models are used to specify the proposed models, which are illustrated with the housing markets of the Australian states and territories. Findings – The results highlight the regional long-run equilibrium and associated patterns in house prices, the level of new housing supply, owner-occupier demand for housing and investor demand for housing. Different types of markets were identified. Practical implications – The findings suggest that policies that depress investment demand can effectively prevent a housing bubble from further building up in the Australian states. The empirical findings shed light on strategies for maintaining levels of housing affordability in regions where owner-occupiers have been priced out of the housing market. Originality/value – There has been declining home ownership and increased acceptance of long-term renting in many western countries, including Australia; this has created a problem when examining housing markets, as there is dual demand from both owner-occupiers and investors. This research examines the relationship between supply and dual demand, which includes owner-occupation and investment, and their influence on house prices.
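The cointegration-with-VECM design translates directly into statsmodels; this sketch assumes a hypothetical quarterly dataset containing the four series named in the abstract.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

df = pd.read_csv("state_housing.csv", parse_dates=["quarter"], index_col="quarter")
data = df[["house_price", "new_supply", "owner_demand", "investor_demand"]]

rank = select_coint_rank(data, det_order=0, k_ar_diff=2)  # Johansen-based rank
res = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="co").fit()
print(res.alpha)  # speeds of adjustment toward the long-run equilibrium
print(res.beta)   # the cointegrating (long-run) relationship
```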
19

Fidanoski, Filip, Moorad Choudhry, Milivoje Davidović, and Bruno S. Sergi. "What does affect profitability of banks in Croatia?" Competitiveness Review: An International Business Journal 28, no. 4 (July 16, 2018): 338–67. http://dx.doi.org/10.1108/cr-09-2016-0058.

Abstract:
Purpose – The paper aims to determine the impact of bank-specific, industry-specific and macro-specific determinants on two profitability indicators: return on assets (ROA) and the net-interest margin ratio (RNIM). Design/methodology/approach – The research sample includes selected Croatian banks, and the empirical analysis covers the period 2007-2014. Based on reliable and robust econometric tests, a dynamic estimation technique (DOLS) was run to estimate the profitability models, using ROA and RNIM as dependent variables; the models also include lagged dependent variables to capture the speed of mean reversion in profitability. Findings – The results proved the crucial positive impact of asset size (economies of scale), loan portfolio and GDP growth on banks’ profitability, while risks and administrative costs have negative impacts on profitability. This paper shows the positive impact of the capital adequacy ratio (CAR) and leverage on ROA and RNIM, as well as the correlation between market concentration and banks’ profitability. Practical implications – Basically, Croatian banks should improve operating efficiency and risk management practice to increase their profitability. In addition, banks should carefully balance capital base and risk exposure on the one hand, and take advantage of using relatively cheaper deposits and borrowed funds instead of more expensive equity on the other. This conclusion is reasonable, keeping in mind that the Croatian financial market does not punish banks for extra risk exposure caused by market imperfections. Finally, the regulatory authority in Croatia should impose some additional antitrust measures to increase competition in the banking market. Originality/value – Although a number of existing studies explain the determinants of bank profitability from different perspectives, this paper conducts a specific empirical analysis of the determinants of bank profitability in Croatia. In addition, it provides a good synthesis of the relevant empirical and theoretical studies in this domain.
20

Manickavasagam, Jeevananthan, and Visalakshmi S. "An investigational analysis on forecasting intraday values." Benchmarking: An International Journal 27, no. 2 (October 4, 2019): 592–605. http://dx.doi.org/10.1108/bij-11-2018-0361.

Abstract:
Purpose – Algorithmic trading has advanced exponentially and necessitates the evaluation of intraday stock market forecasting, on the grounds that stock market series are expected to follow the random walk hypothesis. The purpose of this paper is to forecast the intraday values of stock indices using data mining techniques and compare the techniques’ performance across different markets to accomplish the best results. Design/methodology/approach – This study investigates the intraday values (every 60th-minute closing value) of four different markets (namely, the UK, Australia, India and China) spanning April 1, 2017 to March 31, 2018. The forecasting performance of multivariate adaptive regression splines (MARSplines), support vector regression (SVR), backpropagation neural network (BPNN) and autoregression (1) is compared using statistical measures. Robustness evaluation is done to check the performance of the models on the relative ratios of the data. Findings – MARSplines produces better results than the compared models in forecasting every 60th-minute value of the selected stocks and stock indices. Next to MARSplines, SVR outperforms the neural network and autoregression (1) models. MARSplines also proved to be more robust than the other models. Practical implications – Forecasting provides a substantial benchmark for companies, which entails long-run operations. Significant profit can be earned by successfully predicting a stock’s future price. Traders have to outperform the market using techniques. Policy makers need to estimate future prices/trends in the stock market to identify the link between financial instruments and monetary policy, which gives higher insight into the mechanism of existing policy and the role of financial assets in many channels. Thus, this study expects that the proposed model can create significant profits for traders by more precisely forecasting the stock market. Originality/value – This study contributes to the high-frequency forecasting literature using MARSplines, SVR and BPNN. Finding the most effective way of forecasting the stock market is imperative for traders and portfolio managers for investment decisions. This study reveals the changing levels of trends in investing and the expectation of significant gains in a short time through intraday trading.
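Among the compared techniques, the SVR benchmark is the simplest to sketch with scikit-learn; the lag features, hyperparameters and data file are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVR

s = pd.read_csv("intraday_index.csv")["close"]  # hourly closing values (assumed)
lags = 4
# Predict the next 60th-minute value from the previous four values
X = np.column_stack([s.shift(i) for i in range(1, lags + 1)])[lags:]
y = s.values[lags:]

split = int(0.8 * len(y))  # chronological train/test split
model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"out-of-sample MAPE: {mape:.2f}%")
```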
21

Higgins, David. "Defining the three Rs of commercial property market performance." Journal of Property Investment & Finance 33, no. 6 (September 7, 2015): 481–93. http://dx.doi.org/10.1108/jpif-08-2014-0054.

Abstract:
Purpose – Modern property investment allocation techniques are typically based on recognised measures of return and risk. Whilst these models work well in theory under stable conditions, they can fail when stable assumptions cease to hold and extreme volatility occurs. This is evident in commercial property markets, which can experience extended stable periods followed by large, concentrated negative price fluctuations as a result of major unpredictable events. This extreme volatility may not be fully reflected in traditional risk calculations and can lead to ruin. The paper aims to discuss these issues. Design/methodology/approach – This research studies 28 years of quarterly Australian direct commercial property market performance data for normal distribution features and signs of extreme downside risk. For the extreme values, Power Law distribution models were examined so as to provide a better probability measure of large negative price fluctuations. Findings – The results show that the normal bell curve distribution underestimated actual extreme values both in frequency and extent, by at least 30 per cent for the outermost data point. For the statistical outliers beyond 2 SD, a Power Law distribution can overcome many of the shortcomings of the standard deviation approach and therefore better measure the probability of ruin, that is, extreme downside risk. Practical implications – In highlighting the challenges of measuring property market performance, analysis of extreme downside risk should be separated from traditional standard deviation risk calculations. In recognising these two different types of risk, extreme downside risk has a magnified domino effect, with the tendency of bad news to come in crowds. Big price changes can lead to market crashes and financial ruin, which is well beyond the standard deviation risk measure. This needs to be recognised and addressed, as there is evidence that extreme downside risk determinants are increasing in magnitude, frequency and impact. Originality/value – Analysis of extreme downside risk should form a key part of the property decision process and be included in the property investment manager’s toolkit. Modelling techniques for estimating measures of tail risk present challenges and have been shown to be beyond traditional risk management practices, which are too narrow and constraining in definition. Measuring extreme risk and the likelihood of ruin is the first step in analysing and dealing with risk in both an asset-class and a portfolio context.
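The Power Law tail idea can be sketched in a few lines: take losses beyond the 2 SD threshold, fit the tail index with the Hill estimator, and compare the implied exceedance probability with the normal curve. The returns file is an assumed input.

```python
import numpy as np
from scipy.stats import norm

returns = np.loadtxt("property_returns.txt")  # hypothetical quarterly returns
losses = -returns[returns < 0]                # positive loss magnitudes

threshold = 2 * returns.std()                 # the 2 SD outlier cut-off
tail = losses[losses > threshold]
alpha_hill = len(tail) / np.log(tail / threshold).sum()  # Hill tail index

p_exceed = len(tail) / len(returns)           # empirical P(loss > 2 SD)
x = 3 * returns.std()
print("power-law P(loss > 3 SD):", p_exceed * (x / threshold) ** -alpha_hill)
print("normal-curve P(loss > 3 SD):", norm.sf(3.0))
```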
22

Wilkinson, Sara J., and Julie R. Jupp. "Exploring the value of BIM for corporate real estate." Journal of Corporate Real Estate 18, no. 4 (November 14, 2016): 254–69. http://dx.doi.org/10.1108/jcre-11-2015-0040.

Abstract:
Purpose Building information modelling (BIM) offers rich opportunities for property professionals to use information throughout the property life cycle. However, the benefits of BIM for property professionals are largely untapped. BIM was developed by the architecture, engineering and construction (AEC) sector to assist in managing design and construction data. As these technologies mature and evolve, so does the opportunity for other professional groups to use data within, or linked to, BIM models. This paper aims to explore the potential for corporate real estate managers (CREM) and investment surveyors to use data contained in BIM models and building management systems, which could help these professionals with strategic planning, portfolio rationalisation and acquisitions. Design/methodology/approach This is a scoping study to explore the potential to expand the scope of BIM to other professional activities. As such, the research adopted a Delphi approach with a series of workshops with experienced stakeholders in Australia and England. Qualitative research is inductive and hypothesis-generating. That is, as the researcher assimilates knowledge and information contained in the literature, ideas and questions are formed, which are put to research participants, and, from this process, conclusions are drawn. Findings It is technologically feasible for some property professionals, such as CREM, to use some data contained within BIM, and linked building management systems. The types of data used by property professionals were identified and ranked in importance. Needs are varied, both in the range of data and the points in the property life cycle when they are required. The benefits identified include potentially accessing and using more reliable and accurate data in professional tasks; however, challenges exist around the fidelity of the data and assurances that it is current. Research limitations/implications The key limitations of the research were that the views expressed are those of a select group of experienced practitioners and may not represent the consensus view of the professions and industry as a whole. The limitations and criticisms of focus group data collection are that individuals holding strong views may dominate the sessions. Practical implications The findings show that expanding access to BIM could enable some property professionals, including CREM, to utilise relevant data that could improve the quality and accuracy of their professional services. A simple initial system could be trialled to ascertain the value of the data. Over time, the availability of data could be extended to allow more professionals access. Furthermore, there is potential to link BIM to other digitised property data in the future. Originality/value To date, no one has considered the practicality or potential utility of expanding the access to data contained in 3D BIM models to property professionals, nor has anyone considered which data would be useful to them. The value of using BIM data is that, as more property stock is delivered and maintained via BIM-enabled processes, it will be possible for a wider range of professionals such as CREM and investment surveyors to offer more accurate advice and services to clients.
23

Buiak, Lesya, Kateryna Pryshliak, and Oksana Bashutska. "Simulation model of the insurance company management in market conditions." Scientific Journal of Yuriy Fedkovich Chernivtsi National University. Economics, no. 829 (April 5, 2021). http://dx.doi.org/10.31861/ecovis/2020-829-12.

Abstract:
The article is devoted to the study of the financial mechanism and features of insurance companies and the development of economic and mathematical models for finding quantitative parameters for the management of an insurance company in market conditions. The object of research is the activity of insurance companies in Ukraine. The subject of research is economic and mathematical methods and models in the system of optimal management of an insurance company in market conditions. The research was conducted using the following methods: systems analysis, econometric methods, methods of probability theory and mathematical statistics, and simulation and stochastic modeling. The developed set of models makes it possible to take optimal decisions under an unstable micro- and macro-environment, taking into account strengthening competition in the insurance market. The use of the research results, including software to simulate the insurance company, provides flexibility in the management of pricing and cost policy and in the formation of the insurance portfolio, and is a tool for choosing a reinsurance and investment strategy to optimize the financial results of the insurance company.
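In the same spirit as the article's simulation approach, though not its actual model, a compound-Poisson surplus simulation gauges ruin risk under premium and claim assumptions; every parameter below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, horizon = 10_000, 52       # weekly steps over one year
u0, premium = 100.0, 3.0           # initial surplus and weekly premium income
lam, claim_mean = 1.2, 2.2         # weekly claim count rate and mean claim size

ruined = 0
for _ in range(n_sims):
    surplus = u0
    for _ in range(horizon):
        # Compound Poisson claims: Poisson count, exponential severities
        claims = rng.exponential(claim_mean, rng.poisson(lam)).sum()
        surplus += premium - claims
        if surplus < 0:
            ruined += 1
            break
print("estimated one-year ruin probability:", ruined / n_sims)
```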
24

D'Amico, Guglielmo, Bice Di Basilio, and Filippo Petroni. "A semi-Markovian approach to drawdown-based measures." Advances in Complex Systems, June 23, 2021, 2050020. http://dx.doi.org/10.1142/s0219525920500204.

Abstract:
In this paper we assess the suitability of weighted-indexed semi-Markov chains (WISMC) for studying risk measures as applied to high-frequency financial data. The considered measures are the drawdown of fixed level, the time to crash, the speed of crash, the recovery time and the speed of recovery; they provide valuable information in portfolio management and in the selection of investments. The results obtained by implementing the WISMC model are compared with those based on the real data and also with those achieved by GARCH and EGARCH models. Globally, the WISMC model performs much better than the other econometric models for all the considered measures, except in the cases where the percentage of censored units is more than 30%, where the models behave similarly.
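The drawdown-based measures themselves are straightforward to compute from a price path; this sketch (with an assumed input file) recovers the maximum drawdown, the time to crash and the recovery time.

```python
import numpy as np

prices = np.loadtxt("prices.txt")              # hypothetical high-frequency series
running_max = np.maximum.accumulate(prices)
drawdown = 1.0 - prices / running_max

max_dd = drawdown.max()                        # maximum drawdown
trough = int(drawdown.argmax())
peak = int(prices[:trough + 1].argmax())       # last running maximum before the trough
time_to_crash = trough - peak                  # periods from peak to trough

# First time after the trough at which the peak price is regained
after = np.nonzero(prices[trough:] >= prices[peak])[0]
recovery_time = int(after[0]) if after.size else None  # None: never recovered
print(max_dd, time_to_crash, recovery_time)
```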
25

Shao, Adam W., Katja Hanewald, and Michael Sherris. "House Price Models for Banking and Insurance Applications: The Impact of Property Characteristics." Asia-Pacific Journal of Risk and Insurance, December 7, 2017. http://dx.doi.org/10.1515/apjri-2017-0003.

Abstract:
House price indices are needed to assess house price risk in households’ portfolio allocation decisions and in many housing-related financial products such as reverse mortgages, mortgage insurance and real estate derivatives. This paper first introduces nine widely used house price models to the insurance, risk management and actuarial literature and provides new evidence on the relative performance of these models. We then show how portfolio-level house price indices for properties with specific physical and locational characteristics can be constructed for these different models. All analyses are based on a large dataset of individual property transactions in Sydney, Australia, for the period 1971-2011. The unrestricted hedonic model and a hybrid hedonic repeat-sales model provide a good model fit and reliable portfolio-level house price indices. Our results are important for banks, insurers and investors that have exposure to house price risks.
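The unrestricted hedonic model that performs well here has a compact statsmodels sketch: regress log price on characteristics plus year dummies, then exponentiate the time effects into a constant-quality index. The file and column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

sales = pd.read_csv("sydney_sales.csv")  # hypothetical transaction records
fit = smf.ols("np.log(price) ~ bedrooms + land_area + C(suburb) + C(year)",
              data=sales).fit()

# The exponentiated year effects trace out a constant-quality price index
time_effects = fit.params.filter(like="C(year)")
index = 100 * np.exp(time_effects)       # omitted base year = 100
print(index)
```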
26

Dang, Van Dan, and Khac Quoc Bao Nguyen. "Monetary policy, bank leverage and liquidity." International Journal of Managerial Finance, ahead-of-print (September 22, 2020). http://dx.doi.org/10.1108/ijmf-06-2020-0284.

Abstract:
Purpose – The study explores how banks design their financial structure and asset portfolio in response to monetary policy changes. Design/methodology/approach – The authors conduct the research design for the Vietnamese banking market during 2007–2018. To ensure robust findings, the authors employ two econometric models of static and dynamic panels, multiple monetary policy indicators and alternative measures of bank leverage and liquidity. Findings – Banks respond to monetary expansion by raising their financial leverage on the liability side and cutting their liquidity positions on the asset side. Further analysis suggests that larger banks’ financial leverage is more responsive to monetary policy changes, while smaller banks strengthen the potency of monetary policy transmission toward bank liquidity. Additionally, the authors document that lower interest rates induce a beneficial effect on the net stable funding ratio (NSFR) under Basel III guidelines, implying that banks appear to modify the composition of liabilities to improve the stability of funding sources. Originality/value – The study is the first attempt to simultaneously examine the impacts of monetary policy on both sides of bank balance sheets, across various banks of different sizes, under a multiple-tool monetary regime. Besides, understanding how banks organize their stable funding sources and illiquid assets amid monetary shocks is an innovation of this study.
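The static-panel piece of such a design could look like the following linearmodels sketch, with bank fixed effects and clustered standard errors; the dataset, variable names and single policy-rate regressor are assumptions.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("vn_banks.csv")            # hypothetical bank-quarter panel
df["quarter"] = pd.to_datetime(df["quarter"])
df = df.set_index(["bank", "quarter"])      # entity-time MultiIndex

# Static panel: bank leverage on the policy rate and bank size, bank fixed effects
model = PanelOLS.from_formula(
    "leverage ~ 1 + policy_rate + bank_size + EntityEffects", data=df)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res)  # a negative policy_rate coefficient: leverage rises as rates fall
```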
APA, Harvard, Vancouver, ISO and other citation styles
27

Marwat, Jahanzeb, Suresh Kumar Oad Rajput, Sarfraz Ahmed Dakhan, Sonia Kumari and Muhammad Ilyas. „Tax avoidance as earning game player in emerging economies: evidence from Pakistan“. South Asian Journal of Business Studies ahead-of-print, ahead-of-print (29.06.2021). http://dx.doi.org/10.1108/sajbs-10-2020-0379.

Full text of the source
Annotation:
Purpose– The current study aims to achieve two targets: first, to examine empirically whether corporate managers use tax avoidance to influence short-term profitability; second, to investigate the impact of tax avoidance on the value of firms. The tax accounts provide the opportunity to influence temporary/permanent profitability, but empirical studies have overlooked this matter, particularly in emerging economies. Design/methodology/approach– First, the authors identified unexpected fluctuations of tax avoidance and then examined whether they impact the profitability signal and firms' value. The unbalanced panel data of 189 non-financial firms for the period 2000–2018 are used for empirical analysis. Estimation biases and the consistency of results are checked by using two different econometric models, generalized least squares and two-stage least squares. Findings– The study identifies that managers manipulate the profitability signal through tax avoidance. Tax avoidance practices help in earnings management and earnings smoothing to avoid negative signals in the stock market. In line with the behavioral finance view, tax avoidance has a positive impact on current stock returns because investors focus on profitability without a detailed screening of cash flows. Originality/value– A limited number of studies investigate the use of tax avoidance for manipulation of the short-term earnings signal. Identifying gaps and limitations in the literature, this study provides invaluable insights into tax avoidance and its association with the profitability and value of firms. The findings are important for investors, managers and policymakers in making portfolio decisions and corporate policies.
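The two-stage least squares estimator mentioned in this abstract can be written out in a few lines. The sketch below implements generic 2SLS by hand on synthetic data to show why instrumenting matters when a regressor (here, a stand-in for tax avoidance) is endogenous; the variable names and the data-generating process are hypothetical, not the paper's.

```python
import numpy as np

# Minimal two-stage least squares (2SLS) on synthetic data.
# `tax_avoid` is endogenous because it shares the shock `u` with the
# outcome; `instrument` is a hypothetical exogenous instrument.
rng = np.random.default_rng(3)
n = 1_000
instrument = rng.normal(size=n)
u = rng.normal(size=n)                       # common shock -> endogeneity
tax_avoid = 0.8 * instrument + u + rng.normal(size=n)
returns = 0.5 * tax_avoid + 2.0 * u + rng.normal(size=n)  # true slope = 0.5

X = np.column_stack([np.ones(n), tax_avoid])
Z = np.column_stack([np.ones(n), instrument])

# Stage 1: project the endogenous regressor on the instruments
fitted_X = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
# Stage 2: OLS of the outcome on the fitted regressor
beta_2sls = np.linalg.lstsq(fitted_X, returns, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, returns, rcond=None)[0]

print("OLS slope (biased):     ", round(beta_ols[1], 3))
print("2SLS slope (consistent):", round(beta_2sls[1], 3))
```

Running this shows the OLS slope pulled well above the true 0.5 by the common shock, while the instrumented estimate recovers it; GLS, the study's other estimator, instead addresses non-spherical error structures rather than endogeneity.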
APA, Harvard, Vancouver, ISO and other citation styles
28

Hadley, Bree Jamila, and Sandra Gattenhof. „Measurable Progress? Teaching Artsworkers to Assess and Articulate the Impact of Their Work“. M/C Journal 14, No. 6 (22.11.2011). http://dx.doi.org/10.5204/mcj.433.

Full text of the source
Annotation:
The National Cultural Policy Discussion Paper—drafted to assist the Australian Government in developing the first national Cultural Policy since Creative Nation nearly two decades ago—envisages a future in which arts, cultural and creative activities directly support the development of an inclusive, innovative and productive Australia. "The policy," it says, "will be based on an understanding that a creative nation produces a more inclusive society and a more expressive and confident citizenry by encouraging our ability to express, describe and share our diverse experiences—with each other and with the world" (Australian Government 3). Even a cursory reading of this Discussion Paper makes it clear that the question of impact—in aesthetic, cultural and economic terms—is central to the Government's agenda in developing a new Cultural Policy. Hand-in-hand with the notion of impact comes the process of measuring progress. The Discussion Paper notes that progress "must be measurable, and the Government will invest in ways to assess the impact that the National Cultural Policy has on society and the economy" (11). If progress must be measurable, this raises questions about what arts, cultural and creative workers do, whether it is worth it, and whether they could be doing it better. In effect, the Discussion Paper pushes artsworkers ever closer to a climate in which they have to be skilled not just at making work, but at making the impact of this work clear to stakeholders. The Government, in its plans for Australia's cultural future, is clearly most supportive of artsworkers who can do this, and of the scholars, educators and employers who can best train the artsworkers of the future to do this.

Teaching Artsworkers to Measure the Impact of Their Work: The Challenges

How do we train artsworkers to assess, measure and articulate the impact of what they do? How do we prepare them to work in a climate that will—as the National Cultural Policy Discussion Paper makes clear—emphasise measuring and communicating impact across aesthetic, cultural and economic categories? For us, as educators delivering training in this area, the Discussion Paper has made this already compelling question even more pressing as we work to develop the career-ready graduates the Government seeks. Our program, the Master of Creative Industries (Creative Production & Arts Management) offered in the Creative Industries Faculty at Queensland University of Technology in Brisbane, Australia, is, like most programs in arts and cultural management in the US, UK, Europe and Australia, a three-semester postgraduate program that allows students to develop the career-ready skills required to work as managers of arts, cultural or creative organisations. That we need to train our graduates to work not just as producers of plays, paintings or recordings, but as entrepreneurial arts advocates who can measure and articulate the value of their programs to others, is not news (Hadley "Creating" 647-48; cf. Brkic; Ebewo and Sirayi; Beckerman; Sikes). Our program—which offers training in arts policy, management, marketing and budgeting followed by training in entrepreneurship and a practical project—is already structured around this necessity. The question of how to teach students this diverse skill set is, however, still a subject of debate; and the question of how to teach students to measure the impact of this work is even more difficult.
There is, of course, a body of literature on the impact of arts, cultural and creative activities, value and evaluation that has been developed over the past decade, particularly through landmark reports like Matarasso's Use or Ornament? The Social Impact of Participation in the Arts (1997) and the RAND Corporation's Gifts of the Muse: Reframing the Debate about the Benefits of the Arts (2004). There are also emergent studies in an Australian context: Madden's "Cautionary Note" on using economic impact studies in the arts (2001); case studies on arts and wellbeing by consultancy firm Effective Change (2003); case studies by DCITA (2003); the Asia Pacific Journal of Arts and Cultural Management (2009) issue on "value"; and Australia Council publications on arts, culture and economy. As Richards has explained, "evaluation is basically a straightforward concept. E-value-ation = a process of enquiry that allows a judgment of amount, value or worth to be made" (99). What makes arts evaluation difficult is not the concept, but the measurement of intangible values—aesthetic quality, expression, engagement or experience. In the literature, discussion has been plagued by debate about what is measured, what method is used, and whether subjective values can in fact be measured. Commentators note that in current practice, questions of value are still deferred because they are too difficult to measure (Bilton and Leary 52), discussed only in terms of economic measures such as market share or satisfaction which are statistically quantifiable (Belfiore and Bennett "Rethinking" 137), or addressed through un-rigorous surveys that draw only ambiguous, subjective, or selective responses (Merli 110). According to Belfiore and Bennett: "Public debate about the value of the arts thus comes to be dominated by what might best be termed the cult of the measurable; and, of course, it is those disciplines primarily concerned with measurement, namely, economics and statistics, which are looked upon to find the evidence that will finally prove why the arts are so important to individuals and societies. A corollary of this is that the humanities are of little use in this investigation" ("Rethinking" 137). Accordingly, Ragsdale states: "Arts organizations [still] need to find a way to assess their progress in … making great art that matters to people—as evidenced, perhaps, by increased enthusiasm, frequency of attendance, the capacity and desire to talk or write about one's experience, or in some other way respond to the experience, the curiosity to learn about the art form and the ideas encountered, the depth of emotional response, the quality of the social connections made, and the expansion of one's aesthetics over time." Commentators are still looking for a balanced approach (cf. Geursen and Rentschler; Falk and Dierking), one which evaluates aesthetic practices, business practices, audience response, and results for all parties, in tandem; an approach which evaluates intrinsic impacts, instrumental impacts, and the way each enables the other, with an emphasis not on the numbers but on whether we are getting better at what we are doing; and one which, of course, allows evaluators of arts, cultural and creative activities to use creative arts methods—sketches, stories, bodily movements and relationships and so forth—to provide data to inform the assessment, so they can draw not just on statistical research methods but on arts, culture and humanities research methods.
Teaching Artsworkers to Measure the Impact of Their Work: Our Approach

As a result of this contested terrain, our method for training artsworkers to measure the impact of their programs has emerged not just from these debates—which tend to conclude by declaring the need for better methods without providing them—but from a research-teaching nexus in which our own trial-and-error work as consultants to arts, cultural and educational organisations looking to measure the impact of or improve their programs has taught us what is effective. Each of us has worked as a manager of professional associations such as Drama Australia and the Australasian Association for Theatre, Drama and Performance Studies (ADSA), as a member of boards or committees for arts organisations such as Youth Arts Queensland and Young People and the Arts Australia (YPAA), and as a consultant to major cultural organisations like the Queensland Performing Arts Centre and the Brisbane Festival. The methods for measuring impact we have developed via this work are based not just on surveys and statistics, but on our own practice as scholars and producers of culture—and are therefore based in arts, culture and humanities approaches. As scholars, we investigate the way marginalised groups tell stories—particularly groups marked by age, gender, race or ability—using community, contemporary and public space performance practices (cf. Hadley, "Bree"; Gattenhof). What we have learned by bringing this sort of scholarly analysis into dialogue with a more systematised approach to articulating impact to government, stakeholders and sponsors is that there is no one-size-fits-all approach. What is needed, instead, is a toolkit, which incorporates central principles and stages, together with qualitative, quantitative and performative tools to track aesthetics, accessibility, inclusivity, capacity-building, creativity etc., as appropriate on a case-by-case basis. Whatever the approach, it is critical that the data track the relationship between the experience the artists, audience or stakeholders anticipated the activity should have, the aspects of the activity that enabled that experience to emerge (or not), and the effect of that (or not) for the arts organisation, their artists, their partners, or their audiences. The combination of methods needs to be selected in consultation with the arts organisation, and the negotiations typically need to include detailed discussion of what should be evaluated (aesthetics, access, inclusivity, or capacity), when it should be evaluated (before, during or after), and how the results should be communicated (including the difference between evaluation for reporting purposes and evaluation for program improvement purposes, and the difference between evaluation and related processes like reflection, documentary-making, or market research). Translating what we have learned through our cultural research and consultancy into a study package for students relies on an understanding of what they want from their study. This, typically, is practical career-ready skills. Students want to produce their own arts, or produce other people's arts, and most have not imagined themselves participating in meta-level processes in which they argue the value of arts, cultural and creative activities (Hadley, "Creating" 652). Accordingly, most have not thought of themselves as researchers, using cultural research methods to create reports that inform how the Australian government values, supports, and services the arts.
The first step in teaching students to operate effectively as evaluators of arts, cultural and creative activities is, then, to re-orient their expectations to include this in their understanding of what artsworkers do, what skills artsworkers need, and where they deploy these skills. Simply handing over our own methods, as "the" methods, would not enable graduates to work effectively in a climate where one size will not fit all, and where methods for evaluating impact need to be negotiated afresh for each new context.

1. Understanding the Need for Evaluation: Cause and Effect

The first step in encouraging students to become effective evaluators is asking them to map their sector, the major stakeholders, the agendas, alignments and misalignments in what the various players are trying to achieve, and the programs, projects and products through which the players are trying to achieve it. This starting point is drawn from Program Theory, which, as Joon-Yee Kwok argues in her evaluation of the SPARK National Mentoring Program for Young and Emerging Artists (2010), is useful in evaluating cultural activities. The Program Theory approach starts with a flow chart that represents relationships between activities in a program, allowing evaluators to unpack some of the assumptions the program's producers have about what activities have what sort of effect, then test whether they are in fact having that sort of effect (cf. Hall and Hall). It could, for example, start with a flow chart representing the relationship between a community arts policy, a community arts organisation, a community-devised show it is producing, and a blog it has created because it assumes the blog will allow the public to become more interested in the show the participants are creating; the evaluator then unpacks the assumptions about the sort of effect this is supposed to have, and tests whether it is in fact having this sort of effect. Masterclasses, conversations and debate with peers and industry professionals about the agendas, activities and assumptions underpinning programs in their sector allow students to look for elements that may be critical in their programs' ability to achieve (or not) an anticipated impact; in effect, to start asking about "the way things are done now, […] what things are done well, and […] what could be done better" (Australian Government 12).

2. Understanding the Nature of Evaluation: Purpose

Once students have been alerted to the need to look for cause-effect assumptions that can determine whether or not their program, project or product is effective, they are asked to consider what data they should be developing about this, why, and for whom. Are they evaluating a program to account to government, stakeholders and sponsors for the money they have spent? To improve the way it works? To use that information to develop innovative new programs in future? In other words, who is the audience? Being aware of the many possible purposes and audiences for evaluation information can allow students to be clear not just about what needs to be evaluated, but about the nature of the evaluation they will do—a largely statistical report, versus a narrative summary of experiences, emotions and effects—which may differ depending on the audience.
3. Making Decisions about What to Evaluate: Priorities

When setting out to measure the impact of arts, cultural or creative activities, many people try to measure everything, measure for the purposes of reporting, improvement and development using the same methods, or gather a range of different sorts of data in the hope that something in it will answer questions about whether an activity is having the anticipated effect, and, if so, how. We ask students to be more selective, making strategic decisions about which anticipated effects of a program, project or product need to be evaluated, whether the evaluation is for reporting, improvement or innovation purposes, and what information stakeholders most require. In addition to the concept of collecting data about critical points where programs succeed or fail in achieving a desired effect, and different approaches for reporting, improvement or development, we ask students to think about the different categories of effect that may be more or less interesting to different stakeholders. This is not an exhaustive list, or a list of things every evaluation should measure. It is a tool to demonstrate to would-be evaluators points of focus that could be developed, depending on the stakeholders' priorities, the purpose of the evaluation, and the critical points at which desired effects need to occur to ensure success. Without such framing, evaluators are likely to end up with unusable data, which become a difficulty to deal with rather than a benefit for the artsworkers, arts organisations or stakeholders.

4. Methods for Evaluation: Process

To be effective, methods for collecting data about how arts, cultural or creative activities have (or fail to have) an anticipated impact need to include conventional survey, interview and focus group style tools, and creative or performative tools such as discussion, documentation or observation. We encourage students to use creative practice to draw out people's experience of arts events—for example, observation; documentation via still images, video or audio; or the facilitated development of sketches, stories or scenes about an experience can be used to register and record people's feelings. These sorts of methods can capture what Mihaly Csikszentmihalyi calls the "flow" of experience (cf. Belfiore and Bennett, "Determinants" 232)—for example, photos of a festival space at hourly intervals or the colours a child uses to convey memory of a performance can capture the flow of movement, engagement, and experience for spectators more clearly than statistics. These, together with conventional surveys or interviews that comment on the feelings expressed, allow for a combination of quantitative, qualitative and performative data to demonstrate impact. The approach becomes arts- and humanities-based, using arts methods to encourage people to talk, write or otherwise respond to their experience in terms of emotion, connection, community, or expansion of aesthetics. The evaluator still needs to draw out the meaning of the responses through content, text or discourse analysis, and teaching students how to do a content analysis of quantitative, qualitative and performative data is critical at this stage. When teaching students how to evaluate their data, our method encourages students not just to focus on the experience, or the effect of the experience, but on the relationship between the two—the things that act as "enablers" or "determinants" (White and Hede; Belfiore and Bennett, "Determinants" passim) of effect.
This approach allows the evaluator to use a combination of conventional and creative methods to describe not just what effect an activity had, but, more critically, what enabled it to have that effect, providing a firmer platform for discussing the impact, and how it could be replicated, developed or deepened next time, than a list of effects and numbers of people who felt those effects alone.

5. Communicating Results: Politics

Often arts, cultural or creative organisations can be concerned about the image of their work an evaluation will create. The final step in our approach is to alert students to the professional, political and ethical implications of evaluation. Students learn to share their knowledge with organisations, encouraging them to see the value of reporting both correct and incorrect assumptions about the impact of their activities, as part of a continuous improvement process. Then we assist them in drawing the results of this sort of cultural research into planning, development and training documents which may assist the organisation in improving in the future. In effect, it is about encouraging organisations to take the Australian government at its word when, in the National Cultural Policy Discussion Paper, it says that measuring impact is about measuring progress—what we do well, what we could do better, and how—not just success statistics about who is most successful, as it is this that will ultimately be most useful in creating an inclusive, innovative, productive Australia.

Teaching Artsworkers to Measure the Impact of Their Work: The Impact of Our Approach

What, then, is the impact of our training on graduates' ability to measure the impact of work? Have we made measurable progress in our efforts to teach artsworkers to assess and articulate the impact of their work? The MCI (CP&AM) has been offered for three years. Our approach is still emergent and experimental. We have, though, identified a number of impacts of our work. First, our students are less fearful of becoming involved in measuring the value or impact of arts, cultural and creative programs. This is evidenced by the number who choose to do some sort of evaluation for their Major Project, a 15,000-word individual project or internship which concludes their degree. Of the 50 or so students who have reached the Major Project in three years—35 completed and 15 in planning for 2012—about a third have incorporated evaluation into their Major Project. This includes evaluation of sector, business or producing models (5), youth arts and youth arts mentorship programs (4), audience development programs (2), touring programs (4), and even other arts management training programs (1). Indeed, after internships in programming or producing roles, this work—aligned with the Government's interest in improving training of young artists, touring, audience development, and economic development—has become a most popular Major Project option. This has enabled students to work with a range of arts, cultural and creative organisations, and to share their training—their methods, their understanding of what their methods can measure, when, and how—with industry. Second, this industry-engaged training has helped graduates in securing employment.
This is evidenced by the fact that graduates have gone on to be employed with organisations they have interned with as part of their Major Project, or with other organisations, including some of Brisbane's biggest cultural organisations—local and state government departments, Queensland Performing Arts Centre, Brisbane Festival, Metro Arts, Backbone Youth Arts, and Youth Arts Queensland, amongst others. Third, graduates' contribution to local organisations and industry has increased the profile of a relatively new program. This is evidenced by the fact that it enrols 40 to 50 new students a year across the Graduate Certificate / MCI (CP&AM) programs, typically two thirds domestic students and one third international students from Canada, Germany, France, Denmark, Norway and, of course, China. Indeed, some students are now disseminating this work globally, undertaking their Major Project as an internship or industry project with an organisation overseas. In effect, our training's impact emerges not just from our research, or our training, but from the fact that our graduates disseminate our approach to a range of arts, cultural and creative organisations in a practical way. We have, as a result, expanded the audience for this approach, and the number of people and contexts via which it is being adapted and made useful. Whilst few students come into our program with a desire to do this sort of work, or even a working knowledge of the policy that informs it, on completion many consider it a viable part of their practice and career pathway. When they realise what they can achieve, and what it can mean to the organisations they work with, they do incorporate research, research consultant and government roles as part of their career portfolio, and thus make a contribution to the strong cultural sector the Government envisages in the National Cultural Policy Discussion Paper. Our work as scholars, practitioners and educators has thus enabled us to take a long-term, processual and grassroots approach to reshaping agendas for approaches to this form of cultural research, as our practices are adopted and adapted by students and industry stakeholders. Given the challenges commentators have identified in creating and disseminating effective evaluation methods in arts over the past decade, this, for us—though by no means work that is complete—does count as measurable progress.

References

Beckerman, Gary. "Adventuring Arts Entrepreneurship Curricula in Higher Education: An Examination of Present Efforts, Obstacles, and Best Practices." The Journal of Arts Management, Law, and Society 37.2 (2007): 87-112. Belfiore, Eleonora, and Oliver Bennett. "Determinants of Impact: Towards a Better Understanding of Encounters with the Arts." Cultural Trends 16.3 (2007): 225-75. ———. "Rethinking the Social Impacts of the Arts." International Journal of Cultural Policy 13.2 (2007): 135-51. Bilton, Chris, and Ruth Leary. "What Can Managers Do for Creativity? Brokering Creativity in the Creative Industries." International Journal of Cultural Policy 8.1 (2002): 49-64. Brkic, Aleksandar. "Teaching Arts Management: Where Did We Lose the Core Ideas?" Journal of Arts Management, Law and Society 38.4 (2009): 270-80. Csikszentmihalyi, Mihaly. "A Systems Perspective on Creativity." Creative Management. Ed. Jane Henry. Sage: London, 2001. 11-26. Australian Government. "National Cultural Policy Discussion Paper." Department of Prime Minister and Cabinet – Office for the Arts 2011. 1 Oct.
2011 ‹http://culture.arts.gov.au/discussion-paper›. Ebewo, Patrick, and Mzo Sirayi. "The Concept of Arts/Cultural Management: A Critical Reflection." Journal of Arts Management, Law and Society 38.4 (2009): 281-95. Effective Change and VicHealth. Creative Connections: Promoting Mental Health and Wellbeing through Community Arts Participation 2003. 1 Oct. 2011 ‹http://www.vichealth.vic.gov.au/en/Publications/Social-connection/Creative-Connections.aspx›. Effective Change. Evaluating Community Arts and Community Well Being 2003. 1 Oct. 2011 ‹http://www.arts.vic.gov.au/Research_and_Resources/Resources/Evaluating_Community_Arts_and_Wellbeing›. Falk, John H., and Lynn D. Dierking. "Re-Envisioning Success in the Cultural Sector." Cultural Trends 17.4 (2008): 233-46. Gattenhof, Sandra. "Sandra Gattenhof." QUT ePrints Article Repository. Queensland University of Technology, 2011. 1 Oct. 2011 ‹http://eprints.qut.edu.au/view/person/Gattenhof,_Sandra.html›. Geursen, Gus, and Ruth Rentschler. "Unravelling Cultural Value." The Journal of Arts Management, Law and Society 33.3 (2003): 196-210. Hall, Irene, and David Hall. Evaluation and Social Research: Introducing Small Scale Practice. London: Palgrave Macmillan, 2004. Hadley, Bree. "Bree Hadley." QUT ePrints Article Repository. Queensland University of Technology, 2011. 1 Oct. 2011 ‹http://eprints.qut.edu.au/view/person/Hadley,_Bree.html›. ———. "Creating Successful Cultural Brokers: The Pros and Cons of a Community of Practice Approach in Arts Management Education." Asia Pacific Journal of Arts and Cultural Management 8.1 (2011): 645-59. Kwok, Joon. When Sparks Fly: Developing Formal Mentoring Programs for the Career Development of Young and Emerging Artists. Masters Thesis. Brisbane: Queensland University of Technology, 2010. Madden, Christopher. "Using 'Economic' Impact Studies in Arts and Cultural Advocacy: A Cautionary Note." Media International Australia, Incorporating Culture & Policy 98 (2001): 161-78. Matarasso, Francis. Use or Ornament? The Social Impact of Participation in the Arts. Bournes Greens, Stroud: Comedia, 1997. McCarthy, Kevin F., Elizabeth H. Ondaatje, Laura Zakaras, and Arthur Brooks. Gifts of the Muse: Reframing the Debate about the Benefits of the Arts. Santa Monica: RAND Corporation, 2004. Merli, Paola. "Evaluating the Social Impact of Participation in Arts Activities." International Journal of Cultural Policy 8.1 (2002): 107-18. Muir, Jan. The Regional Impact of Cultural Programs: Some Case Study Findings. Communications Research Unit - DCITA, 2003. Ragsdale, Diana. "Keynote - Surviving the Culture Change." Australia Council Arts Marketing Summit. Australia Council for the Arts: 2008. Richards, Alison. "Evaluation Approaches." Creative Collaboration: Artists and Communities. Melbourne: Victorian College of the Arts, University of Melbourne, 2006. Sikes, Michael. "Higher Education Training in Arts Administration: A Millennial and Metaphoric Reappraisal." Journal of Arts Management, Law and Society 30.2 (2000): 91-101. White, Tabitha, and Anne-Marie Hede. "Using Narrative Inquiry to Explore the Impact of Art on Individuals." Journal of Arts Management, Law, and Society 38.1 (2008): 19-35.
APA, Harvard, Vancouver, ISO and other citation styles
29

Hackett, Lisa J. „Designing for Curves“. M/C Journal 24, No. 4 (12.08.2021). http://dx.doi.org/10.5204/mcj.2795.

Full text of the source
Annotation:
Retro fashion trends continue to be a feature of the contemporary clothing market, providing alternate configurations of womanhood from which women can fashion their identities (Hackett). This article examines the design attributes of 1950s-style clothing that some women choose to wear over more contemporary styles. The 1950s style is located in a distinctive hourglass design that features a small waist with distinct bust and hips. This article asks: what are the design features of this style that lead women to choose it over contemporary fashion? Taking a material culture approach, it firstly looks at the design features of the garments and the way they are marketed. Secondly, it draws upon interviews and a survey conducted with women who wear these clothes. Thirdly, it investigates the importance of this silhouette to the women who wear it, through the key concepts of body shape and size. Clothing styles of the 1950s were influenced by the work of Christian Dior, particularly his "New Look" collection of 1947. Dior’s design focus was on emphasising female curves, featuring a full bust and flowing skirts cinched in with a narrow waist (Dior), creating an exaggerated hourglass shape. The look was in sharp contrast to fashion designs of the Second World War and offered a different conceptualisation of the female body, which was eagerly embraced by many women who had grown weary of rationing and scarcity. Post-1950s, fashion designers shifted their focus to a slimmer ideal, often grounded in narrow hips and a smaller bust. Yet not all women suit this template; some simply do not have the right body shape for this ideal. Additionally, the intervening years between the 1950s and now have seen an incremental increase in body sizes, so that a slender figure no longer represents many women. High-street brand designers, such as Review, Kitten D’Amour and Collectif, have recognised these issues, and in searching for an alternative conceptualisation of the female body have turned to the designs of the 1950s for their inspiration. The base design of wide skirts which emphasise the relative narrowness of the waist is arguably better suited to many women today, both in terms of fit and shape. Using a material culture approach, this article examines these design features to uncover why women choose this style over more contemporary designs.

Method

This article draws upon a material culture study of 1950s-designed clothes and of why some contemporary women choose to wear 1950s-style clothing as everyday dress. Material culture is "the study through artefacts of the beliefs—values, ideas, attitudes and assumptions—of a particular community or society at a given time" (Prown 1). The premise is that a detailed examination of a culture’s relationship with its objects cannot be undertaken without researching the objects themselves (Hodder 174). Thus the objects are analysed, and the culture is surveyed about its relationship with those objects. In this study, analysis was conducted in March and September 2019 on the 4,286 items of clothing available for sale by the 19 brands that the interview subjects wear, noting the design features that mark the style as "1950s" or "1950s-inspired". Further, a quantitative analysis of the types of clothing (e.g. dress, skirt, trousers, etc.) was undertaken to reveal where the design focus lay. A secondary analysis of the design brands was also undertaken, examining the design elements they used to market their products.
In parallel, two cohorts of women who wear 1950s-style clothing were examined to ascertain the social meanings of their clothing choices. The first group comprised 28 Australian women who participated in semi-structured interviews. The second cohort responded to an international survey undertaken by 229 people who sew and wear historic clothing. The survey aimed to reveal the meaning of the clothes to those who wear them. Both sets of participants were found through advertising the study on Facebook in 2018. The interview subjects were selected with the requirement that they self-identified as wearing 1950s-style clothing on a daily basis. The survey examined home dressmakers who made historic-style clothing and asked them a range of questions regarding their sewing practice and the wearing of the clothes.

Literature Review

While subcultures have adopted historic clothing styles as part of their aesthetic (Hebdige), the more mainstream wearing of clothing from alternative eras as an everyday fashion choice has its roots in the hippy movement of the late 1960s (Cumming 109). These wearers are not attempting to "'rebel' against society, nor … explicitly 'subvert' items that are offered by mainstream culture" (Veenstra and Kuipers 362-63); rather, they are choosing styles that fit in with contemporary styles yet are drawn from a different design ideal. Wearers of vintage clothing often feel that modern clothing is designed for an ideal body size or shape which differs markedly from their own (Smith and Blanco 360-61). The fashion industry has long been criticised for its adherence to an ultra-thin body shape, and it is only in the last decade or so that small changes have begun to be made (Hackett and Rall 270-72). While plus-size models have begun to appear in advertising and on catwalks, and fashion brands have begun to employ plus-sized fit models, the shift to inclusivity has been limited, as the models persistently reflect the smaller end of the "plus" spectrum and continue to have slim, hourglass proportions (Gruys 12-13). The overwhelming majority of clothing offered for sale remains within the normative AU8-16 range. This range is commonly designated "standard", with any sizes above it "plus-sized". Yet women around the world do not fit neatly into this range, and the average woman in countries such as Australia and the United States is at the upper edge of normative size ranges. In Australia, the average woman is around an AU16 (Olds) and in the US she is in the lower ranges of plus sizes (Gruys), which calls into question the validity of the term "plus-sized". Closely related to body size, but distinctly different, is the concept of body shape. Body shape refers to the relative dimensions of the body, and within fashion this tends to focus on the waist, hips and bust. Where clothing from the 1960s onwards has generally presented a slim silhouette, 1950s-style clothing offers an arguably different body shape. Christian Dior’s 1947 "New Look" design collection came to dominate the style of the 1950s. Grounded in the oversized skirts, cinched waists, full busts, and curved lines of mid-nineteenth-century styles, Dior sought to design for "flower-like women" (Dior 24) who were small and delicate, yet had full hips and busts. While Dior’s iteration was an exaggerated shape that required substantial body structuring through undergarments, the pronounced hourglass design became identified with 1950s-style clothing.
By the 1960s the ideal female body shape had changed dramatically, as demonstrated by the prominent model of that decade, the gamine Twiggy. For the next few decades, iterations of this hyper-thin design ideal accelerated, and fashion models in magazines consistently decreased in size (Sypeck et al.) as fashion followed trends such as "heroin chic", culminating in the "size zero" scandals that saw models' BMI and waist-to-height ratios plummet to dangerously unhealthy levels (Hackett and Rall 272-73; Rodgers et al. 287-88). The majority of the fashion industry, it appears, is not designing for the average woman. Discrimination against "fat" people leads to industry practices that actively exclude them from product offerings (Christel). This has been variously located as being entrenched anywhere from the top of the industry (Clements) to the entry level, where design students are taught their trade using size 8 models (Rutherford-Black et al.). By restricting their designs in terms of size and shape offering, clothing brands collectively restrict the ability of people whose bodies fall outside that arbitrary range, but who are nonetheless eager to participate in fashion, to fashion their identity (Church Gibson; Peters). This resulting gap provides an opportunity for brands to differentiate their product offering with alternate designs that cater to this group.

Findings

1950s-Style Clothing

There are several key styles that could arguably be identified as "1950s"; however, one of the findings in this study was that the focus of the designs was on the voluptuous style of the 1950s associated with Dior's New Look, featuring a cinched-in waist, full bust, and predominantly wide, flowing skirts. A count of the garments available for sale on the websites of these brands found that the focus is overwhelmingly on dresses (64% of the 4,286 garments on offer), with skirts and bifurcated garments being marketed in far smaller numbers, 10% (679) and 7% (467) respectively. The majority of the skirts were wide, with just a few being narrow, often in a hobble-skirt style. Both styles emphasise wide hips and narrow waists. The high number of dresses with voluminous skirts suggests that this design aesthetic is popular amongst their customers; these women are seeking designs that are based on a distinct, if exaggerated, female form. Many of the brands surveyed have an extended size collection outside the normative AU8-16. Sizing standards have ceased to be universally used by clothing designers, with brands often creating their own size scales, making it difficult to make direct size comparisons between the brands (Hackett and Rall 267). Despite this, the analysis found that many of these brands have extended their sizing ranges well into the plus-sized bracket, with one brand going as high as a UK32. In most brands, the exact same designs are available throughout the sizes rather than in a separate dedicated plus-size range. Only one design brand had a dedicated separate "plus-size" range where the clothing differed from their "standard-sized" ranges. Further, many of the brands did not use terminology separating sizes into "standard" or "plus-size". Beyond the product offering, this analysis also looked at the size of the models that design brands use to market their clothes. Four brands did not use models, displaying the clothes in isolation.
Eight of the brands used a range of models of different sizes to advertise their clothes, reflecting the diversity of the product range. Seven of the brands did not, preferring to use models of smaller size, usually around a size AU8, with a couple using the occasional model who was a size AU12.

Body Shape

There were two ideal body shapes in the 1950s. The first was a voluptuous hourglass shape with a large bust and hips and a small cinched-in waist. The second was more slender, as exemplified by women such as Grace Kelly and Audrey Hepburn; this was "a subdued and classy sensuality, often associated with the aristocrat and high fashion" (Mazur). It is the first that has come to be the silhouette most commonly associated with the decade among this cohort, and it is this conceptualisation of a curvy ideal that participants in this study referenced when discussing why they wear these clothes: I'm probably like a standard Australia at 5'10" but I am curvy. A lot of corporate clothes I don't think are really made to fit women in the way they probably could and they could probably learn a bit from looking back a bit more at the silhouettes for you know, your more, sort of average women with curves. (Danielle) The 50s styles suit my figure and I wear that style on an everyday basis. (Survey Participant #22) As these women note, this curvy ideal aligns with their own figures. There was also a sense that the styles of the 1950s were more forgiving, and thus suited a wider range of body shapes, than more contemporary styles: these are the styles of clothes I generally wear as the 50’s and 60’s styles flatter the body and are flattering to most body types. (Survey Participant #213) In contrast, some participants chose the style because it created the illusion of a body shape they did not naturally possess. For example, Emma stated: I’m very tall and I found that modern fast fashion is often quite short on me whereas if it’s either reproduction or vintage stuff it tends to suit me better in length. It gives me a bit of shape; I’m like a string bean, straight up and down. (Emma) For others it allows them to control or mask elements of their body: okay, so the 1950s clothes I find give you a really feminine shape. They always consider the fact that you have got a waist. And my waist [inaudible]. My hips I always want to hide, so those full skirts always do a good job at hiding those hips. I feel… I feel pretty in them. (Belinda) Underlying both these statements is the desire to create a feminine silhouette, which in turn increases feelings of being attractive. This reflects Christian Dior’s aim to ground his designs in femininity. This locating of the body ideal in exaggerated curves, and its equation with a sense of femininity, was reflected by a number of participants. The sensory appeal of 1950s designs led to one participant feeling "more feminine because of that tiny waist and heels on" (Rosy). This reflects Dior’s design aim to create highly feminine clothing styles. Another participant mused upon this in more detail: I love how pretty they make me feel. The tailoring involved to fit your individual body to enhance your figure, no matter your size, just amazes me. In by-gone eras, women dressed like women, and men like men ... not so androgynous and sloppy like today. I also like the idea of teaching the younger generation about history ... and debunking a lot of information and preconceived notions that people have. But most of all ... THE PRETTY FACTOR!
(Survey Participant #130) Thus the curvy style is conceived to be distinctly feminine and thus a clear marker of the female identity of the person wearing the clothes.

Body Size

Participants were also negotiating the relative size of their bodies when it came to apparel choice. Body size is closely related to body shape, and participants often negotiated both when choosing which style to wear. For example, Skye stated how "my bust and my waist and my hips don’t fit a standard [size]", indicating that, for her, both issues impacted on her ability to wear contemporary clothing. Ashleigh concurred, stating: I was a size 8, but I was still a very hourglass sized 8. So modern stuff doesn’t even work with me when I’m skinnier and that shape. (Ashleigh) Body size is not just about measurements around the hips and torso; it also affects the ability to choose clothing for those at the higher and lower ends of the height spectrum. Gabrielle discussed her height, saying: so I’m really tall, got quite big hips … . So I quite like that it cinches the waist a bit, goes over the hips and hides a little bit [laughs] I don’t know … I really like that about it I guess. (Gabrielle) For Gabrielle, her height creates a further dimension for her to negotiate. In this instance, contemporary fashion is too short for her to feel comfortable wearing it. The longer skirts of 1950s-style clothing provide the desired coverage of her body. The curvy contours of 1950s-designed clothing were found by some participants to be compatible with their body size, particularly for those in the larger size ranges. The following statement typifies this point of view: the later styles are mostly small waist/full skirt that flatters my plus size figure. I also find them the most romantic/attractive. (Survey Participant #74) The desire to feel attractive in clothes when negotiating body size reflects the concerns participants had regarding shape. For this cohort, 1950s-style clothing presents a solution to these issues.

Discussion

The clothing designs of the 1950s focus on a voluptuous body shape that is in sharp contrast to the thin ideal of contemporary styles. The women in this study state that contemporary designs simply do not suit their body shape, and thus they have consciously sought out a style that is designed along lines that do. The heavy reliance on skirts and dresses that cinch at the waist and flare wide over the hips suggests that the base silhouette of 1950s-designed clothing is flattering for a wide range of female bodies, in respect to both shape and size. The style is predominantly designed around flared skirts, which serves to reduce the fit focus to the waist and bust; thus women do not have to negotiate hip size when purchasing or wearing clothes. By removing one of the three major fit points in clothing, the designers are able to cater to a wider range of body shapes. This is supported in the interviews with women across the spectrum of body shapes, from those who note that they can "hide their wider hips" to those women who use the style to create an hourglass shape. The wider range of sizes available in the 1950s-inspired clothing brands suggests that the flexibility of the style also caters to a wide range of body sizes. Some of the brands also market their clothes using models with diverse body sizes. Although this is, in some cases, limited to the lower end of the "plus"-size bracket, others did include models who were at the higher end.
This suggests that some of these brands recognise the market potential of this style and that their customers are welcoming of body diversity. The focus on a waist that is small relative to the hips and bust also locates the bigger body in the realm of femininity, a trait that many of the respondents felt these clothes embodied. The focus on the perceived femininity of this style, at any size, is in contrast to mainstream fashion. This suggests that contemporary fashion designers are largely continuing to insist on a thin body ideal and are therefore failing to cater for a considerable section of the market. Rather than attempting to get their bodies to fit into fashion, these women are finding alternate styles that fit their bodies. The fashion brands analysed did not create an artificial division of sizing into "standard" and "plus" categories, reinforcing the view that these brands are size-inclusive and the styles are meant for all women. This poses the question of why the fashion industry continues its downward trajectory in body size.

Conclusion

The design of 1950s-inspired clothing provides an alternate silhouette through which women can fashion their identity. Designers of this style are catering to a concept of feminine beauty alternative to the one provided by contemporary fashion. Analysis of the design elements reveals that the focus is on a narrow waist below a full bust, with wide flowing skirts. In addition, women in this study felt these designs catered for a wide variety of body sizes and shapes. The women interviewed and surveyed in this study feel that designers of contemporary styles do not cater for their body size and/or shape, whereas 1950s-style clothing provides a silhouette that flatters them. Further, they felt the designs achieved femininity through the accentuation of feminine curves. The dominance of the dress, a highly gendered garment, within this modern iteration of 1950s style underscores this association with femininity. This reflects Christian Dior’s design ethos, which placed emphasis on female curves. This was to become one of the dominating influences on the clothing styles of the 1950s, and it still resonates today in the clothing choices of the women in this study.

References

Christel, Deborah A. "It's Your Fault You're Fat: Judgements of Responsibility and Social Conduct in the Fashion Industry." Clothing Cultures 1.3 (2014): 303-20. DOI: 10.1386/cc.1.3.303_1. Church Gibson, Pamela. "'No One Expects Me Anywhere': Invisible Women, Ageing and the Fashion Industry." Fashion Cultures: Theories, Explorations and Analysis, eds. Stella Bruzzi and Pamela Church Gibson. Routledge, 2000. 79-89. Clements, Kirstie. "Former Vogue Editor: The Truth about Size Zero." The Guardian, 6 July 2013. <https://www.theguardian.com/fashion/2013/jul/05/vogue-truth-size-zero-kirstie-clements>. Cumming, Valerie. Understanding Fashion History. Batsford, 2004. Dior, Christian. Dior by Dior: The Autobiography of Christian Dior. Trans. Antonia Fraser. V&A Publishing, 1957 [2018]. Gruys, Kjerstin. "Fit Models, Not Fat Models: Body Inclusiveness in the US Fit Modeling Job Market." Fat Studies (2021): 1-14. Hackett, L.J. "'Biography of the self': Why Australian Women Wear 1950s Style Clothing." Fashion, Style and Popular Culture 16 Apr. 2021. <http://doi.org/10.1386/fspc_00072_1>. Hackett, L.J., and D.N. Rall.
“The Size of the Problem with the Problem of Sizing: How Clothing Measurement Systems Have Misrepresented Women’s Bodies from the 1920s – Today.” Clothing Cultures 5.2 (2018): 263-83. DOI: 10.1386/cc.5.2.263_1. Hebdige, Dick. Subculture: The Meaning of Style. Methuen & Co Ltd, 1979. Hodder, Ian. The Interpretation of Documents and Material Culture. Sage, 2012. Mazur, Allan. "US Trends in Feminine Beauty and Overadaptation." Journal of Sex Research 22.3 (1986): 281-303. Olds, Tim. "You’re Not Barbie and I’m Not GI Joe, So What Is a Normal Body?" The Conversation, 2 June 2014. Peters, Lauren Downing. "You Are What You Wear: How Plus-Size Fashion Figures in Fat Identity Formation." Fashion Theory 18.1 (2014): 45-71. DOI: 10.2752/175174114X13788163471668. Prown, Jules David. "Mind in Matter: An Introduction to Material Culture Theory and Method." Winterthur Portfolio 17.1 (1982): 1-19. DOI: 10.1086/496065. Rodgers, Rachel F., et al. "Results of a Strategic Science Study to Inform Policies Targeting Extreme Thinness Standards in the Fashion Industry." International Journal of Eating Disorders 50.3 (2017): 284-92. DOI: 10.1002/eat.22682. Rutherford-Black, Catherine, et al. "College Students' Attitudes towards Obesity: Fashion, Style and Garment Selection." Journal of Fashion Marketing and Management 4.2 (2000): 132-39. Smith, Dina, and José Blanco. "'I Just Don't Think I Look Right in a Lot of Modern Clothes…': Historically Inspired Dress as Leisure Dress." Annals of Leisure Research 19.3 (2016): 347-67. Sypeck, Mia Foley, et al. "No Longer Just a Pretty Face: Fashion Magazines' Depictions of Ideal Female Beauty from 1959 to 1999." International Journal of Eating Disorders 36.3 (2004): 342-47. DOI: 10.1002/eat.20039. Veenstra, Aleit, and Giselinde Kuipers. "It Is Not Old-Fashioned, It Is Vintage, Vintage Fashion and the Complexities of 21st Century Consumption Practices." Sociology Compass 7.5 (2013): 355-65. DOI: 10.1111/soc4.12033.
APA, Harvard, Vancouver, ISO and other citation styles
30

Makeham, Paul Benedict, Bree Jamila Hadley and Joon-Yee Bernadette Kwok. „A "Value Ecology" Approach to the Performing Arts“. M/C Journal 15, No. 3 (03.05.2012). http://dx.doi.org/10.5204/mcj.490.

Full text of the source
Annotation:
In recent years ecological thinking has been applied to a range of social, cultural, and aesthetic systems, including performing arts as a living system of policy makers, producers, organisations, artists, and audiences. Ecological thinking is systems-based thinking which allows us to see the performing arts as a complex and protean ecosystem; to explain how elements in this system act and interact; and to evaluate its effects on Australia’s social fabric over time. According to Gallasch, ecological thinking is “what we desperately need for the arts.” It enables us to “defeat the fragmentary and utilitarian view of the arts that dominates, to make connections, to establish overviews of the arts that can be shared and debated” (Gallasch NP). The ecological metaphor has featured in debates about the performing arts in Brisbane, Australia, in the last two or three years. A growing state capital on Australia’s eastern seaboard, Brisbane is proud of its performing arts culture. Its main theatre organisations include the state flagship Queensland Theatre Company; the second major presenter of adapted and new text-based performances La Boite Theatre Company; venues which support local and touring performances such as the Judith Wright Centre for Contemporary Arts and the Brisbane Powerhouse; emerging talent incubator Metro Arts; indigenous companies like Kooemba Jdarra; independent physical theatre and circus companies such as Zen Zen Zo and Circa; and contemporary play-producing company 23rd Productions (cf. Baylis 3). Brisbane aspires to be a cultural capital in Australia, Australasia, and the Asia Pacific (Gill). Compared to Australia’s southern capitals Sydney and Melbourne, however, Brisbane does have a relatively low level of performing arts activity across traditional and contemporary theatre, contemporary performance, musicals, circus, and other genres of performance. It has at times been cast as a piecemeal, potentially unsustainable arts centre prone to losing talent to other states. In 2009, John Baylis took up these issues in Mapping Queensland Theatre, an Arts Queensland-funded survey designed to map practices in Brisbane and in Queensland more broadly, and to provide a platform to support future policy-making. This report excited debate amongst artists who, whilst accepting the tenor of Baylis’s criticisms, also lamented the lack of nuanced detail and contextualised relationships its map of Queensland theatre provided. In this paper we propose a new approach to mapping Brisbane’s and Queensland’s theatre that extends Baylis’s “value chain” into a “value ecology” that provides a more textured picture of players, patterns, relationships, and activity levels. A “value chain” approach emphasises linear relationships and gaps between production, distribution, and consumption in a specific sector of the economy. A “value ecology” approach goes further by examining a complex range of rhizomatic relationships between production, distribution, and consumption infrastructure and how they influence each other within a sector of the economy such as the performing arts. Our approach uses a “value ecology” model adapted from Hearn et al. and Cherbo et al. to map and interpret information from the AusStage performing arts database, the Australian Bureau of Statistics, and other sources such as previews, reviews, and an ongoing local blogosphere debate. 
Building upon Baylis’s work, our approach produces literal and conceptual maps of Queensland’s performing arts as they change over time, with analysis of support, infrastructure, and relationships amongst government, arts organisations, artists, and audiences. As debate on Mapping Queensland Theatre gives way to more considered reflection, and as Baylis develops a follow-up report, our approach captures snapshots of Queensland’s performing arts before, during, and after such policy interventions. It supports debate about how Queensland artists might manage their own sustainability, their own ability to balance artistic, cultural, and economic factors that influence their work in a way that allows them to survive long term, and allows policy makers, producers, and other players to better understand, articulate, assess, and address criticisms.

The Ecological Metaphor

In recent years a number of commentators have understood the performing arts as an “ecology,” a system characterised by interacting elements, engagements, flows, blockages, breaks, and breakthroughs whose “health” (synonymous in this context with sustainability) depends on relationships between players within and without the system. Traditionally, performing arts policies in Australia have concentrated on singular elements in a system. They have, as Hunt and Shaw argue, “concentrate[d] on individual companies or an individual artist’s practice rather than the sector as a whole” (5, cf. 43). The focus has been on how to structure, support, and measure the success—the aesthetic and social benefits—of individual training institutions, artists, administrators, and arts organisations. The “health” of singular elements has been taken as a sign of the “health” of the system. An ecologies approach, by contrast, concentrates on engagements, energies, and flows as signs of health, and thus sustainability, in a system. Ecological thinking enables policy makers, practitioners, and scholars to go beyond debate about the presence of activity, the volume of activity, and the fate of individual agents as signs of the health or non-health of a system. In an ecologies context, level of activity is not the only indicator of health, and low activity does not necessarily equate with instability or unsustainability. An ecological approach is critical in Brisbane, and in Queensland more broadly, where attempts to replicate the nature or level of activity in southern capitals are not necessarily the best way to shore up the “health” of our performing arts system in our own unique environment. As the locus of our study, Queensland is unique. While Queensland has 20% of Australia’s population (OESR; ABS “Population Projections”), and is regularly recognised as a rapidly growing “lifestyle superstate” which values innovation, creativity, and cultural infrastructure (Cunningham), it is still home to significantly fewer than 20% of Australia’s performing arts producers, and many talented people continue to migrate south to pursue career opportunities (Baylis 4, 28). An ecologies approach can cut through oft-cited anxieties about artist, activity, and audience levels in Brisbane, and in Queensland, and create new ideas about what a “healthy” local performing arts sector might look like. This might start to infuse some of the social media commentary that currently tends to emphasise the gaps in the sector. Ecologies are complex systems.
So, as Costanza says, when we consider ecosystem health, we must consider the overall performance of the system, including its ability to deal with “external stress” (240) from macro-level political, legal, social, cultural, economic, or technological currents that change the broader society this particular sector or ecosystem sits within. In Brisbane, there is a growing population and a desire to pursue a cultural capital tag, but the distinctive geographic, demographic, and behavioural characteristics of Brisbane’s population—and the associated ‘stresses’, conditions, or constraints—mean that striving to replicate patterns of activity seen in Sydney or Melbourne may not be the straightest path to a “healthy” or “sustainable” sector here. The attitudes of the players and the pressures influencing the system are different, so this may be like comparing rainforests with deserts (Costanza), and forgetting that different elements and engagements are in fact “healthy” in different ecosystems. From an ecologies point of view, policy makers and practitioners in Brisbane and in Queensland more broadly might be well advised to stop trying to match Sydney or Melbourne, and to instead acknowledge that a “healthy” ecosystem here may look different, and so generate policy, subsidy, and production systems to support this. An ecological approach can help determine how much activity is in fact necessary to ensure a healthy and sustainable local performing arts sector. It can, in other words, provide a fresh approach that inspires new ideas and strategies for sector sustainability. Brisbane, Baylis and the Blogosphere Debate The ecological metaphor has clearly captured the interest of policy makers as they consider how to make Queensland’s performing arts more sustainable and successful. For Arts Queensland: The view of the sector as a complex and interdependent ‘ecosystem’ is forging new thinking, new practices and new business models. Individual practitioners and organisations are rethinking where they sit within the broader ecology, and what they contribute to the health and vitality of the sector, and how they might address the gaps in services and skills (12). This view informed the commissioning of Mapping Queensland Theatre, an assessment of Queensland’s theatre sector which offers a framework for allocation of resources under the Queensland Arts & Cultural Sector Plan 2010-2013. It also offers a framework for negotiation with funded organisations to ensure “their activities and focus support a harmonious ecology” (Baylis 3) in which all types and levels of practice (emerging, established, touring, and so on) are functioning well and are well represented within the overall mix of activities. Utilising primary and secondary survey sources, Mapping Queensland Theatre seeks: to map individuals, institutions, and organisations who have a stake in developing Queensland’s professional theatre sector; and to apply a “value chain” model of production from supply (training, creation, presentation, and distribution) to demand (audiences) to identify problems and gaps in Queensland’s professional theatre sector and recommend actions to address them. The report is critical of the sector. 
Baylis argues that “the context for great theatre is not yet in place in Queensland … therefore works of outstandingly high quality will be rare” (28). Whilst acknowledging a lack of ready answers about how much activity is required in a vibrant theatre culture, Baylis argues that “comparisons are possible” (27) and he uses various data sets to compare numbers of new Australian productions in different states. He finds that “despite having 20% of the Australian population, [Queensland] generates a dramatically lower amount of theatre activity” (4, cf. 28). The reason, according to Baylis (20, 23, 25, 29, 32, 40-41, 44), is that there are gaps in the “value chain” of Queensland theatre, specifically in: support for the current wave of emerging and independent artists; space for experimentation; connections between artists, companies, venues and festivals, between and within regional centres, and between Queensland companies and their (inter)national peers; professional development for producers to address the issue of market distribution; and audience development. “Queensland lacks a critical mass of theatre activity to develop a sustainable theatre culture” (48), and the main gap is in pathways for independent artists. Quality new work does not emerge, energy dissipates, and artists move on. The solution, for Baylis, is to increase support for independent companies (especially via co-productions with mainstage companies), to improve (inter)national touring, and to encourage investment in audience development. Naturally, Queensland’s theatre makers responded to this report. Responses were given, for example, in inaugural speeches by new Queensland Theatre Company director Wesley Enoch and new La Boite Theatre Company director David Berthold, in the media, and in blogosphere commentary on a range of articles on Brisbane performing arts in 2010. The blogosphere debate in particular raged for months and warrants more detailed analysis elsewhere. For the purposes of this paper, though, it is sufficient to note that blogosphere debate about the health of Queensland theatre culture acknowledged many of the deficits Baylis identified and called for: more leadership; more government support; more venues; more diversity; more audience, especially for risky work, and better audience engagement; and more jobs and retention of artists. Whilst these responses endorse Baylis’s findings and companies have since conceived programs that address Baylis’s criticisms (QTC’s introduction of a Studio Season and La Boite’s introduction of an Indie program in 2010, for example), a sense of frustration also emerged. Some, like former QTC Chair Kate Foy, felt that “what’s really needed in the theatre is a discussion that breaks out from the old themes and encourages fresh ideas—approaches to solving whatever problems are perceived to exist in ‘the system’.” For commentators like Foy the blogosphere debate enacted a kind of ritual rehearsal of an all-too-familiar set of concerns: inadequate and ill-deployed funding, insufficient venues, talent drain, and an impoverished local culture of theatre going. “Value Chains” versus “Value Ecologies” Why did responses to this report demand more artists, more arts organisations, more venues, and more activities? Why did they repeat demands for more government-subsidised venues, platforms, and support rather than drive toward new seed- or non-subsidised initiatives?
At one level, this is to do with the report’s claims: it is natural for artists who have been told quality work is “rare” amongst them to point to lack of support to achieve success. At another level, though, this is because—as useful as it has been for local theatre makers—Baylis’s map is premised on a linear chain from training, to first productions, to further developed productions (involving established writers, directors, designers and performers), to opportunities to tour (inter)nationally, etc. It provides a linear image of a local performing arts sector in which there are individuals and institutions with potential, but specific gaps in the production-distribution-consumption chain that make it difficult to deliver work to target markets. It emphasises gaps in the linear pathway towards “stability” of financial, venue, and audience support and thus “sustainability” over a whole career for independent artists and the audiences they attract. Accordingly, asking government to plug the gaps through elements added to the system (venues, co-production platforms, producer hubs, subsidy, and entrepreneurial endeavours) seems like a logical solution. Whilst this is true, it does not tell the whole story. To generate a wider story, we need to consider: what the expected elements in a “healthy” ecosystem would be (e.g. more versus alternative activity); what other aesthetic, cultural, or economic pressures affect the “health” of an ecosystem; and why practices might need to cycle, ebb, and flow over time in a “healthy” ecosystem. A look at the way La Boite works before, during, and after Baylis’s analysis of Brisbane theatre illustrates why attention to these elements is necessary. A long-running company which has made the transition from amateur to professional to being a primary developer of new Australian work in its distinctive in-the-round space, La Boite has recently shifted its strategic position. A focus on text-based Australian plays has given way to adapted, contemporary, and new work in a range of genres; regular co-productions with companies in Brisbane and beyond; and an “Indie” program that offers other companies a venue. This could be read as a response to Baylis’s recommendation: the production-distribution-consumption chain gap for Brisbane’s independents is plugged, the problem is solved, the recommendation has led to the desired result. Such a reading might, though, overlook the range of pressures beyond Brisbane, beyond Queensland, and beyond the Baylis report that drive—and thus help, hinder, or otherwise effect—the shift in La Boite’s program strategies. Consider the fact that La Boite recently lost its Australia Council funding, that La Boite, like all theatre companies, needs co-productions to keep its venue running as costs increase, or that La Boite has rebranded to appeal to younger audiences interested in postdramatic, do-it-yourself or junkyard style aesthetics. These factors all influence what La Boite might do to sustain itself, and more importantly, what its long-term impact on Brisbane’s theatre ecology will be. To grasp what is happening here, and get beyond repetitive responses to anxieties about Brisbane’s theatre ecology, detail is required not simply on whether programs like La Boite’s “plugged the gap” for independent artists, but on how they had both predicted and unpredicted effects, and how other factors influenced the effects. What is needed is to extend mapping from a “value chain” to a full “value ecology”. This is something Hearn et al. have called for.
A value chain suggests a “single linear process with one stage leading to the next” (5). It ignores the environment and other external enablers and disregards a product’s relationship to other systems or products. In response they prefer a “value creating ecology” in which the “constellation of firms are [sic] dynamic and value flow is multi-directional and works through clusters of networks” (6). Whilst Hearn et al. emphasise “firms” or companies in their value creating ecology, a range of elements—government, arts organisations, artists, audiences, and the media as well as the aesthetic, social, and economic forces that influence them—needs to be mapped in the value creating ecology of the performing arts. Cherbo et al. provide a system of elements or components which, adapted for a local context like Brisbane or Queensland, can better form the basis of a value ecology approach to the way a specific performing arts community works, adapts, changes, breaks down, or breaks through over time. Figure 1 – Performing Arts Sector Map (adapted from Cherbo et al. 14) Here, the performing arts sector is understood in terms of core artistic workers, companies, a constellation of generic and sector-specific support systems, and wider social contexts (Cherbo et al. 15). Together, the shift from “value chain” to “value ecology” that Hearn et al. advocate, and the constellation of ecology elements that Cherbo et al. emphasise, bring a more detailed, dynamic range of relations into play. These include “upstream” production infrastructure (education, suppliers, sponsors), “downstream” distribution infrastructure (venues, outlets, agents), and overall public infrastructure. As a framework for mapping “value ecology”, this model offers a more nuanced perspective on production, distribution, and consumption elements in an ecology. It allows for analysis of the impact of interventions in dozens of different areas, from dozens of perspectives, and thus provides a more detailed picture of players, relationships, and results to support both practice and policy making around practice. An Aus-e-Stage Value Ecology To provide the more detailed, dynamic image of local theatre culture that a value ecology approach demands—to show players, relations between players, and context in all their complexity—we use the Aus-e-Stage Mapping Service, an online application that maps data about artists, arts organisations, and audiences across cityscapes/landscapes. We use Aus-e-Stage with data drawn from three sources: the AusStage database of over 50,000 entries on Australian performing arts venues, productions, artists, and reviews; the Australian Bureau of Statistics (ABS) data on population; and the Local Government Area (LGA) maps the ABS uses to cluster populations. Figure 2 – Using AusStage Interface Figure 3 – AusStage data on theatre venues laid over ABS Local Government Area Map Figure 4 – Using Aus-e-Stage / AusStage to zoom in on Australia, Queensland, Brisbane and La Boite Theatre Company, and generate a list of productions, dates and details Aus-e-Stage produces not just single maps, but a sequential series of snapshots of production ecologies, which visually track who does what when, where, with whom, and for whom.
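At its core, this overlay is a spatial join of performance records onto population geography. The following is a minimal sketch of that step, not the actual Aus-e-Stage implementation: it assumes a hypothetical CSV export of AusStage venue records and an ABS LGA boundary file, with invented file names and column names, and uses Python with the pandas and geopandas libraries.

# Illustrative sketch only: file paths and column names ("lon", "lat",
# "LGA_NAME", "population") are hypothetical, not the AusStage schema.
import pandas as pd
import geopandas as gpd

# AusStage venue records, assumed exported to CSV with coordinates
venues = pd.read_csv("ausstage_venues.csv")
venue_points = gpd.GeoDataFrame(
    venues,
    geometry=gpd.points_from_xy(venues["lon"], venues["lat"]),
    crs="EPSG:4326",
)

# ABS Local Government Area boundaries, assumed to carry population counts
lga = gpd.read_file("abs_lga_boundaries.shp").to_crs("EPSG:4326")

# Spatial join: assign each venue to the LGA polygon it falls within
venues_by_lga = gpd.sjoin(venue_points, lga, how="left", predicate="within")

# Venues per LGA alongside population, a crude activity-per-capita measure
summary = venues_by_lga.groupby("LGA_NAME").size().to_frame("venues")
summary = summary.join(lga.set_index("LGA_NAME")["population"])
summary["venues_per_100k"] = summary["venues"] / summary["population"] * 100000
print(summary.sort_values("venues_per_100k", ascending=False))

Repeating such a join for each season or year is what yields the sequential snapshots described below.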
Aus-e-Stage’s sequences can show: the way artists, companies, venues, and audiences relate to each other; the way artists’ relationships to companies, venues, and audiences change over time; and the way changes in “external stressors”, such as policy, industrial, or population shifts, affect the elements, roles, and relationships in the ecology from that point forward. Though it can be used in combination with other data sources such as interviews, the advantage of AusStage data is that maps of moving ecologies of practice are based not on descriptions coloured by memory but on clear, accurate program, preview, and review data. This allows it to show how factors in the environment—population, policy, infrastructure, or program shifts—affect the ecology, affect players in the ecology, and prompt players to adapt their type, level, or intensity of practice. It extends Baylis’s value chain into a full value ecology that shows the detail on how an ecology works, going beyond demands that government plug perceived gaps and moving towards data- and history-based decisions, ideas, and innovation based on what works in Brisbane’s performing arts ecology. Our Aus-e-Stage mapping shows this approach can do a number of useful things. It can create sequences showing breaks, blockages, and absences in an individual or company’s effort to move from emerging to established (e.g. in a sudden burst of activity followed by nothing). It can create sequences showing an individual or company’s moves to other parts of Australia (e.g. to tour or to pursue more permanent work). It can show surprising spaces, relations, and sources of support artists use to further their career (e.g. use of an amateur theatre outside the city such as Brisbane Arts Theatre). It can capture data about venues, programs, or co-production networks that are more or less effective in opening up new opportunities for artists (e.g. moving small-scale experiments in Metro Arts’ “Independents” program to full-scale independent productions in La Boite’s “Indie” program, its mainstage program, other mainstage programs, and beyond). It can link to program information, documentation, or commentary to compare anticipated and actual effects. It can lay the mapped dates and movements across significant policy, infrastructure, or production climate shifts. In the example below, for instance, Aus-e-Stage represents the tour of La Boite’s popular production of a new Australian work Zig Zag Street, based on the Brisbane-focused novel by Nick Earls about a single, twentysomething man’s struggles with life, love, and work. Figure 5 – Zig Zag Street Tour Map In the example below, Aus-e-Stage represents the movements not of a play but of a performer—in this case Christopher Sommers—who has been able to balance employment with new work incubator Metro Arts, mainstage and indie producer La Boite, and state theatre company QTC with his role with independent theatre company 23rd Productions to create something more protean, more portfolio-based or boundary-less than a traditional linear career trajectory. Figure 6 – Christopher Sommers Network Map and Travel Map The value of this approach, and of this technology, is clear. Which independents participate in La Boite Indie (or QTC’s “Studio” or “Greenroom” new work programs, or Metro’s emerging work programs, or others)? What benefits does it bring for artists, for independent companies, or for mainstage companies like La Boite? Is this a launching pad leading to ongoing, sustainable production practices?
What do artists, audiences or others say about these launching pads in previews, programs, or reviews? Using Aus-e-Stage as part of a value ecology approach answers these questions. It provides a more detailed picture of what happens, what effect it has on local theatre ecology, and exactly which influences enabled this effect: precisely the data needed to generate informed debate, ideas, and decision making. Conclusion Our ecological approach provides images of a local performing arts ecology in action, drawing out filtered data on different players, relationships, and influencing factors, and thus extending examination of Brisbane’s and Queensland’s performing arts sector into useful new areas. It offers three main advances—first, it adopts a value ecology approach (Hearn et al.), second, it adapts this value ecology approach to include not just companies but all up- and down-stream players, supporters, and infrastructure (Cherbo et al.), and, third, it uses the wealth of data available via Aus-e-Stage maps to fill out and filter images of local theatre ecology. It allows us to develop detailed, meaningful data to support discussion, debate, and development of ideas that are less likely to get bogged down in old, outdated, or inaccurate assumptions about how the sector works. Indeed, our data lends itself to additional analysis in a number of ways, from economic analysis of how shifts in policy influence productivity to sociological analysis of the way practitioners or practices acquire status and cultural capital (Bourdieu) in the field. Whilst descriptions offered here demonstrate the potential of this approach, this is by no means a finished exercise. Indeed, because this approach is about analysing how elements, roles, and relationships in an ecology shift over time, it is an ever-unfinished exercise. As Fortin and Dale argue, ecological studies of this sort are necessarily iterative, with each iteration providing new insights and raising further questions about processes and patterns (3). Given the number of local performing arts producers who have changed their practices significantly since Baylis’s Mapping Queensland Theatre report, and the fact that Baylis is producing a follow-up report, the next step will be to use this approach and the Aus-e-Stage technology that supports it to trace how ongoing shifts impact on Brisbane’s ambitions to become a cultural capital. This process is underway, and promises to open still more new perspectives by understanding anxieties about local theatre culture in terms of ecologies and exploring them cartographically. References Arts Queensland. Queensland Arts & Cultural Sector Plan 2010-2013. Brisbane: Arts Queensland, 2010. Australian Bureau of Statistics. “Population Projections, Australia, 2006 to 2101.” Canberra: ABS (2008). 20 June 2011 ‹http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/3222.0Main+Features12006%20to%202101?OpenDocument›. ———. “Regional Population Growth, Australia, 2008-2009: Queensland.” Canberra: ABS (2010). 20 June 2011 ‹http://www.abs.gov.au/ausstats/abs@.nsf/Latestproducts/3218.0Main%20Features62008-09?opendocument&tabname=Summary&prodno=3218.0&issue=2008-09&num=&view=›. Baylis, John. Mapping Queensland Theatre. Brisbane: Arts Queensland, 2009. Bourdieu, Pierre. “The Forms of Capital.” Handbook of Theory and Research for the Sociology of Education. Ed. John G. Richardson. New York: Greenwood, 1986. 241-58. Cherbo, Joni M., Harold Vogel, and Margaret Jane Wyszomirski.
“Towards an Arts and Creative Sector.” Understanding the Arts and Creative Sector in the United States. Eds. Joni M. Cherbo, Ruth A. Stewart, and Margaret J. Wyszomirski. New Brunswick: Rutgers University Press, 2008. 32-60. Costanza, Robert. “Toward an Operational Definition of Ecosystem Health.” Ecosystem Health: New Goals for Environmental Management. Eds. Robert Costanza, Bryan G. Norton, and Benjamin D. Haskell. Washington: Island Press, 1992. 239-56. Cunningham, Stuart. “Keeping Artistic Tempers Balanced.” The Courier Mail, 4 August (2010). 20 June 2012 ‹http://www.couriermail.com.au/news/opinion/keeping-artistic-tempers-balanced/story-e6frerc6-1225901295328›. Fortin, Marie-Josée, and Mark R.T. Dale. Spatial Analysis: A Guide for Ecologists. Cambridge: Cambridge University Press, 2005. Foy, Kate. “Is There Anything Right with the Theatre?” Groundling. 10 January (2010). 20 June 2011 ‹http://katefoy.com/2010/01/is-there-anything-right-with-the-theatre/›. Gallasch, Keith. “The ABC and the Arts: The Arts Ecologically.” RealTime 61 (2004). 20 June 2011 ‹http://www.realtimearts.net/article/61/7436›. Gill, Raymond. “Is Brisbane Australia’s New Cultural Capital?” Sydney Morning Herald, 16 October (2010). 20 June 2011 ‹http://www.smh.com.au/entertainment/art-and-design/is-brisbane-australias-new-cultural-capital-20101015-16np5.html›. Hearn, Gregory N., Simon C. Roodhouse, and Julie M. Blakey. “From Value Chain to Value Creating Ecology: Implications for Creative Industries Development Policy.” International Journal of Cultural Policy 13 (2007). 20 June 2011 ‹http://eprints.qut.edu.au/15026/›. Hunt, Cathy, and Phyllida Shaw. A Sustainable Arts Sector: What Will It Take? Strawberry Hills: Currency House, 2007. Knell, John. Theatre’s New Rules of Evolution. Available from Intelligence Agency, 2008. Office of Economic and Statistical Research. “Information Brief: Australian Demographic Statistics June Quarter 2009.” Brisbane: OESR (2010). 20 June 2012 ‹http://www.oesr.qld.gov.au/queensland-by-theme/demography/briefs/aust-demographic-stats/aust-demographic-stats-200906.pdf›.
APA, Harvard, Vancouver, ISO and other citation styles
31

Burns, Alex. “Oblique Strategies for Ambient Journalism”. M/C Journal 13, No. 2 (15.04.2010). http://dx.doi.org/10.5204/mcj.230.

The full text of the source
Annotation:
Alfred Hermida recently posited ‘ambient journalism’ as a new framework for para- and professional journalists, who use social networks like Twitter for story sources, and as a news delivery platform. Beginning with this framework, this article explores the following questions: How does Hermida define ‘ambient journalism’ and what is its significance? Are there alternative definitions? What lessons do current platforms provide for the design of future, real-time platforms that ‘ambient journalists’ might use? What lessons does the work of Brian Eno, the musician and producer who coined the term ‘ambient music’ over three decades ago, provide? My aim here is to formulate an alternative definition of ambient journalism that emphasises craft, skills acquisition, and the mental models of professional journalists, which are, more generally, the foundations of journalism practice. Rather than Hermida’s participatory media context, I emphasise ‘institutional adaptiveness’: how journalists and newsrooms in media institutions rely on craft and skills, and how emerging platforms can augment these foundations, rather than replace them. Hermida’s Ambient Journalism and the Role of Journalists Hermida describes ambient journalism as: “broad, asynchronous, lightweight and always-on communication systems [that] are creating new kinds of interactions around the news, and are enabling citizens to maintain a mental model of news and events around them” (Hermida 2). His ideas appear to have two related aspects. He conceives ambient journalism as an “awareness system” between individuals that functions as a collective intelligence or kind of ‘distributed cognition’ at a group level (Hermida 2, 4-6). Facebook, Twitter and other online social networks are examples. Hermida also suggests that such networks enable non-professionals to engage in ‘communication’ and ‘conversation’ about news and media events (Hermida 2, 7). In a helpful clarification, Hermida observes that ‘para-journalists’ are like the paralegals or non-lawyers who provide administrative support in the legal profession and, in academic debates about journalism, are more commonly known as ‘citizen journalists’. Thus, Hermida’s ambient journalism appears to be: (1) an information systems model of new platforms and networks, and (2) a normative argument that these tools empower ‘para-journalists’ to engage in journalism and real-time commentary. Hermida’s thesis is intriguing and worthy of further discussion and debate. As currently formulated, however, it risks sharing the blind-spots and contradictions of the academic literature that Hermida cites, which suffers from poor theory-building (Burns). A major reason is that the participatory media context on which Hermida often builds his work has different mental models and normative theories than the journalists or media institutions that are the target of critique. Ambient journalism would be a stronger and more convincing framework if these incorrect assumptions were jettisoned. Others may also potentially misunderstand what Hermida proposes, because the academic debate is often polarised between para-journalists and professional journalists, due to different views about institutions, the politics of knowledge, decision heuristics, journalist training, and normative theoretical traditions (Christians et al. 126; Cole and Harcup 166-176).
In the academic debate, para-journalists or ‘citizen journalists’ may be said to have a communitarian ethic and to desire solutions more autonomous than either journalists, who are framed as uncritical and reliant on official sources, or media institutions, which are portrayed as surveillance-like ‘monitors’ of society (Christians et al. 124-127). This is, however, only one of a range of possible relationships. Sole reliance on para-journalists could be a premature solution to a more complex media ecology. Journalism craft, which does not rely just on official sources, also has a range of practices that already provides the “more complex ways of understanding and reporting on the subtleties of public communication” sought (Hermida 2). Citizen- and para-journalist accounts may overlook micro-studies of how newsrooms adopt technological innovations and integrate them into newsgathering routines (Hemmingway 196). Thus, an examination of the realities of professional journalism will help to cast a better light on how ambient journalism can shape the mental models of para-journalists, and provide more rigorous analysis of news and similar events. Professional journalism has several core dimensions that para-journalists may overlook. Journalism’s foundation as an experiential craft includes guidance and norms that orient the journalist to information, including practitioner ethics. This craft is experiential; the basis for journalism’s claim to “social expertise” as a discipline; and more like the original Linux and Open Source movements which evolved through creative conflict (Sennett 9, 25-27, 125-127, 249-251). There are learnable, transmissible skills to contextually evaluate, filter, select and distil the essential insights. This craft-based foundation and these skills inform and structure the journalist’s cognitive witnessing of an event, either directly or via reconstructed, cultivated sources. The journalist publishes through a recognised media institution or online platform, which provides communal validation and verification. There is far more here than the academic portrayal of journalists as ‘gate-watchers’ for a ‘corporatist’ media elite. Craft and skills distinguish the professional journalist from Hermida’s para-journalist. Increasingly, media institutions hire journalists who are trained in other craft-based research methods (Burns and Saunders). Bethany McLean, who ‘broke’ the Enron scandal, was an investment banker; documentary filmmaker Errol Morris first interviewed serial killers for an early project; and Neil Chenoweth used ‘forensic accounting’ techniques to investigate Rupert Murdoch and Kerry Packer. Such expertise allows the journalist to filter information, and to mediate any influences in the external environment, in order to develop an individualised, ‘embodied’ perspective (Hofstadter 234; Thompson; Garfinkel and Rawls). Para-journalists and social network platforms cannot replace this expertise, which is often unique to individual journalists and their research teams. Ambient Journalism and Twitter Current academic debates about how citizen- and para-journalists may augment or even replace professional journalists can often turn into legitimation battles over whether the ‘de facto’ solution is a social media network rather than a media institution. For example, Hermida discusses Twitter, a micro-blogging platform that allows users to post 140-character messages that are small, discrete information chunks, for short-term and episodic memory.
Twitter enables users to monitor other users, to group other messages, and to search for terms specified by a hashtag. Twitter thus illustrates how social media platforms can make data more transparent and explicit to non-specialists like para-journalists. In fact, Twitter is suitable for five different categories of real-time information: news, pre-news, rumours, the formation of social media and subject-based networks, and “molecular search” using granular data-mining tools (Leinweber 204-205). In this model, the para-journalist acts as a navigator and “way-finder” to new information (Morville, Findability). Jaron Lanier, an early designer of ‘virtual reality’ systems, is perhaps the most vocal critic of relying on groups of non-experts and tools like Twitter, instead of individuals who have professional expertise. For Lanier, what underlies debates about citizen- and para-journalists is a philosophy of “cybernetic totalism” and “digital Maoism” which exalts the Internet collective at the expense of truly individual views. He is deeply critical of Hermida’s chosen platform, Twitter: “A design that shares Twitter’s feature of providing ambient continuous contact between people could perhaps drop Twitter’s adoration of fragments. We don’t really know, because it is an unexplored design space” [emphasis added] (Lanier 24). In part, Lanier’s objection is traceable back to an unresolved debate on human factors and design in information science. Influenced by the post-war research into cybernetics, J.C.R. Licklider proposed a cyborg-like model of “man-machine symbiosis” between computers and humans (Licklider). In turn, Licklider’s framework influenced Douglas Engelbart, who shaped the growth of human-computer interaction, and the design of computer interfaces, the mouse, and other tools (Engelbart). In taking a system-level view of platforms, Hermida builds on the strength of Licklider and Engelbart’s work. Yet because he focuses on para-journalists, and does not appear to include the craft and skills-based expertise of professional journalists, it is unclear how he would answer Lanier’s fears about whether reliance on groups for news and other information is superior to individual expertise and judgment. Hermida’s two case studies point to this unresolved problem. Both cases appear to show how Twitter provides quicker and better forms of news and information, thereby increasing the effectiveness of para-journalists in engaging in journalism and real-time commentary. However, alternative explanations may exist that raise questions about Twitter as a new platform, and thus these cases might actually reveal circumstances in which ambient journalism may fail. Hermida alludes to how para-journalists now fulfil the earlier role of ‘first responders’ and stringers, in providing the “immediate dissemination” of non-official information about disasters and emergencies (Hermida 1-2; Haddow and Haddow 117-118). Whilst important, this is really a specific role. In fact, disaster and emergency reporting occurs within well-established practices, professional ethics, and institutional routines that may involve journalists, government officials, and professional communication experts (Moeller). Officials and emergency management planners are concerned that citizen- or para-journalism is being equated with the craft and skills of professional journalism.
The experience of these officials and planners in 2005’s Hurricane Katrina in the United States, and in 2009’s Black Saturday bushfires in Australia, suggests that whilst para-journalists might be ‘first responders’ in a decentralised, complex crisis, they are perceived to spread rumours and potential social unrest when people need reliable information (Haddow and Haddow 39). These terms of engagement between officials, planners and para-journalists are still to be resolved. Hermida readily acknowledges that Twitter and other social network platforms are vulnerable to rumours (Hermida 3-4; Sunstein). However, his other case study, Iran’s 2009 election crisis, further complicates the vision of ambient journalism, and always-on communication systems in particular. Hermida discusses several events during the crisis: the US State Department request to halt a server upgrade, how the Basij’s shooting of bystander Neda Soltan was captured on a mobile phone camera and spread across social network platforms, and the high velocity and number of ‘tweets’ or messages during the first two weeks of Iran’s electoral uncertainty (Hermida 1). The US State Department was interested in how Twitter could be used for non-official sources, and to inform people who were monitoring the election events. Twitter’s perceived ‘success’ during Iran’s 2009 election now looks rather different when other factors are considered such as: the dynamics and patterns of Tehran street protests; Iran’s clerics who used Soltan’s death as propaganda; claims that Iran’s intelligence services used Twitter to track down and to kill protestors; the ‘black box’ case of what the US State Department and others actually did during the crisis; the history of neo-conservative interest in a Twitter-like platform for strategic information operations; and the Iranian diaspora’s incitement of Tehran student protests via satellite broadcasts. Iran’s 2009 election crisis has important lessons for ambient journalism: always-on communication systems may create noise and spread rumours; ‘mirror-imaging’ of mental models may occur when other participants have very different worldviews and ‘contexts of use’ for social network platforms; and the new kinds of interaction may not lead to effective intervention in crisis events. Hermida’s combination of news and non-news fragments is the perfect environment for psychological operations and strategic information warfare (Burns and Eltham). Lessons of Current Platforms for Ambient Journalism We have discussed some unresolved problems for ambient journalism as a framework for journalists, and as mental models for news and similar events. Hermida’s goal of an “awareness system” faces a further challenge: the phenomenological limitations of human consciousness in dealing with information complexity and ambiguous situations, whether by becoming ‘entangled’ in abstract information or by developing new, unexpected uses for emergent technologies (Thackara; Thompson; Hofstadter 101-102, 186; Morville, Findability, 55, 57, 158). The recursive and reflective capacities of human consciousness impose their own epistemological frames. It’s still unclear how Licklider’s human-computer interaction will shape consciousness, but Douglas Hofstadter’s art- and video-based group experiments may be suggestive. Hofstadter observes: “the interpenetration of our worlds becomes so great that our worldviews start to fuse” (266).
Current research into user experience and information design provides some validation of Hofstadter’s experience, such as how Google is now the ‘default’ search engine, and how its interface design shapes the user’s subjective experience of online search (Morville, Findability; Morville, Search Patterns). Several models of Hermida’s awareness system already exist that build on Hofstadter’s insight. Within the information systems field, on-going research into artificial intelligence–‘expert systems’ that can model expertise as algorithms and decision rules, genetic algorithms, and evolutionary computation–has attempted to achieve Hermida’s goal. What these systems share are mental models of cognition, learning and adaptiveness to new information, often with forecasting and prediction capabilities. Such systems work in journalism areas such as finance and sports that involve analytics, data-mining and statistics, and in related fields such as health informatics where there are clear, explicit guidelines on information and international standards. After a mid-1980s investment bubble (Leinweber 183-184) these systems now underpin the technology platforms of global finance and news intermediaries. Bloomberg LP’s ubiquitous dual-screen computers, proprietary network and data analytics (www.bloomberg.com), and its competitors such as Thomson Reuters (www.thomsonreuters.com and www.reuters.com), illustrate how financial analysts and traders rely on an “awareness system” to navigate global stock-markets (Clifford and Creswell). For example, a Bloomberg subscriber can access real-time analytics from exchanges, markets, and from data vendors such as Dow Jones, NYSE Euronext and Thomson Reuters. They can use portfolio management tools to evaluate market information, to make allocation and trading decisions, to monitor ‘breaking’ news, and to integrate this information. Twitter is perhaps the para-journalist’s equivalent of the Bloomberg platform on which professional journalists and finance analysts rely for real-time market and business information. Already, hedge funds like PhaseCapital are data-mining Twitter’s ‘tweets’ or messages for rumours and shifts in stock-market sentiment, and analysing them for potential trading patterns (Pritchett and Palmer). The US-based Securities and Exchange Commission, and researchers like David Gelernter and Paul Tetlock, have also shown the benefits of applied data-mining for regulatory market supervision, in particular to uncover analysts who provide ‘whisper numbers’ to online message boards, and who have access to material, non-public information (Leinweber 60, 136, 144-145, 208, 219, 241-246). Hermida’s framework might be developed further for such regulatory supervision. Hermida’s awareness system may also benefit from the algorithms found in high-frequency trading (HFT) systems that Citadel Group, Goldman Sachs, Renaissance Technologies, and other quantitative financial institutions use. Rather than human traders, HFT uses co-located servers and complex algorithms to make high-volume trades on stock-markets that take advantage of microsecond changes in prices (Duhigg). HFT capabilities are shrouded in secrecy, and became the focus of regulatory attention after several high-profile investigations of traders alleged to have stolen the software code (Bray and Bunge).
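To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of windowed, real-time pattern detection such systems automate. The message feed, toy sentiment lexicon, window size, and alert threshold are all invented for illustration; they do not describe PhaseCapital’s or any vendor’s actual system.

from collections import deque

# Toy lexicon: production systems use far richer statistical models
NEGATIVE = {"fraud", "halt", "rumour", "crash", "default"}

def sentiment(message):
    """Score a message: -1 for each negative keyword it contains."""
    return -sum(word.strip(".,!?") in NEGATIVE for word in message.lower().split())

def monitor(stream, window_size=100, alert_threshold=-25):
    """Scan (timestamp, text) pairs with a sliding window; yield sentiment spikes."""
    window = deque(maxlen=window_size)
    for timestamp, text in stream:
        window.append(sentiment(text))
        if len(window) == window_size and sum(window) <= alert_threshold:
            yield timestamp, sum(window)  # an 'event' for downstream decisions

# Demo with a fabricated feed: every third message carries negative keywords
feed = [(i, "market rumour of a trading halt" if i % 3 == 0 else "all quiet")
        for i in range(500)]
for ts, score in monitor(feed):
    print("sentiment spike at t=%d, window score %d" % (ts, score))
    break  # report only the first alert in this demo

Production platforms run this window-then-threshold pattern continuously over co-located, high-volume data streams; the principle, not the scale, is what the sketch shows.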
One public example is Streambase (www.streambase.com), a ‘complex event processing’ (CEP) platform that can be used in HFT, and commercialised from the Project Aurora research collaboration between Brandeis University, Brown University, and Massachusetts Institute of Technology. CEP and HFT may be the ‘killer apps’ of Hermida’s awareness system. Alternatively, they may confirm Jaron Lanier’s worst fears: your data-stream and user-generated content can be harvested by others–for their gain, and your loss! Conclusion: Brian Eno and Redefining ‘Ambient Journalism’ On the basis of the above discussion, I suggest a modified definition of Hermida’s thesis: ‘Ambient journalism’ is an emerging analytical framework for journalists, informed by cognitive, cybernetic, and information systems research. It ‘sensitises’ the individual journalist, whether professional or ‘para-professional’, to observe and to evaluate their immediate context. In doing so, ‘ambient journalism’, like journalism generally, emphasises ‘novel’ information. It can also inform the design of real-time platforms for journalistic sources and news delivery. Individual ‘ambient journalists’ can learn much from the career of musician and producer Brian Eno. His personal definition of ‘ambient’ is “an atmosphere, or a surrounding influence: a tint,” that relies on the co-evolution of the musician, creative horizons, and studio technology as a tool, just as para-journalists use Twitter as a platform (Sheppard 278; Eno 293-297). Like para-journalists, Eno claims to be a “self-educated but largely untrained” musician and yet also a craft-based producer (McFadzean; Tamm 177, 44-50). Perhaps Eno would frame the distinction between para-journalist and professional journalist as “axis thinking” (Eno 298, 302), which is needlessly polarised due to different normative theories, stances, and practices. Furthermore, I would argue that Eno’s worldview was shaped by similar influences to Licklider and Engelbart, who appear to have informed Hermida’s assumptions. These influences include the mathematician and game theorist John von Neumann and biologist Richard Dawkins (Eno 162); musicians Erik Satie and John Cage and Cage’s book Silence (Eno 19-22, 162; Sheppard 22, 36, 378-379); and the field of self-organising systems, in particular cyberneticist Stafford Beer (Eno 245; Tamm 86; Sheppard 224). Eno summed up the central lesson of this theoretical corpus during his collaborations with New York’s ‘No Wave’ scene in 1978 as one of “people experimenting with their lives” (Eno 253; Reynolds 146-147; Sheppard 290-295). Importantly, he developed a personal view of normative theories through practice-based research, on a range of projects, and with different creative and collaborative teams. Rather than a technological solution, Eno settled on a way to encode his craft and skills into a quasi-experimental, transmittable method—an aim of practitioner development in professional journalism. Even if only a “founding myth,” the story of Eno’s 1975 street accident with a taxi, and how he conceived ‘ambient music’ during his hospital stay, illustrates how ambient journalists might perceive something new in specific circumstances (Tamm 131; Sheppard 186-188).
More tellingly, this background informed his collaboration with the late painter Peter Schmidt, to co-create the Oblique Strategies deck of aphorisms: aleatory, oracular messages that appeared dependent on chance, luck, and randomness, but that in fact were based on Eno and Schmidt’s creative philosophy and work guidelines (Tamm 77-78; Sheppard 178-179; Reynolds 170). In short, Eno was engaging with the kind of reflective practices that underpin exemplary professional journalism. He was able to encode this craft and skills into a quasi-experimental method, rather than a technological solution. Journalists and practitioners who adopt Hermida’s framework could learn much from the published accounts of Eno’s practice-based research, in the context of creative projects and collaborative teams. In particular, these detail the contexts and choices of Eno’s early ambient music recordings (Sheppard 199-200); Eno’s duels with David Bowie during ‘Sense of Doubt’ for the Heroes album (Tamm 158; Sheppard 254-255); troubled collaborations with Talking Heads and David Byrne (Reynolds 165-170; Sheppard 338-347, 353); a curatorial, mentoring role on U2’s The Unforgettable Fire (Sheppard 368-369); the ‘grand, stadium scale’ experiments of U2’s 1991-93 ZooTV tour (Sheppard 404); the Zorn-like games of Bowie’s Outside album (Eno 382-389); and the ‘generative’ artwork 77 Million Paintings (Eno 330-332; Tamm 133-135; Sheppard 278-279; Eno 435). Eno is clearly a highly flexible maker and producer. Developing such flexibility would ensure ambient journalism remains open to novelty as an analytical framework that may enhance the practitioner development and work of professional journalists and para-journalists alike. Acknowledgments The author thanks editor Luke Jaaniste, Alfred Hermida, and the two blind peer reviewers for their constructive feedback and reflective insights. References Bray, Chad, and Jacob Bunge. “Ex-Goldman Programmer Indicted for Trade Secrets Theft.” The Wall Street Journal 12 Feb. 2010. 17 March 2010 ‹http://online.wsj.com/article/SB10001424052748703382904575059660427173510.html›. Burns, Alex. “Select Issues with New Media Theories of Citizen Journalism.” M/C Journal 11.1 (2008). 17 March 2010 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/view/30›. ———, and Barry Saunders. “Journalists as Investigators and ‘Quality Media’ Reputation.” Record of the Communications Policy and Research Forum 2009. Eds. Franco Papandrea and Mark Armstrong. Sydney: Network Insight Institute, 281-297. 17 March 2010 ‹http://eprints.vu.edu.au/15229/1/CPRF09BurnsSaunders.pdf›. ———, and Ben Eltham. “Twitter Free Iran: An Evaluation of Twitter’s Role in Public Diplomacy and Information Operations in Iran’s 2009 Election Crisis.” Record of the Communications Policy and Research Forum 2009. Eds. Franco Papandrea and Mark Armstrong. Sydney: Network Insight Institute, 298-310. 17 March 2010 ‹http://eprints.vu.edu.au/15230/1/CPRF09BurnsEltham.pdf›. Christians, Clifford G., Theodore Glasser, Denis McQuail, Kaarle Nordenstreng, and Robert A. White. Normative Theories of the Media: Journalism in Democratic Societies. Champaign, IL: University of Illinois Press, 2009. Clifford, Stephanie, and Julie Creswell. “At Bloomberg, Modest Strategy to Rule the World.” The New York Times 14 Nov. 2009. 17 March 2010 ‹http://www.nytimes.com/2009/11/15/business/media/15bloom.html?ref=business&pagewanted=all›. Cole, Peter, and Tony Harcup. Newspaper Journalism. Thousand Oaks, CA: Sage Publications, 2010. Duhigg, Charles.
“Stock Traders Find Speed Pays, in Milliseconds.” The New York Times 23 July 2009. 17 March 2010 ‹http://www.nytimes.com/2009/07/24/business/24trading.html?_r=2&ref=business›. Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework, 1962.” Ed. Neil Spiller. Cyber Reader: Critical Writings for the Digital Era. London: Phaidon Press, 2002. 60-67. Eno, Brian. A Year with Swollen Appendices. London: Faber and Faber, 1996. Garfinkel, Harold, and Anne Warfield Rawls. Toward a Sociological Theory of Information. Boulder, CO: Paradigm Publishers, 2008. Haddow, George D., and Kim S. Haddow. Disaster Communications in a Changing Media World. Burlington, MA: Butterworth-Heinemann, 2009. Hemmingway, Emma. Into the Newsroom: Exploring the Digital Production of Regional Television News. Milton Park: Routledge, 2008. Hermida, Alfred. “Twittering the News: The Emergence of Ambient Journalism.” Journalism Practice 4.3 (2010): 1-12. Hofstadter, Douglas. I Am a Strange Loop. New York: Perseus Books, 2007. Lanier, Jaron. You Are Not a Gadget: A Manifesto. London: Allen Lane, 2010. Leinweber, David. Nerds on Wall Street: Math, Machines and Wired Markets. Hoboken, NJ: John Wiley and Sons, 2009. Licklider, J.C.R. “Man-Machine Symbiosis, 1960.” Ed. Neil Spiller. Cyber Reader: Critical Writings for the Digital Era. London: Phaidon Press, 2002. 52-59. McFadzean, Elspeth. “What Can We Learn from Creative People? The Story of Brian Eno.” Management Decision 38.1 (2000): 51-56. Moeller, Susan. Compassion Fatigue: How the Media Sell Disease, Famine, War and Death. New York: Routledge, 1998. Morville, Peter. Ambient Findability. Sebastopol, CA: O’Reilly Press, 2005. ———. Search Patterns. Sebastopol, CA: O’Reilly Press, 2010. Pritchett, Eric, and Mark Palmer. “Following the Tweet Trail.” CNBC 11 July 2009. 17 March 2010 ‹http://www.casttv.com/ext/ug0p08›. Reynolds, Simon. Rip It Up and Start Again: Postpunk 1978-1984. London: Penguin Books, 2006. Sennett, Richard. The Craftsman. London: Penguin Books, 2008. Sheppard, David. On Some Faraway Beach: The Life and Times of Brian Eno. London: Orion Books, 2008. Sunstein, Cass. On Rumours: How Falsehoods Spread, Why We Believe Them, What Can Be Done. New York: Farrar, Straus and Giroux, 2009. Tamm, Eric. Brian Eno: His Music and the Vertical Colour of Sound. New York: Da Capo Press, 1995. Thackara, John. In the Bubble: Designing in a Complex World. Boston, MA: The MIT Press, 2005. Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Science of Mind. Boston, MA: Belknap Press, 2007.
APA, Harvard, Vancouver, ISO and other citation styles
32

Glover, Stuart. “Failed Fantasies of Cohesion: Retrieving Positives from the Stalled Dream of Whole-of-Government Cultural Policy”. M/C Journal 13, No. 1 (21.03.2010). http://dx.doi.org/10.5204/mcj.213.

The full text of the source
Annotation:
In mid-2001, in a cultural policy discussion at Arts Queensland, an Australian state government arts policy and funding apparatus, a senior arts bureaucrat seeking to draw a funding client’s gaze back to the bigger picture of what the state government was trying to achieve through its cultural policy settings excused his own abstracting comments with the phrase, “but then I might just be a policy ‘wank’”. There was some awkward laughter before one of his colleagues asked, “did you mean a policy ‘wonk’”? The incident was a misstatement of a term adopted in the 1990s to characterise the policy workers in the Clinton White House (Cunningham). This was not its exclusive use, but many saw Clinton as an exemplary wonk: less a pragmatic politician than one entertained by the elaboration of policy. The policy work of Clinton’s kitchen cabinet was, in part, driven by a pervasive rationalist belief in the usefulness of ordered policy processes as a method of producing social and economic outcomes, and, in part, by the seductions of policy-play: its ambivalences, its conundrums, and, in some sense, its aesthetics (Klein 193-94). There, far from being characterised as unproductive “self-abuse” of the body-politic, policy processes were alive as a pragmatic technology, an operationalisation of ideology, as an aestheticised field of play, but more than anything as a central rationalist tenet of government action. This final idea—the possibilities of policy for effecting change, promoting development, meeting government objectives—is at the centre of the bureaucratic imagination. Policy is effective. And a concomitant belief is that ordered or organised policy processes result in the best policy and the best outcomes. Starting with Harold Lasswell, policy theorists extended the general rationalist suppositions of Western representative democracies into executive government by arguing for the value of information/knowledge and the usefulness of ordered process in addressing the policy problems thus identified. In the post-war period particularly, a case can be made for the usefulness of policy processes to government—although, in a paradox, these rationalist conceptions of the policy process were strangely irrational, even Utopian, in their view of the transformational capacities of policy. The early policy scientists often moved beyond a view of policy science as a useful tool, to the advocacy of policy science and the policy scientist as panaceas for public ills (Parsons 18-19). The Utopian ambitions of policy science find one of their extremes in the contemporary interest in whole-of-government approaches to policy making. Whole-of-governmentalism, a concern with the co-ordination of policy and delivery across all areas of the state, can be seen as produced out of Western governments’ paradoxical concern with (on one hand) order, totality, and consistency, and (on the other) deconstructing existing mechanisms of public administration. Whole-of-governmentalism requires a horizontal purview of government goals, programs, outputs, processes, politics, and outcomes, alongside—and perhaps in tension with—the long-standing vertical purview that is fundamental to ministerial responsibility. This often presents a set of public management problems largely internal to government. Policy discussion and decision-making, while affecting community outcomes and stakeholder utility, are, in this circumstance, largely inter-agency in focus.
Any eventual policy document may well have bureaucrats rather than citizens as its target readers—or at least as its closest readers. Internally, cohesion of objective, discourse, tool and delivery are pursued as prime interests of policy-making. Failing at Policy So what happens when whole-of-government policy processes, particularly cultural policy processes, break down or fail? Is there anything productive to be retrieved from a failed fantasy of policy cohesion? This paper examines the utility of a failure to cohere and order in cultural policy processes. I argue that the conditions of contemporary cultural policy-making, particularly the tension between the “boutique” scale of cultural policy-making bodies and the revised, near universal, remit of cultural policy, require policy work to be undertaken in an environment and in such a way that failure is almost inevitable. Coherence and cohesion are fundamental principles of whole-of-government policy, but cultural policy ambitions are necessarily too comprehensive to be achievable. This is especially so for the small arts or cultural offices of government that normally act as lead agencies for cultural policy development within government. Yet these failed processes can still give rise to positive outcomes, or to positive intermediate outputs that can be taken up in a productive way in the ongoing cycle of policy work that characterises contemporary cultural governance. Herein, I detail the development of Building the Future, a cultural policy planning paper (and the name of a policy planning process) undertaken within Arts Queensland in 1999 and 2000. (While this process is now ten years in the past, it is only with a decade passed that, as a consultant, I am in a position to write about the material.) The abandonment of this process before the production of a public policy program allows something to be said about the utility and role of failure in cultural policy-making. The working draft of Building the Future never became a public document, but the eight months of its development helped produce a series of shifts in the discourse of Queensland Government cultural policy: from “arts” to “creative industries”; and from arts bureaucracy-centred cultural policy to whole-of-government policy frameworks. These concepts were then taken up and elaborated in the Creative Queensland policy statement published by Arts Queensland in October 2002, particularly the concern with creative industries; whole-of-government cultural policy; and the repositioning of Arts Queensland as a service agency to other potential cultural funding-bodies within government. Despite the failure of the Building the Future process, it had a role in the production of the policy document and policy processes that superseded it. This critique of cultural policy-making rather than cultural policy texts, announcements and settings is offered as part of a project to bring to cultural policy studies material and theoretical accounts of the particularities of making cultural policy. While directions in cultural policy have much to do with the overall directions of government—which might over the past decade be categorised as a focus on de-regulation and the out-sourcing of services—there are developments in cultural policy settings and in cultural policy processes that are particular to cultural policy and cultural policy-making.
Central to the development of cultural policy studies and to cultural policy is a transformational broadening of the operant definition of culture within government (O'Regan). Following Raymond Williams, the domain of culture is broadened to include high culture, popular culture, folk culture, and the culture of everyday life. Accordingly, in some sense, every issue of governance is deemed to have a cultural dimension—be it policy questions around urban space, tourism, community building and so on. Contemporary governments are required to act with a concern for cultural questions both within and across a number of long-persisting and otherwise discrete policy silos. This has implications for cultural policy makers and for program delivery. The definition of culture as “everyday life”, while truistically defensible, becomes unwieldy as an imprimatur or a container for administrative activity. Transforming cultural policy into a domain incorporating most social policy and significant elements of economic policy makes the domain titanically large. Potentially, it compromises usual government efforts to order policy activity through the division or apportionment of responsibility (Glover and Cunningham 19). The problem has given rise to a new mode of policy-making which attends to the co-ordination of policy across and between levels of government, known as whole-of-government policy-making (see O’Regan). Within the domain of cultural policy the task of whole-of-government cultural policy is complicated by the position of, and the limits upon, arts and cultural bureaux within state and federal governments. Dedicated cultural planning bureaux often operate as “boutique” agencies. They are usually discrete line agencies or line departments within government—only rarely are they part of the core policy function of departments of a Premier or a Prime Minister. Instead, like most line agencies, they lack the leverage within the bureaucracy or policy apparatus to deliver whole-of-government cultural policy change. In some sense, failure is the inevitable outcome of all policy processes, particularly when held up against the mechanistic representation of policy processes typical of policy handbooks (see Bridgman and Davis 42). Against such models, which describe policy as a series of discrete linear steps, all policy efforts fail. The rationalist assumptions of early policy models—and the rigid templates for policy process that arise from their assumptions—in retrospect condemn every policy process to failure or at least profound shortcoming. This is particularly so with whole-of-government cultural policy-making. To re-think this, it can be argued that the error then is not really in the failure of the process, which is invariably brought about by the difficulty of a coherent policy process surviving exogenous complexity, but instead the error rests with the simplicity of policy models and assumptions about the possibility of cohesion. In some sense, mechanistic policy processes make failure endogenous. The contemporary experience of making policy has tended to erode any fantasies of order, clear process, or, even, clear-sightedness within government. Achieving coherence in the policy message is nigh on impossible; likewise, cohesion of the policy framework is unlikely. Yet, importantly, failed policy is not without value.
The churn of policy work, the exercise of attempting coherent policy-making, constitutes, in some sense, the deliberative function of government, and potentially operates as a force (and site) of change. Policy briefings, reports and draft policies, as the constitution of ideas in the policy process and the mechanism for their dissemination within the body of government and perhaps to other stakeholders, are discursive acts in the process of extending the discourse of government and forming its later actions. For arts and cultural policy agencies in particular, which act without the leverage or resources of central agencies, the expansive ambitions of whole-of-government cultural policy make failure inevitable. In such a circumstance, retrieving some benefits at the margins of policy processes, through the churn of policy work towards cohesion, is an important consolation.

Case study: Cultural Policy 2000

The policy process I wish to examine is now complete. It ran over the period 1999–2002, although I wish to concentrate on my involvement in early 2000, during which, as a consultant to Arts Queensland, I generated a draft policy document, Building the Future: A policy framework for the next five years (working draft). The imperative to develop a new state cultural policy followed the election of the first Beattie Labor government in July 1998. By 1999, senior Arts Queensland staff had begun to argue (within government at least) for the development of a new state cultural policy. The bureaucrats perceived policy development as one way of establishing “traction” in the process of bidding for new funds for the portfolio. Arts Minister Matt Foley was initially reluctant to “green-light” the policy process, but eventually, in early 1999, he acceded to it on the advice of Arts Queensland, the industry, his own policy advisors and the Department of Premier. As stated above, this case study is offered now because the passing of time makes the analysis of relatively sensitive material possible.

From the outset, an abbreviated timeframe for consultation and drafting seemed to guarantee a difficult birth for the policy document. This was compounded by a failure to clarify the aims and process of the project. In presenting the draft policy to the advisory group, it became clear that there was no agreed strategic purpose for the document: was it to be an advertisement, a framework for policy ideas, an audit, or a report on achievements? Tied to this were questions about the audience for the policy statement. Was it aimed at the public, the arts industry, bureaucrats inside Arts Queensland, or, in keeping with the whole-of-government inflection of the document and its putative use in bidding for funds inside government, bureaucrats outside of Arts Queensland?

My own conception of the document was as a cultural policy framework for the whole of government for the coming five years. It would concentrate on cultural policy in three realms: Arts Queensland; the arts instrumentalities; and other departments (particularly the cultural initiatives undertaken by the Department of Premier and the Department of State Development). In order to do this, I articulated (for myself) a series of goals for the document.
It needed to: provide the philosophical underpinnings for a new arts and cultural policy; discuss the cultural significance of “community” in the context of the arts; outline expansion plans for the arts infrastructure throughout Queensland; advance ideas for increased employment in the arts and cultural industries; explore the development of new audiences and markets; address contemporary issues of technology, globalisation and cultural commodification; promote a whole-of-government approach to the arts and cultural industries; address social justice and equity concerns associated with cultural diversity; and present examples of current and new arts and cultural practices.

Five key strategies were identified: i) building strong communities and supporting diversity; ii) building the creative industries and the cultural economy; iii) developing audiences and telling Queensland’s stories; iv) delivering to the world; and v) a new role for government. While the second aim, building the creative industries and the cultural economy, was an addition to the existing Australian arts policy discourse, it is the articulation of a new role for government that is most radical here. The document went to the length of explicitly suggesting a series of actions to enable Arts Queensland to re-position itself inside government: develop an ongoing policy cycle; position Arts Queensland as a lead agency for cultural policy development; establish a mechanism for joint policy planning across the arts portfolio; adopt a whole-of-government approach to policy-making and program delivery; use arts and cultural strategies to deliver on social and economic policy agendas; centralise some cultural policy functions and projects; maintain and develop mechanisms for peer assessment; establish long-term strategic relationships with the Commonwealth and local government; investigate new vehicles for arts and cultural investment; investigate partnerships between industry, community and government; and develop appropriate performance measures for the cultural industries.

In short, the scope of the document was titanically large, and prohibitively expansive as a basis for policy change. A chief limitation of these aims is that they placed the cohesion and coherence of the policy discourse at the centre of the project, when the project might better have privileged a concern with policy outputs and industry and community outcomes.

The subsequent dismal fortunes of the document are instructive. The policy document went through several drafts over the first half of 2000. By August 2000, I had removed myself from the process and handed the drafting back to Arts Queensland, which then produced a shorter, less discursive version of my initial draft. By November 2000, however, it is reasonable to say that the policy document had been abandoned. Significantly, after May 2000 the working drafts began to be used as internal discussion documents within government. Thus, despite the abandonment of the policy process, largely due to the unworkable breadth of its ambition, the document had a continued policy utility. The subsequent discussions helped organise future policy statements and structural adjustments by government. After the re-election of the Beattie government in January 2001, a more substantial policy process was commenced, with the earlier policy documents as a starting point. By early 2002 the document was in substantial draft. The eventual policy, Creative Queensland, was released in October 2002.
Significantly, this document sought to advance two ideas that I believe the earlier process did much to mobilise: a whole-of-government approach to culture; and a broader operant definition of culture. It is important not to see these as ideas merely existing “textually” in the earlier policy draft of Building the Future, but instead to see them as ideas that had begun to adhere to the cultural policy mechanism of government, and had begun to be deployed in internal policy discussions and in program design, before finding an eventual home in a published policy text.

Analysis

The productive effects of the aborted policy process in which I participated are difficult to quantify. They are difficult, in fact, to separate out from governments’ ongoing processes of producing and circulating policy ideas. What is clear is that the effects of Building the Future were not entirely negated by its never becoming public. Despite circulating only to a readership of bureaucrats, it represented the ideas of part of the bureaucracy at a point in time. In this instance, a “failed” policy process, and its intermediate output, the draft policy, assisted government, through the churn of policy work, towards an eventual policy statement and a new form of governmental organisation. This suggests that processes of cultural policy discussion, or policy churn, can be as productive as the public “enunciation” of formal policy in helping to organise ideas within government and determine programs and the allocation of resources. This is so even where the Utopian idealism of the policy process is abandoned for something more graspable or politic. For the small arts or cultural policy bureau this is an important incremental benefit.

Two final implications should be noted. The first is for models of policy process. Bridgman and Davis’s model of the Australian policy cycle, despite its mechanistic qualities, is ambiguous about where the policy process begins and ends. In one instance they represent it as strictly circular, always coming back to its own starting point (27). Elsewhere, however, they represent it as linear, but not necessarily circular, passing through eight stages with a defined beginning and end: identification of issues; policy analysis; choosing policy instruments; consultation; co-ordination; decision; implementation; and evaluation (28–29). What is clear from the 1999–2002 policy process, if we take the full period between when Arts Queensland began to organise the development of a new arts policy and its publication as Creative Queensland in October 2002, is that the policy process was not a linear one progressing in an orderly fashion towards policy outcomes. Instead, Building the Future is a snapshot in time (namely early to mid-2000) of a fragmenting policy process; it reveals policy-making as involving a concurrency of policy activity rather than a progression through linear steps. Following Mark Considine’s conception of policy work as the state’s effort at “system-wide information exchange and policy transfer” (271), the document is concerned less with the ordering of resources than with the organisation of policy discourse. The churn of policy is the mobilisation of information, or, for Considine: policy-making, when considered as an innovation system among linked or interdependent actors, becomes a learning and regulating web based upon continuous exchanges of information and skill.
Learning occurs through regulated exchange, rather than through heroic insight or special legislative feats of the kind regularly described in newspapers. (269)

The acceptance of this underpins a turn in contemporary accounts of policy (Considine 252–72) in which policy processes become contingent and incomplete. The ordering of policy is something to be attempted rather than achieved. Policy becomes pragmatic and ad hoc. It is coherent only in as much as a policy statement represents a bringing together of elements of an agency’s or government’s objectives and programs. The order, in some sense, arrives through the act of collection, narrativisation and representation.

The second implication is more directly for cultural policy makers facing the prospect of whole-of-government cultural policy-making. While it is reasonable for government to wish to make coherent, totalising statements about its cultural interests, such ambitions bring the near certainty of failure for the small agency. Yet these failures of coherence and cohesion should be viewed as delivering incremental benefits through the effort and process of this policy “churn”. As was the case with the Building the Future policy process, although it was aborted, it was not a totally wasted effort. Building the Future mobilised a set of ideas within Arts Queensland and within government. For the small arts or cultural bureaux approaching the enormous task of whole-of-government cultural policy-making, such marginal benefits are important.

References

Arts Queensland. Creative Queensland: The Queensland Government Cultural Policy 2002. Brisbane: Arts Queensland, 2002.

Bridgman, Peter, and Glyn Davis. Australian Policy Handbook. St Leonards: Allen & Unwin, 1998.

Considine, Mark. Public Policy: A Critical Approach. South Melbourne: Palgrave Macmillan, 1996.

Cunningham, Stuart. "Willing Wonkers at the Policy Factory." Media Information Australia 73 (1994): 4–7.

Glover, Stuart, and Stuart Cunningham. "The New Brisbane." Artlink 23.2 (2003): 16–23.

Glover, Stuart, and Gillian Gardiner. Building the Future: A Policy Framework for the Next Five Years (Working Draft). Brisbane: Arts Queensland, 2000.

Klein, Joe. "Eight Years." New Yorker 16 & 23 Oct. 2000: 188–217.

O'Regan, Tom. "Cultural Policy: Rejuvenate or Wither". 26 July 2001. RTF file. AKCCMP. 9 Aug. 2001. ‹http://www.gu.edu.au/centre/cmp›.

Parsons, Wayne. Public Policy: An Introduction to the Theory and Practice of Policy Analysis. Aldershot: Edward Elgar, 1995.

Williams, Raymond. Keywords: A Vocabulary of Culture and Society. London: Fontana, 1976.
APA, Harvard, Vancouver, ISO and other citation styles
