Journal articles on the topic 'Stocks Prices Australia Mathematical models'

Consult the top 16 journal articles for your research on the topic 'Stocks Prices Australia Mathematical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Hua, Jie, Maolin Huang, and Chengshun Huang. "Centrality Metrics’ Performance Comparisons on Stock Market Datasets." Symmetry 11, no. 7 (July 15, 2019): 916. http://dx.doi.org/10.3390/sym11070916.

Abstract:
The stock market is an essential sub-sector of the financial area. Both understanding and evaluating the mountains of collected stock data have become a challenge in relevant fields. Data visualisation techniques can offer a practical and engaging method to show the processed data in a meaningful way, with centrality measurements representing the significant variables in a network, through exploring the exact definition of each metric. In this study, we conducted an approach that combines data processing, graph visualisation and social network analysis methods to develop deeper insights into complex stock data, with the ultimate aim of drawing correct conclusions from the finalised graph models. We addressed the performance of centrality metrics such as betweenness, closeness, eigenvector, PageRank and weighted degree measurements, drawing comparisons between the experiments' results and the actual top 300 shares in the Australian stock market. The outcomes were broadly consistent: although the top 300 stocks ranked by the five centrality measurements did not entirely match the top 300 shares given by the ASX (Australian Securities Exchange), the weighted degree and PageRank metrics performed better than the other three measurements (betweenness, closeness and eigenvector). A potential reason is that we did not take the factor of stock market capitalisation into account in the methodology. This study only considers the stock price's changing rates between every two shares and provides a relevant static pattern at this stage. Further research will include looking at cycles and symmetry in the stock market over chosen trading days, which may assist stakeholders in gaining deeper insights into those stocks.
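The weighted-degree and PageRank comparison described above can be sketched on a toy correlation network. The tickers and edge weights below are purely hypothetical, not the paper's ASX data; a minimal pure-Python version:

```python
# Toy stock-correlation network: edges weighted by absolute return correlation.
# Hypothetical tickers and weights, standing in for the paper's ASX dataset.
edges = {
    ("BHP", "RIO"): 0.9, ("BHP", "CBA"): 0.4, ("RIO", "CBA"): 0.3,
    ("CBA", "WBC"): 0.8, ("WBC", "BHP"): 0.2,
}
nodes = sorted({n for e in edges for n in e})

# Weighted degree: sum of incident edge weights.
wdeg = {n: 0.0 for n in nodes}
for (a, b), w in edges.items():
    wdeg[a] += w
    wdeg[b] += w

# PageRank on the undirected weighted graph (power iteration).
d = 0.85                                     # damping factor
pr = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(100):
    new = {}
    for n in nodes:
        s = 0.0
        for (a, b), w in edges.items():
            if b == n:
                s += pr[a] * w / wdeg[a]     # mass flowing from a to n
            elif a == n:
                s += pr[b] * w / wdeg[b]     # mass flowing from b to n
        new[n] = (1 - d) / len(nodes) + d * s
    pr = new

ranking_by_degree = sorted(nodes, key=wdeg.get, reverse=True)
ranking_by_pagerank = sorted(nodes, key=pr.get, reverse=True)
```

On real data, comparing these two rankings against an index's actual constituents is exactly the kind of evaluation the paper performs.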
2

Chen, You-Shyang, Chih-Lung (Jerome) Chou, Yau-Jung (Mike) Lee, Su-Fen Chen, and Wen-Ju Hsiao. "Identifying Stock Prices Using an Advanced Hybrid ARIMA-Based Model: A Case of Games Catalogs." Axioms 11, no. 10 (September 24, 2022): 499. http://dx.doi.org/10.3390/axioms11100499.

Abstract:
At the beginning of 2020, the COVID-19 pandemic struck the world, affecting the pace of life and the economic behavioral patterns of people around the world, with an impact exceeding that of the 2008 financial crisis, causing a global stock market crash and even the first recorded negative oil prices. Under the impact of this pandemic, due to the global large-scale quarantine and lockdown measures, game stocks belonging to the stay-at-home economy have become the focus of investors from all over the world. Therefore, under such incentives, this study aims to construct a set of effective prediction models for the price of game stocks, which could help relevant stakeholders—especially investors—to make efficient predictions so as to achieve a profitable investment niche. Moreover, because stock prices have the characteristics of a time series, and based on the relevant discussion in the literature, we know that ARIMA (the autoregressive integrated moving average) prediction models have excellent prediction performance. In conclusion, this study aims to establish an advanced hybrid model based on ARIMA as an excellent prediction technology for the price of game stocks, and to construct four groups of different investment strategies to determine which technical models of investment strategies are suitable for different game stocks. There are six important directions, experimental results, and research findings in the construction of advanced models: (1) In terms of the experiment, the data are collected from the daily closing prices of game-related stocks on the Taiwan Stock Exchange, and the sample range is from 2014 to 2020. (2) In terms of the performance verification, the return on investment is used as the evaluation standard to verify the availability of the ARIMA prediction model. 
(3) In terms of the research results, the prices of listed stocks remain within the model's predicted 95% confidence interval for 14 days after the closing price, while OTC stocks fall within the 95% confidence interval for 3 days. (4) In terms of the empirical study of the rate of return, investors can obtain a better rate of return than the benchmark strategy by trading game stocks based on the indices set by the ARIMA model in this study. (5) In terms of the research findings, this study further compares the rate of return of trading strategies referencing the ARIMA index with that of trading strategies referencing the monitoring indicator, finding no significant difference between the two. (6) Different game stocks suit different technical models of investment strategies.
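The confidence-interval forecasting idea can be sketched with the AR part of an ARIMA model: fit an AR(1) on log prices by least squares and build an approximate 95% interval that widens with the horizon. The data here are synthetic, not the paper's Taiwan game-stock series:

```python
import math, random

# Synthetic daily closing prices (a drifting random walk), standing in for
# the paper's game-stock data, which is not reproduced here.
random.seed(1)
prices = [100.0]
for _ in range(300):
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.01)))

# Fit AR(1) on log prices: y_t = c + phi * y_{t-1} + e_t, by least squares.
y = [math.log(p) for p in prices]
x, z = y[:-1], y[1:]
n = len(x)
mx, mz = sum(x) / n, sum(z) / n
phi = sum((a - mx) * (b - mz) for a, b in zip(x, z)) / \
      sum((a - mx) ** 2 for a in x)
c = mz - phi * mx
resid = [b - (c + phi * a) for a, b in zip(x, z)]
sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))

# 14-step-ahead forecast with an approximate 95% interval;
# forecast variance accumulates with the horizon: v_h = sigma^2 + phi^2 * v_{h-1}.
h, yhat, var = 14, y[-1], 0.0
for _ in range(h):
    yhat = c + phi * yhat
    var = sigma ** 2 + phi ** 2 * var
lo, hi = yhat - 1.96 * math.sqrt(var), yhat + 1.96 * math.sqrt(var)
```

Checking whether realised prices stay inside `[lo, hi]` over the horizon mirrors the paper's 14-day / 3-day interval-coverage results.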
3

Hidayana, Rizki Apriva, Herlina Napitupulu, and Jumadil Saputra. "Determination of Risk Value Using the ARMA-GJR-GARCH Model on BCA Stocks and BNI Stocks." Operations Research: International Conference Series 2, no. 3 (September 4, 2021): 62–66. http://dx.doi.org/10.47194/orics.v2i3.176.

Abstract:
Stocks are common investments that are in great demand among investors. Stocks are also an investment instrument that provides returns but tends to be riskier. The return time series is easier to handle than the price time series. In investment activities, the most important components are volatility and risk. All financial evaluations require accurate volatility predictions. Volatility is identical to the conditional standard deviation of stock price returns. The most frequently used risk measure is Value-at-Risk (VaR). Mathematical models can be used to predict future stock prices; the model used here is the Glosten-Jagannathan-Runkle generalized autoregressive conditional heteroscedasticity (GJR-GARCH) model. The purpose of this study was to determine the risk value obtained by using the time series model. GJR-GARCH is a development of GARCH that includes the leverage effect. The leverage effect is related to the concept of asymmetry; asymmetry generally arises because of the difference between price changes and value volatility. The method used in this study is a literature and experimental study through secondary data simulations in the form of daily data on BCA shares and BNI shares. Data processing begins by checking the heteroscedasticity of the data, then continues with the GARCH model and a test for asymmetry in the data. If there is an asymmetric effect in the processed data, the GJR-GARCH model is applied. The results for the two stocks show that the analyzed stocks exhibit a leverage effect in their return volatility, because the GJR-GARCH coefficient value is > 0. The risk value obtained by using VaR measurements is 0.047247 for BCA stocks and 0.037355 for BNI stocks. Therefore, the ARMA-GJR-GARCH model is good for determining the value of stock risk using VaR.
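A minimal sketch of the GJR-GARCH(1,1) variance recursion and a one-day VaR. The parameters are illustrative, not estimated, and the returns are synthetic, standing in for the BCA/BNI data:

```python
import math, random

# Synthetic daily returns in place of the BCA/BNI series (not reproduced here).
random.seed(7)
returns = [random.gauss(0.0003, 0.015) for _ in range(500)]

# GJR-GARCH(1,1): sigma2_t = omega + (alpha + gamma*I[r_{t-1} < 0]) * r_{t-1}^2
#                           + beta * sigma2_{t-1}.
# gamma > 0 is the leverage effect: negative returns raise volatility more.
# Parameter values below are illustrative assumptions, not fitted estimates.
omega, alpha, gamma, beta = 1e-6, 0.05, 0.08, 0.88
sigma2 = returns[0] ** 2
for r in returns[1:]:
    leverage = gamma if r < 0 else 0.0
    sigma2 = omega + (alpha + leverage) * r ** 2 + beta * sigma2

# One-day 95% VaR under conditional normality (1.645 = 95% normal quantile).
var_95 = 1.645 * math.sqrt(sigma2)
```

In a fitted model, a significantly positive `gamma` is what the abstract refers to as "the GJR-GARCH coefficient value is > 0".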
4

Zhou, Dehui. "Financial Market Prediction and Simulation Based on the FEPA Model." Journal of Mathematics 2021 (December 26, 2021): 1–11. http://dx.doi.org/10.1155/2021/5955375.

Abstract:
Since the birth of the financial market, industry and academia have wanted to find a method to accurately predict the future trend of the financial market. The ultimate goal of this paper is to build a mathematical model that can effectively predict the short-term trend of a financial time series. This paper presents a new combined forecasting model: the Financial time series-Empirical mode decomposition-Principal component analysis-Artificial neural network (FEPA) model. This model is mainly composed of three components, based on empirical mode decomposition specialised for financial time series (FTA-EMD), principal component analysis (PCA), and artificial neural networks. The model is mainly used to model and predict complex financial time series. At the same time, the model also predicts the stock market index and exchange rate, covering hot fields of the financial market. The results show that the empirical mode decomposition back propagation neural network (EMD-BPNN) model has a better prediction effect than the autoregressive integrated moving average (ARIMA) model, which is mainly reflected in the accuracy of prediction. This shows that decomposing and recombining nonlinear and nonstationary financial time series can effectively improve prediction accuracy. When predicting the closing price of the Australian stock index, the hit rate (DS) of the FEPA model decomposition method is 72.22%, 10.86% higher than the EMD-BPNN model and 3.23% higher than the EMD-LPP-BPNN model. When the FEPA model predicts the Australian stock index, the hit rate is improved to a certain extent, and the effect is better than other models.
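The hit rate (directional symmetry, DS) used to score these models is the share of steps where forecast and actual move in the same direction. A toy example with made-up numbers, not the paper's Australian index data:

```python
# Directional symmetry (hit rate): percentage of steps where the forecast
# and the actual series move in the same direction. Toy values only.
actual   = [100, 102, 101, 103, 104, 102, 105]
forecast = [100, 101, 102, 104, 103, 101, 106]

hits = sum(
    1
    for i in range(1, len(actual))
    if (actual[i] - actual[i - 1]) * (forecast[i] - forecast[i - 1]) > 0
)
ds = 100.0 * hits / (len(actual) - 1)   # here: 4 of 6 moves matched
```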
5

Ogwuche, O. I., M. R. Odekunle, and M. O. Egwurube. "A Stochastic Model of the Dynamics of Change in Stock Price." NIGERIAN ANNALS OF PURE AND APPLIED SCIENCES 6 (December 28, 2015): 99–105. http://dx.doi.org/10.46912/napas.14.

Abstract:
The solutions of many mathematical models resulting in stochastic differential equations are based on the assumption that the drift and volatility coefficients are linear functions of the solutions. We formulated a model whose basic parameters can be derived from observations over discretized time intervals, rather than assuming that the drift and volatility coefficients are linear functions of the solutions. We took into consideration the possibility of an asset gaining, losing or remaining stable over a small interval of time, instead of the assumption of the binomial asset pricing models that the price can appreciate with probability p or depreciate with probability 1-p. A multi-dimensional stochastic differential equation was obtained whose drift is the expectation vector and whose volatility is the covariance of the stocks with respect to each other. The resulting system of stochastic differential equations was solved numerically using the Euler-Maruyama scheme for multi-dimensional stochastic differential equations, through a computer program written in MATLAB. We obtained a realization of the evolution of the stock prices over a chosen interval of time.
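The Euler-Maruyama discretisation can be sketched for two correlated geometric-Brownian-motion stocks. The drift, volatility and correlation values are illustrative assumptions, and this sketch is in Python rather than the paper's MATLAB:

```python
import math, random

# Euler-Maruyama for two correlated stocks dS_i = mu_i*S_i dt + sigma_i*S_i dW_i.
# All parameter values below are illustrative assumptions.
random.seed(42)
mu = [0.08, 0.05]              # drifts
sigma = [0.2, 0.15]            # volatilities
rho = 0.6                      # correlation between the two Brownian motions
s = [100.0, 50.0]              # initial prices
dt, steps = 1 / 252, 252       # one year of daily steps

path = [list(s)]
for _ in range(steps):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    # Correlated Brownian increments via a Cholesky-style construction.
    dw = [math.sqrt(dt) * z1,
          math.sqrt(dt) * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)]
    s = [s[i] + mu[i] * s[i] * dt + sigma[i] * s[i] * dw[i] for i in range(2)]
    path.append(list(s))
```

Each run of the loop produces one realization of the joint price evolution, as in the paper's simulations.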
6

Robson, Edward, and Vinayak V. Dixit. "Constructing a Database for Computable General Equilibrium Modeling of Sydney, Australia, Transport Network." Transportation Research Record: Journal of the Transportation Research Board 2606, no. 1 (January 2017): 54–62. http://dx.doi.org/10.3141/2606-07.

Abstract:
In the search for benefits to justify transport projects, economic appraisals have increasingly incorporated the valuation of impacts to the wider economy. Computable general equilibrium (CGE) models provide a framework to estimate these impacts by simulating the interactions of urban economies and transport networks. In CGE models, households and firms are represented by microeconomic behavioral functions, and markets adjust according to prices. As markets both inside and outside the transport network are taken into account, a wide variety of measures that can assist in economic appraisals can be extracted. However, urban CGE models are computationally burdensome and require detailed, spatially disaggregate data. This paper discusses the methodology used to develop a database, including an input–output table, for the calibration of an urban CGE model for Sydney, Australia. Official and publicly available data sources were manipulated by using a number of mathematical and statistical techniques to compile a table for 249 regions and 20 sectors across Sydney. Issues, such as determining the appropriate level of aggregation, generating incomplete data, and managing conflicting data, that other input–output table developers may encounter when constructing multiregional tables were addressed in the study. The table entries themselves were mapped and explored, as they provide a useful study of the spatial economy of Sydney. Future work will focus on streamlining the construction of input–output tables and incorporating new data sources.
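One standard technique for reconciling an input-output table with known row and column totals (one of the data-conflict problems such table builders face) is RAS, or biproportional, balancing. A tiny 3-sector sketch with made-up numbers, not the actual 249-region Sydney table; the paper does not state that it uses RAS specifically:

```python
# RAS (biproportional) balancing: iteratively scale a seed input-output
# matrix so its row and column sums match known totals. Toy 3-sector data.
seed = [[10.0, 5.0, 2.0],
        [4.0, 8.0, 3.0],
        [1.0, 2.0, 6.0]]
row_targets = [20.0, 18.0, 12.0]
col_targets = [15.0, 17.0, 18.0]   # must sum to the same grand total as rows

m = [row[:] for row in seed]
for _ in range(200):
    for i, row in enumerate(m):              # scale each row to its target
        r = row_targets[i] / sum(row)
        m[i] = [v * r for v in row]
    for j in range(3):                       # scale each column to its target
        c = col_targets[j] / sum(m[i][j] for i in range(3))
        for i in range(3):
            m[i][j] *= c

row_err = max(abs(sum(m[i]) - row_targets[i]) for i in range(3))
```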
7

Duppati, Geeta, and Mengying Zhu. "Oil prices changes and volatility in sector stock returns: Evidence from Australia, New Zealand, China, Germany and Norway." Corporate Ownership and Control 13, no. 2 (2016): 351–70. http://dx.doi.org/10.22495/cocv13i2clp4.

Abstract:
The paper examines the exposure of sectoral stock returns to oil price changes in Australia, China, Germany, New Zealand and Norway over the period 2000-2015, using weekly data drawn from DataStream. The issue of volatility has important implications for the theory of finance and, as is well known, accurate volatility forecasts are important in a variety of settings, including option and other derivatives pricing, portfolio and risk management (e.g. in the calculation of hedge ratios and Value-at-Risk measures), and trading strategies (David and Ruiz, 2009). This study adopts GARCH and EGARCH models to understand the relationship between returns and volatility. The findings using GARCH (EGARCH) models suggest that in Germany eight (nine) out of ten sector returns can be explained by the volatility of past oil prices, while in Australia six (seven) out of ten sector returns are sensitive to oil price changes, the exceptions being Industrials, Consumer Goods, Health Care and Utilities. In China and New Zealand five sectors are found to be sensitive to oil price changes, and in Norway three sectors, namely Oil & Gas, Consumer Services and Financials. Secondly, this paper also investigates the exposure of stock returns to oil price changes using market index data as a proxy, again with GARCH and EGARCH models. The results indicate that stock returns are sensitive to oil price changes and exhibit leverage effects in all five countries. Further, the findings also suggest that sectors with more constituents are likely to exhibit leverage effects, and vice versa. The results have implications for market participants seeking to make informed decisions about better portfolio diversification to minimize risk and add value to their stocks.
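The EGARCH leverage mechanism can be sketched with the log-variance recursion, in which a negative coefficient on the signed shock makes bad news raise volatility more than good news. Parameters are illustrative assumptions and the returns are synthetic, not the paper's sector data:

```python
import math, random

# EGARCH(1,1) log-variance recursion:
#   ln sigma2_t = omega + beta * ln sigma2_{t-1}
#                 + alpha * (|z| - E|z|) + gamma * z,
# where z is the lagged standardised return. gamma < 0 is the leverage
# effect: negative shocks raise volatility more. Illustrative parameters.
random.seed(3)
returns = [random.gauss(0, 0.012) for _ in range(400)]

omega, beta, alpha, gamma = -0.5, 0.94, 0.12, -0.06
e_abs_z = math.sqrt(2 / math.pi)           # E|z| for standard normal z
log_s2 = math.log(0.012 ** 2)              # initialise at sample-like variance
for r in returns:
    z = r / math.exp(log_s2 / 2)           # standardised lagged return
    log_s2 = omega + beta * log_s2 + alpha * (abs(z) - e_abs_z) + gamma * z

annualised_vol = math.exp(log_s2 / 2) * math.sqrt(252)
```

Because the recursion works on the log of variance, no parameter restrictions are needed to keep variance positive, which is a standard reason for preferring EGARCH when modelling leverage.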
8

Lalwani, Vaibhav, and Madhumita Chakraborty. "Multi-factor asset pricing models in emerging and developed markets." Managerial Finance 46, no. 3 (December 2, 2019): 360–80. http://dx.doi.org/10.1108/mf-12-2018-0607.

Abstract:
Purpose The purpose of this paper is to compare the performance of various multifactor asset pricing models across ten emerging and developed markets. Design/methodology/approach The general methodology to test asset pricing models involves regressing test asset returns (left-hand side assets) on pricing factors (right-hand side assets). The performance of different models is then evaluated based on how well they price multiple test assets together. The parameters used to compare the relative performance of different models are their pricing errors (GRS statistic and average absolute intercepts) and explained variation (average adjusted R2). Findings The Fama-French five-factor model improves the pricing performance for stocks in Australia, Canada, China and the USA. The pricing in these countries appears to be more integrated. However, the superior performance in these four countries is not consistent across a variety of test assets, and the magnitude of the reduction in pricing errors vis-à-vis three- or four-factor models is often economically insignificant. For other markets, the parsimonious three-factor model or its four-factor variants appear to be more suitable. Originality/value Unlike most asset pricing studies that use test assets based on variables that are already used to construct RHS factors, this study uses test assets that are generally different from RHS sorts. This makes the tests more robust and less biased in favour of any multifactor model. Also, most international studies of asset pricing tests use data for different markets and combine them into regions. This study provides the evidence from ten countries separately because prior research has shown that locally constructed factors are more suitable to explain asset prices. Further, this study also tests for the usefulness of adding a quality factor in the existing asset pricing models.
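The time-series regression underlying such tests can be sketched for a single factor: the intercept (alpha) is the pricing error and R-squared the explained variation. The returns below are toy numbers, not the study's ten-market data:

```python
# Time-series test of a one-factor model: regress an asset's excess returns
# on a market factor; the intercept (alpha) is the pricing error. Toy data.
factor = [0.01, -0.02, 0.015, 0.03, -0.01, 0.005, 0.02, -0.015]
asset  = [0.012, -0.025, 0.02, 0.035, -0.008, 0.004, 0.024, -0.02]

n = len(factor)
mf, ma = sum(factor) / n, sum(asset) / n
beta = sum((f - mf) * (a - ma) for f, a in zip(factor, asset)) / \
       sum((f - mf) ** 2 for f in factor)
alpha = ma - beta * mf            # pricing error: zero if the model prices the asset

# Explained variation (R^2), the other comparison metric used in such studies.
ss_res = sum((a - (alpha + beta * f)) ** 2 for f, a in zip(factor, asset))
ss_tot = sum((a - ma) ** 2 for a in asset)
r2 = 1 - ss_res / ss_tot
```

The GRS statistic mentioned in the abstract aggregates such alphas across many test assets into one joint test that all pricing errors are zero.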
9

Manickavasagam, Jeevananthan, and Visalakshmi S. "An investigational analysis on forecasting intraday values." Benchmarking: An International Journal 27, no. 2 (October 4, 2019): 592–605. http://dx.doi.org/10.1108/bij-11-2018-0361.

Abstract:
Purpose Algorithmic trading has advanced exponentially and necessitates the evaluation of intraday stock market forecasting, given that any stock market series is expected to follow the random walk hypothesis. The purpose of this paper is to forecast the intraday values of stock indices using data mining techniques and compare the techniques' performance across different markets to identify the best results. Design/methodology/approach This study investigates the intraday values (every 60th-minute closing value) of four different markets (namely, the UK, Australia, India and China) spanning from April 1, 2017 to March 31, 2018. The forecasting performance of multivariate adaptive regression splines (MARSplines), support vector regression (SVR), backpropagation neural network (BPNN) and autoregression (1) is compared using statistical measures. A robustness evaluation is done to check the performance of the models on relative ratios of the data. Findings MARSplines produces better results than the compared models in forecasting every 60th minute of the selected stocks and stock indices. Next to MARSplines, SVR outperforms the neural network and autoregression (1) models. MARSplines proved to be more robust than the other models. Practical implications Forecasting provides a substantial benchmark for companies with long-run operations. Significant profit can be earned by successfully predicting a stock's future price. Traders have to outperform the market using such techniques. Policy makers need to estimate future prices/trends in the stock market to identify the link between financial instruments and monetary policy, which gives higher insight into the mechanism of existing policy and the role of financial assets in many channels. Thus, this study expects that the proposed model can create significant profits for traders by more precisely forecasting the stock market.
Originality/value This study contributes to the high-frequency forecasting literature using MARSplines, SVR and BPNN. Finding the most effective way of forecasting the stock market is imperative for traders and portfolio managers making investment decisions. This study reveals changing trends in investing and the expectation of significant gains in a short time through intraday trading.
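Ranking models "using statistical measures", as the paper does, typically means comparing error metrics such as RMSE and MAPE on held-out values. A sketch with two hypothetical forecasts against toy hourly index values:

```python
import math

# Comparing two forecasting models by RMSE and MAPE on toy hourly values.
# Both forecast series are made up for illustration.
actual  = [7150, 7162, 7158, 7171, 7165, 7180]
model_a = [7149, 7165, 7155, 7168, 7168, 7177]   # e.g. a spline-based forecast
model_b = [7140, 7150, 7170, 7180, 7150, 7195]   # e.g. a naive benchmark

def rmse(y, f):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, f)) / len(y))

def mape(y, f):
    return 100.0 * sum(abs(a - b) / abs(a) for a, b in zip(y, f)) / len(y)

better = "A" if rmse(actual, model_a) < rmse(actual, model_b) else "B"
```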
10

Volontyr, L., and L. Mykhalchyshyna. "Organizational and economic mechanism of grain sales: information component." Scientific Messenger of LNU of Veterinary Medicine and Biotechnologies 21, no. 92 (May 11, 2019): 81–89. http://dx.doi.org/10.32718/nvlvet-e9213.

Abstract:
A significant part of the output of the agro-industrial complex of Ukraine is exported. It is therefore desirable to determine the optimal volume of products to be sold each month. Prices for grain are formed depending on demand and supply, costs of production and sale, market fees, etc. Analysis of the price situation in Ukrainian cities shows large variation. The average price of 1 kg of grain crops does not fully characterize the price situation of the Ukrainian grain market. There is seasonal price cyclicality: prices grow as stocks decrease and fall after harvesting, when mass sales of grain are carried out by producers who are unable to store the grown crops and consumers buy grain. In this article, an economic-mathematical model for optimizing the calendar plan for the sale of agricultural products is developed and solved. The model is considered both with deterministic product prices and with future market prices of a probabilistic nature. The system of restrictions consists of two constraints: one determining the optimal size of the grain harvest of each type, and one for the capacity of the warehouse. If future market prices are not deterministic, the producer always runs the risk of receiving less future revenue from the sale of products than expected. A risk-averse owner will be guided by two criteria when deciding: to maximize the expected total net income and to minimize the dispersion of total net income. In this case, the model is bi-criteria and nonlinear. To support this multi-criteria optimization, the owner first receives information about the range of variation of the expected total net income and of the standard deviation of income over the set of efficient variants of the calendar plan.
The owner's individual attitude to risk is captured by eliciting acceptable levels of the indicated criteria. Among all efficient variants of the calendar plan, the one that best reflects the individual preferences of the owner of the product is then selected. The following information is needed to construct a numerical model for grain sales: sales prices and the cost of storing 1 ton of grain crops until a given month. The predicted values are based on a simple linear econometric model built on a statistical sample. The reliability of the econometric model is determined by the coefficient of determination or by Fisher's F-criterion according to the theory of statistical hypotheses. Econometric models have weak extrapolative properties, so only a short-term forecast can be formed. The solution of the model showed that all kinds of grain crops except barley are economically unprofitable to sell in January, May, June, July and August. Wheat of grades 3 and 6 and corn are also unprofitable to sell in September. Unlike other crops, barley is profitable throughout the year. In February, sales of wheat of classes 2, 3 and 6 are at their maximum; in March barley sales are at their maximum, and in May at their minimum. Maize has its maximum sales in May and its minimum in September. The minimum sale month for wheat depends on its class: September, April and December for classes 2, 3 and 6 respectively. With such incomplete loading of warehouses, the profit from storage of grain crops will be UAH 743 thousand. Thus, PJSC "Gnivan Grain Reciprocal Enterprise" should load its warehouses more fully to improve its financial position. One way of solving the problem of seasonal grain sales is to create a network of modern certified grain elevators, taking into account logistically rational locations, which will make it possible to store enough grain of the proper quality.
This will increase the efficiency of grain producers through the sale of grain at favorable market conditions over a wider range of time. Independent operators should also be encouraged to ensure that the quality of the grain is objectively measured. At present, analysis of the grain storage system shows that the high cost of the services of active elevators is also a problem.
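The core trade-off in the calendar plan, expected price versus cumulative storage cost, can be sketched as choosing the sale month with the highest net revenue per tonne. Prices and costs below are purely illustrative, not the paper's Ukrainian data:

```python
# Choose the sale month maximising net revenue per tonne:
# expected price minus cumulative storage cost. Illustrative numbers only.
price = {"Sep": 170, "Oct": 175, "Nov": 182, "Dec": 190, "Jan": 188, "Feb": 198}
storage_cost_per_month = 3.0   # cost of holding one tonne for one month

months = list(price)           # insertion order = months since harvest
net = {m: price[m] - storage_cost_per_month * i for i, m in enumerate(months)}
best_month = max(net, key=net.get)
```

The full model in the paper adds warehouse-capacity constraints and, in the stochastic version, the mean-variance trade-off over efficient plans; this sketch only shows the deterministic per-tonne decision.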
11

Petrova, Anzhela, and Margarita Deyneka. "ARIMA-MODELS: MODELING AND FORECASTING PRICES OF STOCKS." International scientific journal "Internauka". Series: "Economic Sciences", 2022. http://dx.doi.org/10.25313/2520-2294-2022-2-7921.

Abstract:
Modeling the dynamics and forecasting of financial market indicators is of interest to market participants and analysts, as well as to scientific circles. Researching market indicators involves selecting appropriate methods, tools and resources. Technical analysis, based on visual analysis of time series using special charts and graphical models (figures), is popular and widely applied. Technical analysis is subjective, so in addition to it, methods involving a mathematical apparatus are used. The ARIMA model has shown its effectiveness in working with different time series and has become a powerful tool for obtaining accurate forecasts. The algorithm for constructing such a model involves a number of mathematical calculations, which can cause difficulties. Thanks to modern software capabilities, for example the R programming language, statistical analysis of time series, namely the construction of an ARIMA model, can be implemented quickly, with graphical and numerical results. That is why building an ARIMA model using the R language to model a company's share price is of practical importance: it allows one to obtain a forecast and make decisions during asset purchase and sale transactions in the financial market. In this work, the algorithm for constructing an ARIMA model is implemented in the RStudio environment in three stages (using the corresponding library functions) on the time series of PepsiCo stock prices (monthly and daily data). Graphs of the series were constructed, the series were tested for stationarity, an assumption about the model parameters was made from the preliminary analysis, automatic parameter selection was also used, and the corresponding models were built.
All constructed models were tested for adequacy through appropriate tests and criteria, as was the quality of the models' approximation of the actual data. Forecast values were obtained and presented graphically in comparison with the actual share price data, and the accuracy of the forecast was calculated.
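The stationarity-and-differencing step of the ARIMA workflow can be sketched in stdlib Python (the paper itself works in R): difference the series and compare lag-1 autocorrelations. The prices are synthetic, not the PepsiCo data:

```python
import random

# ARIMA preprocessing sketch: difference the price series and compare lag-1
# autocorrelations, a preliminary check behind the d-parameter choice.
# Synthetic random-walk prices stand in for the paper's PepsiCo series.
random.seed(11)
prices = [100.0]
for _ in range(300):
    prices.append(prices[-1] + random.gauss(0.1, 1.0))

diff = [b - a for a, b in zip(prices, prices[1:])]   # d = 1 differencing

def acf(series, lag):
    n = len(series)
    m = sum(series) / n
    c0 = sum((v - m) ** 2 for v in series)
    return sum((series[i] - m) * (series[i + lag] - m)
               for i in range(n - lag)) / c0

rho_prices = acf(prices, 1)   # near 1: the level series is non-stationary
rho_diff = acf(diff, 1)       # near 0: differencing removed the trend
```

In the R workflow the paper describes, the same diagnosis is typically made with `acf()`/unit-root tests before fitting, or delegated to automatic order selection.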
12

Ma, Yanran, Nan Chen, and Han Lv. "Back propagation mathematical model for stock price prediction." Applied Mathematics and Nonlinear Sciences, December 30, 2021. http://dx.doi.org/10.2478/amns.2021.2.00144.

Abstract:
Due to the extremely volatile nature of financial markets, it is commonly accepted that stock price prediction is a task filled with challenges. However, in order to make profits or understand the essence of the equity market, numerous market participants and researchers try to forecast stock prices using various statistical, econometric or even neural network models. In this work, we survey and compare the predictive power of five neural network models, namely, the back propagation (BP) neural network, radial basis function neural network, general regression neural network, support vector machine regression (SVMR) and least squares support vector machine regression. We apply the five models to make price predictions for three individual stocks, namely, Bank of China, Vanke A and Guizhou Maotai. Adopting mean square error and average absolute percentage error as criteria, we find that the BP neural network consistently and robustly outperforms the other four models. Some theoretical and practical implications are then discussed.
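A minimal back-propagation network of the kind compared here, trained on a toy curve (one input, three tanh hidden units, per-sample gradient descent). This illustrates the BP model class only, not the paper's Bank of China / Vanke / Maotai experiments:

```python
import math, random

# Tiny back-propagation network: 1 input -> 3 tanh hidden units -> 1 output,
# fitted to a toy price-like curve by stochastic gradient descent.
random.seed(0)
xs = [i / 20 for i in range(21)]
ys = [0.5 + 0.3 * math.sin(3 * x) for x in xs]    # toy target series

w1 = [random.uniform(-1, 1) for _ in range(3)]    # input -> hidden weights
b1 = [0.0, 0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(3)]    # hidden -> output weights
b2 = 0.0
lr = 0.05

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(3)]
    return h, b2 + sum(w2[j] * h[j] for j in range(3))

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(2000):
    for x, y in zip(xs, ys):
        h, out = forward(x)
        d_out = 2 * (out - y)                     # dLoss/dOutput
        for j in range(3):
            d_h = d_out * w2[j] * (1 - h[j] ** 2) # back-propagated to hidden
            w2[j] -= lr * d_out * h[j]
            w1[j] -= lr * d_h * x
            b1[j] -= lr * d_h
        b2 -= lr * d_out
loss_after = mse()
```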
13

Gopal, Warade Kalyani, Jawale Mamta Pandurang, Tayade Pratiksha Devaram, and Dinesh D. Patil. "Stock Price Prediction Using Machine Learning." International Journal of Advanced Research in Science, Communication and Technology, July 9, 2021, 386–89. http://dx.doi.org/10.48175/ijarsct-1644.

Abstract:
In stock market prediction, the aim is to predict the future value of the financial stocks of a company. The recent trend in stock market prediction technologies is the use of machine learning, which makes predictions based on the values of the current stock market by training on their previous values. Machine learning itself employs different models to make prediction easier. The paper focuses on regression- and LSTM-based machine learning to predict stock values. Factors considered are open, close, low, high and volume. To predict market movement, we use the stock prices and stock indicators in addition to the news related to these stocks. Most of the previous work in this industry focused either on classifying released market news and demonstrating its effect on the stock price, or on historical price movement and predicting future movement. In this work, we propose an automated trading system that integrates mathematical functions, machine learning, and other external factors such as news sentiment for the purpose of better stock prediction accuracy and issuing profitable trades. The aim is to determine the price of a certain stock for the coming end-of-day, considering the first several trading hours of the day.
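The regression side of such predictors can be sketched as a supervised next-day-close model with a train/test split. The prices are toy numbers, and a single lagged close stands in for the open/high/low/volume feature set:

```python
# Supervised framing of next-day close prediction: build (feature, target)
# pairs from past closes, fit by least squares on a training split, and
# evaluate on the held-out tail. Toy prices, not real market data.
closes = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110, 113, 115]
pairs = list(zip(closes[:-1], closes[1:]))          # (today, tomorrow)
train, test = pairs[:8], pairs[8:]

n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = sum((x - mx) * (y - my) for x, y in train) / \
        sum((x - mx) ** 2 for x, _ in train)
intercept = my - slope * mx

# Mean absolute percentage error on the held-out pairs.
test_mape = 100 * sum(abs(y - (intercept + slope * x)) / y
                      for x, y in test) / len(test)
```

An LSTM-based variant replaces the linear map with a recurrent network over a window of past values, but the supervised windowing and train/test evaluation are the same.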
14

Meghwal, Siddharth, and Irfan Khan. "Stock Prediction Using Machine Learning Algorithms." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, September 1, 2022, 18–30. http://dx.doi.org/10.32628/cseit22855.

Abstract:
In the recent times, the stock markets have emerged as one of the top investment destinations for individual and retail investors due to the lure of huge profits that are possible with stock investments compared to more traditional and conservative forms of investments such as bank deposits, real estate and gold. The stock markets unlike other forms of investment are highly dynamic due to the various variables involved in stock price determination and are complex to understand for a common investor. Individual and small-time investors have to generate a portfolio of common stocks to reduce the overall risk and generate reasonable returns on their investment. This phenomenon has given way too many individual and retail investors incurring huge losses because their decisions are based on speculation and not on sound technical grounds. While there are financial advisory firms and online tools where individual investors can get professional stock investment advice, the reliability of such investment advice in the recent past has been inconsistent and not meeting the rigor of quantitative and rational stock selection process. Many of such stock analysts and the tools mostly rely on short term technical indicators and are biased by the speculation in the market leading to huge variances in their predictions and leading to huge losses for individual investors. While the use of Artificial Intelligence (AI) and Machine Learning (ML) techniques is widely adopted in the financial domain, integration of AI/ML techniques with fundamental variables and long-term value investing is a lacking in this domain. Some of the stock portfolio tools available in the market use AI/ML techniques but are mostly built using technical indicators which makes them only suitable for general trend predictions, intraday trading and not suitable for long term value investing due to wide variances and reliability issues. 
A Financial Decision Support System that can provide stock investors with reliable and accurate information for selecting stocks, and create an automated portfolio with detailed quantitative analysis, is currently lacking. A Financial Decision Support System (DSS) that can establish a relationship between fundamental financial variables and stock prices, and automatically create a portfolio of premium stocks, would be of great utility to the individual investment community. As part of this thesis, the researcher has designed and developed a Financial DSS for selecting stocks and automatically creating portfolios with minimal input from individual investors. The Financial DSS is based on a system architecture combining the advantages of Artificial Intelligence (AI), Machine Learning (ML) and mathematical models. Its design philosophy is to combine several independent models, rather than rely on a single stock price model, in order to increase the accuracy and reliability of stock selections and the overall Return on Investment (ROI) of the portfolio. The machine learning models are used to establish the relationship between fundamental financial variables and the price of a stock; a mathematical model is developed to calculate the intrinsic value of a stock, taking into account the full lifecycle of the stock, which involves various phases; and a comprehensive model analyses the financial health of the stocks. The AI/ML stock models are independently trained on historical financial data and integrated with the overall Financial DSS. Finally, a Financial DSS tool with a graphical user interface is built, integrating all three models, that can run on a general-purpose desktop or laptop. To validate it reliably, the Financial DSS has been subjected to a wide variety of stocks in terms of market capitalisation and industry segment.
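The abstract names an intrinsic-value model that accounts for the "full lifecycle" of a stock but does not specify it. A two-phase discounted-cash-flow (DCF) estimate is the standard way to value a stock over a growth phase plus a terminal phase; the sketch below is generic and illustrative, not the thesis's actual model, and every parameter value is an invented example.

```python
# Hedged sketch: a generic two-phase discounted-cash-flow (DCF) estimate of a
# stock's intrinsic value. The phase structure and all numbers below are
# illustrative assumptions, not the thesis's actual model or data.

def intrinsic_value(fcf_per_share, growth_rate, terminal_growth,
                    discount_rate, growth_years=10):
    """Discount an explicit growth phase, then add a Gordon terminal value."""
    if discount_rate <= terminal_growth:
        raise ValueError("discount rate must exceed terminal growth")
    value = 0.0
    fcf = fcf_per_share
    for year in range(1, growth_years + 1):
        fcf *= 1 + growth_rate              # cash flow grows each year
        value += fcf / (1 + discount_rate) ** year
    # Terminal value captures all cash flows beyond the explicit horizon.
    terminal = fcf * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return value + terminal / (1 + discount_rate) ** growth_years

# A stock trading well below this estimate would be a value candidate.
print(intrinsic_value(fcf_per_share=5.0, growth_rate=0.08,
                      terminal_growth=0.02, discount_rate=0.10))
```

Under toy inputs like these, the estimate is most sensitive to the gap between the discount rate and the terminal growth rate, which is one reason value-investing models of this kind are usually applied with a margin of safety.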
The Financial DSS is validated for its short-term and long-term Return on Investment (ROI) using both historical and current real-time financial data. The researcher reports that the accuracy of the AI/ML stock price models is greater than 90%, and that the overall ROI of the stock portfolios created by the Financial DSS is 61% for long-term investments and 11.74% for short-term investments. The system has the potential to help millions of individual investors make their financial decisions on stocks for a fraction of the cost paid to corporate financial consultants, and may eventually contribute to a more efficient financial system.
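The reported accuracies concern models that map fundamental financial variables to stock prices. As a minimal sketch of such a mapping, assuming an ordinary least-squares fit (the thesis does not specify its algorithms, and the feature names and figures below are invented for illustration):

```python
# Illustrative sketch only: fitting a relationship between fundamental
# variables and stock price via ordinary least squares. The features
# (EPS, book value, dividend) and all numbers are toy assumptions.
import numpy as np

# Rows: stocks. Columns: earnings per share, book value per share, dividend.
fundamentals = np.array([
    [4.2, 30.0, 1.2],
    [2.1, 18.5, 0.6],
    [6.8, 45.0, 2.0],
    [1.0,  9.0, 0.2],
    [3.5, 25.0, 1.0],
])
prices = np.array([85.0, 40.0, 140.0, 17.0, 70.0])

# Add an intercept column and solve the least-squares problem.
X = np.column_stack([np.ones(len(prices)), fundamentals])
coef, *_ = np.linalg.lstsq(X, prices, rcond=None)

def predict_price(eps, book_value, dividend):
    """Price estimate from the fitted linear model."""
    return coef @ np.array([1.0, eps, book_value, dividend])

print(predict_price(3.0, 22.0, 0.8))
```

A production system of the kind the abstract describes would train far richer models on historical fundamentals; the point here is only the shape of the problem: features derived from financial statements on one side, observed prices on the other.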
APA, Harvard, Vancouver, ISO, and other styles
15

Burns, Alex. "Oblique Strategies for Ambient Journalism." M/C Journal 13, no. 2 (April 15, 2010). http://dx.doi.org/10.5204/mcj.230.

Full text
Abstract:
Alfred Hermida recently posited ‘ambient journalism’ as a new framework for para- and professional journalists, who use social networks like Twitter as story sources and as a news delivery platform. Beginning with this framework, this article explores the following questions: How does Hermida define ‘ambient journalism’ and what is its significance? Are there alternative definitions? What lessons do current platforms provide for the design of future, real-time platforms that ‘ambient journalists’ might use? What lessons does the work of Brian Eno, the musician and producer who coined the term ‘ambient music’ over three decades ago, provide? My aim here is to formulate an alternative definition of ambient journalism that emphasises craft, skills acquisition, and the mental models of professional journalists, which are more generally the foundations of journalism practice. Rather than Hermida’s participatory media context, I emphasise ‘institutional adaptiveness’: how journalists and newsrooms in media institutions rely on craft and skills, and how emerging platforms can augment these foundations rather than replace them.
Hermida’s Ambient Journalism and the Role of Journalists
Hermida describes ambient journalism as: “broad, asynchronous, lightweight and always-on communication systems [that] are creating new kinds of interactions around the news, and are enabling citizens to maintain a mental model of news and events around them” (Hermida 2). His ideas appear to have two related aspects. He conceives ambient journalism as an “awareness system” between individuals that functions as a collective intelligence or kind of ‘distributed cognition’ at a group level (Hermida 2, 4-6). Facebook, Twitter and other online social networks are examples. Hermida also suggests that such networks enable non-professionals to engage in ‘communication’ and ‘conversation’ about news and media events (Hermida 2, 7).
In a helpful clarification, Hermida observes that ‘para-journalists’ are like the paralegals or non-lawyers who provide administrative support in the legal profession and, in academic debates about journalism, are more commonly known as ‘citizen journalists’. Thus, Hermida’s ambient journalism appears to be: (1) an information systems model of new platforms and networks, and (2) a normative argument that these tools empower ‘para-journalists’ to engage in journalism and real-time commentary. Hermida’s thesis is intriguing and worthy of further discussion and debate. As currently formulated, however, it risks sharing the blind-spots and contradictions of the academic literature that Hermida cites, which suffers from poor theory-building (Burns). A major reason is that the participatory media context on which Hermida often builds his work has different mental models and normative theories from those of the journalists and media institutions that are the target of critique. Ambient journalism would be a stronger and more convincing framework if these incorrect assumptions were jettisoned. Others may also misunderstand what Hermida proposes, because the academic debate is often polarised between para-journalists and professional journalists, owing to different views about institutions, the politics of knowledge, decision heuristics, journalist training, and normative theoretical traditions (Christians et al. 126; Cole and Harcup 166-176). In the academic debate, para-journalists or ‘citizen journalists’ may be said to have a communitarian ethic and to desire solutions more autonomous than journalists, who are framed as uncritical and reliant on official sources, and than media institutions, which are portrayed as surveillance-like ‘monitors’ of society (Christians et al. 124-127). This is, however, only one of a range of possible relationships. Sole reliance on para-journalists could be a premature solution to a more complex media ecology.
Journalism craft, which does not rely only on official sources, also has a range of practices that already provide the “more complex ways of understanding and reporting on the subtleties of public communication” sought (Hermida 2). Citizen- and para-journalist accounts may overlook micro-studies of how newsrooms adopt technological innovations and integrate them into newsgathering routines (Hemmingway 196). Thus, an examination of the realities of professional journalism will help to cast a better light on how ambient journalism can shape the mental models of para-journalists, and provide more rigorous analysis of news and similar events. Professional journalism has several core dimensions that para-journalists may overlook. Journalism’s foundation as an experiential craft includes guidance and norms that orient the journalist to information, including practitioner ethics. This craft is experiential; it is the basis for journalism’s claim to “social expertise” as a discipline; and it is more like the original Linux and Open Source movements, which evolved through creative conflict (Sennett 9, 25-27, 125-127, 249-251). There are learnable, transmissible skills to contextually evaluate, filter, select and distil the essential insights. This craft-based foundation and these skills inform and structure the journalist’s cognitive witnessing of an event, either directly or via reconstructed, cultivated sources. The journalist publishes through a recognised media institution or online platform, which provides communal validation and verification. There is far more here than the academic portrayal of journalists as ‘gate-watchers’ for a ‘corporatist’ media elite. Craft and skills distinguish the professional journalist from Hermida’s para-journalist. Increasingly, media institutions hire journalists who are trained in other craft-based research methods (Burns and Saunders).
Bethany McLean, who ‘broke’ the Enron scandal, was an investment banker; documentary filmmaker Errol Morris first interviewed serial killers for an early project; and Neil Chenoweth used ‘forensic accounting’ techniques to investigate Rupert Murdoch and Kerry Packer. Such expertise allows the journalist to filter information, and to mediate any influences in the external environment, in order to develop an individualised, ‘embodied’ perspective (Hofstadter 234; Thompson; Garfinkel and Rawls). Para-journalists and social network platforms cannot replace this expertise, which is often unique to individual journalists and their research teams.
Ambient Journalism and Twitter
Current academic debates about how citizen- and para-journalists may augment or even replace professional journalists can often turn into legitimation battles over whether the ‘de facto’ solution is a social media network rather than a media institution. For example, Hermida discusses Twitter, a micro-blogging platform that allows users to post 140-character messages: small, discrete information chunks suited to short-term and episodic memory. Twitter enables users to monitor other users, to group messages, and to search for terms specified by a hashtag. Twitter thus illustrates how social media platforms can make data more transparent and explicit to non-specialists like para-journalists. In fact, Twitter is suitable for five different categories of real-time information: news, pre-news, rumours, the formation of social media and subject-based networks, and “molecular search” using granular data-mining tools (Leinweber 204-205). In this model, the para-journalist acts as a navigator and “way-finder” to new information (Morville, Findability). Jaron Lanier, an early designer of ‘virtual reality’ systems, is perhaps the most vocal critic of relying on groups of non-experts and tools like Twitter, instead of individuals who have professional expertise.
For Lanier, what underlies debates about citizen- and para-journalists is a philosophy of “cybernetic totalism” and “digital Maoism” which exalts the Internet collective at the expense of truly individual views. He is deeply critical of Hermida’s chosen platform, Twitter: “A design that shares Twitter’s feature of providing ambient continuous contact between people could perhaps drop Twitter’s adoration of fragments. We don’t really know, because it is an unexplored design space” [emphasis added] (Lanier 24). In part, Lanier’s objection is traceable back to an unresolved debate on human factors and design in information science. Influenced by the post-war research into cybernetics, J.C.R. Licklider proposed a cyborg-like model of “man-machine symbiosis” between computers and humans (Licklider). In turn, Licklider’s framework influenced Douglas Engelbart, who shaped the growth of human-computer interaction, and the design of computer interfaces, the mouse, and other tools (Engelbart). In taking a system-level view of platforms Hermida builds on the strength of Licklider and Engelbart’s work. Yet because he focuses on para-journalists, and does not appear to include the craft and skills-based expertise of professional journalists, it is unclear how he would answer Lanier’s fears about how reliance on groups for news and other information is superior to individual expertise and judgment. Hermida’s two case studies point to this unresolved problem. Both cases appear to show how Twitter provides quicker and better forms of news and information, thereby increasing the effectiveness of para-journalists to engage in journalism and real-time commentary. However, alternative explanations may exist that raise questions about Twitter as a new platform, and thus these cases might actually reveal circumstances in which ambient journalism may fail. 
Hermida alludes to how para-journalists now fulfil the earlier role of ‘first responders’ and stringers, in providing the “immediate dissemination” of non-official information about disasters and emergencies (Hermida 1-2; Haddow and Haddow 117-118). Whilst important, this is really a specific role. In fact, disaster and emergency reporting occurs within well-established practices, professional ethics, and institutional routines that may involve journalists, government officials, and professional communication experts (Moeller). Officials and emergency management planners are concerned that citizen- or para-journalism is equated with the craft and skills of professional journalism. The experience of these officials and planners in 2005’s Hurricane Katrina in the United States, and in 2009’s Black Saturday bushfires in Australia, suggests that whilst para-journalists might be ‘first responders’ in a decentralised, complex crisis, they are perceived to spread rumours and potential social unrest when people need reliable information (Haddow and Haddow 39). These terms of engagement between officials, planners and para-journalists are still to be resolved. Hermida readily acknowledges that Twitter and other social network platforms are vulnerable to rumours (Hermida 3-4; Sunstein). However, his other case study, Iran’s 2009 election crisis, further complicates the vision of ambient journalism, and always-on communication systems in particular. Hermida discusses several events during the crisis: the US State Department request to halt a server upgrade, how the Basij’s shooting of bystander Neda Soltan was captured on a mobile phone camera, the spread across social network platforms, and the high-velocity number of ‘tweets’ or messages during the first two weeks of Iran’s electoral uncertainty (Hermida 1). The US State Department was interested in how Twitter could be used for non-official sources, and to inform people who were monitoring the election events. 
Twitter’s perceived ‘success’ during Iran’s 2009 election now looks rather different when other factors are considered, such as: the dynamics and patterns of Tehran street protests; Iran’s clerics who used Soltan’s death as propaganda; claims that Iran’s intelligence services used Twitter to track down and kill protestors; the ‘black box’ case of what the US State Department and others actually did during the crisis; the history of neo-conservative interest in a Twitter-like platform for strategic information operations; and the Iranian diaspora’s incitement of Tehran student protests via satellite broadcasts. Iran’s 2009 election crisis has important lessons for ambient journalism: always-on communication systems may create noise and spread rumours; ‘mirror-imaging’ of mental models may occur when other participants have very different worldviews and ‘contexts of use’ for social network platforms; and the new kinds of interaction may not lead to effective intervention in crisis events. Hermida’s combination of news and non-news fragments is the perfect environment for psychological operations and strategic information warfare (Burns and Eltham).
Lessons of Current Platforms for Ambient Journalism
We have discussed some unresolved problems for ambient journalism as a framework for journalists, and as mental models for news and similar events. Hermida’s goal of an “awareness system” faces a further challenge: the phenomenological limitations of human consciousness in dealing with information complexity and ambiguous situations, whether by becoming ‘entangled’ in abstract information or by developing new, unexpected uses for emergent technologies (Thackara; Thompson; Hofstadter 101-102, 186; Morville, Findability, 55, 57, 158). The recursive and reflective capacities of human consciousness impose their own epistemological frames.
It’s still unclear how Licklider’s human-computer interaction will shape consciousness, but Douglas Hofstadter’s experiments with art and video-based group experiments may be suggestive. Hofstadter observes: “the interpenetration of our worlds becomes so great that our worldviews start to fuse” (266). Current research into user experience and information design provides some validation of Hofstadter’s experience, such as how Google is now the ‘default’ search engine, and how its interface design shapes the user’s subjective experience of online search (Morville, Findability; Morville, Search Patterns). Several models of Hermida’s awareness system already exist that build on Hofstadter’s insight. Within the information systems field, on-going research into artificial intelligence–‘expert systems’ that can model expertise as algorithms and decision rules, genetic algorithms, and evolutionary computation–has attempted to achieve Hermida’s goal. What these systems share are mental models of cognition, learning and adaptiveness to new information, often with forecasting and prediction capabilities. Such systems work in journalism areas such as finance and sports that involve analytics, data-mining and statistics, and in related fields such as health informatics where there are clear, explicit guidelines on information and international standards. After a mid-1980s investment bubble (Leinweber 183-184) these systems now underpin the technology platforms of global finance and news intermediaries. Bloomberg LP’s ubiquitous dual-screen computers, proprietary network and data analytics (www.bloomberg.com), and its competitors such as Thomson Reuters (www.thomsonreuters.com and www.reuters.com), illustrate how financial analysts and traders rely on an “awareness system” to navigate global stock-markets (Clifford and Creswell). 
For example, a Bloomberg subscriber can access real-time analytics from exchanges, markets, and from data vendors such as Dow Jones, NYSE Euronext and Thomson Reuters. They can use portfolio management tools to evaluate market information, to make allocation and trading decisions, to monitor ‘breaking’ news, and to integrate this information. Twitter is perhaps the para-journalist equivalent to how professional journalists and finance analysts rely on Bloomberg’s platform for real-time market and business information. Already, hedge funds like PhaseCapital are data-mining Twitter’s ‘tweets’ or messages for rumours, shifts in stock-market sentiment, and to analyse potential trading patterns (Pritchett and Palmer). The US-based Securities and Exchange Commission, and researchers like David Gelernter and Paul Tetlock, have also shown the benefits of applied data-mining for regulatory market supervision, in particular to uncover analysts who provide ‘whisper numbers’ to online message boards, and who have access to material, non-public information (Leinweber 60, 136, 144-145, 208, 219, 241-246). Hermida’s framework might be developed further for such regulatory supervision. Hermida’s awareness system may also benefit from the algorithms found in high-frequency trading (HFT) systems that Citadel Group, Goldman Sachs, Renaissance Technologies, and other quantitative financial institutions use. Rather than human traders, HFT uses co-located servers and complex algorithms, to make high-volume trades on stock-markets that take advantage of microsecond changes in prices (Duhigg). HFT capabilities are shrouded in secrecy, and became the focus of regulatory attention after several high-profile investigations of traders alleged to have stolen the software code (Bray and Bunge). 
One public example is Streambase (www.streambase.com), a ‘complex event processing’ (CEP) platform that can be used in HFT, commercialised from the Project Aurora research collaboration between Brandeis University, Brown University, and the Massachusetts Institute of Technology. CEP and HFT may be the ‘killer apps’ of Hermida’s awareness system. Alternatively, they may confirm Jaron Lanier’s worst fears: your data-stream and user-generated content can be harvested by others–for their gain, and your loss!
Conclusion: Brian Eno and Redefining ‘Ambient Journalism’
On the basis of the above discussion, I suggest a modified definition of Hermida’s thesis: ‘ambient journalism’ is an emerging analytical framework for journalists, informed by cognitive, cybernetic, and information systems research. It ‘sensitises’ the individual journalist, whether professional or ‘para-professional’, to observe and evaluate their immediate context. In doing so, ‘ambient journalism’, like journalism generally, emphasises ‘novel’ information. It can also inform the design of real-time platforms for journalistic sources and news delivery. Individual ‘ambient journalists’ can learn much from the career of musician and producer Brian Eno. His personal definition of ‘ambient’ is “an atmosphere, or a surrounding influence: a tint,” which relies on the co-evolution of the musician, creative horizons, and studio technology as a tool, just as para-journalists use Twitter as a platform (Sheppard 278; Eno 293-297). Like para-journalists, Eno claims to be a “self-educated but largely untrained” musician and yet is also a craft-based producer (McFadzean; Tamm 177; 44-50). Perhaps Eno would frame the distinction between para-journalist and professional journalist as “axis thinking” (Eno 298, 302), which is needlessly polarised due to different normative theories, stances, and practices.
Furthermore, I would argue that Eno’s worldview was shaped by influences similar to those on Licklider and Engelbart, who appear to have informed Hermida’s assumptions. These influences include the mathematician and game theorist John von Neumann and the biologist Richard Dawkins (Eno 162); the composers Erik Satie and John Cage, and Cage’s book Silence (Eno 19-22, 162; Sheppard 22, 36, 378-379); and the field of self-organising systems, in particular the cyberneticist Stafford Beer (Eno 245; Tamm 86; Sheppard 224). Eno summed up the central lesson of this theoretical corpus during his collaborations with New York’s ‘No Wave’ scene in 1978, as “people experimenting with their lives” (Eno 253; Reynolds 146-147; Sheppard 290-295). Importantly, he developed a personal view of normative theories through practice-based research, on a range of projects and with different creative and collaborative teams. Rather than a technological solution, Eno settled on a way to encode his craft and skills into a quasi-experimental, transmittable method—an aim of practitioner development in professional journalism. Even if only a “founding myth,” the story of Eno’s 1975 street accident with a taxi, and how he conceived ‘ambient music’ during his hospital stay, illustrates how ambient journalists might perceive something new in specific circumstances (Tamm 131; Sheppard 186-188). More tellingly, this background informed his collaboration with the late painter Peter Schmidt to co-create the Oblique Strategies deck of aphorisms: aleatory, oracular messages that appeared dependent on chance, luck, and randomness, but that were in fact based on Eno and Schmidt’s creative philosophy and work guidelines (Tamm 77-78; Sheppard 178-179; Reynolds 170). In short, Eno was engaging with the kind of reflective practices that underpin exemplary professional journalism. He was able to encode this craft and skills into a quasi-experimental method, rather than a technological solution.
Journalists and practitioners who adopt Hermida’s framework could learn much from the published accounts of Eno’s practice-based research, in the context of creative projects and collaborative teams. In particular, these detail the contexts and choices of Eno’s early ambient music recordings (Sheppard 199-200); Eno’s duels with David Bowie during ‘Sense of Doubt’ for the Heroes album (Tamm 158; Sheppard 254-255); troubled collaborations with Talking Heads and David Byrne (Reynolds 165-170; Sheppard 338-347, 353); a curatorial, mentor role on U2’s The Unforgettable Fire (Sheppard 368-369); the ‘grand, stadium scale’ experiments of U2’s 1991-93 ZooTV tour (Sheppard 404); the Zorn-like games of Bowie’s Outside album (Eno 382-389); and the ‘generative’ artwork 77 Million Paintings (Eno 330-332; Tamm 133-135; Sheppard 278-279; Eno 435). Eno is clearly a highly flexible maker and producer. Developing such flexibility would ensure ambient journalism remains open to novelty as an analytical framework that may enhance the practitioner development and work of professional journalists and para-journalists alike.
Acknowledgments
The author thanks editor Luke Jaaniste, Alfred Hermida, and the two blind peer reviewers for their constructive feedback and reflective insights.
References
Bray, Chad, and Jacob Bunge. “Ex-Goldman Programmer Indicted for Trade Secrets Theft.” The Wall Street Journal 12 Feb. 2010. 17 March 2010 ‹http://online.wsj.com/article/SB10001424052748703382904575059660427173510.html›.
Burns, Alex. “Select Issues with New Media Theories of Citizen Journalism.” M/C Journal 11.1 (2008). 17 March 2010 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/view/30›.
Burns, Alex, and Barry Saunders. “Journalists as Investigators and ‘Quality Media’ Reputation.” Record of the Communications Policy and Research Forum 2009. Eds. Franco Papandrea and Mark Armstrong. Sydney: Network Insight Institute, 281-297. 17 March 2010 ‹http://eprints.vu.edu.au/15229/1/CPRF09BurnsSaunders.pdf›.
Burns, Alex, and Ben Eltham. “Twitter Free Iran: An Evaluation of Twitter’s Role in Public Diplomacy and Information Operations in Iran’s 2009 Election Crisis.” Record of the Communications Policy and Research Forum 2009. Eds. Franco Papandrea and Mark Armstrong. Sydney: Network Insight Institute, 298-310. 17 March 2010 ‹http://eprints.vu.edu.au/15230/1/CPRF09BurnsEltham.pdf›.
Christians, Clifford G., Theodore Glasser, Denis McQuail, Kaarle Nordenstreng, and Robert A. White. Normative Theories of the Media: Journalism in Democratic Societies. Champaign, IL: University of Illinois Press, 2009.
Clifford, Stephanie, and Julie Creswell. “At Bloomberg, Modest Strategy to Rule the World.” The New York Times 14 Nov. 2009. 17 March 2010 ‹http://www.nytimes.com/2009/11/15/business/media/15bloom.html?ref=businessandpagewanted=all›.
Cole, Peter, and Tony Harcup. Newspaper Journalism. Thousand Oaks, CA: Sage Publications, 2010.
Duhigg, Charles. “Stock Traders Find Speed Pays, in Milliseconds.” The New York Times 23 July 2009. 17 March 2010 ‹http://www.nytimes.com/2009/07/24/business/24trading.html?_r=2andref=business›.
Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework, 1962.” Ed. Neil Spiller. Cyber Reader: Critical Writings for the Digital Era. London: Phaidon Press, 2002. 60-67.
Eno, Brian. A Year with Swollen Appendices. London: Faber and Faber, 1996.
Garfinkel, Harold, and Anne Warfield Rawls. Toward a Sociological Theory of Information. Boulder, CO: Paradigm Publishers, 2008.
Haddow, George D., and Kim S. Haddow. Disaster Communications in a Changing Media World. Burlington, MA: Butterworth-Heinemann, 2009.
Hemmingway, Emma. Into the Newsroom: Exploring the Digital Production of Regional Television News. Milton Park: Routledge, 2008.
Hermida, Alfred. “Twittering the News: The Emergence of Ambient Journalism.” Journalism Practice 4.3 (2010): 1-12.
Hofstadter, Douglas. I Am a Strange Loop. New York: Perseus Books, 2007.
Lanier, Jaron. You Are Not a Gadget: A Manifesto. London: Allen Lane, 2010.
Leinweber, David. Nerds on Wall Street: Math, Machines and Wired Markets. Hoboken, NJ: John Wiley and Sons, 2009.
Licklider, J.C.R. “Man-Machine Symbiosis, 1960.” Ed. Neil Spiller. Cyber Reader: Critical Writings for the Digital Era. London: Phaidon Press, 2002. 52-59.
McFadzean, Elspeth. “What Can We Learn from Creative People? The Story of Brian Eno.” Management Decision 38.1 (2000): 51-56.
Moeller, Susan. Compassion Fatigue: How the Media Sell Disease, Famine, War and Death. New York: Routledge, 1998.
Morville, Peter. Ambient Findability. Sebastopol, CA: O’Reilly Press, 2005.
Morville, Peter. Search Patterns. Sebastopol, CA: O’Reilly Press, 2010.
Pritchett, Eric, and Mark Palmer. “Following the Tweet Trail.” CNBC 11 July 2009. 17 March 2010 ‹http://www.casttv.com/ext/ug0p08›.
Reynolds, Simon. Rip It Up and Start Again: Postpunk 1978-1984. London: Penguin Books, 2006.
Sennett, Richard. The Craftsman. London: Penguin Books, 2008.
Sheppard, David. On Some Faraway Beach: The Life and Times of Brian Eno. London: Orion Books, 2008.
Sunstein, Cass. On Rumours: How Falsehoods Spread, Why We Believe Them, What Can Be Done. New York: Farrar, Straus and Giroux, 2009.
Tamm, Eric. Brian Eno: His Music and the Vertical Colour of Sound. New York: Da Capo Press, 1995.
Thackara, John. In the Bubble: Designing in a Complex World. Boston, MA: The MIT Press, 1995.
Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Science of Mind. Boston, MA: Belknap Press, 2007.
APA, Harvard, Vancouver, ISO, and other styles
16

Mules, Warwick. "That Obstinate Yet Elastic Natural Barrier." M/C Journal 4, no. 5 (November 1, 2001). http://dx.doi.org/10.5204/mcj.1936.

Full text
Abstract:
Introduction
It used to be the case that, for the mass of workers, work was something done in order to get by. A working class was simply the sum total of all those workers and their dependents whose wages paid for the necessities of life, providing the bare minimum for family reproduction, to secure a place and a lineage within the social order. However, work has now become something else. Work has become the privileged sign of a new kind of class, whose existence is guaranteed not so much by work, but by the very fact of holding a job. Society no longer divides itself between a ruling elite and a subordinated working class, but between a job-holding, job-aspiring class, and those excluded from holding a job: those unable, by virtue of age, infirmity, education, gender, race or demographics, to participate in the rewards of work. Today, these rewards are not only a regular salary and job satisfaction (the traditional consolations of the working class), but also a certain capacity to plan ahead, to gain control of one's destiny through saving and investment, and to enjoy the pleasures of consumption through the fulfilment of self-images. What has happened to transform the worker from a subsistence labourer into an affluent consumer? In what way has the old working class now become part of the consumer society, once the privileged domain of the rich? And what effects has this transformation had on capitalism and its desire for profit? These questions take on an immediacy when we consider that, in the recent Federal election held in Australia (November 11, 2001), voters in the traditional working-class areas of western Sydney deserted the Labour Party (the party of the worker) and instead voted Liberal/conservative (the party of capital and small business).
The fibro worker cottage valleys of Parramatta are apparently no more, replaced by the gentrified mansions of an aspiring worker formation in pursuit of the wealth and independence once the privilege of the educated bourgeoisie. In this brief essay, I will outline an understanding of work in terms of its changing relation to capital. My aim is to show how the terrain of work has shifted so that it no longer operates in strict subordination to capital, and has instead become an investment in capital. The worker no longer works to subsist, but does so as an investment in the future. My argument is situated in the rich theoretical field set out by Karl Marx in his critique of capitalism, which described the labour/capital relation in terms of a repressive, extractive force (the power of capital over labour), and which has since been redefined by various poststructuralist theorists, including Michel Foucault and Gilles Deleuze (Anti-Oedipus), in terms of the forces of productive desire. What follows, then, is not a Marxist reading of work, but a reading of the way Marx sets forth work in relation to capital, and how this can be re-read through poststructuralism, in terms of the transformation of work from subordination to capital to investment in capital; from work as the consequence of repression, to work as the fulfilment of desire.
The Discipline of Work
In his major work Capital, Marx sets out a theory of labour in which the task of the worker is to produce surplus value: "Capitalist production is not merely the production of commodities, it is, by its very essence, the production of surplus-value. The worker produces not for himself, but for capital. It is no longer sufficient, therefore, for him simply to produce. He must produce surplus-value."
(644) For Marx, surplus-value is generated when commodities are sold in the market for a price greater than the price paid to the worker for producing it: "this increment or excess over the original value I call surplus-value" (251). In order to create surplus value, the time spent by the worker in making a commodity must be strictly controlled, so that the worker produces more than required to fulfil his subsistence needs: ". . . since it is just this excess labour that supplies [the capitalist] with the surplus value" (1011). In other words, capital production is created through a separation between labour and capital: "a division between the product of labour and labour itself, between the objective conditions of labour and the subjective labour-power, was . . . the real foundation and the starting point of the process of capital production" (716). As Michael Ryan has argued, this separation was forced, through an allegiance between capital and the state, to guarantee the conditions for capital renewal by controlling the payment of labour in the form of a wage (84). Marx's analysis of industrialised capital in Capital thus outlines the way in which human labour is transformed into a form of surplus value, by the forced extraction of labour time: "the capitalist forces the worker where possible to exceed the normal rate of intensity [of work] and he forces him as best he can to extend the process of labour beyond the time necessary to replace the amount laid out in wages" (987). For Marx, capitalism is not a voluntary system; workers are not free to enter into and out of their relation with capital, since capital itself cannot survive without the constant supply of labour from which to extract surplus value. Needs and wants can only be satisfied within the labour/capital relation which homogenises labour into exchange value in terms of a wage, pegged to subsistence levels: "the capital earmarked for wages . . .
belongs to the worker as soon as it has assumed its true shape of the means of subsistence destined to be consumed by him" (984). The "true shape" of wages, and hence the single, univocal truth of the wage labourer, is that he is condemned to subsistence consumption, because his capacity to share in the surplus value extracted from his own labour is circumscribed by the alliance between capital and the state, where wages are fixed and controlled according to wage market regulations. Marx's account of the labour/capital relation is imposing in its description of the dilemma of labour under the power of capital. Capitalism appears as a thermodynamic system fuelled by labour power, where, in order to make the system homogeneous, to produce exchange value, resistance is reduced: "Because it is capital, the automatic mechanism is endowed, in the person of the capitalist, with consciousness and a will. As capital, therefore, it is animated by the drive to reduce to a minimum the resistance offered by man, that obstinate yet elastic natural barrier." (527) In the capitalist system resistance takes the form of a living residue within the system itself, acting as an "elastic natural barrier" to the extractive force of capital. Marx names this living residue "man". In offering resistance, that is, in being subjected to the force of capital, the figure of man persists as the incommensurable presence of a resistive force composed of a refusal to assimilate (Lyotard 102). This ambivalent position (the place of many truths), which places man within/outside capital, is not fully recognised by Marx at this stage of his analysis. It suggests the presence of an immanent force, coming from the outside, yet already present in the figure of man (man as "offering" resistance). This force, the counter-force operating through man as the residue of labour, is necessarily active in its effects on the system.
That is to say, resistance in the system is not resistance to the system, but the resistance which carries the system elsewhere, to another place, to another time. Unlike the force of capital, which works on labour to preserve the system, the resistive force figured in man works its way through the system, transforming it as it goes, with the elusive power to refuse. The separation of labour and capital necessary to create the conditions for capitalism to flourish is achieved by the action of a force operating on labour. This force manifests itself in the strict surveillance of work, through supervisory practices: "the capitalist's ability to supervise and enforce discipline is vital" (Marx 986). Marx's formulation of supervision, here and elsewhere, assumes a direct power relation between the supervisor and the supervised: a coercive power in the form of "the person of the capitalist, with consciousness and a will". Surplus value can only be extracted at the maximum rate when workers are entirely subjected to physical surveillance. As Foucault has shown, surveillance practices in the nineteenth century operated on a panoptic principle: "Power has its principle not so much in a person as in a certain concerted distribution of bodies, surfaces, lights, gazes; an arrangement whose internal mechanisms produce the relation in which individuals get caught up." (202) Power is not power over, but a productive power involving the commingling of forces, in which the resistive force of the body does not oppose, but complies with an authoritative force: "there is not a single moment of life from which one cannot extract forces, providing one knows how to differentiate it and combine it with others" (165).
This commingling of dominant and resistive forces is distributive and proliferating, allowing the spread of institutions across social terrains, producing both "docile" and "delinquent" bodies at the same time: "this production of delinquency and its investment by the penal apparatus ..." (285, emphasis added). Foucault allows us to think through the dilemma posed by Marx, where labour appears entirely subject to the power of capital, reducing the worker to subsistence levels of existence. Indeed, Foucault's work allows us to see the figure of man, briefly adumbrated in the quote from Marx above as "that obstinate yet elastic natural barrier", but refigured as an active, investing, transformative force, operating within the capitalist system, yet sending it on its way to somewhere else. In Foucauldian terms, self-surveillance takes on a normative function during the nineteenth century, producing a set of disciplinary values around the concepts of duty and respectability (Childers 409). These values were not only imposed from above, through education and the state, but enacted and maintained by the workers themselves, through the myriad threads of social conformity operating in daily life, whereby people made themselves suitable to each other for membership of the imagined community of disciplined worker-citizens. In this case, the wellbeing of workers gravitated to self-awareness and self-improvement, seen for instance in the magazines circulating at the time addressed to a worker readership (e.g. The Penny Magazine, published in Britain from 1832 to 1845; see Sinnema 15). Instead of the satisfaction of needs in subsistence consumption, the worker was possessed by a desire for self-improvement, taking place in his spare time, which was in turn consolidated into the ego-ideal of the bourgeois self as the perfected model of civilised, educated man.
Here desire takes the form of a repression (Freud 355), where the resistive force of the worker is channelled into maintaining the separation between labour and capital, and where the worker is encouraged to become a little bourgeois himself. The desire for self-improvement by the worker did not lead to a shift into the capitalist classes, but was satisfied in coming to know one's place, in being satisfied with fulfilling one's duty and in living a respectable life; that is, in being individuated with respect to the social domain.

Figure 1 - "The British Beehive", George Cruikshank's image of the hierarchy of labour in Victorian England (1840, modified 1867). Each profession is assigned an individualised place in the social order.

A time must come, however, in the accumulation of surplus-value, in the vast accelerating machine of capitalism, when the separation between labour and capital begins to dissolve. This point is reached when the residue left by capital in extracting surplus value is sufficient for the worker to begin consuming for its own sake, to engage in "unproductive expenditure" (Bataille 117) where desire is released as an active force. At this point, workers begin to abandon the repressive disciplines of duty and respectability, and turn instead to the control mechanisms of self-transformation or the "inventing of a self as if from scratch" (Massumi 18). In advanced capitalism, where the accrued wealth has concentrated not only profit but wages as well (a rise in the "standard of living"), workers cease to behave as subordinated to the system, and through their increased spending power re-enter the system as property owners, shareholders, superannuants and debtees with the capacity to access money held in banks and other financial institutions. As investment guru Peter Drucker has pointed out, the accumulated wealth of worker-owned superannuation or "pension" funds is the most significant driving force of global capital today (Drucker 76-8).
In the superannuation fund, workers' labour is not fully expended in the production of surplus value, but re-enters the system as investment on the workers' behalf, indirectly fuelling their capacity to fulfil desires through a rapidly accelerating circulation of money. As a consequence, new consumer industries begin to emerge based on the management of investment, where money becomes a product, subject to consumer choice. The lifestyles of the old capitalist class, itself a simulacrum of the aristocracy it replaced, are now reproduced by the new worker-capitalist, but in ersatz forms, proliferating as the sign of wealth and abundance (copies of palatial homes replace real palaces, look-alike Rolex watches become available at cheap prices, medium-priced family sedans take on the look and feel of expensive imports, and so forth). Unable to extract the surplus value necessary to feed this new desire for money from its own workforce (which has, in effect, become the main consumer of wealth), capital moves 'offshore' in search of a new labour pool, and repeats what it did to the labour pools in the older social formations in its relentless quest to maximise surplus value.

Work and Control

We are now witnessing a second kind of labour taking shape out of the deformations of the disciplinary society, where surplus value is not extracted, but incorporated into the labour force itself (Mules). This takes place when the separation between labour and capital dissolves, releasing quantities of "reserve time" (the time set aside from work in order to consume), which then becomes part of the capitalising process itself. In this case workers become "investors in their own lives (conceived of as capital) concerned with obtaining a profitable behaviour through information (conceived of as a production factor) sold to them" (Alliez and Feher 347).
Gilles Deleuze has identified this shift in terms of what he calls a "control society", where the individuation of workers guaranteed by the disciplinary society gives way to a cybernetic modulation of "dividuals" or cypher values regulated according to a code (180). For dividualised workers, the resource incorporated into capital is their own lived time, no longer divided between work and leisure, but entirely "consummated" in capital (Alliez and Feher 350). A dividualised worker will thus work in order to produce leisure, and conversely enjoy leisure as a form of work. Here we have what appears to be a complete breakdown of the separation of labour and capital instigated by the disciplinary society; a sweeping away of the grounds on which labour once stood as a mass of individuals, conscious of their rivalry with capital over the spoils of surplus value. Here we have a situation where labour itself has become a form of capital (not just a commodity exchangeable on the market), incorporated into the temporalised body of the worker, contributing to the extraction of its own surplus value. Under the disciplinary society, the body of the worker became subject to panoptic surveillance, where "time and motion" studies enabled a more efficient control of work through the application of mathematical models. In the control society there is no need for this kind of panoptic control, since the embodiment of the panoptic principle, anticipated by Foucault and responsible for the individuation of the subject in disciplinary societies, has itself become a resource for extracting surplus value. In effect, dividualised workers survey themselves, not as a form of self-discipline, but as an investment for capitalisation. Dividuals are not motivated by guilt, conscience, duty or devotion to one's self, but by a transubjective desire for the other, the figure of a self projected into the future, and realised through their own bodily becoming.
Unlike individuals who watch themselves as an already constituted self in the shadow of a super-ego, dividuals watch themselves in the image of a becoming-other. We might like to think of dividuals as self-correctors operating in teams and groups (franchises), whose "in-ness" as in-dividuals is derived not from self-reflection, but from directiveness. Directiveness is the disposition of a habitus to find its way within programs designed to maximise performance across a territory. Following Gregory Bateson, we might say that directiveness is the pathway forged between a map and its territory (Bateson 454). A billiard ball sitting on a billiard table needs to be struck in such a way as to simultaneously reduce the risk of a rival scoring from it and maximise the score available, for instance by potting it into a pocket. The actual trajectory of the ball is governed by a logic of "restraint" (399) which sets up a number of virtual pathways, all but one of which are eliminated when the map (the rules and strategies of the game) is applied to the territory of the billiard table. If surveillance was the modus operandi of the old form of capitalism, which required a strict control over labour, then directiveness is the new force of capital, which wants to eliminate work in the older sense of the word and replace it with the self-managed flow of capitalising labour. Marx's labour theory of value has led us, via a detour through Foucault and Deleuze, to the edge of the labour/capital divide, where the figure of man reappears, not as a worker subject to capital, but in some kind of partnership with it. This seems to spell the end of the old form of work, which required a strict delineation between labour and capital, where workers became rivals with capital for a share in surplus value. In the new formation of work, workers are themselves little capitalists, whose labour time is produced through their own investments back into the system.
Yet, the worker is also subject to the extraction of her labour time in the necessity to submit to capital through the wage relation. This creates a reflexive snarl, embedded in the worker's own self-image, where work appears as leisure and leisure appears as work, causing labour to drift over capital, and capital to drift over labour. This drifting, mobile relation between labour and capital cannot be secured through appeals to older forms of worker awareness (duty, responsibility, attentiveness, self-surveillance), since this would require a repression of the desire for self-transformation, and hence a fatal dampening of the dynamics of the market (anathema to the spirit of capitalism). Rather it can only be directed through control mechanisms involving a kind of forced partnership between capital and labour, where both parties recognise their mutual destinies in being "thrown" into the system. In the end, work remains subsumed under capital, but not in its alienated, disciplinary state. Rather work has become a form of capital itself, one's investment in the future, and hence as valuable now as it was before. It's just a little more difficult to see how it can be protected as a 'right' of the worker, since workers are themselves investors of their own labour, and not right-bearing individuals whose position in society has been fixed by the separation of labour from capital.

References

Alliez, Eric, and Michel Feher. "The Luster of Capital." Zone 1/2 (1987): 314-59.

Bataille, Georges. "The Notion of Expenditure." Visions of Excess: Selected Writings, 1927-1939. Trans. and Ed. Alan Stoekl. Minneapolis: U of Minnesota P, 1985. 116-29.

Bateson, Gregory. Steps to an Ecology of Mind. New York: Ballantine Books, 1972.

Childers, Joseph W. "Observation and Representation: Mr. Chadwick Writes the Poor." Victorian Studies 37.3 (1994): 405-31.

Deleuze, Gilles, and Félix Guattari. Anti-Oedipus: Capitalism and Schizophrenia. Minneapolis: U of Minnesota P, 1983.

Deleuze, Gilles. Negotiations, 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995.

Drucker, Peter F. Post-Capitalist Society. New York: Harper, 1993.

Foucault, Michel. Discipline and Punish: The Birth of the Prison. Trans. Alan Sheridan. Harmondsworth: Penguin, 1977.

Freud, Sigmund. "The Ego and the Id." On Metapsychology: The Theory of Psychoanalysis. The Pelican Freud Library, Vol. 11. Harmondsworth: Penguin, 1984. 339-407.

Lyotard, Jean-François. Libidinal Economy. Trans. Iain Hamilton Grant. Bloomington: Indiana UP, 1993.

Marx, Karl. Capital, Vol. I. Trans. Ben Fowkes. Harmondsworth: Penguin, 1976.

Massumi, Brian. "Everywhere You Wanted to Be: Introduction to Fear." The Politics of Everyday Fear. Ed. Brian Massumi. Minneapolis: U of Minnesota P, 1993. 3-37.

Mules, Warwick. "A Remarkable Disappearing Act: Immanence and the Creation of Modern Things." M/C: A Journal of Media and Culture 4.4 (2001). 15 Nov. 2001 <http://www.media-culture.org.au/0108/disappear.php>.

Ryan, Michael. Marxism and Deconstruction: A Critical Introduction. Baltimore: Johns Hopkins UP, 1982.

Sinnema, Peter W. Dynamics of the Printed Page: Representing the Nation in the Illustrated London News. Aldershot: Ashgate Press, 1998.

Links

http://csf.colorado.edu/psn/marx/Archive/1867-C1/
http://www.media-culture.org.au/0108/Disappear.html
http://acnet.pratt.edu/~arch543p/help/Foucault.html
http://acnet.pratt.edu/~arch543p/help/Deleuze.html

Citation reference for this article

MLA Style: Mules, Warwick. "That Obstinate Yet Elastic Natural Barrier." M/C: A Journal of Media and Culture 4.5 (2001). [your date of access] <http://www.media-culture.org.au/0111/Mules.xml>.

Chicago Style: Mules, Warwick, "That Obstinate Yet Elastic Natural Barrier," M/C: A Journal of Media and Culture 4, no. 5 (2001), <http://www.media-culture.org.au/0111/Mules.xml> ([your date of access]).

APA Style: Mules, Warwick. (2001) That Obstinate Yet Elastic Natural Barrier. M/C: A Journal of Media and Culture 4(5). <http://www.media-culture.org.au/0111/Mules.xml> ([your date of access]).