Journal articles on the topic 'Thermodynamics-based Neural Networks'

Consult the top 34 journal articles for your research on the topic 'Thermodynamics-based Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Masi, Filippo, Ioannis Stefanou, Paolo Vannucci, and Victor Maffi-Berthier. "Thermodynamics-based Artificial Neural Networks for constitutive modeling." Journal of the Mechanics and Physics of Solids 147 (February 2021): 104277. http://dx.doi.org/10.1016/j.jmps.2020.104277.

2

Masi, Filippo, and Ioannis Stefanou. "Multiscale modeling of inelastic materials with Thermodynamics-based Artificial Neural Networks (TANN)." Computer Methods in Applied Mechanics and Engineering 398 (August 2022): 115190. http://dx.doi.org/10.1016/j.cma.2022.115190.

3

Huang, Shenglin, Zequn He, Bryan Chem, and Celia Reina. "Variational Onsager Neural Networks (VONNs): A thermodynamics-based variational learning strategy for non-equilibrium PDEs." Journal of the Mechanics and Physics of Solids 163 (June 2022): 104856. http://dx.doi.org/10.1016/j.jmps.2022.104856.

4

Zhao, Liang, Chunyang Mo, Tingting Sun, and Wei Huang. "Aero Engine Gas-Path Fault Diagnose Based on Multimodal Deep Neural Networks." Wireless Communications and Mobile Computing 2020 (October 3, 2020): 1–10. http://dx.doi.org/10.1155/2020/8891595.

Abstract:
An aeroengine, powered by a gas turbine, is a highly sophisticated system. Analyzing the location and cause of gas-path faults with computational-fluid-dynamics software or thermodynamic functions is a hard task. Thus, artificial intelligence technologies rather than traditional thermodynamics methods are widely used to tackle this problem. Among them, methods based on neural networks, such as the CNN and the BPNN, not only obtain high classification accuracy but also adapt well to aeroengine data of various specifications. The CNN is superior at extracting and learning the attributes hidden in the data, whereas the BPNN focuses on fitting the real distribution of the original samples. Inspired by both, this paper proposes a multimodal method that integrates the classification abilities of these two models so that complementary information can be identified to improve diagnostic accuracy. Experiments on several UCR time-series datasets and aeroengine fault datasets show that the proposed model performs more promisingly and robustly than typical and state-of-the-art methods.
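The abstract describes fusing a CNN and a BPNN but gives no implementation details. Below is a minimal, hypothetical sketch of one plausible reading, not the authors' code: a 1-D convolutional branch and a fully connected (BPNN-style) branch whose features are concatenated for fault classification. All layer sizes, sensor counts and class counts are invented for illustration.

```python
# Hypothetical sketch (not the authors' code): fusing a 1-D CNN branch
# with an MLP (BPNN-style) branch for gas-path fault classification.
import torch
import torch.nn as nn

class MultimodalFaultNet(nn.Module):
    def __init__(self, n_sensors: int, seq_len: int, n_faults: int):
        super().__init__()
        # CNN branch: extracts local patterns from the sensor time series.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),                      # -> 16 * 8 features
        )
        # MLP branch: fits the raw sample distribution directly.
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_sensors * seq_len, 64),
            nn.ReLU(),
        )
        # Fusion head combines complementary information from both branches.
        self.head = nn.Linear(16 * 8 + 64, n_faults)

    def forward(self, x):                      # x: (batch, n_sensors, seq_len)
        return self.head(torch.cat([self.cnn(x), self.mlp(x)], dim=1))

model = MultimodalFaultNet(n_sensors=8, seq_len=64, n_faults=5)
logits = model(torch.randn(4, 8, 64))          # (4, 5) class scores
```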
5

Morán-Durán, Andrés, Albino Martínez-Sibaja, José Pastor Rodríguez-Jarquin, Rubén Posada-Gómez, and Oscar Sandoval González. "PEM Fuel Cell Voltage Neural Control Based on Hydrogen Pressure Regulation." Processes 7, no. 7 (July 10, 2019): 434. http://dx.doi.org/10.3390/pr7070434.

Abstract:
Fuel cells are promising devices for transforming chemical energy into electricity; their behavior is described by principles of electrochemistry and thermodynamics, which are often difficult to model mathematically. One alternative to overcome this issue is the use of modeling methods based on artificial intelligence techniques. In this paper, a hybrid scheme to model and control fuel cell systems using neural networks is proposed. Several feature selection algorithms were tested for dimensionality reduction, aiming to eliminate variables that are not significant with respect to the control objective. Principal component analysis (PCA) obtained better results than the other algorithms. Based on these variables, an inverse neural network model was developed to emulate and control the fuel cell output voltage under transient conditions. The results showed that fuel cell performance does not depend only on the supply of the reactants. A single neuro-proportional–integral–derivative (neuro-PID) controller is not able to stabilize the output voltage without the support of an inverse model control that includes the impact of the other variables on fuel cell performance. This practical data-driven approach can reliably reduce the cost of the control system by eliminating non-significant measurements.
6

Mahmoud, Saida Saad Mohamed, Gennaro Esposito, Giuseppe Serra, and Federico Fogolari. "Generalized Born radii computation using linear models and neural networks." Bioinformatics 36, no. 6 (November 6, 2019): 1757–64. http://dx.doi.org/10.1093/bioinformatics/btz818.

Abstract:
Motivation: Implicit solvent models play an important role in describing the thermodynamics and the dynamics of biomolecular systems. Key to an efficient use of these models is the computation of generalized Born (GB) radii, which is accomplished by algorithms based on the electrostatics of inhomogeneous dielectric media. The speed and accuracy of such computations are still an issue, especially for their intensive use in classical molecular dynamics. Here, we propose an alternative approach that encodes the physics of the phenomena and the chemical structure of the molecules in model parameters which are learned from examples. Results: GB radii have been computed using (i) a linear model and (ii) a neural network. The input is the element and the histogram of counts of neighbouring atoms, divided by atom element, within 16 Å. Linear models are ca. 8 times faster than the most widely used reference method, and the accuracy is higher, with a correlation coefficient with the inverse of ‘perfect’ GB radii of 0.94 versus 0.80 for the reference method. Neural networks further improve the accuracy of the predictions, with a correlation coefficient with ‘perfect’ GB radii of 0.97 and a ca. 20% smaller root mean square error. Availability and implementation: We provide a C program implementing the computation using the linear model, including the coefficients appropriate for the set of Bondi radii, as Supplementary Material. We also provide a Python implementation of the neural network model with parameter and example files in the Supplementary Material. Supplementary information: Supplementary data are available at Bioinformatics online.
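As a rough illustration of the input encoding described above (per-atom histograms of neighbouring-atom counts, split by element, within 16 Å) feeding a linear model, here is a hypothetical sketch. The radial binning, element set and target values are assumptions; the paper's actual feature definition and fitted coefficients are in its Supplementary Material.

```python
# Illustrative sketch (hypothetical, not the paper's C implementation):
# encode each atom as a histogram of neighbour counts per element within
# 16 angstroms, then predict inverse GB radii with a linear model.
import numpy as np

ELEMENTS = {"C": 0, "N": 1, "O": 2, "S": 3, "H": 4}   # assumed element set
N_BINS = 8                                            # assumed radial bins

def atom_features(coords, elements, i, r_max=16.0):
    """Histogram of neighbour counts, split by element, within r_max of atom i."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    feats = np.zeros((len(ELEMENTS), N_BINS))
    edges = np.linspace(0.0, r_max, N_BINS + 1)
    for j, (dist, el) in enumerate(zip(d, elements)):
        if j != i and dist < r_max:
            b = np.searchsorted(edges, dist, side="right") - 1
            feats[ELEMENTS[el], b] += 1
    return feats.ravel()

# Linear model: learned coefficients map the features to 1/R_GB.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 30, size=(100, 3))
elements = rng.choice(list(ELEMENTS), size=100)
X = np.stack([atom_features(coords, elements, i) for i in range(100)])
y = rng.uniform(0.05, 0.7, size=100)                  # stand-in inverse radii
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(100)], y, rcond=None)
pred = np.c_[X, np.ones(100)] @ w                     # predicted 1/R_GB
```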
7

Zhang, Yi, Junfu Fan, Mengzhen Zhang, Zongwen Shi, Rufei Liu, and Bing Guo. "A Recurrent Adaptive Network: Balanced Learning for Road Crack Segmentation with High-Resolution Images." Remote Sensing 14, no. 14 (July 7, 2022): 3275. http://dx.doi.org/10.3390/rs14143275.

Abstract:
Road crack segmentation based on high-resolution images is an important task in road service maintenance. On a highway, the undamaged road surface area is much larger than the damaged area. This imbalance yields poor road crack segmentation performance for convolutional neural networks. In this paper, we first evaluate mainstream convolutional neural network structures on the road crack segmentation task. Second, inspired by the second law of thermodynamics, an improved method called the recurrent adaptive network is proposed for pixelwise road crack segmentation to address the extreme imbalance between positive and negative samples. We achieve a flow between precision and recall, analogous to heat conduction between bodies at different temperatures. During the training process, the recurrent adaptive network (1) dynamically evaluates the degree of imbalance, (2) determines the positive and negative sampling rates, and (3) adjusts the loss weights of positive and negative features. By following these steps, we establish a channel between precision and recall and keep them balanced as they flow into each other. A dataset of high-resolution road crack images with annotations (named HRRC) was built from a real road inspection scene. The images in HRRC were collected on a mobile vehicle measurement platform by high-resolution industrial cameras and were carefully labeled at the pixel level. This dataset therefore has sufficient complexity to objectively evaluate the real performance of convolutional neural networks in highway patrol scenes. Our main contribution is a new method of solving the data imbalance problem, and guiding model training by analyzing precision and recall is experimentally demonstrated to be effective. The recurrent adaptive network achieves state-of-the-art performance on this dataset.
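Steps (1)-(3) above suggest a dynamically reweighted loss. A minimal sketch of that idea follows, assuming a pixelwise binary cross-entropy with per-batch reweighting; the published recurrent adaptive network also adjusts sampling rates and is considerably more elaborate.

```python
# Hedged sketch of the balancing idea in steps (1)-(3) of the abstract:
# estimate the positive/negative imbalance per batch and reweight a
# pixelwise BCE loss accordingly.
import torch
import torch.nn.functional as F

def balanced_bce(logits, target, eps=1e-6):
    # (1) dynamically evaluate the degree of imbalance in this batch
    pos_frac = target.float().mean().clamp(min=eps, max=1 - eps)
    # (2)+(3) derive loss weights so positives (cracks) and negatives
    # (road surface) contribute comparably, letting precision and recall
    # "flow" toward balance during training
    w_pos = 0.5 / pos_frac
    w_neg = 0.5 / (1 - pos_frac)
    weight = torch.where(target > 0, w_pos, w_neg)
    return F.binary_cross_entropy_with_logits(logits, target.float(), weight=weight)

logits = torch.randn(2, 1, 64, 64)                 # raw segmentation scores
target = (torch.rand(2, 1, 64, 64) > 0.98).float() # ~2% crack pixels
loss = balanced_bce(logits, target)
```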
8

Lam, Stephen, Yu Shi, and Thomas Beck. "Modeling Solvation Thermodynamics in Molten Salts with Quasichemical Theory and Ab Initio-Accurate Deep Learning-Accelerated Simulations." ECS Meeting Abstracts MA2022-01, no. 46 (July 7, 2022): 1956. http://dx.doi.org/10.1149/ma2022-01461956mtgabs.

Abstract:
Molten salts are a promising class of ionic liquids used in advanced energy applications including next-generation nuclear reactors, batteries, and solar thermal energy storage. In these applications, understanding corrosion processes and predicting phase behavior remains a critical challenge. This requires accurate prediction of the solvation thermodynamics of ionic species in a variety of chemical and configurational states. In this work, we fundamentally address these challenges by combining quasichemical theory (QCT), ab initio simulation with density functional theory (DFT), and neural network interatomic potentials (NNIP) to accurately predict the solvation free energy of solute ions in molten salt. Ab initio data are used to train neural networks that learn the environment-dependent atomic forces and energies. This accelerates atomistic simulation by more than three orders of magnitude. Using chemically accurate and highly efficient neural network-based molecular simulations, we perform free energy calculations within the QCT framework. Namely, QCT provides an exact partitioning of the free energy that includes contributions from (1) formation of a cavity in solution, (2) insertion of a solute ion into the cavity, and (3) relaxation of the cavity surrounding the solute ion. This requires simulations on timescales totaling tens of nanoseconds; as such, using AIMD alone is impractical for exploring a wide range of solutes, compositions, and thermodynamic conditions. In this work, we show that NNIPs can accurately predict molten salt thermodynamics and local coordination structures. We demonstrate the combined methods (DFT-NNIP-QCT) on molten NaCl, obtaining the total excess potentials of Na+ and Cl− ions and correcting the errors in electrostatic energy caused by the finite size of the simulation cell. The excess chemical potential for Na+/Cl− was predicted to be −161.7±10.6 kcal/mol, consistent with previous calculations and with an experimental value of −163.5 kcal/mol from thermochemical tables. These results provide initial validation of the methods for predicting excess chemical potentials, which can be directly exploited to determine solute chemistry and the solubility of dissolved gases and metallic ions in molten salts. This motivates the use of these methods to understand solute chemistry in a wide range of molten salt systems in advanced energy applications.
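The three-way partitioning stated in the abstract can be written compactly. In assumed notation, with μ^ex the excess chemical potential of the solute:

```latex
% QCT partitioning of the excess chemical potential as described in the
% abstract (notation assumed): cavity formation, solute insertion into the
% cavity, and relaxation of the cavity around the solute.
\mu^{\mathrm{ex}} \;=\;
\underbrace{\Delta G_{\mathrm{cavity}}}_{\text{(1) form cavity}}
\;+\;
\underbrace{\Delta G_{\mathrm{insert}}}_{\text{(2) insert solute ion}}
\;+\;
\underbrace{\Delta G_{\mathrm{relax}}}_{\text{(3) relax cavity}}
```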
9

Njegovanović, Ana. "Mind Theory and the Role of Financial Decision and Process Role of Optogenetics." Financial Markets, Institutions and Risks 4, no. 1 (2020): 40–50. http://dx.doi.org/10.21272/fmir.4(1).40-50.2020.

Abstract:
This paper is devoted to the study of the functional relationships between behavioral finance, in particular decision-making in the financial market, the theory of mind, and optogenetics. The purpose of this paper is to analyze the interaction of financial decision-making processes with the key principles of the mental state model (theory of mind) and to define the role of optogenetics. The author notes that the use of the theory of mind in behavioral finance allows us to consider the key characteristics of the mental state of the subject of economic relations (thoughts, perceptions, desires, intentions, and feelings have an internal mentalistic and experiential content). The author notes that decision-making at any level draws on a complex network of scientific disciplines that allow us to understand the complexity of financial decision-making and the role and significance of the laws of thermodynamics and entropy. Modeling neural networks (based on an experimental approach), the paper presents the results of research in the context of analyzing behavioral changes in our brain under the following scenarios: at the stage of awareness of certain processes, and when we do or do not participate in these processes. The paper draws the following conclusion: normal states of anxiety are characterized by the greatest number of possible configurations of interactions between brain networks, which represent the highest values of entropy. These results were obtained from a study with a small number of participants, but they give an objective assessment and understanding of the complexity of the research and offer guidance with a scientific basis for solving problems in the financial sphere (for example, when trading in the financial market). Keywords: behavioral finance, theory of mind, financial decision making, optogenetics.
10

Gorkowski, Kyle, Thomas C. Preston, and Andreas Zuend. "Relative-humidity-dependent organic aerosol thermodynamics via an efficient reduced-complexity model." Atmospheric Chemistry and Physics 19, no. 21 (October 30, 2019): 13383–407. http://dx.doi.org/10.5194/acp-19-13383-2019.

Abstract:
Water plays an essential role in aerosol chemistry, gas–particle partitioning, and particle viscosity, but it is typically omitted in thermodynamic models describing the mixing within organic aerosol phases and the partitioning of semivolatile organics. In this study, we introduce the Binary Activity Thermodynamics (BAT) model, a water-sensitive reduced-complexity model treating the nonideal mixing of water and organics. The BAT model can process different levels of physicochemical mixture information, enabling its application in the thermodynamic aerosol treatment within chemical transport models, the evaluation of humidity effects in environmental chamber studies, and the analysis of field observations. It is capable of using organic structure information including O:C, H:C, molar mass, and vapor pressure, which can be derived from identified compounds or estimated from bulk aerosol properties. A key feature of the BAT model is predicting the extent of liquid–liquid phase separation occurring within aqueous mixtures containing hydrophobic organics. This is crucial to simulating the abrupt change in water uptake behavior of moderately hygroscopic organics at high relative humidity, which is essential for capturing the correct behavior of organic aerosols serving as cloud condensation nuclei. For gas–particle partitioning predictions, we complement a volatility basis set (VBS) approach with the BAT model to account for nonideality and liquid–liquid equilibrium effects. To improve the computational efficiency of this approach, we trained two neural networks: the first for the prediction of aerosol water content at a given relative humidity, and the second for the partitioning of semivolatile components. The integrated VBS + BAT model is benchmarked against high-fidelity molecular-level gas–particle equilibrium calculations based on the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficient) model. Organic aerosol systems derived from α-pinene or isoprene oxidation are used for comparison. Predicted organic mass concentrations agree to within less than 5 % error in the isoprene case, which is a significant improvement over a traditional VBS implementation. In the case of the α-pinene system, the error is less than 2 % up to a relative humidity of 94 %, with larger errors past that point. The goal of the BAT model is to represent the bulk O:C and molar mass dependencies of a wide range of water–organic mixtures to a reasonable degree of accuracy. In this context, we note that the reduced-complexity effort may be poor at representing a specific binary water–organic mixture perfectly. However, the averaging effects of our reduced-complexity model become more representative when the mixture diversity increases in terms of organic functionality and number of components.
11

Hang, Peng, Leihao Zhou, and Guilian Liu. "Thermodynamics-based neural network and the optimization of ethylbenzene production process." Journal of Cleaner Production 296 (May 2021): 126615. http://dx.doi.org/10.1016/j.jclepro.2021.126615.

12

Ren, Likun, Haiqin Qin, Zhenbo Xie, Jing Xie, and Bianjiang Li. "A Thermodynamics-Oriented and Neural Network-Based Hybrid Model for Military Turbofan Engines." Sustainability 14, no. 10 (May 23, 2022): 6373. http://dx.doi.org/10.3390/su14106373.

Abstract:
Traditional thermodynamic models for military turbofans suffer from non-convergence and inaccuracy due to the inaccuracy of the component maps and the instability of the iterative process. To address these problems, a thermodynamics-oriented and neural network-based hybrid model for military turbofans is proposed. Unlike iteration-based thermodynamic models, the proposed hybrid model transforms the iteration process into a multi-objective optimization and training process for a component-level neural network in order to improve convergence and modeling accuracy. The experiment shows that the accuracy of the proposed hybrid model can reach about 7%, 5% better than the map-fitting-based thermodynamic model and 8% better than the purely data-driven method with a similar number of network neurons, verifying its effectiveness. The contributions of this work lie mainly in the following aspects: a new component-level neural network structure is proposed to improve convergence and computational efficiency; a multi-objective loss function based on component co-working is proposed to direct the model to converge toward the physical thermodynamic process; and a fusion training method using multiple data sources is established to train the model with good convergence and high computational accuracy.
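A minimal sketch of how an iteration-free, multi-objective loss of this kind might be assembled, assuming a mean-squared data fit plus physics residuals; the residual terms below (flow continuity, power balance) are illustrative stand-ins for the paper's actual component co-working conditions.

```python
# Minimal sketch (assumed form, not the authors' model): replace the
# thermodynamic iteration with a multi-objective loss summing a data-fit
# term and "co-working" residuals evaluated on the network's outputs.
import torch

def co_working_residuals(pred):
    # Hypothetical residuals; real models use component maps and station data.
    m_dot_in, m_dot_out, p_comp, p_turb = pred.unbind(dim=1)
    r_flow  = m_dot_in - m_dot_out        # flow continuity between stations
    r_power = p_turb - p_comp             # turbine work drives the compressor
    return r_flow, r_power

def hybrid_loss(pred, measured, w_flow=1.0, w_power=1.0):
    data_term = torch.mean((pred - measured) ** 2)
    r_flow, r_power = co_working_residuals(pred)
    physics_term = w_flow * torch.mean(r_flow ** 2) + w_power * torch.mean(r_power ** 2)
    return data_term + physics_term       # multi-objective total

pred = torch.randn(16, 4, requires_grad=True)
loss = hybrid_loss(pred, torch.randn(16, 4))
loss.backward()
```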
13

Shen, Y., K. Chandrashekhara, W. F. Breig, and L. R. Oliver. "Neural Network Based Constitutive Model for Rubber Material." Rubber Chemistry and Technology 77, no. 2 (May 1, 2004): 257–77. http://dx.doi.org/10.5254/1.3547822.

Abstract:
Rubber hyperelasticity is characterized by a strain energy function. The strain energy functions fall primarily into two categories: one based on statistical thermodynamics, the other based on the phenomenological approach of treating the material as a continuum. This work is focused on the phenomenological approach. To determine the constants in the strain energy function by this method, curve fitting of rubber test data is required. A review of the available strain energy functions based on the phenomenological approach shows that much effort is required to obtain a curve fit with good accuracy. To overcome this problem, a novel method of defining the rubber strain energy function by a feedforward backpropagation neural network is presented. The calculation of strain energy and its derivatives by the neural network is explained in detail. The preparation of the neural network training data from rubber test data is described. Curve fitting results are given to show the effectiveness and accuracy of the neural network approach. A material model based on the neural network approach is implemented and applied to the simulation of V-ribbed belt tracking using the commercial finite element code ABAQUS.
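The paper's key ingredients, a feedforward network for the strain energy and its derivatives for the stresses, can be sketched as follows. Here the network takes the two strain invariants (I1, I2) as inputs, an assumed setup, and the derivatives are obtained by automatic differentiation rather than the paper's explicit formulas.

```python
# Illustrative sketch (assumed architecture): a feedforward network maps
# strain invariants to a scalar strain energy W; the derivatives dW/dI
# needed for stresses are obtained here by automatic differentiation.
import torch
import torch.nn as nn

energy_net = nn.Sequential(
    nn.Linear(2, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1),
)

invariants = torch.tensor([[3.2, 3.1]], requires_grad=True)  # (I1, I2)
W = energy_net(invariants).sum()                 # strain energy density
dW_dI, = torch.autograd.grad(W, invariants, create_graph=True)
# dW_dI[:, 0] = dW/dI1 and dW_dI[:, 1] = dW/dI2 enter the stress
# expression for an incompressible hyperelastic material.
```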
14

Lei, Chun Li, and Zhi Yuan Rui. "Thermal Error Modeling and Compensating of Motorized Spindle Based on Improved Neural Network." Advanced Materials Research 129-131 (August 2010): 556–60. http://dx.doi.org/10.4028/www.scientific.net/amr.129-131.556.

Abstract:
Among many factors, the thermal deformation of a motorized high-speed spindle is a key factor affecting the machining accuracy of a machine tool. In order to reduce thermal errors, the causes and influencing factors are analyzed. A thermal error model that considers the effects of thermodynamics and speed on thermal deformation is proposed, using a radial basis function neural network based on a genetic algorithm. The improved neural network is trained and tested, and a thermal error compensation system based on this model is then established to compensate for thermal deformation. The experimental results show a 79% decrease in motorized spindle errors and confirm that the model has high accuracy.
15

Mohanty, Itishree, Appa Rao Chintha, and Saurabh Kundu. "Design Optimization of Microalloyed Steels Using Thermodynamics Principles and Neural-Network-Based Modeling." Metallurgical and Materials Transactions A 49, no. 6 (March 16, 2018): 2405–18. http://dx.doi.org/10.1007/s11661-018-4540-4.

16

Meng, Yixue. "Analysis of surface temperature characteristics of multiscale fusion based on convolution neural network." MATEC Web of Conferences 173 (2018): 03011. http://dx.doi.org/10.1051/matecconf/201817303011.

Abstract:
Intelligent detection of surface temperature, describing macroscopic characteristics through microscopic combination, promotes the cross-fusion of geoscience, thermodynamics, climatology, geological science, and other fields. However, two notable problems remain to be solved. One is that the models lack characterization capability; the other is that the precision of surface temperature monitoring and prediction is low. To solve these problems, we propose an algorithm for predicting surface temperature characteristics through multiscale fusion based on a convolutional neural network. First, after studying the multiscale disturbance characteristics of surface temperature, we draw conclusions based on analyzing temporal change, spatial change, and causal change. To improve the parameter correlations among surface temperature characteristics, a neural network for compensating and optimizing the analysis of surface temperature characteristics is proposed on the foundation of multivariate surface temperature characterization models. By designing a clustered input layer, a dynamic hidden layer, and a visual output layer, the precision of the predicted data is improved by 53.3% on average, and by 76.0% in variance, compared with remote sensing data. Moreover, the data consumption of this model improves by 17.2% over grey theory in predictive complexity and precision, and by 10.8% compared with a BP neural network.
17

Zhang, Tao, and Shuyu Sun. "Thermodynamics-Informed Neural Network (TINN) for Phase Equilibrium Calculations Considering Capillary Pressure." Energies 14, no. 22 (November 18, 2021): 7724. http://dx.doi.org/10.3390/en14227724.

Abstract:
The thermodynamic properties of fluid mixtures play a crucial role in designing physically meaningful models and robust algorithms for simulating multi-component multi-phase flow in the subsurface, which is needed for many subsurface applications. In this context, the equation-of-state-based flash calculation used to predict the equilibrium properties of each phase for a given fluid mixture undergoing phase splitting is a crucial component, and often a bottleneck, of multi-phase flow simulations. In this paper, a capillarity-wise Thermodynamics-Informed Neural Network is developed for the first time as a fast, accurate and robust approach to calculating phase equilibrium properties for unconventional reservoirs. The trained model performs well in both phase stability tests and phase splitting calculations over a large range of reservoir conditions, which enables further multi-component multi-phase flow simulations with a strong thermodynamic basis.
18

Pancotti, Corrado, Silvia Benevenuta, Valeria Repetto, Giovanni Birolo, Emidio Capriotti, Tiziana Sanavia, and Piero Fariselli. "A Deep-Learning Sequence-Based Method to Predict Protein Stability Changes Upon Genetic Variations." Genes 12, no. 6 (June 12, 2021): 911. http://dx.doi.org/10.3390/genes12060911.

Abstract:
Several studies have linked disruptions of protein stability and of its normal functions to disease. Therefore, during the last few decades, many tools have been developed to predict the free energy changes upon protein residue variations. Most of these methods require both sequence and structure information to obtain reliable predictions. However, the lower number of available protein structures with respect to sequences, due to experimental issues, drastically limits the application of these tools. In addition, current methodologies ignore the antisymmetric property characterizing the thermodynamics of protein stability: a variation from the wild-type to a mutated form of the protein structure (X_W → X_M) and its reverse process (X_M → X_W) must have opposite values of the free energy difference (ΔΔG_WM = −ΔΔG_MW). Here we propose ACDC-NN-Seq, a deep neural network system that exploits the sequence information and is able to incorporate the antisymmetry property into its architecture. To our knowledge, this is the first convolutional neural network to predict protein stability changes relying solely on the protein sequence. We show that ACDC-NN-Seq compares favorably with the existing sequence-based methods.
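One simple way to hard-wire the antisymmetry ΔΔG_WM = −ΔΔG_MW into a network, sketched below under assumed encodings (ACDC-NN-Seq's actual convolutional architecture differs), is to score the direct and reverse variants with the same subnetwork and output half their difference.

```python
# Hedged sketch of hard-wired antisymmetry: swapping wild-type and mutant
# flips the sign of the predicted ddG exactly, by construction.
import torch
import torch.nn as nn

class AntisymmetricDDG(nn.Module):
    def __init__(self, d_in=20):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x_direct, x_reverse):
        return 0.5 * (self.f(x_direct) - self.f(x_reverse))

net = AntisymmetricDDG()
xw_m = torch.randn(3, 20)          # encoding of the W -> M substitution
xm_w = -xw_m                       # reverse substitution encoding (assumed)
assert torch.allclose(net(xw_m, xm_w), -net(xm_w, xw_m))
```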
19

Cao, Yunpeng, Xinran Lv, Guodong Han, Junqi Luan, and Shuying Li. "Research on Gas-Path Fault-Diagnosis Method of Marine Gas Turbine Based on Exergy Loss and Probabilistic Neural Network." Energies 12, no. 24 (December 10, 2019): 4701. http://dx.doi.org/10.3390/en12244701.

Abstract:
In order to improve the accuracy of gas-path fault detection and isolation for a marine three-shaft gas turbine, a gas-path fault diagnosis method based on exergy loss and a probabilistic neural network (PNN) is proposed. On the basis of the second law of thermodynamics, the exergy flow among the subsystems and the external environment is analyzed, and an exergy model of the marine gas turbine is established. The exergy loss of the gas turbine under healthy and typical gas-path faulty conditions is analyzed, and the relative change of exergy loss is used as the input of the PNN to detect gas-path malfunctions and locate the faulty component. A simulation case study was conducted on a three-shaft marine gas turbine with typical gas-path faults. The results show that the proposed diagnosis method can accurately detect a fault and locate the malfunctioning component.
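A probabilistic neural network is essentially a Parzen-window classifier. The following minimal sketch, with made-up exemplar data, shows how relative exergy-loss changes per subsystem could be classified; the feature layout and smoothing factor are assumptions.

```python
# Minimal probabilistic neural network (Parzen-window form) sketch, with
# relative exergy-loss changes of each subsystem as input features.
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.1):
    """Classify x by summing Gaussian kernels over each class's exemplars."""
    scores = {}
    for label in np.unique(train_y):
        pats = train_X[train_y == label]
        d2 = np.sum((pats - x) ** 2, axis=1)
        scores[label] = np.exp(-d2 / (2 * sigma**2)).sum() / len(pats)
    return max(scores, key=scores.get)

# Rows: relative change of exergy loss in (compressor, combustor, turbine).
train_X = np.array([[0.00, 0.01, 0.00],    # healthy
                    [0.09, 0.02, 0.01],    # compressor fault
                    [0.01, 0.10, 0.02]])   # combustor fault
train_y = np.array([0, 1, 2])
print(pnn_predict(np.array([0.08, 0.02, 0.01]), train_X, train_y))  # -> 1
```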
20

Ren, Likun, Haiqin Qin, Na Cai, Bianjiang Li, and Zhenbo Xie. "A Hybrid Degradation Evaluation Model for Aero-Engines." Sustainability 15, no. 1 (December 20, 2022): 29. http://dx.doi.org/10.3390/su15010029.

Abstract:
The non-convergence and low efficiency of thermodynamic models make them difficult to use for aero-engine degradation evaluation, while data-driven degradation evaluation methods neglect the thermodynamic process, which makes them inaccurate and unable to analyze the actual degradation of gas-path components. We therefore propose a thermodynamics-based, data-driven hybrid model for aero-engine degradation evaluation. Unlike thermodynamics-based methods, the iterative calculation is converted into a forward pass through the proposed neural network, thus improving convergence. Moreover, a multi-objective loss function considering the component co-operation process, and a fusion training process taking full advantage of simulation and degradation trajectory datasets, are proposed to improve degradation evaluation accuracy. The test case is carried out on NASA's benchmark for aero-engine degradation evaluation. The results show that the proposed method improves accuracy significantly, which suggests its effectiveness.
21

Isiyaka, Hamza Ahmad, Khairulazhar Jumbri, Nonni Soraya Sambudi, Zakariyya Uba Zango, Bahruddin Saad, and Adamu Mustapha. "Removal of 4-chloro-2-methylphenoxyacetic acid from water by MIL-101(Cr) metal-organic framework: kinetics, isotherms and statistical models." Royal Society Open Science 8, no. 1 (January 13, 2021): 201553. http://dx.doi.org/10.1098/rsos.201553.

Abstract:
Effective removal of 4-chloro-2-methylphenoxyacetic acid (MCPA), an emerging agrochemical contaminant in water with carcinogenic and mutagenic health effects, has been reported using a hydrothermally synthesized MIL-101(Cr) metal-organic framework (MOF). The properties of the MOF were ascertained using powder X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, thermal gravimetric analysis (TGA), field emission scanning electron microscopy (FESEM), and surface area and porosimetry (SAP) analysis. The BET surface area and pore volume of the MOF were 1439 m² g⁻¹ and 0.77 cm³ g⁻¹, respectively. An artificial neural network (ANN) model was employed for accurate prediction of the experimental adsorption capacity (qe) values with minimal error. Rapid removal of the pollutant (99%) was recorded within a short time (approx. 25 min), and the MOF (20 mg) was reused for up to six cycles with over 90% removal efficiency. The kinetics, isotherm, and thermodynamics of the process were described by the pseudo-second-order model, the Freundlich isotherm, and endothermic adsorption, respectively. The adsorption process is spontaneous, based on the negative Gibbs free energy values. The significant correlation between the experimental findings and the simulation results suggests the great potential of MIL-101(Cr) for the remediation of MCPA from water matrices.
22

Nedostup, Alexander Alekseevich, and Alexey Olegovich Razhev. "FORCES PERFORMANCE OF TRAWL SYSTEM – II: PHYSICAL MODELING." Vestnik of Astrakhan State Technical University. Series: Fishing industry 2021, no. 3 (September 30, 2021): 86–93. http://dx.doi.org/10.24143/2073-5529-2021-3-86-93.

Abstract:
The article continues scientific research into, and justification of, the use of artificial intelligence technologies for predictive modeling of the behavior of a trawl system in the process of fishing with a self-learning neural network. The productivity of forces is defined as the second time derivative of the work of these forces. The intermediate result of the design of the trawl system is a project: an integrated set of characteristics described in a form suitable for its operation with a given productivity of forces. To proceed to predictive modeling, it is necessary to determine the extent of similarity of the trawl system in the different areas of its interaction. There is an interdisciplinarity here, which is manifested in the formulation of problems, in approaches to their solution, in revealing the connections between theories, and in the formation of new disciplines. Interdisciplinarity allows conducting research on the trawl system in its entirety, combining data from various disciplines (hydromechanics, electrodynamics, thermodynamics, acoustics, optics, etc.) and leading to the emergence of new postulates and laws that synthesize the scientific knowledge necessary for a self-learning neural network for trawl fishing. To combine this knowledge, similarity theory was chosen: a mathematical modeling method based on the transition from the ordinary physical quantities that affect the system being modeled to generalized complex-type quantities composed of the original physical quantities in certain combinations, depending on the specific nature of the process under study. The complex nature of these quantities has the deep physical meaning of reflecting the interaction of various influences. Similarity theory studies the methods of constructing and applying these variables and is used for mathematical modeling in cases where an analytical solution is impossible due to complexity and accuracy requirements. Similarity theory is used in these cases to synthesize relations obtained on the basis of the physical mechanism of the process under study and data from a numerical solution or experiment.
23

Sahu, Saswata, Manoj Kumar Yadav, Ashok Kumar Gupta, Venkatesh Uddameri, Ashish Navneet Toppo, Bellum Maheedhar, and Partha Sarathi Ghosal. "Modeling defluoridation of real-life groundwater by a green adsorbent aluminum/olivine composite: Isotherm, kinetics, thermodynamics and novel framework based on artificial neural network and support vector machine." Journal of Environmental Management 302 (January 2022): 113965. http://dx.doi.org/10.1016/j.jenvman.2021.113965.

24

Mu’azu, Nuhu Dalhat. "Insight into ANN and RSM Models’ Predictive Performance for Mechanistic Aspects of Cr(VI) Uptake by Layered Double Hydroxide Nanocomposites from Water." Water 14, no. 10 (May 20, 2022): 1644. http://dx.doi.org/10.3390/w14101644.

Abstract:
Mathematical predictive models are vital tools for understanding pollutant uptake during adsorptive water and wastewater treatment processes. In this study, the applications of CoAl-LDH and its bentonite-intercalated form (bentonite-CoAl-LDH) for the uptake of Cr(VI) from water were modeled using response surface methodology (RSM) and an artificial neural network (ANN), and their performance in predicting the equilibrium, thermodynamics and kinetics of Cr(VI) uptake was assessed and compared based on the coefficient of determination (R2) and root mean square error (RMSE). The uptake of Cr(VI) is well fitted by quartic RSM polynomial models and by ANN models based on the Levenberg–Marquardt algorithm (ANN-LMA). Both models predicted a better fit for the Langmuir model than for the Freundlich model for Cr(VI) uptake. The predicted non-linear Langmuir model constant (KL) values for both the RSM and ANN-LMA models yielded better ΔG°, ΔH and ΔS predictions, which supported the feasible, spontaneous, higher-order and exothermic nature of Cr(VI) uptake onto the tested adsorbents. Employing the linear Langmuir model KL values weakens the thermodynamic parameter predictions, especially for the RSM models. The excellent kinetic parameter predictions of the ANN-LMA models further indicate a mainly pseudo-second-order process, confirming the predominant chemisorption mechanism, as established by the Cr(VI) speciation and surface charges, for Cr(VI) uptake by both CoAl-LDH and bentonite-CoAl-LDH. The ANN-LMA models showed only a consistent, insignificant decline in their predictions across the different mechanistic studies carried out, compared with the RSM models. This study demonstrates the high potential reliability of ANN-LMA models in capturing Cr(VI) adsorption data for LDH nanocomposite heavy metal uptake in water and wastewater treatment.
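For reference, the textbook relations behind the KL-based thermodynamic parameters mentioned above can be sketched as follows. The data values are illustrative only, and in practice KL must be rendered dimensionless before taking logarithms.

```python
# Textbook relations: fit the non-linear Langmuir isotherm for KL, then
# dG = -RT ln(KL) and a van't Hoff fit for dH and dS. Data are made up.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J / (mol K)

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1 + KL * Ce)

Ce = np.array([5.0, 10, 20, 40, 80])        # equilibrium conc. (mg/L)
qe = np.array([12.0, 19, 27, 33, 37])       # uptake (mg/g), illustrative
(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(40, 0.05))

T = 298.15
dG = -R * T * np.log(KL)                    # Gibbs free energy, J/mol

# van't Hoff: ln K = -dH/(R T) + dS/R, fitted over several temperatures.
Ts = np.array([298.15, 308.15, 318.15])
Ks = np.array([0.051, 0.043, 0.037])        # illustrative KL(T)
slope, intercept = np.polyfit(1 / Ts, np.log(Ks), 1)
dH, dS = -R * slope, R * intercept          # enthalpy and entropy changes
```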
25

Wu, Qiong, Yi-Xuan Shou, Lei-Ming Ma, Qifeng Lu, and Rui Wang. "Estimation of Maximum Hail Diameters from FY-4A Satellite Data with a Machine Learning Method." Remote Sensing 14, no. 1 (December 24, 2021): 73. http://dx.doi.org/10.3390/rs14010073.

Abstract:
The magnitude of damage caused by hail depends on its size; however, direct observation or indirect estimation of hail size remains a significant challenge. One primary reason for estimation by proxy, such as through remote sensing methods, is that empirical relationships or statistical models established in one region may not apply to other areas. This study employs a machine learning method to build a hail size estimation model without assuming relationships in advance. It uses FY-4A AGRI data to provide cloud-top information and ERA5 data to add vertical environment information. Before training the model, we conducted a principal component analysis (PCA) to identify the factors most influential on hail size. A total of 18 features, comprising four groups, namely brightness temperature (BT), BT difference (BTD), thermodynamic, and dynamic features, were chosen from 29 original features. Dynamic and BTD features show superior performance in identifying large hail. Although the selected features are more closely correlated to hail sizes than the unselected ones, the relationships are complicated and nonlinear. As a result, a two-layer regression back propagation neural network (BPNN) model with powerful fitting ability is trained with the selected features to predict maximum hail diameter (MHD). The linear fitting R2 between predicted and observed MHDs is 0.52 on the test set, which signifies that our model performs well compared with other hail size estimation models. We also examine the model on all three hail cases in Shanghai, China, between 2019 and 2021. The model attained more satisfactory results than the radar-based maximum estimated hail size (MEHS) method, which overestimates the MHDs, thus further supporting the operational application of our model.
26

Mozaffari, Ahmad, Mehdi Emami, Nasser L. Azad, and Alireza Fathi. "A Modified SSLPS Algorithm with Logistic Pseudo-Random Sequence Generator for Improving the Performance of Neka Power Plant." International Journal of Applied Evolutionary Computation 6, no. 1 (January 2015): 1–29. http://dx.doi.org/10.4018/ijaec.2015010101.

Abstract:
Metaheuristic techniques have successfully contributed to the development and optimization of large-scale distributed power systems. The archived literature demonstrates that modifying or tuning the parameters of specific metaheuristics can provide powerful tools suited to the optimization of power plants with different types of constraints. In spite of the high potential of metaheuristics for dealing with such systems, most of the research conducted addresses only the optimization of the electrical aspects of power systems. In this research, the authors attest to the applicability of metaheuristics for optimizing the mechanical aspects of a real-world large-scale power plant, the Neka power plant in Mazandaran, Iran. To do so, based on the laws of thermodynamics and the physics of the problem at hand, the authors first implement a mathematical model to calculate the exergetic efficiency, energetic efficiency, and total cost of the Neka power plant as three main objective functions. In addition, a memetic supervised neural network and Bahadori's mathematical model are used to calculate the dynamic values of specific heat over the operating procedure of the power plant. In the second stage, a modified version of a recently spotlighted Pareto-based multiobjective metaheuristic called the synchronous self-learning Pareto strategy (SSLPS) is proposed. The proposed technique embeds a logistic chaotic map into the algorithmic architecture of SSLPS. In this context, the resulting optimizer, chaos-enhanced SSLPS (C-SSLPS), uses the response of a time-discrete nonlinear logistic map to update the positions of heuristic agents over the optimization procedure. For the sake of comparison, the strength Pareto evolutionary algorithm (SPEA 2), the non-dominated sorting genetic algorithm (NSGA-II) and standard SSLPS are taken into account. The results of the numerical study confirm the superiority of the proposed technique over the other rival optimizers. It is also observed that metaheuristics can be successfully used for optimizing the mechanical/energetic parameters of the Neka power plant.
27

Dibaeinia, Payam, and Saurabh Sinha. "Deciphering enhancer sequence using thermodynamics-based models and convolutional neural networks." Nucleic Acids Research, September 11, 2021. http://dx.doi.org/10.1093/nar/gkab765.

Abstract:
Deciphering the sequence-function relationship encoded in enhancers holds the key to interpreting non-coding variants and understanding mechanisms of transcriptomic variation. Several quantitative models exist for predicting enhancer function and underlying mechanisms; however, there has been no systematic comparison of these models characterizing their relative strengths and shortcomings. Here, we interrogated a rich data set of neuroectodermal enhancers in Drosophila, representing cis- and trans-sources of expression variation, with a suite of biophysical and machine learning models. We performed rigorous comparisons of thermodynamics-based models implementing different mechanisms of activation, repression and cooperativity. Moreover, we developed a convolutional neural network (CNN) model, called CoNSEPT, that learns enhancer ‘grammar’ in an unbiased manner. CoNSEPT is the first general-purpose CNN tool for predicting enhancer function in varying conditions, such as different cell types and experimental conditions, and we show that such complex models can suggest interpretable mechanisms. We found model-based evidence for mechanisms previously established for the studied system, including cooperative activation and short-range repression. The data also favored one hypothesized activation mechanism over another and suggested an intriguing role for a direct, distance-independent repression mechanism. Our modeling shows that while fundamentally different models can yield similar fits to data, they vary in their utility for mechanistic inference. CoNSEPT is freely available at: https://github.com/PayamDiba/CoNSEPT.
28

Shiina, Kenta, Hiroyuki Mori, Yusuke Tomita, Hwee Kuan Lee, and Yutaka Okabe. "Inverse renormalization group based on image super-resolution using deep convolutional networks." Scientific Reports 11, no. 1 (May 5, 2021). http://dx.doi.org/10.1038/s41598-021-88605-w.

Abstract:
The inverse renormalization group is studied based on image super-resolution using deep convolutional neural networks. We consider the improved correlation configuration instead of the spin configuration for spin models such as the two-dimensional Ising and three-state Potts models. We propose a block-cluster transformation as an alternative to the block-spin transformation for dealing with the improved estimators. In the framework of the dual Monte Carlo algorithm, the block-cluster transformation is regarded as a transformation in the graph degrees of freedom, whereas the block-spin transformation is one in the spin degrees of freedom. We demonstrate that the renormalized improved correlation configuration successfully reproduces the original configuration at all temperatures by the super-resolution scheme. Using the rule of enlargement, we repeatedly apply the inverse renormalization procedure to generate larger correlation configurations. To connect to thermodynamics, an approximate temperature rescaling is discussed. The enlarged systems generated using super-resolution satisfy finite-size scaling.
29

Huang, Juntao, Zhiting Ma, Yizhou Zhou, and Wen-An Yong. "Learning Thermodynamically Stable and Galilean Invariant Partial Differential Equations for Non-Equilibrium Flows." Journal of Non-Equilibrium Thermodynamics, May 18, 2021. http://dx.doi.org/10.1515/jnet-2021-0008.

Abstract:
In this work, we develop a method for learning interpretable, thermodynamically stable and Galilean invariant partial differential equations (PDEs) based on the conservation-dissipation formalism of irreversible thermodynamics. As governing equations for non-equilibrium flows in one dimension, the learned PDEs are parameterized by fully connected neural networks and satisfy the conservation-dissipation principle automatically. In particular, they are hyperbolic balance laws and Galilean invariant. The training data are generated from a kinetic model with smooth initial data. Numerical results indicate that the learned PDEs can achieve good accuracy in a wide range of Knudsen numbers. Remarkably, the learned dynamics can give satisfactory results with randomly sampled discontinuous initial data and Sod's shock tube problem, even though the model is trained only with smooth initial data.
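Structurally, learning a hyperbolic balance law of the form u_t + f(u)_x = q(u) with network-parameterized flux f and relaxation term q can be sketched as below. This is only the skeleton under an assumed discretization; it omits the convex-entropy construction by which the paper guarantees the conservation-dissipation principle.

```python
# Structural sketch only (assumed form): a 1-D balance law
# u_t + f(u)_x = q(u), with flux f and source q given by small networks,
# trained by minimizing the PDE residual on a kinetic-model trajectory.
import torch
import torch.nn as nn

flux = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))
source = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))

def pde_residual(u, dx, dt):
    """u: (time, space, 2) trajectory of conserved + non-equilibrium variables."""
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt                 # forward time diff
    f = flux(u)                                             # pointwise flux
    f_x = (f[:-1, 2:] - f[:-1, :-2]) / (2 * dx)             # central space diff
    return u_t + f_x - source(u[:-1, 1:-1])                 # balance-law residual

u = torch.rand(10, 32, 2)             # synthetic trajectory from a kinetic model
res = pde_residual(u, dx=0.1, dt=0.01)
loss = res.pow(2).mean()              # training minimizes this residual
loss.backward()
```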
30

Deco, Gustavo, Yonathan Sanz Perl, Laura de la Fuente, Jacobo D. Sitt, B. T. Thomas Yeo, Enzo Tagliazucchi, and Morten Kringelbach. "The arrow of time of brain signals in cognition: Potential intriguing role of parts of the default mode network." Network Neuroscience, December 22, 2022, 1–50. http://dx.doi.org/10.1162/netn_a_00300.

Abstract:
A promising idea in human cognitive neuroscience is that the default mode network (DMN) is responsible for coordinating the recruitment and scheduling of networks for computing and solving task-specific cognitive problems. This is supported by evidence showing that the physical and functional distance of DMN regions is maximally removed from sensorimotor regions containing environment-driven neural activity directly linked to perception and action, which would allow the DMN to orchestrate complex cognition from the top of the hierarchy. However, discovering the functional hierarchy of brain dynamics requires finding the best way to measure interactions between brain regions. In contrast to previous methods measuring the hierarchical flow of information using, for example, transfer entropy, here we used a thermodynamics-inspired, deep-learning-based Temporal Evolution NETwork (TENET) framework to assess the asymmetry in the flow of events, the 'arrow of time', in human brain signals. This provides an alternative way of quantifying hierarchy, given that the arrow of time measures the directionality of information flow that leads to a breaking of the balance of the underlying hierarchy. In turn, the arrow of time is a measure of non-reversibility, and thus of non-equilibrium, in brain dynamics. When applied to large-scale HCP neuroimaging data from close to a thousand participants, the TENET framework suggests that the DMN plays a significant role in orchestrating the hierarchy, i.e. the levels of non-reversibility, which changes between the resting state and the performance of seven different cognitive tasks. Furthermore, this quantification of the hierarchy of the resting state differs significantly between health and neuropsychiatric disorders. Overall, the present thermodynamics-based machine learning framework provides vital new insights into the fundamental tenets of brain dynamics for orchestrating the interactions between cognition and brain in complex environments.
31

Shi, Yu, Stephen Lam, and Thomas Beck. "Deep neural network based quantum simulations and quasichemical theory for accurate modeling of molten salt thermodynamics." Chemical Science, 2022. http://dx.doi.org/10.1039/d2sc02227c.

Abstract:
With dual goals of efficient and accurate modeling of solvation thermodynamics in molten salt liquids, we employ ab initio molecular dynamics (AIMD) simulations, deep neural network interatomic potentials (NNIP), and...
32

Alizadeh, Rasool, Javad Mohebbi Najm Abad, Abolfazl Fattahi, Ebrahim Alhajri, and Nader Karimi. "Application of Machine Learning to Investigation of Heat and Mass Transfer Over a Cylinder Surrounded by Porous Media—The Radial Basic Function Network." Journal of Energy Resources Technology 142, no. 11 (June 25, 2020). http://dx.doi.org/10.1115/1.4047402.

Abstract:
This paper investigates heat and mass transport around a cylinder featuring non-isothermal homogenous and heterogeneous chemical reactions in a surrounding porous medium. The system is subject to an impinging flow, while local thermal non-equilibrium, non-linear thermal radiation within the porous region, and the temperature dependency of the reaction rates are considered. Further, non-equilibrium thermodynamics, including the Soret and Dufour effects, is taken into account. The governing equations are numerically solved using a finite-difference method after being reduced to a system of non-linear ordinary differential equations. Since the current problem contains a large number of parameters with complex interconnections, low-cost models such as those based on artificial intelligence are desirable for conducting extensive parametric studies. Therefore, the simulations are used to train an artificial neural network. After comparing various artificial neural network algorithms, the radial basis function network is selected. The results show that variations in radiative heat transfer, as well as in the Soret and Dufour effects, can significantly change the heat and mass transfer responses. Within the investigated parametric range, it is found that the diffusion mechanism is dominantly responsible for heat and mass transfer. Importantly, the developed predictor algorithm offers a considerable saving of computational burden.
33

Cham, Karen, and Jeffrey Johnson. "Complexity Theory." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2672.

Abstract:
Complex systems are an invention of the universe. It is not at all clear that science has an a priori primacy claim to the study of complex systems. (Galanter 5) Introduction In popular dialogues, describing a system as “complex” is often the point of resignation, inferring that the system cannot be sufficiently described, predicted nor managed. Transport networks, management infrastructure and supply chain logistics are all often described in this way. In socio-cultural terms “complex” is used to describe those humanistic systems that are “intricate, involved, complicated, dynamic, multi-dimensional, interconnected systems [such as] transnational citizenship, communities, identities, multiple belongings, overlapping geographies and competing histories” (Cahir & James). Academic dialogues have begun to explore the collective behaviors of complex systems to define a complex system specifically as an adaptive one; i.e. a system that demonstrates ‘self organising’ principles and ‘emergent’ properties. Based upon the key principles of interaction and emergence in relation to adaptive and self organising systems in cultural artifacts and processes, this paper will argue that complex systems are cultural systems. By introducing generic principles of complex systems, and looking at the exploration of such principles in art, design and media research, this paper argues that a science of cultural systems as part of complex systems theory is the post modern science for the digital age. Furthermore, that such a science was predicated by post structuralism and has been manifest in art, design and media practice since the late 1960s. Complex Systems Theory Complexity theory grew out of systems theory, an holistic approach to analysis that views whole systems based upon the links and interactions between the component parts and their relationship to each other and the environment within they exists. This stands in stark contrast to conventional science which is based upon Descartes’s reductionism, where the aim is to analyse systems by reducing something to its component parts (Wilson 3). As systems thinking is concerned with relationships more than elements, it proposes that in complex systems, small catalysts can cause large changes and that a change in one area of a system can adversely affect another area of the system. As is apparent, systems theory is a way of thinking rather than a specific set of rules, and similarly there is no single unified Theory of Complexity, but several different theories have arisen from the natural sciences, mathematics and computing. As such, the study of complex systems is very interdisciplinary and encompasses more than one theoretical framework. Whilst key ideas of complexity theory developed through artificial intelligence and robotics research, other important contributions came from thermodynamics, biology, sociology, physics, economics and law. In her volume for the Elsevier Advanced Management Series, “Complex Systems and Evolutionary Perspectives on Organisations”, Eve Mitleton-Kelly describes a comprehensive overview of this evolution as five main areas of research: complex adaptive systems dissipative structures autopoiesis (non-equilibrium) social systems chaos theory path dependence Here, Mitleton-Kelly points out that relatively little work has been done on developing a specific theory of complex social systems, despite much interest in complexity and its application to management (Mitleton-Kelly 4). 
To this end, she goes on to define the term “complex evolving system” as more appropriate to the field than ‘complex adaptive system’ and suggests that the term “complex behaviour” is thus more useful in social contexts (Mitleton-Kelly). For our purpose here, “complex systems” will be the general term used to describe those systems that are diverse and made up of multiple interdependent elements, that are often ‘adaptive’, in that they have the capacity to change and learn from events. This is in itself both ‘evolutionary’ and ‘behavioural’ and can be understood as emerging from the interaction of autonomous agents – especially people. Some generic principles of complex systems defined by Mitleton Kelly that are of concern here are: self-organisation emergence interdependence feedback space of possibilities co-evolving creation of new order Whilst the behaviours of complex systems clearly do not fall into our conventional top down perception of management and production, anticipating such behaviours is becoming more and more essential for products, processes and policies. For example, compare the traditional top down model of news generation, distribution and consumption to the “emerging media eco-system” (Bowman and Willis 14). Figure 1 (Bowman & Willis 10) Figure 2 (Bowman & Willis 12) To the traditional news organisations, such a “democratization of production” (McLuhan 230) has been a huge cause for concern. The agencies once solely responsible for the representation of reality are now lost in a global miasma of competing perspectives. Can we anticipate and account for complex behaviours? Eve Mitleton Kelly states that “if organisations are understood as complex evolving systems co-evolving as part of a social ‘ecosystem’, then that changed perspective changes ways of acting and relating which lead to a different way of working. Thus, management strategy changes, and our organizational design paradigms evolve as new types of relationships and ways of working provide the conditions for the emergence of new organisational forms” (Mitleton-Kelly 6). Complexity in Design It is thus through design practice and processes that discovering methods for anticipating complex systems behaviours seem most possible. The Embracing Complexity in Design (ECiD) research programme, is a contemporary interdisciplinary research cluster consisting of academics and designers from architectural engineering, robotics, geography, digital media, sustainable design, and computing aiming to explore the possibility of trans disciplinary principles of complexity in design. Over arching this work is the conviction that design can be seen as model for complex systems researchers motivated by applying complexity science in particular domains. Key areas in which design and complexity interact have been established by this research cluster. Most immediately, many designed products and systems are inherently complex to design in the ordinary sense. For example, when designing vehicles, architecture, microchips designers need to understand complex dynamic processes used to fabricate and manufacture products and systems. The social and economic context of design is also complex, from market economics and legal regulation to social trends and mass culture. The process of designing can also involve complex social dynamics, with many people processing and exchanging complex heterogeneous information over complex human and communication networks, in the context of many changing constraints. 
Current key research questions are:

- how can the methods of complex systems science inform designers?
- how can design inform research into complex systems?

Whilst ECiD acknowledges that to answer such questions effectively the theoretical and methodological relations between complexity science and design need further exploration and enquiry, there are no reliable precedents for such an activity across the sciences and the arts in general. Indeed, even in areas where a convergence of humanities methodology with scientific practice might seem to be most pertinent, examples are few and far between. In his paper "Post Structuralism, Hypertext & the World Wide Web", Luke Tredinnick states that "despite the concentration of post-structuralism on text and texts, the study of information has largely failed to exploit post-structuralist theory" (Tredinnick 5). Yet it is surely in the convergence of art and design with computation and the media that a search for practical trans-metadisciplinary methodologies might be most fruitful. It is in design for interactive media, where algorithms meet graphics, where the user can interact, adapt and amend, that self-organisation, emergence, interdependence, feedback, the space of possibilities, co-evolution and the creation of new order are embraced on a day-to-day basis by designers. A digitally interactive environment such as the World Wide Web clearly demonstrates all the key aspects of a complex system. Indeed, it has already been described as a "complexity machine" (Qvortrup 9). It is important to remember that this "complexity machine" has been designed. It is an intentional facility. It may display all the characteristics of complexity but, whilst some of its attributes are most demonstrative of self-organisation and emergence, the Internet itself has not emerged spontaneously. For example, Tredinnick details the evolution of the World Wide Web through the Memex machine of Vannevar Bush, through Ted Nelson's hypertext system Xanadu, to Tim Berners-Lee's Enquire (Tredinnick 3). The Internet was engineered. So, whilst we may not be able to entirely predict complex behaviour, we can, and do, quite clearly design for it. When designing digitally interactive artifacts we design parameters or coordinates to define the space within which a conceptual process will take place. We can never begin to predict precisely what those processes might become through interaction, emergence and self-organisation, but we can establish conceptual parameters that guide and delineate the space of possibilities. Indeed this fact is so transparently obvious that many commentators in the humanities have been pushed to remark that interaction is merely interpretation, and so-called new media is not new at all; that one interacts with a book in much the same way as with a digital artifact. After all, post-structuralist theory had established the "death of the author" in the 1970s – the a priori that all cultural artifacts are open to interpretation, where all meanings must be completed by the reader. The concept of the "open work" (Eco 6) has been an established postmodern concept for over 30 years and is commonly recognised as a feature of surrealist montage, poetry, the writings of James Joyce, even advertising design, where a purposive space for engagement and interpretation of a message is designated, without which the communication does not "work".
However, this concept is also most successfully employed in relation to installation art and, more recently, interactive art, as a reflection of the artist's conscious decision to leave part of a work open to interpretation and/or interaction.

Art & Complex Systems

One of the key projects of Embracing Complexity in Design has been to look at the relationship between art and complex systems. There is a relatively well established history of exploring art objects as complex systems in themselves that finds its origins in the systems art movement of the 1970s. In his paper "Observing 'Systems Art' from a Systems-Theoretical Perspective", Francis Halsall defines systems art as "emerging in the 1960s and 1970s as a new paradigm in artistic practice … displaying an interest in the aesthetics of networks, the exploitation of new technology and New Media, unstable or de-materialised physicality, the prioritising of non-visual aspects, and an engagement (often politicised) with the institutional systems of support (such as the gallery, discourse, or the market) within which it occurs" (Halsall 7). More contemporarily, "Open Systems: Rethinking Art c.1970", at Tate Modern, London, focuses upon systems artists' "rejection of art's traditional focus on the object, to wide-ranging experiments with media that included dance, performance and … film & video" (De Salvo 3). Artists include Andy Warhol, Richard Long, Gilbert & George, Sol LeWitt, Eva Hesse and Bruce Nauman. In 2002, the Samuel Dorsky Museum of Art, New York, held an international exhibition entitled "Complexity: Art & Complex Systems", which was concerned with "art as a distinct discipline offer[ing] its own unique approache[s] and epistemic standards in the consideration of complexity" (Galanter and Levy 5). The organisers go on to describe four ways in which artists engage the realm of complexity:

- presentations of natural complex phenomena that transcend conventional scientific visualisation
- descriptive systems which describe complex systems in an innovative and often idiosyncratic way
- commentary on complexity science itself
- technical applications of genetic algorithms, neural networks and a-life

ECiD artist Julian Burton makes work that visualises how companies operate in specific relation to their approach to change and innovation. He is a strategic artist and facilitator who makes "pictures of problems to help people talk about them" (Burton). Clients include public and private sector organisations such as Barclays, Shell, Prudential, KPMG and the NHS. He is quoted as saying: "Pictures are a powerful way to engage and focus a group's attention on crucial issues and challenges, and enable them to grasp complex situations quickly. I try and create visual catalysts that capture the major themes of a workshop, meeting or strategy and re-present them in an engaging way to provoke lively conversations" (Burton). This is a simple and direct method of using art as a knowledge elicitation tool that falls into the first and second categories above. The third category is demonstrated by the ground-breaking TechnoSphere, which was specifically inspired by complexity theory, landscape and artificial life. Launched in 1995 as an Arts Council funded online digital environment, it was created by Jane Prophet and Gordon Selley. TechnoSphere is a virtual world, populated by artificial life forms created by users of the World Wide Web.
The digital ecology of the 3D world, housed on a server, depends on the participation of an online public who accesses the world via the Internet. At the time of writing it has attracted over 100,000 users who have created over a million creatures. The artistic exploration of technical applications is by default a key field for researching the convergence of trans-metadisciplinary methodologies. Troy Innocent's lifeSigns evolves multiple digital media languages "expressed as a virtual world – through form, structure, colour, sound, motion, surface and behaviour" (Innocent). The work explores the idea of "emergent language through play – the idea that new meanings may be generated through interaction between human and digital agents". Thus this artwork combines three areas of converging research: artificial life, computational semiotics and digital games. In his paper "What Is Generative Art? Complexity Theory as a Context for Art Theory", Philip Galanter describes all art as generative on the basis that it is created from the application of rules. Yet, as demonstrated above, what is significantly different and important about digital interactivity, as opposed to its predecessor, interpretation, is its provision of a graphical user interface (GUI) to component parts of a text such as symbol, metaphor, narrative, etc., for the multiple "authors" and the multiple "readers" in a digitally interactive space of possibility. This offers us tangible, instantaneous reproduction and dissemination of interpretations of an artwork.

Conclusion: Digital Interactivity – A Complex Medium

Digital interaction of any sort is thus a graphic model of the complex process of communication. Here, complexity does not need deconstructing, representing nor modelling, as the aesthetics (as in apprehended by the senses) of the graphical user interface conveniently come first. Design for digital interactive media is thus design for complex adaptive systems. The theoretical and methodological relations between complexity science and design can clearly be expounded especially well through post-structuralism. The work of Barthes, Derrida and Foucault offers us the notion of all cultural artefacts as texts or systems of signs, whose meanings are not fixed but rather sustained by networks of relationships. Implemented in a digital environment, post-structuralist theory is tangible complexity. Strangely, whilst Philip Galanter states that science has no necessary overreaching claim to the study of complexity, he then argues conversely that "contemporary art theory rooted in skeptical continental philosophy [reduces] art to social construction [as] postmodernism, deconstruction and critical theory [are] notoriously elusive, slippery, and overlapping terms and ideas…that in fact [are] in the business of destabilising apparently clear and universal propositions" (4). This seems to imply that for Galanter, postmodern rejections of grand narratives will necessarily exclude the "new scientific paradigm" of complexity, a paradigm that he himself expects to be universal. Whilst he cites Lyotard (6) describing both political and linguistic reasons why postmodern art celebrates plurality, denying any progress towards singular totalising views, he fails to consider what happens if that singular totalising view incorporates interactivity. Surely complexity is pluralistic by its very nature?
In the same vein, if language for Derrida is "an unfixed system of traces and differences … regardless of the intent of the authored texts … with multiple equally legitimate meanings" (Galanter 7), then I have heard no better description of the signifiers, signifieds, connotations and denotations of digital culture. Complexity in its entirety can also be conversely understood as the impact of digital interactivity upon culture per se, which has a complex causal relation in itself; Qvortrup's notion of a "communications event" (9), such as the Danish publication of the Mohammed cartoons, falls into this category. Yet a complex causality could be traced further into cultural processes, enlightening media theory: from the relationship between advertising campaigns and brand development, to the exposure and trajectory of the celebrity, to the evolution of visual language in media cultures, to the relationship between exposure to representation and behaviour. In digital interaction the terms art, design and media converge into a process-driven, performative event that demonstrates emergence through autopoietic processes within a designated space of possibility. By insisting that all artwork is generative, Galanter, like many other writers, negates the medium entirely, which allows him to insist that generative art is "ideologically neutral" (Galanter 10). Generative art, like all digitally interactive artifacts, is not neutral but rather ideologically plural. Thus, if one integrates Qvortrup's (8) delineation of medium theory and complexity theory, we may have what we need: a first theory of a complex medium. Through interactive media, complexity theory is the first postmodern science; the first science of culture.

References

Bowman, Shane, and Chris Willis. We Media. 21 Sep. 2003. 9 March 2007 <http://www.hypergene.net/wemedia/weblog.php>.
Burton, Julian. "Hedron People." 9 March 2007 <http://www.hedron.com/network/assoc.php4?associate_id=14>.
Cahir, Jayde, and Sarah James. "Complex: Call for Papers." M/C Journal 9 Sep. 2006. 7 March 2007 <http://journal.media-culture.org.au/journal/upcoming.php>.
De Salvo, Donna, ed. Open Systems: Rethinking Art c. 1970. London: Tate Gallery Press, 2005.
Eco, Umberto. The Open Work. Cambridge, Mass.: Harvard UP, 1989.
Galanter, Philip, and Ellen K. Levy. Complexity: Art & Complex Systems. SDMA Gallery Guide, 2002.
Galanter, Philip. "Against Reductionism: Science, Complexity, Art & Complexity Studies." 2003. 9 March 2007 <http://isce.edu/ISCE_Group_Site/web-content/ISCE_Events/Norwood_2002/Norwood_2002_Papers/Galanter.pdf>.
Halsall, Francis. "Observing 'Systems-Art' from a Systems-Theoretical Perspective." CHArt 2005. 9 March 2007 <http://www.chart.ac.uk/chart2005/abstracts/halsall.htm>.
Innocent, Troy. "Life Signs." 9 March 2007 <http://www.iconica.org/main.htm>.
Johnson, Jeffrey. "Embracing Complexity in Design (ECiD)." 2007. 9 March 2007 <http://www.complexityanddesign.net/>.
Lyotard, Jean-Francois. The Postmodern Condition. Manchester: Manchester UP, 1984.
McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: U of Toronto P, 1962.
Mitleton-Kelly, Eve, ed. Complex Systems and Evolutionary Perspectives on Organisations. Elsevier Advanced Management Series, 2003.
Prophet, Jane. "Jane Prophet." 9 March 2007 <http://www.janeprophet.co.uk/>.
Qvortrup, Lars. "Understanding New Digital Media." European Journal of Communication 21.3 (2006): 345-356.
Tredinnick, Luke. "Post Structuralism, Hypertext & the World Wide Web." Aslib Proceedings 59.2 (2007): 169-186.
Wilson, Edward Osborne. Consilience: The Unity of Knowledge. New York: A.A. Knopf, 1998.

Citation reference for this article

MLA Style
Cham, Karen, and Jeffrey Johnson. "Complexity Theory: A Science of Cultural Systems?" M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/08-cham-johnson.php>.

APA Style
Cham, K., and J. Johnson. (Jun. 2007) "Complexity Theory: A Science of Cultural Systems?" M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/08-cham-johnson.php>.
APA, Harvard, Vancouver, ISO, and other styles
34

Barker, Timothy Scott. "Information and Atmospheres: Exploring the Relationship between the Natural Environment and Information Aesthetics." M/C Journal 15, no. 3 (May 3, 2012). http://dx.doi.org/10.5204/mcj.482.

Full text
Abstract:
Our culture abhors the world. Yet quicksand is swallowing the duellists; the river is threatening the fighter: earth, waters and climate, the mute world, the voiceless things once placed as a decor surrounding the usual spectacles, all those things that never interested anyone, from now on thrust themselves brutally and without warning into our schemes and manoeuvres (Michel Serres, The Natural Contract, p. 3).

When Michel Serres describes culture's abhorrence of the world in the opening pages of The Natural Contract he draws our attention to the sidelining of nature in histories and theories that have sought to describe Western culture. As Serres argues, cultural histories are quite often built on the debates and struggles of humanity, which are largely held apart from their natural surroundings, as if on a stage, "purified of things" (3). But, as he is at pains to point out, human activity and conflict always take place within a natural milieu, a space of quicksand, swelling rivers, shifting earth, and atmospheric turbulence. Recently, via the potential for vast environmental change, what was once thought of as a staid "nature" has reasserted itself within culture. In this paper I explore how Serres's positioning of nature can be understood amid new communication systems, which, via the apparent dematerialisation of messages, seem to have further removed culture from nature. From here, I focus on a set of artworks that work against this division, reformulating the connection between information, a topic usually considered in relation to media and anthropic communication (and something about which Serres too has a great deal to say), and nature, an entity commonly considered beyond human contrivance. In particular, I explore how information visualisation and sonification have been used to give a new sense of materiality to the atmosphere, repotentialising the air as a natural and informational entity. The Natural Contract argues for the legal legitimacy of nature, a natural contract similar in standing to Rousseau's social contract. Serres's book explores the history and notion of a "legal person", arguing for a linking of the scientific view of the world and the legal visions of social life, where inert objects and living beings are considered within the same legal framework. As such, The Natural Contract does not deal with ecology per se, but instead focuses on an argument for the inclusion of nature within law (Serres, "A Return" 131). In a drastic reconfiguring of the subject/object relationship, Serres explains how the space that once existed as a backdrop for human endeavour now seems to thrust itself directly into history. "They (natural events) burst in on our culture, which had never formed anything but a local, vague, and cosmetic idea of them: nature" (Serres, The Natural Contract 3). In this movement, nature does not simply take on the role of a new object to be included within a world still dominated by human subjects. Instead, human beings are understood as intertwined with a global system of turbulence that is both manipulated by them and manipulates them. Taking my lead from Serres's book, in this paper I begin to explore the disconnections and reconnections that have been established between information and the natural environment.
While I acknowledge that there is nothing natural about the term "nature" (Harman 251), I use the term to designate an environment constituted by the systematic processes of the collection of entities that are neither human beings nor human-crafted artefacts. As the formation of cultural systems becomes demarcated from these natural objects, the scene is set for the development of culturally mediated concepts such as "nature" and "wilderness," as entities untouched and unspoilt by cultural process (Morton). On one side of the divide sits the complex of communication systems; on the other, "nature". The restructuring of information flows due to developments in electronic communication has ostensibly removed messages from the medium of nature. Media is now considered within its own ecology (see Fuller; Strate), quite separate from nature, except when nature is developed as media content (see Cubitt; Murray and Heumann). A separation between the structures of media ecologies and the structures of natural ecologies has emerged over the history of electronic communication. For instance, since the synoptic media theory of McLuhan it has been generally acknowledged that the shift from script to print, from stone to parchment, and from the printing press to more recent developments such as the radio, telephone, television, and Web 2.0, have fundamentally altered the structure and effects of human relationships. However, these developments, "the extensions of man" (McLuhan), also changed the relationship between society and nature. Changes in communications technology have allowed people to remain dispersed, as ideas, in the form of electric currents or pulses of light, travel vast distances and in diverse directions, with communication no longer requiring human movement across geographic space. Technologies such as the telegraph and the radio, with their ability to seemingly dematerialise the media of messages, reformulated the concept of communication into a "quasi-physical connection" across the obstacles of time and space (Clarke, "Communication" 132). Prior to this, the natural world itself was the medium through which information was passed. Rather than messages transmitted via wires, communication was associated with the transport of messages through the world via human movement, with the materiality of the medium measured in the time it took to cover geographic space. The flow of messages followed trade flows (Briggs and Burke 20). Messages moved along trails, on rail, over bridges, down canals, and along shipping channels, arriving at their destination as information. More recently, however, information, due to its instantaneous distribution and multiplication across space, seems to have no need for nature as a medium. Nature has become merely a topic for information, as media content, rather than something that takes part within the information system itself. The above example illustrates a separation between information exchange and the natural environment brought about by a set of technological developments. As Serres points out, the word "media" is etymologically related to the word "milieu". Hence, a theory of media should always be related to an understanding of the environment (Crocker). But humans no longer need to physically move through the natural world to communicate; ideas can move freely from region to region, from air-conditioned room to air-conditioned room, relatively unimpeded by natural forces or geographic distance.
For a long time now, information exchange has not necessitated human movement through the natural environment, and this has consequences for how the formation of culture, and its location in (or dislocation from) the natural world, is viewed. A number of artists have begun questioning the separation between media and nature, particularly concerning the materiality of air, and using information to provide new points of contact between media and the atmosphere (for a discussion of the history of ecoart see Wallen). In Eclipse (2009) (fig. 1), for instance, an internet-based work undertaken by the collective EcoArtTech, environmental sensing technology and online media are used experimentally to visualise air pollution. EcoArtTech is made up of the artist duo Cary Peppermint and Leila Nadir, and since 2005 they have been inquiring into the relationship between digital technology and the natural environment, particularly regarding concepts such as "wilderness". In Eclipse, EcoArtTech garner photographs of American national parks from social media and photo sharing sites. Air quality data gathered from the nearest capital city is then input into an algorithm that visibly distorts the image based on the levels of particle pollution detected in the atmosphere. The photographs that circulate on photo sharing sites such as Flickr (photographs that are usually rather banal in their adherence to a history of wilderness photography) are augmented by the environmental pollution circulating in nearby capital cities.

Figure 1: EcoArtTech, Eclipse (detail of screenshot), 2009 (Internet-based work available at: http://turbulence.org/Works/eclipse/)

The digital is often associated with the clean transmission of information, as packets of data move from a server, over fibre optic cables, to be unpacked and re-presented on a computer's screen. Likewise, the photographs displayed in Eclipse are quite often of an unspoilt nature, containing no errors in their exposure or focus (most probably because these wilderness photographs were taken with digital cameras). As the photographs are overlaid with information garnered from air quality levels, the "unspoilt" photograph is directly related to pollution in the natural environment. In Eclipse the background noise of "wilderness," the pollution in the air, is reframed as foreground. "We breathe background noise… Background noise is the ground of our perception, absolutely uninterrupted, it is our perennial sustenance, the element of the software of all our logic" (Serres, Genesis 7). Noise is activated in Eclipse in a similar way to Serres's description, as an indication of the wider milieu in which communication takes place (Crocker). Noise links the photograph and its transmission not only to the medium of the internet and the glitches that arise as information is circulated, but also to the air in the originally photographed location. In addition to noise, there are parallels between the original photographs of nature gleaned from photo sharing sites and Serres's concept of a history that somehow stands apart from the effects of ongoing environmental processes. By compartmentalising the natural and cultural worlds, both the historiography that Serres argues against and the wilderness photograph produce a concept of nature that is somehow outside, behind, or above human activities and the associated matter of noise. Eclipse, by altering photographs using real-time data, puts the still image into contact with the processes and informational outputs of nature.
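To make the mechanism concrete, the following is a minimal sketch of the kind of pollution-to-distortion mapping described above. It is an illustration only: the function name, the linear noise scaling, and the use of a PM2.5 reading are assumptions of this sketch, not EcoArtTech's published algorithm.

```python
# A sketch (not EcoArtTech's code) of data-driven image distortion:
# a pollution reading scales the grain injected into a wilderness photograph.
import numpy as np
from PIL import Image

def distort_by_air_quality(image_path: str, pm25: float, out_path: str) -> None:
    """Visibly degrade a photograph in proportion to particle pollution.

    pm25 is a hypothetical particulate reading in micrograms per cubic metre.
    """
    img = np.asarray(Image.open(image_path), dtype=np.float32)
    # Map the reading to a noise amplitude: 0 leaves the image pristine,
    # 150 (a heavily polluted day) yields heavy visible grain.
    amplitude = float(np.clip(pm25 / 150.0, 0.0, 1.0)) * 80.0
    noise = np.random.normal(0.0, amplitude, img.shape)
    distorted = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(distorted).save(out_path)

# Example usage (hypothetical file names):
# distort_by_air_quality("yosemite.jpg", pm25=90.0, out_path="yosemite_eclipsed.jpg")
```

Any monotonic mapping from pollution level to visible degradation would serve the same rhetorical purpose: the dirtier the air in the nearby city, the less "unspoilt" the wilderness photograph remains.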
Air quality sensors detect pollution in the atmosphere and code these atmospheric processes into computer-readable information. The photograph is no longer static but is now open to continual recreation and degeneration, dependent on the coded value of the atmosphere in a given location. A similar materiality is given to air in a public work undertaken by Preemptive Media, titled Areas Immediate Reading (AIR) (fig. 2). In this project, Preemptive Media, made up of Beatriz da Costa, Jamie Schulte and Brooke Singer, equipped participants with instruments for measuring air quality as they walked around New York City. The devices monitor the carbon monoxide (CO), nitrogen oxides (NOx) or ground-level ozone (O3) levels that are being breathed in by the carrier. As Michael Dieter has pointed out in his reading of the work, the application of sensing technology by Preemptive Media is in distinct contrast to the conventional application of air quality monitoring, which usually takes the form of extremely high-resolution fixed devices spread over great distances. These larger air monitoring networks tend to present the value garnered from a large expanse of the atmosphere that covers individual cities or states. The AIR project, in contrast, by using small mobile sensors, attempts to put people in informational contact with the air that they are breathing in their local and immediate time and place, and allows them to monitor the small parcels of atmosphere that surround other users in other locations (Dieter). It thus presents many small and mobile spheres of atmosphere, inhabited by individuals as they move through the city. In AIR we see the experimental application of an already developed technology in order to put people on the street in contact with the atmospheres that they are moving through. It gives a new informational form to the "vast but invisible ocean of air that surrounds us and permeates us" (Ihde 3), which in this case is given voice by a technological apparatus that converts the air into information. The atmosphere as information becomes less of a vague background and more of a measurable entity that ingresses into the lives and movements of human users. The air is conditioned by information; the turbulent and noisy atmosphere has been converted via technology into readable information (Connor 186-88).

Figure 2: Preemptive Media, Areas Immediate Reading (AIR) (close-up of device), 2011

Throughout his career Serres has developed a philosophy of information and communication that may help us to reframe the relationship between the natural and cultural worlds (see Brown). Conventionally, the natural world is understood as made up of energy and matter, with exchanges of energy and the flows of biomass through food webs binding ecosystems together (DeLanda 120-1). However, the tendencies and structures of natural systems, like cultural systems, are also dependent on the communication of information. It is here that Serres provides us with a way to view natural and cultural systems as connected by a flow of energy and information. He points out that in the wake of Claude Shannon's famous Mathematical Theory of Communication it has been possible to consider the relationship between information and thermodynamics, at least in Shannon's explanation of noise as entropy (Serres, Hermes 74).
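As a brief aside that is not in the article itself: the quantity behind this analogy is Shannon's entropy of a message source, which for symbol probabilities $p_i$ is standardly written as

$$H = -\sum_{i} p_i \log_2 p_i$$

The more uniform, and hence noise-like, the probabilities, the higher $H$; it is this formal echo of thermodynamic entropy that licenses treating informational noise and thermodynamic disorder as analogues.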
For Serres, an ecosystem can be conceptualised as an informational and energetic system: "it receives, stores, exchanges, and gives off both energy and information in all forms, from the light of the sun to the flow of matter which passes through it (food, oxygen, heat, signals)" (Serres, Hermes 74). Just as we are related to the natural world through flows of energy (sunlight is converted into energy by plants, which we in turn consume as food), we are also bound together by flows of information. The task is to find new ways to sense this information, to actualise the information, and to imagine nature as more than a welter of data and the air as more than background. If we think of information in broad-ranging terms as "coded values of the output of a process" (Losee 254), then we see that information and the environment, as a setting that is produced by continual and energetic processes, are in constant contact. After all, humans sense information from the environment all the time; we constantly decode the coded values of environmental processes transmitted via the atmosphere. I smell a flower, I hear bird songs, and I see the red glow of a sunset. The process of the singing bird is coded as vibrations of air particles that knock against my eardrum. The flower is coded as molecules in the atmosphere enter my nose and bind to cilia. The red glow is coded as wavelengths from the sun are dispersed in the Earth's atmosphere and arrive at my eye. Information, of course, does not actually exist as information until some observing system constructs it (Clarke, "Information" 157-159). This observing system, as we see the sunset, hear the birds, or smell the flower, involves the atmosphere as a medium, along with our sense organs and cognitive and non-cognitive processes. The molecules in the atmosphere exist independently of our sense of them, but they do not actualise as information until they are operationalised by the observational system. Prior to this, information can be thought of as noise circulating within the atmosphere. Heinz von Foerster, one of the key figures of cybernetics, states: "The environment contains no information. The environment is as it is" (von Foerster in Clarke, "Information" 157). Information, in this model, actualises only when something in the world causes a change to the observational system, as a difference that makes a difference (Bateson 448-466). Air expelled from a bird's lungs and out its beak causes air molecules to vibrate, introducing difference into the atmosphere, which is then picked up by my ear and registered as sound, informing me that a bird is nearby. One bird song is picked up as information amid the swirling noise of nature, and a difference in the air makes a difference to the observational system. It may be useful to think of the purpose of information as being to control action, something that is necessary "whenever the people concerned, controllers as well as controlled, belong to an organised social group whose collective purpose is to survive and prosper" (Scarrott 262). Information in this sense underpins the organisation of groups. Using this definition rooted in cybernetics, we see that information allows groups, which are dependent on certain control structures based on the sending and receiving of messages through media, to thrive, and it defines the boundaries of these groups. We see this in a flock of birds, for instance, which forms based on the information that one bird garners from the movements of the other birds in proximity.
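The flocking example can be made concrete with a standard boids-style simulation (after Craig Reynolds). The rules and weights below are the textbook ones, assumed here for illustration; nothing in the article specifies them.

```python
# A boids-style sketch of self-organisation from purely local information:
# each bird reads only its neighbours, yet a flock emerges globally.
import numpy as np

N, NEIGHBOUR_RADIUS, DT = 50, 5.0, 0.1
rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (N, 2))   # positions of N birds
vel = rng.uniform(-1, 1, (N, 2))   # velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        mask = (dist > 0) & (dist < NEIGHBOUR_RADIUS)  # local neighbours only
        if not mask.any():
            continue
        # Three classic rules, each computed from neighbours alone:
        cohesion = pos[mask].mean(axis=0) - pos[i]     # steer towards the group
        alignment = vel[mask].mean(axis=0) - vel[i]    # match neighbours' heading
        separation = (pos[i] - pos[mask]).sum(axis=0)  # avoid crowding
        new_vel[i] = vel[i] + 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
    return pos + new_vel * DT, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
# No bird holds a picture of the whole flock; order emerges from local differences.
```

Each simulated bird responds only to the information it garners from nearby birds, which is precisely the sense in which information organises the group.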
Extrapolating from this, if we are to live included in an ecological system capable of survival, the transmission of information is vital. But the form of the information is also important. To communicate, for example, one entity first needs to recognise that the other is speaking and differentiate this information from the noise in the air. Following Clarke and von Foerster, an observing system needs to be operational. An art project that gives aesthetic form to environmental processes in this vein, and one that is particularly concerned with the co-agentive relation between humans and nature, is Reiko Goto and Tim Collins's Plein Air (2010) (fig. 3), an element in their ongoing Eden 3 project. In this work a technological apparatus is wired to a tree. This apparatus, which references the box easels most famously used by the Impressionists to paint "en plein air", uses sensing technology to detect the tree's responses to the varying CO2 levels in the atmosphere. An algorithm then translates this into real-time piano compositions. The tree's biological processes are coded into the voice of a piano and sensed by listeners as aesthetic information. What is at stake in this work is a new understanding of atmospheres as a site for the exchange of information, and an attempt to resituate the interdependence of human and non-human entities within an experimental aesthetic system. As we breathe out carbon dioxide, both through our physiological process of breathing and our cultural processes of polluting, trees breathe it in. By translating these biological processes into a musical form, Collins and Goto's work signals a movement from a process of atmospheric exchange to a digital process of sensing and coding, the output of which is then transmitted through the atmosphere as sound. It must be mentioned that within this movement from atmospheric gas to atmospheric music we are not listening to the tree alone. We are listening to a much more complex polyphony involving the components of the digital sensing technology, the tree, the gases in the atmosphere, and the biological (breathing) and cultural processes (cars, factories and coal-fired power stations) that produce these gases.

Figure 3: Reiko Goto and Tim Collins, Plein Air, 2010

As both Don Ihde and Steven Connor have pointed out, the air that we breathe is not neutral. It is, on the contrary, given its significance in technology, sound, and voice. Taking this further, we might understand sensing technology as conditioning the air with information. This type of air conditioning, as information alters the condition of air, occurs as technology picks up, detects, and makes sensible phenomena in the atmosphere. While communication media such as the telegraph and other electronic information distribution systems may have distanced information from nature, the sensing technology experimentally applied by EcoArtTech, Preemptive Media, and Goto and Collins may remind us of the materiality of air. These technologies allow us to connect to the atmosphere; they reformulate it, converting it to information, giving new form to the coded processes in nature.

Acknowledgment

All images reproduced with the kind permission of the artists.

References

Bateson, Gregory. Steps to an Ecology of Mind. Chicago: University of Chicago Press, 1972.
Briggs, Asa, and Peter Burke. A Social History of the Media: From Gutenberg to the Internet. Malden: Polity Press, 2009.
Brown, Steve. "Michel Serres: Science, Translation and the Logic of the Parasite." Theory, Culture and Society 19.1 (2002): 1-27.
Clarke, Bruce. "Communication." Critical Terms for Media Studies. Eds. Mark B. N. Hansen and W. J. T. Mitchell. Chicago: University of Chicago Press, 2010. 131-45.
-----. "Information." Critical Terms for Media Studies. Eds. Mark B. N. Hansen and W. J. T. Mitchell. Chicago: University of Chicago Press, 2010. 157-71.
Connor, Steven. The Matter of Air: Science and the Art of the Ethereal. London: Reaktion, 2010.
Crocker, Stephen. "Noise and Exceptions: Pure Mediality in Serres and Agamben." CTheory: 1000 Days of Theory (2007). 7 June 2012 <http://www.ctheory.net/articles.aspx?id=574>.
Cubitt, Sean. EcoMedia. Amsterdam and New York: Rodopi, 2005.
DeLanda, Manuel. Intensive Science and Virtual Philosophy. London and New York: Continuum, 2002.
Dieter, Michael. "Processes, Issues, AIR: Toward Reticular Politics." Australian Humanities Review 46 (2009). 9 June 2012 <http://www.australianhumanitiesreview.org/archive/Issue-May-2009/dieter.htm>.
Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, MA: MIT Press, 2005.
Harman, Graham. Guerrilla Metaphysics. Illinois: Open Court, 2005.
Ihde, Don. Listening and Voice: Phenomenologies of Sound. Albany: State University of New York Press, 2007.
Innis, Harold. Empire and Communications. Toronto: Voyageur Classics, 1950/2007.
Losee, Robert M. "A Discipline Independent Definition of Information." Journal of the American Society for Information Science 48.3 (1997): 254-69.
McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Sphere Books, 1964/1967.
Morton, Timothy. Ecology Without Nature: Rethinking Environmental Aesthetics. Cambridge: Harvard University Press, 2007.
Murray, Robin, and Joseph Heumann. Ecology and Popular Film: Cinema on the Edge. Albany: State University of New York Press, 2009.
Scarrott, G. C. "The Nature of Information." The Computer Journal 32.3 (1989): 261-66.
Serres, Michel. Hermes: Literature, Science, Philosophy. Baltimore: The Johns Hopkins University Press, 1982.
-----. The Natural Contract. Trans. Elizabeth MacArthur and William Paulson. Ann Arbor: The University of Michigan Press, 1992/1995.
-----. Genesis. Trans. Genevieve James and James Nielson. Ann Arbor: The University of Michigan Press, 1982/1995.
-----. "A Return to the Natural Contract." Making Peace with the Earth. Ed. Jerome Binde. Oxford: UNESCO and Berghahn Books, 2007.
Strate, Lance. Echoes and Reflections: On Media Ecology as a Field of Study. New York: Hampton Press, 2006.
Wallen, Ruth. "Ecological Art: A Call for Intervention in a Time of Crisis." Leonardo 45.3 (2012): 234-42.
APA, Harvard, Vancouver, ISO, and other styles
