To see the other types of publications on this topic, follow the link: TECHNO-COMMERCIAL COMPARISON.

Journal articles on the topic 'TECHNO-COMMERCIAL COMPARISON'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 28 journal articles for your research on the topic 'TECHNO-COMMERCIAL COMPARISON.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Katzav, Hadas, Libi Chirug, Zoya Okun, Maya Davidovich-Pinhas, and Avi Shpigelman. "Comparison of Thermal and High-Pressure Gelation of Potato Protein Isolates." Foods 9, no. 8 (August 2, 2020): 1041. http://dx.doi.org/10.3390/foods9081041.

Abstract:
Potato protein isolate (PPI), a commercial by-product of the starch industry, is a promising novel protein for food applications with limited information regarding its techno-functionality. This research focused on the formation of both thermal and high-pressure gels at acidic and neutral pH levels. Our results reveal that physical gels are formed after 30 min by heat at pH 7 and pH 3, while pressure (300–500 MPa) allows the formation of physical gels only at pH 3, and only when the system crosses 30 °C by adiabatic heating during pressurization. Texture profile analysis (TPA) revealed that gel hardness increased with both gelation temperature and pressure, while water-holding capacity was lower for the pressure-induced gels. The proteins released in the water-holding test suggested only partial involvement of patatin in the gel formation. Vitamin C, used as a model thermally labile compound, verified the expected better preservation of such compounds in a pressure-induced gel compared to a thermal one of similar textural properties, presenting a possible advantage of pressure-induced gelation.
2

Abdollahi, Reza, Seyed Mahdia Motahhari, and Hamid Esfandyari. "Integrated Technical and Economical Methodology for Assessment of Undeveloped Shale Gas Prospects: Applying in the Lurestan Shale Gas, Iran." Mathematical Problems in Engineering 2021 (July 7, 2021): 1–8. http://dx.doi.org/10.1155/2021/7919264.

Abstract:
Shale gas resources can help supply the substantial and growing demand for clean energy. In comparison with conventional reservoirs, shale gas reservoirs have lower production potential, and selecting the most favorable areas from the broad region of a shale gas prospect is crucial for commercial development. These areas are screened against key evaluation indicators that affect the ultimate recovery of shale gas reservoirs. Many attempts have been made to screen sweet spots by applying different evaluation indicators. These studies mainly focus on geological sweet spot identification without considering the economic indicators that may influence the order in which geological sweet spots are developed. The current study introduces a methodology for selecting the best techno-economic spots in undeveloped shale gas regions by integrating technical and economic criteria. The techno-economic areas are defined as the geological sweet spots with the highest rate of return under the currently employed technology. The economic objective functions for selecting these areas are net present value, internal rate of return, and payback time. To estimate the unknown features needed to integrate the technical and economic criteria in undeveloped areas, an analogy study is applied. Due to the large number of unknowns and uncertainties in shale gas evaluation and the low confidence of deterministic results, a probabilistic approach is used. As the first attempt at shale gas assessment in Iran, the Lurestan shale gas region is evaluated by applying this approach. The results indicate that none of the selected geological sweet spots in this region are commercial at current cost rates and with the technology available in Iran, although the region can be considered an affordable future source of energy.
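For readers unfamiliar with the three economic objective functions named in this abstract, the sketch below shows how NPV, IRR and payback time are typically computed for a single candidate area. It is an illustration only, not the paper's model, and every cash-flow figure in it is a hypothetical placeholder.

```python
# Minimal sketch (not from the paper): screening one candidate shale-gas area with
# three economic objective functions -- NPV, IRR and payback time.
# All cash-flow numbers below are hypothetical placeholders.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at t = 0 (the investment, negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change of NPV in [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def payback_time(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # never pays back within the horizon

# Hypothetical area: 100 M$ up-front development cost, 18 M$/yr net revenue for 15 years.
flows = [-100.0] + [18.0] * 15
print(f"NPV@10%: {npv(0.10, flows):.1f} M$, IRR: {irr(flows):.1%}, payback: {payback_time(flows)} yr")
```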
3

Fisher, Michael, Jay Apt, and Jay F. Whitacre. "Can flow batteries scale in the behind-the-meter commercial and industrial market? A techno-economic comparison of storage technologies in California." Journal of Power Sources 420 (April 2019): 1–8. http://dx.doi.org/10.1016/j.jpowsour.2019.02.051.

4

Intan Shafinas Muhammad, Noor, and Kurt A. Rosentrater. "Economic Assessment of Bioethanol Recovery Using Membrane Distillation for Food Waste Fermentation." Bioengineering 7, no. 1 (February 11, 2020): 15. http://dx.doi.org/10.3390/bioengineering7010015.

Abstract:
Ethanol is in high demand from different sectors such as fuel, beverages, and other industrial applications. Commonly, ethanol has been produced by yeast fermentation using sugar crops as a feedstock. However, food waste (FW) has been identified as a promising resource for producing ethanol because it contains a high amount of glucose. Generally, column distillation has been used to separate ethanol from the fermentation broth, but this operation is considered an energy-intensive process. By contrast, membrane distillation is expected to be more practical and cost-effective because of its lower energy requirement. Therefore, this study compares the economic performance of FW fermentation with membrane distillation and with a conventional distillation system using the techno-economic analysis (TEA) method. A commercial-scale FW fermentation plant was modeled in SuperPro Designer V9.0. Discounted cash flow analysis was employed to determine the ethanol minimum selling price (MSP) for both distillation systems at a 10% internal rate of return. Results from this analysis showed that membrane distillation has a higher MSP than the conventional process, $6.24 versus $2.41 per gallon ($1.65 versus $0.64 per liter). Hence, this study found that membrane distillation is not economical to implement in commercial-scale ethanol production.
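The minimum selling price reported in this abstract comes from a discounted cash flow analysis; in such analyses the MSP is usually defined implicitly as the product price that drives the project NPV to zero at the target rate of return. A generic formulation (notation mine, not taken from the paper):

```latex
\mathrm{NPV}(p_{\mathrm{MSP}}) \;=\; -\,I_0 \;+\; \sum_{t=1}^{T} \frac{p_{\mathrm{MSP}}\,Q_t - C_t}{(1+i)^{t}} \;=\; 0,
\qquad i = 0.10,
```

where $I_0$ is the installed capital cost, $Q_t$ the ethanol sold in year $t$, $C_t$ the operating cost in year $t$, and $i$ the target internal rate of return.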
5

Nandiyanto, Asep Bayu Dani, Risti Ragadhita, Meli Fiandini, Dwi Fitria Al Husaeni, Dwi Novia Al Husaeni, and Farid Fadhillah. "Domestic waste (eggshells and banana peels particles) as sustainable and renewable resources for improving resin-based brakepad performance: Bibliometric literature review, techno-economic analysis, dual-sized reinforcing experiments, to comparison ..." Communications in Science and Technology 7, no. 1 (July 31, 2022): 50–61. http://dx.doi.org/10.21924/cst.7.1.2022.757.

Abstract:
The objective of this study is to develop a new environmentally friendly brake pad made from eggshell (E) and banana peel (BP) particles as reinforcement agents. E and BP particles were combined as dual reinforcements in various compositions. The E/BP mixture was then embedded in a polymer matrix composed of a resin/hardener mixture in a 1:1 ratio. As references, brake pads using a single reinforcement of E or BP particles were also fabricated. Physical properties (i.e. particle size, surface roughness, morphology, and density), as well as mechanical properties (i.e. hardness, wear rate, and friction coefficient) were investigated. It was observed that using dual reinforcements was preferable (compared to using single reinforcements) because they had a synergistic effect on the mechanical properties of the brake pad. The best mechanical properties were found in dual-reinforcement brake pad specimens using E/BP particles with a higher BP ratio, in which the values of the stiffness test, puncture test, wear rate, and coefficient of friction were 4.5 MPa, 86.80, 0.093 × 10⁻⁴ g/s·mm², and 1.67 × 10⁻⁴, respectively. A high BP particle ratio played a dominant role in the dual reinforcements, increasing the resin's bonding ability and resulting in good adhesion between the reinforcement and matrix. When compared to commercial brake pads, the brake pad specimens fabricated in this study met the standards. The techno-economic analysis also confirmed the prospective production of brake pads from E and BP particles (compared to commercial brake pads). From this research, it is expected that environmentally friendly, low-cost brake pads can be used to replace hazardous friction materials.
6

Raman, Manali, P. Meena, V. Champa, V. Prema, and Priya Ranjan Mishra. "Techno-economic assessment of microgrid in rural India considering incremental load growth over years." AIMS Energy 10, no. 4 (2022): 900–921. http://dx.doi.org/10.3934/energy.2022041.

Abstract:
India, being a developing country with a fast-growing economy, experiences ever-increasing electrical energy demand. Industrial and economic development in rural India is impeded by inadequate, erratic and unreliable grid supply. This has resulted in underperformance of small-scale manufacturing and service industries. Dependency on fossil fuel-based sources as an alternative increases operating costs and carbon emissions. Migration to cleaner energy ensures a sustainable solution and addresses the issues of depleting fossil fuels, global warming and environmental hazards. In this regard, hybrid renewable energy systems have gained wide acceptance as an optimum solution. Hence, the authors have optimally designed a hybrid energy system for power-deprived rural Indian villages. The authors have heeded the vital element of incremental load growth over the years while designing the microgrid, so that it can sustain the increasing load demand of the emerging economy of a developing country. HOMER Pro software is utilized to accomplish system size optimization, and the authors have gained comprehensive insights into the techno-financial feasibility of different dispatch strategies for the proposed energy system. The levelized cost of electricity of the optimal off-grid system catering to multiyear incremental load growth is 0.14 $/kWh, indicating that the proposed system is promising in terms of commercial efficacy. The study performs a detailed analysis of the results obtained during different phases of the project to ensure robustness and supply continuity of the proposed system. The paper also includes a comparison of the carbon footprint of the proposed system with that of the existing system.
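For context, the levelized cost of electricity quoted in this abstract is conventionally defined as the discounted lifetime cost divided by the discounted lifetime electricity served; HOMER reports an equivalent annualized-cost form. A generic definition, with notation mine:

```latex
\mathrm{LCOE} \;=\;
\frac{\displaystyle\sum_{t=0}^{T} \frac{I_t + O_t + F_t}{(1+r)^{t}}}
     {\displaystyle\sum_{t=1}^{T} \frac{E_t}{(1+r)^{t}}},
```

where $I_t$, $O_t$ and $F_t$ are investment, operation-and-maintenance and fuel costs in year $t$, $E_t$ is the electricity served in year $t$, and $r$ is the discount rate.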
7

Wu, Jingjing, Shane W. Rogers, Rebekah Schaummann, and Nichole N. Price. "A Comparison of Multiple Macroalgae Cultivation Systems and End-Use Strategies of Saccharina latissima and Gracilaria tikvahiae Based on Techno-Economic Analysis and Life Cycle Assessment." Sustainability 15, no. 15 (August 7, 2023): 12072. http://dx.doi.org/10.3390/su151512072.

Abstract:
Macroalgae can be processed into various products with the potential to substitute land-based crops; their cultivation can bioextract nutrients from coastal waters. This study investigated the economic cost and environmental impacts of multiple seaweed cultivation platforms, cultivation strategies, and processing/end-use strategies through techno-economic analysis (TEA) and life cycle assessment (LCA) with a focus on Saccharina latissima and Gracilaria tikvahiae. Cultivation platforms included single-layer longline, dual-layer longline, single-layer strip, and dual-layer strip systems. Processing/end-use products included seaweed to biofuel, dried sea vegetables, marketable commercial fertilizer, and animal feed. Economic and environmental costs decreased with dual-layer and strip cultivation systems. Cultivation costs were highest using the common single-layer longline system ($4.44 kg−1 dry weight (dw) S. latissima and $6.73 kg−1 dw G. tikvahiae when cultivated on rotation). The use of the dual-layer strip system reduced cultivation costs to $2.19 kg−1 dw for S. latissima and $3.43 kg−1 dw for G. tikvahiae. Seaweed drying was the major contributor to economic and environmental costs for macroalgae processing. Yet, all scenarios achieved environmental benefits for marine eutrophication. The best environmental performance was observed when biomass was processed to dry sea vegetables, assuming the offset of land-based vegetable production, or used as biofeedstock for anaerobic digestion for combined heat and power.
8

Oppong, David, Worawan Panpipat, and Manat Chaijan. "Chemical, physical, and functional properties of Thai indigenous brown rice flours." PLOS ONE 16, no. 8 (August 3, 2021): e0255694. http://dx.doi.org/10.1371/journal.pone.0255694.

Abstract:
Thai indigenous brown rice flours from Nakhon Si Thammarat, Thailand, namely Khai Mod Rin (KMRF) and Noui Khuea (NKRF), were assessed for quality aspects in comparison with brown Jasmine rice flour (JMRF) and commercial rice flour (CMRF) from Chai Nat 1 variety. All the rice flours had different chemical composition, physical characteristic, and techno-functionality. The KMRF, NKRF, and JMRF were classified as a low amylose type (19.56–21.25% dw). All rice flours had low total extractable phenolic content (0.1–0.3 mg GAE/g dw) with some DPPH● scavenging activity (38.87–46.77%). The variations in the bulk density (1.36–1.83 g/cm3), water absorption capacity (0.71–1.17 g/g), solubility (6.93–13.67%), oil absorption capacity (1.39–2.49 g/g), and swelling power (5.71–6.84 g/g) were noticeable. The least gelation concentration ranged from 4.0 to 8.0% where KMRF was easier to form gel than JMRF, and NKRF/CMRF. The foam capacity of the flours was relatively low (1.30–2.60%). The pasting properties differed among rice flours and the lowest pasting temperature was observed in CMRF. Overall, the chemical, physical, functional, and pasting qualities of flours were substantially influenced by rice variety. The findings offered fundamental information on Thai indigenous rice flour that can be used in food preparations for specific uses.
9

Nemeslaki, András. "The Puzzle of ICT Driven Innovation in the Public Sector: Hungary's Case." Central and Eastern European eDem and eGov Days 331 (July 12, 2018): 151–65. http://dx.doi.org/10.24989/ocg.v331.13.

Abstract:
Public ICT (information and communication technologies) investments do not necessarily result in improvements in the effectiveness or efficiency of public services. Hungary has been spending around 1.2 billion euros, using funds from the European Social Cohesion and Structural Funds, during the period 2007-2018 to modernize its public administration. Taking investments in other sectors as a comparison, this means that more than 25% of ICT development projects go to the public sector, which is of the same magnitude as the financial, commercial and media sectors of Hungary. While the effects of digital transformation are unquestionable in these latter sectors, the effectiveness of public ICT spending is problematic. When we look at the measurement scoreboards used in the EU and UN, we find that Hungary has not improved its position and in some areas has lost competitiveness and fallen behind. In this paper we show, using some elements of earlier findings in digital innovation studies on public administration, that four key factors should be analysed in detail to find the reasons behind this phenomenon. Infrastructural questions, although they need constant development and improvement, do not seem to be key factors explaining the lack of productivity improvement. Nor do the techno-legislative institutions seem to be obstacles in Hungary's case; the obstacles lie rather in the alignment of policy objectives and in consistency.
10

Kurambhatti, Kumar, and Singh. "Impact of Fractionation Process on the Technical and Economic Viability of Corn Dry Grind Ethanol Process." Processes 7, no. 9 (September 1, 2019): 578. http://dx.doi.org/10.3390/pr7090578.

Abstract:
Use of corn fractionation techniques in dry grind process increases the number of coproducts, enhances their quality and value, generates feedstock for cellulosic ethanol production and potentially increases profitability of the dry grind process. The aim of this study is to develop process simulation models for eight different wet and dry corn fractionation techniques recovering germ, pericarp fiber and/or endosperm fiber, and evaluate their techno-economic feasibility at the commercial scale. Ethanol yields for plants processing 1113.11 MT corn/day were 37.2 to 40 million gal for wet fractionation and 37.3 to 31.3 million gal for dry fractionation, compared to 40.2 million gal for conventional dry grind process. Capital costs were higher for wet fractionation processes ($92.85 to $97.38 million) in comparison to conventional ($83.95 million) and dry fractionation ($83.35 to $84.91 million) processes. Due to high value of coproducts, ethanol production costs in most fractionation processes ($1.29 to $1.35/gal) were lower than conventional ($1.36/gal) process. Internal rate of return for most of the wet (6.88 to 8.58%) and dry fractionation (6.45 to 7.04%) processes was higher than the conventional (6.39%) process. Wet fractionation process designed for germ and pericarp fiber recovery was most profitable among the processes.
11

Portillo, E., Luz M. Gallego Fernández, M. Cano, B. Alonso-Fariñas, and B. Navarrete. "Techno-Economic Comparison of Integration Options for an Oxygen Transport Membrane Unit into a Coal Oxy-Fired Circulating Fluidized Bed Power Plant." Membranes 12, no. 12 (December 2, 2022): 1224. http://dx.doi.org/10.3390/membranes12121224.

Abstract:
The inclusion of membrane-based oxygen-fired combustion in power plants is considered an emerging technology that could reduce carbon emissions more efficiently than cryogenic oxygen-fired processes. In this paper, a techno-economic assessment was developed for an 863 MWel,net power plant to determine whether this CCS technique reduces efficiency losses and costs. Four configurations based on oxygen transport membranes were considered, while the benchmark cases were the air combustion process without CO2 capture and a cryogenic oxygen-fired process. The type of driving force through the membrane (3-end or 4-end), the point of integration into the oxy-fuel combustion process, the heating system, and the pollutant control system were aspects considered in this work. In comparison, the efficiency losses for the membrane-based alternatives were lower than those of the cryogenic oxygen-fired process, reaching savings of up to 14% in net efficiency. Regarding the specific energy consumption for CO2 capture, the configuration based on the oxygen transport membrane unit with 4-end mode and hot filtration presented 1.01 kWel,net·h/kgCO2 captured with 100% CO2 recovery, which is an improvement of 11% compared with the cases using cryogenic oxygen. Comparing economic aspects, the specific investment costs for cases based on the oxygen transport membrane unit varied between 2520 and 2942 $/kWel,net·h. This was between 39.6 and 48.2% above the investment for the reference case without carbon capture. However, its hypothetical implementation could represent a saving of 10.7% in investment cost compared with the cryogenic oxygen-based case. In terms of the levelized cost of electricity and the cost of CO2 avoidance, the oxygen transport membrane configurations achieved more favorable results than the cryogenic route, reaching savings of up to 14% and 38%, respectively. Although oxygen transport membrane units are not yet mature for commercial-scale applications, the results indicated that their application within carbon capture and storage technologies can be strongly competitive.
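The cost of CO2 avoidance cited in this abstract is conventionally computed against a reference plant without capture; the standard definition (notation mine, not the paper's exact formulation):

```latex
\mathrm{Cost\ of\ CO_2\ avoided} \;=\;
\frac{\mathrm{LCOE}_{\mathrm{capture}} - \mathrm{LCOE}_{\mathrm{ref}}}
     {e_{\mathrm{ref}} - e_{\mathrm{capture}}},
```

where LCOE is expressed in $/MWh, $e$ is the specific CO2 emission (tCO2/MWh) of the capture plant and of the no-capture reference plant, and the result is in $/tCO2 avoided.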
12

Shehadeh, Maha, Emily Kwok, Jason Owen, and Majid Bahrami. "Integrating Mobile Thermal Energy Storage (M-TES) in the City of Surrey’s District Energy Network: A Techno-Economic Analysis." Applied Sciences 11, no. 3 (January 30, 2021): 1279. http://dx.doi.org/10.3390/app11031279.

Abstract:
The City of Surrey in British Columbia, Canada, has recently launched a district energy network (DEN) to supply residential and commercial buildings in the Surrey Centre area with hot water for space heating and domestic hot water. The network runs on natural gas boilers and geothermal exchange. However, the City plans to transition to low-carbon energy sources and envisions the DEN as a key development in reaching its greenhouse gas (GHG) emissions reduction targets in the building sector. Harvesting and utilizing waste heat from industrial sites using mobile thermal energy storage (M-TES) is one of the attractive alternative energy sources that Surrey is considering. In this study, a techno-economic analysis (TEA) was conducted to determine the energy storage density (ESD) of the proposed M-TES technology, its costs, and the emission reduction potential of integrating waste heat into Surrey's DEN. Three transportation methods were considered to determine the most cost-effective and low-carbon option(s) for transferring heat from industrial waste heat locations at various distances (15 km, 30 km, 45 km) to district energy networks: (i) a diesel truck; (ii) a renewable natural gas-powered (RNG) truck; and (iii) an electric truck. To evaluate the effectiveness of M-TES, the cost of emission reduction ($/tCO2e avoided) is compared with business as usual (BAU), which uses a natural gas boiler only. Another comparison was made with other low-carbon energy sources that the city is considering, such as an RNG/biomass boiler, sewer heat recovery, an electric boiler, and solar thermal. The minimum system-level ESD required to make M-TES competitive with other low-carbon energy sources was 0.4 MJ/kg.
13

Nikulina, A. V., A. V. Arkadieva, and L. P. Bondareva. "Evaluation of the water-holding capacity of sweeteners." Proceedings of the Voronezh State University of Engineering Technologies 83, no. 4 (December 10, 2021): 269–73. http://dx.doi.org/10.20914/2310-1202-2021-4-269-273.

Abstract:
Water-retaining capacity is an essential property of the chemical components of food products, as it is one of the characteristics that determine the commercial properties of products. At the same time, there are practically no data on the hydrophilicity of sweeteners in the literature, and no specific criterion for assessing this property is given. Hydrophilicity is typically treated as a techno-functional property, i.e. it is assessed for a certain food product as a whole when replacing the classic sweetener with another, for example a cheaper one. In the literature, the isopiestic method has been used to assess the hydrophilicity of isomalt in comparison with sucrose. The article is devoted to choosing a parameter that can become a universal criterion for assessing the water-retaining capacity of sweeteners. The hydrophilicity of erythritol, cyclamate, glucose, sucrose, and sorbitol has been studied by the isopiestic method. The gravimetrically determined area S under the isopiestic curve was used as an analytical signal to assess the hydrophilicity of sweeteners. The correlations of S with various criteria for assessing the hydrophilicity (hydrophobicity) of sweeteners, such as the Davis and Griffin criteria, the simplified hydrophobicity criterion, the integral Gibbs energy, and the solubility of substances in water, are considered. Of all the criteria considered, the area under the isopiestic curves S correlated only with the integral Gibbs energy. The data obtained allow us to recommend the integral Gibbs energy for assessing the hydrophilicity and, consequently, the water-retaining capacity of sweeteners.
14

Butt, Rohan Zafar, Syed Ali Abbas Kazmi, Mohammed Alghassab, Zafar A. Khan, Abdullah Altamimi, Muhammad Imran, and Fahad F. Alruwaili. "Techno-Economic and Environmental Impact Analysis of Large-Scale Wind Farms Integration in Weak Transmission Grid from Mid-Career Repowering Perspective." Sustainability 14, no. 5 (February 22, 2022): 2507. http://dx.doi.org/10.3390/su14052507.

Abstract:
Repowering a wind farm enhances its ability to generate electricity, allowing it to better utilize areas with high mean wind speeds. Pakistan’s present energy dilemma is a serious impediment to its economic development. The usage of a diesel generator as a dependable backup power source raises the cost of energy per kWh and increases environmental emissions. To minimize environmental emissions, grid-connected wind farms enhance the percentage of wind energy in the electricity system. These wind generators’ effects, on the other hand, are augmented by the absorption of greater quantities of reactive electricity from the grid. According to respective grid codes, integration of commercial onshore Large-Scale Wind Farms (LSWF) into a national grid is fraught with technical problems and inter-farm wake effects, which primarily ensure power quality while degrading overall system operation and limiting the optimal use of attainable wind resources. The goal of this study is to examine and estimate the techno-economic influence of large-scale wind farms linked to poor transmission systems in Pakistan, contemplating the inter-farm wake effect and reactive power diminution and compensating using a range of voltage-ampere reactive (VAR) devices. This study presents a partial repowering technique to address active power deficits produced by the wake effect by raising hub height by 20 m, which contributed to recovering the active power deficit to 48% and so reduced the effects of upstream wind farms. Simulations were conducted for several scenarios on an actual test system modeled in MATLAB for comparative study using capacitor banks and different flexible alternating current transmission system (FACTS) devices. Using the SAM (System Advisor Model) and RETscreen, a complete technical, economic, and environmental study was done based on energy fed into the grid, payback time, net present value (NPV), and greenhouse gases (GHG) emission reduction. The studies suggest that the unified power flow controller (UPFC) is the optimum compensating device via comparison analysis as it improved the power handling capabilities of the power system. Our best-case scenario includes UPFC with hub height augmentation, demonstrating that it is technically, fiscally, and environmentally viable. Over the course of its lifespan, the planned system has the potential to save 1,011,957 tCO2, resulting in a greener environment. When the energy generated annually by a current wake-affected system is compared to our best-recommended scenario, a recovered shortfall of 4.851% is seen, with improved system stability. This modest investment in repowering boosts energy production due to wake effects, resulting in increased NPV, revenue, and fewer CO2 footprints.
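To make the repowering logic above concrete, a common first-order estimate of what a 20 m hub-height increase buys combines a power-law wind shear profile with the cubic dependence of available wind power on speed. The sketch below is not the paper's SAM/RETScreen workflow; the shear exponent and wind speeds are hypothetical placeholders.

```python
# Back-of-the-envelope sketch (not the paper's methodology):
# first-order effect of raising hub height, using the power-law shear profile
# v(h2) = v(h1) * (h2/h1)**alpha and P proportional to v**3 below rated wind speed.
# All numbers are hypothetical.

def wind_speed_at(h2, v1, h1, alpha=1.0 / 7.0):
    """Power-law extrapolation of mean wind speed from height h1 to h2 (alpha is site-specific)."""
    return v1 * (h2 / h1) ** alpha

v_80 = 7.0                                   # hypothetical mean wind speed at an 80 m hub [m/s]
v_100 = wind_speed_at(100, v_80, 80)         # extrapolated speed after a +20 m hub-height increase
power_gain = (v_100 / v_80) ** 3 - 1.0       # relative gain in available wind power
print(f"Mean speed 80 m -> 100 m: {v_80:.2f} -> {v_100:.2f} m/s, "
      f"available power +{power_gain:.1%}")
```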
15

Bagde, Vandana, and Dethe C. G. "Performance improvement of space diversity technique using space time block coding for time varying channels in wireless environment." International Journal of Intelligent Unmanned Systems 10, no. 2/3 (June 8, 2020): 278–86. http://dx.doi.org/10.1108/ijius-04-2019-0026.

Abstract:
Purpose: A recent innovative technology used in wireless communication is the multiple-input multiple-output (MIMO) communication system, which has become popular for its faster data transmission speed. This technology is being examined and implemented for the latest broadband wireless connectivity networks. Though a high-capacity wireless channel has been identified, there is still a need for better techniques to obtain increased data transmission speed with acceptable reliability. There are two types of systems comprising multiple antennas placed at the transmitting and receiving sides: the first is the diversity technique and the other is the spatial multiplexing method. By making use of diversity techniques, the reliability of the transmitted signal can be improved. The fundamental idea of diversity is to transform a wireless channel such as a Rayleigh fading channel into a steady additive white Gaussian noise (AWGN) channel which is devoid of any disastrous fading of the signal. The maximum transmission speed that can be achieved by spatial multiplexing methods is nearly equal to the channel capacity of MIMO. Conversely, for diversity methods, the maximum broadcasting speed is much lower than the channel capacity of MIMO. With the advent of the space–time block coding (STBC) antenna diversity technique, higher-speed data transmission is achievable for spatially multiplexed multiple-input multiple-output (SM-MIMO) systems. At the receiving end, detection of the signal is a complex task for a system which exhibits SM-MIMO. Additionally, a link adaptation method is implemented to decide the appropriate coding and modulation scheme, such as the space diversity technique STBC, to use two-way radio resources efficiently. The proposed work attempts to improve detection of the signal at the receiving end by employing the STBC diversity technique for linear detection methods such as zero forcing (ZF), minimum mean square error (MMSE), ordered successive interference cancellation (OSIC) and maximum likelihood detection (MLD). The performance of MLD has been found to be better than the other detection techniques.

Design/methodology/approach: Alamouti's STBC uses two transmit antennas regardless of the number of receiver antennas. The encoding and decoding operation of STBC is shown in the earlier cited diagram. In the following matrix, the rows of each coding scheme represent a different time instant, while the columns represent the symbols transmitted through each different antenna. In this case, the first and second rows represent the transmission at the first and second time instant, respectively. At time t, symbol s1 and symbol s2 are transmitted from antenna 1 and antenna 2, respectively. Assuming that each symbol has duration T, then at time t + T the symbols –s2* and s1*, where (.)* denotes the complex conjugate, are transmitted from antenna 1 and antenna 2, respectively. Case of one receiver antenna: the reception and decoding of the signal depend on the number of receiver antennas available. For the case of one receiver antenna, the signals are received at antenna 1, where hij is the channel transfer function from the jth transmit antenna to the ith receiver antenna, n1 is a complex random variable representing the noise at antenna 1, and x(k) denotes x at time instant k (i.e. at time t + (k – 1)T).

Findings: The results obtained for maximal ratio combining (MRC) with the 1 × 4 scheme show that the BER curve drops to 10⁻⁴ at a signal-to-noise ratio (SNR) of 10 dB, whereas for the MRC 1 × 2 scheme, the BER drops to 10⁻⁵ at an SNR of 20 dB. Results obtained in Table 1 show that when STBC is employed for MRC with the 1 × 2 scheme (one antenna at the transmitter node and two antennas at the receiver node), the BER curve comes down to 0.0076 for Eb/N0 of 12. Similarly, when MRC with the 1 × 4 antenna scheme is implemented, the BER drops to 0 for Eb/N0 of 12. Thus, it can be concluded from the obtained graph that the performance of MRC with STBC gives improved results. When the STBC technique is used with the 3 × 4 scheme, at an SNR of 10 dB, the BER comes close to 10⁻⁶ (figure 7.3). It can be concluded from the comparison between the AWGN and Rayleigh fading channels that for the AWGN channel the BER is equal to 0 at an SNR value of 13.5 dB, whereas for the Rayleigh fading channel the BER is observed near 10⁻³ for Eb/N0 = 15. Simulation results (in figure 7.2) show the BER drops to 0 at an SNR value of 12 dB.

Research limitations/implications: Optimal design and successful deployment of high-performance wireless networks present a number of technical challenges. These include regulatory limits on useable radio-frequency spectrum and a complex time-varying propagation environment affected by fading and multipath. The effect of multipath fading in wireless systems can be reduced by using antenna diversity. Previous studies show the performance of transmit diversity with narrowband signals using linear equalization, decision feedback equalization and maximum likelihood sequence estimation (MLSE), and with spread spectrum signals using a RAKE receiver. The available IC techniques compatible with STBC schemes at transmission require multiple antennas at the receiver. However, while this is not a strong constraint at the base station level, it remains a challenge at the handset level due to cost and size limitations. For this reason, the SAIC technique, an alternative to the complex ML multiuser demodulation technique, is still of interest for 4G wireless networks using MIMO technology and STBC in particular. In a system with characteristics similar to the North American digital mobile radio standard IS-54 (24.3 ksymbols per second with an 81 Hz fading rate), adaptive retransmission with time deviation is not practical.

Practical implications: Performance is evaluated in terms of bit error rate and convergence time, which shows that the MLD technique outperforms the others in terms of received SNR and low decoding complexity. The MLD technique performs well, but when a higher number of antennas is used it requires more computational time, resulting in increased hardware complexity. When the MRC scheme is implemented for a single-input single-output (SISO) system, the BER drops to 10⁻² at an SNR of 20 dB. Therefore, when MIMO systems are employed for the MRC scheme, improved results in terms of BER versus SNR are obtained and used for detecting the signal; a comparative study based on different techniques is done. Initially the ZF detection method is utilized, which is then modified to ZF with successive interference cancellation (ZFSIC). When the successive interference cancellation scheme is employed for ZFSIC, better performance is observed compared to the estimation of ML and MMSE. For the 2 × 2 scheme with the QPSK modulation method, ZFSIC requires more computational time compared to the ZF, MMSE and ML techniques. From the obtained results, the conclusion is that ZFSIC gives improved results compared to ZF in terms of BER. ZF-based decision statistics can be produced by the detection algorithm for a desired sub-stream from the received vector, which contains interference from previously transmitted sub-streams. Consequently, a decision on the secondary stream is made, and the contribution of the noise is regenerated and subtracted from the received vector. With no interference cancellation involved, system performance is reduced but computational cost is saved. While using cancellation, as H is deflated, the MMSE coefficients are recalculated at each iteration. When cancellation is not involved, the computation of the MMSE coefficients is done only once, because H remains unchanged. For the MMSE 4 × 4 BPSK scheme, a bit error rate of 10⁻² at 30 dB is observed. In general, the most demanding part of the detection algorithm is the computation of the MMSE coefficients. Complexity arises in the calculation of the MMSE coefficients when the number of antennas at the transmitting side is increased. However, when implementing adaptive MMSE receivers on slowly fading channels, it is possible to recover the signal with complexity that is linear in the number of transmit antennas. The performance of MMSE and successive interference cancellation with MMSE is observed for 2 × 2 and 4 × 4 BPSK and QPSK modulation schemes. The drawback of the MMSE SIC scheme is that the first detected signal observes the noise and interference from (NT − 1) signals, while signals processed later observe less noisy interference as the cancellation progresses. This difficulty can be overcome by using the OSIC detection method, which orders the processed layers by decreasing signal power, or by allocating power to the transmitted signals depending on the order of processing. By using the successive scheme, a computation of NT delay stages is needed to carry out the cancellation process. The work also includes a comparison of BER for various modulation schemes and numbers of antennas while evaluating the performance. MLD determines the Euclidean distance between the received signal vector and all possible transmitted signal vectors for the specified channel H and finds the one with the minimum distance. The estimated results show that a higher order of diversity is obtained by employing more antennas at both the receiving and transmitting ends. MLD with the 8 × 8 binary phase shift keying (BPSK) scheme offers a bit error rate near 10⁻⁴ at an SNR of 16 dB. By using Alamouti space–time…

Social implications: It should come as no surprise that companies everywhere are pushing to get products to market faster. Missing a market window or a design cycle can be a major setback in a competitive environment. It should be equally clear that this pressure is coming at the same time that companies are pushing towards "leaner" organizations that can do more with less. The trends mentioned earlier are not well supported by current test and measurement equipment, given this increasingly high-pressure design environment: in order to measure signals across multiple domains, multiple pieces of measurement equipment are needed, increasing capital or rental expenses. The methods available for making cross-domain, time-correlated measurements are inefficient, reducing engineering efficiency. When equipment is only used on occasion, the learning curve for logic analysis, time domain and RF spectrum measurements often requires an operator to re-learn each separate piece of equipment. The equipment needed to measure wide-bandwidth, time-varying spectral signals is expensive, again increasing capital or rental expenses. What is needed is a measurement instrument with a common user interface that integrates multiple measurement capabilities into a single cost-effective tool that can efficiently measure signals in the current wide-bandwidth, time-correlated, cross-domain environments. The market for wireless communication using STBCs has large scope for expansion in India. Therefore, the proposed work has techno-commercial potential and the product can be patented. This project shall in turn be helpful for remote areas of the nearby region, particularly the Gadchiroli district, the Melghat Tiger Reserve project of Amravati district, Nagjira and so on, where electricity is not available and there is a constant problem of network coverage. In some regions where electricity is available, the shortage is such that it cannot be used during peak hours. In such cases, the stand-alone space diversity technique STBC shall help them meet their requirements by making connections despite coverage problems, thereby giving higher data transmission rates with better quality of service (QoS) and the fewest dropped connections. This trend towards wireless everywhere is causing a profound change in the responsibilities of embedded designers as they struggle to incorporate unfamiliar RF technology into their designs. Embedded designers frequently find themselves needing to solve problems without the proper equipment needed to perform the tasks.

Originality/value: The work is original.
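Since the diagram cited in the design section is not reproduced here, the Alamouti encoding and single-receive-antenna combining it describes can be summarized with the standard textbook formulation (assuming the channel gains h1 and h2 stay constant over two symbol periods); this is a generic statement of the scheme, not an equation taken from the paper:

```latex
\mathbf{S} =
\begin{pmatrix}
 s_1 & s_2 \\
 -s_2^{*} & s_1^{*}
\end{pmatrix},
\qquad
\begin{aligned}
 r_1 &= h_1 s_1 + h_2 s_2 + n_1,\\
 r_2 &= -h_1 s_2^{*} + h_2 s_1^{*} + n_2,
\end{aligned}
\qquad
\begin{aligned}
 \tilde{s}_1 &= h_1^{*} r_1 + h_2 r_2^{*} = \bigl(|h_1|^2 + |h_2|^2\bigr)\,s_1 + h_1^{*} n_1 + h_2 n_2^{*},\\
 \tilde{s}_2 &= h_2^{*} r_1 - h_1 r_2^{*} = \bigl(|h_1|^2 + |h_2|^2\bigr)\,s_2 + h_2^{*} n_1 - h_1 n_2^{*},
\end{aligned}
```

where the rows of $\mathbf{S}$ are the two symbol periods, the columns are the two transmit antennas, and the combined statistics $\tilde{s}_1$, $\tilde{s}_2$ are passed to the maximum-likelihood detector.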
16

Roles, John, Jennifer Yarnold, Karen Hussey, and Ben Hankamer. "Techno-economic evaluation of microalgae high-density liquid fuel production at 12 international locations." Biotechnology for Biofuels 14, no. 1 (June 7, 2021). http://dx.doi.org/10.1186/s13068-021-01972-4.

Abstract:
Background: Microalgae-based high-density fuels offer an efficient and environmental pathway towards decarbonization of the transport sector and could be produced as part of a globally distributed network without competing with food systems for arable land. Variations in climatic and economic conditions significantly impact the economic feasibility and productivity of such fuel systems, requiring harmonized techno-economic assessments to identify important conditions required for commercial scale-up.

Methods: Here, our previously validated Techno-economic and Lifecycle Analysis (TELCA) platform was extended to provide a direct performance comparison of microalgae diesel production at 12 international locations with variable climatic and economic settings. For each location, historical weather data and jurisdiction-specific policy and economic inputs were used to simulate algal productivity, evaporation rates, harvest regime, CapEx and OpEx, interest and tax under location-specific operational parameters optimized for Minimum Diesel Selling Price (MDSP, US$ L⁻¹). The economic feasibility, production capacity and CO2-eq emissions of a defined 500 ha algae-based diesel production facility are reported for each.

Results: Under a for-profit business model, 10 of the 12 locations achieved a minimum diesel selling price (MDSP) under US$ 1.85 L⁻¹ / US$ 6.99 gal⁻¹. At a fixed theoretical MDSP of US$ 2 L⁻¹ (US$ 7.57 gal⁻¹) these locations could achieve a profitable Internal Rate of Return (IRR) of 9.5–22.1%. Under a public utility model (0% profit, 0% tax), eight locations delivered cost-competitive renewable diesel at an MDSP of < US$ 1.24 L⁻¹ (US$ 4.69 gal⁻¹). The CO2-eq emissions of microalgae diesel were about one-third of those of fossil-based diesel.

Conclusions: The public utility approach could reduce the fuel price toward cost-competitiveness, providing a key step on the path to a profitable, fully commercial renewable fuel industry by attracting the investment needed to advance technology and commercial biorefinery co-production options. Governments' adoption of such an approach could accelerate decarbonization, improve fuel security, and help support a local COVID-19 economic recovery. This study highlights the benefits and limitations of different factors at each location (e.g., climate, labour costs, policy, C-credits) in terms of the development of the technology, providing insights on how governments, investors and industry can drive the technology forward.
17

Chaijan, M., and W. Panpipat. "Techno-biofunctional aspect of seasoning powder from farm-raised sago palm weevil (Rhynchophorus ferrugineus) larvae." Journal of Insects as Food and Feed, October 30, 2020, 1–10. http://dx.doi.org/10.3920/jiff2020.0025.

Abstract:
This study aimed at characterising the techno-biofunctional aspect of seasoning powder made from sago palm weevil larvae (SP) in comparison with commercial products prepared from pork (CP) and chicken (CC). SP had a comparable moisture and water activity with CP and CC, following the Thai Community Product Standards. SP had higher protein, fat, carbohydrate, calcium, magnesium and potassium with lower ash and sodium (P<0.05). All samples had the same Fourier transform infrared spectra with different peak intensities. SP was darker (lower L*, higher a* and b*, and lower whiteness) than CP and CC. Different content and polarity of the intermediate (A285) and final (A420) products of the Maillard reaction was found. A420 of the aqueous extract was distinctly higher than the acetone extract in all samples, suggesting the predominance of water soluble brown pigments. The highest total phenolic content and DPPH• inhibition was found in SP (P<0.05). The bulk density of SP was lower than CP and CC, which consequently affected the wettability. SP needed more time to become wet (P<0.05). The soup made from SP had the highest initial turbidity (P<0.05). All sensory aspects of SP were similar to CP and CC. Thus, SP can be categorised as an alternative functional food ingredient.
18

Kennedy, Ben, Simone Giorgi, Adrian Senar, and Ronan Costello. "SEASNAKE: Impact - Marine operations modelling for evidence-based results detailing the impact of using a new fully dynamic cable design for ocean energy devices." Proceedings of the European Wave and Tidal Energy Conference 15 (September 2, 2023). http://dx.doi.org/10.36688/ewtec-2023-535.

Abstract:
The SEASNAKE project, an OceanERA-NET project, aims to develop fully dynamic cables for ocean energy. Through new design and the application of novel coatings, the project is developing a dynamic cable that better suits the operating conditions and user requirements. In order to fully understand the benefits, the Wave Venture TEMPEST™ techno-economic analysis software will be used to simulate the performance of a proposed wave energy farm with a special focus on the contribution of the dynamic cable subsystem. The results obtained from the simulations will not only provide a deep understanding of the reliability and cost-risk areas regarding the use of dynamic cables, but will also allow a comparison with a baseline scenario consisting of a more traditional cabling system. The latter will be key to identifying the advantages of the SEASNAKE project and its commercial viability. KPIs (key performance indicators) will include farm availability and power production downtime as a direct result of cable-related issues. The reliability, maintainability and survivability of the cabling subsystem will be tested, and the KPIs will give a clear indication of the performance benefits; ultimately, the impact on the LCOE will be of greatest consideration.
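The farm-availability KPI mentioned in this abstract is often summarized with a simple steady-state bookkeeping formula. The sketch below illustrates that bookkeeping only; it is not Wave Venture TEMPEST output, and every figure in it is hypothetical.

```python
# Illustrative only: steady-state availability from mean time between failures (MTBF)
# and mean time to repair (MTTR). All figures below are made-up placeholders.

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the cable subsystem is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

hours_per_year = 8760
a_dynamic_cable = availability(mtbf_hours=4 * hours_per_year, mttr_hours=720)    # hypothetical
a_baseline_cable = availability(mtbf_hours=2 * hours_per_year, mttr_hours=1440)  # hypothetical
downtime_saved = (a_dynamic_cable - a_baseline_cable) * hours_per_year
print(f"Availability: {a_dynamic_cable:.3f} vs {a_baseline_cable:.3f}; "
      f"~{downtime_saved:.0f} fewer downtime hours per year")
```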
19

Strickroth, A., M. Schumacher, G. W. Hasse, and I. Kgomo. "Next-generation, affordable SO2 abatement for coal-fired power generation - A comparison of limestone-based wet flue gas desulphurization and Sulfacid® technologies for Medupi power station." Journal of the Southern African Institute of Mining and Metallurgy 120, no. 10 (2020). http://dx.doi.org/10.17159/2411-9717/1252/2020.

Abstract:
SYNOPSIS Coal is used to generate more than three-quarters of South Africa's electricity, while numerous coal-fired boilers are employed for steam generation in industrial processes. However, coal-fired power generation is responsible for the release of the largest quantities of SO2 emissions to the atmosphere and leads to detrimental health and welfare effects in communities in the proximity of coal-fired plants. The classical industrial SO2 abatement solution for the coal-fired power generation industry is wet flue gas desulphurization, which uses a limestone adsorbent and produces a gypsum by-product (WFGD L/G). In South Africa, due to the poor quality of the limestone the gypsum product is unsaleable and is co-disposed with coal ash. In comparison, the Sulfacid® process technology converts SO2 contained in industrial flue gas into saleable sulphuric acid using a catalytic process requiring only water and air. This process does not require limestone. The scale of the latest commercial applications of the Sulfacid® SO2 abatement technology in the chemical, fertilizer, and copper mining industries demonstrates the potential and readiness of this technology to be employed in the coal-fired electricity and steam production sectors. This paper provides a first-order direct comparison between the techno-economic aspects of the WFGD (L/G) and Sulfacid® technologies using the requirements specified for the 6 x 800 MWe Eskom coal-fired Medupi power station. The results indicate that affordable flue gas desulphurization technology exists that could be adopted by the South African industry to reduce SO2 emissions to legislative limits and beyond. Keywords: SO2 abatement, coal-fired power, and heat generation, sulphuric acid, wet fluidized gas desulphurization, Sulfacid®, waste-to-chemicals.
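For readers comparing the two routes described above, the simplified overall reactions are standard textbook chemistry rather than equations taken from the paper: limestone-based wet FGD with forced oxidation yields gypsum, while the catalytic wet route converts SO2 directly into sulphuric acid.

```latex
\mathrm{SO_2} + \mathrm{CaCO_3} + \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H_2O}
 \;\longrightarrow\; \mathrm{CaSO_4}\!\cdot\!2\mathrm{H_2O} + \mathrm{CO_2}
 \quad \text{(limestone WFGD, gypsum by-product)}

\mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O}
 \;\longrightarrow\; \mathrm{H_2SO_4}
 \quad \text{(catalytic wet conversion to sulphuric acid)}
```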
20

Wu, Wenzhao, Kirti M. Yenkie, and Christos T. Maravelias. "Synthesis and analysis of separation processes for extracellular chemicals generated from microbial conversions." BMC Chemical Engineering 1, no. 1 (October 28, 2019). http://dx.doi.org/10.1186/s42480-019-0022-8.

Abstract:
Recent advances in metabolic engineering have enabled the production of chemicals via bio-conversion using microbes. However, downstream separation accounts for 60–80% of the total production cost in many cases. Previous work on microbial production of extracellular chemicals has been mainly restricted to microbiology, biochemistry, metabolomics, or techno-economic analysis for specific product examples such as succinic acid, xanthan gum, lycopene, etc. In these studies, microbial production and separation technologies were selected a priori without considering any competing alternatives. However, technology selection in downstream separation and purification processes can have a major impact on the overall costs, product recovery, and purity. To this end, we apply a superstructure optimization based framework that enables the identification of critical technologies and their associated parameters in the synthesis and analysis of separation processes for extracellular chemicals generated from microbial conversions. We divide extracellular chemicals into three categories based on their physical properties, such as water solubility, physical state, relative density, volatility, etc. We analyze three major extracellular product categories (insoluble light, insoluble heavy and soluble) in detail and provide suggestions for additional product categories through extension of our analysis framework. The proposed analysis and results provide significant insights for technology selection and enable streamlined decision making when faced with any microbial product that is released extracellularly. The parameter variability analysis for the product as well as the associated technologies, and the comparison with novel alternatives, is a key feature which forms the basis for designing better bioseparation strategies that have potential for commercial scalability and can compete with traditional chemical production methods.
21

Ruiz-Minguela, Pablo, Jesús María Blanco, and Vincenzo Nava. "Successful innovation strategies to overcome the technical challenges in the development of wave energy technologies." Proceedings of the European Wave and Tidal Energy Conference 15 (September 2, 2023). http://dx.doi.org/10.36688/ewtec-2023-144.

Abstract:
Despite the considerable efforts the international research community has made over the last decades, wave energy technologies have failed to achieve the desired design convergence to support their future market growth. Many technical challenges remain unresolved, leading to high costs of energy in comparison with other renewable energy sources. It becomes apparent that incremental innovation alone cannot fill the gap between the current techno-economic estimates and the medium-term policy targets established for wave energy. A systematic problem-solving approach must be embedded from the outset of technology development to meet the high sector expectations. This approach should support the engineering design processes, facilitate traceability of engineering analysis, and provide practical tools for understanding the wave energy context, formalising wave energy system requirements, guiding techno-economic design decisions, and overcoming technical challenges. Systems Engineering methods have been successfully applied to developing complex commercial products in many sectors. Among the many tools developed in Systems Engineering, it is worthwhile mentioning two structured innovation techniques: Quality Function Deployment (QFD) for problem formulation and selection [1]; and the Theory of Inventive Problem Solving (TRIZ) for concept generation [2]. Unfortunately, their use in wave energy is still limited and fragmented. Taking as a starting point the technology-agnostic assessment of wave energy capabilities performed in previous research work [3] for the problem formulation and concept selection, the authors have applied QFD to obtain the prioritisation of the technical characteristics that may offer the greatest impact to the overall design for a wave energy system. The main Functional Requirements are mapped to an equal number of Design Parameters extracted from the 39 technical parameters provided by TRIZ. The TRIZ toolkit is then employed to suggest three alternative innovation strategies to overcome wave energy cost and performance limitations. Firstly, separation principles are used to deal with physical contradictions. Examples of potentially effective strategies involving separation in time, space, scale or condition are proposed. Subsequently, inventive principles are employed to solve technical contradictions and trade-offs. The four most promising inventive principles that have been found in this implementation are "Local quality", "Dynamism", "Pneumatics or hydraulics", and "Physical or chemical properties". These principles prompt the user to consider a broader range of alternatives and improve creative thinking. Additional examples are given on how these inventive principles could be applied in wave energy. Finally, a system transition strategy is needed for the most complex challenges. Bypassing contradictory demands involves more radical changes in the functional allocation of requirements to the physical embodiment. Therefore, such a significant pivot in wave energy design can only be made in the initial phases of technology development. [1] S. Mizuno, Y. Akao, and K. Ishihara, Eds., QFD, the customer-driven approach to quality planning and deployment. Tokyo, Japan: Asian Productivity Organization, 1994. [2] K. Gadd, TRIZ for Engineers: Enabling Inventive Problem Solving, 1st ed. Wiley, 2011. doi: 10.1002/9780470684320. [3] P. Ruiz-Minguela, J. M. Blanco, V. Nava, and H. Jeffrey, ‘Technology-Agnostic Assessment of Wave Energy System Capabilities’, Energies, vol. 15, no. 
7, p. 2624, Apr. 2022, doi: 10.3390/en15072624.
22

Bastos, Paula, Fiona Devoy-McAuliffe, Abel Arredondo-Galeana, Julia Fernández Chozas, Paul Lamont-Kane, and Pedro Almeida Vinagre. "Life Cycle Assessment of a wave energy device – LiftWEC." Proceedings of the European Wave and Tidal Energy Conference 15 (September 2, 2023). http://dx.doi.org/10.36688/ewtec-2023-377.

Abstract:
The need to move towards a low-carbon economy has brought about the emergence of various renewable energy sectors, including Marine Renewable Energy (MRE). However, after many years of research and development, the MRE industry still faces challenges in achieving commercial viability, especially regarding wave energy. Whilst it remains possible that successful wave energy technologies exist in the traditional research trend, it is also appropriate to explore alternatives that produce energy by different approaches. Wave-induced lift force devices may be the possibility to move beyond traditional wave energy technologies using diffraction and/or buoyancy forces. In this context arises the LiftWEC, a promising configuration of a lift-based wave energy converter. The LiftWEC device couples with the waves through lift forces generated by two hydrofoils that rotate in a single direction aligned orthogonally to the direction of wave propagation. To fully evaluate the overall advantages of this new technology, it is necessary to go beyond the techno-economic performance and reliability. While capable of producing electricity from clean sources, MRE devices are not entirely environmentally friendly, since energy is consumed and pollutants are emitted during their various life cycle stages. Accordingly, as the MRE sector expands, it is important to ensure that the technologies prove to be sustainable alternatives in terms of their environmental impact. Life Cycle Assessment (LCA) is a widely recognized methodology to evaluate environmental impacts by considering the technology’s performance over its life cycle. This methodology complies with international standards ISO 14040, which specify the general framework, principles, and requirements for conducting and reporting this type of assessment. A “cradle to grave” LCA assessment was applied to the LiftWEC device to evaluate the potential cumulative environmental impacts of the system, complete from the extraction of raw materials until decommissioning. Each stage was analysed within the defined system boundaries, and data on the energy, materials, emissions, and waste products associated were gathered. To allow comparison with other MRE technologies and traditional means of electricity generation, carbon dioxide equivalent emissions per produced electricity (gCO2eq/kWh) were calculated for the study. Since ocean energy is broadly considered to contribute to a low-carbon energy system, special attention was given to the LCA results on the global warming potential (GWP). Besides a set of 18 impact categories, Cumulative Energy Demand (CED) and Carbon and Energy payback time (CPT and EPT, respectively) were also analysed. The CPT and EPT are important indicators that measure the time required to offset the carbon emission and demanded energy, respectively, accounted across all development phases of the device. This work included the comparison of LCA findings for other MRE devices reported in the literature to validate the viability of the LiftWEC in terms of carbon and energy footprint. In addition, the assessment analysed alternative materials, locations, and recyclability allowing the identification of potential improvement opportunities regarding the reduction of environmental impacts.
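The carbon and energy payback times cited in this abstract are usually defined as ratios of life-cycle totals to annual flows; a generic formulation (notation mine, not taken from the paper):

```latex
\mathrm{EPT} \;=\; \frac{\text{life-cycle cumulative energy demand (CED)}}{\text{net energy delivered per year}},
\qquad
\mathrm{CPT} \;=\; \frac{\text{life-cycle CO}_2\text{-eq emissions}}{\text{CO}_2\text{-eq emissions displaced per year}},
```

where the displaced emissions in the CPT denominator are those of the electricity mix that the device's annual generation substitutes.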
APA, Harvard, Vancouver, ISO, and other styles
23

Tola, Vittorio, Giorgio Cau, Francesca Ferrara, and Alberto Pettinau. "CO2 Emissions Reduction From Coal-Fired Power Generation: A Techno-Economic Comparison." Journal of Energy Resources Technology 138, no. 6 (September 14, 2016). http://dx.doi.org/10.1115/1.4034547.

Full text
Abstract:
Carbon capture and storage (CCS) represents a key solution for controlling global warming by reducing carbon dioxide emissions from coal-fired power plants. This study reports a comparative performance assessment of different power generation technologies, including an ultrasupercritical (USC) pulverized coal combustion plant with postcombustion CO2 capture, an integrated gasification combined cycle (IGCC) with precombustion CO2 capture, and an oxy-coal combustion (OCC) unit. These three power plants have been studied both in a traditional configuration, without CCS, and in a more complex configuration with CO2 capture. The technologies (with and without CCS systems) have been compared from both the technical and economic points of view, considering a reference thermal input of 1000 MW. For CO2 storage, sequestration in saline aquifers has been considered. Whereas a conventional (without CCS) coal-fired USC power plant proves more suitable than IGCC for power generation, IGCC becomes more competitive for CO2-free plants, since the precombustion CO2 capture system carries a smaller energy penalty than the postcombustion one. In this scenario, the oxy-coal combustion plant is currently not competitive with USC and IGCC, owing to limited industrial experience, which translates into higher capital and operating costs and lower plant operating reliability. In the near term, however, a progressive diffusion of commercial-scale OCC plants should allow a reduction of capital costs and an improvement of the technology, with higher efficiency and reliability. On this basis, OCC promises to become competitive with USC and also with IGCC.
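The abstract does not reproduce its cost metrics, but comparisons of this kind are usually summarised with the cost of CO2 avoided. The sketch below is a generic illustration of that metric; the plant labels mirror those in the abstract, while every cost and emission figure is an assumed placeholder rather than a result of the paper.

```python
# Illustrative sketch (placeholder values, not the paper's results): the
# standard "cost of CO2 avoided" metric used when comparing capture options
# such as post-combustion USC, pre-combustion IGCC and oxy-coal combustion.

def cost_of_co2_avoided(lcoe_ccs: float, lcoe_ref: float,
                        emissions_ccs: float, emissions_ref: float) -> float:
    """
    lcoe_*      : levelised cost of electricity, $/MWh
    emissions_* : specific CO2 emissions, t CO2/MWh
    returns     : $/t CO2 avoided relative to the reference plant without CCS
    """
    return (lcoe_ccs - lcoe_ref) / (emissions_ref - emissions_ccs)

# Hypothetical figures for a 1000 MW thermal input class comparison.
plants = {
    "USC + post-combustion capture": dict(lcoe=105.0, em=0.10),
    "IGCC + pre-combustion capture": dict(lcoe=112.0, em=0.11),
    "Oxy-coal combustion":           dict(lcoe=118.0, em=0.09),
}
reference = dict(lcoe=65.0, em=0.77)  # conventional USC without CCS (assumed)

for name, plant in plants.items():
    avoided = cost_of_co2_avoided(plant["lcoe"], reference["lcoe"],
                                  plant["em"], reference["em"])
    print(f"{name}: {avoided:.0f} $/t CO2 avoided")
```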
APA, Harvard, Vancouver, ISO, and other styles
24

Ghilardi, Alessandra, Guido Francesco Frate, Andrea Baccioli, Dario Ulivieri, Lorenzo Ferrari, Umberto Desideri, Lorenzo Cosi, Simone Amidei, and Vittorio Michelassi. "Techno-Economic Comparison of Several Technologies for Waste Heat Recovery of Gas Turbine Exhausts." Journal of Engineering for Gas Turbines and Power, October 5, 2022. http://dx.doi.org/10.1115/1.4055872.

Full text
Abstract:
Waste heat recovery from the gas turbine (GT) exhaust is a typical means of increasing performance and reducing CO2 emissions in industrial facilities. Nowadays, numerous already operating gas turbine plants could be retrofitted and upgraded with a bottoming cycle powered by the exhaust gases. In this case, the standard solution would be a water Steam Rankine Cycle. However, even if this technology usually yields the best efficiency, other alternatives are often preferred at smaller scales. Organic Rankine Cycles (ORCs) are the commercial alternative to Steam Rankine Cycles, but many other cycles exist or can be developed, with potential benefits from safety, technical or economic points of view. This study compares several alternative technologies suited to recovering gas turbine waste heat, and a detailed cost analysis for each is presented. On this basis, a guideline is proposed for the choice of technology across a wide range of application sizes and temperature levels typical of waste heat recovery from GTs. The compared technologies are ORCs, Rankine Cycles (RCs) with water and ammonia mixtures at constant composition, supercritical CO2 cycles (sCO2), and sCO2 cycles with mixtures of CO2 and other gases. The results show that ORCs can achieve the lowest levelised cost of energy (LCOE, 32-46 $/MWh) if flammable fluids can be employed. Otherwise, Rankine cycles with a constant-composition mixture of water and ammonia are a promising alternative, reaching an LCOE of 36-58 $/MWh.
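As a rough illustration of how a levelised cost of energy in $/MWh is obtained for a bottoming cycle whose "fuel" (waste heat) is free, the sketch below annualises the capital cost with a capital recovery factor. The discount rate, lifetime, capex and output figures are assumptions for a hypothetical 10 MW unit, not values from the study's own, far more detailed, cost model.

```python
# Minimal LCOE sketch under assumed costs: estimating a levelised cost of
# energy for a bottoming cycle that recovers gas turbine exhaust heat.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualises an upfront investment at a given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex: float, fixed_opex_per_year: float,
         net_power_mw: float, capacity_factor: float,
         rate: float = 0.08, years: int = 20) -> float:
    """Levelised cost of energy, $/MWh; no fuel cost since the heat is waste."""
    annual_mwh = net_power_mw * 8760 * capacity_factor
    annual_cost = capex * capital_recovery_factor(rate, years) + fixed_opex_per_year
    return annual_cost / annual_mwh

# Hypothetical 10 MW ORC bottoming unit on a gas turbine exhaust.
value = lcoe(capex=25e6, fixed_opex_per_year=0.5e6,
             net_power_mw=10.0, capacity_factor=0.85)
print(f"LCOE: {value:.0f} $/MWh")
```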
APA, Harvard, Vancouver, ISO, and other styles
25

Beath, Andrew Charles, Mehdi Aghaei Meybodi, and Geoffrey Drewer. "Techno-economic Assessment of Application of Particle-Based Concentrated Solar Thermal Systems in Australian Industry." Journal of Renewable and Sustainable Energy, May 4, 2022. http://dx.doi.org/10.1063/5.0086655.

Full text
Abstract:
Australia has significant areas with high-quality solar resources, but the requirement for large-scale solar thermal plants to be financially competitive in the electricity market appears to have hindered uptake. Industrial use of heat provides an alternative route to market, where the technology is not penalised by the poor efficiency of converting heat to electricity and the plant can be scaled to the demands of a specific site. A re-examination of prior industrial energy use studies in Australia was combined with solar data and published data on industrial sites to identify three specific industrial sites for case studies. These sites were selected to cover applications in three industries with varying scale and temperature requirements. The primary solar technology selected utilises a particle receiver on a tower, with associated storage and a heat exchanger for hot water/steam production or for heating a gas reactor. The wide range of temperatures possible with this technology appears desirable for the development of a general-purpose industrial heat system. Commercially available parabolic trough systems were also assessed, for comparison, in cases where the required temperatures were appropriate. In all assessments the optimised solar plant designs approach cost competitiveness with the estimated cost of natural gas purchase for the relevant locations and industrial scales. In smaller and lower-temperature applications parabolic trough systems are likely to be the appropriate conventional choice, but the particle system exhibited a high degree of flexibility across multiple sites and applications that is encouraging for future commercial application.
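A simplified way to frame the cost comparison described above is to set a levelised cost of heat (LCOH) for the solar plant against the cost of useful heat from purchased natural gas. The sketch below does this with wholly assumed figures; it is not the study's cost model, and the capex, output and gas price values are placeholders.

```python
# Rough sketch (assumed figures, not the study's data): comparing the
# levelised cost of heat from a solar thermal plant with storage against
# the cost of useful heat from purchased natural gas.

def lcoh_solar(capex: float, opex_per_year: float, annual_heat_gj: float,
               rate: float = 0.07, years: int = 25) -> float:
    """Levelised cost of delivered solar heat, $/GJ."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + opex_per_year) / annual_heat_gj

def gas_heat_cost(gas_price_per_gj: float, boiler_efficiency: float = 0.85) -> float:
    """Cost of useful heat from a gas-fired boiler, $/GJ."""
    return gas_price_per_gj / boiler_efficiency

solar = lcoh_solar(capex=60e6, opex_per_year=0.9e6, annual_heat_gj=450_000)
gas = gas_heat_cost(gas_price_per_gj=12.0)
print(f"Solar heat: {solar:.1f} $/GJ vs gas heat: {gas:.1f} $/GJ")
```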
APA, Harvard, Vancouver, ISO, and other styles
26

Dwyer, Tim. "Transformations." M/C Journal 7, no. 2 (March 1, 2004). http://dx.doi.org/10.5204/mcj.2339.

Full text
Abstract:
The Australian Government has been actively evaluating how best to merge the functions of the Australian Communications Authority (ACA) and the Australian Broadcasting Authority (ABA) for around two years now. Broadly, the reason for this is an attempt to keep pace with the communications media transformations we reduce to the term “convergence.” Mounting pressure for restructuring is emerging as a site of turf contestation: the possibility of a regulatory “one-stop shop” for governments (and some industry players) is an end game of considerable force. But, from a public interest perspective, the case for a converged regulator needs to make sense to audiences using various media, as well as in terms of arguments about global, industrial, and technological change. This national debate about the institutional reshaping of media regulation is occurring within a wider global context of transformations in social, technological, and politico-economic frameworks of open capital and cultural markets, including the increasing prominence of international economic organisations, corporations, and Free Trade Agreements (FTAs). Although the recently concluded FTA with the US explicitly carves out a right for Australian Governments to make regulatory policy in relation to existing and new media, considerable uncertainty remains as to future regulatory arrangements. A key concern is how a right to intervene in cultural markets will be sustained in the face of cultural, politico-economic, and technological pressures that are reconfiguring creative industries on an international scale. While the right to intervene was retained for the audiovisual sector in the FTA, by contrast, it appears that comparable unilateral rights to intervene will not operate for telecommunications, e-commerce or intellectual property (DFAT). Blurring Boundaries A lack of certainty for audiences is a by-product of industry change, and further blurs regulatory boundaries: new digital media content and overlapping delivery technologies are already a reality for Australia’s media regulators. These hypothetical media usage scenarios indicate how confusion over the appropriate regulatory agency may arise: 1. playing electronic games that use racist language; 2. being subjected to deceptive or misleading pop-up advertising online; 3. receiving messaged imagery on your mobile phone that offends, disturbs, or annoys; 4. watching a program like World Idol with SMS voting that subsequently raises charging or billing issues; or 5. watching a new “reality” TV program where products are being promoted with no explicit acknowledgement of the underlying commercial arrangements either during or at the end of the program. These are all instances where, theoretically, regulatory mechanisms are in place that allow individuals to complain and to seek some kind of redress as consumers and citizens. In the last scenario, in commercial television under the sector code, no clear-cut rules exist as to the precise form of the disclosure—as there is (from 2000) in commercial radio. It’s one of a number of issues the peak TV industry lobby Commercial TV Australia (CTVA) is considering in their review of the industry’s code of practice. CTVA have proposed an amendment to the code that will simply formalise the already existing practice. That is, commercial arrangements that assist in the making of a program should be acknowledged either during programs, or in their credits. 
In my view, this amendment doesn’t go far enough in post “cash for comment” mediascapes (Dwyer). Audiences have a right to expect that broadcasters, production companies and program celebrities are open and transparent with the Australian community about these kinds of arrangements. They need to be far more clearly signposted, and people better informed about their role. In the US, the “Commercial Alert” <http://www.commercialalert.org/> organisation has been lobbying the Federal Communications Commission and the Federal Trade Commission to achieve similar in-program “visual acknowledgements.” The ABA’s Commercial Radio Inquiry (“Cash-for-Comment”) found widespread systemic regulatory failure and introduced three new standards. On that basis, how could a “standstill” response by CTVA constitute best practice for such a pervasive and influential medium as contemporary commercial television? The World Idol example may lead to confusion for some audiences, who are unsure whether the issues involved relate to broadcasting or telecommunications. In fact, it could be dealt with as a complaint to the Telecommunication Industry Ombudsman (TIO) under an ACA registered, but Australian Communications Industry Forum (ACIF) developed, code of practice. These kinds of cross-platform issues may become more vexed in future years from an audience’s perspective, especially if reality formats using on-screen premium rate service numbers invite audiences to participate, by sending MMS (multimedia messaging services) images or short video grabs over wireless networks. The political and cultural implications of this kind of audience interaction, in terms of access, participation, and more generally the symbolic power of media, may perhaps even indicate a longer-term shift in relations with consumers and citizens. In the Internet example, the Australian Competition and Consumer Commission’s (ACCC) Internet advertising jurisdiction would apply—not the ABA’s “co-regulatory” Internet content regime as some may have thought. Although the ACCC deals with complaints relating to Internet advertising, there won’t be much traction for them in a more complex issue that also includes, say, racist or religious bigotry. The DVD example would probably fall between the remits of the Office of Film and Literature Classification’s (OFLC) new “convergent” Guidelines for the Classification of Film and Computer Games and race discrimination legislation administered by the Human Rights and Equal Opportunity Commission (HREOC). The OFLC’s National Classification Scheme is really geared to provide consumer advice on media products that contain sexual and violent imagery or coarse language, rather than issues of racist language. And it’s unlikely that a single person would have the locus standi to even apply for a reclassification. It may fall within the jurisdiction of the HREOC depending on whether it was played in public or not. Even then it would probably be considered exempt on free speech grounds as an “artistic work.” Unsolicited, potentially illegal, content transmitted via mobile wireless devices, in particular 3G phones, provides another example of content that falls between the media regulation cracks. It illustrates a potential content policy “turf grab” too. Image-enabled mobile phones create a variety of novel issues for content producers, network operators, regulators, parents and viewers. There is no one government media authority or agency with a remit to deal with this issue. 
Although it has elements relating to the regulatory activities of the ACA, the ABA, the OFLC, the TIO, and TISSC, the combination of illegal or potentially prohibited content and its carriage over wireless networks positions it outside their current frameworks. The ACA may argue it should have responsibility for this kind of content since: it now enforces the recently enacted Commonwealth anti-Spam laws; has registered an industry code of practice for unsolicited content delivered over wireless networks; is seeking to include ‘adult’ content within premium rate service numbers, and, has been actively involved in consumer education for mobile telephony. It has also worked with TISSC and the ABA in relation to telephone sex information services over voice networks. On the other hand, the ABA would probably argue that it has the relevant expertise for regulating wirelessly transmitted image-content, arising from its experience of Internet and free and subscription TV industries, under co-regulatory codes of practice. The OFLC can also stake its claim for policy and compliance expertise, since the recently implemented Guidelines for Classification of Film and Computer Games were specifically developed to address issues of industry convergence. These Guidelines now underpin the regulation of content across the film, TV, video, subscription TV, computer games and Internet sectors. Reshaping Institutions Debates around the “merged regulator” concept have occurred on and off for at least a decade, with vested interests in agencies and the executive jockeying to stake claims over new turf. On several occasions the debate has been given renewed impetus in the context of ruling conservative parties’ mooted changes to the ownership and control regime. It’s tended to highlight demarcations of remit, informed as they are by historical and legal developments, and the gradual accretion of regulatory cultures. Now the key pressure points for regulatory change include the mere existence of already converged single regulatory structures in those countries with whom we tend to triangulate our policy comparisons—the US, the UK and Canada—increasingly in a context of debates concerning international trade agreements; and, overlaying this, new media formats and devices are complicating existing institutional arrangements and legal frameworks. The Department of Communications, Information Technology & the Arts’s (DCITA) review brief was initially framed as “options for reform in spectrum management,” but was then widened to include “new institutional arrangements” for a converged regulator, to deal with visual content in the latest generation of mobile telephony, and other image-enabled wireless devices (DCITA). No other regulatory agencies appear, at this point, to be actively on the Government’s radar screen (although they previously have been). Were the review to look more inclusively, the ACCC, the OFLC and the specialist telecommunications bodies, the TIO and the TISSC may also be drawn in. Current regulatory arrangements see the ACA delegate responsibility for broadcasting services bands of the radio frequency spectrum to the ABA. In fact, spectrum management is the turf least contested by the regulatory players themselves, although the “convergent regulator” issue provokes considerable angst among powerful incumbent media players. The consensus that exists at a regulatory level can be linked to the scientific convention that holds the radio frequency spectrum is a continuum of electromagnetic bands. 
In this view, it becomes artificial to sever broadcasting, as “broadcasting services bands” from the other remaining highly diverse communications uses, as occurred from 1992 when the Broadcasting Services Act was introduced. The prospect of new forms of spectrum charging is highly alarming for commercial broadcasters. In a joint submission to the DCITA review, the peak TV and radio industry lobby groups have indicated they will fight tooth and nail to resist new regulatory arrangements that would see a move away from the existing licence fee arrangements. These are paid as a sliding scale percentage of gross earnings that, it has been argued by Julian Thomas and Marion McCutcheon, “do not reflect the amount of spectrum used by a broadcaster, do not reflect the opportunity cost of using the spectrum, and do not provide an incentive for broadcasters to pursue more efficient ways of delivering their services” (6). An economic rationalist logic underpins pressure to modify the spectrum management (and charging) regime, and undoubtedly contributes to the commercial broadcasting industry’s general paranoia about reform. Total revenues collected by the ABA and the ACA between 1997 and 2002 were, respectively, $1423 million and $3644.7 million. Of these sums, using auction mechanisms, the ABA collected $391 million, while the ACA collected some $3 billion. The sale of spectrum that will be returned to the Commonwealth by television broadcasters when analog spectrum is eventually switched off, around the end of the decade, is a salivating prospect for Treasury officials. The large sums that have been successfully raised by the ACA boosts their position in planning discussions for the convergent media regulatory agency. The way in which media outlets and regulators respond to publics is an enduring question for a democratic polity, irrespective of how the product itself has been mediated and accessed. Media regulation and civic responsibility, including frameworks for negotiating consumer and citizen rights, are fundamental democratic rights (Keane; Tambini). The ABA’s Commercial Radio Inquiry (‘cash for comment’) has also reminded us that regulatory frameworks are important at the level of corporate conduct, as well as how they negotiate relations with specific media audiences (Johnson; Turner; Gordon-Smith). Building publicly meaningful regulatory frameworks will be demanding: relationships with audiences are often complex as people are constructed as both consumers and citizens, through marketised media regulation, institutions and more recently, through hybridising program formats (Murdock and Golding; Lumby and Probyn). In TV, we’ve seen the growth of infotainment formats blending entertainment and informational aspects of media consumption. At a deeper level, changes in the regulatory landscape are symptomatic of broader tectonic shifts in the discourses of governance in advanced information economies from the late 1980s onwards, where deregulatory agendas created an increasing reliance on free market, business-oriented solutions to regulation. “Co-regulation” and “self-regulation’ became the preferred mechanisms to more direct state control. Yet, curiously contradicting these market transformations, we continue to witness recurring instances of direct intervention on the basis of censorship rationales (Dwyer and Stockbridge). That digital media content is “converging” between different technologies and modes of delivery is the norm in “new media” regulatory rhetoric. 
Others critique “visions of techno-glory,” arguing instead for a view that sees fundamental continuities in media technologies (Winston). But the socio-cultural impacts of new media developments surround us: the introduction of multichannel digital and interactive TV (in free-to-air and subscription variants); broadband access in the office and home; wirelessly delivered content and mobility, and, as Jock Given notes, around the corner, there’s the possibility of “an Amazon.Com of movies-on-demand, with the local video and DVD store replaced by online access to a distant server” (90). Taking a longer view of media history, these changes can be seen to be embedded in the global (and local) “innovation frontier” of converging digital media content industries and its transforming modes of delivery and access technologies (QUT/CIRAC/Cutler & Co). The activities of regulatory agencies will continue to be a source of policy rivalry and turf contestation until such time as a convergent regulator is established to the satisfaction of key players. However, there are risks that the benefits of institutional reshaping will not be readily available for either audiences or industry. In the past, the idea that media power and responsibility ought to coexist has been recognised in both the regulation of the media by the state, and the field of communications media analysis (Curran and Seaton; Couldry). But for now, as media industries transform, whatever the eventual institutional configuration, the evolution of media power in neo-liberal market mediascapes will challenge the ongoing capacity for interventions by national governments and their agencies. Works Cited Australian Broadcasting Authority. Commercial Radio Inquiry: Final Report of the Australian Broadcasting Authority. Sydney: ABA, 2000. Australian Communications Information Forum. Industry Code: Short Message Service (SMS) Issues. Dec. 2002. 8 Mar. 2004 <http://www.acif.org.au/__data/page/3235/C580_Dec_2002_ACA.pdf >. Commercial Television Australia. Draft Commercial Television Industry Code of Practice. Aug. 2003. 8 Mar. 2004 <http://www.ctva.com.au/control.cfm?page=codereview&pageID=171&menucat=1.2.110.171&Level=3>. Couldry, Nick. The Place of Media Power: Pilgrims and Witnesses of the Media Age. London: Routledge, 2000. Curran, James, and Jean Seaton. Power without Responsibility: The Press, Broadcasting and New Media in Britain. 6th ed. London: Routledge, 2003. Dept. of Communication, Information Technology and the Arts. Options for Structural Reform in Spectrum Management. Canberra: DCITA, Aug. 2002. ---. Proposal for New Institutional Arrangements for the ACA and the ABA. Aug. 2003. 8 Mar. 2004 <http://www.dcita.gov.au/Article/0,,0_1-2_1-4_116552,00.php>. Dept. of Foreign Affairs and Trade. Australia-United States Free Trade Agreement. Feb. 2004. 8 Mar. 2004 <http://www.dfat.gov.au/trade/negotiations/us_fta/outcomes/11_audio_visual.php>. Dwyer, Tim. Submission to Commercial Television Australia’s Review of the Commercial Television Industry’s Code of Practice. Sept. 2003. Dwyer, Tim, and Sally Stockbridge. “Putting Violence to Work in New Media Policies: Trends in Australian Internet, Computer Game and Video Regulation.” New Media and Society 1.2 (1999): 227-49. Given, Jock. America’s Pie: Trade and Culture After 9/11. Sydney: U of NSW P, 2003. Gordon-Smith, Michael. “Media Ethics After Cash-for-Comment.” The Media and Communications in Australia. Ed. Stuart Cunningham and Graeme Turner. Sydney: Allen and Unwin, 2002. Johnson, Rob. 
Cash-for-Comment: The Seduction of Journo Culture. Sydney: Pluto, 2000. Keane, John. The Media and Democracy. Cambridge: Polity, 1991. Lumby, Cathy, and Elspeth Probyn, eds. Remote Control: New Media, New Ethics. Melbourne: Cambridge UP, 2003. Murdock, Graham, and Peter Golding. “Information Poverty and Political Inequality: Citizenship in the Age of Privatized Communications.” Journal of Communication 39.3 (1991): 180-95. QUT, CIRAC, and Cutler & Co. Research and Innovation Systems in the Production of Digital Content and Applications: Report for the National Office for the Information Economy. Canberra: Commonwealth of Australia, Sept. 2003. Tambini, Damian. Universal Access: A Realistic View. IPPR/Citizens Online Research Publication 1. London: IPPR, 2000. Thomas, Julian and Marion McCutcheon. “Is Broadcasting Special? Charging for Spectrum.” Conference paper. ABA conference, Canberra. May 2003. Turner, Graeme. “Talkback, Advertising and Journalism: A cautionary tale of self-regulated radio”. International Journal of Cultural Studies 3.2 (2000): 247-255. ---. “Reshaping Australian Institutions: Popular Culture, the Market and the Public Sphere.” Culture in Australia: Policies, Publics and Programs. Ed. Tony Bennett and David Carter. Melbourne: Cambridge UP, 2001. Winston, Brian. Media, Technology and Society: A History from the Telegraph to the Internet. London: Routledge, 1998. Web Links http://www.aba.gov.au http://www.aca.gov.au http://www.accc.gov.au http://www.acif.org.au http://www.adma.com.au http://www.ctva.com.au http://www.crtc.gc.ca http://www.dcita.com.au http://www.dfat.gov.au http://www.fcc.gov http://www.ippr.org.uk http://www.ofcom.org.uk http://www.oflc.gov.au Links http://www.commercialalert.org/
APA, Harvard, Vancouver, ISO, and other styles
27

Craven, Allison Ruth. "The Last of the Long Takes: Feminism, Sexual Harassment, and the Action of Change." M/C Journal 23, no. 2 (May 13, 2020). http://dx.doi.org/10.5204/mcj.1599.

Full text
Abstract:
The advent of the #MeToo movement and the scale of participation in 85 countries (Gill and Orgad; see Google Trends) has greatly expanded debate about the revival of feminism (Winch Littler and Keeler) and the contribution of digital media to a “reconfiguration” of feminism (Jouet). Insofar as these campaigns are concerned with sexual harassment and related forms of sexual abuse, the longer history of sexual harassment in which this practice was named by women’s movement activists in the 1970s has gone largely unremarked except in the broad sense of the recharging or “techno-echo[es]” (Jouet) of earlier “waves” of feminism. However, #MeToo and its companion movement #TimesUp, and its fighting fund timesupnow.org, stemmed directly from the allegations in 2017 against the media mogul Harvey Weinstein by Hollywood professionals and celebrities. The naming of prominent, powerful men as harassers and the celebrity sphere of activism have become features of #MeToo that warrant comparison with the naming of sexual harassment in the earlier era of feminism.While the practices it named were not new, the term “sexual harassment” was new, and it became a defining issue in second wave feminism that was conceptualised within the continuum of sexual violence. I outline this history, and how it transformed the private, individual experiences of many women into a shared public consciousness about sexual coercion in the workplace, and some of the debate that this generated within the women’s movement at the time. It offers scope to compare the threshold politics of naming names in the 21st century, and its celebrity vanguard which has led to some ambivalence about the lasting impact. For Kathy Davis (in Zarkov and Davis), for instance, it is atypical of the collective goals of second wave feminism.In comparing the two eras, Anita Hill’s claims against Clarence Thomas in the early 1990s is a bridging incident. It dates from closer to the time in which sexual harassment was named, and Hill’s testimony is now recognised as a prototype of the kinds of claims made against powerful men in the #MeToo era. Lauren Berlant’s account of “Diva Citizenship”, formulated in response to Hill’s testimony to the US Senate, now seems prescient of the unfolding spectacle of feminist subjectivities in the digital public sphere and speaks directly to the relation between individual and collective action in making lasting change. The possibility of change, however, descends from the intervention of the women’s movement in naming sexual harassment.The Name Is AllI found my boss in a room ... . He was alone ... . He greeted me ... touched my hair and ... said ... “Come, Ruth, sit down here.” He motioned to his knee. I felt my face flush. I backed away towards the door ... . Then he rose ... and ... put his hand into his pocket, took out a roll of bills, counted off three dollars, and brought it over to me at the door. “Tell your father,” he said, “to find you a new shop for tomorrow morning.” (Cohen 129)Sexual coercion in the workplace, such as referred to in this workplace novel published in 1918, was spoken about among women in subcultures and gossip long before it was named as sexual harassment. But it had no place in public discourse. Women’s knowledge of sexual harassment coalesced in an act of naming that is reputed to have occurred in a consciousness raising group in New York at the height of the second wave women’s movement. 
Lin Farley lays claim to it in her book, Sexual Shakedown, first published in 1978, in describing the coinage of the term from a workshop on women and work in 1974 at Cornell University. The group of participants was made up, she says, of near equal numbers of black and white women with “economic backgrounds ranging from very affluent to poor” (11). She describes how, “when we had finished, there was an unmistakable pattern to our employment ... . Each one of us had already quit or been fired from a job at least once because we had been made too uncomfortable by the behaviour of men” (11–12). She claims to have later devised the term “sexual harassment” in collaboration with others from this group (12).The naming of sexual harassment has been described as a kind of “discovery” (Leeds TUCRIC 1) and possibly “the only concept of sexual violence to be labelled by women themselves” (Hearn et al. 20). Not everyone agrees that Farley’s group first coined the term (see Herbert 1989) and there is some evidence that it was in use from the early 1970s. Catherine Mackinnon accredits its first use to the Working Women United Institute in New York in connection with the case of Carmita Wood in 1975 (25). Yet Farley’s account gained authority and is cited in several other contemporary radical feminist works (for instance, see Storrie and Dykstra 26; Wise and Stanley 48), and Sexual Shakedown can now be listed among the iconic feminist manifestoes of the second wave era.The key insight of Farley’s book was that sexual coercion in the workplace was more than aberrant behaviour by individual men but was systemic and organised. She suggests how the phrase sexual harassment “is the first verbal description of women’s feelings about this behaviour and it unstintingly conveys a negative perception of male aggression in the workplace” (32). Others followed in seeing it as organised expression of male power that functions “to keep women out of non-traditional occupations and to reinforce their secondary status in the workplace” (Pringle 93), a wisdom that is now widely accepted but seemed radical at the time.A theoretical literature on sexual harassment grew rapidly from the 1970s in which the definition of sexual harassment was a key element. In Sexual Shakedown, Farley defines it with specific connection to the workplace and a woman’s “function as worker” (33). Some definitions attempted to cover a range of practices that “might threaten a woman’s job security or create a stressful or intimidating working environment” ranging from touching to rape (Sedley and Benn 6). In the wider radical feminist discussion, sexual harassment was located within the “continuum of sexual violence”, a paradigm that highlighted the links between “every day abuses” and “less common experiences labelled as crimes” (Kelly 59). Accordingly, it was seen as a diminished category of rape, termed “little rape” (Bularzik 26), or a means whereby women are “reminded” of the “ever present threat of rape” (Rubinstein 165).The upsurge of research and writing served to document the prevalence and history of sexual harassment. Radical feminist accounts situated the origins in the long-standing patriarchal assumption that economic responsibility for women is ultimately held by men, and how “women forced to earn their own living in the past were believed to be defenceless and possibly immoral” (Rubinstein 166). 
Various accounts highlighted the intersecting effects of racism and sexism in the experience of black women, and women of colour, in a way that would be now termed intersectional. Jo Dixon discussed black women’s “least advantaged position in the economy coupled with the legacy of slavery” (164), while, in Australia, Linda Rubinstein describes the “sexual exploitation of aboriginal women employed as domestic servants on outback stations” which was “as common as the better documented abuse of slaves in the American South” (166).In The Sexual Harassment of Working Women, Catherine Mackinnon provided a pioneering legal argument that sexual harassment was a form of sex discrimination. She defined two types: the quid pro quo, when “sexual compliance is exchanged, or proposed to be exchanged, for an employment opportunity” (32); and sexual harassment as a “persistent condition of work” that “simply makes the work environment unbearable” (40). Thus the feminist histories of sexual harassment became detailed and strategic. The naming of sexual harassment was a moment of relinquishing women’s experience to the gaze of feminism and the bureaucratic gaze of the state, and, in the legal interventions that followed, it ceased to be exclusively a feminist issue.In Australia, a period of bureaucratisation and state intervention commenced in the late 1970s that corresponded with similar legislative responses abroad. The federal Sex Discrimination Act was amended in 1984 to include a definition of sexual harassment, and State and Territory jurisdictions also framed legislation pertaining to sexual harassment (see Law Council of Australia). The regimes of redress were linked with Equal Opportunity and Affirmative Action frameworks and were of a civil order. Under the law, there was potential for employers to be found vicariously liable for sexual harassment.In the women’s movement, legislative strategies were deemed reformist. Radical and socialist feminists perceived the de-gendering effects of these policies in the workplace that risked collusion with the state. Some argued that naming and defining sexual harassment denies that women constantly deal with a range of harassment anywhere, not only in the workplace (Wise and Stanley 10); while others argued that reformist approaches effectively legitimate other forms of sex discrimination not covered by legislation (Game and Pringle 290). However, in feminism and in the policy realm, the debate concerned sexual harassment in the general workplace. In contrast to #MeToo, it was not led by celebrity voices, nor galvanised by incidents in the sphere of entertainment, nor, by and large, among figures of public office, except for a couple of notable exceptions, including Anita Hill.The “Spectacle of Subjectivity” in the “Scene of Public Life”Through the early 1990s as an MA candidate at the University of Queensland, I studied media coverage of sexual harassment cases, clipping newspapers and noting electronic media reports on a daily basis. These mainly concerned incidents in government sector workplaces or small commercial enterprises. While the public prominence of the parties involved was not generally a factor in reportage, occasionally, prominent individuals were affected, such as the harassment of the athlete Michelle Baumgartner at the Commonwealth Games in 1990 which received extensive coverage but the offenders were never publicly named or disciplined. 
Two other incidents stand out: the Ormond College case at the University of Melbourne, about which much has been written; and Anita Hill’s claims against Clarence Thomas during his nomination to the US Supreme Court in 1991.The spectacle of Hill’s testimony to the US Senate is now an archetype of claims against powerful men, although, at the time, her credibility was attacked and her dignified presentation was criticised as “too composed. Too cool. Too censorious” (Legge 31). Hill was also seen to counterpose the struggles of race and gender, and Thomas himself famously described it as “a hi-tech lynching of an uppity black” (qtd in Stephens 1). By “hi-tech”, Thomas alluded to the occasion of the first-ever live national broadcast of the United States Senate hearings in which Hill’s claims were aired directly to the national public, and re-broadcast internationally in news coverage. Thus, it was not only the claims but the scale and medium of delivery to a global audience that set it apart from other sexual harassment stories.Recent events have since prompted revisiting of the inequity of Hill’s treatment at the Senate hearings. But well before this, in an epic and polemical study of American public culture, Berlant reflected at length on the heroism of Hill’s “witnessing” as paradigmatic of citizenship in post-Reaganite America’s “shrinking” public sphere. It forms part of her much wider thesis regarding the “intimate public sphere” and the form of citizenship “produced by personal acts and values” (5) in the absence of a context that “makes ordinary citizens feel they have a common public culture, or influence on a state” (3), and in which the fundamental inequality of minority cultures is assumed. For Berlant, Hill’s testimony becomes the model of “Diva Citizenship”; the “strange intimacy” in which the Citizen Diva, “the subordinated person”, believes in the capacity of the privileged ones “to learn and to change” and “trust[s] ... their innocence of ... their obliviousness” of the system that has supported her subjugation (222–223). While Berlant’s thesis pertains to profound social inequalities, there is no mistaking the comparison to the digital feminist in the #MeToo era in the call to identify with her suffering and courage.Of Hill’s testimony, Berlant describes how: “a member of a stigmatised population testifies reluctantly to a hostile public the muted and anxious history of her imperiled citizenship” (222). It is an “act of heroic pedagogy” (223) which occurs when “a person stages a dramatic coup in a public sphere in which she does not have privilege” (223). In such settings, “acts of language can feel like explosives” and put “the dominant story into suspended animation” (223). The Diva Citizen cannot “change the world” but “challenges her audience” to identify with her “suffering” and the “courage she has had to produce” in “calling on people to change the practices of citizenship into which they currently consent” (223). But Berlant cautions that the strongest of Divas cannot alone achieve change because “remaking the scene of public life into a spectacle of subjectivity” can lead to “a confusion of ... memorable rhetorical performance with sustained social change itself” (223). 
Instead, she argues that the Diva’s act is a call; the political obligation for the action of change lies with the collective, the greater body politic.The EchoIf Acts of Diva Citizenship abound in the #MeToo movement, relations between the individual and the collective are in question in a number of ways. This suggests a basis of comparison between past and present feminisms which have come full circle in the renewed recognition of sexual harassment in the continuum of sexual violence. Compared with the past, the voices of #MeToo are arguably empowered by a genuine, if gradual, change in the symbolic status of women, and a corresponding destabilization of the images of male power since the second wave era of feminism. The one who names an abuser on Twitter symbolises a power of individual courage, backed by a responding collective voice of supporters. Yet there are concerns about who can “speak out” without access to social media or with the constraint that “the sanctions would be too great” (Zarkov and Davis). Conversely, the “spreadability” — as Jenkins, Ford and Green term the travelling properties of digital media — and the apparent relative ease of online activism might belie the challenge and courage of those who make the claims and those who respond.The collective voice is also allied with other grassroots movements like SlutWalk (Jouet), the women’s marches in the US against the Trump presidency, and the several national campaigns — in India and Egypt, for instance (Zarkov and Davis) — that contest sexual violence and gender inequality. The “sheer numbers” of participation in #MeToo testify to “the collectivity of it all” and the diversity of the movement (Gill and Orgad). If the #MeToo hashtag gained traction with the “experiences of white heterosexual women in the US”, it “quickly expanded” due to “broad and inclusive appeal” with stories of queer women and men and people of colour well beyond the Global North. Even so, Tarana Burke, who founded the #MeToo hashtag in 2006 in her campaign of social justice for working class women and girls of colour, and endorsed its adoption by Hollywood, highlights the many “untold stories”.More strikingly, #MeToo participants name the names of the alleged harassers. The naming of names, famous names, is threshold-crossing and as much the public-startling power of the disclosures as the allegations and stimulates newsworthiness in conventional media. The resonance is amplified in the context of the American crisis over the Trump presidency in the sense that the powerful men called out become echoes or avatars of Trump’s monstrous manhood and the urgency of denouncing it. In the case of Harvey Weinstein, the name is all. A figure of immense power who symbolised an industry, naming Weinstein blew away the defensive old Hollywood myths of “casting couches” and promised, perhaps idealistically, the possibility for changing a culture and an industrial system.The Hollywood setting for activism is the most striking comparison with second wave feminism. A sense of contradiction emerges in this new “visibility” of sexual harassment in a culture that remains predominantly “voyeuristic” and “sexist” (Karkov and Davis), and not least in the realm of Hollywood where the sexualisation of women workers has long been a notorious open secret. 
A barrage of Hollywood feminism has accompanied #MeToo and #TimesUp in the campaign for diversity at the Oscars, and the stream of film remakes of formerly all-male narrative films that star all-female casts (Ghostbusters; Oceans 11; Dirty, Rotten Scoundrels). Cynically, this trend to make popular cinema a public sphere for gender equality in the film industry seems more glorifying than subversive of Hollywood masculinities. Uneasily, it does not overcome those lingering questions about why these conditions were uncontested openly for so long, and why it took so long for someone to go public, as Rose McGowan did, with claims about Harvey Weinstein.However, a reading of She Said, by Jodie Kantor and Megan Tuohey, the journalists who broke the Weinstein story in the New York Times — following their three year efforts to produce a legally water-tight report — makes clear that it was not for want of stories, but firm evidence and, more importantly, on-the-record testimony. If not for their (and others’) fastidious journalism and trust-building and the Citizen Divas prepared to disclose their experiences publicly, Weinstein might not be convicted today. Yet without the naming of the problem of sexual harassment in the women’s movement all those years ago, none of this may have come to pass. Lin Farley can now be found on YouTube retelling the story (see “New Mexico in Focus”).It places the debate about digital activism and Hollywood feminism in some perspective and, like the work of journalists, it is testament to the symbiosis of individual and collective effort in the action of change. The tweeting activism of #MeToo supplements the plenum of knowledge and action about sexual harassment across time: the workplace novels, the consciousness raising, the legislation and the poster campaigns. In different ways, in both eras, this literature demonstrates that names matter in calling for change on sexual harassment. But, if #MeToo is to become the last long take on sexual harassment, then, as Berlant advocates, the responsibility lies with the body politic who must act collectively for change in ways that will last well beyond the courage of the Citizen Divas who so bravely call it on.ReferencesBerlant, Lauren. The Queen of America Goes to Washington City: Essays on Sex and Citizenship. 1997. Durham: Duke UP, 2002.Bularzik, Mary. “Sexual Harassment at the Workplace: Historical Notes.” Radical America 12.4 (1978): 25-43.Cohen, Rose. Out of the Shadow. NY: Doran, 1918.Dixon, Jo. “Feminist Reforms of Sexual Coercion Laws.” Sexual Coercion: A Sourcebook on Its Nature, Causes and Prevention. Eds. Elizabeth Grauerholz and Mary A. Karlewski. Massachusetts: Lexington, 1991. 161-171.Farley, Lin. Sexual Shakedown: The Sexual Harassment of Women in the Working World. London: Melbourne House, 1978.Game, Ann, and Rosemary Pringle. “Beyond Gender at Work: Secretaries.” Australian Women: New Feminist Perspectives. Melbourne: Oxford UP, 1986. 273–91.Gill, Rosalind, and Shani Orgad. “The Shifting Terrain of Sex and Power: From the ‘Sexualisation of Culture’ to #MeToo.” Sexualities 21.8 (2018): 1313–1324. <https://doi-org.elibrary.jcu.edu.au/10.1177/1363460718794647>.Google Trends. “Me Too Rising: A Visualisation of the Movement from Google Trends.” 2017–2020. <https://metoorising.withgoogle.com>.Hearn, Jeff, Deborah Shepherd, Peter Sherrif, and Gibson Burrell. The Sexuality of Organization. London: Sage, 1989.Herbert, Carrie. Talking of Silence: The Sexual Harassment of Schoolgirls. 
London: Falmer, 1989.Jenkins, Henry, Sam Ford, and Joshua Green. Spreadable Media: Creating Value and Meaning in a Networked Culture. New York: New York UP, 2013.Jouet, Josiane. “Digital Feminism: Questioning the Renewal of Activism.” Journal of Research in Gender Studies 8.1 (2018). 1 Jan. 2018. <http://dx.doi.org.elibrary.jcu.edu.au/10.22381/JRGS8120187>.Kantor, Jodi, and Megan Twohey. She Said: Breaking the Sexual Harassment Story That Helped Ignite a Movement. London: Bloomsbury, 2019.Kelly, Liz. “The Continuum of Sexual Violence.” Women, Violence, and Social Control. Eds. Jalna Hanmer and Mary Maynard. London: MacMillan, 1989. 46–60.Legge, Kate. “The Harassment of America.” Weekend Australian 19–20 Oct. 1991: 31.Mackinnon, Catherine. The Sexual Harassment of Working Women. New Haven: Yale UP, 1979.New Mexico in Focus, a Production of NMPBS. 26 Jan. 2018. <https://www.youtube.com/watch?v=LlO5PiwZk8U>.Pringle, Rosemary. Secretaries Talk. Sydney: Allen and Unwin, 1988.Rubinstein, Linda. “Dominance Eroticized: Sexual Harassment of Working Women.” Worth Her Salt. Eds. Margaret Bevege, Margaret James, and Carmel Shute. Sydney: Hale and Iremonger, 1982. 163–74.Sedley, Ann, and Melissa Benn. Sexual Harassment at Work. London: NCCL Rights for Women Unit, 1986.Stephens, Peter. “America’s Sick and Awful Farce.” Sydney Morning Herald 14 Oct. 1991: 1.Storrie, Kathleen, and Pearl Dykstra. “Bibliography on Sexual Harassment.” Resources for Feminist Research/Documentation 10.4 (1981–1982): 25–32.Wise, Sue, and Liz Stanley. Georgie Porgie: Sexual Harassment in Every Day Life. London: Pandora, 1987.Winch, Alison, Jo Littler, and Jessalyn Keller. “Why ‘Intergenerational Feminist Media Studies’?” Feminist Media Studies 16.4 (2016): 557–572. <https://doi.org/10.1080/14680777.2016.1193285>.Zarkov, Dubravka, and Kathy Davis. “Ambiguities and Dilemmas around #MeToo: #ForHowLong and #WhereTo?” European Journal of Women's Studies 25.1 (2018): 3–9. <https://doi.org/10.1177/1350506817749436>.
APA, Harvard, Vancouver, ISO, and other styles
28

Brien, Donna Lee. "Climate Change and the Contemporary Evolution of Foodways." M/C Journal 12, no. 4 (September 5, 2009). http://dx.doi.org/10.5204/mcj.177.

Full text
Abstract:
Introduction Eating is one of the most quintessential activities of human life. Because of this primacy, eating is, as food anthropologist Sidney Mintz has observed, “not merely a biological activity, but a vibrantly cultural activity as well” (48). This article posits that the current awareness of climate change in the Western world is animating such cultural activity as the Slow Food movement and is, as a result, stimulating what could be seen as an evolutionary change in popular foodways. Moreover, this paper suggests that, in line with modelling provided by the Slow Food example, an increased awareness of the connections of climate change to the social injustices of food production might better drive social change in such areas. This discussion begins by proposing that contemporary foodways—defined as “not only what is eaten by a particular group of people but also the variety of customs, beliefs and practices surrounding the production, preparation and presentation of food” (Davey 182)—are changing in the West in relation to current concerns about climate change. Such modification has a long history. Since long before the inception of modern Homo sapiens, natural climate change has been a crucial element driving hominidae evolution, both biologically and culturally in terms of social organisation and behaviours. Macroevolutionary theory suggests evolution can dramatically accelerate in response to rapid shifts in an organism’s environment, followed by slow to long periods of stasis once a new level of sustainability has been achieved (Gould and Eldredge). There is evidence that ancient climate change has also dramatically affected the rate and course of cultural evolution. Recent work suggests that the end of the last ice age drove the cultural innovation of animal and plant domestication in the Middle East (Zeder), not only due to warmer temperatures and increased rainfall, but also to a higher level of atmospheric carbon dioxide which made agriculture increasingly viable (McCorriston and Hole, cited in Zeder). Megadroughts during the Paleolithic might well have been stimulating factors behind the migration of hominid populations out of Africa and across Asia (Scholz et al). Thus, it is hardly surprising that modern anthropogenically induced global warming—in all its’ climate altering manifestations—may be driving a new wave of cultural change and even evolution in the West as we seek a sustainable homeostatic equilibrium with the environment of the future. In 1962, Rachel Carson’s Silent Spring exposed some of the threats that modern industrial agriculture poses to environmental sustainability. This prompted a public debate from which the modern environmental movement arose and, with it, an expanding awareness and attendant anxiety about the safety and nutritional quality of contemporary foods, especially those that are grown with chemical pesticides and fertilizers and/or are highly processed. This environmental consciousness led to some modification in eating habits, manifest by some embracing wholefood and vegetarian dietary regimes (or elements of them). Most recently, a widespread awareness of climate change has forced rapid change in contemporary Western foodways, while in other climate related areas of socio-political and economic significance such as energy production and usage, there is little evidence of real acceleration of change. 
Ongoing research into the effects of this expanding environmental consciousness continues in various disciplinary contexts such as geography (Eshel and Martin) and health (McMichael et al). In food studies, Vileisis has proposed that the 1970s environmental movement’s challenge to the polluting practices of industrial agri-food production, concurrent with the women’s movement (asserting women’s right to know about everything, including food production), has led to both cooks and eaters becoming increasingly knowledgeable about the links between agricultural production and consumer and environmental health, as well as the various social justice issues involved. As a direct result of such awareness, alternatives to the industrialised, global food system are now emerging (Kloppenberg et al.). The Slow Food (R)evolution The tenets of the Slow Food movement, now some two decades old, are today synergetic with the growing consternation about climate change. In 1983, Carlo Petrini formed the Italian non-profit food and wine association Arcigola and, in 1986, founded Slow Food as a response to the opening of a McDonalds in Rome. From these humble beginnings, which were then unashamedly positing a return to the food systems of the past, Slow Food has grown into a global organisation that has much more future focused objectives animating its challenges to the socio-cultural and environmental costs of industrial food. Slow Food does have some elements that could be classed as reactionary and, therefore, the opposite of evolutionary. In response to the increasing homogenisation of culinary habits around the world, for instance, Slow Food’s Foundation for Biodiversity has established the Ark of Taste, which expands upon the idea of a seed bank to preserve not only varieties of food but also local and artisanal culinary traditions. In this, the Ark aims to save foods and food products “threatened by industrial standardization, hygiene laws, the regulations of large-scale distribution and environmental damage” (SFFB). Slow Food International’s overarching goals and activities, however, extend far beyond the preservation of past foodways, extending to the sponsoring of events and activities that are attempting to create new cuisine narratives for contemporary consumers who have an appetite for such innovation. Such events as the Salone del Gusto (Salon of Taste) and Terra Madre (Mother Earth) held in Turin every two years, for example, while celebrating culinary traditions, also focus on contemporary artisanal foods and sustainable food production processes that incorporate the most current of agricultural knowledge and new technologies into this production. Attendees at these events are also driven by both an interest in tradition, and their own very current concerns with health, personal satisfaction and environmental sustainability, to change their consumer behavior through an expanded self-awareness of the consequences of their individual lifestyle choices. Such events have, in turn, inspired such events in other locations, moving Slow Food from local to global relevance, and affecting the intellectual evolution of foodway cultures far beyond its headquarters in Bra in Northern Italy. This includes in the developing world, where millions of farmers continue to follow many traditional agricultural practices by necessity. Slow Food Movement’s forward-looking values are codified in the International Commission on the Future of Food and Agriculture 2006 publication, Manifesto on the Future of Food. 
This calls for changes to the World Trade Organisation’s rules that promote the globalisation of agri-food production as a direct response to the “climate change [which] threatens to undermine the entire natural basis of ecologically benign agriculture and food preparation, bringing the likelihood of catastrophic outcomes in the near future” (ICFFA 8). It does not call, however, for a complete return to past methods. To further such foodway awareness and evolution, Petrini founded the University of Gastronomic Sciences at Slow Food’s headquarters in 2004. The university offers programs that are analogous with the Slow Food’s overall aim of forging sustainable partnerships between the best of old and new practice: to, in the organisation’s own words, “maintain an organic relationship between gastronomy and agricultural science” (UNISG). In 2004, Slow Food had over sixty thousand members in forty-five countries (Paxson 15), with major events now held each year in many of these countries and membership continuing to grow apace. One of the frequently cited successes of the Slow Food movement is in relation to the tomato. Until recently, supermarkets stocked only a few mass-produced hybrids. These cultivars were bred for their disease resistance, ease of handling, tolerance to artificial ripening techniques, and display consistency, rather than any culinary values such as taste, aroma, texture or variety. In contrast, the vine ripened, ‘farmer’s market’ tomato has become the symbol of an “eco-gastronomically” sustainable, local and humanistic system of food production (Jordan) which melds the best of the past practice with the most up-to-date knowledge regarding such farming matters as water conservation. Although the term ‘heirloom’ is widely used in relation to these tomatoes, there is a distinctively contemporary edge to the way they are produced and consumed (Jordan), and they are, along with other organic and local produce, increasingly available in even the largest supermarket chains. Instead of a wholesale embrace of the past, it is the connection to, and the maintenance of that connection with, the processes of production and, hence, to the environment as a whole, which is the animating premise of the Slow Food movement. ‘Slow’ thus creates a gestalt in which individuals integrate their lifestyles with all levels of the food production cycle and, hence to the environment and, importantly, the inherently related social justice issues. ‘Slow’ approaches emphasise how the accelerated pace of contemporary life has weakened these connections, while offering a path to the restoration of a sense of connectivity to the full cycle of life and its relation to place, nature and climate. In this, the Slow path demands that every consumer takes responsibility for all components of his/her existence—a responsibility that includes becoming cognisant of the full story behind each of the products that are consumed in that life. The Slow movement is not, however, a regime of abstention or self-denial. Instead, the changes in lifestyle necessary to support responsible sustainability, and the sensual and aesthetic pleasure inherent in such a lifestyle, exist in a mutually reinforcing relationship (Pietrykowski 2004). This positive feedback loop enhances the potential for promoting real and long-term evolution in social and cultural behaviour. 
Indeed, the Slow zeitgeist now informs many areas of contemporary culture, with Slow Travel, Homes, Design, Management, Leadership and Education, and even Slow Email, Exercise, Shopping and Sex attracting adherents.

Mainstreaming Concern with Ethical Food Production

The role of the media in "forming our consciousness—what we think, how we think, and what we think about" (Cunningham and Turner 12) is self-evident. It is, therefore, revealing in relation to the changes outlined above that even the most functional cookbooks and cookery magazines (those dedicated to practical information such as recipes and instructional technique) in Western countries such as the USA, the UK and Australia are increasingly reflecting and promoting an awareness of ethical food production as part of this cultural change in food habits. While such texts have largely been considered useful but socio-politically banal publications, they are beginning to be recognised as a valid source of historical and cultural information (Nussel). Cookbooks and cookery magazines commonly include discussion of a surprising range of issues around food production and consumption, including sustainable and ethical agricultural methods, biodiversity, genetic modification and food miles. In this context, they indicate how rapidly the recent evolution of foodways has been absorbed into mainstream practice.

Much of this food-related media content is, at the same time, closely identified with celebrity mass marketing and embodied in the television chef with his or her range of branded products, including syndicated articles and cookbooks. This commercial symbiosis makes each such cuisine-related article in a food or women's magazine or cookbook, in essence, an advertorial for a celebrity chef and their named products. Yet a number of these mass media food celebrities are nonetheless raising public discussion that is leading to consequent action around important issues linked to climate change, social justice and the environment.

An example is Jamie Oliver's efforts to influence public behaviour and government policy, a number of which have gained considerable traction. Oliver's 2004 exposure of the poor quality of school lunches in Britain (see Jamie's School Dinners), for instance, caused public outrage and pressured the British government to commit considerable extra funding to these programs. A recent study by Essex University has, moreover, found that the academic performance of 11-year-old pupils eating Oliver's meals improved, while absenteeism fell by 15 per cent (Khan). Oliver's exposé of the conditions of battery-raised hens in 2007 and 2008 (see Fowl Dinners) resulted in increased sales of free-range poultry, decreased sales of factory-farmed chickens across the UK, and complaints that free-range chicken sales were limited by supply. Oliver encouraged viewers to lobby their local councils and, as a result, a number banned battery hen eggs from schools, care homes, town halls and workplace cafeterias (see, for example, LDP). The popular penetration of these ideas needs to be understood in a historical context in which industrialised poultry farming has been an issue in Britain since at least 1848, when it was one of the contributing factors to the establishment of the RSPCA (Freeman).
A century after Upton Sinclair's The Jungle (published in 1906) exposed the realities of the slaughterhouse, and several decades since Peter Singer's landmark Animal Liberation (1975) and Tom Regan's The Case for Animal Rights (1983) posited the immorality of the mistreatment of animals in food production, it could be suggested that Al Gore's film An Inconvenient Truth (released in 2006) has added considerably to recent concern regarding the ethics of industrial agriculture. Consciousness-raising bestselling books such as Jim Mason and Peter Singer's The Ethics of What We Eat and Michael Pollan's The Omnivore's Dilemma (both published in 2006) do indeed 'close the loop' in their discussions by concluding that the intensive food production methods used since the 1950s are not only inhumane and damaging to public health, but also harmful to an environment already under pressure from climate change.

In comparison, the use of forced labour and human trafficking in food production has attracted far less mainstream media, celebrity or public attention. It could be posited that this is, in part, because no direct relationship to the environment and climate change, and therefore no direct link to our own existence in the West, has been popularised. Kevin Bales, who has been described as a modern abolitionist, estimates that more than 27 million people currently live in conditions of slavery and exploitation against their will, twice as many as during the 350 years of the trans-Atlantic slave trade. Bales also chillingly reveals that, worldwide, the number of slaves is increasing, with contemporary individuals so inexpensive to purchase in relation to the value of their production that they are disposable once the slaveholder has used them. Alongside sex slavery, many other prevalent examples of contemporary slavery are concerned with food production (Weissbrodt et al.; Miers). Bales and Soodalter, for example, describe how, across Asia and Africa, adults and children are enslaved to catch and process fish and shellfish for both human consumption and cat food. Other campaigners have similarly exposed how the cocoa in chocolate is largely produced by child slave labour on the Ivory Coast (Chalke; Off), and how considerable amounts of exported sugar, cereals and other crops are slave-produced in certain countries.

In 2003, some 32 per cent of US shoppers identified themselves as LOHAS ("lifestyles of health and sustainability") consumers, who were, they said, willing to spend more for products that reflected not only ecological but also social justice responsibility (McLaughlin). Research also confirms that "the pursuit of social objectives … can in fact furnish an organization with the competitive resources to develop effective marketing strategies", with Doherty and Meehan showing how "social and ethical credibility" are now viable bases of differentiation and competitive positioning in mainstream consumer markets (311, 303). In line with this recognition, Fair Trade Certified goods are now available in British, European, US and, to a lesser extent, Australian supermarkets, and a number of global chains, including Dunkin' Donuts, McDonald's, Starbucks and Virgin airlines, utilise Fair Trade coffee and teas in all, or parts of, their operations. Fair Trade Certification indicates that farmers receive a higher than commodity price for their products, workers have the right to organise, men and women receive equal wages, and no child labour is utilised in the production process (McLaughlin).
Yet, despite some Western consumers reporting that such issues have an impact upon their purchasing decisions, social justice has not become a significant issue of concern for most. The popular cookery publications discussed above devote little space to Fair Trade product marketing, much of which is confined to supermarket-produced adverzines promoting the Fair Trade products they stock, and international celebrity chefs have yet to focus attention on the issue. In Australia, discussion of contemporary slavery in the press is sparse, having surfaced in 2000–2001, prompted by UNICEF campaigns against child labour, and again in 2007 and 2008 with the visits of a series of high-profile anti-slavery campaigners (including Bales) to the region. Public awareness of food produced by forced labour, and of the troubling issue of human enslavement in general, remains far below the level that climate change and ecological issues have achieved in driving foodway evolution. This may change, however, if a 'Slow'-inflected connection can be made between Western lifestyles and the plight of the peoples hidden from our daily existence but contributing daily to it.

Concluding Remarks

At this time of accelerating techno-cultural evolution, due in part to the pressures of climate change, it is the creative potential that human conscious awareness brings to bear on these challenges that is most valuable. Today, as in the caves at Lascaux, humanity is evolving new images and narratives to provide rational solutions to emergent challenges. As an example of this, new foodways, and new ways of thinking about them, are beginning to evolve in response to the perceived problems of climate change. The current conscious transformation of food habits by some in the West might therefore be, in James Lovelock's terms, a moment of "revolutionary punctuation" (178), whereby rapid cultural adaptation is induced by growing public awareness of impending crisis. It remains to be seen whether other urgent human problems can be similarly and creatively embraced, and whether this trend can spread to offer global solutions to them.

References

An Inconvenient Truth. Dir. Davis Guggenheim. Lawrence Bender Productions, 2006.
Bales, Kevin. Disposable People: New Slavery in the Global Economy. Berkeley: University of California Press, 2004 (first published 1999).
Bales, Kevin, and Ron Soodalter. The Slave Next Door: Human Trafficking and Slavery in America Today. Berkeley: University of California Press, 2009.
Carson, Rachel. Silent Spring. Boston: Houghton Mifflin, 1962.
Chalke, Steve. "Unfinished Business: The Sinister Story behind Chocolate." The Age 18 Sep. 2007: 11.
Cunningham, Stuart, and Graeme Turner. The Media and Communications in Australia Today. Crows Nest: Allen & Unwin, 2002.
Davey, Gwenda Beed. "Foodways." The Oxford Companion to Australian Folklore. Ed. Gwenda Beed Davey and Graham Seal. Melbourne: Oxford University Press, 1993. 182–85.
Doherty, Bob, and John Meehan. "Competing on Social Resources: The Case of the Day Chocolate Company in the UK Confectionery Sector." Journal of Strategic Marketing 14.4 (2006): 299–313.
Eshel, Gidon, and Pamela A. Martin. "Diet, Energy, and Global Warming." Earth Interactions 10, paper 9 (2006): 1–17.
Fowl Dinners. Exec. Prod. Nick Curwin and Zoe Collins. Dragonfly Film and Television Productions and Fresh One Productions, 2008.
Freeman, Sarah. Mutton and Oysters: The Victorians and Their Food. London: Gollancz, 1989.
Gould, S. J., and N. Eldredge. "Punctuated Equilibrium Comes of Age." Nature 366 (1993): 223–27.
(ICFFA) International Commission on the Future of Food and Agriculture. Manifesto on the Future of Food. Florence, Italy: Agenzia Regionale per lo Sviluppo e l'Innovazione nel Settore Agricolo Forestale and Regione Toscana, 2006.
Jamie's School Dinners. Dir. Guy Gilbert. Fresh One Productions, 2005.
Jordan, Jennifer A. "The Heirloom Tomato as Cultural Object: Investigating Taste and Space." Sociologia Ruralis 47.1 (2007): 20–41.
Khan, Urmee. "Jamie Oliver's School Dinners Improve Exam Results, Report Finds." Telegraph 1 Feb. 2009. 24 Aug. 2009 <http://www.telegraph.co.uk/education/educationnews/4423132/Jamie-Olivers-school-dinners-improve-exam-results-report-finds.html>.
Kloppenberg, Jack, Jr., Sharon Lezberg, Kathryn de Master, G. W. Stevenson, and John Henrickson. "Tasting Food, Tasting Sustainability: Defining the Attributes of an Alternative Food System with Competent, Ordinary People." Human Organization 59.2 (2000): 177–86.
(LDP) Liverpool Daily Post. "Battery Farm Eggs Banned from Schools and Care Homes." Liverpool Daily Post 12 Jan. 2008. 24 Aug. 2009 <http://www.liverpooldailypost.co.uk/liverpool-news/regional-news/2008/01/12/battery-farm-eggs-banned-from-schools-and-care-homes-64375-20342259>.
Lovelock, James. The Ages of Gaia: A Biography of Our Living Earth. New York: Bantam, 1990 (first published 1988).
Mason, Jim, and Peter Singer. The Ethics of What We Eat. Melbourne: Text Publishing, 2006.
McLaughlin, Katy. "Is Your Grocery List Politically Correct? Food World's New Buzzword Is 'Sustainable' Products." The Wall Street Journal 17 Feb. 2004. 29 Aug. 2009 <http://www.globalexchange.org/campaigns/fairtrade/coffee/1732.html>.
McMichael, Anthony J., John W. Powles, Colin D. Butler, and Ricardo Uauy. "Food, Livestock Production, Energy, Climate Change, and Health." The Lancet 370 (6 Oct. 2007): 1253–63.
Miers, Suzanne. "Contemporary Slavery." A Historical Guide to World Slavery. Ed. Seymour Drescher and Stanley L. Engerman. New York: Oxford University Press, 1998.
Mintz, Sidney W. Tasting Food, Tasting Freedom: Excursions into Eating, Culture, and the Past. Boston: Beacon Press, 1994.
Nussel, Jill. "Heating Up the Sources: Using Community Cookbooks in Historical Inquiry." History Compass 4.5 (2006): 956–61.
Off, Carol. Bitter Chocolate: Investigating the Dark Side of the World's Most Seductive Sweet. St Lucia: University of Queensland Press, 2008.
Paxson, Heather. "Slow Food in a Fat Society: Satisfying Ethical Appetites." Gastronomica: The Journal of Food and Culture 5.1 (2005): 14–18.
Pietrykowski, Bruce. "You Are What You Eat: The Social Economy of the Slow Food Movement." Review of Social Economy 62.3 (2004): 307–21.
Pollan, Michael. The Omnivore's Dilemma: A Natural History of Four Meals. New York: The Penguin Press, 2006.
Regan, Tom. The Case for Animal Rights. Berkeley: University of California Press, 1983.
Scholz, Christopher A., Thomas C. Johnson, Andrew S. Cohen, John W. King, John A. Peck, Jonathan T. Overpeck, Michael R. Talbot, Erik T. Brown, Leonard Kalindekafe, Philip Y. O. Amoako, Robert P. Lyons, Timothy M. Shanahan, Isla S. Castañeda, Clifford W. Heil, Steven L. Forman, Lanny R. McHargue, Kristina R. Beuning, Jeanette Gomez, and James Pierson. "East African Megadroughts between 135 and 75 Thousand Years Ago and Bearing on Early-Modern Human Origins." PNAS: Proceedings of the National Academy of Sciences of the United States of America 104.42 (16 Oct. 2007): 16416–21.
Sinclair, Upton. The Jungle. New York: Doubleday, Page & Company, 1906.
Singer, Peter. Animal Liberation. New York: HarperCollins, 1975.
(SFFB) Slow Food Foundation for Biodiversity. "Ark of Taste." 2009. 24 Aug. 2009 <http://www.fondazioneslowfood.it/eng/arca/lista.lasso>.
(UNISG) University of Gastronomic Sciences. "Who We Are." 2009. 24 Aug. 2009 <http://www.unisg.it/eng/chisiamo.php>.
Vileisis, Ann. Kitchen Literacy: How We Lost Knowledge of Where Food Comes From and Why We Need to Get It Back. Washington: Island Press/Shearwater Books, 2008.
Weissbrodt, David, and Anti-Slavery International. Abolishing Slavery and Its Contemporary Forms. New York and Geneva: Office of the United Nations High Commissioner for Human Rights, United Nations, 2002.
Zeder, Melinda A. "The Neolithic Macro-(R)evolution: Macroevolutionary Theory and the Study of Culture Change." Journal of Archaeological Research 17 (2009): 1–63.