Journal articles on the topic 'Optimisation RD'


Consult the top 16 journal articles for your research on the topic 'Optimisation RD.'


1

Keller, Tobias, Bjoern Dreisewerd, and Andrzej Górak. "Reactive Distillation for Multiple-Reaction Systems: Optimisation Study Using an Evolutionary Algorithm." Chemical and Process Engineering 34, no. 1 (March 1, 2013): 17–38. http://dx.doi.org/10.2478/cpe-2013-0003.

Abstract:
Reactive distillation (RD) has already demonstrated its potential to significantly increase reactant conversion and the purity of the target product. Our work focuses on the application of RD to reaction systems that feature more than one main reaction. In such multiple-reaction systems, the application of RD would enhance not only the reactant conversion but also the selectivity of the target product. The potential of RD to improve the product selectivity of multiple-reaction systems has not yet been fully exploited because of a shortage of available comprehensive experimental and theoretical studies. In the present article, we theoretically identify the full potential of RD technology in multiple-reaction systems by performing a detailed optimisation study. An evolutionary algorithm was applied and the obtained results were compared with those of a conventional stirred tank reactor to quantify the potential of RD to improve the target product selectivity of multiple-reaction systems. The consecutive transesterification of dimethyl carbonate with ethanol to form ethyl methyl carbonate and diethyl carbonate was used as a case study.
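The evolutionary-algorithm approach the abstract describes can be illustrated with a minimal sketch. The selectivity function below is a hypothetical smooth surrogate over two normalised design variables, not the paper's reactive-distillation model; the paper's actual objective and design space are far richer.

```python
import random

def selectivity(x):
    # Hypothetical surrogate for target-product selectivity as a function of
    # two normalised column design variables (NOT the paper's RD model).
    reflux, holdup = x
    return -(reflux - 0.6) ** 2 - (holdup - 0.3) ** 2

def evolve(obj, n_gen=200, pop_size=20, sigma=0.1, seed=1):
    """Minimal elitist (mu+lambda) evolutionary search over [0, 1]^2."""
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(n_gen):
        # Mutate every parent with Gaussian noise, clipped to the unit box.
        children = [[min(1.0, max(0.0, g + rng.gauss(0.0, sigma))) for g in p]
                    for p in pop]
        # Elitist selection: keep the best pop_size of parents + children.
        pop = sorted(pop + children, key=obj, reverse=True)[:pop_size]
    return max(pop, key=obj)

best = evolve(selectivity)
```

With the surrogate above, the search settles near the surrogate's optimum at (0.6, 0.3); a real study would replace the surrogate with a column simulation.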
2

Schmetz, Roland. "METHODICAL OPTIMISATION OF DRIVETRAINS OF AGRICULTURAL MACHINERY WITH SPECIAL FOCUS ON THEIR ELECTRIFICATION AND ENERGY EFFICIENCY." RURAL DEVELOPMENT 2019 2021, no. 1 (January 24, 2022): 155–60. http://dx.doi.org/10.15544/rd.2021.026.

3

Kumar, B. S. Sunil. "IF-RD optimisation for bandwidth compression in video HEVC and congestion control in wireless networks using dolphin echolocation optimisation with FEC." International Journal of Signal and Imaging Systems Engineering 11, no. 3 (2018): 151. http://dx.doi.org/10.1504/ijsise.2018.093267.

4

Sunil Kumar, B. S. "IF-RD optimisation for bandwidth compression in video HEVC and congestion control in wireless networks using dolphin echolocation optimisation with FEC." International Journal of Signal and Imaging Systems Engineering 11, no. 3 (2018): 151. http://dx.doi.org/10.1504/ijsise.2018.10014296.

5

Lefter, Răzvan Corneliu, Daniela Popescu, and Alexandrina Untăroiu. "Method for Redesign of District Heating Networks within Transition from the 2nd to the 3rd Generation." Applied Mechanics and Materials 657 (October 2014): 689–93. http://dx.doi.org/10.4028/www.scientific.net/amm.657.689.

Abstract:
Important investments have lately been made in the area of district heating, as a technology capable of helping countries to reach sustainability goals. In Romania, European funds were spent on the transition from the 2nd to the 3rd generation of district heating systems. The lack of appropriate monitoring systems in old district heating systems makes optimisation very difficult nowadays, especially because the nominal values used in the first design stage are overestimated. Realistic nominal heat loads are necessary to make good estimations of the hydraulic parameters to be used for redesign. This study proposes a method that uses heat load duration curve theory to identify the appropriate nominal heat loads to be used for redesign. A comparison between the results obtained by applying the nominal heat loads of each consumer, as established in the first design stage, and the ones identified by the proposed method is analysed in a case study. The results show that the errors between the metered heat consumption rates and the proposed rates are within a ±3% band. The new method can be used for the sizing of pumps and district heating networks after retrofit, in order to obtain better adjustment of the circulation pumps and increased energy efficiency.
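The heat load duration curve idea in this abstract reduces to sorting metered hourly loads in descending order and reading off the load exceeded for a chosen number of hours per year. A sketch follows; the 100-hour exceedance threshold is an illustrative assumption, not a value taken from the paper.

```python
def load_duration_curve(hourly_loads):
    # Sort metered hourly heat loads in descending order.
    return sorted(hourly_loads, reverse=True)

def nominal_load(hourly_loads, hours=100):
    # Heat load exceeded for `hours` hours per year, used as the redesign
    # nominal load (the exceedance threshold here is illustrative).
    curve = load_duration_curve(hourly_loads)
    return curve[min(hours, len(curve)) - 1]
```

Applied to a year of metered data (8760 hourly values), this picks a realistic nominal load far below the overestimated first-design value.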
6

Ahmed, Polash, Md Ferdous Rahman, A. K. M. Mahmudul Haque, Mustafa K. A. Mohammed, G. F. Ishraque Toki, Md Hasan Ali, Abdul Kuddus, M. H. K. Rubel, and M. Khalid Hossain. "Feasibility and Techno-Economic Evaluation of Hybrid Photovoltaic System: A Rural Healthcare Center in Bangladesh." Sustainability 15, no. 2 (January 11, 2023): 1362. http://dx.doi.org/10.3390/su15021362.

Abstract:
This study aimed to investigate a techno-economic evaluation of a photovoltaic system, along with a diesel generator as a backup supply, to ensure a continuous twenty-four-hour power supply per day regardless of the weather. Healthcare centers in Bangladesh play a vital role in the health issues of the residents of rural areas. In this regard, a healthcare center in Baliadangi—Lahiri Hat Rd, Baliadangi, Thakurgaon, Bangladesh, was selected to be electrically empowered. The simulation software Hybrid Optimisation Model for Electric Renewables (HOMER) and the HOMER Powering Health tool were used to analyze and optimize the renewable energy required by the healthcare center. It was found that the healthcare center required a 24.3 kW solar PV system with a net present cost of $28,705.2; the levelized cost of electricity (LCOE) was $0.02728 per kWh, with renewable energy providing 98% of the system’s total power requirements, the generator 1% and the grid the remaining 1%. The load analysis revealed that the hybrid PV system might be superior to other power sources for providing electricity for both normal functions and the emergencies that arise in a healthcare center’s day-to-day operation. The outcome of the study is expected to be beneficial for both government and other stakeholders in decision-making.
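The levelised cost of electricity reported here is, in essence, the annualised net present cost divided by the annual energy served. A sketch with illustrative inputs (the discount rate, project lifetime and annual load below are assumptions, not figures from the study):

```python
def crf(rate, years):
    # Capital recovery factor: annualises an upfront cost at a discount rate.
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(npc_usd, rate, years, annual_kwh):
    # Levelised cost of electricity, $/kWh: annualised NPC over energy served.
    return npc_usd * crf(rate, years) / annual_kwh

# Illustrative inputs only: 8% discount rate, 25-year lifetime, 100 MWh/yr.
cost_per_kwh = lcoe(28705.2, 0.08, 25, 100_000)
```

HOMER performs this annualisation internally over all cost components; the sketch shows only the headline relationship between NPC and LCOE.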
7

Patterson, Neil, and Jonathan Binns. "Development of a Six Degree of Freedom Velocity Prediction Program for the foiling America’s Cup Vessels." Journal of Sailing Technology 7, no. 01 (July 11, 2022): 120–51. http://dx.doi.org/10.5957/jst/2022.7.6.151.

Abstract:
Since the introduction of hydrofoils to the Moth sailing class in the early 2000s, foiling has become increasingly popular in sailing, from windsurfers to large 75’ foiling monohulls. The last three America’s Cups have been contested on hydrofoiling vessels. Design programs such as Velocity Prediction Programs (VPP) have become a key asset to America’s Cup teams, allowing the optimisation and testing of designs before manufacture. Presented is the development of a Six Degree-of-Freedom (6DoF) Quasi-Static Velocity Prediction Program (SVPP) and Dynamic Velocity Prediction Program (DVPP) for the 35th and 36th America's Cup foiling AC50 Catamaran and AC75 Monohull. The models have been validated against race data from the 35th and 36th America’s Cup, showing good correlation for a wide wind range of 8 to 22 knots. The paper presents how the AC50 SVPP was used to analyse the impact of Rudder Rake Differential (RD) on overall performance and to predict the optimal wind range for use of the light and heavy weather dagger boards on the AC50 Catamaran. The AC75 SVPP and DVPP were used to analyse the effect of hull shape and the main foils’ fixed angle-of-attack (AOA) on time-to-flight and peak velocity to determine the optimal foil setup and pitch angle when foiling. The SVPP and DVPP use the XFLR5 software suite to model the foils. Experimental data for a T-foil tested in the Australian Maritime College towing tank facility has been used to predict viscous and free surface effect adjustments to the predictions from XFLR5.
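At its core, a quasi-static VPP finds the state where aerodynamic driving force balances hydrodynamic resistance. A one-degree-of-freedom toy of that force balance follows; both force models are invented for illustration, whereas the paper's SVPP balances six degrees of freedom using XFLR5-derived foil data.

```python
def drive_force(v, wind):
    # Hypothetical aerodynamic driving force (kN), falling off with boat speed.
    return 5.0 * wind / (1.0 + v)

def drag_force(v):
    # Hypothetical hydrodynamic drag (kN), growing with the square of speed.
    return 0.05 * v ** 2

def solve_speed(wind, lo=0.0, hi=60.0):
    # Bisect for the boat speed where driving force equals drag.
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if drive_force(mid, wind) > drag_force(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A full SVPP repeats this kind of equilibrium solve simultaneously in surge, sway, heave, roll, pitch and yaw for each wind condition.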
8

Goekbuget, Nicola, Anja Baumann, Joachim Beck, Monika Brueggemann, Helmut Diedrich, Andreas Huettmann, Lothar Leimer, et al. "PEG-Asparaginase Intensification In Adult Acute Lymphoblastic Leukemia (ALL): Significant Improvement of Outcome with Moderate Increase of Liver Toxicity In the German Multicenter Study Group for Adult ALL (GMALL) Study 07/2003." Blood 116, no. 21 (November 19, 2010): 494. http://dx.doi.org/10.1182/blood.v116.21.494.494.

Abstract:
Abstract 494. Several randomised pediatric trials have demonstrated that intensification of Asparaginase (ASP) treatment in ALL can contribute to improved outcome. In adult ALL, few data are available, and the optimal ASP preparation, schedule and intensity with respect to efficacy and tolerability have yet to be defined. The optimisation of ASP treatment is therefore an essential aim of the GMALL. Treatment: Induction treatment of the ongoing study 07/2003 consists of dexamethasone, vincristine, daunorubicine, pegylated asparaginase (PEG-ASP) (phase I), mercaptopurine, cyclophosphamide and cytarabine (phase II), as previously described (Brueggemann et al, Blood 2006: 107; 1116). During the study, the dose of PEG-ASP was increased from 1000 to 2000 U/m2 in induction and from 500 to 2000 U/m2 in consolidation (combined with HDMTX and MP) for pts aged between 15 and 55 years. One application for high risk and 7 applications for standard risk (SR) were scheduled during the first year; the aim was improvement of overall survival (OS) and remission duration (RD). Patients: From more than 100 centers in Germany, 1226 pts with a median age of 35 (15-55) yrs were evaluable. 826 pts were treated with 1000 U/m2 (cohort 1) and 400 pts with 2000 U/m2 (cohort 2); both groups were comparable regarding major entry criteria. The analysis was restricted to pts who received one of the scheduled PEG-ASP doses during induction. Outcome: The CR rate after induction was 91% vs 91% in cohorts 1 and 2 resp., with comparable rates of early death (4% vs 5%) and failure (5% vs 4%). Data on molecular response (MRD below 10−4) after induction are available in a subset and showed no difference between the cohorts after induction (79% vs 82%). OS after 3 years was improved in cohort 2 (60% vs 67%; p>.05). The positive effect was specifically evident in SR patients (N=407 vs 190) with respect to OS (68% vs 80%; p=.02) and RD (61% vs 74%; p=.02). 
It was demonstrated in younger pts (15-45 yrs) (71% vs 82%; p=.02) and older pts (45-55 yrs) (56% vs 74%; p>.05). Excellent results were achieved in young adults (15-25 years) with respect to OS (77% vs 86%; p>.05) and RD (60% vs 78%; p>.05). Toxicity: The analysis of toxicity focused on grade III-IV events during induction with a potential correlation to PEG-ASP (764/382 pts in cohort 1/cohort 2). Incidences were as follows: GOT or GPT (30%/30%), bilirubin (10%/16%), thrombosis (5%/5%) and hypersensitivity (<1%/<1%). In a subset of pts, additional AEs were assessed: amylase (5%/13%), lipase (23%/15%) and glucose (10%/12%). Significantly less toxicity was observed during consolidation cycles. Bilirubin °III/IV occurred a median of 16 d after PEG-ASP during phase II of induction. In univariate analysis it was correlated with dose (10% vs 16%; p=.004), age <> 45 yrs (11% vs 17%; p=.005), BMI <> 30 (12% vs 18%; p=.04) and rituximab application (11% vs 18%; p=.009). Hepatomegaly, infections and imatinib application had no significant effect. In multivariate analysis, dose and age remained independent significant prognostic factors. Bilirubin increase during induction was associated with treatment delays and inferior prognosis. Conclusions: This is the largest cohort of adult ALL treated with PEG-ASP. Due to prolonged activity, fewer applications are required, which is a pre-requisite for the realisation of ASP intensification in the context of intensive multidrug chemotherapy for adult ALL. Although the CR rate and molecular CR rate were not significantly improved, PEG-ASP intensification was associated with improved OS and RD. The improvement was specifically evident in SR pts treated with up to 7 doses of PEG-ASP. Overall, intensified PEG-ASP was feasible. The rate of grade III-IV bilirubin elevation increased after dose escalation and led to treatment delays in individual pts, which were prognostically relevant. 
It would be an important goal to identify parameters to predict severe ASP related toxicity. Further intensification of ASP by additional applications would be of interest. Supported by Deutsche Krebshilfe 70–2657-Ho2 and partly BMBF 01GI 9971 and Medac GmbH. Disclosures: Goekbuget: Medac: Consultancy, Research Funding, Speakers Bureau. Hoelzer: Medac: Speakers Bureau.
9

Göbel, Astrid, Tobias Knuuti, Carola Franzen, Dinara Abbasova, Thuro Arnold, Vinzenz Brendler, Kateryna Fuzik, et al. "State-of-Knowledge and Guidance in EURAD Knowledge Management (Work Packages 11 State-of-Knowledge & 12 Guidance)." Safety of Nuclear Waste Disposal 1 (November 10, 2021): 249–50. http://dx.doi.org/10.5194/sand-1-249-2021.

Abstract:
EURAD, the European Joint Programme on Radioactive Waste Management (RWM), is the European Research Programme on RWM, aimed at supporting member states with the implementation of their national programmes. It brings together over 100 organisations from different backgrounds and countries, which work together in RD&D projects, Strategic Studies and Knowledge Management (KM). The importance of KM is recognised by EURAD and reflected in a number of activities. One essential activity is the capture of the current State-of-Knowledge in the field of RWM and its transfer to the implementation of the different national programmes. This is done by different types of Knowledge Documents that are made available through a dedicated IT tool (e.g. a Wiki). The development of the individual EURAD KM documents is performed by recognised experts. These experts will share their view on the most relevant knowledge on a specific topic, highlighting safety functions and operational aspects. Additionally, signposting to pre-existing documents is performed (State-of-the-Art Documents, Scientific Papers, etc.). The hierarchy of the works for the KM documents (Theme Overview, Domain Insight, State-of-Knowledge, Guidance) is closely linked to the generic EURAD Roadmap/GBS (Goals Breakdown Structure). It provides a hierarchical structure that facilitates definition, organisation and communication of topics. All of this allows knowledge to be captured and presented with the level of detail that is required by the end-user, from a broad overview down to an increasing level of detail (pyramid of knowledge). To ensure the quality and consistency of the documents with the overall EURAD KM approach, quality assurance and editorial procedures are applied. Collection of end-user feedback will aid the optimisation and further development of the KM activities. 
To facilitate the transfer of knowledge, the EURAD KM programme goes beyond documents and strives to facilitate exchange between people and signpost to other resources, such as Training and Mobility activities (also organised by EURAD Work Package 13 Training & Mobility) or Communities of Practice. All these activities will contribute to a useful and end-user-friendly EURAD KM programme that is designed to be operational well beyond the runtime of EURAD-1. This presentation will provide further insight into the approaches, status of work and an outlook on future activities that will support member states with the implementation of their national programmes.
10

Hering, Hannah, Beth Effeney, Carole Brady, and Catriona Hargrave. "An evaluation of ankle and foot bolus in paediatric modulated arc total body irradiation (MATBI)." Journal of Medical Radiation Sciences, March 12, 2024. http://dx.doi.org/10.1002/jmrs.780.

Abstract:
Introduction: This retrospective planning study aimed to evaluate the role of bolus in achieving dose uniformity in the ankles and feet of paediatric patients undergoing Modulated Arc Total Body Irradiation (MATBI) treatment, and to identify patient factors that may negate or warrant its use. Methods: The clinically treated plans of 20 paediatric patients who received MATBI treatment utilising ankle and foot bolus (Bolus plan) were compared with two retrospectively generated plans: a plan with bolus removed and no re-optimisation (No Bolus plan), and a re-optimised plan without bolus attempting to achieve equal dosimetry to the clinical plan via monitor unit adjustment (MU plan). Descriptive statistics were used to evaluate the dose uniformity criterion of ±10% coverage of the reference dose (RD) for each subregion of the ankle and foot for the three plans. The impact of patient height, weight, and age at the time of treatment was evaluated using Spearman's correlation. Results: Variation in doses >10% RD was minimal across the three plans, with an average D1cc difference <0.4 Gy. For the ankle and foot regions in the Bolus plans, the volume receiving at least 90% of the RD (V90) was on average >92%. In the No Bolus and MU plans, there was an average reduction of 24.5% and 23.2% in V90 coverage respectively in the toes. Spearman's correlation suggests height has the strongest relationship to D1cc. Conclusion: This study validated the continued use of ankle and foot bolus to achieve dosimetric goals for paediatric MATBI treatments, particularly V90 coverage, across all heights.
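The two headline metrics in this abstract, V90 coverage and Spearman's rank correlation, are straightforward to compute from voxel dose values. A sketch with toy numbers (not the study's data; ties in ranks are not handled):

```python
def v90(doses, reference_dose):
    # Percentage of voxels receiving at least 90% of the reference dose.
    threshold = 0.9 * reference_dose
    hits = sum(1 for d in doses if d >= threshold)
    return 100.0 * hits / len(doses)

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of ranks (no ties assumed).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In the study's terms, `spearman` would be applied to patient height against D1cc across the 20 plans.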
11

Botha, Frederik. "Sugarcane RD&E: over managed and underperforming?" Sugar Industry, February 2017, 104–11. http://dx.doi.org/10.36961/si18168.

Abstract:
In general, few productivity improvements in sugarcane have occurred during the past three decades. At the same time, production costs have increased and production statistics reflect decreased yields globally. In comparison to the ‘golden years’ of new technology and improved germplasm in the second half of the previous century, little more than optimisation of existing practices has emerged from the past two decades. Given the slowdown in new technology delivery, it is not surprising that many industries have placed more scrutiny on how they manage their Research & Development institutions and investments. This ‘slowdown’ has created a perception that poor management of research projects and programs by scientists is at the core of the problem. This has led to the introduction of ‘real’ managers and the subsequent management of R&D as if it were a ‘normal’ production and sales business through well-established ‘business models’. Strong emphasis has been placed on project selection, project management and minimising risk. Research, especially in the discovery phase, is a very high-risk endeavour, and a high proportion of all projects fail. Institutions that have a low appetite for risk quickly run out of new technology innovation. Because of this inability to predict, a discovery project cannot easily accommodate management issues such as budgeting, milestone definition and timeframes. Managers generally prefer D and Extension over R because of the higher predictability and lower perceived failure rate. The key to proper management of R&D is a recognition that researchers and managers operate under very different codes of conduct. If this is not properly managed, then conflict between researchers and the rest of the business follows. It has become customary to view RD&E as a unit following a ‘systems’ approach. Despite the obvious advantages of this approach, it often fails to recognise the most significant shortfall(s) in the value chain. 
This practice can unnecessarily inflate costs and slow project progress, and it depends on consensus, which tends to favour the lowest common denominator or the more vocal team members. Consensus and innovation tend to be opposing objectives, as innovation requires an ‘outside the box’ mindset. Consequently, innovation can be stifled by this approach. Peer review is a great tool for measuring progress in projects and selecting projects for development; it is not suited to the selection of new innovative ideas. With no obvious improvement in technology delivery and adoption, it is timely to ask whether the current approaches are achieving their objectives. In addressing this question, it is important to look at the global evolution of R&D models and modern trends in highly innovative businesses. Instead of trying to ensure that every research project entering the technology funnel delivers a product, a greater emphasis is needed on creating an innovative environment where all role players are focused on key strategic objectives, and all research results are seen as key learnings for future deployment.
12

Sciortino, Davide Domenico, Fabrizio Bonatesta, Edward Hopkins, Daniel Bell, and Mark Cary. "A systematic approach to calibrate spray and break-up models for the simulation of high-pressure fuel injections." International Journal of Engine Research, November 3, 2021, 146808742110507. http://dx.doi.org/10.1177/14680874211050787.

Abstract:
A novel calibration methodology is presented to accurately predict the fundamental characteristics of high-pressure fuel sprays for Gasoline Direct Injection (GDI) applications. The model was developed within the Siemens Simcenter STAR-CD 3D CFD software environment and used the Lagrangian–Eulerian solution scheme. The simulations were carried out in a quiescent, constant-volume computational vessel to reproduce the real spray testing environment. A combination of statistical and optimisation methods was used for spray model selection and calibration, and the process was supported by a wide range of experimental data. A comparative study was conducted between the two most commonly used models for fuel atomisation: the Kelvin–Helmholtz/Rayleigh–Taylor (KH–RT) and Reitz–Diwakar (RD) break-up models. The Rosin–Rammler (RR) mono-modal droplet size distribution was tuned to assign initial spray characteristics at the critical nozzle exit location. A half-factorial design was used to reveal how the various model calibration factors influence the spray properties, leading to the selection of the dominant ones. Numerical simulations of the injection process were carried out based on space-filling Design of Experiments (DoE) schedules, which used the dominant factors as input variables. Statistical regression and nested optimisation procedures were then applied to define the optimal levels of the model calibration factors. The method aims to give an alternative to the widely used trial-and-error approach and unveils the correlation between calibration factors and spray characteristics. The results show the importance of the initial droplet size distribution and secondary break-up coefficients in accurately calibrating the entire spray process. RD outperformed KH–RT in terms of prediction when comparing numerical spray tip penetration and droplet size characteristics to their experimental counterparts. 
The calibrated spray model was able to correctly predict the spray properties over a wide range of injection pressures. The work presented in this paper is part of the APC6 DYNAMO project led by Ford Motor Company.
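The Rosin–Rammler distribution mentioned in the abstract has CDF F(d) = 1 − exp(−(d/X)^q) and can be sampled by inverting that CDF. In the sketch below, the reference diameter X and spread exponent q are placeholders, not the paper's calibrated STAR-CD settings.

```python
import math
import random

def sample_rosin_rammler(x_ref, q, n, seed=0):
    # Invert F(d) = 1 - exp(-(d / x_ref)**q): d = x_ref * (-ln(1 - u))**(1/q).
    rng = random.Random(seed)
    return [x_ref * (-math.log(1.0 - rng.random())) ** (1.0 / q)
            for _ in range(n)]

# Placeholder values: 20 micron reference diameter, spread exponent 3.
diameters = sample_rosin_rammler(20e-6, 3.0, 5000)
```

For q = 3 the mean diameter is x_ref · Γ(4/3), about 0.89 · x_ref, which is the kind of moment a calibration would tune against measured droplet sizes.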
13

Şenol, Zeynep Mine, Serap Çetinkaya, and Hasan Arslanoglu. "Recycling of Labada (Rumex) biowaste as a value-added biosorbent for rhodamine B (Rd-B) wastewater treatment: biosorption study with experimental design optimisation." Biomass Conversion and Biorefinery, January 14, 2022. http://dx.doi.org/10.1007/s13399-022-02324-4.

14

Şenol, Zeynep Mine, Serap Çetinkaya, and Hasan Arslanoglu. "Correction to: Recycling of Labada (Rumex) biowaste as a value-added biosorbent for rhodamine B (Rd-B) wastewater treatment: biosorption study with experimental design optimisation." Biomass Conversion and Biorefinery, February 4, 2022. http://dx.doi.org/10.1007/s13399-022-02412-5.

15

Toran, Marc Sauchelli, Patricia Fernández Labrador, Juan Francisco Ciriza, Yeray Asensio, André Reigersman, Juan Arevalo, Frank Rogalla, and Victor M. Monsalvo. "Membrane-Based Processes to Obtain High-Quality Water From Brewery Wastewater." Frontiers in Chemical Engineering 3 (September 14, 2021). http://dx.doi.org/10.3389/fceng.2021.734233.

Abstract:
Water reuse is a safe and often the least energy-intensive method of providing water from non-conventional sources in water-stressed regions. Although public perception can be a challenge, water reuse is gaining acceptance. Recent advances in membrane technology allow for the reclamation of wastewater through the production of high-quality treated water, including potable reuse. This study presents an in-depth evaluation of a combination of membrane-based tertiary processes for application in the reuse of brewery wastewater, and is one of the few studies that evaluates long-term membrane performance at the pilot scale. Two different advanced tertiary treatment trains were tested with secondary wastewater from a brewery wastewater treatment plant: (A) ultrafiltration (UF) and reverse osmosis (RO), and (B) ozonation, coagulation, microfiltration with ceramic membranes (MF) and RO. Three specific criteria were used for membrane comparison: 1) pilot plant optimisation to identify ideal operating conditions, 2) Clean-In-Place (CIP) procedures to restore permeability, and 3) the final water quality obtained. Both the UF and MF membranes were operated at increasing fluxes and filtration intervals, with alternating phases of backwash (BW) and chemically enhanced backwash (CEB) to control fouling. Operation of the polymeric UF membranes was optimised at a flux of 25–30 LMH with 15–20 min of filtration time to obtain longer production periods and avoid frequent CIP membrane cleaning procedures. The combination of ozone and coagulation with ceramic MF membranes resulted in high flux values of up to 120 LMH with CEB:BW ratios of 1:4 to 1:10. Coagulation doses of 3–6 ppm were required to deal with the high concentrations of polyphenols (coagulation inhibitors) in the feed, but higher concentrations led to increasing fouling resistance of the MF membrane. Varying the ozone concentration stepwise from 0 to 25 mg/L had no noticeable effect on coagulation. 
The most effective cleaning strategy was found to be a combination of 2000 mg/L NaOCl followed by 5% HCl, which enabled permeability recovery of up to 400 LMH·bar−1. Both the polymeric UF and ceramic MF membranes produced effluents that fulfil the limits of the national regulatory framework for reuse in industrial services (RD 1620/2007). Coupling to the RO units in both tertiary trains led to further water polishing and an improved treated water quality.
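The permeability figure quoted in this abstract is simply flux per unit transmembrane pressure, and CIP effectiveness can be expressed as the fraction of clean-membrane permeability restored. A minimal sketch; the numeric values in the test are illustrative, not the study's measurements.

```python
def permeability(flux_lmh, tmp_bar):
    # Membrane permeability (LMH/bar): flux divided by transmembrane pressure.
    return flux_lmh / tmp_bar

def cip_recovery_pct(perm_after_cip, perm_clean):
    # Permeability restored by a clean-in-place, as % of clean-membrane value.
    return 100.0 * perm_after_cip / perm_clean
```

For example, a flux of 120 LMH at 0.3 bar corresponds to 400 LMH/bar, the order of the recovery reported above.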
16

Conti, Olivia. "Disciplining the Vernacular: Fair Use, YouTube, and Remixer Agency." M/C Journal 16, no. 4 (August 11, 2013). http://dx.doi.org/10.5204/mcj.685.

Abstract:
Introduction The research from which this piece derives explores political remix video (PRV), a genre in which remixers critique dominant discourses and power structures through guerrilla remixing of copyrighted footage (“What Is Political Remix Video?”). Specifically, I examined the works of political video remixer Elisa Kreisinger, whose queer remixes of shows such as Sex and the City and Mad Men received considerable attention between 2010 and the present. As a rhetoric scholar, I am attracted not only to the ways that remix functions discursively but also the ways in which remixers are constrained in their ability to argue, and what recourse they have in these situations of legal and technological constraint. Ultimately, many of these struggles play out on YouTube. This is unsurprising: many studies of YouTube and other user-generated content (UGC) platforms focus on the fact that commercial sites cannot constitute utopian, democratic, or free environments (Hilderbrand; Hess; Van Dijck). However, I find that, contrary to popular belief, YouTube’s commercial interests are not the primary factor limiting remixer agency. Rather, United States copyright law as enacted on YouTube has the most potential to inhibit remixers. This has led to many remixers becoming advocates for fair use, the provision in the Copyright Act of 1976 that allows for limited use of copyrighted content. With this in mind, I decided to delve more deeply into the framing of fair use by remixers and other advocates such as the Electronic Frontier Foundation (EFF) and the Center for Social Media. In studying discourses of fair use as they play out in the remix community, I find that the framing of fair use bears a striking similarity to what rhetoric scholars have termed vernacular discourse—a discourse emanating from a small segment of the larger civic community (Ono and Sloop 23). 
The vernacular is often framed as that which integrates the institutional or mainstream while simultaneously asserting its difference through appropriation and subversion. A video qualifies as fair use if it juxtaposes source material in a new way for the purposes of critique. In turn, a vernacular text asserts its “vernacularity” by taking up parts of pre-existing dominant institutional discourses in a way that resonates with a smaller community. My argument is that this tension between institutional and vernacular gives political remix video a multivalent argument—one that presents itself both in the text of the video itself as well as in the video’s status as a fair use of copyrighted material. Just as fair use represents the assertion of creator agency against unfair copyright law, vernacular discourse represents the assertion of a localised community within a world dominated by institutional discourses. In this way, remixers engage rights holders and other institutions in a pleasurable game of cat and mouse, a struggle to expose the boundaries of draconian copyright law. YouTube’s Commercial Interests: YouTube’s commercial interests operate at a level potentially invisible to the casual user. While users provide YouTube with content, they also provide the site with data—both metadata culled from their navigations of the site (page views, IP addresses) as well as member-provided data (such as real name and e-mail address). YouTube mines this data for a number of purposes—anything from interface optimisation to targeted advertising via Google’s AdSense. Users also perform a certain degree of labour to keep the site running smoothly, such as reporting videos that violate the Terms of Service, giving videos the thumbs up or thumbs down, and reporting spam comments. As such, users involved in YouTube’s participatory culture are also necessarily involved in the site’s commercial interests. 
While there are legitimate concerns regarding the privacy of personal information, especially after Google introduced policies in 2012 to facilitate a greater flow of information across all of their subsidiaries, it does not seem that this has diminished YouTube’s popularity (“Google: Privacy Policy”).Despite this, some make the argument that users provide the true benefit of UGC platforms like YouTube, yet reap few rewards, creating an exploitative dynamic (Van Dijck, 46). Two assumptions seem to underpin this argument: the first is that users do not desire to help these platforms prosper, the second is that users expect to profit from their efforts on the website. In response to these arguments, it’s worth calling attention to scholars who have used alternative economic models to account for user-platform coexistence. This is something that Henry Jenkins addresses in his recent book Spreadable Media, largely by focusing on assigning alternate sorts of value to user and fan labour—either the cultural worth of the gift, or the satisfaction of a job well done common to pre-industrial craftsmanship (61). However, there are still questions of how to account for participatory spaces in which labours of love coexist with massively profitable products. In service of this point, Jenkins calls up Lessig, who posits that many online networks operate as hybrid economies, which combine commercial and sharing economies. In a commercial economy, profit is the primary consideration, while a sharing economy is composed of participants who are there because they enjoy doing the work without any expectation of compensation (176). The strict separation between the two economies is, in Lessig’s estimation, essential to the hybrid economy’s success. While it would be difficult to incorporate these two economies together once each had been established, platforms like YouTube have always operated under the hybrid principle. 
YouTube’s users provide the site with its true value (through their uploading of content, provision of metadata, and use of the site), yet users do not come to YouTube with these tasks in mind—they come to YouTube because it provides an easy-to-use platform by which to share amateur creativity, and a community with whom to interact. Additionally, YouTube serves as the primary venue where remixers can achieve visibility and viral status—something Elisa Kreisinger acknowledged in our interviews (2012). However, users who are not concerned with broad visibility as much as with speaking to particular viewers may leave YouTube if they feel that the venue does not suit their content. Some feminist fan vidders, for instance, have withdrawn from YouTube due to what they perceived as a community who didn’t understand their work (Kreisinger, 2012). Additionally, Kreisinger ended up garnering many more views of her Queer Men remix on Vimeo due simply to the fact that the remix’s initial upload was blocked via YouTube’s Content ID feature. By the time Kreisinger had argued her case with YouTube, the Vimeo link had become the first stop for those viewing and sharing the remix, which has received 72,000 views to date (“Queer Men”).

Fair Use, Copyright, and Content ID

This instance points to the challenge that remixers face when dealing with copyright on YouTube, a site whose processes are not designed to accommodate fair use. Specifically, Title II, Section 512 of the DMCA (the Digital Millennium Copyright Act, passed in 1998) states that certain websites may qualify as “safe harbours” for copyright infringement if users upload the majority of the content to the site, or if the site is an information location service. These sites are insulated from copyright liability as long as they cooperate to some extent with rights holders.
A common objection to Section 512 is that it requires media rights holders to police safe harbours in search of infringing content, rather than placing the onus on the platform provider (Meyers 939). In order to cooperate with Section 512 and rights holders, YouTube initiated the Content ID system in 2007. This system offers rights holders the ability to find and manage their content on the site by creating archives of footage against which user uploads are checked, allowing rights holders to automatically block, track, or monetise uses of their content (it is also worth noting that rights holders can make these responses country-specific) (“How Content ID Works”). At the current time, YouTube has over 15 million reference files against which it checks uploads (“Statistics - YouTube”). Thus, it’s fairly common for uploaded work to get flagged as a violation, especially when that work is a remix of popular institutional footage. If an upload is flagged by the Content ID system, the user can dispute the match, at which point the rights holder has the opportunity to either allow the video through, or to issue a DMCA takedown notice. They can also sue at any point during this process (“A Guide to YouTube Removals”). Content ID matches are relatively easy to dispute and do not generally require legal intervention. However, disputing these automatic takedowns requires users to be aware of their rights to fair use, and requires rights holders to acknowledge a fair use (“YouTube Removals”). This is only compounded by the fact that fair use is not a clearly defined right, but rather a vague provision relying on a balance between four factors: the purpose of the use, character of the work, the amount used, and the effect on the market value of the original (“US Copyright Office–Fair Use”). As Aufderheide and Jaszi observed in 2008, the rejection of videos for Content ID matches combined with the vagaries of fair use has a chilling effect on user-generated content. 
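The matching workflow described above—reference archives supplied by rights holders, uploads checked against them, and a per-match policy of blocking, tracking, or monetising—can be illustrated with a deliberately simplified sketch. All names here are hypothetical, and the real Content ID system matches perceptual audio and video fingerprints rather than exact hashes; this is only a toy model of the decision flow, not YouTube’s implementation.

```python
# Toy illustration of a Content ID-style matching workflow.
# Hypothetical names throughout; the real system uses perceptual
# audio/video fingerprints, not exact segment hashes.
import hashlib

# Rights holders register reference segments; each fingerprint maps
# to the policy they chose: "block", "track", or "monetise".
reference_db = {}


def fingerprint(segment: bytes) -> str:
    """Stand-in for a perceptual fingerprint: here, a plain hash."""
    return hashlib.sha256(segment).hexdigest()


def register_reference(segments, policy):
    """A rights holder adds reference material and its match policy."""
    for seg in segments:
        reference_db[fingerprint(seg)] = policy


def check_upload(segments):
    """Return the policy for the first matched segment, else None."""
    for seg in segments:
        policy = reference_db.get(fingerprint(seg))
        if policy is not None:
            return policy
    return None


# A (hypothetical) rights holder blocks all uses of its footage.
register_reference([b"show-clip-1", b"show-clip-2"], "block")

# A remix containing one matched segment is flagged with that policy;
# wholly original footage passes through unmatched.
print(check_upload([b"original-footage", b"show-clip-2"]))  # block
print(check_upload([b"original-footage"]))                  # None
```

The sketch makes the article’s point concrete: the match is automatic and policy-driven, with no step at which fair use is assessed—any fair-use determination happens only afterwards, through the dispute process.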
Rights Holders versus Remixers

Rights holders’ objections to Section 512 illustrate the ruling power dynamic in current intellectual property disputes: power rests with institutional rights-holding bodies (the RIAA, the MPAA), who assert their dominance over DMCA safe harbours such as YouTube (which must cooperate to stay in business), who, in turn, exert power over remixers (the lowest on the food chain, so to speak). Beyond the observed chilling effect of Content ID, remix on YouTube is shot through with discursive struggle between these rights-holding bodies and remixers attempting to express themselves and reach new communities. However, this has led political video remixers to become especially vocal when arguing for their uses of content. For instance, in the spring of 2009, Elisa Kreisinger curated a show entitled “REMOVED: The Politics of Remix Culture,” in which blocked remixes screened alongside the remixers’ correspondence with YouTube. Kreisinger writes that each of these exchanges illustrates the dynamic between rights holders and remixers: “Your video is no longer available because FOX [or another rights-holding body] has chosen to block it” (“Remixed/Removed”). Additionally, as Jenkins notes, even Content ID on YouTube is only made available to the largest rights holders—smaller companies must still go through an official DMCA takedown process to report infringement (Spreadable 51). In sum, though recent technological developments may give the appearance of democratising access to content, when it comes to policing UGC, technology has made it easier for the largest rights holders to stifle the creation of content.

Additionally, it has been established that rights holders do occasionally use takedowns abusively, and recent court cases—specifically Lenz v. Universal Music Corp.—have established the need for rights holders to assess fair use in order to make a “good faith” assertion that users intend to infringe copyright prior to issuing a takedown notice.
However, as Joseph M. Miller notes, the ruling fails to rebalance the burdens and incentives between rights holders and users (1723). This means that while rights holders are supposed to take fair use into account prior to issuing takedowns, there is no process in place that either effectively punishes rights holders who abuse copyright, or allows users to defend themselves without the possibility of massive financial loss (1726). As such, the system currently in place does not disallow or discourage features like Content ID, though cases like Lenz v. Universal indicate a push towards rebalancing the burden of determining fair use. In an effort to turn the tables, many have begun arguing for users’ rights and attempting to parse fair use for the layperson. The Electronic Frontier Foundation (EFF), for instance, has espoused an “environmental rhetoric” of fair use, casting intellectual property as a resource for users (Postigo 1020). Additionally, they have created practical guidelines for UGC creators dealing with DMCA takedowns and Content ID matches on YouTube. The Center for Social Media has also produced a number of fair use guides tailored to different use cases, one of which targeted online video producers. All of these efforts have a common goal: to educate content creators about the fair use of copyrighted content, and then to assert their use as fair in opposition to large rights-holding institutions (though they caution users against unfair uses of content or making risky legal moves that could lead to lawsuits). 
In relation to remix specifically, this means that remixers must differentiate themselves from institutional, commercial content producers, standing up both for the argument contained in their remix and for their fair use of copyrighted content.

In their “Code of Best Practices for Fair Use in Online Video,” the Center for Social Media note that an online video qualifies as a fair use if (among other things) it critiques copyrighted material and if it “recombines elements to make a new work that depends for its meaning on (often unlikely) relationships between the elements” (8). These two qualities are also two of the defining qualities of political remix video. For instance, they write that work meets the second criterion if it creates “new meaning by juxtaposition,” noting that in these cases “the recombinant new work has a cultural identity of its own and addresses an audience different from those for which its components were intended” (9). Remixes that use elements of familiar sources in unlikely combinations, such as those made by Elisa Kreisinger, generally seek to reach an audience who are familiar with the source content, but also object to it. Sex and the City, for instance, while it initially seemed willing to take on previously “taboo” topics in its exploration of dating in Manhattan, ended with each of the heterosexual characters paired with an opposite-sex partner, and forays from this heteronormative narrative were contained either within one-off episodes or in tokenised gay characters. For this reason, Kreisinger noted that the intended audience for Queer Carrie was the queer and feminist viewers of Sex and the City who felt that the show was overly normative and exclusionary (Kreisinger, Art:21). As a result, the target audience of these remixes is different from the target audience of the source material—though the full nuance of the argument is best understood by those familiar with the source.
Thus, the remix affirms the segment of the viewing community who saw only tokenised representations of their identity in the source text, and in so doing offers a critique of the original’s heteronormative focus.

Fair Use and the Vernacular

Vernacular discourse, as broadly defined by Kent A. Ono and John M. Sloop, refers to discourses that “emerge from discussions between members of self-identified smaller communities within the larger civic community.” It operates partially through appropriating dominant discourses in ways better suited to the vernacular community, through practices of pastiche and cultural syncretism (23). In an effort to better describe the intricacies of this type of discourse, Robert Glenn Howard theorised a hybrid “dialectical vernacular” that oscillates between institutional and vernacular discourse. This hybridity arises from the fact that the institutional and the vernacular are fundamentally inseparable, the vernacular establishing its meaning by asserting itself against the institutional (Howard, Toward 331). When put into use online, this notion of a “dialectical vernacular” is particularly interesting as it refers not only to the content of vernacular messages but also to their means of production. Howard notes that discourse embodying the dialectical vernacular is by nature secondary to institutional discourse, that the institutional must be clearly “structurally prior” (Howard, Vernacular 499).
With this in mind, it is unsurprising that political remix video—which asserts its secondary nature by calling upon pre-existing copyrighted content while simultaneously reaching out to smaller segments of the civic community—would qualify as a vernacular discourse.

The notion of an institutional source’s structural prevalence also echoes throughout work on remix, both in practical guides such as the Center for Social Media’s “Best Practices” and in more theoretical takes on remix, like Eduardo Navas’ essay “Turbulence: Remixes + Bonus Beats,” in which he writes that:

In brief, the remix when extended as a cultural practice is a second mix of something pre-existent; the material that is mixed for a second time must be recognized, otherwise it could be misunderstood as something new, and it would become plagiarism […] Without a history, the remix cannot be Remix.

An elegant theoretical concept, this becomes muddier when considered in light of copyright law. If the history of remix is what gives it its meaning—the source text from which it is derived—then it is this same history that makes a fair use remix vulnerable to DMCA takedowns and other forms of discipline on YouTube. However, as per the criteria outlined by the Center for Social Media, it is also from this ironic juxtaposition of institutional sources that the remix object establishes its meaning, and thus its vernacularity. In this sense, the force of a political remix video’s argument is in many ways dependent on its status as an object in peril: vulnerable to the force of a law that has not yet swung in its favour, yet subversive nonetheless.

With this in mind, YouTube and other UGC platforms represent a fraught layer of mediation between institutional and vernacular. As a site for the sharing of amateur video, YouTube has the potential to affirm small communities as users share similar videos, follow one particular channel together, or comment on videos posted by people in their networks.
However, YouTube’s interface (rife with advertisements, constantly reminding users of its affiliation with Google) and its cooperation with rights holders establish it as an institutional space. As such, remixes on the site are already imbued with the characteristic hybridity of the dialectical vernacular. This is especially true when the remixers (as in the case of PRV) have made the conscious choice to advocate for fair use at the same time that they distribute remixes dealing with other themes and resonating with other communities.

Conclusion

Political remix video sits at a fruitful juncture with regard to copyright as well as vernacularity. Like almost all remix, it makes its meaning through juxtaposing sources in a unique way, calling upon viewers to think about familiar texts in a new light. This creation invokes a new audience—a quality that makes it both vernacular and also a fair use of content. Given that PRV is defined by the “guerrilla” use of copyrighted footage, it has the potential to stand as a political statement outside of the thematic content of the remix simply due to the nature of its composition. This gives PRV tremendous potential for multivalent argument, as a video can simultaneously represent a marginalised community while advocating for copyright reform. This is only reinforced by the fact that many political video remixers have become vocal in advocating for fair use, asserting the strength of their community and their common goal.

In addition to this argumentative richness, PRV’s relation to fair use and vernacularity exposes the complexity of the remix form: it continually oscillates between institutional affiliations and smaller vernacular communities. However, the hybridity of these remixes produces tension, much of which manifests on YouTube, where videos are easily responded to and challenged by both institutional and vernacular authorities.
In addition, a tension exists in the remix text itself between the source and the new, remixed message. Further research should attend to these areas of tension, while also exploring the tenacity of the remix community and their ability to advocate for themselves while circumventing copyright law.

References

“About Political Remix Video.” Political Remix Video. 15 Feb. 2012. ‹http://www.politicalremixvideo.com/what-is-political-remix/›.

Aufderheide, Patricia, and Peter Jaszi. Reclaiming Fair Use: How to Put Balance Back in Copyright. Chicago: U of Chicago P, 2008. Kindle.

“Code of Best Practices for Fair Use in Online Video.” The Center for Social Media, 2008.

Van Dijck, José. “Users like You? Theorizing Agency in User-Generated Content.” Media Culture Society 31 (2009): 41-58.

“A Guide to YouTube Removals.” The Electronic Frontier Foundation. 15 June 2013 ‹https://www.eff.org/issues/intellectual-property/guide-to-YouTube-removals›.

Hilderbrand, Lucas. “YouTube: Where Cultural Memory and Copyright Converge.” Film Quarterly 61.1 (2007): 48-57.

Howard, Robert Glenn. “The Vernacular Web of Participatory Media.” Critical Studies in Media Communication 25.5 (2008): 490-513.

Howard, Robert Glenn. “Toward a Theory of the World Wide Web Vernacular: The Case for Pet Cloning.” Journal of Folklore Research 42.3 (2005): 323-60.

“How Content ID Works.” YouTube. 21 June 2013. ‹https://support.google.com/youtube/answer/2797370?hl=en›.

Jenkins, Henry, Sam Ford, and Joshua Green. Spreadable Media: Creating Value and Meaning in a Networked Culture. New York: New York UP, 2013.

Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006.

Kreisinger, Elisa. Interview with Nick Briz. Art:21. Art:21, 30 June 2011. 21 June 2013.

Kreisinger, Elisa. “Queer Video Remix and LGBTQ Online Communities.” Transformative Works and Cultures 9 (2012). 19 June 2013 ‹http://journal.transformativeworks.org/index.php/twc/article/view/395/264›.

Kreisinger, Elisa. Pop Culture Pirate. ‹http://www.popculturepirate.com/›.

Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin Books, 2008. PDF.

Meyers, B.G. “Filtering Systems or Fair Use? A Comparative Analysis of Proposed Regulations for User-Generated Content.” Cardozo Arts & Entertainment Law Journal 26.3: 935-56.

Miller, Joseph M. “Fair Use through the Lenz of § 512(c) of the DMCA: A Preemptive Defense to a Premature Remedy?” Iowa Law Review 95 (2009-2010): 1697-1729.

Navas, Eduardo. “Turbulence: Remixes + Bonus Beats.” New Media Fix 1 Feb. 2007. 10 June 2013 ‹http://newmediafix.net/Turbulence07/Navas_EN.html›.

Ono, Kent A., and John M. Sloop. Shifting Borders: Rhetoric, Immigration and California’s Proposition 187. Philadelphia: Temple UP, 2002.

“Privacy Policy – Policies & Principles.” Google. 19 June 2013 ‹http://www.google.com/policies/privacy/›.

Postigo, Hector. “Capturing Fair Use for the YouTube Generation: The Digital Rights Movement, the Electronic Frontier Foundation, and the User-Centered Framing of Fair Use.” Information, Communication & Society 11.7 (2008): 1008-27.

“Statistics – YouTube.” YouTube. 21 June 2013 ‹http://www.youtube.com/yt/press/statistics.html›.

“US Copyright Office: Fair Use.” U.S. Copyright Office. 19 June 2013 ‹http://www.copyright.gov/fls/fl102.html›.

“YouTube Help.” YouTube FAQ. 19 June 2013 ‹http://support.google.com/youtube/?hl=en&topic=2676339&rd=2›.
