Academic literature on the topic 'From lab to full scale design'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'From lab to full scale design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "From lab to full scale design"

1

Flapper, T. G., N. J. Ashbolt, A. T. Lee, and M. O'Neill. "From the lab to full-scale SBR operation: treating high strength and variable industrial wastewaters." Water Science and Technology 43, no. 3 (February 1, 2001): 347–54. http://dx.doi.org/10.2166/wst.2001.0156.

Full text
Abstract:
This paper describes the path taken from client objectives through laboratory studies and detailed design to full-scale SBR operation and current research. Conventional municipal design principles have often been used to develop treatment processes for industrial wastewaters. The use of scientific trials to test design criteria offers the client a “tailor made” design fit for their particular wastewater character. In this project, a waste management company wished to upgrade their physical-chemical treatment plant to incorporate a biological reactor for treating a range of industrial wastewaters. Laboratory-scale trials were undertaken to determine appropriate design criteria for a full-scale biological process. These laboratory studies indicated that conventional design criteria were not appropriate and that an SBR configuration was optimal compared with an IDAR configuration. It was also found that a novel fungal:bacterial mixed liquor consortium developed, resulting in good effluent quality and settling properties. The treatment plant was constructed and made operational within a tight timeframe and budget, allowing the client to take advantage of a commercial opportunity. The plant has been operating since 1997 and meets its discharge conditions. By combining scientific studies with engineering principles, the end-user obtained a complete treatment plant to meet their specific needs. A further benefit of the laboratory trials is current research into the development of a fungal:bacterial SBR to treat industrial wastewaters. This offers ongoing knowledge to the operational full-scale SBR.
APA, Harvard, Vancouver, ISO, and other styles
2

Malmqvist, Åsa, Lars Gunnarsson, and Christer Torstenon. "Lab and pilot scale tests as tools for upgrading - comparison with full scale results." Water Science and Technology 37, no. 9 (May 1, 1998): 25–31. http://dx.doi.org/10.2166/wst.1998.0336.

Full text
Abstract:
Parameters such as hydraulic retention time, organic load, maximum COD removal, sludge characteristics and optimal nutrient dosage can be determined by simulation in small scale models of the chosen process. Laboratory tests are the natural first step when considering upgrading, or designing a new, biological treatment plant. The potential for a biological treatment can be examined at a low cost and within a minimum of time, often through parallel testing of different treatment methods. Once a suitable process configuration has been found, lab scale tests may well be used for optimizing the process and obtaining design data, thus minimizing the need for more expensive tests at larger scale. The principal reason for a pilot plant test is the possibility of investigating natural variations in wastewater composition and the effect these will have on process stability. The use of laboratory and pilot scale tests is here illustrated by the work carried out prior to the upgrading of the treatment plant at Nyboholm paper mill. A description of the upgraded full scale installation, consisting of both chemical treatment and a suspended-carrier biofilm process, is included, and a comparison between results from lab, pilot and full scale treatment is made.
APA, Harvard, Vancouver, ISO, and other styles
3

Krause, S., P. Cornel, and M. Wagner. "Comparison of different oxygen transfer testing procedures in full-scale membrane bioreactors." Water Science and Technology 47, no. 12 (June 1, 2003): 169–76. http://dx.doi.org/10.2166/wst.2003.0643.

Full text
Abstract:
Membrane bioreactors (MBRs) for wastewater treatment offer the advantage of a complete removal of solids from the effluent. The secondary clarifier is replaced by a membrane filtration and therefore high biomass concentrations (MLSS) in the reactor are possible. The design of the aeration system is vital for an energy-efficient operation of any wastewater treatment plant. Hence the exact measurement of oxygen transfer rates (OTR) and α-values is important. For MBRs, the values reported in the literature differ considerably. The OTR can be measured using non-steady state methods or using the off-gas method. The non-steady state methods additionally require the determination of the respiration rate (oxygen uptake rate ≡ OUR), which usually is measured in lab scale units. As there are differences in OUR between lab scale and full scale measurements, off-gas tests (which do not require an additional respiration test) were performed in order to compare both methods at high MLSS concentrations. Both methods result in the same average value of OTR. Due to variations in loading and wastewater composition, variations of OTR over time can be resolved using the off-gas method. For the first time, a comparison of different oxygen transfer tests in full scale membrane bioreactors is presented.
APA, Harvard, Vancouver, ISO, and other styles
4

Nywening, J. P., H. Zhou, and H. Husain. "Comparison of mixed liquor filterability measured with bench and pilot-scale membrane bioreactors." Water Science and Technology 56, no. 6 (September 1, 2007): 155–62. http://dx.doi.org/10.2166/wst.2007.644.

Full text
Abstract:
Parallel experimental tests to measure mixed liquor filterability for submerged membrane bioreactors were conducted over a six month period using three ZW-500 pilot plants and a ZW-10 lab-scale filterability apparatus. Non-air sparged conditions during the tests yielded operation behaviour that was equivalent to dead-end filtration. The fouling resistance increased linearly with the intercepted mass until a critical point was reached at which point significant cake compression was induced and the resistance began to increase exponentially. Although the point of cake compression appears to be dependent on the membrane module design, similar resistance per unit solid mass intercepted per unit area (Rmass) values were observed when the same mixed liquor was filtered. Coupled with the established correlation between the Rmass and the critical flux, it is suggested that the filterability test results from a side-stream, lab-scale module may be used to predict fouling potential in a full scale MBR wastewater treatment system without interrupting the full-scale MBR operation.
APA, Harvard, Vancouver, ISO, and other styles
5

Miranda, Margarida, Cláudia Veloso, Catarina Cardoso, and Carla Vitorino. "From Lab to Upscale—Boosting Formulation Performance through In Vitro Technologies." Proceedings 78, no. 1 (December 1, 2020): 35. http://dx.doi.org/10.3390/iecp2020-08674.

Full text
Abstract:
Pre-stability studies carried out throughout the development of a diclofenac emulgel formulation have shown a clear decrease in the drug release rate. In order to address the root cause associated with this phenomenon, product historical data were retrieved and analyzed following a retrospective Quality by Design (rQbD) approach. The quality target product profile (QTPP) was established, and risk assessment tools were used to identify the most relevant parameters affecting formulation performance. These consisted of (i) mixing time, (ii) sodium hydroxide content and (iii) carbopol grade. Following a 2³ full factorial design, the pH, viscosity, in vitro release rate and cumulative amount of drug released at the end of the release experiment were selected as responses to statistically model the available data. It was observed that higher sodium hydroxide concentrations induce a decrease in viscosity, consequently resulting in a superior pharmaceutical performance. Moreover, as a secondary effect, a lower carbopol viscosity yields lower release outputs. The estimated models were used to define a feasible working region, which was further confirmed at an industrial scale. This work highlights the use of rQbD principles to achieve a greater product understanding. By doing so, specific strategies can be applied to product manufacture in order to consistently meet QTPP requirements.
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Xing, Tsung-Yi Ho, Wenzhong Guo, Bing Li, Krishnendu Chakrabarty, and Ulf Schlichtmann. "Computer-aided Design Techniques for Flow-based Microfluidic Lab-on-a-chip Systems." ACM Computing Surveys 54, no. 5 (June 2021): 1–29. http://dx.doi.org/10.1145/3450504.

Full text
Abstract:
As one of the most promising lab-on-a-chip systems, flow-based microfluidic biochips are being increasingly used for automatically executing various laboratory procedures in biology and biochemistry, such as enzyme-linked immunosorbent assay, point-of-care diagnosis, and so on. As manufacturing technology advances, the characteristic dimensions of biochip systems keep shrinking, and tens of thousands of microvalves can now be integrated into a coin-sized microfluidic platform, making conventional manual chip design no longer applicable. Accordingly, computer-aided design (CAD) of microfluidics has attracted considerable research interest in the EDA community over the past decade. This review article presents recent advances in the design automation of biochips, involving CAD techniques for architectural synthesis, wash optimization, testing, fault diagnosis, and fault-tolerant design. With the help of these CAD tools, chip designers can be relieved of the burden of complex, large-scale design tasks. Meanwhile, new chip architectures can be explored automatically to open new doors to meeting requirements from future large-scale biological experiments and medical diagnosis. We discuss key trends and directions for future research that could enable microfluidics to reach its full potential, thus further advancing the development and progression of the microfluidics industry.
APA, Harvard, Vancouver, ISO, and other styles
7

Sjostrom, Sharon, and Constance Senior. "Pilot testing of CO2 capture from a coal-fired power plant—Part 2: Results from 1-MWe pilot tests." Clean Energy 4, no. 1 (March 2020): 12–25. http://dx.doi.org/10.1093/ce/zkz034.

Full text
Abstract:
Using a 1-MWe slipstream pilot plant, solid-sorbent-based post-combustion CO2 capture was tested at a coal-fired power plant. Results from pilot testing were used to develop a preliminary full-scale commercial design. The sorbent selected for pilot-scale evaluation during this project consisted of an ion-exchange resin that incorporated amines covalently bonded to the substrate. A unique temperature-swing-absorption (TSA) process was developed that incorporated a three-stage fluidized-bed adsorber integrated with a single-stage fluidized-bed regenerator. Overall, following start-up and commissioning challenges that are often associated with first-of-a-kind pilots, the pilot plant operated as designed and expected, with a few key exceptions. The two primary exceptions were associated with: (i) handling characteristics of the sorbent, which were sufficiently different at operating temperature than at ambient temperature when design specifications were established with lab-scale testing; and (ii) CO2 adsorption in the transport line between the regenerator and adsorber that preloaded the sorbent with CO2 prior to entering the adsorber. Results from the pilot programme demonstrate that solid-sorbent-based post-combustion capture can be utilized to achieve 90% CO2 capture from coal-fired power plants.
APA, Harvard, Vancouver, ISO, and other styles
8

Ali, Mohammad, Li-Yuan Chai, Chong-Jian Tang, Ping Zheng, Xiao-Bo Min, Zhi-Hui Yang, Lei Xiong, and Yu-Xia Song. "The Increasing Interest of ANAMMOX Research in China: Bacteria, Process Development, and Application." BioMed Research International 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/134914.

Full text
Abstract:
Nitrogen pollution has created severe environmental problems and has increasingly become an important issue in China. Since the first discovery of ANAMMOX in the early 1990s, the related technology has become a promising as well as sustainable bioprocess for treating strong nitrogenous wastewater. Many Chinese research groups have concentrated their efforts on ANAMMOX research, including bacteria, process development, and application, during the past 20 years. A series of new and outstanding outcomes, including the discovery of new ANAMMOX bacterial species (Brocadia sinica), sulfate-dependent ANAMMOX bacteria (Anammoxoglobus sulfate and Bacillus benzoevorans), and the highest nitrogen removal performance worldwide (74.3–76.7 kg-N/m³/d) in lab-scale granule-based UASB reactors, were achieved. The characteristics, structure, packing pattern and floatation mechanism of the high-rate ANAMMOX granules in ANAMMOX reactors were also carefully illustrated by native researchers. Nowadays, some pilot- and full-scale ANAMMOX reactors have been constructed to treat different types of ammonium-rich wastewater, including monosodium glutamate wastewater, pharmaceutical wastewater, and leachate. The prime objective of the present review is to elucidate the ongoing ANAMMOX research in China from lab-scale to full-scale applications, to comparatively analyze and evaluate significant findings, and to set a direction for ushering ANAMMOX research toward its culmination.
APA, Harvard, Vancouver, ISO, and other styles
9

Basnayake, B. F. A. "Simulation of Lab-scale Leachate Treatment Bioreactor with Application of Logistic Growth Equation for Determining Design and Operational Parameters." International Journal of Scientific & Engineering Research 8, no. 1 (January 25, 2017): 1061–70. http://dx.doi.org/10.14299/ijser.2017.01.021.

Full text
Abstract:
A laboratory scale Leachate Treatment Bioreactor (LTB) was needed to determine the optimum design and operational parameters because of the poor performance of a full scale unit. In order to increase the lifespan of the LTB, coconut comb and rubber tyres were included in the partially decomposed Municipal Solid Waste (MSW) as biofilter material inside the reactor. A composite liner of clay and waste polythene was used to mineralize excess inorganic compounds. The parameter reductions were from 26,000 mg/L to 6,832 mg/L of Total Solids (TS), 6,230 mg/L to 2,930 mg/L of Total Dissolved Solids (TDS), 12,000 mg/L to 1,182.6 mg/L of Volatile Solids (VS), 14,000 mg/L to 4,410 mg/L of Total Fixed Solids (TFS) and 29,700 mg/L to 3,000 mg/L of Biochemical Oxygen Demand (BOD). The kinetic analysis using the logistic growth equation showed cyclic events, and the application of separating the growth and decay of microbes based on the Total Fixed Solids (TFS) gave a mineralization rate of 1.83 × 10² kg/m³ of leachate/m height of LTB/day for upscaling.
APA, Harvard, Vancouver, ISO, and other styles
10

Kasten, Georgia, Íris Duarte, Maria Paisana, Korbinian Löbmann, Thomas Rades, and Holger Grohganz. "Process Optimization and Upscaling of Spray-Dried Drug-Amino acid Co-Amorphous Formulations." Pharmaceutics 11, no. 1 (January 9, 2019): 24. http://dx.doi.org/10.3390/pharmaceutics11010024.

Full text
Abstract:
The feasibility of upscaling the formulation of co-amorphous indomethacin-lysine from lab-scale to pilot-scale spray drying was investigated. A 2² full factorial design of experiments (DoE) was employed at lab scale. The atomization gas flow rate (Fatom, from 0.5 to 1.4 kg/h) and outlet temperature (Tout, from 55 to 75 °C) were chosen as the critical process parameters. The obtained amorphization, glass transition temperature, bulk density, yield, and particle size distribution were chosen as the critical quality attributes. In general, the model showed low Fatom and high Tout to be beneficial for the desired product characteristics (a co-amorphous formulation with a low bulk density, high yield, and small particle size). In addition, only a low Fatom and high Tout led to the desired complete co-amorphization, while a minor residual crystallinity was observed with the other combinations of Fatom and Tout. Finally, upscaling to a pilot-scale spray dryer was carried out based on the DoE results; however, the drying gas flow rate and the feed flow rate were adjusted to account for the different drying chamber geometries. An increased likelihood to achieve complete amorphization, because of the extended drying chamber, and hence an increased residence time of the droplets in the drying gas, was found in the pilot scale, confirming the feasibility of upscaling spray drying as a production technique for co-amorphous systems.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "From lab to full scale design"

1

GIAGNORIO, MATTIA. "Membrane-based technologies for the production of high-quality water from contaminated sources: from lab experiments to full-scale system design." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2829687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

SALVIATI, SERGIO. "Thermochemical energy storage with salt hydrate composites: from materials design to lab-scale reactor validation." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2827706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Phillip, Edsel. "The design and construction of a pilot-scale compost reactor for the study of gas emissions from compost under different physical conditions." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=94965.

Full text
Abstract:
Composting is generally accepted as an environmentally benign process for organic waste disposal. However, when not properly managed, composting can result in the emission of toxic and environmentally hazardous gases, including CH4, NH3, N2O and CO. Due to the potential negative consequences of composting, there is a need to gain a better understanding of the physical conditions that affect these volatile emissions in order to better control them. The objective of this project was to construct a pilot-scale compost reactor, as a platform to study the potential impact of temperature, O2 concentration, airflow rate, and moisture content on the gaseous emissions from compost. The pilot-scale reactor was able to control the temperature and O2 concentration inside of the compost using an automated control algorithm, and continually measure the concentration of CO, CO2, CH4, NH3, and N2O under time-varying temperature and O2 concentration conditions, using FTIR spectroscopy.
APA, Harvard, Vancouver, ISO, and other styles
4

Shang-Xiao, Chiou, and 邱尚孝. "Construct Virtual Environments for Architectural Design: From Virtual Construction to Full-Scale Simulation." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/84939550172390510113.

Full text
Abstract:
Master's thesis, Graduate Institute of Architecture, Chung Yuan Christian University, 1997 (ROC year 86)
Virtual Reality (VR) technology is a particular form of interactive 3D graphics for the Human-Computer Interface (HCI), and is thought to hold great potential for architectural applications. The purpose of this research was to elucidate and apply concepts for using VR in architectural design. The practice of VR involves constructing a Virtual Environment (VE) as a medium between human and computer. The VE has played two parts in the history of VR: one as the HCI, and the other as a scientific research model. The VE models the perception and action of humans in the real world, so VR could be a powerful simulation tool for architectural studies. The two concepts of HCI and research model generate two applied concepts in architectural design. First, as a new human-computer interface, VR can translate a design concept into a 3D computer model more easily than traditional CAD. Second, we can evaluate designs with VR technology, which offers perspectives that traditional architectural representation tools do not. In this research, Virtual Construction and Full-Scale Simulation are the concepts applied to these issues. Virtual construction integrates spatial construction in the real world with the VR concept of HCI to create a virtual environment for architectural representation. Full-scale simulation means that we can inhabit the computer model and evaluate a design through full-scale experience. The greatest value of virtual reality for architectural application is that VR represents the interactive relations between humans and their environment through the computer. In the information age, virtual reality means that everything is becoming digital, and architectural issues will be associated with entirely different ways of human life.
APA, Harvard, Vancouver, ISO, and other styles
5

Mohammadpourasl, Sanaz, Adalgisa Sinicropi, and Maria Laura Parisi. "Design and characterization by using computational methodologies and life cycle assessment (LCA) of devices for energy production from renewable energy sources." Doctoral thesis, 2020. http://hdl.handle.net/2158/1202001.

Full text
Abstract:
This thesis focuses on the design and characterization of more efficient components for Dye-Sensitized Solar Cells (DSSCs), an example of innovative latest generation photovoltaic systems. DSSCs are considered a promising alternative to silicon solar cells due to their low cost, flexibility, and facile fabrication. However, the low photo-electric conversion efficiency and limited stability of these cells are the main obstacles to their large-scale commercial application. An emerging challenge is to find an optimum set of materials to improve the performance of DSSCs. One of the key components to optimize is the light-absorbing dye (also referred to as the sensitizer) that is employed to enhance the light harvesting of TiO2 nanoparticles. Indeed, sensitizers are responsible for DSSCs' photovoltaic performance, transparency and color. Another scope of this thesis is the assessment of the environmental performance connected with the fabrication of DSSC components, namely the sensitizer, through the application of the Life Cycle Assessment (LCA) methodology. Indeed, to evaluate the sustainability of photovoltaic devices, the investigation of the environmental impacts generated during their fabrication is essential in order to improve and optimize the energy and resource efficiency of manufacturing processes and, ultimately, the environmental footprint of the device.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "From lab to full scale design"

1

Gelpi, Nick. Architecture of Full-Scale Mock-Ups: From Representation to Reality. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Parlange, Marc B., and Jan W. Hopmans. Vadose Zone Hydrology. Oxford University Press, 1999. http://dx.doi.org/10.1093/oso/9780195109900.001.0001.

Full text
Abstract:
The vadose zone is the region between ground level and the upper limits of soil fully saturated with water. Hydrology in the zone is complex: nonlinear physical, chemical, and biological interactions all affect the transfer of heat, mass, and momentum between the atmosphere and the water table. This book takes an interdisciplinary approach to vadose zone hydrology, bringing together insights from soil science, hydrology, biology, chemistry, physics, and instrumentation design. The chapters present state-of-the-art research, focusing on new frontiers in theory, experiment, and management of soils. The collection addresses the full range of processes, from the pore-scale to field and landscape scales.
APA, Harvard, Vancouver, ISO, and other styles
3

Biewener, Andrew, and Sheila Patek. Animal Locomotion. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198743156.001.0001.

Full text
Abstract:
This book provides a synthesis of the physical, physiological, evolutionary, and biomechanical principles that underlie animal locomotion. An understanding and full appreciation of animal locomotion requires the integration of these principles. Toward this end, we provide the necessary introductory foundation that will allow a more in-depth understanding of the physical biology and physiology of animal movement. In so doing, we hope that this book will illuminate the fundamentals and breadth of these systems, while inspiring our readers to look more deeply into the scientific literature and investigate new features of animal movement. Several themes run through this book. The first is that by comparing the modes and mechanisms by which animals have evolved the capacity for movement, we can understand the common principles that underlie each mode of locomotion. A second is that size matters. One of the most amazing aspects of biology is the enormous spatial and temporal scale over which organisms and biological processes operate. Within each mode of locomotion, animals have evolved designs and mechanisms that effectively contend with the physical properties and forces imposed on them by their environment. Understanding the constraints of scale that underlie locomotor mechanisms is essential to appreciating how these mechanisms have evolved and how they operate. A third theme is the importance of taking an integrative and comparative evolutionary approach in the study of biology. Organisms share much in common. Much of their molecular and cellular machinery is the same. They also must navigate similar physical properties of their environment. Consequently, an integrative approach to organismal function that spans multiple levels of biological organization provides a strong understanding of animal locomotion. By comparing across species, common principles of design emerge. Such comparisons also highlight how certain organisms may differ and point to strategies that have evolved for movement in diverse environments. Finally, because convergence upon common designs and the generation of new designs result from historical processes governed by natural selection, it is also important that we ask how and why these systems have evolved.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "From lab to full scale design"

1

Agaogullaria, D., and I. Duman. "Process Design and Production of Boron Trichloride from Native Boron Carbide in Lab-Scale." In Ceramic Transactions Series, 77–90. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2009. http://dx.doi.org/10.1002/9780470522189.ch8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nguyen, Mai Lan, Cyrille Chazallon, Mehdi Sahli, Georg Koval, Pierre Hornych, Daniel Doligez, Armelle Chabot, Yves Le Gal, Laurent Brissaud, and Eric Godard. "Design of Reinforced Pavements with Glass Fiber Grids: From Laboratory Evaluation of the Fatigue Life to Accelerated Full-Scale Test." In Lecture Notes in Civil Engineering, 329–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55236-7_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cerreta, Maria, and Simona Panaro. "Collaborative Decision-Making Processes for Local Innovation: The CoULL Methodology in Living Labs Approach." In Regenerative Territories, 193–212. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-78536-9_12.

Full text
Abstract:
The concept of the Living Lab is closely connected to the priorities of the Europe 2020 strategy and of the Digital Agenda for Europe, and is the subject of numerous user-centric open innovation programs and European projects supported by the European ENoLL Network. The chapter presents a new methodology, called Collaborative Urban Living Lab (CoULL), to support Collaborative Decision-Making Processes that activate local innovation processes at the neighbourhood, city or landscape scale. Starting from the Quintuple Helix framework and the literature review on the Living Lab concept, its extension to the city and territorial context, and the related people-centred approaches, have been discussed. The potential of using them to put open innovation into practice and to develop innovative solutions for cities has been shown. Nowadays, built environments need to accelerate the transition to becoming sustainable, climate-neutral, inclusive, resilient, healthy, smart and prosperous. In the last few years, Living Lab approaches have been promoted and used by local and international research and innovation agencies in collaboration with enterprises, NGOs and local governments to find solutions to new issues. However, the Living Lab methodologies that guide co-developed solutions at the urban scale are few and need more accurate research and experimentation. In that direction, the CoULL methodology, tested in four different research projects (including the REPAiR project), has defined a suitable process for supporting the co-design, co-production and co-decision cycles of urban innovative and sustainable solutions.
APA, Harvard, Vancouver, ISO, and other styles
4

Eiringhaus, Daniel, Hendrik Riedmann, and Oliver Knab. "Definition and Evaluation of Advanced Rocket Thrust Chamber Demonstrator Concepts." In Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 407–19. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-53847-7_26.

Full text
Abstract:
Abstract Since the beginning of the German collaborative research center SFB-TRR 40 in 2008, ArianeGroup has been involved as an industrial partner and has supported the research activities with its expertise. For the final funding period, ArianeGroup actively contributes to the SFB-TRR 40 with the self-financed project K4. Within project K4, virtual thrust chamber demonstrators have been defined that allow the application of the attained knowledge of the entire collaborative research center to state-of-the-art numerical benchmark cases. Furthermore, ArianeGroup uses these test cases to continue the development of its in-house spray combustion and performance analysis tool Rocflam3. Uniquely within the collaborative research center, fully three-dimensional conjugate heat transfer computations have been performed for a full-scale 100 kN upper stage thrust chamber. The strong three-dimensionality of the temperature field in the structure, resulting from the injection element and cooling channel configuration, is displayed.
APA, Harvard, Vancouver, ISO, and other styles
5

Wittmann, Erich Christian. "Mathematics Education as a ‘Design Science’." In Connecting Mathematics and Mathematics Education, 77–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61570-3_6.

Full text
Abstract:
Abstract Mathematics education (didactics of mathematics) cannot grow without close relationships to mathematics, psychology, pedagogy and other areas. However, there is the risk that by adopting standards, methods and research contexts from other well-established disciplines, the applied nature of mathematics education may be undermined. In order to preserve the specific status and the relative autonomy of mathematics education, the suggestion is made to conceive of mathematics education as a “design science”. In a paper presented to the twenty-second Annual Meeting of German mathematics educators in 1988, Heinrich Bauersfeld presented some views on the perspectives and prospects of mathematics education. It was his intention to stimulate a critical reflection ‘among the members of the community’ on what they do and what they could and should do in the future (Bauersfeld 1988). The early seventies witnessed a vivid programmatic discussion on the role and nature of mathematics education in the German-speaking part of Europe (cf. the papers by Bigalke, Griesel, Wittmann, Freudenthal, Otte, Dress and Tietz in the special issue 74/3 of the Zentralblatt für Didaktik der Mathematik as well as Krygowska 1972). Since then the status of mathematics education has not been considered on a larger scale despite the contributions by Bigalke (1985) and Winter (1986). So the time is overdue for redefining the basic orientation for research; therefore, Bauersfeld’s talk could hardly have been more appropriate. In recent years the interest in a better understanding of the nature and role of mathematics education has also grown considerably at the international level, as indicated, for example, by the ICMI study on ‘What is research in mathematics education and what are its results?’ launched in 1992 (cf. Balacheff et al. 1992).
The following considerations are intended both as a critical analysis of the present situation and an attempt to capture the specificity of mathematics education. Like Bauersfeld, the author presents them ‘in full subjectivity and in a concise way’ as a kind of ‘thinking aloud about our profession’. (The present paper concentrates on the didactics of mathematics although the line of argument pertains equally to the didactics of other subjects and also to education in general (cf., Clifford and Guthrie 1988, a detailed study on the identity crisis of the Schools of Education at the leading American universities).)
APA, Harvard, Vancouver, ISO, and other styles
6

Nesterova, Nina, L. Hoeke, J. A. L. Goodman-Deane, S. Delespaul, Bartosz Wybraniec, and Boris Lazzarini. "Framing Digital Mobility Gap: A Starting Point in the Design of Inclusive Mobility Eco-Systems." In Towards User-Centric Transport in Europe 3, 235–53. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-26155-8_14.

Full text
Abstract:
Abstract Digital transport eco-systems worldwide provide great advantages to many but also carry a risk of excluding population groups that struggle with accessing or using digital products and services. The DIGNITY project (DIGital traNsport In and for socieTY) delves into the development of such eco-systems to deepen the understanding of the full range of factors that lead to disparities in the uptake of digital transport solutions in Europe. A starting point for developing digitally inclusive transport systems is to obtain state-of-the-art knowledge and understanding of where local transport eco-systems are in relation to the digital gap and digital mobility gap in terms of their policies, transport products and services, and population digital literacy. This chapter presents the methodology developed in the DIGNITY project to frame this digital gap, incorporating a self-assessment framework that may be used by public authorities to identify potential gaps in the development of local digital transport eco-systems. This framework is informed by results from customer journey mapping exercises that provide insights into the daily activities and trips of users, and larger scale surveys on digital technology access, use, attitudes and competence in the area. In the DIGNITY approach as a whole, the results from the framing phase are then used to inform subsequent work on bridging the digital gap through the co-creation of more inclusive policies, products and services. The chapter provides concrete results from the framing exercise in four DIGNITY pilot areas: Barcelona, Tilburg, Flanders and Ancona. The results clearly show that a digital transport gap exists in these areas, and that this is manifested in different ways in different local situations, requiring tailored approaches to address the gap.
APA, Harvard, Vancouver, ISO, and other styles
7

Luckman, Susan, and Jane Andrew. "What Does ‘Handmade’ Mean Today?" In Creative Working Lives, 125–48. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44979-7_5.

Full text
Abstract:
Abstract The phrase ‘designer maker’ is being employed increasingly in the contemporary craft and design marketplace, especially among those seeking to make a full-time living from their practice. It marks those makers who may undertake original design and prototyping themselves, but who, in order to scale up their production in ways not always possible for a solo hand maker, outsource some or all subsequent aspects of production to other makers or machine-assisted manufacturing processes. But despite widespread use of this phrase, some makers remain keen to manage the scale of their business. As a result, many of those craftspeople and designer makers we spoke to who were in a position to scale up their production while stepping back from the making themselves were reluctant to go down this path. Elsewhere we have explored these issues in terms of balancing making income with quality of life, as well as in terms of the desire to be a maker, to be doing the creative work oneself, and thus not ‘get too big’ with the added pressures and responsibilities of being an employer (Luckman, Cultural Trends, 27(5), 313–326 (2018)). In this chapter, we home in more on what upscaling and outsourcing reveal about competing definitions of, and attitudes towards, the idea of ‘the handmade’. We also explore attitudes towards handmaking versus other forms of production, including outsourcing and the use of digital tools.
APA, Harvard, Vancouver, ISO, and other styles
8

Juniper, Matthew P. "Machine Learning for Thermoacoustics." In Lecture Notes in Energy, 307–37. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16248-0_11.

Full text
Abstract:
Abstract This chapter demonstrates three promising ways to combine machine learning with physics-based modelling in order to model, forecast, and avoid thermoacoustic instability. The first method assimilates experimental data into candidate physics-based models and is demonstrated on a Rijke tube. This uses Bayesian inference to select the most likely model. This turns qualitatively-accurate models into quantitatively-accurate models that can extrapolate, which can be combined powerfully with automated design. The second method assimilates experimental data into level set numerical simulations of a premixed bunsen flame and a bluff-body stabilized flame. This uses either an Ensemble Kalman filter, which requires no prior simulation but is slow, or a Bayesian Neural Network Ensemble, which is fast but requires prior simulation. This method deduces the simulations’ parameters that best reproduce the data and quantifies their uncertainties. The third method recognises precursors of thermoacoustic instability from pressure measurements. It is demonstrated on a turbulent bunsen flame, an industrial fuel spray nozzle, and full scale aeroplane engines. With this method, Bayesian Neural Network Ensembles determine how far each system is from instability. The trained BayNNEs outperform physics-based methods on a given system. This method will be useful for practical avoidance of thermoacoustic instability.
APA, Harvard, Vancouver, ISO, and other styles
9

Bauereiss, Thomas, Brian Campbell, Thomas Sewell, Alasdair Armstrong, Lawrence Esswood, Ian Stark, Graeme Barnes, Robert N. M. Watson, and Peter Sewell. "Verified Security for the Morello Capability-enhanced Prototype Arm Architecture." In Programming Languages and Systems, 174–203. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99336-8_7.

Full text
Abstract:
Abstract Memory safety bugs continue to be a major source of security vulnerabilities in our critical infrastructure. The CHERI project has proposed extending conventional architectures with hardware-supported capabilities to enable fine-grained memory protection and scalable compartmentalisation, allowing historically memory-unsafe C and C++ to be adapted to deterministically mitigate large classes of vulnerabilities, while requiring only minor changes to existing system software sources. Arm is currently designing and building Morello, a CHERI-enabled prototype architecture, processor, SoC, and board, extending the high-performance Neoverse N1, to enable industrial evaluation of CHERI and pave the way for potential mass-market adoption. However, for such a major new security-oriented architecture feature, it is important to establish high confidence that it does provide the intended protections, and that cannot be done with conventional engineering techniques. In this paper we put the Morello architecture on a solid mathematical footing from the outset. We define the fundamental security property that Morello aims to provide, reachable capability monotonicity, and prove that the architecture definition satisfies it. This proof is mechanised in Isabelle/HOL, and applies to a translation of the official Arm specification of the Morello instruction-set architecture (ISA) into Isabelle. The main challenge is handling the complexity and scale of a production architecture: 62,000 lines of specification, translated to 210,000 lines of Isabelle. We do so by factoring the proof via a narrow abstraction capturing essential properties of arbitrary CHERI ISAs, expressed above a monadic intra-instruction semantics. 
We also develop a model-based test generator, which generates instruction-sequence tests that give good specification coverage, used in early testing of the Morello implementation and in Morello QEMU development, and we use Arm’s internal test suite to validate our model. This gives us machine-checked mathematical proofs of whole-ISA security properties of a full-scale industry architecture, at design-time. To the best of our knowledge, this is the first demonstration that this is feasible, and it significantly increases confidence in Morello.
APA, Harvard, Vancouver, ISO, and other styles
10

Arévalo, Juan, Patricia Zamora, Vicente F. Mena, Naiara Hernández-Ibáñez, Victor Monsalvo-Garcia, and Frank Rogalla. "Design of the MIDES plant." In Microbial Desalination Cells for Low Energy Drinking Water, 93–104. IWA Publishing, 2021. http://dx.doi.org/10.2166/9781789062120_0093.

Full text
Abstract:
Abstract This chapter presents the full design of two microbial desalination cells (MDCs) at pilot-plant scale from the MIDES project. The final MDC pilot unit design was based on the knowledge gained through scaling up the MDC from lab to prepilot scale. The MDC pilot plant consists of one stack of 15 MDC pilot units with 0.4 m2 electrode area. This chapter also presents the piping and instrumentation diagram (P&ID) and layout of the MDC pilot plant. The MIDES pilot plants comprise an MDC pilot plant housed in a 40-ft container along with the peripheral elements. Finally, this chapter presents the improvements made from the first to the second MDC stack in terms of stability and the chemical compatibility of the end plates.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "From lab to full scale design"

1

Her, Remy, Jacques Renard, Vincent Gaffard, Yves Favry, and Paul Wiet. "Design of Pipeline Composite Repairs: From Lab Scale Tests to FEA and Full Scale Testing." In 2014 10th International Pipeline Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/ipc2014-33201.

Full text
Abstract:
Composite repair systems have been used for many years to locally restore pipe strength where it has been affected by damage such as wall thickness reduction due to corrosion, dents, lamination or cracks. Composite repair systems are commonly qualified, designed and installed according to ASME PCC-2 code or ISO 24817 standard requirements. In both of these codes, the Maximum Allowable Working Pressure (MAWP) of the damaged section must be determined to design the composite repair. To do so, codes such as ASME B31G (for corrosion, for example) are used. The composite repair system is designed to “bridge the gap” between the MAWP of the damaged pipe and the original design pressure. The main weakness of available approaches is their limited applicability to combined loading conditions and various types of defects. The objective of this work is to set up a “universal” methodology to design the composite repair using finite element calculations that directly take into consideration the loading conditions and the influence of the defect on pipe strength (whatever its geometry and type). First, a program of mechanical tests is defined to determine all the composite properties necessary to run the finite element calculations. It consists of compression and tensile tests in various directions to account for the composite anisotropy, and of Arcan tests to determine steel-to-composite interface behaviors in tension and shear. In parallel, a full-scale burst test is performed on a repaired pipe section where a local wall thinning has previously been machined. For this test, the composite repair was designed according to ISO 24817. Then, a finite element model integrating the damaged pipe and the composite repair system is built. It allowed simulating the test, comparing the results with experiments and validating the damage models implemented to capture the various possible types of failures.
In addition, sensitivity analyses considering the variations in composite properties evidenced by the experiments are run. The composite behavior considered in this study is not time dependent: no degradation of the composite material strength due to ageing is taken into account. The roadmap for the next steps of this work is to clearly identify the ageing mechanisms, to perform tests in relevant conditions and to introduce ageing effects in the design process (in particular in the composite constitutive laws).
APA, Harvard, Vancouver, ISO, and other styles
2

Feser, Joseph S., and Ashwani K. Gupta. "Performance and Emissions of Drop-in Aviation Biofuels in a Lab Scale Gas Turbine Combustor." In ASME 2020 Power Conference collocated with the 2020 International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/power2020-16958.

Full text
Abstract:
Abstract There is a growing need for drop-in biofuels for gas turbines for enhanced energy security and sustainability. Several fuels are currently being developed and tested to reduce dependency on fossil fuels while maintaining performance, particularly in the aviation industry. The transition from traditional fossil fuels to sustainable biofuels is much desired for reducing the rapidly rising CO2 levels in the environment. This requires biofuels to be drop-in ready, with no adverse effects on performance and emissions upon replacement. In this study the performance and emissions of four different aviation drop-in biofuels were evaluated: UOP HEFA-SPK, Gevo ATJ, Amyris Farnesane, and SB-JP-8. These aviation biofuels are currently being produced and tested to be ready as full or partial drop-in replacements for traditional jet fuels. The characteristic performance of each of the prevaporized liquid fuels was evaluated in a high intensity (20 MW/m3-atm) reverse flow combustor. The NO emissions showed near-unity ppm levels for each of the fuels examined, with a minimum at an equivalence ratio of ∼0.6, while CO levels were in the range of 1000–1300 ppm depending on the fuel at an equivalence ratio between 0.75–0.8. For an equivalence ratio range between 0.4 and 0.6, NO and CO emissions remained very low (between 1–2 ppm NO and 2400–2900 ppm CO) depending on the fuel. The examined biofuels did not show any instability over a wide range of equivalence ratios from lean to near-stoichiometric conditions. These results are promising regarding the behavior of these drop-in aviation biofuels for use in high intensity gas turbine combustors, providing stability and cleaner performance without any modification to the combustor design.
APA, Harvard, Vancouver, ISO, and other styles
3

Frederickson, Lee, Kyle Kitzmiller, and Fletcher Miller. "Carbon Particle Generation and Preliminary Small Particle Heat Exchange Receiver Lab Scale Testing." In ASME 2013 7th International Conference on Energy Sustainability collocated with the ASME 2013 Heat Transfer Summer Conference and the ASME 2013 11th International Conference on Fuel Cell Science, Engineering and Technology. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/es2013-18215.

Full text
Abstract:
High temperature central receivers are at the forefront of concentrating solar power research. Current receivers use liquid cooling and power steam cycles, but new receivers are being designed to power gas turbine engines within a power cycle while operating at a high efficiency. To address this, a lab-scale Small Particle Heat Exchange Receiver (SPHER), a high temperature solar receiver, was built and is currently undergoing testing at San Diego State University’s (SDSU) Combustion and Solar Energy Laboratory. The final goal is to design, build, and test a full-scale SPHER that can absorb 5 MWth and eventually be used within a Brayton cycle. The SPHER utilizes air mixed with carbon particles generated in the Carbon Particle Generator (CPG) as an absorption medium for the concentrated solar flux. Natural gas and nitrogen are sent to the CPG, where the natural gas undergoes pyrolysis to carbon particles and nitrogen is used as the carrier gas. The resulting particle-gas mixture flows out of the vessel and is met with dilution air, which flows to the SPHER. The lab-scale SPHER is an insulated steel vessel with a spherical-cap quartz window. For simulating on-sun testing, a solar flux is produced by a solar simulator, which consists of a 15 kWe xenon arc lamp, situated vertically, and an ellipsoidal reflector to obtain a focus at the plane of the receiver window. The solar simulator has been shown to produce an output of about 3.25 kWth within a 10 cm diameter aperture. Inside the SPHER, the carbon particles in the inlet particle-gas mixture absorb radiation from the solar flux. The carbon particles heat the air and eventually oxidize to carbon dioxide, resulting in a clear outlet fluid stream. Since testing was initiated, there have been several changes to the system as we have learned more about its operation. A new extinction tube was designed and built to obtain more accurate mass loading data.
Piping and insulation for the CPG and SPHER were improved based on observations between testing periods. The window flange and seal have been redesigned to incorporate window film cooling. These improvements have been made in order to achieve the lab-scale SPHER design objective: a gas outlet flow of 650°C at 5 bar.
APA, Harvard, Vancouver, ISO, and other styles
4

Briottet, L., I. Moro, J. Furtado, J. Solin, P. Bortot, G. M. Tamponi, R. Dey, and B. Acosta-Iborra. "Hydrogen-Enhanced Fatigue of a Cr-Mo Steel Pressure Vessel." In ASME 2015 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/pvp2015-45267.

Full text
Abstract:
The current international standards and codes dedicated to the design of pressure vessels do not properly ensure fitness for service of vessels used for gaseous hydrogen storage and subjected to hydrogen-enhanced fatigue. In this context, the European project MATHRYCE intends to propose an easy-to-implement vessel design methodology based on lab-scale tests and taking into account hydrogen-enhanced fatigue. In the present document the lab-scale experimental developments and results are presented. The material considered was a commercially available Q&T low alloy Cr-Mo steel from a seamless pressure vessel. Due to the high hydrogen diffusion at room temperature in such steel, all the tests were performed under hydrogen pressure to avoid outgassing. Different types of lab-scale tests were developed and used in order to identify the most promising one for a design code. The effect of mechanical parameters, such as H2 pressure, frequency and ΔK, on fatigue crack initiation and propagation was analyzed. In particular, special attention was paid to the influence of H2 on the relative parts of initiation and propagation in the fatigue life of a component. The second part of the work was dedicated to cyclic hydraulic and hydrogen pressure tests on full-scale vessels. Three artificial defects with different geometries per cylinder were machined in the inner wall of each tested cylinder. They were specifically designed in order to detect fatigue crack initiation and fatigue crack propagation with a single test. The final goal of this work is to propose a methodology to derive a “hydrogen safety factor” from lab-scale tests. The proposed method is compared to the full-scale results obtained, leading to recommendations on the design of pressure components operating under cyclic hydrogen pressure.
APA, Harvard, Vancouver, ISO, and other styles
5

Golob, Matthew, Clayton Nguyen, Sheldon Jeter, and Said Abdel-Khalik. "Solar Simulator Efficiency Testing of Lab-Scale Particle Heating Receiver at Elevated Operating Temperatures." In ASME 2016 10th International Conference on Energy Sustainability collocated with the ASME 2016 Power Conference and the ASME 2016 14th International Conference on Fuel Cell Science, Engineering and Technology. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/es2016-59655.

Full text
Abstract:
Particle Heating Receivers (PHR) offer a range of advantages for concentrating solar power (CSP). PHRs can facilitate higher operating temperatures (>700°C), they can allow for inexpensive direct storage, and they can be integrated into cavity receiver designs for high collection efficiency. In operation, PHRs use solid particles that are irradiated and heated directly as they fall through a region exposed to concentrated sunlight. The heated particles can subsequently be stored in insulated bins, with the stored thermal energy reclaimed via a heat exchanger to a secondary working fluid for the power cycle in CSP. In this field, Georgia Tech has over five years’ experience developing PHR technology through the support of the DOE SunShot program and similar research efforts. Georgia Tech has dealt with the crucial challenges in particle receiver technology: particulate flow behavior, particulate handling, and particulate heat transfer. In particular, Georgia Tech has specialized in innovative advances in the utilization and design of discrete structures in PHRs (DS-PHR) to prolong particulate residence time in the irradiated zone. This paper describes the development and results of lab-scale testing for DS-PHRs, especially in the Georgia Tech high flux solar simulator (GTHFSS). The GTHFSS is a bank of high intensity xenon lamps with elliptical reflectors designed to replicate a concentrated solar source. Two series of tests have been undertaken: batch and continuous operation. Initially the DS-PHR has been tested in a batch apparatus in which a substantial but still limited quantity of preheated particulate flows from an elevated bin through the irradiated PHR into a weighing-box collecting bin. The use of a weighing box is advantageous since the flow rate of particulate is otherwise especially hard to measure. Temperature rise measurements and mass flow rate measurements allow calculation of energy collection rates. 
Calorimetry measurements, also described in the paper, are used to verify the incident concentrated radiation, allowing the calculation of the collection efficiency. This preliminary series of experiments has been completed using the batch apparatus, with the efficiencies of the lab-scale DS-PHR determined for a range of temperatures. Efficiencies above 90% have been measured at low temperatures, roughly corresponding to the so-called optical efficiency, which is the rate of energy collection at low temperature and minimal heat loss. Batch experiment data indicate a collection efficiency of approximately 81–85% at an average particle operating temperature of 500°C. Lab-scale batch results at 700°C proved to be unstable, and as such a rework employing a continuous recirculation loop is underway. While the batch apparatus is convenient for preliminary work, it is challenging to reach steady state operation in the mixing and measurement section below the DS-PHR, which limits this apparatus in higher temperature experiments. Consequently, the experiment is being reconfigured for continuous flow, in which the particulate will be heated and recirculated by a high temperature air conveyor. The advantage of the high temperature conveyor has already been proved by its successful integration as a heater and mixer in the hot bin of the batch apparatus. Such a compact device is also quite advantageous in the limited confines of a typical laboratory simulator such as the GTHFSS. While continuous flow prevents the highly desirable use of an uninterrupted mass measurement device, accurate mass flow data are still expected based on the use of a perforated plate flow control station. This device relies on the Beverloo effect to maintain a constant flow of particulate through an array of orifices, for which the flow is largely independent of upstream conditions. A weighing box will be used to calibrate and verify the mass flow. 
This paper will report on efficiency measurements from the batch flow experiments and present the preliminary steps taken to conduct the recirculation experiment. The bulk of the research reported in the paper is sponsored by and done in support of the DOE SunShot initiative.
APA, Harvard, Vancouver, ISO, and other styles
6

Koduru, Nitish, Nandini Nag Choudhury, Vineet Kumar, Dhruva Prasad, Rahul Raj, Debaditya Barua, Aditya Kumar Singh, Shakti Jain, Abhishek Kumar Gupta, and Amitabh Pandey. "Bhagyam Full Field Polymer Flood: Implementation and Initial Production Response." In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/208164-ms.

Full text
Abstract:
Abstract Bhagyam is an onshore field in the Barmer basin, located in the state of Rajasthan in Western India. The Fatehgarh Formation is the main producing unit, comprising multi-storied fluvial sandstones. Reservoir quality is excellent, with permeability in the range of 1 to 10 Darcy and porosity in the range of 25–30%. The crude is moderately viscous (15–500 cP), with a large variation with depth (15 cP to 50 cP from around 270 m TVDSS to 400 m TVDSS, then rising steeply to 500 cP at the OWC of 448 m TVDSS). Lab studies on Bhagyam cores show that the reservoir is primarily oil-wet in nature. Bhagyam Field was initially developed with edge water injection; with subsequent infill campaigns, the field was operating with 162 wells prior to implementation of the polymer flood development plan. Simple mobility ratio and fractional flow considerations indicate that improving the mobility ratio (the water flood end-point mobility ratio is 30–100) in Bhagyam would substantially improve the sweep efficiency. Early EOR screening studies recommended chemical EOR (polymer and ASP flooding) as the most suitable method for maximizing oil recovery. The lab studies further demonstrated good recovery potential for polymer flooding. Bhagyam’s first polymer flood field application started with testing in one injector, which was later expanded to 8 wells. Extended polymer injection in these wells continued for four years. Observing a very encouraging field response, a field-scale polymer expansion plan was designed, which included drilling of 28 new infill wells (17 P + 11 I) and 24 producer-to-injector conversions. Modular skid-based polymer preparation units were installed to meet the injection requirements of the expansion plan. Infill producers were brought online in 2018 as per the plan, but polymer injection was delayed due to various external factors. 
The production rate, however, was sustained without significant decline, aided by continuous polymer injection in the initial 8 injectors, continuing water flood and good reservoir management practices. The first polymer injection in the field-scale expansion started in Oct’20 and was quickly ramped up to the planned 80,000 BPD in 4 months, supported by analyses of surveillance data indicating a very encouraging initial production response. A laboratory quality check program was designed to check the quality of the polymer during preparation and to ensure viscosity integrity up to the well head. The paper discusses the modular polymer preparation unit set-up and the additional installations designed to reduce pipeline vibrations during pumping of polymers. Experience gained while bringing the polymer injection wells online and the lab quality checks employed to ensure good polymer quality during preparation and pumping are also discussed. The paper also discusses the reservoir surveillance program adopted at the start of polymer injection, such as spinner surveys and pressure fall-off surveys, and the stimulation activities that worked in improving the injectivity of the polymer injectors. The paper further outlines the observations from the production response and the surveillance data collected to ensure good polymer flow in this multi-Darcy reservoir.
APA, Harvard, Vancouver, ISO, and other styles
7

Ganapathi, Gani B., Daniel Berisford, Benjamin Furst, David Bame, Michael Pauken, and Richard Wirz. "A 5 kWht Lab-Scale Demonstration of a Novel Thermal Energy Storage Concept With Supercritical Fluids." In ASME 2013 7th International Conference on Energy Sustainability collocated with the ASME 2013 Heat Transfer Summer Conference and the ASME 2013 11th International Conference on Fuel Cell Science, Engineering and Technology. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/es2013-18182.

Full text
Abstract:
An alternative to the two-tank molten salt thermal energy storage system, based on supercritical fluids, is presented. This technology can enhance electrical power generation and high-temperature technologies for commercial use by lowering the cost of energy storage in comparison to current state-of-the-art molten salt energy storage systems. The volumetric energy density of a single-tank supercritical fluid energy storage system is significantly higher than that of a two-tank molten salt system due to the high compressibility of the supercritical state. As a result, the single-tank design can lead to almost a factor-of-ten decrease in fluid costs. This paper presents results from a test performed on a 5 kWht storage tank with naphthalene as the energy storage fluid, as part of a small preliminary demonstration of the supercritical thermal energy storage concept. Thermal energy is stored within naphthalene-filled tubes designed to handle the temperature (500 °C) and pressure (6.9 MPa, or 1000 psia) of the supercritical fluid state. The tubes are enclosed within an insulated shell heat exchanger, which serves as the thermal energy storage tank. The tank is thermally charged by flowing air at >500 °C over the storage tube bank. Discharging the tank can provide energy to a Rankine cycle (or any other thermodynamic process) over a temperature range from 480 °C to 290 °C. Tests were performed in three stages, starting with a low-temperature (200 °C) shake-out test, progressing to a high-temperature single-cycle test between room temperature and 480 °C, and concluding with a two-cycle test between 290 °C and 480 °C. The test results indicate a successful demonstration of high energy storage using supercritical fluids.
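As a rough sanity check on the 5 kWht scale, sensible heat storage can be estimated with Q = m·cp·ΔT; the fluid mass and heat capacity below are assumed for illustration only, not taken from the test article.

```python
# Sensible-heat estimate for a tank discharging from 480 C to 290 C.
def stored_energy_kwh(mass_kg, cp_j_per_kg_k, t_hot_c, t_cold_c):
    q_joules = mass_kg * cp_j_per_kg_k * (t_hot_c - t_cold_c)
    return q_joules / 3.6e6  # joules -> kWh

# Assumed ~50 kg of fluid at cp ~ 1900 J/(kg K) over the 190 K discharge swing:
energy = stored_energy_kwh(50, 1900, 480, 290)  # on the order of 5 kWh
```

A real supercritical tank also stores latent and compression energy, so this sensible-only sketch is a lower-bound style estimate of the order of magnitude.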
APA, Harvard, Vancouver, ISO, and other styles
8

Lin, Weigang, Anker D. Jensen, Jan E. Johnsson, and Kim Dam-Johansen. "Combustion of Biomass in Fluidized Beds: Problems and Some Solutions Based on Danish Experiences." In 17th International Conference on Fluidized Bed Combustion. ASMEDC, 2003. http://dx.doi.org/10.1115/fbc2003-124.

Full text
Abstract:
This paper summarizes the major problems in firing and co-firing annual biomass, such as straw, in both lab-scale and full-scale fluidized bed combustors. Two types of problems were studied: operational problems, such as agglomeration, deposition, and corrosion; and emission problems, e.g., emissions of NO and SO2. Measurements of deposition and corrosion rates on the heat transfer surfaces, as well as gas-phase alkali metal concentrations, were performed in full-scale CFB boilers (an 80 MWth and a 20 MWth plant) that have been co-firing coal with straw and other biomass. Severe corrosion and deposition were observed in the superheater located in the loop-seal of the 80 MWth boiler. Boiler load variation affects the operating parameters. When the fraction of biomass with a high K content (>1 wt.%) exceeded 60% on a thermal basis, the boiler suffered from severe agglomeration problems. Lab-scale experiments were carried out to gain a fundamental understanding of the phenomena found in full-scale boilers and to test possible solutions. The results showed a strong tendency toward agglomeration in fluidized beds during combustion of straw, which normally has a high content of potassium and chlorine. The results indicate that the operational problems may be minimized by a combination of additives, improved boiler design, splitting of the combustion air, and detection of agglomeration at an early stage.
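The "60% on a thermal basis" threshold cited above is computed from fuel heating values rather than mass fractions; the mass rates and lower heating values (LHVs) below are invented for illustration.

```python
# Thermal-basis biomass fraction when co-firing straw with coal.
def thermal_fraction(m_biomass_kg_h, lhv_biomass_mj_kg, m_coal_kg_h, lhv_coal_mj_kg):
    q_bio = m_biomass_kg_h * lhv_biomass_mj_kg   # MJ/h from biomass
    q_coal = m_coal_kg_h * lhv_coal_mj_kg        # MJ/h from coal
    return q_bio / (q_bio + q_coal)

# Assumed 10 t/h straw (LHV ~15 MJ/kg) with 4 t/h coal (LHV ~25 MJ/kg):
frac = thermal_fraction(10_000, 15.0, 4_000, 25.0)  # exactly at the 60% threshold
```

Because straw has a lower heating value than coal, the thermal-basis fraction is smaller than the mass fraction for the same feed rates, which matters when comparing against the agglomeration threshold.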
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Jason, and Paul Jukes. "From Installation to Operation: A Full-Scale Finite Element Modeling of Deep-Water Pipe-in-Pipe System." In ASME 2009 28th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/omae2009-79519.

Full text
Abstract:
Developments of deep-water oil reservoirs are presently being considered in the Gulf of Mexico (GoM). Pipe-in-Pipe (PIP) systems are widely used and planned as tie-back flowlines for high-pressure, high-temperature (HPHT) production due to their exceptional thermal insulation capabilities. The installation of a PIP flowline in deep water, regardless of the laying method, can present real challenges because of the PIP string weight. The effect of the lowering displacement, as well as the lock-in compressive load acting on the inner pipe of the commonly used unbonded PIP, is also a major concern. Such effects increase the total flowline compression when high temperature and high pressure are applied after start-up; they greatly increase the severity of global buckling and result in local plastic collapse at sections of larger bending curvature or strain localization. An even greater concern is that industry may fail to appreciate the seriousness of this failure potential: the PIP is generally treated as a single composite pipe, an approach that does not evaluate the PIP load response correctly and, in particular, omits the inner-pipe lock-in compression. This can result in an unsafe design for HPHT production. This paper endeavors to provide a trustworthy solution for HPHT PIP systems from installation to operation by using an advanced analysis tool, "Simulator", an ABAQUS-based in-house Finite Element Analysis (FEA) engine. "Simulator" allows the pipes of the PIP system to be modeled individually, with realistic interaction between them. A systematic process is introduced using a generic deep-water PIP flowline as a working example of J-lay installation and HPHT production. The load and stress responses of the PIP at all installation stages were calculated with a high level of accuracy and then included in the global buckling analysis for HPHT operation.
The study demonstrated the effectiveness of Loadshare, an industry-leading solution, which reduces or eliminates the inner-pipe lock-in compression and improves the PIP compressive load capacity for high-temperature operation.
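The thermally induced compression driving the global buckling concern can be bounded with the fully constrained axial force N = E·A·α·ΔT; the pipe geometry, material constants, and temperature rise below are assumptions for illustration, not values from the paper.

```python
import math

# Fully constrained thermal compressive force in a steel pipe, N = E * A * alpha * dT.
def constrained_thermal_force_mn(od_m, wt_m, e_pa=207e9, alpha=11.7e-6, dt_k=100.0):
    # Cross-sectional area of the steel wall (annulus between OD and ID).
    area = math.pi / 4.0 * (od_m**2 - (od_m - 2.0 * wt_m)**2)
    return e_pa * area * alpha * dt_k / 1e6  # newtons -> meganewtons

# Assumed 0.273 m (10.75 in) OD inner pipe with a 15.9 mm wall and a 100 K rise:
force = constrained_thermal_force_mn(0.273, 0.0159)  # roughly 3 MN of compression
```

Even this first-order estimate reaches meganewton scale, which is why neglecting the additional installation lock-in compression can be non-conservative for HPHT designs.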
APA, Harvard, Vancouver, ISO, and other styles
10

Govil, Amit, Harald Nevoy, Lars Hovda, Guillermo A. Obando Palacio, and Geir Kjeldaas. "Identifying Formation Creep – Ultrasonic Bond Logging Field Examples Validated by Full-Scale Reference Barrier Cell Experiments." In SPE/IADC International Drilling Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204040-ms.

Full text
Abstract:
As part of plug and abandonment (P&A) operations, several acceptance criteria need to be considered by operators to qualify barrier elements. In casing annuli, highly bonded material is occasionally found far above the theoretical top of cement. This paper describes how the highly bonded material can be identified using ultrasonic logging data, validated against measurements in lab experiments using reference cells, and how this, in combination with data from the well construction records, can contribute to lowering the costly toll of P&A operations. Ultrasonic and sonic log data were acquired in several wells to assess the bond quality behind multiple casing sizes in an abandonment campaign. Data obtained from pulse-echo and flexural sensors were interactively analyzed with a cross-plotting technique to distinguish gas, liquid, barite, cement, and formation in the annular space. Within the methodology used, historical data on each well were considered an integral part of the analysis. During the original well construction, either water-based or synthetic oil-based mud was used for drilling and cementing operations, and some formation intervals consistently showed a high bonding signature under specific conditions, giving clear evidence of formation creep. Log data from multiple wells confirm that formation behavior is influenced by the type of mud used during well construction. The log data provided a detailed map of the axial and azimuthal variations of the annulus contents. In some cases, the log response showed a clear indication of formation creep, evidenced by high bond quality around the production casing where cement cannot be present. Based on observations from multiple fields on the Norwegian continental shelf, a crossplot workflow has been designed to distinguish formation from cement as the potential barrier element.
NORSOK D-010 has initial verification acceptance criteria both for annulus cement and for creeping formation as a well barrier element, both involving bond logs; however, in the case of creeping formation it is more stringent, stating that "two independent logging measurements/tools shall be applied." This paper demonstrates how this can be done with confidence utilizing ultrasonic and sonic log data, validated against reference barrier cells (SPE-199578). Logging responses like those gathered during full-scale experiments on reference barrier cells with known defects were observed in multiple wells in the field. Understanding the phenomenon of formation creep and its associated casing bond signature could have a major impact on P&A operations. With a successful qualification of formation as an annulus barrier, significant cost and time savings can be achieved.
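A crossplot workflow of the kind described classifies annulus material by combining two independent measurements; the sketch below uses acoustic impedance and flexural attenuation with invented cutoffs purely to illustrate the idea (the actual workflow relies on field-calibrated criteria and well-history data).

```python
# Toy crossplot classifier for annulus material; thresholds are illustrative only.
def classify_annulus(impedance_mrayl, flexural_attenuation_db_m):
    if impedance_mrayl < 0.3:
        return "gas"
    if impedance_mrayl < 2.6 and flexural_attenuation_db_m < 200:
        return "liquid"
    if flexural_attenuation_db_m >= 200:
        return "solid (cement or creeping formation)"
    return "undetermined"

liquid_like = classify_annulus(1.5, 50)   # low impedance, low attenuation
bonded_like = classify_annulus(5.0, 600)  # high impedance, high attenuation
```

Note that the crossplot alone cannot separate cement from creeping formation; as the paper describes, that distinction comes from combining the log response with construction records (theoretical top of cement, mud type) and reference-cell signatures.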
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "From lab to full scale design"

1

Ali, Ayman, Ahmed Saidi, Yusef Mehta, Christopher DeCarlo, Mohamed Elshear, Benjamin Cox, and Wade Lein. Development and validation of a balanced mix design approach for CIR mixtures using full-scale testing. Engineer Research and Development Center (U.S.), October 2022. http://dx.doi.org/10.21079/11681/45704.

Full text
Abstract:
The main goal of this study was to improve the performance of cold in-place recycling (CIR) mixtures by using a balanced mix design (BMD) approach. The study involved preparing and testing CIR mixtures in the lab at varying contents of bituminous additives and constant contents of 1% cement and 3% water. Eight combinations of CIR mixtures were produced using two binders (emulsion and foamed asphalt), two compaction efforts (30 and 70 gyrations), and two curing processes (72 hours at 140°F and at 50°F). Results showed that the asphalt pavement analyzer, semicircular bend, and indirect tensile strength tests presented the highest correlation with changes in binder content. The study successfully used the developed BMD approach for designing CIR mixtures and selecting their optimum binder contents. Three balanced CIR mixtures were then used to construct full-scale pavement sections to validate the BMD approach in the field. A heavy vehicle simulator was used to apply different accelerated loadings on each section. Results showed that the CIR section with 2% binder presented the best rutting performance under truck loading and the highest rutting susceptibility under aircraft loading. Conversely, the CIR section with 3% binder presented the highest cracking resistance under both truck and aircraft loading.
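The balancing step at the core of a BMD approach can be sketched as a screening of binder contents against a rutting ceiling and a cracking floor; the thresholds and test values below are invented for illustration and are not the study's actual criteria.

```python
# Balanced-mix-design style screen: keep binder contents that satisfy both
# a maximum rut depth and a minimum cracking index (all values illustrative).
def balanced_binder_contents(results, max_rut_mm, min_crack_index):
    """results: {binder_pct: (rut_depth_mm, cracking_index)}"""
    return sorted(bc for bc, (rut, crack) in results.items()
                  if rut <= max_rut_mm and crack >= min_crack_index)

# More binder typically improves cracking resistance but worsens rutting:
trial = {1.0: (3.0, 10), 2.0: (4.5, 25), 3.0: (7.0, 40)}
passing = balanced_binder_contents(trial, max_rut_mm=5.0, min_crack_index=20)
```

The toy data mirror the field finding that the low-binder section rutted least while the high-binder section cracked least, so the balanced optimum falls in between.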
APA, Harvard, Vancouver, ISO, and other styles
2

Nantung, Tommy E., Jusang Lee, John E. Haddock, M. Reza Pouranian, Dario Batioja Alvarez, Jongmyung Jeon, Boonam Shin, and Peter J. Becker. Structural Evaluation of Full-Depth Flexible Pavement Using APT. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317319.

Full text
Abstract:
The fundamentals of rutting behavior for thin full-depth flexible pavements (i.e., asphalt thickness less than 12 inches) are investigated in this study. The scope incorporates an experimental study using full-scale Accelerated Pavement Tests (APTs) to monitor the evolution of each pavement structural layer's transverse profile. The findings were then employed to verify the local rutting model coefficients used in the current pavement design method, the Mechanistic-Empirical Pavement Design Guide (MEPDG). Four APT sections were constructed using two thin typical pavement structures (seven and ten inches thick) and two types of surface course material (dense-graded and SMA). Mid-depth rut monitoring and automated laser profile systems were designed to reconstruct the transverse profiles at each pavement layer interface throughout the accelerated pavement deterioration produced during the APT. The contributions of each pavement structural layer to rutting and the evolution of layer deformation were derived. This study found that the permanent deformation within full-depth asphalt concrete depends significantly upon the pavement thickness. However, once the pavement reaches sufficient thickness (more than 12.5 inches), increasing the thickness does not significantly affect the permanent deformation. Additionally, for thin full-depth asphalt pavements with a dense-graded Hot Mix Asphalt (HMA) surface course, most pavement rutting is caused by deformation of the asphalt concrete, with about half of the rutting observed within the top four inches of the pavement layers. For thin full-depth asphalt pavements with an SMA surface course, however, most pavement rutting comes from the sublayer closest to the surface, i.e., the intermediate layer.
The accuracy of the MEPDG's prediction models for thin full-depth asphalt pavement was evaluated using statistical parameters, including bias, the sum of squared error, and the standard error of estimates between the predicted and actual measurements. Based on the statistical analysis (at the 95% confidence level), no significant difference was found between the version 2.3 predictions and the measured rutting of the total asphalt concrete layer and subgrade for thick and thin pavements.
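The comparison metrics named above (bias, sum of squared error, standard error of estimate) are straightforward to compute; the rut-depth values below are invented, and note that some references divide the standard error by n-2 rather than n.

```python
# Bias, sum of squared error (SSE), and standard error of estimate (SEE)
# between predicted and measured values; sample data are illustrative only.
def fit_stats(predicted, measured):
    n = len(measured)
    errors = [p - m for p, m in zip(predicted, measured)]
    bias = sum(errors) / n
    sse = sum(e * e for e in errors)
    see = (sse / n) ** 0.5  # some texts use n - 2 in the denominator
    return bias, sse, see

pred = [2.1, 3.0, 4.2, 5.1]  # hypothetical predicted rut depths, mm
meas = [2.0, 3.2, 4.0, 5.0]  # hypothetical measured rut depths, mm
bias, sse, see = fit_stats(pred, meas)
```

A bias near zero with a small SEE relative to the measurement scale is the numerical signature of the "no significant difference" conclusion reported above.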
APA, Harvard, Vancouver, ISO, and other styles
3

Haddock, John E., Reyhaneh Rahbar-Rastegar, M. Reza Pouranian, Miguel Montoya, and Harsh Patel. Implementing the Superpave 5 Asphalt Mixture Design Method in Indiana. Purdue University, 2020. http://dx.doi.org/10.5703/1288284317127.

Full text
Abstract:
Recent research developments have indicated that asphalt mixture durability and pavement life can be increased by modifying the Superpave asphalt mixture design method to achieve an in-place density of 95%, approximately 2% higher than the density requirements of conventionally designed Superpave mixtures. Doing so requires increasing the design air voids content to 5% and changing the mixture aggregate gradation so that the effective binder content is not lowered. After successful laboratory testing of this modified mixture design method, known as Superpave 5, two controlled field trials, and one full-scale demonstration project, the Indiana Department of Transportation (INDOT) let 12 trial projects across the six INDOT districts based on the design method. The Purdue University research team was tasked with observing the implementation of the Superpave 5 mixture design method, documenting the construction, and completing an in-depth analysis of the quality control and quality assurance (QC/QA) data obtained from the projects. QC and QA data for each construction project were examined using various statistical metrics to determine construction performance with respect to the INDOT Superpave 5 specifications. The data indicate that, on average, the contractors achieved 5% laboratory air voids, which coincides with the Superpave 5 recommendation. However, the average as-constructed mat density of 93.8% is roughly 1% less than the INDOT Superpave 5 specification. It is recommended that INDOT monitor the performance of the Superpave 5 mixtures and provide additional training for contractor personnel to help them increase their understanding of Superpave 5 concepts and how best to implement the design method in their operations.
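The headline QC/QA comparison reduces to mean deviations from the targets; the sample measurements below are invented, while the 5.0% design air voids target and the roughly 94.8% density specification follow from the summary above.

```python
import statistics

# Mean deviations of lab air voids and in-place mat density from their targets.
# Sample values are hypothetical; targets reflect the abstract's figures.
def qc_summary(air_voids_pct, mat_density_pct, target_voids=5.0, spec_density=94.8):
    voids_dev = statistics.mean(air_voids_pct) - target_voids
    density_dev = statistics.mean(mat_density_pct) - spec_density
    return voids_dev, density_dev

voids_dev, density_dev = qc_summary([4.8, 5.1, 5.0, 5.2], [93.5, 94.0, 93.9])
```

With these hypothetical samples the air-voids deviation is near zero while the density deviation is about -1%, mirroring the pattern reported across the 12 trial projects.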
APA, Harvard, Vancouver, ISO, and other styles
4

Lahav, Ori, Albert Heber, and David Broday. Elimination of emissions of ammonia and hydrogen sulfide from confined animal and feeding operations (CAFO) using an adsorption/liquid-redox process with biological regeneration. United States Department of Agriculture, March 2008. http://dx.doi.org/10.32747/2008.7695589.bard.

Full text
Abstract:
The project was originally aimed at investigating and developing new, efficient methods for cost-effective removal of ammonia (NH₃) and hydrogen sulfide (H₂S) from Concentrated Animal Feeding Operations (CAFO), in particular broiler and laying houses (NH₃) and hog houses (H₂S). In both cases, the principal idea was to design and operate a dedicated air collection system that would be used for the treatment of the gases and that would work independently of the general ventilation system. The advantages envisaged: (1) if collected at a point close to the source of generation, pollutants would arrive at the treatment system at higher concentrations; (2) the air in the vicinity of the animals would be cleaner, which would promote animal growth rates; and (3) collection efficiency would be improved and adverse environmental impact reduced. For practical reasons, the project was divided in two: one effort concentrated on NH₃₍g₎ removal from chicken houses and another on H₂S₍g₎ removal from hog houses. NH₃₍g₎ removal: a novel approach was developed to reduce ammonia emissions from CAFOs in general, and poultry houses in particular. Air drawn by the dedicated capturing system from close to the litter was shown to have NH₃₍g₎ concentrations an order of magnitude higher than at the vents of the ventilation system. The NH₃₍g₎-rich waste air was conveyed to an acidic (0<pH<~5) bubble column reactor where NH₃ was converted to NH₄⁺. The reactor operated in batch mode, starting at pH 0, and was switched to a fresh acidic absorption solution just before NH₃₍g₎ breakthrough occurred, at pH ~5. Experiments with a wide range of NH₃₍g₎ concentrations showed that the absorption efficiency was practically 100% throughout the process as long as the face velocity was below 4 cm/s.
The potential advantages of the method include high absorption efficiency, lower NH₃₍g₎ concentrations in the vicinity of the birds, generation of a valuable product, and separation between the ventilation and ammonia treatment systems. A small-scale pilot operation conducted for 5 weeks in a broiler house showed the approach to be technically feasible. H₂S₍g₎ removal: the main goal of this part was to develop a specific treatment process for minimizing H₂S₍g₎ emissions from hog houses. The proposed process consists of three units. In the first, H₂S₍g₎ is absorbed into an acidic (pH<2) ferric iron solution and oxidized by Fe(III) to S⁰ in a bubble column reactor; in parallel, Fe(III) is reduced to Fe(II). In the second unit, Fe(II) is bio-oxidized back to Fe(III) by Acidithiobacillus ferrooxidans (AF). In the third unit, S⁰ is separated from solution in a gravity settler. The work focused on three sub-processes: the kinetics of H₂S absorption into a ferric solution at low pH, the kinetics of Fe²⁺ oxidation by AF, and the factors that affect ferric iron precipitation (a main obstacle to continuous operation of the process) under the operational conditions. H₂S removal efficiency was found to be higher at higher Fe(III) concentrations, at higher H₂S₍g₎ concentrations, and at lower flow rates of the treated air. The rate-limiting step of the reactive H₂S absorption was found to be the chemical reaction rather than the transfer from gas to liquid phase. H₂S₍g₎ removal efficiency of >95% was recorded with an Fe(III) concentration of 9 g/L using typical AFO air compositions. The second part of the work focused on the kinetics of Fe(II) oxidation by AF, for which a new lab technique was developed to determine the kinetic equation and kinetic parameters (KS, Kₚ and μₘₐₓ) for the bacteria. The third part focused on iron oxide precipitation under the operational conditions.
It was found that at lower pH (1.5) jarosite accumulation is slower and that the performance of the AF at this pH was sufficient for continuous operation of the proposed process at the H₂S fluxes predicted from AFOs. A laboratory-scale test was carried out at Purdue University on the use of the integrated system for simultaneous hydrogen sulfide removal in a H₂S bubble column filled with ferric sulfate solution and biological regeneration of ferric ions in a packed column immobilized with enriched AF bacteria. Results demonstrated the technical feasibility of the integrated system for H₂S removal and simultaneous biological regeneration of Fe(III) for potential continuous treatment of H₂S released from CAFOs. NH₃ and H₂S gradient measurements at egg-layer and swine barns were conducted in winter and summer at Purdue. Results showed high potential to concentrate NH₃ and H₂S in hog buildings, and NH₃ in layer houses; H₂S emissions from layer houses were too low for a significant gradient. An NH₃ capturing system was designed and tested in a 100-chicken broiler room, with five bell-type collecting devices installed over the litter to collect NH₃ emissions. While the air extraction system moved only 10% of the total room ventilation airflow rate, the fraction of total ammonia removed was 18%, because the air taken from near the litter had a higher concentration. The system demonstrated the potential to reduce emissions from broiler facilities and to concentrate the NH₃ effluent for use in an emission control system. In summary, the project laid a solid foundation for the implementation of both processes and also resulted in a significant scientific contribution related to AF kinetic studies and ferrous analytical measurements.
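The kinetic parameters listed (KS, Kₚ, μₘₐₓ) suggest a Monod-type rate law with product inhibition; the functional form and every parameter value below are our own illustrative assumptions, not the equation actually fitted in the study.

```python
# Illustrative Monod-type specific growth rate for Fe(II) oxidation with
# Fe(III) product inhibition: mu = mu_max * S / (Ks + S + P**2 / Kp).
# All parameter values are assumed, not the study's fitted constants.
def specific_growth_rate(s_fe2_g_l, p_fe3_g_l, mu_max=0.1, ks=0.5, kp=2.0):
    return mu_max * s_fe2_g_l / (ks + s_fe2_g_l + p_fe3_g_l**2 / kp)

rate_low_product = specific_growth_rate(5.0, 0.1)   # little Fe(III) accumulated
rate_high_product = specific_growth_rate(5.0, 5.0)  # strong product inhibition
```

The qualitative behavior, i.e. the regeneration rate falling as Fe(III) accumulates, is what makes the bio-oxidation unit the pacing step to match against the H₂S flux from the absorber.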
APA, Harvard, Vancouver, ISO, and other styles
5

Briggs, Nicholas E., Robert Bailey Bond, and Jerome F. Hajjar. Cyclic Behavior of Steel Headed Stud Anchors in Concrete-filled Steel Deck Diaphragms through Push-out Tests. Northeastern University. Department of Civil and Environmental Engineering., February 2023. http://dx.doi.org/10.17760/d20476962.

Full text
Abstract:
Earthquake disasters in the United States account for $6.1 billion of economic losses each year, much of which is directly linked to infrastructure damage. These natural disasters are unpredictable and represent one of the most difficult design problems in regard to constructing resilient infrastructure. Structural floor and roof diaphragms act as the horizontal portion of the lateral force resisting system (LFRS), distributing the seismically derived inertial loads out from the heavy concrete slabs to the vertical LFRS. Composite concrete-filled steel deck floor and roof diaphragms are ubiquitously used in commercial construction worldwide due to the ease of construction and cost-effective use of structural material. This report presents a series of composite steel deck diaphragm Push-out tests at full scale that explore the effect that cyclic loading has on the strength of steel headed stud anchors. The effect that cyclic loading has on structural performance is explored across the variation of material and geometric parameters in the Push-out specimens, such as concrete density, steel headed stud anchor placement and grouping, steel deck orientation, and edge conditions. As compared to prior tests in the literature, the push-out tests conducted in this work have an extended specimen length that includes four rows of studs along the length rather than the typical two rows of studs, and an ability to impose cyclic loading. This provides novel insight into force flows in the specimens, failure mechanisms, and load distribution between studs and stud groups.
APA, Harvard, Vancouver, ISO, and other styles
6

Dow, Nick, and Daniel Madrzykowski. Residential Flashover Prevention with Reduced Water Flow: Phase 2. UL's Fire Safety Research Institute, November 2021. http://dx.doi.org/10.54206/102376/nuzj8120.

Full text
Abstract:
The purpose of this study was to investigate the feasibility of a residential flashover prevention system with reduced water flow requirements relative to a residential sprinkler system designed to meet NFPA 13D requirements. The flashover prevention system would be designed for retrofit applications where water supplies are limited. In addition to examining the water spray's impact on fire growth, this study utilized thermal tenability criteria as defined in UL 199, Standard for Automatic Sprinklers for Fire-Protection Service. The strategy investigated was to use full-cone spray nozzles that would discharge water low in the fire room and directly onto burning surfaces of the contents, whereas current sprinkler designs discharge water in a manner that cools the hot gas layer, wets the walls, and wets the surfaces of the contents in the fire room. A series of eight full-scale compartment fire experiments with residential furnishings was conducted with low-flow nozzles. While the 23 lpm (6 gpm) water flow was the same between experiments, the discharge density, or water flux, around the area of ignition varied between 0.3 mm/min (0.008 gpm/ft2) and 1.8 mm/min (0.044 gpm/ft2). Three of the experiments prevented flashover. Five of the experiments resulted in regrowth of the fire while the water was flowing. Regrowth of the fire led to untenable conditions, per the UL 199 criteria, in the fire room. At approximately the same time as the untenability criteria were reached, the second sprinkler, in the hallway, activated. In a completed system, the activation of the second sprinkler would reduce the water flow to the fire room, which could lead to flashover. Variations in the burning behavior of the sofa resulted in shielded fires, which caused the reduced-flow solid-cone water sprays to lose effectiveness.
As a result of these variations, a correlation between discharge density at the area of ignition and fire suppression performance could not be determined given the limited number of experiments. An additional experiment using an NFPA 13D sprinkler system, flowing 30 lpm (8 gpm), demonstrated more effective suppression than any of the experiments with a nozzle. The success of the sprinkler compared with the unreliable suppression performance of the lower flow nozzles supports the minimum discharge density requirements of 2 mm/min (0.05 gpm/ft2) from NFPA 13D. The low flow nozzle system tested in this study reliably delayed fire growth, but would not reliably prevent flashover.
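Discharge density follows directly from flow rate and coverage area (1 L over 1 m² is 1 mm of water depth); the coverage area below is an assumption chosen to reproduce the report's 1.8 mm/min upper bound.

```python
# Water flux (discharge density) in mm/min from flow in L/min over an area in m^2.
def discharge_density_mm_per_min(flow_lpm, area_m2):
    return flow_lpm / area_m2  # 1 L/m^2 corresponds to 1 mm of water depth

# 23 lpm (6 gpm) concentrated over an assumed ~12.8 m^2 coverage area:
density = discharge_density_mm_per_min(23, 12.8)  # close to 1.8 mm/min
```

The same conversion shows why NFPA 13D's 2 mm/min (0.05 gpm/ft2) minimum requires either more flow or a smaller design area than the nozzle configurations tested here.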
APA, Harvard, Vancouver, ISO, and other styles
7

Terzic, Vesna, and William Pasco. Novel Method for Probabilistic Evaluation of the Post-Earthquake Functionality of a Bridge. Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1916.

Full text
Abstract:
While modern overpass bridges are safe against collapse, their functionality will likely be compromised in the case of a design-level or beyond-design-level earthquake, which may generate excessive residual displacements of the bridge deck. Presently, there is no validated, quantitative approach for estimating the operational level of a bridge after an earthquake, due to the difficulty of accurately simulating residual displacements. This research develops a novel method for probabilistic evaluation of the post-earthquake functionality state of a bridge; the approach is founded on an explicit evaluation of bridge residual displacements and the associated traffic capacity, considering realistic traffic load scenarios. The research proposes a high-fidelity finite-element model for bridge columns, developed and calibrated using existing experimental data from shake table tests of a full-scale bridge column. This finite-element model is further expanded to enable evaluation of the axial load-carrying capacity of damaged columns, which is critical for an accurate evaluation of the traffic capacity of the bridge. Existing experimental data from crushing tests on columns with earthquake-induced damage support this phase of the model development. To properly evaluate the bridge's post-earthquake functionality state, realistic traffic loadings representative of different bridge conditions (e.g., immediate access, emergency traffic only, closed) are applied in the proposed model following an earthquake simulation. The traffic loadings consider the distribution of vehicles on the bridge that causes the largest forces in the bridge columns.
APA, Harvard, Vancouver, ISO, and other styles
8

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 
There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8%–26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2%–13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9%–41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials demonstrated a significantly greater proportion of early-stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early-stage cancers diagnosed, LDCT screening is considered to be clinically effective.
Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals?

The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through:

i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history)
ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules)
iii) more judicious selection of patients for invasive procedures.

Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit.
Question 3: What are the main components of recent major lung cancer screening programs or trials?

There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia:

1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program:
   a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry
   b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants
   c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions.

Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program.
Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs that attract the people at high risk who will receive the greatest benefit from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into an LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a 'teachable moment' for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)

Question 4: What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)?

Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors, including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies (five modelling studies, one discrete choice experiment and seven articles) that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in US and European settings. Two studies, one from Australia and one from New Zealand, reported LDCT screening would not be cost-effective using NLST-like protocols.
We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.

Gaps in the evidence

There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to "inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia".(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about the transferability of its criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to "important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability".(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia.
Yet these groups are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and to incorporate data about the costs of targeted therapies and immunotherapies as these treatments become more widely available in Australia.