Theses on the topic "Civil Engineering not elsewhere classified"
Cite a source in APA, MLA, Chicago, Harvard and many other citation styles
Browse the top-50 dissertations / theses (master's or doctoral) for your research on the topic "Civil Engineering not elsewhere classified".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is available in the metadata.
Browse theses from many fields of science and compile a correct bibliography.
(5930270), Mehdi Shishehbor. "Numerical Investigation on the Mechanical Properties of Neat Cellulose Nanocrystal". Thesis, 2020.
(5929580), Man Chung Chim. "Prototype L-band Synthetic Aperture Radar on Low-altitude / Near-ground Platforms". Thesis, 2020.
(6616565), Yunchang Zhang. "PEDESTRIAN-VEHICLE INTERACTIONS AT SEMI-CONTROLLED CROSSWALKS: EXPLANATORY METRICS AND MODELS". Thesis, 2019.
A large number of crosswalks are indicated by pavement markings and signs but are not signal-controlled. In this study, such a location is called "semi-controlled". Where such a crosswalk has moderate amounts of pedestrian and vehicle traffic, pedestrians and motorists often engage in a non-verbal "negotiation" to determine who should proceed first.
In this study, 3400 pedestrian-motorist non-verbal interactions at such semi-controlled crosswalks were recorded by video. The crosswalk locations observed during the study underwent a conversion from one-way operation in Spring 2017 to two-way operation in Spring 2018. This offered a rare opportunity to collect and analyze data for the same location under two conditions.
This research explored factors that could be associated with pedestrian crossing behavior and motorist likelihood of decelerating. A mixed effects logit model and binary logistic regression were used to identify factors that influence the likelihood of pedestrian crossing under specific conditions. The complementary motorist models used generalized ordered logistic regression to identify factors that affect a driver's likelihood of decelerating, which was found to be a more useful measure than the likelihood of yielding to pedestrians. The data showed that 56.5% of drivers slowed down or stopped for pedestrians on the one-way street. This value rose to 63.9% on the same street after it had been converted to two-way operation. Moreover, two-way operation eliminated the effects of the presence of other vehicles on driver behavior.
Also investigated were factors that could influence how long a pedestrian is likely to wait at such semi-controlled crosswalks. Two types of models were proposed to correlate pedestrian waiting time with various covariates. First, survival models were developed to analyze pedestrian wait time based on the first-event analysis. Second, multi-state Markov models were introduced to correlate the dynamic process between recurrent events. Combining the first-event and recurrent events analyses addressed the drawbacks of both methods. Findings from the before-and-after study can contribute to developing operational and control strategies to improve the level of service at such unsignalized crosswalks.
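The first-event (survival) analysis described above can be illustrated with a minimal Kaplan-Meier sketch. The data below are hypothetical wait-time observations (duration in seconds; 1 = pedestrian crossed, 0 = observation censored), not values from the study.

```python
def kaplan_meier(observations):
    """Kaplan-Meier survival curve for pedestrian wait times.

    observations: (duration, event) pairs, where event = 1 means the
    pedestrian started crossing at `duration` seconds and event = 0 means
    the observation ended without a crossing (censored)."""
    event_times = sorted({t for t, e in observations if e == 1})
    survival, s = {}, 1.0
    for t in event_times:
        at_risk = sum(1 for d, _ in observations if d >= t)
        events = sum(1 for d, e in observations if d == t and e == 1)
        s *= 1.0 - events / at_risk  # multiply in this interval's survival
        survival[t] = s              # P(wait > t)
    return survival

# Hypothetical wait times, purely for illustration
waits = [(2, 1), (3, 1), (3, 1), (5, 0), (8, 1)]
curve = kaplan_meier(waits)
```

A recurrent-events (multi-state Markov) treatment would extend this by modeling transitions between waiting states rather than a single terminal event.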
The results of this study can contribute to policies and/or control strategies that will improve the efficiency of semi-controlled and similar crosswalks. This type of crosswalk is common, so the benefits of well-supported strategies could be substantial.
(5930783), Chintan Hitesh Patel. "Pack Rust Identification and Mitigation Strategies for Steel Bridges". Thesis, 2019.
(5930969), Augustine M. Agyemang. "THE IMPACTS OF ROAD CONSTRUCTION WORK ZONES ON THE TRANSPORTATION SYSTEM, TRAVEL BEHAVIOR OF ROAD USERS AND SURROUNDING BUSINESSES". Thesis, 2019.
In our daily use of the transportation system, we are faced with several road construction work zones. These work zones change how road users interact with the transportation system because of the changes that occur in it, such as increased travel times, delay times, and vehicle stopped times. A microscopic traffic simulation was developed to depict the changes that occur in the transportation system. The impacts of these changes on travel behavior were investigated using ordered probit and logit models with five independent variables: age, gender, driving experience, annual mileage, and percentage of non-work trips. Finally, a business impact assessment framework was developed to assess the impact of road construction work zones on various business categories such as grocery stores, pharmacies, liquor stores, and fast food restaurants. Traffic simulation results showed that introducing work zones into the road network increases delay times, vehicle stopped times, and travel times. The changes in average travel, delay, and vehicle stopped times also differed from link to link. The observed average increases were as high as 318, 237, and 242 seconds per vehicle for travel time, delay time, and vehicle stopped time, respectively, during the morning peak period, and as high as 1607, 258, and 265 seconds per vehicle, respectively, during the afternoon peak period. The statistical model results indicated that, on a work trip, high driving experience, high annual mileage, and a high percentage of non-work trips make an individual more likely to change their route. The results also showed gender differences in route choice behavior.
Concerning business impacts, businesses in the work zone were impacted differently with grocery and pharmacy stores having the highest and lowest total loss in revenue, respectively.
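As a rough illustration of the ordered-response models used above for traveler behavior, the sketch below computes category probabilities for an ordered logit specification. The linear predictor and cutpoints are made-up values, not estimates from this study.

```python
import math

def ordered_logit_probs(xbeta, cutpoints):
    """Category probabilities under an ordered logit model.

    xbeta: linear predictor (e.g., built from age, gender, driving
    experience, annual mileage, share of non-work trips; any coefficients
    are hypothetical here).
    cutpoints: increasing thresholds separating the ordered categories.
    """
    cdf = [1.0 / (1.0 + math.exp(-(tau - xbeta))) for tau in cutpoints]
    cdf = [0.0] + cdf + [1.0]  # pad with the distribution's limits
    return [cdf[j + 1] - cdf[j] for j in range(len(cdf) - 1)]

# Three ordered responses (e.g., never / sometimes / always change route)
probs = ordered_logit_probs(xbeta=0.0, cutpoints=[-1.0, 1.0])
```

The generalized ordered logit used in the motorist models relaxes the single-cutpoint-vector assumption by letting coefficients vary across thresholds; the probability calculation itself is the same differencing of cumulative logits.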
(11178147), Hala El Fil. "Shear Response of Rock Discontinuities: Through the Lens of Geophysics". Thesis, 2021.
Failure along rock discontinuities can result in economic losses as well as loss of life. It is essential to develop methods that monitor the response of these discontinuities to shear loading to enable prediction of failure. Laboratory experiments were performed to investigate geophysical techniques for monitoring shear failure of a pre-existing discontinuity and detecting signatures of impending failure. Previous studies have detected precursors to shear failure in the form of maxima of transmitted waves across a discontinuity under shear, but those experiments focused on well-matched discontinuities. In nature, however, rock discontinuities are not always perfectly matched, because the asperities may be weathered by chemical, physical, or mechanical processes. Further, the specific shear mechanism of mismatched discontinuities is still poorly understood. In this thesis, the ability to detect seismic precursors to shear failure was assessed for various discontinuity conditions: well-matched (rough and saw-tooth), mismatched (rough), and nonplanar (a discontinuity profile with a half-cycle sine wave (HCS)). The investigation was carried out through a coupled geophysical and mechanical experimental program that integrated detailed laboratory observations at the micro- and meso-scales. Shear experiments on gypsum discontinuities were conducted to observe changes in compressional (P) and shear (S) waves transmitted across the discontinuity. Digital Image Correlation (DIC) was used to quantify the vertical and horizontal displacements along the discontinuity during shearing, to relate the location and magnitude of slip to the measured wave amplitudes.
Results from the experiments conducted on planar, well-matched rough discontinuities (grit 36 sandpaper roughness) showed that seismic precursors to failure took the form of peaks in the normalized transmitted amplitude prior to the peak shear stress. Seismic wave transmission detected non-uniform dilation and closure of the discontinuity at a normal stress of 1 MPa. The results showed that large-scale roughness (presence of a HCS) could mask the generation of precursors, as it can cause non-uniform closure/dilation along the fracture plane at low normal stress.
The experiments on idealized saw-toothed gypsum discontinuities showed that seismic precursors to failure appeared as maxima in the transmitted wave amplitude and conversely as minima in the reflected amplitudes. Converted waves (S to P & P to S) were also detected, and their amplitudes reached a maximum prior to shear failure. DIC results showed that slip occurred first at the top of the specimen, where the load was applied, and then progressed along the joint as the shear stress increased. This process was consistent with the order of emergence of precursors, i.e., precursors were first recorded near the top and later at the center, and finally at the bottom of the specimen.
Direct shear experiments conducted on specimens with a mismatched discontinuity did not show any precursors (in the transmitted amplitude) to failure at low normal stresses (2 MPa), while precursors appeared at higher normal stresses (5 MPa). The interplay between wave transmission, the degree of mismatch, and the discontinuity's micro-physical, -chemical, and -mechanical properties was assessed through: (1) in-situ 3D X-ray CT scans, to quantify the degree of mismatch at various normal stresses; (2) micro-indentation testing, to measure the micro-strength of the asperities; and (3) Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray spectroscopy (EDX), to study the micro-structure and chemical composition of the discontinuity. The X-ray results showed that contact between asperities increased with normal stress, even when the discontinuity was mismatched. The results indicated that: (1) at 2 MPa, the void aperture was large, so significant shear displacement was needed to interlock and damage the asperities; and (2) the micro-hardness of the asperities of the mismatched discontinuity was larger than that of the well-matched discontinuity, so less damage was induced for the same shear displacement. Both mechanisms mean that larger shear displacements are needed to damage the asperities of a mismatched discontinuity, which is consistent with the inability to detect seismic precursors to failure. The experimental results suggest that monitoring changes in transmitted wave amplitude across a discontinuity is a promising method for predicting impending failure for well-matched rock discontinuities. Precursor monitoring for mismatched rock discontinuities seems possible only when there is sufficient contact between the two rock surfaces, which occurs at large normal stresses.
(7041299), Sijia Wang. "Post-Fire Assessment of Concrete in Bridge Decks". Thesis, 2019.
In recent years, there have been a number of truck fires involving bridges with concrete components. If the fire burns for a significant period of time, the structural integrity of the concrete components can be compromised. Research-based guidance for evaluating the level of fire damage is currently unavailable and would be beneficial for post-fire bridge inspectors.
This research project focused on evaluating the effects of fire induced damage on concrete bridge deck elements. In order to achieve this goal, a series of controlled heating experiments and material analysis were conducted. Two concrete bridge deck specimens from the I-469 bridge over Feighner Road were heated for different time durations (40 - 80 min.) following the ISO-834 temperature-time curve. The deck specimens were cooled naturally after the specific heating durations. The temperature profiles through the depth of deck specimens were measured during heating and cooling. After testing, concrete samples were taken from the deck specimens for material analysis. Different types of material tests were conducted on samples taken from the undamaged and damaged deck specimens. The material test results were used to evaluate the effects of fire induced damage on the concrete microstructure, and to correlate the microstructure degradation with the through-depth temperature profiles of deck specimens.
From the experimental results, several critical parameters that can be affected by fire temperature and duration were discussed: (i) through-depth temperature profiles of the deck specimens, (ii) cracks on the exposed surface of the deck specimens, (iii) color changes of the deck specimens, (iv) the microstructure of heated concrete samples, and (v) the calcium hydroxide content of fire-damaged concrete samples at various depths. Based on the results of the heating experiments and observations from the material analysis, recommendations and guidance for evaluating concrete decks subjected to realistic fire scenarios are provided to assist bridge inspectors.
(5930996), Linji Wang. "EVALUATION OF VEGETATED FILTER STRIP IMPLEMENTATIONS IN DEEP RIVER PORTAGE-BURNS WATERWAY WATERSHED USING SWAT MODEL". Thesis, 2019.
(5930987), Mingda Lu. "ASSESSING THE PERFORMANCE OF BROOKVILLE FLOOD CONTROL DAM". Thesis, 2019.
(7046339), Luz Maria Agudelo Urrego. "FINITE ELEMENT MODELING OF BURIED ARCHED PIPES FOR THE ESTIMATION OF MAXIMUM FILL COVERS". Thesis, 2019.
(9179471), Kuan Hung Lin. "COMPARATIVE ANALYSIS OF SWAT CUP AND SWATSHARE FOR CALIBRATING SWAT MODELS". Thesis, 2020.
The Soil and Water Assessment Tool (SWAT) is a widely used model for simulating large and complex watersheds. To correctly predict watershed runoff, auto-calibration methods are applied. Among available platforms, SWAT CUP is widely used in the SWAT modeling community. A newer web-based calibration platform, SWATShare, is also gaining popularity thanks to its user-friendly interface, access to high-performance computing resources, and collaborative features. While the algorithm implemented in SWAT CUP is Sequential Uncertainty Fitting version 2 (SUFI2), SWATShare employs the Non-dominated Sorting Genetic Algorithm II (NSGA-II). There is a limited amount of research comparing model performance between these two calibration algorithms and platforms.
This study aims to examine whether the models calibrated by the two platforms provide equally reliable results. Thirty US watersheds were studied: SWAT models were calibrated using seven years of rainfall data and outflow observations from 2001 to 2007, and then validated using three years of historical records from 2008 to 2010. Inconsistencies exist between the parameter sets calibrated by the two algorithms, with percentage differences between parameter values ranging from 8.7% to 331.5%. However, in two-thirds of the study basins there is no significant difference between the objective function values of the models calibrated by the two algorithms. Correlations were examined between parameter values and watershed features. Among all features and parameters, reach length and GW_DELAY, CH_N2 and ALPHA_BF, climate zone and GWQMN, and SFTMP and NSE showed medium correlations in both the SWATShare- and SWAT CUP-calibrated models across the 30 watersheds, with correlation coefficient differences between the two tools of less than 0.1. When results were visualized by ecoregion, KGE and NSE were similar for calibrated models from both tools.
The initial parameter ranges used for SWAT CUP calibration can lead to satisfactory results, with objective function values greater than 0.5. However, the parameter values of the calibrated model might not represent real physical conditions if they fall outside realistic ranges. Such inaccurate parameter values can lead to lower objective function values during validation; the objective function values can be improved by constraining the parameter ranges to realistic values.
Comparing the two tools, SWATShare calibrates parameter values to a realistic range using its default ranges in most cases. For models with unsatisfactory SWATShare results, the objective function values could be improved by constraining the parameters to the best-fit ranges given by the SWAT CUP results. Likewise, for watersheds with similarly satisfactory calibrated objective values from both tools, constraining the parameters to a reasonable range generated new calibrated models that performed as well as the originals. Gradually constraining parameter values to a realistic range can exclude models that are statistically satisfactory but physically meaningless. In some ecoregions, the best parameter sets from SWATShare also fell in a more physically meaningful range. Overall, the newly emerged platform, SWATShare, is found to be capable of conducting good SWAT model calibration.
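The objective functions compared above, NSE and KGE, can be sketched in a few lines. This is a generic implementation of the standard formulas (Nash-Sutcliffe efficiency and the 2009 form of the Kling-Gupta efficiency), not code from either platform.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0 are
    worse than simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): combines correlation (r),
    variability ratio (alpha = sigma_sim/sigma_obs) and bias ratio
    (beta = mean_sim/mean_obs)."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    return 1.0 - math.sqrt((r - 1) ** 2 + (ss / so - 1) ** 2 + (ms / mo - 1) ** 2)
```

Both metrics equal 1.0 for a perfect simulation, which is why the "greater than 0.5" threshold mentioned above is a common rule of thumb for a satisfactory calibration.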
(9739793), Audrey Lafia-Bruce. "TRANSPORTATION INFRASTRUCTURE NETWORK PERFORMANCE ANALYSIS". Thesis, 2020.
The main objective of this thesis is to analyze transportation infrastructure using performance measures. In doing so, the thesis represents a transportation network as a system of nodes and links. It is important to identify critical components in transportation networks. To identify them, the most widely used performance measures, such as nodal degree, nodal closeness, nodal eigenvector centrality, and nodal betweenness, were explored in the analysis of the network. These measures account for the vulnerability of a node to failure in the transportation network.
In our daily use of transportation networks, we are faced with disruptions that change the network. Disruptions tend to be commonplace in transportation systems; they range from manmade disruptions, such as accidents, to natural disasters, such as floods caused by rainfall and hurricanes, and seismic activity, many of which are unpredictable. These incidents change how road users interact with the transportation system, causing increased travel times, delays, and even loss of property, and they lead to direct, indirect, and induced impacts.
This study provides a firsthand diagnosis of the vulnerability of the transportation network to flooding by ranking the nodes using performance measures and multicriteria evaluation. Although different performance measures may produce different critical nodes, the most critical node can be established with the aid of sensitivity analysis and a veto rule. The analysis found that node 80 is the most critical and essential node of the entire network under flood impact.
(8086718), Xiangxi Tian. "IMAGE-BASED ROAD PAVEMENT MACROTEXTURE DETERMINATION". Thesis, 2021.
Pavement macrotexture contributes greatly to road surface friction, which in turn plays a significant role in reducing road incidents. Conventional macrotexture measurement techniques (e.g., the sand patch method, the outflow method, and laser measurement) are expensive, time-consuming, or of poor repeatability. This thesis aims to develop and evaluate affordable and convenient alternative approaches to determine pavement macrotexture. The proposed solution is based on multi-view smartphone images collected in situ over the pavement. Computer vision techniques are then applied to create high-resolution three-dimensional (3D) models of the pavement. The thesis develops the analytics to determine two primary macrotexture metrics: mean profile depth (MPD) and aggregate loss. Experiments with 790 images over 25 spots on three State Roads and 6 spots at the INDOT test site demonstrated that the image-based method yields reliable results comparable to those of a conventional laser texture scanner. Moreover, experiments with 280 images over 7 sample plates with different aggregate loss percentages showed that the newly developed analytics enable estimation of aggregate loss, which is largely compromised in the laser scanning technique and the conventional MPD calculation approach. The root mean square height computed from the captured images was verified in this thesis as a more comprehensive metric for macrotexture evaluation. It is expected that the developed approach and analytics can be adopted for practical use at a large scale.
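As background for the metrics named above, here is a minimal sketch of mean profile depth and root mean square height over a single detrended profile segment, following the general ISO 13473-1 idea of averaging the two half-segment peaks. The sample profile is invented; a real implementation would operate on profiles extracted from the 3D pavement model.

```python
def mean_profile_depth(profile):
    """Mean profile depth (MPD) over one evaluation segment, in the spirit
    of ISO 13473-1: average the peak heights of the two segment halves and
    subtract the mean profile level. `profile` is a detrended height profile."""
    n = len(profile)
    mean_level = sum(profile) / n
    first_half, second_half = profile[: n // 2], profile[n // 2 :]
    return (max(first_half) + max(second_half)) / 2.0 - mean_level

def rms_height(profile):
    """Root mean square deviation of the profile about its mean level."""
    mean_level = sum(profile) / len(profile)
    return (sum((z - mean_level) ** 2 for z in profile) / len(profile)) ** 0.5

# Invented 6-point height profile (mm), purely for illustration
profile = [0.0, 1.0, 0.0, 0.0, 2.0, 0.0]
```

In practice, the standard prescribes specific segment lengths and filtering before this calculation, and the final MPD is averaged over many segments.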
(8922227), Mohamadreza Moini. "BUILDABILITY AND MECHANICAL PERFORMANCE OF ARCHITECTURED CEMENT-BASED MATERIALS FABRICATED USING A DIRECT-INK-WRITING PROCESS". Thesis, 2020.
Additive Manufacturing (AM) allows for the creation of elements with novel forms and functions. Utilizing AM in development of components of civil infrastructure allows for achieving more advanced, innovative, and unique performance characteristics. The research presented in this dissertation is focused on development of a better understanding of the fabrication challenges and opportunities in AM of cement-based materials. Specifically, challenges related to printability and opportunities offered by 3D-printing technology, including ability to fabricate intricate structures and generate unique and enhanced mechanical responses have been explored. Three aspects related to 3D-printing of cement-based materials were investigated. These aspects include: fresh stability of 3D-printed elements in relation to materials rheological properties, microstructural characteristics of the interfaces induced during the 3D-printing process, and the mechanical response of 3D-printed elements with bio-inspired design of the materials’ architecture. This research aims to contribute to development of new pathways to obtain stability in freshly 3D-printed elements by determining the rheological properties of material that control the ability to fabricate elements in a layer-by-layer manner, followed by the understanding of the microstructural features of the 3D-printed hardened cement paste elements including the interfaces and the pore network. This research also introduces a new approach to enhance the mechanical response of the 3D-printed elements by controlling the spatial arrangement of individual filaments (i.e., materials’ architecture) and by harnessing the weak interfaces that are induced by the 3D-printing process.
(5929889), Joo Min Kim. "Behavior, Analysis and Design of Steel-Plate Composite (SC) Walls for Impactive Loading". Thesis, 2019.
(6845639), Farida Ikpemesi Mahmud. "Simplified Assessment Procedure to Determine the Seismic Vulnerability of Reinforced Concrete Bridges in Indiana". Thesis, 2019.
The possibility of earthquakes in Indiana due to the presence of the New Madrid Seismic Zone is well known. However, the identification of the Wabash Valley Seismic Zone has increased our understanding of the seismic hazard in the state of Indiana. Given this awareness of the increased potential for earthquakes, specifically in the Vincennes District, the seismic vulnerability of Indiana's bridge network must be assessed. The objective of this thesis is therefore to develop a simplified assessment procedure that can be used to conduct a state-wide seismic vulnerability assessment of reinforced concrete bridges in Indiana.
Across the state, variability in substructure type, seismic hazard level, and soil site class influences the vulnerability of bridges. To fully understand the impact of this variation, a detailed assessment is completed on a representative sample. Twenty-five reinforced concrete bridges are selected across the state, and analyzed using information from the bridge drawings and a finite element analysis procedure. These bridges are analyzed using synthetic ground motions representative of the hazard level in Indiana. The results of the detailed analysis are used to develop a simplified assessment procedure that uses information that is available in BIAS or can be added to BIAS. At this time, BIAS does not contain all the necessary information required for accurate estimates of dynamic properties, thus, certain assumptions are made. Several candidate models are developed by incrementally increasing the level of information proposed to be added into BIAS, which resulted in an increase in the level of accuracy of the results. The simplified assessment is then validated through a comparison with the detailed analysis.
Through the development of the simplified assessment procedure, the minimum data item which must be added to BIAS to complete the assessment is the substructure type, and bridges with reinforced concrete columns in the substructure require a detailed assessment. Lastly, by increasing the level of information available in BIAS, the agreement between the results of the simplified assessment and the detailed assessment is improved.
(6640721), Alexandra D. Mallory. "On the development of an open-source preprocessing framework for finite element simulations". Thesis, 2019.
(6641012), Genisson Silva Coutinho. "FACULTY BELIEFS AND ORIENTATIONS TO TEACHING AND LEARNING IN THE LAB: AN EXPLORATORY CASE STUDY". Thesis, 2019.
(7040873), Ting-Wei Wang. "ANCHORING TO LIGHTWEIGHT CONCRETE: CONCRETE BREAKOUT STRENGTH OF CAST-IN, EXPANSION, AND SCREW ANCHORS IN TENSION". Thesis, 2019.
(7479359), Cheng Qian. "Evaluation of Deep Learning-Based Semantic Segmentation Approaches for Autonomous Corrosion Detection on Metallic Surfaces". Thesis, 2019.
(8800811), Mingmin Liu. "MODELLING OF INTERSTATE I-465 CRASH COUNTS DURING SNOW EVENTS". Thesis, 2020.
(6613415), Leonardo Enrico Bertassello. "Eco-Hydrological Analysis of Wetlandscapes". Thesis, 2019.
Cerca il testo completoKovacevic, Vlado S. "The impact of bus stop micro-locations on pedestrian safety in areas of main attraction". 2005. http://arrow.unisa.edu.au:8081/1959.8/28389.
(6411944), Francisco J. Montes Sr. "EFFECTS ON RHEOLOGY AND HYDRATION OF THE ADDITION OF CELLULOSE NANOCRYSTALS (CNC) IN PORTLAND CEMENT". Thesis, 2019.
All the CNCs used were characterized in length, aspect ratio, and zeta potential to identify a definitive factor that governs their effect on the rheology of cement pastes. However, no definitive evidence was found that any of these characteristics dominated the measured effects.
The CNC dosage at which the maximum yield stress reduction occurred increased with the amount of water used in the paste preparation, which provides evidence of the dominance of the water to cement ratio in the rheological impact of CNC.
Isothermal calorimetry showed that CNCs cause concerning retardation of cement hydration. CNC slurries were then tested for sugars and other carbohydrates that could cause this effect; the slurries were filtered, and the impurities detected in the filtrate were quantified and characterized. The retardation, however, appeared to be unaffected by the amount of the species detected, suggesting that the crystal chemistry, which is a consequence of the production method, is responsible for the retardation.
This work explores the benefits and drawbacks of the use of CNC in cement composites by individually approaching rheology and heat of hydration on a range of physical and chemical tests to build a better understanding of the observed effects.
Understanding the effect of CNCs on cement paste rheology can provide insights for future work on CNC applications in cement composites.
(7484339), Fu-Chen Chen. "Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems". Thesis, 2021.
(5930171), Yuxiao Qin. "Sentinel-1 Wide Swath Interferometry: Processing Techniques and Applications". Thesis, 2019.
(9229868), Jephunneh Bonsafo-Bawuah. "AN AGENT-BASED FRAMEWORK FOR INFRASTRUCTURE MAINTENANCE DECISION MAKING". Thesis, 2020.
(9750833), Zilong Yang. "Automated Building Extraction from Aerial Imagery with Mask R-CNN". Thesis, 2020.
Buildings are one of the fundamental sources of geospatial information for urban planning, population estimation, and infrastructure management. Although building extraction research has made considerable progress through neural network methods, labeling training data still requires manual operations, which are time-consuming and labor-intensive. To improve this process, this thesis developed an automated building extraction method based on the boundary following technique and the Mask Regional Convolutional Neural Network (Mask R-CNN) model. First, assisted by known building footprints, a boundary following method was used to automatically label the training image datasets. Next, the Mask R-CNN model was trained with the labeling results and then applied to building extraction. Experiments with 2016 high-resolution aerial image datasets covering urban areas of Bloomington and Indianapolis verified the effectiveness of the proposed approach. With the help of existing building footprints, the automatic labeling process took only five seconds for a 500×500-pixel image, without human interaction. An intersection over union (IoU) of 0.951 between the labeled mask and the ground truth was achieved, owing to the high quality of the automatic labeling step. In the training process, the ResNet50 network and the feature pyramid network (FPN) were adopted for feature extraction. The region proposal network (RPN) was then trained end-to-end to create region proposals. The performance of the proposed approach was evaluated in terms of building detection and mask segmentation on the two datasets. The building detection results for 40 test tiles in each of Bloomington and Indianapolis showed that the Mask R-CNN model achieved F1-scores of 0.951 and 0.968, respectively. In addition, 84.2% of the newly built buildings in the Indianapolis dataset were successfully detected.
According to the segmentation results on these two datasets, the Mask R-CNN model achieved the mean pixel accuracy (MPA) of 92% and 88%, respectively for Bloomington and Indianapolis. It was found that the performance of the mask segmentation and contour extraction became less satisfactory as the building shapes and roofs became more complex. It is expected that the method developed in this thesis can be adapted for large-scale use under varying urban setups.
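The IoU metric used above to score the automatic labels can be computed directly on binary masks. This is the standard definition, not the thesis's own evaluation code.

```python
def mask_iou(mask_a, mask_b):
    """Intersection over union of two same-sized binary masks
    (nested lists of 0/1, as produced by a segmentation model)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a & b   # pixel set in both masks
            union += a | b   # pixel set in either mask
    return inter / union if union else 0.0

# Toy 2x2 masks, purely for illustration
predicted = [[1, 1], [0, 0]]
reference = [[1, 0], [0, 0]]
```

An IoU of 0.951, as reported for the automatic labeling, means the labeled and ground-truth masks overlap on about 95% of their combined pixel area.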
(6618812), Harsh Patel. "IMPLEMENTING THE SUPERPAVE 5 ASPHALT MIXTURE DESIGN METHOD IN INDIANA". Thesis, 2019.
(7860779), Mohammadreza Pouranian. "Aggregate Packing Characteristics of Asphalt Mixtures". Thesis, 2019.
The first task was to propose an analytical approach for estimating changes in voids in the mineral aggregate (VMA) due to gradation variation and determining the relevant aggregate skeleton characteristics of asphalt mixtures using the linear-mixture packing model, an analytical packing model that considers the mechanisms of particle packing, filling and occupation. Application of the linear-mixture packing model to estimate the VMA of asphalt mixtures showed there is a high correlation between laboratory measured and model estimated values. Additionally, the model defined a new variable, the central particle size of asphalt mixtures that characterized an asphalt mixture’s aggregate skeleton. Finally, the proposed analytical model showed a significant potential to be used in the early stages of asphalt mixture design to determine the effect of aggregate gradation changes on VMA and to predict mixture rutting performance.
As the second task, a framework to define and understand the aggregate structure of asphalt mixtures was proposed. To develop this framework, an analytical model for binary mixtures was formulated. The model considers the effect of the size ratio and the air volume between particles on the aggregate structure and packing density of binary mixtures. Based on this model, four aggregate structures, namely coarse pack (CP), coarse-dense pack (CDP), fine-dense pack (FDP), and fine pack (FP), were defined. The model was validated using a series of 3D discrete element simulations. Furthermore, a simulation of multi-sized aggregate blends using two representative sizes for fine and coarse stockpiles was carried out to apply the proposed analytical model to actual aggregate blends. The numerical simulations verified that the proposed analytical model could satisfactorily determine the particle structure of binary and multi-sized asphalt mixture gradations and could, therefore, be used to better design asphalt mixtures for improved performance.
The third task virtually investigated the effect of the shape characteristics of coarse aggregates on the compactability of asphalt mixtures using the discrete element method (DEM). The 3D particles were constructed using a method based on discrete random field theory and spherical harmonics, and their size distribution in the container was controlled by applying a constrained Voronoi tessellation (CVT) method. The effect of fine aggregates and asphalt binder was considered through a constitutive Burgers interaction model between coarse particles. Five aggregate shape descriptors (flatness, elongation, roundness, sphericity, and regularity) and two Superpave gyratory compactor (SGC) parameters (initial density at Nini and compaction slope) were selected for investigation and statistical analysis. Results revealed a statistically significant correlation between flatness, elongation, roundness, and sphericity as shape descriptors and initial density as a compaction parameter. The results also showed that the maximum percentage change in initial density is 5% and 18% for crushed and natural sands, respectively. The analysis showed that, among all particle shape descriptors, only roundness and regularity had a statistically significant relation with compaction slope: as roundness and regularity increase (low angularity), the compaction slope decreases. Additionally, a set of simulations was conducted to examine the effect of the percentage of flat and elongated (F&E) particles in a mixture, using five types of F&E particles (dimensional ratios 1:2, 1:3, 1:4 and 1:5) and ten different percentages (0, 5, 10, 15, 20, 30, 40, 50, 80 and 100) with respect to a reference mixture containing particles with flatness and elongation equal to 0.88. Results indicated that an increase of F&E particles in a mixture beyond 15% results in a significant reduction in the initial density of the mixture, especially for lower dimensional ratios (1:4 and 1:5).
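Several of the shape descriptors named above have standard textbook definitions based on a particle's principal dimensions L ≥ I ≥ S (longest, intermediate, shortest axis). The sketch below uses one common convention (including Krumbein intercept sphericity); the thesis's exact definitions may differ, and the 1:5 particle is illustrative.

```python
def elongation(L, I, S):
    """Intermediate-to-longest axis ratio (1.0 for an equant particle)."""
    return I / L

def flatness(L, I, S):
    """Shortest-to-intermediate axis ratio."""
    return S / I

def krumbein_sphericity(L, I, S):
    """Krumbein intercept sphericity: ((I * S) / L**2) ** (1/3)."""
    return ((I * S) / L ** 2) ** (1.0 / 3.0)

# a hypothetical 1:5 flat-and-elongated particle vs. the equant limit
print(elongation(5.0, 1.0, 1.0))                     # 0.2
print(round(krumbein_sphericity(5.0, 1.0, 1.0), 3))  # 0.342
print(krumbein_sphericity(1.0, 1.0, 1.0))            # 1.0
```

Low sphericity and elongation values like these mark the F&E particles whose increasing share (beyond ~15%) reduced initial density in the simulations.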
(8065844), Jedadiah Floyd Burroughs. "Influence of Chemical and Physical Properties of Poorly-Ordered Silica on Reactivity and Rheology of Cementitious Materials". Thesis, 2019.
Silica fume is a widely used pozzolan in the concrete industry that has been shown to have numerous benefits for concrete, including improved mechanical properties, a refined pore structure, and densification of the interfacial transition zone between paste and aggregates. Traditionally, silica fume is used as a 5% to 10% replacement of cement; however, newer classes of higher-strength concretes use silica fume contents of 30% or greater. At these high silica fume contents, many detrimental effects, such as poor workability and inconsistent strength development, become much more prominent.
In order to understand the fundamental reasons why high silica fume contents can have these detrimental effects on concrete mixtures, eight commercially available silica fumes were characterized for their physical and chemical properties. These included traditional properties such as density, particle size, and surface area. A non-traditional property, absorption capacity, was also determined. These raw material characteristics were then related to the hydration and rheological behavior of pastes and concrete mixtures. Other tests were performed, including isothermal calorimetry, which showed that each silica fume reacted differently when exposed to the same reactive environment. Traditional hydration models for ordinary portland cement were expanded to include the effects that silica fumes have on water consumption, volumes of hydration products, and final degree of hydration.
As a result of this research, it was determined necessary to account for the volume and surface area of unhydrated cement and unreacted silica fume particles in water-starved mixture proportions. An adjustment factor was developed to more accurately apply the results from hydration modeling. By combining the results from hydration modeling with the surface area adjustments, an analytical model was developed to determine the thickness of paste (hydration products and capillary water) that surrounds all of the inert and unreacted particles in the system. This model, denoted the "Paste Thickness Model," was shown to be a strong predictor of compressive strength results. The results of this research suggest that increasing the paste thickness decreases the expected compressive strength of concretes at a given age or state of hydration.
The rheological behavior of cement pastes containing silica fume was studied using a rotational rheometer. The Herschel-Bulkley model was fit to the rheological data to characterize this behavior. A multilinear model was developed to relate the specific surface area of the silica fume, the water content, and the silica fume content to the Herschel-Bulkley rate index, which is practically related to the ease with which the paste mixes. This multilinear model was shown to have strong predictive capability when used on randomly generated paste compositions.
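One way to see what the Herschel-Bulkley rate index captures is to fit the model tau = tau0 + K * gamma_dot**n (yield stress tau0, rate index K, flow index n) to shear-rate/shear-stress data. The sketch below is not the author's fitting procedure: it recovers the parameters from synthetic, noiseless data with a grid search on tau0 plus a log-linear regression, and all values are assumed.

```python
import numpy as np

def herschel_bulkley(gd, tau0, K, n):
    """Shear stress tau = tau0 + K * gamma_dot**n."""
    return tau0 + K * gd ** n

gd = np.linspace(1.0, 100.0, 50)           # shear rate, 1/s (synthetic)
tau = herschel_bulkley(gd, 5.0, 0.8, 0.6)  # synthetic stress data, Pa

# grid-search the yield stress tau0; fit K and n by log-linear regression
best = None
for t0 in np.linspace(0.0, 5.5, 551):
    y = tau - t0
    if (y <= 0.0).any():
        continue
    n_fit, log_k = np.polyfit(np.log(gd), np.log(y), 1)
    resid = ((herschel_bulkley(gd, t0, np.exp(log_k), n_fit) - tau) ** 2).sum()
    if best is None or resid < best[0]:
        best = (resid, t0, np.exp(log_k), n_fit)

_, tau0, K, n = best
print(round(tau0, 2), round(K, 2), round(n, 2))  # 5.0 0.8 0.6
```

A larger fitted K corresponds to a stiffer, harder-to-mix paste, which is why the thesis relates it to mixing ease.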
Additionally, an analytical model was developed that defines a single parameter, idealized as the thickness of water surrounding each particle in the cementitious system. This model, denoted as the “Water Thickness Model,” incorporated the absorption capacity of silica fumes discovered during the characterization phase of this study and was shown to correlate strongly with the Herschel-Bulkley rate index. The Water Thickness Model demonstrates how small changes in water content can have a drastic effect on the rheology of low w/c or high silica fume content pastes due to the combined effects of surface area and absorption. The effect of additional water on higher w/c mixtures is significantly less.
(5930657), Go-Eun Han. "A STUDY ON THE FAILURE ANALYSIS OF THE NEUTRON EMBRITTLED REACTOR PRESSURE VESSEL SUPPORT USING FINITE ELEMENT ANALYSIS". Thesis, 2020.
One of the major degradation mechanisms in nuclear power plant structural and mechanical components is the neutron embrittlement of irradiated steel. High-energy neutrons change the microstructure of the steel, so the steel loses fracture toughness, and this neutron embrittlement increases the risk of brittle fracture. Meanwhile, the reactor pressure vessel support is exposed to a low-temperature, high-neutron-irradiation environment, an unfavorable condition with respect to fracture failure. In this study, the failure assessment of a reactor pressure vessel support was conducted using the fitness-for-service failure assessment diagram of API 579-1/ASME FFS-1 (2016), quantifying the structural margin under maximum irradiation and extreme load events.
Two interrelated studies were conducted. In the first, current analytical methods were reviewed to estimate the embrittled properties, such as fracture toughness and yield strength, incorporating the low irradiation temperature. The analytical results indicated that the reactor pressure vessel support may experience a substantial decrease in fracture toughness during operation, approaching the lower bound of fracture toughness. A three-dimensional (3D) solid-element finite element model was built for linear stress analysis. Postulated cracks were located in the maximum stress region to compute the stress intensity and the reference stress ratio. Based on the stress results and the estimated material properties, the structural margin of the reactor pressure vessel support was analyzed in the failure assessment diagram with respect to crack type, applied load level, and neutron fluence.
The second study explored structural stress analysis approaches at the hot spot, which was found to be a key parameter in the failure analysis. Depending on the method used to remove the nonlinear peak stress and the stress singularities, the accuracy of the failure assessment result varies. As an alternative means of evaluating the structural stress in 3D finite element analysis (FEA), the 3D model was divided into two-dimensional (2D) plane models. Five structural stress determination approaches were applied in 2D FEA for comparison: stress linearization, the single-point-away approach, stress extrapolation, the stress equilibrium method, and the nodal force method. The structural stress was reconstructed in 3D via the 3×3 stress matrix and compared with the 3D FEA results. The differences among the 2D FEA structural stress results were eliminated by reconstructing the stress in 3D.
This study provides the failure assessment analysis of irradiated steel with prediction of the failure modes and safety margin. Through the failure assessment diagram, we could understand the effects of different levels of irradiation and loadings. Also, this study provides an alternative structural stress determination method, dividing the 3D solid element model into two 2D models, using the finite element analysis.
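The acceptance check on a failure assessment diagram can be sketched as follows. The envelope below is the widely used Level-2/Option-1 interpolation curve (the form adopted in API 579-1/ASME FFS-1 and R6); the toughness ratio Kr and load ratio Lr assessment points are illustrative values, not results from the thesis.

```python
import math

def fad_envelope(Lr):
    """Kr limit as a function of Lr (Level-2 / Option-1 curve form)."""
    return (1.0 - 0.14 * Lr ** 2) * (0.3 + 0.7 * math.exp(-0.65 * Lr ** 6))

def is_acceptable(Kr, Lr, Lr_max=1.0):
    """An assessment point (Lr, Kr) is safe if it lies inside the envelope
    and below the plastic-collapse cutoff Lr_max."""
    return Lr <= Lr_max and Kr <= fad_envelope(Lr)

# embrittlement lowers toughness, raising Kr at the same applied load:
print(is_acceptable(Kr=0.50, Lr=0.6))  # True: inside the envelope
print(is_acceptable(Kr=0.95, Lr=0.6))  # False: brittle-fracture regime
```

Points migrating upward on the diagram with increasing fluence is exactly the embrittlement effect the study quantifies; the distance to the envelope is the structural margin.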
(6592994), Nicholas R. Olsen. "Long Term Trends in Lake Michigan Wave Climate". Thesis, 2019.
Tests show significant long-term decreases in annual mean wave height in the lake's southern basin (up to -1.5 mm/yr). When the wave-approach-direction dependence was removed by testing directional bins for trends independently, an increase in the extent of the affected coast and in the rate of wave-height decline was found (up to -4 mm/yr). A previously unseen increasing trend in wave size in the northern basin (up to 2 mm/yr) was also revealed.
Data from the WIS model indicated that storm duration and peak wave height in the southern basin decreased at average rates of -0.085 hr/yr and -5 mm/yr, respectively, from 1979 to 2017. An analysis of the shape of the extreme value distribution in the southern basin found a similar pattern in the WIS hindcast, with the probability of observing a wave larger than 5 meters changing at about -0.0125 yr^-1. In the northern basin, the probability of observing a wave of the same size increased at a rate of 0.0075 yr^-1.
The results for trends in the annual means revealed the importance of removing temporal and spatial within-series dependencies in wave-height data. The strong dependence of lake waves on approach direction, as compared with ocean waves, may result from the relatively large differences in fetch length in an enclosed body of water. Without removal or isolation of these dependencies, trends may be lost. Additionally, removal of the seasonal component in the lake water level and mean wave-height series revealed no significant correlation between these series.
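A trend in mm/yr of the kind reported above is, at its simplest, the slope of a least-squares line through the annual-mean series. The sketch below uses synthetic data (an assumed -1.5 mm/yr decline with assumed noise), not the thesis dataset, purely to show the units and the fit.

```python
import numpy as np

years = np.arange(1979, 2018)                  # 39 synthetic annual means
rng = np.random.default_rng(0)
wave_height = (0.6                             # mean wave height, m (assumed)
               - 0.0015 * (years - years[0])   # assumed -1.5 mm/yr decline
               + rng.normal(0.0, 0.005, years.size))  # interannual noise

slope_m_per_yr, intercept = np.polyfit(years, wave_height, 1)
print(round(slope_m_per_yr * 1000.0, 1), "mm/yr")  # recovers ~ -1.5 mm/yr
```

In practice a rank-based test such as Mann-Kendall with Sen's slope is often preferred for wave climatologies, since it is robust to non-normal residuals; the least-squares fit here is just the simplest estimator.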
(7484483), Soohyun Yang. "COUPLED ENGINEERED AND NATURAL DRAINAGE NETWORKS: DATA-MODEL SYNTHESIS IN URBANIZED RIVER BASINS". Thesis, 2019.
In urbanized river basins, sanitary wastewater and urban runoff (non-sanitary water) from urban agglomerations drain to complex engineered networks, are treated at centralized wastewater treatment plants (WWTPs) and discharged to river networks. Discharge from multiple WWTPs distributed in urbanized river basins contributes to impairments of river water-quality and aquatic ecosystem integrity. The size and location of WWTPs are determined by spatial patterns of population in urban agglomerations within a river basin. Economic and engineering constraints determine the combination of wastewater treatment technologies used to meet required environmental regulatory standards for treated wastewater discharged to river networks. Thus, it is necessary to understand the natural-human-engineered networks as coupled systems, to characterize their interrelations, and to understand emergent spatiotemporal patterns and scaling of geochemical and ecological responses.
My PhD research involved data-model synthesis, using publicly available data and application of well-established network analysis/modeling synthesis approaches. I present the scope and specific subjects of my PhD project by employing the Drivers-Pressures-State-Impacts-Responses (DPSIR) framework. The defined research scope is organized as three main themes: (1) River network and urban drainage networks (Foundation-Pathway of Pressures); (2) River network, human population, and WWTPs (Foundation-Drivers-Pathway of Pressures); and (3) Nutrient loads and their impacts at reach- and basin-scales (Pressures-Impacts).
Three inter-related research topics are: (1) the similarities and differences in scaling and topology of engineered urban drainage networks (UDNs) in two cities, and UDN evolution over decades; (2) the scaling and spatial organization of three attributes: human population (POP), population equivalents (PE; the aggregated population served by each WWTP), and the number/sizes of WWTPs, using geo-referenced data for WWTPs in three large urbanized basins in Germany; and (3) the scaling of nutrient loads (P and N) discharged from ~845 WWTPs (five class-sizes) in the urbanized Weser River basin in Germany, and likely water-quality impacts from point- and diffuse-source nutrients.
I investigate the UDN scaling using two power-law scaling characteristics widely employed for river networks: (1) Hack’s law (length-area power-law relationship), and (2) exceedance probability distribution of upstream contributing area. For the smallest UDNs, length-area scales linearly, but power-law scaling emerges as the UDNs grow. While area-exceedance plots for river networks are abruptly truncated, those for UDNs display exponential tempering. The tempering parameter decreases as the UDNs grow, implying that the distribution evolves in time to resemble those for river networks. However, the power-law exponent for mature UDNs tends to be larger than the range reported for river networks. Differences in generative processes and engineering design constraints contribute to observed differences in the evolution of UDNs and river networks, including subnet heterogeneity and non-random branching.
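The two scaling diagnostics named above can be illustrated on synthetic data: Hack's law L = c * A**h fitted by log-log regression, and the empirical exceedance probability of contributing area. The exponent h = 0.57 used to generate the data is a value commonly reported for river networks, not a result of this thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
area = np.logspace(0, 4, 200)   # synthetic contributing areas (arbitrary units)
length = 1.4 * area ** 0.57 * np.exp(rng.normal(0.0, 0.05, area.size))

# Hack's law: slope of log(length) vs log(area) estimates the exponent h
h, log_c = np.polyfit(np.log(area), np.log(length), 1)
print(round(h, 2))              # recovers ~0.57

# empirical exceedance probability P(A >= a) over the sorted areas
a_sorted = np.sort(area)
exceed = 1.0 - np.arange(area.size) / area.size
print(exceed[0])                # 1.0 at the smallest area
```

On a log-log plot, a straight-tailed `exceed` curve indicates the river-network-like power law, while downward curvature of the tail corresponds to the exponential tempering observed for young UDNs.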
In this study, I also examine the spatial patterns of POP, PE, and WWTPs from two perspectives, employing fractal river networks as structural platforms: spatial hierarchy (stream order) and patterns along longitudinal flow paths (width function). I propose three dimensionless scaling indices to quantify: (1) human settlement preferences by stream order, (2) the non-sanitary flow contribution to total wastewater treated at WWTPs, and (3) the degree of centralization of WWTP locations. I select as case studies three large urbanized river basins (Weser, Elbe, and Rhine), home to about 70% of the population of Germany. Across the three basins, the study shows scale-invariant distributions for each of the three attributes with stream order, quantified using extended Horton scaling ratios, and a weak downstream clustering of POP. Variations in PE clustering among different class-sizes of WWTPs reflect the size, number, and locations of urban agglomerations in these catchments.
WWTP effluents have impacts on the hydrologic attributes and water quality of receiving river bodies at the reach- and basin-scales. I analyze the adverse impacts of WWTP discharges for the Weser River basin (Germany) at two steady river discharge conditions (median flow; low flow). This study shows that significant variability in treated wastewater discharge within and among the five WWTP class-sizes, and variability of river discharge within streams of order < 3, contribute to large variations in the capacity to dilute WWTP nutrient loads. For median flow, reach-scale water-quality impairment assessed by nutrient concentration is likely at 136 (~16%) locations for P and 15 (~2%) locations for N. About 90% of the impaired locations are on streams of order < 3. At the basin scale, accounting for in-stream uptake resulted in 225 (~27%) P-impaired streams, a ~5% reduction relative to considering dilution alone; this suggests the dominant role of dilution in the Weser River basin. Under low-flow conditions, the number of likely water-quality-impaired locations roughly doubles relative to median flow. This study of the Weser River basin reveals that the role of in-stream uptake diminishes along flow paths, while dilution in larger streams (4 ≤ stream order ≤ 7) minimizes the impact of WWTP loads.
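The reach-scale dilution argument above reduces to a mass balance at the outfall: the fully mixed downstream concentration is the flow-weighted average of river and effluent concentrations (ignoring in-stream uptake). The flows (m3/s) and concentrations (mg P/L) below are illustrative, not Weser data.

```python
def mixed_concentration(q_river, c_river, q_effluent, c_effluent):
    """Mass-balance mixing of river water and WWTP effluent at an outfall."""
    return (q_river * c_river + q_effluent * c_effluent) / (q_river + q_effluent)

# the same effluent discharged to a small (order < 3) vs. a large stream
small = mixed_concentration(0.5, 0.05, 0.1, 2.0)   # low dilution capacity
large = mixed_concentration(50.0, 0.05, 0.1, 2.0)  # high dilution capacity
print(round(small, 3))   # 0.375
print(round(large, 3))   # 0.054
```

The order-of-magnitude gap between the two results is the mechanism behind the finding that ~90% of impaired locations sit on low-order streams.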
Furthermore, I investigate eutrophication risk from spatially heterogeneous diffuse- and point-source P loads in the Weser River basin, using the basin-scale network model with in-stream losses (nutrient uptake). Considering long-term shifts in P loads over three representative periods, my analysis shows that P loads from diffuse sources, mainly agricultural areas, have played the dominant role in eutrophication risk since the 2000s, because point-source P loads were reduced by ~87% relative to the 1980s through implementation of the EU WFD. Nevertheless, point sources discharging to smaller streams (stream order < 3) amplify water-quality impairment, consistent with the reach-scale analyses of WWTP effluents alone. Comparing with long-term water-quality monitoring data, I demonstrate that point-source loads are the primary contributors to eutrophication in smaller streams, whereas diffuse-source loads, mainly from agricultural areas, drive eutrophication in larger streams. These results reflect the spatial patterns of WWTPs and land cover in the Weser River basin.
Through data-model synthesis, I identify the characteristics of the coupled natural (rivers) – humans – engineered (urban drainage infrastructure) systems (CNHES), inspired by analogy, coexistence, and causality across the coupled networks in urbanized river basins. The quantitative measures and the basin-scale network model presented in my PhD project could extend to other large urbanized basins for better understanding the spatial distribution patterns of the CNHES and the resultant impacts on river water-quality impairment.
(8735910), Josept David Revuelta Acosta Sr. "WATER-DRIVEN EROSION PREDICTION TECHNOLOGY FOR A MORE COMPLICATED REALITY". Thesis, 2020.
Hydrological modeling has been a valuable tool for understanding the processes governing water distribution, quantity, and quality on Earth. Through models, one has been able to grasp processes such as runoff, soil moisture, soil erosion, subsurface drainage, plant growth, evapotranspiration, and the effects of land use changes on hydrology at field and watershed scales. The number and diversity of water-related challenges are vast and expected to increase. As a result, current models need continuous modification to extend their application to more complex processes. Several models have been extensively developed in recent years, including the Soil and Water Assessment Tool (SWAT), the Variable Infiltration Capacity (VIC) model, MIKE-SHE, and the Water Erosion Prediction Project (WEPP) model. The latter, although well validated at the field scale, has been limited to small catchments in its watershed form, and almost no research has addressed water quality issues (only one study).
In this research, three objectives were proposed to improve the WEPP model in three areas where either the model has not been applied, or modifications can be performed to improve algorithms of the processes within the model (e.g. erosion, runoff, drainage). The enhancements impact the WEPP model by improving the current stochastic weather generation, extending its applicability to subsurface drainage estimation, and formulating a new routing model that allows future incorporation of transport of reactive solutes.
The first contribution was the development of a stochastic storm generator based on a 5-min time resolution and correlated non-normal Monte Carlo-based numerical simulation. The model considered correlated, non-normal rainstorm characteristics such as time between storms, duration, and amount of precipitation, as well as the storm intensity structure. The model was tested using precipitation data from a randomly selected weather station in North Carolina with 5-min resolution. Results showed that the proposed storm generator captured the essential statistical features of rainstorms and their intensity patterns, preserving the first four moments of monthly storm events, good annual extreme event correspondence, and the correlation structure within each storm. Since the proposed model depends only on statistical properties at a site, it may allow the use of synthetic storms in ungauged locations, provided relevant information from a regional analysis is available.
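The core mechanism of a correlated non-normal generator can be sketched with a Gaussian-copula (NORTA-style) scheme: draw correlated standard normals, transform them to uniforms, then map each margin through an inverse CDF. This is not the thesis code; the exponential margins, their means, and the 0.6 correlation are assumed values for demonstration only.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
L = np.linalg.cholesky(corr)

z = rng.standard_normal((20000, 2)) @ L.T       # correlated N(0,1) pairs
phi = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))
u = phi(z)                                       # correlated uniforms on (0,1)

# assumed margins: storm duration ~ Exp(mean 6 h), depth ~ Exp(mean 10 mm)
duration = -6.0 * np.log(1.0 - u[:, 0])
depth = -10.0 * np.log(1.0 - u[:, 1])

print(round(float(duration.mean()), 1))          # close to 6.0
print(round(float(depth.mean()), 1))             # close to 10.0
print(np.corrcoef(duration, depth)[0, 1] > 0.4)  # dependence preserved: True
```

The copula step is what lets the generator preserve both the non-normal marginal moments and the cross-correlation between storm characteristics, the two properties the validation above checks.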
A second development included the testing, improvement, and validation of the WEPP model to simulate subsurface flow discharges. The proposed model included the modification of the current subsurface drainage algorithm (Hooghoudt-based expression) and the WEPP model percolation routine. The modified WEPP model was tested and validated on an extensive dataset collected at four experimental sites managed by USDA-ARS within the Lake Erie Watershed. Predicted subsurface discharges show Nash-Sutcliffe Efficiency (NSE) values ranging from 0.50 to 0.70, and percent bias ranging from -30% to +15% at daily and monthly resolutions. Evidence suggests the WEPP model can be used to produce reliable estimates of subsurface flow with minimum calibration.
The last objective presented the theoretical framework for a new hillslope and channel-routing model for the Water Erosion Prediction Project (WEPP) model. The routing model (WEPP-CMT) is based on catchment geomorphology and mass transport theory for flow and transport of reactive solutes. The WEPP-CMT uses the unique functionality of WEPP to simulate hillslope responses under diverse land use and management conditions and a Lagrangian description of the carrier hydrologic runoff at hillslope and channel domains. An example of the model functionality was tested in a sub-catchment of the Upper Cedar River Watershed in the U.S. Pacific Northwest. Results showed that the proposed model provides an acceptable representation of flow at the outlet of the study catchment. Model efficiencies and percent bias for the calibration period and the validation period were NSE = 0.55 and 0.65, and PBIAS = -2.8% and 2.1%, respectively. The WEPP-CMT provides a suitable foundation for the transport of reactive solutes (e.g. nitrates) at basin scales.
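The two goodness-of-fit statistics quoted above, Nash-Sutcliffe Efficiency (NSE) and percent bias (PBIAS), are simple to compute. The sketch below uses illustrative observed/simulated discharge series, not the study data, and adopts the common convention in which positive PBIAS indicates model underestimation.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means no better
    than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation under this
    convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (obs - sim).sum() / obs.sum()

obs = [1.0, 2.0, 3.0, 4.0, 5.0]   # observed discharge (illustrative)
sim = [1.1, 1.9, 3.2, 3.8, 5.1]   # simulated discharge (illustrative)
print(round(nse(obs, sim), 3))    # 0.989
print(round(pbias(obs, sim), 1))  # -0.7 (slight overestimation)
```

Against this scale, the reported NSE of 0.50-0.70 with |PBIAS| ≤ 30% for the subsurface-drainage runs is a moderate-to-good fit by common hydrologic-model benchmarks.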
(6620447), Yen-Chen Chiang. "Studies on Aboveground Storage Tanks Subjected to Wind Loading: Static, Dynamic, and Computational Fluid Dynamics Analyses". Thesis, 2019.
Due to the slender geometries of aboveground storage tanks, maintaining their stability under wind gusts has always been a challenge. This thesis therefore aims to provide a thorough insight into the behavior of tanks under wind gusts using finite element analysis and computational fluid dynamics (CFD) analysis. The thesis comprises three independent studies employing different types of analysis. In Chapter 2, the main purpose is to model the wind loading dynamically and to investigate whether resonance can be triggered. Research on tanks subjected to static wind load has thrived for decades, while only a few studies consider the wind loading dynamically. Five tanks with height (H) to diameter (D) ratios ranging from 0.2 to 4 were investigated in this chapter. To ensure solution quality, a study on the time-step increment of the explicit dynamic analysis and a mesh convergence study were conducted before the analyses were performed. The natural vibration frequencies and effective masses of the selected tanks were first solved. Then, the tanks were loaded with wind gusts whose pressure magnitude fluctuated at the frequency associated with the most effective mass, as well as at other frequencies. Moreover, tanks with eigen-affine imperfections were also considered. It was concluded that resonance was not observed in any of these analyses. However, since the static and dynamic buckling capacities differ considerably for tall tanks (H/D ≥ 2.0), a proper safety factor should be included in design if a static analysis is adopted.
Chapter 3 focuses on the effect of the internal pressure generated by wind gusts on open-top tanks. Based on boundary layer wind tunnel tests (BLWT), a significant pressure is generated on the internal side of the tank shell when a gust of wind blows across an open-top tank. This factor has so far not been sufficiently accounted for by either ASCE-7 or API 650, despite the fact that this internal pressure may almost double the design pressure. Therefore, to investigate the effect of the wind profile along with the internal pressure, multiple wind profiles specified in different design documents were considered. The buckling capacities of six tanks with aspect ratios (H/D) ranging from 0.1 to 4 were analyzed using geometrically nonlinear analysis with imperfections and an arc-length algorithm (Riks analysis). Material nonlinearity was also included in some analyses. It was observed that the buckling capacity obtained using the ASCE-7/API 650 wind profile is higher than the capacities obtained with any other profile. It was concluded that the wind profile dictated by the current North American design documents may not be conservative enough and may need revision.
Chapter 4 investigates how CFD can be applied to obtain the wind pressure distribution on tanks. Though CFD has been widely employed in different research areas, to the author's best knowledge, only one study has been dedicated to investigating the interaction between wind gusts and tanks using CFD. Thus, Chapter 4 presents a literature review of guidelines for selecting CFD input parameters and a parametric study on how to choose them properly. A tank with an aspect ratio of 0.5 and a flat roof was employed for the parametric study. To ensure the validity of the input parameters, the obtained results were compared with published BLWT results. After confirming that the selected input parameters produce acceptable results, tanks with aspect ratios ranging from 0.4 to 2 were adopted and the wind pressure distributions on such tanks were reported. It was concluded that the established criteria for deciding the input parameters were able to guarantee converged results, and the obtained pressure coefficients agree well with the BLWT results available in the literature.
(6622427), Zhe Sun. "APPLICATION OF PHOTOCHEMICAL AND BIOLOGICAL APPROACHES FOR COST-EFFECTIVE ALGAL BIOFUEL". Thesis, 2019.
Rapid growth of energy consumption and greenhouse gas emissions from fossil fuels has promoted extensive research on biofuels. Algal biofuels have been considered a promising and environmentally friendly renewable energy source. However, several limitations have inhibited the development of cost-effective biofuel production, including unstable cultivation caused by invading organisms and the high cost of lipid extraction. This dissertation aims to investigate photochemical approaches to prevent culture collapse caused by invading organisms, and biological approaches for the development of cost-effective lipid extraction methods.
As a chemical-free water treatment technology, ultraviolet (UV) irradiation has been widely applied to inactivate pathogens but has not been used in algal cultivation to control invading organisms. To evaluate the potential of UV irradiation to control invading algal species and minimize virus predation, Tetraselmis sp. and Paramecium bursaria Chlorella virus 1 (PBCV-1) were examined as challenge organisms. The concentrations of viable (reproductively/infectively active) cells and viruses were quantified by a most probable number (MPN) assay and a plaque assay, respectively. A low-pressure collimated-beam reactor was used to investigate the UV254 dose-response behavior of both challenge organisms, and a medium-pressure collimated-beam reactor equipped with a series of narrow bandpass optical filters was used to investigate their action spectra. Both challenge organisms showed roughly five log10 units of inactivation for UV254 doses over 120 mJ/cm2. The most effective wavelengths for inactivation of Tetraselmis were from 254 nm to 280 nm, where the inactivation was mainly attributed to UV-induced DNA damage. In contrast, the most effective wavelength for inactivation of PBCV-1 was 214 nm, where the loss of infectivity was mainly attributed to protein damage. These results provide important information for the design of UV reactors to minimize the impact of invading organisms in algal cultivation systems.
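Dose-response data of this kind are typically summarized as log10 inactivation, -log10(N/N0), versus delivered UV dose, with a log-linear (first-order) rate fit. The values below are synthetic, chosen only to mirror "roughly five log10 units at 120 mJ/cm2"; they are not the thesis measurements.

```python
import numpy as np

dose = np.array([0.0, 30.0, 60.0, 90.0, 120.0])  # UV254 dose, mJ/cm2
log_inact = np.array([0.0, 1.3, 2.5, 3.8, 5.0])  # -log10(N/N0), synthetic

# first-order (log-linear) inactivation-rate fit
k, intercept = np.polyfit(dose, log_inact, 1)
print(round(k, 3))        # 0.042 log10 units per (mJ/cm2)
print(round(k * 120, 1))  # 5.0 log10 units predicted at 120 mJ/cm2
```

Comparing fitted rate constants across the filtered medium-pressure wavelengths is one standard way an action spectrum like the one described above is quantified.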
Additionally, a virus-assisted cell disruption method was developed for cost-effective lipid extraction from algal biomass. Detailed mechanistic studies were conducted to evaluate the infection behavior of Chlorovirus PBCV-1 on Chlorella sp., and the impact of infection on the mechanical strength of the algal cell wall, lipid yield, and lipid distribution. Viral disruption at a multiplicity of infection (MOI) of 10^-8 completely disrupted concentrated algal biomass in six days. Viral disruption significantly reduced the mechanical strength of algal cells for lipid extraction. The lipid yield with viral disruption increased more than threefold compared with the no-disruption control and was similar to that of ultrasonic disruption. Moreover, lipid composition analysis showed that the quality of the extracted lipids was not affected by viral infection. The results show that viral infection is a cost-effective process for lipid extraction from algal cells, as the extensive energy input and chemicals required by existing disruption methods are no longer needed.
Overall, this dissertation provides innovative approaches for the development of cost-efficient algal biofuels. Application of UV disinfection and viral disruption significantly reduces chemical consumption and improves sustainability of algal biofuel production.
(6594389), Mahsa Modiri-Gharehveran. "INDIRECT PHOTOCHEMICAL FORMATION OF COS AND CS2 IN NATURAL WATERS: KINETICS AND REACTION MECHANISMS". Thesis, 2019.
In the first part of this thesis (chapters 2 and 3), nine natural waters ranging in salinity were spiked with various organic sulfur precursors (e.g. cysteine, cystine, dimethylsulfide (DMS), and methionine) and exposed to simulated sunlight over varying exposure times. Other water quality conditions, including the presence of O2, CO, and temperature, were also varied. Results indicated that COS and CS2 formation increased up to 11× and 4×, respectively, after 12 h of sunlight, while diurnal cycling exhibited varied effects. COS and CS2 formation were also strongly affected by the DOC concentration, organic sulfur precursor type, O2 concentration, and temperature, while salinity differences and CO addition did not play a significant role.
To then specifically evaluate the role of DOM in cleaner matrices, COS and CS2 formation was examined in synthetic waters (see chapters 4 and 5). In this case, synthetic waters were spiked with different types of DOM isolates, ranging from freshwater to ocean water, along with either cysteine or DMS, and exposed to simulated sunlight for up to 4 h. Surprisingly, CS2 was not formed under any of the tested conditions, indicating that other water quality constituents, aside from DOM, were responsible for its formation. COS formation, however, was observed. Interestingly, COS formation with cysteine was fairly similar for all DOM types, but increasing the DOM concentration actually decreased formation. This is likely due to the dual role of DOM in simultaneously forming and quenching the reactive intermediates (RIs). Additional experiments with quenching agents for RIs (e.g., ³DOM* and ·OH) further indicated that ·OH was not involved in COS formation with cysteine but ³DOM* was. This result differed for DMS, where both ·OH and ³DOM* were found to be involved. In addition, treating DOM isolates with sodium borohydride (NaBH4) to reduce ketones/aldehydes to their corresponding alcohols increased COS formation, which implied that the RIs formed by these functional groups in DOM were not involved. The alcohols formed by this process were unlikely to act as quenching agents, since alcohols have been shown to be low in reactivity. Since ketones are known to form high-energy triplet states of DOM while quinones are known to form low-energy triplet states, removing ketones from the system further supported the role of low-energy triplet states in COS formation, as initially hypothesized from the tests on DOM types. In the end, there are several major research contributions from this thesis. First, cysteine and DMS have different mechanisms for forming COS.
Second, adding O2 decreased COS formation but did not stop it completely, which suggests that further research is required to evaluate the role of RIs in the presence of O2. Lastly, considering the low formation yields of COS and CS2 from the organic sulfur precursors tested in this study, other, as yet unidentified, organic sulfur precursors are likely to generate these compounds at higher levels; this needs to be investigated in future research.
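As a rough illustration of the formation-loss balance described above, the following sketch models COS build-up under a constant photochemical formation rate with first-order loss; the rate values are hypothetical placeholders, not measurements from this thesis.

```python
import math

def cos_concentration(r_form, k_loss, t):
    """COS concentration under constant photochemical formation rate
    r_form (M/h) and first-order loss k_loss (1/h): analytic solution
    of dC/dt = r_form - k_loss * C with C(0) = 0."""
    return (r_form / k_loss) * (1.0 - math.exp(-k_loss * t))

# Hypothetical rates, chosen only to illustrate the shape of the curve.
r_form, k_loss = 2.0e-12, 0.15                   # M/h, 1/h
c_12h = cos_concentration(r_form, k_loss, 12.0)  # after 12 h of light
c_ss = r_form / k_loss                           # steady-state plateau
```

The concentration rises toward the plateau r_form / k_loss, which is why stronger quenching (larger k_loss) lowers the observed yield.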
(8770325), Anzy Lee. "RIVERBED MORPHOLOGY, HYDRODYNAMICS AND HYPORHEIC EXCHANGE PROCESSES". Thesis, 2020.
Hyporheic exchange is key to buffering water quality and temperatures in streams and rivers, while also providing localized downwelling and upwelling microhabitats. In this research, the effect of geomorphological parameters on hyporheic exchange has been assessed from a physical standpoint: surface and subsurface flow fields, the pressure distribution across the sediment-water interface, and the residence time in the bed.
First, we conduct a series of numerical simulations to systematically explore how the fractal properties of bedforms are related to hyporheic exchange. We compared the average interfacial flux and residence time distribution in the hyporheic zone with respect to the magnitude of the power spectrum and the fractal dimension of riverbeds. The results show that the average interfacial flux increases logarithmically with respect to the maximum spectral density, whereas it increases exponentially with respect to the fractal dimension.
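The two reported trends can be written as a simple composite scaling; the coefficients below are hypothetical placeholders rather than values fitted in the simulations.

```python
import math

def interfacial_flux(s_max, fractal_dim, a=1.0, c=1.0e-3, d=2.0):
    """Composite scaling consistent with the reported trends: flux grows
    logarithmically with the maximum spectral density s_max and
    exponentially with the fractal dimension. The coefficients a, c, d
    are hypothetical placeholders, not values from the study."""
    return a * math.log(s_max) + c * math.exp(d * fractal_dim)

# Both trends are monotonic increases:
flux_base = interfacial_flux(s_max=10.0, fractal_dim=2.1)
flux_spectral = interfacial_flux(s_max=100.0, fractal_dim=2.1)  # larger spectrum
flux_fractal = interfacial_flux(s_max=10.0, fractal_dim=2.5)    # rougher bed
```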
Second, we demonstrate how the Froude number affects the free-surface profile, the total head over the sediment bed, and the hyporheic flux. When the water surface is fixed, the vertical velocity profile from the bottom to the air-water interface follows the law of the wall, so the velocity at the air-water interface is the maximum. In contrast, in the free-surface case, the velocity at the interface no longer has the maximum value: the location of maximum velocity moves closer to the sediment bed. This results in higher velocities near the bed and, accordingly, larger head gradients.
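The fixed-surface (rigid-lid) case described above follows the classical log-law profile, sketched below; the shear velocity and roughness height are illustrative values, not simulation inputs from the thesis.

```python
import math

KAPPA = 0.41  # von Karman constant

def law_of_the_wall(z, u_star, z0):
    """Log-law velocity u(z) = (u*/kappa) * ln(z/z0) for z > z0, where
    u_star is the shear velocity and z0 the roughness height."""
    return (u_star / KAPPA) * math.log(z / z0)

# Under a rigid lid the profile is monotonic in z, so the maximum
# velocity sits at the air-water interface (illustrative values).
depths = [0.01, 0.05, 0.10, 0.30]  # m above the bed
u = [law_of_the_wall(z, u_star=0.02, z0=0.001) for z in depths]
```

In the free-surface case this monotonicity breaks down, which is exactly the deviation the thesis quantifies.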
Third, we investigate how boulder spacing and embeddedness affect the near-bed hydrodynamics and the surface-subsurface water exchange. When the embeddedness is small, a recirculation vortex is observed in both closely-packed and loosely-packed cases, but the vortex is smaller and less coherent in the closely-packed case. For these dense clusters, the inverse relationship between embeddedness and flux no longer holds. As embeddedness increases, the subsurface flowpaths move in the lateral direction, as the streamwise route is hindered by the submerged boulder. The average residence time therefore decreases as the embeddedness increases.
Lastly, we propose a general artificial neural network for predicting the pressure field at the channel bottom using point velocities at different levels. We constructed three data-driven models: multivariate linear regression, local linear regression, and an artificial neural network. The input variables are the velocities in the x, y, and z directions, and the target variable is the pressure at the sediment bed. Our artificial neural network model produces consistent and accurate predictions under various conditions, whereas the linear surrogate models (multivariate linear regression and local linear regression) depend significantly on the input variables.
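A minimal sketch of the multivariate linear regression baseline (one of the three surrogate models) on synthetic velocity-pressure data; the data-generating coefficients are hypothetical, and the ANN itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the velocity-pressure data set: bed pressure is
# generated as a linear function of the (u, v, w) point velocities plus
# noise; the coefficients are hypothetical.
U = rng.normal(size=(200, 3))                    # u, v, w velocities
true_coef = np.array([0.8, -0.3, 1.5])
p = U @ true_coef + 0.01 * rng.normal(size=200)  # bed pressure

# Multivariate linear regression surrogate, fit by least squares.
X = np.column_stack([U, np.ones(len(U))])        # add intercept column
coef, *_ = np.linalg.lstsq(X, p, rcond=None)
p_hat = X @ coef
r2 = 1.0 - np.sum((p - p_hat) ** 2) / np.sum((p - p.mean()) ** 2)
```

On linearly generated data a linear surrogate fits well; the study's point is that under realistic, nonlinear conditions this dependence on the inputs degrades, while the ANN remains accurate.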
As stream and river restoration has moved from aesthetics and form to a more holistic approach that includes processes, we hope our study can inform designs that benefit both structural and functional outcomes. Our results could inform a number of critical processes, such as biological filtering. It is possible to use our approach to predict hyporheic exchange and thus constrain the associated biogeochemical processing under different topographies. As river restoration projects become more holistic, geomorphological, biogeochemical, and hydro-ecological aspects should all be considered.
(5929958), Qinghua Li. "Geospatial Processing Full Waveform Lidar Data". Thesis, 2019.
(5930687), Jinglin Jiang. "Investigating How Energy Use Patterns Shape Indoor Nanoaerosol Dynamics in a Net-Zero Energy House". Thesis, 2019.
Research on net-zero energy buildings (NZEBs) has been largely centered on improving building energy performance, while little attention has been given to indoor air quality. A critically important class of indoor air pollutants are nanoaerosols: airborne particulate matter smaller than 100 nm in size. Nanoaerosols penetrate deep into the human respiratory system and are associated with deleterious toxicological and human health outcomes. An important step towards improving indoor air quality in NZEBs is understanding how occupants, their activities, and building systems affect the emissions and fate of nanoaerosols. New developments in smart energy monitoring systems and smart thermostats offer a unique opportunity to track occupant activity patterns and the operational status of residential HVAC systems. In this study, we conducted a one-month field campaign in an occupied residential NZEB, the Purdue ReNEWW House, to explore how energy use profiles and smart thermostat data can be used to characterize indoor nanoaerosol dynamics. A Scanning Mobility Particle Sizer and an Optical Particle Sizer were used to measure indoor aerosol concentrations and size distributions from 10 to 10,000 nm. AC current sensors were used to monitor the electricity consumption of kitchen appliances (cooktop, oven, toaster, microwave, kitchen hood), the air handling unit (AHU), and the energy recovery ventilator (ERV). Two Ecobee smart thermostats informed the fractional amount of supply airflow directed to the basement and main floor. The nanoaerosol concentrations and energy use profiles were integrated with an aerosol physics-based material balance model to quantify nanoaerosol source and loss processes. Cooking activities were found to dominate the emissions of indoor nanoaerosols, often elevating indoor nanoaerosol concentrations beyond 10⁴ cm⁻³. The emission rates for the different cooking appliances varied from 10¹¹ h⁻¹ to 10¹⁴ h⁻¹.
Loss rates were found to be significantly different between the AHU/ERV off and on conditions, with median loss rates of 1.43 h⁻¹ and 3.68 h⁻¹, respectively. Probability density functions of the source and loss rates for different scenarios will be used in Monte Carlo simulations to predict indoor nanoaerosol concentrations in NZEBs using only energy consumption and smart thermostat data.
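The source-loss behavior above follows a standard well-mixed material balance; the sketch below compares post-cooking decay under the two reported median loss rates, with a hypothetical peak concentration.

```python
import math

def nanoaerosol_conc(c0, source, loss, t):
    """Well-mixed material balance dC/dt = source - loss * C, solved
    analytically; source in cm^-3 h^-1, loss in 1/h, t in h."""
    return c0 * math.exp(-loss * t) + (source / loss) * (1.0 - math.exp(-loss * t))

# Hypothetical post-cooking decay from a peak of 2e4 cm^-3 over one hour,
# at the two reported median loss rates (AHU/ERV off vs. on).
c_peak = 2.0e4
c_off = nanoaerosol_conc(c_peak, 0.0, 1.43, 1.0)  # AHU/ERV off
c_on = nanoaerosol_conc(c_peak, 0.0, 3.68, 1.0)   # AHU/ERV on
```

Running the HVAC systems clears the same cooking peak several times faster, which is the practical payoff of tying loss rates to energy-use state.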
(5930027), Ganeshchandra Mallya. "DROUGHT CHARACTERIZATION USING PROBABILISTIC MODELS". Thesis, 2020.
Droughts are complex natural disasters caused by a deficit in water availability over a region. Water availability is strongly linked to precipitation in many parts of the world that rely on monsoonal rains. Recent studies indicate that the choice of precipitation dataset and drought index can influence drought analysis. Therefore, drought characteristics for the Indian monsoon region were reassessed for the period 1901-2004 using two different datasets and the standardized precipitation index (SPI), the standardized precipitation-evapotranspiration index (SPEI), a Gaussian mixture model-based drought index (GMM-DI), and a hidden Markov model-based drought index (HMM-DI). Drought trends and variability were analyzed for three epochs: 1901-1935, 1936-1970, and 1971-2004. Irrespective of the dataset and methodology used, the results indicate an increasing trend in drought severity and frequency during the recent decades (1971-2004). Droughts are becoming more regional and are showing a general shift to the agriculturally important coastal south India, central Maharashtra, and the Indo-Gangetic plains, indicating food security challenges and socioeconomic vulnerability in the region.
Drought severities are commonly reported using drought classes obtained by applying pre-defined thresholds to drought indices. Current drought classification methods ignore modeling uncertainties and provide discrete classifications. However, users of drought classifications are often interested in the inherent uncertainties so that they can make informed decisions. A probabilistic gamma mixture model (Gamma-MM)-based drought index is proposed as an alternative to deterministic classification by SPI. The Bayesian framework of the proposed model avoids over-specification and overfitting by choosing the optimum number of mixture components required to model the data, a problem often encountered in other probabilistic drought indices (e.g., HMM-DI). When a sufficient number of components is used, the Gamma-MM can provide a good approximation to any continuous distribution on (0, ∞), thus addressing the problem of choosing an appropriate distribution for SPI analysis. The Gamma-MM propagates model uncertainties to drought classification. The method is tested on rainfall data over India. A comparison of the results with the standard SPI shows significant differences, particularly when the SPI's assumptions on the data distribution are violated.
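To illustrate what an SPI-type index expresses, the sketch below maps an empirical cumulative probability onto a standard normal deviate; the thesis instead fits parametric distributions (gamma, Gamma-MM), and the rainfall record here is hypothetical.

```python
from statistics import NormalDist

def spi_empirical(record, value):
    """Nonparametric SPI-style index: map the empirical (Weibull
    plotting-position) cumulative probability of `value` within the
    precipitation record onto a standard normal deviate. Values below
    -1 indicate moderate-or-worse drought."""
    rank = sum(1 for x in record if x <= value)
    p = rank / (len(record) + 1)
    return NormalDist().inv_cdf(p)

record = [55, 80, 95, 100, 105, 110, 120, 130, 150, 170]  # mm, hypothetical
spi_dry = spi_empirical(record, 55)   # driest year in the record
spi_wet = spi_empirical(record, 170)  # wettest year in the record
```

The Gamma-MM replaces this single deterministic mapping with a posterior distribution over the index, so classification uncertainty can be reported alongside the class.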
Finding regions with similar drought characteristics is useful for policy-makers and water resources planners in the optimal allocation of resources, the development of drought management plans, and the timely mitigation of negative impacts during droughts. Drought characteristics such as intensity, frequency, and duration, along with land-use and geographic information, were used as input features for clustering algorithms. Three methods, namely (i) a Bayesian graph-cuts algorithm that combines a Gaussian mixture model (GMM) and Markov random fields (MRF), (ii) k-means, and (iii) hierarchical agglomerative clustering, were used to find homogeneous drought regions that are spatially contiguous and possess similar drought characteristics. The number of homogeneous clusters and their shapes were found to be sensitive to the choice of drought index, the time window of drought, the period of analysis, the dimensionality of the input datasets, the clustering method, and the model parameters of the clustering algorithms. Regionalization for different epochs provided useful insight into the space-time evolution of homogeneous drought regions over the study area. Strategies to combine the results from multiple clustering methods are also presented.
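A minimal k-means sketch (one of the three clustering methods compared) on synthetic drought-feature data; the features and "regions" are hypothetical, and this is not the Bayesian graph-cuts (GMM + MRF) model.

```python
import numpy as np

def kmeans2(X, iters=20):
    """Minimal two-cluster k-means for grouping regions by drought
    features (e.g., intensity, frequency, duration). Deterministic
    initialization: the first point and the point farthest from it."""
    centroids = np.vstack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels, centroids

# Two well-separated synthetic "regions" in drought-feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)), rng.normal(5.0, 0.1, (10, 3))])
labels, centroids = kmeans2(X)
```

Note that plain k-means has no notion of spatial contiguity; enforcing contiguity is precisely what the MRF term in the Bayesian graph-cuts method adds.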
(9192656), Yue Ke. "Oh, the Places You'll Move: Urban Mass Transit's Effects on Nearby Housing Markets". Thesis, 2020.
(9760799), Juan Esteban Suarez Lopez. "CAPACITATED NETWORK BASED PARKING MODELS UNDER MIXED TRAFFIC CONDITIONS". Thesis, 2020.
New technologies such as electric vehicles, autonomous vehicles, and transportation platforms are dramatically changing the way people move, and cities around the world need to adjust to this rapid, technology-driven change. One of the most challenging aspects for urban planners is parking, as growing demand for these private technologies may increase traffic congestion and change parking requirements across the city. For example, electric vehicles need parking places for both parking and charging, and autonomous vehicles could increase congestion by making longer trips in search of better parking alternatives. It therefore becomes essential to have clear, precise, and practical models that let transportation engineers represent present and future scenarios involving conventional, autonomous, and electric vehicles in the context of parking and traffic alike. Classical network models such as traffic assignment have frequently been used for this purpose, although they do not account for essential aspects of parking such as fixed capacities, user heterogeneity, and autonomous vehicles. In this work, a new methodology for modeling parking in multi-class traffic assignment is proposed, including autonomous vehicles and hard capacity constraints. The proposed model is presented both in the classical Cournot game formulation based on path flows and in a new link-node formulation that states the traffic assignment problem in terms of link flows instead of path flows. This formulation enables a new algorithm that is more flexible with respect to modeling requirements, such as linear constraints among different players' flows, and takes advantage of the fast convergence of linear programs in theory and in practice. The link-node formulation is also used to redefine the network capacity problem as a linear program, making it more tractable and easier to solve.
Numerical examples are presented throughout this work to illustrate its implications and characteristics. This work gives planners a clear methodology for modeling parking and traffic with multiple user classes that can represent diverse characteristics, such as parking duration or vehicle type. The model is further modified to account for autonomous vehicles, and the necessary assumptions and discussion are provided.
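As a toy illustration of hard parking-capacity constraints with multiple user classes, the greedy heuristic below assigns each class to its cheapest lot with spare capacity; it is a stand-in for intuition only, not the Cournot or link-node LP formulation developed in the thesis, and all names and numbers are hypothetical.

```python
def assign_parking(demands, lots):
    """Greedy capacitated assignment: each (user_class, vehicles) demand
    fills its cheapest lots first, never exceeding a lot's hard capacity.
    lots: {name: {"cap": int, "cost": {user_class: float}}}."""
    remaining = {name: lot["cap"] for name, lot in lots.items()}
    assignment = []
    for user, n_veh in demands:
        for lot in sorted(lots, key=lambda name: lots[name]["cost"][user]):
            take = min(n_veh, remaining[lot])
            if take:
                assignment.append((user, lot, take))
                remaining[lot] -= take
                n_veh -= take
            if n_veh == 0:
                break
    return assignment, remaining

# Hypothetical two-lot network with autonomous (av) and electric (ev) classes.
demands = [("av", 30), ("ev", 20)]
lots = {"A": {"cap": 25, "cost": {"av": 1.0, "ev": 2.0}},
        "B": {"cap": 40, "cost": {"av": 3.0, "ev": 1.5}}}
assignment, remaining = assign_parking(demands, lots)
```

Unlike this sequential heuristic, the thesis's formulations solve all classes' flows simultaneously, which is what makes equilibrium and capacity analysis tractable.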
(9045878), Mitra Khanibaseri. "Developing Artificial Neural Networks (ANN) Models for Predicting E. Coli at Lake Michigan Beaches". Thesis, 2020.
A neural network model was developed to predict E. coli levels and classes at six selected Lake Michigan beaches. Water quality observations at the time of sampling and discharge information from two nearby tributaries were used as inputs to predict E. coli. This research was funded by the Indiana Department of Environmental Management (IDEM). A user-friendly Excel-based tool was developed from the best model for making future predictions of E. coli classes. This tool will help beach managers make real-time decisions.
The nowcast model was developed based on historical tributary flows and water quality measurements (physical, chemical, and biological). The model uses readily measurable information such as total dissolved solids, total suspended solids, pH, electrical conductivity, and water temperature to estimate whether E. coli counts would exceed the acceptable standard. To set up this model, field data collection was carried out during the 2019 beach season.
IDEM recommends posting an advisory at the beach, indicating that swimming and wading are not recommended, when E. coli counts exceed the advisory standard. Under this limit, a single water sample shall not exceed an E. coli count of 235 colony-forming units per 100 milliliters (cfu/100 ml). Advisories are removed when bacterial levels fall within the acceptable standard. However, laboratory E. coli results only become available after a time lag, so beach closures were based on the previous day's results. Nowcast models allow beach managers to make real-time advisory decisions instead of waiting a day or more for laboratory results.
Using the historical data, an extensive experiment was carried out to identify suitable input variables and an optimal neural network architecture. The best feed-forward neural network model was developed using the Bayesian regularization neural network (BRNN) training algorithm. The developed ANN model showed an average accuracy of around 87% in predicting the E. coli classes.
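The advisory class that the nowcast model predicts follows directly from the 235 cfu/100 ml single-sample standard; in the sketch below, the predictions and lab counts are hypothetical.

```python
ADVISORY_LIMIT = 235  # cfu/100 ml, single-sample E. coli standard

def advisory_needed(ecoli_cfu):
    """Binary advisory class used as the nowcast target: True when a
    single sample exceeds 235 cfu/100 ml."""
    return ecoli_cfu > ADVISORY_LIMIT

def accuracy(predicted, observed_cfu):
    """Fraction of days where the predicted advisory class matches the
    class implied by the observed lab count."""
    hits = sum(p == advisory_needed(o) for p, o in zip(predicted, observed_cfu))
    return hits / len(observed_cfu)

# Hypothetical week of nowcast outputs vs. next-day lab results.
pred = [False, False, True, True, False, False, True]
obs = [120, 90, 300, 240, 180, 250, 500]
acc = accuracy(pred, obs)  # one miss on day 6
```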
(7036595), KwangHyuk Im. "ASSESSMENT MODEL FOR MEASURING CASCADING ECONOMIC IMPACTS DUE TO SEVERE WEATHER-INDUCED POWER OUTAGES". Thesis, 2019.
(6611465), Nathaniel J. Shellhamer. "Direct Demand Estimation for Bus Transit in Small Cities". Thesis, 2019.
Public transportation is vital for many people who do not have the means to use other forms of transportation. In small communities, transit service is often limited, due to funding constraints of the transit agency. In order to maximize the use of available funding resources, agencies strive to provide effective and efficient service that meets the needs of as many people as possible. To do this, effective service planning is critical.
Unlike traditional road-based transportation projects, transit service modifications can be implemented over the span of just a few weeks. In planning for these short-term changes, the traditional four-step transportation planning process is often inadequate. Yet, the characteristics of small communities and the resources available to them limit the applicability of existing transit demand models, which are generally intended for larger cities.
This research proposes a methodology that uses population and demographic data from the Census Bureau, combined with stop-level ridership data from the transit agency, to develop models for forecasting the transit ridership generated by a geographic area with known population and socioeconomic characteristics. The product of this research is a methodology that transit agencies in small cities can apply to develop their own ridership models. To demonstrate the methodology, ridership models were built using data from Lafayette, Indiana.
A total of four ridership models were developed, giving a transit agency the choice of model based on available data and desired predictive power. More complex models are expected to provide greater predictive power but also require more time and data to implement; simpler models may be adequate where data availability is a challenge. Finally, examples are provided to aid in applying the models to various situations. The aggregation levels of the American Community Survey (ACS) data posed some challenges in developing accurate models; however, the developed models are still expected to provide useful information, particularly where local knowledge is limited or additional information is unavailable.
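A direct demand model of the kind described can be sketched as ordinary least squares on stop-level data; the catchment variables and ridership figures below are hypothetical, not the Lafayette data.

```python
import numpy as np

# Hypothetical stop-level data: catchment population and zero-vehicle
# households vs. observed daily boardings at six stops.
pop = np.array([800.0, 1200.0, 500.0, 2000.0, 1500.0, 900.0])
zero_veh = np.array([40.0, 90.0, 20.0, 150.0, 100.0, 50.0])
boardings = np.array([35.0, 62.0, 18.0, 110.0, 80.0, 41.0])

# Direct demand model: boardings = b0 + b1*pop + b2*zero_veh, fit by OLS.
X = np.column_stack([np.ones(len(pop)), pop, zero_veh])
coef, *_ = np.linalg.lstsq(X, boardings, rcond=None)
predicted = X @ coef
r2 = 1.0 - ((boardings - predicted) ** 2).sum() / ((boardings - boardings.mean()) ** 2).sum()
```

The fitted coefficients can then forecast ridership for any area whose population and socioeconomic characteristics are known, which is what makes the approach usable for short-term service planning.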
(10693164), Chen Ma. "Modeling Alternatives for Implementing the Point-based Bundle Block Adjustment". Thesis, 2021.
(5929718), Yuntao Guo. "Leveraging Information Technologies and Policies to Influence Short- and Long-term Travel Decisions". Thesis, 2019.
(10692402), Jorge Alfredo Rojas Rondan. "A BIM-based tool for formwork management in building projects". Thesis, 2021.