
Dissertations / Theses on the topic 'Civil Engineering not elsewhere classified'

1

(5930270), Mehdi Shishehbor. "Numerical Investigation on the Mechanical Properties of Neat Cellulose Nanocrystal." Thesis, 2020.

Abstract:
Nature has evolved efficient strategies to make materials with hierarchical internal structure that often exhibit exceptional mechanical properties. One such example is found in cellulose, which has achieved a high order of functionality and mechanical performance through a hierarchical structure with exceptional control from the atomic level all the way to the macroscopic level. Cellulose is present in a wide variety of living species (trees, plants, algae, bacteria, tunicates) and provides the base reinforcement structure used by organisms for high mechanical strength, high strength-to-weight ratio, and high toughness. Additionally, being the most abundant organic substance on earth, cellulose has been used by our society as an engineering material for thousands of years and is prolific within our society, as demonstrated by the enormity of the worldwide industries in cellulose derivatives, paper/packaging, textiles, and forest products.

More recently, a new class of cellulose-based particles, cellulose nanocrystals (CNCs), is being extracted from plants and trees. CNCs are spindle-shaped nano-sized particles (3-20 nm in width and 50-500 nm in length) that are distinct from the more traditional cellulose materials currently used (e.g., molecular cellulose and wood pulp). They offer a new combination of particle morphology, properties, and chemical functionalities that enables the use of CNCs in applications once thought impossible for cellulosic materials.

CNCs have shown utility in many engineering applications, for example, biomedical devices, nanocomposites, barrier/separation membranes, and cementitious materials. To gain greater insight into how best to use CNCs in various engineering application areas, a comprehensive understanding of the mechanics of CNCs is needed. Characterizing the mechanical properties of nanomaterials via experimental testing has always been challenging because of their small size, resulting in large uncertainties related to testing near the sensitivity limits of a given technique; the same is true when characterizing CNCs. To help offset these experimental limitations, numerical modeling has been useful in predicting the mechanical properties of CNCs. We present a continuum-based structural model to study the mechanical behavior of CNCs and analyze the effect of bonded and non-bonded interactions on the mechanical properties under various loading conditions. In particular, this model assumes that the bonded and non-bonded interactions are uncoupled, and their behavior is obtained from atomistic simulations.

For large deformations, and when the interaction and dynamics of many particles are involved, continuum models can become as expensive as MD simulations. In addition, it has been shown that traditional material models in the continuum mechanics context cannot capture all the mechanical properties of CNCs, especially at large deformation. To overcome these limitations, to model real-size CNCs (50-1000 nm), and to increase the number of particles in a simulation, a so-called "coarse-grained" (CG) model for the mechanical and interfacial properties of CNCs is proposed. The proposed CG model is based on both mechanical properties and crystal-crystal interactions. The model is parametrized against all-atom (AA) molecular dynamics and experimental results for specific mechanical and interfacial tests.

Subsequently, the model is verified against other tests. Finally, we analyze the effect of interface properties on the mechanical performance of CNC-based materials, including bending of a CNC bundle and tensile loading and fracture in bioinspired structures of CNCs, such as the staggered brick-and-mortar and Bouligand structures of interest.
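To make the coarse-graining idea concrete, here is a minimal bead-spring sketch in Python; it is a generic illustration, not the author's model, and the harmonic-bond and Lennard-Jones parameters are hypothetical stand-ins for the bonded and non-bonded terms that a real CG model would fit to all-atom simulations.

```python
import numpy as np

def cg_energy(x, k=10.0, r0=1.0, eps=0.5, sigma=1.0):
    """Toy CG energy: harmonic bonds (bonded) + Lennard-Jones (non-bonded).

    x is an (N, 3) array of bead coordinates; k, r0, eps, and sigma are
    illustrative parameters, not values fitted to CNC atomistics.
    """
    # Bonded term: harmonic springs between consecutive beads.
    bond_len = np.linalg.norm(np.diff(x, axis=0), axis=1)
    e_bond = 0.5 * k * np.sum((bond_len - r0) ** 2)

    # Non-bonded term: Lennard-Jones between beads at least two apart.
    e_nb = 0.0
    for i in range(len(x)):
        for j in range(i + 2, len(x)):
            r = np.linalg.norm(x[i] - x[j])
            e_nb += 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e_bond + e_nb

beads = np.array([[1.05 * i, 0.0, 0.0] for i in range(6)])  # slightly stretched chain
print(f"total CG energy: {cg_energy(beads):.3f}")
```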
2

(5929580), Man Chung Chim. "Prototype L-band Synthetic Aperture Radar on Low-altitude / Near-ground Platforms." Thesis, 2020.

Abstract:
Synthetic Aperture Radar (SAR) is a technique that synthesizes a large antenna array using the motion of a small antenna. For remote sensing, mapping, and change detection, SAR has been shown to be a good candidate because of its ability to penetrate moisture and vegetation and the availability of phase information for precise interferometric measurements [1] [13].

This study was motivated by the fact that satellite and high-altitude SAR offer limited data availability in terms of temporal resolution and the cost of each measurement. It is believed that SAR systems mounted on smaller UAVs or ground vehicles could provide much better temporal coverage of a target, and from different geometries.

We propose an L-band SAR system based on a Software-Defined Radio to be mounted on an automotive platform. Novel motion estimation and compensation techniques, as well as autofocusing techniques, were developed to aid SAR signal processing in a much more demanding environment: unstable radar platforms. It is expected that this research could bring down the cost of SAR as a remote sensing solution and allow SAR systems to be mounted on much smaller platforms by overcoming track instability with novel signal processing methods, eventually making SAR measurements available at places and times that were previously impossible.
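As a concrete illustration of aperture synthesis, the sketch below focuses a single point target by time-domain backprojection, a generic SAR imaging method rather than this thesis's specific processing chain; the carrier frequency, platform track, and target location are all invented.

```python
import numpy as np

# Toy L-band geometry (all values assumed for illustration).
c, fc = 3.0e8, 1.3e9
lam = c / fc
xs = np.linspace(-10.0, 10.0, 401)          # platform positions along track (m)
target = np.array([2.0, 30.0])              # point scatterer (cross-track x, range y) in m

# Ideal range-compressed echoes: phase tracks the round-trip path length.
r_tgt = np.hypot(xs - target[0], target[1])
echo = np.exp(-4j * np.pi * r_tgt / lam)

# Backprojection: coherently sum the echoes over a small image grid.
gx = np.linspace(-5.0, 5.0, 101)
gy = np.linspace(25.0, 35.0, 101)
img = np.zeros((gy.size, gx.size), dtype=complex)
for k, u in enumerate(xs):
    r_pix = np.hypot(gx[None, :] - u, gy[:, None])
    img += echo[k] * np.exp(4j * np.pi * r_pix / lam)

iy, ix = np.unravel_index(np.abs(img).argmax(), img.shape)
print(f"image peak at x={gx[ix]:.2f} m, y={gy[iy]:.2f} m (target at {target})")
```

The image magnitude peaks where the hypothesized pixel range history matches the recorded phase history, which is exactly the aperture-synthesis idea described in the abstract.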
3

(6616565), Yunchang Zhang. "PEDESTRIAN-VEHICLE INTERACTIONS AT SEMI-CONTROLLED CROSSWALKS: EXPLANATORY METRICS AND MODELS." Thesis, 2019.

Abstract:

A large number of crosswalks are indicated by pavement markings and signs but are not signal-controlled. In this study, such a location is called "semi-controlled". Where such a crosswalk has moderate amounts of pedestrian and vehicle traffic, pedestrians and motorists often engage in a non-verbal "negotiation" to determine who should proceed first.

In this study, 3400 pedestrian-motorist non-verbal interactions at such semi-controlled crosswalks were recorded by video. The crosswalk locations observed during the study underwent a conversion from one-way operation in Spring 2017 to two-way operation in Spring 2018. This offered a rare opportunity to collect and analyze data for the same location under two conditions.

This research explored factors that could be associated with pedestrian crossing behavior and motorist likelihood of decelerating. A mixed effects logit model and binary logistic regression were utilized to identify factors that influence the likelihood of pedestrian crossing under specific conditions. The complementary motorist models used generalized ordered logistic regression to identify factors that impact a driver's likelihood of decelerating, which was found to be a more useful measure than the likelihood of yielding to pedestrians. The data showed that 56.5% of drivers slowed down or stopped for pedestrians on the one-way street. This value rose to 63.9% on the same street after it had been converted to two-way operation. Moreover, two-way operation eliminated the effects of the presence of other vehicles on driver behavior.

Also investigated were factors that could influence how long a pedestrian is likely to wait at such semi-controlled crosswalks. Two types of models were proposed to correlate pedestrian waiting time with various covariates. First, survival models were developed to analyze pedestrian wait time based on the first-event analysis. Second, multi-state Markov models were introduced to correlate the dynamic process between recurrent events. Combining the first-event and recurrent events analyses addressed the drawbacks of both methods. Findings from the before-and-after study can contribute to developing operational and control strategies to improve the level of service at such unsignalized crosswalks.
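A first-event wait-time analysis of the kind described above can be prototyped with the lifelines survival-analysis package; this is only a sketch, and the observations and covariate names (wait_s, crossed, two_way, veh_gap_s) are invented for illustration.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical observations: seconds waited before crossing, whether the
# crossing occurred (0 = censored), and two example covariates.
df = pd.DataFrame({
    "wait_s":    [4.2, 11.5, 2.8, 20.1, 7.3, 15.0, 3.1, 9.9, 6.4, 18.2],
    "crossed":   [1,   1,    1,   0,    1,   1,    1,   0,   1,   1],
    "two_way":   [0,   0,    1,   1,    0,   1,    1,   0,   1,   0],
    "veh_gap_s": [3.5, 1.2,  5.0, 0.8,  2.9, 1.5,  4.4, 1.0, 3.8, 0.9],
})

# Cox proportional-hazards model: hazard of crossing as a function of covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="wait_s", event_col="crossed")
cph.print_summary()   # hazard ratios for each covariate
```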

The results of this study can contribute to policies and/or control strategies that will improve the efficiency of semi-controlled and similar crosswalks. This type of crosswalk is common, so the benefits of well-supported strategies could be substantial.

4

(5930783), Chintan Hitesh Patel. "Pack Rust Identification and Mitigation Strategies for Steel Bridges." Thesis, 2019.

Abstract:
Pack rust, or crevice corrosion, is a type of localized corrosion. When a metal is in contact with another metal, or even a non-metal, the metal starts to corrode, and rust starts to pack in between the surfaces. When significant development of pack rust occurs, it can cause overstressing of bolts and rivets, causing them to fail, and it can bend connecting plates and member elements, thus reducing their buckling capacity. Thus, it is important to mitigate the formation and growth of pack rust in bridges. This study was conducted to determine if pack rust occurs frequently and thereby may pose a problem in the state of Indiana. The study is divided into three primary tasks. The first part of the study involves understanding the parameters involved in the initiation of crevice corrosion and the post-initiation crevice corrosion process. The second part involves reviewing existing mitigation strategies and repair procedures used by state DOTs. The third part involves identifying steel bridges with pack rust in Indiana. Analyses were performed on the data collected from Indiana bridges that have pack rust. This involved finding the components and members of bridges that are most affected by pack rust and finding parameters that influence the formation of pack rust. Pack rust in steel bridges was identified using the INDOT inspection reports available through the BIAS system. The study revealed that good maintenance practices helped in reducing pack rust formation. The study identified locations on steel bridges that have a high probability of pack rust formation. A mitigation strategy with qualities that show promising results is identified.
5

(5930969), Augustine M. Agyemang. "THE IMPACTS OF ROAD CONSTRUCTION WORK ZONES ON THE TRANSPORTATION SYSTEM, TRAVEL BEHAVIOR OF ROAD USERS AND SURROUNDING BUSINESSES." Thesis, 2019.

Abstract:

In our daily use of the transportation system, we are faced with several road construction work zones. These work zones change how road users interact with the transportation system due to the changes that occur in the system, such as increased travel times, delay times, and vehicle stopped times. A microscopic traffic simulation was developed to depict the changes that occur in the transportation system. The impacts of these changes on travel behavior were investigated using ordered probit and logit models with five independent variables: age, gender, driving experience, annual mileage, and percentage of non-work trips. Finally, a business impact assessment framework was developed to assess the impact of road construction work zones on various business categories, such as grocery stores, pharmacies, liquor stores, and fast food. Traffic simulation results showed that the introduction of work zones in the road network increases delay times, vehicle stopped times, and travel times. Also, the change in average travel times, delay times, and vehicle stopped times differed from link to link. The observed average increases were as high as 318 seconds per vehicle, 237 seconds per vehicle, and 242 seconds per vehicle for travel time, delay time, and vehicle stopped time, respectively, for the morning peak period. Average increases as high as 1607 seconds per vehicle, 258 seconds per vehicle, and 265 seconds per vehicle were observed for travel time, delay time, and vehicle stopped time, respectively, for the afternoon peak period. The statistical model results indicated that, on a work trip, high driving experience, high annual mileage, and a high percentage of non-work trips make an individual more likely to change their route. The results also showed gender differences in route choice behavior. Concerning business impacts, businesses in the work zone were impacted differently, with grocery and pharmacy stores having the highest and lowest total loss in revenue, respectively.
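For readers unfamiliar with ordered probit models, the sketch below fits one on synthetic data with statsmodels; the covariates mirror the five variables named above, but the data, coefficients, and response coding (0 = no change, 1 = change departure time, 2 = change route) are fabricated for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
# Hypothetical traveler survey covariates.
X = pd.DataFrame({
    "age": rng.integers(18, 75, n),
    "male": rng.integers(0, 2, n),
    "driving_exp_yr": rng.integers(0, 40, n),
    "annual_miles_k": rng.uniform(2, 30, n),
    "pct_nonwork": rng.uniform(0, 1, n),
})
# Synthetic latent propensity to change behavior, then an ordinal response.
latent = 0.05 * X["driving_exp_yr"] + 0.03 * X["annual_miles_k"] + 1.5 * X["pct_nonwork"]
y = pd.cut(latent + rng.normal(size=n), bins=3, labels=False)

res = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())
```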

6

(11178147), Hala El Fil. "Shear Response of Rock Discontinuities: Through the Lens of Geophysics." Thesis, 2021.

Abstract:

Failure along rock discontinuities can result in economic losses as well as loss of life. It is essential to develop methods that monitor the response of these discontinuities to shear loading to enable prediction of failure. Laboratory experiments were performed to investigate geophysical techniques for monitoring shear failure of a pre-existing discontinuity and detecting signatures of impending failure. Previous studies have detected precursors to shear failure in the form of maxima of transmitted waves across a discontinuity under shear; however, those experiments focused on well-matched discontinuities. In nature, rock discontinuities are not always perfectly matched, because the asperities may be weathered by chemical, physical, or mechanical processes. Further, the specific shear mechanism of mismatched discontinuities is still poorly understood. In this thesis, the ability to detect seismic precursors to shear failure was assessed for various discontinuity conditions: well-matched (rough and saw-tooth), mismatched (rough), and non-planar (a discontinuity profile with a half-cycle sine wave (HCS)). The investigation was carried out through a coupled geophysical and mechanical experimental program that integrated detailed laboratory observations at the micro- and meso-scales. Shear experiments on gypsum discontinuities were conducted to observe changes in compressional (P) and shear (S) waves transmitted across the discontinuity. Digital Image Correlation (DIC) was used to quantify the vertical and horizontal displacements along the discontinuity during shearing, to relate the location and magnitude of slip to the measured wave amplitudes.

Results from the experiments conducted on planar, well-matched rough discontinuities (grit 36 sandpaper roughness) showed that seismic precursors to failure took the form of peaks in the normalized transmitted amplitude prior to the peak shear stress. Seismic wave transmission detected non-uniform dilation and closure of the discontinuity at a normal stress of 1 MPa. The results showed that large-scale roughness (presence of a HCS) could mask the generation of precursors, as it can cause non-uniform closure/dilation along the fracture plane at low normal stress.

The experiments on idealized saw-toothed gypsum discontinuities showed that seismic precursors to failure appeared as maxima in the transmitted wave amplitudes and, conversely, as minima in the reflected amplitudes. Converted waves (S-to-P and P-to-S) were also detected, and their amplitudes reached a maximum prior to shear failure. DIC results showed that slip occurred first at the top of the specimen, where the load was applied, and then progressed along the joint as the shear stress increased. This process was consistent with the order of emergence of precursors: precursors were first recorded near the top, later at the center, and finally at the bottom of the specimen.
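The precursor signature described above, a transmitted-amplitude maximum arriving before peak shear stress, amounts to a simple lead-time measurement; the sketch below computes it on synthetic traces (the Gaussian pulse shapes and peak times are invented, not experimental data).

```python
import numpy as np

t = np.linspace(0.0, 100.0, 1001)            # time (s) in a synthetic shear test
shear = np.exp(-((t - 60.0) / 18.0) ** 2)    # shear stress peaking at t = 60 s
trans = np.exp(-((t - 45.0) / 12.0) ** 2)    # transmitted amplitude peaking earlier

t_peak_stress = t[np.argmax(shear)]
t_peak_trans = t[np.argmax(trans)]
lead = t_peak_stress - t_peak_trans          # positive lead time = precursor
print(f"transmitted-amplitude maximum leads peak shear stress by {lead:.1f} s")
```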

Direct shear experiments conducted on specimens with a mismatched discontinuity did not show any precursors (in the transmitted amplitude) to failure at low normal stresses (2 MPa), while precursors did appear at higher normal stresses (5 MPa). The interplay between wave transmission, the degree of mismatch, and the discontinuity's micro-scale physical, chemical, and mechanical properties was assessed through: (1) in-situ 3D X-ray CT scans, to quantify the degree of mismatch at various normal stresses; (2) micro-indentation testing, to measure the micro-strength of the asperities; and (3) Scanning Electron Microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), to study the micro-structure and chemical composition of the discontinuity. The X-ray results showed that contact between asperities increased with normal stress, even when the discontinuity was mismatched. The results indicated that: (1) at 2 MPa, the void aperture was large, so significant shear displacement was needed to interlock and damage the asperities; and (2) the micro-hardness of the asperities of the mismatched discontinuity was larger than that of the well-matched discontinuity, which points to less damage for the same shear displacement. Both mechanisms imply that larger shear displacements are needed to damage the asperities of a mismatched discontinuity, which is consistent with the inability to detect seismic precursors to failure. The experimental results suggest that monitoring changes in transmitted wave amplitude across a discontinuity is a promising method for predicting impending failure of well-matched rock discontinuities. Precursor monitoring for mismatched rock discontinuities seems possible only when there is sufficient contact between the two rock surfaces, which occurs at large normal stresses.

7

(7041299), Sijia Wang. "Post-Fire Assessment of Concrete in Bridge Decks." Thesis, 2019.

Abstract:

In recent years, there have been a number of truck fires involving bridges with concrete components. If a fire burns for a significant period of time, the structural integrity of concrete components can be compromised. Research-based guidance for evaluating the level of fire damage is currently unavailable and would be beneficial for post-fire bridge inspectors.

This research project focused on evaluating the effects of fire induced damage on concrete bridge deck elements. In order to achieve this goal, a series of controlled heating experiments and material analysis were conducted. Two concrete bridge deck specimens from the I-469 bridge over Feighner Road were heated for different time durations (40 - 80 min.) following the ISO-834 temperature-time curve. The deck specimens were cooled naturally after the specific heating durations. The temperature profiles through the depth of deck specimens were measured during heating and cooling. After testing, concrete samples were taken from the deck specimens for material analysis. Different types of material tests were conducted on samples taken from the undamaged and damaged deck specimens. The material test results were used to evaluate the effects of fire induced damage on the concrete microstructure, and to correlate the microstructure degradation with the through-depth temperature profiles of deck specimens.

From the experimental results, several critical parameters that can be affected by fire temperature and duration were discussed: (i) through-depth temperature profiles of deck specimens, (ii) cracks on the exposed surface of deck specimens, (iii) color changes of deck specimens, (iv) microstructure of heated concrete samples, and (v) content of calcium hydroxide in fire-damaged concrete samples at various depths. Based on the results of the heating experiments and observations from the material analysis, recommendations and guidance for evaluating concrete decks subjected to realistic fire scenarios are provided to assist bridge inspectors.

8

(5930996), Linji Wang. "EVALUATION OF VEGETATED FILTER STRIP IMPLEMENTATIONS IN DEEP RIVER PORTAGE-BURNS WATERWAY WATERSHED USING SWAT MODEL." Thesis, 2019.

Abstract:
In 2011, the Deep River Portage-Burns Waterway Watershed was identified as a priority in the Northwest Indiana watershed management framework by the Northwestern Indiana Regional Planning Commission. Section 319 grant cost-share programs were initiated in an effort to maintain and restore the health of the watershed. A watershed management plan developed for this watershed proposed the implementation of vegetated filter strips (VFS) as an option. In this thesis, the effectiveness of VFS as a best management practice (BMP) for the Deep River system was evaluated using a hydrological modeling scheme.

In this research, a Nonpoint Source Pollution and Erosion Comparison Tool (NSPECT) model and a Soil and Water Assessment Tool (SWAT) model were constructed with the required watershed characteristic data and climate data. The initial hydrologic and nutrient parameters of the SWAT model were calibrated using the SWAT Calibration and Uncertainty Programs (SWAT_CUP) with historical flow and nutrient data in a two-stage calibration process. The calibrated parameters were validated to ensure the model accurately simulates field conditions and were preserved in the SWAT model for the effectiveness analysis of BMP implementations.

To evaluate the effectiveness of VFS as a BMP, four different scenarios of VFS implementation along Turkey Creek were simulated with the calibrated SWAT model. With the implementation of VFS in the tributary subbasin of Turkey Creek, the annual total phosphorus (TP) of the VFS-implemented subbasin was reduced by 1.60% to 78.95%, and the annual TP of downstream subbasins was reduced by 0.09% to 55.42%. Daily TP reductions ranged from 0% to 90.3% in the VFS-implemented subbasin. Annual TP reductions for the four scenarios ranged from 28.11 kg to 465.01 kg.
9

(5930987), Mingda Lu. "ASSESSING THE PERFORMANCE OF BROOKVILLE FLOOD CONTROL DAM." Thesis, 2019.

Abstract:
In this study, the performance of a flood control reservoir, Brookville Reservoir, located on the East Fork of the Whitewater River Basin, was analyzed using historical and projected future data. For that purpose, USEPA HSPF software was used to develop a rainfall-runoff model of the entire Whitewater River Basin up to Brookville, Indiana. Using uncontrolled flow data, the model was calibrated with 35 years of data and validated with 5 years by evaluating the goodness-of-fit with R², RMSE, and NSE. Using the historical data, the historical performance of the reservoir was assessed first. Then, using downscaled daily precipitation data obtained from a GCM for the region, flows were generated with the calibrated HSPF model. A reservoir operation model was built using the present operating policies. By driving the reservoir simulation model with the HSPF model results, the performance of the reservoir was assessed for future conditions.
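The goodness-of-fit statistics named above are straightforward to compute; below is a small sketch of R², RMSE, and NSE on made-up observed/simulated flow pairs (NSE compares model error against the variance of the observations).

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

def r2(obs, sim):
    """Squared linear correlation coefficient."""
    r = np.corrcoef(obs, sim)[0, 1]
    return float(r * r)

obs = np.array([12.0, 30.0, 25.0, 8.0, 15.0])   # observed flows (illustrative)
sim = np.array([10.0, 28.0, 27.0, 9.0, 14.0])   # simulated flows
print(f"RMSE={rmse(obs, sim):.2f}  NSE={nse(obs, sim):.3f}  R^2={r2(obs, sim):.3f}")
```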
10

(7046339), Luz Maria Agudelo Urrego. "FINITE ELEMENT MODELING OF BURIED ARCHED PIPES FOR THE ESTIMATION OF MAXIMUM FILL COVERS." Thesis, 2019.

Abstract:
The Indiana Department of Transportation implements maximum soil fill covers to ensure the safe installation and operation of buried pipes. Historically, fill cover tables are provided by INDOT, but the methodology for calculating these covers is not well documented. The finite element method enables a comprehensive analysis of the soil-pipe system taking into account soil conditions, pipe type and geometry, and conditions on the pipe-soil interface.

This thesis discusses the calculation of maximum fill covers for corrugated and structural plate pipe-arches using the finite element software CANDE and compares the results with previous estimates provided by INDOT. The CANDE software uses the finite element method and Load and Resistance Factor Design, based on a two-dimensional culvert installation in a soil-pipe model. The model is set up under plane strain conditions, is subjected to factored dead and live loads, and provides an analysis of the structure based on safety measures against all factored failure modes associated with the structural material.

Significant issues were encountered when calculating the maximum fill covers for pipe-arches in CANDE, including the inability of standard CANDE (Level 2 mesh) to model pipe-arches, lack of convergence in nonlinear analysis, and fill cover results higher than expected. To solve these issues, the pipe-arches were modeled using the Level 3 solution in CANDE. The CANDE analyses were run using small-deformation analysis after buckling was eliminated as a governing failure mode using parallel simulations in Abaqus. Numerical results were compared to analytical solutions following ASTM standards.

The results showed that CANDE and INDOT calculations differ significantly, with the CANDE results yielding higher fill covers than those provided in INDOT specifications. These differences are attributed to the assumed loading pattern at failure. While the CANDE results assume that the maximum fill cover height is defined by the failure of the pipe considering the radial pressure (Pv), the INDOT results are consistent with results obtained by limiting the bearing capacity of the soil around the corner radius (Pc).
11

(9179471), Kuan Hung Lin. "COMPARATIVE ANALYSIS OF SWAT CUP AND SWATSHARE FOR CALIBRATING SWAT MODELS." Thesis, 2020.

Abstract:

The Soil and Water Assessment Tool (SWAT) is a widely used model for large and complex watershed simulations. To correctly predict the runoff of a watershed, auto-calibration methods are applied. Among calibration platforms, SWAT CUP is widely used in the SWAT community. A newer web-based calibration platform, SWATShare, is also gaining popularity thanks to its user-friendly interface, access to high-performance computing resources, and collaborative features. The algorithm implemented in SWAT CUP is Sequential Uncertainty Fitting version 2 (SUFI2), while the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the algorithm employed by SWATShare. There is a limited amount of research comparing model performance between these two calibration algorithms and platforms.

This study examines whether the calibrated models provide equally reliable results. Thirty US watersheds were studied. SWAT models were calibrated using seven years of rainfall data and outflow observations from 2001 to 2007, and then validated using three years of historical records from 2008 to 2010. Inconsistency exists between the parameter sets calibrated by the two algorithms, with percentage differences between parameter values ranging from 8.7% to 331.5%. However, in two-thirds of the study basins, there is no significant difference between the objective function values of the models calibrated by the two algorithms. Correlations were examined between parameter values and watershed features. Among all features and parameters, medium correlations exist in both SWATShare- and SWAT CUP-calibrated models across the 30 watersheds for the pairs reach length and GW_DELAY, CH_N2 and ALPHA_BF, climate zone and GWQMN, and SFTMP and NSE; the correlation coefficient difference between the two tools is less than 0.1. When visualizing results by ecoregion, KGE and NSE are similar in the calibrated models from both tools.
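For reference, the Kling-Gupta efficiency (KGE) mentioned above decomposes model fit into correlation, variability, and bias terms; this sketch implements the common 2009 form on invented flow data.

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([5.0, 22.0, 14.0, 9.0, 31.0, 12.0])   # observed outflow (illustrative)
sim = np.array([6.5, 20.0, 15.5, 7.0, 28.0, 13.0])   # calibrated-model outflow
print(f"KGE = {kge(obs, sim):.3f}")
```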

The initial parameter ranges used for SWAT CUP calibration could lead to satisfactory results, with objective function values greater than 0.5. However, the parameter values of the calibrated model might not represent real physical conditions if they fall outside realistic ranges. Such inaccurate parameter values can lead to lower objective function values in validation. The objective function values can be improved by setting the parameter ranges to match realistic values.

Comparing the two tools, SWATShare accurately calibrates parameter values to a realistic range using its default ranges in most cases. For models with unsatisfactory results from SWATShare, the objective function values could be improved by constraining the parameters to the best-fit ranges given by the SWAT CUP results. Also, for watersheds with similarly satisfactory calibrated objective values from both tools, constraining the parameters to a reasonable range could generate a new calibrated model that performs as well as the original one. Gradually constraining parameter values to realistic ranges in this way can exclude models that are statistically satisfactory but physically meaningless. In addition, in some ecoregions, the best parameter sets in SWATShare fall in a more physically meaningful range. Overall, the newly emerged platform, SWATShare, is found to be capable of conducting good SWAT model calibration.

12

(9739793), Audrey Lafia-Bruce. "TRANSPORTATION INFRASTRUCTURE NETWORK PERFORMANCE ANALYSIS." Thesis, 2020.

Abstract:

The main objective of this thesis is to analyze transportation infrastructure based on performance measures. The thesis represents a transportation network as a system of nodes and links, in which it is important to identify critical components. In identifying critical components of the network, performance measures such as nodal degree, nodal closeness, nodal eigenvector, and nodal betweenness, which are among the most widely used, were explored in the analysis of the network. These measures account for the vulnerability of a node to failure in the transportation network.
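The four centrality measures named above are readily computed with the networkx library; the toy graph below is invented, but the function calls are the standard networkx APIs.

```python
import networkx as nx

# Toy road network: nodes are intersections, edges are links (illustrative only).
G = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 4), (4, 5), (5, 6), (4, 6)])

metrics = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=500),
    "betweenness": nx.betweenness_centrality(G),
}
for name, values in metrics.items():
    top = max(values, key=values.get)   # node ranked most critical by this measure
    print(f"{name:>11}: most critical node = {top} ({values[top]:.3f})")
```

Different measures can rank different nodes as most critical, which is why the study combines them through multicriteria evaluation.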

In our daily use of transportation networks, we are faced with disruptions that change the transportation network. Disruptions tend to be commonplace in transportation systems, ranging from manmade disruptions, such as accidents, to natural disasters, such as floods due to rainfall, hurricanes, and seismic activity, which are often unprecedented. These incidents change how road users interact with the transportation system. The disruptions cause increased travel time, delays, and even loss of property, and lead to direct, indirect, and induced impacts.

This study provides a firsthand diagnosis of the vulnerability of the transportation network to flooding by ranking the nodes using performance measures and multicriteria evaluation. The study found that different performance measures may produce different critical nodes, but with sensitivity analysis and a veto rule, the most critical node can be established. The analysis found that node 80 is the most critical and essential node of the entire network after the impact of the flood.

13

(8086718), Xiangxi Tian. "IMAGE-BASED ROAD PAVEMENT MACROTEXTURE DETERMINATION." Thesis, 2021.

Abstract:

Pavement macrotexture contributes greatly to road surface friction, which in turn plays a significant role in reducing road incidents. Conventional macrotexture measurement techniques (e.g., the sand patch method, the outflow method, and laser measuring) are either expensive, time-consuming, or of poor repeatability. This thesis aims to develop and evaluate affordable and convenient alternative approaches to determine pavement macrotexture. The proposed solution is based on multi-view smartphone images collected in situ over the pavement. Computer vision techniques are then applied to create high-resolution three-dimensional (3D) models of the pavement. The thesis develops the analytics to determine two primary macrotexture metrics: mean profile depth (MPD) and aggregate loss. Experiments with 790 images over 25 spots on three State Roads and 6 spots at the INDOT test site demonstrated that the image-based method can yield reliable results comparable to conventional laser texture scanner results. Moreover, based on experiments with 280 images over 7 sample plates with different aggregate loss percentages, the newly developed analytics were proven to enable estimation of aggregate loss, which is largely compromised in the laser scanning technique and the conventional MPD calculation approach. The root mean square height based on the captured images was verified in this thesis as a more comprehensive metric for macrotexture evaluation. It is expected that the developed approach and analytics can be adopted for practical use at a large scale.
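As a rough illustration of the mean profile depth computation (a simplified reading of the ISO 13473-1 style procedure, not necessarily the exact analytics developed in the thesis), the sketch below evaluates MPD and RMS height on a synthetic profile.

```python
import numpy as np

def mpd(profile_mm, dx_mm, base_mm=100.0):
    """Simplified mean profile depth over 100 mm baselines.

    Each baseline is mean-detrended and split in half; the average of the
    two half peaks is the segment depth. (ISO 13473-1 also removes slope.)
    """
    n = int(round(base_mm / dx_mm))
    depths = []
    for s in range(0, len(profile_mm) - n + 1, n):
        seg = profile_mm[s:s + n] - np.mean(profile_mm[s:s + n])
        half = n // 2
        depths.append(0.5 * (seg[:half].max() + seg[half:].max()))
    return float(np.mean(depths))

def rms_height(profile_mm):
    """Root mean square height about the mean plane."""
    z = profile_mm - np.mean(profile_mm)
    return float(np.sqrt(np.mean(z ** 2)))

x = np.arange(0, 500, 0.5)   # 500 mm synthetic profile, 0.5 mm sample spacing
prof = 0.4 * np.sin(2 * np.pi * x / 15) \
     + 0.05 * np.random.default_rng(1).normal(size=x.size)
print(f"MPD = {mpd(prof, 0.5):.3f} mm, RMS height = {rms_height(prof):.3f} mm")
```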

14

(8922227), Mohamadreza Moini. "BUILDABILITY AND MECHANICAL PERFORMANCE OF ARCHITECTURED CEMENT-BASED MATERIALS FABRICATED USING A DIRECT-INK-WRITING PROCESS." Thesis, 2020.

Abstract:

Additive Manufacturing (AM) allows for the creation of elements with novel forms and functions. Utilizing AM in the development of civil infrastructure components allows for more advanced, innovative, and unique performance characteristics. The research presented in this dissertation focuses on developing a better understanding of the fabrication challenges and opportunities in AM of cement-based materials. Specifically, challenges related to printability and opportunities offered by 3D-printing technology, including the ability to fabricate intricate structures and to generate unique and enhanced mechanical responses, are explored. Three aspects of 3D-printing of cement-based materials were investigated: the fresh stability of 3D-printed elements in relation to the material's rheological properties, the microstructural characteristics of the interfaces induced during the 3D-printing process, and the mechanical response of 3D-printed elements with bio-inspired design of the material's architecture. This research aims to open new pathways to obtain stability in freshly 3D-printed elements by determining the rheological properties that control the ability to fabricate elements in a layer-by-layer manner, followed by an understanding of the microstructural features of 3D-printed hardened cement paste elements, including the interfaces and the pore network. This research also introduces a new approach to enhance the mechanical response of 3D-printed elements by controlling the spatial arrangement of individual filaments (i.e., the material's architecture) and by harnessing the weak interfaces induced by the 3D-printing process.


15

(5929889), Joo Min Kim. "Behavior, Analysis and Design of Steel-Plate Composite (SC) Walls for Impactive Loading." Thesis, 2019.

Abstract:
There is significant interest in the use of steel-plate composite (SC) walls for protective structures, particularly for impactive and impulsive loading. The behavior of SC walls is fundamentally different from that of reinforced concrete (RC) walls due to the addition of steel plates on the exterior surfaces, which prevent concrete scabbing and enhance local perforation resistance.

Laboratory-scale SC wall specimens were fabricated, cast with concrete, and then tested in an indoor missile impact test setup specially built and commissioned for this research. The parameters included in the experimental investigations were the steel plate reinforcement ratio (3.7% - 5.2%); the tie bar spacing, size, and reinforcement ratio (0.37% - 1.23%); and the steel plate yield strength (Gr. 50 - Gr. 65). Additional parameters included the missile diameter (1.0 in., 1.5 in.), weight (1.3 lbs, 2.0 lbs, 3.5 lbs), and velocity (410 - 760 ft/s). A total of sixteen tests were conducted, and the results are presented in detail, including measurements of missile velocity, penetration depth, rear steel plate bulging deformation, and test outcome (stopped or perforated). The test results are further used to illustrate the significant conservatism of a design method developed previously by researchers (Bruhl et al. 2015a), and the sources of this conservatism, including differences in the missile penetration mechanism, the dimensions of the concrete conical frustum (breaking out), and the penetration depth equations assumed in the design method.

Numerical models were developed to further investigate local damage behavior of SC walls. Three-dimensional finite element models were built using LS-DYNA software and employed to simulate the missile impact tests on the SC wall specimens. The numerical analysis results were benchmarked to the experimental test results for the validation of the models.

Two sets of parametric studies were conducted using the benchmarked numerical models. The first set of the parametric studies was intended to narrow the perforation velocity ranges from the experimental results for use in evaluating the accuracy of a rational design method developed later in this research. The second set of the parametric studies was intended to evaluate the influence of design parameters on the perforation resistance of SC walls. It was found that flexural reinforcement ratio and steel plate strength are significant parameters which affect the penetration depth. However, shear reinforcement ratio has negligible influence.

Results from the experimental investigations and the numerical parametric studies were used to develop a rational design method which modifies the three-step design method. The modified design method incorporates a proposed modification factor applicable to the penetration depth equations and the missile penetration mechanism observed from the experiments. The modified design method was verified using the larger-scale missile impact test data from South Korean tests as well.

Additional research was performed to evaluate the local failure modes when perforation was prevented under missile impact loading on SC walls. Through numerical parametric studies, three local failure modes (punching shear, flexural yielding, and plastic mechanism formation) were investigated. Also, an innovative approach to generating static resistance functions was proposed for use in SDOF or TDOF model analysis.
16

(6845639), Farida Ikpemesi Mahmud. "Simplified Assessment Procedure to Determine the Seismic Vulnerability of Reinforced Concrete Bridges in Indiana." Thesis, 2019.

Abstract:

The possibility of earthquakes in Indiana due to the presence of the New Madrid Seismic Zone is well known, and the identification of the Wabash Valley Seismic Zone has further increased our understanding of the seismic hazard in the state. Due to this awareness of the increased potential for earthquakes, specifically in the Vincennes District, the seismic vulnerability of Indiana's bridge network must be assessed. As such, the objective of this thesis is to develop a simplified assessment procedure that can be used to conduct a state-wide seismic vulnerability assessment of reinforced concrete bridges in Indiana.

Across the state, variability in substructure type, seismic hazard level, and soil site class influences the vulnerability of bridges. To fully understand the impact of this variation, a detailed assessment was completed on a representative sample. Twenty-five reinforced concrete bridges were selected across the state and analyzed using information from the bridge drawings and a finite element analysis procedure. These bridges were analyzed using synthetic ground motions representative of the hazard level in Indiana. The results of the detailed analysis were used to develop a simplified assessment procedure that uses information that is available in BIAS or can be added to it. At this time, BIAS does not contain all the information required for accurate estimates of dynamic properties, so certain assumptions were made. Several candidate models were developed by incrementally increasing the level of information proposed to be added to BIAS, which increased the accuracy of the results. The simplified assessment was then validated through a comparison with the detailed analysis.

Through the development of the simplified assessment procedure, the minimum data item which must be added to BIAS to complete the assessment is the substructure type, and bridges with reinforced concrete columns in the substructure require a detailed assessment. Lastly, by increasing the level of information available in BIAS, the agreement between the results of the simplified assessment and the detailed assessment is improved.

17

(6640721), Alexandra D. Mallory. "On the development of an open-source preprocessing framework for finite element simulations." Thesis, 2019.

Abstract:
Computational modeling is essential for material and structural analyses for a multitude of reasons, including improving designs and reducing manufacturing costs. However, the cost of commercial finite element packages prevents companies with limited financial resources from accessing them. Free finite element solvers, such as Warp3D, exist as robust alternatives to commercial finite element analysis (FEA) packages. However, these open-source solvers are not necessarily easy to use, mainly because they lack a preprocessing framework in which users can generate meshes, apply boundary conditions and forces, or define materials. We developed a preprocessor for Warp3D, referred to as W3DInput, to generate input files for the solver. W3DInput creates a general framework, at no cost, to go from CAD models to structural analysis. With this preprocessor, the user can import a mesh from a mesh generator (Gmsh was used for this project), and the preprocessor steps the user through the inputs needed for a Warp3D file. The input file is thereby guaranteed to be in the correct order and format readable by the solver, making the solver more accessible for users of all levels. Five use cases were created: a cantilever beam, a displacement control test, a displacement control test with a material defined by a user-defined stress-strain curve, a crystal plasticity model, and a pallet. Results were output to Exodus II files for viewing in ParaView and were verified by checking the stress-strain curves. Results from these use cases show that the input files generated by the preprocessor were correct.
18

(6641012), Genisson Silva Coutinho. "FACULTY BELIEFS AND ORIENTATIONS TO TEACHING AND LEARNING IN THE LAB: AN EXPLORATORY CASE STUDY." Thesis, 2019.

Abstract:
This dissertation presents a two-phase multiple case study conducted to investigate faculty beliefs regarding the integration of labs into engineering and engineering technology education, and the relationship between such beliefs and the teaching practices adopted in the labs. In the first phase, an exploratory study grounded in a framework of beliefs was conducted to elicit the beliefs espoused by the participants. Interviews were used to elicit the participants' beliefs, and the transcribed interviews were analyzed through the constant comparative method. Thirteen faculty members from the College of Engineering and Engineering Technology participated. In the second phase, a triangulation approach was used to investigate the relationships between the participants' beliefs and their corresponding teaching practices. The findings from phase one were triangulated with the data from interviews, questionnaires, and documents to elicit the relationships between beliefs and practices.
19

(7040873), Ting-Wei Wang. "ANCHORING TO LIGHTWEIGHT CONCRETE: CONCRETE BREAKOUT STRENGTH OF CAST-IN, EXPANSION, AND SCREW ANCHORS IN TENSION." Thesis, 2019.

Abstract:
The use of lightweight concrete in the concrete industry provides economical and practical advantages. Structural anchors are commonly used in the industry for various structural applications. In ACI 318-19: Building Code Requirements for Structural Concrete and Commentary, a modification factor, λa, is specified for the calculated design strengths of anchors installed in lightweight concrete that experience concrete or bond failure. The modification factor consists of the general lightweight concrete modification factor, λ, specified in the code, multiplied by an additional reduction factor dependent on the anchor and failure type. For the concrete breakout strength of expansion and screw anchors in lightweight concrete, the value of λa is specified as 0.8λ. For the concrete breakout strength of cast-in anchors in lightweight concrete, the value of λa is 1.0λ. In both cases, however, the specified value of λa is based on limited test data. A research program was therefore conducted to provide the data needed for more appropriate lightweight modification factors. A primary objective of the research was to evaluate the concrete breakout strengths of cast-in, expansion, and screw anchors installed in lightweight concrete by conducting a systematic experimental program that included various types of lightweight concrete. More specifically, the experimental program included tension tests on torque-controlled expansion anchors, displacement-controlled expansion anchors, and screw anchors from four manufacturers, in addition to tension tests on cast-in headed stud anchors. A total of seven concrete types were included in the research: one normalweight concrete mixture and six lightweight concrete mixtures. The lightweight concrete included sand-lightweight and all-lightweight mixtures composed of expanded shale, clay, and slate aggregates. The results of the experimental program are compared to the limited data available from previous tension tests on anchors in lightweight concrete. Based on the results of the research, revised lightweight concrete modification factors for the concrete breakout design strengths of the anchor types included in the test program are provided.
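For context, the basic concrete breakout strength of a single anchor in tension in ACI 318 takes the form N_b = k_c * λ_a * sqrt(f'c) * h_ef^1.5 in psi and inch units, with k_c = 24 for cast-in anchors; the sketch below evaluates it for illustrative values, assuming λ = 0.85 for sand-lightweight concrete.

```python
def basic_breakout_strength(hef_in, fc_psi, lam_a=1.0, kc=24.0):
    """Basic concrete breakout strength N_b = kc * lam_a * sqrt(f'c) * hef**1.5 (lb).

    kc = 24 for cast-in anchors (17 for post-installed) per ACI 318;
    lam_a is the lightweight-concrete modification factor discussed above.
    """
    return kc * lam_a * fc_psi ** 0.5 * hef_in ** 1.5

# Illustrative comparison for hef = 4 in., f'c = 4000 psi:
# normalweight (lam_a = 1.0) vs an expansion anchor in sand-lightweight
# concrete, where lam_a = 0.8 * lambda = 0.8 * 0.85 (assumed values).
for lam_a, label in [(1.0, "cast-in, normalweight"),
                     (0.8 * 0.85, "expansion, sand-lightweight")]:
    Nb = basic_breakout_strength(hef_in=4.0, fc_psi=4000, lam_a=lam_a)
    print(f"{label:>28}: N_b = {Nb / 1000:.1f} kip")
```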
20

(7479359), Cheng Qian. "Evaluation of Deep Learning-Based Semantic Segmentation Approaches for Autonomous Corrosion Detection on Metallic Surfaces." Thesis, 2019.

Abstract:
Structural defects can lead to serious safety issues and corresponding economic losses. In 2013, it was estimated that 2.5 trillion US dollars were spent on corrosion around the world, which was 3.4% of the global Gross Domestic Product (GDP) (Koch, 2016). Periodic inspection for corrosion and maintenance of steel structures are essential to minimize these losses. Current corrosion inspection guidelines require inspectors to visually assess every critical member within arm's reach. This process is time-consuming, subjective, and labor-intensive, and therefore is done only once every two years.

A promising solution is to use a robotic system, such as an Unmanned Aerial Vehicle (UAV), with computer vision techniques to assess corrosion on metallic surfaces. Several studies have been conducted in this area, but their shortcoming is that they cannot quantify the corroded region reliably: some studies only classify whether corrosion exists in the image or not; some only draw a box around the corroded region; and some need human-engineered features to identify corrosion. This study addresses this problem by using deep learning-based semantic segmentation to let the computer capture useful features and find the boundaries of corroded regions accurately.

In this study, the performance of four state-of-the-art deep learning techniques for semantic segmentation was investigated for the corrosion assessment task: U-Net, DeepLab, PSPNet, and RefineNet. Six hundred high-resolution images of corroded regions were used to train and test the networks. Ten sets of experiments were performed on each architecture for cross-validation. Since the images were large, two approaches were used to analyze them: (1) subdividing the images, and (2) down-sampling the images. A parametric analysis of these two preprocessing methods was also conducted.

Prediction results were evaluated based on intersection over union (IoU), recall, and precision scores. Statistical analysis using box plots and the Wilcoxon signed-rank test showed that the subdivided image dataset gave better results, while the resized images required less time for prediction. PSPNet outperformed the other three architectures on the subdivided dataset, and DeepLab showed the best performance on the resized dataset. U-Net was found to be ideal for real-time processing of images, while RefineNet was not appropriate for the corrosion assessment task.
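The evaluation metrics named above are simple to compute from binary masks; the following sketch derives IoU, precision, and recall from synthetic prediction/ground-truth masks (real evaluation would use the study's labeled images instead).

```python
import numpy as np

def seg_scores(pred, truth):
    """IoU, precision, recall for binary corrosion masks (1 = corroded pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    iou = tp / (tp + fp + fn)
    return iou, tp / (tp + fp), tp / (tp + fn)

rng = np.random.default_rng(42)
truth = rng.random((64, 64)) > 0.7                # synthetic ground-truth mask
pred = truth ^ (rng.random((64, 64)) > 0.95)      # prediction with a few flipped pixels
iou, prec, rec = seg_scores(pred, truth)
print(f"IoU={iou:.3f}  precision={prec:.3f}  recall={rec:.3f}")
```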
21

(8800811), Mingmin Liu. "MODELLING OF INTERSTATE I-465 CRASH COUNTS DURING SNOW EVENTS." Thesis, 2020.

Abstract:

Traffic safety management on interstates is crucial during adverse winter weather. According to the Federal Highway Administration (FHWA), there are over 5,891,000 vehicle crashes each year in the United States, and approximately 21% of these crashes are weather-related. INDOT spends $60 million on winter operations each year to minimize weather impacts on driver capability, vehicle performance, road infrastructure, and crash risk. Several studies have sought to investigate the relationship of crash counts with weather, speed, traffic, and roadway data during snow events, in order to help agencies identify needs and distribute resources effectively and efficiently during winter weather events. A limitation of these studies is that weather variables are often correlated with each other; for example, visibility may be correlated with snow precipitation, and air temperature may be correlated with net solar surface radiation. The randomness of crash occurrence also increases the difficulty of such studies. In this study, a random parameter negative binomial model was estimated for Interstate I-465 in Indianapolis for the winters of 2018 and 2019. The results show that during snow events in Indiana, air temperature, wind speed, snow precipitation, net solar surface radiation, and visibility significantly impact the number of crashes on I-465. Drivers travelling over the speed limit (55 mph), especially on wet pavements, are more likely to lose control of their vehicles and crash. Travel speeds between 45 mph and 55 mph and between 15 mph and 25 mph are both strong factors; somewhat surprisingly, speeds between 25 mph and 45 mph were not found to be significant. The number of interchanges is also positively related to crash counts due to the high number of conflict points at ramp merging sections. Travelling over the speed limit is a random parameter with unobserved heterogeneity, which is intuitive since speeding can be more dangerous in areas with complex road geometry and narrower lanes. Traffic counts have a negative correlation with crash counts, likely due to faster speeds when fewer vehicles are travelling on the loop. Crash counts increased by about 70% during severe storm days on I-465, and visibility and air temperature are highly correlated with crash counts. These key findings can help the agency deploy warnings when visibility is low or temperature falls sharply.
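A fixed-parameter negative binomial regression, a simplified stand-in for the random parameter model used in the study, can be fitted with statsmodels; the covariates and data below are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
# Hypothetical snow-event records: weather covariates and crash counts.
X = np.column_stack([
    rng.uniform(-10, 2, n),      # air temperature (C)
    rng.uniform(0, 15, n),       # wind speed (m/s)
    rng.uniform(0.1, 10, n),     # visibility (km)
])
# Overdispersed counts: Poisson rate mixed with a gamma heterogeneity term.
mu = np.exp(0.8 - 0.05 * X[:, 0] + 0.04 * X[:, 1] - 0.10 * X[:, 2])
y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

res = sm.NegativeBinomial(y, sm.add_constant(X)).fit(disp=False)
print(res.summary())
```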
22

(6613415), Leonardo Enrico Bertassello. "Eco-Hydrological Analysis of Wetlandscapes." Thesis, 2019.

Abstract:
Wetlands are dispersed, fractal aquatic habitats that play a key role in watershed eco-hydrology. Wetlands provide critical habitats for specialized fauna and flora, process nutrients, and store water. Wetlands are found in a wide range of landscapes and climates, including humid/tropical regions where surface water is abundant and semiarid/arid regions with surface-water deficits. Wetland morphology and hydrology are governed by geomorphology and climate. Wetlands are dynamic; they change in space and time in response to unsteady external conditions and, over the longer term, to internal process feedbacks. Together, wetlands form a mosaic of heterogeneous, dynamic aquatic habitats in varying spatial organizations, networked by hydrological and ecological connections.

The overarching goal of this research is to provide a robust theoretical framework to model the dynamics of multiple wetlands spread across watersheds (the wetlandscape). The three main lenses used to identify the spatiotemporal variability in wetlandscapes were hydrology, morphology, and ecology. Hydrological modeling of wetlands is key to determining which habitats can host aquatic and semiaquatic species, as well as function as retention basins storing considerable amounts of water or processing nutrients. The interaction of wetlands with landscape topography is essential to characterize the morphological attributes of these waterbodies. Different generating mechanisms have produced differences in wetland shapes and extents; however, even if wetlands differ among regions, and also within the same landscape, the set of functions they can support is similar. In this research, I have also proposed that, because water accumulates at low elevations, topography-based models are helpful for identifying wetlands in landscapes. These models are especially useful where wetland data are sparse or unavailable. The proposed approaches could reproduce the abundance and distribution of active wetlands found in the NWI database, despite the differences in identification methods. In particular, I found that wetland size distributions across the conterminous United States share the same Pareto pdf. Furthermore, wetland shape is constrained to a narrow range of 2D fractal dimension (1.33-1.5). Since the method requires only a DEM as input, the proposed framework can be applied to any DEM to extract the location and extent of depressional wetlands.
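Fitting a Pareto distribution to wetland areas, as described above, can be sketched with scipy; the areas below are synthetic stand-ins for NWI data, and the 0.1 ha lower cutoff is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic wetland surface areas (ha): heavy-tailed, minimum size 0.1 ha.
areas = (rng.pareto(1.2, 2000) + 1.0) * 0.1

# Fit a Pareto pdf with the location fixed at zero, so the scale parameter
# plays the role of the minimum resolvable wetland size.
b, loc, scale = stats.pareto.fit(areas, floc=0)
print(f"fitted Pareto shape (tail exponent) = {b:.2f}, scale = {scale:.3f} ha")
```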

Wetlands are among the most biologically diverse ecosystems, serving as habitats for a wide range of unique plant and animal life. In fact, wetlands and their surrounding terrestrial habitats are critical for the conservation and management of aquatic and semi-aquatic species. Understanding the degree and dynamics of connectedness among individual wetlands is a challenge that unites the fields of ecology and hydrology. Connectivity among a spatially distributed mosaic of wetlands, embedded in uplands, is critical for aquatic habitat integrity and for maintaining metapopulation biodiversity. Land-use and climate change, among other factors, contribute to wetland habitat loss and fragmentation of dispersal networks. Here, I present an approach for modeling dynamic spatiotemporal changes, driven by stochastic hydroclimatic forcing, in the topology of dispersal networks formed by connecting habitat zones within wetlands. I examined changes in the topology of dispersal networks resulting from temporal fluctuations in hydroclimatic forcing, finding that optimal dispersal networks are available only for limited time periods; species therefore need to adapt constantly to cope with adverse conditions.

Loss of wetlands leads to habitat fragmentation and a decrease in landscape connectivity, which in turn hampers the dispersal and survival of wetland-dependent species. Ecosystem functions arise from interdependent processes and feedbacks operating concurrently at multiple scales. In this thesis, I integrated stochastic models of landscape hydrology, which capture the temporal variability in wetland attributes (e.g., stage, surface area, storage volume, carrying capacity), with ecological network theory, allowing characterization of the spatiotemporal dynamics of habitat distribution and connectivity that are essential to meta-communities. The proposed framework can be applied in diverse landscapes and hydro-climates and could thus be used at larger scales. It could also inform conservation and restoration efforts that target landscape functions linked to transport in wet ecological corridors. The interdisciplinarity of this work allows for a wide spectrum of potential applications. Although the ultimate goal of the thesis is the eco-hydrologic modeling of wetlandscapes, the backbone of the proposed models could be extended to any kind of patchy habitat driven by stochastic forcing.
APA, Harvard, Vancouver, ISO, and other styles
23

Kovacevic, Vlado S. "The impact of bus stop micro-locations on pedestrian safety in areas of main attraction." 2005. http://arrow.unisa.edu.au:8081/1959.8/28389.

Full text
Abstract:
From the safety point of view, the bus stop is perhaps the most important part of the bus public transport system, as it represents the point where bus passengers may interact directly with other road users and create conflicting situations leading to traffic accidents. For example, travellers could be struck walking to/from or boarding/alighting a bus. At these locations, passengers become pedestrians and at some stage cross busy arterial roads at the bus stop in areas or at objects of main attraction, usually outside designated pedestrian facilities such as signal-controlled intersections and zebra and pelican crossings. Pedestrian exposure to risk, or risk-taking, occurs when people cross the road in front of the stopped bus, at the rear of the bus, or between buses, particularly where bus stops are located on two-way roads (i.e. within the mid-block of a road with side streets, at non-signalised cross-sections). However, it is necessary to have a better understanding of pedestrian road-crossing risk exposure (pedestrian crossing distraction, obscurity, and behaviour) within bus stop zones so that it can be incorporated into new designs, bus stop placement, and the evaluation of traffic management schemes in which bus stop locations will play an increasingly important role. A full range of possible incidental interactions is presented in a tabular model that covers the most common interacting traffic movements within bus stop zones. The thesis focuses on pedestrian safety, discusses the theoretical foundations of bus stops, and determines the types of accident risks between bus travellers as pedestrians and motor vehicles within bus stop zones. Thus, the objectives of this thesis can be summarized as follows: (I) classification of bus stops, particularly according to objects of main attraction (pedestrian-generating activities); (II) analysis of traffic movements and interactions as accident/risk exposure in the zone of bus stops with respect to that structure; (III) categorization of traffic accidents in the vicinity of bus stops, and analysis of the interactions (interacting movements) that occur within bus stop zones in order to discover the nature of the problems; (IV) formulation of tabular (pedestrian traffic accident prediction) models/forms, based on the traffic interactions that create and cause the possibility of accident conflicts, for practical statistical analysis of accidents related to bus stops; and (V) safety aspects related to the micro-location of bus stops, to assist in micro-location design, the operation of bus stop safety facilities, and safer pedestrian crossing for access between the bus stop and nearby objects of attraction. The scope of this thesis focuses on the theoretical foundation of bus stop micro-location in areas of main attraction or at objects of main attraction, and on the types of traffic accident risk as they occur between travellers as pedestrians and vehicle flow in the zone of the bus stop. Knowledge of possible interactions leads to the identification of potential conflict situations between motor vehicles and pedestrians. The problems discussed for each given conflict situation have great potential to increase the knowledge needed to prevent accidents, minimise pedestrian-vehicle conflicts in this area, and aid in the development and planning of safer bus stops.
APA, Harvard, Vancouver, ISO, and other styles
24

(6411944), Francisco J. Montes Sr. "EFFECTS ON RHEOLOGY AND HYDRATION OF THE ADDITION OF CELLULOSE NANOCRYSTALS (CNC) IN PORTLAND CEMENT." Thesis, 2019.

Find full text
Abstract:
Cellulose nanocrystals have been used in a wide range of applications, including cement composites as a strength enhancer. This work analyses the use of CNCs from several sources and production methods, and their effects on the rheology and hydration of pastes made using different cement types with different compositions. Cement Types I/II and V were used to prepare pastes with different water-to-cement ratios (w/c) and to measure the changes in rheology upon CNC addition. The presence of tricalcium aluminate (denoted C3A in cement chemistry) made a difference in the magnitude of CNC effects. At dosages under 0.5 vol% of dry cement, CNC reduced the yield stress by up to 50% of the control value. Pastes with more C3A showed reduced yield stress over a wider range of CNC dosages. CNC also increased the yield stress of pastes at dosages above 0.5%: twice the control value for pastes with high C3A content at 1.5% CNC, and up to 20 times for pastes without C3A at the same dosage.
All the CNCs used were characterized by length, aspect ratio, and zeta potential to identify a definitive factor governing their effect on the rheology of cement pastes. However, no definitive evidence was found that any of these characteristics dominated the measured effects.
The CNC dosage at which the maximum yield stress reduction occurred increased with the amount of water used in the paste preparation, which provides evidence of the dominance of the water-to-cement ratio in the rheological impact of CNC.
Isothermal calorimetry showed that CNCs cause concerning retardation effects in cement hydration. CNC slurries were then tested for sugars and other carbohydrates that could cause the aforementioned effect. The slurries were filtered, and impurities detected in the filtrate were quantified and characterized; however, the retardation appeared to be unaffected by the amount of the species detected, suggesting that the crystal chemistry, which is a consequence of the production method, is responsible for this retardation.
This work explores the benefits and drawbacks of using CNCs in cement composites by separately examining rheology and heat of hydration through a range of physical and chemical tests, to build a better understanding of the observed effects.
Understanding the effect of CNCs on cement paste rheology can provide insights for future applications of CNCs in cement composites.
APA, Harvard, Vancouver, ISO, and other styles
25

(7484339), Fu-Chen Chen. "Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems." Thesis, 2021.

Find full text
Abstract:
Structural health monitoring and building assessment are crucial for acquiring structures' states and maintaining their condition. Compared with human-labor surveys, which are subjective, time-consuming, and expensive, autonomous image and video analysis is a faster, more efficient, and non-destructive alternative. This thesis focuses on crack detection from videos, crack segmentation from images, and building assessment from street view images. For crack detection from videos, three approaches are proposed, based on local binary patterns (LBP) and support vector machines (SVM), a deep convolutional neural network (DCNN), and a fully-connected network (FCN). A parametric Naïve Bayes data fusion scheme is introduced that registers video frames in a spatiotemporal coordinate system and fuses information based on Bayesian probability to increase detection precision. For crack segmentation from images, the rotation-invariant property of cracks is utilized to enhance segmentation accuracy. The architectures of several approximately rotation-invariant DCNNs are discussed and compared using several crack datasets. For building assessment from street view images, a framework of multiple DCNNs is proposed to detect buildings and predict attributes that are crucial for flood risk estimation, including founding heights, foundation types (pier, slab, mobile home, or others), building types (commercial, residential, or mobile home), and building stories. A feature fusion scheme is proposed that combines image features with meta information to improve the predictions, and a task relation encoding network (TREncNet) is introduced that encodes task relations as network connections to enhance multi-task learning.
APA, Harvard, Vancouver, ISO, and other styles
26

(5930171), Yuxiao Qin. "Sentinel-1 Wide Swath Interferometry: Processing Techniques and Applications." Thesis, 2019.

Find full text
Abstract:
The Sentinel-1 (S1) mission is part of the European Space Agency (ESA) Copernicus program. In 2014 and 2016, the mission launched the twin Synthetic Aperture Radar (SAR) satellites Sentinel-1A (S1A) and Sentinel-1B (S1B). The S1 mission has started a new era for Earth observation missions with its higher spatial resolution, shorter revisit time, more precise control of satellite orbits, and unprecedented free-to-public distribution policy. More importantly, S1 adopts a new wide swath mode, the TOPS mode, as its default acquisition mode. The TOPS mode scans several different subswaths to gain larger coverage. Because the S1 mission is aimed at Earth observation applications, for example earthquakes, floods, and ice sheet flow, large monitoring areas are desired. Although TOPS is still a relatively new idea, the high quality data and wide application scope of S1 have earned tremendous attention in the SAR community.

The signal properties of a wide swath mode such as TOPS are different from those of the more conventional stripmap mode, and successfully processing such data for interferometry requires special techniques. For the purpose of Interferometric SAR (InSAR), the coregistration step is the most critical because it requires an accuracy on the order of 1/1000 of a pixel. In addition, processing a wide swath mode requires special steps such as burst stitching, deramping and reramping, and so on. Compared with stripmap, the processing techniques for wide swath modes are less developed. Much exploration is still needed on how to design a generic and robust wide swath interferometric processing chain.

Driven by the application needs of S1 wide swath interferometric processing, this research studies the key methodologies, explores and implements a new processing chain, designs a generic wide swath processing flow that utilizes the existing stripmap processing platform, and carries out preliminary applications. For the key methods, this study carries out a quantitative comparison between two different coregistration methods, namely the cross-correlation approach and the geometrical approach. The advantages and disadvantages of each method are given, and it is proposed to choose the suitable method based on one's study area. For the implementation of the new processing chain, the author proposes a user-friendly, stripmap-like processing flow with all the wide-swath-related processing done behind the scenes. This approach allows people with basic knowledge of InSAR and very little knowledge of wide swath modes to process and obtain interferometric products. For the generic process flow, the author applied the TOPS workflow to another wide swath mode, ScanSAR, and demonstrated the feasibility of processing two different wide swath modes with the same processing chain.
For preliminary applications, the author shows a large amount of interferometric data throughout the research and presents a case study with multi-temporal time series analysis using a stack of S1 data.

This research is application oriented, meaning the study serves real-world applications. To date, the processing chain and methodologies implemented in this research have been shared with many research groups around the world and have produced a number of promising outcomes. The recognition from others is also an affirmation of the value of this research.
APA, Harvard, Vancouver, ISO, and other styles
27

(9229868), Jephunneh Bonsafo-Bawuah. "AN AGENT-BASED FRAMEWORK FOR INFRASTRUCTURE MAINTENANCE DECISION MAKING." Thesis, 2020.

Find full text
Abstract:
A transportation system plays a significant role in the economic development of a region by facilitating the movement of goods and services. No matter how well infrastructure is designed or constructed, it is beneficial to know the maintenance needs over the life cycle of the infrastructure, so that the service life of the pavement is prolonged while minimizing its life-cycle costs. Pavement life-cycle performance helps maintenance investment decision-makers efficiently utilize the available infrastructure funds. This research focuses on developing a framework for identifying a more effective and efficient way of making decisions on the management and maintenance of infrastructure. The developed framework uses an agent-based modeling approach to capture the interactions that exist between different components of the transportation system and their characteristics, such as traffic volume and pavement condition, user cost, agency cost, etc. The developed agent-based model is useful for investigating the effects of time-varying vehicular density on pavement deterioration and road users' driving behavior. The framework was demonstrated on a two-lane highway as a case study. Using the developed agent-based simulation framework, it was possible to identify when road infrastructure maintenance should be done to achieve the desired pavement condition rating (PCR). It was also possible to show that a decrease in PCR can increase road user costs. The framework can track time to determine when maintenance should be done, based on the PCR values that indicate whether the pavement is in good, moderate, or bad condition. Regardless of the degree of road users' patience to stay in their travel lanes, the vehicle distribution on the road is balanced in the long run, because road users tend to change travel lanes to minimize their overall travel times. When the patience level is low, road users tend to change lanes more, causing a high number of vehicles in the left lane, as it is considered the lane-changing lane on two-lane highways. It was also observed that as the patience of road users increases, the number of vehicles using the right lane becomes almost the same as the number using the left lane.
APA, Harvard, Vancouver, ISO, and other styles
28

(9750833), Zilong Yang. "Automated Building Extraction from Aerial Imagery with Mask R-CNN." Thesis, 2020.

Find full text
Abstract:

Buildings are one of the fundamental sources of geospatial information for urban planning, population estimation, and infrastructure management. Although building extraction research has made considerable progress through neural network methods, the labeling of training data still requires manual operations that are time-consuming and labor-intensive. Aiming to improve this process, this thesis developed an automated building extraction method based on the boundary following technique and the Mask Regional Convolutional Neural Network (Mask R-CNN) model. First, assisted by known building footprints, a boundary following method was used to automatically label the training image datasets. In the next step, the Mask R-CNN model was trained with the labeling results and then applied to building extraction. Experiments with datasets of the urban areas of Bloomington and Indianapolis, using 2016 high-resolution aerial images, verified the effectiveness of the proposed approach. With the help of existing building footprints, the automatic labeling process took only five seconds for a 500×500-pixel image without human interaction. An intersection over union (IoU) of 0.951 between the labeled mask and the ground truth was achieved due to the high quality of the automatic labeling step. In the training process, the ResNet50 network and the feature pyramid network (FPN) were adopted for feature extraction. The region proposal network (RPN) was then trained end-to-end to create region proposals. The performance of the proposed approach was evaluated in terms of building detection and mask segmentation on the two datasets. The building detection results on 40 test tiles showed that the Mask R-CNN model achieved F1-scores of 0.951 and 0.968 in Bloomington and Indianapolis, respectively. In addition, 84.2% of the newly built buildings in the Indianapolis dataset were successfully detected. According to the segmentation results on these two datasets, the Mask R-CNN model achieved mean pixel accuracies (MPA) of 92% and 88% for Bloomington and Indianapolis, respectively. It was found that the performance of mask segmentation and contour extraction became less satisfactory as building shapes and roofs became more complex. It is expected that the method developed in this thesis can be adapted for large-scale use under varying urban setups.
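For reference, the IoU metric quoted above compares a predicted mask with the ground truth as intersection area over union area; a minimal sketch with illustrative arrays follows.

```python
# Minimal sketch of intersection over union (IoU) for two binary masks.
import numpy as np

def iou(pred_mask, gt_mask):
    """IoU of two boolean masks: |A and B| / |A or B|."""
    pred, gt = np.asarray(pred_mask, bool), np.asarray(gt_mask, bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

pred = np.zeros((500, 500), bool); pred[100:300, 100:300] = True
gt = np.zeros((500, 500), bool); gt[120:300, 100:300] = True
print(f"IoU = {iou(pred, gt):.3f}")  # 0.900 for this overlap
```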

APA, Harvard, Vancouver, ISO, and other styles
29

(6618812), Harsh Patel. "IMPLEMENTING THE SUPERPAVE 5 ASPHALT MIXTURE DESIGN METHOD IN INDIANA." Thesis, 2019.

Find full text
Abstract:
Recent research has indicated that asphalt mixture durability and pavement life can be increased by modifying the Superpave asphalt mixture design method to achieve an in-place density of 95%, 2% higher than the conventional requirement of approximately 93% (7% air voids content). Doing so requires increasing the design air voids content to 5% from the conventional requirement of 4%. After successful laboratory testing of this modified mixture design method, known as Superpave 5, two controlled field trials, and one full-scale demonstration project, the Indiana Department of Transportation (INDOT) let 12 trial projects across the six INDOT districts based on the design method. The Purdue University research team was tasked with observing the implementation of the Superpave 5 mixture design method, documenting the construction, and completing an in-depth analysis of the quality control and quality assurance (QC/QA) data obtained from the projects. QC/QA data for each construction project were examined using various statistical metrics to determine construction performance with respect to INDOT Superpave 5 specifications. The data indicate that, on average, the contractors achieved 5% laboratory air voids, which coincides with the Superpave 5 recommendation. However, the average as-constructed in-place density of 93.8% is roughly 1% less than the INDOT Superpave 5 specification. The findings of this study will benefit future implementation of this modified mixture design method.
APA, Harvard, Vancouver, ISO, and other styles
30

(7860779), Mohammadreza Pouranian. "Aggregate Packing Characteristics of Asphalt Mixtures." Thesis, 2019.

Find full text
Abstract:

Voids in the mineral aggregate (VMA), as a main volumetric design parameter in the Superpave mixture design method, is an important factor to ensure asphalt mixture durability and rutting performance. Moreover, an asphalt mixture’s aggregate skeleton, related to VMA, is another important factor that affects critical asphalt mixture properties such as durability, workability, permeability, rutting, and cracking resistance. The objective of this study is to evaluate the effects of aggregate size distribution and shape parameters on aggregate packing characteristics (volumetric and compaction properties) of asphalt mixtures. Three tasks were undertaken to reach this goal.

The first task was to propose an analytical approach for estimating changes in voids in the mineral aggregate (VMA) due to gradation variation, and for determining the relevant aggregate skeleton characteristics of asphalt mixtures, using the linear-mixture packing model, an analytical packing model that considers the mechanisms of particle packing, filling, and occupation. Application of the linear-mixture packing model to estimate the VMA of asphalt mixtures showed a high correlation between laboratory-measured and model-estimated values. Additionally, the model defined a new variable, the central particle size of an asphalt mixture, that characterizes the mixture's aggregate skeleton. Finally, the proposed analytical model showed significant potential for use in the early stages of asphalt mixture design to determine the effect of aggregate gradation changes on VMA and to predict mixture rutting performance.

For the second task, a framework to define and understand the aggregate structure of asphalt mixtures was proposed. To develop this framework, an analytical model for binary mixtures was formulated. The model considers the effects of size ratio and of the air volume between particles on the aggregate structure and packing density of binary mixtures. Based on this model, four aggregate structures were defined: coarse pack (CP), coarse-dense pack (CDP), fine-dense pack (FDP), and fine pack (FP). The model was validated using a series of 3D discrete element simulations. Furthermore, simulation of multi-sized aggregate blends using two representative sizes for fine and coarse stockpiles was carried out to apply the proposed analytical model to actual aggregate blends. The numerical simulations verified that the proposed analytical model could satisfactorily determine the particle structure of binary and multi-sized asphalt mixture gradations and could therefore be used to better design asphalt mixtures for improved performance.

The third task virtually investigated the effect of the shape characteristics of coarse aggregates on the compactability of asphalt mixtures using the discrete element method (DEM). The 3D particles were constructed using a method based on discrete random field theory and spherical harmonics, and their size distribution in the container was controlled by applying a constrained Voronoi tessellation (CVT) method. The effect of fine aggregates and asphalt binder was considered through a constitutive Burgers interaction model between coarse particles. Five aggregate shape descriptors (flatness, elongation, roundness, sphericity, and regularity) and two Superpave gyratory compactor (SGC) parameters (initial density at Nini and compaction slope) were selected for investigation and statistical analysis. Results revealed a statistically significant correlation between flatness, elongation, roundness, and sphericity as shape descriptors and initial density as a compaction parameter. The results also showed that the maximum change in initial density is 5% and 18% for crushed and natural sands, respectively. The analysis found that among all particle shape descriptors, only roundness and regularity had a statistically significant relation with compaction slope: as roundness and regularity increase (low angularity), the compaction slope decreases. Additionally, the effect of the percentage of flat and elongated (F&E) particles in a mixture was investigated using a set of simulations with five types of F&E particles (dimensional ratios 1:2, 1:3, 1:4 and 1:5) at ten different percentages (0, 5, 10, 15, 20, 30, 40, 50, 80 and 100) with respect to a reference mixture containing particles with flatness and elongation equal to 0.88. Results indicated that an increase of F&E particles in a mixture beyond 15% results in a significant reduction in the initial density of the mixture, especially for lower dimensional ratios (1:4 and 1:5).


APA, Harvard, Vancouver, ISO, and other styles
31

(8065844), Jedadiah Floyd Burroughs. "Influence of Chemical and Physical Properties of Poorly-Ordered Silica on Reactivity and Rheology of Cementitious Materials." Thesis, 2019.

Find full text
Abstract:

Silica fume is a widely used pozzolan in the concrete industry that has been shown to have numerous benefits for concrete including improved mechanical properties, refined pore structure, and densification of the interfacial transition zone between paste and aggregates. Traditionally, silica fume is used as a 5% to 10% replacement of cement; however, newer classes of higher strength concretes use silica fume contents of 30% or greater. At these high silica fume contents, many detrimental effects, such as poor workability and inconsistent strength development, become much more prominent.

In order to understand the fundamental reasons why high silica fume contents can have these detrimental effects on concrete mixtures, eight commercially available silica fumes were characterized for their physical and chemical properties. These included traditional properties such as density, particle size, and surface area; a non-traditional property, absorption capacity, was also determined. These raw material characteristics were then related to the hydration and rheological behavior of pastes and concrete mixtures. Other tests were performed, including isothermal calorimetry, which showed that each silica fume reacted differently from the others when exposed to the same reactive environment. Traditional hydration models for ordinary portland cement were expanded to include the effects that silica fumes have on water consumption, volumes of hydration products, and final degree of hydration.

As a result of this research, it was determined necessary to account for the volume and surface area of unhydrated cement and unreacted silica fume particles in water-starved mixture proportions. An adjustment factor was developed to more accurately apply the results from hydration modeling. By combining the results from hydration modeling with the surface area adjustments, an analytical model was developed to determine the thickness of paste (hydration products and capillary water) that surrounds all of the inert and unreacted particles in the system. This model, denoted the “Paste Thickness Model,” was shown to be a strong predictor of compressive strength results. The results of this research suggest that increasing the paste thickness decreases the expected compressive strength of concretes at a given age or state of hydration.

The rheological behavior of cement pastes containing silica fume was studied using a rotational rheometer. The Herschel-Bulkley model was fit to the data to characterize the rheological behavior. A multilinear model was developed to relate the specific surface area of the silica fume, the water content, and the silica fume content to the Herschel-Bulkley rate index, which is practically related to the ease with which the paste mixes. This multilinear model was shown to have strong predictive capability when used on randomly generated paste compositions.
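For context, the Herschel-Bulkley model expresses shear stress as tau = tau0 + K * gamma_dot**n. The sketch below fits it to a synthetic flow curve; which fitted parameter the abstract's "rate index" denotes (the consistency K or the exponent n) is not specified here, so the example simply recovers all three.

```python
# Minimal sketch: fit the Herschel-Bulkley model to rheometer flow-curve data.
# The data are synthetic placeholders, not measurements from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot**n

shear_rate = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)  # 1/s
shear_stress = 5.0 + 2.0 * shear_rate**0.5                      # Pa, synthetic

(tau0, K, n), _ = curve_fit(herschel_bulkley, shear_rate, shear_stress,
                            p0=(1.0, 1.0, 1.0),
                            bounds=([0, 0, 0], [np.inf, np.inf, 2.0]))
print(f"tau0 = {tau0:.2f} Pa, K = {K:.2f} Pa*s^n, n = {n:.2f}")
```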

Additionally, an analytical model was developed that defines a single parameter, idealized as the thickness of water surrounding each particle in the cementitious system. This model, denoted as the “Water Thickness Model,” incorporated the absorption capacity of silica fumes discovered during the characterization phase of this study and was shown to correlate strongly with the Herschel-Bulkley rate index. The Water Thickness Model demonstrates how small changes in water content can have a drastic effect on the rheology of low w/c or high silica fume content pastes due to the combined effects of surface area and absorption. The effect of additional water on higher w/c mixtures is significantly less.

APA, Harvard, Vancouver, ISO, and other styles
32

(5930657), Go-Eun Han. "A STUDY ON THE FAILURE ANALYSIS OF THE NEUTRON EMBRITTLED REACTOR PRESSURE VESSEL SUPPORT USING FINITE ELEMENT ANALYSIS." Thesis, 2020.

Find full text
Abstract:

One of the major degradation mechanisms in nuclear power plant structural and mechanical components is the neutron embrittlement of irradiated steel. High-energy neutrons change the microstructure of the steel, so the steel loses its fracture toughness; this neutron embrittlement increases the risk of brittle fracture. Meanwhile, the reactor pressure vessel support operates in a low-temperature, high-neutron-irradiation environment, an unfavorable condition with respect to fracture failure. In this study, the failure assessment of a reactor pressure vessel support was conducted using the fitness-for-service failure assessment diagram of API 579-1/ASME FFS-1 (API, 2016), quantifying the structural margin under maximum irradiation and extreme load events.

Two interrelated studies were conducted. In the first, current analytical methods were reviewed to estimate the embrittled properties, such as fracture toughness and yield strength, incorporating the low irradiation temperature. The analytical results indicated that the reactor pressure vessel support may experience a substantial decrease in fracture toughness during operation, approaching the lower bound of fracture toughness. A three-dimensional (3D) solid element finite element model was built for the linear stress analysis. Postulated cracks were located in the maximum stress region to compute the stress intensity and the reference stress ratio. Based on the stress results and the estimated physical properties, the structural margin of the reactor pressure vessel support was analyzed in the failure assessment diagram with respect to the type of crack, the level of applied load, and the level of neutron fluence.

The second study explored structural stress analysis approaches at the hot spot, which was found to be a key parameter in the failure analysis. Depending on the method used to remove the nonlinear peak stress and the stress singularities, the accuracy of the failure assessment result varies. As an alternative way to evaluate the structural stress in 3D finite element analysis (FEA), the 3D model was divided into two-dimensional (2D) plane models. Five structural stress determination approaches were applied in 2D FEA for a comparison study: stress linearization, the single-point-away approach, stress extrapolation, the stress equilibrium method, and the nodal force method. Reconstruction of the structural stress in 3D was carried out via the 3×3 stress matrix and compared to the 3D FEA results. The differences among the 2D FEA structural stress results were eliminated by constructing the stress in 3D.

This study provides a failure assessment of irradiated steel with prediction of the failure modes and safety margin. Through the failure assessment diagram, we could understand the effects of different levels of irradiation and loading. This study also provides an alternative structural stress determination method for finite element analysis: dividing the 3D solid element model into two 2D models.


APA, Harvard, Vancouver, ISO, and other styles
33

(6592994), Nicholas R. Olsen. "Long Term Trends in Lake Michigan Wave Climate." Thesis, 2019.

Find full text
Abstract:
Waves are a primary factor in beach health, sediment transport, safety, internal nutrient loading, and coastal erosion, the latter of which has increased along Lake Michigan's western coastline since 2014. While high water levels are undoubtedly the primary cause of this erosion, the recent losses may also be indicative of changes in the lake's wind-driven waves. This study seeks to examine long-term trends in the magnitude and direction of Lake Michigan waves, including extreme waves and storm events, using buoy measurements (National Data Buoy Center Buoys 45002 and 45007) and the United States Army Corps of Engineers Wave Information Study (USACE WIS) wave hindcast.

Tests show significant long-term decreases in annual mean wave height in the lake's southern basin (up to -1.5 mm/yr). When the dependence on wave-approach direction was removed by testing directional bins for trends independently, an increase in the extent of the affected coast and in the rate of wave-height decrease was found (up to -4 mm/yr). A previously unseen increasing trend in wave size in the northern basin (up to 2 mm/yr) was also revealed.

Data from the WIS model indicated that storm duration and peak wave height in the southern basin decreased at average rates of -0.085 hr/yr and -5 mm/yr, respectively, from 1979 to 2017. An analysis of the extreme value distribution's shape in the southern basin found a similar pattern in the WIS hindcast, with the probability of observing a wave larger than 5 meters decreasing by about 0.0125 per year. In the northern basin, the probability of observing a wave of the same size increased at a rate of 0.0075 per year.

The results for trends in the annual means revealed the importance of removing temporal and spatial within-series dependencies in wave-height data. The strong dependence of lake waves on approach direction, as compared to ocean waves, may result from the relatively large differences in fetch length in an enclosed body of water. Without removal or isolation of these dependencies, trends may be lost. Additionally, removal of the seasonal component in the lake water level and mean wave-height series revealed no significant correlation between these series.
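The abstract does not name its trend tests here; a common nonparametric pairing for such annual series is the Mann-Kendall test with the Theil-Sen slope, sketched below on synthetic data.

```python
# Minimal sketch: Theil-Sen slope and Mann-Kendall S statistic for an annual
# mean wave-height series. Data are synthetic, not the thesis's observations.
from itertools import combinations
import numpy as np

years = np.arange(1979, 2018)
rng = np.random.default_rng(1)
waves = 0.9 - 0.0015 * (years - years[0]) + rng.normal(0, 0.02, years.size)

pairs = list(combinations(range(years.size), 2))
sen_slope = np.median([(waves[j] - waves[i]) / (years[j] - years[i])
                       for i, j in pairs])
mk_S = int(sum(np.sign(waves[j] - waves[i]) for i, j in pairs))
print(f"Sen slope = {sen_slope * 1000:.2f} mm/yr, Mann-Kendall S = {mk_S}")
```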
APA, Harvard, Vancouver, ISO, and other styles
34

(7484483), Soohyun Yang. "COUPLED ENGINEERED AND NATURAL DRAINAGE NETWORKS: DATA-MODEL SYNTHESIS IN URBANIZED RIVER BASINS." Thesis, 2019.

Find full text
Abstract:

In urbanized river basins, sanitary wastewater and urban runoff (non-sanitary water) from urban agglomerations drain to complex engineered networks, are treated at centralized wastewater treatment plants (WWTPs) and discharged to river networks. Discharge from multiple WWTPs distributed in urbanized river basins contributes to impairments of river water-quality and aquatic ecosystem integrity. The size and location of WWTPs are determined by spatial patterns of population in urban agglomerations within a river basin. Economic and engineering constraints determine the combination of wastewater treatment technologies used to meet required environmental regulatory standards for treated wastewater discharged to river networks. Thus, it is necessary to understand the natural-human-engineered networks as coupled systems, to characterize their interrelations, and to understand emergent spatiotemporal patterns and scaling of geochemical and ecological responses.


My PhD research involved data-model synthesis, using publicly available data and application of well-established network analysis/modeling synthesis approaches. I present the scope and specific subjects of my PhD project by employing the Drivers-Pressures-Status-Impacts-Responses (DPSIR) framework. The defined research scope is organized as three main themes: (1) River network and urban drainage networks (Foundation-Pathway of Pressures); (2) River network, human population, and WWTPs (Foundation-Drivers-Pathway of Pressures); and (3) Nutrient loads and their impacts at reach- and basin-scales (Pressures-Impacts).


Three inter-related research topics are: (1) the similarities and differences in scaling and topology of engineered urban drainage networks (UDNs) in two cities, and UDN evolution over decades; (2) the scaling and spatial organization of three attributes: human population (POP), population equivalents (PE; the aggregated population served by each WWTP), and the number/sizes of WWTPs, using geo-referenced data for WWTPs in three large urbanized basins in Germany; and (3) the scaling of nutrient loads (P and N) discharged from ~845 WWTPs (five class-sizes) in the urbanized Weser River basin in Germany, and the likely water-quality impacts from point- and diffuse-source nutrients.


I investigate UDN scaling using two power-law scaling characteristics widely employed for river networks: (1) Hack's law (the length-area power-law relationship), and (2) the exceedance probability distribution of upstream contributing area. For the smallest UDNs, length and area scale linearly, but power-law scaling emerges as the UDNs grow. While area-exceedance plots for river networks are abruptly truncated, those for UDNs display exponential tempering. The tempering parameter decreases as the UDNs grow, implying that the distribution evolves in time to resemble that of river networks. However, the power-law exponent for mature UDNs tends to be larger than the range reported for river networks. Differences in generative processes and engineering design constraints, including subnet heterogeneity and non-random branching, contribute to the observed differences in the evolution of UDNs and river networks.
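As a point of reference, Hack's law is the power-law relation L ∝ A^h between main-path length and contributing area, usually estimated by log-log regression; a minimal sketch with placeholder data follows.

```python
# Minimal sketch: estimate the Hack exponent h from (area, length) pairs.
# The subcatchment data below are synthetic placeholders.
import numpy as np

area = np.logspace(4, 9, 50)                             # upstream area, m^2
rng = np.random.default_rng(2)
length = 1.4 * area**0.57 * rng.lognormal(0.0, 0.1, 50)  # main-path length, m

h, log_c = np.polyfit(np.log(area), np.log(length), 1)
print(f"Hack exponent h = {h:.2f} (river networks are typically near 0.5-0.6)")
```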


In this study, I also examine the spatial patterns of POP, PE, and WWTPs from two perspectives, employing fractal river networks as structural platforms: spatial hierarchy (stream order) and patterns along longitudinal flow paths (width function). I propose three dimensionless scaling indices to quantify: (1) human settlement preferences by stream order, (2) the non-sanitary flow contribution to total wastewater treated at WWTPs, and (3) the degree of centralization in WWTP locations. As case studies, I select three large urbanized river basins (Weser, Elbe, and Rhine), home to about 70% of the population of Germany. Across the three river basins, the study shows scale-invariant distributions for each of the three attributes with stream order, quantified using extended Horton scaling ratios, and a weak downstream clustering of POP. Variations in PE clustering among different class-sizes of WWTPs reflect the size, number, and locations of urban agglomerations in these catchments.


WWTP effluents affect the hydrologic attributes and water quality of receiving river bodies at the reach- and basin-scales. I analyze the adverse impacts of WWTP discharges for the Weser River basin (Germany) at two steady river discharge conditions (median flow; low flow). This study shows that significant variability in treated wastewater discharge within and among the five WWTP class-sizes, and variability of river discharge within stream orders < 3, contribute to large variations in the capacity to dilute WWTP nutrient loads. For the median flow, reach-scale water quality impairment assessed by nutrient concentration is likely at 136 (~16%) locations for P and 15 locations (~2%) for N. About 90% of the impaired locations are on streams of order < 3. At the basin scale, considering in-stream uptake resulted in 225 (~27%) P-impaired streams, a ~5% reduction relative to considering only dilution. This result suggests the dominant role of dilution in the Weser River basin. Under low flow conditions, the number of likely water-quality-impaired locations is double that at median flow. This analysis of the Weser River basin reveals that the role of in-stream uptake diminishes along flow paths, while dilution in larger streams (4 ≤ stream order ≤ 7) minimizes the impact of WWTP loads.
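In its simplest form, the reach-scale dilution screening described above is a fully mixed mass balance at each WWTP outfall; the sketch below uses illustrative numbers, not values from the thesis.

```python
# Minimal sketch: fully mixed downstream concentration at a WWTP outfall
# (dilution only, no in-stream uptake). All numbers are placeholders.
def mixed_concentration(q_river, c_river, q_effluent, c_effluent):
    """Mass-balance concentration after complete mixing."""
    return (q_river * c_river + q_effluent * c_effluent) / (q_river + q_effluent)

c_mix = mixed_concentration(q_river=0.5, c_river=0.05,   # m3/s, mg P/L
                            q_effluent=0.1, c_effluent=1.0)
THRESHOLD_P = 0.1  # mg/L, placeholder water-quality standard
print(f"downstream P = {c_mix:.3f} mg/L -> impaired: {c_mix > THRESHOLD_P}")
```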


Furthermore, I investigate eutrophication risk from spatially heterogeneous diffuse- and point-source P loads in the Weser River basin, using the basin-scale network model with in-stream losses (nutrient uptake). Considering long-term shifts in P loads over three representative periods, my analysis shows that P loads from diffuse sources, mainly agricultural areas, have played a dominant role in eutrophication risk since the 2000s, because point-source P loads were reduced by ~87% relative to the 1980s through implementation of the EU WFD. Nevertheless, point sources discharging to smaller streams (stream order < 3) have amplified effects on water quality impairment, consistent with the reach-scale analyses of WWTP effluents alone. Comparing with long-term water quality monitoring data, I demonstrate that point-source loads are the primary contributors to eutrophication in smaller streams, whereas diffuse-source loads, mainly from agricultural areas, drive eutrophication in larger streams. These results reflect the spatial patterns of WWTPs and land cover in the Weser River basin.


Through data-model synthesis, I identify the characteristics of the coupled natural (rivers)-human-engineered (urban drainage infrastructure) systems (CNHES), inspired by analogy, coexistence, and causality across the coupled networks in urbanized river basins. The quantitative measures and the basin-scale network model presented in my PhD project could be extended to other large urbanized basins to better understand the spatial distribution patterns of the CNHES and the resultant impacts on river water quality.


APA, Harvard, Vancouver, ISO, and other styles
35

(8735910), Josept David Revuelta Acosta Sr. "WATER-DRIVEN EROSION PREDICTION TECHNOLOGY FOR A MORE COMPLICATED REALITY." Thesis, 2020.

Find full text
Abstract:

Hydrological modeling has been a valuable tool for understanding the processes governing the distribution, quantity, and quality of water on Earth. Through models, one has been able to grasp processes such as runoff, soil moisture, soil erosion, subsurface drainage, plant growth, evapotranspiration, and the effects of land use changes on hydrology at field and watershed scales. The number and diversity of water-related challenges are vast and expected to increase. As a result, current models need continuous modification to extend their application to more complex processes. Several models have been extensively developed in recent years, including the Soil and Water Assessment Tool (SWAT), the Variable Infiltration Capacity (VIC) model, MIKE-SHE, and the Water Erosion Prediction Project (WEPP) model. The latter, although well validated at field scales, has been limited to small catchments in its watershed form, and almost no research has addressed water quality issues (only one study).

In this research, three objectives were proposed to improve the WEPP model in three areas where either the model has not been applied, or modifications can be performed to improve algorithms of the processes within the model (e.g. erosion, runoff, drainage). The enhancements impact the WEPP model by improving the current stochastic weather generation, extending its applicability to subsurface drainage estimation, and formulating a new routing model that allows future incorporation of transport of reactive solutes.

The first contribution was development of a stochastic storm generator based on 5-min time resolution and correlated non-normal Monte Carlo-based numerical simulation. The model considered the correlated and non-normal rainstorm characteristics such as time between storms, duration, and amount of precipitation, as well as the storm intensity structure. The model was tested using precipitation data from a randomly selected 5-min weather station in North Carolina. Results showed that the proposed storm generator captured the essential statistical features of rainstorms and their intensity patterns, preserving the first four moments of monthly storm events, good annual extreme event correspondence, and the correlation structure within each storm. Since the proposed model depends on statistical properties at a site, this may allow the use of synthetic storms in ungauged locations provided relevant information from a regional analysis is available.
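One standard way to realize correlated, non-normal Monte Carlo simulation of storm characteristics is a NORTA-style transform: draw correlated Gaussians, convert them to uniforms, and push them through skewed marginals. The sketch below only illustrates the idea; the marginals, parameters, and correlation matrix are assumptions, not the thesis's fitted values.

```python
# Minimal sketch of a NORTA-style generator for correlated, non-normal storm
# characteristics (gap between storms, duration, depth). Values are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.6],
                 [0.2, 0.6, 1.0]])  # assumed gap/duration/depth correlation
z = rng.standard_normal((1000, 3)) @ np.linalg.cholesky(corr).T  # correlated N(0,1)
u = stats.norm.cdf(z)                                            # correlated uniforms

gap_h = stats.expon(scale=60).ppf(u[:, 0])        # time between storms, hours
duration_h = stats.gamma(a=1.2, scale=4).ppf(u[:, 1])
depth_mm = stats.gamma(a=0.8, scale=10).ppf(u[:, 2])
print(f"duration-depth correlation: {np.corrcoef(duration_h, depth_mm)[0, 1]:.2f}")
```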

The second development included the testing, improvement, and validation of the WEPP model for simulating subsurface flow discharges. The proposed model included modification of the current subsurface drainage algorithm (a Hooghoudt-based expression) and of the WEPP model percolation routine. The modified WEPP model was tested and validated on an extensive dataset collected at four experimental sites managed by USDA-ARS within the Lake Erie Watershed. Predicted subsurface discharges show Nash-Sutcliffe Efficiency (NSE) values ranging from 0.50 to 0.70, and percent bias ranging from -30% to +15%, at daily and monthly resolutions. The evidence suggests the WEPP model can produce reliable estimates of subsurface flow with minimal calibration.
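For reference, the two metrics quoted above are NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))² and PBIAS = 100 · Σ(obs - sim) / Σ(obs); a minimal sketch with placeholder series follows.

```python
# Minimal sketch of the Nash-Sutcliffe Efficiency and percent bias metrics
# used above; the observed/simulated drain-flow series are placeholders.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [1.2, 0.8, 2.3, 0.1, 1.7]  # mm/day, illustrative
sim = [1.0, 0.9, 2.0, 0.2, 1.9]
print(f"NSE = {nse(obs, sim):.2f}, PBIAS = {pbias(obs, sim):.1f}%")
```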

The last objective presented the theoretical framework for a new hillslope and channel-routing model for WEPP. The routing model (WEPP-CMT) is based on catchment geomorphology and mass transport theory for the flow and transport of reactive solutes. WEPP-CMT uses the unique functionality of WEPP to simulate hillslope responses under diverse land use and management conditions, together with a Lagrangian description of the carrier hydrologic runoff in hillslope and channel domains. The model's functionality was tested in a sub-catchment of the Upper Cedar River Watershed in the U.S. Pacific Northwest. Results showed that the proposed model provides an acceptable representation of flow at the outlet of the study catchment. Model efficiencies and percent bias for the calibration and validation periods were NSE = 0.55 and 0.65, and PBIAS = -2.8% and 2.1%, respectively. WEPP-CMT provides a suitable foundation for modeling the transport of reactive solutes (e.g. nitrates) at basin scales.


APA, Harvard, Vancouver, ISO, and other styles
36

(6620447), Yen-Chen Chiang. "Studies on Aboveground Storage Tanks Subjected to Wind Loading: Static, Dynamic, and Computational Fluid Dynamics Analyses." Thesis, 2019.

Find full text
Abstract:

Due to the slender geometries of aboveground storage tanks, maintaining the stability of these tanks under wind gusts has always been a challenge. This thesis therefore aims to provide thorough insight into the behavior of tanks under wind gusts using finite element analysis and computational fluid dynamics (CFD). The present thesis is composed of three independent studies, in which different types of analysis were conducted. In Chapter 2, the main purpose is to model the wind loading dynamically and to investigate whether resonance can be triggered. Research on tanks subjected to static wind loads has thrived for decades, while only a few studies consider the wind loading dynamically. Five tanks with different height (H) to diameter (D) ratios, ranging from 0.2 to 4, were investigated in this chapter. To ensure the quality of the obtained solution, a study on the time step increment of the explicit dynamic analysis and a mesh convergence study were conducted before the analyses were performed. The natural vibration frequencies and effective masses of the selected tanks were solved first. Then, the tanks were loaded with wind gusts whose pressure magnitude fluctuated at the frequency associated with the most effective mass, and at other frequencies. Moreover, tanks with eigen-affine imperfections were also considered. It was concluded that resonance was not observed in any of these analyses. However, since the static and dynamic buckling capacities differ considerably for tall tanks (H/D ≥ 2.0), a proper safety factor should be included during design if a static analysis is adopted.

Chapter 3 focuses on the effect of the internal pressure generated by wind gusts on open-top tanks. Based on boundary layer wind tunnel (BLWT) tests, a significant pressure is generated on the internal side of the tank shell when a gust of wind blows across an open-top tank. This factor has so far not been sufficiently accounted for by either ASCE-7 or API 650, despite the fact that this internal pressure may almost double the design pressure. Therefore, to investigate the effect of the wind profile along with the internal pressure, multiple wind profiles specified in different design documents were considered. The buckling capacities of six tanks with aspect ratios (H/D) ranging from 0.1 to 4 were analyzed using geometrically nonlinear analysis with imperfections and an arc-length algorithm (Riks analysis). Material nonlinearity was also included in some analyses. It was observed that the buckling capacity of a tank obtained using the ASCE-7/API 650 wind profile is higher than the capacities obtained with any other profile. It was then concluded that the wind profile dictated by the current North American design documents may not be conservative enough and may need revision.

Chapter 4 investigates how CFD can be applied to obtain the wind pressure distribution on tanks. Though CFD has been widely employed in different research areas, to the author's best knowledge only one study has been dedicated to investigating the interaction between wind gusts and tanks using CFD. Thus, a literature review on guidelines for selecting CFD input parameters and a parametric study on how to choose proper input parameters are presented in Chapter 4. A tank with an aspect ratio of 0.5 and a flat roof was employed for the parametric study. To ensure the validity of the input parameters, the obtained results were compared with published BLWT results. After confirming that the selected input parameters produce acceptable results, tanks with aspect ratios ranging from 0.4 to 2 were adopted, and the wind pressure distributions on such tanks were reported. It was concluded that the established criteria for deciding the input parameters were able to guarantee converged results, and the obtained pressure coefficients agree well with the BLWT results available in the literature.

APA, Harvard, Vancouver, ISO, and other styles
37

(6622427), Zhe Sun. "APPLICATION OF PHOTOCHEMICAL AND BIOLOGICAL APPROACHES FOR COST-EFFECTIVE ALGAL BIOFUEL." Thesis, 2019.

Find full text
Abstract:

Rapid growth of energy consumption and greenhouse gas emissions from fossil fuels has promoted extensive research on biofuels. Algal biofuels have been considered a promising and environmentally friendly renewable energy source. However, several limitations have inhibited the development of cost-effective biofuel production, including unstable cultivation caused by invading organisms and the high cost of lipid extraction. This dissertation investigates photochemical approaches to prevent culture collapse caused by invading organisms and biological approaches for the development of cost-effective lipid extraction methods.

As a chemical-free water treatment technology, ultraviolet (UV) irradiation has been widely applied to inactivate pathogens, but it has not been used in algal cultivation to control invading organisms. To evaluate the potential of UV irradiation for controlling invading algal species and minimizing virus predation, Tetraselmis sp. and Paramecium bursaria Chlorella virus 1 (PBCV-1) were examined as challenge organisms. The concentrations of viable (reproductively/infectively active) cells and viruses were quantified by a most probable number (MPN) assay and a plaque assay, respectively. A low-pressure collimated-beam reactor was used to investigate the UV254 dose-response behavior of both challenge organisms, and a medium-pressure collimated-beam reactor equipped with a series of narrow bandpass optical filters was used to investigate their action spectra. Both challenge organisms showed roughly five log10 units of inactivation for UV254 doses over 120 mJ/cm². The most effective wavelengths for inactivation of Tetraselmis were from 254 nm to 280 nm, where the inactivation was mainly attributed to UV-induced DNA damage. In contrast, the most effective wavelength for inactivation of PBCV-1 was 214 nm, where the loss of infectivity was mainly attributed to protein damage. These results provide important information for the design of UV reactors to minimize the impact of invading organisms in algal cultivation systems.
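Collimated-beam data of this kind are often summarized with a first-order dose-response fit, log10(N0/N) = k · D. The sketch below is illustrative only; the numbers are synthetic, not the thesis's measurements.

```python
# Minimal sketch: first-order UV dose-response fit through the origin,
# log10(N0/N) = k * dose. Synthetic data stand in for assay results.
import numpy as np

dose = np.array([0, 20, 40, 80, 120], dtype=float)      # mJ/cm^2
log_inactivation = np.array([0.0, 0.9, 1.7, 3.4, 5.0])  # log10(N0/N)

k = np.linalg.lstsq(dose[:, None], log_inactivation, rcond=None)[0][0]
print(f"k = {k:.3f} log10 per (mJ/cm^2) -> {k * 120:.1f} logs at 120 mJ/cm^2")
```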

Additionally, a virus-assisted cell disruption method was developed for cost-effective lipid extraction from algal biomass. Detailed mechanistic studies were conducted to evaluate the infection behavior of Chlorovirus PBCV-1 on Chlorella sp., and the impact of infection on the mechanical strength of the algal cell wall, lipid yield, and lipid distribution. Viral disruption at a multiplicity of infection (MOI) of 10⁻⁸ completely disrupted concentrated algal biomass in six days and significantly reduced the mechanical strength of algal cells for lipid extraction. The lipid yield with viral disruption increased more than three-fold compared with the no-disruption control and was similar to that of ultrasonic disruption. Moreover, lipid composition analysis showed that the quality of the extracted lipids was not affected by viral infection. The results showed that viral infection is a cost-effective process for lipid extraction from algal cells, as the extensive energy input and chemicals required by existing disruption methods are no longer needed.

Overall, this dissertation provides innovative approaches for the development of cost-efficient algal biofuels. Application of UV disinfection and viral disruption significantly reduces chemical consumption and improves sustainability of algal biofuel production.

APA, Harvard, Vancouver, ISO, and other styles
38

(6594389), Mahsa Modiri-Gharehveran. "INDIRECT PHOTOCHEMICAL FORMATION OF COS AND CS2 IN NATURAL WATERS: KINETICS AND REACTION MECHANISMS." Thesis, 2019.

Find full text
Abstract:

COS and CS2 are sulfur compounds that are formed in natural waters. These compounds are also volatile, which leads them to move into the atmosphere and serve as critical precursors to sulfate aerosols. Sulfate aerosols are known to counteract global warming by reflecting solar radiation. One major source of COS and CS2 is the ocean. While previous studies have linked COS and CS2 formation in these waters to the indirect photolysis of organic sulfur compounds, much of the chemistry behind how this occurs remains unclear. This study examined this chemistry by evaluating how different organic sulfur precursors, water quality constituents, and temperature affect COS and CS2 formation in natural waters.

In the first part of this thesis (Chapters 2 and 3), nine natural waters ranging in salinity were spiked with various organic sulfur precursors (e.g. cysteine, cystine, dimethylsulfide (DMS), and methionine) and exposed to simulated sunlight over varying exposures. Other water quality conditions, including the presence of O2 and CO, and temperature, were also varied. Results indicated that COS and CS2 formation increased up to 11× and 4×, respectively, after 12 h of sunlight, while diurnal cycling exhibited varied effects. COS and CS2 formation were also strongly affected by the DOC concentration, organic sulfur precursor type, O2 concentration, and temperature, while salinity differences and CO addition did not play a significant role.

To specifically evaluate the role of DOM in cleaner matrices, COS and CS2 formation was then examined in synthetic waters (see Chapters 4 and 5). In this case, synthetic waters were spiked with different types of DOM isolates, ranging from freshwater to ocean water, along with either cysteine or DMS, and exposed to simulated sunlight for up to 4 h. Surprisingly, CS2 was not formed under any of the tested conditions, indicating that water quality constituents other than DOM were responsible for its formation. However, COS formation was observed. Interestingly, COS formation with cysteine was fairly similar for all DOM types, but increasing the DOM concentration actually decreased formation. This is likely due to the dual role of DOM in simultaneously forming and quenching the reactive intermediates (RIs). Additional experiments with quenching agents for RIs (e.g. 3DOM* and ·OH) further indicated that ·OH was not involved in COS formation with cysteine but 3DOM* was. This result differed for DMS, in that ·OH and 3DOM* were both found to be involved. In addition, treating the DOM isolates with sodium borohydride (NaBH4) to reduce ketones/aldehydes to their corresponding alcohols increased COS formation, which implied that the RIs formed by these functional groups in DOM were not involved. The alcohols formed by this process were not likely to act as quenching agents, since they have been shown to be low in reactivity. Since ketones are known to form high-energy triplet states of DOM while quinones are known to form low-energy triplet states, removing ketones from the system further supported the role of low-energy triplet states in COS formation, as initially hypothesized from the tests on DOM types. In the end, there are several major research contributions from this thesis. First, cysteine and DMS have different mechanisms for forming COS. Second, adding O2 decreased COS formation but did not stop it completely, which suggests that further research is required to evaluate the role of RIs in the presence of O2. Lastly, considering the low yields of COS and CS2 formation from the organic sulfur precursors tested in this study, it is believed that other organic sulfur precursors, likely to generate these compounds at higher levels, remain unidentified; this needs to be investigated in future research.


APA, Harvard, Vancouver, ISO, and other styles
39

(8770325), Anzy Lee. "RIVERBED MORPHOLOGY, HYDRODYNAMICS AND HYPORHEIC EXCHANGE PROCESSES." Thesis, 2020.

Find full text
Abstract:

Hyporheic exchange is key to buffering water quality and temperature in streams and rivers, while also providing localized downwelling and upwelling microhabitats. In this research, the effect of geomorphological parameters on hyporheic exchange has been assessed from a physical standpoint: surface and subsurface flow fields, the pressure distribution across the sediment/water interface, and the residence time in the bed.

First, we conduct a series of numerical simulations to systematically explore how the fractal properties of bedforms are related to hyporheic exchange. We compare the average interfacial flux and the residence time distribution in the hyporheic zone with respect to the magnitude of the power spectrum and the fractal dimension of riverbeds. The results show that the average interfacial flux increases logarithmically with the maximum spectral density, whereas it increases exponentially with the fractal dimension.
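As a rough illustration of the kind of spectrally defined bedform geometry studied here, the sketch below synthesizes a 1-D fractal bed profile whose power spectrum has a prescribed maximum spectral density and fractal dimension, the two controls discussed above. All parameter values and function names are illustrative, not taken from the thesis:

```python
import numpy as np

def synthetic_bedform(n=512, dx=0.05, spectral_density=1e-4, fractal_dim=1.5, seed=0):
    """Generate a 1-D fractal bed elevation profile by inverse-FFT spectral synthesis.

    The power spectrum follows S(k) ~ S0 * k**(-(5 - 2*D)) for a fractal
    profile of dimension D, so raising S0 scales the relief while D controls
    the small-scale roughness (the two knobs compared in the simulations).
    """
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)[1:]               # positive wavenumbers
    beta = 5.0 - 2.0 * fractal_dim                 # spectral slope for dimension D
    amplitude = np.sqrt(spectral_density * k**(-beta))
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)  # random phases
    spectrum = np.concatenate(([0.0], amplitude * np.exp(1j * phase)))
    z = np.fft.irfft(spectrum, n=n)                # bed elevation profile
    return np.arange(n) * dx, z

x, z = synthetic_bedform(fractal_dim=1.7)
print(f"relief: {z.max() - z.min():.4f} m over {x[-1]:.1f} m")
```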

Second, we demonstrate how the Froude number affects the free-surface profile, the total head over the sediment bed, and the hyporheic flux. When the water surface is fixed, the vertical velocity profile from the bottom to the air-water interface follows the law of the wall, so the velocity at the air-water interface has the maximum value. In contrast, in the free-surface case, the velocity at the interface no longer has the maximum value: the location of maximum velocity moves closer to the sediment bed. This results in higher velocities near the bed and, accordingly, larger head gradients.

Third, we investigate how boulder spacing and embeddedness affect the near-bed hydrodynamics and the surface-subsurface water exchange. When the embeddedness is small, a recirculation vortex is observed in both closely-packed and loosely-packed cases, but the vortex is smaller and less coherent in the closely-packed case. For these dense clusters, the inverse relationship between embeddedness and flux no longer holds. As embeddedness increases, the subsurface flowpaths move in the lateral direction, as the streamwise route is hindered by the submerged boulder. The average residence time therefore decreases as the embeddedness increases.

Lastly, we propose a general artificial neural network for predicting the pressure field at the channel bottom using point velocities at different levels. We constructed three data-driven models: multivariate linear regression, local linear regression, and an artificial neural network. The input variables are the velocities in the x, y, and z directions, and the target variable is the pressure at the sediment bed. Our artificial neural network model produces consistent and accurate predictions under various conditions, whereas the linear surrogate models (multivariate linear regression and local linear regression) depend significantly on the input variables.
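A minimal sketch of such a velocity-to-pressure surrogate, using a generic scikit-learn multilayer perceptron on simulated stand-in data (the thesis' actual network architecture and training data are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical arrays: point velocities (u, v, w) at monitoring locations and
# the bed pressure at matching locations, e.g. exported from a CFD run.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                               # u, v, w
p = 0.5 * (X**2).sum(axis=1) + 0.1 * rng.normal(size=2000)  # stand-in pressure

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                 max_iter=2000, random_state=1),
)
model.fit(X[:1500], p[:1500])
print("R^2 on held-out points:", model.score(X[1500:], p[1500:]))
```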

As stream and river restoration has moved from aesthetics and form to a more holistic approach that includes processes, we hope our study can inform designs that benefit both structural and functional outcomes. Our results could inform a number of critical processes, such as biological filtering. It is possible to use our approach to predict hyporheic exchange and thus constrain the associated biogeochemical processing under different topographies. As river restoration projects become more holistic, geomorphological, biogeochemical, and hydro-ecological aspects should also be considered.

APA, Harvard, Vancouver, ISO, and other styles
40

(5929958), Qinghua Li. "Geospatial Processing Full Waveform Lidar Data." Thesis, 2019.

Find full text
Abstract:
This thesis presents comprehensive studies of the geospatial processing of airborne (full) waveform lidar data, including waveform modeling, direct georeferencing, and precise georeferencing with self-calibration.

Both parametric and nonparametric approaches to waveform decomposition are studied. The traditional parametric approach assumes that the returned waveforms follow a Gaussian mixture model in which each component is a Gaussian. However, many real examples show that waveform components can be neither Gaussian nor symmetric. To address this problem, this thesis proposes a nonparametric mixture model that represents lidar waveforms without any constraints on the shape of the waveform components. To decompose the waveforms, a fuzzy mean-shift algorithm is then developed. This approach has the following properties: 1) it does not assume that the waveforms follow any parametric or functional distribution; 2) the waveform decomposition is treated as a fuzzy data clustering problem, and the number of components is determined during the decomposition; 3) neither peak selection nor noise floor filtering prior to the decomposition is needed; and 4) the range measurement is not affected by the process of noise filtering. In addition, the fuzzy mean-shift approach is about three times faster than the conventional expectation-maximization algorithm and tends to produce fewer artifacts in the resultant digital elevation model.
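A toy version of intensity-weighted mean-shift on a synthetic two-return waveform is sketched below to fix ideas; it is a plain (not fuzzy) variant and makes no claim to match the thesis' algorithm:

```python
import numpy as np

def weighted_mean_shift(t, w, bandwidth=2.0, iters=100):
    # Each candidate mode drifts toward the local amplitude-weighted mean of
    # the fixed waveform samples; converged positions cluster at the modes,
    # giving the number of components without any Gaussian-shape assumption.
    modes = t.astype(float).copy()
    for _ in range(iters):
        d = modes[:, None] - t[None, :]
        k = w[None, :] * np.exp(-0.5 * (d / bandwidth) ** 2)
        modes = (k * t[None, :]).sum(axis=1) / k.sum(axis=1)
    return np.unique(np.round(modes).astype(int))

# Synthetic two-return waveform: one symmetric pulse plus one asymmetric pulse
# (sharp rise, slow decay), which a Gaussian mixture would misrepresent.
t = np.arange(100.0)
pulse1 = np.exp(-0.5 * ((t - 30) / 3) ** 2)
pulse2 = np.where(t < 62, np.exp(-0.5 * ((t - 62) / 2) ** 2), np.exp(-(t - 62) / 6))
w = pulse1 + 0.7 * pulse2
print("detected component modes:", weighted_mean_shift(t, w, bandwidth=3.0))
```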

This thesis also develops a framework and methodology for self-calibration that simultaneously determines the waveform geospatial position and the boresight angles. Besides using the flight trajectory and plane attitude recorded by the onboard GPS receiver and inertial measurement unit, the framework makes use of publicly accessible digital elevation models as control over the study area. Compared to conventional calibration and georeferencing methods, the new development has minimal requirements on ground truth: no extra ground control, no planar objects, and no overlapping flight strips are needed. Furthermore, it solves the problems of clock synchronization and boresight calibration simultaneously. Through a two-stage optimization strategy, the self-calibration approach resolves both the time synchronization bias and the boresight misalignment angles to achieve a stable and correct solution. As a result, a consistency of 0.8662 m, without systematic trend, is achieved between the waveform-derived digital elevation model and the reference one. These experiments demonstrate that the developed method is a necessary and more economical alternative to the conventional, highly demanding georeferencing and calibration approach, especially when no or limited ground control is available.
APA, Harvard, Vancouver, ISO, and other styles
41

(5930687), Jinglin Jiang. "Investigating How Energy Use Patterns Shape Indoor Nanoaerosol Dynamics in a Net-Zero Energy House." Thesis, 2019.

Find full text
Abstract:

Research on net-zero energy buildings (NZEBs) has been largely centered around improving building energy performance, while little attention has been given to indoor air quality. A critically important class of indoor air pollutants are nanoaerosols – airborne particulate matter smaller than 100 nm in size. Nanoaerosols penetrate deep into the human respiratory system and are associated with deleterious toxicological and human health outcomes. An important step towards improving indoor air quality in NZEBs is understanding how occupants, their activities, and building systems affect the emissions and fate of nanoaerosols. New developments in smart energy monitoring systems and smart thermostats offer a unique opportunity to track occupant activity patterns and the operational status of residential HVAC systems. In this study, we conducted a one-month field campaign in an occupied residential NZEB, the Purdue ReNEWW House, to explore how energy use profiles and smart thermostat data can be used to characterize indoor nanoaerosol dynamics. A Scanning Mobility Particle Sizer and an Optical Particle Sizer were used to measure indoor aerosol concentrations and size distributions from 10 to 10,000 nm. AC current sensors were used to monitor the electricity consumption of kitchen appliances (cooktop, oven, toaster, microwave, kitchen hood), the air handling unit (AHU), and the energy recovery ventilator (ERV). Two Ecobee smart thermostats informed the fractional amount of supply airflow directed to the basement and main floor. The nanoaerosol concentrations and energy use profiles were integrated with an aerosol physics-based material balance model to quantify nanoaerosol source and loss processes. Cooking activities were found to dominate the emissions of indoor nanoaerosols, often elevating indoor nanoaerosol concentrations beyond 10⁴ cm⁻³. The emission rates for different cooking appliances varied from 10¹¹ h⁻¹ to 10¹⁴ h⁻¹. Loss rates were found to be significantly different between AHU/ERV off and on conditions, with median loss rates of 1.43 h⁻¹ and 3.68 h⁻¹, respectively. Probability density functions of the source and loss rates for different scenarios will be used in Monte Carlo simulations to predict indoor nanoaerosol concentrations in NZEBs using only energy consumption and smart thermostat data.
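The material balance logic can be sketched as a well-mixed single-zone model; the parameter values below are illustrative stand-ins chosen to be of the same order as the reported rates, not the study's fitted values:

```python
import numpy as np

# Well-mixed single-zone balance: dC/dt = E/V - L*C, where L lumps all loss
# processes (deposition, filtration, ventilation). Values are illustrative.
V = 400.0          # house volume, m^3 (assumed)
L = 3.68           # loss rate with AHU/ERV on, 1/h (order of reported median)
E = 5e13           # cooking emission rate, particles/h (within reported range)
dt = 1.0 / 60.0    # one-minute steps, h
C = np.zeros(240)  # number concentration, particles/m^3, over a 4-hour window
for i in range(1, C.size):
    emitting = 20 <= i < 50                      # 30-minute cooking event
    source = E / V if emitting else 0.0
    C[i] = C[i - 1] + (source - L * C[i - 1]) * dt
print(f"peak concentration: {C.max() / 1e6:.1e} cm^-3")  # ~10^4 cm^-3 scale
```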

APA, Harvard, Vancouver, ISO, and other styles
42

(5930027), Ganeshchandra Mallya. "DROUGHT CHARACTERIZATION USING PROBABILISTIC MODELS." Thesis, 2020.

Find full text
Abstract:

Droughts are complex natural disasters caused by a deficit in water availability over a region. Water availability is strongly linked to precipitation in many parts of the world that rely on monsoonal rains. Recent studies indicate that the choice of precipitation dataset and drought index can influence drought analysis. Therefore, drought characteristics for the Indian monsoon region were reassessed for the period 1901-2004 using two different datasets and the standardized precipitation index (SPI), the standardized precipitation-evapotranspiration index (SPEI), a Gaussian mixture model-based drought index (GMM-DI), and a hidden Markov model-based drought index (HMM-DI). Drought trends and variability were analyzed for three epochs: 1901-1935, 1936-1970, and 1971-2004. Irrespective of the dataset and methodology used, the results indicate an increasing trend in drought severity and frequency during the recent decades (1971-2004). Droughts are becoming more regional and are showing a general shift toward the agriculturally important coastal south India, central Maharashtra, and the Indo-Gangetic plains, indicating food security challenges and socioeconomic vulnerability in the region.

Drought severities are commonly reported using drought classes obtained by applying pre-defined thresholds to drought indices. Current drought classification methods ignore modeling uncertainties and provide discrete classifications. However, users of drought classification are often interested in knowing the inherent uncertainties in classification so that they can make informed decisions. A probabilistic gamma mixture model (Gamma-MM)-based drought index is proposed as an alternative to deterministic classification by SPI. The Bayesian framework of the proposed model avoids over-specification and overfitting by choosing the optimum number of mixture components required to model the data - a problem often encountered in other probabilistic drought indices (e.g., HMM-DI). When a sufficient number of components is used, the Gamma-MM can provide a good approximation to any continuous distribution on (0, ∞), thus addressing the problem of choosing an appropriate distribution for SPI analysis. The Gamma-MM propagates model uncertainties to drought classification. The method is tested on rainfall data over India. A comparison of the results with the standard SPI shows significant differences, particularly when SPI assumptions about the data distribution are violated.
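To fix ideas, below is a bare-bones EM fit of a two-component gamma mixture to simulated data; it uses weighted method-of-moments updates and a fixed number of components, whereas the proposed Gamma-MM is Bayesian and selects the component count itself:

```python
import numpy as np
from scipy import stats

def gamma_mixture_em(x, n_components=2, iters=200, seed=0):
    """Crude EM for a gamma mixture (illustration only).
    M-step uses weighted method-of-moments updates instead of full MLE."""
    rng = np.random.default_rng(seed)
    w = np.full(n_components, 1.0 / n_components)
    shape = rng.uniform(1.0, 3.0, n_components)
    scale = np.full(n_components, x.mean() / shape.mean())
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = np.array([wk * stats.gamma.pdf(x, a, scale=s)
                         for wk, a, s in zip(w, shape, scale)])
        r = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted moments -> shape/scale; update mixture weights
        m1 = (r * x).sum(axis=1) / r.sum(axis=1)
        m2 = (r * x**2).sum(axis=1) / r.sum(axis=1)
        var = m2 - m1**2
        shape, scale = m1**2 / var, var / m1
        w = r.mean(axis=1)
    return w, shape, scale

# Simulated "rainfall" drawn from two gamma components
x = np.concatenate([np.random.default_rng(1).gamma(2.0, 30.0, 600),
                    np.random.default_rng(2).gamma(9.0, 15.0, 400)])
print(gamma_mixture_em(x))
```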

Finding regions with similar drought characteristics is useful for policy-makers and water resources planners in the optimal allocation of resources, the development of drought management plans, and the taking of timely actions to mitigate negative impacts during droughts. Drought characteristics such as intensity, frequency, and duration, along with land-use and geographic information, were used as input features for clustering algorithms. Three methods, namely (i) a Bayesian graph-cuts algorithm that combines a Gaussian mixture model (GMM) and Markov random fields (MRF), (ii) k-means, and (iii) hierarchical agglomerative clustering, were used to find homogeneous drought regions that are spatially contiguous and possess similar drought characteristics. The number of homogeneous clusters and their shapes were found to be sensitive to the choice of drought index, the time window of drought, the period of analysis, the dimensionality of the input datasets, the clustering method, and the model parameters of the clustering algorithms. Regionalization for different epochs provided useful insight into the space-time evolution of homogeneous drought regions over the study area, and strategies to combine the results from multiple clustering methods were presented.
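A minimal sketch of the clustering step with k-means (one of the three methods above); appending coordinates to the drought features is only a crude proxy for the spatial contiguity that the GMM+MRF graph-cuts method enforces properly:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature table: one row per grid cell with drought intensity,
# frequency, and duration, plus latitude/longitude as a rough contiguity cue.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 5))   # [intensity, frequency, duration, lat, lon]
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("cells per homogeneous region:", np.bincount(labels))
```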

APA, Harvard, Vancouver, ISO, and other styles
43

(9192656), Yue Ke. "Oh, the Places You'll Move: Urban Mass Transit's Effects on Nearby Housing Markets." Thesis, 2020.

Find full text
Abstract:
The last couple of decades have seen a renewed interest among urban transportation planners in light rail transit (LRT) systems in large cities across the United States (US) as a possible means of addressing negative transportation externalities such as congestion and greenhouse gas emissions while encouraging the use of public transit [1]. LRT infrastructure investments have also gained traction as a means of revitalizing decayed urban centres because transportation infrastructure developments are highly correlated with economic growth in surrounding areas [2].
The primary objective of this dissertation is to examine the externalities associated with LRT during its construction and operations phases. In particular, three areas of concern are addressed: (1) the effect that proximity to LRT stations has on nearby single family residences (SFRs) throughout the LRT life-cycle; (2) the effect of directional heterogeneity among the LRT station, the central business district (CBD), and the SFR; and (3) the longer-term effects of LRT operations on nearby populations. To answer the first two research objectives, quasi-experimental spatial econometric models are used; to address the last objective, aspatial fixed-effects panel models are developed. The analyses rely primarily on SFR sales data from 2001-2019, publicly available geographic information systems data, and demographic data from eight 5-year American Community Surveys (ACS). Charlotte, NC, a medium-sized US city, is chosen as the site of analysis, owing both to the relative novelty of its LRT in the region and to data availability.
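For intuition, a stylized (non-spatial) hedonic regression with a distance-to-station term and an operational-phase interaction is sketched below on simulated data; the dissertation's actual models are quasi-experimental and spatial, so this is only a schematic:

```python
import numpy as np
import statsmodels.api as sm

# Simulated SFR sales: price falls with distance pre-opening (proximity
# premium) but the interaction flips the sign once the LRT is operational,
# mirroring the direction of the findings reported below.
rng = np.random.default_rng(0)
n = 1000
dist_km = rng.uniform(0.1, 5.0, n)       # distance to nearest LRT station
sqft = rng.normal(1800, 400, n)          # living area
operational = rng.integers(0, 2, n)      # 1 if sold after the LRT opened
log_price = (11.5 + 0.0003 * sqft - 0.02 * dist_km
             + 0.03 * dist_km * operational + 0.1 * rng.normal(size=n))

X = sm.add_constant(np.column_stack([sqft, dist_km, operational,
                                     dist_km * operational]))
print(sm.OLS(log_price, X).fit().summary())
```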
The results show that SFR values are positively associated with proximity to LRT stations in the announcement and construction phases but negatively associated with proximity to stations once the LRT is operational. Additionally, potential homeowners with prior experience of LRT do not behave any differently than those without such experience in terms of willingness to pay to live a certain distance from LRT stations. Further, directional heterogeneity is shown to be a statistically significant factor in the extent to which house-buyers are willing to pay to be near LRT stations. Lastly, distance from LRT stations is found to have no statistically significant effect on changes in the racial composition of nearby areas, but it has significant positive effects on the educational attainment and average median incomes of residents living in nearby areas over time.
The contributions of this research are twofold. First, in addition to highlighting the need for spatial econometric methods when analyzing the effect that LRT has on surrounding real estate markets, this research provides a framework by which directional heterogeneity can be incorporated into such analyses. Second, this research adds to the existing pool of knowledge on the land use externalities of LRT by incorporating the LRT life-cycle from announcement to operations. Furthermore, this research examines the effects that LRT has on populations in transit-adjacent areas, providing a look at the broader effects of LRT over time.
A major challenge in the analyses conducted in this dissertation is the reliance on SFR sales data. Urban areas near LRT may contain additional land uses, and in order to fully determine LRT's effects on its surrounding area, one should examine the proximity effects on all land use types. Furthermore, LRT stations and rail lines are assumed exogenous, which may not be the case, as public hearings and town halls during the planning phase may influence station locations. Future research should seek to understand how the circumstances surrounding the planning process could indirectly affect the socio-demographic characteristics of transit-adjacent areas over time. Finally, additional research is needed to better understand the extent to which LRT affects urban intra- and inter-migration. Knowing the population repulsion and attraction of LRT can help planners design facilities that better serve the public.
APA, Harvard, Vancouver, ISO, and other styles
44

(9760799), Juan Esteban Suarez Lopez. "CAPACITATED NETWORK BASED PARKING MODELS UNDER MIXED TRAFFIC CONDITIONS." Thesis, 2020.

Find full text
Abstract:

New technologies such as electric vehicles, autonomous vehicles, and transportation platforms are dramatically changing the way humanity moves, and cities around the world need to adjust to this rapid change. One of the more challenging aspects for urban planners is the parking problem, as growing demand for these private technologies may increase traffic congestion and change parking requirements across a city. For example, electric vehicles need parking places for both parking and charging, and autonomous vehicles could increase congestion by making longer trips in search of better parking alternatives. It thus becomes essential to have clear, precise, and practical models that allow transportation engineers to represent present and future scenarios involving conventional, autonomous, and electric vehicles in the context of parking and traffic alike. Classical network models such as traffic assignment have frequently been used for this purpose, although they do not take into account essential aspects of parking such as fixed capacities, the variety of users, and autonomous vehicles. In this work, a new methodology for modelling parking within multi-class traffic assignment is proposed, including autonomous vehicles and hard capacity constraints. The proposed model is presented both in the classical Cournot game formulation based on path flows and in a new link-node formulation that states the traffic assignment problem in terms of link flows instead of path flows. This link-node formulation enables a new algorithm that is more flexible with respect to modelling requirements, such as linear constraints among different players' flows, and that takes advantage of the fast convergence of linear programs in the literature and in practice. The link-node formulation is also used to restate the network capacity problem as a linear program, making it more tractable and easier to solve. Numerical examples are presented throughout this work to illustrate its implications and characteristics. The present work gives planners a clear methodology for modelling parking and traffic in a multi-user context that can represent diverse characteristics such as parking duration or vehicle type. The model is further modified to take autonomous vehicles into account, and the necessary assumptions and discussion are provided.
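A toy instance of the capacitated, multi-class assignment idea posed as a linear program is sketched below (invented two-class, three-lot data; the thesis' link-node formulation covers full networks rather than this bipartite special case):

```python
import numpy as np
from scipy.optimize import linprog

# Assign two user classes' parking demand to three capacitated lots at
# minimum travel cost; hard lot capacities couple the two classes' flows.
cost = np.array([[3.0, 5.0, 8.0],     # class 1 cost to lots A, B, C
                 [4.0, 2.0, 6.0]])    # class 2 cost to lots A, B, C
demand = np.array([60.0, 40.0])       # vehicles per class
capacity = np.array([50.0, 30.0, 40.0])

c = cost.ravel()                              # x = [x11,x12,x13,x21,x22,x23]
A_eq = np.kron(np.eye(2), np.ones((1, 3)))    # each class's demand is met
A_ub = np.kron(np.ones((1, 2)), np.eye(3))    # lot capacities bind both classes
res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
print(res.x.reshape(2, 3))                    # optimal class-to-lot flows
print("total cost:", res.fun)
```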

APA, Harvard, Vancouver, ISO, and other styles
45

(9045878), Mitra Khanibaseri. "Developing Artificial Neural Networks (ANN) Models for Predicting E. Coli at Lake Michigan Beaches." Thesis, 2020.

Find full text
Abstract:

A neural network model was developed to predict E. coli levels and classes at six (6) select Lake Michigan beaches. Water quality observations at the time of sampling and discharge information from two nearby tributaries were used as inputs to predict E. coli. This research was funded by the Indiana Department of Environmental Management (IDEM). A user-friendly Excel-based tool was developed from the best model for making future predictions of E. coli classes. This tool will allow beach managers to make real-time decisions.

The nowcast model was developed based on historical tributary flows and water quality measurements (physical, chemical, and biological). The model uses readily measurable information such as total dissolved solids, total suspended solids, pH, electrical conductivity, and water temperature to estimate whether E. coli counts would exceed the acceptable standard. For setting up this model, field data collection was carried out during the 2019 beachgoers' season.

IDEM recommends posting an advisory at the beach indicating that swimming and wading are not recommended when E. coli counts exceed advisory standards. Based on the advisory limit, a single water sample shall not exceed an E. coli count of 235 colony-forming units per 100 milliliters (CFU/100 mL). Advisories are removed when bacterial levels fall within the acceptable standard. However, because E. coli results only become available after a time lag, beach closures have traditionally been based on the previous day's results. Nowcast models allow beach managers to make real-time beach advisory decisions instead of waiting a day or more for laboratory results to become available.

Using the historical data, an extensive experiment was carried out to obtain suitable input variables and an optimal neural network architecture. The best feed-forward neural network model was developed using the Bayesian Regularization Neural Network (BRNN) training algorithm. The developed ANN model showed an average prediction accuracy of around 87% in predicting the E. coli classes.
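A schematic of the nowcast classifier using a generic scikit-learn network on simulated data (the thesis trains a BRNN in a different toolchain; the feature list follows the abstract, everything else is illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(300, 80, n),    # total dissolved solids, mg/L
    rng.normal(20, 10, n),     # total suspended solids, mg/L
    rng.normal(8.0, 0.4, n),   # pH
    rng.normal(500, 120, n),   # electrical conductivity, uS/cm
    rng.normal(22, 4, n),      # water temperature, C
])
# Stand-in exceedance label: 1 means E. coli > 235 CFU/100 mL (simulated rule)
exceeds = (X[:, 1] + 0.05 * X[:, 4] * rng.normal(1, 0.3, n) > 25).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X[:300], exceeds[:300])
print("holdout accuracy:", clf.score(X[300:], exceeds[300:]))
```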

APA, Harvard, Vancouver, ISO, and other styles
46

(7036595), KwangHyuk Im. "ASSESSMENT MODEL FOR MEASURING CASCADING ECONOMIC IMPACTS DUE TO SEVERE WEATHER-INDUCED POWER OUTAGES." Thesis, 2019.

Find full text
Abstract:
This research has developed an assessment model and framework to measure cascading economic impacts, in terms of gross domestic product (GDP) loss, due to severe weather-induced power outages. The major objectives of this research were to (1) identify physical correlations between different industries within an economic system, (2) define deterministic relationships through the values of interconnectedness and interdependency between 71 industries, (3) complete a probabilistic estimation of economic impacts using historical economic data spanning 1997 to 2016, and (4) develop an assessment model that can be used in the future to measure economic loss in terms of GDP across the 71 industries.
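The cascading-impact logic is in the spirit of a Leontief input-output model, sketched below with three stand-in industries instead of 71; the coefficient matrix and shock values are invented for illustration, and the thesis' actual model may differ in detail:

```python
import numpy as np

# With technical coefficient matrix A encoding inter-industry dependency,
# an outage-induced final-demand shock Δd propagates through linkages as
# Δx = (I - A)^(-1) Δd, so total losses exceed the direct shock.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])        # interdependency between industries
delta_d = np.array([-10.0, -2.0, 0.0])    # direct demand loss, $M
delta_x = np.linalg.inv(np.eye(3) - A) @ delta_d
print("total output change by industry ($M):", np.round(delta_x, 2))
print("cascading loss beyond the direct shock ($M):",
      round(delta_x.sum() - delta_d.sum(), 2))
```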
APA, Harvard, Vancouver, ISO, and other styles
47

(6611465), Nathaniel J. Shellhamer. "Direct Demand Estimation for Bus Transit in Small Cities." Thesis, 2019.

Find full text
Abstract:

Public transportation is vital for many people who do not have the means to use other forms of transportation. In small communities, transit service is often limited, due to funding constraints of the transit agency. In order to maximize the use of available funding resources, agencies strive to provide effective and efficient service that meets the needs of as many people as possible. To do this, effective service planning is critical.

Unlike traditional road-based transportation projects, transit service modifications can be implemented over the span of just a few weeks. In planning for these short-term changes, the traditional four-step transportation planning process is often inadequate. Yet, the characteristics of small communities and the resources available to them limit the applicability of existing transit demand models, which are generally intended for larger cities.

This research proposes a methodology for using population and demographic data from the Census Bureau, combined with stop-level ridership data from the transit agency, to develop models for forecasting the transit ridership generated by a geographic area with known population and socioeconomic characteristics. The product of this research is a methodology that can be applied to develop ridership models for transit agencies in small cities. To demonstrate the methodology, the thesis builds ridership models using data from Lafayette, Indiana.

A total of four (4) ridership models are developed, giving a transit agency the choice of a model based on available data and desired predictive power. More complex models are expected to provide greater predictive power but also require more time and data to implement; simpler models may be adequate where data availability is a challenge. Finally, examples are provided to aid in applying the models to various situations. The aggregation levels of the American Community Survey (ACS) data posed some challenges in developing accurate models; however, the developed models are still expected to provide useful information, particularly where local knowledge is limited or additional information is unavailable.
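A sketch of one plausible model form, a Poisson regression of stop-level boardings on census-derived catchment features; the variables are illustrative stand-ins and not necessarily those used in the four models:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stop-level data: daily boardings versus catchment characteristics
# of the kind available from ACS tables (all names and values hypothetical).
rng = np.random.default_rng(0)
n_stops = 120
pop = rng.uniform(200, 3000, n_stops)        # population within 400 m
zero_veh = rng.uniform(0.0, 0.3, n_stops)    # share of zero-vehicle households
jobs = rng.uniform(0, 2000, n_stops)         # jobs within 400 m
mu = np.exp(0.5 + 0.0006 * pop + 3.0 * zero_veh + 0.0003 * jobs)
boardings = rng.poisson(mu)

X = sm.add_constant(np.column_stack([pop, zero_veh, jobs]))
model = sm.GLM(boardings, X, family=sm.families.Poisson()).fit()
print(model.params)  # exponentiate for multiplicative effects on ridership
```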


APA, Harvard, Vancouver, ISO, and other styles
48

(10693164), Chen Ma. "Modeling Alternatives for Implementing the Point-based Bundle Block Adjustment." Thesis, 2021.

Find full text
Abstract:
This thesis examines the multilinear equations of the calibrated pinhole camera. The multilinear equations describe the linear relations between camera parameters and image observations in matrix or tensor formats. The thesis includes derivations and analysis of the trilinear equations through the point feature relation. For cases of four or more frames, it derives and analyzes a combination of the bilinear and trilinear equations to represent general multi-frame point geometry. The result is a three-frame model (TFM) for general multi-frame point geometry, which provides a concise set of minimal and sufficient equations with minimal unknowns.

Based on the TFM, two bundle adjustment (BA) approaches are developed. The TFM does not involve the object parameters/coordinates that are indispensable to the collinearity equation employed by conventional BA. The two methods use the TFM as the condition equation, fully or partially replacing the collinearity equation. A variant using both the TFM and the collinearity equation is designed to incorporate prior knowledge of the object structure. Experiments with synthetic and real data demonstrate the rationality and validity of the TFM and the two TFM-based methods. When the estimates of the object structure are unstable, the TFM-based BA methods achieve a higher acceptance ratio of the adjustment results.
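For contrast, the sketch below sets up the classical collinearity-based bundle adjustment that the TFM-based methods partially replace: reprojection residuals are minimized over both camera poses and object points, the latter being exactly the unknowns the TFM eliminates. Synthetic calibrated-camera data; all names and values are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec, pts):
    """Rotate pts (N,3) by the angle-axis vector rvec (3,)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return pts
    k = rvec / theta
    return (pts * np.cos(theta) + np.cross(k, pts) * np.sin(theta)
            + np.outer(pts @ k, k) * (1 - np.cos(theta)))

def residuals(params, n_cams, n_pts, obs):
    # Unknowns: 6 pose parameters per camera PLUS 3 coordinates per point.
    poses = params[:6 * n_cams].reshape(n_cams, 6)
    X = params[6 * n_cams:].reshape(n_pts, 3)
    res = []
    for c in range(n_cams):
        Xc = rodrigues(poses[c, :3], X) + poses[c, 3:]
        proj = Xc[:, :2] / Xc[:, 2:3]          # normalized image coordinates
        res.append((proj - obs[c]).ravel())
    return np.concatenate(res)

rng = np.random.default_rng(0)
n_cams, n_pts = 3, 20
X_true = rng.uniform(-1, 1, (n_pts, 3)) + [0, 0, 6]
poses_true = np.hstack([rng.normal(0, 0.05, (n_cams, 3)),
                        rng.normal(0, 0.5, (n_cams, 3))])
obs = [(rodrigues(p[:3], X_true) + p[3:])[:, :2]
       / (rodrigues(p[:3], X_true) + p[3:])[:, 2:3]
       + rng.normal(0, 1e-3, (n_pts, 2)) for p in poses_true]
x0 = np.concatenate([(poses_true + rng.normal(0, 0.01, poses_true.shape)).ravel(),
                     (X_true + rng.normal(0, 0.05, X_true.shape)).ravel()])
fit = least_squares(residuals, x0, args=(n_cams, n_pts, obs))
print("RMS reprojection residual:", np.sqrt(np.mean(fit.fun**2)))
```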
APA, Harvard, Vancouver, ISO, and other styles
49

(5929718), Yuntao Guo. "Leveraging Information Technologies and Policies to Influence Short- and Long-term Travel Decisions." Thesis, 2019.

Find full text
Abstract:
Growing automobile dependency and usage continue to exacerbate traffic congestion, air pollution, and physical inactivity in metropolitan areas. Extensive efforts have been made to leverage advanced technology and related policies to influence short- (within-day and day-to-day) and long-term (mobility and lifestyle) travel decisions to address these issues from the system operator and individual traveler perspectives. However, most studies have yet to address system operator and individual traveler needs together; provide sufficient understanding of the impacts of such technologies on safety and health; and consider the impacts of distinctive regional and political characteristics on responses to different policies among population subgroups.
This dissertation seeks to facilitate the leveraging of information technologies and related policies to influence short- and long-term travel decisions by: (1) developing a framework for apps that integrate augmented reality, gamification, and social components to influence travel decisions addressing multiple user- and system-level goals, (2) understanding the safety and health impacts of these apps, (3) developing strategies to influence residential location decision-making to foster sustainable post-relocation travel behavior, and (4) investigating the impacts of economic and legal policies on travel decisions while considering distinctive regional and political characteristics.
This dissertation can provide insights to system operators for designing a new generation of apps to dynamically manage traffic in real time, promote long-term mode shifts from single-occupancy driving to carpooling, public transit use, walking, and cycling, and address individual traveler needs. The dissertation also presents app mechanisms for providing feedback to legislators and app developers for designing policies and apps geared towards safe usage and the physical and mental health of their users.
In addition, by considering the impacts of distinctive regional and political characteristics on population subgroups in terms of their responses to information technologies and economic and legal policies, additional measures can be deployed to support and facilitate the implementation of such technologies and policies.

APA, Harvard, Vancouver, ISO, and other styles
50

(10692402), Jorge Alfredo Rojas Rondan. "A BIM-based tool for formwork management in building projects." Thesis, 2021.

Find full text
Abstract:
A BIM-based tool for formwork management was developed using Dynamo Studio and Revit, based on practitioners' preferences regarding LOD and rental options. The BIM tool is a toolset of Dynamo scripts able to create a BIM model for formwork, enabled with parameters that describe the formwork features necessary for formwork management. The BIM model created with this toolset can compute quantities, perform cost analysis, generate a demand profile, and create 4D & 5D simulations automatically.
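The demand-profile idea can be illustrated with plain Python on invented element data of the kind a Dynamo script could extract from the Revit model (element names, areas, schedule days, and the rental rate below are all hypothetical):

```python
from collections import defaultdict

# Formwork-enabled elements with an area and a pour-to-strip window.
elements = [
    {"type": "wall form",   "area_m2": 120.0, "start_day": 1, "strip_day": 5},
    {"type": "column form", "area_m2": 35.0,  "start_day": 2, "strip_day": 6},
    {"type": "slab form",   "area_m2": 260.0, "start_day": 4, "strip_day": 12},
]
RENTAL_RATE = 0.9   # $/m2/day, illustrative

# Accumulate the daily formwork demand profile and the total rental cost.
demand = defaultdict(float)
for e in elements:
    for day in range(e["start_day"], e["strip_day"]):
        demand[day] += e["area_m2"]

peak = max(demand.values())
cost = sum(demand.values()) * RENTAL_RATE
print(f"peak simultaneous demand: {peak:.0f} m2, rental cost: ${cost:,.0f}")
```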
APA, Harvard, Vancouver, ISO, and other styles