Dissertations / Theses on the topic 'Construction Processes not elsewhere classified'


Consult the top 20 dissertations / theses for your research on the topic 'Construction Processes not elsewhere classified.'


1

Csató, Lehel. "Gaussian processes : iterative sparse approximations." Thesis, Aston University, 2002. http://publications.aston.ac.uk/1327/.

Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems and we examine its performance on classical regression and classification tasks as well as on data assimilation and a simple density estimation problem.
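In the published sparse online GP literature this parametrisation takes roughly the following form (the symbols alpha and C are assumed from that literature, not named in the abstract itself):

```latex
% Kernel parametrisation of the online posterior moments after t examples:
\mu_t(x) = \sum_{i=1}^{t} \alpha_t(i)\, K(x, x_i), \qquad
\Sigma_t(x, x') = K(x, x') + \sum_{i,j=1}^{t} C_t(i,j)\, K(x, x_i)\, K(x', x_j)
```

Since C_t grows quadratically in the number of processed examples t, restricting the expansion to the BV set is what removes the quadratic scaling noted above.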
2

Abdullah. "Knowledge sharing processes for identity theft prevention within online retail organisations." Thesis, University of Central Lancashire, 2017. http://clok.uclan.ac.uk/23998/.

Abstract:
The occurrence of identity theft has increased dramatically in recent times, becoming one of the fastest-growing crimes in the world. Major challenges associated with identity theft related offences include problems for consumers with credit, such as aggravation by debt collectors, rejection of loans, disturbance to normal life such as reputational damage, and the psychological disruption of providing personal data to organisations and banks during the investigation. For these reasons, and given the ready access of identity thieves to the retail industry, this problem is acute in the online retail industry, yet there has been insufficient research undertaken in this domain. This research investigated knowledge sharing processes for identity theft prevention within online retail organisations. An analysis of how individual staff and teams share their knowledge for identity theft prevention in organisations is presented, including an investigation of existing barriers to knowledge sharing for identity theft prevention. A qualitative case study research approach, using the guiding framework proposed by Salleh (2010), was adopted and extended to improve knowledge sharing processes for identity theft prevention in online retail organisations. Three case studies were conducted with leading online retailers in the UK. Data collection included one-to-one semi-structured interviews, internal documents from the researched companies and external documents from various secondary sources. The researcher used a thematic analysis approach, supported by the NVivo software tool and a manual coding process. The total number of interviews was 34 across the 3 case studies, with each interview lasting between 45 and 75 minutes. The participants were selected according to their experience, knowledge and involvement in solving identity theft issues and knowledge sharing. The internal documents investigated included policy documents and internal communications such as emails and memos from the researched companies. This study found that knowledge of identity theft prevention is not being shared within online retail organisations; individual staff members are learning from their own experience, which is time-consuming. Existing knowledge sharing barriers within the organisations were identified, and improvements in knowledge sharing processes in the online retail industry of the UK using the extended framework are proposed. This research contributes to existing research by providing new insights into knowledge sharing for identity theft prevention. It extends the framework proposed by Salleh (2010) to the new context of knowledge sharing processes for ID theft prevention in the retail industry by simplifying the model and combining elements into a more coherent framework. The present study also contributes by investigating the online retail sector for knowledge sharing for ID theft prevention. The empirical research identifies the barriers to knowledge sharing for ID theft prevention and the weaknesses of knowledge sharing in online retail organisations relevant to ID theft prevention. Finally, this study provides managers with useful guidelines for developing appropriate knowledge sharing processes for ID theft prevention in their organisations and for educating staff in effective knowledge sharing.
3

Panopoulos, Georgios D. "Economic aspects of safety in the Greek construction industry." Thesis, Aston University, 2003. http://publications.aston.ac.uk/12233/.

Abstract:
The thesis addresses the economic impacts of construction safety in Greece. The research involved the development of a methodology for determining the overall costs of safety, namely the sum of the costs of accidents and the costs of safety management failures (with or without accident), including image cost. Hitherto, very little work has been published on the cost of accidents in practical case studies and, to the author's knowledge, no research has been published that seeks to determine the costs of prevention in real cases. The methodology developed is new, transparent, and capable of being replicated and adapted to other employment sectors and other countries. It was applied to three construction projects in Greece to test the safety costing methodology and to offer some preliminary evidence on the business case for safety. The survey work took place between 1999 and 2001 and involved 27 months of costing work on site. The study focuses on the overall costs of safety that apply to the main (principal) contractor. The methodology is supported by 120 discrete cost categories and systematic criteria for determining which costs are included (counted) in the overall cost of safety. A quality system (in compliance with the ISO 9000 series) was developed to support the work and ensure accuracy of data gathering. The results offer some support for the business case for safety, and good support for the economics of safety, as they demonstrate the need for cost effectiveness. Subject to important caveats, those projects that appeared to manage safety more cost-effectively achieved the lowest overall safety cost. Nevertheless, the results are significantly lower than those of other published works for two main reasons: first, costs due to damage with no potential for injury were not included; and second, only costs to the main contractor were considered. The study's results are discussed and compared with other published works.
4

Shorrock, Sarah. "Protecting vulnerable people : an exploration of the risk factors and processes associated with Lancashire's Multi-Agency Safeguarding Hubs (MASH)." Thesis, University of Central Lancashire, 2017. http://clok.uclan.ac.uk/23075/.

5

Cushing, Karen. "An analysis of the mandatory admission criterion within youth justice diversionary processes." Thesis, University of Bedfordshire, 2016. http://hdl.handle.net/10547/622545.

Abstract:
‘To require old heads upon young shoulders is inconsistent with the law’s compassion to human infirmity’ (Lord Diplock in Director of Public Prosecutions v Camplin [1978] AC 717). For young people in England and Wales who offend, diversion from formal proceedings has historically been a principal constituent of youth justice policy and practice, and presently accounts for over a third of all outcomes for detected youth offending (Youth Justice Board for England and Wales, 2015). Although attitudes concerning diversion have often oscillated between favour and criticism, and there has rarely been a period of sustained consensus or constancy of processes (Bernard, 1992; Goldson, 2010), eligibility for an out of court disposal has traditionally been dependent on an admission of some form being made by a young person. This thesis seeks to place the evolution of diversionary measures for young people who commit low level offences or engage in nuisance behaviours in historical context, and to explore why an admission has become, in the absence of any discernible political, academic or professional consideration, a central tenet of diversionary policies in England and Wales. Potential barriers which may prevent some young people making an admission and thereby unnecessarily losing eligibility for an out of court disposal are considered, as well as the nature and standard of admission expected from young people, and the circumstances in which admissions are usually sought from them. This thesis also explores whether the mandatory admission criterion is compatible with other statutory and international obligations to consider the welfare of a young person when determining a suitable disposal, and whether it sufficiently distinguishes between young people unwilling to make an admission and those who may feel unable to. The thesis seeks to identify the gaps in current academic and professional knowledge concerning whether some young people may unnecessarily forfeit eligibility for a diversionary outcome for the sole reason that they do not make an admission. The research undertaken with relevant professionals endeavours to fill these gaps by exploring the practical application of the admission criterion, as well as considering suitable alternatives within the existing statutory regime.
6

Hermann, Inge. "Cold War heritage (and) tourism : exploring heritage processes within Cold War sites in Britain." Thesis, University of Bedfordshire, 2012. http://hdl.handle.net/10547/326057.

Abstract:
For most of the second half of the 20th century the world's political map was divided by the Cold War, a name given to the 40-year standoff between the superpowers - the United States and the USSR - and their allies. Due to its geographical location and alliance with the United States, Britain was at the 'frontline' of the Cold War. As a response to increasing tensions, the British Government made arrangements by building hundreds of military sites and structures, which were often dismantled or abandoned as the technology on which they relied rapidly became ineffective. Nowadays, there is a growing (academic) recognition of Cold War sites and their new or contemporary uses, including as heritage attractions within a tourism context. This study adopted a constructionist approach to investigate how heritage works as a cultural and social practice that constructs and regulates a range of values and ideologies about what constitutes Cold War heritage (and) tourism in Britain. It has done this by, firstly, exploring the dominant and professional 'authorised heritage discourse', which aims to construct mutually agreed and shared concepts about the phenomenon of 'Cold War heritage' within a tourism context. The study identified a network of actors, values, policies and discourses centred on the concept of 'Cold War heritage' at selected sites, through which a 'material reality' of the past is constructed. Although various opposing viewpoints were identified, the actors effectively seem to privilege and naturalise, through tourism, certain narratives of cultural and social meanings and values about what constitutes Cold War heritage and the ways it should be manifested through material and natural places, sites and objects within society. Differences were particularly noticeable in the values, uses and meanings of Cold War heritage within the contemporary context of heritage management in Britain. For some, the sites were connected with a personal 'past', a place to commemorate, celebrate or learn from the past. For others, the sites were a source of income, a tourism asset, or, on the contrary, a financial burden, as the sites were not 'old enough' or 'aesthetically pleasing' enough to be regarded as monuments to be preserved as heritage. Subsequently, the study also explored the (disempowered) role of visitors to the sites as passive receivers, leaving little room for individual reflections on the wider social and cultural processes of Cold War heritage. Although most visitors believed that the stewardship and professional view of the Cold War representations at the sites should not directly be contested, this study has illustrated that what makes places valuable and gives them meaning as heritage sites is not solely based on contemporary practices by a dominant heritage discourse. Despite the visitors' support for sole ownership by site managers, and the selective representations of the Cold War and its events, they did question or negotiate the idea of 'heritage' as a physical and sole subject of management practices. Despite having little prior knowledge of the Cold War era or events, by pressing the borders of the authorised parameters of 'Cold War heritage', visitors actively constructed their experiences as being, or becoming, part of their personal and collective moments of 'heritage'. By inscribing (new) memories and meaning into their identity, and therefore also changing the nature of that identity, they reflected upon the past, present and future, some more critically than others. To conclude, understanding these discursive meanings of Cold War heritage (and) tourism, and the ways in which ideas about Cold War heritage are constructed, negotiated and contested within and between discourses, also contributes to understanding the philosophical, historical, conceptual and political barriers that exist in identifying and engaging with different forms of heritage.
7

Thompson, Diane. "The social and political construction of care : community care policy and the 'private' carer." Thesis, University of Bedfordshire, 2000. http://hdl.handle.net/10547/233629.

Abstract:
This thesis presents a retrospective critique of the social and political construction of 'informal care' within community care policy from the period of the late 1970s to the mid 1990s. The thesis considers the question of the degree of 'choice' available to informal carers to say 'no' to caring, or aspects of caring, within the reforms' positioning of informal care as the first line of support for adult dependants. The critique focuses on subjectivity, difference, agency and choice. A theoretical and methodological synthesis is developed between feminist post-structuralism, feminist critiques of mainstream social policy, and feminist theory and research, within which a qualitative in-depth interview study with informal carers is situated. The critique is then expanded through the development of a 'Q' Methodology study with a larger cohort of informal carers. The research identified gendered generational differences between the carers, and a 'burden' of care imposed as an outcome of consecutive governments' attempts to residualise welfare. The older carers' levels of agency and choice were severely curtailed. However, the younger female carers were more able to resist the drive of the community care reforms, their counter discourses being based on a new emergent notion of 'rights'. The direction of community care policy was found to be out of step with how the carers within this study perceived their responsibilities and 'obligations'. The thesis argues that whilst post-modernism may have constrained the capacity of governments and reconstituted our understanding of 'care', it has not done so to the extent that we are no longer prepared to make demands for 'care' from and by the state.
8

Wang, Yu. "The development of a novel on-line system for the monitoring and control of fermentation processes." Thesis, University of Bedfordshire, 1995. http://hdl.handle.net/10547/610796.

Abstract:
This thesis describes the development of a computer-controlled on-line system for fermentation monitoring and control. The entire system consists of a laboratory fermenter, a four-channel flow injection system, a newly designed on-line filter, a biomass analysis channel, pH and oxygen controllers, and a spectrophotometer. A new design of gas-driven flow injection analysis (FIA) allows a large number of reagents to be handled. The computer-controlled four-channel FIA system is well suited to sequential analysis, which is important for on-line fermentation monitoring. The system can change the wavelength of the spectrophotometer automatically for each FIA channel, which makes it powerful and flexible. A high-frequency, low-energy ultrasonic filter was modified and applied to the system for on-line mammalian cell culture sampling without breaking the sterile barrier. The results show that this novel application of ultrasonic filter technology gives higher efficiency and reliability and a longer life cycle than other types of filter. All operations of the analytical system are controlled by a Macintosh computer (Quadra 950). The control program was written in LabVIEW, a graphical programming language well suited to fermentation control. The software handles communication with detectors, data acquisition, data analysis and presentation, and can programmatically control up to 50 devices. Mammalian cell batch culture was used as an example application: the system comprises a laboratory fermenter with a continuous sample-withdrawal filter and an analysis system in which glucose, lactate, ammonia, lactate dehydrogenase and biomass were measured. Cell viability was estimated by microscopic assay with trypan blue, and pH and oxygen were also measured. The system response was fast and yielded a large number of reliable and precise analytical results, which can be of great importance in the monitoring and control of mammalian cell culture conditions.
9

Debella, Fekadu. "Profitability improvement of construction firms through continuous improvement using rapid improvement principles and best practices." Thesis, 2020.

Abstract:

The internal and external challenges construction companies face, such as variability, low productivity, inefficient processes, waste, uncertainties, risks, fragmentation, adversarial contractual relationships and competition, and the cost overruns and delays that result from them, negatively affect company performance and profitability. Though research publications abound, these challenges persist, which indicates that knowledge gaps remain. Lean construction, process improvement, and performance improvement research have been conducted wherein improvement principles and best practices are used to ameliorate performance issues, but several gaps exist. Few companies use these improvement principles and best practices. For those companies applying improvements, there is no established link between these improvements and performance/profitability to guide companies. Further, even when companies use improvement principles and best practices, they apply only one or two, whereas an integrated application of these improvement principles and best practices would be more effective. The other gap the author identified is the lack of strategic tools that construction companies can use to improve and manage their profitability. This thesis attempts to fill the knowledge gap, at least partially, by developing a two-part excellence model for profitability improvement of construction companies. The excellence model lays out strategies that would enable companies to overcome the challenges and improve their profitability. It also provides an iterative and recursive continuous improvement model and flowchart to improve the profitability of construction companies. The researcher used high-impact principles, guidelines, and concepts from the literature on organizational effectiveness, critical success factors, strategic company profitability growth enablers, process improvement and process maturity models, performance improvement, and organizational excellence guidelines to develop the two-part excellence model.

The author also translated the two-part excellence model into a diagnostic tool and Decision Support System (DSS) through process diagrams, fishbone diagrams, root cause analysis, and the use of improvement principles, countermeasures and best practices at the most granular (lowest intervention) levels to do away with the root causes of poor performance. The diagnostic tool and DSS were developed in Access 2016 to serve as a strategic tool to improve and manage the profitability of construction companies. The researcher used improvement principles and best practices from the scientific and practitioner literature to develop company and project process flow diagrams and fishbone (cause and effect) diagrams for company, department, employee, interaction and project performance, which are the engines of the diagnostic tool and DSS. The diagnostic tool and DSS apply continuous improvement cycles iteratively and recursively to improve the profitability of construction companies from the current net profit of 2-3 percent to a higher value.

10

Lee, Anzy. "Riverbed morphology, hydrodynamics and hyporheic exchange processes." Thesis, 2020.

Abstract:

Hyporheic exchange is key to buffer water quality and temperatures in streams and rivers, while also providing localized downwelling and upwelling microhabitats. In this research, the effect of geomorphological parameters on hyporheic exchange has been assessed from a physical standpoint: surface and subsurface flow fields, pressure distribution across the sediment/water interface and the residence time in the bed.

First, we conduct a series of numerical simulations to systematically explore how the fractal properties of bedforms are related to hyporheic exchange. We compared the average interfacial flux and residence time distribution in the hyporheic zone with respect to the magnitude of the power spectrum and the fractal dimension of riverbeds. The results show that the average interfacial flux increases logarithmically with respect to the maximum spectral density, whereas it increases exponentially with respect to fractal dimension.

Second, we demonstrate how the Froude number affects the free-surface profile, the total head over the sediment bed and the hyporheic flux. When the water surface is fixed, the vertical velocity profile from the bottom to the air-water interface follows the law of the wall, so the velocity at the air-water interface has the maximum value. In contrast, in the free-surface case the velocity at the interface no longer has the maximum value: the location of maximum velocity moves closer to the sediment bed. This results in higher velocities near the bed and, accordingly, larger head gradients.

Third, we investigate how boulder spacing and embeddedness affect the near-bed hydrodynamics and the surface-subsurface water exchange. When the embeddedness is small, a recirculation vortex is observed in both closely-packed and loosely-packed cases, but the vortex is smaller and less coherent in the closely-packed case. For these dense clusters, the inverse relationship between embeddedness and flux no longer holds. As embeddedness increases, the subsurface flowpaths move in the lateral direction, as the streamwise route is hindered by the submerged boulder. The average residence time therefore decreases as the embeddedness increases.

Lastly, we propose a general artificial neural network for predicting the pressure field at the channel bottom using point velocities at different levels. We constructed three data-driven models: multivariate linear regression, local linear regression and an artificial neural network. The input variables are the velocities in the x, y and z directions, and the target variable is the pressure at the sediment bed. Our artificial neural network model produces consistent and accurate predictions under various conditions, whereas the linear surrogate models (multivariate linear regression and local linear multivariate regression) depend significantly on the input variables.
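As a rough illustration of the comparison described above, the following sketch fits a linear model and a small neural network to synthetic velocity-pressure data (the data, library choice and architecture are assumptions of this example, not taken from the thesis):

```python
# Linear surrogate vs. small neural network mapping point velocities (u, v, w)
# to bed pressure; the nonlinear synthetic pressure field is invented here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                      # (u, v, w) samples
y = 0.5 * X[:, 0] ** 2 - 0.2 * X[:, 2] + rng.normal(scale=0.05, size=5000)

X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]
lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("linear R^2:", round(lin.score(X_te, y_te), 3))  # poor on the nonlinear term
print("ANN    R^2:", round(ann.score(X_te, y_te), 3))  # close to 1
```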

As restoring streams and rivers has moved from aesthetics and form to a more holistic approach that includes processes, we hope our study can inform designs that benefit both structural and functional outcomes. Our results could inform a number of critical processes, such as biological filtering. It is possible to use our approach to predict hyporheic exchange and thus constrain the associated biogeochemical processing under different topographies. As river restoration projects become more holistic, geomorphological, biogeochemical and hydro-ecological aspects should all be considered.

11

Wilkins, Benjamin P. "Deciphering soil nitrogen biogeochemical processes using nitrogen and oxygen stable isotopes." Thesis, 2019.

Abstract:

Variations in the stable isotope abundances of nitrogen (δ15N) and oxygen (δ18O) of nitrate are a useful tool for determining sources of nitrate as well as understanding the transformations of nitrogen within soil (Chapter 2). Various sources of nitrate are known to display distinctive isotopic compositions, while nitrogen transformation processes fractionate both N and O isotopes and can reveal the reaction pathways of nitrogen compounds. However, to fully understand the δ15N and δ18O values of nitrate sources, we must understand the chemistry and the isotopic fractionations that occur during inorganic and biochemical reactions. Among all N cycle processes, nitrification and denitrification display some of the largest and most variable isotope enrichment factors, ranging from -35 to 0‰ for nitrification and -40 to -5‰ for denitrification. In this dissertation, I first characterize the isotopic enrichment factors of 15N during nitrification and denitrification, two important microbial processes in the soil nitrogen cycle, in a Midwestern agricultural soil. Nitrification incubations found that a large enrichment factor of -25.5‰ occurs during nitrification (NH4+ → NO3-), which agrees well with previous studies (Chapter 3). Additionally, the oxygen isotopic exchange that occurs between nitrite and water during nitrification was quantified: 82% of the oxygen in NO3- was derived from H2O, much greater than the 66% predicted by the biochemical steps of nitrification. The isotopic enrichment that occurs during denitrification was assessed by measuring the change in δ15N as the reactant NO3- was reduced to N2 gas (Chapter 4). The incubations and kinetic models showed that denitrification can cause large isotopic enrichment in the δ15N of the remaining NO3-. The enrichment factor for NO2- → gaseous N was -9.1‰, while the enrichment factors for NO3- → NO2- were between -17 and -10‰, both within the range of values reported in the literature. The results demonstrate that nitrification and denitrification cause large isotope fractionation and can alter the presumed δ15N and δ18O values of nitrate sources, potentially leading to incorrect apportionment of those sources.

The results of the denitrification incubation experiments were applied to a field study, where the measured enrichment factor was used to quantify the loss of N by field-scale denitrification (Chapter 5). Field-based estimates of total denitrification have long been a challenge, and only limited success has been found using N mass balance, N2O gas flux, or isotope labeling techniques. Here, the flux of nitrate and chloride from tile drain discharge from a small field was determined by measuring dissolved ions (ion chromatography) and monitoring water discharge. The δ15N and δ18O of tile nitrate were also measured at high temporal resolution. Fluxes of all N inputs, which included wet and dry N deposition, fertilizer application, and soil mineralization, were determined, along with the δ15N and δ18O values of these nitrate sources. Using these data, I first detected shifts in δ15N and δ18O values in the tile drain nitrate, which indicated variable amounts of denitrification. Next, a Rayleigh distillation model was used to determine the fraction of NO3- lost to field-scale denitrification. This natural abundance isotope method was able to account for the spatial and temporal variability of denitrification by integrating it across the field scale. Overall, I found only 3.3% of applied N was denitrified. Furthermore, this study emphasized the importance of complementary information (e.g. soil moisture, soil temperature, precipitation, isotopic composition of H2O), and the evidence it can provide about nitrogen inputs and processes within the soil.
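The Rayleigh model step above reduces to a one-line calculation; a minimal sketch follows (the numbers are illustrative, chosen only so the output lands near the ~3% figure reported above, and the function name is this example's, not the dissertation's):

```python
import numpy as np

def fraction_denitrified(delta_obs, delta_source, eps):
    """Rayleigh distillation: delta_obs ≈ delta_source + eps * ln(f), where f is
    the fraction of nitrate remaining; returns the fraction lost, 1 - f."""
    f_remaining = np.exp((delta_obs - delta_source) / eps)
    return 1.0 - f_remaining

# Illustrative numbers only: a +0.3 permil shift with eps = -9.1 permil
# corresponds to ~3% of the nitrate pool lost to denitrification.
print(fraction_denitrified(delta_obs=12.0, delta_source=11.7, eps=-9.1))
```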

12

Tatara, Anna N. "Rate estimators for non-stationary point processes." Thesis, 2019.

Abstract:
Non-stationary point processes are often used to model systems whose rates vary over time. Estimating the underlying rate functions is important both as input to a discrete-event simulation and for various statistical analyses. We study nonparametric estimators for the marked point process, the infinite-server queueing model, and the transitory queueing model, and conduct statistical inference for these estimators by establishing a number of asymptotic results.

For the marked point process, we consider estimating the offered load to the system over time. With direct observations of the offered load sampled at fixed intervals, we establish asymptotic consistency, rates of convergence, and asymptotic covariance through a Functional Strong Law of Large Numbers, a Functional Central Limit Theorem, and a Law of Iterated Logarithm. We also show that there exists an asymptotically optimal interval width as the sample size approaches infinity.

The infinite-server queueing model is central to many stochastic models. Specifically, the mean number of busy servers can be used as an estimator for the total load faced by a multi-server system with time-varying arrivals, and in many other applications. Through an omniscient estimator based on observing both the arrival times and service requirements for n samples of an infinite-server queue, we show asymptotic consistency and the rate of convergence. Then, we establish the asymptotics for a nonparametric estimator based on observations of the busy servers at fixed intervals.

The transitory queueing model is crucial when studying a transitory system, which arises when the time horizon or population is finite. We assume we observe arrival counts at fixed intervals. We first consider a natural estimator which assumes an underlying nonhomogeneous Poisson process. Although this estimator is asymptotically unbiased, we see that a correction term is required to retrieve an accurate asymptotic covariance. Next, we consider a nonparametric estimator that exploits the maximum likelihood estimator of a multinomial distribution, and show that this estimator converges appropriately to a Brownian bridge.
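For intuition, the simplest estimator of this family, a piecewise-constant rate estimate built from arrival counts over fixed intervals, can be sketched as follows (a minimal illustration with made-up data, not code from the thesis):

```python
import numpy as np

def piecewise_rate(counts, width):
    """MLE of a piecewise-constant rate for a nonhomogeneous Poisson process:
    counts has shape (n_paths, n_intervals); width is the interval length."""
    return np.asarray(counts, dtype=float).mean(axis=0) / width

# Two independently observed arrival paths, counted over intervals of width 0.5:
print(piecewise_rate([[3, 7, 12, 6], [5, 9, 10, 4]], width=0.5))
# -> [ 8. 16. 22. 10.]
```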
13

Agyemang, Augustine M. "The impacts of road construction work zones on the transportation system, travel behavior of road users and surrounding businesses." Thesis, 2019.

Abstract:

In our daily use of the transportation system, we are faced with numerous road construction work zones. These work zones change how road users interact with the transportation system through increased travel times, delay times and vehicle stopped times. A microscopic traffic simulation was developed to depict the changes that occur in the transportation system. The impacts of these changes on travel behavior were investigated using ordered probit and logit models with five independent variables: age, gender, driving experience, annual mileage and percentage of non-work trips. Finally, a business impact assessment framework was developed to assess the impact of road construction work zones on various business categories such as grocery stores, pharmacies, liquor stores and fast food outlets. Traffic simulation results showed that the introduction of work zones into the road network increases delay times, vehicle stopped times and travel times, and that the change in average travel, delay and stopped times differed from link to link. The observed average increases were as high as 318 seconds per vehicle for travel time, 237 seconds per vehicle for delay time and 242 seconds per vehicle for vehicle stopped time in the morning peak period, and as high as 1607, 258 and 265 seconds per vehicle, respectively, in the afternoon peak period. The statistical model results indicated that, on a work trip, individuals with high driving experience, high annual mileage and a high percentage of non-work trips are more likely to change their route. The results also showed gender differences in route choice behavior. Concerning business impacts, businesses in the work zone were affected differently, with grocery and pharmacy stores having the highest and lowest total loss in revenue, respectively.
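A minimal sketch of the kind of ordered probit described above, using synthetic data and the statsmodels OrderedModel class (the variable coding, cut points and coefficients are this example's assumptions, not the thesis's estimates):

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 75, n),
    "gender": rng.integers(0, 2, n),
    "experience_yrs": rng.integers(0, 40, n),
    "annual_mileage": rng.normal(12000, 4000, n),
    "pct_nonwork": rng.uniform(0, 100, n),
})
# Latent propensity to divert, then cut into an ordered 3-level response.
latent = 0.03 * df["experience_yrs"] + 1e-4 * df["annual_mileage"] + rng.normal(size=n)
df["route_change"] = pd.cut(latent, [-np.inf, 1.2, 2.4, np.inf],
                            labels=["never", "sometimes", "always"])

exog = df[["age", "gender", "experience_yrs", "annual_mileage", "pct_nonwork"]]
res = OrderedModel(df["route_change"], exog, distr="probit").fit(method="bfgs",
                                                                 disp=False)
print(res.summary())
```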

14

Kortge, David J. "Simulation, construction, and testing of a Lloyd's mirror lithographic interferometer." Thesis, 2019.

Abstract:
Fabrication of highly periodic nanoscale structures is a vital capability for research on quasicrystals, directional and specular selective emitters, and plasmonics. Laser interference lithography is a maskless lithography process capable of producing patterns with high periodicity over large areas, and is compatible with standard optical lithography processing. In this work, a Lloyd's mirror lithographic interferometer is simulated, built, and tested. Featuring a HeCd CW laser at 325 nm, a spatial filter, and a vacuum stage, it is capable of generating patterns with a sub-100 nanometer half pitch over a large area (approximately 8 cm2) with minimal distortion in a single exposure, with 2D patterns possible using multiple exposures. The interferometer features a compact sliding enclosure, simple alignment and operation, and quick adjustment of the desired period. One-dimensional and two-dimensional patterns were generated and matched well with simulation.
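The relation governing the achievable period in a Lloyd's mirror setup is the standard two-beam interference formula (stated here from textbook optics, not quoted from the thesis):

```latex
% Fringe period for beams meeting at half-angle \theta:
\Lambda = \frac{\lambda}{2\sin\theta},
\qquad \text{half-pitch} = \frac{\Lambda}{2} = \frac{\lambda}{4\sin\theta}.
% With \lambda = 325\,\mathrm{nm} (HeCd) and \sin\theta \to 1, the limiting
% half-pitch is about 81\,\mathrm{nm}, consistent with the sub-100 nm figure.
```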
15

Aboulmouna, Lina M. "Towards cybernetic modeling of biological processes in mammalian systems—lipid metabolism in the murine macrophage." Thesis, 2020.

Abstract:

Regulation of metabolism in mammalian cells is achieved through a complex interplay between cellular signaling, metabolic reactions, and transcriptional changes. The modeling of metabolic fluxes in a cell requires the knowledge of all these mechanisms, some of which may be unknown. A cybernetic approach provides a framework to model these complex interactions through the implicit accounting of such regulatory mechanisms, assuming a biological “goal”. The goal-oriented control policies of cybernetic models have been used to predict metabolic phenomena ranging from complex substrate uptake patterns and dynamic metabolic flux distributions to the behavior of gene knockout strains. The premise underlying the cybernetic framework is that the regulatory processes affecting metabolism can be mathematically formulated as a cybernetic objective through variables that constrain the network to achieve a specified biological “goal”.
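The goal-oriented control policies mentioned here are, in the bacterial cybernetic modeling literature this framework extends, usually written as the matching and proportional laws below (the notation is assumed from that literature, not quoted from this abstract):

```latex
% Cybernetic control variables allocating resources among competing pathways,
% where p_i is the perceived return on investment of pathway i toward the goal:
u_i = \frac{p_i}{\sum_j p_j} \quad \text{(controls enzyme synthesis)}, \qquad
v_i = \frac{p_i}{\max_j p_j} \quad \text{(controls enzyme activity)}
```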

Cybernetic theory builds on the perspective that regulation is organized towards achieving goals relevant to an organism's survival or displaying a specific phenotype in response to a stimulus. While cybernetic models have been established by prior work carried out in bacterial systems, we show their applicability to more complex biological systems with a predefined goal. We have modeled the metabolism of eicosanoids, a well-characterized set of inflammatory lipids derived from arachidonic acid, in mouse bone marrow derived macrophage (BMDM) cells stimulated by Kdo2-Lipid A (KLA, a chemical analogue of the lipopolysaccharide found on the surface of bacterial cells) and adenosine triphosphate (ATP, a danger signal released in response to surrounding cell death) using cybernetic control variables. Here, the cybernetic goal is inflammation; the hallmark of inflammation is the expression of cytokines, which act as autocrine signals to stimulate a pro-inflammatory response. Tumor necrosis factor (TNF)-α is an exemplary pro-inflammatory marker and can be designated as the cybernetic objective for modeling eicosanoid - prostaglandin (PG) and leukotriene (LK) - metabolism. Transcriptomic and lipidomic data for eicosanoid biosynthesis and conversion were obtained from the LIPID MAPS database. We show that the cybernetic model captures the complex regulation of PG metabolism and provides a reliable description of PG formation under ATP stimulation. We then validated our model by predicting an independent data set: the PG response of KLA-primed, ATP-stimulated BMDM cells.

The process of inflammation is mediated by the production of multiple cytokines, chemokines, and lipid mediators, each of which contributes to specific individual objectives. For such complex processes in mammalian systems, a cybernetic objective based on a single protein or component may not be sufficient to capture all the biological processes, necessitating the use of multiple objectives. The choice of objective function in this thesis has been made on intuitive grounds; if objectives are conjectured, an argument can be made for numerous alternatives. Since regulatory effects are estimated from unregulated kinetics, one encounters the risk of multiplicity, giving rise to multiple candidate models; the best model is, of course, the one able to predict a comprehensive set of perturbations. Here, we have extended the model above to also capture the dynamics of LKs, using migration as the biological goal for LK, with the chemoattractant CCL2 as a key representative molecule of cell activation leading to an inflammatory response; this is a case where a goal composed of multiple cybernetic objectives is warranted. Alternative model objectives included relating both branches of the eicosanoid metabolic network to the inflammatory cytokine TNF-α, as well as the simple maximization of all metabolic products such that each contributes equally to the inflammatory outcome. We were again able to show that all three cybernetic objectives describing the LK and PG branches of eicosanoid metabolism capture the complex regulation and provide a reliable description of eicosanoid formation. We performed simulated drug and gene perturbation analyses on the system to identify differences between the models and propose additional experiments to select the best cybernetic model.

The advantage of cybernetic modeling is its ability to capture system behavior without the level of mechanistic detail of these interactions that standard kinetic modeling requires. Given the complexity of mammalian systems, the cybernetic goal for mammalian cells may not be based solely on survival or growth but on specific, context-dependent cellular responses. In this thesis, we have laid the groundwork for the application of cybernetic modeling to complex mammalian systems through the specific example of eicosanoid metabolism in BMDM cells, illustrated the case for multiple objectives, and highlighted the extensibility of the cybernetic framework to other complex biological systems.

16

Burroughs, Jedadiah Floyd. "Influence of Chemical and Physical Properties of Poorly-Ordered Silica on Reactivity and Rheology of Cementitious Materials." Thesis, 2019.

Abstract:

Silica fume is a widely used pozzolan in the concrete industry that has been shown to have numerous benefits for concrete including improved mechanical properties, refined pore structure, and densification of the interfacial transition zone between paste and aggregates. Traditionally, silica fume is used as a 5% to 10% replacement of cement; however, newer classes of higher strength concretes use silica fume contents of 30% or greater. At these high silica fume contents, many detrimental effects, such as poor workability and inconsistent strength development, become much more prominent.

In order to understand the fundamental reasons why high silica fume contents can have these detrimental effects on concrete mixtures, eight commercially available silica fumes were characterized for their physical and chemical properties. These included traditional properties such as density, particle size, and surface area, as well as a non-traditional property, absorption capacity. These raw material characteristics were then related to the hydration and rheological behavior of pastes and concrete mixtures. Other tests performed included isothermal calorimetry, which showed that each silica fume reacted differently from the others when exposed to the same reactive environment. Traditional hydration models for ordinary portland cement were expanded to include the effects that silica fumes have on water consumption, volumes of hydration products, and final degree of hydration.

As a result of this research, it was determined necessary to account for the volume and surface area of unhydrated cement and unreacted silica fume particles in water-starved mixture proportions. An adjustment factor was developed to apply the results from hydration modeling more accurately. By combining the results from hydration modeling with the surface area adjustments, an analytical model was developed to determine the thickness of paste (hydration products and capillary water) that surrounds all of the inert and unreacted particles in the system. This model, denoted the “Paste Thickness Model,” was shown to be a strong predictor of compressive strength results: the results of this research suggest that increasing the paste thickness decreases the expected compressive strength of concretes at a given age or state of hydration.

The rheological behavior of cement pastes containing silica fume was studied using a rotational rheometer, and the Herschel-Bulkley model was fit to the rheological data to characterize the behavior. A multilinear model was developed to relate the specific surface area of the silica fume, the water content, and the silica fume content to the Herschel-Bulkley rate index, which is practically related to the ease with which the paste mixes. This multilinear model showed strong predictive capability when used on randomly generated paste compositions.
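The Herschel-Bulkley model itself is the three-parameter flow curve tau = tau0 + K * gamma_dot**n, where K is the rate index discussed above; a minimal fitting sketch follows (the flow-curve data are invented for illustration, not taken from the dissertation):

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress as a function of shear rate."""
    return tau0 + K * gamma_dot ** n

gamma_dot = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)  # shear rate, 1/s
tau = np.array([12.1, 13.0, 14.9, 17.2, 20.8, 28.5, 37.9])     # shear stress, Pa

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=(10.0, 1.0, 0.5))
print(f"yield stress ~ {tau0:.1f} Pa, rate index K ~ {K:.2f}, exponent n ~ {n:.2f}")
```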

Additionally, an analytical model was developed that defines a single parameter, idealized as the thickness of water surrounding each particle in the cementitious system. This model, denoted as the “Water Thickness Model,” incorporated the absorption capacity of silica fumes discovered during the characterization phase of this study and was shown to correlate strongly with the Herschel-Bulkley rate index. The Water Thickness Model demonstrates how small changes in water content can have a drastic effect on the rheology of low w/c or high silica fume content pastes due to the combined effects of surface area and absorption. The effect of additional water on higher w/c mixtures is significantly less.

17

Montes, Francisco J., Sr. "Effects on rheology and hydration of the addition of cellulose nanocrystals (CNC) in portland cement." Thesis, 2019.

Abstract:
Cellulose nanocrystals (CNCs) have been used in a wide range of applications, including cement composites as a strength enhancer. This work analyses the use of CNCs from several sources and production methods, and their effects on the rheology and hydration of pastes made using different cement types with different compositions. Cement Types I/II and V were used to prepare pastes with different water-to-cement ratios (w/c) and measure the changes in rheology upon CNC addition. The presence of tricalcium aluminate (denoted C3A in cement chemistry) made a difference in the magnitude of CNC effects. At dosages under 0.5 vol% of dry cement, CNCs reduced the yield stress by up to 50% of the control value, and pastes with more C3A showed reduced yield stress over a wider range of CNC dosages. CNCs also increased the yield stress of pastes at dosages above 0.5%: twice the control value for pastes with high C3A content at 1.5% CNC, and up to 20 times for pastes without C3A at the same dosage.
All the CNCs used were characterized in length, aspect ratio, and zeta potential to identify a definitive factor governing the effect on the rheology of cement pastes. However, no definitive evidence was found that any of these characteristics dominated the measured effects.
The CNC dosage at which the maximum yield stress reduction occurred increased with the amount of water used in the paste preparation, which provides evidence of the dominance of the water-to-cement ratio in the rheological impact of CNCs.
Isothermal calorimetry showed that CNCs cause concerning retardation effects in cement hydration. CNC slurries were tested for sugars and other carbohydrates that could cause this effect; the slurries were filtered, and impurities detected in the filtrate were quantified and characterized. However, the retardation appeared to be unaffected by the amount of the species detected, suggesting that the crystal chemistry, which is a consequence of the production method, is responsible for this retardation.
This work explores the benefits and drawbacks of the use of CNCs in cement composites by individually approaching rheology and heat of hydration through a range of physical and chemical tests to build a better understanding of the observed effects. Understanding the effect of CNCs on cement paste rheology can provide insights for future work on CNC applications in cement composites.
18

Piper, Christine. "The impact of certification on women-owned construction firms in the United States." Thesis, 2007. http://arrow.unisa.edu.au:8081/1959.8/46352.

Abstract:
The purpose of this study was to investigate the impact of certification on women-owned construction companies in the United States. The primary objectives were to determine if certification has impacted accessibility to public (government) and private construction work as well as the financial performance of women-owned construction firms. The secondary research objectives were to determine what challenges these firms have encountered during the certification process and their perception of it.
19

Brayfield, Russell S. "Electrode effects on electron emission and gas breakdown from nano to microscale." Thesis, 2020.

Abstract:
Developments in modern electronics drive device design to smaller scale and higher electric fields and currents. Device size reductions to microscale and smaller have invalidated the assumption of avalanche formation for the traditional Paschen’s law for predicting gas breakdown. Under these conditions, the stronger electric fields induce field emission driven microscale gas breakdown; however, these theories often rely upon semi-empirical models to account for surface effects and the dependence of gas ionization on electric field, making them difficult to use for predicting device behavior a priori.
This dissertation hypothesizes that one may predict a priori how to tune emission physics and breakdown conditions for various electrode conditions (sharpness and surface roughness), gap size, and pressure. Specifically, it focuses on experiments to demonstrate the implications of surface roughness and emitter shape on gas breakdown for microscale and nanoscale devices at atmospheric pressure and simulations to extend traditional semi-empirical representations of the ionization coefficient to the relevant electric fields for these operating conditions.
First, this dissertation reports the effect of multiple discharges for 1 μm, 5 μm, and 10 μm gaps at atmospheric pressure. Multiple breakdown events create circular craters up to 40 μm deep, with crater depth more pronounced for smaller gap sizes and greater cathode surface roughness. Theoretical models of microscale breakdown using this modified effective gap distance agree well with the experimental results.
We next investigated the implications of gap distance and protrusion sharpness for gas breakdown and electron emission at atmospheric pressure in nanoscale devices made of gold and titanium layered onto silicon wafers electrically isolated with SiO2. At lower voltages, the emitted current followed the Fowler-Nordheim (FN) law for field emission (FE). For either a 28 nm or a 450 nm gap, gas breakdown occurred directly from FE, as observed for microscale gaps. For a 125 nm gap, the emission current began to transition toward the Mott-Gurney law for space-charge limited emission (SCLE) with collisions prior to undergoing breakdown. Thus, depending upon the conditions, gas breakdown may transition directly from either SCLE or FE for submicroscale gaps.
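The FN regime referred to here is conventionally identified with a Fowler-Nordheim plot: current of the form I = a*V^2*exp(-b/V) appears linear when ln(I/V^2) is plotted against 1/V. A small sketch with synthetic data (the constants a and b are invented, not the dissertation's measurements):

```python
import numpy as np

# Synthetic field-emission current-voltage data of the Fowler-Nordheim form.
V = np.linspace(5.0, 50.0, 25)
I = 1e-12 * V ** 2 * np.exp(-120.0 / V)

# FN plot: ln(I / V^2) against 1/V is a straight line in the FE regime.
slope, intercept = np.polyfit(1.0 / V, np.log(I / V ** 2), 1)
print(f"slope = {slope:.1f} (recovers -b = -120), intercept = {intercept:.2f}")
```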
Applying microscale gas breakdown theories to predict this experimental behavior requires appropriately accounting for all physical parameters in the model. One critical parameter in these theories is the ionization coefficient, which has been determined semi-empirically with fitting parameters tabulated in the literature. Because these models fail at the strong electric fields relevant to the experiments reported above, we performed particle-in-cell simulations to calculate the ionization coefficient for argon and helium at various gap distances, pressures, and applied voltages to derive more comprehensive semi-empirical relationships to incorporate into breakdown theories.
In summary, this dissertation provides the first comprehensive assessment of the implications of surface roughness on microscale gas breakdown, the transition in gas breakdown and electron emission mechanisms at nanoscale, and the extension of semi-empirical laws for ionization coefficient. These results will be valuable in developing theories to predict electron emission and gas breakdown conditions for guiding nanoscale device design.
20

Rojas Rondan, Jorge Alfredo. "A BIM-based tool for formwork management in building projects." Thesis, 2021.

Abstract:
A BIM-based tool for formwork management was developed using Dynamo Studio and Revit, based on practitioners' preferences regarding level of development (LOD) and rental options. The tool is a set of Dynamo scripts that creates a BIM model enabled with parameters describing the formwork features necessary for formwork management. The BIM model created with this toolset can compute quantities, support cost analysis, generate a demand profile, and create 4D and 5D simulations automatically.
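As a rough sketch of the kind of quantity take-off such a toolset automates, the snippet below computes formwork contact area and a rental cost per element (all field names and rates here are invented; the actual tool reads element data from Revit parameters via Dynamo):

```python
from dataclasses import dataclass

@dataclass
class WallElement:
    length_m: float
    height_m: float
    sides_formed: int = 2  # both faces of a wall are typically formed

def contact_area(w: WallElement) -> float:
    """Formwork contact area: formed face area times number of formed sides."""
    return w.length_m * w.height_m * w.sides_formed

def rental_cost(area_m2: float, days: int, rate_per_m2_day: float = 1.5) -> float:
    return area_m2 * days * rate_per_m2_day

walls = [WallElement(6.0, 3.0), WallElement(4.5, 3.0)]
total = sum(contact_area(w) for w in walls)   # 63.0 m^2
print(total, rental_cost(total, days=10))     # 63.0 945.0
```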