Dissertations / Theses on the topic 'Transit method'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Transit method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Koppenhoefer, Johannes. "Searching for extra-solar planets with the transit method." Diss., lmu, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:19-106531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gattolin, Elena. "Merge in Transit, a distribution method in the industrial environment." Thesis, Jönköping University, JTH, Industrial Engineering and Management, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-1334.

Full text
Abstract:

In a fast-moving environment and a globalised market, companies are searching for efficient distribution methods that enable a broad product assortment, lower inventory levels, shorter customer order fulfilment times and lower transportation costs, in order to achieve a more efficient procurement process and improved customer service. This paper focuses on a new supply chain design that resolves these trade-offs between cutting management costs and raising customer service levels in markets characterised by increasing globalisation. The merge-in-transit (MIT) distribution method allows companies to reduce inventory and transportation costs while guaranteeing a high perceived customer service level. It is a technique in which goods shipped from several supply locations are consolidated into one final customer delivery. The company needs to coordinate shipments so that they arrive simultaneously and goods can be bundled and shipped immediately to the final customer for arrival on the due date. Economic benefits and drawbacks are investigated from a supply chain perspective.

APA, Harvard, Vancouver, ISO, and other styles
3

Alsubai, Khalid. "Wide angle search for extrasolar planets by the transit method." Thesis, St Andrews, 2008. http://hdl.handle.net/10023/521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Lijuan. "An efficient method to compute shortest paths in real road network /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?IEEM%202005%20CHEN.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Goodloe, John Bennett. "STANDARDIZED SUB-SCALE DYNAMOMETER SCALING METHOD FOR TRANSIT AND FREIGHT TRAIN APPLICATIONS." OpenSIUC, 2016. https://opensiuc.lib.siu.edu/theses/1899.

Full text
Abstract:
AN ABSTRACT OF THE THESIS OF John Goodloe, for the Master of Science degree in Mechanical Engineering, presented on April 13, 2016, at Southern Illinois University Carbondale. TITLE: STANDARDIZED SUB-SCALE DYNAMOMETER SCALING METHOD FOR TRANSIT AND FREIGHT TRAIN APPLICATIONS. MAJOR PROFESSOR: Dr. Peter Filip. Dynamometers are machines used in several industries for measuring the force, torque, or power of a mechanism. These devices are particularly useful in the friction material industry: friction materials are created and then tested on dynamometers to analyze physical properties such as the dynamic coefficient of friction, based on the material's retarding force against the wheel or disc mounted to the dynamometer drive shaft. Dynamometer testing is expensive and often time-consuming. Sub-scale dynamometers may be used to reduce cost, time, and material use while providing similar test results, provided a proper scaling method is implemented. Several scaling methods exist; this approach uses surface analysis and an energy-dissipated-per-surface-contact-area strategy to verify the testing conditions of both sub-scale and full-scale testing. Since lab analysis is expensive, the project budget is restricted to analyzing at most one full-scale disc and pad specimen and two sub-scale disc and pad sets. The test results are expected to show that when the surface conditions of the analyzed specimens agree with each other, the dynamometer test results will also agree. Given the restrictions on budget and time, the fastest and most effective way to test this hypothesis is to create a baseline on the full-scale dynamometer and then adjust the scaling on the sub-scale dynamometer until similar results are obtained. Once similar dynamometer test results are obtained, the material specimens can be analyzed in the lab.
Testing will continue as long as necessary, and if the expected results are not obtained, the results will still be analyzed and compared to the baseline. The results are expected to show that two separate machines can provide similar surface conditions for testing, as well as similar dynamometer test results, for any given friction material. However, if the expected results cannot be obtained, this may still demonstrate that without matching surface layer conditions during testing, the dynamometers' recorded test results will not match either, which is in agreement with the hypothesis.
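The energy-dissipated-per-contact-area strategy described above amounts to holding E/A constant between the full-scale and sub-scale tests. A minimal sketch of that scaling relation; the function and parameter values are illustrative, not taken from the thesis:

```python
def scaled_test_energy(full_scale_energy_j, full_scale_area_m2, subscale_area_m2):
    """Choose the sub-scale braking energy so that the energy dissipated
    per unit of pad contact area matches the full-scale test:
    E_sub / A_sub = E_full / A_full."""
    return full_scale_energy_j * (subscale_area_m2 / full_scale_area_m2)

# e.g. a full-scale stop dissipating 2.0 MJ over 0.04 m^2 of pad area,
# scaled to a sub-scale pad of 0.004 m^2
e_sub = scaled_test_energy(2.0e6, 0.04, 0.004)  # 2.0e5 J
```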
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, Sui-chun Macella. "The impact of Mass Transit Railway on land development in Hong Kong an analysis of the island line using expansion method /." Click to view the E-thesis via HKUTO, 1989. http://sunzi.lib.hku.hk/hkuto/record/B42574146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hughes, Arthur D. "Analysis of in-transit visibility as a method of reducing material lost in shipment." Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/30913.

Full text
Abstract:
The purpose of this thesis is to evaluate the impact that improved in-transit visibility, obtained through implementation of the Defense Total Asset Visibility (DTAV) plan and the Global Transportation Network (GTN), will have on reducing material lost in shipment. This research utilizes financial data generated aboard Navy ships outfitted with the Shipboard Uniform Automated Data Processing System (SUADPS) to determine the extent of material lost in shipment and to evaluate the possible savings that could be derived through improving material visibility at the requisitioner (user) level. The existing methods used to track material are reviewed, weaknesses and deficiencies are identified, and potential savings are analyzed using linear regression analysis. The Defense Total Asset Visibility Plan (DTAV) and Global Transportation Network (GTN) are introduced, and available methods of accessing improved in-transit visibility data are discussed. This analysis concludes that improved in-transit visibility can reduce material lost in shipment.
APA, Harvard, Vancouver, ISO, and other styles
8

Naylor, A. Ross. "Evaluation and clinical application of a new method of quantifying mean cerebral transit time." Thesis, University of Aberdeen, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240668.

Full text
Abstract:
Recent work using Positron Emission Tomography has indicated that the best indicator of cerebral vascular reserve (CVR) is the ratio of cerebral blood flow to cerebral blood volume, which is the reciprocal of mean cerebral transit time (MCTT). However, previous attempts to quantify MCTT have been unsuccessful. A new isotopic method of quantifying MCTT, which has overcome previous problems, is described and has been subjected to validation and application in two studies: (i) in patients with acute stroke, and (ii) in patients undergoing carotid endarterectomy. In the validation study, MCTT was compared with blood flow velocity in the middle cerebral artery, measured using Transcranial Doppler (TCD) sonography. Both methods were reproducible, there was a linear relationship between the two measures, and a normal range of inter-hemispheric MCTT asymmetry was defined. The transit time and TCD methods were employed in 32 patients with acute, first-time cerebral infarction. Patterns of underlying vascular pathology correlated with a clinical and CT scan/autopsy classification of cerebral infarction, and there was good correlation between the transit time and TCD findings. The new technique, when applied to 55 patients undergoing carotid endarterectomy, showed that 31% had pre-operative evidence of impaired CVR in the symptomatic hemisphere, with 75% returning to normal after surgery. Significant predictors of intra-operative stroke were: (i) age over 65, (ii) residual neurological deficit, (iii) complex plaque morphology, and (iv) the combination of impaired CVR and CT infarction in the symptomatic hemisphere. No patient with recurrent symptoms after carotid endarterectomy developed impaired CVR or recurrent disease in the operated internal carotid artery (ICA) during follow-up. One patient developed impaired CVR in the non-operated hemisphere in association with disease progression in the non-operated ICA.
The transit time method shows considerable potential as an inexpensive, quick and simple alternative to the previously available methods of evaluating CVR.
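The relationship this abstract rests on, MCTT as the reciprocal of the CBF/CBV ratio, follows from the central volume principle and can be written out directly. A minimal sketch; the numerical values below are typical grey-matter figures used for illustration, not data from the thesis:

```python
def mean_cerebral_transit_time(cbf_ml_per_100g_per_min, cbv_ml_per_100g):
    """Central volume principle: MCTT = CBV / CBF, i.e. the reciprocal of
    the CBF/CBV ratio identified as the best CVR indicator.
    Returned in seconds (hence the factor of 60)."""
    return 60.0 * cbv_ml_per_100g / cbf_ml_per_100g_per_min

# illustrative values: CBF ~ 50 ml/100g/min, CBV ~ 4 ml/100g
mctt = mean_cerebral_transit_time(50.0, 4.0)  # 4.8 s
```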
APA, Harvard, Vancouver, ISO, and other styles
9

Gee, Wilfred T., Olivier Guyon, Josh Walawender, Nemanja Jovanovic, and Luc Boucher. "Project PANOPTES: a citizen-scientist exoplanet transit survey using commercial digital cameras." SPIE-INT SOC OPTICAL ENGINEERING, 2016. http://hdl.handle.net/10150/622806.

Full text
Abstract:
Project PANOPTES (http://www.projectpanoptes.org) is aimed at establishing a collaboration between professional astronomers, citizen scientists and schools to discover a large number of exoplanets with the transit technique. We have developed digital-camera-based imaging units to cover large parts of the sky and look for exoplanet transits. Each unit costs approximately $5000 USD and runs automatically every night. By using low-cost, commercial digital single-lens reflex (DSLR) cameras, we have developed a uniquely cost-efficient system for wide-field astronomical imaging, offering approximately two orders of magnitude better etendue per unit cost than professional wide-field surveys. Serving both science and outreach, our vision is to have thousands of these units, built by schools and citizen scientists, gathering data, making this project the most productive exoplanet discovery machine in the world.
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Aijing. "Bus Transit Passenger Origin-Destination Flow Estimation: Capturing Terminal Carry-Over Movements Using the Iterative Proportional Fitting Method." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1593675738643412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Lee, Sui-chun Macella, and 李萃珍. "The impact of Mass Transit Railway on land development in Hong Kong: an analysis of the island line usingexpansion method." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B42574146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Savic, Radojka. "Improved pharmacometric model building techniques." Doctoral thesis, Uppsala University, Department of Pharmaceutical Biosciences, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9272.

Full text
Abstract:

Pharmacometric modelling is an increasingly used method for analysing the outcome from clinical trials in drug development. The model building process is complex and involves testing, evaluating and diagnosing a range of plausible models aiming to make an adequate inference from the observed data and predictions for future studies and therapy.

The aim of this thesis was to advance the approaches used in pharmacometrics by introducing improved models and methods for application in essential parts of model building procedure: (i) structural model development, (ii) stochastic model development and (iii) model diagnostics.

As a contribution to structural model development, a novel flexible structural model for drug absorption, the transit compartment model, was introduced and evaluated. This model is capable of describing various drug absorption profiles yet is simple enough to be estimable from the data available in a typical trial. As a contribution to stochastic model development, three novel methods for parameter distribution estimation were developed and evaluated: a default NONMEM nonparametric method, an extended grid method, and a semiparametric method with estimated shape parameters. All these methods are useful in circumstances where the standard assumptions about parameter distributions in the population do not hold. The new methods provide less biased parameter estimates, a better description of variability, and better simulation properties of the model. As a contribution to model diagnostics, the most commonly used diagnostics were evaluated for their usefulness. In particular, diagnostics based on individual parameter estimates were systematically investigated, and circumstances likely to misguide modellers into making erroneous decisions in model development (relating to the choice of structural, covariate and stochastic model components) were identified.

In conclusion, novel approaches, insights and models have been provided to the pharmacometrics community.

Implementation of these advances to make model building more efficient and robust has been facilitated by development of diagnostic tools and automated routines.
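The transit compartment absorption model mentioned in the abstract has a well-known closed form for the rate of drug arrival in the central compartment after passage through a chain of transit compartments. A minimal sketch of that form; the dose, bioavailability and transit parameters below are purely illustrative:

```python
import math

def transit_absorption_rate(t, dose, bioavailability, n, mtt):
    """Input rate to the central compartment for a chain of n transit
    compartments with mean transit time mtt:
        rate(t) = F * Dose * ktr * (ktr*t)**n * exp(-ktr*t) / n!
    with transit rate constant ktr = (n + 1) / mtt."""
    ktr = (n + 1) / mtt
    return (bioavailability * dose * ktr * (ktr * t) ** n
            * math.exp(-ktr * t) / math.factorial(n))

# illustrative (hypothetical) values: 100 mg dose, F = 0.8,
# 3 transit compartments, mean transit time 2 h
rates = [transit_absorption_rate(t, 100.0, 0.8, 3, 2.0)
         for t in (0.0, 1.0, 2.0, 4.0)]
# the absorption profile rises from zero, peaks, then decays
```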

APA, Harvard, Vancouver, ISO, and other styles
13

Georgieva, Iskra. "Searching for Exoplanets in K2 Data." Thesis, Luleå tekniska universitet, Rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70960.

Full text
Abstract:
The field of extrasolar planets is undoubtedly one of the most exciting and fast-moving in astronomy. Thanks to the Kepler Space Telescope, which has given us the Kepler and K2 missions, we now have thousands of planets to study and thousands more candidates waiting to be confirmed. For this thesis work, I used K2 data in the form of stellar light curves from Campaign 15 – the 15th observation field of this mission – to search for transiting exoplanets. I present one way to produce a viable list of planetary candidates, which is the first step to exoplanet discovery. I do this by first applying a package of subroutines called EXOTRANS to the light curves. EXOTRANS uses two wavelet-based filter routines: VARLET and PHALET. VARLET is used to remove stellar variability and abrupt discontinuities in the light curve. Since a transit appears box-like, EXOTRANS utilises a box-fitting least-squares algorithm to extract the transit event by fitting a square box. PHALET removes disturbances of known frequencies (and their harmonics) and is used to search the light curve for additional planets. Once EXOTRANS finishes its run, I examine the resulting plots and flag those which contain a transit feature that does not appear to be a false positive. I then perform calculations on the shortlisted candidates to further refine their quality. This resulted in a list of 30 exoplanet candidates. Finally, for eight of them, I used a light curve detrending routine (Exotrending) and another software package, Pyaneti, for transit data fitting. Pyaneti uses MCMC sampling with a Bayesian approach to derive the most accurate orbital and candidate parameters. Based on these estimates, combined with stellar parameters from the Ecliptic Plane Input Catalogue, I comment on the eight candidates and their host stars. However, these comments are only preliminary and speculative until follow-up investigation has been conducted.
The most widely used method for such follow-up is the radial velocity method, through which more detailed information is obtained about the host star and, in turn, about the candidate. This information, specifically the planetary mass, allows the bulk density to be estimated, which can give an indication of a planet's composition. Although the Kepler Space Telescope is at the end of its life, new missions with at least a partial focus on exoplanets are either ongoing (Transiting Exoplanet Survey Satellite – TESS) or upcoming (Characterising Exoplanets Satellite – CHEOPS, James Webb Space Telescope – JWST, Planetary Transits and Oscillations of stars – PLATO). They will add thousands of new planets, providing unprecedented accuracy on transit parameters, and will make significant advances in the field of exoplanet characterisation. The methods used in this work are as applicable to these missions as they have been for Kepler and for the now-retired Convection, Rotation et Transits planétaires (CoRoT) – the first space mission dedicated to exoplanet research.
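The box-fitting least-squares step described above can be illustrated with a toy version: phase-fold the light curve at trial periods and look for the box-shaped dip that falls furthest below the out-of-transit level. This is a simplified sketch of the idea, not the EXOTRANS implementation; the injected signal is synthetic:

```python
import numpy as np

def toy_box_search(time, flux, trial_periods, n_bins=200, q=0.05):
    """For each trial period, phase-fold and bin the light curve, then
    slide a box of fractional width q around the phases and record the
    deepest mean dip below the median out-of-transit level.
    Returns (depth, best period, in-transit phase)."""
    best = (0.0, None, None)
    width = max(1, int(q * n_bins))
    for p in trial_periods:
        phase = (time % p) / p
        bins = np.full(n_bins, np.nan)
        for b in range(n_bins):
            m = (phase >= b / n_bins) & (phase < (b + 1) / n_bins)
            if m.any():
                bins[b] = flux[m].mean()
        level = np.nanmedian(bins)
        for b in range(n_bins):
            idx = np.arange(b, b + width) % n_bins  # box wraps in phase
            depth = level - np.nanmean(bins[idx])
            if depth > best[0]:
                best = (depth, p, b / n_bins)
    return best

# inject a 1%-deep, 5%-duration box transit with a 3-day period into a
# flat light curve sampled every 0.01 d, then search nearby trial periods
time = np.arange(0, 30, 0.01)
flux = np.ones_like(time)
flux[((time % 3.0) / 3.0) < 0.05] -= 0.01
depth, period, phase0 = toy_box_search(time, flux, [2.5, 3.0, 3.5])
```

Folding at the wrong period smears the dip across many phase bins, so the correct trial period yields the deepest box, which is the essence of the BLS statistic.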
APA, Harvard, Vancouver, ISO, and other styles
14

Strohl, Brandon A. "Empirical Assessment of the Iterative Proportional Fitting Method for Estimating Bus Route Passenger Origin-Destination Flows." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1261583295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hamacher, Duane Willis Physics Faculty of Science UNSW. "A search for transiting extrasolar planets from the southern hemisphere." Publisher: University of New South Wales. Physics, 2008. http://handle.unsw.edu.au/1959.4/40943.

Full text
Abstract:
To date, more than 300 planets orbiting stars other than our Sun have been discovered using a range of observing techniques, with new discoveries occurring monthly. The work in this thesis focused on the detection of exoplanets using the transit method. Planets orbiting close to their host stars have a roughly 10 per cent chance of eclipsing (transiting) the star, with Jupiter-sized planets causing a one per cent dip in the flux of the star over a few hours. A wealth of orbital and physical information can be extracted from these systems, including the planet density, which is essential in constraining models of planetary formation. Detecting these types of planets requires monitoring tens of thousands of stars over a period of months. To accomplish this, we conducted a wide-field survey using the 0.5-metre Automated Patrol Telescope (APT) at Siding Spring Observatory (SSO) in NSW, Australia. Once candidates were selected from the data set, selection criteria were applied to separate the likely planet candidates from the false positives. This thesis discusses the methods and instrumentation used in obtaining data and selecting planet candidates, as well as the results and analysis of the planet candidates selected from star fields observed from 2004–2007. Of the 65 planet candidates initially selected from the 25 target fields observed, only two were consistent with a planet transit. These candidates were later determined to be eclipsing binary stars based on follow-up observations using the 40-inch telescope, the 2.3-m telescope, and the 3.9-m Anglo-Australian Telescope, all located at SSO. Additionally, two planet candidates from the SuperWASP-North consortium were observed on the 40-inch telescope. Both proved to be eclipsing binary stars. While no planets were found, our search methods and results are consistent with successful transit surveys targeting similar fields with stars in a similar magnitude range using similar methods.
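The two numbers quoted in this abstract (a roughly 10 per cent transit chance for close-in planets and a one per cent flux dip for Jupiter-sized ones) follow from simple geometry. A sketch using standard solar and Jovian radii; the 0.05 AU orbit is an illustrative hot-Jupiter value:

```python
R_SUN_KM = 6.957e5   # nominal solar radius
R_JUP_KM = 7.1492e4  # nominal Jupiter equatorial radius
AU_KM = 1.496e8

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in stellar flux while the planet is in transit:
    the ratio of the projected disc areas, (Rp / Rs)**2."""
    return (r_planet_km / r_star_km) ** 2

def transit_probability(r_star_km, a_km):
    """Geometric probability that a randomly inclined circular orbit of
    semi-major axis a produces transits: roughly R_star / a."""
    return r_star_km / a_km

depth = transit_depth(R_JUP_KM, R_SUN_KM)            # ~0.011: the "one per cent dip"
prob = transit_probability(R_SUN_KM, 0.05 * AU_KM)   # ~0.09: the "roughly 10 per cent chance"
```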
APA, Harvard, Vancouver, ISO, and other styles
16

Weishaupt, Holger. "A study of power spectral densities of real and simulated Kepler light curves." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-45802.

Full text
Abstract:
During the last decade, the transit method has evolved into one of the most promising techniques in the search for extrasolar planets and the quest to find other Earth-like worlds. In theory, the transit method is straightforward, being based on the detection of an apparent dimming of the host star's light due to an orbiting planet traversing in front of the observer. In practice, however, the detection of such light curve dips and their confident ascription to a planetary transit is heavily burdened by the presence of different sources of noise, the most prominent of which is probably the so-called intrinsic stellar variability. Filtering out potential transit signals from background noise requires a well-adjusted high-pass filter. In order to optimise such a filter, i.e. to achieve the best separation between signal and noise, one typically requires access to benchmark datasets that exhibit the same light curve with and without obstructing noise. Several methods for simulating stellar variability have been proposed for the construction of such benchmark datasets. However, while such methods have been widely used in testing transit detection algorithms, it is not well known how such simulations compare to real recorded light curves, a fact that might be attributed to the lack, until recently, of large databases of stellar light curves for comparison. With the increasing amount of light curve data now available from missions such as Kepler, I have here undertaken such a comparison of synthetic and real light curves for one particular method, which simulates stellar variability based on scaled power spectra of the Sun's flux variations.
Conducting the comparison in terms of estimated power spectra of real and simulated light curves, I found that the two datasets exhibit substantial differences in average power, with the synthetic power spectra generally having lower power and also lacking certain distinct power peaks present in the real light curves. The results of this study suggest that scaled power spectra of solar variability alone may be insufficient for light curve simulations, and that more work will be required to understand the origin and relevance of the observed power peaks in order to improve such light curve models.
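The power spectral density estimates compared in this study can be obtained, for an evenly sampled light curve, from a simple FFT periodogram. A minimal sketch; the cadence and injected signal are illustrative, not from the thesis:

```python
import numpy as np

def power_spectrum(flux, dt):
    """One-sided power spectrum of an evenly sampled light curve;
    the mean is removed first so the zero-frequency bin does not
    dominate the spectrum."""
    flux = np.asarray(flux, dtype=float)
    flux = flux - flux.mean()
    power = np.abs(np.fft.rfft(flux)) ** 2
    freqs = np.fft.rfftfreq(len(flux), d=dt)
    return freqs, power

# a pure 1-hour oscillation sampled at 1-minute cadence should yield
# a single dominant peak at 1/3600 Hz
dt = 60.0
t = np.arange(4096) * dt
flux = 1.0 + 1e-3 * np.sin(2 * np.pi * t / 3600.0)
freqs, power = power_spectrum(flux, dt)
peak_freq = freqs[np.argmax(power)]
```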
APA, Harvard, Vancouver, ISO, and other styles
17

Weishaupt, Hrafn N. H. "Implementing a pipeline to search for transiting exoplanets : application to the K2 survey data." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-79207.

Full text
Abstract:
The detection of exoplanets has rapidly evolved into one of the most important frontiers of astronomical and astrophysical research. Recent decades have seen the development of various techniques for detecting exoplanets. Of these, the transit method has received particular interest and has led to the largest number of discoveries to date. The Kepler K2 mission is an ongoing observational survey which has generated light curves for thousands of stars, a large fraction of which have yet to be fully explored. To discover and characterize the transiting planets hosted by these stars, extensive transit screens are required. However, implementing a pipeline for transit analyses is not straightforward, considering the differing light curve properties of each survey, the rapid changes brought by technological advancement, and the apparent lack of a gold standard with respect to the applied methodology. This project reviewed several aspects of exoplanet detection via the transit method. Particular focus was placed on the identification of a suitable workflow covering the relevant steps for moving from raw light curve files to a final prediction and characterization of transiting planetary candidates. Adhering to the identified strategy, the major part of the project then dealt with the implementation of a pipeline that integrates and executes all the different steps in a streamlined fashion. Primary focus was placed on the selection and implementation of methods into an operational pipeline; given the time constraints, extensive optimization of each individual processing step was outside the scope of this project. Nevertheless, the pipeline was employed to predict transit candidates for K2 campaigns C7, C8, C10, C11, and C12.
A comparison of the most conservative predictions from campaigns C7 and C10 with previously reported exoplanet candidates demonstrated that the pipeline was highly capable of discovering reliable transit candidates. Since campaigns C11 and C12 have not yet been fully explored, the candidates predicted for those campaigns in the current project might harbour novel planetary transit candidates suitable for follow-up confirmation runs. In summary, the current project has produced a pipeline for performing transiting exoplanet searches in K2 data, integrating the steps from raw light curve processing to transit candidate selection and characterization. The pipeline has been demonstrated to predict credible transit candidates, but future work will have to focus on additional optimization of individual method parameters and on the analysis of transit detection efficiencies.
APA, Harvard, Vancouver, ISO, and other styles
18

AL, ABD ASSAAD. "Comparaison des methodes de mesure, du flux des contenus digestifs chez le ruminant : application a l'etude de la digestion de trois types de ration." Clermont-Ferrand 2, 1986. http://www.theses.fr/1986CLF21035.

Full text
Abstract:
A comparison in sheep of the different methods proposed for estimating the flow of digesta at the entrance to the small intestine, using a simple cannula and markers, and the development of a methodology complementary to the above, allowing measurement of the rate of passage through the digestive tract of the feed residues and liquids that make up the digesta.
APA, Harvard, Vancouver, ISO, and other styles
19

Alapini, Odunlade Aude Ekundayo Pauline. "Transiting exoplanets : characterisation in the presence of stellar activity." Thesis, University of Exeter, 2010. http://hdl.handle.net/10036/104834.

Full text
Abstract:
The combined observations of a planet's transits and the radial velocity variations of its host star allow the determination of the planet's orbital parameters and, most interestingly, of its radius and mass, and hence its mean density. Observed densities provide important constraints for planet structure and evolution models. The uncertainties on the parameters of large exoplanets mainly arise from those on stellar masses and radii. For small exoplanets, the treatment of stellar variability limits the accuracy of the derived parameters. The goal of this PhD thesis was to reduce these sources of uncertainty by developing new techniques for stellar variability filtering and for the determination of stellar temperatures, and by robustly fitting the transits taking into account external constraints on the planet's host star. To this end, I developed the Iterative Reconstruction Filter (IRF), a new post-detection stellar variability filter. By exploiting prior knowledge of the planet's orbital period, it simultaneously estimates the transit signal and the stellar variability signal, using a combination of moving average and median filters. The IRF was tested on simulated CoRoT light curves, where it significantly improved the estimate of the transit signal, particularly in the case of light curves with strong stellar variability. It was then applied to the light curves of the first seven planets discovered by CoRoT, a space mission designed to search for planetary transits, to obtain refined estimates of their parameters. As the IRF preserves all signal at the planet's orbital period, it can also be used to search for secondary eclipses and orbital phase variations in the most promising cases. This enabled the detection of the secondary eclipses of CoRoT-1b and CoRoT-2b in the white (300–1000 nm) CoRoT bandpass, as well as a marginal detection of CoRoT-1b's orbital phase variations.
The wide optical bandpass of CoRoT limits the distinction between thermal emission and reflected light contributions to the secondary eclipse. I developed a method to derive precise relative stellar temperatures using equivalent width ratios and applied it to the host stars of the first eight CoRoT planets. For stars with temperatures within the calibrated range, the derived temperatures are consistent with the literature, but have smaller formal uncertainties. I then used a Markov Chain Monte Carlo technique to explore the correlations between planet parameters derived from transits, and the impact of external constraints (e.g. the spectroscopically derived stellar temperature, which is linked to the stellar density). Globally, this PhD thesis highlights, and in part addresses, the complexity of performing detailed characterisation of transit light curves. Many low-amplitude effects must be taken into account: residual stellar activity and systematics, stellar limb darkening, and the interplay of all available constraints on transit fitting. Several promising areas for further improvements and applications were identified. Current and future high-precision photometry missions will discover increasing numbers of small planets around relatively active stars, and the IRF is expected to be useful in characterising them.
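The IRF combines moving average and median filters; as a flavour of the median component, here is a minimal sliding-median estimate of stellar variability (a sketch of the general idea only, not the IRF itself):

```python
import numpy as np

def sliding_median(flux, window):
    """Sliding-median estimate of the slowly varying stellar signal;
    dividing the light curve by this estimate flattens the variability
    while, for windows much longer than the transit duration, mostly
    preserving transit-depth information."""
    half = window // 2
    padded = np.pad(np.asarray(flux, dtype=float), half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(flux))])

# a light curve with a slow linear trend: the sliding median tracks it
trend = 1.0 + 0.01 * np.linspace(0, 1, 500)
estimate = sliding_median(trend, 51)
```

For strictly monotonic data the median of each interior window equals its central sample, so the estimate reproduces the trend away from the edges.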
APA, Harvard, Vancouver, ISO, and other styles
20

Croxton, Keely L., Bernard Gendon, and Thomas L. Magnanti. "Models and Methods for Merge-In-Transit Operations." Massachusetts Institute of Technology, Operations Research Center, 2001. http://hdl.handle.net/1721.1/5135.

Full text
Abstract:
We develop integer programming formulations and solution methods for addressing operational issues in merge-in-transit distribution systems. The models account for various complex problem features, including the integration of inventory and transportation decisions, the dynamic and multimodal components of the application, and the non-convex piecewise linear structure of the cost functions. To accurately model the cost functions, we introduce disaggregation techniques that allow us to derive a hierarchy of linear programming relaxations. To solve these relaxations, we propose a cutting-plane procedure that combines constraint and variable generation with rounding and branch-and-bound heuristics. We demonstrate the effectiveness of this approach on a large set of test problems, with instances containing up to almost 500,000 integer variables, derived from actual data from the computer industry. Key words: merge-in-transit distribution systems, logistics, transportation, integer programming, disaggregation, cutting-plane method.
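The non-convex piecewise linear cost structure mentioned above typically arises from freight tariffs with volume discounts. Evaluating such a cost function is simple, even though optimising over it requires the integer programming machinery the abstract describes; the tariff table below is purely illustrative:

```python
def piecewise_linear_cost(weight, tariff):
    """Evaluate a non-convex piecewise linear shipping cost.
    tariff: list of (upper_weight, fixed_cost, unit_rate) segments sorted
    by upper_weight; falling unit rates between segments (volume
    discounts) are what make the overall function non-convex."""
    for upper, fixed, rate in tariff:
        if weight <= upper:
            return fixed + rate * weight
    raise ValueError("weight exceeds the tariff table")

# hypothetical tariff: the unit rate drops at 100 and 500 units
tariff = [(100, 0.0, 2.0), (500, 50.0, 1.0), (10_000, 250.0, 0.5)]
c1 = piecewise_linear_cost(80, tariff)   # 0 + 2.0 * 80  = 160.0
c2 = piecewise_linear_cost(300, tariff)  # 50 + 1.0 * 300 = 350.0
```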
APA, Harvard, Vancouver, ISO, and other styles
21

Fan, Lang. "Metaheuristic methods for the urban transit routing problem." Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/54237/.

Full text
Abstract:
In our research, we concentrate on developing a metaheuristic framework for solving the urban transit routing problem (UTRP). Embedding simple heuristic algorithms (hill-climbing and simulated annealing) within this framework, we have beaten the previously best published results for Mandl's benchmark problem, which is the only generally available data set. Due to the lack of "standard models" for the UTRP and a shortage of benchmark data, it is difficult for researchers to compare their approaches. We therefore introduce a simplified model and implement a data set generation program to produce realistic test data sets much larger than Mandl's problem. Furthermore, some lower bounds and necessary constraints of the UTRP are also researched, which we use to help validate the quality of our results, particularly those obtained for our new data sets. Finally, a multi-objective optimisation algorithm is designed to solve our urban transit routing problem, in which the operator's cost is modelled in addition to passenger quality of service.
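Simulated annealing, one of the simple heuristics embedded in the framework above, can be sketched generically; for the UTRP the caller would supply a route-set representation, cost model and neighbourhood move, so the integer toy problem below is purely illustrative:

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: always accept improving moves, and
    accept worsening moves with probability exp(-delta / T) under a
    geometric cooling schedule."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current, rng)
        c = cost(candidate)
        if c <= current_cost or rng.random() < math.exp((current_cost - c) / t):
            current, current_cost = candidate, c
            if c < best_cost:
                best, best_cost = candidate, c
        t *= cooling
    return best, best_cost

# toy usage: minimise x**2 over the integers, stepping +/- 1 from x = 50
best, best_cost = simulated_annealing(
    50, lambda x: x * x, lambda x, rng: x + rng.choice([-1, 1]))
```

At high temperature the walk escapes local structure; as T decays the accept rule reduces to hill-climbing, which is how the framework interpolates between its two embedded heuristics.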
APA, Harvard, Vancouver, ISO, and other styles
22

Bucciarelli, Mark. "Cluster sampling methods for monitoring route-level transit ridership." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kotol, Martin. "Výkonová bilance laserového dálkoměru." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220580.

Full text
Abstract:
The thesis analyzes optical laser range finders in real propagation environments. It describes the various parts of optical laser range finders, their properties and principles, in relation to their power balance. The thesis presents the basic optical properties of the lenses used in the transmitter and receiver. A separate chapter is devoted to the transit-time method and the factors influencing the measurement. An analysis of relative directional reflectance is also part of the thesis. In conclusion, the thesis contains practical laboratory measurements of the relative directional reflectance of different materials and colours, and a verification of the power-level diagram.
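The transit-time principle underlying such range finders is simply distance = c·Δt/2, since the pulse covers the target range twice; a minimal sketch:

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s, c=C_LIGHT):
    """Transit-time (time-of-flight) range: the pulse travels to the target
    and back, so the range is c * dt / 2."""
    return c * round_trip_time_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m of range.
```

The power balance studied in the thesis determines whether the returned pulse is strong enough above the noise floor for that Δt to be measured at all.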
APA, Harvard, Vancouver, ISO, and other styles
24

Oliver-Taylor, A. "Parallel transit methods for arterial spin labelling magnetic resonance imaging." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1382488/.

Full text
Abstract:
Vessel selective arterial spin labelling (ASL) is a magnetic resonance imaging technique which permits the visualisation and assessment of the perfusion territory of a specific set of feeding arteries. It is of clinical importance in both acute and chronic cerebrovascular disease, and in the mapping of blood supplied to tumours. Continuous ASL (CASL) is capable of providing the highest signal-to-noise ratio (SNR) of the various ASL methods. However, on clinical systems it suffers from high hardware demands, and the control of systematic errors decreases perfusion sensitivity. A separate labelling coil avoids these problems, enabling high labelling efficiency and subsequent high SNR, and vessel specificity can be localised to one carotid artery. However, this relies on the careful and accurate positioning of the labelling coil over the common carotid arteries in the neck. It is proposed to combine parallel transmission (multiple transmit coils, each transmitting with different amplitudes and phases) to spatially tailor the labelling field, removing the reliance on coil location for optimal labelling efficiency and enabling robust vessel selective labelling with a high degree of specificity. Presented is the application of parallel transmission methods to continuous ASL, requiring the development of an ASL labelling coil array and a two-channel transmitter system. Coil safety testing was performed using a novel MRI temperature mapping technique to accurately measure small temperature changes on the order of 0.1 °C. A perfusion phantom with distinct vascular territories was constructed for sequence testing and development. Phantom and in-vivo testing of parallel transmit CASL using a 3D-GRASE acquisition showed an improvement of up to 35% in vessel specificity when compared with using a single labelling coil, whilst retaining the high labelling efficiency and associated SNR of separate-coil CASL methods.
APA, Harvard, Vancouver, ISO, and other styles
25

Jiang, Yu, and 姜宇. "Reliability-based transit assignment : formulations, solution methods, and network design applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/207991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Moreira, Joana Conceição 1975. "The use of market research methods in understanding choice transit riders." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/80950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Awiphan, Supachai. "Exomoons to Galactic structure : high precision studies with the microlensing and transit methods." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/exomoons-to-galactic-structure-high-precision-studies-with-the-microlensing-and-transit-methods(7b257bb2-d0ad-4d47-afb8-5c691486cf9b).html.

Full text
Abstract:
Today the search for and study of exoplanets is one of the most interesting areas of modern astronomy. Over the last two decades, the number of detected exoplanets has continued to increase; at present, over 3,300 exoplanets have been discovered. This thesis presents high-precision studies based on the transit and microlensing methods, which are used to detect hot and cool exoplanets, respectively. First, the effect of intrinsic stellar noise on the detectability of an exomoon orbiting a transiting exoplanet is investigated using transit timing variation and transit duration variation. The intrinsic stellar variation of an M dwarf reduces the detectability correlation coefficient by 0.0-0.2, with a median reduction of 0.1. Transit timing variation and transmission spectroscopy observations and analyses of the hot Neptune GJ3470b, from telescopes at the Thai National Observatory and the 0.6-metre PROMPT-8 telescope in Chile, are presented in order to investigate the possibility of a third body in the system and to study its atmosphere. The transit timing variation analyses exclude the presence of a hot Jupiter with a period of less than 10 days, or of a planet with an orbital period between 2.5 and 4.0 days, in the GJ3470 system. The combined optical and near-infrared transmission spectroscopy favours a H/He-dominated haze (mean molecular weight 1.08 ± 0.20) with methane in the atmosphere of GJ3470b. With the microlensing technique, real-time online simulations of microlensing properties based on the Besancon Galactic model, called the Manchester-Besancon Microlensing Simulator (MaBulS), are presented and applied to the recent MOA-II survey results. This analysis provides the best comparison of Galactic structure between a simulated Galactic model and microlensing observations. The best fit between the Besancon model and the MOA-II data yields a brown-dwarf mass-function slope of -0.4. The Besancon model provides only 50 per cent of the measured optical depth and event rate per star at low Galactic latitude around the inner bulge; however, the revised MOA-II data are consistent with the Besancon model without any missing inner-bulge population.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhu, Yi Ph D. Massachusetts Institute of Technology. "Spatiotemporal learning and geo-visualization methods for constructing activity-travel patterns from transit card transaction data." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93807.

Full text
Abstract:
Thesis: Ph. D. in Urban and Regional Planning, Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 149-157).
The study of human activity-travel patterns for transportation demand forecasting has evolved a long way in theories, methodologies and applications. However, the scarcity of data has become a major barrier to the advancement of research in the field. At the same time, the proliferation of urban sensing and location-based devices generates voluminous streams of spatio-temporally registered information. These urban sensing data contain a wealth of information on urban dynamics and individuals' mobility. For example, transit smart card transaction data reveal the places that transit passengers visit at different times of day. As tempting as it appears to be, the incorporation of these urban sensing data into activity-travel study remains a big challenge, which demands new analytics, theories and frameworks to bridge the gap between the information observed directly from the imperfect urban sensing data and the knowledge about how people use the city. In this study, we propose a framework of analysis that focuses on the recurring processing and learning of voluminous transit smart card data flows in juxtaposition with additional auxiliary spatio-temporal data, which are used to improve our understanding of the context of the data. The framework consists of an ontology-based data integration process, a built environment measurement module, an activity-learning module and visualization examples that facilitate the exploration and investigation of activity-travel patterns. The ontology-based data integration approach helps to integrate and interpret spatio-temporal data from multiple sources in a systematic way. These spatio-temporally detailed data are used to formulate quantitative variables for the characterization of the context under which travelers made their transit trips. In particular, a set of spatial metrics are computed to measure different dimensions of the urban built environment of trip destinations.
In order to understand why people make trips to destinations, researchers and planners need to know the possible activities associated with observed transit trips. Therefore, an activity-learning module is developed to infer the unknown activity types from millions of trips recorded in transit smart card transactions by learning the context-dependent behaviors of travelers from a traditional household travel survey. The learned activities not only help the interpretation of the behavioral choices of transit riders, but can also be used to improve the characterization of urban built form by uncovering the likely activity landscapes of various places. The proposed framework and methodology are demonstrated by focusing on the use of transit smart card transaction data, i.e., EZ-Link data, to study activity-travel patterns in Singapore. Although the different modules of the framework are loosely coupled at the moment, we have tried to pipeline as much of the process as possible to facilitate efficient data processing and analysis. This allows researchers and planners to keep track of the evolution of human activity-travel patterns over time, and to examine the correlations between changes in activities and changes in the built environment. The knowledge gained from continuous urban sensing data will certainly help policy makers and planners understand the current states of urban dynamics and monitor changes as transportation infrastructure and travel behaviors evolve over time.
by Yi Zhu.
Ph. D. in Urban and Regional Planning
APA, Harvard, Vancouver, ISO, and other styles
29

Mitchell, B. J. "GLOBAL EXPLORATION OF TITAN’S CLIMATE: OFF THE SHELF TECHNOLOGY AND METHODS AS AN ENABLER." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608541.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
Recent narrow band imagery of the surface of Titan reveals a very non-uniform surface. While there are no global oceans of liquid ethane/methane as once conjectured, the imagery does suggest the possibility of seas or lakes of liquid ethane, methane, and other organic materials. If these exist, Titan could be considered a gigantic analog model of the Earth's climate system complete with land masses, moderately thick atmosphere, and large bodies of liquid. By studying the climate of Titan, we could gain further understanding of the processes and mechanisms that shape the Earth's climate. Reuse of existing technology and methods may be a way to speed development and lower costs for the global study of Titan. Surprisingly, one of the key technologies could be a Transit or Global Positioning System (GPS) descendant for use in tracking probes wandering the surface of Titan.
APA, Harvard, Vancouver, ISO, and other styles
30

Lundergan, Ryan W. "Parking regulation strategies and policies to support transit-oriented development." Amherst, Mass. : University of Massachusetts Amherst, 2009. http://scholarworks.umass.edu/theses/365/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Louisell, William. "A Framework and Analytical Methods for Evaluation of Preferential Treatment for Emergency and Transit Vehicles at Signalized Intersections." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26820.

Full text
Abstract:
Preferential treatments are employed to provide preemption for emergency vehicles (EV) and conditional priority for transit vehicles at signalized intersections. EV preemption employs technologies and signal control strategies seeking to reduce emergency vehicle crash potential and response times. Transit priority employs the same technologies with signal control strategies seeking to reduce travel time and travel time variability. Where both preemption and priority technologies are deployed, operational strategies deconflict simultaneous requests. Thus far, researchers have developed separate evaluation frameworks for preemption and priority. This research addresses preemption and priority signal control strategies in both breadth and depth. In breadth, it introduces a framework that reveals the planning interdependence and operational interaction between preemption and priority, from the controlling strategy down to roadway hardware operation, under the inclusive title of preferential treatment. This fills a current gap in evaluation. In depth, this research focuses on the evaluation of EV preemption. There are two major analytical contributions resulting from this research. The first is a method to evaluate the safety benefits of preemption based on conflict analysis. The second is an algorithm, suitable for use in future traffic simulation models, that incorporates the impact of auto driver behavior into the determination of travel time savings for emergency vehicles operating on signalized arterial roadways. These two analytical methods are a foundation for future research that seeks to overcome the principal weakness of current EV preemption evaluation: current methods, which rely on modeling and simulation tools, do not consider the unique auto driver behaviors observed when emergency vehicles are present.
This research capitalizes on data collected during a field operational test in Northern Virginia, which included field observations of emergency vehicles traversing signalized intersections under a wide variety of geometric, traffic flow, and signal operating conditions. The methods provide a means to quantify the role of EV preemption in reducing the number and severity of conflict points and the delay experienced at signalized intersections. This forms a critical basis for developing deployment and operational guidelines, and eventually, warrants.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Shulman, Katharine. "Paintings in Transit: A New Means For Protection of Collections, Balancing Traditional and Modern Conservation Philosophies and Methods." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/697.

Full text
Abstract:
This thesis examines the motives behind museum loan and exchange policies and how loans can act as a new avenue for protecting and preserving paintings. When borrowed artworks are transported from one museum to another, there are many variables that come into play and careful planning is necessary for the safety of the object. Specific case studies of several well-known and respected museums that engage in and encourage loans will explore the complex nature of lending policies, their effect on the condition of artworks and their significance and impact on the museum world. This thesis culminates in an analysis of the loan fitness and possibility for transport of Joachim Beich’s Untitled, a painting in the Scripps College collection.
APA, Harvard, Vancouver, ISO, and other styles
33

Rackham, Benjamin V., Dániel Apai, and Mark S. Giampapa. "The Transit Light Source Effect: False Spectral Features and Incorrect Densities for M-dwarf Transiting Planets." IOP PUBLISHING LTD, 2018. http://hdl.handle.net/10150/627040.

Full text
Abstract:
Transmission spectra are differential measurements that utilize stellar illumination to probe transiting exoplanet atmospheres. Any spectral difference between the illuminating light source and the disk-integrated stellar spectrum due to starspots and faculae will be imprinted in the observed transmission spectrum. However, few constraints exist for the extent of photospheric heterogeneities in M dwarfs. Here we model spot and faculae covering fractions consistent with observed photometric variabilities for M dwarfs and the associated 0.3-5.5 μm stellar contamination spectra. We find that large ranges of spot and faculae covering fractions are consistent with observations, and that corrections assuming a linear relation between variability amplitude and covering fractions generally underestimate the stellar contamination. Using realistic estimates for spot and faculae covering fractions, we find that stellar contamination can be more than 10× larger than the transit depth changes expected for atmospheric features in rocky exoplanets. We also find that stellar spectral contamination can lead to systematic errors in radius and therefore in the derived density of small planets. In the case of the TRAPPIST-1 system, we show that TRAPPIST-1's rotational variability is consistent with spot covering fractions f_spot = 8(+18/-7)% and faculae covering fractions f_fac = 54(+16/-46)%. The associated stellar contamination signals alter the transit depths of the TRAPPIST-1 planets at wavelengths of interest for planetary atmospheric species by roughly 1-15× the strength of planetary features, significantly complicating JWST follow-up observations of this system. Similarly, we find that stellar contamination can lead to underestimates of the bulk densities of the TRAPPIST-1 planets of Δρ = -8(+7/-20)%, thus leading to overestimates of their volatile contents.
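The contamination described here is commonly expressed as a multiplicative scaling of the observed transit depth by the unocculted heterogeneity; a sketch of that single-component form (the covering fraction and flux ratio below are made up for illustration, not values from the paper):

```python
def contamination_factor(f_het, flux_ratio):
    """Scaling of the observed transit depth when a fraction f_het of the
    unocculted stellar disk emits flux_ratio = F_het / F_phot relative to
    the immaculate photosphere.  Cool spots (flux_ratio < 1) give a factor
    above one, making the transit appear too deep."""
    return 1.0 / (1.0 - f_het * (1.0 - flux_ratio))

# Made-up example: spots on 8% of the disk, 40% as bright as the photosphere.
eps = contamination_factor(0.08, 0.40)  # > 1: observed depth is inflated
```

Because the flux ratio is wavelength dependent, this factor varies across the spectrum and can mimic or mask genuine atmospheric features.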
APA, Harvard, Vancouver, ISO, and other styles
34

Kuhlman, Kristopher Lee. "Laplace Transform Analytic Element Method for Transient Groundwater Flow Simulation." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/193735.

Full text
Abstract:
The Laplace transform analytic element method (LT-AEM) applies the traditionally steady-state analytic element method (AEM) to the Laplace-transformed diffusion equation (Furman and Neuman, 2003). This strategy preserves the accuracy and elegance of the AEM while extending the method to transient phenomena. The approach taken here utilizes eigenfunction expansion to derive analytic solutions to the modified Helmholtz equation, then back-transforms the LT-AEM results with a numerical inverse Laplace transform algorithm. The two-dimensional elements derived here include the point, circle, line segment, ellipse, and infinite line, corresponding to polar, elliptical and Cartesian coordinates. Each element is derived for the simplest useful case, an impulse response due to a confined, transient, single-aquifer source. The extension of these elements to include effects due to leakage, unconfined flow, multiple aquifers, wellbore storage, and inertia is shown for a few simple elements (point and line), with ready extension to other elements. General temporal behavior is achieved using convolution between these impulse and general time functions; convolution allows the spatial and temporal components of an element to be handled independently. Comparisons are made between inverse Laplace transform algorithms; the accelerated Fourier series approach of de Hoog et al. (1982) is found to be the most appropriate for LT-AEM applications. An application and synthetic examples are shown for several illustrative forward and parameter estimation simulations to illustrate LT-AEM capabilities. Extension of LT-AEM to three-dimensional flow and non-linear infiltration is discussed.
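The numerical back-transform step can be illustrated with the compact Gaver-Stehfest algorithm (a simpler alternative to the de Hoog accelerated-Fourier-series method the dissertation recommends):

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform:
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln2 / t), with N even.
    Works well for smooth, non-oscillatory f(t)."""
    assert N % 2 == 0
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        total += (-1) ** (k + N // 2) * v * F(k * ln2 / t)
    return total * ln2 / t

# F(s) = 1/(s+1) inverts to exp(-t); check at t = 1.
```

The de Hoog approach is preferred in LT-AEM because it also handles oscillatory and late-time behavior that Stehfest's real-axis sampling struggles with.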
APA, Harvard, Vancouver, ISO, and other styles
35

Grigorian, Zachary. "Modeling, Discontinuous Galerkin Approximation and Simulation of the 1-D Compressible Navier Stokes Equations." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93197.

Full text
Abstract:
In this thesis we derive time dependent equations that govern the physics of a thermal fluid flowing through a one dimensional pipe. We begin with the conservation laws described by the 3D compressible Navier-Stokes equations. This model includes all residual terms resulting from the 1D flow approximations. The final model assumes that all the residual terms are negligible which is a standard assumption in industry. Steady state equations are obtained by assuming the temporal derivatives are zero. We develop a semi-discrete model by applying a linear discontinuous Galerkin method in the spatial dimension. The resulting finite dimensional model is a differential algebraic equation (DAE) which is solved using standard integrators. We investigate two methods for solving the corresponding steady state equations. The first method requires making an initial guess and employs a Newton based solver. The second method is based on a pseudo-transient continuation method. In this method one initializes the dynamic model and integrates forward for a fixed time period to obtain a profile that initializes a Newton solver. We observe that non-uniform meshing can significantly reduce model size while retaining accuracy. For comparison, we employ the same initialization for the pseudo-transient algorithm and the Newton solver. We demonstrate that for the systems considered here, the pseudo-transient initialization algorithm produces initializations that reduce iteration counts and function evaluations when compared to the Newton solver. Several numerical experiments were conducted to illustrate the ideas. Finally, we close with suggestions for future research.
Master of Science
In this thesis we derive time dependent equations that govern the physics of a fluid flowing through a one dimensional pipe. This model includes all error terms that result from 1D modeling approximations. The final model assumes that all of these error terms are negligible which is a standard assumption in industry. Steady state equations result when all time dependence is removed from the 1D equations. We approximate the true solution by a discontinuous piece-wise linear function. Standard techniques are used to solve for this approximate solution. We investigate two methods for solving the steady state equations. In the first method, one makes an educated guess about the solution profile and uses Newton’s method to solve for the true solution. The second method, pseudo-transient initialization, attempts to improve this initial guess through dynamic simulation. In this method, an initial guess is treated as the initial conditions for dynamic simulation. The dynamic simulation is then run for a fixed amount of time. The solution at the end of the simulation is the improved initial guess for Newton’s method and is used to solve for the steady state profile. To test the pseudo-transient initialization, we determine the number of function evaluations required to obtain the steady state solution for an initial guess with and without performing pseudo-transient initialization on it. We demonstrate that for the systems considered here, the pseudo-transient initialization algorithm reduced overall computational costs. Also, we observe that non-uniform meshing can significantly reduce model size while retaining accuracy. Several numerical experiments were conducted to illustrate these ideas. Finally, we close with suggestions for future research.
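The two-stage idea, marching in pseudo-time to a rough profile and then polishing with Newton's method, can be shown on a scalar toy equation (a stand-in for illustration, not the thesis's Navier-Stokes model):

```python
import math

def steady_state(F, dF, u0, dt=0.5, march_steps=40, tol=1e-12):
    """Pseudo-transient initialization: march u' = F(u) in pseudo-time with
    explicit Euler to get near the steady state, then polish the root of
    F(u) = 0 with Newton's method."""
    u = u0
    for _ in range(march_steps):   # stage 1: dynamic simulation toward steady state
        u += dt * F(u)
    for _ in range(50):            # stage 2: Newton iterations from the marched guess
        step = F(u) / dF(u)
        u -= step
        if abs(step) < tol:
            break
    return u

# Toy steady state u = cos(u), i.e. F(u) = cos(u) - u, from a poor guess.
root = steady_state(lambda u: math.cos(u) - u,
                    lambda u: -math.sin(u) - 1.0,
                    u0=-10.0)
```

The pseudo-time march is cheap and robust far from the solution, while Newton supplies the fast final convergence, which is the trade-off the thesis quantifies in function evaluations.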
APA, Harvard, Vancouver, ISO, and other styles
36

Das, Samik. "Ultrasonic Field Modeling in Non-Planar and Inhomogeneous Structures Using Distributed Point Source Method." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/195602.

Full text
Abstract:
The ultrasonic wave field is modeled inside non-planar and inhomogeneous structures using a newly developed mesh-free semi-analytical technique called the Distributed Point Source Method (DPSM). The wave field inside a corrugated plate, a non-planar structure, is modeled using DPSM when the structure is excited by a bounded acoustic beam generated by a finite-size transducer. The ultrasonic field is computed both inside the plate and in the surrounding fluid medium. It is observed that the reflected beam strength is weaker for the corrugated plate than for the flat plate, as expected, whereas the backward scattering is found to be stronger for the corrugated plate. DPSM-generated results in the surrounding fluid medium are compared with experimental results. The ultrasonic wave field is also modeled inside inhomogeneous structures. Two types of inhomogeneity are considered: a circular hole and a damaged layered half-space. Elastic wave scattering inside a half-space containing a circular hole is first modeled using DPSM when the structure is excited with a bounded acoustic beam. Then the ultrasonic wave field is computed in the presence and absence of a defect in a layered half-space. For the layered problem geometry it is shown how the layer material influences the amount of energy that propagates through the layer and penetrates into the solid half-space when the solid structure is struck by a bounded acoustic beam. It is also shown how the presence of a crack and the material properties of the layer affect the ultrasonic fields inside the solid and fluid media. After solving the above problems in the frequency domain, the DPSM technique is extended to produce time-domain results via the Fast Fourier Transform. Time histories are obtained for a bounded beam striking an elastic half-space. Numerical results are generated for normal and inclined incidences, for defect-free and cracked half-spaces. A good deal of useful information that is hidden in the steady-state response can be obtained from the transient results.
APA, Harvard, Vancouver, ISO, and other styles
37

Bezuidenhout, Johannes Jurie. "Convective heat flux determination using surface temperature history measurements and an inverse calculation method." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/35706.

Full text
Abstract:
Effective gages to measure skin friction and heat transfer have been established over decades. Among the most important criteria in designing such a gage are its physical size, which must be small to minimise interference with the flow, and the mass of the device. The combined measurement of skin friction and heat flux with a single gage, on the other hand, presents unique opportunities and, with them, unique technical problems.

The objective of this study is therefore to develop a cost-effective single gage that can be used to measure both skin friction and heat flux. The method proposed in this study is to install a coaxial thermocouple into an existing skin friction gage to measure the unsteady temperature on the surface of the gage. Using the temperature history and a computer program, the heat flux through the surface can be obtained through an iterative guessing method. To ensure that the heat flux through the gage is similar to the heat flux through the rest of the surface, the gage is manufactured from a material very similar to that of the rest of the surface.

Walker developed a computer program capable of predicting the heat flux through a surface from the measured surface temperature history. The program is based on an inverse approach to calculating the heat flux through the surface. The biggest advantages of this method are its stability and the small amount of noise it induces into the system. The drawback of the method is that it is limited to semi-infinite objects. For surfaces of finite thickness, a second thermocouple was installed into the system some distance below the first thermocouple. By modifying the computer program, these two unsteady temperatures can be used to predict the heat flux through a surface of finite thickness.

As part of this study, the effect of noise induced by the Cook-Felderman technique found in the literature was investigated in detail, and it was concluded that the method proposed in this study is superior to the Cook-Felderman method. Heat flux measurements compared well with measurements recorded with heat flux gages; in all cases evaluated the difference was less than 20%. It can therefore be concluded that heat flux gages on their own can measure surface heat flux very accurately. These gages are, however, too large to install in a skin-friction gage. The method introduced in this study is noisier than the heat flux gages on their own, but its size, which is very important, is orders of magnitude smaller when a coaxial thermocouple is used to measure the surface temperature history.
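For reference, the classical Cook-Felderman discretization of the semi-infinite inverse relation, against which the proposed method is compared, can be sketched as follows (material properties below are arbitrary test values, not from the thesis):

```python
import math

def cook_felderman(times, temps, rho_c_k):
    """Cook-Felderman evaluation of surface heat flux from a sampled
    surface-temperature history on a semi-infinite solid, assuming the
    temperature varies linearly between samples.  rho_c_k is the product
    density * specific heat * thermal conductivity."""
    n = len(times) - 1
    q = 0.0
    for i in range(1, n + 1):
        q += (temps[i] - temps[i - 1]) / (
            math.sqrt(times[n] - times[i - 1]) + math.sqrt(times[n] - times[i]))
    return 2.0 * math.sqrt(rho_c_k / math.pi) * q

# For a constant flux q0 into a semi-infinite solid the surface temperature
# is T(t) = 2*q0*sqrt(t / (pi * rho_c_k)), so the formula should recover q0.
```

Because each sample's temperature difference enters the sum directly, measurement noise propagates straight into the flux estimate, which is the noise issue discussed above.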
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Mianzhi. "Numerical Analysis of Transient Teflon Ablation with a Domain Decomposition Finite Volume Implicit Method on Unstructured Grids." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-theses/284.

Full text
Abstract:
This work numerically investigates the process of Teflon ablation using a finite-volume discretization, implicit time integration, and a domain decomposition method in three dimensions. The interest in Teflon stems from its use in Pulsed Plasma Thrusters and in thermal protection systems for reentry vehicles. The ablation of Teflon is a complex process that involves phase transition, a receding external boundary where the heat flux is applied, an interface between a crystalline and an amorphous (gel) phase, and a depolymerization reaction which happens on and beneath the ablating surface. The mathematical model used in this work is based on a two-phase model that accounts for the amorphous and crystalline phases as well as the depolymerization of Teflon in the form of an Arrhenius reaction equation. The model also accounts for temperature-dependent material properties and for unsteady heat inputs and boundary conditions in 3D. The model is implemented in 3D domains of arbitrary geometry with a finite volume discretization on unstructured grids. The numerical solution of the transient reaction-diffusion equation coupled with the Arrhenius-based ablation model advances in time using an implicit Crank-Nicolson scheme. For each time step, the implicit time advance is decomposed into multiple sub-problems by a domain decomposition method, and each sub-problem is solved in parallel by a Newton-Krylov nonlinear solver. After each implicit time step, the rate of ablation and the fraction of depolymerized material are updated explicitly with the Arrhenius-based ablation model. After the computation, the ablation front and the melting surface are recovered by post-processing from the scalar fields of the fraction of depolymerized material and the fraction of melted material. The code is verified against analytical solutions for the heat diffusion problem and the Stefan problem, and validated against experimental data of Teflon ablation. The verification and validation demonstrate the ability of the numerical method to simulate three-dimensional ablation of Teflon.
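The Arrhenius reaction form mentioned in the abstract is standard; a sketch with illustrative parameters (not Teflon-specific values):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(T, A, Ea):
    """Arrhenius rate k = A * exp(-Ea / (R*T)); the steep growth with
    temperature is what confines the depolymerization reaction to a thin
    layer near the heated surface."""
    return A * math.exp(-Ea / (R_GAS * T))

# Illustrative pre-exponential factor and activation energy (not Teflon data):
# the rate grows by several orders of magnitude between 600 K and 800 K.
```

This exponential temperature dependence is also why the reaction source term makes the coupled reaction-diffusion system stiff, motivating the implicit Crank-Nicolson treatment.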
APA, Harvard, Vancouver, ISO, and other styles
39

Carvalho, Rafael Aleixo de. "Tempo de transito em meios com isotropia transversal vertical (VTI) : aproximações e inversão dos parametros." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307310.

Full text
Abstract:
Advisor: Jorg Dietrich Wilhelm Schleicher
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Resumo: Como os alvos de exploração tornaram-se mais profundos, os comprimentos dos cabos têm aumentado em conformidade, fazendo a aproximação hiperbólica convencional produzir tempos de trânsito cada vez mais imprecisos. Em outras palavras, para as modernas geometrias de aquisição para grandes afastamentos, a aproximação hiperbólica já não é suficiente para horizontalizar a família CMP por causa da não homogeneidade ou anisotropia dos meios. Para resolver este problema, muitas fórmulas para o tempo de trânsito foram propostas na literatura que fornecem aproximações de qualidade diferente. Demonstrou-se que para meios com isotropia transversal vertical (meios VTI), apenas dois parâmetros do tempo de trânsito são suficientes para a realização de todo o processamento temporal, sendo a velocidade NMO e um parâmetro de anisotropia. Por isso, nesta tese, nos concentramos, na dedução de aproximações simples para o tempo de trânsito que dependem de um único parâmetro de anisotropia. Começamos por dar uma visão geral de uma coleção de tais aproximações para o tempo de trânsito encontradas na literatura e comparar suas qualidades. Em seguida, deduzimos um conjunto de novas aproximações para o tempo de trânsito que dependem de um parâmetro baseado em aproximações encontradas na literatura. A principal vantagem das nossas aproximações é que algumas delas são expressões analíticas bastante simples que as tornam fáceis de serem utilizadas, ao mesmo tempo que têm a mesma qualidade ou maior que as fórmulas já estabelecidas. Utilizamos estas aproximações para o tempo de trânsito para uma inversão dos parâmetros de anisotropia. Utilizando uma estimativa da velocidade NMO a partir da análise de velocidades hiperbólica, pode-se estimar o parâmetro anisotrópico a partir de uma aproximação para o tempo de trânsito mais geral. Estendemos o procedimento em dois passos utilizando um termo não hiperbólico mais preciso na aproximação para o tempo de trânsito. 
As aproximações para o tempo de trânsito deduzidas permitem predizer o viés na estimativa da velocidade NMO, proporcionando assim um meio de corrigir, tanto a estimativa a velocidade NMO, quanto o conseqüente valor do parâmetro de anisotropia. Por meio de um exemplo numérico, demonstramos que a estimativa dos parâmetros do tempo de trânsito, usando este processo iterativo, apresenta considerável melhora. Palavras-chave: Aproximação para tempos de trânsito, meios VTI, análise de velocidade e geofísica
Abstract: As exploration targets have become deeper, cable lengths have increased accordingly, making the conventional two-term hyperbolic traveltime approximation produce increasingly erroneous traveltimes. In other words, for modern long-offset acquisition geometries, a hyperbolic traveltime approximation is no longer sufficient to flatten the CMP gather because of medium inhomogeneity or anisotropy. To overcome this problem, many traveltime formulas have been proposed in the literature that provide approximations of different quality. It has been demonstrated that for transversely isotropic media with a vertical symmetry axis (VTI media), just two traveltime parameters, the NMO velocity and one anisotropy parameter, are sufficient to perform all time-related processing. Therefore, in this thesis we concentrate on simple traveltime approximations that depend on a single anisotropy parameter. We start by giving an overview of a collection of such traveltime approximations found in the literature and compare their quality. Next, we derive a set of new single-parameter traveltime approximations based on the ones found in the literature. The main advantage of our approximations is that some of them are rather simple analytic expressions that are easy to use, while achieving the same quality as the better of the established formulas. We then use these traveltime approximations for an inversion of the anisotropy parameters. Using an estimate of the NMO velocity from a hyperbolic velocity analysis, one can estimate the anisotropy parameter from a more general traveltime approximation. We extend this two-step procedure using a more accurate nonhyperbolicity term in the traveltime approximation. The traveltime approximations used allow us to predict the bias in the NMO velocity estimate, thus providing a means of correcting both the estimated NMO velocity and the resulting anisotropy parameter value.
By means of a numerical example, we demonstrate that the estimation of the traveltime parameters using this iterative procedure is improved considerably. Keywords: Traveltime approximations, VTI media, velocity analysis, geophysics
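One well-known member of the single-parameter family the abstract surveys is the Alkhalifah-Tsvankin nonhyperbolic moveout (this is a textbook formula for illustration, not one of the thesis' new approximations):

```python
# Hedged sketch: hyperbolic vs. single-parameter nonhyperbolic moveout.
# The nonhyperbolic form is the classic Alkhalifah-Tsvankin equation with
# anellipticity parameter eta; parameter values below are illustrative.
import math

def t_hyperbolic(x, t0, vnmo):
    """Conventional two-term hyperbolic traveltime [s] at offset x [m]."""
    return math.sqrt(t0**2 + (x / vnmo)**2)

def t_nonhyperbolic(x, t0, vnmo, eta):
    """Alkhalifah-Tsvankin nonhyperbolic moveout for VTI media."""
    t2 = (t0**2 + (x / vnmo)**2
          - 2 * eta * x**4
          / (vnmo**2 * (t0**2 * vnmo**2 + (1 + 2 * eta) * x**2)))
    return math.sqrt(t2)

t0, vnmo, eta = 1.0, 2000.0, 0.1        # zero-offset time, NMO velocity, anisotropy
for x in (0.0, 1000.0, 4000.0):         # offsets [m]
    print(x, t_hyperbolic(x, t0, vnmo), t_nonhyperbolic(x, t0, vnmo, eta))
```

At short offsets the two curves coincide; at offsets beyond roughly the reflector depth the quartic correction becomes significant, which is exactly the long-offset regime the abstract describes.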
Doutorado
Geofisica Computacional
Doutor em Matemática Aplicada
APA, Harvard, Vancouver, ISO, and other styles
40

Adams, Bruce Keith. "The use of scintigraphy to study gastric emptying, motility and small intestinal transit in patients who have ingested a selection of common poisons." Doctoral thesis, University of Cape Town, 1995. http://hdl.handle.net/11427/27036.

Full text
Abstract:
Poisoning is common and carries considerable morbidity and mortality. Two to three patients are admitted to the Emergency Unit at Groote Schuur Hospital every day with drug overdose. As absorption occurs in the small intestine, the rates at which ingested poisons pass into and through the small bowel are important factors in determining the amount of poison potentially available for absorption. Although the effects of pharmacological doses of many drugs on gastric emptying and motility are known, information on the effects of higher doses is limited. I investigated patients who took overdoses of certain commonly used drugs to determine their effects on gastric emptying and motility and on small intestinal transit. The study was divided into two parts. One hundred and four patients were studied in Part 1. These patients took overdoses of tricyclic antidepressants (n = 31), carbamazepine (n = 15), phenytoin (n = 12), paracetamol (n = 29) and opioid-paracetamol mixtures (n = 17). They received standard hospital management, of which sorbitol was not a part. Part 2 consisted of sixty-one patients who had sorbitol added to their treatment. These patients had taken overdoses of tricyclic antidepressants (n = 15), carbamazepine (n = 7), phenytoin (n = 8), paracetamol (n = 13) and opioid-paracetamol mixtures (n = 18). The effects of sorbitol on gastric emptying and small intestinal transit were evaluated. A third study, the paracetamol control test, was done on 5 healthy volunteers. Each subject was studied twice: the first time after taking 1 g of paracetamol, and the second time after no drug ingestion.
APA, Harvard, Vancouver, ISO, and other styles
41

Ma, Manyou. "A comparative evaluation of two Synthetic Transmit Aperture with Virtual Source beamforming methods in biomedical ultrasound." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/56244.

Full text
Abstract:
This thesis studies the Synthetic Transmit Aperture with Virtual Source (STA-VS) beamforming method, an emerging technique in biomedical ultrasound that promises better imaging quality than conventional beamforming at the same imaging speed. Several specific realizations of the STA-VS method have been proposed in the literature, and the topic is an active research area. The first part of the thesis examines two realizations of the STA-VS method, namely the Synthetic Aperture Sequential Beamforming (SASB) method and the bi-directional pixel-based focusing (BiPBF) method. Studies are performed with both ultrasound simulation software and a commercial ultrasound scanner's research interface. The studies show that the STA-VS methods can improve the spatial and contrast resolution of ultrasound imaging. Of the two STA-VS methods, the two-stage implementation of SASB has the lower complexity. However, compared to other beamformers, SASB is more susceptible to speed-of-sound (SOS) errors in the beamforming calculations. The second part of the thesis proposes an SOS estimation and correction algorithm. The SOS estimation part of the algorithm is based on second-order polynomial fitting to point scatterers in pre-beamformed data and is specifically applicable to the two-stage realization of the SASB method. The SOS correction part of the algorithm is incorporated into the second-stage beamforming of the SASB method and is shown to improve the spatial resolution of the beamformed image. This algorithm is also adapted to, and tested on, vertical two-layer structures with two distinct SOS's, through simulations and measurements on an in-house phantom; the premise is that two layers can simulate a fat/muscle or fat/organ anatomy. Spatial resolution is shown to improve with the SOS correction.
Future work will investigate whether this two-layer SOS estimation and correction algorithm similarly improves imaging quality in vivo, for example in abdominal ultrasound examinations of overweight patients.
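The SOS-estimation idea (fit a second-order polynomial to a point scatterer's arrival-time profile) can be illustrated with a toy monostatic model. This is a hand-rolled sketch under stated assumptions, not the thesis algorithm: for an element at lateral position x and a scatterer at depth z, the two-way time is t(x) ≈ t0 + x²/(c·z), so the fitted quadratic coefficient a and intercept t0 give c = sqrt(2/(a·t0)).

```python
# Toy sketch (assumptions, not the thesis algorithm): estimate speed of
# sound from the quadratic fit of a point scatterer's arrival-time curve.
import math

c_true, z = 1540.0, 0.03                     # speed of sound [m/s], depth [m]
xs = [(-8 + i) * 0.0003 for i in range(17)]  # element positions [m]
ts = [2.0 * math.sqrt(z**2 + x**2) / c_true for x in xs]  # exact two-way times

# Least-squares fit t = a*x^2 + b*x + t0 via normal equations (pure Python)
S = lambda p, q: sum((x**p) * (t**q) for x, t in zip(xs, ts))
M = [[S(4, 0), S(3, 0), S(2, 0)],
     [S(3, 0), S(2, 0), S(1, 0)],
     [S(2, 0), S(1, 0), len(xs)]]
rhs = [S(2, 1), S(1, 1), S(0, 1)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def col_replaced(m, j, v):
    return [[v[i] if k == j else m[i][k] for k in range(3)] for i in range(3)]

D = det3(M)
a, b, t0 = (det3(col_replaced(M, j, rhs)) / D for j in range(3))

# Paraxial relation: a = 1/(c*z), t0 = 2z/c  =>  c = sqrt(2/(a*t0))
c_est = math.sqrt(2.0 / (a * t0))
print(round(c_est, 1))                       # close to 1540 for small apertures
```

The residual bias comes from the paraxial approximation; it shrinks with aperture, which is why the fit is applied to pre-beamformed data near the scatterer.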
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
42

Al, Marhoon Hussain Hassan. "A Practical Method for Power Systems Transient Stability and Security." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/114.

Full text
Abstract:
Stability analysis methods fall into two major categories: small-signal stability and transient stability analysis. Transient stability methods are further divided into two major classes: numerical methods based on numerical integration, and direct methods. The purpose of this thesis is to study and investigate transient stability analysis using a combination of step-by-step and direct methods based on the Equal Area Criterion. The proposed method is extended to the transient stability analysis of multi-machine power systems. It calculates the potential and kinetic energies of all machines in a power system and then compares the largest group of kinetic energies to the smallest group of potential energies. Based on this comparison, a decision can be made about the stability of the power system. The proposed method is used to simulate the IEEE 39-bus system, and its effectiveness is verified by comparison to results obtained by pure numerical methods.
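The energy balance behind the Equal Area Criterion can be shown in its textbook single-machine form (this is the standard classroom case with zero electrical power during the fault, not the thesis' multi-machine extension): the kinetic energy gained while faulted (accelerating area) must not exceed the decelerating area available after clearing.

```python
# Textbook Equal Area Criterion sketch: single machine, infinite bus,
# Pe = 0 during the fault. Illustrative per-unit values.
import math

Pm, Pmax = 0.8, 2.0                     # mechanical input, peak electrical power [pu]
d0 = math.asin(Pm / Pmax)               # pre-fault operating angle [rad]
dmax = math.pi - d0                     # maximum swing angle [rad]

# Critical clearing angle (standard closed form for Pe = 0 while faulted)
dcr = math.acos((Pm / Pmax) * (dmax - d0) + math.cos(dmax))

# At critical clearing, accelerating area A1 equals decelerating area A2
A1 = Pm * (dcr - d0)                    # kinetic energy gained while faulted
A2 = (Pmax * (math.cos(dcr) - math.cos(dmax))
      - Pm * (dmax - dcr))              # integral of (Pe - Pm) after clearing
print(round(dcr, 4), round(A1 - A2, 12))
```

The thesis generalizes this kinetic-vs-potential energy comparison to groups of machines in a multi-machine system; the sketch shows only the underlying criterion.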
APA, Harvard, Vancouver, ISO, and other styles
43

Koutsouris, Dionissios. "Etude de la deformabilite erythrocytaire par la methode du debit initial de filtration et l'analyse du temps de transit cellulaire." Paris 5, 1987. http://www.theses.fr/1987PA05S015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wood, Stephen L. "Modeling of Pipeline Transients: Modified Method of Characteristics." FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/456.

Full text
Abstract:
The primary purpose of this research was to improve the accuracy and robustness of pipeline transient modeling. An algorithm was developed to model the transient flow in closed tubes for thin-walled pipelines. Emphasis was given to the application of this type of flow to pipelines with small-radius 90° elbows, and an additional loss term was developed to account for the presence of such elbows in a pipeline. The algorithm was integrated into an optimization routine to fit results from the improved model to experimental data, and a web-based interface was developed to facilitate the pre- and post-processing operations. Results showed that including a loss term representing the effects of 90° elbows in the Method of Characteristics (MOC) [1] improves the accuracy of the predicted transients by an order of magnitude. The secondary objectives of pump optimization and blockage detection and removal were investigated, with promising results.
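The MOC that the thesis modifies can be sketched in its classic interior-node form (standard water-hammer textbook equations; the thesis' elbow loss term is not included here, and the constants are illustrative): the C+ characteristic from the upstream neighbor and the C- characteristic from the downstream neighbor are solved simultaneously for the new head and flow.

```python
# Classic Method of Characteristics interior-node update for a single pipe.
# C+: Hp = Cp - B*Qp,  C-: Hp = Cm + B*Qp  =>  Qp = (Cp - Cm)/(2B), Hp = (Cp + Cm)/2
import math

a, g, D, fr = 1200.0, 9.81, 0.1, 0.02   # wave speed [m/s], gravity, diameter, friction
A = math.pi * D**2 / 4                  # pipe cross-sectional area [m^2]
B = a / (g * A)                         # characteristic impedance
dx = 10.0                               # reach length [m]
R = fr * dx / (2 * g * D * A**2)        # friction resistance coefficient

def interior_node(H_up, Q_up, H_dn, Q_dn):
    """New head/flow at a node from its upstream and downstream neighbors."""
    Cp = H_up + B * Q_up - R * Q_up * abs(Q_up)
    Cm = H_dn - B * Q_dn + R * Q_dn * abs(Q_dn)
    Qp = (Cp - Cm) / (2 * B)
    Hp = (Cp + Cm) / 2
    return Hp, Qp

Hp, Qp = interior_node(100.0, 0.01, 99.0, 0.01)
print(round(Hp, 3), round(Qp, 6))
```

The thesis' contribution amounts to augmenting the friction term R with an additional loss representing small-radius 90° elbows, then fitting that term to experimental transients.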
APA, Harvard, Vancouver, ISO, and other styles
45

Zhu, Ruiying. "An eigenmatrices method to obtain transient solutions for the M/M/k:(N/FIFO) queueing system (k=1,2)." Ohio : Ohio University, 1991. http://www.ohiolink.edu/etd/view.cgi?ohiou1183989760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tsia, Man Juliana. "Positron deep level transient spectroscopy in semi-insulating GaAs using the positron velocity transient method." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22424775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

謝敏 and Man Juliana Tsia. "Positron deep level transient spectroscopy in semi-insulating GaAs using the positron velocity transient method." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3122524X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Petersen, Todd H. "A transient solver for current density in thin conductors for magnetoquasistatic conditions." Diss., Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ramos, Gustavo Roberto. "Método multiescala para modelagem da condução de calor transiente com geração de calor : teoria e aplicação." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/133134.

Full text
Abstract:
O presente trabalho trata da modelagem da condução de calor transiente com geração de calor em meios heterogêneos, e tem o objetivo de desenvolver um modelo multiescala adequado a esse fenômeno. Já existem modelos multiescala na literatura relacionados ao problema proposto, e que são válidos para os seguintes casos: (a) o elemento de volume representativo tem tamanho desprezível quando comparado ao comprimento característico macroscópico (e como consequência, a microescala tem inércia térmica desprezível); ou (b) a geração de calor é homogênea na microescala. Por outro lado, o modelo proposto nesta tese, o qual é desenvolvido utilizando uma descrição variacional do problema, pode ser aplicado a elementos de volume representativos finitos e em condições em que a geração de calor é heterogênea na microescala. A discretização temporal (diferenças finitas) e as discretizações espaciais na microescala e na macroescala (método dos elementos finitos) são apresentadas em detalhes, juntamente com os algoritmos necessários para implementar a solução do problema. Nesta tese são apresentados casos numéricos simples, procurando verificar não só o modelo teórico multiescala desenvolvido, mas também a implementação feita. Para tanto, são analisados, por exemplo, (a) casos em que considera-se a microescala um material homogêneo, tornando possível a comparação da solução multiescala com a solução convencional (uma única escala) pelo método dos elementos finitos, e (b) um caso em um material heterogêneo para o qual a solução completa, isto é, modelando diretamente os constituintes no corpo macroscópico, é obtida, tornando possível a comparação com a solução multiescala. A solução na microescala para vários casos analisados nesta tese sofre grande influência da inércia térmica da microescala. Para demonstrar o potencial de aplicação do modelo multiescala, simula-se a cura de um elastômero carregado com negro de fumo. 
Embora a simulação demonstre que a inércia térmica não precise ser considerada para esse caso em particular, a aplicação da presente metodologia torna possível modelar a cura do elastômero diretamente sobre a microescala, uma abordagem até então não utilizada no contexto de métodos multiescala. Essa metodologia abre a possibilidade para futuros aperfeiçoamentos da modelagem do estado de cura.
This work deals with the modeling of transient heat conduction with heat generation in heterogeneous media, and its objective is to develop a multiscale model suited to this phenomenon. Multiscale models related to the proposed problem already exist in the literature, but they are valid only for the following cases: (a) the representative volume element has a negligible size compared to the characteristic macroscopic size (and, as a consequence, the microscale has negligible thermal inertia); or (b) the heat generation is homogeneous at the microscale. The model proposed in this thesis, by contrast, which is developed using a variational description of the problem, can be applied to finite representative volume elements and to conditions in which the heat generation is heterogeneous at the microscale. The time discretization (finite differences) and the space discretizations at both the microscale and the macroscale (finite element method) are presented in detail, together with the algorithms needed to implement the solution of the problem. Simple numerical cases are presented, aiming to verify not only the theoretical multiscale model developed but also its implementation. To this end, we analyze, for instance, (a) cases in which the microscale is taken as a homogeneous material, making it possible to compare the multiscale solution with the conventional single-scale solution by the finite element method, and (b) a case with a heterogeneous material for which the full solution, that is, modeling the constituents directly in the macroscopic body, is obtained, making a comparison with the multiscale solution possible. The microscale solution in several of the cases analyzed in this thesis is strongly influenced by the microscale thermal inertia. To demonstrate the application potential of the multiscale model, the cure of a carbon-black-loaded elastomer is simulated.
Although the simulation shows that the thermal inertia does not have to be considered for this particular case, the present methodology makes it possible to model the cure of the elastomer directly at the microscale, an approach not previously used in the context of multiscale methods. This methodology opens the possibility of future improvements in the modeling of the state of cure.
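The micro-to-macro averaging that such multiscale models build on can be illustrated in its simplest steady form (an assumption-laden sketch, not the thesis model, which handles transient problems with finite RVE thermal inertia and heterogeneous heat generation):

```python
# Minimal homogenization sketch: steady effective conductivity of a 1D
# layered RVE (series arrangement -> thickness-weighted harmonic mean).
# Material values are illustrative placeholders.
def effective_conductivity(layers):
    """layers: list of (thickness, conductivity) pairs in series."""
    total = sum(t for t, _ in layers)
    return total / sum(t / k for t, k in layers)

# Hypothetical two-phase RVE: polymer matrix (0.2 W/mK) with filler (50 W/mK)
keff = effective_conductivity([(0.7, 0.2), (0.3, 50.0)])
print(keff)
```

In the thesis the RVE response is not a single averaged constant: a transient microscale problem is solved at each macroscopic point, so the RVE's own thermal inertia and internal heat generation feed back into the macroscale balance.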
APA, Harvard, Vancouver, ISO, and other styles
50

Erhart, Kevin. "EFFICIENT LARGE SCALE TRANSIENT HEAT CONDUCTION ANALYSIS USING A PARALLELIZED BOUNDARY ELEMENT METHOD." Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2973.

Full text
Abstract:
A parallel domain-decomposition Laplace-transform Boundary Element Method (BEM) algorithm for the solution of large-scale transient heat conduction problems is developed. This is accomplished by building on previous work by the author and adding several new components (most noteworthy is the extension to 3-D) aimed at extending the scope and improving the efficiency of this technique for large-scale problems. A Laplace transform method is utilized to avoid time marching, and a Proper Orthogonal Decomposition (POD) interpolation scheme is used to improve the efficiency of the numerical Laplace inversion process. A detailed analysis of the Stehfest transform (numerical Laplace inversion) is performed to help optimize the procedure for heat transfer problems. A domain decomposition process is described in detail and is used to significantly reduce the size of any single problem for the BEM, which greatly reduces its storage and computational burden. The procedure is readily implemented in parallel and renders the BEM applicable to large-scale transient conduction problems on even modest computational platforms. A major benefit of the Laplace-space approach described herein is that it readily allows adaptation and integration of traditional BEM codes, as the resulting governing equations are time independent. This work includes the adaptation of two such traditional BEM codes for steady-state heat conduction, in both two and three dimensions. Verification and validation example problems are presented that show the accuracy and efficiency of the techniques. Additionally, comparisons to commercial Finite Volume Method results are shown to further prove the effectiveness.
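The Stehfest (Gaver-Stehfest) numerical Laplace inversion analyzed in this abstract has a compact generic form. The sketch below is the standard textbook algorithm, not the thesis implementation, and omits the POD interpolation step:

```python
# Gaver-Stehfest numerical inversion of the Laplace transform:
# f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t), N even.
import math

def stehfest_weights(N):
    """Stehfest coefficients V_k for even N (N ~ 10-14 in double precision)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_weights(N)
    ln2t = math.log(2.0) / t
    return ln2t * sum(Vk * F(k * ln2t) for k, Vk in enumerate(V, start=1))

# Example: F(s) = 1/(s+1) has inverse f(t) = exp(-t)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))
```

Each evaluation of F(s) here stands in for one steady-like BEM solve at a real transform parameter, which is what makes the scheme attractive for reusing steady-state BEM codes, and also why the thesis accelerates repeated inversions with POD interpolation.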
M.S.M.E.
Department of Mechanical, Materials and Aerospace Engineering;
Engineering and Computer Science
Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles