
Dissertations / Theses on the topic 'COSα TECHNIQUE'


Consult the top 50 dissertations / theses for your research on the topic 'COSα TECHNIQUE.'


1

Pereira, Alex Lopes. "A cost-effective background subtraction technique." Instituto Tecnológico de Aeronáutica, 2008. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=596.

Full text
Abstract:
Background subtraction is an important task in image processing because its results feed algorithms that recognize more complex object behaviours. The proposed technique extracts movement evidence from three differences: 1) between two consecutive frames; 2) between the current frame and the fourth-previous frame; and 3) between the current frame and a background model. These evidences are combined by adding complementary values before applying thresholds. This strategy, combined with an "iterate only once" requirement, leads to a cost-effective background subtraction technique. The main contribution of this work is a novel pixel classification metric. The technique was further extended by the following incremental improvements: 1) a half-connected filter that fulfils the "iterate only once" requirement; 2) an extension of a simple and efficient shadow filter; and 3) a quick way to evaluate the accuracy of background subtraction techniques, based on a Genetic Algorithm (GA) and a distributed processing environment. Compared to recent research, the proposed technique gives better performance and accuracy, the latter owing to an optimization process using the Genetic Algorithm. In tests on an Intel Pentium Dual Core 1.60GHz microprocessor with 1GB RAM, up to 376 frames per second (FPS) of 160x120 colour images were classified using this technique.
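The "add complementary evidences, then threshold once" strategy this abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's actual algorithm: the equal weighting of the three differences and the threshold value are assumptions.

```python
import numpy as np

def movement_evidence(prev, cur, older, background, threshold=30):
    """Hypothetical sketch: sum three complementary absolute differences
    (consecutive frame, 4th-previous frame, background model) and apply a
    single threshold. Weights and threshold are illustrative assumptions."""
    d1 = np.abs(cur.astype(int) - prev.astype(int))        # consecutive frames
    d2 = np.abs(cur.astype(int) - older.astype(int))       # 4th-previous frame
    d3 = np.abs(cur.astype(int) - background.astype(int))  # background model
    combined = d1 + d2 + d3       # combine evidences before thresholding
    return combined > threshold   # one pass, one boolean foreground mask
```

Adding the evidences before thresholding lets weak but consistent motion cues reinforce each other, instead of each difference having to clear the threshold on its own.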
2

Wise, Michael Anthony. "A variance reduction technique for production cost simulation." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182181023.

Full text
3

Pai, Satish. "Multiplexed pipelining : a cost effective loop transformation technique." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4425.

Full text
Abstract:
Parallel processing has gained increasing importance over the last few years. A key aim of parallel processing is to improve the execution times of scientific programs by mapping them to many processors. Loops form an important part of most computational programs and must be processed efficiently to get superior performance in terms of execution times. Important examples of such programs include graphics algorithms, matrix operations (which are used in signal processing and image processing applications), particle simulation, and other scientific applications. Pipelining uses overlapped parallelism to efficiently reduce execution time.
4

Kennedy, Michael A. D. "Development of Cost Effective Composites using Vacuum Processing Technique." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1523633403784733.

Full text
5

Aras, Tuce. "Cost Analysis Of Sediment Removal Techniques From Reservoir." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610585/index.pdf.

Full text
Abstract:
Siltation in reservoirs is becoming an important problem as dams around the world age. General dam practice has followed the sequence of planning, design, construction and operation of a dam until the accumulated sediment prevents its intended function or functions. Unfortunately, the effects of sedimentation and the fate of abandoned dams are rarely considered. These negative effects could be avoided, the life of the reservoir prolonged, and the reservoir could even last indefinitely by minimizing sedimentation. This study therefore discusses methods for extending reservoir life from hydraulic, economic and applicability points of view. In addition, the open-source package program RESCON, which examines and compares sediment removal techniques both economically and hydraulically, is used in conjunction with several cases, namely Çubuk Dam-I, Borçka Dam and Muratlı Dam. Moreover, sensitivity analyses are carried out in order to scrutinize the program for Turkish economic conditions.
6

Saravi, Mohammad Ebrahimzadeh. "Improving cost estimation accuracy through quality improvement techniques." Thesis, University of Bath, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520948.

Full text
7

Lin, Yang. "Cost-effective radiation hardened techniques for microprocessor pipelines." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/381275/.

Full text
Abstract:
The aggressive scaling of semiconductor devices has caused a significant increase in the soft error rate induced by radiation particle strikes. This has led to a growing need for soft-error tolerance techniques to maintain system reliability, even in sea-level commodity computer products. Conventional radiation-hardening techniques, typically used in safety-critical applications, are prohibitively expensive for non-safety-critical microprocessors in terrestrial environments. Providing effective hardening solutions for general logic in microprocessor pipelines, in particular, is a major challenge and remains open. This thesis studies soft-error effects on modern microprocessors, with the aim of developing cost-effective soft-error mitigation techniques for general logic and providing comprehensive soft-error treatment for commercial microprocessor pipelines. The thesis presents three major contributions. The first proposes two novel radiation-hardened flip-flop architectures, named SETTOFF. A reliability evaluation model, which can statistically analyse the reliability of different circuit architectures, is also developed. Evaluation results for 65nm and 120nm technologies show that SETTOFF provides better error-tolerance capabilities than most previous techniques. Compared to a TMR latch, SETTOFF reduces area, power, and delay overheads by over 50%, 86%, and 78%, respectively. The second contribution proposes a self-checking technique based on the SETTOFF architectures. It overcomes the common limitation of most previous techniques by realising a self-checking capability, which allows SETTOFF to mitigate both errors occurring in the original circuitry and errors occurring in the redundancy added for error tolerance.
Evaluation results demonstrate that the self-checking architecture provides much higher tolerance of multiple-bit upsets, with significantly lower power and delay penalties than the traditional ECC technique for protecting the register file. The third contribution proposes a novel pipeline protection mechanism, achieved by incorporating the SETTOFF-based self-checking cells into the microprocessor pipeline. An architectural replay recovery scheme is developed to recover from the errors detected by the self-checking SETTOFF architecture. The evaluation results show that the proposed mechanism can effectively mitigate both SEUs and SETs occurring in different parts of the pipeline. It overcomes the drawback of most previous pipeline protection techniques and achieves complete, cost-effective pipeline protection.
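For context, the TMR baseline that SETTOFF is compared against reduces to a bitwise majority vote over three redundant copies of a value. This toy sketch shows the voting logic only, not the SETTOFF flip-flop architecture itself:

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote over three redundant copies — the classic
    Triple Modular Redundancy (TMR) baseline. Each output bit takes the
    value agreed on by at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)
```

A single upset in any one copy is out-voted by the other two, which is why full TMR is reliable but expensive in area and power, and why the thesis pursues cheaper alternatives.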
8

Alsuwailem, A. M. "A low-cost microprocessor-based correlator for high bandwidth data." Thesis, University of Bradford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371467.

Full text
9

DeBardelaben, James Anthony. "An optimization-based approach for cost-effective embedded DSP system design." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15757.

Full text
10

Xie, Qing. "Developing cost-effective model-based techniques for GUI testing." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/4061.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
11

Donglikar, Swapneel B. "Design for Testability Techniques to Optimize VLSI Test Cost." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/43712.

Full text
Abstract:
High test data volume and long test application time are two major concerns in testing scan-based circuits. The Illinois Scan (ILS) architecture has been shown to be effective in addressing both issues: it achieves a high degree of test data compression, thereby reducing both test data volume and test application time. The degree of test data volume reduction depends on the fault coverage achievable in the broadcast mode. The fault coverage achieved in the broadcast mode of the ILS architecture, in turn, depends on the actual configuration of the individual scan chains, i.e., the number of chains and the mapping of the circuit's flip-flops to scan chain positions. Current methods for constructing scan chains in ILS are either ad hoc or use test pattern information from an a priori automatic test pattern generation (ATPG) run. In this thesis, we present novel low-cost techniques to construct an ILS scan configuration for a given design. These techniques efficiently utilize circuit topology information and optimize the assignment of flip-flops to scan chain locations without much compromise in broadcast-mode fault coverage, eliminating the need for an a priori ATPG run or any test set information. In addition, we propose a new scan architecture which combines the broadcast mode of ILS with the Random Access Scan architecture to enable further test volume reduction on top of a conventional ILS architecture configured with the aforementioned heuristics, at reasonable area overhead. Experimental results on the ISCAS'89 benchmark circuits show that the proposed ILS configuration methods achieve, on average, 5% more fault coverage in the broadcast mode and 15% more test data volume and test application time reduction than existing methods.
The proposed new architecture achieves, on average, 9% and 33% additional test data volume and test application time reduction, respectively, on top of our proposed ILS configuration heuristics.
Master of Science
12

Fonte, Samuel Vince A. "A cost estimation analysis of U.S. Navy fuel-saving techniques and technologies." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FFonte.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Nussbaum, Daniel A. "September 2009." Description based on title screen as viewed on November 6, 2009. Author(s) subject terms: Energy efficiency, fuel savings, cost of fuel, discount factor, prioritization listing, surface fleet. Includes bibliographical references (p. 37-38). Also available in print.
13

Kelly, Michael A. "A methodology for software cost estimation using machine learning techniques." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273158.

Full text
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, September 1993.
Thesis advisor(s): Ramesh, B. ; Abdel-Hamid, Tarek K. "September 1993." Bibliography: p. 135. Also available online.
14

Norrington, Peter. "Novel, robust and cost-effective authentication techniques for online services." Thesis, University of Bedfordshire, 2009. http://hdl.handle.net/10547/134951.

Full text
Abstract:
This thesis contributes to the study of the usability and security of visuo-cognitive authentication techniques, particularly those relying on recognition of abstract images, an area little researched. Many usability and security problems with linguistic passwords (including traditional text-based passwords) have been known for decades. Research into visually-based techniques intends to overcome these by using the extensive human capacity for recognising images, and add to the range of commercially viable authentication solutions. The research employs a mixed methodology to develop several contributions to the field. A novel taxonomy of visuo-cognitive authentication techniques is presented. This is based on analysis and synthesis of existing partial taxonomies, combined with new and extensive analysis of features of existing visuo-cognitive and other techniques. The taxonomy advances consistent terminology, and coherent and productive classification (cognometric, locimetric, graphimetric and manipulometric, based respectively on recognition of, location in, drawing of and manipulation of images) and discussion of the domain. The taxonomy is extensible to other classes of cognitive authentication technique (audio-cognitive, spatio-cognitive, biometric and token-based, etc.). A revised assessment process of the usability and security of visuo-cognitive techniques is proposed (employing three major assessment categories – usability, memorability and security), based on analysis, synthesis and refinement of existing models. The revised process is then applied to the features identified in the novel taxonomy to prove the process's utility as a tool to clarify both the what and the why of usability and security issues. The process is also extensible to other classes of authentication technique.
Cognitive psychology experimental methods are employed, producing new results which show with statistical significance that abstract images are harder to learn and recall than face or object images. Additionally, new experiments and a new application of the chi-squared statistic show that users' choices of abstract images are not necessarily random over a group, and thus, like other cognitive authentication techniques, can be attacked by probabilistic dictionaries. A new authentication prototype is designed and implemented, embodying the usability and security insights gained. Testing of this prototype shows good usability and user acceptance, although speed of use remains an issue. A new experiment shows that abstract image authentication techniques are vulnerable to phishing attacks. Further, the testing shows two new results: that abstract image visuo-cognitive techniques are usable on mobile phones; and that such phones are not, currently, necessarily a threat as part of observation attacks on visual passwords.
15

Onditi, Victor O. "Decision rationale management : techniques for addressing the cost-benefit dilemma." Thesis, Lancaster University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524730.

Full text
16

Hagood, Nesbitt W. "Cost averaging techniques for robust control of parametrically uncertain systems." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13922.

Full text
17

Gutierrez, Alcala Mauricio Daniel. "Fault tolerance & error monitoring techniques for cost constrained systems." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415792/.

Full text
Abstract:
With technology scaling, the reliability of circuits is becoming a growing concern. The appearance of logic errors in the field, caused by faults escaping manufacturing testing, single-event upsets, aging, or process variations, is increasing. Traditional techniques for online testing and circuit protection often require a high design effort or result in high area overhead and power consumption, and are unsuitable for low-cost systems. This thesis presents three original contributions in the form of low-cost techniques for online error detection and protection in cost-constrained systems. The first contribution is a low-cost fault tolerance design technique that protects the most susceptible workload on the most susceptible logic cones of a circuit, targeting both timing-independent and timing-dependent errors. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. Protecting the 32 most susceptible patterns, average error coverage improvements of 63.5% and 58.2% against errors induced by stuck-at and transition faults, respectively, are achieved compared to an unranked pattern selection and protection. Additionally, this technique produces average error coverage improvements of 163% and 96% against temporary erroneous output transitions and errors induced by bit-flips, respectively. These error coverage improvements incur an area/power cost in the range of 18.0-54.2%, a 145.8-182.0% reduction compared to TMR. The second contribution proposes a low-cost probabilistic online error monitoring technique that raises an alarm signal when systematic erroneous behaviour has occurred over a pre-defined time interval. To detect systematic erroneous behaviour, the collected data is compared on-chip against a signature of error-free behaviour.
Results on the largest circuits demonstrate average error coverage of 84.4% and 73.1% for errors induced by bit-flips and stuck-at faults, respectively, with an average area cost of 1.66%. The final contribution is a circuit approximation technique that can be used for low-cost non-intrusive fault tolerance and concurrent error detection, based on finding functionality at the logic level that behaves similarly to single logic gates or constant values. An algorithm is proposed to select the input subsets to approximate. Results show average coverage of 33.59% of the input space at an average 7.43% area cost. Using these approximate circuits in a reduced TMR scheme results in significant area cost reductions compared to existing techniques.
18

Amigun, Bamikole. "Processing cost analysis of the African biofuels industry with special reference to capital cost estimation techniques." Doctoral thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/5338.

Full text
Abstract:
Includes abstract.
Includes bibliographical references.
Access to energy, in the form of electricity and fuels, is a necessary condition for development. There are several reasons why biofuels are considered important in many African countries, including energy security, environmental concerns, foreign exchange savings and socio-economic opportunities for the rural population. Biofuels such as biogas, biodiesel and bioethanol may be easier to commercialise than other alternatives to crude-oil-derived fuels, considering performance, infrastructure and other factors. Biofuels are in use in a number of developing countries (including some African ones, for example Mauritius, South Africa and Kenya), and have been commercialised in several OECD countries as well as Brazil and China. A good understanding of the production cost of biofuels, and the availability of robust, indigenous cost estimation models, is essential to their eventual commercialisation. However, available process engineering cost estimation relationships and factors are based on plant costs from developed countries, and thus have limited applicability and unknown accuracy when applied to African installations. The need to develop indigenous cost prediction relationships, which are central to economic feasibility studies, is driven not only by the limitations of current databases and methodologies. There is also a requirement for a more systematic presentation of cost data in equation form, which will ensure easier and more rapid use of the data in numerical and economic models, and in preliminary design and plant optimisation, in a time- and cost-effective manner, providing decision-makers with key information in the early design stages of a project.
It is these shortcomings and challenges that this dissertation attempts to address, through an analysis of the economic input factors and the development of more robust, indigenous cost estimation relationships for both capital and operating costs for the biofuels process industry in Africa. The conceptual approach developed within this thesis addresses the current data gaps and deficiencies through analyses of the establishment and operating costs of existing biofuels plants, both on the continent and elsewhere. It aims to determine which factors most influence production cost, and then to modify known cost estimation tools for both capital and operating costs specifically for African biomass-to-biofuel conversion plants, as a function of plant size, feedstock, location, exchange rates, and other site-specific variables. Shortcomings in the use of existing cost estimation models are addressed with the aid of a literature study, supported by an analysis of African biofuels plant establishment (biogas and bio-ethanol) and operating (bio-ethanol) costs. Plant establishment costs are analysed at two different levels of detail, corresponding to the concept development and pre-feasibility phases of the project planning cycle.
19

Trasi, Ashutosh. "Developing a technique to support design concurrent cost estimation using feature recognition." Ohio : Ohio University, 2001. http://www.ohiolink.edu/etd/view.cgi?ohiou1174405391.

Full text
20

Hambly, Catherine. "Cost of flight in small birds using the ¹³C labelled bicarbonate technique." Thesis, University of Aberdeen, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369631.

Full text
21

Xiong, Bo. "Improving cost estimation performance: An investigation of prediction technique and person-environment interaction." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/97935/1/Bo_Xiong_Thesis.pdf.

Full text
Abstract:
Construction cost engineers provide early estimates for projects with limited information, which creates an intense need for efficient tools and involves complex interactions between these professionals and their working environment. This research involved several endeavours, such as the development of a hybrid approach for the overfitting and collinearity problems that frequently occur in cost estimation, and the development of a framework explaining the relationships between the work environment, job satisfaction, work stress and job performance of construction cost engineers. These contributions could improve the estimation performance of cost engineers, which is critical to the successful operation of projects and organisations.
22

PARIKH, NIRAV RAJENDRA. "LOW-COST MULTI GLOBAL POSITIONING SYSTEM FOR SHORT BASELINE ATTITUDE DETERMINATION." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1163482121.

Full text
23

Sampath, Sreedevi. "Cost-effective techniques for user-session-based testing of Web applications." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 1.02 Mb., 169 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220722.

Full text
24

Qin, Wei Wei. "Quick and cost-efficient measurement techniques for high-performance AD converters." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691120.

Full text
25

Izad, Shenas Seyed Abdolmotalleb. "Predicting High-cost Patients in General Population Using Data Mining Techniques." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23461.

Full text
Abstract:
In this research, we apply data mining techniques to nationally representative expenditure data from the US to predict very high-cost patients, those in the top 5 cost percentiles of the general population. Samples are derived from the Medical Expenditure Panel Survey's Household Component data for 2006-2008, comprising 98,175 records. After pre-processing, partitioning and balancing the data, the final MEPS dataset of 31,704 records is modeled with Decision Trees (including C5.0 and CHAID) and Neural Networks. Multiple predictive models are built and their performances analyzed using various measures including classification accuracy, G-mean, and Area Under the ROC Curve (AUC). We conclude that the CHAID tree returns the best G-mean and AUC measures for the top-performing predictive models, ranging from 76% to 85% and from 0.812 to 0.942, respectively. Among a primary set of 66 attributes, the best predictors of the top 5% high-cost population include an individual's overall health perception, history of blood cholesterol checks, history of physical/sensory/mental limitations, age, and history of colonic prevention measures. It is worth noting that we do not consider the number of visits to care providers as a predictor, since it is highly correlated with expenditure and offers no new insight into the data (i.e., it is a trivial predictor): we predict high-cost patients without knowing how many times a patient visited doctors or was hospitalized. Consequently, the results of this study can be used by policy makers, health planners, and insurers to plan and improve the delivery of health services.
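The G-mean measure reported in this abstract is the geometric mean of sensitivity and specificity, which penalizes models that do well on one class at the expense of the other. A minimal sketch (the confusion-matrix counts below are illustrative, not the study's figures):

```python
import math

def g_mean(tp, fn, tn, fp):
    """Geometric mean of sensitivity and specificity, computed from
    confusion-matrix counts: true/false positives and negatives."""
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return math.sqrt(sensitivity * specificity)
```

Because it is a geometric mean, a model that classifies every patient as low-cost scores zero, however high its raw accuracy on an imbalanced dataset like this one.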
26

Negreiros, Marcelo. "Low cost BIST techniques for linear and non-linear analog circuits." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/6225.

Full text
Abstract:
With the ever-increasing demands for high-complexity consumer electronic products, market pressures demand faster product development and lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial percentage of the total production cost. Analog testing costs may dominate the total test cost, as testing of analog circuits usually requires functional verification of the circuit and special testing procedures. For the RF analog circuits commonly used in wireless applications, testing is further complicated by the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit-dependent, requiring reconfiguration of the circuit being tested, and are generally not usable in RF circuits. In the SoC environment, as processing and memory resources are available, they could be used in the test. However, the overhead of adding additional AD and DA converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is always connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase.
Thanks to the simplicity of the converter, it is able to reach higher frequencies, enabling the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics like intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional opamps were obtained at audio frequencies. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products nowadays.
APA, Harvard, Vancouver, ISO, and other styles
27

Worley, Stacy K. "Bearing failure detection in farm machinery using low-cost acoustic techniques." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06302009-040529/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Jeunehomme, Eric J. S. "Design of low cost biomimetic flexible robots using additive manufacturing techniques." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122313.

Full text
Abstract:
Thesis: S.M. in Naval Architecture and Marine Engineering, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 109-112).
In this thesis, I designed and fabricated robots leveraging additive manufacturing. This had two overarching purposes. The first was to make a testing apparatus that would allow measurement of the influence of a flexible flapping foil on a subsequent, in-line foil, with the aim of researching optimized propulsion solutions for underwater vehicles. The second was to show that filament deposition modeling has advanced enough to produce bio-mimetic flexible robots of academic relevance, allowing, at low cost, the making of a number of experimental setups with specific measurements in mind. In order to reach those goals, two versions of a bio-mimetic archer fish of the genus Toxotes were modeled using various software packages. The models were modified to accept actuator assemblies and interface to the electronics, and were built using a modified hobby-grade 3D printer.
by Eric J.S. Jeunehomme.
S.M. in Naval Architecture and Marine Engineering
S.M.
APA, Harvard, Vancouver, ISO, and other styles
29

ZITO, GABRIELLA. "Efficiency and safeness improvement and cost containment strategy in Assisted Reproduction Technique (ART)." Doctoral thesis, Università degli Studi di Trieste, 2017. http://hdl.handle.net/11368/2908123.

Full text
Abstract:
Over the years, progress has been made in the field of Reproductive Medicine, passing from the first treatments based on the spontaneous cycle to the introduction of medical strategies of multiple follicular growth stimulation, which are associated with improved outcomes in terms of the number of mature oocytes retrieved, pregnancy rate and live birth rate. The clinical introduction of GnRH antagonists at the end of the 1990s opened new perspectives in ovarian stimulation strategy. Clinical data obtained from randomized controlled trials have shown the benefits of using these drugs in terms of a lower dose of gonadotropins, shorter duration of treatment and reduced rates of OHSS, with an overall reduction of costs (Al-Inany HG et al., 2011). They also permitted the use of GnRH agonists for induction of final oocyte maturation, in order to reduce OHSS rates as much as possible. The ovarian response to stimulation with exogenous gonadotropins during IVF is a critical determinant of live birth rates and adverse outcomes (R.G. Steward et al., 2014; Sunkara et al., 2011). Healthcare providers and national guidelines recognize the need for individualization of the starting dose of gonadotropin by using predictive factors related to patient characteristics and diagnostic markers of ovarian reserve to attain an optimal oocyte yield while minimizing the risk of an excessive response and OHSS. This research work was carried out in order to identify the best strategies to improve the effectiveness and safety of IVF treatments. The ultimate goal is a reduction of the costs associated with IVF through a reduction of the cycle cancellation rate and the hospitalization rate for ovarian hyperstimulation syndrome (OHSS).
APA, Harvard, Vancouver, ISO, and other styles
30

Malepe, JS. "Perception on the application of cost accounting in the budgeting process of a municipality: A case of the CoT." Tshwane University of Technology, 2014. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001155.

Full text
Abstract:
This study analysed perceptions on the application of cost accounting in the budgeting process, using the City of Tshwane (CoT) municipality as a case. Employee perceptions were analysed to determine whether recognised costing techniques were being applied and, if so, whether those costing techniques were being efficiently and effectively applied. An analysis of employee perceptions of the reliability of the currently implemented costing techniques for the preparation of budget estimates, together with employees' perceptions of management's implementation and maintenance of the budget estimates, as required by legislation, was also conducted. The research instruments comprised questionnaires distributed to all municipal officials at the CoT who are responsible for budgeting within their municipal department or division, and semi-structured interviews conducted with selected officials. Data collected from the participants were descriptively analysed. This study identified a performance gap between the potential of the costing techniques nominally being used and the manner in which CoT budget officials actually apply them. Based on the conclusions drawn from the analysis of the data, recommendations were made. These include that officials of the CoT should improve training on the proper application of currently used costing techniques and that the CoT should conduct a pilot study aimed at introducing transfer pricing (TP) and standard costing as their next-generation budget costing techniques. In addition, including decision-making tools such as cost-volume-profit (CVP) analysis in the costing process can add value to the budget costing process if applied as a cost-volume-service (CVS) analysis.
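As an illustration of the kind of decision-support calculation the study recommends, a cost-volume-profit break-even can be computed directly from the contribution margin. The figures below are invented for the example and do not come from the CoT data.

```python
def break_even_units(fixed_cost, price_per_unit, variable_cost_per_unit):
    """Units of service at which total revenue covers total cost (CVP analysis)."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_cost / contribution_margin

# Hypothetical municipal service: R120,000 fixed cost per period,
# R50 charged per unit of service, R20 variable cost per unit.
units = break_even_units(120_000, 50, 20)   # -> 4000.0 units per period
```

The same relation, rearranged, gives the volume needed to hit a budgeted surplus, which is where CVP feeds into budget estimates.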
APA, Harvard, Vancouver, ISO, and other styles
31

Rodenbeck, Christopher Timothy. "Novel technologies and techniques for low-cost phased arrays and scanning antennas." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1053.

Full text
Abstract:
This dissertation introduces new technologies and techniques for low-cost phased arrays and scanning antennas. Special emphasis is placed on new approaches for low-cost millimeter-wave beam control. Several topics are covered. A novel reconfigurable grating antenna is presented for low-cost millimeter-wave beam steering. The versatility of the approach is proven by adapting the design to dual-beam and circular-polarized operation. In addition, a simple and accurate procedure is developed for analyzing these antennas. Designs are presented for low-cost microwave/millimeter-wave phased-array transceivers with extremely broad bandwidth. The target applications for these systems are mobile satellite communications and ultra-wideband radar. Monolithic PIN diodes are a useful technology, especially suited for building miniaturized control components in microwave and millimeter-wave phased arrays. This dissertation demonstrates a new strategy for extracting bias-dependent small-signal models for monolithic PIN diodes. The space solar-power satellite (SPS) is a visionary plan that involves beaming electrical power from outer space to the earth using a high-power microwave beam. Such a system must have retrodirective control so that the high-power beam always points on target. This dissertation presents a new phased-array architecture for the SPS system that could considerably reduce its overall cost and complexity. In short, this dissertation presents technologies and techniques that reduce the cost of beam steering at microwave and millimeter-wave frequencies. The results of this work should have a far-ranging impact on the future of wireless systems.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Yanshuai. "Optimization of construction time and cost using the ant colony system techniques." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38984362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Soutos, Michail K. "Forecasting Elemental Building cost percentages using regression analysis and neural network techniques." Thesis, University of Manchester, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556705.

Full text
Abstract:
Early stage project estimates are a key component in business decision making, and generally form the basis of the project's ultimate funding. Their strategic importance has long been recognised, leading to increased research and development in cost modelling. For example, research initiated at The University of Manchester resulted in the production of ProCost, early-stage cost estimating software with the ability to forecast the total cost of a proposed building in the form of a single-figure output. This research project commenced with a nationwide questionnaire survey of current cost modelling and elemental cost estimating practice. One of its major findings was that quantity surveyors are not satisfied with single-figure output cost models. Based on this, an investigation into the feasibility of generating an elemental breakdown of the ProCost output was initiated. An investigation into an appropriate elemental output format resulted in the adoption of 17 elements based on the Royal Institution of Chartered Surveyors (RICS) Standard Form of Cost Analysis (SFCA). Models were created for each of these elements using both multiple linear regression and artificial neural network (ANN) techniques. Initially, data from 120 office buildings were collected and modelled using multiple linear regression analysis. The accuracy of these models, as measured by the mean absolute percentage error (MAPE), ranged from 9.2% to 319.6%. Recognising the deficiency of some of these models, the study proceeded by using ANNs as an alternative modelling method. Accepting the requirement of this method for more data cases, a second data collection programme was initiated, extending the database to industrial and residential buildings and resulting in a total of 360 projects. ANNs produced superior models for the majority of the elements, generating MAPEs from 9.7% to 43%.
The final decision support tool presented is a hybrid of these two methods, with 5 of the models based on multiple linear regression and 12 on ANN techniques. The mean MAPE of the 17 models is 22.1%. The model compares favourably against previous cost modelling attempts in terms of accuracy, generalisation, sample size, application spectrum and flexibility. It is anticipated that its utilisation will improve current practice, enabling quantity surveyors (cost estimators) to generate quick elemental estimates at an early design stage. Further, its elemental character will introduce a cross-checking mechanism into the decision-making process, increasing user confidence in the model's application.
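The accuracy measure used throughout, MAPE, is straightforward to compute; the sketch below is illustrative only, with invented elemental cost figures rather than data from the study.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical elemental costs vs. model predictions (e.g. GBP per m2).
actual = [120.0, 80.0, 200.0]
predicted = [110.0, 90.0, 210.0]
error = mape(actual, predicted)   # average of 8.33%, 12.5% and 5% -> about 8.61%
```

Because MAPE is relative, a 10-unit miss on a cheap element weighs more than the same miss on an expensive one, which is worth keeping in mind when comparing models across elements.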
APA, Harvard, Vancouver, ISO, and other styles
34

Bautista-Quintero, Ricardo. "Techniques for the implementation of control algorithms using low-cost embedded systems." Thesis, University of Leicester, 2009. http://hdl.handle.net/2381/8220.

Full text
Abstract:
The feedback control literature has reported success in numerous implementations of systems that employ state-of-the-art components. In such systems, the quality of the computer controller, actuators and sensors is largely unaffected by nonlinear effects, external disturbances and the finite precision of the digital computer. Overall, this type of control system can be designed and implemented with comparative ease. By contrast, in cases when the implementation is based on limited resources, such as low-cost computer hardware along with simple actuators and sensors, there are significant challenges for the developer. This thesis has the goal of simplifying the design of mechatronic systems implemented using low-cost hardware. The approach involves design techniques that enhance the links between feedback control algorithms (in theory) and reliable real-time implementation (in practice). The outcome of this research provides part of a framework that can be used to design and implement efficient control algorithms for resource-constrained embedded computers. The scope of the thesis is limited to situations where 1) the computer hardware has limited memory and CPU performance; 2) sensor-related uncertainties may affect the stability of the plant; and 3) unmodelled dynamics of the actuator(s) limit the performance of the plant. The thesis concludes by emphasising the importance of finding mechanisms to integrate low-cost components with nontrivial robust control algorithms in order to satisfy multi-objective requirements simultaneously.
APA, Harvard, Vancouver, ISO, and other styles
35

Obuh, Isibor Ehi. "Low-cost fabrication techniques for RF microelectromechanical systems (MEMS) switches and varactors." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/21479/.

Full text
Abstract:
A novel low-cost microfabrication technique for manufacturing RF MEMS switches and varactors is proposed. The fabrication process entails laser microstructuring and non-cleanroom micro-lithography using standard wet-bench techniques. An optimized laser microstructuring technique was employed to fabricate the MEMS component members and masks with readily available materials, including aluminium foils and sheets and copper-clad PCB boards. The non-cleanroom micro-lithography process was optimized for the patterning of the MEMS dielectric and bridge support layers, which were derived from deposits of negative-tone photosensitive epoxy-based polymers: SU-8 resins (glycidyl-ether-bisphenol-A novolac) and photoacid-activated ADEX™ dry films. The novel microfabrication technique offers comparatively reasonable yields without intensive cleanroom manufacturing techniques and their associated equipment and processing costs. It is an optimized hybrid rapid-prototyping manufacturing process that reduces build cycles while ensuring good turnaround. The techniques are characterized by analysing each contributing technology and its dependent parameters: laser structuring, lithography, spin coating and thin-film emboss. They are developed for planar substrates and can be modified to suit specific work materials for optimized outcomes. The optimized laser structuring process offers ablation for pitches as small as 75 μm (track width of 50 μm and gap of 25 μm), with a deviation of 3.5% in the structured vector's dimensions relative to the design. The lithography process, also developed for planar and microchannel applications, allows the realization of highly resolved patterned deposits of the SU-8 resin and the laminated ADEX™ polymer from 1 μm to 6 μm with an accuracy of ±0.2 μm. The complete microfabrication technique is demonstrated by realizing test structures consisting of RF MEMS switches and varactors on FR4 substrates.
Both the MEMS structures and the FR4 substrate were integrated by employing the micro-patterned polymers, developed from dry-film ADEX™ and SU-8 deposits, to form a functional composite assembly. An average fabrication yield of up to 60% was achieved, calculated from ten fabrication attempts. The RF measurement results show that the RF MEMS devices fabricated using the novel microfabrication process have good figures of merit, at much lower overall fabrication cost, compared to devices fabricated by a conventional cleanroom process, enabling it to serve as a very good microfabrication process for cost-effective rapid prototyping of MEMS.
APA, Harvard, Vancouver, ISO, and other styles
36

Fernandes, Diogo. "Low-cost implementation techniques for generic square and cross M-QAM constellations." Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/1555.

Full text
Abstract:
CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
This work aims at introducing techniques with reduced computational complexity for hardware implementation of high-order M-ary quadrature amplitude modulation (M-QAM), which may be feasible for broadband communication systems. The proposed techniques cover square and cross M-QAM constellations (even and odd numbers of bits), the hard decision rule, and the derivation of low-order M-QAM constellations from high-order ones. Performance analysis, in terms of bit error rate (BER), is carried out when the M-QAM symbols are corrupted by either additive white Gaussian noise (AWGN) or additive impulsive Gaussian noise (AIGN). The bit error rate performance results show that the performance loss of the proposed techniques is, on average, less than 1 dB, which is a remarkable result. Additionally, the implementation of the proposed techniques in a field programmable gate array (FPGA) device is described and outlined. The results based on FPGA show that the proposed techniques can considerably reduce hardware resource utilization. A remarkable improvement in terms of hardware resource utilization reduction is achieved by using the generic M-QAM technique in comparison with the enhanced heuristic decision rule (HDR) technique and a previously designed technique, the HDR technique. Based on the analyses performed, the enhanced HDR technique is less complex than the HDR technique. Finally, the numerical results show that the generic M-QAM technique can be eight times faster than the other two techniques when a large number of M-QAM symbols (e.g., > 1000) are consecutively transmitted.
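For a square M-QAM constellation with per-axis levels {±1, ±3, …, ±(√M−1)}, the hard decision reduces to rounding each received coordinate to the nearest valid level and clipping to the grid. The sketch below illustrates this low-complexity rule in plain Python; it is not the thesis's FPGA implementation and does not cover the cross-constellation case.

```python
def qam_hard_decision(symbol, M):
    """Map a received complex sample to the nearest square M-QAM point.
    Per-axis levels are {-(m-1), ..., -1, +1, ..., +(m-1)} with m = sqrt(M)."""
    m = int(round(M ** 0.5))
    assert m * m == M, "square constellations only in this sketch"

    def slice_axis(r):
        # Index of the nearest level, then clip to the grid edges.
        k = round((r + (m - 1)) / 2)
        k = max(0, min(m - 1, k))
        return 2 * k - (m - 1)

    return complex(slice_axis(symbol.real), slice_axis(symbol.imag))

# A noisy 16-QAM sample near the (3, -1) constellation point:
decided = qam_hard_decision(complex(2.7, -0.6), 16)   # -> (3-1j)
```

Each axis needs only an add, a shift-like scaling, a rounding and a clip, which is the kind of structure that maps cheaply onto FPGA logic.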
APA, Harvard, Vancouver, ISO, and other styles
37

Salomon, Sophie. "Bias Mitigation Techniques and a Cost-Aware Framework for Boosted Ranking Algorithms." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586450345426827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Tse, Kam Tim. "Cost and benefits of response mitigation techniques for wind-excited tall buildings /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202009%20TSE.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Papayiannis, Andreas. "On revenue management techniques : a continuous-time application to airport carparks." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/on-revenue-management-techniques-a-continuoustime-applicationto-airport-carparks(152e2261-1113-49ba-a846-8d6607024e11).html.

Full text
Abstract:
This thesis investigates the revenue management (RM) problem encountered in an airport carpark of finite capacity, where the available parking spaces should be sold optimally in advance in order to maximise the revenues on a given day. Customer demand is stochastic, where random pre-booking times and stay lengths overlap with each other, a setting that generates strong inter-dependence among consecutive days and hence leads to a complex network optimisation problem. Several mathematical models are introduced to approximate the problem; a model based on a discrete-time formulation which is solved using Monte Carlo (MC) simulations and two single-resource models, the first based on a stochastic process and the other on a deterministic one, both developed in continuous-time that lead to a partial differential equation (PDE). The optimisation for the spaces is based on the expected displacement costs which are then used in a bid-price control mechanism to optimise the value of the carpark. Numerical tests are conducted to examine the methods’ performance under the network setting. Taking into account the methods’ efficiency, the computation times and the resulting expected revenues, the stochastic PDE approach is shown to be the preferable method. Since the pricing structure among operators varies, an adjusted model based on the stochastic PDE is derived in order to facilitate the solution applicable in all settings. Further, for large carparks facing high demand levels, an alternative second-order PDE model is proposed. Finally, an attempt to incorporate more information about the network structure and the inter-dependence between consecutive days leads to a weighted PDE scheme. Given a customer staying on day T, a weighting kernel is introduced to evaluate the conditional probability of stay on a neighbouring day. Then a weighted average is applied on the expected marginal values over all neighbouring days. 
The weighted PDE scheme shows significant improvement in revenue for small-size carparks. The use of the weighted PDE opens the possibility for new ways to approximate network RM problems and thus motivates further research in this direction.
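The bid-price mechanism described here can be illustrated with a toy single-day example: estimate the carpark's value V(c) at capacity c by Monte Carlo simulation of remaining demand, take the expected displacement cost as V(c) − V(c−1), and accept a booking only if its price covers that cost. This is a deliberately simplified sketch with invented demand and price figures (demand approximated by a binomial stand-in for a Poisson), not the thesis's PDE-based network model.

```python
import random

def expected_value(capacity, mean_demand, price, n_sims=20_000, rng_seed=0):
    """Monte Carlo estimate of expected revenue for one day: demand is drawn
    from a binomial approximation to a Poisson, then capped at capacity."""
    rng = random.Random(rng_seed)  # same seed both calls -> common random numbers
    total = 0.0
    for _ in range(n_sims):
        demand = sum(1 for _ in range(40) if rng.random() < mean_demand / 40)
        total += price * min(demand, capacity)
    return total / n_sims

capacity, mean_demand, future_price = 10, 8.0, 30.0
# Expected displacement cost of selling one space now: V(c) - V(c-1).
displacement = (expected_value(capacity, mean_demand, future_price)
                - expected_value(capacity - 1, mean_demand, future_price))
offered_price = 20.0
accept = offered_price >= displacement   # bid-price acceptance rule
```

Reusing the same random stream in both value estimates makes the difference low-variance, so the displacement cost is stable even with modest simulation counts.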
APA, Harvard, Vancouver, ISO, and other styles
40

Laptali, Emel. "Application of optimisation techniques to planning and estimating decisions in the building process." Thesis, University of South Wales, 1996. https://pure.southwales.ac.uk/en/studentthesis/application-of-optimisation-techniques-to-planning-and-estimating-decisions-in-the-building-process(3bc5337e-375b-43b8-acd9-3dc12553eb61).html.

Full text
Abstract:
An integrated computer model for time and cost optimisation has been developed for multi-storey reinforced concrete office buildings. The development of the model has been based on interviews completed with Planners, Estimators and Researchers within 2 of the top 20 (in terms of turnover) UK main contractors, and on published literature, bar charts and bills of quantities of concrete framed commercial buildings. The duration and cost of construction of a typical multistorey reinforced concrete office building is calculated through the first part of the integrated model, i.e. the simulation model. The model provides a set of choices for the selection of materials and plant and possible methods of work. It also requires the user to input the quantities of work, gang sizes and the quantity of plant required, lag values between activities, output rates, unit costs of plant, labour costs and indirect costs. A linked bar chart is drawn automatically by using the data available from the simulation model. The second part of the model, (optimisation) uses the data provided by the simulation part and provides sets of solutions of time vs. cost from which the minimum project cost corresponding to the optimum project duration is calculated under the given schedule restrictions. Linear programming is used for the optimisation problem. The objective function is set to be the minimisation of the project cost which is the total of the direct costs of all the activities creating the project and the indirect costs of the project. The constraints are formulated from the precedence relationship, lag values, and normal and crash values of time and cost for the activities supplied by the simulation model. The simulation part has been validated by comparing and contrasting the results with those methods and practices adopted by commercial planners and estimators. 
The validation of the optimisation part has been undertaken by plotting time-direct cost curves from the results and checking the convexity of the curves. Additionally, the validation procedures included taking account of the opinions of practitioners in the industry on the practical and commercial viability of the model.
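For the special case of a serial network with linear crash-cost slopes, the time-cost trade-off underlying such a model has a simple closed form: crash an activity fully whenever its cost slope (extra direct cost per day saved) is below the indirect cost per day, otherwise leave it at its normal duration. The sketch below illustrates this with invented activity data; the thesis model handles general precedence networks and lag values via linear programming, which this shortcut does not.

```python
# Each activity: (normal_days, crash_days, cost_slope) where the slope is the
# extra direct cost per day saved by crashing (all figures invented).
activities = [(10, 6, 400.0), (8, 5, 900.0), (6, 4, 600.0)]
indirect_cost_per_day = 700.0  # site overheads, preliminaries, etc.

total_days, extra_direct_cost = 0, 0.0
for normal, crash, slope in activities:
    if slope < indirect_cost_per_day:   # a day saved is worth more than it costs
        total_days += crash
        extra_direct_cost += slope * (normal - crash)
    else:
        total_days += normal

indirect_cost = total_days * indirect_cost_per_day
```

Here only the middle activity stays at its normal duration (its slope of 900 exceeds the 700/day overhead), giving an 18-day schedule; a deadline or non-serial logic would require the full LP formulation.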
APA, Harvard, Vancouver, ISO, and other styles
41

曾伯裕 and Pak-yu Tsang. "Application of life cycle costing (LCC) technique in Hong Kong warehouse industry." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31251626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Altrabsheh, Bilal. "Investigation of low cost techniques for realising microwave and millimeter-wave network analysers." Thesis, University of Surrey, 2003. http://epubs.surrey.ac.uk/843309/.

Full text
Abstract:
The work presented in this thesis is on the development of reliable low-cost measurement systems for measuring microwave and millimetre-wave devices. The purpose of this work is to find techniques which use multiple power detectors and can measure magnitude and phase without the need for expensive superheterodyne receivers. Two novel microwave measurement systems have been designed with the intention of providing a measurement facility which enables the characterisation of both active and passive devices in terms of their scattering parameters. The first method is based on using a multistate reflectometer, which uses dielectric waveguide in the frequency range of 110 GHz up to 170 GHz. The dielectric multistate reflectometer is a four-port reflectometer which uses a programmable phase shifter to give a flat relative phase shift over the entire frequency range of the dielectric waveguides used in the multistate reflectometer. The phase shifter has an eccentric rotating cylinder with an offset axis to allow a number of different phase shifts to the wave travelling in the dielectric waveguides of the multistate reflectometer. This system has been developed as an equivalent to a one-port network analyser. The second method is based on using the multi-probe reflectometer, in which the standing wave in a line is measured using a number of fixed detector probes. A microstrip line prototype in the frequency range of 1 GHz to 5.5 GHz has been demonstrated, and the design of a monolithic microwave integrated circuit (MMIC) version for the frequency range of 40 GHz to 325 GHz has been carried out. Improved methods of calibration of the system have been derived, as well as different methods for error correction. The realisation of a full two-port network analyser using the technique has been demonstrated.
Key words: dielectric multistate reflectometer, programmable phase shifter, multi-probe reflectometer, detection, microwave measurement, millimetre-wave measurement, calibration, error corrections.
APA, Harvard, Vancouver, ISO, and other styles
43

Abcarius, John. "High-speed/low-cost Delta-Sigma modulation techniques for analog-to-digital conversion." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0027/MQ50588.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Abcarius, John 1972. "High-speed low-cost Delta-Sigma modulation techniques for analog-to-digital conversion." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20898.

Full text
Abstract:
As digital electronics becomes increasingly popular, the need for efficient data conversion to provide the link to our analog world grows all the more important. To sustain the current rate of technological advancement, the requirements on the data conversion systems are becoming more stringent. Wireless communication systems demand high speed, high performance analog-to-digital conversion front-ends. Furthermore, consumers demand quality electronics at low cost, which precludes the use of expensive analog processes.
This thesis investigates the potential of Delta-Sigma modulation techniques in addressing both of these issues through the design, implementation and experimentation of several prototype integrated circuits. Delta-Sigma modulation has recently become widely recognized for its ability to perform high-performance data conversion without the use of high-precision components. To extend these benefits to wireless applications, a novel eighth-order bandpass Delta-Sigma modulator for A/D conversion will be presented. The modulator design is developed beginning at the signal processing level and realized in a 0.8 μm BiCMOS process using the switched-capacitor (SC) technique. To address the cost issue, the design of a data conversion system based on the Delta-Sigma modulation technique using an economical, purely digital CMOS implementation is investigated. The distortion performance of experimental prototypes implemented using switched-capacitor (with capacitors realized using MOSFETs) and switched-current techniques is assessed.
This work therefore contributes to the ongoing drive to improve the performance and applicability of the Delta-Sigma modulation technique in meeting modern-day data conversion needs.
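The core of Delta-Sigma modulation is easy to see in simulation: an integrator accumulates the error between the input and a 1-bit quantized feedback, and the average of the output bit-stream tracks the input without any precision components. The first-order behavioural model below is a conceptual sketch only, far simpler than the eighth-order bandpass modulator presented in the thesis.

```python
def first_order_delta_sigma(inputs):
    """1-bit first-order Delta-Sigma modulator: integrate the input-minus-feedback
    error and quantize its sign; the output bit density encodes the input level."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in inputs:
        integrator += x - feedback          # accumulate quantization error
        bit = 1 if integrator >= 0 else 0   # 1-bit quantizer
        feedback = 1.0 if bit else -1.0     # DAC feedback levels of +/-1
        bits.append(bit)
    return bits

# A DC input of 0.3 (full scale [-1, 1]): the mean of the +/-1 output stream
# converges to the input value as the stream grows.
bits = first_order_delta_sigma([0.3] * 10_000)
avg = sum(2 * b - 1 for b in bits) / len(bits)
```

Because the integrator stays bounded, the long-run mean of the bit-stream can differ from the input only by O(1/N), which is the noise-shaping property that decimation filters then exploit.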
APA, Harvard, Vancouver, ISO, and other styles
45

Potash, Benjamin R. "Characterization and preservation techniques of plant xylem as low cost membrane filtration devices." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92069.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from PDF student-submitted version of thesis.
Includes bibliographical references (pages 62-64).
Safe drinking water remains inaccessible for roughly 1.1 billion people in the world.³⁴ As a result, 400 children under the age of 5 die every hour from biological contamination of drinking water.³⁴ Studies have shown that plant xylem from the sapwood of coniferous trees is capable of rejecting 99.99% of bacteria from feed solutions.¹⁶ Additionally, 4 L/d of water can be filtered with a ~1 cm² filter area using a transmembrane pressure of 5 psi, an amount sufficient to meet the drinking needs of one person. However, the main drawback of xylem is that its permeability drops by a factor of 100 or more after being left out to dry for only a few hours. This thesis seeks to characterize the performance of the xylem as a filter, determine the minimum length at which the xylem is effective for filtering bacteria, and increase the xylem's ability to rewet (retaining its permeability and rejection capabilities) after drying through the use of polymer coatings. Finally, potential techniques for decreasing the minimum particulate size the xylem can filter are discussed, with the aim of allowing the membrane to filter viruses.
by Benjamin R. Potash.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
46

Doğan, Sevgi Zeynep, and H. Murat Günaydın. "Using machine learning techniques for early cost prediction of structural systems of buildings." [s.l.]: [s.n.], 2005. http://library.iyte.edu.tr/tezlerengelli/doktora/mimarlik/T000357.pdf.

Full text
Abstract:
Thesis (Doctoral)--İzmir Institute of Technology, İzmir, 2005.
Keywords: Artificial neural networks, artificial intelligence, cost estimation, predictive models, construction management. Includes bibliographical references (leaves 111).
APA, Harvard, Vancouver, ISO, and other styles
47

Nahapetyan, Artyom. "Nonlinear approximation techniques to solve network flow problems with nonlinear arc cost functions." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Arkhurst, Bettina K. "Identification and evaluation of techniques for quality control of low-cost xylem filters." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120268.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 49-51).
2.1 billion people worldwide, the majority of whom are in the poorest income quintile, lack access to safe, readily available water in their homes. The need for affordable, decentralized methods of water filtration led to the development of a low-cost membrane filter produced from the xylem of coniferous trees. Due to xylem structure variation and the potential for improper filter processing during mass production, quality control protocols are a necessity. Manufacturers must ensure xylem filters are functional in terms of microbial rejection and adequate flow rates. Testing methods similar to those mentioned in this thesis can also be developed for other membrane filters. The suitability of two fluids, water and air, was evaluated for use in the quality control process. For testing using water, turmeric and blue dye were used to create a visual indication test to detect a filter's major failures. We found that this method has the potential to detect both leaks and improperly prepared filters, but it lacks affordable, quantitative analysis for determining rejection percentages. Air was found to be a viable option for xylem filter testing at pressures of 6 psi and above, though the presence of the xylem lowered the concentration of particles detected at the outlet by one-fourth. The substances found to be most suitable for testing the filter were baker's yeast, jeweler's rouge, turmeric, and buttermilk, given their affordability, particle/microbe size, and availability. Further exploration is required to determine the optimal particle to use in water and air testing and the equipment necessary for the quality control process to be implemented.
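Rejection percentages of the kind the dye test could not quantify are conventionally computed from feed and permeate counts. The helper below is a hypothetical illustration of that standard arithmetic, not a procedure from the thesis:

```python
import math

# Standard rejection / log-reduction-value (LRV) arithmetic from particle or
# colony counts upstream (feed) and downstream (permeate) of a filter.
# Hypothetical helper, not taken from the thesis itself.
def rejection_metrics(feed_count, permeate_count):
    """Return (percent rejection, log reduction value)."""
    rejection_pct = 100.0 * (1.0 - permeate_count / feed_count)
    lrv = math.log10(feed_count / permeate_count)
    return rejection_pct, lrv

# 10^6 CFU in the feed and 100 CFU in the permeate give 99.99% rejection
# (LRV 4), the rejection level reported for xylem filters.
print(rejection_metrics(1_000_000, 100))
```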
by Bettina K. Arkhurst.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
49

Ogilvie, William Fraser. "Reducing the cost of heuristic generation with machine learning." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31274.

Full text
Abstract:
The space of compile-time transformations and/or run-time options which can improve the performance of a given code is usually so large as to be virtually impossible to search in any practical time-frame. Thus, heuristics are leveraged which can suggest good but not necessarily best configurations. Unfortunately, since such heuristics are tightly coupled to processor architecture, performance is not portable; heuristics must be tuned, traditionally manually, for each device in turn. This is extremely laborious and the result is often outdated heuristics and less effective optimisation. Ideally, to keep up with changes in hardware and run-time environments, a fast and automated method to generate heuristics is needed. Recent works have shown that machine learning can be used to produce mathematical models or rules in their place, which is automated but not necessarily fast. This thesis proposes the use of active machine learning, sequential analysis, and active feature acquisition to accelerate the training process in an automatic way, thereby tackling this timely and substantive issue. First, a demonstration of the efficiency of active learning over the previously standard supervised machine learning technique is presented in the form of an ensemble algorithm. This algorithm learns a model capable of predicting the best processing device in a heterogeneous system to use per workload size, per kernel. Active machine learning is a methodology which is sensitive to the cost of training; specifically, it is able to reduce the time taken to construct a model by predicting how much is expected to be learnt from each new training instance and then only choosing to learn from the most profitable examples. The exemplar heuristic is constructed on average 4x faster than a baseline approach, whilst maintaining comparable quality.
Next, a combination of active learning and sequential analysis is presented which reduces both the number of samples per training example as well as the number of training examples overall. This allows for the creation of models based on noisy information, sacrificing accuracy per training instance for speed, without having a significant effect on the quality of the final product. In particular, the runtime of high-performance compute kernels is predicted from code transformations one may want to apply, using a heuristic which was generated up to 26x faster than with active learning alone. Finally, preliminary work demonstrates that an automated system can be created which optimises both the number of training examples as well as which features to select during training, to further substantially accelerate learning in cases where each feature value that is revealed comes at some cost.
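The uncertainty-sampling idea at the core of active learning can be sketched generically. All function parameters and the toy uncertainty measure below are placeholders; this is not the ensemble algorithm from the thesis:

```python
import random

# Generic uncertainty-sampling sketch of active learning: label only the
# candidates the current model is least certain about, rather than
# exhaustively labelling (i.e. benchmarking) the whole space.
def active_learning_loop(pool, label_fn, train_fn, uncertainty_fn,
                         budget, seed_size=3):
    random.shuffle(pool)
    labelled = [(x, label_fn(x)) for x in pool[:seed_size]]
    unlabelled = pool[seed_size:]
    model = train_fn(labelled)
    while unlabelled and len(labelled) < budget:
        # Query the single most uncertain candidate -- the example expected
        # to be most profitable to learn from.
        x = max(unlabelled, key=lambda c: uncertainty_fn(model, c))
        unlabelled.remove(x)
        labelled.append((x, label_fn(x)))
        model = train_fn(labelled)
    return model, labelled
```

Here `label_fn` stands in for an expensive measurement (e.g. timing a kernel on a device), so capping `budget` directly caps the cost of heuristic construction.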
APA, Harvard, Vancouver, ISO, and other styles
50

Horst, Stephen Jonathan. "Low cost fabrication techniques for embedded resistors on flexible organics at millimeter wave frequencies." Thesis, Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-11162006-171058/.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007.
Dr. John Cressler, Committee Member ; Dr. John Papapolymerou, Committee Chair ; Dr. Manos Tentzeris, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles