Dissertations / Theses on the topic 'Heterogeneous bank'




Consult the top 27 dissertations / theses for your research on the topic 'Heterogeneous bank.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

PORTOGHESE, LUCA MICHELE. "Essays on macroeconomic impact of heterogeneous banks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/290981.

Full text
Abstract:
The aim of this thesis is to shed light on the role of the interbank market in transmitting idiosyncratic shocks to the real economy as a whole. Particular attention is devoted to shocks originating on the credit supply side. The model comprises a continuum of identical households and a mass of atomistic entrepreneurs. In particular, entrepreneurs are of two types, one risky and one safe. We build a DSGE model featuring a stylised financial system, with an interbank market and two intermediaries. One bank can collect only a limited amount of deposits from households and is therefore driven to raise additional resources on the interbank market. Since interest rates on loans reflect the riskiness of the two entrepreneurs, the higher rate owed by the risky entrepreneurs allows the bank borrowing on the interbank market to always be profitable, thereby allowing interbank trade to take place.
In the present paper we aim at shedding some light on the role played by the interbank market in transmitting idiosyncratic shocks that hit single institutions to the whole system and the real economy. We focus, in particular, on shocks to the supply side of credit. We analyse an economy populated by a continuum of identical households and a mass of atomistic entrepreneurs. Entrepreneurs are of two types, one risky and one safe. We set up a DSGE model representing a simple economy featuring a stylised financial system, with two intermediaries and an interbank market. One bank faces an exogenous constraint on the aggregate amount of deposits it can collect. It can overcome this liquidity constraint by borrowing additional financial resources from the unconstrained bank on the interbank market. Since interest rates on loans reflect the relative riskiness of the two types of entrepreneurs, for each unit of borrowed money the borrower bank realises higher profits than the lender bank; hence the scope for mutually profitable trade on the interbank market. The second chapter of my thesis focuses on two aspects of the economic effects of the recent SARS-CoV-2 pandemic that have not yet been properly examined in the literature. Specifically, as far as we know, the economic models typically used to study the effects of the coronavirus did not take into account (i) the possibility of providing workers forced into lockdown with a subsidy, in order to mitigate the economic loss that comes with the lockdown, and (ii) the different levels of people's compliance with the lockdown policy and how to influence it. The aim of this paper is to shed some light on these two aspects. We evaluate the effectiveness of a common lockdown policy combined with a subsidy in influencing people's compliance with the confinement.
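The profitability mechanism described in the abstract can be illustrated with toy numbers: interbank trade is mutually profitable whenever the deposit rate, interbank rate, and risky lending rate are ordered accordingly. All rates and names below are hypothetical, not the thesis calibration.

```python
# Illustrative sketch of mutually profitable interbank trade.
# All rates are invented numbers, not the thesis calibration.
r_d = 0.01      # deposit rate paid to households (assumed)
r_ib = 0.02     # interbank rate (assumed)
r_risky = 0.05  # lending rate charged to risky entrepreneurs (assumed)

# Per unit of funds traded on the interbank market:
lender_margin = r_ib - r_d        # unconstrained bank lends spare funds at r_ib
borrower_margin = r_risky - r_ib  # constrained bank on-lends to risky firms

# Both margins are positive whenever r_d < r_ib < r_risky, and the borrower
# bank earns the larger per-unit margin, mirroring the abstract's argument.
print(lender_margin, borrower_margin)
```

Trade takes place because each side gains relative to its outside option: the lender earns more than the deposit rate, and the borrower funds risky lending more cheaply than forgoing it.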
APA, Harvard, Vancouver, ISO, and other styles
2

M'Baye, Cheick Kader. "Theoretical and empirical essays on inflation targeting and central bank transparency." Thesis, Lyon 2, 2013. http://www.theses.fr/2013LYO22010.

Full text
Abstract:
This thesis contributes to the debate on inflation-targeting policies and central bank transparency by presenting three theoretical and empirical essays on the subject. In the first essay, we theoretically study the conditions under which it would be optimal for a central bank to explicitly adopt an inflation-targeting regime. We propose a new theoretical framework that combines the two main reasons put forward in the literature to explain the short-run real effects of monetary policy: on the one hand, the presence of heterogeneous information among economic agents (Phelps, 1970; Lucas, 1972), and on the other, wage or price rigidity (Taylor, 1980; Calvo, 1983). We then analyse our question in this new framework by considering the interaction between the degree of price rigidity and the degree of strategic complementarity in firms' price setting. Our results show that the adoption of an inflation-targeting regime depends strongly on the relative importance of the model's parameters. In particular, we show that inflation targeting should always be adopted when strategic complementarities are weak, whereas in the opposite case it is optimal only when prices are sufficiently rigid and the central bank holds sufficiently precise information about the fundamentals of the economy. In the second essay, we use experimental macroeconomics to assess the extent to which announcing the inflation target matters in an inflation-targeting framework. Our results show that when the central bank cares only about inflation stabilisation, announcing the inflation target brings no additional gain in macroeconomic performance relative to an active monetary policy (a Taylor-type rule).
However, if the central bank also includes the stabilisation of economic activity among its objectives, communicating the target helps reduce the volatility of inflation, the interest rate, and the output gap, although their average levels are unaffected. This result thus provides a rationale for the adoption of a flexible inflation-targeting regime by the majority of inflation-targeting countries. Finally, in the third essay, we apply a cross-sectional analysis together with the instrumental-variables technique to analyse the effects of central bank transparency on macroeconomic outcomes in emerging countries. We construct a new transparency index that combines aspects of the overall transparency index of Eijffinger and Geraats (2006) with those of the monetary policy committee transparency index of Hayo and Mazhar (2011). We then analyse the individual role of each component of the new index in reducing the level and volatility of inflation as well as output volatility. In contrast to the earlier literature, we find that the new transparency index, as well as its economic, political, procedural, and policy aspects, negatively affects the average level of inflation, but not its volatility, in these countries. The only component of the new index that reduces both inflation volatility and output volatility is operational transparency. These results prove robust to the different econometric specifications used in this essay.
This dissertation contributes to the debate on inflation targeting and central bank transparency by presenting three theoretical and empirical essays on the topic. In the first essay, we theoretically investigate the conditions under which it would be optimal for a central bank to explicitly adopt an inflation targeting regime. We propose a new theoretical framework that combines the two main frictions put forward in the literature to explain the real short-run effects of monetary policy, that is, heterogeneous information among agents (Phelps, 1970; Lucas, 1972), and wage or price rigidities (Taylor, 1980; Calvo, 1983). We then analyze our issue in this new framework by considering the interaction between the degree of price stickiness and the degree of strategic complementarities in firms' price setting. Our results show that adopting an inflation targeting regime crucially depends on the relative importance of the model's parameters. In particular, we show that inflation targeting should always be adopted when strategic complementarities are low, while in the opposite case it is optimal only if prices are sticky enough and the central bank holds sufficiently accurate information on the fundamentals of the economy. In the second essay, we use experimental macroeconomics to evaluate to what extent communication of the inflation target is relevant in an inflation targeting framework. Our results show that, first, when the central bank only cares about inflation stabilization, announcing the inflation target does not make a difference in terms of macroeconomic performance compared to a standard active monetary policy. However, if the central bank also cares about the stabilization of economic activity, communicating the target helps to reduce the volatility of inflation, the interest rate, and the output gap, although their average levels are not affected.
This finding provides a rationale for the adoption of flexible inflation targeting by the majority of inflation targeting countries. In the third essay, using a cross-sectional analysis and the instrumental variables technique, we analyze the impact of central bank transparency on macroeconomic outcomes in emerging economies. We build a new index of transparency that combines some aspects of the overall Eijffinger and Geraats (2006) transparency index with those of the monetary policy committee transparency index developed in Hayo and Mazhar (2011). We then analyze the individual role of each component of the new index in mitigating inflation and its volatility, as well as output volatility. In contrast to the previous literature, we find that the overall new index of transparency, as well as its political, economic, procedural, and policy aspects, negatively impacts the average level of inflation, but not its volatility, in these countries. The only component of the new index that reduces the volatility of both inflation and output is operational transparency, and these results are robust to different econometric specifications and instrument settings.
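The cross-sectional instrumental-variables strategy of the third essay can be sketched on simulated data; the data-generating process, variable names, and coefficients below are invented for illustration, not the author's dataset. With a single instrument, 2SLS reduces to the Wald ratio of covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                  # instrument (assumed exogenous)
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor (toy "transparency")
y = -0.5 * x + u + rng.normal(size=n)   # outcome (toy "inflation"); true effect -0.5

# OLS is biased because x is correlated with the confounder u
C_xy = np.cov(x, y)
beta_ols = C_xy[0, 1] / C_xy[0, 0]

# Just-identified IV (Wald) estimator: cov(z, y) / cov(z, x)
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(beta_ols, beta_iv)
```

With these positive confounding terms, OLS is biased upward while the IV estimate recovers the true coefficient of -0.5 up to sampling noise.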
APA, Harvard, Vancouver, ISO, and other styles
3

Berkefeld, Markus Till [Verfasser], and Frank C. [Akademischer Betreuer] Englmann. "Bank credit, inside money, and debt deflation in a continuous-time macro finance model with heterogeneous agents / Markus Till Berkefeld ; Betreuer: Frank C. Englmann." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2019. http://d-nb.info/1206184043/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hazard, Colby. "Validity of Holocene Analogs for Ancient Carbonate Stratigraphic Successions: Insights from a Heterogeneous Pleistocene Carbonate Platform Deposit." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5496.

Full text
Abstract:
Observations of modern carbonate depositional environments and their accompanying depositional models have been used for decades in the reconstruction and interpretation of ancient carbonate depositional environments and stratigraphic successions. While these Holocene models are necessary for interpreting their more ancient counterparts, they inherently exclude important factors related to the erosion, diagenesis, and ultimate preservation of sediments and sedimentary structures that are ubiquitous in shallow marine carbonate environments. Andros Island, Bahamas is an ideal location to examine the validity of Holocene conceptual models, where geologically young (Late Pleistocene) limestones can be studied immediately adjacent to their well-documented modern equivalents. For this study, two 3D ground-penetrating radar (GPR) datasets (200 MHz and 400 MHz) were collected at a schoolyard in northwest Andros. These surveys reveal the geometries and internal characteristics of a peloidal-oolitic sand wave and tidal channel in unprecedented detail. These two prominent features are underlain by low-energy lagoonal wackestones and packstones, and are bordered laterally to the northwest by wackestones-packstones intermixed with thin sheets of peloidal-oolitic grainstone. A deeper radar surface is observed at approximately 6 m depth dipping gently to the west, and is interpreted to be a karstified exposure surface delineating the base of a complete depositional sequence. Interpretation of the 3D radar volumes is enhanced and constrained by data from three cores drilled through the crest and toe of the sand wave, and through the tidal channel.
This study is the first of its kind to capture the complex heterogeneity of a carbonate depositional package in three dimensions, where various depositional environments, sedimentary structures, and textures (mudstone to grainstone) have been preserved within a small volume. The results from this study suggest that the degree of vertical and lateral heterogeneity in preserved carbonate successions is often more complex than what can be observed in modern depositional environments, where sediments can generally only be observed in two dimensions, at an instant in time. Data from this study demonstrate the value of using two overlapping GPR datasets at differing resolutions to image the internal characteristics of a complete carbonate depositional package in three dimensions. From these datasets, a depositional model similar to other Holocene and Pleistocene carbonate depositional models is derived.
APA, Harvard, Vancouver, ISO, and other styles
5

Grozdek, Marino. "Load Shifting and Storage of Cooling Energy through Ice Bank or Ice Slurry Systems modelling and experimental analysis /." Doctoral thesis, Stockholm : Skolan för industriell teknik och management, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pereira, Ana Elisa Gonçalves. "Essays on coordination problems in economics." reponame:Repositório Institucional do FGV, 2016. http://hdl.handle.net/10438/16668.

Full text
Abstract:
There are several economic situations in which an agent's willingness to take a given action is increasing in the number of other agents who are expected to do the same. Such strategic complementarities often lead to multiple equilibria. Moreover, the outcome achieved by agents' decentralized decisions may be inefficient, leaving room for policy interventions. This dissertation analyzes different environments in which coordination among individuals is a concern. The first chapter analyzes how information manipulation and disclosure affect coordination and welfare in a bank-run model. There is a financial regulator who cannot credibly commit to reveal the situation of the banking sector truthfully. The regulator observes banks' idiosyncratic information (through a stress test, for example) and chooses whether to disclose it to the public or only to release a report on the health of the entire financial system. The aggregate report may be distorted at a cost – higher cost means higher credibility. Investors are aware of the regulator's incentives to conceal bad news from the market, but manipulation may still be effective. If the regulator's credibility is not too low, the disclosure policy is state-contingent and there is always a range of states in which there is information manipulation in equilibrium. If credibility is low enough, the regulator opts for full transparency, since opacity would trigger a systemic run no matter the state. In this case only the most solid banks survive. The level of credibility that maximizes welfare from an ex ante perspective is interior. The second and the third chapters study coordination problems in dynamic environments. The second chapter analyzes welfare in a setting where agents receive random opportunities to switch between two competing networks. It shows that whenever the intrinsically worst one prevails, this is efficient. In fact, a central planner would be even more inclined towards the worst option.
Inefficient shifts to the intrinsically best network might occur in equilibrium. When there are two competing standards or networks of different qualities, if everyone were to opt for one of them at the same time, the efficient solution would be to choose the best one. However, when there are timing frictions and agents do not switch from one option to another all at once, the efficient solution differs from conventional wisdom. The third chapter analyzes a dynamic coordination problem with staggered decisions where agents are ex ante heterogeneous. We show there is a unique equilibrium, which is characterized by thresholds that determine the choices of each type of agent. Although payoffs are heterogeneous, the equilibrium features a lot of conformity in behavior. Equilibrium thresholds for different types of agents partially coincide as long as there exists a set of beliefs that would make this coincidence possible. However, the equilibrium strategies never fully coincide. Moreover, we show conformity is not inefficient. In the efficient solution, agents follow others even more often than in the decentralized equilibrium.
In economics there are many situations in which an individual's willingness to take a given action is increasing in the number of other people the individual believes will take the same action. This kind of strategic complementarity generally leads to multiple equilibria. Moreover, the outcome achieved by agents' decentralised decisions may be inefficient, leaving room for policy interventions. This dissertation studies different environments in which coordination among individuals matters. The first chapter analyses how information manipulation and disclosure affect coordination among investors and welfare in a bank-run model. In the model, there is a regulatory authority that cannot commit to revealing the true situation of the banking sector. The regulator observes banks' idiosyncratic information (through a stress test, for example) and chooses whether to reveal it to the public or to release only an aggregate report on the health of the financial system as a whole. The aggregate report may be distorted at a cost: a higher cost means higher credibility for the regulator. Investors are aware of the regulator's incentives to hide bad news from the market, but information manipulation may nevertheless be effective. If the regulator's credibility is not too low, the disclosure policy is state-contingent, and there is always a set of states in which information is manipulated in equilibrium. If credibility is low enough, however, the regulator opts for full transparency of bank-specific results, in which case only the most solid banks survive. A policy of opacity would lead to a systemic banking crisis regardless of the state. The level of credibility that maximises aggregate welfare from an ex ante perspective is interior.
The second and third chapters study dynamic coordination problems. The second chapter analyses welfare in an environment in which agents receive random opportunities to switch between two networks. The results show that whenever the network of lower intrinsic quality prevails, this is efficient. In fact, a central planner would be even more inclined to choose the lower-quality network. In equilibrium, there may be inefficient switches that expand the higher-quality network. When individuals choose between two standards or networks of different quality levels, if everyone made simultaneous choices, the efficient solution would be for all to adopt the better network. However, when there are frictions and agents make staggered decisions, the efficient solution differs from conventional wisdom. The third chapter analyses a dynamic coordination problem with staggered decisions in which agents are ex ante heterogeneous. In the model, there is a unique equilibrium, characterised by thresholds that determine the choices of each type of agent. Despite the heterogeneity in payoffs, there is considerable conformity in individual actions in equilibrium. The thresholds of different types of agents partially coincide as long as there exists an arbitrary set of beliefs that justifies this conformity. However, the equilibrium strategies of different types never fully coincide. Moreover, conformity is not inefficient. The efficient solution would feature even more similar strategies across distinct types than the decentralised equilibrium.
APA, Harvard, Vancouver, ISO, and other styles
7

Müller, Carola [Verfasser]. "Four essays on financial stability and competition with heterogeneous banks / Carola Müller." Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1162952822/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Love, Bryan Matthew. "Multiscale Analysis of Failure in Heterogeneous Solids Under Dynamic Loading." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/29650.

Full text
Abstract:
Plane strain transient finite thermomechanical deformations of heat-conducting particulate composites comprised of circular tungsten particulates in a nickel-iron matrix are analyzed using the finite element method to delineate the initiation and propagation of brittle/ductile failures by the nodal release technique. Each constituent and the composite are modeled as strain-hardening, strain-rate-hardening and thermally softening microporous materials. Values of the material parameters of composites are derived by analyzing deformations of a representative volume element whose minimum dimensions are determined through numerical experiments. These values are found to be independent of the sizes and random distributions of particulates, and are close to those obtained from either the rule of mixtures or micromechanics models. Brittle and ductile failures of composites are first studied by homogenizing their material properties; subsequently their ductile failure is analyzed by considering the microstructure. It is found that the continuously varying volume fraction of tungsten particulates strongly influences when and where adiabatic shear bands (ASBs) initiate and their paths. Furthermore, an ASB initiates sooner in the composite than in either one of its constituents. We have studied the initiation and propagation of a brittle crack in a precracked plate deformed in plane strain tension, and a ductile crack in an infinitely long thin plate with a rather strong defect at its center and deformed in shear. The crack may propagate from the tungsten-rich region to the nickel-iron-rich region or vice versa. It is found that at the nominal strain rate of 2000/s the brittle crack speed approaches the Rayleigh wave speed in the tungsten plate, the nickel-iron plate shatters after a small extension of the crack, and the composite plate does not shatter; the minimum nominal strain rate for the nickel-iron plate to shatter is 1130/s.
The ductile crack speed from tungsten-rich to tungsten-poor regions is nearly one-tenth of that in the two homogeneous plates. The maximum speed of a ductile crack in tungsten and nickel-iron is found to be about 1.5 km/s. Meso and multiscale analyses have revealed that microstructural details strongly influence when and where ASBs initiate and their paths. ASB initiation criteria for particulate composites and their homogenized counterparts are different.
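The abstract notes that the homogenised composite parameters come out close to rule-of-mixtures values; a minimal sketch with nominal handbook moduli (assumed here, not the thesis data) gives the classical Voigt and Reuss bounds on the effective Young's modulus:

```python
# Voigt (iso-strain) and Reuss (iso-stress) rule-of-mixtures bounds on the
# effective Young's modulus. Moduli are nominal values, not the thesis data.
E_W = 411.0      # GPa, tungsten (assumed nominal value)
E_NiFe = 200.0   # GPa, a nickel-iron alloy (assumed nominal value)
f = 0.5          # tungsten volume fraction (illustrative)

E_voigt = f * E_W + (1 - f) * E_NiFe           # upper bound
E_reuss = 1.0 / (f / E_W + (1 - f) / E_NiFe)   # lower bound
print(E_voigt, E_reuss)
```

Any physically consistent homogenisation (including the micromechanics models the abstract mentions) must fall between these two bounds.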
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

SHI, YU. "Enhanced SAR Image Processing Using A Heterogeneous Multiprocessor." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11517.

Full text
Abstract:

Synthetic aperture radar (SAR) is an airborne radar technique that focuses many pulses to achieve high-resolution radar images. A number of image processing algorithms have been developed for this kind of radar, but the computational burden is still heavy, so SAR image processing is normally performed off-line.

The Fast Factorized Back Projection (FFBP) algorithm is considered a computationally efficient algorithm for image formation in SAR, and several implementations have tried to make the process on-line.

The Cell Broadband Engine is one of the newest multi-core processors, jointly developed by Sony, Toshiba and IBM. The Cell is good at parallel computation and floating-point arithmetic, which fits the demands of SAR image formation.

This thesis implements the FFBP algorithm on the Cell Broadband Engine and compares the results with previous projects. In this project, we try to make it possible to perform SAR image formation in real time.
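The back projection principle the abstract builds on can be sketched as a toy direct (unfactorized) back projection: for every image pixel, each range-compressed echo is sampled at the two-way delay from that antenna position. FFBP accelerates this same delay-and-sum idea by factorizing it hierarchically. The geometry, speed, and pulse shape below are invented for illustration:

```python
import numpy as np

# Toy geometry (arbitrary units): one point scatterer, a linear flight track,
# and idealised range-compressed echoes modelled as narrow Gaussian pulses.
c = 2.0                                    # assumed propagation speed
target = np.array([5.0, 3.0])              # true scatterer position (x, range)
positions = [np.array([x, 0.0]) for x in np.linspace(0.0, 10.0, 64)]

t = np.linspace(0.0, 15.0, 600)            # fast-time axis
echoes = []
for p in positions:
    delay = 2.0 * np.linalg.norm(target - p) / c       # two-way travel time
    echoes.append(np.exp(-((t - delay) / 0.05) ** 2))  # compressed pulse

# Direct back projection: sum each echo at the pixel's two-way delay.
xs = np.linspace(0.0, 10.0, 50)
ys = np.linspace(1.0, 6.0, 50)
image = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        pix = np.array([x, y])
        for p, e in zip(positions, echoes):
            delay = 2.0 * np.linalg.norm(pix - p) / c
            image[i, j] += np.interp(delay, t, e)

# The image should peak at (or very near) the true scatterer position.
i_max, j_max = np.unravel_index(np.argmax(image), image.shape)
```

This direct form costs O(N) echo lookups per pixel; FFBP's recursive subaperture merging is what brings the cost down enough to aim at on-line processing.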

APA, Harvard, Vancouver, ISO, and other styles
10

Rashid, S. M. Shahriar. "Design and Heterogeneous Integration of Single and Dual Band Pulse Modulated Class E RF Power Amplifiers." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543505207173487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Barnwell, Ellis. "One and two-dimensional propagation of waves in periodic heterogeneous media : transient effects and band gap tuning." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/one-and-twodimensional-propagation-of-waves-in-periodic-heterogeneous-media-transient-effects-and-band-gap-tuning(9a5bd9d7-09da-46a8-ae57-b63d9d815f10).html.

Full text
Abstract:
In this thesis, the propagation of transient waves in heterogeneous media and the tuning of periodic elastic materials are studied. The behaviour of time harmonic waves in complex media is a well understood phenomenon. The primary aim of this text is to gain a deeper understanding into the propagation of transient waves in periodic media. The secondary aim is to explore the time harmonic behaviour of two dimensional pre-stressed elastic media and investigate the plausibility of band gap tuning. We begin this text by investigating the reflection of pulses from a semi-infinite set of point masses (we call 'beads') on a string. The reflected pulse is formulated using Fourier transforms which involve the harmonic reflection coefficient. We find that the reflected amplitude of a harmonic wave depends on its frequency. We then ask whether it is possible to find an effective reflection coefficient by assuming the beaded portion of the string is given by some effective homogeneous medium. An effective reflection coefficient is found by assuming the homogeneous medium has the wavenumber given by the infinite beaded string. This effective reflection coefficient is compared to the exact reflection coefficient found using the Wiener-Hopf technique. The results from studying the reflection problem gave inspiration to chapter 4, which focuses on the time dependent forcing of an infinite beaded string that is initially at rest. We again use the Fourier transform to find a time dependent solution. The z-transform is then used, after sampling the solution at the bead positions. We impose a sinusoidal loading which is switched on at a specified time. In doing this we are able to explore how the system behaves differently when excited in a stop band, a pass band and at a frequency on the edge between the two. An exact solution for the infinite beaded string is found at any point in time by expanding the branch points of the solution as a series of poles. 
We compare this exact solution to the long time asymptotics. The energy input into the system is studied with the results from the exact solution and long time approximation showing agreement. Interesting behaviour is discovered on the two edges between stop and pass bands. In chapter 5 the effect of a nonlinear elastic pre-stress on the wave band structure of a two dimensional phononic crystal is investigated. In this chapter we restrict ourselves to incompressible materials with the strain energy functions used being the neo-Hookean, Mooney-Rivlin and Fung. The method of small-on-large is used to derive the equation for incremental elastic waves and then the plane wave expansion method is used to find the band structure. Finally, chapter 6 focuses on the same geometry with a compressible elastic material. The strain energy function used is the one suggested by Levinson and Burgess. We use the theory of small-on-large to derive the incremental equations for coupled small amplitude pressure and shear waves in this material. In both compressible and incompressible materials we show how it is possible to control the stop bands in a material by applying a large elastic pre-stress.
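The stop band / pass band structure of the beaded string studied in the early chapters follows from a classical Bloch analysis of a string periodically loaded with point masses: propagating the state (u, u') across one string segment and applying the slope jump enforced by a bead gives cos(q d) for the Bloch wavenumber q. The sketch below uses assumed unit tension, density, bead mass, and spacing:

```python
import numpy as np

# String tension T, mass density rho, bead mass m, spacing d (arbitrary units).
T, rho, m, d = 1.0, 1.0, 1.0, 1.0
c = np.sqrt(T / rho)          # wave speed on the bare string

def cos_qd(omega):
    """cos(q d) for the Bloch wavenumber q of the mass-loaded string.

    From the transfer matrix of one cell: propagate (u, u') across a string
    segment, then apply the bead's slope jump T*(u'_+ - u'_-) = -m*omega**2*u.
    |cos(q d)| <= 1 means a pass band; |cos(q d)| > 1 means a stop band.
    """
    k = omega / c             # wavenumber on the bare string
    return np.cos(k * d) - (m * omega**2 / (2.0 * T * k)) * np.sin(k * d)

# Low frequency: waves propagate (pass band).
print(cos_qd(1.0))
# Just below k*d = pi the beads open up a stop band.
print(cos_qd(3.0))
```

Frequencies where |cos(q d)| exceeds 1 admit only evanescent Bloch waves, which is exactly the regime in which the reflected pulses and the switched-on forcing in the thesis behave qualitatively differently.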
APA, Harvard, Vancouver, ISO, and other styles
12

Lo, Shin-en. "A Fire Simulation Model for Heterogeneous Environments Using the Level Set Method." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cgu_etd/72.

Full text
Abstract:
Wildfire hazard and its destructive consequences have become a growing issue around the world especially in the context of global warming. An effective and efficient fire simulation model will make it possible to predict the fire spread and assist firefighters in the process of controlling the damage and containing the fire area. Simulating wildfire spread remains challenging due to the complexity of fire behaviors. The raster-based method and the vector-based method are two major approaches that allow one to perform computerized fire spread simulation. In this thesis, we present a scheme we have developed that utilizes a level set method to build a fire spread simulation model. The scheme applies the strengths and overcomes some of the shortcomings of the two major types of simulation method. We store fire data and local rules at cells. Instead of calculating which are the next ignition points cell by cell, we apply Huygens' principle and elliptical spread assumption to calculate the direction and distance of the expanding fire by the level set method. The advantage to storing data at cells is that it makes our simulation model more suitable for heterogeneous fuel and complex topographic environment. Using a level set method for our simulation model makes it possible to overcome the crossover problem. Another strength of the level set method is its continuous data processing. Applying the level set method in the simulation models, we need fewer vector points than raster cells to produce a more realistic fire shape. We demonstrate this fire simulation model through two implementations using narrow band level set method and fast marching method. The simulated results are compared to the real fire image data generated from Troy and Colina fires. The simulation data are then studied and compared. The ultimate goal is to apply this simulation model to the broader picture to better predict different types of fires such as crown fire, spotting fires, etc.
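A minimal version of the level set machinery described above can be sketched as follows, assuming a uniform spread rate F and a first-order upwind scheme; the thesis instead stores heterogeneous fuel data and local rules at cells and uses narrow band level set and fast marching variants:

```python
import numpy as np

# Minimal level set sketch: phi_t + F*|grad phi| = 0 with constant F >= 0.
n, h, F, dt = 101, 0.1, 1.0, 0.05       # grid size, spacing, speed, CFL-safe step
x = np.linspace(-5, 5, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 1.0        # signed distance: fire front = unit circle

def step(phi):
    """One Godunov upwind step of phi_t + F*|grad phi| = 0 for F >= 0."""
    dxm = (phi - np.roll(phi, 1, axis=1)) / h   # backward x-difference
    dxp = (np.roll(phi, -1, axis=1) - phi) / h  # forward x-difference
    dym = (phi - np.roll(phi, 1, axis=0)) / h
    dyp = (np.roll(phi, -1, axis=0) - phi) / h
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                   + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    return phi - dt * F * grad

for _ in range(40):                     # advance the front to t = 2.0
    phi = step(phi)
# The zero level set (the fire front) should now sit near radius 1 + F*t = 3.
```

Because the front is carried implicitly as the zero level set of phi, merging fronts and the crossover problem mentioned above are handled without any special-case logic.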
APA, Harvard, Vancouver, ISO, and other styles
13

Joseph, Laya. "Development of Ultra-Wide band 500 MHz – 20 GHz Heterogeneous Multi-Layered Phantom Comprises of Human Skin, Fat and Muscle Tissues for Various Microwaves Based Biomedical Application." Thesis, Uppsala universitet, Fasta tillståndets elektronik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-402458.

Full text
Abstract:
In biomedical applications, realistic phantoms are becoming more useful for the validation and testing of precursor systems. These artificial phantoms require stable and flexible tissue-mimicking materials with realistic dielectric properties in order to properly model human tissues. We have fabricated a low-water-content, low-cost, mechanically and electrically stable, long-shelf-life, multi-layered heterogeneous phantom consisting of skin, fat and muscle tissues. We chose a semi-solid phantom for each tissue layer, with the size and thickness of each layer based on the average thickness of the corresponding human tissue. By altering the ingredient composition, the dielectric properties can also be altered. Because no osmosis occurs, the tissues can be employed to construct heterogeneous phantoms or even anthropomorphic phantoms without any changes in geometry or electrical properties. The performance of the fabricated phantom was evaluated using an open-ended coaxial slim probe system by Agilent Technologies. Nearly all previous studies were based on very high frequencies (VHF), so we present ultra-wide band (UWB) multi-layered phantoms covering 500 MHz to 20 GHz. We measured our phantom again after a two-month period and obtained good dielectric properties without significant variation. Thus, our fabricated sets of ATE phantoms are long-lasting, with good physical and dielectric stability.
APA, Harvard, Vancouver, ISO, and other styles
14

Cordier, Florian. "Morphodynamique des bancs dans des rivières aménagées à granulométrie étendue." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1149/document.

Full text
Abstract:
Rivers often present a wavy bed topography due to the presence of bars, large sediment deposits alternating with deeper areas (pools). Intense and continual river engineering works altering river geometry, flow regime and sediment supply can impact bar morphodynamics at the reach scale. A deep knowledge of bar processes is important for river managers, because bars actively control the river bed topography and influence bank erosion, with consequences for navigation, water intakes, infrastructure and habitat quality. Although the origin of bar formation and propagation is well understood, the understanding of bar morphodynamics in rivers remains limited due to the non-linearity of combined natural and anthropogenic effects. Sediment non-uniformity is an inherent characteristic of rivers, but its impact on bar properties, even in simple configurations, is still unclear. This Ph.D. thesis investigates the impact of sediment size heterogeneity on bars in rivers with non-erodible banks, using a two-dimensional morphodynamic model implemented in the Telemac-Mascaret system. The necessary model developments are made to comply with the state of the art in modelling heterogeneous sediment with bars. The model is then used to study bar morphodynamics in a straight channel on the basis of laboratory experiments. Finally, the analysis is transferred to a 1 km reach of the Loire river, characterized by relatively complex geometry. General outcomes validate the modelling approach and demonstrate its portability to other study cases, shed more light on the relationship between non-uniform sediment and bars, and can later be used as recommendations for river managers.
APA, Harvard, Vancouver, ISO, and other styles
15

Long, Bruce Raymond. "Informationist Science Fiction Theory and Informationist Science Fiction." Thesis, The University of Sydney, 2009. http://hdl.handle.net/2123/5838.

Full text
Abstract:
Informationist Science Fiction theory provides a way of analysing science fiction texts and narratives in order to demonstrate, on an informational basis, the uniqueness of science fiction proper as a mode of fiction writing. The theoretical framework presented can be applied to all types of written texts, including non-fictional texts. In "Informationist Science Fiction Theory and Informationist Science Fiction" the author applies the framework and its specific methods and principles to various contemporary science fiction works, including works by William Gibson, Neal Stephenson and Vernor Vinge. The framework introduces a new information-theoretic re-framing of existing science fiction literary-theoretic posits such as Darko Suvin's novum, the mega-text as conceived by Damien Broderick, and the work of Samuel R. Delany on the subjunctive mood in SF. An informational aesthetics of SF proper is established, and the influence of analytic philosophy, especially modal logic, is investigated. The materialist foundations of the metaphysical outlook of SF proper are examined with a view to elucidating the importance of the relationship between scientific materialism and SF. SF is presented as The Fiction of Veridical, Counterfactual and Heterogeneous Information.
APA, Harvard, Vancouver, ISO, and other styles
16

Long, Bruce Raymond. "Informationist Science Fiction Theory and Informationist Science Fiction." University of Sydney, 2009. http://hdl.handle.net/2123/5838.

Full text
Abstract:
Master of Philosophy (MPhil)
Informationist Science Fiction theory provides a way of analysing science fiction texts and narratives in order to demonstrate, on an informational basis, the uniqueness of science fiction proper as a mode of fiction writing. The theoretical framework presented can be applied to all types of written texts, including non-fictional texts. In "Informationist Science Fiction Theory and Informationist Science Fiction" the author applies the framework and its specific methods and principles to various contemporary science fiction works, including works by William Gibson, Neal Stephenson and Vernor Vinge. The framework introduces a new information-theoretic re-framing of existing science fiction literary-theoretic posits such as Darko Suvin's novum, the mega-text as conceived by Damien Broderick, and the work of Samuel R. Delany on the subjunctive mood in SF. An informational aesthetics of SF proper is established, and the influence of analytic philosophy, especially modal logic, is investigated. The materialist foundations of the metaphysical outlook of SF proper are examined with a view to elucidating the importance of the relationship between scientific materialism and SF. SF is presented as The Fiction of Veridical, Counterfactual and Heterogeneous Information.
APA, Harvard, Vancouver, ISO, and other styles
17

Anderson, Emily. "Essays on Monetary and Fiscal Policy." Diss., 2013. http://hdl.handle.net/10161/7253.

Full text
Abstract:

This dissertation consists of two chapters studying monetary and fiscal policy. In the first chapter, I study the welfare benefits and costs of increased central bank transparency in a dynamic model of costly information acquisition where agents can either choose to gather new costly information or remember information from the past for free. Information is costly to acquire due to an agent's limited attention. Agents face an intratemporal decision on how to allocate attention across public and private signals within the period and an intertemporal decision on how to allocate attention over time. The model embeds a coordination externality into the dynamic framework which motivates agents to be overly attentive to public information and creates the possibility of costly transparency. Interestingly, allowing for intratemporal and intertemporal tradeoffs for attention amplifies (attenuates) the benefits (costs) of earlier transparency whereas it attenuates (amplifies) the benefits (costs) of delayed transparency.

The second chapter, co-authored with Barbara Rossi and Atsushi Inoue, studies the empirical effects of unexpected changes in government spending and tax policy on heterogeneous agents. We use data from the Consumption Expenditure Survey (CEX) to estimate individual-level impulse responses as well as multipliers for government spending and tax policy shocks. The main empirical finding of this paper is that unexpected fiscal shocks have substantially different effects on consumers depending on their age, income levels, and education. In particular, the wealthiest individuals tend to behave according to the predictions of standard RBC models, whereas the poorest individuals tend to behave according to standard IS-LM (non-Ricardian) models, due to credit constraints. Furthermore, government spending policy shocks tend to decrease consumption inequality, whereas tax policy shocks most negatively affect the lives of the poor, more so than the rich, thus increasing consumption inequality.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
18

Partyka, Karol. "Empirical patterns in central bank interest rates policy." Doctoral thesis, 2018. https://depotuw.ceon.pl/handle/item/3232.

Full text
Abstract:
The last two decades of research in macroeconomics have produced a large body of empirical work examining the tendency of central banks to adopt policies which de facto differ from the official statements and de jure policy objectives of these institutions. The aim of the thesis is to analyze empirical patterns that potentially deviate from the official narrative of central banks. The thesis investigates three hypotheses regarding the realization of monetary policy, using three different methodologies based on dynamic panel cointegration methods applied to the estimation of Taylor rules. The choice of the panel framework is driven by the small sample size for individual countries and the significant cross-sectional dependency between central bank policies, which make single-country investigations problematic. Chapter 1 investigates the hypothesis that central bank interest rates in EEA economies are cointegrated with the ECB policy rates. The results suggest an empirical pattern of dependence of local policies on the leading ECB policy and, in consequence, monetary policy spillovers outside the Euro Area. Chapter 2 investigates the hypothesis that central bank policy reacts to stock market developments, the so-called "leaning against the wind" hypothesis. Our empirical investigation shows a pattern consistent with that type of policy behavior. Chapter 3 investigates the hypothesis that there is a relationship between central bank policy and housing prices. We find no evidence in the full sample that monetary policy reacts to local housing price misalignments; the pattern is, however, statistically significant in the advanced economies. The sources of these empirical patterns are ultimately uncertain. The thesis discusses competing interpretations as well as their policy implications.
One of the main interpretations is that central banks pursue the observed policies but substantially limit or simplify their official statements and narratives in order to manage communication and anchor expectations. Alternatively, the observed patterns are a manifestation of inefficiencies in the transmission or operational mechanisms. In the former case, the thesis supplements research on the actual behavior of an important economic actor; in the latter, it points out policy issues that should be addressed to improve the monetary policy framework in the search for a new paradigm.
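The Taylor rules estimated in this thesis relate the policy rate to inflation and the output gap, optionally augmented with an asset-price term to capture "leaning against the wind". A minimal sketch of such a rule, with purely illustrative coefficients (not the thesis's panel estimates), could be:

```python
def taylor_rate(inflation, output_gap, asset_gap=0.0,
                r_star=2.0, pi_star=2.0, a_pi=0.5, a_y=0.5, a_s=0.0):
    """Textbook Taylor rule: i = r* + pi + a_pi*(pi - pi*) + a_y*gap,
    optionally augmented with an asset-price gap term (a_s > 0 plays
    the role of 'leaning against the wind'). All coefficients here
    are illustrative assumptions, not estimated values.
    """
    return (r_star + inflation
            + a_pi * (inflation - pi_star)
            + a_y * output_gap
            + a_s * asset_gap)

# At target inflation with closed gaps the rule gives the neutral rate:
base = taylor_rate(2.0, 0.0)                            # 4.0
lean = taylor_rate(2.0, 0.0, asset_gap=10.0, a_s=0.1)   # 5.0
```

Chapter 2's "leaning against the wind" pattern corresponds to finding a_s significantly positive when the asset gap is a stock-market deviation from trend.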
APA, Harvard, Vancouver, ISO, and other styles
19

Correia, Luís Miguel Ribeiro. "Hybrid equalization techniques for heterogeneous systems in the millimeter wave band." Master's thesis, 2021. http://hdl.handle.net/10773/32345.

Full text
Abstract:
With the constant demand for better service and higher transmission rates, current technologies are reaching the limits of channel capacity. Technologies such as MIMO and heterogeneous systems can increase channel capacity by introducing more antennas at the transceivers, making the link between users and the base station more reliable. Furthermore, the current sub-6 GHz spectrum is becoming saturated, and due to the propagation properties at those frequencies the deployment of heterogeneous systems can introduce some level of interference. To improve future communication systems, a new part of the available frequency spectrum should be used, and researchers have their eyes on the mmWave band. This band allows the carrier frequency and the corresponding signal bandwidth to be increased, and therefore the transmission speed; moreover, the properties of these frequencies offer some advantages over the frequencies used in the sub-6 GHz band. Additionally, the mmWave band can be combined with massive MIMO technology to enhance system capacity and deploy more antenna elements in the transceivers. One more key technology, which improves energy efficiency in systems with hundreds of antenna elements, is the combination of analog and digital precoding techniques, denoted hybrid architectures. Their main advantage is that, contrary to the fully digital precoding used in current systems, this architecture reduces the number of RF chains per antenna, leading to improved energy efficiency. Furthermore, to handle heterogeneous systems that have small cells within the macro cell, techniques such as Interference Alignment (IA) can be used to efficiently remove the existing multi-tier interference. In this dissertation a massive MIMO mmWave heterogeneous system is implemented and evaluated. Analog-digital equalizers are designed to efficiently remove both the intra- and inter-tier interference.
At the digital level, an interference alignment technique is used to remove the interference and increase the spectral efficiency. The results show that the proposed solutions efficiently remove the macro- and small-cell interference.
Master's in Electronics and Telecommunications Engineering
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, Chi-lan, and 黃祺嵐. "The Research of Efficiency and Risk in Taiwan Domestic Heterogeneous Banks." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/58167918616841964084.

Full text
Abstract:
Master's
Shih Hsin University
Graduate Institute of Economics (including the in-service master's program)
91
We began our study by using Data Envelopment Analysis (DEA) to measure the efficiency of 48 local banks in Taiwan. Loans and investments served as output variables, and interest expenditure, wage cost and other operating cost as input variables, in the Banker-Charnes-Cooper (BCC) model used to measure the Technical Efficiency (TE) of the 48 Decision Making Units (DMUs). Second, we translated TE into risk-taking with the formula introduced by Luc Laeven (1999), to reflect the change in risk-taking as efficiency changed. We also searched the earlier literature for variables explaining risk-taking, using Ordinary Least Squares (OLS) regression, and analyzed, across several groups of banks, how differences in business type and changes in the management environment relate to risk-taking. In conclusion, we obtained TE measures using output orientation in the BCC model. Averaging the TE measures over the three years, the four best-performing DMUs were Chiao Tung Bank (1), Bank of Taiwan (1), Taipeibank (1) and United World Chinese Commercial Bank (0.997); the four worst were Kaohsiung Business Bank (0.615), Chung Shing Bank (0.673), Chinfon Commercial Bank (0.681) and Taitung Business Bank (0.693). Averaging the risk-taking measures translated from TE, the four lowest DMUs were Chung Shing Bank (-30.79%), The Chinese Bank (-16.59%), Pan Asia Bank (-14.15%) and Chinatrust Commercial Bank (-11.34%); the highest included International Bank (22.68%) and The International Commercial Bank of China (21.13%). Summarizing these results, the efficiency performance and the risk-taking of the DMUs clearly had no absolute relationship during the three years.
In section 2, the relationships between the explanatory variables and risk-taking were as follows. First, the growth of a bank's asset market share had a marked positive correlation with risk-taking: an individual bank may take greater risks when attempting to expand its market share. Second, the growth of owners' equity also had a marked positive correlation with risk-taking: the more owners' equity a bank held, the more risk was borne solely by stockholders. Third, growth in the ratio of loan loss provisioning to loans had a distinct negative correlation with risk-taking, suggesting that the year-by-year increase in this ratio reflected financial reports being adjusted ever closer to reality. Finally, growth in the ratio of non-performing loans to loans was negatively correlated with risk-taking: banks may take lower risks while this ratio is growing, because bank management's gradual acknowledgement of the potential crisis lowered the banks' uncertainty. In the classification by establishment period, the risk-taking of the new-bank group was markedly lower than that of the old-bank group. This reveals that senior banks neither held a management advantage over junior banks nor took less risk, even though early-established banks might have had the support of conglomerate enterprises; the most recently established banks may take less risk than early-established ones, perhaps owing to their sounder constitution and long-term local advantages. These results indicate that although the later-established banks face a more competitive environment, they did not take higher risks. Finally, we found no clear evidence that banks with different business types took different risks in management, which may indirectly indicate that, as public banks were privatized, classifying banks by business type became inadequate.
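The OLS step described in this abstract regresses the risk-taking measure on candidate explanatory variables such as market-share growth. A single-regressor sketch with hypothetical numbers (the thesis uses several regressors at once, and the Laeven (1999) translation formula is not reproduced here) is:

```python
def ols_slope_intercept(x, y):
    """Ordinary least squares fit of y = a + b*x (single regressor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx      # slope: cov(x, y) / var(x)
    a = my - b * mx    # intercept
    return a, b

# Hypothetical data: risk-taking measures against market-share growth
share_growth = [0.0, 1.0, 2.0, 3.0]
risk_taking = [1.0, 3.0, 5.0, 7.0]
a, b = ols_slope_intercept(share_growth, risk_taking)  # a = 1.0, b = 2.0
```

A positive fitted slope b is what the abstract's "positive correlation with risk-taking" corresponds to in the regression output.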
APA, Harvard, Vancouver, ISO, and other styles
21

Fang, Ruei-Hong, and 方瑞鴻. "QoS-Based Admission Control and Unlicensed Band Traffic Balancing Strategy in Heterogeneous Networks." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/49368149076631413763.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Communications Engineering
104
LTE unlicensed (LTE-U) and licensed-assisted access (LAA) are new access schemes which allow coexistence with Wi-Fi to offload LTE traffic from the licensed band. Fair and efficient coexistence between LTE and Wi-Fi is thus a key issue to be addressed. In this thesis, a traffic balancing strategy with guaranteed quality of service (QoS) is proposed to maximize system capacity and user satisfaction. The proposed method considers unlicensed band resource partitioning to achieve fair coexistence with Wi-Fi and performs admission control in the licensed band to guarantee QoS. Moreover, we propose a distributed small cell power adaptation algorithm aimed at reducing power consumption for small cells while maintaining cell throughput. Finally, computer simulations show that the proposed traffic balancing strategy effectively improves network performance, and the proposed power adaptation algorithm helps reduce power consumption.
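The licensed-band admission control this abstract mentions can be caricatured as a rate-based check: a new flow is admitted only if its guaranteed rate still fits within the remaining licensed capacity. A minimal sketch with hypothetical capacities in Mbps (the proposed strategy additionally partitions unlicensed-band resources, which this sketch omits):

```python
def admit(requested_rate, licensed_capacity, admitted_rates):
    """Admit a new flow only if the licensed band can still carry the
    guaranteed rates of every admitted flow plus the new request."""
    return sum(admitted_rates) + requested_rate <= licensed_capacity

# Hypothetical requests (Mbps) against a 100 Mbps licensed budget
admitted = []
for rate in [30.0, 40.0, 50.0]:
    if admit(rate, 100.0, admitted):
        admitted.append(rate)
# the 50 Mbps request exceeds the remaining budget and is rejected
```

Rejected flows would, in the thesis's strategy, be candidates for offloading to the unlicensed band rather than being dropped outright.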
APA, Harvard, Vancouver, ISO, and other styles
22

Nguyen, Duc Thien. "Adaptive packet transmission techniques for energy-efficient and quality-of-service optimized heterogeneous IoT networks." Thesis, 2018. http://hdl.handle.net/1959.13/1395201.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
The Internet-of-Things (IoT) is an intelligent network infrastructure wherein a large number of uniquely identifiable things or objects are interconnected to perform complex tasks in a cooperative manner. Recently, IoT applications based on a heterogeneous wireless sensor network (WSN) architecture have been found in many domains, such as smart homes, smart healthcare, smart transportation, smart cities and the smart grid. In IoT networks, the demand for energy efficiency (EE) and quality-of-service (QoS) is on the rise. However, maintaining high EE is difficult when high QoS is required, so it is pivotal to address EE and QoS issues simultaneously. In this thesis, EE and QoS for IoT applications are studied by developing an adaptive packet transmission algorithm at the MAC sub-layer of an IEEE 802.15.4-based network. This algorithm efficiently adapts to the varying traffic load generated by IoT applications and to the queue status of sensor nodes, adjusting the IEEE 802.15.4 superframe parameters of the sensor nodes accordingly. In the presence of energy harvesting (EH) techniques, including solar-based, moving-vehicle-based and radio frequency (RF)-based harvesting, I develop new energy-harvesting- and QoS-aware algorithms. The proposed algorithms minimize the network contention level, which in turn improves the EE and QoS values for outdoor IoT applications, and they allow the sensor nodes to harvest sufficient energy to power themselves and achieve self-sustainable operation. Cloud computing and fifth-generation (5G) mobile networks have been attracting much attention from the research and industrial communities. In this work, I address an integration of the IoT, cloud computing and 5G technology to extend coverage and tackle bottlenecks due to the high demand for data transmission in the network.
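Adjusting IEEE 802.15.4 superframe parameters from the queue status, as the abstract describes, can be sketched as a simple threshold rule on the superframe order (SO), which controls the length of the active period. The thresholds and step size below are illustrative assumptions, not parameters taken from the thesis:

```python
def adapt_superframe_order(so, queue_occupancy, so_min=0, so_max=14):
    """Threshold rule on the IEEE 802.15.4 superframe order (SO):
    lengthen the active period when the queue fills, shorten it
    (saving energy) when the queue drains.

    The 0.75/0.25 thresholds and the +/-1 step are illustrative
    assumptions, not values from the thesis.
    """
    if queue_occupancy > 0.75:
        so = min(so + 1, so_max)   # heavy traffic: longer active period
    elif queue_occupancy < 0.25:
        so = max(so - 1, so_min)   # light traffic: sleep longer
    return so

so = adapt_superframe_order(4, 0.9)    # heavy load raises SO to 5
so = adapt_superframe_order(so, 0.1)   # light load lowers it back to 4
```

In the standard, SO is bounded by the beacon order (0 <= SO <= BO <= 14), which the `so_min`/`so_max` clamps stand in for here.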
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Kai-Chun, and 陳凱鈞. "The Process Development of a Heterogeneous InGaAs n-MOSFET Device with High Energy Gap InAlAs Back Barrier Layer." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/8pwe5p.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Lighting and Energy Photonics
107
In order to improve the performance of III-V MOSFETs and suppress leakage current, we adopted a layer of indium aluminum arsenide (InAlAs) as a back barrier in the device structure. A mesa structure is also applied to improve isolation between devices. Various process modules were developed and verified for InGaAs MOSFET integration. In addition, a novel T-gate formation method combining anisotropic and lateral etching processes with a bilayer metal stack is proposed. For the MOS structure, an HCl-based solution was used for surface cleaning before oxide deposition in order to improve the interface quality between Al2O3 and InGaAs. In terms of MOS characteristics, the measured maximum capacitance is 1.02 µF/cm^2 (CET = 3.26 nm) with a frequency dispersion of 3%/decade, and the extracted Dit is about 3×10^12 cm^-2·eV^-1. TiN/Ti- or Al/TiN/Ti-based metal contact structures were fabricated for the InGaAs source/drain ohmic contacts, and the lowest contact resistance achieved is 1.39×10^-7 Ω·cm^2. Finally, the fabricated InGaAs MOSFET shows an on-current Ion of 250 µA/µm. However, the on/off ratio and leakage level of the devices require further improvement through material refinement as well as process optimization.
APA, Harvard, Vancouver, ISO, and other styles
24

Jin, X. L., and Geyong Min. "Modelling and Analysis of an Integrated Scheduling Scheme with Heterogeneous LRD and SRD Traffic." Thesis, 2013. http://hdl.handle.net/10454/9671.

Full text
Abstract:
Multimedia applications in wireless networks are usually categorized into various classes according to their traffic patterns and differentiated Quality-of-Service (QoS) requirements. The traffic of heterogeneous multimedia applications often exhibits the Long-Range Dependent (LRD) and Short-Range Dependent (SRD) properties, respectively. The integrated scheduling scheme that combines Priority Queuing (PQ) and Generalized Processor Sharing (GPS) within a hierarchical structure, referred to as PQ-GPS, has been identified as an efficient mechanism for QoS differentiation in wireless networks and attracted significant research efforts. However, due to the high complexity and interdependent relationship among traffic flows, modelling of the integrated scheduling scheme poses great challenges. To address this challenging and important research problem, we develop an original analytical model for PQ-GPS systems under heterogeneous LRD and SRD traffic. A cost-effective flow decomposition approach is proposed to equivalently divide the integrated scheduling system into a group of Single-Server Single-Queue (SSSQ) systems. The expressions for calculating the queue length distribution and loss probability of individual traffic flows are further derived. After validating its accuracy, the developed model is adopted as an efficient performance tool to investigate the important issues of resource allocation and call admission control in the integrated scheduling system under QoS constraints.
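The PQ-GPS discipline analysed in this abstract can be illustrated with a toy slot-based scheduler: the top class receives strict priority, and the remaining classes share the leftover slots through weighted credits, a crude packetized stand-in for the GPS fluid model rather than the paper's analytical flow-decomposition approach:

```python
from collections import deque

def pq_gps_schedule(arrivals, weights, slots):
    """Toy slot-based PQ-GPS scheduler.

    Class 0 has strict priority; the remaining classes share leftover
    slots GPS-style via weighted credit counters (largest-credit-first).
    arrivals[c] is a sorted list of packet arrival slots for class c.
    Returns the number of packets served per class.
    """
    queues = [deque(a) for a in arrivals]
    credit = [0.0] * len(arrivals)
    served = [0] * len(arrivals)
    for t in range(slots):
        # strict priority: serve the high-priority class first
        if queues[0] and queues[0][0] <= t:
            queues[0].popleft()
            served[0] += 1
            continue
        # GPS-like sharing among backlogged lower classes
        backlogged = [c for c in range(1, len(queues))
                      if queues[c] and queues[c][0] <= t]
        if not backlogged:
            continue
        for c in backlogged:
            credit[c] += weights[c]
        pick = max(backlogged, key=lambda c: credit[c])
        credit[pick] -= sum(weights[c] for c in backlogged)
        queues[pick].popleft()
        served[pick] += 1
    return served

# Class 0 pre-empts; classes 1 and 2 then split slots 2:1 by weight
served = pq_gps_schedule([[0, 1], [0] * 6, [0] * 6], [0, 2, 1], 8)
```

With two high-priority packets and persistent backlogs below, the two lower classes end up served in proportion to their weights, which is the hierarchical behaviour the PQ-GPS model captures.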
25

Niknam, Solmaz. "Interference Modeling and Performance Analysis of 5G MmWave Networks." Diss., 2018. http://hdl.handle.net/2097/39369.

Full text
Abstract:
Doctor of Philosophy
Department of Electrical and Computer Engineering
Balasubramaniam Natarajan
Triggered by the popularity of smart devices, wireless traffic volume and device connectivity have grown exponentially in recent years. The next generation of wireless networks, i.e., 5G, is a promising solution to satisfy the increasing data demand through a combination of key enabling technologies such as the deployment of a high density of access points (APs), referred to as ultra-densification, and the utilization of a large amount of bandwidth in millimeter wave (mmWave) bands. However, due to unfavorable propagation characteristics, this portion of the spectrum has been under-utilized. As a solution, large antenna arrays that coherently direct the beams help overcome the hostile characteristics of mmWave signals. Building networks of directional antennas has given rise to many challenges in wireless communication design. One of the main challenges is how to incorporate 5G technology into current networks and design uniform structures that bring about higher network performance and quality of service. Another factor that can be severely impacted is interference behavior, basically because narrow beams are highly vulnerable to obstacles in the environment. Motivated by these factors, the present dissertation addresses some key challenges associated with the utilization of mmWave signals. As a first step towards this objective, we propose a framework for how 5G mmWave access points can be integrated into current wireless structures to offer higher data rates. The related resource sharing problem is also formulated and solved within such a framework. Secondly, to better understand and quantify interference behavior, we propose interference models for mmWave networks with directional beams, for both large-scale and finite-sized networks.
The interference model is based on our proposed blockage model, which captures the average number of obstacles that cause a complete link blockage given a specific signal beamwidth. The main insight from our analysis is that considering the effect of blockages leads to a different interference profile. Furthermore, we investigate how to model interference considering not only physical-layer specifications but also upper-layer constraints. In fact, upper network layers, such as the medium access control (MAC) protocol, control the number of terminals transmitting simultaneously and how resources are shared among them, which in turn impacts the interference power level. An interesting result from this analysis is that, from the receiving terminal's standpoint, even in mmWave networks with directional signals and high attenuation effects, we still need to maintain some sort of sensing whereby not all terminals are allowed to transmit their packets simultaneously; the level of such sensing depends on the terminal density. Lastly, we provide a framework to detect the network regime and its relation to various key deployment parameters, leveraging the proposed interference and blockage models. Such regime detection is important from a network management and design perspective. Based on our findings, mmWave networks can exhibit either an interference-limited or a noise-limited regime, depending on various factors such as access point density, blockage density, and signal beamwidth.
26

Chen, Dong. "A dual-radio self-configurable heterogeneous area network architecture for machine-to-machine communications." Thesis, 2019. http://hdl.handle.net/1959.13/1397772.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
Machine-to-machine (M2M) communications can be envisaged as an efficient means to provide automated data transmissions among low-power devices in large-scale geographical areas. The data from these devices from different systems such as the Internet of Things (IoT) and the Smart Grid must be accumulated and relayed to the cloud in a reliable manner. To do this, many networking technologies could be used to establish a heterogeneous networking environment, in which information exchange processes need to meet the Quality of Service (QoS) requirements for various M2M applications. This research focuses on heterogeneous area networks comprised of the IEEE 802.15.4 and IEEE 802.11g devices. The former has intrinsic shortcomings such as low throughput, high delays, the lack of end-to-end Internet Protocol (IP) connectivity and intra-network collisions, whereas the latter could cause inter-network collisions in a heterogeneous network when sharing the license-free band. As a result, mitigating the intra-and inter-cluster collisions and maintaining the QoS requirements for M2M applications are key challenges for the M2M communication network design. In addition, several M2M applications may need to support two-way communication links such as electric vehicles exchanging location and system information with charging stations. During this process, the downlink traffic mixed with the uplink traffic may experience traffic congestion, thus degrading the network performance. To tackle these challenges, new simulation models, techniques, link designs and algorithms were proposed in this research. To enable the IP end-to-end connectivity from the devices to the cloud, a 6LoWPAN-based wireless area network architecture for M2M applications was first proposed. To investigate the proposed architecture, several OPNET simulation models were developed. These models ensure IPv6 connectivity and serve as a cornerstone for the following research. 
After that, to mitigate the intra-network collisions caused by beacons and data packets, a staggered link design was proposed to superimpose the incoming superframe on the outgoing superframe to schedule packet transmissions. A packet aggregation technique, combined with the staggered link design, was proposed to further decrease the number of the transmitted packets in the network. Both the techniques can significantly mitigate the intra-network collisions, thus increasing the packet delivery ratio and lowering the end-to-end delay for a homogeneous wireless area network. In addition, a heterogeneous area network was proposed to extend the transmission range over a large geographical area and to maintain the QoS requirements for different M2M applications. However, the heterogeneous area network can cause inter-network collisions, which degrades the network performance. To solve this problem, a novel algorithm named as Blank Burst was proposed to schedule 6LoWPAN packet transmissions to avoid the inter-network collisions in the heterogeneous area network. This algorithm was further enhanced to a lifetime-based algorithm that schedules the packet flows and differentiates them as per their lifetimes and priorities to maintain the QoS of different M2M applications. Finally, to solve the downlink traffic congestion problem in the proposed heterogeneous area network, a congestion mitigation algorithm was proposed. The algorithm classifies the queue length into several intervals corresponding to different traffic flows and uses ACK packets to schedule the downlink traffic from the end device side. The main advantage of the proposed algorithm is that it can quickly detect the downlink traffic congestion, schedule the traffic and alleviate the network congestion. 
The simulation results showed that the proposed designs and algorithms can successfully tackle the above challenges and are superior to the existing solutions in the literature, especially in terms of mitigating the intra-and inter-network collisions while maintaining necessary QoS requirements for M2M applications.
27

Mekonnen, Teramaje Walle. "Capital structure decisions of firms: evidence on determinants and dynamics of capital structures of Ethiopian banks." Thesis, 2016. http://hdl.handle.net/10500/22766.

Full text
Abstract:
Although a preponderance of past studies in corporate finance focuses on the capital structure decisions of firms, the questions of what factors determine the capital structure choice of firms and how firms adjust their capital structure dynamically remain unresolved. Hence, the aim of this study is to investigate the determinants of capital structure and the capital structure adjustment dynamics of banks. To this end, the study employed a quantitative research approach. Specifically, secondary data were collected through a document review of the annual reports of selected banks for the longitudinal/panel research design. In addition, primary data were collected through a self-administered questionnaire distributed to selected Chief Financial Officers (CFOs) for the cross-sectional survey research design of the study. As the method of data analysis, the study estimates both static and dynamic panel models using fixed-effect and GMM estimators respectively. In analyzing the cross-sectional survey responses, appropriate statistical techniques for order-ranked and nominal/categorical items were employed. Specifically, in the univariate analysis of survey responses, mean scores and percentages of categorical responses were computed for order-ranked and nominal items respectively. Moreover, to test the significance of differences in mean scores of order-ranked items and percentages of responses to nominal items conditional on bank characteristics, the study employed the nonparametric Mann-Whitney test and the likelihood ratio test respectively. As a result, the tax shield from interest tax deductibility, profitability and/or the size of free cash flows, growth opportunities and regulatory pressure are found to be significant determinants of capital structure decisions, consistently across the panel model estimations and the cross-sectional survey.
In examining the capital structure adjustment dynamics, both the regression estimations and the survey results revealed the tendency of banks in Ethiopia to set a target capital structure and adjust towards it at a relatively fast speed of adjustment. Both the regression model estimations and the survey results also disclose the asymmetrical target capital structure adjustment of banks: overleveraged or undercapitalized banks adjust more quickly than underleveraged or overcapitalized banks. Further, the speed of target capital structure adjustment is found to be heterogeneous across banks that differ in their absolute deviations from the target capital structure, size, regulatory pressure for capital adequacy and ownership. Hence, by empirically examining the determinants and dynamics of the capital structure of banks in Ethiopia, the study contributes to the existing body of knowledge on the subject and fills a gap in the existing reference literature. Most importantly, the study tries to untangle the capital structure issues of banks, especially the dynamics, in the context of a least developed financial system where there is no secondary market and the banking sector is oligopolistic.
Graduate School of Business Leadership (SBL)
D.B.L.
