Dissertations / Theses on the topic 'Large-scale study'

To see the other types of publications on this topic, follow the link: Large-scale study.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Large-scale study.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Gilbert, Candace June. "Large-scale portfolio assessment: Pitfalls and pathways." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sablis, Aivars. "Benefits of transactive memory systems in large-scale development." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11703.

Full text
Abstract:
Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, working on large and complex software tasks. That means that neither an individual team member nor an entire team holds all the knowledge about the software being developed, and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work. Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on the related TMS literature we also focus on task allocation strategies, task characteristics and management decisions regarding the project structure, team structure and team composition. Methods. We use data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews, to measure transactive memory systems and their role in determining team performance. We measure teams' TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation. Results. Data from two companies and 9 teams are analyzed and a positive influence of well-developed TMS on team performance is found. We found that in large-scale software development, teams need not only a well-developed internal TMS, but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects. Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization.
APA, Harvard, Vancouver, ISO, and other styles
3

Singh, Babita 1986. "Large-scale study of RNA processing alterations in multiple cancers." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/572859.

Full text
Abstract:
RNA processing and its alterations are determinant for understanding normal and disease cell phenotypes. In particular, specific alterations in the RNA processing of genes have been linked to widely accepted cancer hallmarks. With the availability of large-scale genomic and transcriptomic data for multiple cancer types, it is now possible to address ambitious questions such as obtaining a global view of alterations in RNA processing specific to each cancer type as well as those in common across all types. The first objective of this thesis is to obtain a global view of RNA processing alterations across different tumor types, along with alterations with respect to RNA binding proteins (the trans component): their tumor-type specificity, differential expression, mutations, copy number variation, and whether these alterations result in differential splicing. Using data for more than 4000 patients from 11 tumor types, we provide the link between alterations of RNA binding proteins and splicing changes across multiple tumor types. The second objective moves one step further and explores in detail the RNA processing alterations with respect to mutations in RNA regulatory sequences (the cis components). Using whole genome sequencing data for more than 1000 cancer patients, we thoroughly study the sequence of entire genes and report significantly mutated short regions in coding and non-coding parts of genes that are moreover enriched in putative RNA regulatory sites, including regions deep into the introns. The recurrence of some of the mutations in non-coding regions is comparable to that of some already known driver genes in coding regions. We further analyze the impact of these mutations at the RNA level using RNA sequencing from the same samples. This work proposes a novel and powerful strategy to study mutations in cancer to identify novel oncogenic mechanisms. In addition, we share the immense amount of data generated in these analyses so that other researchers can study them in detail and validate them experimentally.
APA, Harvard, Vancouver, ISO, and other styles
4

Monroe, James T. "A large-scale modeling study of the California current system." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8607.

Full text
Abstract:
Approved for public release; distribution is unlimited
A high resolution, multi-level, primitive equation ocean model is used to investigate the combined role of wind forcing, thermohaline gradients, and coastline irregularities in the formation of currents, meanders, eddies, and filaments in the California Current System (CCS) from 22.5 deg N to 47.5 deg N. An additional objective is to further characterize the formation of the Davidson Current, seasonal variability off Baja California, and the meandering jet south of Cape Blanco. The model includes a realistic coastline and is forced from rest using climatological winds, temperatures, and salinities. The migration pattern of the North Pacific Subtropical High plays a significant role in the generation and evolution of CCS structures. In particular, variations in wind stress induce flow instabilities which are enhanced by coastline perturbations. An inshore train of cyclonic eddies, combined with a poleward undercurrent of varying seasonal depths, forms a discontinuous countercurrent called the Davidson Current north of Point Conception. Off Baja, the equatorward surface jet strengthens (weakens) during spring and summer (fall and winter). Model results also substantiate Point Eugenia as a persistent cyclonic eddy generation area. The model equatorward jet south of Cape Blanco is a relatively continuous feature, meandering offshore and onshore, and divides coastally influenced water from water of offshore origin.
APA, Harvard, Vancouver, ISO, and other styles
5

El Khatib, Dounia. "Municipal Solid Waste in Bioreactor Landfills: A Large Scale Study." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1289943004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Joshi, Prakash. "A Comparative Study Of Large-Scale Network Data Visualization Tools." ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/honors_theses/108.

Full text
Abstract:
One of the most important parts of data analysis is data visualization [15]. The easy thing about data visualization is that there are hundreds of ways to do it, one better than the other. Ironically, however, it is difficult to choose the right tool for the job. This can be a concern because it is really important to know which tool is best depending on the resources we have. This thesis tries to answer that question, to an extent. In this thesis, I have compared three data visualization tools: Gephi, Pajek and NodeXL. I have mainly discussed what each tool can do, what each tool is best at, and when to and when not to use each tool. Using the right tool can not only save us a lot of time by making the task easy and getting the work done with a minimal number of resources, but also help to get the best results. The comparison is based on what visualization features each tool has, how each tool computes different graph features, and how compatible and scalable each tool is. In the process, I used different network datasets, calculated certain features of the graphs, and recorded the findings. The end report discusses which tool is best to use given the size of the dataset, the problem we are trying to solve, the resources we have and the time we can spend.
APA, Harvard, Vancouver, ISO, and other styles
7

余學東 and Hok-tung Dion Yu. "A study of the large scale redevelopment concept in urban redevelopment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B43895001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Iwashita, Takeshi. "Study on Stabilization of Large-Scale Coal-Fired Linear MHD Generators." Kyoto University, 1997. http://hdl.handle.net/2433/77867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pisetta, de Oliveira Maria. "Integrating batteries with large-scale wind power: a Canadian case-study." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278074.

Full text
Abstract:
Canada is a country with a mostly fossil free electricity generation mix, with more than 80% of electricity being produced from hydropower, nuclear and other renewables. The province of Alberta, on the other hand, still has a long way to go in making its electricity less fossil-fuel based, and for that, it aims to invest in renewables in the coming years. This increased deployment of renewables, an intermittent energy source, could mean a good investment opportunity for batteries in the province as well. This thesis investigates the different revenue possibilities of a battery operating in Alberta’s real-time electricity market, reserve market and in a combination of both markets. To understand how wind energy would influence such an operation, these strategies are then analyzed taking into account the wind generation’s annual variability for the charging of the battery. All of these strategies were fixed, meaning the battery had a fixed operation schedule for every day of the year. Lastly, this thesis analyzed an optimal battery operation, with access to perfect information and possibility to optimize revenues between the aforementioned markets.
APA, Harvard, Vancouver, ISO, and other styles
10

Biwole, Pascal Henry. "Large scale particle tracking velocimetry for 3-dimensional indoor airflow study." Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0070/these.pdf.

Full text
Abstract:
While most research on particle tracking velocimetry (PTV) is devoted either to 2D flows or to small-scale 3D flows, this paper describes a complete 3D PTV algorithm and some applications to indoor airflow velocity measurement. A particle detection procedure specially adapted to the physical characteristics of neutrally buoyant helium-filled soap bubbles is described. In particular, overlarge particle images are removed. Particle centers are calculated by a weighted averaging method. Two temporal tracking schemes are presented and compared. The first is based on fast normalized cross-correlation with Lagrangian extrapolation in image space to solve ambiguities. The second uses polynomial regression to find an estimated position and applies a quality criterion based on minimizing changes in particle acceleration. To increase the measurement area and the number of trajectories, the correspondence problem is addressed by a new procedure involving fundamental matrices, using both three and two 3D-calibrated cameras. 3D triangulation is done by a least-squares method. Some guidelines are given in terms of camera and light positioning for 3D PTV in large volumes with various wall colors. Applications of the algorithm include Lagrangian tracking: (i) in a light-gray-walled 3.1 m x 3.1 m x 2.5 m high test room; (ii) inside a black-walled 5.5 m x 3.7 m x 2.4 m high test room; (iii) over a heat source; (iv) inside an experimental aircraft cabin; (v) in a 100 cm x 100 cm x 100 cm tank filled with an aqueous solution, using continuous light and 10 µm hollow particles. Results show that the algorithm is capable of tracking more than 1400 tracers in volumes up to 3 m x 3 m x 1.2 m high.
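The thesis's own code is not reproduced here; purely as an illustration of the least-squares 3D triangulation step mentioned in the abstract, a minimal sketch follows. The camera matrices, point coordinates and function name are hypothetical.

```python
import numpy as np

def triangulate_least_squares(proj_mats, image_pts):
    """Linear least-squares triangulation of one tracer position.

    proj_mats: list of 3x4 projection matrices of the calibrated cameras
    image_pts: list of (u, v) image coordinates of the same bubble
    Returns the 3D position in the common world frame.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, image_pts):
        # Each view contributes two linear equations in the homogeneous point X:
        #   u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Homogeneous least-squares solution: right singular vector with the
    # smallest singular value (minimises the algebraic reprojection error).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy usage with two hypothetical cameras observing a point at (1, 2, 3)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 3.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_least_squares([P1, P2], pts))  # approximately [1. 2. 3.]
```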
APA, Harvard, Vancouver, ISO, and other styles
11

Yu, Hok-tung Dion. "A study of the large scale redevelopment concept in urban redevelopment." Hong Kong : University of Hong Kong, 2002. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2524839x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Tomori, Opeoluwa. "Feasibility study of a large scale biogas plant in Lagos, Nigeria." Masters by Coursework thesis, Murdoch University, 2012. https://researchrepository.murdoch.edu.au/id/eprint/17325/.

Full text
Abstract:
Lagos state hosts one of the largest emerging cities in the world; with such a status comes the problem of a population that is fast outgrowing the infrastructure required for its survival, such as sewage treatment facilities to prevent pollution. In a situation where the government is finding it difficult to solve problems, the private sector must intervene where possible, but for this to happen it must see an avenue for profit making. In this research work I investigate the possibility of turning the state's sewage waste into energy for profit, so that private entities have an incentive to solve this problem. This study includes a detailed study of the technology of a biogas plant, the process of digestion and a look at a number of other biogas plants operating around the world. The costs per cubic meter of gas produced were examined and used to estimate a possible cost of such a plant in Lagos. The net present cost of such a plant was calculated, and a case study was used to compare the use of biogas with natural gas for industrial heat processes. The feasibility of the plant was explained and a sensitivity analysis of the effects of changes in capital cost and the discounting rate was done. The NPC of the plant over its 10-year lifetime is $1.75 million and it has a levelised cost of $0.78/m3. The plant at current operation parameters is not viable because the net present value is negative. To conclude, the plant is not viable, but with some changes in the regulatory environment, improvements in plant operations and extra efforts in sourcing revenue it has the potential to solve a major problem and also generate a good profit.
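As a hedged illustration of how the two headline figures above (a net present cost over the 10-year lifetime and a levelised cost per cubic metre of gas) are typically computed, here is a minimal sketch; the cash flows, gas output and discount rate are placeholders, not the thesis's inputs.

```python
# Illustrative only: hypothetical inputs, not the values used in the thesis.
capital_cost = 800_000            # year-0 investment, USD (assumed)
annual_om_cost = 100_000          # operation & maintenance per year, USD (assumed)
annual_gas_output = 200_000       # biogas produced per year, m^3 (assumed)
discount_rate = 0.10              # discounting rate (assumed)
lifetime_years = 10

# Net present cost: discounted sum of all costs over the plant lifetime.
npc = capital_cost + sum(
    annual_om_cost / (1 + discount_rate) ** year
    for year in range(1, lifetime_years + 1)
)

# Levelised cost per m^3: discounted costs divided by discounted gas output.
discounted_output = sum(
    annual_gas_output / (1 + discount_rate) ** year
    for year in range(1, lifetime_years + 1)
)
levelised_cost = npc / discounted_output
print(f"NPC = ${npc:,.0f}, levelised cost = ${levelised_cost:.2f}/m3")
```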
APA, Harvard, Vancouver, ISO, and other styles
13

Constantinescu, Gabriel Cristian. "Large-scale density functional theory study of van-der-Waals heterostructures." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274876.

Full text
Abstract:
Research on two-dimensional (2D) materials currently occupies a sizeable fraction of the materials science community, which has led to the development of a comprehensive body of knowledge on such layered structures. However, the goal of this thesis is to deepen the understanding of the comparatively unknown heterostructures composed of different stacked layers. First, we utilise linear-scaling density functional theory (LS-DFT) to simulate intricate interfaces between the most promising layered materials, such as transition metal dichalcogenides (TMDC) or black phosphorus (BP) and hexagonal boron nitride (hBN). We show that hBN can protect BP from external influences, while also preventing the band-gap reduction in BP stacks, and enabling the use of BP heterostructures as tunnelling field effect transistors. Moreover, our simulations of the electronic structure of TMDC interfaces have reproduced photoemission spectroscopy observations, and have also provided an explanation for the coexistence of commensurate and incommensurate phases within the same crystal. Secondly, we have developed new functionality to be used in the future study of 2D heterostructures, in the form of a linear-response phonon formalism for LS-DFT. As part of its implementation, we have solved multiple implementation and theoretical issues through the use of novel algorithms.
APA, Harvard, Vancouver, ISO, and other styles
14

Thompson, David John. "Large-Scale Display Interaction Techniques to Support Face-to-Face Collaboration." Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1192.

Full text
Abstract:
This research details the development of a large-scale, computer vision-based touch screen capable of supporting a large number of simultaneous hand interactions. The system features a novel lightweight multi-point tracking algorithm to improve real-time responsiveness. This system was trialled for six months in an exhibition installation at World Expo 2005 in Aichi, Japan, providing a robust, fault-tolerant interface. A pilot study was then conducted to directly compare the system against other, more established input methods (a single-touch case, a two-mouse case and a physical prototype) to determine the effectiveness and affordances of the multi-touch technology for arranging information on a large-scale wall space in a paired collaborative task. To assist in this study, a separate visualisation and interaction classification tool was developed, allowing the replay of XML log data in real time to assist in the video analysis required for observation and hypothesis testing.
APA, Harvard, Vancouver, ISO, and other styles
15

Belay, Eyuel. "Challenges of Large-Scale Software Testing and the Role of Quality Characteristics : Empirical Study." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-93124.

Full text
Abstract:
Currently, information technology is influencing every walk of life. Our lives increasingly depend on software and its functionality. Therefore, the development of high-quality software products is indispensable. Also, in recent years, there has been an increasing interest in the demand for high-quality software products. The delivery of high-quality software products and services is not possible at no cost. Furthermore, software systems have become complex and challenging to develop, test, and maintain because of scalability. Therefore, with increasing complexity in large-scale software development, testing has been a crucial issue affecting the quality of software products. In this paper, large-scale software testing challenges concerning quality and their respective mitigations are reviewed using a systematic literature review and interviews. Existing literature regarding large-scale software development deals with issues such as requirement and security challenges, so research regarding large-scale software testing and its mitigations is not dealt with profoundly. In this study, a total of 2710 articles published between 1995 and 2020 were collected: 1137 (42%) from IEEE, 733 (27%) from Scopus, and 840 (31%) from Web of Science. Sixty-four relevant articles were selected using a systematic literature review. Also, to include missed but relevant articles, snowballing techniques were applied, and 32 additional articles were included. A total of 81 challenges of large-scale software testing were identified from the 96 articles, of which 32 (40%) concerned performance, 10 (12%) security, 10 (12%) maintainability, 7 (9%) reliability, 6 (8%) compatibility, 10 (12%) general testing, 3 (4%) functional suitability, 2 (2%) usability, and 1 (1%) portability. The author identified more challenges mainly about the performance, security, reliability, maintainability, and compatibility quality attributes, but few challenges about functional suitability, portability, and usability. The result of the study can be used as a guideline in large-scale software testing projects to pinpoint potential challenges and act accordingly.
APA, Harvard, Vancouver, ISO, and other styles
16

Dasiyici, Mehmet Celal. "Multi-Scale Cursor: Optimizing Mouse Interaction for Large Personal Workspaces." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32706.

Full text
Abstract:
As increasingly large displays are integrated into personal workspaces, mouse-based interaction becomes more problematic. Users must repeatedly 'clutch' the mouse for long-distance movements [61]. The visibility of the cursor is also problematic on large screens, since the percentage of the screen space that the cursor takes up gets smaller. We test multi-scale approaches to mouse interaction that utilize dynamic speed and size techniques to grow the cursor larger and faster for long movements. Using Fitts' Law methods, we experimentally compare different implementations to optimize the mouse design for large displays and to test how they scale to large displays. We also compare them to techniques that integrate absolute pointing with head tracking. Results indicate that with some implementation-level modifications the mouse device can scale well up to even a 100-megapixel display, with lower mean movement times compared to integrating absolute pointing techniques with mouse input, while maintaining the fast performance of the typical mouse configuration on small screens for short-distance movements. Designs that have multiple acceleration levels and 4x maximum acceleration reduced the average number of clutch actions to less than one per task on a 100-megapixel display. Dynamic-size cursors statistically improve pointing performance. Results also indicated that dynamic speed transitions should be as smooth as possible, without steps of more than a 2x increase in speed.
Master of Science
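For readers unfamiliar with the evaluation method, here is a minimal sketch of the Shannon formulation of Fitts' Law and of a multi-level pointer acceleration function of the kind evaluated above; the regression constants, thresholds and gains are hypothetical placeholders, not the values fitted in the thesis.

```python
import math

def fitts_movement_time(distance_px, width_px, a=0.2, b=0.15):
    """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).

    a and b are device- and user-dependent regression constants; the values
    here are illustrative only.
    """
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a + b * index_of_difficulty

def multi_level_gain(device_speed,
                     levels=((2.0, 1.0), (6.0, 2.0), (float("inf"), 4.0))):
    """Simple multi-level pointer acceleration: faster hand motion -> larger
    cursor gain, capped at a 4x maximum, mirroring the dynamic-speed designs
    compared in the study (thresholds are illustrative)."""
    for threshold, gain in levels:
        if device_speed <= threshold:
            return gain
    return levels[-1][1]

# Example: a long movement across a large display vs. a short one
print(fitts_movement_time(distance_px=8000, width_px=64))  # long acquisition
print(fitts_movement_time(distance_px=300, width_px=64))   # short acquisition
print(multi_level_gain(1.5), multi_level_gain(8.0))         # 1.0, 4.0
```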
APA, Harvard, Vancouver, ISO, and other styles
17

Wahl, Emil. "Reflecting and adjusting in large-scale Agile software development : A case study." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19676.

Full text
Abstract:
Background. Agile software development has seen increased use in large-scale projects in recent times. Many larger corporations transition from using a traditional plan-driven approach for developing software to applying the Agile methodology within its processes. Large-scale Agile projects are inherently difficult to implement as there are many challenges associated with it. Many Agile frameworks have been developed to make it easier to apply the Agile methodology on a large-scale. The Agile principle of reflecting and adjusting at regular intervals can be used for developing these frameworks and allows practitioners to find ways to mitigate the challenges that large-scale Agile projects face. Objectives. This thesis aims to explore how a large-scale Agile project applies the Agile principle of reflecting and adjusting its work process, both at the overall and team level. The objectives of the thesis are to find out how the case organization regularly reflects on its work process and how it enables adjustments through the distribution of roles that can enforce changes. An additional objective is to find out what the perceived challenges are that are associated with performing regular reflections and adjustments in a large-scale Agile context. Methods. A field study is conducted at a large-scale Agile project. The field study includes direct observations of day-to-day work and scheduled meetings, interviewing project participants, and reading company documentation. The collected data is thematically analyzed to identify how the case organization reflects and adjust its work process and what the perceived challenges are. Results. Three different events are identified at the case organization to apply the Agile principle of reflecting and adjusting: reference groups to reflect on larger matters affecting much of the project, retrospective meetings to some extent to reflect within the different teams, and day-to-day reflections. All the identified roles can influence change for most parts of the process, but can only enforce change on their part of the process. Six themes are identified as perceived challenges associated with the Agile principle of reflecting and adjusting: Deadlines and time limits, multiple tasks within the teams, disinterest or misunderstanding the Agile principles, different levels of Agile, and established process and complacency. Conclusions. The case organization applies several different reflective events that address some of the challenges that are associated with large-scale Agile projects. The case organization has many other challenges relating to these events and they are all associated with other challenges previously discovered in related works.
APA, Harvard, Vancouver, ISO, and other styles
18

Saltzman, Ashley Joelle. "Spatiotemporally-Resolved Velocimetry for the Study of Large-Scale Turbulence in Supersonic Jets." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/101813.

Full text
Abstract:
The noise emitted from tactical supersonic aircraft presents a dangerous risk of noise-induced hearing loss for personnel who work near these jets. Although jet noise has many interacting features, large-scale turbulent structures are believed to dominate the noise produced by heated supersonic jets. To characterize the unsteady behavior of these large-scale turbulent structures, which can be correlated over several jet diameters, a velocimetry technique resolving a large region of the flow spatially and temporally is desired. This work details the development of time-resolved Doppler global velocimetry (TRDGV) for the study of large-scale turbulence in high-speed flows. The technique has been used to demonstrate three-component velocity measurements acquired at 250 kHz, and an analysis is presented to explore the implications of scaling the technique for studying large-scale turbulent behavior. The work suggests that the observation of low-wavenumber structures will not be affected by the large-scale measurement. Finally, a spatiotemporally-resolved measurement of a heated supersonic jet is achieved using large-scale TRDGV. By measuring a region spanning several jet diameters, the lifetime of turbulent features can be observed. The work presented in this dissertation suggests that TRDGV can be an invaluable tool for the discussion of turbulence with respect to aeroacoustics, providing a path for linking the flow to far-field noise.
Doctor of Philosophy
During takeoff, the intense noise emitted from tactical supersonic aircraft exposes personnel to dangerous risks of noise-induced hearing loss. In order to develop noise-reduction techniques which can be applied to these aircraft, a better understanding of the links between the jet flow and sound is needed. Laser-based diagnostics present an opportunity for studying the flow-field through time and space; however, achieving both temporal and spatial resolution is a technically challenging task. The research presented herein seeks to develop a diagnostic technique which is optimized for the study of turbulent structures which dominate jet noise production. The technique, Doppler global velocimetry (DGV), uses the Doppler shift principle to measure the velocity of the flow. First, the ability of DGV to measure the three orthogonal components of velocity is demonstrated, acquiring data at 250 kHz. Since turbulent structures in heated jets can be correlated over long distances, the effects on measurement error due to a large field-of-view measurement are investigated. The work suggests that DGV can be an invaluable tool for the discussion of turbulence and aeroacoustics, particularly for the consideration of full-scale measurements. Finally, a large-scale velocity measurement resolved in time and space is demonstrated on a heated supersonic jet and used to make observations about the turbulence structure of the flow field.
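For context, a minimal sketch of the measurement principle behind DGV, in generic notation rather than the dissertation's own: the velocity is inferred from the Doppler shift of laser light scattered by seed particles,

\[
\Delta f = \frac{(\hat{o} - \hat{\ell}) \cdot \mathbf{V}}{\lambda},
\]

where \(\hat{o}\) is the unit vector from the measurement point toward the observing camera, \(\hat{\ell}\) the laser propagation direction, \(\mathbf{V}\) the local flow velocity, and \(\lambda\) the laser wavelength; observations along three independent \(\hat{o}-\hat{\ell}\) directions recover all three velocity components.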
APA, Harvard, Vancouver, ISO, and other styles
19

Ardekani, Kamyar. "Feature Recommender : a large-scale in-situ study of proactive software feature recommendations." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59761.

Full text
Abstract:
In this thesis, we describe our design of Feature Recommender, a Mozilla Firefox browser extension, which proactively recommends features that it predicts will benefit users based on their individual usage behaviors. The goal of these pop-up notifications is to help users discover new features. How to maximize the effectiveness of such notifications while minimizing user interruptions remains a difficult open problem. One approach is to carefully time when the notifications are delivered. In our deployment of Feature Recommender, we study the effect of two delivery timing parameters: delivery rate and the user's context at the moment of delivery. We also investigate the effect of prediction algorithm sensitivity. We conducted three field studies, each about 4 weeks: (1) A preliminary study (N=10) to determine reasonable interruptible-moments; (2) A qualitative study (N=20) to assess the design and effectiveness of our extension; and (3) A near-identical study (N= ~3K) to assess quantitatively the effect of the timing parameters. Across all conditions Feature Recommender helped users adopt on average 18% of the features they were recommended, and as many as 24% when they were delivered at random times with a 1-per-day delivery rate limit. We show that lack of trust in recommendations is a key factor in hindering their effectiveness.
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
20

Gong, Zhixiong, and Feng Lyu. "Technical debt management in a large-scale distributed project : An Ericsson case study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14803.

Full text
Abstract:
Context. Technical debt (TD) is a metaphor reflecting technical compromises that sacrifice the long-term health of a software product to achieve short-term benefit. TD is a strategy for the development team to obtain business value. TD can do both harm and good to software, depending on how it accumulates. Therefore, it is important to manage TD in order to prevent the accumulated TD from crossing the breaking point. In large-scale distributed projects, where development teams are located at different sites, technical debt management (TDM) becomes more complex and difficult compared with traditional collocated projects. In recent years, the TD metaphor has attracted attention from academics, but there are few studies in real settings and none in large-scale globally distributed projects. Objectives. In this study, we aim to explore the factors that have a significant impact on TD and how practitioners manage TD in large-scale distributed projects. Methods. We conducted an exploratory case study to achieve the objectives. The data was collected through archival records and a semi-structured interview. For the archival data, hierarchical multiple regression was used to analyze the relationship between identified factors and TD. For the interview data, we used a qualitative content analysis method to get a deep understanding of TDM in the studied case. Results. Based on the results of the archival data analysis, we identified three factors that show a significant positive correlation with TD. These three factors were task complexity, global distance, and maturity, which were evaluated by the architect during the semi-structured interview. The architect also believed that these factors have strong relationships with TD. TDM in this case includes seven management activities: TD prevention, identification, measurement, documentation, communication, prioritization, and repayment. The tool used for TDM is an internally implemented wiki page. We also summarize the roles involved and approaches used with respect to each TDM activity. Two identified TDM challenges in this case were TD measurement and prioritization. Conclusions. We conclude that 1) TDM in this case is not complete: due to the lack of TD monitoring, the measurement of TD is static and lacks an efficient way to track the change of cost and benefit of unresolved TD over time, so it is difficult to find a proper time point to repay a TD; 2) the wiki page is not enough to support TDM, and some specific tools should be combined with the wiki page to manage TD comprehensively; 3) TD measurement and prioritization should get more attention both from practitioners and academics to find a suitable way to solve such challenges in TDM; 4) factors that make a significant contribution to TD should be carefully considered, which increases the accuracy of TD prediction and improves the efficiency of TDM.
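A minimal sketch of hierarchical multiple regression of the kind described above, with predictor blocks entered step by step and the change in R² inspected; the column names and synthetic data are assumptions, not the Ericsson archival data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data with the three factors named in the abstract.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "task_complexity": rng.normal(size=n),
    "global_distance": rng.normal(size=n),
    "maturity": rng.normal(size=n),
})
df["td"] = (0.5 * df.task_complexity + 0.3 * df.global_distance
            + 0.2 * df.maturity + rng.normal(scale=0.5, size=n))

# Step 1: base block; Step 2: add the remaining identified factors.
step1 = smf.ols("td ~ task_complexity", data=df).fit()
step2 = smf.ols("td ~ task_complexity + global_distance + maturity", data=df).fit()

print(f"R2 step 1: {step1.rsquared:.3f}")
print(f"R2 step 2: {step2.rsquared:.3f} (delta = {step2.rsquared - step1.rsquared:.3f})")
print(step2.params)  # sign and size of each factor's association with TD
```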
APA, Harvard, Vancouver, ISO, and other styles
21

Atcheson, Mairéad. "A large-scale model experimental study of tidal turbines in uniform steady flow." Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602411.

Full text
Abstract:
Similar to wind turbines, it is planned that tidal energy converters (TECs) will be deployed in arrays. However, before industry can progress to this stage, more information is required on the optimum spacing between tidal turbines. Experimental studies were carried out to understand how a single TEC interacts with its surrounding environment, with a view to informing device developers on the spacing requirements. The availability of two turbine models also permitted tests to be conducted assessing the performance of a tidal turbine in the wake of another. A new large-scale towing test facility was established at Montgomery Lough. As a simplification of the marine environment, the lake provided in principle the steady, uniform flow conditions required to quantify and understand the wake produced by a tidal device. A 16 m long x 6 m wide twin-hull catamaran was constructed for the test programme. This doubled as a towing rig and instrument measurement platform, providing a fixed frame of reference for measurements in the wake of the turbine. Tests carried out documented the performance of the single TEC models tested and mapped the downstream wake generated. Tests were also completed to investigate the influence of the wake of a TEC on the performance of a second device, at varying separation distances. The large-scale experiments also provided a test bed to compare the ability of different velocity measurement instruments to measure the wake of a tidal turbine. Three different acoustic instruments were used: two varieties of acoustic Doppler current profiler and an acoustic Doppler velocimeter.
APA, Harvard, Vancouver, ISO, and other styles
22

Palmeira, Ennio Marques. "The study of soil-reinforcement interaction by means of large scale laboratory tests." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:88588438-fbf0-4d4f-a25c-21c24fcfebd0.

Full text
Abstract:
This thesis presents the results of an investigation into soil-reinforcement interaction by means of direct shear and pull-out tests. Scale and other factors affecting test results were studied; for this purpose an apparatus able to contain a 1 cu.m sample of sand was designed by the author in order to perform large scale tests. Plastic and metal sheet and grid reinforcements were used in conjunction with Leighton Buzzard sand. Direct Shear tests on unreinforced sand samples showed that soil strength parameters were not affected by the test scale, although the post peak behaviour and the shear band thickness at the centre of the sample were significantly affected by the scale of the test. The presence of a reinforcement layer inclined to the central plane of the box had a marked effect on the strength and behaviour of the sample. The reinforcement increased the vertical stress and inhibited the shear strain development in the central region of the sample. The behaviour of the reinforced sample was found to depend on the type and form of the reinforcement as well as its mechanical properties. Pull-out test results can be severely affected by boundary conditions, in particular by the friction on the front wall of the box. The results obtained in the series of tests showed that interference between grid bearing members is the main factor conditioning the pull-out resistance of a grid reinforcement. The intensity of such interference was quantified on the basis of results obtained in tests using single isolated bearing members and grids with different geometric characteristics. An expression for the bond coefficient between soil and grid, taking into account the degree of interference, was suggested. It was also observed that the maximum bearing pressure exhibited by a bearing member depends on the ratio of the member diameter to the mean particle size.
APA, Harvard, Vancouver, ISO, and other styles
23

Raford, Noah (Noah A. ). "Large scale participatory futures systems : a comparative study of online scenario planning approaches." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68444.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 224-231).
This dissertation explores the role that participatory online collective intelligence systems might play in urban planning research. Specifically, it examines methodological and practical issues raised by the design and use of such systems in long-term policy formulation, with a focus on their potential as data collection instruments and analytical platforms for qualitative scenario planning. The research questions addressed herein examine how the use of collective intelligence platforms informs the process of scenario planning in urban public policy. Specifically, how (if at all) does the design and deployment of such platforms influence the number and type of participants involved, people's reasons for participation, the kinds of activities they perform, and the speed and timeline of the scenario creation process? Finally, what methodological considerations does the use of such instruments raise for urban planning research in the future? In-depth interviews with experts in the fields of urban planning, public participation, crowdsourcing, and scenarios were conducted, combined with secondary analysis of comparable approaches in related fields. The results were used to create an analytical framework for comparing systems across a common set of measurement constructs. Findings were then used to develop a series of prototypical online platforms that generated data for two related urban planning cases. These were then analyzed relative to a base case, using the framework described above. The dissertation closes with a reflection on how the use of such online approaches might impact the role and process of qualitative scenario research in public policy formulation in the future, and what this suggests for subsequent scholarly inquiry.
by Noah Raford.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
24

Ariizumi, Tatsuyuki. "Evaluation of large scale industrial development using real options analysis : a case study." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37438.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 67-69).
Recently, real-option analysis has gained attention as an innovative valuation method for complex real estate projects. However, considering its potential, this method has not become as popular as it should have. One major reason may be its complexity, and perhaps, its effectiveness is not yet widely known in the industry. Accumulating high-quality case studies can help demonstrate the effectiveness of any theory. Case studies can also help standardize the application process, providing guidelines that help people use the model more easily. In addition, it can reveal and provide solutions for various types of properties, and the means to accommodate the specifics of real-world problems met while applying the model. This case study deals with a large-scale industrial development project, which is suitable for the application of the real-option model. Usually industrial developers obtain large sites and then develop them in a phased manner. This allows them the freedom to choose phase timing and to modify their initial building plans more freely than with other types of property development.
This flexibility adds a certain amount of value to the land. We found that, with some modifications, the real-option model is fairly effective when applied to large-scale industrial development. The model facilitates more precise valuations of land by taking into account various options, such as waiting for better timing and selling the vacant land as is. This study also offers a method to analyze the proper timing of each phase's commencement, a useful decision-making tool for the developer.
by Tatsuyuki Ariizumi.
S.M.
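The thesis's own spreadsheet model is not reproduced here; as a rough sketch only, the kind of deferral flexibility described above (develop now, wait, or sell the vacant land) can be valued on a binomial lattice. All parameters below are hypothetical.

```python
def deferral_option_value(v0, cost, u, d, rf, periods, salvage=0.0):
    """Binomial-lattice value of land with the option to develop now, wait,
    or sell as is (salvage). All inputs are illustrative assumptions.

    v0      : current value of the completed development's cash flows
    cost    : construction cost (assumed constant)
    u, d    : up / down multipliers per period for the underlying value
    rf      : risk-free rate per period
    salvage : value of selling the vacant land instead of developing
    """
    q = (1 + rf - d) / (u - d)  # risk-neutral probability of an up move
    # Terminal payoffs: develop if worthwhile, otherwise sell the land.
    values = [v0 * (u ** j) * (d ** (periods - j)) for j in range(periods + 1)]
    option = [max(v - cost, salvage) for v in values]
    # Roll back through the lattice, keeping the best of develop / sell / wait.
    for step in range(periods - 1, -1, -1):
        values = [v0 * (u ** j) * (d ** (step - j)) for j in range(step + 1)]
        option = [
            max(values[j] - cost, salvage,
                (q * option[j + 1] + (1 - q) * option[j]) / (1 + rf))
            for j in range(step + 1)
        ]
    return option[0]

print(deferral_option_value(v0=100.0, cost=95.0, u=1.2, d=0.85, rf=0.03,
                            periods=3, salvage=2.0))
```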
APA, Harvard, Vancouver, ISO, and other styles
25

Domike, Kristin Rebecca. "A study of large-scale aggregation mechanisms and kinetics of β-lactoglobulin protein." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

McEwan, Robert. "Interdisciplinary study of hydrodynamic and biogeochemical processes of a large-scale river plume." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1551.

Full text
Abstract:
This research has utilised the Massachusetts Institute of Technology general circulation model (MITgcm) along with observations taken as part of the River Influences on Shelf Ecosystems (RISE) study to investigate the dynamic processes associated with the Columbia River plume at different temporal and spatial scales. Firstly, a high-resolution (Δx=Δy=25 m) investigation of the near-field plume was undertaken using the fully non-hydrostatic mode of the MITgcm. This resulted in the reproduction of a detailed inner plume as well as a series of radiated internal waves. In addition to first-mode internal waves, second-order waves were radiated from the plume boundary when the propagation velocity becomes sub-critical. Third-mode internal waves were also observed, trapped at the plume head. The fine plume structure produced revealed secondary fronts within the plume that also generated internal waves. These features increase the mixing occurring inside the plume, resulting in greater entrainment of underlying waters into the plume. The use of Lagrangian drifters within the model produced detailed results of the recirculation taking place within the emerging plume and how this recirculation changes with depth. This has implications for the near-field recirculation of biologically important solutes present in the plume waters. A second, coarser-resolution horizontal grid (Δx=Δy=500 m) was implemented to investigate the processes of the large-scale plume with the addition of wind forcing. Experiments with both simplified and realistic wind scenarios were carried out and comparisons with in-situ data were made. This revealed the dominance of wind effects on the outer plume and tidal effects on the inner plume. In the simplified wind cases, the classical theory of plume propagation under the action of upwelling- and downwelling-favourable winds was recreated. For the case of realistic winds, there was some success in reproducing a hindcast of the plume location. Tracer fields were used to represent nutrient concentrations based on observed data. Whilst these results showed variations from observations, they did allow a spatially and temporally complete view to be taken of nutrient distribution in the region.
APA, Harvard, Vancouver, ISO, and other styles
27

Souza, Daniel Sampaio. "Numerical study of the large scale turbulent structures responsible for slat noise generation." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18148/tde-27112017-092717/.

Full text
Abstract:
The main sources of airframe noise in commercial aircraft are the landing gear and the high-lift devices. Among the high-lift devices, the slat deserves special attention since it represents a distributed source along the wing span. During approach and landing, the noise generated by the slat can be comparable to the engine-generated noise. For the design of quieter high-lift systems, it is important to understand the physics responsible for slat noise generation. The objective of the work described in this thesis is to correlate the dynamics of large-scale turbulent structures at different airfoil configurations with the characteristics of the noise generated by these structures. Four different configurations were investigated, covering two airfoil angles of attack and three slat positions relative to the main element. The unsteady flow data were provided by a Lattice-Boltzmann-based computational code. The Proper Orthogonal Decomposition technique was used for the objective identification of large-scale structures in the slat region. Two different metrics were considered for the eduction of the coherent structures: one based on the turbulent kinetic energy of the structures, and one based on their correlation to the noise emitted by the slat. The results of the transient simulations showed good agreement with wind tunnel measurements, providing confidence in the relevance of the analysis. The noise spectra of three of the simulated cases were dominated by a series of narrowband peaks at low frequency, while the spectrum of the remaining case was broadband in nature. Analysis of the averaged flow showed large variations between the simulated cases in the size and shape of the recirculating zone inside the slat cove and in the reattachment position of the mixing layer. The results indicated that, as the reattachment point approaches the region of the gap between the slat and the main element, the noise emission power increases. The large-scale structures most correlated to the noise were typically two-dimensional, and their shape suggests they resulted from the growth of disturbances in the mixing layer due to the inflectional instability. The dynamics of the noise-correlated structures at the frequencies of the peaks was consistent with the existence of an acoustic feedback mechanism acting inside the slat cove. Based on the observation of the educed structures, a model to predict the peak frequencies was proposed, showing good agreement with the frequencies computed from the unsteady flow data.
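As a hedged illustration of the snapshot Proper Orthogonal Decomposition used above for structure eduction, a minimal sketch follows; the snapshot matrix here is random placeholder data, not the Lattice-Boltzmann output.

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 5000, 200
snapshots = rng.normal(size=(n_points, n_snapshots))  # velocity snapshots (placeholder)

# Subtract the temporal mean so the modes describe fluctuations only.
fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)

# Economy-size SVD: columns of U are spatial POD modes, S**2 is proportional
# to the kinetic energy captured by each mode, and Vt holds the temporal
# coefficients used for correlation with the radiated noise.
U, S, Vt = np.linalg.svd(fluctuations, full_matrices=False)

energy_fraction = S**2 / np.sum(S**2)
print("Energy captured by the first 5 modes:", energy_fraction[:5].round(3))

# Reconstruct the field with the leading modes only (large-scale structures).
k = 10
large_scale = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
```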
APA, Harvard, Vancouver, ISO, and other styles
28

Fischer Horn af Rantzien, Douglas, and Christian Weigelt. "A Process for Threat Modeling of Large-Scale Computer Systems : A Case Study." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280097.

Full text
Abstract:
As businesses use more digital services connected to the internet, these services and systems become more and more vulnerable to attacks carried out digitally. As a way to prevent cyber attacks and provide possible countermeasures to such threats, threat modeling methods have been developed. This report studies the efficacy of a recently developed threat modeling method, referred to in the report as "TMM". This was done by examining the results of the process as well as the process itself. The results of the different stages of the process are detailed and discussed in the context of performing the implementation and of how valuable the results are to stakeholders. We found that TMM was less complex than similar methods of threat modeling and risk assessment, and that it is well suited to an iterative process which would provide a well-developed threat model and risk assessment through repeated implementation. The threat models and risk assessments produced by TMM would then give appropriate and accurate recommendations for improving system security.
APA, Harvard, Vancouver, ISO, and other styles
29

Jovanovic, Mihajlo A. "Modeling Large-scale Peer-to-Peer Networks and a Case Study of Gnutella." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin989967592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Voroshilova, Alexandra. "Comparison study on graph sampling algorithms for interactive visualizations of large-scale networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254656.

Full text
Abstract:
Networks are present in computer science, sociology, biology, and neuroscience, as well as in applied fields such as transportation, communication, and the medical industries. The growing volume of collected data is pushing scalability and performance requirements on graph algorithms, and at the same time a need arises for a deeper understanding of these structures through visualization. Network diagrams or graph drawings can facilitate the understanding of data, making it intuitive to identify the largest clusters, the number of connected components, and the overall structure, and to detect anomalies, which is not achievable through textual or matrix representations. The aim of this study was to evaluate approaches that would enable visualization of a large-scale peer-to-peer live video streaming network. The visualization of such large-scale graphs has technical limitations, which can be overcome by filtering important structural data from the networks. In this study, four sampling algorithms for graph reduction were applied to large overlay peer-to-peer network graphs and compared. The four algorithms cover different approaches: selecting links with the highest weight, selecting nodes with the highest cumulative weight, using betweenness centrality metrics, and constructing a focus-based tree. Through the evaluation process, it was discovered that the algorithm based on betweenness centrality approximation offers the best results. Finally, for each of the algorithms in the comparison, the resulting sampled graphs were visualized using a force-directed layout with a two-step loading approach to depict the effect of sampling on the representation of the graphs.
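As a rough sketch of the betweenness-centrality-based reduction that the study found most effective (the graph generator, pivot count k and number of retained nodes are illustrative assumptions, not the study's streaming data or settings):

```python
import networkx as nx

# Synthetic stand-in for a peer-to-peer overlay graph.
G = nx.barabasi_albert_graph(n=5000, m=3, seed=42)

# Approximate betweenness centrality by sampling k pivot nodes, then keep the
# most central nodes so the reduced graph stays small enough to visualize.
bc = nx.betweenness_centrality(G, k=200, seed=42)
keep = sorted(bc, key=bc.get, reverse=True)[:500]
G_sampled = G.subgraph(keep).copy()

print(f"sampled graph: {G_sampled.number_of_nodes()} nodes, "
      f"{G_sampled.number_of_edges()} edges")
```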
APA, Harvard, Vancouver, ISO, and other styles
31

Qaja, Besjana. "“Transport Corridors”. Large scale planning for regional and national development. Case study: Albania." Doctoral thesis, Università degli studi di Ferrara, 2021. http://hdl.handle.net/11392/2478842.

Full text
Abstract:
Transportation and the need for movement were born together with humans, and the transportation system in its modern concept consists of a series of factors such as network infrastructure, control systems, movement flows, accessibility of regions, etc. These constituent components have been the basis for the development of this research topic. Beyond the development of transport itself, the trail that transport leaves in the territory creates the possibility of developing corridors, which are much more than a geographical crossing line. The quality of regional connections developed through transport corridors is of great importance for the economic and social development of a country. A transport corridor represents an important structure to serve and strengthen the functional characteristics of a region, and a corridor can provide important interconnection and communication between two or more separate functional regions. The transport corridor is a model based on the use of a high-density flow along an artery and short capillary services at the corridor nodes, where these nodes are arranged hierarchically, creating an interconnected network. Regions that have no connection and interaction with others are considered isolated and inaccessible places and remain unexploited. The characteristics of a transport corridor depend on its construction objectives. In this context, this research has been developed around the concept of road transport corridors and the impact they have on the social and economic development of the regions they cross, whether built within one state or between several states, often influencing the cultures of the regions they pass through and bringing the settlements closer to each other in time. In addition to developing and discussing the concept of the corridor in its construction theory, the research includes fieldwork through: visual observation of the connection of the settlements to the corridor, focus groups with 5 different groups of stakeholders who most frequently use this infrastructure, analysis of annual reports and data from preliminary studies, and interpretation and processing of the results to draw conclusions on the situation. Methodologically, the geographical area of the research is a road corridor which connects the settlements with each other and northern Albania with the state of Kosova. In carrying out this process, the work is developed in 3 different clusters which are grouped according to several characteristics. Work has also been done on the analysis of different contexts to see the differences between them (Egnatia Odos). The findings revealed that in the case of the "National Road" the regional benefit was only in terms of reduced travel time, while the region has faced a population exodus. To address this situation, a combined model of corridor management is proposed, integrating and connecting the settlements with each other, with a corridor observatory placed at its center, and some theoretical conceptions for "transport corridors" are suggested. The research provides a logical framework for further in-depth study of this field, based on the recommendations given at the end of this study, to give these projects another territorial range of importance.
APA, Harvard, Vancouver, ISO, and other styles
32

Roman, Greice de Carli. "Characterizing the presence of agility in large-scale agile software development." Pontif?cia Universidade Cat?lica do Rio Grande do Sul, 2016. http://tede2.pucrs.br/tede2/handle/tede/7518.

Full text
Abstract:
The Agile Manifesto was proposed in February 2001 with small, co-located teams in mind. However, agile has also been put into practice in other settings (e.g. large teams, distributed teams, complex systems) under the term 'Large-Scale Agile Development' (LSAD). There is no clear definition of, or understanding of, how agility is present in this setting. Thus, our work fills this gap, aiming to characterize agility in LSAD. We conducted a study organized in two phases. In Phase 1, named Theoretical Base, we surveyed the state of the art of the area. In Phase 2, named Empirical Study, we conducted two investigations: a field study in a large-scale agile company, to identify how agility developed during the company's transformation to this new approach, and a focus group, to identify how large-scale agile teams that have been using agile methods for some time perceive themselves in terms of maturity in agile aspects. The findings help researchers and professionals better understand how agility is defined and perceived in large settings. This knowledge is useful for those who want to start the agile journey in similar environments and for researchers aiming to further explore the topic.
APA, Harvard, Vancouver, ISO, and other styles
33

Goulding, John Stuart. "A study of large-scale focusing Schlieren systems." Thesis, 2008. http://hdl.handle.net/10539/4841.

Full text
Abstract:
The interrelationship between the variables involved in focusing schlieren systems is fairly well understood; however, how changing the variables affects the resultant images is not. In addition, modified grids and arrangements, such as two-dimensional, colour and retroreflective systems, have never been directly compared to a standard system. The existing theory is developed from first principles to its current state. An apparatus was specifically designed to test grid and arrangement issues while keeping the system geometry, optical components and the test object identical. Source grid line spacing and the ratio of clear line width to dark line width were varied to investigate the limits of diffraction and banding and to find an optimum grid for this apparatus. Two-dimensional, colour, retroreflective and a novel projected arrangement were then compared to this optimum case. In conclusion, the diffraction limit is accurately modelled by the mathematical equations. The banding limit is slightly less well modelled, as additional factors seem to affect the final image. Inherent problems with the two-dimensional and colour systems indicate that, while they can be useful, they are not worth developing further, though chromatism in the system meant that colour systems were not fully investigated. The retroreflective and projected systems have the most potential for large-scale use and should be developed further.
APA, Harvard, Vancouver, ISO, and other styles
34

Liu, Jun-Gu, and 劉俊谷. "A Study of Steel Jacketing for Large Scale Rectangular RC Bridge Columns." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/43043147613206191566.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Civil Engineering, 89 (ROC academic year)
During the past years, experimental results on eleven 0.4-scale specimens have indicated that octagonal steel jacketing can effectively improve the seismic performance of rectangular RC bridge columns deficient in concrete lateral confinement or shear, or having vertical reinforcing bars improperly lap-spliced in the potential plastic hinge zone. In order to further substantiate the effectiveness of octagonal steel jacketing for seismic retrofit of full-scale rectangular RC bridge columns, a combined analytical and experimental research program was launched. Two analytical methods were employed to compare the analytical lateral force versus displacement relationships with those obtained from the past tests of the 0.4-scale specimens. The first method incorporates the strain versus stress relationships from the tensile coupon responses of the vertical reinforcement together with Mander's confined concrete model, considering the equivalent lateral ties computed from the steel jackets and the transverse reinforcement. The second method considers a modified confined concrete model proposed by Lin and Li and a combined low-cycle fatigue and softening-branch model for the vertical reinforcing bars. In both methods, moment versus curvature relationships were computed assuming that plane sections remain plane after bending. Analytical results indicate that both methods can satisfactorily predict the overall lateral force versus displacement relationships of the column specimens. However, while the analyses accurately predict the experimental responses of column specimens lacking concrete lateral confinement in the plastic hinge zone, the analytical results are conservative for those column specimens lacking shear strength or proper lap-spliced vertical bar details. Based on the test and analytical results, design procedures for seismic retrofit of rectangular RC columns employing octagonal steel jackets are given. The octagonal steel jackets for the full-scale bridge column specimen, which had 100% of its vertical bars lap-spliced over a length of 40 bar diameters above the footing, were designed accordingly. The initial experimental results indicated that the retrofit measure was not effective due to the poor quality of the concrete. Subsequently, the lower part of the steel jackets and the concrete cover were removed before recasting the concrete and reinstalling the steel jackets. Final test results on the repaired specimen confirm that, if the concrete strength is appropriate, the proposed octagonal steel jacket design can effectively improve the seismic performance of the full-scale bridge column even when all the vertical bars are lap-spliced over 40 bar diameters in the plastic hinge zone.
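The first analytical method relies on Mander's confined concrete stress-strain relation; a minimal sketch of that expression is given below, with illustrative material parameters (fcc, eps_cc and ec are assumed values, not the thesis data):

```python
def mander_confined_stress(eps, fcc=45.0, eps_cc=0.005, ec=30000.0):
    """Mander-type confined concrete stress (MPa) at axial strain eps.

    fcc (confined peak strength), eps_cc (strain at peak) and ec (tangent
    modulus) are illustrative values, not the material data of the thesis.
    """
    x = eps / eps_cc
    e_sec = fcc / eps_cc                 # secant modulus at the peak
    r = ec / (ec - e_sec)
    return fcc * x * r / (r - 1.0 + x**r)

for eps in (0.001, 0.002, 0.005, 0.010, 0.020):
    print(f"strain {eps:.3f} -> stress {mander_confined_stress(eps):5.1f} MPa")
```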
APA, Harvard, Vancouver, ISO, and other styles
35

Cheng, Jen-hsuan, and 鄭任軒. "The feasibility study on outdoor large scale microalgae culture." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/88452735369424630568.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Graduate Institute of Marine Environment and Engineering, 99 (ROC academic year)
Nannochloropsis oculata is one of the most promising oleaginous microalgae, containing a large amount of lipid which can be extracted and transformed into biodiesel. The purpose of this study is to develop a closed system, an Outdoor Temperature Controllable Photobioreactor System (OTCPS), to cultivate the algae in pure and massive quantity. In this research, seawater from Sizihwan is used as the cultivation liquid. The Lambert-Beer law is adopted to calculate the attenuation coefficient of light intensity in the water column. By adjusting the water depth, not only the light intensity but also the water temperature can be controlled at the optimal level, thus avoiding unfavorable temperature changes in harsh weather. Therefore, establishing the relationship between light intensity and water temperature is critical for the success of growing microalgae in outdoor conditions. The temperature variation of the culture medium can be explained by heat transfer theory. In this study, the heat radiation mechanism and the first-order Fourier heat conduction law were adopted to simulate the liquid temperature change. The simulation results show good agreement with the field data, especially during daytime. The experimental results reveal that the winter growth rate of Nannochloropsis oculata is 0.33 d-1, while the summer growth rate is only 0.20 d-1. This may imply that high temperature inhibits the growth of Nannochloropsis oculata. Besides, when the cell density of the microalgae becomes higher, the individual algae may create a mutual shading effect and thus reduce the photosynthetic efficiency. In conclusion, the proposed photobioreactor has been successfully tested in summer, autumn, and winter at Kaohsiung, in the south of Taiwan. This indicates that the device can be broadly used in the subtropical zone.
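The depth-control idea rests on Lambert-Beer attenuation of light through the culture; the sketch below, with an assumed surface irradiance I0 and attenuation coefficient k rather than measured values, shows how the column-averaged light level varies with the chosen water depth:

```python
import numpy as np

# Lambert-Beer attenuation: I(z) = I0 * exp(-k * z).  The surface irradiance
# I0 and attenuation coefficient k are assumed values, not measurements.
I0 = 2000.0   # micromol photons / m^2 / s at the surface
k = 15.0      # 1/m, grows with cell density

def mean_irradiance(depth, n=1000):
    """Column-averaged light intensity for a culture of the given depth (m)."""
    z = np.linspace(0.0, depth, n)
    return np.mean(I0 * np.exp(-k * z))

for depth in (0.05, 0.10, 0.20, 0.30):
    print(f"depth {depth:.2f} m -> mean irradiance {mean_irradiance(depth):7.1f}")
```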
APA, Harvard, Vancouver, ISO, and other styles
36

Hsu, Teng-Chieh, and 許登傑. "Study on Risk Analysis of Large Scale Landslide." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7x5m53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yu, Tzu-Ying, and 余姿瑩. "Study on occurrence rainfall of large-scale landslide." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/m64wv2.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Hydraulic and Ocean Engineering, 107 (ROC academic year)
With the impact of global warming and climate change, extreme rainfall events may become the norm in the future, and major disasters will become more frequent. During the rainy season, extreme rains often induce serious large-scale landslides. Taiwan, for example, is hit by typhoons all year round; in 2009, typhoon Morakot caused severe damage to the south of Taiwan, and in the Xiaolin Village landslide residents' lives and property were seriously damaged. After this disaster, the prevention of large-scale landslides has become a significant research issue. This study collects the large-scale landslide data of Taiwan from 2001 to 2016 and combines them with the time and place of the ground motion signals to determine rainfall warning values based on time, rainfall, and geological factors, and explores these warning values. In this study, we use a time-series rainfall analysis method and a dimensionless rainfall analysis method to obtain the rainfall warning values for large-scale landslides. The results show that only the data belonging to the group of single and new large-scale landslides yield a good relationship. With the relationship obtained between the dimensionless parameters R/D and Φ/θ, it becomes easier to define the safety value of rainfall for large-scale landslide monitoring and prevention, provided some geostatistical parameters can be obtained before a landslide occurs.
APA, Harvard, Vancouver, ISO, and other styles
38

Yeh, Po Hung. "Channel Meander Migration in Large-Scale Physical Model Study." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-7215.

Full text
Abstract:
A set of large-scale laboratory experiments was conducted to study channel meander migration. Factors affecting the migration of banklines, including the ratio of the radius of curvature to the channel width, the bend angle, and the Froude number, were tested in the experiments. The effect of each factor on the evolution of the channel planform was evaluated and quantified. The channel bankline displacement was modeled by a hyperbolic function incorporating an initial migration rate and a maximum migration distance. It is found that both the initial migration rate and the maximum migration distance exhibit a Gaussian distribution along a channel bend. Correlations between these distributions and the controlling parameters were then studied. Two sets of equations were developed for predicting the initial migration rate and the maximum migration distance. With the initial migration rate and maximum migration distance expressed as functions of geometric and flow parameters, the hyperbolic-function model can be applied to estimate the bankline migration distance. A prediction of channel centerline migration was also developed in this study. The channel centerline was represented by a combination of several circular curves and straight lines, each curve with a radius of curvature and bend angle describing the channel bend geometry. HEC-RAS was applied to estimate the flow hydraulic properties along the channel by adjusting the channel bed slope. The intersections of two consecutive centerlines were found to be the inflection points of the centerline migration rate. The phase lag to the bend entrance was measured and correlated with the bend length and water depth. The migration rate between two successive inflection points demonstrated a growth and decay cycle. A sine function was used to model the centerline migration rate through regression analysis of the measurement data. The method was applied to four sites on four natural rivers in Texas. The results showed that the prediction equations provide results in good agreement with the centerline migration of natural rivers.
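One common way to write a hyperbolic displacement law with an initial rate and an asymptotic maximum is M(t) = M_max·R0·t / (M_max + R0·t); the sketch below uses this assumed parameterization (the thesis fits its own form to the laboratory data), together with illustrative Gaussian variations of R0 and M_max along the bend:

```python
import numpy as np

def bankline_displacement(t, r0, m_max):
    """Hyperbolic migration model with initial rate r0 and asymptote m_max.

    This particular parameterization is an assumption for illustration; the
    thesis fits its own hyperbolic form to the laboratory measurements.
    """
    return m_max * r0 * t / (m_max + r0 * t)

def gaussian_along_bend(s, peak, s0, width):
    """Gaussian variation of a migration parameter along the bend arc length s."""
    return peak * np.exp(-((s - s0) ** 2) / (2.0 * width**2))

s = np.linspace(0.0, 10.0, 5)                                  # arc length, m
r0 = gaussian_along_bend(s, peak=0.02, s0=5.0, width=2.0)      # m/h, assumed
m_max = gaussian_along_bend(s, peak=0.50, s0=5.5, width=2.5)   # m,   assumed

print("displacement after 24 h along the bend:",
      bankline_displacement(24.0, r0, m_max).round(3))
```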
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Po-Yu, and 林柏宇. "The Study of Large-scale Network Security Auditing Mechanism." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/68390005800054997108.

Full text
Abstract:
Master's thesis, Shu-Te University, Graduate Institute of Information Management, 90 (ROC academic year)
Internet services have become more popular and convenient as information technology and network applications advance daily. To ensure the quality and accessibility of the Internet, network security is an important concern. In order to maintain the reliability, continuity and quality of Internet services, domain administrators must have access to the most up-to-date information about every node within the network domain, so that they can take precautionary steps or provide immediate solutions to decrease the damage of network security incidents. The purpose of this thesis is to establish a large-scale network security scanning system, which assists domain administrators in obtaining network node information efficiently and analyzes the scanning data automatically. The research evaluates the targeted network nodes by using both active scanning and passive scanning methods, collecting version information of the web server, FTP server, mail server, DNS server, operating system, and SSL. The collected node information is stored in a database for further analysis and comparison. Moreover, by collecting the vulnerabilities of Internet services from the Common Vulnerabilities and Exposures (CVE) information database, vulnerability ratings of the various Internet services can be obtained. The network security scanning system can be used to scan the targeted network domain periodically and consistently, and the scanning reports are made available to domain administrators in HTML format. This research used the Taiwan network domain for evaluation; the study covers the most commonly used servers and obtained the version information and overall vulnerability ratings of the various servers in this domain. At the same time, recommendations for ensuring network security are provided.
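One typical ingredient of such active scanning is banner grabbing, i.e. reading the greeting string that services such as FTP or SMTP send on connect. The generic sketch below is not the thesis's scanner; the host and ports are placeholders, and such probes should only be run against systems you are authorized to audit:

```python
import socket

def grab_banner(host, port, timeout=3.0):
    """Read the greeting banner that services such as FTP or SMTP send on
    connect.  Hypothetical helper for illustration; only scan hosts you are
    authorized to audit."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError as exc:
        return f"<no banner: {exc}>"

if __name__ == "__main__":
    for port in (21, 25):                    # placeholder target and ports
        print(port, grab_banner("127.0.0.1", port))
```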
APA, Harvard, Vancouver, ISO, and other styles
40

Hsu, Shih-hsun, and 許世勳. "Study of Electrical Discharge Machining by a large scale." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/09870544496123157505.

Full text
Abstract:
Master's thesis, National Central University, In-service Master's Program, Graduate Institute of Mechanical Engineering, 100 (ROC academic year)
EDM suffers from a low working speed because machining debris is not easily flushed away and tends to deposit between the electrode and the workpiece. In order to solve this problem, we propose three approaches. First, adding aluminum powder to the working fluid increases the gap distance between the electrode and the workpiece and reduces the size of the debris. Second, applying a magnetic field on both sides of the workpiece increases the speed of the debris flow. Third, the magnetic field and the aluminum powder additive are combined to further enhance the material removal rate. The experimental results show that, with the baseline processing parameters and no improvement, debris accumulates on the workpiece surface. After 30 minutes of machining, the material removal rates for electrode diameters of 30 mm, 40 mm and 55 mm were 0.70 g/min, 0.69 g/min and 0.20 g/min, respectively. With aluminum powder added, the material removal rates were 0.82 g/min, 0.77 g/min and 0.72 g/min, and no debris accumulated on the workpiece surface. With magnetic field assistance, the material removal rates were 0.96 g/min, 0.83 g/min and 0.70 g/min, again without debris accumulation. With aluminum powder combined with magnetic field assistance, the material removal rates were 1 g/min, 0.93 g/min and 0.76 g/min, also without debris accumulation. All three improvements make the debris flow smoothly during machining and enhance the material removal rate; adding aluminum powder together with the magnetic field gives the best results.
APA, Harvard, Vancouver, ISO, and other styles
41

Liou, Jer-Ming, and 劉哲明. "The study of large-scale production of gamma-linolenic acid." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/32846432595453504367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chan, Po-Hsun, and 詹博勳. "Study on the Development Strategy of Large Scale Shopping Center." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/59251415125561834554.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Architecture (Engineering), 81 (ROC academic year)
Due to the rise in income level and the change in living and consuming styles in the Taiwan area, multifunctional shopping malls of the European-American shopping center type have been proposed. At the same time, because of the lack of civic leisure facilities, the central government has listed the construction of large-scale shopping centers as an important economic item of the National Six-Year Construction Plan. The main purpose of this study is to propose development strategies for large-scale shopping centers based on foreign development experience and an analysis of the development circumstances in Taiwan at this stage. The structure of this study is as follows. First, an integrated development concept of the shopping center is established from a discussion of the development ideas of shopping centers, including development history, development characteristics, and planning and development strategies. Second, the function, development process, site selection, market potential, planning and design, and operation and management of two proposed development cases in Taiwan and two foreign developed shopping center cases are discussed. From the above study, for the development of shopping centers we should propose an integrated and clear large-scale shopping center development plan, establish a complete shopping center development system, and propose a site location plan to guide the development of large-scale shopping centers in Taiwan efficiently. In planning, operation and management, we should absorb the successful experience of the foreign cases and adapt it to our own development circumstances in order to develop large-scale shopping centers of a Taiwanese style.
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Min Hsuan, and 王敏璇. "A Study on the Ranking Methods for Large Scale Competitions." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/84070323469512824801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Weng, Chieh-Hsuan, and 翁傑軒. "Study on Correlation of Water Quality and Large Scale Landslide." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/g4mwjs.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Yen, Chin-Yi, and 顏勤益. "The Study of Large-scale Events Fireworks Execution Management Cast." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7tp29q.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Program of Industrial Safety and Disaster Prevention, College of Engineering, 105 (ROC academic year)
Professional fireworks are usually displayed when a celebration or a folk activity takes place. If a fireworks display is not managed correctly, it may pose a serious danger to the public. This study not only looks into the national fireworks and art festival held in Miaoli County but also discusses fireworks behavior through related research. In addition, the research analyzes the causes of accidents through case studies of individual fireworks accidents and identifies the difficulties of carrying out professional fireworks inspection, display, and disposal. The goal is the risk assessment of fireworks management, and the study tries to define a method to verify the effectiveness of corrective actions. How can the safety of fireworks displays be improved? The investigation shows the following. Firstly, a communication system between the sponsor and the related departments should be established. Secondly, professional fireworks inspection and the checklist for fireworks safety management should be carried out rigorously. Last but not least, the related fireworks laws and regulations should be promoted, so that the safety of fireworks displays can be assured.
APA, Harvard, Vancouver, ISO, and other styles
46

Kao, Chia-yang, and 高嘉陽. "A Study on Test-sheet Composition from Large-scale Test Banks." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/rd7rat.

Full text
Abstract:
Master's thesis, Ming Chuan University, Master's Program, Department of Computer Science and Information Engineering, 96 (ROC academic year)
With the rapid development of information technology, much more attention has been paid to test sheet composition based on computerized test item banks. In the past, the most popular way to pick test items from an item bank was random selection. Although random selection is quite easy to conduct, it is hard to meet the various requirements of developing a good test sheet. Some works in the literature have addressed the problem by using heuristic algorithms, such as the tabu search algorithm; given multiple assessment criteria for a test, a test sheet with an approximately optimal solution can be generated from a large item bank. This thesis proposes a greedy algorithm to solve the test-sheet composition problem. According to the experimental results, the proposed greedy algorithm achieves a better solution than the known tabu search algorithm in a shorter time in most situations, and several related issues are explored. Besides, we also adopt item response theory to formulate the test-sheet composition. Taking students' ability into consideration, a test sheet satisfying all the criteria can be generated by an ant colony optimization (ACO) algorithm. According to the experimental results, the ACO algorithm reaches a better solution than the other algorithms in acceptable time.
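A greedy test-sheet composer can be sketched as repeatedly adding the item that keeps a running criterion closest to its target; the toy version below uses a single difficulty criterion and a synthetic item bank, which is a simplification of the multi-criteria algorithm described in the thesis:

```python
import random

def greedy_test_sheet(bank, n_items, target_difficulty):
    """Toy greedy composer: repeatedly add the item that keeps the running
    mean difficulty closest to the target.  A single-criterion simplification,
    not the thesis algorithm verbatim."""
    selected, remaining = [], list(bank)
    for _ in range(n_items):
        def gap(item):
            diffs = [it["difficulty"] for it in selected] + [item["difficulty"]]
            return abs(sum(diffs) / len(diffs) - target_difficulty)
        best = min(remaining, key=gap)
        selected.append(best)
        remaining.remove(best)
    return selected

random.seed(1)
bank = [{"id": i, "difficulty": random.random()} for i in range(500)]
sheet = greedy_test_sheet(bank, n_items=20, target_difficulty=0.6)
print("mean difficulty of the sheet:",
      round(sum(it["difficulty"] for it in sheet) / len(sheet), 3))
```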
APA, Harvard, Vancouver, ISO, and other styles
47

Su, Zhi-hao, and 蘇智豪. "Study on Low PAPR Precoding for Large-Scale Multiple Antenna System." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/9w968m.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Institute of Communications Engineering, 102 (ROC academic year)
A massive MIMO antenna system employs a few hundred base station antennas to simultaneously serve many tens of user terminals in the same radio channel. Such a system can dramatically improve data rates and energy efficiency, and is thus widely considered a future cellular network architecture. Because the number of RF power amplifiers is large, reducing the hardware cost becomes a critical issue in implementation. Low-cost RF amplifiers have poor linearity, so they cannot be used with signals that have a high peak-to-average power ratio (PAPR). To overcome this problem, several works have considered developing low-PAPR precoding techniques. Previous works have proposed the gradient descent (GD) method to find the low-PAPR precoding; however, the GD method has high computational complexity. Our contribution is the introduction of the approximate message passing (AMP) algorithm for searching for the low-PAPR precoding. Compared to the GD method, the AMP algorithm is more suitable for hardware implementation because it enjoys much lower computational complexity.
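The quantity the GD and AMP precoders try to suppress is the PAPR of each antenna's transmit signal; the sketch below only illustrates how that metric is measured on a synthetic complex baseband signal (signal length and statistics are assumptions), not the AMP precoding itself:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# Illustrative per-antenna transmit signal (Gaussian-like, as in unshaped
# OFDM or conventionally precoded downlinks).
rng = np.random.default_rng(7)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2.0)
print(f"PAPR without any PAPR-aware precoding: {papr_db(x):.1f} dB")
```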
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Tsung-Yi, and 林宗毅. "Voltage Stability Study for Power System with Large Scale Wind Farm." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/664n3q.

Full text
Abstract:
Master's thesis, National Taiwan Ocean University, Department of Electrical Engineering, 102 (ROC academic year)
The main purpose of this thesis is to investigate the voltage stability of a power system with a large-scale wind farm. Calculations of the power flow and bus voltage variation of the system, without and with the wind farm, are conducted, and, according to the related interconnection criteria, the fault currents and cable loadings are investigated to determine whether the system exceeds the relevant specifications. Both the real power-voltage curve (P-V curve) and the reactive power-voltage curve (Q-V curve) methods are used for analyzing the voltage stability limit of the system under normal operation as well as under contingency cases. In this thesis, the Changpin area power system in Taiwan, with existing onshore and future offshore wind farms, is taken as the study system to explore the impacts of wind farms on system voltage stability. The doubly-fed induction generator and the full-converter wind turbine, set to be operated under the constant power factor control mode and the constant voltage control mode, are utilized as the generating units for the wind farm, and the impacts of the two types of generating units on the system are investigated based on the calculations of power flow, bus voltage variation and steady-state voltage stability. The results from the P-V and Q-V curves show that the system voltage stability is influenced by the wind farm with different types of units under the various control modes.
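The P-V curve method can be illustrated on a textbook lossless two-bus system, where the receiving-end voltage follows from V^4 + (2QX - E^2)V^2 + X^2(P^2 + Q^2) = 0; the sketch below uses assumed per-unit values and a fixed power factor, not the Changpin system data. With these assumed values, the voltage sags as the load grows and no solution exists beyond the nose of the curve:

```python
import numpy as np

def receiving_end_voltage(p, q, e=1.0, x=0.1):
    """Upper-branch voltage of a lossless two-bus system (per unit), from
    V^4 + (2*q*x - e^2)*V^2 + x^2*(p^2 + q^2) = 0.  Textbook illustration of
    the P-V curve method; e, x and the power-factor assumption are not the
    Changpin system data."""
    a = e**2 - 2.0 * q * x
    disc = a**2 - 4.0 * x**2 * (p**2 + q**2)
    if disc < 0.0:
        return float("nan")          # past the nose point: no solution
    return np.sqrt(0.5 * (a + np.sqrt(disc)))

for p in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    v = receiving_end_voltage(p, q=0.2 * p)
    print(f"P = {p:.1f} pu -> V = {v:.3f} pu")
```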
APA, Harvard, Vancouver, ISO, and other styles
49

Chu, I.-Jhom, and 朱奕璋. "Study on the mechanism of slope failure using large-scale tests." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/42357810786512575444.

Full text
Abstract:
Master's thesis, National Chi Nan University, Graduate Institute of Earthquake and Disaster Prevention Engineering, 94 (ROC academic year)
Among all types of natural disasters, debris flow is one of the most threatening events for humans and the environment. Upstream slope failures induced by heavy rainfall usually constitute the source of debris flow. As a first step towards the mitigation of debris flow disasters, the mechanism of slope failure is investigated herein. An experimental program for investigating the mechanism of slope failure was conducted using the outdoor large-scale debris flow test channel and the artificial raining system on the campus of National Chi Nan University. A sand classified as SP-SM was used to build two slopes: the first is a 4.6 m-long, 1.5 m-wide and 0.75 m-deep approximately trapezoidal dam with a slope angle of 30°; the second is a 2.9 m-long, 1.5 m-wide and 0.6 m-deep infinite slope with a slope angle of 30°. Artificial raining tests were performed on these two slopes. Biaxial load cells and pore-pressure transducers were used in the tests, and cameras were used to observe the deformation of the slope surface and the settlement of the test slopes. Results of the tests show that the toe of the slope tends to become saturated earlier than other parts of the slope and that its rate of pore-pressure increase is also higher than that observed at other portions of the slope. The soil strength decreases because of the rising pore pressure at the toe of the slope induced by seepage in the soil mass, and this may be the main cause of slope failure. Results of the normal stress measurements show that the normal stress near the slope toe increases rapidly immediately before the ultimate failure of the slope, indicating that stress concentration at the toe of the slope may be the sign of the beginning of slope failure.
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Hong-Yao, and 陳泓堯. "Study on Operation Strategy for Large Scale Customers Considering Transformer Losses." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/72579688721128633681.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electrical Engineering, 96 (ROC academic year)
The starting point of this research is the rationalization of transformer capacity in the distribution systems of large-scale customers. By reviewing the load rates of the customers' voltage transformers and further examining transformer losses, a relationship between load rate and transformer losses is established as the basis for rationalizing transformer loading within the distribution structure. Transformer groups that can be taken out of service are identified, which raises the load rate of the remaining transformers in the distribution system while reducing losses at the same time. The operating strategy proposed in this thesis can effectively reduce transformer losses under low load-rate conditions and can serve as a reference for planning transformer capacity in the future.
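The load-rate/loss trade-off behind this strategy can be illustrated with the usual two-component transformer loss model: a constant no-load (iron) loss plus a copper loss scaling with the square of the load rate. All figures in the sketch are illustrative, not data from the thesis; with these assumed figures, two of three transformers in service minimize the total loss at light load, which is the kind of effect the proposed strategy exploits:

```python
def total_loss_kw(n_units, load_kva, p_nl=1.2, p_cu=10.0, s_rated=1000.0):
    """Total loss of n identical transformers sharing a load equally:
    no-load (iron) loss plus copper loss scaling with the square of the load
    rate.  All figures are illustrative, not data from the thesis."""
    load_rate = load_kva / (n_units * s_rated)
    return n_units * (p_nl + p_cu * load_rate**2)

load = 800.0   # kVA, a light-load condition
for n in (1, 2, 3):
    print(f"{n} transformer(s) in service -> {total_loss_kw(n, load):.2f} kW loss")
```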
APA, Harvard, Vancouver, ISO, and other styles
