
Dissertations on the topic "1.1 scale prototype"



Consult the top 50 dissertations for research on the topic "1.1 scale prototype".

Next to every entry in the bibliography an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Laffont, Charlotte. „La conception du logement à l’expérience des sonorités – COLEXSON : Un prototype construit pour expérimenter à échelle 1 les ambiances sonores de demain depuis un logement ventilé naturellement“. Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALH014.

Abstract:
In architectural and urban theories and practices, and in building regulations and labels, sound is approached primarily as a source of noise and nuisance against which housing must be insulated. Despite this, repeated reports of poor sound quality in living spaces reveal the inadequacy of this approach and the need for complementary tools to address sound design. How can we introduce listening into the design of multi-family housing so that it meets the challenges of tomorrow's city and contributes to a good quality of life? Do intermediate spaces, located between the private and public spheres, play a major role in everyday perceptions? What can we learn from users' experience of sound in built architectural forms that might inform the future listening of the city? To answer these questions, we look at the sounds of three categories of intermediate spaces in historic and contemporary architectural forms: transitional spaces (hall, inner courtyard, etc.), outdoor spaces around dwellings (shared terrace, roof, balcony, etc.) and the building envelope (double skin, window). To imagine the sounds of the city and of housing in a post-carbon urbanity, we consider the sounds of sociability, of the natural landscape, of technology and of mobility. Looking ahead to a city facing many climatic hazards, we imagine a diversity of uses and cultures, mixed-use programming, soft mobility (hybrid and electric motors in particular), and the proximities and neighbourhoods brought by density. With heatwaves on the horizon, we need to start thinking now about ways of cooling our homes. How can we live in density with the sounds of the outside world? How can we cool a home with natural ventilation while at the same time modulating the listening experience? This thesis, carried out under an A.N.R.T.-C.I.F.R.E. contract within the B.E.T. LASA, is attached to the A.A.U.-CRESSON laboratory and the "Habitat of the Future" research chair. We are convinced of the importance of experimentation in answering these questions and integrating listening into the practice of architecture. That is why, throughout this work, we experiment with concrete tools that can be grasped by those involved in a project. An analysis of a project in Villeurbanne (69), the "macro-lot B", is carried out from the early design phases to anticipate its sound ambiances and the listening modulations it could bring. The design of a 1:1 scale prototype of ECHASON (ECHAfaudage SONore) aims to experiment with future sound ambiances from a naturally ventilated dwelling and from its intermediate spaces. As the prototype could not be built at its full size, two prototypes of natural ventilation devices incorporating sound filtering were ultimately built and tested with the project's actors. Several soundtracks anticipating the intermediate spaces of macro-lot B were tested with future residents of the project. The results led to the drafting of three sets of specifications for integrating the sound dimension into future architectural and urban planning competitions. The quality of the sound environment must be integrated into housing design in the same way as concerns about sunlight, air quality and summer comfort. This represents a social, economic and health issue. In this perspective, this work defends the idea of designing housing through listening, no longer in a merely defensive or corrective way, but in a creative and committed one.
2

Breuss, Fritz. „A Prototype Model of EU's 2007 Enlargement“. Europainstitut, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/918/1/document.pdf.

Abstract:
EU's 2007 enlargement by Bulgaria and Romania is evaluated by applying a simple macroeconomic integration model able to encompass as many of the theoretically predicted integration effects as possible. The direct integration effects for Bulgaria and Romania spill over to the EU15, including Austria, and to the 10 new member states of the 2004 EU enlargement. The pattern of the integration effects is qualitatively similar to that of EU's 2004 enlargement by 10 new member states. Bulgaria and Romania gain much more from EU accession than the incumbents, in a proportion of 20:1. In the medium run, up to 2020, Bulgaria and Romania can expect a sizable overall integration gain, amounting to an additional ½ percentage point of real GDP growth per annum. Among the incumbent EU member states, Austria will gain somewhat more (+0.05%) than the average of the EU15 (+0.02%) and of the 10 new EU member states (+0.01%) which joined the EU in 2004. (author's abstract)
Series: EI Working Papers / Europainstitut
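As a rough illustration of the abstract's headline figure (a back-of-the-envelope compounding of the abstract's own numbers, not a calculation taken from the paper), an extra half percentage point of annual real GDP growth sustained over the 13 years from accession in 2007 to 2020 gives a cumulative level effect of roughly

    (1.005)^{13} - 1 \approx 0.067,

i.e. real GDP about 6-7% above the no-accession baseline by 2020.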
3

Trendle, Mark William. „CHRONOS : a prototype executive information system“. Thesis, Queensland University of Technology, 1988. https://eprints.qut.edu.au/36844/1/36844_Trendle_1988.pdf.

Abstract:
Executives in business face great competitive pressure in the modern commercial world, and must look to improved information systems as a means of surviving and prospering. They suffer simultaneously from information starvation and information overload: their systems do not cater adequately for their information needs. CHRONOS, an Executive Information System, was conceived to meet the executive's need for concise, appropriate information for decision-making, planning and control in business. The system meets this need by implementing the concept that business variables, held as time series, are convenient and understandable objects for processing, graphing, and presenting numerically. The form of the information is a key factor in the effectiveness of an executive information system. An objective of CHRONOS was to achieve maximum functionality with minimum conceptual complexity for the user. This approach is intuitively appealing to senior executives, many of whom have neither the time nor the inclination to become highly proficient users of computer systems. CHRONOS was therefore designed to be flexible, extendible, and practical, to meet the needs of a user population with senior management backgrounds and diverse information needs, presentation requirements and computer expertise. It has been implemented as a temporal database system, and the experience provides some insights into the problems developers of such systems may encounter.
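The time-series-as-object idea at the heart of CHRONOS can be made concrete with a minimal sketch (the class and method names below are hypothetical illustrations, not taken from the thesis):

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class BusinessVariable:
        # A business variable held as a time series of (period label, value) pairs.
        name: str
        observations: List[Tuple[str, float]]

        def growth(self) -> List[float]:
            # Period-on-period growth rates: the kind of derived figure an
            # executive summary would graph or present numerically.
            values = [v for _, v in self.observations]
            return [(b - a) / a for a, b in zip(values, values[1:]) if a != 0]

    sales = BusinessVariable("sales", [("1987Q1", 100.0), ("1987Q2", 110.0), ("1987Q3", 99.0)])
    print(sales.growth())  # [0.1, -0.1] (up to floating-point rounding)

Treating each variable as one such object keeps the user-facing model simple while still supporting processing, graphing and numeric presentation.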
4

Pasini, Samuele <1979>. „Ubiquitous internet middleware: architecture design and prototype evaluation“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1760/1/pasini_samuele_tesi.pdf.

Abstract:
Technology advances in recent years have dramatically changed the way users exploit contents and services available on the Internet, by enforcing pervasive and mobile computing scenarios and enabling access to networked resources almost from everywhere, at any time, and independently of the device in use. In addition, users increasingly require the ability to customize their experience, by exploiting specific device capabilities and limitations, inherent features of the communication channel in use, and interaction paradigms that significantly differ from the traditional request/response one. The so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context-awareness and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new and value-added applications, which can also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum to tame the increasing complexity of composing and orchestrating distributed and heterogeneous functionalities, existing solutions generally lack a unified approach and only provide support for specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can make existing compositions soon become obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach to deal comprehensively with Ubiquitous Internet facets and assist in establishing innovative application scenarios. We claim that a truly viable ubiquity support infrastructure must neatly decouple the distributed resources to be integrated and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open, and dynamic resource composition model that makes it easy to describe and enforce complex scenario requirements and to react suitably to changes in the execution conditions.
5

Patti, Mauro <1989>. „MAORY: wavefront sensor prototype and instrument optical design“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8534/1/Mauro_Patti.pdf.

Abstract:
MAORY will be the multi-conjugate adaptive optics module for the ELT first light. Its main goal is to feed the high-resolution NIR imager and spectrograph MICADO. The present thesis addresses the MAORY system at the level of optical design and analysis. MAORY is a complex science project whose stakeholder is the scientific community. Its requirements are driven by the science cases, which request high resolution and astrometric accuracy. In an ideal world without atmospheric turbulence, MAORY optics must deliver diffraction-limited images with very low optical distortion. The tolerancing process is one of the most important steps in the instrument design, since it is intended to ensure that the requested MAORY performances are satisfied once the final assembled instrument is operative. The baseline is to perform wavefront sensing using six sodium Laser Guide Stars and three Natural Guide Stars, to overcome the intrinsic limitations of artificial sources and to mitigate the impact of the sodium layer structure and variability. The implementation of a laboratory prototype of the Laser Guide Star wavefront sensor at the beginning of the MAORY phase study has been indispensable to consolidate the choice of the baseline wavefront sensing technique. The first part of this thesis describes the results obtained with the Laser Guide Star wavefront sensor prototype under different working conditions. The second part describes the logic behind the tolerance analysis of the MAORY optical design, starting from the definition of quantitative figures of merit for the requirements and ending with an estimation of MAORY performances as perturbed by opto-mechanical tolerances. The sensitivity analysis of MAORY's opto-mechanical tolerances is also a crucial step in planning the alignment concept that concludes the topics addressed by this thesis.
6

Oates, Howard Stephen. „A prototype centrifugal separator for bulk materials“. Thesis, Queensland University of Technology, 1990. https://eprints.qut.edu.au/36459/1/36459_Oates_1990.pdf.

Abstract:
The screens of vibrating screen centrifuges, commonly used for de-watering coal, are subject to high abrasive wear rates and consequently large maintenance costs. Ceramic and composite materials were investigated and tested to find materials suitable for the design of a cost-effective screening surface with a greatly improved service life through reduced wear. Alumina, bauxite, silicon carbide, fusion-cast ZAC and basalt ceramics, and alumina- or bauxite-particle composites with epoxy and polyurethane matrices were tested for wear resistance using six different tests. A reduction in wear by more than an order of magnitude over present centrifuge screen materials was shown if alumina was used. A centrifuge screen and integral structural support frame were designed. Alumina ceramic was used as the screening surface material, and glass- and Kevlar-fibre-reinforced plastics were used as structural components in order to reduce weight. Extensive use was made of adhesively bonded joints. The mechanical integrity of the screen and frame was determined by analysis of all forces acting in service, calculation of the stresses in structural components, and finite element modelling of the frame, screen surface, and adhesive joints. The mechanical properties of the fibre-reinforced plastic components and adhesives used were measured to check for adequate strength. In addition, adhesive formulation tests were carried out to maximize adhesion to ceramic surfaces and to check the effects of moisture and temperature. A financial analysis of both the total cost of the project and the centrifuge screen unit costs demonstrated that substantial cost benefits are available to centrifuge operators. Adequate financial returns on the investment in research and development for manufacture of the centrifuge screen itself are also apparent.
7

Denaro, Chris. „Dialogues with the Prototype“. Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/92740/1/Dialogues%20with%20the%20Prototype-Denaro.pdf.

Abstract:
This exegesis traces a path through the production of an animated work, and discusses the developmental process of an individual production workflow. Through the application and development of a Model of Structured Reflection (Johns, 2002), the creative output of the project focussed on a series of non-narrative, process-driven animation pieces based on time-lapse consumer objects. The creative project fused form and process into a series of mediated collage constructions, promoting spontaneity and reflexivity within my animation workflow. The creative work occupies 75% of this Master's project, and the exegesis 25% (7,500 words).
8

Flanagan, Arlen. „Design, construction and evaluation of a multi layered solar distillation prototype /“. 2009. http://digitalcommons.calpoly.edu/braesp/1/.

9

Mikawa, Kohsuke. „MICRO-F3: As a prototype co-creative Futures Film Festival“. Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/94780/1/Kohsuke_Mikawa_Thesis.pdf.

Abstract:
This research designs, executes, and evaluates a prototype film event called the micro-f3 Futures Film Festival. This event is a beta version of an original f3 Futures Film Festival concept created by external partner TExT-TUBE FUTURES STUDIOS, which included futures-themed media content in various genres and formats. The research proposes that films can have a significant impact in identifying issues about the future. Through the micro-f3 event, this research therefore aims to reveal the potential of futures-oriented films and events for telling futures scenarios.
10

Wohlgenannt, Gerhard, Stefan Belk and Matthias Schett. „A Prototype for Automating Ontology Learning and Ontology Evolution“. SciTePress, 2013. http://epub.wu.ac.at/4106/1/keod2013.pdf.

Abstract:
Ontology learning supports ontology engineers in the complex task of creating an ontology. Updating ontologies at regular intervals greatly increases the need for expensive expert contribution. This naturally leads to endeavours to automate the process wherever applicable. This paper presents a model for automated ontology learning and a prototype which demonstrates the feasibility of the proposed approach in learning lightweight domain ontologies. The system learns ontologies from heterogeneous sources periodically and delegates all evaluation processes, e.g. the verification of new concept candidates, to a crowdsourcing framework which currently relies on Games with a Purpose. Furthermore, we sketch ontology evolution experiments to trace trends and patterns facilitated by the system. (authors' abstract)
11

Ouadid, Abdelkarim. „Prototype micro-électronique d'un décodeur itératif pour des codes doublement orthogonaux“. Master's thesis, École de technologie supérieure, 2004. http://espace.etsmtl.ca/715/1/OUADID_Abdelkarim.pdf.

Abstract:
This thesis deals with the FPGA microelectronic prototyping of an iterative doubly orthogonal decoder stemming from recent research. The new algorithm is simple and offers a number of advantages over the turbo codes currently favoured in channel coding: besides the complexity of their decoding algorithm, turbo codes suffer from a latency problem that makes them unsuitable for certain applications, such as telephony. The decoder used is an iterative soft-quantization decoder based on threshold decoding as introduced by Massey and improved by the approximation of the a posteriori probability (AAPP). This approach reconciles complexity, latency, error-correction performance and high operating throughput. The prototype aims to validate the simulation results, as well as the estimates of complexity and of the maximum frequency achievable on Virtex-II XC2V6000 FPGAs, for different decoder structures.
12

Stroian, Vasile. „Development of a prototype for multidimensional performance management in software engineering“. Master's thesis, École de technologie supérieure, 2011. http://espace.etsmtl.ca/870/1/STROIAN_Vasile.pdf.

Abstract:
A better understanding and improvement of performance are important, topical and difficult issues for organizations. Consequently, managers are always on the lookout for better solutions for managing performance within their organizations. An important consequence of not having a Performance Management Framework (PMF) in place is the inability to distinguish success from failure within an organization. Performance management frameworks are needed by organizations to plan, monitor and control their activities, and to make informed decisions. Using a performance management framework can give an organization a better view of how it actually operates and indicate whether it is on track to meet its objectives. Over the years, several frameworks have been developed to manage an organization's tangible and intangible assets. In the past, performance management was mainly oriented toward the economic viewpoint. Kaplan and Norton added three further viewpoints in their framework, the Balanced Scorecard (BSC), and this addition represents a major contribution to the field. Existing performance management frameworks do not satisfy the requirements of software engineering management, since different viewpoints must be taken into account simultaneously. Moreover, the underlying quantitative data are multidimensional, and two- and three-dimensional visualization techniques are not adequate. Thirdly, each organization has its own specific performance viewpoints. Finally, these viewpoints must be represented in a consolidated manner for sound overall management. The goal of this thesis is to develop a prototype for multidimensional performance management in software engineering. The thesis begins by defining the important terms and key concepts used in the research: software, performance, management, multidimensional models, development, engineering and prototype, and the various combinations of these terms. This is followed by a review of multidimensional performance models specific to software engineering and of generic multidimensional performance models available in management. A performance management framework for software engineering is proposed, divided into four phases: design, implementation, use of the framework, and performance improvement. A prototype is then proposed in support of this framework. The prototype notably includes visual analysis tools to manage, interpret and understand the results in a consolidated form while allowing access to the values of the individual performance dimensions. In addition, the software project data repository made available by the International Software Benchmarking Standards Group (ISBSG) is integrated into the prototype.
13

Dumas, Léonard. „Élaboration d'un prototype de veille marketing en hôtellerie“. Thesis, Université du Québec à Trois-Rivières, 2005. http://depot-e.uqtr.ca/1356/1/000126213.pdf.

14

Ortiz, Carlos. „Développement d'un prototype de système expert en électrothermie“. Thesis, Université du Québec à Trois-Rivières, 1995. http://depot-e.uqtr.ca/5145/1/000620612.pdf.

15

Ben Letaifa, Abdelkader. „Réalisation d'un prototype testable par la méthode de chaînes parallèles de courant“. Master's thesis, École de technologie supérieure, 2002. http://espace.etsmtl.ca/795/1/BEN_LETAIFA_Abdelkader.pdf.

Abstract:
With the exponential growth in the complexity of electronic systems, the testability of integrated circuits is more than ever of paramount importance in microelectronics. The downscaling driving this growth gives rise to new defect mechanisms and amplifies the effect of existing ones. Revising the test strategies in use and developing new ones is therefore mandatory. A new test method was developed at ÉTS to quickly detect circuits that do not meet speed specifications and to avoid packaging them unnecessarily, thereby reducing test costs. The objective of this thesis is first the design, fabrication and testing of a circuit integrating the logic required to implement the proposed test methodology, and then to evaluate the impact of integrating the method into the conventional design flow and to propose the necessary adjustments.
16

Shah, Saurabh Mahesh Kumar. „Multi-scale imaging of porous media and flow simulation at the pore scale“. Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/34323.

Abstract:
In the last decade, the fundamental understanding of pore-scale flow in porous media has been undergoing a revolution through the recent development of new pore-scale imaging techniques, the reconstruction of three-dimensional pore space images, and advances in the computational methods for solving complex fluid flow equations directly or indirectly on the reconstructed three-dimensional pore space images. Important applications include hydrocarbon recovery from, and CO2 storage in, reservoir rock formations. Of particular importance is the consideration of carbonate reservoirs, as our understanding of carbonates with respect to geometry and fluid flow processes is still very limited in comparison with sandstone reservoirs. This thesis consists of work mainly performed within the Qatar Carbonates and Carbon Storage Research Centre (QCCSRC) project, focusing on the development of three-dimensional imaging techniques for accurately characterizing and predicting flow and transport properties in both complex benchmark carbonate and sandstone rock samples. Firstly, the thesis presents advances in the application of Confocal Laser Scanning Microscopy (CLSM), including the improvement of existing sample preparation techniques and a step-by-step guide for imaging heterogeneous rock samples exhibiting sub-micron resolution pores. A novel method has been developed combining CLSM with sequential grinding and polishing to obtain deep 3D pore-scale images. This overcomes a traditional limitation of CLSM, where the depth information in a single slice is limited by attenuation of the laser light. Other features of this new method include a wide field of view at high resolution to arbitrary depth and fewer grinding steps than conventional serial sectioning using 2D microscopy; the image quality also does not degrade with sample size, as it does, e.g., in micro-computed tomography (micro-CT) imaging. Secondly, it addresses two fundamental issues, the Representative Element of Volume (REV) and scale dependency, with qualitative and quantitative solutions for rocks increasing in heterogeneity from beadpacks to sandpacks to sandstone to carbonate rocks. The REV is predicted using the mathematical concept of the convex hull, CH, and the Lorenz coefficient, LC, to investigate the relation between two macroscopic properties simultaneously, in this case porosity and absolute permeability. The effect of voxel resolution is then studied on the segmented macro-pore phase (macro-porosity) and intermediate phase (micro-porosity) and on the fluid flow properties of the connected macro-pore space using lattice-Boltzmann (LB) and pore network (PN) modelling methods. A numerical coarsening (up-scaling) algorithm has also been applied to reduce the computational power and time required to accurately predict the flow properties using the LB and PN methods. Finally, a quantitative methodology has been developed to predict petrophysical properties, including porosity and absolute permeability, for X-ray medical computed tomography (CT) carbonate core images of 120 metres total length using image-based analysis. The porosity is calculated using a simple segmentation based on intensity grey values, and the absolute permeability using the Kozeny-Carman equation. The calculated petrophysical properties were validated against the experimental plug data.
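For reference, the Kozeny-Carman relation mentioned above ties permeability to porosity; in one common textbook form for packed beds of roughly spherical grains (quoted for orientation, not necessarily the exact variant used in the thesis):

    k = \frac{\phi^3 d^2}{180\,(1 - \phi)^2}

where k is the absolute permeability, \phi the porosity and d a characteristic grain diameter.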
17

Acworth, Elaine Elizabeth. „Dan Kelly danced into the shadows : large-scale personas in small-scale stories“. Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/18342/1/Elaine_Acworth_Thesis.pdf.

Abstract:
Using an analysis of the creation of the character Dan Kelly in my play, risk, I argue that fairytale characters work as more than personage representations. They function on a big canvas for the audience; they carry large chains of association. Given this, I then propose that the human response is to infer additional meaning, meaning beyond the scope of plot and immediate character interaction - the audience infers symbolic meaning, ‘amplifying’ what is there into more. They enter a ‘generative empty space’ within the play where they infer or ‘unfold’ more meaning. In creating this ‘greater tale’, they are engaged beyond their personal ‘horizon of understanding’, and so, ‘take in’ the work through a heightened perceptual acuity. Therefore, I pursued the idea of making space for the operation of this process, of leveraging the creation of meaning around a character. My inquiry led me to believe that a powerful way to do this was through absence rather than presence and silence rather than sound; and this had a profound impact on my choice of form for Dan Kelly: he progressed, through a number of stages, from reportage to a digital representation.
18

Minarini, Francesco. „Anomaly detection prototype for log-based predictive maintenance at INFN-CNAF tier-1“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19304/.

Abstract:
Nowadays, the evolution of HEP cannot be separated from that of the computational resources needed to perform analyses. Each year, in fact, LHC produces dozens of petabytes of data (e.g. collision data, particle simulation, metadata, etc.) that need orchestrated computing resources for storage, computational power, and high-throughput networks to connect centres. As a consequence of the LHC upgrade, the luminosity of the experiment will increase by a factor of 10 over its originally designed value, entailing a non-negligible technical challenge at computing centres: a surge in the amount of data produced and processed by the experiment is in fact expected. With this in mind, the HEP Software Foundation took action and released a road-map document describing the actions needed to prepare the computational infrastructure to support the upgrade. As part of this collective effort, involving all computing centres of the Grid, INFN-CNAF has set up a preliminary study towards the development of an AI-driven maintenance paradigm. As a contribution to this preparatory study, this master's thesis presents an original software prototype developed to handle the task of identifying critical activity time windows of a specific service (StoRM). Moreover, the prototype explores the viability of content extraction via text processing techniques, applying such strategies to messages belonging to anomalous time windows.
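As a toy illustration of the window-flagging task described above (a generic volume-based heuristic over log timestamps; the function, window length and threshold are hypothetical and do not reproduce the thesis's actual method):

    from collections import Counter
    from datetime import datetime, timedelta

    def anomalous_windows(timestamps, window=timedelta(minutes=5), z_thresh=3.0):
        # Bucket log timestamps into fixed-length windows and flag windows
        # whose message volume deviates strongly from the mean volume.
        if not timestamps:
            return []
        start = min(timestamps)
        counts = Counter((t - start) // window for t in timestamps)
        values = list(counts.values())
        mean = sum(values) / len(values)
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
        return [start + k * window for k, v in sorted(counts.items())
                if (v - mean) / std > z_thresh]

    # Usage: pass the parsed timestamps of the service's log messages, e.g.
    # flagged = anomalous_windows([datetime(2019, 3, 1, 10, 0, 12), ...])

A content-extraction step (e.g. text processing of the messages inside flagged windows) would then operate only on the anomalous windows.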
19

Walsch, Alexander. „Architecture and prototype of a real time processor farm running at 1 MHz“. [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10605034.

20

Nguyen, Albert Thu. „The molecular mechanism of action of bevirimat : a prototype HIV-1 maturation inhibitor /“. Oklahoma City : [s.n.], 2009.

21

Gent, N. D. „Scale covariance and non-triviality“. Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37703.

22

Morellon, Émeric. „Développement d'actionneurs en alliage à mémoire de forme pour un prototype d'aile d'avion adaptative“. Master's thesis, École de technologie supérieure, 2010. http://espace.etsmtl.ca/280/1/MORELLON_%C3%89meric.pdf.

Abstract:
In the current context, reducing the fuel consumption of aircraft is both an ecological issue, since it goes hand in hand with a reduction in greenhouse gases, and an economic one, since the growing scarcity of fossil fuels can only drive up the price of kerosene in the long term. This can be achieved by reducing wing drag, which would lower the energy the engines must generate. This is the context of the CRIAQ 7.1 project, entitled Improvement of the laminar flow on an aeroelastic wing, funded by the Consortium de recherche et d'innovation en aérospatiale au Québec (CRIAQ), the Natural Sciences and Engineering Research Council of Canada (NSERC), Bombardier Aerospace and Thales Canada, and carried out by two Quebec universities, the École de technologie supérieure (ÉTS) and the École polytechnique de Montréal, together with the aerodynamics laboratory of the National Research Council Canada - Institute for Aerospace Research (NRC-IAR). The objective is to build an active system that delays the laminar-to-turbulent transition of the flow over the wing. To this end, an adaptive wing prototype was designed and built, consisting of three main elements besides the rigid structure: a flexible skin fitted with pressure sensors to detect the position of the transition, a controller, and smart actuators. The actuators are made of shape memory alloy (SMA) wires coupled with a bias element and modify the wing geometry according to the flight conditions, which depend on the flow speed (Mach number varying between 0.2 and 0.35) and the wing's angle of attack, varying between -1° and 2°. The École polytechnique team generated a series of optimized profiles adapted to each of the configurations studied, and a numerical model describing the behaviour of the flexible skin was developed by the Laboratoire sur les alliages à mémoire et systèmes intelligents (LAMSI) at ÉTS. The results of these studies show that the skin movements must be obtained from those of two actuation lines, which must be able to generate vertical displacements of 8 mm with respect to their reference position in order to obtain profiles as close as possible to the optimized ones. The numerical model also provided the forces and displacements required at the two actuation lines, which must be generated by the actuators and transmitted to the flexible skin through a transmission system. This thesis describes the sizing process that led to the choice of the actuator geometry, that is, the length and cross-section of the active elements needed to deliver the required forces and displacements. The first parameter to be set is the force of the bias element, which must keep the skin in its nominal position in the absence of actuation but in the presence of aerodynamic suction. To move the skin, the actuators must overcome the forces created by the bias element and by the deformation of the skin, but are assisted in this by the suction force associated with the aerodynamic loading.
Since the behaviour of shape memory alloys is difficult to model, the most commonly used sizing method is trial and error. In this study, laboratory characterization of small samples was used to determine the envelope of forces and displacements they can generate for a given initial strain, actuation temperature and number of cycles. A design method based on scaling these results was then applied to determine the geometry of the actuators installed in the wing. The outcome is a geometry consisting of six nickel-titanium alloy wires, previously subjected to thermomechanical processing (cold rolling and heat treatments), 1800 mm long with a unit cross-section of 0.7 mm², which satisfies the requirements of the application. When installed in the prototype, the wires are strained cold, which allows them to generate the strokes and forces needed to deform the flexible skin when heated. The adaptive wing prototype was validated experimentally in three series of wind-tunnel tests, in October 2008 and then in February and May 2009, during which all the parameters of the SMA actuators were measured (forces, displacements, etc.) and compared with the values predicted during the sizing phase. The experimental measurements validated the approach developed in this thesis.
23

Leonardi, Matteo. „Development of a squeezed light source prototype for Advanced Virgo“. Doctoral thesis, University of Trento, 2016. http://eprints-phd.biblio.unitn.it/1843/1/phd_thesis-LeonardiMatteo.pdf.

Abstract:
A century after the prediction of the existence of gravitational waves by A. Einstein, and after over fifty years of experimental effort, gravitational waves have been directly detected on Earth. This result is a major achievement and opens new prospects for the exploration of our universe. Gravitational waves carry different and complementary information about the source with respect to electromagnetic signals. In particular, the first detection demonstrated the existence of stellar-mass black holes and binary systems of black holes, and their coalescence. The detection was made by the LIGO instruments, which are twin kilometre-scale Michelson interferometers in the US. These detectors represent the second generation of gravitational wave interferometers and, for the first time, they achieved the outstanding strain sensitivity of 10^-23 Hz^-1/2 between 90 Hz and 400 Hz. In the coming months the LIGO network will be joined by another second-generation detector: Advanced Virgo, located near Pisa, Italy. The sensitivity of these advanced detectors is set by different noise sources. In particular, in the low-frequency range (below 100 Hz) major contributions come from thermal noises, gravity gradient noise and radiation pressure noise; the high-frequency band (above 100-200 Hz) is instead dominated by shot noise. Quantum noise (radiation pressure and shot noise) is expected to dominate the detector sensitivity across the whole frequency band at the final target laser input power. To decrease the shot noise while increasing the radiation pressure noise, or vice versa, Caves proposed in 1981 the idea of the squeezed-state technique. The LIGO collaboration demonstrated for the first time in 2011 that the injection of a squeezed vacuum state into the dark port of the interferometer can reduce the shot noise due to the quantum nature of light. This result was achieved with the German-British interferometer GEO600 and was replicated in 2013 with the LIGO interferometer at Livingston. After these results, the LIGO collaboration has pursued further research in the squeezed-state technique, which is considered mandatory for the third generation of ground-based interferometric detectors. In 2013, the Virgo collaboration started developing the squeezed-state technique. The subject of my thesis is the realization of a prototype frequency-independent squeezed vacuum state source to be injected into Advanced Virgo. This prototype is developed in collaboration with other Virgo groups.
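For orientation (a standard quantum-optics relation, not a result of the thesis): injecting a squeezed vacuum state with squeezing parameter r into the interferometer's dark port scales the shot-noise-limited power spectral density as

    S_{\text{shot}}^{\text{squeezed}} = e^{-2r}\, S_{\text{shot}}^{\text{vacuum}},

so that, for example, r \approx 0.35 corresponds to about 3 dB of observed squeezing.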
24

Jakobsen, Jakob Lieng. „Autonomous Drifting of a 1:5 Scale Model Car“. Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13844.

Abstract:
Current automotive safety systems restrict the vehicle to the linear region of operation, where the sideslip angle is small. Recent research in the field has discovered that drifting possesses unstable equilibria in which the vehicle is controllable even after the handling limits of the linear region have been exceeded. This thesis presents the design and simulation of a feedback linearization controller that, by using yaw rate as the input controlling the sideslip angle, is able to find the equilibrium point corresponding to the initial velocity and the desired yaw rate. Simulation results show that the controller is able to achieve a yaw rate within 5 degrees of the desired yaw rate. It is demonstrated that utilization of drifting techniques increases the maneuverability of the vehicle compared to normal cornering. Based on the successful handling of the coupling in actuator authority at high angles of sideslip, feedback linearization as a control design tool is recommended for further development of controllers in the LocalHawk project. The LocalBug simulator has been improved by the addition of a DC motor model that includes selection of front-, rear- and four-wheel drive. Measurement of the moment of inertia of LocalBug and the recording of true noise data, which is added to the simulator output, have increased the fidelity of the simulator. Validation of the simulator shows that the simulation results are largely in agreement with logged test data, except for the case of hard braking, where the simulation model is inclined to experience a spin.
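As background on the control approach named above (the generic statement of feedback linearization, not the thesis's specific vehicle model): for a control-affine system \dot{x} = f(x) + g(x)u with output y = h(x) of relative degree r, the feedback law

    u = \frac{v - L_f^{r} h(x)}{L_g L_f^{r-1} h(x)}

cancels the nonlinearities and turns the input-output map into a chain of r integrators, y^{(r)} = v, after which the virtual input v can be designed with standard linear techniques.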
25

Savoie, Jean-Philippe. „Prototype de validation de la couche physique d'un réseau optique sans filtre“. Master's thesis, École de technologie supérieure, 2010. http://espace.etsmtl.ca/1083/1/SAVOIE_Jean%2DPhilippe.pdf.

Abstract:
The advent of certain technological developments enables the design of new optical network architectures. Electronic dispersion compensation and the use of tunable transmitters and tunable receivers are decisive technological advances in the development of the filterless optical network concept. Indeed, filterless optical networks now appear to be a viable solution for meeting growing demand and the need for flexible, simplified architectures. Filterless optical networks rely on tunable transmitters and receivers and on a minimum number of active photonic switching elements for network configuration. They are also characterized by the use of optical power splitters as the interconnection element between optical fibre links and as the element for multiplexing and demultiplexing wavelengths. This thesis presents an analytical method for validating the physical-layer constraints of a filterless optical network solution. The analytical method is validated using the VPItransmissionMaker simulation software. Finally, a prototype tool for validating the physical layer of a filterless optical network is proposed, based on the analytical method presented. This prototype, developed in the MATLAB environment, centres on the management and prediction of the noise level, caused mainly by the optical amplifiers, in a filterless optical network. The prototype is intended as a tool to facilitate the study of filterless optical networks; the speed with which it evaluates a filterless optical link solution, together with its simplicity, makes it a major asset in the process of generating filterless optical network solutions.
26

Walczak, Katarzyna I. „Prototype decision support framework using geospatial technologies for analysing human health risk“. Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103630/1/Katarzyna%20Izabella_Walczak_Thesis.pdf.

Abstract:
This thesis concentrates on the development of a prototype Decision Support Framework based on the landscape epidemiology concept and using GIS to determine human health risk in Semarang (Indonesia). This site was selected as representative of a rapidly urbanizing area in a developing country. The decision support framework examines climatic, landscape and socio-economic factors identified as having significant impacts on water quality and the subsequent causation of waterborne and water-related diseases. The research outcomes may potentially be applied worldwide to identify and isolate the areas most vulnerable to the effects of these diseases, thus improving quality of life in developing countries.
27

Egodawatta, Prasanna Kumarasiri. „Translation of small-plot scale pollutant build-up and wash-off measurements to urban catchment scale“. Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16502/1/Prasanna_Egodawatta_Thesis.pdf.

Abstract:
Accurate and reliable estimations are the most important factors for the development of efficient stormwater pollutant mitigation strategies. Modelling is the primary tool used for such estimations. The general architecture of typical modelling approaches is to replicate pollutant processes along with hydrologic processes on catchment surfaces. However, due to the limited understanding of these pollutant processes and the underlying physical parameters, the estimations are subject to gross errors. Furthermore, the essential requirement of model calibration leads to significant data and resource requirements. This underlines the necessity for simplified and robust stormwater pollutant estimation procedures. The research described in this thesis primarily details the extensive knowledge developed on pollutant build-up and wash-off processes. Knowledge on both build-up and wash-off was generated by in-depth field investigations conducted on residential road and roof surfaces. Additionally, the research describes the use of a rainfall simulator as a tool in urban water quality research. The rainfall simulator was used to collect runoff samples from small-plot surfaces; its use reduced the number of variables which are common to pollutant wash-off. Pollutant build-up on road and roof surfaces was found to be rapid during the initial time period, with the rate reducing as the antecedent dry days increased, becoming asymptotic to a constant value. However, build-up on roofs was gradual compared to road surfaces, where the build-up over the first two days was 66% of the total build-up. Though the variations were different, it was possible to develop a common replication equation for build-up for the two surface types, in the form of a power function with a as a multiplication coefficient and b as a power coefficient. However, the values of the two build-up equation coefficients a and b were different in each case. It was understood that the power coefficient b varies only with the surface type, while the multiplication coefficient varies with a range of parameters including land use and traffic volume. Additionally, the build-up observed on road surfaces was highly dynamic. It was found that pollutant re-distribution occurs, with finer particles being removed from the surface thus allowing coarser particles to build up. This process results in changes to the particle size composition of the build-up. However, little evidence was noted of re-distribution of pollutants on roof surfaces. Furthermore, the particulate pollutants on both road and roof surfaces were high in adsorption capacity. More than 50% of the road and more than 60% of the roof surface particulates were finer than 100 μm, which increases the capacity to adsorb other pollutants such as heavy metals and hydrocarbons. In addition, the samples contained a significant amount of DOC, which would enhance the solubility of other pollutants. The wash-off investigations on road and roof surfaces showed a high concentration of solid pollutants during the initial part of events. This confirmed the occurrence of the 'first flush' phenomenon. The observed wash-off patterns for road and roof surfaces could be mathematically replicated using an exponential equation. The exponential equation proposed is a modified version of an equation proposed in past research, the modification being primarily an additional parameter referred to as the 'capacity factor' (CF).
CF defines the rainfall's ability to mobilise solid pollutants from a given surface. It was noted that CF varies with rainfall intensity, particle size distribution and surface characteristics. In addition to the mathematical replication of wash-off, the analysis further focused on understanding the physical processes governing wash-off. For this, both the particle size distribution and the physicochemical parameters of wash-off pollutants were analysed. It was noted that there is little variation in the particle size distribution of particulates in wash-off with rainfall intensity and duration. This suggests that particle size is not an influential parameter in wash-off; it is hypothesised that particulate density and adhesion to road surfaces are the primary criteria that govern wash-off. Additionally, a significantly high pollutant contribution from roof surfaces was noted. This confirms the significance of roof surfaces as an urban pollutant source, particularly in the case of first flush. This dissertation further describes a procedure to translate the knowledge created on pollutant build-up and wash-off processes using small plots to the urban catchment scale. This leads to a simple and robust urban water quality estimation tool. Owing to its basic architecture, the estimation tool is referred to as a 'translation procedure'. It is designed to operate without a calibration process, which would require a large amount of data. This is done by using the pollutant nature of the catchment, in terms of build-up and wash-off processes, as the basis of measurements. The translation procedure is therefore an extension of current estimation techniques, which are typically complex and resource-consuming. The use of the translation procedure is simple, based on graphical estimation of parameters and calculations in tabular form. The translation procedure developed is particularly accurate in estimating water quality in the initial part of runoff events.
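Written out, the two empirical forms described in the abstract read as follows (the build-up power function is stated explicitly above; the wash-off expression is a plausible reading of the described modification, with the exact formulation in the thesis itself):

    B(t) = a\, t^{\,b}, \qquad \frac{W(t)}{W_0} = C_F \left( 1 - e^{-k I t} \right),

where B is the build-up load after t antecedent dry days, W/W_0 the washed-off fraction of the initially available load, I the rainfall intensity, k a wash-off coefficient and C_F the capacity factor.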
28

Gokben, Ilhan. „Prototype fabrication and measurements of uplink and downlink microstrip patch antennas for NPSAT-1“. Thesis, Monterey, California. Naval Postgraduate School, 2003. http://hdl.handle.net/10945/1109.

Abstract:
Approved for public release; distribution is unlimited
This thesis addresses the prototyping, measurement, and validation of two circularly polarized microstrip patch antennas designed by LTJG Mahmut Erel for the NPSAT-1. The antenna system (receive and transmit), consisting of two antennas on a ground plane and their feed systems, was field-tested. The results were compared to the CST Microwave Studio Finite Difference Time Domain (FDTD) software package predictions in order to verify that this design satisfies the NPSAT-1 requirements for bandwidth, free-space radiation pattern and low-profile shape.
Lieutenant Junior Grade, Turkish Navy
29

Gökben, İlhan. „Prototype fabrication and measurements of uplink and downlink microstrip patch antennas for NPSAT-1 /“. Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FGokben.pdf.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): Jovan Lebaric, Richard W. Adler. Includes bibliographical references (p. 49). Also available online.
30

Pereira Nunes, Joao Paulo. „Pore-scale modelling of carbonate dissolution“. Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/34683.

Abstract:
High-resolution micro-CT images of porous rocks provide a very useful starting point for the development of pore-scale models of fluid flow and transport. Following a literature review covering recent results on the applicability of tomographic imaging to study reaction phenomena at the pore and core scales, this thesis presents a pore-scale streamline-based reactive transport model to simulate rock dissolution. The focus is on carbonate dissolution in CO2-saturated fluids. After injecting CO2-rich fluids into carbonate reservoirs, chemical reactions between the acidic fluid and the host rock are to be expected. Such reactions may cause significant variations in the flow and transport properties of the reservoir, with possible consequences for field development and monitoring. The interplay between flow and reaction exhibits a very rich behaviour that has not yet been fully understood, especially in the case of carbonate rocks, which possess a complex pore structure. The model is developed within a Lagrangian framework, where the advective displacement employs a novel streamline tracing method which respects the no-flow boundary condition at the pore walls. The method is implemented in the pore-space geometry reconstructed from micro-CT images of sedimentary rocks. Diffusion is incorporated with a random walk and fluid-solid reactions are defined in terms of the diffusive flux of reactants through the grain surfaces. To validate the model, simulation results are compared against a dynamic imaging experiment where a carbonate sample was flooded with CO2-saturated brine at reservoir conditions. The agreement is very good and a decrease of one order of magnitude in the average dissolution rate, compared to the rate measured in an ideal reactor, is explained in terms of transport limitations arising from the flow field heterogeneity. The impact of flow heterogeneity on the reactive transport is illustrated in a series of simulations performed in rocks with different degrees of complexity. It is shown that more heterogeneous rocks, in the sense of flow heterogeneity, may exhibit a decrease of up to two orders of magnitude in the sample-averaged reaction rates, and that the flow rate is also an important factor when studying carbonate dissolution.
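The interplay of diffusive transport and surface reaction described above can be caricatured with a minimal random-walk sketch; this is a generic illustration, not the streamline-based model of the thesis, and every parameter is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def steps_until_reaction(width=1.0, d_step=0.02, p_react=0.05, max_steps=100_000):
    """Random-walk a tracer between two reactive walls (y = 0 and y = width).
    A wall hit reacts with probability p_react, otherwise the particle is
    reflected; slower transport to the walls lowers the effective rate."""
    y = width / 2.0
    for step in range(max_steps):
        y += rng.normal(0.0, d_step)
        if y < 0.0 or y > width:
            if rng.random() < p_react:
                return step
            y = -y if y < 0.0 else 2.0 * width - y  # specular reflection
    return max_steps

lifetimes = [steps_until_reaction() for _ in range(200)]
print("mean steps to surface reaction:", int(np.mean(lifetimes)))
```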
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Shire, Thomas. „Micro-scale modelling of granular filters“. Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/12967.

Der volle Inhalt der Quelle
Annotation:
Granular filters are considered to be among the most safety-critical elements of embankment dams. The behaviour of such filters is poorly understood, as reflected in the empirically derived rules used for filter design, which have been shown to be conservative and to contradict one another in some cases. In this thesis particle-scale numerical analysis is used to improve the understanding of internal stability, a requirement for granular filters, and to assess the fundamental basis for some commonly used empirical design rules. Internal stability describes the ability of the coarse fraction of a broadly or gap-graded cohesionless soil to prevent the erosion of the finer fraction under seepage. Two conditions for internal instability are that: (i) the fine particles carry relatively lower stress than the coarse particles (hydromechanical condition) and (ii) the fine particles should be small enough to pass through the void constrictions between the coarse particles (geometric condition). Each of these conditions is assessed in turn. The hydromechanical condition is assessed by using discrete element modelling (DEM) to analyse the fabric and effective stress distribution within soils of varying internal stability according to empirical criteria. In particular the hypothesis of Skempton and Brogan (1994) that a prerequisite for internal instability is a reduction of the effective stress in the finer fraction is explored. The results show that the stress transferred by the fines is related to the soil fabric, in particular the number and strength of contacts between particles. This is in turn shown to be influenced by the particle size distribution (PSD), fines content and relative density of the material. A conceptual framework to describe this behaviour is introduced. The geometric condition is intimately linked to the size of the constrictions within the void space. The constriction size distribution (CSD) within DEM samples with differing PSDs and relative densities is quantified using two approaches: the Weighted Delaunay method (Reboul et al., 2010) and the Maximal Ball method (Dong and Blunt, 2009). CSD curves are shown to have similar shapes which can be usefully normalised using characteristic filter particle diameters. The results show very good qualitative agreement with the experimental work of Kenney et al. (1985), and lend scientific support to the use of characteristic particle diameters in filter design. An analysis of constrictions within DEM samples with stress-induced anisotropy shows that larger and smaller constrictions align with the major and minor principal stresses respectively. A random walk network model describing void space is proposed to simulate the movement of a polydisperse fine material through a granular filter. This model is useful for identifying effective and ineffective base/filter combinations.
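A toy version of the geometric condition discussed above can make it concrete: a fine particle passes the filter only if every layer it crosses offers at least one constriction wider than the particle. The lognormal constriction size distribution and all numbers below are synthetic, not the Weighted Delaunay or Maximal Ball results of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def passes_filter(d_particle, sample_csd, n_layers=20, tries_per_layer=3):
    """Geometric condition: the particle advances through a layer if any of a
    few locally sampled constrictions exceeds its diameter; it is trapped at
    the first layer where none does."""
    for _ in range(n_layers):
        if not (sample_csd(tries_per_layer) > d_particle).any():
            return False
    return True

# Synthetic CSD: lognormal constriction diameters in mm (illustrative only).
sample_csd = lambda n: rng.lognormal(mean=np.log(0.15), sigma=0.4, size=n)

for d in (0.05, 0.10, 0.15, 0.20):
    p = np.mean([passes_filter(d, sample_csd) for _ in range(500)])
    print(f"d = {d:.2f} mm -> P(pass) = {p:.2f}")
```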
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Wang, Lingyu. „Large-scale structures from infrared surveys“. Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/4410.

Der volle Inhalt der Quelle
Annotation:
To use the AKARI All-Sky Survey Point Source Catalogue as a validation sample for future missions such as Planck and to study large-scale structure, we first investigate the AKARI point source detection limit at 90 μm and the nature of bright spurious sources. Due to the degradation of the sensitivity of the AKARI All-Sky Survey and formidable difficulties in filtering out excessive noise, we return to the IRAS Faint Source Catalog to construct a redshift catalogue of over 60,000 galaxies selected at 60 μm, the Imperial IRAS-FSC Redshift Catalogue (IIFSCz). Around 50% of the sources in the IIFSCz have spectroscopic redshifts and a further 20% have photometric redshifts. The luminosity and selection functions are obtained for the IIFSCz flux-limited at 0.36 Jy at 60 μm. The dependence of galaxy clustering on spectral type and luminosity is studied using correlation statistics. A possible detection of the baryon acoustic oscillations in the power spectrum of the flux-limited sample of the IIFSCz is discussed. Finally, we present future research directions which include the FIR-radio correlation, ultraluminous and hyperluminous infrared galaxies, galaxy bias in the SWIRE Photometric Redshift Catalogue and convergence of the cosmological dipole.
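The correlation statistics mentioned above are built from pair counts; below is a minimal sketch of the standard Landy-Szalay estimator, xi(r) = (DD - 2DR + RR) / RR, run on toy uniform catalogues. A real measurement would weight by the survey's selection function; all sizes here are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist

rng = np.random.default_rng(2)

def pair_counts(a, b, edges):
    """Normalised pair counts in separation bins (auto-pairs if b is None)."""
    d = pdist(a) if b is None else cdist(a, b).ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

data = rng.random((400, 3))    # toy 'galaxies' in a unit box
rand = rng.random((2000, 3))   # unclustered random catalogue
edges = np.linspace(0.02, 0.3, 10)

nd, nr = len(data), len(rand)
dd = pair_counts(data, None, edges) / (nd * (nd - 1) / 2)
rr = pair_counts(rand, None, edges) / (nr * (nr - 1) / 2)
dr = pair_counts(data, rand, edges) / (nd * nr)
xi = (dd - 2 * dr + rr) / rr   # ~0 for an unclustered toy sample
print(np.round(xi, 3))
```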
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Candy, Adam S. „Subgrid scale modelling of transport processes“. Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/5496.

Der volle Inhalt der Quelle
Annotation:
Consideration of stabilisation techniques is essential in the development of physical models if they are to faithfully represent processes over a wide range of scales. Careful application of these techniques can significantly increase the flexibility of models, allowing the computational meshes used to discretise the underlying partial differential equations to become highly nonuniform and anisotropic, for example. This flexibility enables a model to capture a wider range of phenomena and thus reduce the number of parameterisations required, bringing a physically more realistic solution. The next generation of fluid flow and radiation transport models employ unstructured meshes and anisotropic adaptive methods to gain a greater degree of flexibility. However, these can introduce erroneous artefacts into the solution when, for example, a process becomes unresolvable due to an adaptive mesh change or advection into a coarser region of mesh in the domain. The suppression of these effects, caused by spatial and temporal variations in mesh size, is one of the key roles stabilisation can play. This thesis introduces new explicit and implicit stabilisation methods that have been developed for application in fluid and radiation transport modelling. With a focus on a consistent residual-free approach, two new frameworks for the development of implicit methods are presented. The first generates a family of higher-order Petrov-Galerkin methods, and the example developed is compared to standard schemes such as streamline upwind Petrov-Galerkin and Galerkin least squares in accurate modelling of tracer transport. The dissipation generated by this method forms the basis for a new explicit fourth-order subfilter scale eddy viscosity model for large eddy simulation. Dissipation focused more sharply on unresolved scales is shown to give improved results over standard turbulence models. The second, the inner element method, is derived from subgrid scale modelling concepts and, like the variational multiscale method and bubble enrichment techniques, explicitly aims to capture the important under-resolved fine scale information. It brings key advantages to the solution of the Navier-Stokes equations, including the use of usually unstable velocity-pressure element pairs, a fully consistent mass matrix without the increase in degrees of freedom associated with discontinuous Galerkin methods, and the avoidance of pressure filtering, all of which act to increase the flexibility and accuracy of a model. Supporting results are presented from an application of the methods to a wide range of problems, from simple one-dimensional examples to tracer and momentum transport in simulations such as the idealised Stommel gyre, the lid-driven cavity, lock-exchange, gravity current and backward-facing step. Significant accuracy improvements are demonstrated in challenging radiation transport benchmarks, such as advection across void regions, the scattering Maynard problem and demanding source-absorption cases. Evolution of a free surface is also investigated in the sloshing tank, transport of an equatorial Rossby soliton, wave propagation on an aquaplanet and tidal simulation of the Mediterranean Sea and global ocean. In combination with adaptive methods, stabilising techniques are key to the development of next generation models. In particular, these ideas are critical in achieving the aim of extending models, such as the Imperial College Ocean Model, to the global scale.
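As a concrete instance of residual-based stabilisation of the kind discussed above, here is a self-contained 1D streamline upwind Petrov-Galerkin (SUPG) sketch for steady advection-diffusion with linear elements; this is the textbook baseline scheme, not the higher-order method developed in the thesis, and all parameters are illustrative:

```python
import numpy as np

def advection_diffusion_1d(n=20, a=1.0, eps=1e-3, f=1.0, supg=True):
    """Linear finite elements for a*u' - eps*u'' = f on (0,1), u(0)=u(1)=0.
    SUPG adds tau*a^2*u'v' per element (N'' = 0 for linears) plus the
    consistent load term, with tau = h/(2a) * (coth(Pe) - 1/Pe)."""
    h = 1.0 / n
    pe = a * h / (2.0 * eps)
    tau = (h / (2.0 * a)) * (1.0 / np.tanh(pe) - 1.0 / pe) if supg else 0.0
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    kd = (eps + tau * a * a) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    ka = a / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    fe = f * np.array([h / 2.0 - tau * a, h / 2.0 + tau * a])
    for e in range(n):               # assemble element contributions
        K[e:e + 2, e:e + 2] += kd + ka
        F[e:e + 2] += fe
    K[0, :], K[-1, :] = 0.0, 0.0     # strong Dirichlet conditions
    K[0, 0] = K[-1, -1] = 1.0
    F[0] = F[-1] = 0.0
    return np.linalg.solve(K, F)

u = advection_diffusion_1d()
print("max u with SUPG:", round(u.max(), 3))  # boundary layer, no wiggles
```

With supg=False and these parameters the plain Galerkin solution oscillates through the outflow boundary layer; the stabilised solution does not.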
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Winkler, Roland [Verfasser], und Rudolf [Akademischer Betreuer] Kruse. „Prototype based clustering in high-dimensional feature spaces / Roland Winkler. Betreuer: Rudolf Kruse“. Magdeburg : Universitätsbibliothek, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:ma9:1-7159.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Maturi, Laura. „Building skin as energy supply: Prototype development of a wooden prefabricated BiPV wall“. Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/954/1/PhD_Thesis_LM_final_version.pdf.

Der volle Inhalt der Quelle
Annotation:
Within the perspective of “nearly zero-energy buildings” as foreseen in the EPBD 2010/31/EU, a prototype of a wooden prefabricated BiPV wall is here conceived, designed, built and tested. The prototype key concepts, identified according to the recommendations of the IEA Task 41 research project, are: multi-functionality, prefabrication, sustainability and integration. The prototype design is the result of a theoretical study which takes into account both architectural integration aspects and energy performance issues. The latter, in particular, is based on the evaluation and improvement of both PV and building-related aspects, through the investigation and implementation of low-cost passive strategies to improve the overall BiPV performance. A modular specimen of the prototype was built thanks to an industrial collaboration and tested through an experimental approach, based on the combination of several phases performed in two test facilities (i.e. INTENT lab and SoLaRE-PV lab) by means of an original experimental set-up. The effectiveness of the proposed BiPV prototype configuration is proven by comparing the results of the experiments with monitored data of two BiPV systems (a roof and a façade system) located in South Tyrol (North of Italy). The experimental results are then generalized, providing significant data and experimental expressions for a deeper understanding of BiPV systems' energy performance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Womack, Trevor W. Sr. „Economies of scale: 9-1-1 center consolidation as a means to strengthen the homeland security enterprise“. Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/41458.

Der volle Inhalt der Quelle
Annotation:
CHDS State/Local
Since the Great Recession, local governments have been under pressure to cut programs, personnel, and services due to decreased tax revenues and a weak economic recovery. As a result, government agencies, aiming to do more with less, have consolidated services, including those of local 9-1-1 dispatch centers. This thesis explores whether 9-1-1 center consolidation has been successful thus far. Through a multiple case-study approach, the effects of consolidation upon cost efficiency, service levels, and organizational structure are examined. Primary data were gathered from semi-structured interviews with executives of three consolidated 9-1-1 centers. Secondary data were obtained from related budget documents, published reports, emergency call and response statistics, staffing rosters, organizational charts, and intergovernmental agreements. This mixture of qualitative and quantitative data was analyzed to identify individual first-order concepts, generalized into patterns, and synthesized into overarching dimensions. The key findings suggest that the consolidation of 9-1-1 centers can result in increased cost efficiency through economies of scale; regionally, 9-1-1 center consolidation may standardize and raise the quality of service provided across disciplines and jurisdictions; and in the near term, organizational behavior issues present challenges for the newly consolidated 9-1-1 center.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Morales-Perez, Jose Luis. „Computational methods for large-scale quadratic programming“. Thesis, Imperial College London, 1993. http://hdl.handle.net/10044/1/7511.

Der volle Inhalt der Quelle
Annotation:
For theoretical and practical reasons, quadratic programming problems have attracted the interest of the mathematical programming community. They naturally arise from applications and as subproblems in other numerical techniques. However, most existing techniques, designed for solving small and dense problems, tend to be prohibitively expensive when applied directly to solve large-scale problems. In this work we explore methods suitable for solving large-scale sparse convex quadratic programming problems. An interior-point primal-dual algorithmic framework and its computational implementation are presented in the first part of this work. Primal and dual updates are computed at each step by iteratively solving the linear systems posed by the classical method of barriers using a preconditioned Krylov-subspace method. Several variants are suggested by a Taylor approximation of the central path. A truncated Newton strategy has been implemented in order to achieve a significant reduction in the CPU time. In the second part, sparse implementations of Lemke's algorithm and of a row-action algorithm based on diagonal approximations of the Hessian are suggested. The implementation of Lemke's algorithm is based on updating the sparse LU factorization of a matrix representing the basis at the current step. The implementation of the row-action algorithm relies on the efficient solution of single-constrained diagonal subproblems. In order to compare the relative merits of our implementations, numerical experimentation is conducted on two sets of problems that use randomly generated Hessian matrices and constraints taken from a subset of the netlib problems. Several aspects are studied: the use of iterative linear algebra for solving the linear systems of equations posed by the interior-point variants, the impact on the computational resources (memory and CPU) when different approaches are used to solve large-scale problems, and finally, the effectiveness of a second-order correction and the truncated Newton strategy implemented in the interior-point methods.
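A minimal sketch of the barrier idea with inexact Newton steps solved by conjugate gradients, for the simplest constraint set x >= 0; this is a generic log-barrier illustration on synthetic data, not the primal-dual implementation of the thesis:

```python
import numpy as np
from scipy.sparse import diags, eye
from scipy.sparse.linalg import cg

def barrier_qp(Q, c, x0, mu=1.0, shrink=0.2, mu_min=1e-8):
    """Log-barrier method for min 0.5*x'Qx + c'x subject to x >= 0.
    Each Newton system (Q + mu*X^-2) dx = -(Qx + c - mu/x) is solved
    inexactly with CG, a truncated-Newton flavour."""
    x = x0.copy()
    while mu > mu_min:
        for _ in range(50):
            g = Q @ x + c - mu / x
            if np.linalg.norm(g) < 1e-10:
                break
            dx, _ = cg(Q + diags(mu / x**2), -g)
            t = 1.0
            while (x + t * dx <= 0).any():   # stay strictly feasible
                t *= 0.5
            x = x + t * dx
        mu *= shrink                          # follow the central path
    return x

n = 50
Q = (2.0 * eye(n)).tocsc()                    # synthetic sparse convex QP
c = -np.ones(n)
x = barrier_qp(Q, c, x0=np.ones(n))
print("all components near 0.5:", bool(np.allclose(x, 0.5, atol=1e-4)))
```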
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Gu, Jiaping. „Nonlinear dynamic analysis of large scale structures“. Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/63829.

Der volle Inhalt der Quelle
Annotation:
The nonlinear dynamic analysis to obtain the response of whole building structures or structural components under blast loading can be computationally prohibitive. Two approaches have been considered in this study to improve the efficiency of such analyses: i) to employ an appropriate time integration scheme and ii) to employ accurate simplified models of structural components. A new implicit-explicit time integration scheme has been developed and implemented with a novel automatic element-based mesh partitioning approach. The scheme allows simultaneous execution of implicit integration and explicit integration in different parts of a system to maximise computational efficiency. The developed scheme has also been incorporated into the novel domain decomposition approach developed previously at Imperial College London, and has been successfully combined with the mixed-dimensional coupling technique included in that approach. Simplified models of structural components have been improved for a better representation of responses under blast loading. Mechanical models of fin plate connections have been modified by including material nonlinearity and material strain-rate effects in the coupled axial and shear response of bolt rows. The flat shell elements have been verified in their ability to capture the influence of transverse damage in floor slabs due to uplift on the in-plane diaphragm stiffness and strength. These simplified models have been incorporated in the global model of a reference building, which has been analysed and assessed under characteristic blast loading. Typical masonry cavity cladding has been investigated as a case study. The failure mode and the interaction between the cladding and the structural frame have been successfully obtained from mesoscale models employing the mixed-dimensional domain decomposition approach and the implicit-explicit time integration scheme. An SDOF model based on the results of the detailed model has been constructed and incorporated in the global model of the reference building.
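The implicit-explicit idea can be shown in its simplest form on one ODE whose stiff linear part is integrated implicitly while the remaining term is advanced explicitly. This scalar sketch is only an analogue; the thesis partitions implicit and explicit regions element-by-element within a mesh:

```python
import numpy as np

def imex_euler(y0, lam_stiff, f_nonstiff, dt, n_steps):
    """First-order IMEX Euler for y' = lam*y + f(y):
    y_{n+1} = (y_n + dt*f(y_n)) / (1 - dt*lam),
    i.e. implicit in the stiff linear term, explicit in f."""
    y = np.empty(n_steps + 1)
    y[0] = y0
    for n in range(n_steps):
        y[n + 1] = (y[n] + dt * f_nonstiff(y[n])) / (1.0 - dt * lam_stiff)
    return y

# Stable with dt = 0.05, far above the explicit limit 2/|lam| = 0.002.
y = imex_euler(y0=1.0, lam_stiff=-1000.0, f_nonstiff=np.cos, dt=0.05,
               n_steps=200)
print("bounded:", bool(np.isfinite(y).all()), "final:", round(y[-1], 4))
```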
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Phunpeng, Veena. „Gradient theories for scale dependent material simulations“. Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/29336.

Der volle Inhalt der Quelle
Annotation:
Since composite materials have been developed, many types of materials (e.g. carbon fibre, carbon nanotubes (CNTs)) can be embedded in a standard matrix in order to obtain materials with enhanced physical properties. To investigate the enhanced properties of nanocomposites, not only mechanical properties but also electrical properties should be taken into account. Furthermore, at the nano-scale, covalent forces between atoms play a crucial role in their behaviour. This thesis is focused on electromechanical effects (i.e. piezoelectricity and flexoelectricity) and the size effect in micro/nano materials. The aim is to implement continuum modelling solutions for nonlocal/gradient elastic problems in which size effect plays a significant role in material behaviour. The FEniCS Project is used to provide a novel tool for automated solutions of partial differential equations (PDE) by the finite element method. In particular, it offers significant flexibility with regard to discretization choices for triangular elements. When implementing a nonlocal/strain gradient elastic framework using FEniCS, a weak form of the gradient elasticity derived from the Principle of Virtual Work (PVW) is required. Due to the fourth-order PDE in terms of displacements in the gradient elasticity, C1-continuous elements (e.g. Hermitian finite elements) are usually required. However, to avoid the use of C1-continuous elements, an equivalent mixed-type finite element formulation is considered. To investigate the material behaviour, strain gradient finite element formulations based on a mixed variational approach are used. Numerical results are compared with analytical solutions or experimental data to confirm the convergence and accuracy of the simulations. To extend the capability of the implementation to allow the modelling of nanocomposites efficiently, the Extended Finite Element Method (XFEM) is introduced. By increasing mesh density only around the discontinuities, the resulting program runs faster than if a finer mesh had been used everywhere, with the additional benefit that more accurate results are obtained.
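To make the avoidance of C1 elements concrete, the following is one common penalty-based mixed weak form, written for a scalar strain gradient model problem: an auxiliary field θ stands in for ∇u and is tied to it weakly, so only first derivatives appear and C0 Lagrange elements suffice. The notation is generic, not necessarily the exact formulation of the thesis:

```latex
% Mixed/penalty sketch for a scalar gradient model problem; u and the
% auxiliary field theta ~ grad(u) are both interpolated with C0 elements.
\begin{align*}
  &\text{find } (u,\theta) \text{ such that, for all test functions } (v,\eta):\\
  &\int_\Omega k\,\nabla u\cdot\nabla v \,\mathrm{d}x
   \;+\; \ell^{2}\int_\Omega \nabla\theta : \nabla\eta \,\mathrm{d}x
   \;+\; \alpha\int_\Omega (\theta-\nabla u)\cdot(\eta-\nabla v)\,\mathrm{d}x
   \;=\; \int_\Omega f\,v \,\mathrm{d}x .
\end{align*}
% ell is the internal length scale; alpha is a penalty parameter (a
% Lagrange multiplier field can replace the penalty term).
```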
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Cooper, Michael. „Atomic scale simulation of irradiated nuclear fuel“. Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/23808.

Der volle Inhalt der Quelle
Annotation:
Atomic scale simulations have been performed investigating various phenomena governing nuclear fuel performance during reactor operation and during post irradiation storage or disposal. Following a review of some of the key features of irradiated nuclear fuel, such as fission product distribution, two key factors were identified as the focus for this investigation: i) the role of uranium dioxide non-stoichiometry and ii) the effect of temperature. The former has been carried out using a previous pair potential model, whilst a new many-body potential was developed to enable temperature effects to be studied over the full range of temperatures of interest. Secondary oxide precipitates are known to exist in irradiated nuclear fuel with Ba, Sr and Zr precipitating to form the perovskite (Ba,Sr)ZrO3 grey phase. The binary BaO, SrO and ZrO2 may also be formed. The precipitation enthalpies of these oxides were predicted as a function of hyper-stoichiometry. Additionally, CrUO4 can also precipitate from Cr-doped UO2+x. The possibility of fission product segregation to the phases from UO2 or UO2+x was also investigated with a broad range of species preferring segregation from stoichiometric UO2. The role of defect cluster configuration on vacancy mediated uranium migration was investigated for UO2 and UO2+x. In both cases the lowest enthalpy migration pathway involved reconfiguration of the cluster to a metastable configuration. Furthermore, there were a very large number of alternative pathways that had similar migration enthalpies, especially for UO2+x. A new potential model was developed that uses a novel approach to include many-body interactions in the description of the actinide oxide series. This represents a significant improvement on previous models in the ability to describe the thermal expansion, specific heat capacity and elastic properties of CeO2, ThO2, UO2, NpO2, PuO2, AmO2 and CmO2 from 300 to 3000 K. Using the new model the thermal expansion, specific heat capacity, oxygen diffusivity and thermal conductivity of the mixed oxides (UxTh1-x)O2 and (UxPu1-x)O2 were predicted. Enhanced oxygen diffusion and a degradation in thermal conductivity were predicted in terms of the non-uniform cation sublattice.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Buxton, Oliver R. H. „Fine scale features of turbulent shear flows“. Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9080.

Der volle Inhalt der Quelle
Annotation:
This thesis presents an investigation into kinematic features of fine-scale turbulence in free shear flows. In particular it seeks to examine the interaction between the different length scales present in shear flow turbulence, as well as the interaction between the strain-rate tensor and the rotation tensor, which are the symmetric and skew-symmetric components of the velocity gradient tensor respectively. A new multi-scale particle image velocimetry (PIV) technique is developed that is capable of resolving the flow at two different dynamic ranges, centred on inertial range scales and on dissipative range scales, simultaneously. These data are used to examine the interaction between large-scale fluctuations, of the order of the integral scale, and inertial and dissipative range fluctuations. The large-scale fluctuations are observed to have an amplitude and frequency modulation effect on the small scales, and the small scales are shown to have a slight effect on the large scales, illustrating the two-way nature of the energy cascade. A mechanism whereby integral-scale rollers leave behind a wake of intense small-scale fluctuations is proposed. The interaction between strain and rotation is examined with regard to the rate of enstrophy amplification, $\omega_i S_{ij}\,\omega_j$. It is found that the mechanism responsible for the nature of enstrophy amplification is the alignment tendency between the extensive strain-rate eigenvector and the vorticity vector. This mechanism is also observed to be scale dependent for $\omega_i S_{ij}\,\omega_j > 0$, but scale independent for $\omega_i S_{ij}\,\omega_j < 0$. This is subsequently confirmed with new dual-plane stereoscopic PIV experiments performed as part of this study. Finally, computational data are used to examine the effect of experimental noise and of variation in spatial resolution on the observation and understanding of this strain-rotation interaction.
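For reference, the production term has a standard eigenframe decomposition that makes the alignment mechanism explicit (ordinary incompressible-flow kinematics, not a result specific to the thesis):

```latex
\begin{equation*}
  \omega_i S_{ij}\,\omega_j
  \;=\; \lvert\boldsymbol{\omega}\rvert^{2}
        \sum_{k=1}^{3} \lambda_k \cos^{2}\theta_k ,
  \qquad
  \lambda_1 \ge \lambda_2 \ge \lambda_3 ,\quad
  \lambda_1 + \lambda_2 + \lambda_3 = 0 ,
\end{equation*}
% lambda_k are the strain-rate eigenvalues and theta_k the angle between
% the vorticity vector and the k-th eigenvector; the sign of the production
% is therefore set entirely by how the vorticity aligns with the extensive
% (positive-lambda) and compressive (negative-lambda) eigendirections.
```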
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Menke, Hannah Paris. „Reservoir condition pore-scale imaging of reaction“. Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/33324.

Der volle Inhalt der Quelle
Annotation:
This thesis presents the first dynamic imaging of fluid/rock reaction using X-ray microtomography (μ-CT) and focuses on three series of experiments: (1) imaging a homogeneous carbonate during dissolution using a laboratory scanner; (2) imaging heterogeneous carbonates at multiple flow rates using a synchrotron pink beam; (3) imaging the same rocks using a laboratory scanner at multiple reactive conditions incorporating effluent analysis. First, the in situ reservoir-condition imaging apparatus was adapted to image Ketton carbonate dynamically using a laboratory μ-CT scanner. 10 images were acquired over 2½ hours. Porosity and surface area were measured from the images, and permeability and connectivity were calculated using flow models. Ketton dissolved uniformly at these conditions, although the effective reaction rate (r_eff) was 16 times lower than that measured in batch reactor experiments with no transport limitations. Second, the experimental apparatus was used with fast synchrotron-based μ-CT to image two more complex carbonates, Estaillades and Portland Basebed, at two different flow conditions. ~100 images were taken over 2 hours, which captured the complexity of dissolution. It was found that the type of dissolution is both pore-structure and flow-rate dependent. A new type of dissolution, channelling, is observed, which has an r_eff up to 100 times lower than batch rates. Third, effluent analysis was incorporated into the experimental apparatus. All three rocks were imaged again at two separate reactive conditions. The r_eff was between 10 and 100 times lower than the batch rates, with the lowest rates in samples with the most channelled flow, confirming that transport limitations are the dominant mechanism in determining r_eff at the fluid/solid boundary. Effluent analysis confirmed that using the in situ, rather than the injected, pH to determine r_eff is valid in the uniform regime, but overestimates r_eff by an order of magnitude under channelling.
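A back-of-envelope sketch of how an image-based effective rate is formed from two segmented images, assuming the molar volume of calcite; the numbers are invented to be of a plausible order for a μ-CT experiment and are not results from the thesis:

```python
def effective_rate(phi_0, phi_1, dt_s, surface_m2, bulk_m3,
                   molar_vol_m3=3.69e-5):  # calcite, ~36.9 cm^3/mol
    """Effective dissolution rate in mol m^-2 s^-1: moles removed
    (porosity change * bulk volume / molar volume), normalised by the
    image-measured fluid-solid surface area and the elapsed time."""
    moles = (phi_1 - phi_0) * bulk_m3 / molar_vol_m3
    return moles / (surface_m2 * dt_s)

# Hypothetical experiment: porosity 0.23 -> 0.26 over 2.5 h in a 0.5 cm^3
# subvolume exposing 4e-3 m^2 of reactive surface.
r_eff = effective_rate(0.23, 0.26, dt_s=2.5 * 3600,
                       surface_m2=4.0e-3, bulk_m3=5.0e-7)
print(f"r_eff = {r_eff:.2e} mol m^-2 s^-1")
```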
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Streeter, Elaine. „Computer-aided music therapy evaluation : investigating and testing the music therapy logbook prototype 1 system“. Thesis, University of York, 2010. http://etheses.whiterose.ac.uk/1201/.

Der volle Inhalt der Quelle
Annotation:
This thesis describes the investigation and testing of a prototype music therapy practice evaluation system: Music Therapy Logbook, Prototype 1. Such a system is intended to be used by music therapists as an aid to their existing evaluation techniques. The investigation of user needs, the multi-disciplinary team work, the pre-field and field recording tests, and the computational music analysis tests are each presented in turn, preceded by an in-depth literature review on historical and existing music therapy evaluation methods. A final chapter presents investigative design work for proposed user interface software pages for the Music Therapy Logbook system. Four surveys are presented (n = 6, n = 10, n = 44, n = 125). These gathered information on current music therapy evaluation methods, therapists' suggested functions for the system, and therapists' attitudes towards using the proposed automatic and semi-automatic music therapy evaluation functions, some of which were tested during the research period. The results indicate enthusiasm for using the system to record individual music therapy sessions, create written notes linked to recordings, and undertake automatic and/or semi-automatic computer-aided music therapy analysis, the main purpose of which is to quantify changes in a therapist's and patient's use of music over time (Streeter, 2010). Simulated music therapy improvisations were recorded and analysed. The system was then used by a music therapist working in a neuro-disability unit to record individual therapy sessions with patients with acquired brain injuries. These recordings constitute the first music therapy audio recordings employing multi-track audio recording techniques, using existing radio microphone technology. The computational music analysis tests applied to the recordings are the first such tests to be applied to recordings of music therapy sessions in which an individual patient played acoustic, rather than MIDI, instruments. The findings prove it is possible to gather objective evidence of changes in a patient's and therapist's use of music over time, using the Music Therapy Logbook Prototype 1 system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Le, Minh Thi Hong. „Brand fanaticism: Scale development“. Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130710/1/Minh%20Thi%20Hong_Le_Thesis.pdf.

Der volle Inhalt der Quelle
Annotation:
This research develops a definition of brand fanaticism and a valid brand fanaticism scale. Fanatical consumers may not represent the majority of consumers but have a disproportionate impact on the revenue and image of their focal brand. Five studies were conducted to validate the brand fanaticism scale with online survey data. Brand fanaticism captures the self-brand connection, brand prominence, obsessive passion, and cognitive rigidity that loyal consumers experience with focal brands. Brand commitment and brand love are antecedents of brand fanaticism, which in turn predicts word-of-mouth, behavioural intention, and willingness to sacrifice. The results provide theoretical and empirical research contributions in marketing.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

VALENCIA, JAVIER ANDRES FORERO. „EXPERIMENTAL ANALYSIS OF A FIBER-REINFORCED POLYMER CROSSTIE PROTOTYPE“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19909@1.

Der volle Inhalt der Quelle
Annotation:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Since the beginning of railroad construction, wood has been the most satisfactory material for fulfilling the sleeper's main function of transferring loads from the rail to the track ballast. Aiming to replace wood for both economic and environmental reasons, sleepers of other materials, such as concrete, steel, and polymer, have been adopted and researched. The objective of this study is to characterise the mechanical behaviour of sleepers made of plastic lumber composed of a fibre-reinforced polymer. In order to verify and evaluate the results, the sleepers were subjected to creep, bending, and impact tests, according to the specifications of the reports of the Indian Railway Institute of Civil Engineering in Pune by Gupta (2003), the TRL Limit report proposed by R. W. Jordan and G. Morris, and the results obtained by Professor Ney Augusto Dumont in his 2006 report on the laboratory instrumentation of a polymer sleeper prototype. All tests were conducted on sleepers supplied by the company Wisewood Soluções Ecológicas S.A. The results show that the sleepers act as beams that support very large shear loads but present a much lower resistance to bending, in a non-homogeneous material with marked viscoelastic behaviour. A very important property of a sleeper is its capacity to absorb energy: under impact loading, the structural response depends not only on the impact energy but also on the stiffness of the structure, the contact stiffness, and the mechanical properties of the materials.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Chhikara, Aakanksha. „The design and development of a wearable prototype device to monitor movement of the lumbar spine and pelvis“. Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6871.

Der volle Inhalt der Quelle
Annotation:
Chronic Low Back Pain (CLBP) is a leading cause of disability with high economic costs and severe psychological and social consequences. Difficulties in the management of Low Back Pain (LBP) include: credibility of self-reported health and function surveys, accurate patient evaluation and identifying reassessment time. The most critical issue is the assessment of LBP severity at a single point in time during clinic visits, rather than through continuous monitoring at home or in the workplace. This thesis describes the selection of parameters indicative of LBP impact on daily function followed by the design, development, testing, validation, and evaluation of a wearable prototype device that monitors the movement of the lumbar spine and pelvis simultaneously, and quantifies the motion characteristics. I identified three areas of disability and measurable parameters, proposed suitable sensing technologies and a placement map. Focussing on monitoring movement of the lumbar spine and pelvis, I investigated the use of inertial sensor technology and a miniaturised, cost-effective, wireless sensor platform. On confirming the reliability and reproducibility of the sensor signals for different movements, customised sensor boards, a Graphical User Interface (GUI), application software and analytical procedures were developed to enable data acquisition from two networked sensor nodes. After investigating possible wearable strategies, I designed a unique sensor case and fixation method. Ethics approval was available only for healthy participants and I conducted a Pilot Study with the developed prototypes on 16 volunteers performing movements affected in LBP patients. The intra- and inter-subject analyses of 11 participants demonstrated that the developed sensor prototypes correctly monitor the movements, measure consistent angular velocities and range of motion within a participant, detect similarities and differences between participants, and reveal the lumbar and pelvic contribution during movements. The two wireless sensors measure lumbo-pelvic motion simultaneously. The results are very promising and the sensor prototype can be tested with CLBP patients.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Wang, Yu. „Large scale agent interactions : mathematical modelling and simulation“. Thesis, Imperial College London, 2006. http://hdl.handle.net/10044/1/11286.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Wilkinson, Stephen James. „Aggregate formulations for large-scale process scheduling problems“. Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/7255.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Jarvis, Jeremy James. „Large scale toppling failure in metamorphic rock slopes“. Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/11287.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Betkaoui, Brahim. „Reconfigurable computing for large-scale graph traversal algorithms“. Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/25049.

Der volle Inhalt der Quelle
Annotation:
This thesis proposes a reconfigurable computing approach for supporting parallel processing in large-scale graph traversal algorithms. Our approach is based on a reconfigurable hardware architecture which exploits the capabilities of both FPGAs (Field-Programmable Gate Arrays) and a multi-bank parallel memory subsystem. The proposed methodology to accelerate graph traversal algorithms has been applied to three case studies, revealing that application-specific hardware customisations can benefit performance. A summary of our four contributions is as follows. First, a reconfigurable computing approach to accelerate large-scale graph traversal algorithms. We propose a reconfigurable hardware architecture which decouples computation and communication while keeping multiple memory requests in flight at any given time, taking advantage of the high bandwidth of multi-bank memory subsystems. Second, a demonstration of the effectiveness of our approach through two case studies: the breadth-first search algorithm, and a graphlet counting algorithm from bioinformatics. Both case studies involve graph traversal, but each of them adopts a different graph data representation. Third, a method for using on-chip memory resources in FPGAs to reduce off-chip memory accesses for accelerating graph traversal algorithms, through a case study of the All-Pairs Shortest-Paths algorithm. This case study has been applied to process human brain network data. Fourth, an evaluation of an approach based on instruction-set extension for FPGA design against many-core GPUs (Graphics Processing Units), based on a set of benchmarks with different memory access characteristics. It is shown that while GPUs excel at streaming applications, the proposed approach can outperform GPUs in applications with poor locality characteristics, such as graph traversal problems.
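For orientation, the sketch below shows level-synchronous breadth-first search over a graph stored in compressed sparse row (CSR) form, the traversal pattern at issue: each frontier expansion issues many independent, irregular reads of the adjacency array, which is what a decoupled, multi-bank memory design targets. The four-vertex example graph is invented:

```python
import numpy as np

def bfs_levels(indptr, indices, source):
    """Level-synchronous BFS over a CSR graph; returns each vertex's
    distance (in hops) from the source, or -1 if unreachable."""
    level = np.full(len(indptr) - 1, -1, dtype=np.int64)
    level[source] = 0
    frontier, depth = [source], 0
    while frontier:
        depth += 1
        nxt = []
        for u in frontier:                              # expand the frontier
            for v in indices[indptr[u]:indptr[u + 1]]:  # irregular reads
                if level[v] < 0:
                    level[v] = depth
                    nxt.append(v)
        frontier = nxt
    return level

# Undirected toy graph with edges 0-1, 0-2, 2-3, stored in CSR form.
indptr = np.array([0, 2, 3, 5, 6])
indices = np.array([1, 2, 0, 0, 3, 2])
print(bfs_levels(indptr, indices, source=0))  # -> [0 1 1 2]
```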
APA, Harvard, Vancouver, ISO und andere Zitierweisen