Dissertations / Theses on the topic 'ING-IND/01'

To see the other types of publications on this topic, follow the link: ING-IND/01.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 45 dissertations / theses for your research on the topic 'ING-IND/01.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ausiello, Ludovico <1979>. "Two-and-Three level representation of analog and digital signals by means of advanced sigma-delta modulation." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/2184/1/Ausiello_Ludovico_Tesi.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ausiello, Ludovico <1979>. "Two-and-Three level representation of analog and digital signals by means of advanced sigma-delta modulation." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/2184/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tosato, Giovanna <1987>. "Nascita e sviluppo dell'architettura navale passeggeri nel XIX secolo. Dalle navi alle "città galleggianti"." Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/3638.

Full text
Abstract:
The nineteenth century saw the birth of the passenger ship. Analysing the architectural transformation of the ship's structure from merchant vessel to passenger vessel shows how spaces suited to travellers were created, in step with the general evolution of nineteenth-century ships, from the use of new materials to modern technologies (the steam engine). The changes made to accommodate passengers concerned not only the structure but also the architecture of the interior spaces: rooms dedicated exclusively to first-class travellers appeared and, over the decades, became ever more luxurious and comfortable, comparable in architectural style and furnishing to the interiors of ordinary buildings of the century.
APA, Harvard, Vancouver, ISO, and other styles
4

Mattio, Erika <1989>. "L'Artiglieria Ottomana Navale e Campale." Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/3816.

Full text
Abstract:
The work presented here aims to bring together the Ottoman artillery held in various European museums, producing a catalogue of the pieces with analyses and comparisons, including with Venetian models. The time frame spans the fifteenth to the eighteenth century, while the geographical scope is very broad: from Saint Petersburg to Istanbul, from Venice to London. The work opens with a historical and institutional overview, continues with a central part on exchanges between Ottomans and Europeans, on metals, foundries, arsenals and ships, and closes with a third and final part devoted to the artillery pieces themselves and their study. The objective is therefore to investigate this still little-known field, initiating a scientific and methodological approach to the study of Ottoman artillery.
APA, Harvard, Vancouver, ISO, and other styles
5

AGENO, EMANUELA. "Ship Motions and Added Resistance with a BEM in frequency and time domain." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/944942.

Full text
Abstract:
This thesis focuses on the calculation of ship motions and the evaluation of added resistance in waves. A partially desingularized panel method based on potential theory has been developed: Rankine sources are distributed on the hull and at a small distance above the free surface, so that only the free surface is desingularized. This choice also allows thin hull shapes at the bow, where desingularization could cause numerical problems, to be handled. The main advantage of this approach is a reduction in computational time, especially when nonlinear effects are considered, provided an adequate vertical distance between sources and panel centres is selected. The fluid-domain boundaries are represented by a structured grid of flat quadrilateral panels. In the linear case the boundary conditions are applied on the mean wetted body surface and the free surface is taken at the calm-water level. Using an Eulerian time-stepping integration scheme, the kinematic and dynamic boundary conditions are updated on the free surface at every time step. Once the potential is obtained, the pressure on the mean hull surface can be calculated, and forces and moments are determined by integrating the pressure over the body surface. The introduction of nonlinear effects is then analysed in a two-dimensional setting; in particular, a 2D body-exact method has been developed. The added resistance is determined by a near-field method, integrating the second-order pressure over the body surface, and is then corrected with a semi-empirical method to account for wave reflection in short waves. The adequacy of the results has been verified by applying the code to different test cases and comparing the numerical output with experimental data available in the literature. Furthermore, to assess the improvements obtained with the present method, the results have been compared with another numerical method in the frequency domain.
APA, Harvard, Vancouver, ISO, and other styles
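The desingularized Rankine-source idea described in the abstract above can be illustrated with a short sketch. This is not code from the thesis, and all function names are illustrative: each free-surface source is raised a small offset above its panel centre, and an influence matrix collects the potential each unit-strength source induces at each collocation point.

```python
import math

def rankine_source_potential(p, q):
    """Velocity potential at field point p induced by a unit-strength
    Rankine source at q: phi = -1 / (4*pi*r)."""
    r = math.dist(p, q)
    return -1.0 / (4.0 * math.pi * r)

def desingularized_sources(panel_centers, offset):
    """Raise each free-surface source a distance `offset` above its
    panel center (z positive upward), so collocation points on the
    free surface never coincide with a singularity."""
    return [(x, y, z + offset) for (x, y, z) in panel_centers]

def influence_matrix(collocation_points, source_points):
    """A[i][j] = potential at collocation point i due to unit source j;
    solving A*sigma = rhs for the source strengths is the core of the
    boundary element method."""
    return [[rankine_source_potential(p, q) for q in source_points]
            for p in collocation_points]
```

In a real solver the right-hand side would come from the linearized free-surface and body boundary conditions; the sketch only shows why the offset keeps the kernel regular.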
6

PETACCO, NICOLA. "Second Generation Intact Stability criteria: Analysis, Implementation and Applications to significant ship typologies." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/945565.

Full text
Abstract:
This doctoral thesis addresses all the stability failures covered by the Second Generation Intact Stability criteria (SGISc), which are being finalized within the relevant sub-committee of the International Maritime Organization (IMO). The SGISc have been among the most discussed topics since the 2008 session, drawing the attention not only of the international scientific community but also of other stakeholders such as designers and shipyards; the criteria are expected to be finalized and published in an official IMO document by the 2020 session. The new regulation introduces a multi-layered approach to applying the criteria: three vulnerability levels of increasing accuracy and complexity. One aim of this dissertation is to evaluate, both qualitatively and quantitatively, how the SGISc will affect existing vessels and new projects. To pursue this objective, a set of computational codes, one for each stability failure and vulnerability level, has been developed and integrated with existing in-house software. Before the codes were written, the physics behind each stability failure was studied, together with a detailed analysis of the regulatory texts. Subsequently, a comprehensive campaign of applications to a representative mega-yacht unit and a Ro-Ro pax ferry was carried out to verify and validate the codes. A navy vessel and a container ship were included in the analysis because they are deemed vulnerable to the phenomena addressed by the SGISc. To identify relationships between the stability failures and the main design parameters, a set of parent-hull variations was carried out. To better understand which parameters matter most in each specific phenomenon, a useful tool from systems engineering was adopted: the Design Structure Matrix (DSM). Thanks to the DSM it has been possible to classify the direction and magnitude of the relationships among the parameters introduced by the SGISc.
APA, Harvard, Vancouver, ISO, and other styles
7

Bulian, Gabriele. "DEVELOPMENT OF ANALYTICAL NONLINEAR MODELS FOR PARAMETRIC ROLL AND HYDROSTATIC RESTORING VARIATIONS IN REGULAR AND IRREGULAR WAVES." Doctoral thesis, Università degli studi di Trieste, 2006. http://hdl.handle.net/10077/2518.

Full text
Abstract:
2004/2005
Parametrically excited roll motion has become a relevant technical issue, especially in recent years, due to the increasing number of accidents related to this phenomenon. For this reason, its study has attracted the interest of researchers, regulatory bodies and classification societies. The objective of this thesis is the development of nonlinear analytical models providing simplified tools for the analysis of parametrically excited roll motion in longitudinal, regular and irregular, long-crested waves. The models take into account the nonlinearities of restoring and damping, in order to close the gap with the analytical modelling of the beam-sea case. In addition, semi-empirical methodologies are provided to extend the usual static approach to ship stability, based on the analysis of the GZ curve, to a probabilistic framework in which the ship's propensity to exhibit restoring variations in waves is rationally accounted for. The thesis addresses three main topics: the modelling of parametric roll in regular seas (Chapters 2 to 5), the modelling of parametric roll in irregular long-crested seas (Chapters 6 and 7) and the extension of deterministic stability criteria based on geometrical properties of the GZ curve to a probabilistic framework (Chapter 8). Chapter 1 gives an introduction, while Chapter 9 reports a series of final remarks. For the regular-sea case an analytical model is developed and analysed both in the time domain and in the frequency domain; in the latter case an approximate analytical solution for the nonlinear response curve in the first parametric resonance region is obtained using the method of averaging. Predictions are compared with experimental results for four ships, and the analytical model is investigated with particular attention to the presence of multiple stable steady states and the inception of chaotic motions. The influence of harmonic components higher than the first in the fluctuation of the restoring term is also investigated. For the irregular-sea case, Grim's effective wave concept is used to develop an analytical model for long-crested longitudinal seas that allows an approximate analytical determination of the stochastic stability threshold in the first parametric resonance region. Experimental results are compared with Monte Carlo simulations for a single ship, showing the need for a tuning factor that reduces the hydrostatically predicted magnitude of the parametric excitation. The non-Gaussianity of parametrically excited roll motion is also discussed. Finally, on the basis of the analytical modelling of the restoring term in irregular waves, an extension of the classical deterministic approach to ship static stability in calm water is proposed, to take into account, albeit in semi-empirical form, restoring variations in waves. The classical calm-water GZ curve is thus extended from a deterministic quantity to a stochastic process. By limiting the discussion to the instantaneous ensemble properties of this process, it is shown how any static stability criterion based on geometrical properties of the GZ curve can be extended to a rational probabilistic framework that takes into account the ship's actual operational area and its propensity to show restoring variations in waves. General measures of restoring variations are also discussed, such as the coefficient of variation of the metacentric height, the restoring lever and the area under the GZ curve. Both short-term and long-term points of view are considered, and the method is applied to three different ships in different geographical areas.
APA, Harvard, Vancouver, ISO, and other styles
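Parametric roll, as studied in the thesis above, is often introduced through a damped Mathieu-type equation in which the restoring coefficient oscillates at the encounter frequency. The following sketch is purely illustrative and is not the thesis's model: a fixed-step RK4 integrator shows the roll amplitude growing when the encounter frequency is near twice the natural roll frequency (the first parametric resonance region) and damping is small.

```python
import math

def simulate_roll(h, mu, omega0, omega_e, phi0=0.01, dt=0.005, t_end=60.0):
    """Integrate phi'' + 2*mu*phi' + omega0^2*(1 + h*cos(omega_e*t))*phi = 0
    with a fixed-step RK4 scheme and return the peak |phi| reached."""
    def deriv(t, y):
        phi, dphi = y
        return (dphi,
                -2.0 * mu * dphi - omega0**2 * (1.0 + h * math.cos(omega_e * t)) * phi)
    y = (phi0, 0.0)
    t, peak = 0.0, abs(phi0)
    while t < t_end:
        k1 = deriv(t, y)
        k2 = deriv(t + dt / 2, (y[0] + dt / 2 * k1[0], y[1] + dt / 2 * k1[1]))
        k3 = deriv(t + dt / 2, (y[0] + dt / 2 * k2[0], y[1] + dt / 2 * k2[1]))
        k4 = deriv(t + dt, (y[0] + dt * k3[0], y[1] + dt * k3[1]))
        y = (y[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        t += dt
        peak = max(peak, abs(y[0]))
    return peak
```

With the excitation tuned to twice the natural frequency, a small initial disturbance grows when the damping is below the classical threshold, and decays otherwise, which is the instability boundary the thesis derives analytically (with nonlinear restoring and damping) via the method of averaging.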
8

Ali, Muhammad. "Terahertz Detectors and Imaging Array with In-Pixel Low-Noise Amplification and Filtering in CMOS technologies." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/368746.

Full text
Abstract:
The terahertz gap, corresponding to the frequency band 0.3-3.0 THz, is historically the last unexplored region of the electromagnetic spectrum left to be fully investigated. The major difficulty that has hampered the maturation of technologies operating in this region lies in the fact that, unlike in the bordering millimetre-wave and infrared regions, the generation and detection of THz radiation is not trivial. Yet such is the intriguing nature of terahertz radiation that interest in this region has not faded. In fact, potential applications of THz-based systems have emerged in various fields, including biomedical imaging, safety and security, quality control and communication. Over the past decade a great deal of research has been published with the aim of bridging this gap with both electronic and photonic systems. While these attempts have succeeded to a certain extent, the available solutions either fall short in performance or are mostly bulky and difficult to integrate for portable and commercial purposes. This PhD dissertation focuses on the design and investigation of direct terahertz detectors that can operate at room temperature and be fabricated in standard silicon technologies, thereby exploiting the high level of integration, low cost and small device size these technologies offer. In particular, the emphasis is on developing and characterizing terahertz systems for imaging applications using field-effect transistor devices as detectors. This objective is pursued in three parts. The first part (Chapter 3) deals with the measurement and characterization challenges of terahertz systems. Unlike guided-mode solutions, measurements of terahertz detectors and their systems require free space, which presents several challenges due to atmospheric attenuation, spurious reflections and diffractions, beam shaping, and so on. Moreover, background noise is significant, considering that the detected signal is typically of the order of a few microvolts. In this regard, an overview of the most common techniques is given, and a measurement methodology is presented that uses a reference pyroelectric detector to measure the impinging input power, together with techniques for evaluating the effective area of the detector under test. The second part (Chapter 4) investigates variants of antenna-coupled field-effect transistors and Schottky barrier diodes in a standard 180 nm CMOS process as examples of direct detectors. During laboratory characterization, detection of terahertz radiation with the Schottky diode could not be achieved due to matching issues. Moreover, optimizing the Schottky diode by modifying its standard cell proved challenging compared with the field-effect transistor, which can easily be optimized to enhance performance parameters and was therefore chosen as the preferred device. The final part of the thesis (Chapters 5 and 6) concerns the implementation of an analog readout interface to process the detected terahertz signal. First, a single pixel consisting of an on-chip antenna-coupled detector and switched-capacitor filtering was designed and fabricated in a 0.15 µm process. The pixel was tested through both electrical and terahertz characterization, achieving a high voltage responsivity of 470 kV/W and a minimum NEP of 480 pW/√Hz. The interface architecture is highly repeatable and can be used with any commercially available terahertz source, even though its operation is limited by a low modulation frequency. On the basis of these successful measurement results, an 8 × 6 terahertz array for real-time imaging was fabricated in the same technology, with the interface architecture modified to make it power- and area-efficient.
APA, Harvard, Vancouver, ISO, and other styles
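The two figures of merit quoted in the abstract above, voltage responsivity and noise-equivalent power, are linked through the detector's output noise density: NEP = v_n / R_v. A minimal helper (illustrative, not from the thesis) shows the relation applied to the reported values of 470 kV/W and 480 pW/√Hz.

```python
def nep(noise_density_v, responsivity_v_per_w):
    """Noise-equivalent power in W/sqrt(Hz): the input power at which
    the signal equals the output noise in a 1 Hz bandwidth."""
    return noise_density_v / responsivity_v_per_w

def implied_noise(nep_w, responsivity_v_per_w):
    """Invert the relation: output noise density v_n = NEP * R_v."""
    return nep_w * responsivity_v_per_w
```

For 480 pW/√Hz and 470 kV/W this implies an output noise density of roughly 226 µV/√Hz, which is the quantity actually measured at the readout before converting to NEP.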
9

Somov, Andrey. "Power Management and Power Consumption Optimization Techniques in Wireless Sensor Networks." Doctoral thesis, Università degli studi di Trento, 2009. https://hdl.handle.net/11572/367818.

Full text
Abstract:
A Wireless Sensor Network (WSN) is a distributed collection of resource-constrained tiny nodes capable of operating with minimal user attendance. Thanks to their flexibility and low cost, WSNs have recently become widely applied in traffic regulation, building fire alarms, wildfire monitoring, agriculture, health monitoring, building energy management and ecological monitoring. However, the deployment of WSNs in difficult-to-access areas makes it hard to replace the batteries, the main power supply of a sensor node, which means that power limitations appreciably constrain the nodes' functionality and potential applications. The use of harvesting components such as solar cells, together with energy storage elements such as supercapacitors and rechargeable batteries, is by itself insufficient for long-term sensor node operation. With this thesis we show that long-term operation can be achieved by adopting a combination of hardware and software techniques along with energy-efficient WSN design. To demonstrate hardware power management, an energy scavenging module was designed, implemented and tested. The module can handle both alternating current (AC) and direct current (DC) ambient sources. The harvested energy is stored in two energy buffers of different kinds and is delivered to the sensor node according to an efficient energy-supply switching algorithm. The software part of the thesis presents an analytical criterion to establish the synchronization period that minimizes the average power dissipated by a WSN node. Since the radio chip is usually the most power-hungry component on a board, this approach helps decrease power consumption and prolong the lifetime of the entire WSN. The thesis then demonstrates a methodology for evaluating the power consumption of a WSN. The methodology supports the Platform-Based Design (PBD) paradigm, providing power analysis for various sensor platforms by defining separate abstraction layers for application, services, hardware and power supply modules. Finally, we present three applications in which we use the designed hardware module and apply various power management strategies. The first applies the WSN paradigm to entertainment, in particular to the domain of paintball. The second concerns a wireless sensor platform for monitoring dangerous gases and early fire detection; the platform's operation is based on detecting pyrolysis products, which makes it possible to prevent fire before inflammation. The third application is connected with medical research and describes the powering of wireless brain-machine interfaces.
APA, Harvard, Vancouver, ISO, and other styles
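The trade-off behind an optimal synchronization period, as studied in the thesis above, can be illustrated with a deliberately simplified model (this is not the thesis's criterion; the cost terms A and B are assumptions): a fixed energy cost per resynchronization is amortized over the period, while guard-time listening energy grows with the clock drift accumulated over the period, giving P(T) = A/T + B·T with a closed-form minimizer.

```python
import math

def avg_power(T, A, B):
    """Illustrative model of average node power versus sync period T:
    A/T amortizes the fixed cost of each resynchronization, while B*T
    captures guard-time radio energy that grows with the clock drift
    accumulated over the period."""
    return A / T + B * T

def optimal_period(A, B):
    """Minimizer of A/T + B*T, from dP/dT = -A/T^2 + B = 0: T* = sqrt(A/B)."""
    return math.sqrt(A / B)
```

Any refinement (duty-cycled listening, temperature-dependent drift) changes the two cost terms but not the shape of the trade-off: too-frequent syncing wastes wake-up energy, too-rare syncing wastes guard-time energy.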
10

Ciaghi, Aaron. "Standardizing ICT for Development: Towards the Definition of a Standard Process and Maturity Model for ICTD Projects." Doctoral thesis, Università degli studi di Trento, 2014. https://hdl.handle.net/11572/368172.

Full text
Abstract:
The use of ICTs to stimulate socio-economic development has grown significantly in popularity in recent years, boosted by the “mobile revolution” of the last decade. Despite the multi-year experience of academics, donor agencies and businesses, the failure rate of ICT for Development projects is still very high, resulting in a significant waste of resources. The multiple challenges that must be faced simultaneously make these projects more difficult than “traditional” ICT projects, and recommendations such as best practices have been suggested over the years. Nonetheless, the systematic replicability of these practices has never been addressed, and project management and formalized development processes that address the particular challenges of the domain are extremely rare. This thesis presents a Maturity Model, derived from the experience of researchers and practitioners collected through an extensive literature review and interviews, that aims at providing a set of guidelines to be used throughout an ICT for Development intervention to increase the probability of success.
APA, Harvard, Vancouver, ISO, and other styles
11

Borovin, Evgeny. "NMR Characterization of Sol-Gel derived Hybrid Nanomaterials: insight on organic-inorganic Interfaces." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367776.

Full text
Abstract:
This thesis focuses on the synthesis and structural characterization of hybrid organic-inorganic materials for different application fields (materials for VOC sensing and for polymer-based nanocomposites), exploiting the conventional sol-gel method or the Nano Building Blocks (NBBs) approach with the in situ water production route. In the first part of the work, the co-condensation of TEOS and organofunctional alkoxysilanes allowed the preparation of hybrid sol-gel networks. The synergic use of XRD and NMR allowed an in-depth study of the phase interaction. The hybrid coatings, prepared by dip-coating, showed structural features similar to those of the bulk xerogels. Two different approaches were combined to study the coatings' sorption ability towards selected volatile organic compounds (VOCs). The coatings appear promising for the detection and removal of VOCs at low temperatures, being able to quickly desorb entrapped volatiles. Fine adjustment of such hybrids can allow discrimination between similar compounds and reduce water sorption, since not only the microstructure but also the polarity of the effective hybrid coating surface plays a decisive role in the sorption process. In the second part of the work, the synthesis parameters were fine-tuned to obtain Si-based SH-functionalized NBBs. Water provided in situ through the esterification of chloroacetic acid with 1-propanol enabled the hydrolysis-condensation of the SH-functionalized alkoxysilane. The choice of catalyst (TFA or DBTL) and variations in the esterification reaction parameters clearly determined which NBB structural units were preferentially formed. Varying the reaction temperature made it possible to follow the kinetics of the esterification and to relate the water production rate to the kinetics of NBB growth, highlighting a strong correlation between H2O availability and the extent of condensation. The complementary use of multinuclear NMR, FTIR and GPC techniques elucidated in full the development of the NBBs' structural features during the reaction.
APA, Harvard, Vancouver, ISO, and other styles
12

Abdelrazek, Ahmed Abdelhakim Moustafa. "Transformerless Grid-Tied Impedance Source Inverters for Microgrids." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427190.

Full text
Abstract:
The diffusion of renewable energy sources (RESs) into the power system is continuously increasing: the world's cumulative installed capacity of solar and wind energy grew from around 63.2 GW in 2005 to around 903.1 GW in 2017, according to the International Renewable Energy Agency (IRENA). Utilizing energy from these RESs requires a power conditioning stage (PCS), which acts as an interfacing layer between the RES and the customer side, i.e. the load or the grid. PCSs can adopt many different configurations depending on the RES employed; the two-stage architecture is commonly used with solar photovoltaic (PV) systems because of their low or variable output voltage. Such a two-stage architecture is usually implemented with a boost converter, which regulates the PV source output voltage and maximizes the output power, followed by a voltage source inverter (VSI) that performs the inversion. Impedance source inverters, on the other hand, represent a different family of PCSs, called single-stage power converters because they embed the boosting capability within the inversion operation. This family is seen as an interesting and competitive alternative to the two-stage configuration, which is otherwise mandatory for low- or variable-voltage energy sources such as PV and fuel cells, and has been used in many applications, such as distributed generation and electric vehicles. Since the first publication of the three-phase Z-source inverter (ZSI) in 2003, this family of PCSs has evolved rapidly with the aim of replacing the conventional two-stage architecture.
Consequently, many research activities have sought to improve ZSI performance from different perspectives, such as overall voltage gain, voltage stresses across the devices, continuity of the input current, and conversion efficiency. Among the different topological improvements, the conventional ZSI and the quasi-ZSI (qZSI) are the most commonly used structures. The objective of this thesis is accordingly to study and reinforce the performance of this family of PCSs. The work starts by addressing the challenges of eliminating the low-frequency transformer in grid-tied PV systems to improve conversion efficiency: a new measurement technique for the dc current component is proposed to mitigate this component effectively. The performance of the classical impedance source inverters is then assessed by studying all the possible modulation schemes and proposing a new one, under which their efficiency is improved. Furthermore, partial-load operation of these inverters, considering the three-phase qZSI, is studied, and possible ways of achieving a wide operating range are investigated. Given the drawbacks of the classical impedance source inverters, an alternative topology, called the split-source inverter (SSI), is proposed, which effectively mitigates or eliminates these drawbacks. The challenges of grid-tied operation of such single-stage dc-ac power converters are then investigated for the SSI topology. All the aforementioned contributions have been validated experimentally.
Finally, the thesis is divided into two parts: the first is an extended summary of the work on the thesis topic, while the second includes selected papers from the publications developed during the doctoral study, which give the full details of the work summarized in each section.
APA, Harvard, Vancouver, ISO, and other styles
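The boosting capability that distinguishes single-stage impedance source inverters, as discussed in the abstract above, is usually quantified for the ideal ZSI by the boost factor B = 1/(1 − 2·D_sh), where D_sh is the shoot-through duty ratio; the overall ac gain combines B with the modulation index M. A minimal sketch (helper names are illustrative, not from the thesis):

```python
def zsi_boost_factor(d_sh):
    """Ideal Z-source inverter boost factor B = 1/(1 - 2*D_sh),
    valid for a shoot-through duty ratio D_sh in [0, 0.5)."""
    if not 0.0 <= d_sh < 0.5:
        raise ValueError("shoot-through duty must lie in [0, 0.5)")
    return 1.0 / (1.0 - 2.0 * d_sh)

def zsi_peak_phase_voltage(modulation_index, d_sh, v_dc):
    """Ideal peak ac phase voltage: v_peak = M * B * V_dc / 2,
    i.e. the conventional VSI gain multiplied by the boost factor."""
    return modulation_index * zsi_boost_factor(d_sh) * v_dc / 2.0
```

The singularity as D_sh approaches 0.5, together with the coupling between M and D_sh (their sum cannot exceed one in simple modulation schemes), is precisely what motivates the modulation-scheme studies and the alternative topologies, such as the SSI, investigated in the thesis.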
13

Sultan, D. M. S. "Development of Small-Pitch, Thin 3D Sensors for Pixel Detector Upgrades at HL-LHC." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/367699.

Full text
Abstract:
3D silicon radiation sensors offer extreme radiation hardness, primarily owing to their geometrical advantages over planar sensors: the electrodes are columns penetrating the active substrate volume, which reduces the inter-electrode distance, lowers the required depletion voltage, establishes a high electric field between columns, lowers the trapping probability, speeds up charge collection, reduces power dissipation and limits inter-pitch charge sharing. For several years FBK has developed 3D sensors with a double-sided technology, which have also been installed in the ATLAS Insertable B-Layer at the LHC. However, the future High-Luminosity LHC (HL-LHC) upgrades, intended to be operational by 2024, require replacing the current 3D detectors with a more radiation-hard sensor design able to withstand very large particle fluences, up to 2×10^16 cm^-2 1-MeV-equivalent neutrons. The extreme luminosity and the related occupancy and radiation-hardness requirements lead to very dense pixel granularity (50×50 or 25×100 µm²), a thinner active region (~100 µm), narrower columnar electrodes (~5 µm diameter) with reduced inter-electrode spacing (~30 µm), and very slim edges (~100 µm) in the 3D pixel sensor design. This thesis covers the development of this new generation of small-pitch, thin 3D radiation sensors aimed at the foreseen Inner Tracker (ITk) upgrades at the HL-LHC.
APA, Harvard, Vancouver, ISO, and other styles
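A rough geometric reading of the inter-electrode spacing quoted above: if one assumes a single junction column at the centre of each cell with ohmic columns at the cell corners (an assumption for illustration; the actual column layouts are defined in the thesis), the worst-case drift path is half the cell diagonal.

```python
import math

def worst_case_drift_path(pitch_x, pitch_y):
    """Half the cell diagonal: the longest distance a carrier must
    drift when one junction column sits at the cell centre and ohmic
    columns sit at the corners (illustrative layout)."""
    return 0.5 * math.hypot(pitch_x, pitch_y)
```

For a 50×50 µm² cell this gives about 35 µm, the same order as the ~30 µm spacing quoted in the abstract; the actual value depends on how many columns are placed per pixel.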
14

D'UBALDO, OLIVIA. "Aero/Hydro-elastic instabilities occurrence in naval architecture: strategies to approach yacht appendages design." Doctoral thesis, Università degli studi di Genova, 2022. https://hdl.handle.net/11567/1104798.

Full text
Abstract:
Fluid-structure interaction (FSI) methods remain a niche and are rarely used in yacht engineering, mainly because of the specific expertise these problems require and because of the heavy computational burden they often impose: the diffusion of simplified methods should be promoted, providing practical guidelines for integrating FSI into traditional sailing yacht engineering. The main aim of this thesis is to contribute to the development of design strategies that enable and simplify the prediction of fluid-structure interaction phenomena in sailing yacht appendages. The author proposes a design strategy for solving a hydro-elastic problem on a hydrofoil, based on the simultaneous use of three main approaches: analytical methods, numerical computation and experimental campaigns. Special attention is given to applying the proposed strategy to the prediction of a specific hydro-elastic instability: flutter on hydrofoils. The main tool used to predict the flutter condition analytically, discussed in Part A of this thesis, is Theodorsen theory. This model can be considered semi-analytical, since its implementation requires a CAD (Computer-Aided Design) package for geometric modelling of the structure and calculation of its mass properties, and finite element (FE) models to compute the structure's pure natural frequencies of vibration. Since the flutter limit speed depends strongly on these variables, the FE models had to be validated against experimental model assessment tests based on static and dynamic dry testing. In parallel, the flutter speed was measured experimentally in the INM-CNR (Institute of Marine Engineering) towing tank in Rome.
The experimental campaign, aimed at inducing the flutter phenomenon, allows both the analytical and the numerical approaches to be validated: the instability condition encountered experimentally is compared against the flutter limit computed with Theodorsen theory; FSI numerical simulations are not discussed within this thesis, but the experimental findings presented are explicitly intended for comparison with future numerical results. To develop and present the proposed design strategy, a pilot case was needed. The design process of the hydrofoil pilot case, discussed in the main body of this thesis, aims to find the optimal combination of structural parameters that meets the facility's speed range, construction constraints and the field of application of the Theodorsen approach: the hydrofoil is conceived to encounter flutter at a speed compatible with the velocity range imposed by the towing tank. The thesis is divided into four parts. The main body opens with a wide literature review intended to build a theoretical basis for solving fluid-structure interaction problems; it then describes the design and construction of the pilot case, presents the structure of the design strategy and compares the results. Parts A, B and C report the three main pillars of the design strategy, respectively analytical methods, the experimental campaign and numerical FSI simulations, describing the proposed methods, the tools used and the results obtained.
APA, Harvard, Vancouver, ISO, and other styles
15

Sgarbossa, Riccardo. "Unintentional Islanding in Distribution Networks with Large Penetration of Power Electronics and Renewable Energy Systems." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3424898.

Full text
Abstract:
The PhD thesis focuses on the analysis and investigation of a crucial issue related to the increasing number of distributed energy resources (DERs). This recent issue is the unintentional (i.e. uncontrolled) islanding operation of distribution networks with large penetration of DERs based on power electronic converters. Particular focus has been placed on the interaction between DERs, protection systems and the new connection rules required by standards bodies. The aim of the research activity is the investigation of the causes and influencing factors of unintentional islanding in medium- and low-voltage (MV and LV) distribution networks. The unintentional islanding issue has been the subject of many studies and publications over the last decades. However, the existing literature largely fails to consider the recently introduced European standards and technical specifications for DERs. Therefore, during the PhD research, novel aspects of how requirements and ancillary services influence unintentional islanding operation have been studied, highlighting relevant new factors such as the role of load characteristics, the influence of the frequency measurement and the inverter regulation speed.
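Unintentional-islanding studies of this kind are commonly carried out against a parallel RLC load tuned to resonate at the nominal grid frequency with a prescribed quality factor. A minimal sketch of the standard resonant-load sizing relations follows; the formulas are the generic textbook ones, not taken from the thesis:

```python
import math

def islanding_test_load(P, V, f, Qf):
    """Size a parallel RLC load for an islanding test: the resistor absorbs the
    active power P at voltage V, while L and C resonate at the nominal
    frequency f with quality factor Qf = R * sqrt(C / L)."""
    w = 2.0 * math.pi * f
    R = V ** 2 / P       # ohms
    L = R / (w * Qf)     # henries (inductive branch)
    C = Qf / (w * R)     # farads (capacitive branch)
    return R, L, C

# Example: a 10 kW load at 230 V, 50 Hz, with Qf = 1
R, L, C = islanding_test_load(10e3, 230.0, 50.0, 1.0)
```

By construction L*C = 1/w**2, so after the grid is disconnected the load's natural resonance sits exactly at nominal frequency, which is the worst case for frequency-based anti-islanding protections.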
APA, Harvard, Vancouver, ISO, and other styles
16

ODETTI, ANGELO. "Study of innovative autonomous marine vehicles for monitoring in remote areas and shallow waters The Shallow Water Autonomous Multipurpose Platform (SWAMP)." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1003967.

Full text
Abstract:
The main objective of the research activity covered by the present thesis is the design of an Autonomous Surface Vehicle for the monitoring of environmental areas characterised by shallow water, difficult access and harsh environments, namely the Wetlands. Wetlands are those geographic areas where water meets the earth, and they cover between 5 and 8% of the Earth's surface. Wetlands include mangrove zones, swamps, bogs and marshes, rivers and lakes, alluvial plains and flooded forests, shallow coasts and coral reefs. In recent years their importance has become more and more widely recognised, and various international conventions, directives and projects work on their protection. Their importance is related to the fact that these areas are essential ecosystems, considered among the world's most productive environments. Classified as natural purification systems and carbon resources, Wetlands provide the water and productivity upon which biological diversity relies for the growth of an enormous number of species of plants and animals. Their importance is also related to human activities, since Wetlands can be exploited commercially for fishing; they become especially important considering that their protection can also help fight the disasters resulting from human impact on the environment and its role in climate change. The lack of hydrographic vessels capable of performing shallow-water measurements at depths of less than 1 m has led to unreliable maps and data, thus motivating research on innovative technical approaches for executing the tasks of water sampling, limnological surveys, bathymetric analyses and monitoring of water quality. In recent years a variety of robotic approaches to improve the quality, speed, and accessibility of surveys have been explored by research groups using both commercial and ad-hoc solutions. 
In this thesis a prototype Autonomous Surface Vehicle (ASV) named SWAMP (Shallow Water Autonomous Multipurpose Platform) is proposed as the basis for an innovative class of reliable, modular, re-configurable, lightweight ASVs for extremely shallow water applications. The vehicle was studied to solve the problem of monitoring the water status in the Wetlands, but the SWAMP-class ASV will also be able to support, as a test-bed, research on many aspects of marine engineering and robotics, such as propulsion, structures, artificial intelligence, cooperative distributed control, and Guidance, Navigation and Control (GNC) systems, as well as innovative technological solutions in terms of communication, materials, sensors and actuators. The heterogeneity of the themes treated by this thesis stems from the fact that all aspects of the design were taken into consideration. This thesis describes the design, modelling, construction and testing of this new concept of Autonomous Surface Vehicle. The motivations behind the necessity of a new system are described in Chapters 1 and 2, while Chapter 3 reports the general considerations on the requirements that led to the definition of the specifications for a special layout. Chapter 4 illustrates the design of the vehicle layout, together with the description of an innovative soft-material hull structure on which extensive towing-tank analyses were performed. The tests were carried out both in deep and shallow water to completely identify the surge motion of the ASV. Hardware, software and mechanical modularity are among the main ideas behind the conception of SWAMP. The two hulls of SWAMP are two separate modules, and each hull can be composed of a varying number of structural elements, actuation modules, powering elements, control units and sensors. This can be done without constraints thanks to the novel communication architecture, entirely based on Wi-Fi modules. 
Chapter 5 illustrates the thrusters expressly studied for environmental monitoring in the extremely shallow waters of the Wetlands (rivers, lakes, swamps, marshes, mangroves...). These systems were modelled, designed and constructed on the Pump-Jet concept; four Pump-Jet Modules for a class of small/medium-size ASVs were built, and their design and tests are reported in the thesis. The extremely modular hardware control system of SWAMP is described in Chapter 6, where the modules composing the vehicle are also described. Once assembled, the vehicle was tested at sea in various environments. A series of pioneering tests applying Machine Learning, with citizens engaged in teaching a robot to control itself, is described in Chapter 7, together with more standard results. The algorithm for training a neural network to control SWAMP was also tested using the simulator described in Chapter 8. The conclusions of this work, reported in Chapter 9, are complemented by a visionary analysis of the possible applications of SWAMP in a series of futuristic research trends of marine robotics.
APA, Harvard, Vancouver, ISO, and other styles
17

Paternoster, Giovanni. "Silicon Concentrator Solar Cells: Fabrication, Characterization and Development of Innovative Designs." Doctoral thesis, Università degli studi di Trento, 2013. https://hdl.handle.net/11572/368472.

Full text
Abstract:
This work presents the design, realization and characterization of high-efficiency silicon photovoltaic cells for concentration applications. In order to develop highly efficient Si concentrator solar cells, two different routes have been followed. The first aims to optimize the design and fabrication process of a conventional front-side contacted cell, based on a planar n-p junction. Although this cell structure is rather simple and cheap to produce, we show that a conversion efficiency higher than 23% can be reached under concentrated light if the cell design and the fabrication process are suitably optimized. The second route investigates and proposes completely new cell designs which use 'three-dimensional' structures, such as deep-grooved contacts and through-silicon vertical connections. The new cell designs overcome some intrinsic limits of conventional front-side contacted cells and could prove worthwhile for improving the conversion efficiency in future real applications.
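The basic reason concentration can raise efficiency is worth recalling; the relation below is the standard single-diode argument, not an equation from the thesis:

```latex
% Short-circuit current scales linearly with concentration factor X,
% so the open-circuit voltage grows logarithmically:
V_{oc}(X) \approx V_{oc}(1\,\mathrm{sun}) + \frac{n k T}{q}\,\ln X
% Efficiency therefore rises with X until series-resistance losses
% (\propto I^2 R_s, hence \propto X^2) take over, which is why contact
% and interconnection design dominate concentrator-cell optimization.
```

This trade-off explains why both routes in the thesis focus on contact geometry: the three-dimensional structures attack precisely the series-resistance term that limits conventional front-side contacted cells at high concentration.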
APA, Harvard, Vancouver, ISO, and other styles
18

Khatib, Moustafa. "THz Radiation Detection Based on CMOS Technology." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/368305.

Full text
Abstract:
The Terahertz (THz) band of the electromagnetic spectrum, also referred to as sub-millimeter waves, covers the frequency range from 300 GHz to 10 THz. Radiation in this frequency range has several unique characteristics, such as its non-ionizing nature: the associated photon energy is low, so it is considered a safe technology in many applications. THz waves are capable of penetrating several materials, such as plastics, paper, and wood. Moreover, they provide a higher resolution than conventional mmWave technologies thanks to their shorter wavelengths. The most promising applications of THz technology are medical imaging, security/surveillance imaging, quality control, non-destructive materials testing and spectroscopy. The potential advantages in these fields provide the motivation to develop room-temperature THz detectors. In terms of low cost, high volume, and high integration capability, standard CMOS technology has been considered an excellent platform for fully integrated THz imaging systems. In this Ph.D. thesis, we report on the design and development of field effect transistor (FET) THz direct detectors operating at low THz frequencies (e.g. 300 GHz), as well as at higher THz frequencies (e.g. 800 GHz – 1 THz). In addition, we investigated the implementation issues that limit the power-coupling efficiency with the integrated antenna, as well as the antenna-detector impedance-matching condition. The implemented antenna-coupled FET detector structures aim to improve the detection behavior in terms of responsivity and noise-equivalent power (NEP) for CMOS-based imaging applications. 
Since the THz signals detected with this approach are extremely weak and of limited bandwidth, the next section of this work presents a pixel-level readout chain containing a cascade of a pre-amplification and noise-reduction stage, based on a parametric chopper amplifier, and a direct analog-to-digital conversion by means of an incremental Sigma-Delta converter. The readout circuit is designed to perform a lock-in operation with modulated sources. The in-pixel readout chain provides simultaneous signal integration and noise filtering for multi-pixel FET detector arrays, achieving a sensitivity similar to that of an external lock-in amplifier. Next, based on the experimental THz characterization and measurement results of a single pixel (antenna-coupled FET detector + readout circuit), the design and implementation of a multispectral imager containing a 10 x 10 THz focal plane array (FPA) as well as 50 x 50 visible (3T-APS) pixels is presented. Moreover, the readout circuit for the visible pixels is realized as a column-level correlated double sampler. All of the designed chips have been implemented and fabricated in a 0.15-µm standard CMOS technology. The physical implementation, fabrication and electrical-testing preparation are discussed.
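The incremental conversion principle mentioned above can be illustrated with a minimal behavioural model. This is a first-order, single-bit sketch of the general technique, not the circuit implemented in the thesis:

```python
def incremental_sigma_delta(x, n_cycles):
    """Behavioural model of a first-order incremental sigma-delta ADC:
    the integrator starts from reset, the loop runs for a fixed number of
    cycles, and the average of the 1-bit feedback stream estimates x in [-1, 1]."""
    integrator = 0.0
    feedback = 0.0
    acc = 0.0
    for _ in range(n_cycles):
        integrator += x - feedback                       # integrate the residue
        feedback = 1.0 if integrator >= 0.0 else -1.0    # 1-bit quantizer
        acc += feedback
    return acc / n_cycles                                # digital estimate of x
```

Because the integrator is reset between conversions, the estimate converges to x with error on the order of 1/n_cycles, and the averaging inherently filters noise, which is the lock-in-like behaviour the abstract refers to.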
APA, Harvard, Vancouver, ISO, and other styles
19

ABOU, KHALIL ALI. "Event Driven Tactile Sensors for Artificial Devices." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1001986.

Full text
Abstract:
Present-day robots are, to some extent, able to deal with the high complexity and variability of the real-world environment. Their cognitive capabilities can be further enhanced if they physically interact with and explore real-world objects. For this, the need for efficient tactile sensors is growing day after day, to the point that they are becoming more and more part of daily-life devices, especially in robotic applications for manipulation and safe interaction with the environment. In this thesis, we highlight the importance of touch sensing in humans and robots. Inspired by biological systems, in the first part we merge neuromorphic engineering and CMOS technology, where the former is a field of science that replicates at the circuit level what happens biologically inside humans (the neurons of the nervous system). We explain the operation of, and then characterize, different sensor circuits through simulation and experiment, finally proposing new prototypes based on the achieved results. In the second part, we present a machine learning technique for detecting the direction and orientation of a tip sliding over a complete skin patch of the iCub robot. Through learning and online testing, the algorithm classifies different trajectories across the skin patch. We show the results of the considered algorithm, with a future perspective of extending the work.
APA, Harvard, Vancouver, ISO, and other styles
20

ZORZI, ALVISE. "A new, water cooled, 250kW, modular Matrix converter with hybrid modulation and intelligent gate drivers." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1005334.

Full text
Abstract:
Matrix converters are direct AC/AC converters that connect each input phase to each output phase through an array of controlled semiconductors, and are inherently capable of bidirectional power flow. The main advantage of Matrix Converters is the absence of bulky reactive elements, which are subject to aging and reduce system reliability. In addition, Matrix Converters can work at high efficiency levels, which can be further enhanced by adopting a new PWM-based modulation technique that reduces switching losses. These characteristics, combined with the fully custom design of the hardware components, make it possible to obtain a converter characterized by an excellent power density.
APA, Harvard, Vancouver, ISO, and other styles
21

MIGLIANTI, LEONARDO PIETRO. "Modelling of the cavitating propeller noise by means of semi-empirical and data driven approaches." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1004161.

Full text
Abstract:
Historically, the mitigation of ship noise radiated into the water was a prerogative of naval ships, due to quietness requirements. In the last decades, the need for merchant ships and pleasure craft to ensure high standards of comfort on board, in terms of on-board radiated noise and structural vibrations, has also led, indirectly, to the reduction of underwater radiated noise. Nowadays, greater awareness of the damage done to the marine ecosystem by ship noise pollution is leading governments and international institutions to study possible limits on acoustic emissions, which could be applied, at different levels, to protected marine areas and to more general navigation routes. The propeller, when cavitating, is the main source of radiated noise for conventional ships, together with the engines; propeller cavitation, contrary to machinery, is not linked to single frequencies, being a broadband noise. Its reduction is thus becoming one of the objectives in new propeller designs. One of the most effective and common ways to assess propeller cavitation noise is through experimental tests at model scale. This procedure is rather expensive and time consuming, and it is therefore difficult to include in an iterative design loop. The aim of the present PhD thesis is the development of semi-empirical methods for the prediction of propeller cavitating noise, in order to provide the designer with a tool capable of predicting underwater radiated noise at early design stages. Moreover, the same method can be applied to enhance the prediction of underwater radiated noise from model-scale tests, providing indications also for operating conditions not directly reproducible due to scaling effects. Attention has been devoted to the most common cavitation phenomena, i.e. back sheet cavitation and tip vortex. 
The considered methods are derived from physical formulations available in the literature and from purely data-driven models coming from the machine learning field, also exploiting the advantages of their combination in hybrid models. In order to build and test the noise models, a dataset of propeller cavitating noise has been collected and processed, including relevant information on the input characteristics (i.e. propeller geometry, working point, ship wake description) and the corresponding radiated noise. The experimental campaigns were performed at the cavitation tunnel of the University of Genoa, considering three controllable-pitch propellers in twin-screw configuration. The dataset has been exploited to build different models of increasing complexity to predict the radiated noise spectrum. The proposed methodologies yielded encouraging results, providing a valuable basis for further investigations and developments of this approach.
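A data-driven noise model of the kind described can be sketched as a least-squares regression from working-point features to a noise level. The feature set (cavitation index and thrust coefficient) and the synthetic data below are purely illustrative assumptions, not the models or measurements of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: cavitation index sigma and thrust coefficient KT.
sigma = rng.uniform(0.5, 4.0, 200)
kt = rng.uniform(0.1, 0.4, 200)

# Synthetic "measured" broadband level (dB): a hidden linear law plus noise.
spl = 160.0 - 12.0 * np.log10(sigma) + 40.0 * kt + rng.normal(0.0, 0.5, 200)

# Least-squares fit of the semi-empirical form SPL = a + b*log10(sigma) + c*KT.
X = np.column_stack([np.ones_like(sigma), np.log10(sigma), kt])
coef, *_ = np.linalg.lstsq(X, spl, rcond=None)
a, b, c = coef  # fitted coefficients, recovering the hidden law
```

Real models of this kind would use the richer inputs the abstract lists (propeller geometry, working point, wake description) and predict a full spectrum rather than a single level, but the fitting principle is the same.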
APA, Harvard, Vancouver, ISO, and other styles
22

Povoli, Marco. "Development of enhanced double-sided 3D radiation sensors for pixel detector upgrades at HL-LHC." Doctoral thesis, Università degli studi di Trento, 2013. https://hdl.handle.net/11572/368454.

Full text
Abstract:
The upgrades of the High Energy Physics (HEP) experiments at the Large Hadron Collider (LHC) will call for new radiation-hard technologies to be applied in the next generations of tracking devices, which will be required to withstand extremely high radiation doses. In this sense, one of the most promising approaches to silicon detectors is the so-called 3D technology. This technology features columnar electrodes penetrating vertically into the silicon bulk, thus decoupling the active volume from the inter-electrode distance. 3D detectors were first proposed by S. Parker and collaborators in the mid '90s as a new sensor geometry intended to mitigate the effects of radiation damage in silicon. Despite their more complex and expensive fabrication, 3D sensors are currently attracting growing interest in the field of High Energy Physics because of their much lower operating voltages and enhanced radiation hardness. 3D technology was also investigated in other laboratories, with the intent of reducing fabrication complexity and aiming at medium-volume sensor production in view of the first upgrades of the LHC experiments. This work describes all the efforts in design, fabrication and characterization of the 3D detectors produced at FBK for the ATLAS Insertable B-Layer, in the framework of the ATLAS 3D sensor collaboration. In addition, the design and preliminary characterization of a new batch of 3D sensors is also described, together with new applications of 3D technology.
APA, Harvard, Vancouver, ISO, and other styles
23

Tessarolo, Enrico. "Componenti ottici negli ambienti spaziali: fabbricazione, caratterizzazione e degrado delle prestazioni." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3422320.

Full text
Abstract:
The objectives of the next-generation space missions drive the development of increasingly innovative instrumentation, able to visit unexplored areas of the solar system and to operate in increasingly harsh environments. All the instrumental sub-systems, including optical components, will have to withstand extreme operational environments, characterized by high thermal gradients, electromagnetic irradiation, particulates, and irradiation by high- and low-energy particles and ions. In particular, it was recently demonstrated that low-energy particles can compromise the performance of optical coatings, leading to the potential failure of a space instrument. Accelerators are commonly used to reproduce particle irradiation on components. By selecting ion energies and fluences equivalent to the mission lifetime, it is in principle possible to replicate the effects induced by the space environment, although accelerator flux rates are orders of magnitude higher than in real space conditions. For this reason, it is hard to be sure that ground tests are truly representative of what occurs in space, making the definition of the experimental irradiation parameters, such as the ion flux, a crucial issue. Moreover, knowledge of the environmental agents of a specific space environment is still limited, and only in a few cases have in-situ measurements been performed. To overcome this gap, scaling models based on data collected at 1 A.U. can be used to estimate the environmental parameters of an unexplored space location. The impact of the space environment on optical components is the main topic of this research project. In particular, a systematic investigation devoted to defining clear and reliable testing procedures for optical coatings was performed. 
These systematic studies started with Monte Carlo simulations to evaluate the penetration depth of a specific ion species or particle within an optical element, determining the energy range in which the ions interact strongly with the coating. After that, in order to perform a systematic investigation of the damage mechanisms that a coating can undergo, many implantation experiments were carried out using accelerator facilities. We observed that, depending on the coating structure and the ion energy, different kinds of damage can occur, such as changes in the optical properties of the materials, inter-diffusion at the interfaces due to elastic scattering interactions between the incident ions and the target atoms, delamination of some layers, or full detachment of the coating or part of it.
The work carried out in this research project concerns the development and analysis of experimental techniques aimed at validating a reliable method for the space qualification of optical components. In particular, the effect of irradiation with accelerated ions (mainly protons and He ions) on optical coatings was studied, considering energies similar to those present in the solar wind. The main purpose was to experimentally study the impact of the solar wind on the optical performance of components similar to those that may be used in space missions such as Solar Orbiter or BepiColombo. The study of the many factors that influence and modify the optical performance was carried out considering various types of optical coatings, from single layers to multilayers. To this end, several irradiation sessions were performed, in each of which the main key parameters characterizing the irradiation were varied, such as the dose, the ion flux, the ion type and its energy. Each of these factors turned out to influence the optical performance of the irradiated devices. In particular, it was observed that some of these factors produce comparable damage on certain types of structure, while others did not show the same behaviour. In addition to the analyses of the optical performance of the irradiated coatings, numerous structural investigations were carried out, in order to clarify which microstructural modifications may be responsible for the change in optical performance. Finally, a first model capable of predicting the reflectivity change produced on single-layer coatings by ion irradiation was developed. 
The fabrication of all the optical coatings studied, their optical and structural characterization before and after irradiation, and all the simulations for the preparation of the irradiation experiments are part of the work carried out during these three years of doctoral studies. In particular, the thin-film deposition and characterization activities were performed using the instrumentation available in the CNR-IFN laboratories in Padua, in the Physics and Industrial Engineering departments of the University of Padua, and at the Elettra synchrotron in Trieste. The irradiation sessions were performed at the Ion Beam Center (IBC) of the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) laboratories in Germany. In particular, in the last year I personally followed a month-long irradiation session, performing the in-situ optical and structural characterizations.
APA, Harvard, Vancouver, ISO, and other styles
24

Petucco, Andrea. "Hardware in the loop, all-electronic wind turbine emulator for grid compliance testing." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3422321.

Full text
Abstract:
During the last years, the penetration of renewable energy sources has been continuously increasing, and their influence on the distribution grid is becoming more relevant every year. As the increasing integration of renewable resources is radically changing the grid scenario, grid-code technical requirements are needed to ensure correct grid behavior. To be standard compliant, wind turbines need to undergo certification tests, which usually must be performed in the field. One of the most difficult tests to perform in the field is the low voltage ride through (LVRT) certification, for the following reasons:
• The standards specify that it must be performed at different power levels; it is therefore necessary to wait for the right atmospheric conditions.
• It requires a voltage sag generator, which is usually expensive and bulky.
• The voltage sag generator needs to be cabled between the grid and the wind turbine.
• The voltage sag generator causes disturbances and perturbations on the power grid, so agreements with the distribution operator are needed.
For all these reasons, a laboratory test bench to perform LVRT certification tests on wind turbines would be a more controlled and inexpensive alternative to the classic testing methodology. The research presented in this thesis focuses on the design and realization of a test bench to perform certification tests on energy converters for wind turbines in the laboratory. More specifically, the possibility of performing LVRT certification tests directly in the laboratory under controlled conditions would allow faster testing procedures and lower overall certification costs. The solution presented in this thesis is based on power hardware in the loop, implementing a digitally-controlled, power-electronics-based emulation of a wind turbine. This emulator is used to drive the electronic wind energy converter (WEC) under test. 
A grid emulator is used to apply voltage sags to the wind turbine converter and perform LVRT certification tests. In this solution, AC power supplies are used to emulate both the wind turbine and the grid; the test bench power rating is therefore limited to that of the AC supplies. Two working versions of the test bench have been realized and successfully tested. The work presented here evolved through the following phases:
• Study of the grid-code requirements and the state of the art.
• Modeling of the parts of a wind turbine and complete system simulations.
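LVRT requirements are usually specified as a voltage-versus-time envelope that the turbine must ride through, which is exactly what the grid emulator has to reproduce. A minimal sketch of such a profile generator follows; the sag depth and timing values are illustrative placeholders, not those of any specific grid code:

```python
def lvrt_voltage_pu(t, v_sag=0.15, t_fault=0.15, t_recovery=1.5):
    """Per-unit grid voltage during an LVRT test: nominal before the fault,
    a deep sag held until t_fault seconds, then a linear ramp back to nominal
    completed at t_recovery. All numeric values are illustrative."""
    if t < 0.0:
        return 1.0                               # pre-fault: nominal voltage
    if t <= t_fault:
        return v_sag                             # fault applied: deep sag
    if t <= t_recovery:
        frac = (t - t_fault) / (t_recovery - t_fault)
        return v_sag + (1.0 - v_sag) * frac      # linear voltage recovery
    return 1.0                                   # post-recovery: nominal again

# Sampling the envelope, e.g. as a reference waveform for a grid emulator:
profile = [lvrt_voltage_pu(0.001 * k) for k in range(2000)]
```

In a power-hardware-in-the-loop bench like the one described, an envelope of this kind would be scaled to the AC supply's voltage rating and fed to the grid-emulator controller as its amplitude reference.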
APA, Harvard, Vancouver, ISO, and other styles
25

Maule, Luca. "Eye controlled semi-Robotic Wheelchair for quadriplegic users embedding Mixed Reality tools." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/368247.

Full text
Abstract:
Mobile assistive robotics can play a key role in improving the autonomy and lifestyle of patients. In this context, the RoboEye project aims to support people affected by mobility problems ranging from severely impairing pathologies (like ALS, amyotrophic lateral sclerosis) to old age. Any severe motor disability is a condition that limits the capability of interacting with the environment, even the domestic one, caused by the loss of control over one's own mobility. Although these pathologies are relatively rare, the number of people affected by them is increasing over the years. The focus of this project is restoring a person's mobility using novel gaze-based technologies on a power wheelchair, designed to enable the user to move easily and autonomously inside the home. A novel and intuitive control system was designed to achieve this goal, in which a non-invasive eye tracker, a monitor, and a 3D camera represent some of the core elements. The developed prototype integrates, on a standard power wheelchair, functionalities from the mobile robotics field, with the main benefit of providing the user with two driving options and comfortable navigation. The most intuitive and direct modality foresees the continuous control of the frontal and angular velocities of the wheelchair by gazing at different areas of the monitor. The second, semi-autonomous modality enables navigation toward a selected point in the environment by just pointing at and activating the desired destination, while the system autonomously plans and follows the trajectory that brings the wheelchair there. The main goal is the development of shared control, combining direct control by the user with the comfort of autonomous navigation based on augmented reality markers. A first evaluation has been performed on a real test bed where specific motion metrics were evaluated. 
The design of the control structure and driving interfaces was tuned thanks to tests with volunteers who are habitual users of standard power wheelchairs. The driving modalities, especially the semi-autonomous one, were modelled and qualified to verify their efficiency, reliability, and safety for domestic usage.
APA, Harvard, Vancouver, ISO, and other styles
26

ORTOLANI, FABRIZIO. "Experimental investigation on propeller and blade loads during off-design conditions." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048572.

Full text
Abstract:
The present thesis takes a step forward towards the understanding and quantification of the complex hydrodynamic mechanisms that characterise propeller performance in real operating scenarios: a novel set-up, based on measuring the three forces and moments exerted by a single blade of the propulsor, was developed, installed and successfully tested on a twin-screw scaled model at the CNR-INM manoeuvring basin and towing tank. First, the new setup is presented, reporting the results of the extensive experimental activity performed on the selected test case, dedicated to quantifying, for the first time by means of free-running model tests, the magnitude of the single-blade forces and integrated propeller loads in different operating conditions. Although free-running model tests still represent the primary approach for a reliable performance assessment, they require facilities and devices that are not commonly affordable; alternatively, a rectilinear towing tank can be used for manoeuvring investigations through static or dynamic tests and can be a valid alternative for investigating propeller performance in off-design conditions. On this basis, another experimental campaign was performed at the CNR-INM towing tank. In the experiments, the drift angle and the advance speed of the model were varied systematically, to focus on the relation between propeller operating conditions and loads. Moreover, the averaged and periodic blade and propeller loads are compared, in terms of the equivalent drift angle, with the measurements obtained from free-running model tests, in order to demonstrate the feasibility of simulating the stabilised phase of a turning circle with pure oblique-flow tests, providing a preliminary quantification of the off-design loads developed by the propeller.
The research ends with a comparison of the single-blade loads arising during the transient motions of turning manoeuvres at weak and tight rudder angles, performed at the same reference speed as the captive model tests. This section analyses the possibilities and limitations of characterising propeller performance under time-varying inflow by means of quasi-steady towing conditions. The availability of cycle-resolved blade loads further highlighted their fluctuating nature with respect to the averaged ones, and opens new paths for the investigation of the hydrodynamic phenomena that characterise propeller performance and blade(propeller)/wake interaction behind the hull, in both design and off-design conditions.
APA, Harvard, Vancouver, ISO, and other styles
27

Ischia, Giulia. "Sustainable conversion of biomass wastes via hydrothermal processes: fundamentals and technology." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/340014.

Full text
Abstract:
In a worldwide context where the community must make giant leaps forward to contain the catastrophic consequences of climate change, we need to face the contentious question "How do we power our economies?" with green and circular solutions instead of hiding behind the hypocrisy of fossil fuels. Biomass, renewable, abundant, and cheap, can trigger a shift towards a zero-carbon-emission economy, in which it substitutes fossil fuels in the production of energy and materials. Among the strategies to valorize biomass, hydrothermal processes are green pathways for producing biofuels and bio-based materials. However, research has yet to fill several gaps to make these processes ready for industrial scaling and spreading. Therefore, throughout this Ph.D. thesis, we provide new insights into hydrothermal processes, touching several scientific areas: from in-depth research on the thermochemical fundamentals to the engineering of new sustainability and biorefinery concepts. Through fundamental research, we try to answer "What's happening during hydrothermal processes?", facing the enormous complexity of the process by investigating chemical pathways, kinetics, and thermodynamics. On the sustainability side, we explored the coupling of hydrothermal conversion with concentrated solar energy to develop a zero-energy process, and the integration of hydrothermal carbonization with subsequent treatments to valorize by-products.
APA, Harvard, Vancouver, ISO, and other styles
28

RIGOLLI, NICOLA. "Olfactory navigation: how to make decisions using a sparse signal." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1082240.

Full text
Abstract:
All living organisms are surrounded by fluids, either air or water, which create unique sensory landscapes. For example, chemical signals disperse in the flow by diffusion and advection and, when the flow is turbulent, odor breaks up into filaments and discrete patches of varying intensity. In my thesis I focused on olfactory navigation in turbulent environments and aimed at understanding how organisms overcome uncertainties to make decisions. I developed three-dimensional direct numerical simulations of a turbulent channel flow to recreate a realistic environment for olfactory searches. I realized these state-of-the-art simulations by customizing an open-source solver called Nek5000, which solves the Navier-Stokes equations for the velocity field together with the advection-diffusion equation that governs the evolution of the odor (a passive scalar) in the fluid. After generating large fluid-dynamics datasets of odorant evolution in a channel, I analyzed which features of the olfactory signal are most relevant for locating the odor source. Surprisingly, not only the signal but also its absence can be informative for inferring the distance from the odor source. Using supervised learning algorithms I showed that odor concentration intensity is an informative measure, but that when it is coupled with the temporal dynamics of the signal it allows robust predictions in different conditions and at different ranges from the source. These theoretical results suggest that it is computationally advantageous to measure both odor intensity and timing. I analyzed a set of neural recordings from awake mice, demonstrating that they are indeed able to store both quantities, and that the neural representation depends on the underlying flow. I then considered the problem of navigating to the source of a turbulent odor. Although animals (for example moths and crustaceans) robustly perform this task, the algorithms they use are not understood.
I modeled olfactory navigation within the framework of Partially Observable Markov Decision Processes (POMDPs) and proposed a normative theory to explain the alternation between sniffing in the air and sniffing the ground that is typical of mammals like rodents and dogs. Alternation stems from the physics of fluids, which prescribes that odor near the ground is more continuous than up in the air, but remains relatively close to the source. In contrast, at nose level the odor is transported quickly away from the source, but is noisier and more intermittent. An agent searching for the odor source should thus sniff in the air when it is far from the source, to increase its chances of detecting the odor. Once the agent localizes the odor plume, it should continue the search sniffing the ground, where the trail is less intermittent. The exact timing of the alternation follows from the marginal value theorem. Finally, the commonly observed behavior of searchers proceeding in casts and surges emerges from this computational framework, and alternation naturally complements these dynamics to ensure optimal exploration.
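The air/ground trade-off described in this abstract can be caricatured in a few lines of code. The numbers below are entirely invented for illustration and are not taken from the thesis, which derives the behaviour from fluid-dynamics simulations and a POMDP model:

```python
# Toy illustration of the sniffing trade-off: at nose level the odor is
# carried far from the source but is patchy; near the ground the trail
# is reliable but stays close to the source. All values are hypothetical.

def detection_prob(distance, mode):
    """Probability of detecting odor at a given distance (arbitrary units)."""
    if mode == "air":
        return 0.2 if distance <= 10 else 0.0   # far-reaching but intermittent
    else:  # "ground"
        return 0.8 if distance <= 3 else 0.0    # continuous but short-ranged

def best_mode(distance):
    """Pick the sniffing mode that maximizes the detection probability."""
    return max(("air", "ground"), key=lambda m: detection_prob(distance, m))

print(best_mode(8))   # far from the source -> "air"
print(best_mode(2))   # plume localized     -> "ground"
```

Even this crude model reproduces the alternation: sniff the air while far away, switch to the ground once the plume is found.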
APA, Harvard, Vancouver, ISO, and other styles
29

PIAGGIO, BENEDETTO. "AZIMUTH-DRIVE ESCORT TUG MANOEUVRABILITY MODEL, SIMULATION AND CONTROL." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/995222.

Full text
Abstract:
The ability to predict an Escort tug's handling, effectiveness, and safety at the early design stage is paramount for an optimal design process. In this framework, the availability of a reliable manoeuvrability prediction model is beneficial. A deep insight into the manoeuvring characterisation of a wide class of Azimuthal Stern Drive Escort tugs is undertaken, giving rise to a dedicated novel four-degrees-of-freedom (4-DOF) parametric manoeuvrability model. An extensive captive model testing campaign is exploited to develop suitable mathematical models, conceived following an 'MMG-inspired' modelling concept, i.e. a non-linear manoeuvring prediction method developed by the Japanese Manoeuvring Modelling Group (Ogawa et al., 1977) and later standardised by the Japan Society of Naval Architects and Ocean Engineers, JASNAOE (2013). The mathematical formulation pursues a physics-based approach aimed at characterising the complete manoeuvring hydrodynamics of a category of vessels, based on a reference tug geometry. The hull+skeg and azimuthal force contributions are analysed separately and are then coupled to include their reciprocal interaction. In parallel, computational fluid dynamics techniques (RANSE) are cross-validated and further explored to extend the parent-hull modelling as a function of a series of skeg geometries. The aim of the investigation is to physically characterise and quantify, through suitable models, the influence of the different skeg designs and sizes on manoeuvring, with the scope of covering the largest possible class of Azimuthal Stern Drive Escort tugs. To prove the adequacy of the mathematical formulations, two independent validation processes were pursued. The first, in model scale, reproduces the Escort-towing tests performed in the towing tank basin on the parent hull.
The second, in full scale, is devised to check the simulator's capability of describing the free-sailing performance of a different but compatible hull, whose dimensions, propulsion and skeg characteristics differ significantly from the 'parent design' used to develop the code. In conclusion, a wider 'Simulation-for-Design' strategy unfolds, enriched by the combination of an original parametric architecture of the Azimuthal Stern Drive Escort tug with a fully controllable and scalable tanker and a tunable tow-line. Full real-time Escort-towing dynamics of the convoy are envisaged, enabling the study and investigation of several real-case emergency scenarios and paving the way for future design strategies. The ability to address real-world operative profiles further widens the advantages of a 'parametric model', which promises to become a very useful tool for tug designers, tug masters/pilots, port authorities, and flag administrations. Among its uses are the direct assessment of the impact of design choices on operational effectiveness and safety; towing-service risk assessment-to-mitigation techniques; real-scenario simulation with a focus on technical failures, the human factor and the underlying delay chain; and, last but not least, a 'model-based' benchmarking environment for control design techniques.
APA, Harvard, Vancouver, ISO, and other styles
30

Carrozzo, Anna Eleonora. "Composite Indicators and Ranking Methods for Customer Satisfaction surveys." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424801.

Full text
Abstract:
Scientific literature on statistical methods for quality evaluation within the university has undergone recent developments, particularly in relation to methods for analysing the relative effectiveness of university activities, based on comparisons between various providers/units. In relative-effectiveness evaluation, an extremely delicate phase is the one in which the variety of indicators considered informative of the various aspects of effectiveness requires a synthesis that permits the definition of rankings of the compared units and provides a summary measure of their differential performance. The application context suggests a methodological path whose principal end objectives are the classification or ordering of a set of compared units with respect to a complex multidimensional phenomenon, and the synthesis of a variety of indicators. A methodological solution in the nonparametric field is represented by the nonparametric combination of dependent tests and dependent rankings (NPC ranking; Pesarin & Salmaso, 2010), which allows the combination of rankings derived from orderings of statistical units against appropriate indicators, without the need to specify the dependence structure underlying the considered indicators, which can be calculated, for example, on the same statistical units. This methodology represents an important improvement over the usual synthesis of performance indicators by the simple arithmetic mean.
The research contribution consists mainly in extending the nonparametric methodological solutions above, such as the nonparametric combination of dependent tests and NPC ranking, so that they can be used to evaluate satisfaction with products or services. This research activity thus has a twofold purpose: on the one hand, it suggests innovative methodological tools for problems of multivariate ranking and combination of indicators; on the other, it solves practical problems. Indeed, the methodological solutions proposed in this work were applied in the evaluation of university teaching, by analysing data from student satisfaction surveys, and in customer satisfaction with the services provided by the ski schools of Alto Adige. Other kinds of applications in the industrial field, in the development of new products and in product life-cycle assessment, are also discussed.
The scientific literature on statistical methods for quality evaluation has seen several developments, particularly with reference to methods for analysing relative effectiveness (Bird et al., 2005). Within relative-effectiveness evaluation, a particularly delicate phase is the one in which the variety of indicators regarded as informative about the different aspects of effectiveness requires a synthesis that allows rankings of the compared units to be drawn up and provides a summary measure of their performance. The application context suggests a methodological path whose main final objectives are: (i) the classification or ordering of a set of compared units with respect to a complex multidimensional phenomenon, and (ii) the synthesis of a variety of indicators. The methodology considered to solve these problems is based on the combination of dependent tests and dependent rankings (NPC test and NPC ranking; Pesarin & Salmaso, 2010). This methodology has the considerable advantage of not requiring the specification of the dependence structure underlying the indicators or tests considered, which can be computed, for example, on the same statistical units. It represents an important advance over the usual method of synthesising indicators, the simple arithmetic mean. The contribution of this research activity consists mainly in extending these nonparametric methodological solutions so that they can be used to evaluate satisfaction with products or services. This research therefore has a twofold purpose: on the one hand, to propose innovative methodological tools for problems of multivariate ranking and combination of indicators; on the other, to solve practical applied problems.
The proposed methodological solutions were indeed applied to the evaluation of university teaching, by analysing university students' satisfaction questionnaires, and to customer satisfaction with the services provided by the ski schools of Alto Adige. Other types of application in the industrial field, in new product development and in the definition of product life cycles, were also discussed.
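The core idea of combining several indicator rankings into a single ordering, which this abstract contrasts with the simple arithmetic mean, can be sketched in a few lines. This is only a toy stand-in for the NPC ranking method (Pesarin & Salmaso, 2010), which uses a permutation-based combination of dependent rankings; the indicator names and scores below are hypothetical:

```python
# Toy rank combination: rank units within each indicator, then order
# units by their average rank. NOT the NPC ranking algorithm itself,
# just an illustration of the "many indicators -> one ranking" problem.

def combine_rankings(indicators):
    """indicators: dict of indicator name -> {unit: score} (higher = better).
    Returns the units ordered best-first by average within-indicator rank."""
    units = list(next(iter(indicators.values())))
    avg_rank = {}
    for u in units:
        ranks = []
        for scores in indicators.values():
            ordered = sorted(scores, key=scores.get, reverse=True)
            ranks.append(ordered.index(u) + 1)  # 1 = best on this indicator
        avg_rank[u] = sum(ranks) / len(ranks)
    return sorted(units, key=avg_rank.get)

satisfaction = {
    "teaching":  {"unit A": 8.1, "unit B": 7.4, "unit C": 6.9},
    "services":  {"unit A": 6.5, "unit B": 7.8, "unit C": 6.0},
    "logistics": {"unit A": 7.0, "unit B": 7.2, "unit C": 6.1},
}
print(combine_rankings(satisfaction))  # ['unit B', 'unit A', 'unit C']
```

The point of the NPC approach is precisely that, unlike this toy averaging, it does not require the dependence structure among the indicators to be specified.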
APA, Harvard, Vancouver, ISO, and other styles
31

Papale, Davide. "High performance waterjets: study of an innovative scoop inlet and development of a novel method to design ducted propellers." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424758.

Full text
Abstract:
In recent decades, the use of waterjet systems for commercial high-speed applications has been increasing. These marine propulsors show remarkable qualities in terms of fuel consumption, noise, vibrations and manoeuvrability, but they have some disadvantages that make their use optimal only in a limited speed range and limit the overall propulsive efficiency. This document describes a way to modify a conventional waterjet with the aim of reducing these problems and increasing overall efficiency. Several problems are dealt with. Chapter 3 shows how replacing a conventional flush inlet with a new scoop inlet can be an efficient way to minimise total pressure losses and the non-uniform velocity distribution upstream of the pump, thereby limiting the influence of boundary-layer ingestion on the machine's performance. In Chapter 4 a novel method to study and design axial pumps is developed and explained. In Chapter 5 a rim-driven propeller is designed and tested numerically and experimentally, demonstrating the good predictive capabilities of the method.
The document summarises the doctoral project on high-performance waterjets carried out by the author between 2012 and 2014. Over the three years, two main research threads concerning these propulsors were pursued, focusing in particular on the study of the inlet and of the pumping system. A waterjet is a marine propulsor that produces thrust by accelerating a mass of water; during this process the water, originally free in the marine or river environment, passes through four different components: the inlet, the pumping system, the nozzle and the steering system. Each component has its own function, but in general maximising the efficiency of these components yields a general increase in overall performance. The work presented here focused on the study of the inlet and the pumping system; being innovative in character, the configurations and ideas presented here represent constructive or methodological alternatives substantially different from common industrial practice. The inlet study was guided by the comparison between a traditional commercial inlet (the so-called flush inlet) and a dynamic inlet of aeronautical derivation (scoop inlet). The study, besides perhaps being the only specific study of dynamic inlets in the literature, highlights the critical issues of the traditional inlet and shows an alternative to industrial practice. It analyses the performance of these two inlets in terms of total pressure losses and distortion factor, with and without the transmission shaft, through several CFD analyses. Interestingly, the dynamic inlet is of aeronautical derivation, "borrowed" from NASA studies on an inlet for an experimental aircraft characterised by significant boundary-layer thicknesses.
The study demonstrates, for the case under analysis, the superiority of the dynamic inlet over the traditional one in the terms of comparison described above, showing the need to address the study of waterjet inlets critically and in depth in the industrial field, questioning many dogmas taken for granted in industrial practice but never actually demonstrated in the scientific literature. The study of the pumping system was addressed in two phases, the first purely theoretical, the second experimental. The theoretical phase led to the definition of a new method for designing an axial pumping system. The method, subsequently implemented in a Matlab program and validated, is a general method resulting from the combination of several analytical methods already used in the literature but employed in a conceptually different way; although originally developed for a waterjet pump, it was conceived to have general validity and can be used to study any ducted axial pump. The method combines a BEM (Blade Element Momentum) approach with two analytical theories for computing the lift and drag coefficients of blade profiles (Weinig and Lieblein) and with the Euler equation for turbomachines. The resulting method is strongly iterative and allows the geometry of a ducted axial pump and its performance to be computed, even off the design point, without resorting to empirical factors of questionable reliability; it thus proves to be an innovative and flexible method for the complete study of a generic ducted propulsor. The method was implemented and tested both numerically and experimentally, thanks to the collaboration of the University of Southampton and the company TSL Technology, on a rim-driven electric propulsor.
The propulsor in question belongs to a class of newly conceived propulsors known as RDPs (Rim Driven Propellers), which, among their various characteristics, dispense with the shaft for transmitting the driving torque, with the consequent absence of the losses due to a shaft immersed in the water flow. The experimental realisation of this propulsor, besides appreciably improving its efficiency compared to those previously developed by the company involved, demonstrated the reliability of the analytical model developed.
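The Euler equation for turbomachines mentioned in this entry's summary relates the ideal head transferred to the fluid to the change in the tangential velocity component across the rotor. As a minimal worked example (the numbers are invented for illustration and are not from the thesis):

```python
# Euler turbomachinery equation: H = (u2*c_u2 - u1*c_u1) / g,
# where u is the blade speed and c_u the tangential component of the
# absolute fluid velocity at rotor inlet (1) and outlet (2).
# Values below are hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def euler_head(u1, cu1, u2, cu2):
    """Ideal head (m) transferred to the fluid by the rotor."""
    return (u2 * cu2 - u1 * cu1) / G

# For an axial pump the blade speed at a given radius is roughly the
# same at inlet and outlet (u1 == u2), so the head comes entirely from
# the swirl added to the flow.
H = euler_head(u1=15.0, cu1=0.0, u2=15.0, cu2=4.0)
print(round(H, 2))  # ideal head in metres
```

In a BEM-based design method like the one described, a relation of this kind closes the iteration between the required head and the flow turning that the blade sections must produce.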
APA, Harvard, Vancouver, ISO, and other styles
32

Jimenez, Tejeda Kety Mayelin. "Design, development, and characterization of thin-film filters for high brilliance sources in the EUV-soft x-ray spectral range." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3425436.

Full text
Abstract:
This thesis addresses research work on the design, fabrication and characterization of thin-film transmittance filters for high-brilliance radiation sources in the Extreme Ultraviolet (EUV) and soft X-ray spectral regions. Such development and fabrication are strongly required in many applications, for instance in third- and fourth-generation light sources and high-order harmonic generation (HHG) sources. Thin-film filters are used to remove multiple-order radiation; in addition to high transmittance and spectral purity, these filters must withstand the high peak power of this kind of source. In the EUV and soft X-ray spectral range, materials are highly absorbing, so finding proper materials, or a combination of materials, that fulfill the requirements is quite challenging. Through simulations of the transmittance performance of a variety of materials, based on theoretical values of the optical constants and taking into account their mechanical properties, Nb and Zr were chosen as core elements for the fabrication and study of free-standing filters. The first development part of this thesis focuses on the bottom-up fabrication of Nb, Zr, Nb/Zr, and Zr/Nb 100 nm thick thin films, deposited on silicon nitride membrane windows by the magnetron sputtering technique; Nb and Nb/Zr free-standing filters were produced after reactive ion etching of the substrate. These samples were characterized using Rutherford backscattering, AFM images and transmittance measurements in the EUV between 4 and 20 nm using synchrotron radiation at the BEAR beamline, ELETTRA, Italy, and at the Optical Beamline of the BESSY synchrotron, Berlin, Germany. The samples were also characterized in the same range using a laser-plasma source based on a gas-puff target as a secondary transmittance-characterization technique. The second part is devoted to the structural characterization of the filters using TEM, SEM, HRSTEM, and EDX analysis.
The third development part of this thesis focuses on the study of high-density EUV radiation damage on Nb, Zr, Zr/Nb, and Nb/Zr 100 nm thick free-standing filters. For this part of the experiment, the samples were deposited on silicon nitride windows using the e-beam deposition technique, which yielded more stable structures; free-standing filters of each type were obtained after reactive ion etching. The samples are to be characterized before and after radiation exposure using X-ray photoelectron spectroscopy (XPS), AFM images, X-ray diffraction, and transmittance characterization in the EUV spectral range. For the radiation exposure, a high-density set-up based on a pulsed plasma-discharge source at 13.5 nm wavelength was used. Transmission electron microscopy and scanning electron microscopy were also used as complementary techniques to study both the sample structures and the interface properties.
APA, Harvard, Vancouver, ISO, and other styles
33

Sartori, Emanuele. "Study, analysis, design and diagnostics of plasma and beam facing components of fusion devices." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423129.

Full text
Abstract:
Neutral Beam Injection is the workhorse of present and future fusion devices. Modeling plays a fundamental role in anticipating and predicting the performance of the injector and optimizing its behaviour. A successful design can only be achieved through an integrated approach between physics and engineering. In the last three years, my research activity has been carried out at Consorzio RFX, where the ultimate neutral beam test facility is being designed and constructed. This PhD thesis has sought to address issues critical to the design and operation of such injectors. As the dynamics of beam acceleration and transport are dominated by the density of the background gas, I developed a reliable code for simulating its distribution in high-vacuum systems. The density topic is strictly connected to the characteristics of the gas cell where the beam is neutralised before reaching the fusion plasma. Consequently, I developed the final design of the Neutraliser and performed its thermo-mechanical verifications, a critical task because of the high power load affecting this beam-facing component. I also considered other issues related to the study of beam components, e.g. the beam source in its operating thermal conditions. The development and application of methods and models, together with their experimental validation and benchmarking, are described from the perspective of practical use in neutral beam engineering.
Neutral beam injectors are and will remain the main heating system in fusion machines. Numerical modelling plays a fundamental role in predicting their performance and optimising their operation, and a good design, in both physics and engineering, can only be achieved through an integrated approach between the two. Over the past three years I carried out my research activity at Consorzio RFX, where the largest neutral beam injector in the world will be installed and tested, and whose design is under way. This doctoral thesis addressed aspects critical to the design and operation of this type of injector. Since the acceleration and transport dynamics of the particle beam are dominated by the presence of residual gas, I developed a robust and reliable code to simulate its distribution and flow in high-vacuum systems; this topic is closely linked to the characteristics and design of the gas cell where the beam must be neutralised before reaching the fusion plasma, of which I developed the final design and performed the thermo-mechanical verifications, with all the difficulties posed by components facing a high-energy particle beam. Other aspects concerned the study of beam components and diagnostics. The development and application of methods and models, and their experimental validation where applicable, are discussed; the results are assessed from the perspective of practical use in the engineering of neutral beam injectors.
APA, Harvard, Vancouver, ISO, and other styles
34

SCHUTZMANN, STEFANO. "Towards hybrid sol-gel devices for optoelectronic biosensors." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2006. http://hdl.handle.net/2108/202687.

Full text
Abstract:
Sensors for detecting polluting substances in water, soil and the atmosphere, as well as biosensor devices for identifying proteins and enzymes, represent a very interesting research field with strong industrial applications. In this context, a promising possibility is the development of sensors based on optoelectronics, since they ensure high sensitivity, good mechanical stability, and the possibility of miniaturising devices and producing them on a large scale. In particular, in recent years much effort has been directed towards the development of optical waveguide sensors. The operating principle of this type of device is based on the interaction between the evanescent-field component of a guided wave and the region to be analysed. The development of optoelectronic devices, on the other hand, requires the ability to synthesise materials with appropriate optical qualities. In particular, the possibility of suitably changing the refractive index is a fundamental aspect for the fabrication of real devices. Hybrid organic-inorganic materials synthesised by sol-gel technology represent a valid alternative to more traditional methods for fabricating integrated optical devices, such as the ion-exchange technique or CVD (chemical vapor deposition). This hybrid technique allows the development of materials with new characteristics in a simple and economical way. This thesis deals with the development, synthesis and characterisation of hybrid sol-gel waveguides for possible applications as optical waveguide sensors. The optical characterisation of the devices was carried out mainly using an experimental apparatus developed and optimised by the candidate during the first period of the doctoral work.
The setup is based on both the m-line technique and the Brewster-angle technique, and represents a simple, low-cost tool for analysing the refractive index of thin films. The results show that the apparatus allows the refractive index to be estimated at several wavelengths in the visible and near infrared for films with thicknesses from a few tens of nanometres to several microns. The error in determining the refractive index lies in the range ±0.001-0.003, depending on the wavelength and the properties of the sample. The excellent accuracy and reliability of our apparatus is also confirmed by comparison with results obtained from spectroscopic ellipsometry measurements. Much effort was devoted to the synthesis and characterisation of several sol-gel waveguides deposited on both glass and silicon substrates. The samples were characterised through refractive-index and optical-loss measurements using the scattered-light analysis technique. The results showed that the refractive index can be modulated between 1.45 and 1.90 simply by modifying the chemical synthesis and the post-deposition treatments. Propagation losses of the order of 3-10 dB/cm were measured on our samples, depending on the wavelength and on the polarisation of the selected mode. These values are fairly common for hybrid sol-gel planar waveguides. During this work, waveguides doped with luminescent molecules were also synthesised and characterised, in order to show the possibility of using our guiding structures as active devices. The opportunity of modulating the refractive index of hybrid films by exploiting the properties of photosensitive molecules exposed to ultraviolet light was also investigated.
Finally, the last period of the thesis was successfully devoted to investigating the possibility of using hybrid sol-gel waveguides as base structures for the development of fluorescence-based optical sensors. To this end, fluorescence measurements excited by an evanescent wave were carried out.
Environmental sensors for the detection of polluting substances in water, soil and the atmosphere, as well as biosensor devices for the recognition of proteins and enzymes, represent a very intriguing topic for both research and industrial applications. In this framework, a very promising alternative is the development of sensors based on optoelectronic technology, since they combine high sensitivity, mechanical stability, miniaturization and the possibility of mass production. In particular, extensive research has been devoted to evanescent-field optical waveguide sensors. The operating principle of this kind of device is based on the interaction between the evanescent field component of a guided optical wave and the region under observation. The development of optoelectronic devices requires the ability to design materials with suitable optical properties. In particular, the possibility of appropriately changing the refractive index is a fundamental step in the design and fabrication of real devices. Hybrid organic-inorganic materials synthesized by sol-gel technology are a valid alternative to more traditional fabrication methods for integrated optical devices, such as ion exchange or chemical vapor deposition. Hybrid materials combining organic and inorganic networks allow the design and fabrication of new materials with appropriate features in a simple and economical way. This thesis reports on the design, synthesis and characterization of hybrid sol-gel waveguides for possible applications as fluorescence-based optical sensors. Optical characterization was accomplished using a home-made experimental setup built and optimized by the candidate during the first period of the PhD fellowship. The setup is based on both the m-line and Brewster-angle methods and represents a completely non-destructive, low-cost and very simple tool for estimating the refractive index of thin films. 
Results show that the apparatus allows the estimation of the refractive index at different wavelengths in the visible and near-infrared spectral regions for films with thicknesses from a few tens of nanometers to several micrometers. The error in the refractive index determination was in the range ±0.001-0.003, depending on the wavelength and the sample features. Comparison with results obtained by ellipsometric measurements confirmed the high accuracy and reliability of the setup. Considerable effort was dedicated to the synthesis and characterization of different hybrid sol-gel waveguides grown on both silicon and glass substrates. Samples were characterized by refractive index determination and propagation loss measurements using the scattered-light detection technique. Results showed that the refractive index can be tuned quite easily from 1.45 to about 1.90 by acting on the chemical synthesis and on the post-deposition treatments. Propagation loss coefficients in the range 3-10 dB/cm were commonly obtained, depending on the wavelength, polarization and mode selected; these values are quite typical for planar organic-inorganic sol-gel waveguides. Waveguides doped with fluorescent molecules were synthesized and characterized, showing the possibility of using these structures as active optical devices. The modulation of the refractive index of hybrid films containing photosensitive molecules was investigated by exposing the films to different UV light doses. Moreover, first steps towards fabricating channel waveguides by photolithographic techniques were accomplished. Finally, the possibility of using hybrid sol-gel planar waveguides as building blocks for a fluorescence-based optical sensor was demonstrated by performing measurements of fluorescence excited by the evanescent field of a guided wave.
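As an illustration of the Brewster-angle half of the setup described above, the following minimal sketch estimates a film's refractive index from a measured Brewster angle and propagates the angular measurement error to the index. The function names and the example numbers are ours, not the thesis's; the actual apparatus combines this with m-line (prism-coupling) measurements.

```python
import math

def refractive_index_from_brewster(theta_b_deg, n_ambient=1.0):
    """Estimate a film refractive index from the measured Brewster angle.

    At the Brewster angle the reflected p-polarized intensity vanishes and
    tan(theta_B) = n_film / n_ambient (for incidence from air, n_ambient = 1).
    """
    return n_ambient * math.tan(math.radians(theta_b_deg))

def index_uncertainty(theta_b_deg, dtheta_deg, n_ambient=1.0):
    """Propagate the angular error to the index estimate:
    dn = n_ambient * sec^2(theta_B) * dtheta, with dtheta in radians."""
    t = math.radians(theta_b_deg)
    return n_ambient * (1.0 / math.cos(t) ** 2) * math.radians(dtheta_deg)
```

With an illustrative Brewster angle of 56.66° (a film index around 1.52) and an angular resolution of 0.02°, the propagated index error is on the order of ±0.001, consistent with the accuracy range quoted above.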
APA, Harvard, Vancouver, ISO, and other styles
35

Agrusta, Andrea Antonio. "OPTIMISATION'S TECHNIQUES OF HULL SHAPES USING CFD RANSE SIMULATIONS WITH LOW NUMBER OF CELLS." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/11116.

Full text
Abstract:
2013/2014
In recent years, numerical hydrodynamics (CFD) techniques have made it possible to simulate the interaction between fluids and solids on a computer. CFD software allows a realistic simulation of hydrodynamic phenomena, enabling the designer to analyse several solutions relatively quickly in order to choose the best hull, and then to evaluate various macro or micro modifications of the chosen hull in terms of resistance, trim angle, seakeeping and comfort. Recent years have also seen an increasing use of multi-objective optimisation algorithms coupled to parametric 3D modellers and to potential-flow CFD BEM solvers. Typically, these applications find the optimal shape that, while respecting the imposed constraints, generates the minimum wave resistance at one or more given speeds. Coupling the optimisation process to a viscous RANSE solver, instead, gives access to many additional physical quantities and therefore allows optimisation against more targets; in particular, the ability to assess the effect of friction makes it possible to optimise the hull shape so as to reduce the total resistance. Until recently, however, an optimisation process based on CFD RANSE simulations, though theoretically possible, was rarely used in the industrial sector: the large amount of computation required to evaluate hundreds of different candidate solutions made the process too long and expensive. The present work demonstrates that viscous RANSE simulations can be used effectively by minimising the number of computational cells, thereby reducing the time and cost of the simulations while preserving adequate accuracy. The aim of this work has therefore been to build a hull optimisation process based on the reduction of the total resistance, evaluated through CFD RANSE simulations performed on a computational domain with a low number of cells. 
This computational domain derives from the careful development of a standardized procedure that allows RANSE simulations to be run with a standard grid guaranteeing the accuracy of the result even though it is "COARSE". Besides providing an overview of the state of the art in the literature, this research presents a methodology for performing low-cell-count simulations in a standardized way: three types of standard mesh were developed, dividing the hulls to be studied into three families grouped by similarity of geometry and operating speed, and therefore by a similar wave pattern: round-bilge displacement hulls; round-bilge and hard-chine semi-planing hulls (single and multi-hull); and hard-chine planing hulls. The work then addressed the choice of the optimisation method, investigating the potential and limits of several known multi-objective optimisation methods, including "Sherpa", a robust combined and progressive algorithm that seeks the optimal solution while automatically reducing the number of cases to be simulated. The optimisation process was applied to an innovative semi-planing hard-chine hull with a blade bulbous bow: starting from a baseline hull satisfying all the project requirements, the hull was parameterized, the calculation set-up was established and, at the end of the optimisation, the hull geometry minimising the total resistance at two different speeds (cruise and maximum) was obtained while respecting the imposed constraints. Finally, towing tank tests on a scale model were carried out to validate the numerical results. 
The ability to run viscous simulations on "standardized" low-cell-count domains allows the comparative analysis of multiple design solutions at reduced time and cost, with the confidence that the results are realistic and reliable. The standardization also shortens the preparation of the set-up, allowing the operator to launch a simulation on a new hull in a few minutes, without laborious ad-hoc meshing and grid-independence checks. As explained, these standard grids also make it feasible to use CFD RANSE simulations for multi-objective optimisation aimed, for example, at reducing the total resistance; without such grids, refined optimisation based on viscous solvers would often be uneconomical. Indeed, the results of this work show a significant reduction in the computation time needed for a morphological hull optimisation based on resistance minimisation at two different speeds: in less than 700 hours of computation on a traditional 12-core server, or in about 80 hours on a 100-core computing cluster, it is possible to obtain results that support absolute assessments of the power the vessel needs to reach the target speeds. A procedure of this kind, on the one hand, makes it possible to work on the total resistance or on other physical quantities provided by the RANSE solver; on the other hand, thanks to its speed and simplicity of use, it brings CFD within reach of small-boat designers who, for reasons of time and budget, could not previously adopt such a refined hull-design technology. In the near future, the widespread use of optimisation techniques, or even simply of comparative analyses of hulls for large and small boats, could contribute significantly to reducing the engine power installed on board (e.g. by reducing the total resistance), allowing both fuel savings and, above all, a substantial reduction of harmful emissions into the atmosphere.
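The optimisation loop described above can be sketched in a few lines: parameterize the hull, score each candidate by a weighted total resistance at cruise and top speed, and keep the best. In this sketch the resistance function, the two hull parameters and the random search are all placeholders of ours: in the thesis each evaluation is a full RANSE run on the standard coarse grid, and the search is driven by the SHERPA hybrid algorithm rather than by random sampling.

```python
import random

def total_resistance(params, speed):
    """Hypothetical stand-in for a CFD RANSE evaluation: in the real
    workflow each call would mesh the parametric hull with the standard
    coarse grid and run the viscous solver at the given speed."""
    beam, lcb = params  # illustrative hull parameters (our assumption)
    return (beam - 4.2) ** 2 + (lcb - 0.52) ** 2 + 0.1 * speed

def objective(params, speeds=(12.0, 22.0), weights=(0.5, 0.5)):
    """Single scalar combining total resistance at cruise and maximum speed."""
    return sum(w * total_resistance(params, v) for w, v in zip(weights, speeds))

def optimise(n_iter=500, seed=0):
    """Plain random search standing in for the SHERPA combined algorithm;
    returns the best candidate found and its cost."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        cand = (rng.uniform(3.5, 5.0), rng.uniform(0.45, 0.60))  # design bounds
        cost = objective(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

The point of the thesis's standardized coarse grids is precisely to make each `total_resistance` call cheap enough that a loop of this shape becomes affordable with a viscous solver.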
XXVII Ciclo
1984
APA, Harvard, Vancouver, ISO, and other styles
36

BONGERMINO, COSTANTINO. "A STAMP–based Methodology Enabling the Human Factors Integration into the Design Process for Safer Ships." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1056702.

Full text
Abstract:
The increasing complexity of ships is evident every day: more and more tasks traditionally carried out by crew members are now managed and executed by on-board automation systems. This research aims to model the relation between human operators and the automation system in order to enhance overall ship safety. In particular, a methodology is selected and its suitability for this purpose assessed, in order to provide a tool for the ship design decision-making process. When addressing the safety of very complex systems, the cause-effect principle and the linear propagation of failures are no longer appropriate or exhaustive inference models, since safety must be addressed in its real essence, i.e. as an emergent property of the system. Complex systems are in fact intractable, in the sense that not all their behaviours can be easily predicted. This is due to the dichotomy between the so-called blunt-end and sharp-end domains, inherently more pronounced in complex systems, and to unknowns that are neither predictable nor evident during the design stage. In this perspective, an appropriate and innovative safety paradigm is necessary in order to take into consideration, among other aspects, the new role of human operators in complex systems. The intense presence of automation on board ships has radically changed the traditional allocation of tasks and the way they are performed. Even though many simple and repetitive tasks are increasingly assigned to automation, the complex tasks, and the higher responsibilities attached to them, very often remain assigned to human operators. Designers should be able to consider this relevant change of the human operators' role in the system from the preliminary design phase, investing their resources in the development of a human-centred design. Complex ship design needs to rely on a systemic and systematic approach. 
In this research, the System-Theoretic Accident Model and Processes (STAMP) has been selected and investigated as a methodology that can allow design teams to effectively integrate the so-called Human Factors into the ship design process. It has already been used, with successful results, in other complex technology fields such as aviation, defence and healthcare. It is a causality model based on systems theory, and it considers accidents as the result of an inadequate enforcement of safety constraints. The systemic and systematic approach is supported by the Safety Control Structure, a hierarchical system model in which the social and organizational layers can also be represented. The STAMP accident model offers four tools: one reactive and three proactive. The reactive one is CAST (Causal Analysis based on STAMP), while the proactive ones are STECA (Systems-Theoretic Early Concept Analysis), STPA (System-Theoretic Process Analysis) and STPA-Sec (System-Theoretic Process Analysis for Security). CAST and STPA applications were carried out in the maritime context to verify that the STAMP approach is applicable to ship design. CAST was applied to two ship accidents: the Herald of Free Enterprise and the Costa Concordia. It provides a framework for understanding the entire accident process and identifies systemic causal factors related to both the organizational and the technical system elements, spotting weaknesses in the existing safety control structure. In this perspective, the application of CAST to the above-mentioned accidents proved its effectiveness in the maritime field as well, in assessing the complex influence of human factors on the ship safety control structure. The output of a CAST analysis is a set of recommendations aimed at avoiding similar accidents in the future. The focus was then shifted to the proactive tool STPA. 
STPA consists of the following steps: identify system hazards; draw the functional control structure; identify unsafe control actions; identify accident scenarios; formulate decisions and recommendations. In this research, an application case was developed considering a large passenger ship and the specific hazard of the dead-ship condition (energy blackout): when navigating close to the shore or to another vessel, and/or in heavy weather, this situation can rapidly evolve into the loss of the ship. In order to better characterize the features and peculiarities of the human operator, an innovative human mental model (an improvement of a mental model already existing in the literature) was implemented in the safety control structure. It proved useful for considering the concept of human performance variation in the design phases. Given that performance variation can manifest either as a hazard or as an element strengthening resilience, the outcome of this STPA application is a set of recommendations focused on adding value to the role of on-board human operators in order to enhance the resilience of the whole system. In this perspective, specific recommendations were identified as outcomes of the application case, focused on improving the human operator-automation interaction and aimed at avoiding ship blackout.
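The "identify unsafe control actions" step above can be sketched as a simple cross-product: STPA examines each control action in the functional control structure against the four standard ways a control action can be unsafe, and each resulting row is a candidate to assess against the hazard. The two example control actions below are illustrative guesses for the dead-ship scenario, not the actual control structure developed in the thesis.

```python
from itertools import product

# The four standard ways a control action can be unsafe in STPA.
UCA_TYPES = (
    "not provided when required",
    "provided when it creates a hazard",
    "provided too early or too late",
    "stopped too soon or applied too long",
)

# Illustrative control actions for the dead-ship (blackout) scenario;
# the thesis's actual safety control structure is far richer.
CONTROL_ACTIONS = (
    ("engineer on watch", "start stand-by generator"),
    ("power management system", "shed non-essential load"),
)

def enumerate_candidate_ucas(actions=CONTROL_ACTIONS, types=UCA_TYPES):
    """Cross every control action with every UCA type; each row is a
    candidate unsafe control action to be assessed against the hazard."""
    return [
        {"controller": c, "action": a, "uca_type": t}
        for (c, a), t in product(actions, types)
    ]
```

The value of the tabulation is completeness: every controller/action pair is forced through all four unsafe-control-action questions before scenarios are written.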
APA, Harvard, Vancouver, ISO, and other styles
37

ROGGERI, RICCARDO. "SVILUPPO DI UNA PIATTAFORMA IT UTILE PER LA VALUTAZIONE E VALORIZZAZIONE DELLA PROPRIETÀ INDUSTRIALE." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/232575.

Full text
Abstract:
The Italian industrial production system and is experiencing a strong economic and financial crisis and is now undergoing a process of transformation that is changing profoundly the way of doing business and to position themselves on the national and international markets . The overarching goal of many companies is to look for a strategic repositioning towards new high potential markets and to new products with higher added value. One of the more effective tools to overcome the crisis is to enhance the company's portfolio of industrial property . The intangible assets , especially of technological origin , are in fact recognized as fundamental levers of economic value creation of a company ( Lev , 2001) , but a clear definition of an intangible asset has not been developed . Intangible assets covered by legal protection are called Intellectual Property (IP) or intellectual property assets and include patents, trademarks , industrial designs, copyrights and trade secrets (or know-how) (Harvey , Lusch , 1997). The present work focuses on patents and know -how , or those " technological goods of industrial property " whose value increased as a driver of business development has been widely recognized ( Giordani , 2002; Davis, 2003 Davis, 2004). Today there are several methods that differ in their evaluation criteria and procedures, the value of technological activities can be expressed as : numbers , score, index or monetary value. The monetary approaches are widely used since they produce the most accurate and objective estimate of value. This type of approach to the valuation of assets and technology of IP assets have sometimes been disappointing ( Y.Park G. and Park, 2004). This is mainly due to the difficulties encountered by the expert in the estimation of the various parameters that are necessary for the application of techniques monetary . 
For example, these methods require an accurate assessment of future benefits and the discount rate , if the underlying assumptions are incorrect, then the global method becomes misleading. This limitation of monetary methods stems from their origin quantitative , namely the need to translate into monetary figures the whole set of variables that affect the value of the assets of the technology , despite the objectivity of the results of the procedure, they suffer from assumptions made during the parameter estimation . It would be useful for an expert to have access to a technology platform that can facilitate qualitative evaluation of the implementation of monetary technique , which suggests that the factors and parameters useful for improving the development of IP assets . This platform should have the opportunity to be usable and can be completed in the first instance by the user who wants to evaluate independently their IP assets and secondly by the evaluators who can perform their jobs better , according to the answers given by those in the industrial property . In relation to the above considerations , it is proposed the elaborate main objective is to identify and implement complex methodologies aimed at the definition, analysis and evaluation of patents. In particular, the intent is to test the application of a new model for the determination of the quality / quantity of a technology covered by deprivation than the use of traditional methods , which are based exclusively on the present cash flows. 
The basic assumption is that the investment in an innovative project based on the development of an invention protected by industrial property incorporates strategic factor , due to the flexibility of execution or creation of opportunities for the future , which can be used in order discretion by management or by academic researchers interested in operating technology transfer of their inventions in the most appropriate time and in accordance with a well-defined business model . The model developed tries to be translational and usable for all technological sectors but also by a particular importance to issues that are particularly sensitive for the agricultural sector , which has a number of ethical boundaries sometimes little considered . This work led to the creation of a technological platform which aims to develop such a framework , by inserting it in a larger case scenario and pursuing a double objective: first, to identify the factors that can affect the value of technological assets that are exchanged in the context of a commercial transaction and , secondly , try to identify the direction of the relationship between each factor and the value of technological assets . All this has enabled us to develop an ICT platform can be used via the web, which will be made available to both academic researchers to both university technology transfer centers and companies. The basis of the study , there was also a strong focus on the choice of a tool that allows the integration with Knowledge Management (KM) and Business Intelligence in order to provide an open source , integrated and easy to use. The developed platform intends to apply to those responsible for research and development ( R & D) and to the experts who , in the context of a specific business transaction that involves the exchange of technological activity between counterparties , are called upon to assess its value . 
In addition, the framework proposed here is likely to be of interest to academic researchers, who may be encouraged to study appropriate ways of integrating the technology-development factors and the parameters to be evaluated, both to carry out a proper technology transfer of intellectual property and to evaluate the creation of start-ups and spin-offs.
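The sensitivity to the discount rate that the abstract identifies as the main weakness of monetary valuation methods can be illustrated with a minimal net-present-value sketch; the cash flows and rates below are invented for the example and do not come from the thesis.

```python
# Illustrative only: a minimal net-present-value calculation showing how
# sensitive a cash-flow-based patent valuation is to the assumed discount rate.
# The licensing revenues and the two rates are hypothetical.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical licensing revenues over five years, in EUR.
flows = [50_000, 80_000, 100_000, 100_000, 90_000]

low_rate_value = npv(flows, 0.08)   # optimistic discount rate
high_rate_value = npv(flows, 0.20)  # pessimistic discount rate

print(f"NPV at  8%: {low_rate_value:,.0f} EUR")
print(f"NPV at 20%: {high_rate_value:,.0f} EUR")
```

With identical cash flows, the valuation shrinks by roughly a quarter when the discount rate moves from 8% to 20%, which is the kind of assumption-driven swing the qualitative platform is meant to make explicit.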
APA, Harvard, Vancouver, ISO, and other styles
38

Del, Puppo Norman. "High resolution ship hydrodynamics simulations in open source environment." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10983.

Full text
Abstract:
2013/2014
The numerical simulation of wake and free-surface flow around ships is a complex topic that involves multiple tasks: the generation of an optimal computational grid and the development of numerical algorithms capable of predicting the flow field around a hull. In this work, a numerical framework is developed, aimed at high-resolution CFD simulations of turbulent, free-surface flows around ship hulls. The framework consists of a chain of "tools" built on the open-source finite-volume library OpenFOAM®. A novel, flexible mesh-generation algorithm is presented, capable of producing high-quality computational grids for free-surface ship hydrodynamics. The numerical framework is used to solve several benchmark problems, providing results in excellent agreement with the experimental measurements.
XXVII Ciclo
APA, Harvard, Vancouver, ISO, and other styles
39

Gerlin, Francesca. "Beam Propagation in Quantum Communication." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3424610.

Full text
Abstract:
The aim of my thesis is to demonstrate the feasibility of Quantum Communication in free space and in space, pointing out how the ESA Galileo constellation could be strengthened into an Optical Quantum Communication Network (OQCN) through the deployment of a compact, low-cost prototype (SaNe-QKD OPT). Considering table 2, according to the guidelines of [70] (2012) for European Quantum Information Processing and Quantum Communication, three crucial long-term goals are addressed (satellite Quantum Communication, space Quantum Cryptography beyond 1000 kilometres, and a multi-node Quantum Network) with reference to the Galileo constellation: the newly devised OQCN would perform Quantum Communication between approaching satellites of the constellation in multi-node mode, and a Quantum Key Distribution scheme would be employed over large distances, well above 1000 kilometres. My thesis summarises three years of research at the Luxor Laboratories in Padova. It is devoted to two core topics, which are twofold aspects of the same issue: beam propagation in Quantum Communication on the ground and in space, culminating in the realisation of a Quantum Communication Network in which the knowledge acquired in the quantum field is put into practice. The inter-island Canary links were chosen as a worst-case scenario for Quantum Communication experiments, an ideal test bed for investigating beam propagation in view of space applications, where a crucial aspect is the huge distance that communication beams have to cover. The thesis is divided into two complementary parts. Ground beam propagation: beam propagation in free space along horizontal links in the Canary Islands, over long distances (143 kilometres), aimed at studying beam propagation through a turbulent medium. The results point out the optical configurations and specifications needed to obtain an effective and stable communication link.
This research is part of the strategic project Quantum Future of the University of Padova, "the shift in the Quantum paradigm". Space beam propagation: the second part is dedicated to space beam propagation, covering the design and assembly of the quantum prototype SaNe-QKD and its integration with the Optical Communication prototype OPT (by Thales Alenia Space). The resulting system, SaNe-QKD OPT, is intended to be placed on board Galileo satellites to perform quantum optical inter-satellite links. Simulations of inter-satellite links, network topology features and key-length evaluation have yielded results and specifications for the operating wavelengths and telescope apertures to be used for inter-satellite communications. The prototype SaNe-QKD has then been assembled and is shown here in each of its parts. This research is part of the project "Application of Optical Quantum Communication Links for GNSS" of the European Space Agency (ESA). My thesis is divided into four chapters, as follows. Ground beam propagation. Chapter 1 gives an overview of the main concepts of photonics, atmospheric models and turbulence parameters, optics, fibre optics and telescopes used throughout the treatment. The Newtonian telescope is investigated in order to analyse the optical path arriving at the SaNe-QKD prototype for space applications, and the Canary telescope in order to realise an optical system implementing a centroid-following system. Zemax simulations are presented for both telescopes, to check aberrations and for collimation purposes. For the Newtonian off-axis field, coma and field curvature are revealed; for the Canary telescope, chromatic aberration is revealed, and the arrangement of the free-space beam-propagation experiment in the Canary Islands with co-propagating beam control is described. The Kolmogorov atmospheric turbulence model with the Hufnagel-Valley profile is then briefly reported.
It is used in the Canary experiments over 143 kilometres and, in Chapter 3, for up/downlink quantum communication simulations between a ground-based transceiver and a satellite. Chapter 2. It is known that an unconfined optical mode propagating in a turbulent medium suffers distortions. In Quantum Communication the information is encoded and transmitted as a train of single photons with a mean of about one photon per pulse; it follows that the link losses of a beam propagating in the atmosphere increase with distance and, in contrast to classical optical communication, cannot be reduced by increasing the signal power. Consequently, it is crucial to study beam propagation over long ranges, in order to analyse the photon statistics and the transformations imposed on the beam by the ground path, so as to prevent the beam from escaping the transmitter-receiver link. Turbulence introduces two contributions, depending on the size of the eddies [51] [52] relative to the optical beam: • Beam wandering, which occurs when the laser beam is refracted by eddies larger than the beam diameter, causing a displacement of the beam centre (centroid). • Beam spreading, which is due to the laser beam being refracted by eddies smaller than the beam diameter. The short-term beam spread is an additional spread with respect to the standard spread of free-space laser beam propagation (without turbulence effects). These effects appear in relation to the exposure time: on short time scales beam wandering is the dominant effect, while on long time scales beam spreading dominates. Using turbulence as a resource, the research on beam propagation in free-space communication links opens the way to testing new equipment, the custom Canary telescope (Chapter 1).
The team performed two free-space propagation links with the telescope: the former, a 'short' local-range (about 20 km) test link between Asiago and Monte Grappa (Italy) to examine the Canary telescope and check the communication equipment at the transmitting end; the latter, a 'long'-range (143 km) link between La Palma and Tenerife (Canary Islands) for free-space propagation experiments under severe turbulence conditions. Developments and data analysis are presented, pointing out methodologies for turbulence characterisation in ground quantum optical links. The results of the propagation of single and double beams over 143 kilometres demonstrate that it is possible to optimise an optical system so as to reconstruct the long-term beam diameter and, by beam co-propagation techniques, to reduce the link losses. This is promising, since link losses are a crucial aspect of Quantum Communication: in a noisy channel the quantum signal (the information encoded in single photons) cannot be improved by increasing the signal power. We also observed that the arrival statistics of single photons over a free-space 143 km optical link confirm a transformation from a Poissonian to a log-normal distribution; there is, moreover, evidence of consecutive sub-intervals of low losses, which allows us to envisage the exploitation of turbulence as an SNR-improvement technique. Space beam propagation. Chapter 3: following the requirements of the ESA project, a feasibility study for Quantum Communication applications to the Galileo constellation is presented. The chapter starts with an overview of the merits of optical communication (data-rate exchange, lightness, compactness, low power consumption, ...), pointing out that intrinsic security is the added value that only the quantum counterpart can supply, and includes a brief recall of orbital motion in space in order to model satellite orbital motion.
We then turn to an overview of up/downlink simulations: recalling the simulations in the literature of ground-to-space and space-to-ground beam propagation through the atmosphere, the feasibility study shows that, unfortunately, the Galileo constellation can realise only inter-satellite communication links, as the atmosphere and the altitude of the constellation prevent any effective single-photon transmission with current technologies. However, simulation results for inter-satellite beam propagation show that the huge distances can be overcome (the derived requirements are telescope diameters >20 cm and operating wavelengths <532 nm), and the time intervals within which Quantum Communication can be performed can be calculated: with respect to a reference satellite on a different orbital plane, the time intervals in which the inter-satellite distance stays within 15000 kilometres are • about 176 minutes for the satellite called 'three' • about 168 minutes for the satellite called 'two'. Exploiting the relative motion of satellites lying on different orbital planes, we then show that it is possible to target communication between the satellites that are closest in turn, and to achieve appreciable transmission rates: the distance intervals at which raw key rates were calculated are 6000, 10000 and 19000 kilometres; the best raw key rate (18 Mbit/s at 6600 kilometres, corresponding to lower attenuation values) is obtained for a telescope diameter of 50 cm and a wavelength of 50 nm, while the worst (2.2 kbit/s at 6600 kilometres, corresponding to higher attenuation values) is obtained for a telescope diameter of 20 cm and a wavelength of 800 nm. Finally, in today's information-based society security is of paramount importance: the Galileo Optical Quantum Network would guarantee intrinsically secure key exchange, free of PNS attacks, within a satellite distance determined by the decoy-state approach.
After a brief review of a model for a Quantum Communication system, simulations of the key rate are shown in the final section: for the same wavelengths and over the same propagation distances, larger apertures yield higher key rates [bit/s]. At the same time, we observed that shorter operating wavelengths yield higher key rates, ensuring that decoy-state schemes could be applied to the Galileo OQCN in order to defeat the PNS attack over well-defined communication links, depending on the link distance covered, the operating wavelength and the telescope aperture. Chapter 4. We present a summary of the results of the feasibility study concerning the architecture of optical quantum communication links for the Galileo global navigation satellite system (GNSS): for an inter-satellite Quantum Key Distribution (QKD) network we derived, in the previous chapter, specifications for the wavelength selection through an analysis of beam propagation outside the atmosphere as a function of telescope radius and wavelength, showing that by decreasing the wavelength and increasing the telescope radius, the beam size at the receiver is reduced, and so is the attenuation, while the SNR increases. Given the GNSS motion, we also present MATLAB simulations evaluating the time intervals in which two spacecraft reach the minimum inter-satellite distance, in order to investigate the feasibility of the OQL system and evaluate its expected performance in terms of achievable key lengths.
In this chapter the expected final secret key rate is derived (taking into account the raw key rate, the average number of photons per qubit at the transmitter output, the QKD efficiency, the free-space link attenuation, the attenuation due to devices at the receiver side, and the QBER), and the required sifted-key length versus QBER that must be available to Alice and Bob, after the transmission on the quantum layer and the sifting phase, is also evaluated, so that they can extract a secret key of the desired length (assuming an attenuation between -40 dB and -45 dB for the quantum channel). The quantum prototype SaNe-QKD is then described in each of its parts: the Quantum Key Distribution protocol used (B92), and the transmitter and receiver opto-mechanical arrangements with the dedicated interfaces connecting the quantum module to the optical module built by Thales Alenia Space. Finally, the proof-of-concept demonstration tests for the quantum part are described.
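The link-budget scaling summarised in the abstract (shorter wavelengths and larger telescope apertures reduce the beam size at the receiver and hence the attenuation) can be sketched with a simple diffraction estimate; the far-field divergence formula θ ≈ 1.22 λ/D and all numbers below are illustrative assumptions, not values taken from the thesis.

```python
# Illustrative sketch (not the thesis model): diffraction-limited beam size and
# geometric link loss for an inter-satellite optical link, showing why shorter
# wavelengths and larger apertures reduce the attenuation.

import math

def beam_radius(wavelength, tx_diameter, distance):
    """Far-field beam radius from diffraction: half-angle ~ 1.22 * lambda / D."""
    divergence = 1.22 * wavelength / tx_diameter  # radians
    return distance * divergence

def geometric_loss_db(wavelength, tx_diameter, rx_diameter, distance):
    """Fraction of the diffracted spot collected by the receiver aperture, in dB."""
    w = beam_radius(wavelength, tx_diameter, distance)
    collected = min(1.0, (rx_diameter / (2 * w)) ** 2)
    return 10 * math.log10(collected)

L = 6600e3  # m; one of the inter-satellite distances quoted in the abstract
for lam, d in [(800e-9, 0.20), (532e-9, 0.50)]:
    loss = geometric_loss_db(lam, d, d, L)
    print(f"lambda = {lam * 1e9:.0f} nm, aperture = {d:.2f} m -> {loss:.1f} dB")
```

Under these assumed numbers the 532 nm / 50 cm configuration loses tens of dB less than the 800 nm / 20 cm one, which is the qualitative trend behind the best- and worst-case raw key rates quoted above.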
Keywords: Quantum Communication (QC), optical beam propagation in space and free space, Optical Quantum Communication Network (OQCN), Quantum Key Distribution (QKD), Galileo satellite constellation, atmospheric turbulence, feasibility studies, SaNe-QKD quantum prototype.
APA, Harvard, Vancouver, ISO, and other styles
40

Malik, Nadeem Ahmed. "Optical characterization of graphene in vacuum ultraviolet spectral region & spectroscopic studies of colliding laser plasmas (Al, Si)." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3424788.

Full text
Abstract:
The aim of this research is to investigate and explore new innovative materials and techniques for the development and improvement of vacuum ultraviolet (VUV) and extreme ultraviolet (EUV) optics and sources, for the advancement of EUV and VUV technological areas such as space exploration (e.g. observation and spectroscopic diagnostics of the solar corona) and EUV lithography (e.g. advancement and miniaturization of integrated electronic circuits (ICs)). The research work was primarily focused on the investigation of the optical and structural properties of graphene (mono- and few-layer) deposited on a SiO2/Si substrate in the VUV spectral region, exploiting different diagnostic techniques based on reflection and polarimetric measurements. The study started from silicon dioxide deposited on silicon (SiO2/Si), which serves as the substrate for the graphene samples. The optical properties of SiO2/Si were thoroughly investigated at the hydrogen Lyman-alpha line (121.6 nm) by employing the tabletop EUV-VUV polarimetry facility located at CNR-IFN Padova. An approach based on the combined use of reflectometry and polarimetry was used to find reliable values of the optical constants. The results show the potential of the approach: it was demonstrated in this study that the optical constants retrieved using the ellipsometric parameters, ratio (ρ) and phase shift (ϕ), are more reliable than those retrieved using least-squares fitting of the reflectivity. Moreover, it was found that SiO2 behaves as a phase retarder, introducing a phase difference between the s- and p-polarization components of the incoming light. The phase difference observed ranged from 18° to 160°, depending on the incidence angle. Using the same experimental technique, the ellipsometric parameters (phase shift (ϕ), ratio (ρ)) of the graphene sample (1LG/SiO2/Si) were also investigated and compared with those of SiO2/Si to see the effect of graphene as a capping layer.
It was found that 1LG on top of SiO2 improves the optical throughput and, despite its atomic thickness, affects the polarimetric properties of the underlying substrate. Further, the detailed optical properties of mono-layer (1L) and tri-layer (3L) commercial graphene grown on a SiO2/Si substrate were studied at hydrogen Lyman-alpha using laboratory-based (at CNR-IFN, Padova) and synchrotron-light-based (at the BEAR beamline, Elettra synchrotron) EUV-VUV reflectometer setups. Angular reflectance measurements of the graphene samples, along with the bare substrate, were performed taking into account the light polarization. Distinguishable optical performance was observed for both samples (1LG and 3LG) in spite of the ultra-thin thickness of the films. Optical anisotropy with the axis of symmetry nearly perpendicular to the surface, coherently related to the structural orientation of the π orbitals, has been experimentally demonstrated. Anisotropic "effective optical constants" corresponding to an "effective thickness" were retrieved by simulating the interaction of the electromagnetic wave with the structure of the sample. Furthermore, the reliability of the derived optical constants was tested qualitatively by deducing the surface differential reflectance (SDR) from the reflectance measurements. Another very interesting effect induced by graphene is the shift of the pseudo-Brewster angle with respect to what was observed for the substrate. A downshift of the pseudo-Brewster angle was observed for both samples, 1LG (-1.5°) and 3LG (-5°), with a larger shift for an increasing number of layers. In the literature, by contrast, an upshift of the Brewster angle is reported, but for different spectral regions. AFM, XPS and Raman spectroscopies were used to study the surface morphology and the quality of the graphene coatings, and to estimate the thickness/number of layers.
To the best of our knowledge, these remarkable optical properties of graphene in the VUV spectral region were determined for the first time, and the results are of considerable interest for the advancement of VUV optics. The last part of the thesis concerns the study of the stagnation layer formed at the collision front of two colliding plasmas, investigated by means of a time-resolved spectroscopic technique. The time evolution and dynamics of Al-Al and Al-Si colliding plasmas were studied and compared for flat and wedge targets. It was observed that, in the case of the wedge target, the overall emission from the stagnation layer was more intense, and higher ionization states of Al and Si appeared earlier in time and with higher intensity than for the flat target. The time evolution of the electron number density was also studied, and a relatively higher electron number density was observed for the wedge target.
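As an illustration of the polarimetric quantities discussed in this abstract (the ellipsometric ratio ρ, the phase shift between s- and p-components, and the pseudo-Brewster angle), the sketch below computes them from the Fresnel equations for a single vacuum/material interface. This is a minimal sketch under stated assumptions: the complex refractive index is an invented placeholder, not a measured Lyman-alpha value, and a faithful analysis of the SiO2/Si stack would require a multilayer thin-film model.

```python
import cmath
import math

def fresnel_rs_rp(n_complex, theta_i_deg):
    """Fresnel amplitude coefficients for a single vacuum/material
    interface (a simplification of the layered SiO2/Si stack)."""
    theta_i = math.radians(theta_i_deg)
    cos_i = math.cos(theta_i)
    # Snell's law with a complex refractive index n + ik
    sin_t = math.sin(theta_i) / n_complex
    cos_t = cmath.sqrt(1 - sin_t**2)
    rs = (cos_i - n_complex * cos_t) / (cos_i + n_complex * cos_t)
    rp = (n_complex * cos_i - cos_t) / (n_complex * cos_i + cos_t)
    return rs, rp

def ellipsometric_parameters(n_complex, theta_i_deg):
    """rho = r_p / r_s; returns (|rho|, phase shift in degrees)."""
    rs, rp = fresnel_rs_rp(n_complex, theta_i_deg)
    rho = rp / rs
    return abs(rho), math.degrees(cmath.phase(rho))

# Hypothetical complex index at Lyman-alpha (121.6 nm), illustration only
n_film = complex(1.6, 0.4)

# Pseudo-Brewster angle: minimum of the p-polarized reflectance |r_p|^2
angles = [a * 0.5 for a in range(2, 178)]  # 1.0 ... 88.5 degrees
rp2 = [abs(fresnel_rs_rp(n_film, a)[1])**2 for a in angles]
pseudo_brewster = angles[rp2.index(min(rp2))]
```

For an absorbing medium |r_p|^2 never reaches zero, so the reflectance minimum defines the pseudo-Brewster angle; a thin capping layer such as graphene perturbs r_p and shifts this minimum, which is the effect the abstract reports.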
APA, Harvard, Vancouver, ISO, and other styles
41

Bertolotti, Giulia. "Micro-analytical methodologies for the characterization of airborne inorganic pollutants collected on unconventional substrates." Doctoral thesis, Università degli studi di Trento, 2014. https://hdl.handle.net/11572/367598.

Full text
Abstract:
The present work concerns the development of a methodology for the study of atmospheric particulate matter (PM) that is an alternative to instrumental measurements. The methodology developed exploits surfaces already present in the field as samplers of PM. In particular, conifer needles and building facades are employed to investigate different temporal ranges: conifer needles potentially retain particles circulating in the atmosphere from the recent past up to now, while building facades could retain particles from an older period up to now. The fields of application of the approach are situations in which a wide territory must be monitored, possibly including remote locations, or in which information on past pollution scenarios must be reconstructed in the absence of monitoring stations. For instance, the evaluation of the improved efficiency of the off-gas abatement systems of industrial plants is a typical case of application. These pollution sources affect large areas and might have been active before air-quality regulations required constant monitoring of their emissions. In such a case the methodology could typically help in evaluating how large the area of impact of the plant was in the past and is nowadays. In general, such an approach could be valuable whenever relying on instrumental measurements would be costly and time-consuming, in terms of installing a large network of monitoring stations, to study the dispersion of pollutants from a single source or a few sources. To obtain a detailed description of the spatial distribution of pollutant particles, the particles are studied individually at progressively higher magnification.
Where no traces of a source are detected by scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (SEM-EDXS), the samples are analyzed with the higher resolution of transmission electron microscopy coupled with energy-dispersive X-ray spectroscopy (TEM-EDXS) and selected-area electron diffraction (SAED), in order to make sure that no smaller particles, able to travel farther from their source, are present at a given site. All data provided by electron-microscopy analysis of particles collected by conifer needles are placed in the context of elemental concentrations measured by inductively coupled plasma atomic emission spectroscopy (ICP-AES), which is a bulk analytical technique. The same is not possible for the data on single particles present on building facades, given the inorganic matrix of the substrate, especially in the case of metal-oxide paints, which does not allow the bulk measurement. Both the preparation of the samples for bulk analytical techniques and the single-particle analysis by electron microscopy were optimized. For method development and evaluation, the analytical protocol was applied to estimate spatial and temporal trends of accumulation of inorganic pollutants that can be related to changes in the emissions of atmospheric pollutants by an electric arc furnace (EAF) steelmaking plant located in a test site. The benefits of combining single-particle and bulk analytical techniques emerged especially in the discrimination of the emissions from different sources.
APA, Harvard, Vancouver, ISO, and other styles
42

Mosannenzadeh, Farnaz. "Smart Energy City Development in Europe: Towards Successful Implementation." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/368407.

Full text
Abstract:
Smart energy city (SEC) development is a component of the urban development initiative known as the smart city, which has been a popular response to the global energy challenge in Europe during the past two decades. SEC development aims to increase the sustainability of urban energy systems and services. Since 2011, SEC development has been supported by the European Commission as part of the Strategic Energy Technology Plan (SET-Plan) and through the European Union Programmes for Research and Technological Development (specifically FP7 and Horizon 2020). This, along with the promising vision of SEC development and considerable financial support by the private sector, has encouraged numerous European cities to initiate SEC projects. Successful implementation of these projects at the urban scale is crucial to the achievement of urban energy objectives and the sustainability of future urban development. This thesis aims to support urban decision-makers towards successful implementation of urban-scale smart energy city development in Europe. The study includes three stages. The first stage is dedicated to conceptual analysis. Within this stage, I conceptualized the smart city through a keyword analysis of the existing literature on the concept. Then, within the context of the smart city concept, I defined SEC development through literature review and expert knowledge elicitation. The second stage is dedicated to empirical investigation. Using the definition of SEC development, I identified and investigated 43 previously implemented SEC projects to determine common barriers that hinder successful implementation of SEC development. In addition, I proposed a new multi-dimensional methodology that allows a simultaneous prioritization of barriers against their probability, level of impact, scale, origin, and relationships with other barriers.
The third stage of the thesis is dedicated to learning methodologies that allow efficient transfer of knowledge from past SEC experiences to new SEC developments. I introduced the application of two learning methodologies that support decision-makers in predicting barriers to the implementation of a new SEC project: case-based learning and decision-tree learning. The former predicts barriers based on internal similarities between the new SEC project and past projects. The latter uses the past projects to create a predictive model for each barrier based on internal and external project characteristics; these models are then used to predict barriers to a new SEC project. Both methodologies were tested on a new SEC project, named SINFONIA. The conceptual analysis revealed that the application of information and communication technologies, the collaboration of multiple stakeholders, the integration of multiple urban domains, and sustainability evaluation are the constant characteristics (i.e. principles) of smart city and SEC development. It resulted in, to the best of my knowledge, the first multi-dimensional and comprehensive definition of SEC development, revealing its principles, objectives, domains of intervention, stakeholders, and temporal and spatial dimensions. Furthermore, a list of smart energy solutions in each SEC domain of intervention was provided. The empirical investigation of past SEC projects resulted in the identification of 35 common barriers to the implementation of SEC development, categorized into policy, administrative, legal, financial, market, environmental, technical, social, and information-and-awareness dimensions. The barrier prioritization showed that barriers related to collaborative planning, external funding of the project, provision of skilled personnel, and fragmented ownership should be the key action priorities for SEC project coordinators.
Application of the case-based learning methodology resulted in identifying the five past SEC projects most similar to the SINFONIA project in terms of internal project characteristics. Investigating the barriers to the similar projects revealed that fragmented ownership is the most probable barrier to the implementation of the SINFONIA project. Application of the decision-tree methodology resulted in the generation of 20 barrier models, four of which showed very good performance in predicting barriers: lack of values and interest in energy optimization measures, time-consuming requirements by the European Commission concerning reporting and accountancy, economic crisis, and local regulations unfavourable to innovative technologies. None of these four barriers was predicted to occur in the SINFONIA project. The application of this method to SINFONIA showed a higher predictive power when a barrier was absent. The findings of this thesis contribute to the successful implementation of SEC development by supporting decision-makers in different phases of SEC projects. The results of the conceptual analysis contribute to a common understanding and foster dialogue on the concept among various SEC stakeholders, particularly decision-makers and urban planners. The results of the empirical investigation lead to a better comprehension and evaluation of the barriers to the implementation of SEC projects, so that resources can be allocated efficiently to mitigate them. The proposed learning methodologies proved promising in helping decision-makers to identify projects similar to a new SEC development and to predict barriers to the implementation of new SEC projects. The thesis concludes that SEC is an outstanding urban development that can make a valuable contribution to the sustainability of urban energy systems. The specific characteristics of SEC development pose new challenges to future smart and sustainable urban planning.
Nevertheless, SEC development brings about unprecedented opportunities for the integration and application of advanced quantitative techniques with current urban planning methods. This allows efficient knowledge transfer at not only the intra-urban but also the inter-urban level, providing a collaborative, integrated and constructive movement towards successful implementation of SEC projects and the sustainability of future urban development.
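The case-based learning idea described in this abstract, predicting barriers to a new project from the barriers observed in its most similar past projects, can be sketched as follows. The feature names, past cases and barrier labels below are invented for illustration; they are not the thesis data set, and the actual methodology uses a richer similarity measure over internal project characteristics.

```python
def similarity(a, b):
    """Fraction of shared characteristics between two projects
    (a crude stand-in for the thesis's internal-similarity measure)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def predict_barriers(new_project, past_cases, k=2):
    """Collect the barriers observed in the k most similar past cases."""
    ranked = sorted(past_cases,
                    key=lambda c: similarity(new_project, c["features"]),
                    reverse=True)
    barriers = set()
    for case in ranked[:k]:
        barriers |= set(case["barriers"])
    return barriers

# Hypothetical past SEC projects and their observed barriers
past_cases = [
    {"features": {"scale": "district", "funding": "FP7", "owners": "many"},
     "barriers": {"fragmented ownership"}},
    {"features": {"scale": "building", "funding": "H2020", "owners": "one"},
     "barriers": {"lack of skilled personnel"}},
    {"features": {"scale": "district", "funding": "H2020", "owners": "many"},
     "barriers": {"fragmented ownership", "economic crisis"}},
]
new_project = {"scale": "district", "funding": "H2020", "owners": "many"}
predicted = predict_barriers(new_project, past_cases)
```

With these toy cases, the new district-scale, multi-owner project inherits "fragmented ownership" from its closest neighbours, mirroring the abstract's finding for SINFONIA; the decision-tree variant would instead fit one classifier per barrier over the same features.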
APA, Harvard, Vancouver, ISO, and other styles
43

BARDAZZI, ANDREA. "Wave impact in sloshing flows: Hydroelasticity in shallow water condition." Doctoral thesis, 2017. http://hdl.handle.net/11573/1039560.

Full text
Abstract:
The hydroelastic interaction between a fluid (sloshing flow) and a flexible metal structure, as a consequence of a gravitational wave impact, has been investigated. When hydroelasticity occurs, the real stresses that the structure must withstand may be underestimated if only the hydrodynamic pressure is taken into account. Knowledge of the stresses acting on the structure, as well as of the physical mechanisms able to trigger this kind of phenomenon, is fundamental for the correct design and the safety of marine structures. The investigation followed both an experimental and a mathematical approach. An experimental set-up was designed to reproduce the impact, during a two-dimensional sloshing flow in low-filling condition, against a flexible structure as well as a rigid one. Two specific types of wave impact have been considered, both characterized by hydrodynamic loads that may activate hydroelastic effects on the structure: a) flip-through wave impact; b) single air-bubble entrapment wave impact. For the latter, the investigation was also extended to the influence of the Euler number on the structural stress; the influence of the ullage pressure is an important topic related to the scaling procedure when model-scale experiments are performed. To better identify the main physical features that play an active role in the hydroelastic phenomena, a hybrid "weak" hydroelastic method and a fully hydroelastic method have been developed. The three sub-problems identified from the experimental activities, 1a) the sloshing stage with single-phase flow, 1b) the sloshing stage with two-phase flow, and 2) the structural problem, have been modelled with proper mathematical models, whose physical assumptions were inspired by the experimental findings and by the literature.
The hybrid model combines a numerical model for the structural problem with hydrodynamic loads estimated during experimental tests with a fully rigid structure. More in detail, Euler beam theory together with a model for the added mass has been used to describe the behaviour of the structure. The hybrid model highlights how the added-mass effect influences both the natural frequencies and the displacement of the structure. However, some differences in the structure displacement have been observed, especially in the higher peaks just after the impact; this suggests that a stronger hydroelastic interaction is present. For the fully hydroelastic model, the sloshing sub-problem, which can be treated as either single-phase or two-phase flow depending on the impact type, has also been solved numerically. In particular, a mixed Eulerian-Lagrangian method has been applied for the evolution of the free surface, with the liquid phase assumed incompressible and irrotational. In the cases where an air cavity is present, the pressure inside the cavity has been modelled with an ad-hoc semi-analytical model, such as the "lumped" model. The dynamic behaviour of the structure has been approximated as in the hybrid method. The coupling of the cited sub-models during the numerical time-integration scheme gives the fully hydroelastic method.
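The added-mass effect mentioned above, fluid inertia lowering the natural frequencies of the wetted structure, can be illustrated with a minimal sketch. The relation f_wet = f_dry / sqrt(1 + added-mass ratio) is the standard single-mode approximation, and the beam properties below are illustrative numbers, not the experimental panel of the thesis.

```python
import math

def natural_frequency_hz(EI, mass_per_length, length, beta_l=1.8751):
    """First bending frequency of an Euler beam;
    beta_l = 1.8751 is the first clamped-free (cantilever) root."""
    return (beta_l**2 / (2 * math.pi)) * math.sqrt(
        EI / (mass_per_length * length**4))

def wet_frequency_hz(f_dry, added_mass_ratio):
    """Single-mode added-mass correction: the entrained fluid inertia
    lowers the dry natural frequency."""
    return f_dry / math.sqrt(1.0 + added_mass_ratio)

# Illustrative beam: bending stiffness EI [N m^2], mass/length [kg/m]
f_dry = natural_frequency_hz(EI=50.0, mass_per_length=2.0, length=0.5)
f_wet = wet_frequency_hz(f_dry, added_mass_ratio=1.5)
```

The sketch reproduces the qualitative point of the hybrid model: the same structure vibrates at a markedly lower frequency once the fluid added mass is accounted for, which shifts the structural response relative to the impact load duration.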
APA, Harvard, Vancouver, ISO, and other styles
44

FUSCO, CIRO. "L’incremento della sicurezza nelle gallerie stradali: sviluppo di un sistema di navigazione indoor come supporto all'esodo pedonale." Doctoral thesis, 2020. http://hdl.handle.net/11573/1368982.

Full text
Abstract:
Road tunnels, especially those of significant length, can be considered environments at risk of major accidents, particularly because of the severity of the consequences, above all when the infrastructure is affected by a fire. To reduce these risks, tunnels must comply with specific legislative and regulatory requirements covering both the construction and the operation phase, so as to achieve a level of safety deemed acceptable. Safety in these environments depends on numerous factors, and a holistic view is today indispensable to identify, understand and reduce, in a sustainable way, the critical issues still present. One factor that represents a weak point of tunnel safety is certainly the difficulty of evacuation in emergency conditions, as happens precisely in the event of a fire. Academia and, consequently, the regulatory domain, which have always been sensitive to these aspects, have over time introduced considerable improvements to tunnel equipment and systems, which today operate both on the prevention front and on the damage-reduction front. As regards pedestrian evacuation, electrical and electronic safety systems are an indispensable support to the self-rescue of users leaving the infrastructure. In this respect, the lighting and signposting of escape routes are particularly relevant; despite technological progress, however, these systems still present significant shortcomings, especially when users must find their way in conditions of poor visibility due to dense smoke. In these situations evacuation is compromised both by objective aspects linked to the environmental difficulties and by aspects connected with human behaviour under anxiety or even panic.
This study, after the necessary regulatory and technical overview, deals with the identification and development, also through numerous experimental tests in a tunnel, of an indoor navigation system, usable on a smartphone, dedicated to supporting users during pedestrian evacuation, in order to counteract the difficult environmental conditions and the unpredictability of human behaviour. The main objective of the system is to provide the user with dynamic information on the safest route to follow to leave the tunnel on foot. The system is based on the use of particular electronic devices called BLE (Bluetooth Low Energy) beacons, which today find application in numerous fields, including the still-developing ones of indoor positioning and navigation. It should be noted that at present there are no indoor-positioning applications aimed at providing safety functions in road tunnels; in this regard, the system developed in this research project has been the subject of a patent application filed by Sapienza Università di Roma. It should also be made clear that the purpose of this research was not to study new indoor localization techniques, but rather to exploit the state of the art to identify and develop a system with such characteristics of dependability and economic sustainability as to allow specific safety applications in an environment as particular as road tunnels. It is believed that the system presented in this study, besides contributing to the reduction of the risk connected with pedestrian evacuation, could also be exploited for other tunnel safety applications, for example in support of rescue teams, in maintenance management and in driving assistance.
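As a rough illustration of how BLE beacons can support such a navigation system, the sketch below inverts the standard log-distance path-loss model to turn a received signal strength (RSSI) into a distance estimate and picks the nearest beacon. The calibration constants, path-loss exponent and beacon names are assumptions for illustration; the patented system of the thesis is not reproduced here, and real tunnel deployments require per-beacon calibration and filtering of the noisy RSSI signal.

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model
    RSSI = tx_power - 10 * n * log10(d) to get d in metres.
    tx_power_dbm is the calibrated RSSI at 1 m (hypothetical value)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def nearest_beacon(readings):
    """Pick the beacon with the smallest estimated distance, e.g. to
    decide which emergency-exit waypoint the user is approaching."""
    return min(readings, key=lambda r: estimate_distance_m(r["rssi"]))

# Hypothetical readings from two exit-marker beacons
readings = [
    {"beacon": "exit_A", "rssi": -71},
    {"beacon": "exit_B", "rssi": -65},
]
closest = nearest_beacon(readings)
```

A stronger (less negative) RSSI maps to a shorter estimated distance, so the app can rank waypoints along the escape route even when smoke makes visual signage useless.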
APA, Harvard, Vancouver, ISO, and other styles
45

De, Carlo Martino. "Integrated optomechanical devices for sensing." Doctoral thesis, 2021. http://hdl.handle.net/11589/213841.

Full text
Abstract:
Optomechanics is a developing field of research that explores the interaction between light and mechanics. Modern nanofabrication techniques for mechanical devices and ultralow-dissipation optical structures have enabled important experimental progress in optomechanics, both for applications and for fundamental research. There are several ways in which light and mechanics interact with each other. In this thesis three different macro-areas of optomechanics have been developed: optical gyroscopes, optomechanical forces and photoacoustic spectroscopy. The interaction between light and mechanical motion was studied starting from the concept of optical gyroscopes. Optical gyroscopes are angular-velocity sensors. In the state of the art, the physical principles and the configurations used to realize optical gyroscopes are not suitable for miniaturization to the microscale. In this thesis some new configurations exploiting the concept of "exceptional points" have been presented and investigated. According to the relativistic effect called the Sagnac effect, the resonance frequencies of two counter-propagating modes in a ring resonator are separated by a quantity proportional to the angular velocity of the frame. However, the possibility of miniaturizing the optical gyroscope is limited by the fact that the separation between the resonances is proportional to the radius of the ring resonator. In the first chapter the concept of parity-time (PT) symmetry has been introduced as a solution for the integration of angular-velocity sensors. By arranging two coupled optical resonators designed to be at the so-called "exceptional point", it could be demonstrated that the eigenfrequency splitting is proportional to the angular velocity of the device, with a sensitivity several orders of magnitude higher than that of the classical Sagnac gyroscope.
In this thesis it has been demonstrated that one problem of the PT-symmetric gyroscope is the instability of the optical modes when the system is in rotation. That is why the idea of the anti-PT-symmetric gyroscope was proposed, using a U-shaped auxiliary waveguide to indirectly couple two optical resonators. The proposed solution proved to be an interesting alternative for angular-velocity sensing, thanks to its easy readout scheme and the absence of unstable modes. A simple broadband source and a photodetector would be sufficient to read the sensor output. Finally, a new configuration for an anti-PT-symmetric gyroscope has been proposed. It differs from the U-shaped configuration and uses only an auxiliary straight waveguide to indirectly couple the two optical resonators. This architecture proved much more robust, and insensitive to some fabrication errors, compared with the U-shaped one. The second area of optomechanics studied in this thesis concerns optomechanical forces. In particular, a generalized model has been developed that can calculate the mechanical displacement of a single degree of freedom of a general optomechanical setup. The model initially proposed by Rakich has been extended to systems in which gain or loss is considered. The model has then been used to evaluate the effect of optical forces in a PT-symmetric system with suspended waveguides in the coupling region. It has been shown that the optical forces can be enhanced thanks to the PT-symmetry condition. Secondly, an analytical modelling of the mechanical dynamics of coupled suspended optical waveguides subject to optomechanical forces has been proposed, including a modelling of the damping, with the squeezing effect.
This analytical model, together with the proposed numerical algorithm, can be used to find the time-domain evolution of complex optomechanical structures, such as optomechanical switches. Moreover, experimental work on an optomechanical switch has been presented, explaining all the fabrication steps needed to realize the integrated optomechanical device. Finally, photoacoustic spectroscopy has been analysed. The state-of-the-art quartz-enhanced photoacoustic spectroscopy (QEPAS) sensor has been modelled and simulated, and a new semi-integrated sensor has been proposed. One problem of current QEPAS sensors is the need for alignment of the optical components; moreover, the size of all the devices involved in the setup makes it difficult to realize portable and compact sensors. The idea proposed in this thesis is to integrate all the optical components needed to guide the light close to the quartz tuning fork, so as to drastically reduce the size of the overall setup and avoid the optical-alignment problem. The possibility of using integrated optical waveguides to guide the light makes it possible to use optical resonators to enhance the photoacoustic signal read through the quartz tuning fork. The proposed configuration is designed to use an integrated laser bonded to a silicon chip, on which all the waveguides are realized. In this case a very small mechanical resonator can be bonded to the silicon chip in order to increase the amplitude of the pressure signal. In this way, performance comparable with the state-of-the-art QEPAS sensor can be achieved. Such a result could pave the way for a new generation of compact QEPAS sensors, able to overcome the problems of setup size and optical-component alignment.
Optomechanics is a developing field of research exploring the interaction between light and mechanical motion. The modern nanofabrication techniques for mechanical devices and ultralow dissipation optical structures have provided a way for giving an important experimental progress to optomechanics, both for applications and for fundamental investigations. In this thesis optomechanics will be investigated in different aspects, in its general meaning, both theoretically and experimentally. There are different ways in which light and mechanics interact with each other. In this thesis three different macro areas of optomechanics have been developed: optical gyroscopes, optomechanical forces and photoacoustic spectroscopy. The interaction between light and mechanical motion has been investigated starting from the concept of optical gyroscopes. Optical gyroscopes are sensors of angular velocity. In the present state of the art, the physical principles and the configurations used for realizing optical gyroscopes are not suitable for miniaturizing them to the microscale. In this thesis some new configurations exploiting the concept of "exceptional points" have been presented and investigated. According to the relativistic effect called Sagnac effect, the resonance frequencies of two counterpropagating modes in a ring resonator are separated by a quantity proportional to the angular velocity of the frame. However, the possibility of miniaturizing the optical gyroscope is limited by the fact that the resonance splitting is proportional to the radius of the ring resonator. In the first chapter the concept of parity-time symmetry has been introduced as a solution for the integration of angular velocity sensors. 
By setting up two coupled optical resonators designed to be at the so called "exceptional point", it could be demonstrated that the eigenfrequency splitting is proportional to the angular velocity of the device, with a sensitivity that is several orders of magnitude higher than the classical Sagnac gyroscope. In this thesis it has been demonstrated that one problem of the parity-time symmetric gyroscope is the instability of the optical eigenmodes when the system is in rotation. That is why the idea of the anti-parity-time-symmetric gyroscope was proposed, using a U-shaped auxiliary waveguide to indirectly couple two optical resonators. The proposed solution has been shown to be an interesting alternative for angular velocity sensing, thanks to the easy readout scheme and the absence of modes instability. A simple broadband source, together with a photodetector could be used to read the output of the sensor. Finally, a new configuration for an anti-parity-time-symmetric gyroscope has been proposed. It is different from the U-shaped configuration and uses only an auxiliary straight waveguide to indirectly couple two optical resonators. This architecture has been shown to be much more robust, insensitive to some fabrication errors, with respect to the U-shaped one. The second area of optomechanics that has been investigated in this thesis includes optomechanical forces. In particular, a generalized model able to calculate the mechanical displacement of only one degree of freedom of a general optomechanical setup has been developed. The model initially proposed by Rakich has been extended to systems where gain or loss are considered. Then, the model has been used to evaluate the effect of optical forces in parity-time symmetric system with suspended waveguides in the coupling region. It has been demonstrated that it is possible to enhance the optical forces thanks to condition of parity-time symmetry. 
Secondly, an analytical model of the dynamics of optomechanically coupled suspended optical waveguides has been proposed, including a model of the damping with the squeeze-film effect. This analytical model, together with the proposed numerical algorithm, can be used to find the time-domain evolution of complex optomechanical structures, such as optomechanical switches. An experimental work on an optomechanical switch has also been presented, and all the steps required to fabricate the integrated optomechanical device have been explained. The most critical step of the fabrication was the underetching of the suspended waveguides: a wet HF etching process caused the suspended waveguides to stick, while with a ZEP mask and vapor-phase HF etching unexpected bubbles appeared on the surface. A hard mask has therefore been used to guarantee the successful underetching of the device. Finally, experimental measurements on the chip showed the expected behaviour of the device. The last topic analysed is photoacoustic spectroscopy. The state-of-the-art Quartz-Enhanced PhotoAcoustic Spectroscopy (QEPAS) sensor has been modelled and simulated, and a new semi-integrated sensor has been proposed. One problem of state-of-the-art QEPAS sensors is the need to align the optical components; moreover, the size of the devices involved in the setup makes it difficult to realize portable, compact sensors. The idea proposed in this thesis is to integrate all the optical components needed to guide the light in the proximity of the quartz tuning fork, drastically reducing the size of the overall setup and avoiding the problem of optical alignment. The use of integrated optical waveguides to guide the light also makes it possible to use optical resonators to enhance the photoacoustic signal, which is read through the quartz tuning fork. 
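The gain available from the resonant enhancement mentioned above can be estimated with the standard intracavity buildup of a critically coupled all-pass ring resonator, roughly finesse/π. The quality factor, circumference and group index below are hypothetical, not the parameters of the sensor proposed in the thesis:

```python
# Intracavity power buildup of a critically coupled all-pass ring resonator:
# buildup ~ finesse / pi, with finesse = FSR / FWHM = lambda * Q / (n_g * L).
import math

def ring_buildup(q_factor, length_m, wavelength_m=1.55e-6, n_group=4.0):
    """Approximate intracavity power enhancement at critical coupling."""
    finesse = wavelength_m * q_factor / (n_group * length_m)
    return finesse / math.pi

# A hypothetical 100-um-circumference silicon ring with Q = 1e5:
print(f"power buildup ~ {ring_buildup(1e5, 100e-6):.0f}x")
```

Since the photoacoustic signal scales with the optical power absorbed by the gas, a buildup of this order would translate directly into signal enhancement over an unassisted waveguide.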
The proposed setup uses an integrated laser bonded to a silicon chip on which all the waveguides are realized. A very small mechanical resonator can then be bonded over the silicon chip in order to enhance the amplitude of the pressure signal. In this way, performance comparable with state-of-the-art QEPAS sensors can be achieved. Such a result could pave the way to a new generation of compact QEPAS sensors, overcoming the problems of setup size and optical alignment.
APA, Harvard, Vancouver, ISO, and other styles