Dissertations / Theses on the topic 'ING-INF/07 MISURE ELETTRICHE ED ELETTRONICHE'




Consult the top 50 dissertations / theses for your research on the topic 'ING-INF/07 MISURE ELETTRICHE ED ELETTRONICHE.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

PIUZZI, BARBARA. "NUOVO APPROCCIO AL BIOMONITORAGGIO: SVILUPPO DI STRUMENTAZIONE INNOVATIVA BASATA SU BIOSENSORI." Doctoral thesis, Università degli studi di Trieste, 2007. http://thesis2.sba.units.it/store/handle/item/12284.

2

Armani, Francesco. "FOOD PRESERVATION APPLIANCES, METHODS FOR ENERGY SAVING AND QUALITY ENHANCEMENT." Doctoral thesis, Università degli studi di Trieste, 2014. http://hdl.handle.net/10077/9989.

Abstract:
2011/2012
Food preservation is a complex subject that covers multiple disciplines of the natural and technical sciences. This Ph.D. thesis treats the subject from different points of view. Preservation involves the perceived and objective quality of food, the monitoring of that quality, and the methods to achieve it; the economic and energetic effort required is treated as well. All these variations on the preservation theme find their place in the field of green appliances for the domotic house. As its results, this work proposes a series of practical techniques and applications that, alone or combined, can significantly improve the quality and efficiency of food preservation, in both chemical and energetic terms. The Ph.D. activity focused on energy saving and food preservation in the field of domestic and professional cold appliances. Refrigerators are highly energy-demanding appliances; household refrigerators in particular account for 15% of domestic power needs. Their widespread diffusion and 24-hour-a-day operation justify this energy demand and make even the smallest efficiency improvement important. From another point of view, the cooling effect of these appliances should be seen as their means to achieve the preservation of food; a more coherent name would therefore be preservation appliances. It follows that cooling capacity should not be the only purpose of these devices: they should also guarantee the best preservation performance. Temperature measurement and other means to monitor the quality of preserved goods should be integrated into the control loop.
La conservazione del cibo rappresenta un argomento complesso che coinvolge diverse discipline delle scienze naturali e tecniche. Questa tesi di dottorato tratta questi argomenti sotto diversi punti di vista, analizzando gli aspetti riguardanti la qualità oggettiva e percepita del cibo, il suo monitoraggio ed i metodi per garantirla. Vengono inoltre considerati il consumo energetico ed il peso economico necessari al raggiungimento di questo obiettivo. In questo lavoro vengono proposte una serie di tecniche e applicazioni che, opportunamente utilizzate, permettono di migliorare significativamente sia la qualità che l'efficienza nella conservazione del cibo dal punto di vista sia chimico che energetico. L'argomento principale è pertanto incentrato sul risparmio energetico e sulla conservazione degli alimenti riguardante i refrigeratori domestici e professionali. I frigoriferi, infatti, sono dispositivi dal consumo energetico molto elevato e rappresentano addirittura il 15% del consumo domestico. Ciò è dovuto alla loro grande diffusione ed al loro funzionamento ininterrotto. Pertanto anche piccoli incrementi di efficienza possono rappresentare un obiettivo importante. Nell'affrontare questo argomento, si è voluto evidenziare l'obiettivo primario che deve essere la conservazione del cibo e non la refrigerazione in quanto tale. Sarebbe quindi più adeguato parlare di dispositivi per la conservazione del cibo e non di refrigeratori, ponendo in questo modo l'accento sugli aspetti direttamente legati agli alimenti. Ne consegue l'importanza di integrare nella logica di controllo anche la misura di parametri descrittivi dello stato di conservazione degli alimenti.
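The abstract's closing point, that temperature measurement should be integrated into the appliance's control loop, can be illustrated with a minimal hysteresis controller. Everything below (function name, setpoint, dead band) is an illustrative sketch, not the control logic developed in the thesis.

```python
def compressor_command(temp_c: float, compressor_on: bool,
                       setpoint_c: float = 4.0, band_c: float = 1.0) -> bool:
    """Hysteresis (bang-bang) control around a preservation setpoint.

    Returns the new compressor state given the measured cabinet
    temperature; the dead band avoids rapid on/off cycling.
    """
    if temp_c > setpoint_c + band_c:
        return True           # too warm: switch the compressor on
    if temp_c < setpoint_c - band_c:
        return False          # cold enough: switch it off
    return compressor_on      # inside the band: keep the current state
```

A preservation-oriented controller in the sense of the abstract would extend the measured state beyond temperature, e.g. with humidity or gas sensors describing the condition of the stored food.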
XXV Ciclo
3

Scala, Elisa <1979>. "Development and characterization of a distributed measurement system for the evaluation of voltage quality in electric power networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/885/1/Tesi_Scala_Elisa.pdf.

Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers, at the various points of a network has emerged as a concern only in recent years, particularly as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers their time of occurrence. The data set is then used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at some point of the distribution system, and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art of methods to detect and locate faults in distribution networks is then presented. Finally, attention turns to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way, the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware in the measurement chain of every acquisition channel of the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated fault position in the network under test. Finally, this parameter is computed by means of a numerical procedure, according to the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the existing transducer with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making application of the method much more feasible.
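The abstract does not reproduce the thesis's equations, but the general idea of locating a fault from the registered times of occurrence, and the numerical (Monte Carlo) uncertainty propagation mentioned for the combined uncertainty, can be sketched with a common two-ended time-of-arrival scheme. This is not necessarily the thesis's exact formulation, and all numeric values (line length, wave speed, timing jitter) are illustrative assumptions.

```python
import random

def fault_distance(t_a, t_b, line_length_m, wave_speed_mps):
    """Two-ended method: distance of the fault from terminal A, given
    the transient arrival times t_a, t_b [s] recorded at the two ends.
    From t_a = d / v and t_b = (L - d) / v follows d = (L + v (t_a - t_b)) / 2."""
    return 0.5 * (line_length_m + wave_speed_mps * (t_a - t_b))

# Fault 3 km from terminal A on a 10 km line, propagation speed 2e8 m/s:
V, L = 2e8, 10_000.0
t_a, t_b = 3_000.0 / V, 7_000.0 / V
d = fault_distance(t_a, t_b, L, V)            # 3000 m

# GUM-style numerical propagation of an assumed 100 ns synchronization
# uncertainty of the remote stations (Monte Carlo):
random.seed(0)
sigma_t = 100e-9
mc = [fault_distance(t_a + random.gauss(0, sigma_t),
                     t_b + random.gauss(0, sigma_t), L, V)
      for _ in range(20_000)]
mean_d = sum(mc) / len(mc)
std_d = (sum((x - mean_d) ** 2 for x in mc) / (len(mc) - 1)) ** 0.5
```

With these assumed values the standard uncertainty of the position is about (V/2)·√2·σ_t ≈ 14 m, which illustrates why the timing accuracy of the remote stations dominates the location uncertainty.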
4

Scala, Elisa <1979>. "Development and characterization of a distributed measurement system for the evaluation of voltage quality in electric power networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/885/.

Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers, at the various points of a network has emerged as a concern only in recent years, particularly as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers their time of occurrence. The data set is then used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at some point of the distribution system, and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art of methods to detect and locate faults in distribution networks is then presented. Finally, attention turns to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way, the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware in the measurement chain of every acquisition channel of the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated fault position in the network under test. Finally, this parameter is computed by means of a numerical procedure, according to the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the existing transducer with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making application of the method much more feasible.
5

Masi, Maria Gabriella <1983>. "Development of human visual system analysis for the implementation of a new instrument for flicker measurement." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3270/1/Masi_Maria_Gabriella_tesi.pdf.

Abstract:
Flicker is a power quality phenomenon consisting of cyclic instability of light intensity caused by supply voltage fluctuation, which, in turn, can be caused by disturbances introduced during power generation, transmission or distribution. The standard EN 61000-4-15, recently adopted by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the chain formed by the lamp, the human eye and the brain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. The human eye and brain model is represented by the so-called flicker curve. This curve was determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. The limitations of this standard approach to flicker evaluation are essentially two. First, the provided annoyance index Pst can be related to actual tiredness of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is "subjective", given that it relies on people's answers about their feelings. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye-brain response to flicker and of overcoming the strict dependence of the standard on the kind of light source. In this light, the thesis presents an important contribution towards a new flickermeter. An improved visual system model based on a physiological parameter, the mean value of the pupil diameter, is presented, allowing a more "objective" representation of the response to flicker. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers.
The intent has been to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
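The fluctuating supply voltage that drives the standard's lamp-eye-brain chain can be sketched numerically. The 8.8 Hz modulation below is the frequency at which the flicker curve places maximum human sensitivity; the modulation depth, sampling rate and recovery method are arbitrary illustrative choices. This is only the stimulus, not the Pst algorithm of EN 61000-4-15, which additionally involves lamp/eye filtering and statistical classification.

```python
import math

def flicker_stimulus(t, v_rms=230.0, f_carrier=50.0, f_mod=8.8, dv=0.0025):
    """50 Hz supply voltage with sinusoidal amplitude modulation;
    dv is the relative voltage fluctuation delta-V/V (assumed value)."""
    envelope = math.sqrt(2) * v_rms * (1.0 + 0.5 * dv
                                       * math.sin(2 * math.pi * f_mod * t))
    return envelope * math.sin(2 * math.pi * f_carrier * t)

# Recover the relative fluctuation from the per-cycle voltage peaks:
fs = 10_000                                          # assumed sampling rate
x = [flicker_stimulus(n / fs) for n in range(fs)]    # 1 s of signal
per_cycle = fs // 50
peaks = [max(abs(v) for v in x[i:i + per_cycle])
         for i in range(0, len(x), per_cycle)]
dv_est = (max(peaks) - min(peaks)) / (sum(peaks) / len(peaks))
```

The recovered `dv_est` is close to the injected 0.25 % fluctuation; a flickermeter's task is to weight such fluctuations by the lamp and eye-brain response before classifying them.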
6

Masi, Maria Gabriella <1983>. "Development of human visual system analysis for the implementation of a new instrument for flicker measurement." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3270/.

Abstract:
Flicker is a power quality phenomenon consisting of cyclic instability of light intensity caused by supply voltage fluctuation, which, in turn, can be caused by disturbances introduced during power generation, transmission or distribution. The standard EN 61000-4-15, recently adopted by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the chain formed by the lamp, the human eye and the brain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. The human eye and brain model is represented by the so-called flicker curve. This curve was determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. The limitations of this standard approach to flicker evaluation are essentially two. First, the provided annoyance index Pst can be related to actual tiredness of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is "subjective", given that it relies on people's answers about their feelings. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye-brain response to flicker and of overcoming the strict dependence of the standard on the kind of light source. In this light, the thesis presents an important contribution towards a new flickermeter. An improved visual system model based on a physiological parameter, the mean value of the pupil diameter, is presented, allowing a more "objective" representation of the response to flicker. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers.
The intent has been to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
7

Mingotti, Alessandro <1992>. "Integration of conventional and unconventional Instrument Transformers in Smart Grids." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9176/1/Tesi_Mingotti.pdf.

Abstract:
In this thesis the reader is guided through the role of Instrument Transformers in the constantly evolving Smart Grid scenario. Even non-experts and non-metrologists will be able to follow the main concepts presented, because the basic principles are always introduced before moving to in-depth discussions. The chapter containing the results of the work is preceded by three introductory chapters, which provide the basic principles and the state of the art needed to approach the results. These three chapters cover Instrument Transformers, standards, and metrology. In the first chapter, the Instrument Transformers under study are described and compared, with particular attention to their accuracy parameters. In the second chapter, two fundamental international documents concerning Instrument Transformers are analysed: the IEC 61869 series and EN 50160. This is done to establish exactly how transformers are standardized and regulated. Finally, the last introductory chapter presents one of the pillars of this work: metrology and the role of uncertainty. In the core of the work, the integration of Instrument Transformers in Smart Grids is divided into two main topics. The first assesses the transformers' behaviour, in terms of accuracy, when their normal operation is affected by external quantities. The second exploits the current and voltage measurements obtained from the transformers to develop new algorithms and techniques to face typical and emerging issues affecting Smart Grids. Overall, this thesis has a twofold aim. On one hand, it provides a fairly detailed overview of Instrument Transformer technology and the state of the art. On the other hand, it describes issues and novelties concerning the use of these transformers in Smart Grids, focusing on the role of uncertainty when their measurements are used for common and critical applications.
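For context, the accuracy parameters that the first chapter compares are defined in the IEC 61869 series essentially as a ratio error and a phase displacement. A minimal sketch of these definitions follows; the example CT and its readings are invented for illustration.

```python
import math

def ratio_error_percent(k_r, i_s, i_p):
    """IEC 61869 ratio error in percent: 100 * (k_r * Is - Ip) / Ip,
    with k_r the rated transformation ratio and Is, Ip the rms
    secondary and primary currents."""
    return 100.0 * (k_r * i_s - i_p) / i_p

def phase_displacement_minutes(phi_s_rad, phi_p_rad):
    """Phase displacement (secondary phasor angle minus primary),
    expressed in minutes of arc."""
    return math.degrees(phi_s_rad - phi_p_rad) * 60.0

# A 500 A / 5 A current transformer (k_r = 100) returning 4.99 A at
# 500 A primary reads 0.2 % low:
eps = ratio_error_percent(100.0, 4.99, 500.0)   # about -0.2
```

The standard's accuracy classes bound both quantities at specified fractions of rated current; the "external quantities" studied in the thesis are influence factors that can push a transformer outside those bounds.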
8

Ghaderi, Abbas <1988>. "Design and Implementation of New Measurement Models and Procedures for Characterization and Diagnosis of Electrical Assets." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9568/1/Revised%20Thesis.V06.pdf.

Abstract:
Measurement is an essential procedure in power networks, for both network stability and diagnosis purposes. This work is an effort to confront the challenges in power networks using a metrological approach. Three different research projects are carried out: the diagnosis of Medium Voltage underground cable joints, the modeling of inductive Current Transformers, and the frequency modeling of Low Power Voltage Transformers as an example of measurement units in power networks. For the cable joints, the causes and effects of the loss factor have been analyzed, while for the inductive current transformers a measurement model has been developed to predict the ratio and phase errors. Moreover, a frequency modeling approach has been introduced and tested on low power voltage transformers. The performance of the model in predicting the low power voltage transformer output has been simulated and validated by experimental tests performed in the lab.
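The loss factor (tan δ) analysed for the cable joints is, in essence, the tangent of the angle by which the insulation current falls short of the ideal 90° lead over the voltage. A hedged sketch of estimating it from sampled waveforms follows; the single-bin DFT approach and the synthetic 1 % loss example are illustrative choices, not the thesis's measurement models.

```python
import cmath
import math

def tan_delta(u, i, fs, f0=50.0):
    """Estimate the insulation loss factor from synchronously sampled
    voltage u and current i (an integer number of f0 cycles assumed):
    take the fundamental phasors with a single-bin DFT, then
    tan(delta) = cot(phase lead of current over voltage)."""
    n = len(u)
    rot = [cmath.exp(-2j * math.pi * f0 * k / fs) for k in range(n)]
    U = sum(uk * rk for uk, rk in zip(u, rot))
    I = sum(ik * rk for ik, rk in zip(i, rot))
    phi = cmath.phase(I / U)          # close to pi/2 for a good dielectric
    return 1.0 / math.tan(phi)

# Synthetic check: capacitive current plus a 1 % in-phase (loss) component.
fs, n = 5_000, 5_000
u = [math.sin(2 * math.pi * 50.0 * k / fs) for k in range(n)]
i = [math.cos(2 * math.pi * 50.0 * k / fs)
     + 0.01 * math.sin(2 * math.pi * 50.0 * k / fs) for k in range(n)]
td = tan_delta(u, i, fs)             # about 0.01
```

A rising tan δ indicates growing dielectric losses, which is why the quantity is used as a degradation indicator for joints and other insulated assets.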
9

Bartolomei, Lorenzo <1992>. "Development and metrological characterization of measuring instruments for low-voltage networks monitoring." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9797/1/thesis_Lorenzo_Bartolomei.pdf.

Abstract:
This thesis collects the main results of my research and of the work related to the design of monitoring systems for LV distribution networks. The first three chapters are introductory. The first describes the main concepts contained in the guide to the evaluation of measurement uncertainty (the GUM), since some of them are recalled in later sections. Chapter 2 provides the main notions of the smart grid concept, the new generation of distribution networks characterized by a high degree of automation, and of the main power quality problems affecting the grids. The standards connected to these topics are then presented: (i) EN 50160; (ii) IEEE 519; (iii) IEC 61000-4-7. Finally, chapter 3 gives a general description of the main sensors suitable for LV monitoring systems for the acquisition of voltage and current waveforms, providing information on their working principles and metrological performance and recalling the related standards (such as IEC 61869). Chapter 4 gets to the heart of the work done in my PhD course: the two monitoring devices specifically developed to meet the needs of future smart grids are presented, the Guardian Meter and the Network Monitoring Unit. Information is provided on the purpose of each device, its technical characteristics, the tests conducted for its metrological characterization and the resulting measurement performance. Notably, the testing activity led to the development of procedures, some of them innovative, for the metrological evaluation of monitoring devices. The last chapter collects the scientific outcomes of the R&D activity, which can be starting points for updating the current standards related to monitoring systems and for developing new procedures to evaluate the metrological performance of energy meters.
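As a small illustration of the kind of quantity such monitoring devices must evaluate, total harmonic distortion can be computed from a window synchronized to the fundamental (IEC 61000-4-7 prescribes a ten-cycle window at 50 Hz; the harmonic grouping and smoothing the standard adds are omitted here, and the signal values are invented).

```python
import cmath
import math

def harmonic_rms(samples, fs, f0, h):
    """RMS of harmonic h via a single-bin DFT (rectangular window,
    sampling synchronized to an integer number of f0 cycles assumed)."""
    n = len(samples)
    s = sum(x * cmath.exp(-2j * math.pi * h * f0 * k / fs)
            for k, x in enumerate(samples))
    return math.sqrt(2) * abs(s) / n

def thd(samples, fs, f0=50.0, h_max=40):
    """Total harmonic distortion up to order h_max, relative to the
    fundamental rms value."""
    v1 = harmonic_rms(samples, fs, f0, 1)
    return math.sqrt(sum(harmonic_rms(samples, fs, f0, h) ** 2
                         for h in range(2, h_max + 1))) / v1

# 200 ms (ten 50 Hz cycles) of a waveform with 5 % third and 3 % fifth harmonic:
fs, n = 10_000, 2_000
u = [math.sin(2 * math.pi * 50 * k / fs)
     + 0.05 * math.sin(2 * math.pi * 150 * k / fs)
     + 0.03 * math.sin(2 * math.pi * 250 * k / fs) for k in range(n)]
distortion = thd(u, fs)
```

With the assumed 5 % and 3 % components, the result is √(0.05² + 0.03²) ≈ 5.8 %.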
10

Cavaliere, Diego <1992>. "Metrological characterization of sensors and instrumentation for distribution grid monitoring and electrical asset diagnostics." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10357/1/Tesi_Diego_Cavaliere_20220514_final.pdf.

Abstract:
The Smart Grid needs a large amount of information to operate, and day by day new information is required to improve operating performance. It is also fundamental that the available information be reliable and accurate. The role of metrology is therefore crucial, especially when applied to distribution grid monitoring and electrical asset diagnostics. This dissertation aims at better understanding the sensors and instrumentation employed by power system operators in the above-mentioned applications, and at studying new solutions. Concerning the research on measurements applied to electrical asset diagnostics, an innovative drone-based measurement system is proposed for monitoring medium voltage surge arresters; this system is described and its metrological characterization is presented. The research regarding measurements applied to grid monitoring, on the other hand, consists of three parts. The first part concerns the metrological characterization of electronic energy meters operating under off-nominal power conditions. Original test procedures have been designed for both frequency and harmonic distortion as influence quantities, aiming at defining realistic scenarios. The second part deals with medium voltage inductive current transformers. An in-depth investigation of their accuracy behavior in the presence of harmonic distortion is carried out by applying realistic current waveforms. The accuracy has been evaluated by means of the composite error index and its approximated version. Based on the same test setup, a closed-form expression for the uncertainty of the measured current total harmonic distortion has been experimentally validated.
The metrological characterization of a virtual phasor measurement unit is the subject of the third and last part: first, a calibrator was designed and the uncertainty associated with its steady-state reference phasor was evaluated; this calibrator then served as a reference to characterize the phasor measurement unit implemented within a real-time simulator.
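The composite error index mentioned in the abstract is defined in IEC 61869 as the rms of the instantaneous difference between the ratio-scaled secondary current and the primary current, relative to the primary rms value. A minimal sketch over sampled waveforms follows; the 1 % amplitude error in the example is an invented test value.

```python
import math

def composite_error_percent(i_p, i_s, k_r):
    """IEC 61869 composite error: 100 / rms(i_p) times the rms of
    (k_r * i_s(t) - i_p(t)), from synchronously sampled waveforms
    covering an integer number of periods."""
    n = len(i_p)
    rms_p = math.sqrt(sum(x * x for x in i_p) / n)
    diff_rms = math.sqrt(sum((k_r * s - p) ** 2
                             for p, s in zip(i_p, i_s)) / n)
    return 100.0 * diff_rms / rms_p

# A secondary that is 1 % low in amplitude gives a 1 % composite error:
n, k_r = 1_000, 100.0
i_p = [math.sin(2 * math.pi * k / n) for k in range(n)]
i_s = [0.99 * p / k_r for p in i_p]
eps_c = composite_error_percent(i_p, i_s, k_r)
```

Unlike the plain ratio error, the composite error also captures phase displacement and harmonic distortion introduced by the transformer, which is why it is the preferred index when testing with distorted waveforms, as done in the abstract's second part.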
11

LUCIANO, Vincenzo. "Studio e realizzazione di un dispositivo di conversione energetica per l'alimentazione di un circuito elettronico di misura impiantabile in una protesi totale di ginocchio." Doctoral thesis, Università degli studi di Bergamo, 2013. http://hdl.handle.net/10446/28964.

12

MILANO, Filippo. "Short and Mid range positioning: methodologies and set-up for accurate performance." Doctoral thesis, Università degli studi di Cassino, 2021. http://hdl.handle.net/11580/86003.

Abstract:
The ability to locate people and objects is a primary need in daily life, and a new context of spatially aware applications has developed, ranging from security and protection services to the assisted navigation of hospitals, and from industrial automation to virtual reality. In this framework, the problem of positioning represents the first step within the wider process of localization. This thesis addresses the problem of positioning in anchor-based systems. Short- and mid-range scenarios are addressed, with methodologies proposed for the design of the systems and set-ups developed for their use in different applications. In detail, simulation tools have been proposed, and are still being improved, which make it possible to obtain useful information for the design of positioning systems; this represents an original contribution of the thesis with respect to the current state of the art. In both contexts, systems based on magnetic fields (short range) and on Ultra-Wide Band (mid range) have then been used in various industrial and medical applications. Finally, a first preliminary activity aimed at smart management of the energy consumption of positioning systems was carried out on Bluetooth Low Energy devices.
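In its simplest 2-D form, the anchor-based positioning problem the thesis addresses reduces to solving range equations from known anchor coordinates. One standard linearized least-squares sketch follows; the anchor layout and ranges are invented for the example, and real UWB or magnetic-field systems add noise handling and calibration the sketch omits.

```python
def trilaterate(anchors, ranges):
    """2-D position from >= 3 anchors and measured ranges: linearize the
    circle equations against the first anchor, then solve the normal
    equations of the resulting overdetermined linear system."""
    (x1, y1), r1 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
        rhs.append(xi * xi - x1 * x1 + yi * yi - y1 * y1
                   + r1 * r1 - ri * ri)
    s11 = sum(a * a for a, _ in rows)
    s12 = sum(a * b for a, b in rows)
    s22 = sum(b * b for _, b in rows)
    t1 = sum(a * c for (a, _), c in zip(rows, rhs))
    t2 = sum(b * c for (_, b), c in zip(rows, rhs))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Four anchors at the corners of an assumed 10 m x 10 m room:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [((true_pos[0] - x) ** 2 + (true_pos[1] - y) ** 2) ** 0.5
          for x, y in anchors]
est = trilaterate(anchors, ranges)
```

With noisy ranges the same normal-equation solution gives the least-squares position, which is where the simulation tools mentioned in the abstract help: they let the designer study how anchor geometry propagates ranging error into position error.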
13

FAPANNI, TIZIANO. "Sensors Design for E-Skin by Printed and Flexible Electronics." Doctoral thesis, Università degli studi di Brescia, 2023. https://hdl.handle.net/11379/568964.

Abstract:
Il miglioramento delle condizioni di vita ottenuto negli ultimi anni ha generato un aumento e un invecchiamento della popolazione, creando il bisogno di un nuovo paradigma di sanità intelligente dove sia possibile monitorare da remoto la variazione dello stato fisiologico dei pazienti. In questo contesto, gli e-skin, definibili come dispositivi flessibili che incorporano array di sensori, rappresentano una tecnologia all'avanguardia che permette il monitoraggio di differenti parametri fisiologici direttamente dal corpo umano in modo non invasivo grazie alle loro ridotte dimensioni. Per via di queste loro caratteristiche, gli e-skin sono promettenti in vari ambiti applicativi come quello industriale, la prostetica e il già citato campo clinico. Questa loro grande applicabilità è resa possibile dai molteplici sensori che permettono di acquisire dati in modo preciso e distribuito. In questo contesto infatti, i sensori hanno assunto un ruolo centrale dal momento che hanno come compito principale la trasduzione dei differenti segnali d'interesse come ad esempio temperature, pressioni, deformazioni, biopotenziali e marcatori biochimici (e.g. ioni, metaboliti, metalli pesanti, amminoacidi, ormoni, farmaci, stupefacenti, ...). Quest'ultimo gruppo di marcatori sta recentemente suscitando un enorme interesse da parte della comunità scientifica dal momento che permettono il veloce riconoscimento di molteplici stati fisiologici e patologici. Ad oggi, per la misura di questi marcatori biochimici, la ricerca si sta concentrando sui biosensori dal momento che sono un'alternativa valida, più economica e di uso più facile rispetto ad altre tecniche di analisi di laboratorio (e.g. protocolli ELISA, cromatografia, ...) ad oggi usate come gold standard. Fra i possibili principi di trasduzione adottati correntemente per i biosensori in letteratura, l'elettrochimica presenta molteplici vantaggi fra cui un basso costo, un'alta sensibilità e l'uso di strumentazioni relativamente semplici.
In questa tesi, verranno descritti differenti approcci per lo sviluppo e il miglioramento di sensori elettrochimici stampati applicati agli e-skin. La discussione partirà da una breve descrizione del principio di trasduzione di questi sensori e si focalizzerà in seguito prima su differenti approcci per il miglioramento delle caratteristiche metrologiche e poi sulla valutazione, monitoraggio e mitigazione delle componenti d'incertezza che possono influire sui dispositivi proposti. In questo contesto, la tesi si aprirà con una revisione della letteratura per introdurre i concetti generali riguardanti gli e-skin e i biosensori, in modo tale da poter comprendere meglio sia il loro principio di trasduzione e le loro limitazioni attuali. In seguito, sarà presentato un primo prototipo di un e-skin multisensing per la misura non invasiva e personalizzata dell'affaticamento muscolare che comprende sia un sensore elettromiografico (EMG) a 8 canali sia un sensore elettrochimico. I risultati ottenuti sono promettenti, ma per quanto riguarda il sensore elettrochimico le prestazioni non sono completamente soddisfacenti per l'applicazione e devono pertanto essere migliorate. Partendo da queste considerazioni, i successivi due progetti si concentrano sulle modalità di miglioramento della sensibilità e del limite di identificazione (limit of detection, LOD) sfruttando micro- e nano- strutturazione della superficie. In seguito, il lavoro si è concentrato su tutti gli elementi che possono introdurre incertezza sul segnale misurato in modo tale da comprendere meglio la qualità e l'affidabilità dei sensori elettrochimici stampati con AJP proposti. Fra queste fonti d'incertezza si è riscontrato che la temperatura agisce da variabile d'influenza e pertanto se ne è approfondito lo studio per provare poi a compensarne gli effetti per mezzo di sensori stampati innovativi.
As overall living conditions improve and the population grows and ages, a new paradigm of smart healthcare is emerging, in which monitoring and tracking changes in the physiological status of patients or sports professionals is a main objective of the scientific community. In this frame, e-skin devices, defined as flexible devices that embed arrays of sensors, are a cutting-edge technology that promises to monitor different physiological parameters from the human body in a non-invasive way thanks to their reduced size and bulk. These characteristics make e-skins promising in a plethora of applications and fields beyond the clinical one, such as industrial environments and prosthetics. Their wide applicability is enabled by the many sensors that allow precise and distributed data collection. In this frame, sensors become central to transducing the signals of interest from the body, such as temperatures, pressures, deformations, biopotentials and biochemical markers (e.g. ions, metabolites, heavy metals, amino acids, hormones, drugs...). This last class of markers has lately attracted huge interest from the scientific community, since such markers allow the quick detection of a plethora of physiological conditions. Currently, biosensors are being investigated to detect these markers, as they are a valid, cheaper and easier-to-use alternative to standard in-lab analysis methods (e.g. ELISA protocols, chromatography, ...). Moreover, among the transduction principles currently employed for biosensors, the electrochemical one presents, according to the literature, many advantages, such as low cost, high sensitivity and simple instrumentation. In this thesis, different approaches for the development and improvement of printed electrochemical sensors for e-skin applications will be investigated. 
Exploiting the opportunities offered by novel printing technologies, such as Aerosol Jet Printing, the main focus was to improve the metrological characteristics of these sensors, as well as to evaluate, monitor and mitigate the uncertainty sources that could affect them. Before going into the experimental details, the first part of the thesis will be dedicated to describing the transduction principle behind the electrochemical measurements investigated. The literature will then be reviewed to introduce the general concepts about e-skins and biosensors, including their opportunities and limitations. Next, a prototype of a multi-sensing e-skin patch for unobtrusive and personalized fatigue assessment, combining an 8-channel electromyographic (EMG) sensor with an electrochemical sensor, will be presented. The achieved results are promising, but underline the need to increase the sensitivity of the printed electrochemical sensors. Starting from these cues, the next two projects focus on improving the sensitivity and the limit of detection of printed electrochemical sensors using both micro- and nano-structures. The final part of the thesis will focus on all the elements that can introduce uncertainty into the measured signals, in order to better understand the quality and reliability of the proposed aerosol-jet-printed electrochemical sensors. Here, a wide set of uncertainty components and influence variables can be identified; among the latter, temperature is one of the most relevant sources of noise (and thus uncertainty) for these kinds of sensors, and it has to be compensated using novel, fully printed sensors.
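Two of the metrological figures this thesis sets out to improve, sensitivity and limit of detection (LOD), have conventional operational definitions: the sensitivity is the slope of the sensor's calibration curve, and the LOD is commonly estimated with the 3-sigma criterion, i.e. three times the standard deviation of the blank divided by the sensitivity. A minimal sketch (the calibration data below are illustrative, not taken from the thesis):

```python
import numpy as np

def sensitivity_and_lod(conc, current, blank_sd):
    """Least-squares calibration slope (sensitivity) and 3-sigma limit of detection."""
    slope, intercept = np.polyfit(conc, current, 1)  # linear calibration fit
    lod = 3 * blank_sd / slope                       # 3-sigma LOD criterion
    return slope, lod

# Illustrative amperometric calibration: current (uA) vs concentration (mM)
conc = np.array([0.0, 1.0, 2.0, 3.0])
current = 2.0 * conc + 0.5
slope, lod = sensitivity_and_lod(conc, current, blank_sd=0.02)
```

Raising the slope (for instance, through the micro- and nano-structuring of the electrode surface discussed above) directly lowers the LOD for the same blank noise.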
APA, Harvard, Vancouver, ISO, and other styles
14

CLERITI, RICCARDO. "Tecniche e sistemi per la misura e caratterizzazione di dispositivi attivi per la ricezione ad alta sensibilità e amplificazione a basso rumore a frequenze sub-millimetriche." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2013. http://hdl.handle.net/2108/203122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

BELLO, VALENTINA. "Smart micro-opto-fluidic sensing platforms for contactless chemical and biological analyses." Doctoral thesis, Università degli studi di Pavia, 2022. http://hdl.handle.net/11571/1453167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

FERNANDES, CARVALHO DHIEGO. "New LoRaWAN solutions and cloud-based systems for e-Health applications." Doctoral thesis, Università degli studi di Brescia, 2021. http://hdl.handle.net/11379/544075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

CANTÙ, EDOARDO. "Printed Sensors on Non-Conventional Substrates." Doctoral thesis, Università degli studi di Brescia, 2022. http://hdl.handle.net/11379/554976.

Full text
Abstract:
Industry 4.0 has been radically transforming production processes and systems through the adoption of enabling technologies such as the Internet of Things (IoT), Big Data, Additive Manufacturing (AM), and Cloud Computing. The principles of these technologies can also be translated into any aspect of everyday life thanks to printed electronics (PE), which offers techniques to produce unconventional sensors and systems or to make conventional objects "smart". With PE playing a key role in the design of next-generation objects, smart objects fulfill their original function while also measuring physical quantities in the surrounding environment and communicating with other objects or remote units. Many PE technologies could be adopted, but above all Aerosol Jet Printing (AJP), with its characteristics, can be considered for this purpose, being able to print a huge variety of functional materials on any kind of surface. In combination with Flash Lamp Annealing (FLA), a low-temperature thermal process, it is possible to complete the production of sensors and circuits on any kind of substrate. The aim of this thesis is to identify innovative methods and processes that allow sensors, circuits and electronics to be embedded directly on the surface of objects, and to analyze their metrological characteristics. To this end, compatibility studies have been carried out considering different materials, both in terms of substrates and of inks, for the realization of smart sensors and objects. Furthermore, the design, fabrication and testing of sensors and circuits have been analyzed in different fields. Chapter 1 will provide the background and the outline of this dissertation. Smart objects can be manufactured with numerous different technologies and materials, depending on the required performance and on the specific application. The purpose of chapter 2 is to provide an analysis of the 3D PE technologies that enable the printing of sensors on complex surfaces. 
First, an explanation of the technologies under consideration is provided. Then, focusing on the technologies actually used, an in-depth analysis of AJP and FLA will be provided in chapter 3. The examples carried out are divided into four macro-areas (wearable devices, paper-based packaging, and wet-laboratory applications such as cell and biomolecule sensing) to demonstrate the applicability of the proposed methodologies in the realization of sensors and smart objects. Starting from chapter 4, applicative examples will be reported. The tested prototypes were involved in different working contexts, from the food industry to medical rehabilitation, passing through laboratory analysis, while sharing a common trait: measurement by means of unconventional sensors. This underlines the applicability of the proposed methodologies to any kind of requirement, giving the possibility to turn everyday objects into smart ones, and thus demonstrating the flexibility of the identified methods and the pervasiveness of the sensors and smart objects made this way.
APA, Harvard, Vancouver, ISO, and other styles
18

SOLINAS, ANTONIO VINCENZO. "Tecniche di ottimizzazione per la stima e il monitoraggio di sistemi elettrici di potenza." Doctoral thesis, Università degli Studi di Cagliari, 2022. http://hdl.handle.net/11584/332670.

Full text
Abstract:
The evolution of modern electric power grids requires increasingly complex, flexible, and effective management and control tools. These rely on data obtained from monitoring systems consisting of measurement instruments with advanced features but still characterized by uncertainty, and of estimation methods influenced by all the uncertainty sources of the measurement chain. Consequently, the availability of mathematical tools able to effectively model estimation problems, fully exploiting the characteristics of measurement instruments and a priori knowledge, is increasingly important. The thesis explores an optimization-based approach to some relevant estimation and monitoring problems in power systems. The aim is to optimize estimation performance by integrating into the problem formulation a priori information on the domain under investigation and on the state to be reconstructed, exploiting the available knowledge on measurement errors. The optimization formulations considered involve two terms: the approximation error and the regularization term. The first, also called the fitting term, evaluates how well the solution matches the available measurements, while the regularization term encodes the a priori information on the solution to be recovered. The estimation problems addressed in the thesis are the estimation of the main sources of harmonic pollution in a distribution network, i.e., Harmonic Source Estimation (HSoE), and the simultaneous estimation of transmission line parameters and systematic errors in the measurement chain. In both cases, synchronized phasor measurements (at fundamental or harmonic frequency), available from a new generation of measurement devices, are considered. The identification of the prevailing harmonic sources is modelled as an underdetermined problem with a sparse solution and is addressed with the Compressive Sensing approach, as an L1 minimization. 
The problem is first tackled using L1 minimization with equality constraints; then, starting from the theoretical aspects involved in the evaluation of measurement uncertainties, and aiming to reduce their impact on HSoE algorithms, a new formulation based on L1 minimization with quadratic constraints is proposed. To maximize the algorithm's performance, a whitening matrix is presented that retrieves the information on the distributions of the measurement errors and thus estimates the corresponding energy bounds. The effectiveness of the presented solution is evaluated through simulations on appropriate test networks. The simultaneous estimation of line parameters and systematic errors of the measurement chain is addressed by a new method based on synchronized measurements from phasor measurement units (PMUs). The method exploits a multiple-branch approach and a potentially large number of measurements, corresponding to multiple operating conditions. It is designed in the context of Tikhonov regularization and exploits the high accuracy and reporting rate of PMUs more effectively, to improve the estimation of line parameters and to refine the compensation of systematic measurement errors, particularly from instrument transformers. The validity of the proposal has been verified through simulations performed on the IEEE 14-bus test system, and the validity of the proposed paradigm has been confirmed by all the applications and experiments conducted. The flexibility of the optimization techniques discussed in this thesis has made it possible to model various kinds of a priori information about the considered domain and the state to be recovered, exploiting any type of available knowledge on measurement errors. These techniques can therefore be considered a valid, flexible, and effective tool for addressing the new measurement problems posed by modern electric power grids.
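The L1 minimization with equality constraints mentioned above (basis pursuit) can be cast as a linear program. A toy sketch in Python/SciPy, with a random real-valued measurement matrix standing in for the (complex) harmonic network model; the dimensions and data are illustrative only, not the thesis's test networks:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 10                       # candidate source buses, harmonic measurements
A = rng.standard_normal((m, n))     # stand-in for the network measurement model
x_true = np.zeros(n)
x_true[[3, 11]] = [1.5, -2.0]       # two prevailing harmonic sources (sparse state)
b = A @ x_true

# Basis pursuit: min ||x||_1  s.t.  A x = b,
# linearized with slacks t >= |x| over variables z = [x, t].
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])     #  x - t <= 0  and  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])  #  A x = b
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
```

With enough measurements relative to the number of active sources, the minimizer coincides with the sparse true state; the quadratic-constraint variant proposed in the thesis instead replaces the equality A x = b with an uncertainty-aware bound of the form ||A x - b|| <= epsilon.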
APA, Harvard, Vancouver, ISO, and other styles
19

ROCCA, LUCA. "IEC 61850: a safety and security analysis in industrial multiprotocol networks." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/944847.

Full text
Abstract:
Nowadays, Ethernet is the most popular technology in digital communication thanks to its flexibility and worldwide spread. This is why the main industrial communication protocols today are based on Ethernet. Ethernet is commonly said to support a large number of different protocols, but only accurate laboratory tests can verify this assumption. Tests are performed on a hybrid network using three protocols: Profinet, IEC 61850 and TCP/IP. The combination of these three protocols represents an ideal industrial application where process automation, substation automation and general-purpose data sharing interact. However, a shared network can cause a drop in safety and security in the industrial plant network. Safety has always played an important role in the life of human beings and the environment. The safety of process control systems is standardized in IEC 61508 and IEC 61784-3, but the same is not true in the area of substation automation systems. Several tests are performed to verify whether IEC 61850 (the standard protocol for substation automation) meets the requirements stated in IEC 61508 and whether it can be used for safety-related functions. Security issues for industrial plants have become increasingly relevant during the past decade, as industry has relied more and more on communication protocols. This work examines the security issues for IEC 61850 addressed by IEC 62351-6, providing an in-depth treatment of secure GOOSE communication. The major issue in implementing such a standard remains the computational power required by the SHA algorithm on low-powered devices. As no manufacturer has yet made available a device implementing secure GOOSE communication, this solution is discussed only from a theoretical point of view. A security test on GOOSE communication, during which its security issues are exploited, is then presented. 
This test aims to show what consequences may occur when a carefully crafted packet is injected into the IEC 61850 network. In the last part of this section, some countermeasures to mitigate this issue are provided.
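To make the computational-cost argument concrete, the sketch below appends a SHA-256-based authentication tag to a raw frame payload and verifies it. This is a generic HMAC illustration, not the exact authentication scheme mandated by IEC 62351-6; it only shows the kind of per-frame hashing work a low-powered device would have to sustain at GOOSE repetition rates:

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 digest size in bytes

def tag_frame(payload: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 authentication tag to a raw frame payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_frame(frame: bytes, key: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    return hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())
```

A receiver holding the shared key would drop any frame whose tag does not verify, which defeats the naive packet-injection attack described above at the cost of one hash computation per frame.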
APA, Harvard, Vancouver, ISO, and other styles
20

BODO, ROBERTO. "Tecniche di Machine Learning per Macchine Smart e Impianti Smart: Applicazioni in Manutenzione Predittiva e Condition Monitoring Industriale." Doctoral thesis, Università degli studi di Padova, 2022. http://hdl.handle.net/11577/3447552.

Full text
Abstract:
Interest in Artificial Intelligence techniques has risen across all industrial knowledge domains in the last decade. Such methods support new advanced functionalities assisted by the integration of cyber-physical systems and monitoring units inside machines and plants, as the "smart" appellative recalls. The thesis focuses on Condition Monitoring and Predictive Maintenance, in which Anomaly Detection, Fault Classification, and Remaining Useful Life estimation are the most common tasks to solve. Most industry-class applications rely on Supervised Machine Learning techniques, as their evaluation metrics are more consolidated than those of unsupervised methods. However, additional restrictions add complexity to the development of a marketable solution. During the development of such algorithms, a regular Machine Learning workflow needs to manage additional constraints coming from the monitoring platform, like the available CPU, memory, or different field conditions. Such restrictions often lead to design trade-offs, but this information is known a priori, and the design process can benefit from it. This thesis aims to analyze some of the most common constraints along the Machine Learning workflow, and to provide techniques that beneficially embed such prior information according to the monitoring task and the processing step. To validate the methods, two industrial case studies regarding the employment of industrial humidifiers are analyzed. In Feature Engineering, the selection of features is one of the most impactful factors, because most of the computational and memory resources used by a deployed model go into feature extraction. A sustainable Feature Selection thus needs to consider the extraction costs of the features and their validity for a given field condition. 
Feature Selection is addressed by proposing an algorithm called Feature Voting, which performs a multi-objective selection that considers datasets belonging to different field conditions and feature attributes, like the computational and memory extraction costs. Feature Voting tuning is also performed based on the Design Of Experiments. Feature Voting boosts the usage performance of a deployed machine in a wide range of working environments, as required by industrial practice. A maintenance-based target redefinition has been proposed to improve Fault Classification efficiently, i.e., grouping fault types according to shared maintenance interventions to lower the classification complexity. The approach is extended by considering the maintenance costs, leading to cost-sensitive adaptations of classifiers exploiting Ensemble Learning and the client-server paradigm. The procedure lowers the misclassification cost without retraining the deployed classifiers. In postprocessing, an additional task focuses on the practical use of the predictions given by a black-box, unstable classifier on which maintenance action to perform on the system. A prediction-stabilization technique is proposed by exploiting Fuzzy Set theory; a dynamical hysteresis mechanism is also introduced to increase the scheduling margin. The proposed system provided more stable predictions over time and anticipated the intervention alarm, even in highly uncertain conditions. A second postprocessing task deals with the post-prognostic usage of the Remaining Useful Life estimate and the management of its uncertainty. A method for the posterior correction of such an estimate is proposed. The technique exploits Functional Profile Modeling to describe the stress experienced by the monitored system in the field; then, a proper scaling is applied. The method represents an alternative approach, driven by the industrial requirements of learning from the field and integrating the extracted posterior information. 
The thesis as a whole also offers an analysis methodology that relates a regular Machine Learning workflow to the open issues of industrial interest in the smart maintenance field.
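The prediction-stabilization step can be illustrated with a toy filter: noisy binary maintenance predictions from a black-box classifier are smoothed into a membership score in [0, 1], and a hysteresis band turns that score into a stable alarm. The exponential update and the fixed thresholds below are simplifying assumptions of this sketch, not the thesis's actual fuzzy-set formulation (which also uses a dynamical hysteresis):

```python
def stabilize(preds, rise=0.7, fall=0.3, alpha=0.3):
    """Turn noisy 0/1 maintenance predictions into a stable alarm signal."""
    score, alarm, out = 0.0, False, []
    for p in preds:
        score = (1 - alpha) * score + alpha * p  # smoothed membership in [0, 1]
        if not alarm and score >= rise:          # rising threshold: raise alarm
            alarm = True
        elif alarm and score <= fall:            # falling threshold: clear alarm
            alarm = False
        out.append(alarm)
    return out
```

Alternating predictions never cross the rising threshold, while a sustained run of positives raises the alarm and keeps it raised through isolated negatives.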
APA, Harvard, Vancouver, ISO, and other styles
21

Pivato, Paolo. "Analysis and Characterization of Wireless Positioning Techniques in Indoor Environment." Doctoral thesis, Università degli studi di Trento, 2012. https://hdl.handle.net/11572/369196.

Full text
Abstract:
Indoor positioning, also referred to as indoor localization, can be defined as the process of providing accurate coordinates of people or objects inside a covered structure, such as an airport, a hospital, or any other building. The applications and services enabled by indoor localization are various, and their number is constantly growing. Industrial monitoring and control, home automation and safety, security, logistics, information services, ubiquitous computing, health care, and ambient assisted living (AAL) are just a few of the domains that indoor positioning technology can benefit. A significant example is offered by local information pushing. In this case, a positioning system sends information to a user based on her/his location. For instance, a processing plant may push workflow information to employees regarding operating and safety procedures relevant to their locations in the plant. The positioning system tracks each employee and has knowledge of the floor plan of the facility as well as of the procedures. When an employee walks inside a defined perimeter of a particular area, such as the packaging department, the positioning system displays on the user's personal digital assistant (PDA) information regarding the work expected to be done in that area. This significantly increases efficiency and safety by ensuring that employees follow carefully designed guidelines. Location-enabled applications like this are becoming commonplace and will play important roles in our everyday life. The design of a positioning system for indoor applications is a challenging task. In fact, the global positioning system (GPS) is a great solution for outdoor use, but its applicability is strongly limited indoors because the signals coming from GPS satellites cannot penetrate the structure of most buildings. For this reason, considerable research interest in alternative, non-satellite-based indoor positioning solutions has arisen in recent years. 
Actually, the positioning problem is strictly related to the measurement of the distance between the object to be located and a number of landmarks with known coordinates. The position of the target is then commonly determined by means of appropriate statistical or geometrical algorithms. Several approaches have been proposed, and various fundamental technologies have been used so far. Today, the distance between two objects can easily be measured using laser-, optical- and ultrasound-based devices. However, evident drawbacks of these systems are their sensitivity to the line-of-sight (LOS) constraint and the strict object-to-object bearing requirement. The latter becomes an even worse downside when the topology of the system changes dynamically due to the mobility of the target. On the other hand, wireless-based ranging solutions are less sensitive to obstacles and to the non-alignment of devices. In addition, they may take advantage of existing radio modules and infrastructures used for communications. Accordingly, the ever-growing popularity of mobile and portable embedded devices provided with wireless connectivity has encouraged the study and development of radio-frequency (RF)-based positioning techniques. The core of such systems is the measurement of distance-related parameters of the wireless signal. The two most common approaches for wireless ranging are based on received-signal-strength (RSS) and message time-of-flight (ToF) measurements, respectively. In particular:
• the RSS-based method relies on the relationship between the measured received signal power and the transmitter-receiver distance, assuming that the signal propagation model and the transmitted power are known;
• the ToF-based technique relies on the measured signal propagation time and the speed of light, owing to the fundamental law that relates distance to time. 
The work presented in this dissertation is aimed at investigating and defining novel techniques for positioning in indoor environments based on wireless distance measurements. In particular, this study is devoted to the analysis and in-depth evaluation of the use of RSS and ToF measurements for different indoor positioning applications. State-of-the-art techniques relying on RSS- and ToF-based ranging methods have already proven to be effective for the localization of objects inside buildings. Nevertheless, several limitations exist (e.g., on the accuracy of the ranging, its impact on localization algorithms, etc.). The work presented in this dissertation attempts to:
1. investigate the main sources of uncertainty affecting RSS- and ToF-based indoor distance measurement;
2. analyze the impact of ranging error on the accuracy of positioning;
3. propose, on the basis of the understanding gained from 1 and 2, novel and effective systems to overcome the above-mentioned limitations and improve localization performance.
The novel contributions of this thesis can be summarized as follows:
• In-depth analysis of both RSS- and ToF-based distance measurement techniques, in order to assess the advantages and disadvantages of each.
• Guidelines for using different ranging methods in different conditions and applications.
• Implementation and field testing of a novel data-fusion algorithm combining both RSS and ToF techniques to improve ranging accuracy.
• Theoretical and simulation-based analysis of chirp spread spectrum (CSS) signals for low-level timestamping.
• Experimental assessment of CSS-based timestamping as a key enabler for high-accuracy ToF-based ranging and time synchronization.
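The two ranging principles listed above map to one-line formulas: inverting a log-distance path-loss model for RSS, and multiplying the propagation time by the speed of light for ToF. The path-loss parameters below (reference power, reference distance, exponent) are illustrative assumptions, not calibrated values from the dissertation:

```python
def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0_m=1.0, n=2.5):
    """Invert the log-distance model RSS = P0 - 10*n*log10(d/d0)."""
    return d0_m * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

def tof_to_distance(tof_s, c=299_792_458.0):
    """One-way time of flight multiplied by the speed of light."""
    return c * tof_s
```

The contrast in error sensitivity is visible directly: RSS enters an exponent, so a few dB of fading shifts the estimate multiplicatively, while a ToF timestamping error adds about 0.3 m per nanosecond.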
APA, Harvard, Vancouver, ISO, and other styles
22

Bevilacqua, Maurizio. "Novel models and methods for structured light 3D Scanners." Doctoral thesis, Università degli studi di Salerno, 2012. http://hdl.handle.net/10556/289.

Full text
Abstract:
2010 - 2011
The work carried out during the PhD course in Information Engineering focused on novel techniques for the quick calibration of a cheap 3D scanner. The scanner is based on a simple camera and a commercial projector, with the goal of developing low-cost devices with high reliability, capable of acquiring large areas quickly. Many systems based on this configuration exist, and these devices have both benefits and drawbacks. They can acquire objects with large surfaces in a few seconds and with adequate accuracy. On the other hand, they need a lengthy calibration and they are very sensitive to the noise due to the flicker of the light source. Considering these problems, I tried to find new robust calibration techniques in order to reduce the sensitivity to noise and, in this way, to obtain high-performance low-cost 3D scanners with short calibration and reconfiguration times. There are many calibration techniques available for these systems. First, it is necessary to calibrate the camera and then the overall system, by projecting encoded patterns that are either analog, typically sinusoidal, or digital, such as Gray codes. These techniques are very time-consuming because they require a prior camera calibration phase separate from the calibration of the whole system, and disturbing factors are also introduced by ambient light noise. Indeed, many projection patterns, used to map the calibration volume, need to be projected. In order to achieve our goal, different types of structured light scanner have been studied and implemented, according to the schemes proposed in the literature. For example, there exist scanners based on sinusoidal patterns and others based on digital patterns, which also allowed a real-time implementation. On these systems, classical calibration techniques were implemented and performance was evaluated as a compromise between time and accuracy of the system.
Classical calibration involves the acquisition of phase maps in the calibration volume following a pre-calibration of the camera. At the beginning, an algorithm that allows calibration through the acquisition of only two views was implemented, including the camera calibration, modeled by the pin-hole model, in the calibration algorithm. To do this, we assumed a geometric model for the projector, which was verified by the evaluation of experimental data. The projector is then modeled as a second camera, also using the pin-hole model, and the camera-projector pair is calibrated as a pair of stereo cameras, using a DLT calibration. Thanks to the acquisition of two views of the target volume during calibration, it is possible to extract the parameters of the two devices through which the projected pattern can be generated and the acquisition by the camera can be done, eliminating the problem of noise due to ambient light. This system is a good compromise: calibration time dropped from half an hour to a couple of minutes, with an increase in uncertainty on the order of one percentage point of the calibration volume, whose depth was chosen to be 10 centimeters… [edited by author]
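The DLT calibration of the camera-projector pair mentioned above amounts to estimating a 3x4 projection matrix per device from 3D-2D correspondences. The following is a generic textbook DLT sketch (not the author's implementation), with a hypothetical synthetic camera matrix `P_true` used only to check the round trip:

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """Direct Linear Transform: estimate the 3x4 projection matrix P
    (up to scale) from >= 6 non-degenerate 3D-2D correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X3d):
    """Apply P to homogeneous 3D points and dehomogenize to pixel coordinates."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic check: a hypothetical projection matrix and 8 non-coplanar points.
P_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 5]])
pts = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1],
                [2, 1, 2], [1, 2, 3], [2, 2, 1], [0.5, 1.5, 2.5]])
img = project(P_true, pts)
P_est = dlt_projection_matrix(pts, img)  # recovers P_true up to scale
```

Treating the projector as a second camera, the same routine applied to the projector's pattern coordinates yields the second matrix of the stereo pair.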
X n.s.
APA, Harvard, Vancouver, ISO, and other styles
23

Di, Caro Domenico. "NMR measurements for hazelnuts classification." Doctoral thesis, Università degli studi di Salerno, 2018. http://hdl.handle.net/10556/3113.

Full text
Abstract:
2016 - 2017
In this work, a method for the quality assessment of in-shell hazelnuts, based on low-field NMR, has been proposed. The aim of the work is to develop an in-line classification system able to detect the hidden defects of the hazelnuts. After an analysis of hazelnut oil, carried out in order to verify the applicability of the NMR techniques and to determine some configuration parameters, the influence factors that affect these measurements in the presence of solid samples instead of liquids have been analyzed. Then, the measurement algorithms were defined. The proposed classification procedure is based on the CPMG sequence and the analysis of the transverse relaxation decay. The procedure includes three different steps in which different features are detected: moisture content, kernel development and mold development. These quality parameters have been evaluated by analyzing the maximum amplitude and the second echo peak of the CPMG signal, and the T2 distribution of the relaxation decay. In order to ensure high repeatability and low execution time, special attention has been paid to the definition of the data processing. Finally, the realized measurement system has been characterized in terms of classification performance. In this phase, because of the reduced size of the test sample (especially for the hazelnuts with defects), a resampling method, the bootstrap, was used. [edited by Author]
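The resampling step mentioned at the end of the abstract can be sketched as follows. This is a generic percentile-bootstrap sketch, assuming hypothetical per-hazelnut classification outcomes (1 = correctly classified); none of the numbers are the thesis data:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=42):
    """Nonparametric bootstrap: resample with replacement, recompute the
    statistic, and report a percentile confidence interval."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return stat(sample), (lo, hi)

# Hypothetical small test set of classification outcomes:
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]
acc, (lo, hi) = bootstrap_ci(outcomes)  # accuracy with a 95% interval
```

The appeal of the bootstrap here is precisely the small-sample setting: the interval width makes explicit how little a 15-nut test set actually constrains the classifier's accuracy.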
XVI n.s. (XXX ciclo)
APA, Harvard, Vancouver, ISO, and other styles
24

Gasparini, Leonardo. "Ultra-low-power Wireless Camera Network Nodes: Design and Performance Analysis." Doctoral thesis, Università degli studi di Trento, 2011. https://hdl.handle.net/11572/368297.

Full text
Abstract:
A methodology for designing Wireless Camera Network nodes featuring long lifetime is presented. Wireless Camera Networks may find widespread application in the fields of security, animal monitoring, elder care and many others. Unfortunately, their development is currently thwarted by the lack of nodes capable of operating autonomously for a long period of time when powered with a couple of AA batteries. In the proposed approach, the logic elements of a Wireless Camera Network node are clearly identified along with their requirements in terms of processing capabilities and power consumption. For each element, strategies leading to significant energy savings are proposed. In this context, the employment of a custom vision sensor and an efficient architecture are crucial. In order to validate the methodology, a prototype node is presented, mounting a smart sensor and a flash-based FPGA. The node implements a custom algorithm for counting people, a non-trivial task requiring a considerable amount of on-board processing. The overall power consumption is limited to less than 5 mW, thus achieving a two orders of magnitude improvement with respect to the state of the art. By powering the system with two batteries providing 2200 mAh at 3.3 V, the expected lifetime of the system exceeds two months even in the worst-case scenario.
APA, Harvard, Vancouver, ISO, and other styles
25

Adami, Andrea. "MEMS Piezoresistive Micro-Cantilever Arrays for Sensing Applications." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/368350.

Full text
Abstract:
In several application fields there is an increasing need for diffused on-field monitoring of parameters able to diagnose potential risks or problems in advance or at an early stage, in order to reduce their impact. The timely recognition of specific parameters is often the key to a tighter control of production processes, for instance in the food industry, or of the development of dangerous events such as pollution or the onset of diseases in humans. Diffused monitoring can hardly be performed with traditional instrumentation in specialised laboratories, due to the time required for sample collection and analysis. In all applications, one of the key points for a successful solution of the problem is the availability of detectors with high sensitivity and selectivity to the chemical or biochemical parameters of interest. Moreover, an increasingly diffused on-field control of parameters can only be achieved by replacing the traditional, costly laboratory instrumentation with a larger number of low-cost devices. In order to compete with well-known and established solutions, one of the main features of new systems is the capability to perform specific tests on the field with fast response times; in this perspective, a fast measurement of a reduced number of parameters is to be preferred to a straightforward “clone” of laboratory instrumentation. Moreover, the detector must also provide robustness and reliability for real-world applications, with low costs and ease of use. In this paradigm, MEMS technologies are emerging as a way to realise miniaturised and portable instrumentation for agro-food, biomedical and material science applications with high sensitivity and low cost. In fact, MEMS technologies can reduce the manufacturing cost of detectors by taking advantage of the parallel manufacturing of large numbers of devices at the same time; furthermore, MEMS devices can potentially be expanded to systems with a high level of measurement parallelism.
Device cost is also a key issue when devices must be “single use”, which is a must in applications where cross-contamination between different measurements is a major cause of system failure and may have severe consequences, as in biomedical applications. Among different options, cantilever micro-mechanical structures are one of the most promising technical solutions for the realisation of MEMS detectors with high sensitivity. This thesis deals with the development of cantilever-based sensor arrays for chemical and biological sensing and material characterisation. In addition to the favourable sensing properties of single devices, an array configuration can be easily implemented with MEMS technologies, allowing the detection of multiple species at the same time, as well as the implementation of reference sensors to reject both physical and chemical interfering signals. In order to provide the capability to operate in the field, solutions providing simple system integration and high robustness of readout have been preferred, even at the price of a lower sensitivity with respect to other options requiring more complex setups. In particular, piezoresistive readout has been considered the best trade-off between sensitivity and system complexity, due to the easy implementation of readout systems for resistive sensors and to their high potential for integration with standard CMOS technologies. The choice has been made after an analysis of the mechanical and sensing properties of microcantilevers, also depending on the technological options for their realisation. As case studies for the development of cantilever devices, different approaches have been selected for gas sensing applications, DNA hybridisation sensing and material characterisation, based on two different technologies developed at the BioMEMS research unit of FBK (Fondazione Bruno Kessler - Center for Materials and Microsystems, Trento).
The first process, based on wet-etching bulk micromachining techniques, has provided 10 µm-thick silicon microcantilevers, while the second technology, based on Silicon-On-Insulator (SOI) wafers, has provided a reduction of device thickness, thus resulting in an increase of sensitivity. The performance of the devices has been investigated by analytical and numerical modelling of both the structures and the readout elements, in order to optimise both fabrication technology and design. In particular, optimal implant parameters for the realisation of the piezoresistors have been evaluated by process simulation with the Athena Silvaco software, while ANSYS has been used to analyse the best design for the devices and the effect of some technology-related issues, such as the effect of underetch during the release of the beams or residual stresses. Static and modal analyses of cantilever bending in different conditions have been performed, in order to evaluate the mechanical performance of the device, and the results have later been compared with the experimental characterisation. With regard to gas sensing applications, the development has been oriented to resonant sensors, where the adsorption of analytes on an adsorbent layer deposited on the cantilever leads to a shift of the resonance frequency of the structure, thus providing a gravimetric detection of the analytes. The detection of amines, as markers of fish spoilage during transport, has been selected as a case study for the analysis of these sensors. The sensitivity of the devices has been measured, with results compatible with the models. Cantilever structures are also suitable for bioaffinity-based applications or genomic tests, such as the detection of specific Single Nucleotide Polymorphisms (SNPs) that can be used to analyse the predisposition of individuals to genetic-based diseases.
In this case, measurements are usually performed in the liquid phase, where viscous damping of the structures results in a severe reduction of the resonance quality factor, which is a key parameter for the device detection limit. Hence, cantilevers working in “bending mode” are usually preferred for these applications. In this thesis, the design and technologies have been optimised for this approach, which has different requirements with respect to resonant detectors. In fact, the interaction of target analytes with properly functionalised surfaces results in a bending of the cantilever device, which is usually explained by a number of mechanisms ranging from electrostatic and steric interaction of molecules to energy-based considerations. In the case of DNA hybridisation detection, the complexity of the molecular interactions and solid-liquid interfaces leads to a number of different phenomena concurring in the overall response. The main parameters involved in the cantilever bending during DNA hybridisation have been studied on the basis of the physical explanations available in the literature, in order to identify the key issues for an efficient detection. Microcantilever devices can also play a role in thin-film technologies, where residual stresses and material properties in general need to be accurately measured. Since cantilever sensors are highly sensitive to stress, their use is straightforward for this application. Moreover, apart from their sensitivity, they also have other advantages over other methods for stress measurement, such as the possibility to perform on-line measurements during film deposition in an array configuration, which can be useful for combinatorial approaches to the development of thin-film material libraries. In collaboration with the Plasma Advanced Materials (PAM) group of the Bruno Kessler Foundation, the properties of TiO2 films deposited by sputtering have been measured as a case study for these applications.
In addition to residual stress, a method for measuring the Young's modulus of the deposited films has been developed, based on measuring, by means of a stylus profilometer, the increase in beam stiffness due to the TiO2 film. The optimal data analysis procedure has been evaluated in order to increase the efficiency of the measurement. In conclusion, this work has covered the development of MEMS-based microcantilever devices for a range of different applications, by evaluating the technological solutions for their realisation, optimising the design and testing the realised devices. The results validate the use of this class of devices in applications where high-sensitivity detectors are required for portable analysis systems.
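The resonant (gravimetric) operating mode described above rests on standard textbook relations for a rectangular cantilever, not specific to the devices of the thesis: bending stiffness k = E·w·t³/(4·L³), resonance f₀ = (1/2π)·√(k/m_eff), and, for a small added mass, Δm ≈ −2·m_eff·Δf/f₀. A sketch with hypothetical silicon-like dimensions:

```python
import math

def spring_constant(E, w, t, L):
    """Bending stiffness of a rectangular cantilever: k = E*w*t^3 / (4*L^3)."""
    return E * w * t**3 / (4 * L**3)

def resonance_hz(k, m_eff):
    """Fundamental resonance: f0 = (1/(2*pi)) * sqrt(k / m_eff)."""
    return math.sqrt(k / m_eff) / (2 * math.pi)

def added_mass(f0, f_loaded, m_eff):
    """Gravimetric detection: for a small downward shift, dm ~ -2*m_eff*df/f0."""
    return -2 * m_eff * (f_loaded - f0) / f0

# Hypothetical cantilever: E = 169 GPa, 100 um wide, 10 um thick, 500 um long.
k = spring_constant(169e9, 100e-6, 10e-6, 500e-6)   # ~33.8 N/m
f0 = resonance_hz(k, 1e-9)                          # with an assumed 1 ug m_eff
```

The linearized Δm formula is what turns a measured frequency shift into the adsorbed analyte mass in resonant operation.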
APA, Harvard, Vancouver, ISO, and other styles
26

Tosato, Pietro. "Smart Energy Systems: using IoT Embedded Architectures for Implementing a Computationally Efficient Synchrophasor Estimator." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/367646.

Full text
Abstract:
Energy efficiency is a key challenge in building a sustainable society. It can be pursued in a variety of ways: for instance, from the reduction of the environmental impact of appliance manufacturing, to the implementation of low-energy communication networks, or the management of existing infrastructures in a smarter way. The current direction is the integration of different energy systems under a common management scheme, with the aim of harmonizing them. In this context, smart cities already envision the use of information and communication technologies (ICT) to smartify objects and services, connecting people and machines. An important enabling technology for smart cities is certainly the Internet of Things (IoT). Both smart cities and IoT have been extensively investigated over the last few years, also under the influence of European funded projects. Smart cities apply communication and networking technologies, very often using the IoT paradigm, to address relevant issues like traffic congestion, population growth and crowding, besides implementing innovative services and modernizing existing infrastructures, e.g. smart mobility. IoT also greatly helps in monitoring and better managing energy consumption, realizing smart homes, smart buildings and smart grids. As far as the power grid is concerned, the direction is to harness IoT technologies to improve flexibility, ease of use and, ultimately, energy efficiency, while preserving stability and safety. Today the electrical grid is facing deep changes, mostly caused by the intensive deployment of Distributed Energy Resources (DER) based on renewable sources such as photovoltaic plants or wind farms. Managing such heterogeneous active distribution networks (ADNs) represents one of the most important challenges to be faced in the future of energy systems.
The integration of active elements into the grid is challenging because of both the great potential they bring in energy production and the hazard they may represent if not properly managed (e.g. violation of operational constraints). ADN implementation relies on the deployment of high-performance real-time monitoring and control systems. It is well accepted that phasor measurement units (PMUs) are one of the most promising instruments to overcome many problems in ADN management, as they support a number of applications, such as grid state estimation, topology detection, volt-var optimization and reverse power flow management. However, classic PMUs are conceived to measure synchrophasors in transmission systems, while distribution systems have very different characteristics and, in general, different needs. Therefore, tailoring the characteristics of new-generation PMUs to the needs of ADNs is currently very important. This new kind of PMU must address a few important design challenges:
1. improved angle measurement capabilities, to cope with the smaller angle differences that distribution grids exhibit;
2. low cost, to promote an extensive deployment in the grid.
These two requirements are clearly in opposition. In this dissertation, a low-cost PMU design approach, partially influenced by IoT ideas, is presented.
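At its core, a synchrophasor estimator extracts the amplitude and phase of the fundamental from a window of samples. The following is a deliberately simplified single-bin DFT sketch at the nominal frequency, far from a standard-compliant PMU (no off-nominal tracking, no timestamp alignment); the 230 V / 0.3 rad test signal is hypothetical:

```python
import math, cmath

def synchrophasor(samples, fs, f0=50.0):
    """Single-bin DFT estimate of the phasor of an (assumed) f0 sinusoid,
    observed over an integer number of cycles at sample rate fs.
    Returns (RMS amplitude, phase in radians)."""
    n = len(samples)
    s = sum(x * cmath.exp(-2j * math.pi * f0 * k / fs)
            for k, x in enumerate(samples))
    phasor = 2 * s / n  # peak amplitude and phase of the fundamental
    return abs(phasor) / math.sqrt(2), cmath.phase(phasor)

fs = 5000.0  # 100 samples per 50 Hz cycle
wave = [230 * math.sqrt(2) * math.cos(2 * math.pi * 50 * k / fs + 0.3)
        for k in range(int(fs / 50) * 10)]  # ten full cycles
rms, phase = synchrophasor(wave, fs)
```

The thesis's design problem is precisely what this sketch hides: achieving the small angle resolution needed in distribution grids on low-cost hardware, where timestamping and ADC quality dominate the error budget.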
APA, Harvard, Vancouver, ISO, and other styles
27

Panina, Ekaterina. "Design and characterisation of SPAD based CMOS analog pixels for photon-counting applications." Doctoral thesis, Università degli studi di Trento, 2014. https://hdl.handle.net/11572/368962.

Full text
Abstract:
Recent advancements in biomedical research and imaging applications have ignited an intense interest in single-photon detection. Along with single-photon resolution, nanosecond or sub-nanosecond timing resolution and high sensitivity of the device must be achieved at the same time. Single-Photon Avalanche Diodes (SPADs) have proved their potential in terms of shot-noise-limited operation, excellent timing resolution and wide spectral range. Nonetheless, recently presented SPAD-based arrays suffer from low detection efficiency because of the substrate area occupied by the additional processing electronics. This dissertation presents the design and experimental characterization of a few compact analog readout circuits for SPAD-based arrays. Targeting the applications where spatial resolution is the key requirement, the work focuses on circuit compactness, that is, on the refinement of the pixel fill factor. Consisting of only a few transistors, the proposed structures are remarkable for their small area occupation. This significant advancement has been achieved with an analog implementation of the additional circuitry instead of the standard digital approach. Along with compactness, the distinguishing features of the circuits are low power consumption, low output non-linearity and low pixel-to-pixel non-uniformity. In addition, experimental results on time-gated operation have proved the feasibility of a sub-nanosecond time window.
APA, Harvard, Vancouver, ISO, and other styles
28

Stellini, Marco. "Evaluation of Uncertainty and Repeatability in Measurement: two application studies in Synchronization and EMC Testing." Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3425620.

Full text
Abstract:
Efficient organization of measurement tasks requires the knowledge and characterization of the parameters and effects that may affect the measurement itself. Uncertainty analysis is an example of how measurement accuracy is often difficult to quantify. Repeatability also plays a key role: this is the ability to replicate the tests and the related measurements at different times. The research focused on this aspect of test repeatability analysis. Some specific case studies have been considered, both in the field of measurements related to synchronization between network nodes and in measurements for Electromagnetic Compatibility. Synchronization between the components of a system is a particularly important requirement when considering distributed structures. The network nodes developed for this research are based both on PCs with a Real Time operating system (RTAI) and on Linux-based embedded systems (Acme Systems FOX Board) interfaced to an auxiliary module with a field-programmable gate array (FPGA). The aim of the tests is to measure and classify the uncertainty due to jitter in the Time Stamping mechanism, and consequently to evaluate the resolution and repeatability of the synchronization achieved under different traffic conditions using a standardized synchronization protocol (IEEE 1588-PTPd). The work in Electromagnetic Compatibility has likewise focused on the repeatability of measurements typical of some practical applications. Some experiments involving LISN calibration have been carried out, and some improvements are presented to reduce uncertainty. A theoretical and experimental analysis of the uncertainty associated with ESD tests has been conducted and some possible solutions are proposed. A study of the performance of sites for radiated tests (anechoic chambers, open-area test sites) has been started, using simulations and experimental testing, in order to assess the capability of the different sites. The obtained results are compared with different reference sources. Finally, the results of a research project carried out at the University of Houston on the propagation of electromagnetic fields are reported.
Organizing an efficient measurement campaign requires the knowledge and characterization of the parameters and effects that may influence the measurement itself. Uncertainty analysis is an example of how accuracy is often difficult to quantify. Besides uncertainty, however, repeatability plays a key role, i.e. the possibility of replicating the test and the related measurements at different times. The research activity addressed precisely this aspect of repeatability analysis, considering some specific case studies both in the field of measurements related to synchronization between the nodes of a distributed system and in measurements for Electromagnetic Compatibility. Synchronization is a particularly pressing need when distributed measurement architectures are considered. The network nodes developed for this research are based both on PCs equipped with a Real Time operating system (RTAI) and on Linux-based embedded systems (Acme Systems FOX Board) interfaced to an auxiliary module hosting a field-programmable gate array (FPGA). The tests carried out made it possible to measure and classify the uncertainty due to jitter in the Time Stamp mechanism, and consequently to evaluate the resolution and repeatability of the synchronization achieved under different traffic conditions, using a synchronization protocol standardized as IEEE 1588 (PTPd). In the field of electromagnetic compatibility, the work concentrated on the repeatability analysis of measurements typical of some practical EMC applications. An in-depth analysis of the parasitic phenomena related to the calibration of a LISN was performed, and some constructive improvements were introduced in order to reduce the uncertainty contributions. A theoretical-experimental investigation was conducted on the uncertainty associated with immunity measurements using an electrostatic discharge generator, together with the identification of possible solutions. A study was started on the performance of sites for radiated disturbance measurements (anechoic chambers, open-area test sites), by means of theoretical simulations and field tests, in order to assess the limits of use of the different sites and to compare the obtained results with reference sources. Finally, the results of research carried out at the University of Houston on the propagation of electromagnetic fields are reported.
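The IEEE 1588 synchronization evaluated in this work computes clock offset and path delay from four timestamps exchanged between master and slave; timestamping jitter enters the result directly through them. A minimal sketch of the standard two-step computation (the 100 µs offset / 10 µs delay scenario is a made-up example):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """IEEE 1588 delay request-response exchange:
       t1 = master Sync send time,      t2 = slave Sync receive time,
       t3 = slave Delay_Req send time,  t4 = master Delay_Req receive time.
    Assuming a symmetric path:
       offset = ((t2 - t1) - (t4 - t3)) / 2
       delay  = ((t2 - t1) + (t4 - t3)) / 2"""
    ms = t2 - t1  # master-to-slave measurement
    sm = t4 - t3  # slave-to-master measurement
    return (ms - sm) / 2, (ms + sm) / 2

# Example: slave clock 100 us ahead of the master, 10 us one-way delay.
off, dly = ptp_offset_delay(0.0, 110e-6, 200e-6, 110e-6)
```

Since each timestamp carries its own jitter, the achievable synchronization resolution under traffic load reduces to the statistics of these four numbers, which is exactly what the described test campaign measures.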
APA, Harvard, Vancouver, ISO, and other styles
29

Russo, Domenico. "Innovative procedure for measurement uncertainty evaluation of environmental noise accounting for sound pressure variability." Doctoral thesis, Università degli studi di Salerno, 2017. http://hdl.handle.net/10556/2574.

Full text
Abstract:
2015 - 2016
This study aims to demonstrate the importance of uncertainty evaluation in the measurement of environmental noise in the context of Italian legislation on noise pollution. Attention is focused on the variability of the measurand as a source of uncertainty, and a procedure for the evaluation of uncertainty in environmental noise measurement is proposed. First, drawing on several real noise datasets in order to determine suitable measurement time intervals for the estimation of environmental noise, a data-driven sampling strategy is proposed, which takes into account the observed variability of the measured sound pressure levels. Second, outliers are eliminated from the actual noise measurements using an outlier detection algorithm based on K-neighbors distance. As a third step, the contribution of measurand variability to measurement uncertainty is determined by using the normal bootstrap method. Experimental results exploring the adoption of the proposed method on real data from environmental noise acquisition campaigns confirm the reliability of the proposal. It is shown to be very promising with regard to the prediction of expected values and uncertainty of traffic noise when a reduced dataset is considered. [edited by author]
In recent years, scholars and field experts have focused their attention on the possible sources of uncertainty associated with this activity, trying to arrive at models encompassing all the variables that contribute to the determination of the uncertainty in the measurement of sound pressure levels: the uncertainty due to the characteristics of the measuring instrumentation (sound level meters or multichannel analyzers), the error arising from the positioning of the instrumentation and hence of the microphone transducers, the uncertainty due to the calibrator, as well as the uncertainty to be associated. In order to provide an adequate estimate of the indetermination associated with the measurement of the equivalent environmental noise level, however, it is essential to consider the uncertainty arising from the intrinsic variability of the phenomenon under examination. The topic is of particular scientific interest and, in recent years, many authors have proposed different methodological approaches to this problem: some have focused on the elimination of unwanted sound signals, others on the estimation of the measurement time, and still others directly on the determination of the uncertainty. In light of the above, I decided to integrate the different techniques studied into a single procedure based on the bootstrap method, a statistical technique of resampling with replacement of the initial dataset, since it has no limitations in terms of the shape and properties of the statistical distributions considered and is therefore better suited to the analysis of environmental noise, whose population is not strictly Gaussian. Initially, since the reliability of the estimate of the environmental noise indicators depends significantly on the temporal variability of the noise, and it is therefore fundamental to carefully choose a measurement time that takes into account the statistical variability of the acoustic phenomenon under observation, the algorithm automatically identifies a minimum acquisition time, corresponding to the minimum number of sound pressure levels needed to guarantee the statistical significance of the initial dataset. In a second phase, any anomalous values (outliers) are identified and removed from the acquired signal and, finally, the uncertainty of the measurand is computed by applying the bootstrap method. The results of this method have also been compared with the estimate of the expected value of the short-term acoustic descriptor and of the corresponding uncertainty obtained with the classical method (GUM ISO). Since the quantities computed with the bootstrap method are very close to those determined with the classical method under the hypothesis of a reduced number of samples, the procedure is also particularly suitable for the prediction of the environmental noise indicator when few measurement data are available. [edited by author]
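The descriptor being bootstrapped here is an energy average, Leq = 10·log10((1/N)·Σ 10^(Li/10)), not an arithmetic mean of decibel values. A minimal sketch of its computation and of a bootstrap-based uncertainty evaluation (the traffic levels are hypothetical; the actual procedure also includes minimum-time selection and outlier removal, omitted here):

```python
import math
import random

def leq(levels_db):
    """Equivalent continuous sound level: energy average of short Leq values."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db)
                           / len(levels_db))

def leq_bootstrap_std(levels_db, n_boot=2000, seed=1):
    """Standard deviation of Leq over bootstrap resamples with replacement."""
    rng = random.Random(seed)
    n = len(levels_db)
    reps = [leq([rng.choice(levels_db) for _ in range(n)])
            for _ in range(n_boot)]
    mean = sum(reps) / n_boot
    return math.sqrt(sum((r - mean) ** 2 for r in reps) / (n_boot - 1))

# Hypothetical short-term road traffic levels in dB(A):
traffic = [62.1, 65.4, 63.0, 70.2, 61.8, 64.5, 68.9, 63.7, 66.1, 62.9]
print(leq(traffic), leq_bootstrap_std(traffic))
```

Because of the energy average, a few loud events dominate Leq, which is why the measurand's own variability contributes so strongly to the uncertainty budget.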
XV n.s. (XXIX)
APA, Harvard, Vancouver, ISO, and other styles
30

Nazemzadeh, Payam. "Indoor Localization of Wheeled Robots using Multi-sensor Data Fusion with Event-based Measurements." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/367712.

Full text
Abstract:
In the era in which robots have started to live and work everywhere and in close contact with humans, they should accurately know their own location at any time in order to move and operate safely. In particular, large and crowded indoor environments are challenging scenarios for accurate and robust robot localization. The theory and the results presented in this dissertation address the crucial issue of indoor localization of wheeled robots by proposing novel solutions along three complementary lines, i.e. improving robot self-localization through data fusion, adopting collaborative localization (e.g. using the position information from other robots) and, finally, optimizing the placement of landmarks in the environment once the detection range of the chosen sensors is known. As far as the first subject is concerned, a robot should be able to localize itself in a given reference frame. This problem is studied in detail to achieve a proper and affordable technique for self-localization, regardless of specific environmental features. The proposed solution relies on the integration of relative and absolute position measurements. The former are based on odometry and on an inertial measurement unit. The absolute position and heading data are instead measured sporadically, whenever one of the landmarks spread across the environment is detected. Due to the event-based nature of such measurement data, the robot can work autonomously most of the time, even if accuracy degrades. Of course, in order to keep the positioning uncertainty bounded, it is important that absolute and relative position data are fused properly. For this reason, four different techniques are analyzed and compared in the dissertation. Once the local kinematic state of each robot is estimated, a group of robots moving in the same environment and able to detect and communicate with one another can also collaborate by sharing position information to refine the self-localization results.
In the dissertation, it will be shown that this approach can provide some benefits, although performance strongly depends on the metrological features of the adopted sensors as well as on the communication range. Finally, as far as the problem of optimal landmark placement is concerned, this is addressed by suggesting a novel and easy-to-use geometrical criterion to maximize the distance between the landmarks deployed over a triangular lattice grid, while ensuring that the absolute position measurement sensors can always detect at least one landmark.
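The fusion of sporadic absolute fixes with continuous relative measurements described above can be illustrated with a minimal one-dimensional Kalman-style sketch (in Python; the noise variances, step size and landmark period are all hypothetical values chosen for illustration, not taken from the dissertation):

```python
import random

def predict(x, P, u, q):
    """Relative (odometry) update: move by u, position uncertainty grows by q."""
    return x + u, P + q

def correct(x, P, z, r):
    """Absolute (landmark) update: a Kalman gain weights the measurement."""
    K = P / (P + r)
    return x + K * (z - x), (1 - K) * P

# Robot moves 100 steps of 0.1 m; a landmark fix arrives every 20 steps.
random.seed(0)
true_x, x, P = 0.0, 0.0, 0.0
q, r = 1e-4, 1e-2                        # process / measurement variances (assumed)
for k in range(1, 101):
    true_x += 0.1
    u = 0.1 + random.gauss(0, q ** 0.5)  # noisy odometry increment
    x, P = predict(x, P, u, q)
    if k % 20 == 0:                      # event-based absolute measurement
        z = true_x + random.gauss(0, r ** 0.5)
        x, P = correct(x, P, z, r)
print(round(x, 2), round(P, 4))
```

Between landmark fixes the variance P grows linearly; each correction shrinks it again, which is exactly the bounded-uncertainty behavior the abstract refers to.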
APA, Harvard, Vancouver, ISO, and other styles
31

Landi, Marco. "Bidirectional Metering Advancements and Applications to Demand Response Resource Management." Doctoral thesis, Universita degli studi di Salerno, 2014. http://hdl.handle.net/10556/1448.

Full text
Abstract:
2012 - 2013
The power grid is an electric system capable of performing electricity generation, transmission, distribution and control. Nowadays it is undergoing a deep transformation, which will reshape it completely. Growing electricity demand and the consequent increase of power losses in transmission and distribution grids, rising prices of fossil fuels and the diffusion of renewable resources, the need for a more effective and efficient grid management and use of energy, and the availability of new technologies to be integrated into the grid all push for a modernization of the power grid. Integrating technologies and approaches typical of different areas (i.e. power systems, ICT, measurements, automatic controls), the aim is to build a grid capable of accommodating all types of sources and loads, capable of efficiently delivering electricity while automatically adapting to changes in generation and demand, ultimately empowering customers with new and advanced services. This paradigm is known as the Smart Grid. In this context, the role of measurement theories, techniques and instrumentation is a fundamental one: the automatic management and control of the grid is a completely unfeasible goal without a timely and reliable picture of the state of the electric network. For this reason, a metering infrastructure (including sensors, data acquisition and processing systems, and communication devices and protocols) is needed for the development of a smarter grid. Among the features of such an infrastructure are the ability to execute accurate real-time measurements, the evaluation of power supply quality and the collection of measured data and its communication to the system operator. Moreover, such an architecture can be extended to all kinds of energy consumption, not only electricity.
With the development of an open energy market, an independent entity could be put in charge of the execution of measurements on the grid and the management of the metering infrastructure: in this way, “certified” measurements will be guaranteed, ensuring an equal treatment of all grid and market users. In the thesis, different aspects of measurement applications in the context of a Smart Grid have been covered. A smart meter prototype to be installed in customers’ premises has been realized: it is an electricity meter also capable of interfacing with gas and hot water meters, acting as a hub for monitoring the overall energy consumption. The realized prototype is based on an ARM Cortex M3 microcontroller architecture (precisely, the ST STM32F103), which guarantees a good compromise among cost, performance and availability of internal peripherals. Advanced measurement algorithms ensuring accurate bidirectional measurements even in non-sinusoidal conditions have been implemented in the meter software. In addition to voltage and current transducers, the meter also embeds one proportional and three binary actuators: through them it is possible to intervene directly on the monitored network, allowing load management policies to be implemented. Naturally, the smart meter is only functional as part of a metering and communication infrastructure: this allows not only the collection of measured data and its transmission to a Management Unit, which can thus build an image of the state of the network, but also the provision of relevant consumption information to users and the realization of load management policies. In fact, the realized prototype architecture manages load curtailments in Demand Response programs relying on the price of energy and on a cost threshold that can be set by the user.
Using a web interface, the user can verify his own energy consumption, manage contracts with the utility companies and possibly his participation in DR programs, and also manually intervene on his loads. The thesis also studies storage systems, which are of fundamental importance in a Smart Grid context because they allow generation and consumption to be decoupled. They represent a key driver towards an effective and more efficient use of renewable energy sources and can provide the grid with additional services (such as down and up regulation). In this context, the focus has been on li-ion batteries: measurement techniques for the estimation of their state of life have been realized. Since batteries are becoming increasingly important in grid operation and management, knowing the degradation they are subjected to has a relevant impact not only on grid resource planning (i.e. the substitution of worn-out devices and its scheduling) but also on the reliability of the services based on batteries. The implemented techniques, based on fuzzy logic and neural networks, make it possible to estimate the State of Life of li-ion batteries even when the external factors influencing battery life (temperature, discharge current, DoD) vary. Among the requirements of a Smart Grid architecture is the integration of Electric Vehicles into the grid. EVs include both All Electric Vehicles and Plug-in Hybrid Electric Vehicles and have been considered by governments and industry as sustainable means of transportation and, therefore, have been the object of intensive study and development in recent years. Their number is forecast to increase considerably in the near future, with significant consequences for the power grid: while charging, they represent a considerable additional load that, if not properly managed, could be unbearable for the grid. Nonetheless, EVs can also be a resource, providing their locally stored energy to the power grid, thus realizing useful ancillary services.
The paradigm just described is usually referred to as Vehicle-to-Grid (V2G). Since the storage systems onboard EVs are based on li-ion batteries, and starting from the measurement and estimation techniques previously introduced, the aim of the thesis work is the realization of a management system for EV fleets for the provision of V2G services. Assuming a system model in which the aggregator not only manages such services but can also be the owner of the batteries, the goal is to manage the fleets so as to maximize battery life and guarantee equal treatment to all the users participating in the V2G program. [edited by author]
XII n.s.
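The price-and-threshold curtailment logic described in the abstract can be sketched in a deliberately simplified form (the function name, prices and loads below are invented for illustration and do not come from the thesis):

```python
def curtailment_plan(prices, loads, cost_threshold):
    """For each interval, curtail the load whenever the projected cost
    (price * load) would exceed the user-defined cost threshold."""
    return [price * load > cost_threshold for price, load in zip(prices, loads)]

# Hourly energy prices (EUR/kWh) and controllable load (kWh), both assumed.
prices = [0.10, 0.15, 0.32, 0.45, 0.12]
loads  = [1.0,  1.2,  1.1,  1.0,  0.9]
plan = curtailment_plan(prices, loads, cost_threshold=0.20)
print(plan)  # intervals 2 and 3 exceed the threshold
```

A real implementation would also account for load priorities and comfort constraints; the sketch only shows the threshold test on which the decision rests.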
APA, Harvard, Vancouver, ISO, and other styles
32

RASILE, Antonio. "Metodologie e set-up di misura per l’esecuzione di test non distruttivi su materiali metallici." Doctoral thesis, Università degli studi di Cassino, 2020. http://hdl.handle.net/11580/75155.

Full text
Abstract:
In this thesis, the methodologies and measurement set-ups for the execution of non-destructive tests with the eddy current technique (ECT) and the ultrasound technique (UST) are examined and implemented. Any industrial product, made from metallic and non-metallic materials, can present numerous varieties of defects, inside or on its external surface, which differ both in type and shape. These discontinuities can be caused by the manufacturing processes and by the stresses that the various components undergo during their use. The presence of these defects, or their development and growth, especially in working conditions, can cause both a reduction in the useful life of the component and its unexpected breakage, with consequences that can be harmful both economically and in terms of people's safety. For this reason, in the industrial sector (aeronautics, automotive, construction, etc.), each product of critical importance must be checked to verify its integrity and its compliance with current regulations, ensuring its quality and safe use. In this scenario, the development of techniques and tools for non-destructive diagnostics is moving towards new frontiers, capable of facing the current needs of the industrial sector. In particular, the eddy current and ultrasound inspection techniques offer a good compromise in terms of ease of use, speed of inspection and reliability of the test. Although technologically advanced, the methods and tools for non-destructive testing with eddy current and ultrasound techniques currently in use are not without problems and limitations. For example, as regards the ECT, the most important limitations are related to its applicability only to conductive materials, to the possibility of locating only surface discontinuities or those at limited depths, to the use of different probes according to the defect to be analysed (superficial or sub-superficial), and to the difficulty in identifying the specific typology of the defect.
As for the UST, the most important limitations are related to the need to use couplants for the probes, to the loss of sensitivity towards thin defects parallel to the direction of propagation of the ultrasonic beam, to the calibration procedures required for the instrument to produce an easily recognizable defect response, to the need to use high-energy excitation signals, and to the difficulty in correctly interpreting the result of the measurement. In this thesis, as regards the non-destructive eddy current investigation method, low-energy-consumption probe solutions assisted by magnetic field sensors are analysed and implemented, with the aim of increasing the sensitivity of the method at depths greater than those reached by the current solutions available in the literature. In particular, the prototype of the realized probe is presented, with all the steps of its design, implementation and realization. The innovative detection system created, which improves the probe performance, is highlighted. The experimental configuration for the probe characterization and the results of the numerous tests are illustrated. Finally, it is shown how the realized probe improves the detection of defects compared to the cutting-edge solutions available in the literature. Regarding the line of research concerning non-destructive ultrasound investigation methods, attention was paid to the analysis of structures such as metal bars. This type of investigation is currently absent from the literature.
Starting from the different types of excitation signals, the different digital conditioning methods and the different signal processing methods proposed in the literature, which allow a better accuracy of the measurement process, a higher inspection speed and a greater sensitivity to various types of defects, an experimental comparison is performed to identify the combination that allows the best detection and localization of the defects. The measurement station designed for the execution of non-destructive ultrasound tests is presented, which allows a real-time analysis of the examination performed using ad-hoc software developed for the purpose. Finally, the experimental results are presented to identify the best combination of the different types of excitation signals, digital conditioning and data processing considered, for the detection and localization of defects in structures such as metal bars.
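As an illustration of the kind of processing involved (not the thesis's actual set-up), the time of flight of an ultrasonic echo, and hence the defect depth, can be estimated from the cross-correlation between the excitation pulse and the received signal; all parameter values below are assumed:

```python
import numpy as np

fs = 10e6                       # sampling rate, Hz (assumed)
c = 5900.0                      # longitudinal sound speed in steel, m/s
t = np.arange(0, 20e-6, 1 / fs)

# Excitation: a short Gaussian-windowed tone burst (a common choice).
f0 = 2e6
pulse = np.exp(-((t - 2e-6) ** 2) / (0.5e-6) ** 2) * np.sin(2 * np.pi * f0 * t)

# Simulated echo from a defect: the pulse delayed by the round-trip time.
delay_s = 8e-6
delay_n = int(round(delay_s * fs))
echo = np.zeros_like(pulse)
echo[delay_n:] = 0.3 * pulse[: len(pulse) - delay_n]
echo += 0.01 * np.random.default_rng(0).standard_normal(len(echo))

# Time of flight from the cross-correlation peak, then defect depth.
xc = np.correlate(echo, pulse, mode="full")
lag = np.argmax(xc) - (len(pulse) - 1)
tof = lag / fs
depth = c * tof / 2             # divide by 2: round trip to the defect
print(f"estimated defect depth: {depth * 1e3:.1f} mm")
```

The cross-correlation peak is robust to moderate noise, which is one reason correlation-based processing appears among the signal processing methods compared in such studies.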
APA, Harvard, Vancouver, ISO, and other styles
33

Gamba, Giovanni. "Cross layer analysis of interference effects in wireless systems." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3421882.

Full text
Abstract:
Wireless communication systems are nowadays employed in many fields. The research effort behind this work is to investigate one of the main issues in a radio communication system: interference. Devices operating in the 2.4 GHz unlicensed Industrial, Scientific and Medical (ISM) band are considered and, in particular, wireless sensor networks compliant with the IEEE 802.15.4 standard are used to evaluate performance indices in the presence of interference. The analysis starts from a real wireless control application and an RF power meter application: a complete perspective involving theoretical formulas, simulations and experimental results is given. A cross-layer approach for CSMA/CA based devices, merging interference effects at the PHY and MAC layers, is presented. The effects of interference on channel status evaluation and on the RSSI circuitry are then addressed and, finally, the feasibility of a low-cost RF power meter is discussed. The purpose of the work is to understand interference phenomena, using a real-life test bed and a complex set of interferers comprising both arbitrary signal generators and real wireless networks. The interaction of interference with a device is manifold: it involves the physical layer, the hardware and the protocols. The final aim of this work is to provide protocol and hardware designers with a set of performance metrics and perspectives useful to improve wireless devices' behavior against interference.
APA, Harvard, Vancouver, ISO, and other styles
34

BONGIORNO, JACOPO. "STUDIO E MIGLIORAMENTO DELL’INCERTEZZA DEI METODI DI MISURA DELLA CONDUTTANZA VERSO TERRA DEL BINARIO." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/944205.

Full text
Abstract:
Track conductance measurement is an important topic addressed by several international standards. In particular, IEC 62128-2 (EN 50122-2) suggests some measurement methods in its Annex A. A deep analysis of the suggested methods was performed, with a focus on measurement uncertainty. Several on-site measurements were performed to assess the measurement variability and to validate a lumped-element model, which is then used to evaluate the systematic measurement uncertainty due to the test set-up. Some methods for validating simulation results against on-site measurements are reviewed and assessed. An underestimation of the similarity in the case of very similar data was found, and some improvements to the FSV method are suggested. Once the model was validated, a corrective coefficient to compensate for the systematic error due to the measurement test set-up is proposed and applied.
APA, Harvard, Vancouver, ISO, and other styles
35

Fortin, Daniele. "Performance assessment of DVB-T and wireless communication systems by means of cross-layer measurements." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425206.

Full text
Abstract:
This thesis deals with the study and the application of measurement methods for the analysis of digitally modulated signals, deployed for wireless communications and television broadcasting. The goal is to develop methods that make it possible to efficiently assess the performance of transmission-reception systems as the transmission parameters and the channel vary. In this way, guidelines for the optimization of the transmission set-up and of the network planning can be obtained, thus reducing the environmental impact of EM fields while at the same time providing an adequate quality of service. The methods developed, known as cross-layer measurements, allow the signal to be analysed simultaneously at different layers. In this way, correlation relationships between physical layer parameters and higher-layer ones (transport, network, application) can be obtained. These cross-layer measurements are applied to a DVB-T transmission platform (Part I), in collaboration with Digilab, Bozen; furthermore, they are deployed for the assessment of coexistence problems between IEEE 802.11b and IEEE 802.15.4 wireless networks (Part II).
APA, Harvard, Vancouver, ISO, and other styles
36

CASTELLO, PAOLO. "Algorithms for the synchrophasor measurement in steady-state and dynamic conditions." Doctoral thesis, Università degli Studi di Cagliari, 2014. http://hdl.handle.net/11584/266420.

Full text
Abstract:
Phasor measurement units (PMUs) are becoming one of the key elements of power network monitoring. They have to be able to perform accurate estimations of current and voltage signals under both steady-state and dynamic conditions. The first part of this PhD thesis analyses the impact of the phasor models on the estimation accuracy, focuses on algorithms proposed in the literature for the estimation of phasors and studies their performance under several different conditions. On the basis of the results of this analysis, in the second part of the thesis an innovative approach to improve the performance of synchrophasor estimation is presented. The method proposes a modified version of the synchrophasor estimation algorithm that uses the non-orthogonal transform known as the Taylor-Fourier Transform (TFT), based on a Weighted Least Squares (WLS) estimation of the parameters of a second-order Taylor model of the phasor. The aim of the proposed enhancements is to improve the performance of the algorithm in the presence of fast transient events and to achieve a Phasor Measurement Unit that is simultaneously compliant with both the M and P compliance classes specified by the synchrophasor standard IEEE C37.118.1. In particular, while the TFT-based adaptive algorithm is used for synchrophasor estimation, frequency and Rate of Change of Frequency (ROCOF) are estimated using the higher-order derivative outputs of the adaptive TFT. Frequency estimation feedback is used to tune the algorithm and achieve better performance in off-nominal conditions. The proposed approaches are validated by means of simulations in all the static and dynamic conditions defined in the standard. In the last chapter, the algorithm proposed above is used in a novel architecture, compliant with IEC 61850, for a distributed IED-based PMU to be used in electrical substations.
In particular, a measurement architecture based on a process bus and on sampled values synchronized with IEEE 1588-2008 is proposed, so that voltage and current signals are acquired by a Merging Unit device, while the PMU signal processing is performed on an IED (Intelligent Electronic Device), in compliance with IEEE C37.118.1-2011.
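For illustration only (a textbook-style sketch, not the dissertation's implementation, and with uniform rather than windowed weights), the core of a second-order Taylor-Fourier estimate is a least-squares fit of the signal model x(t) = Re{p(t) e^{j2πf0t}}, with p(t) a Taylor polynomial, evaluated at the window centre:

```python
import numpy as np

def tft_phasor(x, t, f0, order=2):
    """Least-squares fit of x(t) = Re{p(t) e^{j 2π f0 t}} with a Taylor
    polynomial p(t) of the given order; returns the phasor p(0) at the
    window centre t = 0."""
    w = 2j * np.pi * f0
    # Basis: t^k e^{+jωt} and t^k e^{-jωt} (the conjugate pair makes x real).
    cols = [t ** k * np.exp(s * w * t) for s in (1, -1) for k in range(order + 1)]
    B = np.stack(cols, axis=1)
    c, *_ = np.linalg.lstsq(B, x.astype(complex), rcond=None)
    return 2 * c[0]          # coefficient of e^{+jωt}, zeroth Taylor term

# Two nominal cycles at 50 Hz, sampled at 5 kHz, window centred on t = 0.
fs, f0 = 5000.0, 50.0
t = (np.arange(200) - 99.5) / fs
x = 10.0 * np.cos(2 * np.pi * f0 * t + 0.5)
p = tft_phasor(x, t, f0)
print(abs(p), np.angle(p))   # close to 10 and 0.5
```

The higher-order coefficients of the fit (the terms dropped here) are what a TFT-based PMU uses for frequency and ROCOF estimation.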
APA, Harvard, Vancouver, ISO, and other styles
37

Sultan, D. M. S. "Development of Small-Pitch, Thin 3D Sensors for Pixel Detector Upgrades at HL-LHC." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/367699.

Full text
Abstract:
3D Si radiation sensors offer extreme radiation hardness, primarily owing to their geometrical advantages over planar sensors: the electrodes are formed penetrating through the active substrate volume. Among these advantages: reduction of the inter-electrode distance, lower depletion voltage requirement, high inter-columnar electric field distribution, lower trapping probability, faster charge collection capability, lower power dissipation, and lower inter-pitch charge sharing. For several years, FBK has developed 3D sensors with a double-sided technology, which have also been installed in the ATLAS Insertable B-Layer at the LHC. However, the future High-Luminosity LHC (HL-LHC) upgrades, expected to be operational by 2024, impose a complete replacement of the current 3D detectors with a more radiation-hard sensor design, able to withstand very large particle fluences up to 2×10^16 cm^-2 1-MeV-equivalent neutrons. The extreme luminosity conditions and the related issues in occupancy and radiation hardness lead to a very dense pixel granularity (50×50 or 25×100 µm^2), a thinner active region (~100 µm), narrower columnar electrodes (~5 µm diameter) with reduced inter-electrode spacing (~30 µm), and very slim edges (~100 µm) in the 3D pixel sensor design. This thesis covers the development of this new generation of small-pitch and thin 3D radiation sensors aimed at the foreseen Inner Tracker (ITk) upgrades at the HL-LHC.
APA, Harvard, Vancouver, ISO, and other styles
38

BRUZZONE, ANDREA. "Design and Realization of Electronic Measurement Systems for Partial Discharge Monitoring on Electrical Equipment." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1038159.

Full text
Abstract:
The monitoring of the insulation composing high-voltage apparatus and electrical machines is a crucial aspect of a predictive maintenance program. The insulation system of an electrical machine is affected by partial discharges (PDs), phenomena that can lead to breakdown over time, with consequent significant economic loss. Partial discharges are identified as both the symptom and the cause of the deterioration of solid-type electrical insulators. It is therefore necessary to adopt solutions for monitoring the insulation status, and different techniques and devices can be adopted for this purpose. During this research activity, two different systems have been developed at the circuit and layout level, which base their operation on conducted and radiated measurements respectively, in compliance with the provisions of the current standards where applicable. The first system is based on a classic signal conditioning chain whose gain can be set from a PC, allowing the conducted measurement of partial discharges in two frequency bands, Low Frequency (LF) and High Frequency (HF); the application of the system is diversified according to these bands. In this case, the information obtained from the measurement can be analysed by an expert operator or processed by an intelligent system, in both cases providing information on the status of the machine under test. The second system uses a UHF antenna built on a PCB to detect the radiated signal generated in the presence of discharge activity; this signal is appropriately conditioned and processed by analog electronics and then acquired by programmable logic, which interprets it and returns information on the status of the machine, which can also be checked by an expert user. The application of this system depends on the type of insulation and the type of power supply adopted, which differentiate its characteristics.
In both systems, the analysis of the partial discharge measurements supports the prevention of failures and the planning of suitable maintenance interventions.
APA, Harvard, Vancouver, ISO, and other styles
39

Abate, Francesco. "Innovative algorithms and data structures for signal treatment applied to ISO/IEC/IEEE 21451 smart transducers." Doctoral thesis, Universita degli studi di Salerno, 2016. http://hdl.handle.net/10556/2493.

Full text
Abstract:
2014 - 2015
Technologies, and in particular sensors, permeate more and more application sectors. From energy management to factories, houses, environmental, infrastructure and building monitoring, healthcare and traceability systems, sensors are increasingly widespread in our daily life. In the growing context of the Internet of Things, these technologies are required to acquire the quantities of interest and to process and communicate data. These acquisition, processing and communication capabilities can be integrated in a single device, a smart sensor, which combines the sensing element with a simple programmable logic device capable of managing processing and communication. An efficient implementation of communication is required from these technologies, in order to better exploit the available bandwidth while minimizing energy consumption. Moreover, these devices have to be easily interchangeable (plug and play) so that they can be easily used. Nowadays, the smart sensors available on the market reveal several problems, such as programming complexity, for which in-depth knowledge is required, and limited software porting capability. The IEEE 1451 family of standards was written with the aim of defining a set of common communication interfaces. These documents come from the Institute of Electrical and Electronics Engineers (IEEE) with the aim of creating a standard interface that allows the interoperability of devices produced by different manufacturers, but they are not concerned with problems related to bandwidth management, elaboration and programming. A further development of this family of standards, now under review, is expected, with the aim of renewing the applicable standards and adding new layers of standardization.
The draft of ISO/IEC/IEEE 21451.001 proposes to solve the problems related to bandwidth and elaboration management by relocating part of the processing to the point of acquisition, taking advantage of the elaboration capabilities of smart sensors. This proposal is based on a Real Time Segmentation and Labeling algorithm, a new sampling technique which reduces the high number of samples to be transferred while preserving the information content. This algorithm returns a data structure on which the draft defines two elaboration layers: a first layer for the basic signal processing information, and a second layer for more complex elaboration. [edited by author]
XIV n.s.
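The standardized Real Time Segmentation and Labeling algorithm is not reproduced here, but its underlying idea, replacing a dense sample stream with a few linear segments that preserve the signal shape within a tolerance, can be sketched with a generic greedy segmentation (an illustrative stand-in, not the draft's algorithm):

```python
def segment(samples, tol):
    """Greedy piecewise-linear compression: extend the current segment
    while every sample stays within tol of the line through its endpoints."""
    segments, start = [], 0
    for end in range(2, len(samples) + 1):
        a, b = samples[start], samples[end - 1]
        n = end - 1 - start
        ok = all(
            abs(samples[start + i] - (a + (b - a) * i / n)) <= tol
            for i in range(n + 1)
        )
        if not ok:                       # close the segment one sample back
            segments.append((start, end - 2))
            start = end - 2
    segments.append((start, len(samples) - 1))
    return segments

# A triangular wave: 20 samples compress to two linear segments.
x = list(range(10)) + list(range(10, 0, -1))
print(segment(x, tol=0.1))  # [(0, 10), (10, 19)]
```

Only the segment endpoints (plus labels) would need to be transmitted, which is how such schemes save bandwidth with the same information content.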
APA, Harvard, Vancouver, ISO, and other styles
40

Kalinina, Elena. "Real-time adaptation of stimulation protocols for neuroimaging studies." Doctoral thesis, Università degli studi di Trento, 2018. https://hdl.handle.net/11572/368216.

Full text
Abstract:
Neuroimaging techniques allow the acquisition of images of the brain involved in cognitive tasks. In traditional neuroimaging studies, the brain response to external stimulation is investigated. The stimulation categories, the order in which they are presented to the subject and the presentation duration are defined in the stimulation protocol. The protocol is fixed before the beginning of the study and does not change in the course of the experiment. Recently, there has been a major rise in the number of real-time neuroscientific experiments where the incoming brain data is analysed in an online mode. Real-time neuroimaging studies open an avenue for approaching a whole new broad range of questions, like, for instance, how the outcome of a cognitive task depends on the current brain state. Real-time experiments need a different protocol type that can be flexibly and interactively adjusted in line with the experimental scope, e.g. hypothesis testing or optimising the design for an individual subject's parameters. A plethora of methods is currently deployed for protocol adaptation: information theory, optimisation algorithms, genetic algorithms. What is lacking, however, is a paradigm for interacting with the subject's state, the brain state in particular. I address this problem in my research. I have concentrated on two types of real-time experiments: closed-loop stimulation experiments and brain-state dependent stimulation (BSDS). As the first contribution, I put forward a method for closed-loop stimulation adaptation and apply it in a real-time Galvanic Skin Response (GSR) experimental setting. The second contribution is an unsupervised method for brain state detection and a real-time functional Magnetic Resonance Imaging (rtfMRI) setup making use of this method. In a neurofeedback setting, the goal is for the subject to achieve a target state. Ideally, the stimulation protocol should be adapted to the subject to better guide them towards that state.
One way to do this would be to model the subject's activity in a way that lets us evaluate the effect of various stimulation options and choose the optimal ones, maximising the reward or minimising the error. However, developing such models for neuroimaging neurofeedback experiments currently presents a number of challenges, namely the complex dynamics of a very noisy neural signal and the non-trivial mapping between neural and cognitive processes. We designed a simpler experiment as a proof of concept using the GSR signal. We showed that if it is possible to model the subject's state and the dynamics of the system, it is also possible to steer the subject towards the desired state. In BSDS, there is no target state, but the challenge lies in the most accurate identification of the subject's state at any given moment. The reference, state-of-the-art method for determining the current brain state is the use of machine learning classifiers, or multivariate decoding. However, running supervised machine learning classifiers on neuroimaging data has a number of issues that might seriously limit their application, especially in real-time scenarios. For BSDS, we show how an unsupervised machine learning algorithm (clustering in real time) can be employed with fMRI data to determine the onset of the activated brain state. We also developed a real-time fMRI setup for BSDS that uses this method. In an initial attempt to base BSDS on brain decoding, we encountered a set of issues related to classifier use. These issues prompted us to develop a new set of methods, based on statistical inference, that help address fundamental neuroscientific questions. These methods are presented as the secondary contribution of the thesis.
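The idea of clustering in real time can be illustrated with a sequential (MacQueen-style) k-means update, in which each incoming sample immediately moves its nearest centroid. This one-dimensional sketch illustrates online clustering in general, not the dissertation's fMRI method:

```python
import random

def online_kmeans(stream, centroids):
    """Sequential k-means: each incoming sample moves its nearest
    centroid by a decreasing step (1/count), as in MacQueen's update."""
    counts = [0] * len(centroids)
    labels = []
    for x in stream:
        j = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]   # running mean
        labels.append(j)
    return centroids, labels

# Two well-separated 1-D "states" arriving as a stream of samples.
random.seed(1)
stream = [random.gauss(0.0, 0.1) if i % 2 else random.gauss(5.0, 0.1)
          for i in range(200)]
centroids, labels = online_kmeans(stream, centroids=[0.5, 4.5])
print([round(c, 1) for c in centroids])
```

Because each sample is labelled as soon as it arrives, the same structure supports triggering stimulation on the detected state onset, which is the point of BSDS.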
APA, Harvard, Vancouver, ISO, and other styles
41

Khatib, Moustafa. "THz Radiation Detection Based on CMOS Technology." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/368305.

Full text
Abstract:
The Terahertz (THz) band of the electromagnetic spectrum, also referred to as sub-millimeter waves, covers the frequency range from 300 GHz to 10 THz. Radiation in this frequency range has several unique characteristics, such as its non-ionizing nature: since the associated power is low, it is considered a safe technology in many applications. THz waves have the capability of penetrating several materials, such as plastics, paper and wood. Moreover, they provide a higher resolution than conventional mmWave technologies thanks to their shorter wavelengths. The most promising applications of THz technology are medical imaging, security/surveillance imaging, quality control, non-destructive materials testing and spectroscopy. The potential advantages in these fields provide the motivation to develop room-temperature THz detectors. In terms of low cost, high volume and high integration capabilities, standard CMOS technology has been considered an excellent platform to achieve fully integrated THz imaging systems. In this Ph.D. thesis, we report on the design and development of field effect transistor (FET) THz direct detectors operating at low THz frequencies (e.g. 300 GHz), as well as at higher THz frequencies (e.g. 800 GHz – 1 THz). In addition, we investigated the implementation issues that limit the power coupling efficiency with the integrated antenna, as well as the antenna-detector impedance-matching condition. The implemented antenna-coupled FET detector structures aim to improve the detection behavior in terms of responsivity and noise equivalent power (NEP) for CMOS-based imaging applications.
Since the THz signals detected with this approach are extremely weak and of limited bandwidth, the next section of this work presents a pixel-level readout chain containing a cascade of a pre-amplification and noise-reduction stage, based on a parametric chopper amplifier, and a direct analog-to-digital conversion by means of an incremental Sigma-Delta converter. The readout circuit aims to perform a lock-in operation with modulated sources. The in-pixel readout chain provides simultaneous signal integration and noise filtering for the multi-pixel FET detector arrays, thereby achieving a sensitivity similar to that of an external lock-in amplifier. Next, based on the experimental THz characterization and measurement results of a single pixel (antenna-coupled FET detector + readout circuit), the design and implementation of a multispectral imager containing a 10 x 10 THz focal plane array (FPA) as well as 50 x 50 visible (3T-APS) pixels is presented. Moreover, the readout circuit for the visible pixels is realized as a column-level correlated double sampler. All of the designed chips have been implemented and fabricated in a 0.15-µm standard CMOS technology. The physical implementation, fabrication and electrical-testing preparation are discussed.
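The lock-in principle behind this readout can be sketched in a few lines: mix the detector output with quadrature references at the modulation frequency and average, so that broadband noise cancels while the modulated response survives. Sampling rate, modulation frequency and noise level below are illustrative synthetic values, not taken from the chip.

```python
import numpy as np

def lockin_amplitude(signal, fs, f_mod, t_int):
    """Digital lock-in detection: multiply the detector output by
    quadrature references at the modulation frequency and average over
    the integration time, recovering the amplitude of a weak modulated
    signal buried in broadband noise."""
    n = int(fs * t_int)
    t = np.arange(n) / fs
    i_ph = np.mean(signal[:n] * np.cos(2 * np.pi * f_mod * t))
    q_ph = np.mean(signal[:n] * np.sin(2 * np.pi * f_mod * t))
    return 2.0 * np.hypot(i_ph, q_ph)  # recovered signal amplitude

# 1 mV response modulated at 1 kHz, with per-sample noise ten times larger
fs, f_mod = 100_000, 1_000
t = np.arange(int(fs * 0.5)) / fs
rng = np.random.default_rng(0)
x = 1e-3 * np.cos(2 * np.pi * f_mod * t) + rng.normal(0.0, 0.01, t.size)
amp = lockin_amplitude(x, fs, f_mod, t_int=0.5)  # close to 1e-3
```

The longer the integration time, the narrower the equivalent noise bandwidth, which is exactly the trade-off the in-pixel integrate-and-filter chain exploits.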
APA, Harvard, Vancouver, ISO, and other styles
42

ABOU, KHALIL ALI. "Event Driven Tactile Sensors for Artificial Devices." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1001986.

Full text
Abstract:
Present-day robots are, to some extent, able to deal with the high complexity and variability of the real-world environment. Their cognitive capabilities can be further enhanced if they physically interact with and explore real-world objects. For this reason, the need for efficient tactile sensors is growing day after day, and such sensors are becoming more and more a part of everyday devices, especially in robotic applications for manipulation and safe interaction with the environment. In this thesis, we highlight the importance of touch sensing in humans and robots. Inspired by biological systems, in the first part we merge neuromorphic engineering with CMOS technology, where the former is a field of science that replicates at the circuit level what exists biologically inside humans (the neurons of the nervous system). We explain the operation of, and then characterize, different sensor circuits through simulation and experiment, finally proposing new prototypes based on the achieved results. In the second part, we present a machine learning technique for detecting the direction and orientation of a sliding tip over a complete skin patch of the iCub robot. Through learning and online testing, the algorithm classifies different trajectories across the skin patch. In this part, we show the results of the considered algorithm, with a future perspective of extending the work.
APA, Harvard, Vancouver, ISO, and other styles
43

Ress, Cristina. "Micro electrochemical sensors and PCR systems: cellular and molecular tools for wine yeast analysis." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/368047.

Full text
Abstract:
Bioanalytical microsystems are currently receiving increasing attention in biology, since they can meet the considerable demand for reliable, sensitive and low-cost analysis tools. Small reagent volumes, low power consumption, portability, fast analysis, high throughput and system integration are the key aspects that make these systems more and more appealing within both the academic and industrial communities. In recent years, many microdevices have been developed for a wide range of biological applications, particularly dedicated to cellular or molecular analysis. Many efforts were devoted to the realization of Cell-Based Biosensors (CBBs) to monitor the dynamic behaviour of cell cultures for pharmacological screening and basic research. Other researchers focused their interest on the development of so-called Lab-on-a-Chip (LOC) systems for DNA analysis, mostly applied to clinical diagnosis. This thesis deals with the investigation of two miniaturized devices – a cell-based biosensor and a DNA amplification system – for the cellular and molecular analysis of wine yeasts, respectively. The first device consists of integrated electrochemical sensors – an Ion-Sensitive Field-Effect Transistor (ISFET), together with impedimetric and temperature sensors – for the real-time evaluation of pH and cell settling of yeasts under batch culture conditions. The assessment of yeast performance and robustness has been focused on ethanol tolerance, as it is one of the main stress factors acting in wine and thus one of the major causes of stuck fermentations. Good agreement between extracellular acidification and cell growth trends at different ethanol concentrations has been demonstrated, significantly reducing the time of the traditional assays. Moreover, resistivity measurements have shown the possibility of following the progressive settling of the cell suspension.
Concerning the second system, a Polymerase Chain Reaction (PCR) microdevice has been biologically validated by successfully amplifying yeast genomic DNA fragments. Additionally, the outcome of PCR has been positively assessed with diluted samples and boiled yeast cultures, demonstrating the possibility to skip the time-consuming purification process for potential LOC applications with very little or no pre-PCR sample manipulations. The encouraging results from both microsystems have demonstrated their suitability for wine yeast analysis, aimed at quality improvements of the winemaking process.
APA, Harvard, Vancouver, ISO, and other styles
44

Odorizzi, Lara. "Lab-on-cell and cantilever-based sensors for gene analysis." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/369139.

Full text
Abstract:
Nowadays, both the detection of gene mutations and the investigation of gene function are expected to assume a key role in the understanding of diseases and in many other biotechnological fields. In fact, gene mutations are often the cause of genetic diseases, and gene function analysis itself can help provide a broader view of cell health status. Traditionally, gene mutation detection is carried out at the pre-translational/sequence level (transcriptomic approach). On the other hand, the function of innumerable sequenced genes can be investigated by delivering them into cells through transfection methods and observing their expression result at the post-translational level (proteomic approach). In this context, Micro-ElectroMechanical Systems (MEMSs) offer the intrinsic advantages of miniaturization: low sample and reagent consumption, reduction of costs, shorter analysis time and higher sensitivity. Their applications range from whole-cell assays to molecular biology investigations. On this subject, the thesis deals with two different tools for gene analysis: a Lab-on-Cell and cantilever-based sensors, for in-vitro cell transfection and label-free Single Nucleotide Polymorphism (SNP) detection, respectively. Regarding the first topic, an enhanced platform for single-site electroporation and controlled transfectant delivery has been presented. The device consists of a gold MicroElectrode Array (MEA) with multiple cell compartments, integrated microfluidics based on independent channels, and electrodes functionalized with nanostructured titanium dioxide (ns-TiO2). Different activities have been reported, from the study of the bioaffinity of the microfabrication substrates and the device development to the electroporation results. The functional characterization of the system has been carried out by electroporating HeLa cells with a small fluorescent dye and then, in order to validate the approach for gene delivery, with a plasmid for the enhanced expression of the Green Fluorescent Protein (pEGFP-N1).
The second research activity has been focused on a detection module aimed at integration in a Lab-on-Chip (LOC) for the early screening of autoimmune diseases. The proposed approach consists of piezoresistive SOI-MEMS cantilever arrays operating in static mode. Their gold surface (intended for the binding of specific thiolated DNA probes) has been analyzed in depth by means of Atomic Force Microscopy (AFM) and X-ray Photoelectron Spectroscopy (XPS), revealing evident gold non-uniformity and low gold content, together with oxygen and carbon contaminations. Different technological and cleaning solutions have been chosen in order to optimize the system; however, further improvements will be required. Moreover, the feasibility of the spotting technique has been demonstrated by verifying microcantilever mechanical resistance and good surface coverage without cross-contamination. Finally, as a future perspective, possible biological protocols and procedures have also been proposed and discussed, starting from the literature.
APA, Harvard, Vancouver, ISO, and other styles
45

Frigo, Guglielmo. "Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424133.

Full text
Abstract:
At its core, signal acquisition is concerned with efficient algorithms and protocols capable of capturing and encoding the signal information content. For over five decades, the indisputable theoretical benchmark has been represented by the well-known Shannon sampling theorem, and the corresponding notion of information has been indissolubly related to signal spectral bandwidth. Contemporary society is founded on the almost instantaneous exchange of information, which is mainly conveyed in digital format. Accordingly, modern communication devices are expected to cope with huge amounts of data, in a typical sequence of steps comprising acquisition, processing and storage. Despite continual technological progress, the conventional acquisition protocol has come under mounting pressure and requires a computational effort not related to the actual signal information content. In recent years, a novel sensing paradigm, known as Compressive Sensing (briefly, CS), has been quickly spreading among several branches of Information Theory. It relies on two main principles, signal sparsity and incoherent sampling, and employs them to acquire the signal directly in a condensed form. The sampling rate is related to the signal information rate, rather than to the signal spectral bandwidth. Given a sparse signal, its information content can be recovered even from what could appear to be an incomplete set of measurements, at the expense of a greater computational effort at the reconstruction stage. My Ph.D. thesis builds on the field of Compressive Sensing and illustrates how sparsity and incoherence properties can be exploited to design efficient sensing strategies, or to intimately understand the sources of uncertainty that affect measurements. The research activity has dealt with both theoretical and practical issues, inferred from measurement application contexts ranging from radio frequency communications to synchrophasor estimation and neurological activity investigation.
The thesis is organised in four chapters whose key contributions include: • definition of a general mathematical model for sparse signal acquisition systems, with particular focus on the implications of sparsity and incoherence; • characterization of the main algorithmic families for recovering sparse signals from a reduced set of measurements, with particular focus on the impact of additive noise; • implementation and experimental validation of a CS-based algorithm providing accurate preliminary information and suitably preprocessed data for a vector signal analyser or a cognitive radio application; • design and characterization of a CS-based super-resolution technique for spectral analysis in the discrete Fourier transform (DFT) domain; • definition of an overcomplete dictionary which explicitly accounts for the spectral leakage effect; • insight into the so-called off-the-grid estimation approach, properly combining CS-based super-resolution with polar interpolation of DFT coefficients; • exploration and analysis of the implications of sparsity in quasi-stationary operating conditions, emphasizing the importance of time-varying sparse signal models; • definition of an enhanced spectral content model for spectral analysis applications in dynamic conditions by means of Taylor-Fourier transform (TFT) approaches.
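One member of the recovery-algorithm families characterized above, Orthogonal Matching Pursuit, can be sketched in a few lines: greedily select the sensing-matrix column most correlated with the residual, then re-solve a least-squares problem on the selected support. Dimensions, support and coefficients below are arbitrary toy values.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse vector x
    from the compressed measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 60, 3                      # ambient dim, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)  # incoherent Gaussian sensing matrix
x_true = np.zeros(n)
x_true[[5, 40, 99]] = [1.0, -0.8, 0.6]    # 3-sparse signal
x_rec = omp(A, A @ x_true, k)             # recovery from 60 of 128 samples
```

The example shows the core CS claim in miniature: a 128-sample sparse signal is recovered from 60 incoherent measurements, well below the nominal Nyquist count.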
APA, Harvard, Vancouver, ISO, and other styles
46

Tramarin, Federico. "Industrial Wireless Sensor Networks - Simulation and measurement in an interfering environment." Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422089.

Full text
Abstract:
Recently, the research community has been considering with growing interest the adoption of Industrial Wireless Sensor Networks (IWSNs) in application contexts such as real-time (industrial) communications and distributed measurement systems. These types of applications typically impose very tight requirements on the underlying communication systems and, moreover, may have to cope with the intrinsic unreliability of wireless networks. An accurate characterization of the behaviour of these networks, from a metrological point of view, is hence needed. Suitable measurement systems have to be realized, and experiments performed, aimed at evaluating some of the most appropriate performance indicators. Unfortunately, despite the appealing opportunities provided by IWSNs, their adoption is just beginning. It is clear that a comprehensive experimental analysis of their behaviour would improve theoretical analysis, simulation and design of the network, since the consequent increased accuracy of models could reduce the sources of difference between real and expected behaviours. With the work presented in this thesis, the author provides some original contributions in the field of measurements on real-time wireless networks adopted for industrial communications and distributed measurement systems. In this context, one of the most relevant aspects to be considered is represented, as described in the literature, by the interference that may arise from "intentional" communications taking place in external systems. In order to address this issue, some simulation techniques have been considered, leading to the development of a network simulator software tool that enabled a cross-layer analysis of interference. This activity stimulated an in-depth study of the IEEE 802.15.4 and IEEE 802.11 communication protocols. In particular, medium access techniques have been analyzed from the perspective of IWSN applications.
On this basis, new and effective methods for increasing network reliability have been proposed, along with fair packet retransmission scheduling methods. Moreover, new rate adaptation algorithms for wireless networks, specifically designed for real-time communication purposes and exploiting the high robustness of low transmission rates, have been proposed. Finally, since the reliability of a network strongly depends on the real behaviour of the employed devices, an experimental approach for the measurement of device characteristics is presented, with the aim of providing suitable models and methods for designers.
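The design principle behind deadline-aware rate adaptation, preferring the most robust rate that still fits the cyclic deadline, can be sketched as follows. The rate set mirrors the IEEE 802.11b PHY, but the selection rule is a hypothetical illustration, not the thesis's actual algorithm.

```python
# IEEE 802.11b PHY data rates in Mb/s (payload airtime only; a real
# algorithm would also account for headers and per-rate reliability).
RATES = (1, 2, 5.5, 11)

def pick_rate(payload_bits, deadline_s, rates=RATES):
    """Among the rates whose airtime fits the cyclic deadline, pick the
    lowest (most robust) one; return None if no rate meets the deadline."""
    feasible = [r for r in rates if payload_bits / (r * 1e6) <= deadline_s]
    return min(feasible) if feasible else None
```

For example, an 11000-bit frame with a 2 ms deadline is feasible only at 5.5 and 11 Mb/s, so the rule picks 5.5 Mb/s, trading spare airtime for robustness.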
APA, Harvard, Vancouver, ISO, and other styles
47

Jha, Rupesh Kumar. "Power Stages and Control of Wireless Power Transfer Systems (WPTSs)." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3424780.

Full text
Abstract:
Wireless charging of electric vehicle (EV) batteries by inductive power transfer (IPT) offers unique advantages compared to conventional conductive chargers. Due to the absence of a galvanic connection, the charging process requires no user interaction and no moving mechanical components. For public transport systems, e.g. public buses or tramways, this makes possible fully automated opportunity charging at bus stations, taxicab stands, or traffic lights. A wireless battery charger (WBC) is made up of two stages: a transmitter stage and a receiver stage. Both stages include coils and capacitors, sized to resonate at the supply frequency, along with power conversion circuits. The transmitter coil is buried in the ground, while the receiving coil is situated in the vehicle. Based on the connection of the resonating capacitors, four topologies are possible, which can be divided into two arrangements: i) transmitter capacitor in series, with the receiver capacitor either in series or in parallel, giving rise to the SS and SP topologies; ii) transmitter capacitor in parallel, with the receiver capacitor either in series or in parallel, giving rise to the PS and PP topologies. In the thesis, these topologies have been studied in detail in terms of efficiency, power sizing of the supply inverter and resonating coils, and behaviour under the extreme conditions of an open- or short-circuited receiver. The power conversion circuitry of a WBC system includes a diode rectifier to supply the load with a direct voltage, and resorts to different solutions for charging the battery. The two most common solutions charge the battery either straightforwardly through the diode rectifier or through a chopper cascaded to the diode rectifier. These two arrangements have been discussed and compared in terms of efficiency and power sizing of the supply inverter and of the transmitting and receiving coils, including the selection of the optimum chopper input voltage.
Due to aging and thermal effects, the parameters of the reactive components of a WBC system may change, and this can shift the resonance frequency away from the supply frequency. In this thesis, the impact of such a mismatch on the efficiency and on the supply inverter power sizing factor of a WBC with SS topology has been studied. Three supply-frequency updating techniques, keeping in resonance either the transmitter stage, the receiver stage, or the impedance seen from the power supply, have been investigated. The thesis continues with the study of high-power WBC systems, covering power supply architecture, core material and coil geometry. A review of different power supply architectures, such as single-phase two-stage and parallel topologies, including their merits and demerits, has been presented. A review of the literature on coil geometry shows the DD coil to be suitable for high-power applications. Using the JMAG simulation tool, a transmitter track of three DD coils and a receiver with one DD coil have been analyzed as the receiver moves along the transmitting track. Since ferrite is disfavoured as a core material for high-power WBC systems, a variety of powdered magnetic materials have been considered here and compared in terms of saturation flux density, magnetic properties (such as the dependence of their permeability on temperature, magnetic field strength and frequency), power losses and cost. Finally, two methods to model the WPT system have been considered; both model the system by considering the envelope of the signals.
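The detuning effect of component drift on a series-compensated link can be illustrated numerically: the compensation capacitor is sized for resonance at the supply frequency, and a small inductance drift shifts the resonance as 1/√L. The 85 kHz operating frequency and the coil inductance below are illustrative values, not taken from the thesis.

```python
import math

def resonant_cap(L, f0):
    """Series compensation capacitance that resonates inductance L at f0."""
    return 1.0 / ((2 * math.pi * f0) ** 2 * L)

def resonant_freq(L, C):
    """Resonance frequency of the L-C series branch."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

f0, L_tx = 85e3, 120e-6          # illustrative supply frequency and coil
C_tx = resonant_cap(L_tx, f0)    # capacitor chosen to resonate at f0
# a +5 % inductance drift (aging, temperature) detunes the link:
f_drift = resonant_freq(1.05 * L_tx, C_tx)
shift_pct = 100 * (f_drift - f0) / f0    # about -2.4 %
```

A shift of this size moves the supply off resonance, which is exactly the mismatch the frequency-updating techniques above are meant to track out.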
APA, Harvard, Vancouver, ISO, and other styles
48

Marconato, Nicolò. "Development and validation of numerical models for the optimization of magnetic field configurations in fusion devices." Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422076.

Full text
Abstract:
This thesis presents work carried out in the context of two different activities, both involving the development of Finite Element models for magnetic analysis aimed at the optimization of fusion devices. In particular, the thesis deals with the design of the electrostatic accelerator of the ITER Neutral Beam Injector (NBI) prototype and with the MHD active control system of the RFX-mod Reversed Field Pinch (RFP) experiment, respectively under construction and operating at Consorzio RFX in Padua. ITER, the first fusion experimental reactor, under construction in Cadarache, will be equipped with two NBIs, each capable of injecting into the plasma up to 16.5 MW by accelerating negative hydrogen or deuterium ions up to an energy of 1 MeV. The need for very high voltages and for the use of negative ions represents the main issue of this new technology, and further efforts are required to overcome it. In this regard, therefore, the construction of a test facility housing the ITER neutral beam prototypes was deemed necessary. In the present state of advancement of the ITER neutral beam test facility PRIMA (Padova Research on Injectors Megavolt Accelerated), the design and optimization of several features, concerning both physics and engineering aspects, has required massive use of modelling tools. An important role in the ion source and accelerator physics is played by the magnetic field present there, which has to be determined and optimized accurately; its distribution also has to be provided as input to physics simulation codes.
The analyses regarding the ITER NBI, carried out in the framework of Fusion for Energy grants for the final design of the source and accelerator prototype SPIDER (Source for Productions of Ions of Deuterium Extracted from a Radio-frequency plasma) and of the full NBI prototype MITICA (Megavolt ITER Injector and Concept Advancement), aim at optimizing the magnetic configuration inside the ion source and accelerator, in order to improve the performance in terms of ion beam optics and aiming, and to obtain an efficient filter for the co-extracted electrons. Several 2D and 3D models have been developed to assess different features, on different scales of magnitude, from the local configuration inside a single aperture to the global non-uniformity effects near the external edges of the device. This has been accomplished mainly by means of the commercial FEM software ANSYS®, which allows choosing among several formulations to perform magnetostatic analyses in the simultaneous presence of permanent magnets, ferromagnetic materials and current bus-bars with rather complex geometry. Under such conditions the development and verification of the models were not straightforward. Auxiliary numerical tools have also been developed for specific post-processing purposes. The second work presented in this thesis concerns the modelling of the electromagnetic response of the RFX-mod MHD active control system. RFX-mod, the world's largest RFP experiment, has the most complete and flexible (magnetic) feedback control system for MHD instabilities, made up of 192 radial-field coils fully covering the toroidal surface of the machine. Their independent power supplies, together with as many sensor coils inside the stabilizing copper shell, allow the implementation of advanced control schemes for the active stabilization of slow-timescale MHD modes.
The shell and the other conductive structures interposed between active coils and sensors introduce a dynamic behaviour in the input-output response of the system. This behaviour is strongly affected by the presence of 3D features, such as the gaps required for the penetration of the axisymmetric field components, which introduce poloidal and toroidal mode coupling in the system response to an external magnetic field. This activity has been carried out through the implementation of an optimized mesh of the system of coils and conductive structures, suited to the custom FEM software CARIDDI developed by the CREATE consortium, followed by the derivation of the state-space representation of this model. A large part of the work has been accomplished through the implementation of Matlab® routines required for building the mesh and for post-processing purposes. This activity has produced three main results. The first is an in-depth understanding of the symmetry properties of the machine. The second is the implementation of a new control algorithm, based on the developed model, able to compensate in real time the effect introduced by the 3D wall. The third is the proposal of a new measurement cleaning algorithm to be introduced in the control scheme, again based on the developed model and therefore, unlike the presently implemented one, able to take the actual toroidal geometry into account. The thesis is organized as follows: • Chapter 1 contains an overview of the advancements in the research and technology of nuclear fusion as a possible sustainable energy source for the future. Nuclear fusion is introduced in the context of the current availability of energy sources in the world. Some fundamental physics and engineering concepts are presented, together with the progress obtained in recent years, leading to the ITER project.
In this framework, the concept and available technology of plasma heating are described, with a particularly detailed description of the NBI, anticipating topics required for a good understanding of the PhD work described in further chapters. A description of the RFX-mod experiment is also given, in order to introduce concepts related to the second subject of this dissertation. • Chapter 2 focuses on the mathematical formulations at the basis of the numerical solution of magnetic problems. The various magnetic formulations are described with the twofold purpose of underlining the wide range of methods suited to solving particular cases and of providing references for the following paragraphs and chapters. A few words are spent describing the edge-element approach to finite element methods and its advantages. Finally, a brief description of both the ANSYS® and CARIDDI codes is given. • Chapter 3 deals with the development of FEM models for the optimization of the magnetic field configuration in the extraction and accelerator area of SPIDER. First, a description of the magnetic sources and the aim of the optimization is given. Then the optimization procedure based on 2D models is reported. Finally, the assessment of the optimized configuration with a 3D model and its final implementation, together with the new features introduced, are described. • In Chapter 4 the work done for MITICA is discussed. The several alternative magnetic design concepts considered are described and compared. • Chapter 5 presents the modelling activity done on the RFX-mod MHD active control system. A brief description of the system is recalled, and the effect of the conductive structures in shaping its response is introduced, together with the concept of the modal decoupler. Then the procedure to derive the state-space representation from the FEM model determined with the CARIDDI code is reported. The optimization of the mesh and the experimental benchmark of the results take up a large part of the chapter.
Then the development and implementation of the so-called modal decoupler are described in detail, and some preliminary experimental results are shown. In the last paragraph, a new measurement cleaning algorithm based on the developed toroidal model is proposed. • Finally, Chapter 6 summarizes the results obtained, providing some conclusions and suggesting some future developments.
This thesis presents the work carried out in the context of two different activities, both concerning the development of finite element magnetic models for the optimization of fusion machines. In particular, the topics covered concern the design of the electrostatic accelerator of the Neutral Beam Injector (NBI) prototype for ITER and the active control system for MHD instabilities of the RFX-mod experiment in Reverse Field Pinch (RFP) configuration, the former under construction and the latter already operating at Consorzio RFX in Padua. ITER, the first experimental fusion reactor, under construction in Cadarache (France), will be equipped with two NBIs, each able to inject into the plasma up to 16.5 MW by accelerating negative hydrogen or deuterium ions to energies of up to 1 MeV. The need for such high voltages and for negative ions constitutes the main difficulty in the development of this young technology, a difficulty that still requires considerable effort to be overcome successfully. The construction of a facility to test a prototype of the various components of the injector is therefore considered necessary. At the current stage of construction of this facility, named PRIMA (Padova Research on Injectors Megavolt Accelerated), the design and optimization of several aspects, of both physics and engineering, require massive use of simulation codes. The magnetic field present in the ion source and accelerator plays a very important role in their physics; it must therefore be accurately determined and optimized, and its distribution must be available as input to other simulation codes.
The analyses concerning the ITER NBI, carried out under contracts with Fusion for Energy for the final design of the source and accelerator prototype SPIDER (Source for Productions of Ions of Deuterium Extracted from a Radio-frequency plasma) and of the complete prototype MITICA (Megavolt ITER Injector and Concept Advancement), aim at optimizing the magnetic configuration inside the ion source and accelerator, in order to improve their performance in terms of beam optics and aiming, and to obtain efficient filtering of the co-extracted electrons. Several 2D and 3D models have been built to assess different aspects, on different scales of magnitude, from the local configuration inside a single grid aperture to the global non-uniformity at the grid edges. This has been done mainly with the commercial FEM software ANSYS®, which allows choosing among numerous formulations for magnetostatic analyses, even in the simultaneous presence of permanent magnets, ferromagnetic materials and current-carrying conductors with complex geometries. Under such conditions, in fact, the development and verification of the models are far from immediate. Auxiliary numerical tools used for post-processing have also been developed. The second work illustrated in this thesis concerns the modelling of the electromagnetic response of the RFX-mod active MHD control system. RFX-mod is the largest RFP experiment currently operating in the world and is equipped with the most complete and flexible active (magnetic) control system for MHD instabilities, consisting of 192 radial-field coils entirely covering the toroidal surface of the machine. Each coil is independently powered and paired with a radial-field sensor placed inside the stabilizing copper shell.
This system allows the implementation of advanced feedback control schemes for the active stabilization of MHD modes whose dynamics are too slow for them to be passively stabilized by the conducting shell. The shell, together with the other conducting structures interposed between the active coils and the sensors, introduces a dynamic behaviour in the input-output response of the system. This dynamics is strongly affected by the typically 3D character of the conducting structures, in particular by the cuts needed for the penetration of the axisymmetric field components, which introduce poloidal and toroidal mode couplings in the response of the system to an external magnetic field. This activity involved the construction and optimization of a mesh of the system of coils and conducting structures, suited to the CARIDDI FEM code developed by the CREATE consortium, followed by the derivation of a state-space representation of the resulting model. A good part of the work went into the Matlab® routines developed to build the mesh and for post-processing purposes. This activity has led to three main results. The first is a deeper understanding of the symmetry properties that characterize the machine. The second is the implementation of a new control algorithm based on the developed model, able to compensate in real time the effect introduced by the 3D structures. Finally, a new measurement cleaning algorithm to be introduced in the control scheme has been proposed, also based on the developed model and therefore able to account for the actual toroidal geometry, unlike the currently used algorithm, which is based on a cylindrical model.
The thesis is organized as follows: • Chapter 1 presents an overview of the progress of research and technology towards nuclear fusion as a possible sustainable energy source for the future. Nuclear fusion is considered in the context of the current availability of energy resources in the world. Some fundamental physics and engineering concepts are then recalled, together with the progress of recent years that has led to the international ITER project. In this context, the concept of plasma heating and the related methods are introduced, with a more detailed description of the NBI and its state of the art, anticipating concepts needed to understand the doctoral work described in the following chapters. A brief description of the RFX-mod experiment is also provided, again needed to introduce concepts related to the second subject of the thesis, described in the last chapter. • Chapter 2 focuses on the mathematical formulations underlying the numerical solution of magnetic problems. The various formulations are listed with the twofold aim of highlighting the great variety of methods suited to solving specific cases and of providing references for what is treated in the following paragraphs and chapters. A few words are also spent describing the edge-element approach to the finite element method and the advantages of its use. Finally, a brief description of the ANSYS® and CARIDDI codes is given. • Chapter 3 deals with the FEM models developed to optimize the magnetic configuration in the extraction and acceleration region of SPIDER. The magnetic sources present are described first, followed by the optimization procedure based on 2D models. Finally, the verification of the optimized configuration with 3D models, its final implementation and the novelties introduced are described.
• Chapter 4 reports the work carried out on the magnetic configuration of the MITICA experiment. The various alternative design concepts taken into consideration are described and compared, and the solution considered the best performing is finally proposed. • Chapter 5 presents the modelling activity on the RFX-mod active MHD control system. After a brief description of the system, the effects of the conducting structures in shaping its response are introduced, together with the concept of the modal decoupler. The procedure to derive the state-space representation from the model determined with the CARIDDI code is then described. The optimization of the mesh and the experimental benchmark of the results take up a large part of the chapter. The development and implementation of the so-called modal decoupler are then described in detail, and some preliminary experimental results are shown. The last paragraph describes the proposed new measurement cleaning algorithm. • Chapter 6, finally, summarizes the results obtained, draws conclusions and suggests some possible future developments.
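The abstract's central modelling step, deriving a state-space representation from a FEM model of the conducting structures, can be sketched in miniature: a lumped model L dx/dt + R x = P u is rewritten as dx/dt = A x + B u and discretized over one control period. The matrices below are random placeholders standing in for the actual CARIDDI output:

```python
import numpy as np
from scipy.linalg import expm

# Toy stand-in for the FEM-to-state-space procedure described above:
#   L dx/dt + R x = P u,   y = Q x
# where x are eddy-current states, u the active-coil currents and y the
# radial-field sensor outputs. All matrices are random placeholders.
rng = np.random.default_rng(0)
n, m, p = 6, 2, 2                          # states, inputs, outputs
G = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Lmat = G @ G.T + np.eye(n)                 # symmetric positive-definite "inductance"
Rmat = np.diag(rng.uniform(5.0, 10.0, n))  # positive "resistance" matrix
P = rng.standard_normal((n, m))
Q = rng.standard_normal((p, n))

# Continuous-time state-space form: dx/dt = A x + B u, y = Q x
A = -np.linalg.solve(Lmat, Rmat)
B = np.linalg.solve(Lmat, P)

# Exact zero-order-hold discretization over one control period Ts
Ts = 1e-3
Ad = expm(A * Ts)
Bd = np.linalg.solve(A, Ad - np.eye(n)) @ B

# Response of the sensors to a unit current step on the first coil
x = np.zeros(n)
u = np.array([1.0, 0.0])
for _ in range(10000):                     # simulate 10 s
    x = Ad @ x + Bd @ u
y = Q @ x

# Sanity check: at steady state L dx/dt = 0, so x -> R^{-1} P u
x_ss = np.linalg.solve(Rmat, P @ u)
print(np.abs(x - x_ss).max())
```

Because L is positive definite and R positive, A = -L⁻¹R is stable, so the discrete iteration converges to the steady state predicted by the resistive limit; in the real system a controller would invert this dynamics to compensate the shielding effect of the wall in real time.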
APA, Harvard, Vancouver, ISO, and other styles
49

PANZAVECCHIA, Nicola. "DEVELOPMENT AND CHARACTERIZATION OF ADVANCED METERING AND ICT SOLUTIONS FOR SMART ENERGY DISTRICTS." Doctoral thesis, Università degli Studi di Palermo, 2022. http://hdl.handle.net/10447/533641.

Full text
Abstract:
Climate change will affect the world's 8 billion inhabitants, the majority of whom live in cities, which account for roughly two-thirds of the CO2 emissions at the root of the climate crisis. To attain a net-zero carbon future, a rapid transition across business models and policy is required. At the same time, policy and legislation are struggling to keep up with smart technology and the Internet of Things. The management of smart urban infrastructure is the key to successful decarbonisation and the achievement of sustainable cities. In this framework, the smart electricity infrastructure shall be equipped with integrated technologies such as solar panels, storage facilities, electric vehicle charging, intelligent public lighting systems and sensors connected to a digital platform. This paradigm has changed the view of the power system itself: decades ago the energy infrastructure was built for a centralised power system, not for a decentralised and digitalised system in which energy flows bi-directionally within the grid. This increases the need for advanced metering and ICT solutions for the proper and safe management of the grid itself. This Ph.D. thesis proposes a smart architecture, as well as smart equipment and solutions, to suit the needs of the new power grid that will support smart energy districts. The developed architecture provides a distributed measurement system to keep the distributor updated on the status of the grid. Power Line Communication (PLC) has been chosen as the communication technology in order to allow the DSO to reduce the cost of upgrading the grid and to keep control over the communication medium. Within this architecture, several devices have been developed. In detail, a concentrator and a remote PLC bridge implementing the PLC-PRIME v1.4 protocol have been developed to fulfil the requirements of the architecture.
An IEC 61000-4-30/-4-7 Class S power quality analyser has been implemented on a low-cost STMicroelectronics platform already used for smart metering applications. Starting from the field measurement data collected, specific software has been developed as an oracle for the SCADA system, in order to provide Distribution System Operators (DSOs) with valuable information for better management of the power grid.
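The core computation of such a Class S analyser, RMS and harmonic estimation over a 10-cycle window in the spirit of IEC 61000-4-7, can be sketched as follows; the synthetic waveform, harmonic levels and sampling rate are assumptions for illustration only:

```python
import numpy as np

# Illustrative sketch of 10-cycle (50 Hz) RMS and harmonic analysis.
f0 = 50.0                  # fundamental frequency, Hz
fs = 12800.0               # sampling rate, Hz (256 samples per cycle)
N = int(10 * fs / f0)      # one 10-cycle analysis window -> 5 Hz bin spacing
t = np.arange(N) / fs

# Synthetic voltage: 230 V fundamental plus 5% 3rd and 3% 5th harmonics (RMS)
V1, V3, V5 = 230.0, 11.5, 6.9
v = (np.sqrt(2) * V1 * np.sin(2 * np.pi * f0 * t)
     + np.sqrt(2) * V3 * np.sin(2 * np.pi * 3 * f0 * t)
     + np.sqrt(2) * V5 * np.sin(2 * np.pi * 5 * f0 * t))

# Total RMS over the window
v_rms = np.sqrt(np.mean(v ** 2))

# DFT: with exactly 10 cycles in the window, harmonic h falls in bin 10*h
spec = np.fft.rfft(v) / N
mag_rms = np.sqrt(2) * np.abs(spec)        # single-sided RMS spectrum
harm = {h: mag_rms[10 * h] for h in range(1, 41)}

# Total harmonic distortion up to the 40th harmonic
thd = np.sqrt(sum(harm[h] ** 2 for h in range(2, 41))) / harm[1]
print(f"Vrms = {v_rms:.2f} V, THD = {100 * thd:.2f} %")
```

Choosing a window that spans an integer number of fundamental cycles places each harmonic exactly on a DFT bin, avoiding spectral leakage; real analysers additionally synchronize the window to the measured fundamental frequency.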
APA, Harvard, Vancouver, ISO, and other styles
50

Povoli, Marco. "Development of enhanced double-sided 3D radiation sensors for pixel detector upgrades at HL-LHC." Doctoral thesis, Università degli studi di Trento, 2013. https://hdl.handle.net/11572/368454.

Full text
Abstract:
The upgrades of High Energy Physics (HEP) experiments at the Large Hadron Collider (LHC) will call for new radiation-hard technologies to be applied in the next generations of tracking devices, which will be required to withstand extremely high radiation doses. In this sense, one of the most promising approaches to silicon detectors is the so-called 3D technology. This technology realizes columnar electrodes penetrating vertically into the silicon bulk, thus decoupling the active volume from the inter-electrode distance. 3D detectors were first proposed by S. Parker and collaborators in the mid '90s as a new sensor geometry intended to mitigate the effects of radiation damage in silicon. 3D sensors are currently attracting growing interest in the field of High Energy Physics, despite their more complex and expensive fabrication, because of their much lower operating voltages and enhanced radiation hardness. 3D technology was also investigated in other laboratories, with the intent of reducing fabrication complexity and aiming at medium-volume sensor production in view of the first upgrades of the LHC experiments. This work describes all the efforts in the design, fabrication and characterization of the 3D detectors produced at FBK for the ATLAS Insertable B-Layer, in the framework of the ATLAS 3D sensor collaboration. In addition, the design and preliminary characterization of a new batch of 3D sensors are also described, together with new applications of 3D technology.
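The benefit of decoupling the active volume from the inter-electrode distance can be quantified with the textbook one-sided abrupt-junction depletion formula; all numbers below are illustrative assumptions, not values from the thesis:

```python
# Back-of-the-envelope comparison of the full-depletion voltage of a planar
# sensor vs. a 3D sensor, using V_depl = q * Neff * d^2 / (2 * eps_Si):
# shrinking the drift distance d (without thinning the wafer) lowers the
# operating voltage quadratically.
Q = 1.602e-19                  # elementary charge, C
EPS_SI = 11.9 * 8.854e-12      # permittivity of silicon, F/m
NEFF = 5e12 * 1e6              # effective doping concentration, m^-3 (assumed)

def v_depl(d):
    """Full-depletion voltage for drift distance d (metres)."""
    return Q * NEFF * d ** 2 / (2 * EPS_SI)

v_planar = v_depl(300e-6)      # planar sensor: d = wafer thickness
v_3d = v_depl(50e-6)           # 3D sensor: d = inter-electrode spacing
print(f"planar: {v_planar:.0f} V, 3D: {v_3d:.1f} V")
```

With these assumed figures the 3D geometry depletes at a voltage 36 times lower than the planar one, which is the essential reason for the lower operating voltages and better post-irradiation behaviour mentioned in the abstract.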
APA, Harvard, Vancouver, ISO, and other styles