Dissertations / Theses on the topic 'Analysi'

Consult the top 50 dissertations / theses for your research on the topic 'Analysi.'


1

Gopinathan, Vaishnavan, and Kavya Vijayam Prakash. "Lifestyle and customs of Ukraine and India: comparative analysi." Thesis, Sumy State University, 2019. https://essuir.sumdu.edu.ua/handle/123456789/77310.

Full text
Abstract:
Ukraine and India differ in many respects, whether in traditions or lifestyle, and the two countries offer a wide range of points of comparison and contrast. India's culture is among the world’s oldest; civilization in India began about 4,500 years ago. Many sources describe it as "Sa Prathama Sanskrati Vishvavara" − the first and supreme culture in the world [1]. Indians made significant advances in architecture (the Taj Mahal), mathematics (the invention of zero) and medicine (Ayurveda). Today, India is a very diverse country with more than 1.2 billion people, making it the second most populous nation after China [1]. Art, religion, architecture, language, food and fashion are just some of the many aspects of Indian culture. India is also well known for its film industry, often referred to as Bollywood. The country's movie history began in 1896, when the Lumière brothers demonstrated the art of cinema in Mumbai [2]. Today, the films are known for their elaborate singing and dancing. Indian dance, music and theater traditions span more than 2,000 years. The major classical dance traditions − Bharata Natyam, Kathak, Odissi, Manipuri, Kuchipudi, Mohiniattam and Kathakali − draw on themes from mythology and literature and have rigid presentation rules [2].
APA, Harvard, Vancouver, ISO, and other styles
2

CAPPELLO, Riccardo. "Progressi sperimentali e numerici nella valutazione dell'integrita strutturale dei solidi mediante Thermoelastic Stress Analysis." Doctoral thesis, Università degli Studi di Palermo, 2023. https://hdl.handle.net/10447/580150.

3

CRIPPA, CHIARA. "Regional and local scale analysis of very slow rock slope deformations integrating InSAR and morpho-structural data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/306309.

Abstract:
Slow rock slope deformations (DSGSDs and large landslides) are widespread, affect entire hillslopes and displace volumes of up to billions of cubic meters. They evolve over long timescales through progressive failure processes, under variable climatic and hydro-mechanical coupling conditions mirrored by a complex creep behaviour. Although characterized by low displacement rates (up to a few cm/yr), these slope instabilities damage sensitive structures and host nested sectors potentially undergoing rockslide differentiation and collapse. A robust characterization of the style of activity of slow rock slope deformations is required to predict their interaction with elements at risk and anticipate possible failure, yet a comprehensive methodology to this aim is still lacking. In this perspective, we developed a multi-scale methodology integrating geomorphological mapping, field data and different DInSAR techniques, using an inventory of 208 slow rock slope deformations in Lombardia (Italian Central Alps), for which we performed geomorphological and morpho-structural mapping on aerial images and DEMs. On the regional scale, we developed an objective workflow for the inventory-scale screening of slow-moving landslides. The approach is based on a refined definition of activity that integrates the displacement rate, kinematics and degree of internal damage for each landslide. Using PS-InSAR and SqueeSAR datasets, we developed an original peak analysis of InSAR displacement rates to characterize the degree of segmentation and heterogeneity of mapped phenomena, highlight the occurrence of sectors with differential activity and derive their characteristic displacement rates. Using 2DInSAR velocity decomposition and machine learning classification, we set up an original automatic approach to characterize the kinematics of each landslide.
Then, we sequentially combined PCA and K-medoid cluster analysis to identify groups of landslides characterized by consistent styles of activity, accounting for all the relevant aspects including velocity, kinematics, segmentation, and internal damage. Starting from the results of the regional-scale classification, we focused on the Corna Rossa, Mt. Mater and Saline DSGSDs, emblematic case studies on which to apply DInSAR analysis to investigate typical issues in large landslide studies (spatial segmentation, heterogeneous activity, sensitivity to hydrological triggers). We applied a targeted DInSAR technique on multiple temporal baselines to unravel the spatial heterogeneities of complex DSGSDs and, through a novel stacking approach on raw long-temporal-baseline interferograms, we outlined the permanent displacement signals and sectors with differential evolution, as well as individual active structures. We then used DInSAR to investigate the possible sensitivity of slow rock slope deformations to hydrological triggers. Comparison between seasonal displacement rates, derived from interferograms with targeted temporal baselines, and time series of precipitation and snowmelt at the Mt. Mater and Saline ridge outlined complex, temporally shifted seasonal displacement trends. These trends, more evident for shallower nested sectors, outline dominant controls by prolonged precipitation periods modulated by the effects of snowmelt. This suggests that DSGSDs, often considered insensitive to short-term (pluri-annual) climatic forcing, may respond to hydrological triggering, with key implications for the interpretation of their progressive failure. Our results demonstrated the effectiveness of the proposed multi-scale methodology, which exploits DInSAR products and targeted processing to identify, classify and characterize the activity of slow rock slope deformations at different levels of detail by including geological data in all the analysis stages.
Our approach, readily applicable to different settings and datasets, provides the tools to solve key scientific issues in a geohazard-oriented study of slow rock slope deformations.
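The regional-scale classification combines PCA with K-medoid clustering. A minimal sketch of that pipeline is shown below; it is purely illustrative — the synthetic data, the naive PAM-style medoid update and the feature layout are assumptions, not the thesis's actual implementation or dataset.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project standardized features onto the leading principal components."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def k_medoids(X, k, n_iter=100, seed=0):
    """Naive PAM-style k-medoids: alternate assignment and medoid update."""
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)       # nearest-medoid assignment
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:                            # keep old medoid if cluster empty
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):        # converged
            break
        medoids = new_medoids
    return labels, medoids
```

In a setting like the one described, each landslide would be represented by a feature vector (displacement rate, kinematics, segmentation, internal damage indices), and the resulting clusters would correspond to candidate "styles of activity".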
4

CERUTI, FRANCESCA. "Il settore estrattivo in Italia. Analisi e valutazione delle strategie competitive per lo sviluppo sostenibile." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/41871.

Abstract:
This thesis deals with strategic management in a particular Italian industry: the mining sector. An integrated methodology is proposed to implement a competitive analysis and explore trends in the mining sector in terms of the business strategies adopted and the strategic groups that exist. The methodology uses secondary sources to carry out an environmental analysis and an accounting analysis; the latter draws on key performance indicators of the sector and highlights the distribution of firms throughout the country. The research also provides information from a questionnaire about production, management, competitiveness, internationalization, innovation, recycling and the environmental sustainability of firms' behavior. Through PCA - Principal Component Analysis - positioning maps were created to identify strategic groups on the basis of four latent variables: competitive advantages, goals, tools and future investments. The research design aims not only to bring out the profile of the Italian mining industry, but also to delineate its trajectories and possible developments.
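A PCA-based positioning map of the kind described can be sketched in a few lines. The indicator matrix and the variable names below are hypothetical stand-ins (the questionnaire data are of course not reproduced here); each firm's coordinates on the two leading components become its position on the map.

```python
import numpy as np

def positioning_map(X, feature_names):
    """Return 2-D PCA coordinates (a 'positioning map') and component loadings."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the indicators
    cov = np.cov(Xs, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    order = np.argsort(eigval)[::-1][:2]        # pick the two largest components
    components = eigvec[:, order]
    scores = Xs @ components                    # firm coordinates on the map
    loadings = dict(zip(feature_names, components))  # how each variable loads
    return scores, loadings
```

Plotting `scores` and reading the `loadings` against axes labelled by the dominant latent variables (e.g. "competitive advantages" vs "future investments") is one plausible way such maps are built.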
5

VAZ, SANDRA M. "Analise de extratos de plantas medicinais pelo metodo de ativacao com neutrons." reponame:Repositório Institucional do IPEN, 1995. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10457.

Dissertation (Master's)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
6

D'Alessandro, S. "POLYNOMIAL ALGEBRAS AND SMOOTH FUNCTIONS IN BANACH SPACES." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/244407.

Abstract:
According to the fundamental Stone-Weierstrass theorem, if X is a finite dimensional real Banach space, then every continuous function on the unit ball B_X can be uniformly approximated by polynomials. For infinite dimensional Banach spaces the statement of the Stone-Weierstrass theorem is false, even if we replace continuous functions by uniformly continuous ones (a natural condition that coincides with continuity in the finite dimensional setting): in fact, on every infinite dimensional Banach space X there exists a uniformly continuous real function not approximable by continuous polynomials. The natural problem of the proper generalization of the result to infinite dimensional spaces was posed by Shilov (in the case of a Hilbert space). Aron observed that the uniform closure on B_X of the space of all polynomials of finite type is precisely the space of all functions which are weakly uniformly continuous on B_X. Since there exist infinite dimensional Banach spaces such that all bounded polynomials are weakly uniformly continuous on B_X (e.g. c_0, or more generally all Banach spaces not containing a copy of l_1 and such that all bounded polynomials are weakly sequentially continuous on B_X), this result gives a very satisfactory solution to the problem. Unfortunately, most Banach spaces, including L_p, do not have this special property. In this case, no characterization of the uniform limits of polynomials is known. But the problem has a more subtle formulation as well. Let us consider the algebras consisting of all polynomials which can be generated by finitely many algebraic operations of addition and multiplication, starting from polynomials on X of degree not exceeding n. Of course, such polynomials can have arbitrarily high degree.
It is clear that, if n is the lowest degree such that there exists a polynomial P which is not weakly uniformly continuous, then we have equalities among the algebras up to n-1, followed by a strict inclusion. The problem of what happens from n on has been studied in several papers. The natural conjecture appears to be that once the chain of equalities has been broken, it is going to be broken at each subsequent step. The proof of this latter statement given by Hajek in 1996, for all classical Banach spaces, based on the theory of algebraic bases, is unfortunately not entirely correct, as was pointed out by our colleague Michal Johanis. It is not clear to us whether the theory of algebraic bases developed therein can be salvaged. Fortunately, the main statement of this theory can be proved using another approach. The complete proof can be found in this thesis. Most of the results in this area are therefore safe. The main result of this thesis implies all previously known results in this area (all confirming the above conjecture) as special cases. We also give solutions to three other problems posed in the literature, which concern smooth functions rather than polynomials, but which belong to the same field of study of smooth mappings on a Banach space. The first result is a construction of a non-equivalent C^k-smooth norm on every Banach space admitting a C^k-smooth norm, answering a problem posed in several places in the literature. We solve another question by proving that a real Banach space admitting a separating real analytic function whose holomorphic extension is Lipschitz in some strip around X admits a separating polynomial. Finally, we solve a problem posed by Benyamini and Lindenstrauss, concerning the extension of uniformly differentiable functions from the unit ball to a larger set, preserving the values in some neighbourhood of the origin.
More precisely, we construct an example of a uniformly differentiable real-valued function f on the unit ball of a certain Banach space X, such that there exists no uniformly differentiable function g on cB_X for any c>1 which coincides with f in some neighbourhood of the origin. To do so, we construct suitable renormings of c_0, based on the theory of W-spaces.
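The chain of algebras discussed above can be stated compactly. Writing A_n(X) for the algebra generated by the polynomials on X of degree at most n (notation assumed here for illustration; the thesis may use different symbols):

```latex
% A_n(X): polynomials obtainable from polynomials of degree <= n on X
% by finitely many additions and multiplications.
%
% If n is the least degree admitting a polynomial on X that is not
% weakly uniformly continuous on B_X, then
\mathcal{A}_1(X) = \mathcal{A}_2(X) = \dots = \mathcal{A}_{n-1}(X)
  \subsetneq \mathcal{A}_n(X),
% and the conjecture is that every subsequent inclusion is strict as well:
\mathcal{A}_k(X) \subsetneq \mathcal{A}_{k+1}(X)
  \qquad \text{for every } k \ge n-1 .
```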
7

SANTORO, MAURO. "Inference of behavioral models that support program analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19514.

Abstract:
The use of models to study the behavior of systems is common to all fields. A behavioral model formalizes and abstracts the view of a system and gives insight into the behavior of the system being developed. In the software field, behavioral models can support software engineering tasks. In particular, behavioral models feature in all the main analysis and testing activities: they are used in program comprehension to complement the information available in specifications, in testing to ease test case generation, as oracles to verify the correctness of executions, and as failure detectors to automatically identify anomalous behaviors. When behavioral models are not part of specifications, automated approaches can derive them from programs. The degree of completeness and soundness of the generated models depends on the kind of inferred model and the quality of the data available for the inference. When model inference techniques do not work well or the data available for the inference are poor, the many testing and analysis techniques based on these models will necessarily provide poor results. This PhD thesis concentrates on the problem of inferring Finite State Automata (likely the model most used to describe the behavior of software systems) that describe the behavior of programs and components and can support testing and analysis activities.
The thesis contributes to the state of the art by: (1) Empirically studying the effectiveness of techniques for the inference of FSAs when a variable amount of information (from scarce to good) is available for the inference; (2) Empirically comparing the effectiveness of techniques for the inference of FSAs and Extended FSAs; (3) Proposing a white-box technique that infers FSAs from service-based applications by starting from a complete model and then refining the model by incrementally removing inconsistencies; (4) Proposing a black-box technique that infers FSAs by starting from a partial model and then incrementally producing additional information to increase the completeness of the model.
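To make the FSA-inference setting concrete: the usual starting point for inferring an automaton from execution traces is a prefix-tree acceptor, which state-merging techniques then generalize. The sketch below is a generic illustration of that starting point, not the thesis's own white-box or black-box algorithms.

```python
class FSA:
    """Prefix-tree acceptor built from positive execution traces."""

    def __init__(self):
        self.trans = {0: {}}       # state -> {event: next state}; 0 is the start state
        self.accepting = set()     # states where an observed trace ended

    def add_trace(self, trace):
        """Extend the automaton so that it accepts the given event sequence."""
        state = 0
        for event in trace:
            if event not in self.trans[state]:
                new_state = len(self.trans)
                self.trans[new_state] = {}
                self.trans[state][event] = new_state
            state = self.trans[state][event]
        self.accepting.add(state)

    def accepts(self, trace):
        """Check whether the event sequence is a recognized behavior."""
        state = 0
        for event in trace:
            if event not in self.trans[state]:
                return False
            state = self.trans[state][event]
        return state in self.accepting
```

A state-merging step (e.g. merging states with identical k-length continuations) would then fold this tree into a more general, more compact model — the kind of generalization whose soundness and completeness the thesis evaluates empirically.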
8

SPAGNUOLO, Gandolfo Alessandro. "INTEGRATED MULTI-PHYSICS DESIGN TOOL FOR FUSION BREEDING BLANKET SYSTEMS - DEVELOPMENT AND VALIDATION." Doctoral thesis, Università degli Studi di Palermo, 2020. http://hdl.handle.net/10447/395226.

Abstract:
The Breeding Blanket (BB) of the DEMO reactor is a complex system operating in a harsh environment. It has to satisfy engineering requirements and constraints of a nuclear, thermo-structural, material and safety kind. For these reasons, the application of advanced simulation tools, based on a multi-physics approach, is required for its comprehensive design. These tools have to simultaneously perform different kinds of analyses, among which three (nuclear, thermofluid-dynamic and thermo-mechanical) can be prioritized and considered preparatory for the investigation of all the other issues related to the BB. In this dissertation, a multi-physics approach covering the three pillars of BB design (neutronics, thermal-hydraulics and thermo-mechanics) is proposed. These analyses have to be conducted in a strongly integrated way, allowing a holistic assessment of volumetric heat loads and of the thermal performance of coolant and structures, as well as of their stress and deformation states. The strategy followed to meet this challenge consists of creating a CAD-centric, loosely-coupled procedure for the design of BB concepts that adopts a sub-modelling technique, named Multi-physics Approach for Integrated Analysis (MAIA). The MAIA procedure bases its architecture on the use of validated codes and on the minimisation of their number. It is articulated in 10 main steps that go from the decomposition of a generic CAD model into a format suitable for neutron/photon transport analysis to the nuclear analysis for the assessment of volumetric heating, from the assessment of temperature and velocity fields within coolant and structure to the evaluation of their displacement, deformation and stress fields, and from the evaluation of nitrogen isotope production rates from the activation of oxygen in the water to the calculation of their concentration spatial distribution taking into account the effects of passive convective transport.
All the steps share the same geometry details and the consistency between input and output parameters. The new MAIA procedure differs from conventional coupling approaches in three key respects. First, it does not introduce homogenisations of models and loads. Second, MAIA can capture load gradients at high resolution in the three directions for all the analyses involved without requiring prohibitive computational effort. And third, MAIA keeps the consistency between the three analyses, maintaining the congruence between inputs and outputs. However, the computational effort required by the CAD-centric feature of the MAIA procedure imposes the representation of BB portions only and, therefore, the definition and validation of boundary conditions for each calculation performed. Regarding the nuclear analysis, it has been found that the set of reflecting and white conditions in the poloidal and toroidal directions, respectively, together with the presence of the Vacuum Vessel (VV) and the definition of a local neutron and photon source, produces a mismatch of -0.48 % in terms of power deposition between the DEMO and the local (e.g. slice) models. It has been demonstrated that the neutronic symmetry conditions are valid in the entire module up to the last slices near the caps. Furthermore, a sensitivity analysis on the angular distribution of the local neutron and photon source has been performed, indicating 10 cosine bins as the optimal discretisation choice, as a compromise between the fidelity of the results with respect to those of the reference model and the relevant computational effort. Concerning the analysis of thermal-hydraulic boundary conditions, it has been found that variations in mass flow rates (between ~-1.3 % and ~0.6 %) as well as power density fluctuations (up to ~6 % in the neighbouring domains) affect the temperature distribution by less than ±2.4 %, demonstrating the applicability of poloidal symmetry conditions.
As far as the thermo-mechanical analyses are concerned, a set of boundary conditions has been identified (radial and toroidal displacements prevented at the nodes lying at the rear of the back supporting structure along the toroidal and poloidal directions, symmetry at the lower cut surface and Generalised Plane Strain at the top one) that produces a discrepancy in terms of displacement in the sub-model of between -6 % and 4 %, as well as a conservative assessment of membrane and bending stresses, both primary and secondary. The impact of the temperature variation has also been investigated, showing that the fluctuations on total deformation lie between -0.3 % and 1.7 %, on equivalent membrane stress up to 15 %, and on equivalent bending stress between -7 % and 5 %. As a proof of concept, the MAIA procedure has then been used to evaluate the impact on the BB design, demonstrating that some criticalities are present in the design. In particular, the fluid-dynamic results show violations of the temperature requirement limits that have not been solved by introducing proper design solutions. Furthermore, these violations of thermal-hydraulic requirements produce very intense values of the Von Mises equivalent stresses that could jeopardize the structural integrity of the segment box. This demonstrates that the MAIA procedure can become the reference tool for the design of the BB. Moreover, the MAIA procedure has proven able to locally map important variables such as the neutron flux and the temperature, as well as the primary and secondary stresses that are used for the determination of the allowable stress and for comparison with the design criteria. In order to further demonstrate the versatility and adaptability of the MAIA procedure, the water activation issue occurring within the blanket Primary Heat Transfer System (PHTS) has been studied.
Using the MAIA procedure, it has been possible to take into account the effects of the flow on the nitrogen concentration and to provide useful information for the development of both the BB design and its PHTS.
APA, Harvard, Vancouver, ISO, and other styles
9

POZZI, FEDERICO ALBERTO. "Probabilistic Relational Models for Sentiment Analysis in Social Networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/65709.

Full text
Abstract:
The huge amount of textual data on the Web has grown rapidly in the last few years, creating unique content of massive dimensions that constitutes fertile ground for Sentiment Analysis. In particular, social networks represent an emerging and challenging sector where people's natural language expressions can be easily reported through short but meaningful text messages. This unprecedented content of huge dimensions needs to be efficiently and effectively analyzed to create actionable knowledge for decision-making processes. A key piece of information that can be grasped from social environments relates to the polarity of text messages, i.e. the sentiment (positive, negative or neutral) that the messages convey. However, most of the works regarding polarity classification usually consider text as the only information used to infer sentiment, not taking into account that social networks are actually networked environments. A representation of real-world data where instances are considered homogeneous, independent and identically distributed (i.i.d.) leads to a substantial loss of information and to the introduction of a statistical bias. For this reason, the combination of content and relationships is a core task of the recent literature on Sentiment Analysis, where friendships are usually investigated to model the principle of homophily (contact among similar people occurs at a higher rate than among dissimilar people). However, paired with the assumption of homophily, constructuralism explains how social relationships evolve via dynamic and continuous interactions as the knowledge and behavior that two actors share increase. Considering the similarity among users on the basis of constructuralism appears to capture a much more powerful force than interpersonal influence within the friendship network. As its first contribution, this Ph.D. 
thesis proposes the Approval Network as a novel graph representation that jointly models homophily and constructuralism, intended to better represent contagion on social networks. Starting from the classical state-of-the-art methodologies where only text is used to infer the polarity of social network messages, this thesis presents novel Probabilistic Relational Models at the user, document and aspect level which integrate structural information to improve classification performance. The integration is particularly useful when textual features do not provide sufficient or explicit information to infer sentiment (e.g., "I agree!"). The experimental investigations reveal that incorporating network information through approval relations can lead to statistically significant improvements over the performance of complex learning approaches based only on textual features.
APA, Harvard, Vancouver, ISO, and other styles
10

Calderazzi, G. "OPTIMIZATION OF THE EXPERIMENTAL SETTINGS FOR THE STUDY OF ESTROGEN ACTION IN INFLAMMATION." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/469186.

Full text
Abstract:
Estrogens, by binding to their estrogen receptors (ERs), influence an enormous variety of cells, including those of the immune system. When estrogen production ceases, some women may develop pathologies such as osteoporosis, atherosclerosis and neurodegenerative diseases, all characterized by a strong and uncontrolled inflammatory response. Inflammation is mostly driven by macrophages, which are able to sense every signal from the microenvironment in which they reside and to undergo phenotypic and metabolic modifications that allow them to remove insults and to repair and resolve tissue damage. Several lines of evidence demonstrate that 17β-estradiol is able to regulate the inflammatory response and restore tissue homeostasis through the direct modulation of macrophages. However, current knowledge of the mechanism of estrogen action on macrophages and of the precise identity of estrogen target genes appears limited, since the studies on the subject are based on experimental models of inflammatory conditions. With the aim of identifying all the genes and cellular pathways involved in the regulation of macrophage homeostasis in response to estrogen without any other stimulus, a genome-wide gene expression study was conducted on peritoneal macrophages isolated from mice treated in vivo with estrogen. We first identified macrophages isolated from the peritoneum of female mice treated with estrogen in vivo as the macrophage cell model most faithful to physiological conditions. We then carried out the gene expression analysis of the macrophage response to estrogen. 
In order to maintain physiological conditions as far as possible, we treated mice with physiological concentrations of 17β-estradiol (5 μg/kg) in vivo for 3 or 24 hours, and gene expression analysis was performed on their peritoneal macrophages. The experimental groups were chosen according to the endogenous estrogen content. In particular, we used: Group 1, females in Metestrus (ME), with the lowest blood estrogen levels; Group 2, females in Estrus (E), which represents the phase of the estrous cycle temporally closest to the physiological increase in endogenous estrogens occurring during the immediately preceding Proestrus phase; Group 3, females in Metestrus treated for 3 hours with a subcutaneous injection of 17β-estradiol (ME+3hE2); Group 4, females in Metestrus treated for 24 hours with a subcutaneous injection of 17β-estradiol (ME+24hE2). To isolate the peritoneal macrophages, magnetic beads pre-loaded with antibodies against the CD11b protein were used. We then obtained a list of estrogen target genes that appear differentially regulated across the four hormonal conditions analyzed. Through bioinformatic and bibliometric analyses, we obtained indications of the possible functional pathways regulated by estrogen in peritoneal macrophages. The results of these analyses led to the identification of possible estrogen target genes in macrophages, which can be divided into early, late and persistently regulated genes, and which can be considered direct targets of estrogen action in macrophages. Further studies on the molecular details of estrogen action in macrophages will shed light on the biological role of these hormones in vivo and may lead to the identification of new therapies and therapeutic targets. 
Knowledge of the new mechanisms through which estrogen regulates macrophage activity is useful to begin understanding the physiological role of the estrogen-macrophage interplay, possibly extending this information to the pathological inflammatory conditions in which estrogen is known to be involved, such as endometriosis, uterine tumors, infertility and pathologies of the reproductive tract.
Estrogen hormones, binding to estrogen receptors (ERs), influence a wide variety of cell types, including those of the immune system. When estrogen production ceases, women may suffer from pathologies that are all associated with a strong and dysregulated inflammatory response, such as osteoporosis, atherosclerosis and neurodegenerative diseases. Inflammation is mainly driven by macrophages, which are able to sense any microenvironment signal and undergo metabolic and phenotypic adaptations that allow them to remove the insult, repair and resolve tissue damage. Several reports have shown that 17β-estradiol regulates the inflammatory response and the restoration of tissue homeostasis through a direct modulation of macrophages. Nevertheless, current knowledge on E2 action in macrophages and on the identity of estrogen target genes appears limited, since it is based on experimental models of inflammatory conditions. With the aim of identifying all the genes and cellular pathways that are involved in the homeostatic regulation of macrophages in response to estrogen alone, a genome-wide gene expression study of peritoneal macrophages of mice treated in vivo with E2 was performed. In particular, we first identified the most faithful macrophage model for such a study, represented by macrophages isolated from the peritoneum of female mice treated with estrogen in vivo. Subsequently, we performed a gene expression analysis of the macrophage response to the estrogen stimulus alone. In order to maintain physiological conditions, we treated mice with a physiological concentration of 17β-estradiol (5 μg/kg) in vivo for 3 or 24 hours, and their peritoneal macrophages were then analyzed for gene expression. The experimental groups were chosen according to the endogenous estrogen content. 
We used: Group 1, Metaestrous (ME), with the lowest levels of estrogen; Group 2, Estrous (E), which represents the phase closest to the endogenous increase of estrogen that occurs in Proestrous; Group 3, Metaestrous treated with a subcutaneous injection of 17β-estradiol for 3 h (ME+3hE2); and Group 4, Metaestrous treated with a subcutaneous injection of 17β-estradiol for 24 h (ME+24hE2). Magnetic beads pre-loaded with antibodies against CD11b were used to isolate the peritoneal macrophages. We then obtained a list of estrogen target genes, which appear differentially regulated among the four hormonal conditions analyzed. In addition, by performing bioinformatic and bibliometric analyses, we obtained indications of the functional pathways regulated by estrogen in peritoneal macrophages. The results led to the identification of possible estrogen target genes in macrophages, which can be grouped into early, late and persistently regulated genes and can be considered direct targets of estrogen action in macrophages. More studies on the molecular details of estrogen action in macrophages will shed more light on the biological role of these hormones in vivo and on the identification of novel therapies and therapeutic targets. The knowledge of new mechanisms by which estrogen regulates the activity of macrophages will be useful to start understanding the physiological role of this interplay, possibly extending this information to pathological inflammatory conditions in which estrogen is also involved, such as endometriosis, uterine tumors, infertility and reproductive pathologies.
APA, Harvard, Vancouver, ISO, and other styles
11

LIU, XIAOQIU. "Managing Cardiovascular Risk in Hypertension: Methodological Issues in Blood Pressure Data Analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/154475.

Full text
Abstract:
Hypertension remains, in 2017, a leading cause of mortality and disability worldwide. A number of issues related to the determinants of cardiovascular risk in hypertensive patients and to the strategies for better hypertension control are still pending. In such a context, the aims of my research program were: 1. To investigate the contribution of blood pressure variability to the risk of cardiovascular mortality in hypertensive patients. In this setting, different methods for assessing blood pressure variability and different models exploring the link between blood pressure variability and outcome were investigated. 2. To assess the possibility that a hypertension management strategy based on the hemodynamic assessment of patients through impedance cardiography might lead to better hypertension control over 24 hours than a conventional approach based only on blood pressure measurement during clinic visits. To these aims, this thesis summarizes data obtained by performing a) an in-depth analysis of a study conducted in the Dublin hypertensive population, including 11,492 subjects, and b) the analysis of longitudinal data collected in the frame of the BEAUTY (BEtter control of blood pressure in hypertensive pAtients monitored Using the hoTman® sYstem) study. In the Dublin study, the Cox proportional hazards model and accelerated failure time models were used to estimate the additional effect of blood pressure variability on cardiovascular mortality over and above the effect of increased mean BP levels, in an attempt to identify the best threshold values for risk stratification. In the BEAUTY study, on the other hand, mixed models and generalized estimating equations were used for the longitudinal data analysis.
APA, Harvard, Vancouver, ISO, and other styles
12

VESHO, Nikolla. "A methodology on the treatment of the issue of Cultural heritage restoration in Tirana, period 1920-’40 BIM modeling, Seismic simulation and Theoretical interpretations." Doctoral thesis, Università degli studi di Ferrara, 2022. https://hdl.handle.net/11392/2501205.

Full text
Abstract:
History has left its mark on Albanian culture and society, from the "Roman Empire, Greek colonies, Turkish Empire, to the Balkan and World Wars". The influence of Italian designers between 1920 and 1940 can hardly be overstated. The architecture of the center of Tirana, reflected mainly in the buildings along the main boulevard that divides the city into two parts, is an "undeniable proof of the symbiotic process" that has always characterized the relationship between Italian and Albanian culture in all its dimensions. The cultural heritage buildings of Tirana are structures and monuments of great value, since they bear witness to historic forms of life and to the history of modern societies, and show the existence of a tangible cultural identity. The architectural heritage adds character to its surroundings, is an integral part of the city, and is also a valuable tourism resource. The study is carried out initially through theoretical approaches and literature research, while later the methodology takes shape through the treatment of several case studies which have not previously been examined in this specific context. The Albanian context has shown a lack of cooperation among teams of experts, and governments have made this contribution even more fragile. To achieve this goal, the right atmosphere of cooperation between experts must be created, in order to take their diverse perspectives into account. On the other hand, this research aims to raise awareness of Tirana's cultural heritage by fostering a debate on cultural heritage and its future within a structured heritage framework. The study also seeks to stimulate the process of digitalization through the creation of 3D models, including BIM, so as to build an accessible database of materials, elements, combined architectural and structural models, and a database for additions and retrofittings. 
The buildings that surround us today define the spaces of our cities and tell us how we were, how to judge the past and, above all, how we can project the future.
History has left its mark on Albanian society and culture, from the Greek colonies to the Roman Empire, and from the Ottoman Empire to the Balkan Wars and the World Wars. The traces and influence left by the Italian architects between 1920 and 1940 are indelible. The architecture of the center of Tirana, reflected mainly along the main boulevard that divides the city into two parts, is lasting proof of the symbiotic process that has characterized cultural relations between Italy and Albania in all dimensions. Research and analysis of the materials found in the archive of the Technical Institute of Tirana and in the archive of the Istituto Luce, specifically in the archives of L. Luigi and G. Fiorini, have helped in better understanding the Italian architects' vision for Tirana. Today these drawings serve as a point of reference for architects, engineers and restorers with regard to the reconstruction and maintenance interventions that these buildings need. After the Second World War these buildings were perceived as a memory and a sign of foreign invasion; over the last 30 years these ideologies and superstitions have been replaced by a more peaceful evolutionary process. Built between the 1920s and 1940s for other purposes, after the liberation they changed their intended use, becoming universities, study centers and ministry headquarters, thus becoming an inseparable part of everyday life and entering the history of the city, intersecting and communicating with one another. Today this shared historical-cultural heritage has helped to change the initial perception. This cultural heritage of Tirana provides structures of great value, as they are proof of history, of forms of life and of the modern history of society. The architectural heritage gives a distinctive aspect to the surrounding context, is one of the essential elements of the city and, even more, becomes a tourism resource. 
Today this architectural cultural heritage is recognized as fragile and irreplaceable, a heritage that must be preserved and transmitted to future generations. However, protection policies are not always as effective as they should be, a problem compounded by citizens' disinterest and a very adverse socio-political climate. The performance of cultural heritage buildings after earthquakes is a concern for many professionals from different disciplines. Although everyone advocates safeguarding this cultural-architectural heritage from the risk of earthquake damage, the foreseeable damage to the aesthetic values of these buildings (caused by additions and requalification processes) is compounded by the lack of a consensual decision on whether or not intervention is needed to safeguard these treasures. This stems from the lack of a solid base of principles for weighing the factors that affect seismic risk reduction in buildings: the consolidation of the structure on the one hand, and the damage to the values attributed to specific intervention techniques on the other. The age of these buildings is approaching 100 years, a fact that makes us aware of the greatness of the architects of that period. In this regard, this study attempts to define the principles and secrets of structural simplicity and to offer a theoretical reinterpretation of the structural topics: the plan and elevation configurations, the structural elements most used in the period, and their ability to withstand a large number of seismic events. The strategic vision of this study is to develop a detailed methodology that will serve as a reference manual for addressing both restoration problems and the problems of structural analysis of cultural heritage, in line with the principles of the international charters on restoration.
APA, Harvard, Vancouver, ISO, and other styles
13

Marini, F. "PARALLEL ADDITIVE SCHWARZ PRECONDITIONING FOR ISOGEOMETRIC ANALYSIS." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/336923.

Full text
Abstract:
We present a multi-level massively parallel additive Schwarz preconditioner for Isogeometric Analysis, a FEM-like numerical method for PDEs that permits exact geometry representation and high-regularity basis functions. Two model problems are considered: the scalar elliptic equation and the advection-diffusion equation. Theoretical analysis proves that the adoption of a coarse correction grid is crucial in order to make the condition number of the preconditioned stiffness matrix independent of the number of subdomains, whenever the ratio between the coarse mesh size and the fine mesh size is kept fixed. Numerical tests for the scalar elliptic equation (in 2D and 3D, on trivial and non-trivial domains) confirm the theory. The preconditioner is then applied to the advection-diffusion equation in 2D and 3D. Again, the numerical results show that the condition number of the preconditioned linear system scales with the number of subdomains up to 8100 processors, possibly with SUPG stabilization. The tests are implemented in the C programming language on top of the PETSc library.
APA, Harvard, Vancouver, ISO, and other styles
14

CONSOLAZIO, DAVID. "Social and Spatial Inequalities in Health in Milan: the Case of Type 2 Diabetes Mellitus." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/263136.

Full text
Abstract:
This doctoral thesis aims to investigate the state of health inequalities in the city of Milan. Health inequalities refer to differences in people's health within a population, or between groups of individuals, when these are attributable to people's socioeconomic conditions, by virtue of the unequal distribution of the social, economic, cultural and relational resources that allow each person to reach their health potential. In addition, the achievement of an optimal state of health can also be influenced by the material and psychosocial characteristics of the residential context, exposing those who live in disadvantaged contexts to greater risks to their health. Moving from the theoretical and conceptual premises of the Fundamental Causes Theory and the social determinants of health approach, this work aims to provide a mapping of the distribution of health conditions within the Milanese territory, also contributing to the debate on the presence of neighbourhood effects on health. The work is based on an interdisciplinary approach, drawing on sociological, epidemiological and geographical methods and tools. A detailed study of the social and territorial distribution of a disease across the different neighbourhoods of the city has been lacking to date, so we decided to focus on Type 2 Diabetes Mellitus in light of its typical association with both individual socioeconomic conditions and the characteristics of the living environment. 
Making novel use of administrative healthcare data provided by the Epidemiology Unit of the Health Protection Agency of the Metropolitan City of Milan, combined with data from the most recent Italian population census, we conducted a multilevel case-control study with the aim of examining the relative impact of individual socioeconomic conditions and of the neighbourhood of residence on the risk of developing the disease under examination. The results confirmed the presence of a social gradient in the disease, with a higher prevalence found among people with lower educational attainment. Heterogeneity in the territorial distribution of the disease was also found, which, however, is not explained solely by individual socioeconomic conditions: the association between the socioeconomic conditions of the neighbourhood of residence and the risk of developing Type 2 Diabetes Mellitus is in fact statistically significant even when controlling for individual variables, suggesting a role of the residential context in shaping risk exposure independently of the concentration of individuals with similar characteristics in the same areas. In line with the reference literature, individual characteristics were found to play a predominant role in determining exposure; nevertheless, the neighbourhood where people live exerts a non-negligible effect on health and needs to be taken into account in the development of policies aimed at countering the incidence of the disease and reducing the social inequalities inherent in its onset. 
While partially able to mitigate disparities in disease management and quality of care, the healthcare system alone clearly cannot remedy the existing social inequalities in Type 2 Diabetes Mellitus, highlighting the need for broader interventions capable of acting on the structures that contribute to generating and perpetuating the social and territorial inequalities associated with the disease.
This PhD dissertation is aimed at studying health inequalities in the Italian city of Milan. Health inequalities can be defined as differences in people's health across the population and between population groups which are attributable to individuals' socioeconomic status, as a consequence of the uneven distribution of the social, economic, cultural, and relational resources that enable people to reach their health potential (Sarti et al., 2011). Moreover, people's health may also be affected by the psychosocial and physical characteristics of the local environment in which they live, so that those living in disadvantaged areas may be at a higher risk of experiencing worse health conditions (Macintyre and Ellaway, 2000; 2003). Moving from the theoretical and conceptual foundations of the Fundamental Causes Theory (Link and Phelan, 1995; Phelan et al., 2010) and the Social Determinants of Health approach (Solar and Irwin, 2010; Wilkinson and Marmot, 2003), this work intends both to provide an accurate mapping of the distribution of health conditions within the Milanese territory – and of its association with individual and contextual socioeconomic status – and to contribute to the debate on the presence of neighbourhood effects on health (Diez-Roux, 2004; Galster, 2012). We thus relied on an interdisciplinary approach, making use of tools and methods from sociology, epidemiology, and geography. A fine-grained study of disease distribution among the neighbourhoods of the city of Milan was missing, and we opted to focus on Type 2 Diabetes Mellitus in light of its typical association with both individual socioeconomic conditions (Agardh et al., 2011) and environmental characteristics (Den Braver et al., 2018). 
Relying on the unprecedented use of administrative healthcare data provided by the Epidemiology Unit of the Health Protection Agency of the Metropolitan City of Milan, linked with data from the most recent Italian census, we performed a multilevel case-control study aimed at assessing the relative impact of individual and neighbourhood socioeconomic status on the risk of developing the disease. Our results confirmed the presence of a social gradient in the distribution of the disease, with increasing prevalence in correspondence with lower educational attainment. Moreover, we found evidence of spatial heterogeneity in the distribution of the disease, which was not entirely explained by individual socioeconomic status: the association between neighbourhood socioeconomic status and the risk of developing Type 2 Diabetes Mellitus remained statistically significant even after accounting for individual-level variables, suggesting a role of the context in shaping risk exposure independently of the clustering of individuals with similar characteristics in the same areas. In line with the existing literature, we found that individual characteristics still play a major role in explaining risk exposure, but also that the context where people live has a non-negligible effect and should be encompassed in the design of policies aimed at tackling the disease and reducing social inequalities at its onset. Although the healthcare system plays a role in mitigating disparities in relation to disease management and quality of care, there is evidence that it alone is not able to effectively tackle existing inequalities, and that broader actions intervening in the structures that contribute to the generation and perpetuation of social and spatial inequalities are needed.
APA, Harvard, Vancouver, ISO, and other styles
15

BOLFI, BIANCA. "Target and non-target LC-MS/MS analysis of cosmetic, food and environmental samples." Doctoral thesis, Università del Piemonte Orientale, 2018. http://hdl.handle.net/11579/97187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

ROMELLI, KATIA. "Discourse, society and mental disorders: deconstructing DSM over time through critical and lacanian discourse analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/83278.

Full text
Abstract:
This dissertation presents an interdisciplinary work aimed at investigating the discursive construction of otherness in the mental health domain in Western culture, and in particular the role in this process of the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the APA. The critical psychology perspective oriented the research and the data collection. The analysis is conducted through a multi-method approach (Critical Discourse Analysis, Semiotic Analysis and Lacanian Discourse Analysis), which integrates several traditions with a particular concern for the ways in which power and ideology are discursively enacted, produced and resisted by text and talk, and shape the concept of mental disorders. Study (1) examines how legitimisation and hegemony have been discursively constructed, legitimised and consolidated over time. Study (2) investigates the scientific debate around the deletion of NPD in order to reconstruct and deconstruct the decisional process through which the boundaries between normality and pathology are drawn. Study (3) investigates how the discourse of the DSM was contested by the discourses of other social actors involved in the mental health domain, in order to analyze the effect on shaping the subjectivity of patients and mental health professionals.
APA, Harvard, Vancouver, ISO, and other styles
17

CAIRO, BEATRICE. "ESTIMATING CARDIORESPIRATORY COUPLING FROM SPONTANEOUS VARIABILITY IN HEALTH AND PATHOLOGY." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/816612.

Full text
Abstract:
Multiple mechanisms are responsible for the cardiorespiratory interactions observed in humans. The action of these mechanisms results in specific heart rate variability (HRV) rhythms and affects the interactions between cardiac and respiratory activity. The four main types of phenomena arising from the interactions between the heart and the respiratory system are: i) respiratory sinus arrhythmia (RSA); ii) cardioventilatory coupling; iii) cardiorespiratory phase synchronization; iv) cardiorespiratory frequency synchronization. The aim of this thesis is the description and quantification of different aspects of cardiorespiratory interactions through a variety of methodologies derived from the literature, adapted and optimized for the typical experimental settings in which HRV and the respiratory signal are commonly acquired. Six analytical methods were exploited for this purpose, assessing transfer entropy (TE), cross-conditional entropy via the normalized corrected cross-conditional entropy (NCCCE), squared coherence (K2), cardioventilatory coupling via the normalized Shannon entropy (NSE) of the time interval between the QRS complex and the onset of the inspiratory or expiratory phase, phase synchronization via a synchronization index (SYNC%), and the pulse-respiration quotient (PRQ). These approaches were used to test the effects of a sympathetic stimulus, namely postural challenges such as head-up tilt (TILT) and active standing (STAND), on cardiorespiratory interactions. 
Gli approcci proposti sono stati testati in tre protocolli: i) atleti amatoriali sottoposti a un allenamento muscolare inspiratorio (IMT) durante clinostatismo supino (REST) e STAND; ii) volontari sani sottoposti a un decondizionamento da allettamento prolungato (HDBR), durante REST e TILT; iii) pazienti affetti da sindrome da tachicardia posturale ortostatica (POTS), durante REST e TILT, in condizione basale e durante approfondimento un anno dopo. I risultati principali della presente tesi di dottorato concernono l’effetto degli stimoli posturali sulle interazioni cardiorespiratorie in soggetti sani e patologici. Infatti, tutti gli indici proposti danno una visione coerente dell’intensità dell’interazione cardiorespiratoria in risposta a uno stimolo ortostatico, in quanto essa diminuisce in tutti i protocolli. Tuttavia, il potere statistico degli indici è differente. TE e K2 appaiono essere particolarmente deboli nell’identificare l’effetto dello stimolo posturale sulle interazioni cardiorespiratorie. NCCCE, NSE, e SYNC% dimostrano una capacità molto maggiore a tale riguardo, mentre PRQ appare troppo intimamente collegata alla frequenza cardiaca, in assenza di cambiamenti significativi della frequenza respiratoria. Per contro, tutti gli indici appaiono deboli nell’identificare gli effetti cronici di IMT e HDBR in una popolazione sana o le conseguenze croniche della gestione clinica in pazienti POTS. La tesi conclude che diversi aspetti delle interazioni cardiorespiratorie possono essere modificati in modo acuto ma gli effetti cronici di un trattamento o intervento a lungo termine sono irrisori sulla magnitudine delle interazioni cardiorespiratorie e/o possono essere confusi con la variabilità intrinseca degli indici. 
Considerazioni sulle differenze metodologiche e sull’efficacia degli indici proposti suggeriscono che un utilizzo simultaneo di molteplici metodi bivariati è vantaggiosa negli studi cardiorespiratori, in quanto diversi aspetti delle interazioni cardiorespiratorie possono essere valutati contemporaneamente. Questa valutazione simultanea può essere effettuata a un costo computazionale trascurabile e in contesti applicativi in cui il solo segnale ECG è disponibile.
Several mechanisms are responsible for the cardiorespiratory interactions observed in humans. The action of these mechanisms results in specific patterns in heart rate variability (HRV) and affects the interaction between cardiac and respiratory activities. The four main types of phenomena resulting from the interactions between heart and respiratory system are: i) respiratory sinus arrhythmia (RSA); ii) cardioventilatory coupling; iii) cardiorespiratory phase synchronization; iv) cardiorespiratory frequency synchronization. The aim of this thesis is to describe and quantify different aspects of cardiorespiratory interactions employing a variety of methods from the literature, adapted and optimized for the usual experimental settings in which HRV and the respiratory signal are commonly acquired. Six analytical methods were exploited for this purpose, assessing transfer entropy (TE), cross-conditional entropy via normalized corrected cross-conditional entropy (NCCCE), squared coherence (K2), cardioventilatory coupling via the normalized Shannon entropy (NSE) of the time interval between the QRS complex and inspiratory or expiratory onsets, phase synchronization via a synchronization index (SYNC%), and the pulse-respiration quotient (PRQ). These approaches were employed to test the effects of a sympathetic challenge, namely postural stimuli such as head-up tilt (TILT) and active standing (STAND), on cardiorespiratory interactions. The proposed approaches were tested on three protocols: i) amateur athletes undergoing inspiratory muscle training (IMT), during supine rest (REST) and STAND; ii) healthy volunteers undergoing prolonged bed rest deconditioning (HDBR), during REST and TILT; iii) patients suffering from postural orthostatic tachycardia syndrome (POTS), during REST and TILT, at baseline and at one-year follow-up. The most important findings of the present doctoral thesis concern the effect of postural stimuli on cardiorespiratory interactions in health and disease.
Indeed, all the proposed indexes gave a coherent view of cardiorespiratory interaction strength in response to the orthostatic challenge, as it decreased in all protocols. However, the statistical power of the indexes differed. TE and K2 appeared particularly weak in detecting the effect of the postural challenge on cardiorespiratory interactions. NCCCE, NSE and SYNC% exhibited a much stronger ability in this regard, while PRQ seemed too closely related to heart rate in the absence of any significant modification of the respiratory rate. Conversely, all indexes appeared weak in detecting the chronic effects of IMT and HDBR in a healthy population and the long-term consequences of clinical management in POTS patients. The thesis concludes that the different aspects of cardiorespiratory interactions can be modified acutely, but the chronic effects of a long-term treatment or intervention on the magnitude of cardiorespiratory interactions are negligible and/or may be confounded with the intrinsic variability of the markers. Considerations about the methodological dissimilarities and the differing effectiveness of the proposed indexes suggest that the simultaneous exploitation of several bivariate methodologies in cardiorespiratory studies is advantageous, as different aspects of cardiorespiratory interactions can be evaluated concurrently. This simultaneous evaluation can be carried out at negligible computational cost, and in applicative contexts where only an ECG signal is available.
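Of the six markers listed above, the pulse-respiration quotient is the simplest to state: the ratio of mean heart rate to mean breathing rate. A minimal sketch (the function name and the synthetic intervals are illustrative, not taken from the thesis):

```python
import numpy as np

def pulse_respiration_quotient(rr_s, breath_s):
    """PRQ: mean heart rate divided by mean breathing rate, both
    derived from inter-event intervals expressed in seconds."""
    heart_rate = 60.0 / np.mean(rr_s)        # beats per minute
    breath_rate = 60.0 / np.mean(breath_s)   # breaths per minute
    return heart_rate / breath_rate

# Synthetic resting recording: 0.8 s RR intervals (75 bpm) and
# 4 s breath cycles (15 breaths/min) give a PRQ of 5.
rr = np.full(100, 0.8)
breaths = np.full(25, 4.0)
print(round(pulse_respiration_quotient(rr, breaths), 2))  # 5.0
```

Note how, as the abstract observes, such a ratio tracks heart rate almost one-to-one whenever the breathing rate stays constant.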
APA, Harvard, Vancouver, ISO, and other styles
18

Bianchi, Michele. "LC-MS applications in Pharmaceutical Analysis, Bioanalysis and Proteomics." Doctoral thesis, Università del Piemonte Orientale, 2019. http://hdl.handle.net/11579/102458.

Full text
Abstract:
Liquid chromatography tandem mass spectrometry (LC-MS) is a powerful tool available to scientists, with multidisciplinary applications. In this thesis, the LC-MS technique played a key role, enabling experiments that made an outstanding contribution to each of the projects undertaken. Different LC-MS instruments were involved: we worked with HPLC and UHPLC systems operating at micro- or nano-flow rates. At the same time, various MS analyzers working in different scan modes (full MS, MS2, MS3, SRM, MRM and DDA) were chosen depending on the expected results. The projects addressed by this thesis were: (1) the identification and characterization of troxerutin degradation products through chemical stability studies; (2) the quantitative evaluation of Adenosine 4'-tetraphosphate (Ap) and Nicotinamide phosphoribosyltransferase (NAMPT) related compounds in B16 cells and in the plasma of healthy male C57BL/6 mice; (3) the quantitation of extracellular serotonin levels in murine neural progenitor cells; (4) the identification, quantification and investigation of phosphorylation as a post-translational modification of NAMPT in B16 cells. For each work, the experimental conditions, the sample handling and preparation, the LC-MS set-up and the data analysis were developed and successfully applied. In conclusion, we demonstrated the versatility, reproducibility and robustness of the LC-MS technique through various chemical and biological applications.
APA, Harvard, Vancouver, ISO, and other styles
19

Gaucci, Andrea. "Necropoli etrusca di Valle Trebba (Spina). Studio di un lotto di tombe del "Dosso E" e indagini archeometriche sulla ceramica a vernice nera dei relativi corredi." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423535.

Full text
Abstract:
The Etruscan necropolis of Valle Trebba (Spina): study of a batch of graves of the "Dosso E" and archaeometric analysis of the black-glazed pottery of the grave goods. The research theme is the study of a topographically coherent batch of 184 graves located in the middle of the "Dosso E" of the necropolis of Valle Trebba. This work is part of a wider project on the necropolis headed by the Chair of Etruscology and Italic Archaeology of the University of Bologna. The primary aim of the research is the study of the graves of this area and their funerary and spatial analysis. A collateral, but not secondary, aim is the study of the black-glazed pottery of the grave goods, supported by archaeometric analysis. This ware is the most represented among the grave goods of all chronological phases of the necropolis and, moreover, requires a more accurate chrono-typological classification. The study of the funerary contexts resulted in the collection of a large amount of archival, graphic and photographic information, found in the archives of the Superintendence for Archaeological Heritage of Emilia Romagna. All this information was merged into a database, completed with the analytical and systematic cataloguing of the grave goods preserved at the National Archaeological Museum of Ferrara. The study of the materials enabled a more precise periodisation of the contexts, whose chronological range lies between the late VIth and the full IIIrd century B.C. The study of the archival information also led to a revision of the planimetry of the excavation area and an unprecedented proposal for reconstructing the ancient landscape of the necropolis. The data obtained were then processed through an analysis of the funerary contexts and, for the first time, of the dynamics of occupation of the area, aimed at establishing the different forms of funerary ritual and identifying burial plots.
As part of the study of the grave goods, the study of the black-glazed pottery was supported by archaeometric analysis. A large group of samples was selected and integrated with samples of production indicators from the inhabited area, granted by the Superintendence for Archaeological Heritage of Emilia Romagna in collaboration with the Universities of Milan and Pavia, and with the chemical and mineralogical data on the Po delta from ISPRA's CARG project. The archaeometric analyses were conducted at the laboratories of the Department of Biology, Geology and Environment of the University of Bologna (BiGeA). The results of the archaeometric analysis were then compared with those of a rich database of samples from the main Etruscan and Roman sites of the north-west of Italy, which helped define the places of production of the imported pottery. The presence of three major production groups was thus confirmed: a fairly internally consistent Attic one dated between the VIth and the IVth century B.C., one from Volterra dated between the IVth and IIIrd century B.C., and a local one, for which it was possible to hypothesise the location of the clay deposits used. The statistical analysis of the results made it possible to identify a sample imitating Attic pottery of the Vth century B.C. attributable to a local production of Spina. The data on the productions obtained from the archaeometric analyses thus formed the basis for the chrono-morphological analysis of the black-glazed pottery, which involved the three major productions identified (Athens, Volterra, Spina). This led to the determination of seriations and chronologies for each ceramic form.
This analysis was in fact conceived as a basis on which to set the study of the black-glazed pottery of the entire necropolis, involving all the productions and chronologies, and to develop a pottery typology for the site and, hopefully, for the entire Etruscan Po Valley, felt to be an indispensable requirement for this geographical area.
APA, Harvard, Vancouver, ISO, and other styles
20

Rossa, Stefano. "Digitalizzazione, Pubblica Amministrazione e Dati Aperti. L’Open Data Analysis pubblica come esempio di Born-Digital Administrative Functions." Doctoral thesis, Università del Piemonte Orientale, 2020. http://hdl.handle.net/11579/115040.

Full text
Abstract:
The doctoral thesis analyses the impact of digitalization on Public Administration. The study focuses, in particular, on the role of open data in the exercise of administrative functions. After reconstructing the concept of the digitalization of Public Administration and outlining a legal definition of it, the investigation shows that digitalization can be understood as an executive and organizational process consisting of two moments. On the one hand, Public Administration acts, and organizes itself to act, in order to exercise pre-existing administrative functions through digital technologies (ICT), so as to implement the "traditional" principles of Administration. On the other hand, Public Administration acts, and organizes itself to act, in order to exercise through ICT new administrative functions that did not exist before, so as to implement the principles of the Open Government strategy (transparency, participation and collaboration). In this second sense of the concept of digitalization, Public Administration finds itself exercising innovative, new administrative functions made concretely possible by the use of digital technology: administrative functions that can be defined as Born-Digital Administrative Functions. Among them, one example is public Open Data Analysis, namely the analysis of the open data held by Public Administrations, which embodies the fusion of the principles of transparency, participation and collaboration.
APA, Harvard, Vancouver, ISO, and other styles
21

Lopresti, Mattia. "Non-destructive X-ray based characterization of materials assisted by multivariate methods of data analysis: from theory to application." Doctoral thesis, Università del Piemonte Orientale, 2022. http://hdl.handle.net/11579/143020.

Full text
Abstract:
X-ray based non-destructive techniques are an increasingly important tool in many fields, ranging from industry to fine arts, from medicine to basic research. Over the last century, the study of the physical phenomena underlying the interaction between X-rays and matter has led to the development of many different techniques suitable for morphological, textural, elemental and compositional analysis. Furthermore, with the development of hardware technology and its automation thanks to IT advances, enormous progress has also been made in data collection, and nowadays it is possible to carry out measurement campaigns collecting many gigabytes of data in a few hours. These already huge data sets grow further when samples are analyzed with a multi-technique approach and/or at in situ conditions, with time, space, temperature and concentration becoming additional variables. In the present work, new data collection and analysis methods are presented, along with applicative studies in which innovative materials were developed and characterized. These materials are currently of high applicative interest and include composites for radiation protection, ultralight magnesium alloys and eutectic mixtures. The new approaches were developed both from an instrumental viewpoint and with regard to the analysis of the data obtained, for which the use and development of multivariate methods was central. In this context, extensive use was made of principal component analysis and design-of-experiments methods. One prominent topic of the study was the development of methods for the in situ analysis of samples evolving in response to different types of gradients.
In fact, while in large facilities such as synchrotrons carrying out analyses under variable conditions is now consolidated practice, on the laboratory scale this type of experiment is still relatively young, and the methods for analyzing data sets from evolving systems leave ample room for development, especially if integrated with multivariate methods.
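Principal component analysis, central to the multivariate toolbox described above, can be sketched generically via an SVD of the mean-centered data matrix. The simulated "in situ" data set below (one dominant component evolving along a ramp) is invented for illustration and is not the thesis's data:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the mean-centered data matrix X
    (rows = observations, e.g. patterns collected along an in situ
    ramp; columns = variables, e.g. detector channels)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # sample coordinates
    loadings = Vt[:n_components]                      # component profiles
    explained = S**2 / np.sum(S**2)                   # variance fractions
    return scores, loadings, explained[:n_components]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                    # position along the ramp
X = np.outer(t, rng.normal(size=200))        # one evolving component
X += 0.01 * rng.normal(size=X.shape)         # measurement noise
scores, loadings, expl = pca(X)
print(expl[0] > 0.9)  # True: the first component captures the evolution
```

In this setting the first score vector recovers the evolution variable itself (up to sign and scale), which is what makes PCA useful for following in situ transformations.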
APA, Harvard, Vancouver, ISO, and other styles
22

ROCCHI, ELISA. "STRUCTURAL PROPERTIES AND FUNCTIONAL TAILORING OF REFINED OR UNREFINED PLANT-BASED MATERIALS OBTAINED BY EXTRACTION OR BIO-TRANSFORMATION OF AGRO-FOOD WASTES." Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/603778.

Full text
Abstract:
The efficient exploitation of resources is a topic of worldwide concern, from both an environmental and an economic point of view. The production of large amounts of agro-food residues is one of the main causes of inefficiency in industrial-scale food production. These are materials of high organic load, in which valuable natural compounds and structures can be identified and extracted in order to valorise wastes via physical, chemical or biotechnological processing. For this reason, intensive investigation into the recovery of materials with improved functional properties has been carried out in recent years. In this framework, this doctoral project aimed to explore the potential of residue-derived ingredients for food formulation. Structural and functional characterization of the proposed matrices was performed from a materials-science point of view, based on quantitative analysis through mathematical models to promote and control changes that can improve functional properties in the food. In the present thesis, four different materials were considered. Cellulose nanocrystals were characterized with a rheo-optical approach in order to provide insight into the relationship between nematic ordering and kinetic arrest. Cell wall materials were studied as carriers of bioactive compounds, with the purpose of tuning the wall matrix organisation to obtain a sustained release. Hemp seed meal fractions were evaluated for their texturing and structuring abilities. The use of bacterial cellulose as a support for the retention of volatile molecules was also investigated, along with its thickening ability.
APA, Harvard, Vancouver, ISO, and other styles
23

STRAZZA, ANNACHIARA. "Advanced Techniques for EMG-based Assessment of Muscular Co-Contraction During Walking." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263516.

Full text
Abstract:
Gait analysis is the systematic study of human locomotion. A central part of gait analysis is represented by surface electromyography (sEMG). Walking control is performed by the lower-limb muscles and, in particular, through lower-limb muscular co-contraction. Muscular co-contraction is the concomitant recruitment of antagonist muscles crossing a joint. In healthy subjects, co-contraction occurs to achieve a homogeneous pressure on the joint surface, preserving articular stability. In pathological individuals, the assessment of co-contraction appears to play a key role in discriminating dysfunctional conditions of the central nervous system. Different methodologies for assessing muscular co-contraction have been developed. A co-contraction index (CI) based on computing the area under the curve of the rectified EMG signals from antagonist muscles provides an overall numerical value that may not be suitable for characterizing dynamic tasks. To overcome this limitation, muscular co-contraction has been assessed through overlapping linear envelopes or through the temporal intervals in which the muscle activations superimpose. Thus, a gold standard for identifying muscle co-contraction is not yet available. The aim of this study is to perform an EMG-based analysis of muscular co-contraction by proposing a new, reliable technique for assessing leg-muscle co-contraction in the time-frequency domain and by providing normative co-contraction data during healthy adult and child walking. The proposed method, based on the wavelet transform (WT), is named the CO-contraction DEtection algorithm (CODE). A further application of WT analysis is the extraction and assessment of fetal heart sounds from the fetal phonocardiography signal. In the present study, reference data on lower-limb muscle co-contraction are also provided by means of Statistical Gait Analysis, a technique able to provide a statistical characterization of gait by averaging spatial-temporal and sEMG-based parameters over hundreds of strides during walking.
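The overlapping-envelope idea mentioned above can be sketched directly: co-contraction is quantified as the area common to two rectified EMG envelopes over a gait cycle, normalised by their mean area. This is a classical formulation, not the CODE algorithm itself, and the Gaussian "bursts" below are synthetic:

```python
import numpy as np

def cocontraction_index(env_a, env_b):
    """Classical co-contraction index: area common to two rectified,
    low-pass-filtered EMG envelopes, normalised by their mean area.
    Returns a value in [0, 1]; 1 means identical activation."""
    common = np.minimum(env_a, env_b).sum()
    total = 0.5 * (env_a.sum() + env_b.sum())
    return common / total

t = np.linspace(0, 1, 500)                     # one normalised gait cycle
agonist = np.exp(-((t - 0.3) / 0.1) ** 2)      # burst in early stance
antagonist = np.exp(-((t - 0.4) / 0.1) ** 2)   # partially overlapping burst
ci = cocontraction_index(agonist, antagonist)
print(0.0 < ci < 1.0)  # True: partial overlap between the two bursts
```

As the abstract notes, a single number of this kind cannot localise co-contraction in time or frequency, which motivates the wavelet-based approach.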
APA, Harvard, Vancouver, ISO, and other styles
24

Favero, Francesco. "Development of two new approaches for NGS data analysis of DNA and RNA molecules and their application in clinical and research fields." Doctoral thesis, Università del Piemonte Orientale, 2019. http://hdl.handle.net/11579/102446.

Full text
Abstract:
This study focuses on two main areas of NGS data analysis: RNA-seq (with specific interest in meta-transcriptomics) and the detection of somatic DNA mutations. We developed a simple and efficient pipeline for the analysis of NGS data derived from gene panels to identify somatic DNA point mutations. In particular, we optimized a somatic variant calling procedure that was tested on simulated datasets and on real data. The performance of our system was compared with currently available variant-calling tools reviewed in the literature. For RNA-seq analysis, we tested and optimized STAble, an algorithm originally developed in our laboratory for the de novo reconstruction of transcripts from non-reference-based RNA-seq data. At the beginning of this study, the first module of STAble, the one that reconstructs a list of transcripts starting from RNA-seq data, had already been written. The aim of this study was, in particular, to add a new module to STAble, developed in collaboration with Cambridge University and based on flux-balance analysis, in order to link the metatranscriptomic analysis to a metabolic approach. This goal was achieved in order to study the metabolic fluxes of microbiota starting from metatranscriptomic data.
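At its core, a somatic point-mutation caller contrasts tumor and matched-normal allele counts at each candidate site. A toy threshold filter gives the flavour; the function name and thresholds are invented for illustration, and the thesis's optimized procedure is necessarily more elaborate:

```python
def call_somatic(tumor_alt, tumor_depth, normal_alt, normal_depth,
                 min_vaf=0.05, max_normal_vaf=0.02, min_depth=50):
    """Toy somatic filter: call a site somatic when the variant allele
    fraction (VAF) is appreciable in the tumor sample but near-absent
    in the matched normal. All thresholds are illustrative."""
    if tumor_depth < min_depth or normal_depth < min_depth:
        return False  # insufficient coverage to decide
    tumor_vaf = tumor_alt / tumor_depth
    normal_vaf = normal_alt / normal_depth
    return tumor_vaf >= min_vaf and normal_vaf <= max_normal_vaf

print(call_somatic(12, 100, 0, 120))  # True: 12% VAF in tumor, absent in normal
print(call_somatic(3, 100, 2, 120))   # False: likely noise or a germline variant
```

Real callers replace the hard thresholds with statistical tests of the two allele-count tables, which is the kind of choice a simulated-dataset benchmark can evaluate.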
APA, Harvard, Vancouver, ISO, and other styles
25

DONELLI, NICOLA. "Bayesian semiparametric analysis of vector Multiplicative Error Models." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/95879.

Full text
Abstract:
The purpose of our research is to generalize the semiparametric Bayesian MEM of Mira and Solgi (2013) to a multivariate setting, namely the vector MEM (vMEM). In the proposed vMEM, given the information set available at time t-1, the positive data vector at time t is modeled as the element-by-element product of a conditional mean vector and a random vector of innovations. In our approach, the innovations form an i.i.d. vector process drawn from a unit-mean multivariate distribution supported on the positive orthant. We model this distribution nonparametrically, using a Dirichlet process mixture of multivariate log-normal distributions. We also propose a general parametric model for the conditional mean that nests most of the specifications used in the vMEM literature. To perform Bayesian inference, we first expand the parameter space, considering a non-identifiable model with a non-unit mean parameter for the innovations. We then apply the slice sampler algorithm described in Kalli et al. (2011) to the parameter-expanded model, and finally post-process this sample to obtain a sample from the posterior of the original model. We conclude by presenting simulations with different specifications of the conditional mean vector and a real-data application.
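The data-generating process described above can be sketched directly. The simulation below uses one common vMEM recursion for the conditional mean and, for simplicity, a parametric unit-mean log-normal innovation, whereas the thesis models the innovation distribution nonparametrically with a Dirichlet process mixture; all parameter values are invented:

```python
import numpy as np

def simulate_vmem(omega, A, B, n, rng, sigma=0.3):
    """Simulate a vector MEM: x_t = mu_t * eps_t (elementwise), with
    mu_t = omega + A x_{t-1} + B mu_{t-1} and i.i.d. positive
    innovations eps_t with E[eps_t] = 1 (log-normal here)."""
    k = len(omega)
    x = np.zeros((n, k))
    mu = omega / (1 - np.diag(A) - np.diag(B))  # rough unconditional start
    for t in range(n):
        # E[eps] = exp(-sigma^2/2 + sigma^2/2) = 1: unit-mean innovations
        eps = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=k)
        x[t] = mu * eps
        mu = omega + A @ x[t] + B @ mu
    return x

rng = np.random.default_rng(1)
omega = np.array([0.05, 0.10])
A = np.array([[0.10, 0.02], [0.01, 0.15]])
B = np.array([[0.80, 0.00], [0.00, 0.75]])
x = simulate_vmem(omega, A, B, 5000, rng)
print(np.all(x > 0))  # True: the process is positive by construction
```

Positivity of omega, A and B keeps the conditional mean (and hence the data) on the positive orthant, which is the defining feature of MEM-type models for volumes, durations and realized volatilities.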
APA, Harvard, Vancouver, ISO, and other styles
26

Ferrari, Rosalba (ORCID:0000-0002-3989-713X). "An elastoplastic finite element formulation for the structural analysis of truss frames with application to a historical iron arch bridge." Doctoral thesis, Università degli studi di Bergamo, 2013. http://hdl.handle.net/10446/28959.

Full text
Abstract:
This doctoral thesis presents a structural analysis of the Paderno d'Adda Bridge, an impressive iron arch viaduct built in 1889 and located in the Lombardia region (Italy). The thesis falls within a research activity started at the University of Bergamo in 2005, which is still ongoing and aims to evaluate the present state of conservation of the bridge. The bridge is currently still in service, and its important position in the transport network will soon raise questions about its future use, with particular attention to the evaluation of its residual performance capacity. To this end, an inelastic structural analysis of the Paderno d'Adda Bridge has been performed, up to failure. This analysis was conducted with an autonomous computer code for 3D frame structures that runs in the MATLAB environment and was developed within the classical framework of Limit Analysis and the Theory of Plasticity. The algorithm applies the "exact", stepwise holonomic step-by-step analysis method. It has proven very capable of tracking the limit structural behaviour of the bridge, reaching convergence with smooth runs up to the true limit load and the corresponding collapse displacements. The main ingredients of its elastoplastic FEM formulation are: beam finite elements; perfectly plastic joints (an extension of classical plastic hinges); piecewise-linear yield domains; "exact" time integration. The following original features have been implemented in the algorithm: treatment of mutual connections by static condensation and Gaussian elimination; determination of the tangent stiffness through Gaussian elimination. These contributions are presented in detail in this thesis.
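The perfectly plastic behaviour underlying the bridge model can be illustrated, in its simplest 1D analogue, by an elastic-predictor/plastic-corrector stress update. The material constants below are generic steel values, not calibrated to the Paderno d'Adda Bridge:

```python
import numpy as np

def stress_update(strain_inc, sigma, eps_p, E=210e3, sigma_y=235.0):
    """Elastic predictor / plastic corrector for a 1D elastic-perfectly-plastic bar.

    E in MPa, sigma_y in MPa; returns the updated stress and plastic strain.
    """
    sigma_trial = sigma + E * strain_inc          # elastic predictor
    if abs(sigma_trial) <= sigma_y:               # still inside the yield domain
        return sigma_trial, eps_p
    d_eps_p = (abs(sigma_trial) - sigma_y) / E    # plastic corrector
    return sigma_y * np.sign(sigma_trial), eps_p + d_eps_p * np.sign(sigma_trial)

# Monotonic loading: stress grows elastically, then caps at the yield stress.
sigma, eps_p = 0.0, 0.0
for _ in range(10):
    sigma, eps_p = stress_update(2e-4, sigma, eps_p)
```

The stress plateau at the yield limit is the 1D counterpart of the perfectly plastic joints that, in the bridge model, extend the classical plastic-hinge concept.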
APA, Harvard, Vancouver, ISO, and other styles
27

Barban, Nicola. "Essays on sequence analysis for life course trajectories." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3421549.

Full text
Abstract:
The thesis is articulated in three chapters in which I explore methodological aspects of sequence analysis for life course studies and present some empirical analyses. In the first chapter, I study the reliability of two holistic methods used in life-course methodology. Using simulated data, I compare the goodness of classification of Latent Class Analysis and Sequence Analysis techniques. I first compare the consistency of the classification obtained via the two techniques using an actual dataset on the life course trajectories of young adults. Then, I adopt a simulation approach to measure the ability of these two methods to correctly classify groups of life course trajectories when specific forms of "random" variability are introduced within pre-specified classes in artificial datasets. In order to do so, I introduce simulation operators that have a life course and/or observational meaning. In the second chapter, I propose a method to study the heterogeneity in life course trajectories. Using a nonparametric approach, I evaluate the association between Optimal Matching distances and a set of categorical variables. Using data from the National Longitudinal Study of Adolescent Health (Add-Health), I study the heterogeneity of early family trajectories in young women. In particular, I investigate whether the OM distances can be partially explained by family characteristics and the geographical context experienced during adolescence. The statistical methodology is a generalization of the analysis of variance (ANOVA) to any metric measure. In the last chapter, I present an application of sequence analysis. Using family transitions from Wave I to Wave IV of Add-Health, I investigate the association between life trajectories and health outcomes at Wave IV. In particular, I explore how differences in the timing, quantum and order of family-formation transitions are connected to self-reported health, depression and risky behaviors in young women. Using lagged-value regression models, I take into account selection and the effect of confounding variables.
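The Optimal Matching distances used in the second chapter are edit distances between state sequences with insertion-deletion and substitution costs. A minimal dynamic-programming sketch on hypothetical monthly family-state sequences (the costs and state alphabet are illustrative, not those of the thesis):

```python
import numpy as np

def om_distance(a, b, indel=1.0, sub=2.0):
    """Optimal Matching distance: edit distance with indel and substitution costs."""
    n, m = len(a), len(b)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1) * indel
    d[0, :] = np.arange(m + 1) * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i, j] = min(d[i - 1, j] + indel,
                          d[i, j - 1] + indel,
                          d[i - 1, j - 1] + cost)
    return d[n, m]

# Hypothetical monthly states: S = single, U = union, M = married.
seqs = ["SSSUUM", "SSUUMM", "SSSSSS", "SSSSSU"]
D = np.array([[om_distance(a, b) for b in seqs] for a in seqs])
```

The resulting symmetric distance matrix D is the input to the distance-based generalization of ANOVA used to test association with categorical covariates.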
APA, Harvard, Vancouver, ISO, and other styles
28

MARRONE, Gianliborio Gaetano. "VALUTAZIONE DELLE PERFORMANCE E ATTIVITÀ DI INDIRIZZO, MONITORAGGIO E CONTROLLO SULLA PROGRAMMAZIONE PUBBLICA REGIONALE DEI FONDI STRUTTURALI. APPROCCI QUANTITATIVI E SIMULAZIONE DINAMICA." Doctoral thesis, Università degli Studi di Palermo, 2014. http://hdl.handle.net/10447/91264.

Full text
Abstract:
Regional public programming is going through a period of severe internal difficulty, tied to the regional implementing body and characterized by very slow implementation of the operational programmes co-financed by the European Commission. In response to economic and social emergencies, the regional administration needs tools capable of effectively communicating actions and results that can positively influence the social and civic life of the regional community. In particular, economic actors and, more generally, all stakeholders base their attitudes and choices on expectations about the dynamics of the economic and social sectors in which they operate. Stakeholder participation in the governance of regional public programming ensures that programmes and their underlying objectives are shared. A performance measurement system, based on the formulation of consistent and shared result indicators, helps to close the communication gap both internally and towards the outside. This can improve the quality of resource-allocation decisions, making it possible to direct, in a sustainable and shared way, more resources to the more efficient sectors of the regional administration. A so-called performance budgeting system enables the steering, monitoring and control of allocated resources and of the results obtained. Through such a system the regional administration, like the state administration before it, will be able to generate consensus around its choices. A statistical approach to the formulation of a performance evaluation system fosters a learning process oriented towards improving the quality of decisions.
APA, Harvard, Vancouver, ISO, and other styles
29

Crotta, M. "PROBABILISTIC MODELLING IN FOOD SAFETY: A SCIENCE-BASED APPROACH FOR POLICY DECISIONS." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/339138.

Full text
Abstract:
This thesis deals with the use of qualitative and quantitative probabilistic models for the management of animal-derived food safety. Four unrelated models are presented: three quantitative and one qualitative. Two of the quantitative models concern the risk posed by pathogens in raw milk: in the first study, a probabilistic approach is proposed for including variability and uncertainty in consumers' habits and in the bacterial pathogenic potential, while the second study demonstrates how overlooking the relationship between storage time and temperature has led to overestimated results in the raw-milk-related models published so far, and provides an equation to address the issue. In the third study, quantitative modelling techniques are used to simulate the dynamics underlying the spread of Campylobacter in broiler flocks and to quantify the potential effects that different on-farm mitigation strategies or management measures have on the microbial load in the intestine of infected birds at the end of the rearing period. In the qualitative study, a general approach for estimating the likelihood of introducing live parasites into aquaculture plants and of commercializing infested product is outlined, using the example of Anisakids in farmed Atlantic salmon.
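The dependence between storage time and temperature emphasized by the second raw-milk study can be illustrated with a toy Monte Carlo: sample the two jointly, feed them into a square-root-type growth model, and estimate the probability of exceeding a growth threshold. All distributions and parameters below are hypothetical placeholders, not the thesis's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Joint sampling of storage temperature and time: milk stored warmer is,
# on average, kept for less time -- the dependence the study argues must
# not be ignored. Both distributions are hypothetical.
temp = rng.uniform(4.0, 25.0, n)                            # deg C
time_h = rng.uniform(1.0, 48.0, n) * (25.0 - temp) / 21.0   # hours

# Square-root (Ratkowsky-type) growth model with hypothetical parameters:
b, t_min = 0.023, -2.0
mu = (b * (temp - t_min)) ** 2            # growth rate, log10 CFU per hour
log_increase = mu * time_h
p_exceed = float(np.mean(log_increase > 3.0))   # P(more than 3-log growth)
```

Sampling time and temperature independently instead would allow long storage at high temperature and inflate p_exceed, which is the overestimation mechanism the study describes.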
APA, Harvard, Vancouver, ISO, and other styles
30

CHIESA, DAVIDE. "Development and experimental validation of a Monte Carlo simulation model for the Triga Mark II reactor." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50064.

Full text
Abstract:
In recent years, many computer codes, based on Monte Carlo methods or deterministic calculations, have been developed to analyze separately different aspects of nuclear reactors. Nuclear reactors are very complex systems, requiring an integrated analysis of all the variables that are intrinsically correlated: neutron fluxes, reaction rates, neutron moderation and absorption, thermal and power distributions, heat generation and transfer, criticality coefficients, fuel burnup, etc. For this reason, one of the main challenges in the analysis of nuclear reactors is the coupling of neutronics and thermal-hydraulics simulation codes, with the purpose of achieving a good modeling and comprehension of the mechanisms that rule the transient phases and the dynamic behavior of the reactor. This is very important to guarantee control of the chain reaction and safe operation of the reactor. In developing simulation tools, benchmark analyses are needed to prove the reliability of the simulations. Experimental measurements conceived to be compared with the results of the simulations are precious and can provide useful information to improve the description of the physical phenomena in the simulation models. My PhD research activity was carried out in this framework, as part of the research project Analysis of Reactor COre (ARCO, promoted by INFN), whose task was the development of modern, flexible and integrated tools for the analysis of nuclear reactors, relying on the experimental data collected at the research reactor TRIGA Mark II, installed at the Applied Nuclear Energy Laboratory (LENA) of the University of Pavia. In this way, once the effectiveness and reliability of these tools in modeling an experimental reactor have been demonstrated, they can be applied to the development of new-generation systems.
In this thesis, I present the complete neutronic characterization of the TRIGA Mark II reactor, which was analyzed in different operating conditions through experimental measurements and the development of a Monte Carlo simulation tool (based on the MCNP code) able to take into account the ever-increasing complexity of the conditions to be simulated. First of all, after giving an overview of some theoretical concepts fundamental to nuclear reactor analysis, I describe a model that reconstructs the first working period of the TRIGA Mark II reactor, in which the "fresh" fuel was not yet heavily contaminated with fission products. In particular, all the geometries and materials are described in the MCNP simulation model in good detail, in order to reconstruct the reactor criticality and all the effects on the neutron distributions. The very good results obtained from the simulations of the reactor at low power (in which the fuel elements can be considered in thermal equilibrium with the water around them) are then used to implement a model for simulating the full-power condition (250 kW), in which the effects arising from the temperature increase in the fuel-moderator must be taken into account. The MCNP simulation model was exploited to evaluate the reactor power distribution, and a dedicated experimental campaign was performed to measure the water temperature within the reactor core. In this way, through a thermal-hydraulic calculation tool, it has been possible to determine the temperature distribution within the fuel elements and to include the description of the thermal effects in the MCNP simulation model. Thereafter, since the neutron flux is a crucial parameter affecting the reaction rates and thus the fuel burnup, its energy and space distributions are analyzed, presenting the results of several neutron activation measurements.
In particular, the neutron flux was first measured in the reactor's irradiation facilities through the neutron activation of many different isotopes. Then, in order to analyze the energy spectra of the flux, I implemented an analysis tool, based on Bayesian statistics, which combines the experimental data from the different activated isotopes to reconstruct a multi-group flux spectrum. Subsequently, the spatial neutron flux distribution within the core was measured by activating several aluminum-cobalt samples in different core positions, thus allowing the determination of the integral and fast flux distributions from the analysis of cobalt and aluminum, respectively. Finally, I present the results of the fuel burnup calculations, performed to simulate the current core configuration after 48 years of operation. The good accuracy reached in the simulation of the neutron fluxes, as confirmed by the experimental measurements, made it possible to evaluate the burnup of each fuel element from the knowledge of the operating hours and of the different positions occupied in the core over the years. In this way, it has been possible to exploit the MCNP simulation model to determine a new optimized core configuration which could ensure, at the same time, a higher reactivity and the use of fewer fuel elements. This configuration was realized in September 2013 and the experimental results confirm the high quality of the work done. The results of this Ph.D. thesis highlight that it is possible to implement analysis tools (ranging from Monte Carlo simulations to fuel burnup time-evolution software, from neutron activation measurements to the Bayesian statistical analysis of flux spectra, and from temperature measurements to thermal-hydraulic models) which can be appropriately exploited to describe and comprehend the complex mechanisms ruling the operation of a nuclear reactor.
In particular, the effectiveness and reliability of these tools were demonstrated in the case of an experimental reactor, where it was possible to collect many precious data for benchmark analyses. Therefore, as developed and implemented here, these tools can be used to analyze other reactors and, possibly, to design and develop new-generation systems, which will make it possible to decrease the production of high-level nuclear waste and to exploit nuclear fuel with improved efficiency.
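The multi-group unfolding problem behind the Bayesian spectrum tool can be posed as inverting measured activation rates R = A φ, where A collects the effective group cross sections of each activated isotope and φ is the group-flux vector. A deterministic toy inversion (the matrix and fluxes are invented numbers; the thesis's Bayesian treatment additionally propagates the measurement uncertainty that a direct solve ignores):

```python
import numpy as np

# Hypothetical 3x3 response matrix: rows = activated isotopes,
# columns = energy groups (thermal, epithermal, fast).
A = np.array([[5.00, 0.20, 0.01],
              [0.50, 3.00, 0.10],
              [0.02, 0.40, 2.00]])
phi_true = np.array([1.0, 0.5, 0.2])   # invented group fluxes
R = A @ phi_true                        # simulated activation rates

# With a square, well-conditioned A a direct solve recovers the spectrum:
phi = np.linalg.solve(A, R)
```

In practice there are more groups than isotopes and the system is ill-posed, which is precisely why a Bayesian (or otherwise regularized) unfolding is needed rather than a direct solve.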
APA, Harvard, Vancouver, ISO, and other styles
31

ORILISI, GIULIA. "Multidisciplinary Approaches in Modern Dentistry: Studies on Dental Structure and Innovative Biomaterials." Doctoral thesis, Università Politecnica delle Marche, 2022. https://hdl.handle.net/11566/301272.

Full text
Abstract:
Human teeth can be considered a very complex and multi-faceted organ, the study of which comprises the elucidation of several aspects, including morphology, macromolecular and chemical composition, mechanical and functional properties, as well as the interaction with different biomaterials. In recent years, the demand for restorative and esthetic treatments has increased significantly, thanks also to a greater sensitivity of people in this field and to the fundamental role of a healthy smile in society. Hence, the need for more innovative materials and clinical techniques has led to the implementation of scientific research in this field. Several analytical techniques, including Scanning Electron Microscopy, X-ray diffraction and Energy Dispersive X-ray Spectroscopy, have been individually applied to elucidate specific issues in dentistry. Unfortunately, all these techniques have the drawback of destroying the sample during the preparation and/or analysis phase, which can influence the final result. Recently, new high-resolution techniques have been introduced, including IR and Raman spectroscopies and Microcomputed Tomography (μ-CT). These analytical tools have the advantage of preserving the sample, since they do not require fixing procedures before measurement. Moreover, last-generation IR and Raman spectrometers are coupled with optical microscopes, making it possible to obtain on the same sample, in a single measurement, reliable and objective information on both its chemical composition and its structure, simplifying data acquisition and reducing the time and cost of analysis. On this basis, the coupling of all these high-resolution techniques in a multidisciplinary approach represents a valuable tool to obtain reliable and comprehensive information at the molecular level on the micro- and macro-structure of dental tissues and on the structural properties of new-generation biomaterials. This could be precious to improve the diagnosis and treatment of several dental pathologies. Furthermore, shedding new light on the design and development of innovative dental materials will allow elucidating their interaction with hard dental tissues, thus suggesting non-invasive, more appropriate and simplified clinical protocols.
APA, Harvard, Vancouver, ISO, and other styles
32

ARMETTI, GIACOMO. "GEOMECHANICAL CHARACTERIZATION OF ROCK MASSES FOR TUNNEL EXCAVATION WITH TBM." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/612954.

Full text
Abstract:
The prediction of Tunnel Boring Machine (TBM) performance and the comprehension of the relationship between the geological, mechanical and structural features of a rock mass and the machine behaviour are fundamental in order to select the most effective tunnel construction methods and to estimate the conditions of excavation in terms of time and economic cost of the infrastructure. The general purpose of this work is to build predictive models usable during the early stages of tunnel planning and construction; for this purpose, several approaches have been considered. Simple and multiple regression analyses, both linear and nonlinear, have been performed in order to obtain empirical equations able to estimate the principal TBM performance indices (ROP and FPI). In addition, Multiple Correspondence Analysis and clustering have been applied. A modified version of the Rock Engineering System has been implemented and used to obtain a predictive model. This study focuses on the exploratory tunnel named "La Maddalena", located in Northern Italy, in the Western Alps, where geological surveys were performed continuously during tunnel construction; this allowed the collection of a large amount of geological, geotechnical and TBM performance data.
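The simple and multiple regression analyses mentioned above amount to fitting TBM performance indices against rock-mass predictors. A least-squares sketch on synthetic data (the predictors, coefficients and noise level are invented, standing in for the La Maddalena dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical rock-mass predictors: UCS (MPa), RQD (%), joint spacing (m).
X = np.column_stack([
    rng.uniform(50.0, 200.0, n),
    rng.uniform(40.0, 100.0, n),
    rng.uniform(0.1, 2.0, n),
])
# Synthetic ROP (m/h): invented coefficients plus noise, standing in for field data.
rop = 5.0 - 0.015 * X[:, 0] + 0.01 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.2, n)

# Ordinary least squares with an intercept column:
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, rop, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((rop - pred) ** 2) / np.sum((rop - rop.mean()) ** 2)
```

The same machinery extends to FPI, and to nonlinear specifications by transforming the predictor columns before the fit.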
APA, Harvard, Vancouver, ISO, and other styles
33

Guiotto, Annamaria. "Development of a gait analysis driven finite element model of the diabetic foot." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423117.

Full text
Abstract:
Diabetic foot is an invalidating complication of diabetes mellitus, a chronic disease encountered increasingly frequently in the aging population. The global prevalence of diabetes is predicted to double by the year 2030, from 2.8% to 4.4%. The prevalence of foot ulceration among patients with diabetes mellitus ranges from 1.3% to 4.8%. Several studies have highlighted that biomechanical factors play a crucial role in the aetiology, treatment and prevention of diabetic foot ulcers. Recent literature on the diabetic foot indicates that mechanical stresses (high plantar pressures and/or high tangential stresses) acting within the soft tissues of the foot can contribute to the formation of neuropathic ulcers. While it is important to study the in-vivo diabetic foot-to-floor interactions during gait, models for simulating deformations and stresses in the diabetic plantar pad are required to predict high-risk areas or to investigate the performance of different insole designs for optimal pressure relief. Finite element (FE) models make it possible to take into account the critical aspects of the diabetic foot, namely the movement, the morphology, the tissue properties and the loads. Several 2-dimensional (2D) and 3-dimensional (3D) foot models have been developed recently to study the biomechanical behavior of the human foot and ankle. However, to the author's knowledge, a geometrically detailed and subject-specific 3D FE model of the diabetic neuropathic foot and ankle has not been reported. Furthermore, state-of-the-art 2D and 3D FE foot models are rarely combined with subject-specific gait analysis data, both in terms of ground reaction forces and kinematics as input parameters and of plantar pressure for validation purposes. The purpose of the study presented here was to simulate the biomechanical behavior of both a healthy and a diabetic neuropathic foot in order to predict the areas of the plantar surface characterized by excessive stresses.
To achieve this, an FE model of the foot was developed by applying the loading and boundary conditions given by subject-specific, integrated and synchronized kinematic-kinetic data acquired during gait analysis trials to a subject-specific FE model (whose geometry was obtained from subject-specific magnetic resonance images, MRI). Thus, an integrated kinematic-kinetic protocol for gait analysis, which evaluates the 3D kinematics and kinetics of foot subsegments, is described here together with two comprehensive FE models of a healthy and of a diabetic neuropathic foot and ankle. In order to establish the feasibility of this approach, a 2D FE model of the hindfoot was first developed, taking into account the bone and plantar pad geometry, the soft-tissue material properties, and the kinematics and kinetics of both a healthy and a diabetic neuropathic foot acquired during three different instants of the stance phase of gait. Once the advantage of such an approach in developing 2D FE foot models was demonstrated, 3D FE models of the whole foot of the same subjects were developed and the simulations were run at several instants of the stance phase of gait. The validation of the FE simulations was assessed by comparing the simulated plantar pressures with the subject-specific experimental ones acquired during gait at the corresponding instants of the stance phase. A secondary aim of the study was to drive the healthy and the diabetic neuropathic FE foot models with the gait analysis data of 10 healthy and 10 diabetic neuropathic subjects, respectively, in order to verify the possibility of extending the results of the subject-specific FE models to a wider population. The validity of this approach was also established by comparing the simulated plantar pressures with the subject-specific experimental ones acquired during gait at different instants of the stance phase.
Comparison was also made between the errors evaluated when the FE simulations were run with the subject-specific geometry (obtained from MRI data) and the errors estimated when the FE simulations were run with the data of the 20 subjects.
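The validation step described above compares simulated and measured plantar pressures at matched instants of stance. A minimal sketch of such a comparison with hypothetical peak-pressure values (kPa), not the thesis's data:

```python
import numpy as np

# Hypothetical peak plantar pressures (kPa) at four matched stance instants:
p_sim = np.array([210.0, 345.0, 480.0, 390.0])   # FE model output
p_exp = np.array([225.0, 330.0, 455.0, 410.0])   # pressure-plate measurement

rmse = float(np.sqrt(np.mean((p_sim - p_exp) ** 2)))
rel_err = rmse / float(p_exp.mean())             # RMSE relative to mean pressure
```

Reporting the error relative to the mean measured pressure lets the subject-specific runs be compared with the 20-subject runs on a common scale.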
APA, Harvard, Vancouver, ISO, and other styles
34

FERREIRA, Thyago Araujo. "Resolução de problemas de probabilidade no ensino médio: uma análise de erros em provas da OBMEP no Maranhão." Universidade Federal do Maranhão, 2017. https://tedebc.ufma.br/jspui/handle/tede/tede/2009.

Full text
Abstract:
CAPES
The main aim of this work is to identify the difficulties and the most common errors made by high school students when solving Probability problems. It analyzes errors in the second stage of the OBMEP, Level 3, in the years 2015 and 2016 in the state of Maranhão. The work is organized into seven chapters covering: a short history of the study and teaching of Probability; the criteria used to classify the errors found in the solutions of the probability problems in the OBMEP Level 3 tests of 2015 and 2016 in Maranhão; and the analysis of the most frequent error types in the solutions of probability questions. The main theoretical and methodological framework is the work of Cury (2008, 2009 and 2010) on the analysis of errors in students' written records, applied to the probability questions of the OBMEP (Brazilian Mathematics Olympiad of Public Schools) tests of 2015 and 2016, following the steps of floating reading of all the material, unitarization and categorization of the answers, and treatment of the results.
The main objective of this work is to identify the difficulties and the main errors made by high school students in solving Probability problems, through the analysis and classification of errors in the second-stage OBMEP tests, Level 3, in the years 2015 and 2016 in the state of Maranhão. To this end, it is composed of seven chapters, which cover: a brief history of the development of the study of probability and of its teaching; the criteria established for classifying the errors to be analyzed in the solutions of the probability problems in the OBMEP Level 3 tests of 2015 and 2016 in the state of Maranhão; and the analysis of the most frequent error types in the solutions of probability questions. The main theoretical and methodological support is the work of Cury (2008, 2009 and 2010), which grounds the analysis of errors in the students' written records, based on the analysis of questions addressing probability content in the OBMEP (Brazilian Mathematics Olympiad of Public Schools) tests of 2015 and 2016, following the steps of floating reading of all the material, unitarization and categorization of the answers, and treatment of the results.
APA, Harvard, Vancouver, ISO, and other styles
35

Marchetti, A. "Automatic classification of galaxy spectra in large redshift surveys." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/243304.

Full text
Abstract:
In my thesis work I make use of a Principal Component Analysis to classify galaxy spectra in large redshift surveys. In particular, I apply this classification to the first public data release spectra of galaxies in the range 0.4
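The PCA classification idea can be sketched in a few lines of numpy: mean-subtract the spectra, extract the eigenspectra via SVD, and project each galaxy onto the leading components. The wavelength grid, mock spectral components and noise level below are invented stand-ins for the survey spectra, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock rest-frame galaxy spectra: 100 objects x 50 wavelength bins,
# built from two underlying components plus noise.
wave = np.linspace(3500.0, 7500.0, 50)
comp1 = np.exp(-0.5 * ((wave - 4000.0) / 300.0) ** 2)   # break-like feature
comp2 = np.exp(-0.5 * ((wave - 6563.0) / 150.0) ** 2)   # emission-like feature
coeffs = rng.normal(size=(100, 2))
spectra = coeffs @ np.vstack([comp1, comp2]) + 0.01 * rng.normal(size=(100, 50))

# PCA: mean-subtract, then SVD; the rows of Vt are the eigenspectra.
mean_spec = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
projections = (spectra - mean_spec) @ Vt[:2].T   # (PC1, PC2) per galaxy
explained = S**2 / np.sum(S**2)                  # variance fraction per PC
```

Galaxies can then be classified by their position (for instance, the angle) in the PC1-PC2 plane, which is the essence of the spectral classification applied in the survey.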
APA, Harvard, Vancouver, ISO, and other styles
36

Cavalli, N. "INFLUENCE OF IMPLANT NUMBER, IMPLANT LENGTH AND CROWN HEIGHT ON BONE STRESS DISTRIBUTION FOR THREE-UNIT BRIDGES IN THE POSTERIOR MANDIBLE: A 3D FINITE ELEMENT ANALYSIS." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/333063.

Full text
Abstract:
Background and aim: Reduced alveolar bone height due to ridge resorption represents a major limitation to the use of dental implants, particularly in the posterior sectors of the jaws: it increases the risk of invading, and possibly damaging, anatomical structures such as the inferior alveolar nerve. The placement of short dental implants has been proposed as an alternative to surgical bone augmentation procedures, and recent studies indicate that short implants can achieve survival and success rates similar to those of conventional implants in the short and medium term. However, doubts about their biomechanical performance have been raised, because higher crowns are sometimes necessary to compensate for the bone resorption, leading to a less favourable crown-to-implant ratio.
Recently, image-based approaches combined with Finite Element Analysis (FEA) have allowed effective stress-strain investigations in biological systems, and in particular of stress distribution in bone. The objective of this study was to evaluate, by means of finite element analysis, the stress transmitted to the surrounding bone by different implant numbers, implant lengths and crown configurations in a three-unit bridge. Material and methods: The 3D geometry of the edentulous mandible was reconstructed from computerized tomography (CT) scans. The symmetry of the structure permitted the reconstruction of half of the mandible.
Bone material mechanical properties were assigned to each tetrahedral element based on its Grey Value. The meshes of the implants (Astra Tech AB OsseoSpeedTM TX, Dentsply Implants) were placed in the second premolar, first molar and second molar positions for the three-implant configurations and in the second premolar and first molar positions for the two-implant configurations. A superstructure representing a porcelain three-unit bridge was built using beam elements for each configuration. Six implant configurations were compared: LS2) two 4 mm diameter x 11 mm long implants with 8 mm long crowns; LS3) three 4 mm diameter x 11 mm long implants with 8 mm long crowns; SS2) two 4 mm diameter x 6 mm long implants with 8 mm long crowns; SS3) three 4 mm diameter x 6 mm long implants with 8 mm long crowns; SL2) two 4 mm diameter x 6 mm long implants with 13 mm long crowns; SL3) three 4 mm diameter x 6 mm long implants with 13 mm long crowns. A 200 N load, axial and 45° oblique, was applied to each crown. For each configuration the effect of both loading scenarios was evaluated in terms of the state of stress at the bone-implant interface (Von Mises stress, maximum and minimum principal stresses). Results: Under oblique load the stress distribution is more concentrated around the coronal part of the implant and is several times higher than under axial load; in particular, the tension represented by the maximum principal stress is 15 to 35 times higher. In all configurations the stress was concentrated in the cervical area of the peri-implant bone. Under axial load, the highest values of peri-implant stress were found in the SS2 and SL2 configurations, and the lowest values in the LS2 and LS3 configurations. Under oblique load, the maximum peri-implant stress was found in the SL2 configuration, while the minimum was found in the LS3 configuration.
The stress parameter values in the SS configurations were on average about 40% higher than in the LS configurations, and under tilted load the stress values in the SL configurations were on average about 42% higher than in the SS configurations.
On average, the two-implant configurations underwent about 50% more stress than the corresponding three-implant configurations. Conclusions: Crown height, implant number and implant length all appear to influence peri-implant bone stress; however, an increase in crown height seems to have a greater effect than a reduction in implant length.
Even though the stress observed in all configurations was within a physiological range, a three-unit bridge with 13 mm long crowns supported by two implants may be biomechanically hazardous in the presence of horizontal forces, and the addition of another short implant, or an increase in bone volume, may be suggested to dissipate the stress at the bone-implant interface. In conclusion, the use of short dental implants to support a three-unit bridge in the posterior mandible can be considered a potential alternative to standard-length implants, but crown height and lateral forces have to be carefully analyzed in every patient.
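The stress measures evaluated at the bone-implant interface (Von Mises stress and the maximum and minimum principal stresses) follow directly from the Cauchy stress tensor at each element. A small numerical sketch; the stress state below is invented, not taken from the thesis:

```python
import numpy as np

def stress_measures(sigma):
    """Principal stresses (sorted, maximum first) and Von Mises stress
    for a symmetric 3x3 Cauchy stress tensor (units: MPa)."""
    sigma = np.asarray(sigma, dtype=float)
    principals = np.sort(np.linalg.eigvalsh(sigma))[::-1]  # s1 >= s2 >= s3
    s1, s2, s3 = principals
    von_mises = np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))
    return principals, von_mises

# hypothetical stress state at a peri-implant bone element (MPa)
sigma = np.array([[12.0,  3.0,  0.0],
                  [ 3.0,  5.0,  1.0],
                  [ 0.0,  1.0, -2.0]])
principals, vm = stress_measures(sigma)
```

In a FEA post-processing step these quantities are computed element by element, and their peak values in the cervical peri-implant bone are what the configurations above are compared on.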
APA, Harvard, Vancouver, ISO, and other styles
37

ZINNANTI, Cinzia. "DEALING WITH RISK IN AGRICULTURE: A CROP LEVEL ANALYSIS AND MANAGEMENT PROPOSAL FOR ITALIAN FARMS." Doctoral thesis, Università degli Studi di Palermo, 2020. http://hdl.handle.net/10447/395466.

Full text
Abstract:
Risk management plays a critical role in agriculture, which is particularly exposed to multiple and heterogeneous risk factors. In addition to the traditional basic risks that characterize any business venture, agriculture faces external factors that are generally difficult to control and have a strong impact on farm profitability. These are firstly environmental (pests and diseases) and climatic conditions, which affect the quantity and quality of agricultural production, but also the structural constraints of the agricultural market, which is characterised by a high degree of supply rigidity, price volatility and inelasticity of demand. This leads to the need to implement risk management tools, some of them aimed at income stabilization (already in place for many years in other countries, e.g. the USA and Canada) and requiring the active participation of the farmer on the one hand and of the institutional system on the other. In order to suggest risk management solutions to Italian farmers, this thesis simulates the feasibility of a risk management tool introduced in the EU with Regulation (EU) No 2017/2393 but not yet implemented: the sector-specific Income Stabilization Tool. This tool is based on a public-private partnership and is managed by a mutual fund steered by associated farmers, who pay an annual contribution to become eligible for indemnities when they experience a severe income drop. Unlike other tools that are limited to covering specific types of risk, it makes it possible to look at the farmer's entire income risk, considering the correlation among several sources of risk (particularly between production levels and prices). This thesis first provides a theoretical background on risk analysis and risk management in agriculture (concepts, classification, literature and methodology).
Second, the role of policies within the European Union framework, and in Italy in particular, is examined by analysing the normative framework and the reference context of insurance instruments in agriculture. Subsequently, since assessing farm profitability and economic risk is important to support farmers' decisions about investments and about whether or not to join insurance instruments, an explorative analysis of the profitability and riskiness of a perennial crop in Italy, hazelnut, was carried out. Finally, the implementation of a sector-specific Income Stabilization Tool for the crop investigated is proposed, following this structure: - assessment of the profitability and risk of hazelnut production in the four main production areas in Italy; - assessment of the most important risk-generating parameters; - simulation of the feasibility of an income risk management tool that allows supply and demand to interact, and of its impact on the level and riskiness of farm income; - assessment of the geographical scale at which the Income Stabilization Tool scheme could be implemented. Using data on hazelnut-producing farms from the Italian Farm Accountancy Data Network, a downside risk analysis showed that riskiness is distributed unevenly across the country, with yield risk markedly affecting farmers' income level and economic risk. The simulation implemented in this study demonstrates that the tool could substantially reduce the risk faced by hazelnut farmers in Italy. Additional public support is essential for joining the tool. In addition, in view of the differences within the Italian territory, the farmers' payments should be differentiated based on the requisites and the specific climatic and environmental characteristics of each region. Concurrently, recourse to a national mutual fund would make it possible to benefit from the principle of risk pooling.
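The indemnity logic of the sector-specific Income Stabilization Tool can be sketched in a few lines. The 20% income-drop trigger and the 70% maximum compensation are the parameters set out in Regulation (EU) 2017/2393; the farm figures are invented, and the simulation in the thesis is of course far richer:

```python
def ist_indemnity(income, reference_income, trigger=0.20, coverage=0.70):
    """Simplified sector-specific Income Stabilization Tool: an indemnity
    is paid when income falls more than `trigger` (20% under Regulation
    (EU) 2017/2393) below the reference income (e.g. a multi-year
    average), compensating at most `coverage` (70%) of the loss."""
    loss = reference_income - income
    if loss <= trigger * reference_income:
        return 0.0
    return coverage * loss

# hypothetical hazelnut farm: reference income 50,000 EUR, bad year 30,000 EUR
indemnity = ist_indemnity(30_000, 50_000)
```

Running such a rule over simulated income paths for many farms is essentially what allows the thesis to quantify how much the tool reduces income risk and what contribution level keeps the mutual fund viable.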
APA, Harvard, Vancouver, ISO, and other styles
38

IANNACCONE, Alice. "PERFORMANCE ANALYSIS OF BEACH HANDBALL." Doctoral thesis, Università degli studi di Cassino, 2021. http://hdl.handle.net/11580/84783.

Full text
Abstract:
The aim of this project was the evaluation of the performance of beach handball players. Beach handball is considered a high-intensity, mixed-metabolism sport where performance depends on the combination of high-intensity physical patterns at the player and team levels. Individual performance depends on specific movements involving speed and power, rapid accelerations and decelerations, and changes of direction, whereas overall team performance depends on technical and tactical team performance indicators, such as passing, catching, throwing and blocking during offensive and defensive situations. The first part of this dissertation examined, by means of notational analysis, the differences between male and female players in the shooting actions occurring during the semifinal and final phases of a European Beach Handball Tournament. Nine matches were analyzed for the study. Overall, 559 shots (males: 353; females: 206) were observed; 54.7±9.4% were successful and 19.9±7.1% were saved by goalkeepers. No difference between genders emerged. The results showed that notational analysis is a valuable tool for examining many aspects of shooting actions, such as the shooting technique, the shooting area, and the area of the goal where the shots end, in relation to the players' and goalkeepers' efficiency. To improve players' performance, to understand their level of adaptation to a given training program, and to minimize the risk of non-functional overreaching, monitoring the players' load plays a fundamental role. For this purpose, the second part of this dissertation focused on the investigation of the internal load (assessed using objective and subjective methods) and the external load experienced by youth male beach handball players during training sessions and competitions.
Thirteen players of the Lithuanian U17 beach handball team were monitored across 2 training camps (14 training sessions) and during the Younger Age Category 17 European Beach Handball tournament, where the players played 7 matches. External load was measured with inertial measurement unit devices, while internal load was objectively recorded by means of heart rate (HR) monitors. After each session, the HR data were exported and the individual workload was calculated according to the summated heart rate zones (SHRZ) method. Furthermore, internal load was subjectively assessed by means of the Rating of Perceived Exertion (RPE) scale, asking each player "How hard was your training/match?" 30 min after the completion of the session. Subsequently, the session RPE (sRPE) workload was calculated by multiplying the individual RPE score by the duration of each session. The subjective perception of internal load experienced by youth beach handball players increases with the objective internal load; however, the increase varies between players and sessions. In addition, the external load variables (PlayerLoadTM, accelerations, low-intensity events, medium-intensity events, high-intensity events, jumps, and changes of direction) showed a strong correlation with sRPE and SHRZ. The variable with the highest relationship and predictive capability, with both SHRZ and sRPE, was PlayerLoadTM. Further investigation should examine potential differences between players of different ages and explore the influence of other contextual factors, such as the match outcome or individual player fatigue.
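The two internal-load measures used here have simple arithmetic definitions: sRPE is the RPE score multiplied by the session duration, and the SHRZ method weights the minutes spent in each heart rate zone by the zone number. A sketch with invented session data; the five-zone weighting follows the usual SHRZ (Edwards) scheme, and the specific zone boundaries are an assumption, not taken from the thesis:

```python
def session_rpe(rpe, duration_min):
    """Session-RPE workload (arbitrary units): CR-10 rating times session
    duration in minutes, with the RPE collected 30 min post-session."""
    return rpe * duration_min

def summated_hr_zones(minutes_in_zone):
    """Summated-heart-rate-zones (SHRZ) workload: minutes spent in each
    of five HR intensity zones, weighted by the zone number (1 to 5)."""
    return sum(w * m for w, m in zip(range(1, 6), minutes_in_zone))

# hypothetical session: RPE 7, 60 min, minutes per HR zone [5, 10, 20, 15, 10]
srpe = session_rpe(7, 60)
shrz = summated_hr_zones([5, 10, 20, 15, 10])
```

Correlating these two workloads per player and session, and both against the IMU-derived external load variables, reproduces the kind of analysis described above.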
APA, Harvard, Vancouver, ISO, and other styles
39

VITA, LEONARDO. "Sviluppo e implementazione di formulazioni per l’analisi dinamica di sistemi multibody." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2005. http://hdl.handle.net/2108/187.

Full text
Abstract:
This thesis concludes the three-year doctoral course in Design of Mechanical Systems at the Department of Mechanical Engineering of the University of Rome Tor Vergata. During the research activity, several methodologies for the analysis of mechanical systems were examined. This study introduced me to numerical strategies useful for modelling mechanical systems and simulating their dynamics. The so-called multibody approach is probably the best known and, through various dynamic formulations, allows the equations of motion of a large class of mechanical systems to be derived systematically. One of the problems that often arises in this field is the solution of the resulting differential-algebraic system. I had already encountered this kind of problem in my master's thesis, which consisted in the development and implementation of a computer code for 3D dynamic analysis with a multibody approach. A serious obstacle experienced during the numerical integration was represented by redundant constraints or by idle degrees of freedom, which are often present in mechanical systems. In this case it becomes necessary to solve the equations of dynamics while maintaining high accuracy in the satisfaction of the constraints. In the first part of this work, after recalling the theoretical foundations of the constraint equation method due to E.J. Haug, several formulations are proposed and numerically tested (constraint orthogonalization, minimal set of coordinates, the Udwadia-Kalaba formulation). In addition, an original solution strategy suitable for the simulation of systems with redundant or variable constraints is proposed and illustrated.
This new solution strategy, together with the others, was implemented in a computer code for the dynamic analysis of spatial mechanical systems with a multibody approach, developed and extended during these three years of doctoral research. Regarding the Udwadia-Kalaba formulation, which is founded on Gauss's principle of least constraint, the influence of the method used to compute the pseudoinverse matrix on computational efficiency (computation time, accuracy, reliability) was investigated in particular. With a view to modelling reality ever more faithfully, kinematic pairs with friction were also implemented, with particular attention to the modelling of journal bearings under hydrodynamic lubrication. The second part of this thesis, after recalling the principles of dual algebra, proposes its application to the modelling of geometric and dimensional tolerances, or mounting errors, in mechanical systems. The approach described was applied to the study of the efficiency of a Cardan joint, and the numerical results were validated against the experimental results obtained on the joint test rig set up in the Laboratory of the Department of Mechanical Engineering of this University. Finally, a methodology for the statistical modelling of geometric tolerances in spatial mechanisms was devised; in this case too, the application concerned a Cardan joint. In particular, the influence of tolerances, generated randomly according to predefined distributions, on the kinematic operating parameters of the mechanism was studied.
This thesis concludes the three-year course of doctoral studies in Design of Mechanical Systems at the Department of Mechanical Engineering of the University of Rome Tor Vergata. The main research activity was the analysis of numerical strategies for the resolution of the dynamic equations of multibody systems with redundant constraints. In particular, the multibody approach suggested by Haug has been followed for the deduction of the equations of motion. This leads to the resolution of a differential-algebraic (DAE) system of non-linear equations. A general-purpose multibody code based on this approach has been implemented. In the analysis of three-dimensional mechanical systems one often has to deal with redundant constraints. For this reason, the first half of this thesis concerns numerical strategies for the integration of DAE systems, such as the coordinate partitioning method, the orthogonalization method and the Udwadia-Kalaba formulation; for each of them a numerical example is presented. Moreover, a new formulation of the equations of motion based on the QTZ decomposition of the Jacobian matrix of the constraint equations is discussed. This formulation was implemented and is one of the integration strategies available in the multibody code developed. In order to model mechanical systems as realistically as possible, kinematic joints with clearance and friction were implemented, with particular attention to hydrodynamic journal bearings for spatial linkages. The second half of the thesis deals with the application of dual algebra to the study of mechanical systems, in particular for modelling clearances and dimensional tolerances in joints. The effects of mounting errors on the mechanical efficiency of a Cardan joint are reported as a numerical example.
The results obtained were validated by means of an experimental test rig set up in the Laboratory of the Department of Mechanical Engineering of the University of Rome Tor Vergata. Moreover, a statistical analysis of tolerances based on Monte Carlo simulation was developed: the effects on angular position, velocity and acceleration in the kinematic pairs of a Cardan joint were analyzed, and the statistical distribution, standard deviation and mean value of each of these output parameters were reported.
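The Udwadia-Kalaba formulation mentioned above gives the constrained accelerations in closed form through a Moore-Penrose pseudoinverse: for M q̈ = Q subject to A q̈ = b, q̈ = a + M^(-1/2) (A M^(-1/2))⁺ (b − A a), with a = M⁻¹Q the unconstrained acceleration. A minimal numpy sketch; the falling-particle example is invented for illustration:

```python
import numpy as np

def uk_acceleration(M, Q, A, b):
    """Constrained accelerations via the Udwadia-Kalaba equation.
    M: SPD mass matrix; Q: applied generalized forces;
    A, b: constraint equations at acceleration level, A qdd = b."""
    # symmetric square root of the SPD mass matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    M_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    a = np.linalg.solve(M, Q)            # unconstrained acceleration
    B = A @ M_inv_sqrt
    return a + M_inv_sqrt @ np.linalg.pinv(B) @ (b - A @ a)

# unit-mass particle in 2D under gravity, constrained to move horizontally
qdd = uk_acceleration(np.eye(2),
                      np.array([0.0, -9.81]),   # gravity
                      np.array([[0.0, 1.0]]),   # constraint: qdd_y = 0
                      np.array([0.0]))
```

Because the pseudoinverse handles rank-deficient constraint Jacobians, this formulation works unchanged with redundant constraints, which is precisely why the thesis investigates how the pseudoinverse is computed.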
APA, Harvard, Vancouver, ISO, and other styles
40

MURGIA, GIANLUCA. "La definizione di nuovi modelli di leadership per il conseguimento del vantaggio competitivo: un'analisi strutturale degli organi di governo delle imprese italiane basta sull'integrazione della upper echelons theory e della resource dependence theory." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/940.

Full text
Abstract:
This work analyzes the leadership model of Italian companies through an approach that integrates the upper echelons theory and the resource dependence theory. The integration of these two theories makes it possible to understand the role played by corporate leadership in managing the resources of the firm's internal and external environment, and hence the contribution it can make to shaping the firm's competitive advantage. Corporate leadership is exercised through the firm's governing bodies, and in particular through the Board of Directors (BoD), whose central role derives from the fact that it participates in the formulation of corporate strategies, continuously monitors the management of the firm, and represents an essential instrument for accessing the resources present in the environment. The way in which the BoD performs these functions depends on the characteristics of its cognitive and relational capital, that is, on the set of knowledge, values and relationships that its members, individually and collectively, bring with them. However, the effectiveness of the BoD, and therefore the impact this body can have on corporate strategies and performance, also depends on other factors attributable to the firm's internal and external environment. In particular, account must be taken of the corporate governance model adopted by the firm, which determines the relationships among stakeholders and the balance of power within the organization. For this reason, a broad description is given of the corporate governance models adopted by Italian companies before and after the legislative reforms passed from 1998 onwards, highlighting their characteristics in terms of ownership structure, the role of the different types of shareholders, and the effects these factors have on corporate governing bodies.
Within the corporate governance models of Italian companies, the role of the BoD is focused above all on the functions of formulating corporate strategy and accessing external resources, but the effectiveness with which these functions are performed depends strongly on the characteristics of the BoD's cognitive and relational capital. To evaluate this capital, an empirical analysis was carried out using a demographic approach, based on the study of strictly demographic variables concerning the individual directors and the Board as a whole. Each of these variables can have a different impact on the strategies implemented by the firm and on its performance, as witnessed by the vast literature on the subject, linked above all to the upper echelons theory and the resource dependence theory. In this work, in line with some previous studies, attention is focused on the impact these variables can have in two populations of companies belonging to a stable sector and a turbulent one, namely the food and IT sectors respectively; in this way it was possible to highlight the close link between the BoD's cognitive and relational capital and the business environment in which the firm operates. Consequently, an accurate structural analysis of the composition of this body was carried out, mapping some demographic variables that define the degree of homogeneity/heterogeneity of its cognitive capital, while the variables linked to relational capital were measured using techniques from Social Network Analysis.
The companies in the two sectors were then compared using non-parametric statistical techniques, which highlighted significant differences between the two populations; indeed, IT companies show a much more heterogeneous cognitive capital and a more developed relational capital than food companies. Finally, using cluster analysis, the companies were classified on the basis of the degree of homogeneity/heterogeneity of the cognitive capital of their BoD, and positive relationships emerged with the innovation strategies implemented by the companies and with their performance, measured in terms of ROE and ROI. Through this study it was possible to identify whether a given BoD configuration can have positive or negative effects on certain corporate choices and results, although this does not imply that the BoD's cognitive and relational capital is the main cause of firm behaviour, since the latter depends on a multiplicity of factors, internal and external to the firm.
This work analyzes the leadership model of Italian firms through an approach that combines the upper echelons theory and the resource dependence theory. The integration of these theories makes it possible to acknowledge the role played by executive leadership in managing the resources of the firm's internal and external environment and, consequently, its contribution to the development of the firm's competitive advantage. Executive leadership is exercised through the firm's government organs, and specifically through the Board of Directors (BoD), whose central role is due to the fact that it participates in the definition of the firm's strategies, continuously monitors the firm's management, and represents an essential instrument for access to environmental resources. The BoD can deal with these functions in accordance with the characteristics of its cognitive and relational capital, that is, the whole of the knowledge, values and relationships developed, singularly and collectively, by its members. Nevertheless, the BoD's effectiveness, and so its impact on the firm's strategies and performance, also depends on other factors concerning the firm's internal and external environment. In particular, it is necessary to understand the corporate governance model adopted by the firm, which determines the relationships among stakeholders and the power equilibrium inside the organization. For this reason, I have given a wide description of the corporate governance models adopted by Italian firms before and after the legislative reforms passed since 1998, highlighting their characteristics in terms of ownership structure, the role played by the different typologies of shareholders, and the effects on the firm's government organs.
Inside the corporate governance models of Italian firms, the BoD's role is focused above all on the functions of developing the firm's strategies and accessing environmental resources, but the effectiveness in dealing with these functions strongly depends on the characteristics of the BoD's cognitive and relational capital. In order to evaluate this capital, an empirical analysis was carried out using a demographic approach, based on the study of certain strictly demographic variables concerning the individual directors and the whole BoD. Each of these variables can have a different impact on the strategies and performance of the firm, as indicated by the wide literature linked to the upper echelons theory and to the resource dependence theory. In this work, coherently with some previous studies, I have focused my attention on the impact that these variables can have on two populations of firms, belonging to a stable and to a turbulent sector, respectively the food and the IT sector; in this way it is possible to highlight the strict relationship between the BoD's cognitive and relational capital and the environment where the firm operates. I have thus carried out an accurate structural analysis of the BoD's composition, mapping the demographic variables that determine the degree of homogeneity/heterogeneity of the cognitive capital, while the variables linked to the relational capital were measured through the use of Social Network Analysis techniques. I then compared the firms belonging to the two sectors through the use of non-parametric statistical techniques, which highlighted significant differences between the two populations: in fact, the IT firms show a more heterogeneous cognitive capital and a more developed relational capital than the food ones.
Finally, through the use of a cluster analysis, I have classified the firms of each sector according to the degree of homogeneity/heterogeneity of their BoD's cognitive capital, and I have highlighted some positive relationships with the innovation strategies carried out by the firms and with their performance, measured in terms of ROE and ROI. Through this work, I have identified whether a specific BoD composition can have a positive or negative effect on certain firm strategies and results, but this does not imply that cognitive and relational capital is the principal cause of the firm's behaviour, which is determined by several factors inherent to the firm's external and internal environment.
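The degree of homogeneity/heterogeneity of a categorical demographic variable (educational background, age band, and so on) is commonly summarized with Blau's index, 1 − Σ pᵢ². The thesis does not name its exact measure, so this is an illustrative choice, and the board compositions below are invented:

```python
from collections import Counter

def blau_index(categories):
    """Blau's heterogeneity index 1 - sum(p_i^2): 0 for a fully
    homogeneous group, growing toward 1 as diversity increases."""
    n = len(categories)
    counts = Counter(categories)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# hypothetical boards: educational background of the directors
homogeneous_board = ["economics"] * 6
mixed_board = ["economics", "economics", "engineering",
               "law", "engineering", "science"]
h1 = blau_index(homogeneous_board)
h2 = blau_index(mixed_board)
```

Scoring each board on several such variables, together with Social Network Analysis measures of interlocks, yields the kind of profile that the cluster analysis then groups.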
APA, Harvard, Vancouver, ISO, and other styles
41

Redoglio, D. A. "DEVELOPMENT OF AN INNOVATIVE LIBS SYSTEM FOR ON-LINE QUANTITATIVE ANALYSIS OF RAW COAL." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/340604.

Full text
Abstract:
Every day in the world 64’125 GWh of electricity are produced and consumed: to satisfy this huge demand, many different power sources are employed, and in the last years the share of renewable energies has kept on increasing. However, coal is still the main energy source and, owing to the huge growth of China's energy production, the amount of electrical energy produced this way is increasing: every day 12.1 million tons of coal are burned to fulfill 40% of the global electrical energy demand. In Italy, even if only 16.7% of the total power generation is obtained from coal combustion, coal is still the third power source, after natural gas and hydroelectric, and 17.5 million tons of it have to be imported every year. Coal has an extremely variable composition, and heat power, pollutant generation and, consequently, cost strongly depend on that composition; for these reasons whoever deals in coal is always on the lookout for instruments that allow faster and more accurate analysis of coal composition. Devices able to perform on-line quantitative analysis of coal chemical composition are already commercially available; however, they are extremely expensive and subject to strict regulations, since they employ either x-rays, gamma rays or neutrons. On the other hand, laser induced breakdown spectroscopy, or LIBS, has proven itself a cost-effective solution and has found many applications in other fields. In my Ph.D. thesis I have developed a working prototype for on-line quantitative analysis of coal based on LIBS, following every step of its realization. The main task has been the design and realization of an innovative collection optic that allows measurements on raw coal at a distance of 1 meter from a conveyor belt; the innovative part is that it does not rely on moving parts, decreasing its cost and increasing its reliability.
My tasks also included developing the software for data collection and analysis, testing the instrument and calibrating it. I was also deeply involved in the choice of the components and in the design of an instrument for testing in a real power plant.
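A common scheme for turning LIBS spectra into concentrations is a univariate calibration curve relating the intensity of one emission line to the element concentration in reference samples; the sketch below fits such a line and inverts it for an unknown. The element, line, and all numbers are hypothetical, not the prototype's actual calibration:

```python
def fit_calibration(conc, intensity):
    """Ordinary least-squares line I = a*C + b, fitted on reference samples."""
    n = len(conc)
    mc = sum(conc) / n
    mi = sum(intensity) / n
    a = sum((c - mc) * (i - mi) for c, i in zip(conc, intensity)) / \
        sum((c - mc) ** 2 for c in conc)
    b = mi - a * mc
    return a, b

def predict_concentration(a, b, measured_intensity):
    """Invert the calibration curve for an unknown sample."""
    return (measured_intensity - b) / a

# Hypothetical reference coals: carbon content (%) vs. line intensity (a.u.)
conc = [60.0, 70.0, 80.0, 90.0]
intensity = [1200.0, 1400.0, 1600.0, 1800.0]
a, b = fit_calibration(conc, intensity)
print(predict_concentration(a, b, 1500.0))  # -> 75.0
```

Real LIBS calibration usually also normalizes intensities and handles matrix effects, which this sketch omits.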
APA, Harvard, Vancouver, ISO, and other styles
42

ADAMO, GRETA. "Investigating business process elements: a journey from the field of Business Process Management to ontological analysis, and back." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1010191.

Full text
Abstract:
Business process modelling languages (BPMLs) typically enable the representation of business processes via the creation of process models, which are constructed using the elements and graphical symbols of the BPML itself. Despite the wide literature on business process modelling languages, on the comparison between graphical components of different languages, on the development and enrichment of new and existing notations, and the numerous definitions of what a business process is, the BPM community still lacks a robust (ontological) characterisation of the elements involved in business process models and, even more importantly, of the very notion of business process. While some efforts have been made in this direction, the majority of works in this area focuses on the analysis of the behavioural (control flow) aspects of process models only, thus neglecting other central modelling elements, such as those denoting process participants (e.g., data objects, actors), relationships among activities, goals, values, and so on. The overall purpose of this PhD thesis is to provide a systematic study of the elements that constitute a business process, based on ontological analysis, and to apply these results back to the Business Process Management field.
The major contributions that were achieved in pursuing our overall purpose are: (i) a first comprehensive and systematic investigation of what constitutes a business process meta-model in literature, and a definition of what we call a literature-based business process meta-model starting from the different business process meta-models proposed in the literature; (ii) the ontological analysis of four business process elements (event, participant, relationship among activities, and goal), which were identified as missing or problematic in the literature and in the literature-based meta-model; (iii) the revision of the literature-based business process meta-model that incorporates the analysis of the four investigated business process elements - event, participant, relationship among activities and goal; and (iv) the definition and evaluation of a notation that enriches the relationships between activities by including the notions of occurrence dependences and rationales.
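To make the discussion of meta-model elements concrete, the following minimal sketch encodes a few of the analysed elements (participants, activities, goals, and a relationship between activities carrying a rationale) as plain data structures; the class and field names are illustrative only, not the thesis's actual meta-model:

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str  # e.g. an actor or a data object taking part in the process

@dataclass
class Activity:
    name: str
    performed_by: list = field(default_factory=list)  # Participant instances

@dataclass
class Goal:
    description: str
    achieved_by: list = field(default_factory=list)   # Activity instances

@dataclass
class OccurrenceDependence:
    """Relationship between two activities: 'target' presupposes that
    'source' occurred, annotated with the rationale for the dependence."""
    source: Activity
    target: Activity
    rationale: str = ""

clerk = Participant("loan clerk")
check = Activity("check application", performed_by=[clerk])
approve = Activity("approve loan", performed_by=[clerk])
dep = OccurrenceDependence(check, approve,
                           rationale="approval requires a completed check")
print(dep.target.name)  # -> approve loan
```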
APA, Harvard, Vancouver, ISO, and other styles
43

SANTUCCI, CLAUDIA. "PROGRESS IN CANCER INCIDENCE, MORTALITY, AND SURVIVAL." Doctoral thesis, Università degli Studi di Milano, 2023. https://hdl.handle.net/2434/951503.

Full text
Abstract:
Cancer mortality has declined over the last three decades in most high-income countries, reflecting improvements in cancer prevention, diagnosis, treatments, and management. However, there are persisting and substantial differences in mortality, incidence, and survival worldwide. Using the World Health Organization (WHO) database, I worked on trend and projection analyses of mortality from various cancer sites. I computed age-specific rates for each 5-year age group, calendar year, and sex globally. I then computed age-standardized mortality rates per 100,000 person-years using the direct method based on the world standard population. I fitted joinpoint models to identify the years when significant changes in trends occurred and calculated the corresponding annual percent changes. For the mortality projections, I predicted the number of deaths and the rates for a specific calendar year, using a logarithmic Poisson count data joinpoint regression model. My first study on this topic aimed to provide an up-to-date overview of trends in cancer mortality, incidence, and survival among adults, retrieving data from high-quality population-based cancer registries in seven high-income countries and the European Union. Mortality from all cancers and from most common cancer sites has declined over the last three decades, except for pancreatic cancer and, in women, lung cancer. The patterns for incidence were less consistent between countries, except for a steady decrease in stomach cancer in both sexes and in lung cancer in men. Survival for all cancers and the selected cancer sites increased in all countries, although there is still substantial variability. Although overall cancer death rates continue to decline, incidence rates have been levelling off among males and moderately increasing among females. These trends reflect population changes in cancer risk factors, screening test use, diagnostic practices, and treatment advances.
Many cancers can be prevented or treated effectively if they are diagnosed early. Population-based cancer incidence and mortality data can be used to inform efforts to decrease the cancer burden and to regularly monitor progress toward goals. I had the opportunity to collaborate with the University of Miami Miller School of Medicine, which provided data on cancer mortality among Italy-born Americans from the California, Florida, Massachusetts, and New York Departments of Vital Statistics. Comparing cancer mortality rates and risk factors among foreign-born populations in a host country with those in the country of origin provides insights into differences in access to care, timely diagnosis, and disease management between the two countries. Moreover, cancer studies on specific European-born populations in the USA are scarce. Using official Italian death certificate data and resident population estimates based on the official census from the WHO, I was able to conduct a study comparing cancer mortality rates between Italians and Italy-born Americans. Generational differences in smoking prevalence patterns between the USA and Italy may explain the advantage of Italy-born Americans for lung and other tobacco-related cancers compared with their Italian male counterparts. The lower prevalence of Helicobacter pylori, alcohol consumption, and hepatitis B and C virus in the USA may explain the lower mortality from stomach and liver cancer among Italy-born Americans. Earlier and more widespread adoption of cancer screening and effective treatments in the USA is likely to be influential in breast, colorectal, and prostate cancer mortality. I then focused my studies on urologic cancer mortality over time and its predictions. I carried out a time-trend analysis for selected European countries for prostate, testis, bladder, and kidney cancers over the last four decades. Prostate cancer mortality in the EU decreased over recent years and the projections are favorable.
Less favourable trends were observed in eastern Europe, though starting from relatively low rates. Testicular cancer mortality declined over time in most countries, levelling off in northern and western countries after reaching very low rates. Bladder cancer mortality trends were less favourable in central and eastern countries compared with northern and western ones. Kidney cancer mortality showed a slight increase in men and stable rates in women over the last decade in the EU. To sum up, over the last four decades, mortality from prostate, testis, and bladder cancers, but not from kidney cancer, declined in most European countries. Prostate cancer mortality rates remain lower in Mediterranean countries than in northern and central Europe. Rates for all urologic cancers remain higher in central and eastern Europe. I wrote a book chapter on the epidemiology of prostate cancer, covering all the aspects I studied during my PhD. In addition, I published as a co-author other papers on mortality over time and prediction analyses focusing on different aspects and various causes of cancer mortality: mortality from soft tissue sarcomas, childhood cancer mortality, colorectal cancer mortality in young adults, mortality from gastric and esophageal cancer, and differences between eastern and western EU cancer mortality. I submitted an abstract to the Africa Mortality Symposium on cancer mortality trends in the Republic of South Africa, the Republic of Mauritius, and Réunion, and I am currently working on European predictions of cancer mortality rates for 2023. The research group I work in has been publishing cancer mortality predictions annually since 2011. Moreover, during these three years, I was involved in the CEFIC project (PI Prof. Negri from the University of Bologna) titled “Incidence trends of selected endocrine-related diseases and conditions in Europe and North America, and the contribution of changes in human reproduction”.
Among the endocrine-related diseases and conditions, four cancer sites were considered: endometrium, breast, testis, and prostate. I contributed to this project by evaluating cancer incidence trends in high-income countries worldwide and by reviewing the association between selected reproductive factors and the selected cancers. Thus, we investigated changes in relevant reproductive factors and estimated their influence on cancer occurrence. During my PhD, I have been collaborating with the Department of Oncology at the Mario Negri Institute. Under the supervision of Dr Bosetti, I have worked on various projects. In particular, I was involved in updating a meta-analysis concerning aspirin use and the risk of twelve solid tumors, with a dose-response analysis, finalizing three publications. Moreover, the Mario Negri Institute manages the Italian Register of Multiple Sclerosis, collecting data from more than 100 Italian centers and including more than 70,000 patients. Based on this real-world dataset, I have dealt with several aspects related to multiple sclerosis, being involved in the drafting of two papers: one concerning two methods for measuring the disability accumulated over time, and another studying patients’ and referral centers’ characteristics in relation to multiple sclerosis phenotypes. In my last PhD year, I worked for nine months at the Department of Quantitative Methods and Economics of the University of Las Palmas, supervised by Prof. Serra-Majem. This training period aimed at gaining new experience in conducting cost-effectiveness studies. I conducted a study aimed at quantifying the excess cost due to obesity among patients hospitalized for Covid-19. In collaboration with the Department of Public Health, I conducted an effectiveness analysis of a primary prevention intervention with a Mediterranean diet supplemented with extra-virgin olive oil or nuts, using data from the PREDIMED Trial.
Lastly, I took part in the WOMEDS Study, a project aimed at analysing gender inequality among medical doctors in Spain. During these months abroad, I co-wrote three papers, currently under review.
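The direct age-standardisation step described in the abstract can be sketched in a few lines; the age groups, standard-population weights, and counts below are simplified and hypothetical (real analyses use the full 5-year world standard population):

```python
def age_standardized_rate(cases, person_years, std_weights):
    """Direct method: weighted average of age-specific rates,
    expressed per 100,000 person-years."""
    assert len(cases) == len(person_years) == len(std_weights)
    total_w = sum(std_weights)
    asr = sum(w * (c / py)
              for c, py, w in zip(cases, person_years, std_weights))
    return asr / total_w * 100_000

# Hypothetical data for three broad age groups
cases = [10, 50, 200]
person_years = [1_000_000, 800_000, 400_000]
std_weights = [0.5, 0.3, 0.2]  # younger groups weigh more in the standard
print(round(age_standardized_rate(cases, person_years, std_weights), 3))
```

On the log scale used in joinpoint models, a slope estimate b for a segment translates into an annual percent change of (exp(b) - 1) * 100.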
APA, Harvard, Vancouver, ISO, and other styles
44

MEZZAPELLE, PARDO ANTONIO. "A mechanically derived vulnerability index method for seismic risk assessment of existing RC school buildings." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/252599.

Full text
Abstract:
The seismic vulnerability of RC school buildings is a very important issue in Italy, as shown by recent earthquakes that caused heavy damage to several school buildings. Most Italian schools were built between the 1950s and the 1990s, hence with low seismic resistance criteria or none at all. In this study a methodology for rapid seismic risk assessment of RC school buildings is developed. It is based on the vulnerability index method, assuming 15 vulnerability indicators to which scores are assigned on the basis of expert judgment. The scores were determined through pushover analyses performed on several structural models representative of the main characteristics of the Pre 1974 and Post 1974 school building stocks. To this aim a set of about 40 high schools was analysed to detect typical and specific vulnerabilities, and a simulated design procedure was carried out according to the Codes in force in the two reference periods. Correlations between the global vulnerability index Iv and the capacity in terms of PGA, for both slight damage and collapse, were determined to obtain trilinear damage curves as in the GNDT method. The numerical validation of the proposed method was performed by comparing the trilinear damage curves obtained for two prototype buildings, representative on average of the Pre 1974 and Post 1974 classes, with the analytical damage curves provided for the same buildings by both pushover and incremental dynamic analyses. An experimental validation was also carried out by comparing, for the high schools of the provinces of Ancona and Macerata, the damage that occurred during the 2016 Central Italy seismic sequence with the damage estimated, for the same intensity level, through the proposed method. Both validation procedures confirmed a good reliability of the proposed method for rapid and comparative evaluations. Finally, two typologies of rapid damage scenarios were developed for the building stock, in order to estimate physical, human and economic losses.
The first typology considers a uniform seismic hazard over the whole territory and increasing intensity levels, while the second one considers three single events based on the fault system of the region (fixing for each an epicentre, magnitude and depth); the PGA values are then calculated for every school building by means of an attenuation law.
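A GNDT-style trilinear damage curve of the kind mentioned above can be sketched as a piecewise-linear function of PGA between the slight-damage and collapse capacities; the threshold values below are hypothetical placeholders, not the thesis's Iv correlations:

```python
def trilinear_damage(pga, pga_slight, pga_collapse):
    """Mean damage index in [0, 1] as a trilinear function of PGA:
    zero below the slight-damage capacity, linear growth up to the
    collapse capacity, then one."""
    if pga <= pga_slight:
        return 0.0
    if pga >= pga_collapse:
        return 1.0
    return (pga - pga_slight) / (pga_collapse - pga_slight)

# Hypothetical capacities (in g) for one school building
print(round(trilinear_damage(0.05, 0.08, 0.28), 2))  # -> 0.0
print(round(trilinear_damage(0.18, 0.08, 0.28), 2))  # -> 0.5
print(round(trilinear_damage(0.35, 0.08, 0.28), 2))  # -> 1.0
```

In a damage scenario, the PGA argument would come either from a uniform-hazard intensity level or from an attenuation law evaluated at each building for a given event.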
APA, Harvard, Vancouver, ISO, and other styles
45

LAMONACA, EMILIA. "Analysis of Socio-Economic and Environmental Sustainability of Barley Supply Chain: a Healthy Crop for Human Nutrition." Doctoral thesis, Università di Foggia, 2017. http://hdl.handle.net/11369/362032.

Full text
Abstract:
The scope of the research is to provide evidence about the benefits, in terms of healthiness, environmental sustainability, and productive efficiency, related to barley (Hordeum vulgare L.), a widespread crop in the Apulia region (Italy). In pursuing this general goal, the aim of the research is twofold: (i) investigating consumers’ perception of the quality of organic food, in terms of sustainability and healthiness, and analyzing how and to what extent the perceived quality of organic food is influenced by the presence of quality-related information on food labels and by consumers’ socio-demographic profile; (ii) comparing organic and conventional cultivation of barley, under favorable pedo-climatic conditions, to evaluate the potential environmental impacts of barley cultivation and to identify the most suitable solution in terms of environmental sustainability and productive efficiency. An approach based on a Combination of Uniform and shifted Binomial random variables, named the CUB model, was used to analyze consumers’ preferences in terms of two latent components, feeling and uncertainty. A Life Cycle Assessment (LCA) was performed using, as alternative Functional Units (FUs), 1 ha of land involved in the cultivation of barley to assess environmental sustainability and 1 kg of dry matter grains of produced barley to assess productive efficiency. Findings from the CUB models highlight that the presence of specific information on food labels (e.g. environmental labels, organic certification, health claims) contributes to the perception of organic food as food of superior quality. Results also underline how consumers’ socio-demographic profile plays a significant role in driving the food purchasing decision mechanism.
Findings from the comparative LCA show that organic barley cultivation is the most environmentally sustainable solution (but not efficient in production); vice versa, conventional barley cultivation is the most efficient solution in production (but not environmentally sustainable). Efficiency in production and environmental sustainability may be balanced through methodological assumptions (choice of functional unit, allocation procedure) and qualitative elements (crop quality and adaptiveness to specific pedo-climatic conditions). A land-based FU is preferred in the analysis of the agricultural stage, while a mass-based FU is suitable for the assessment of a wider context, such as the entire supply chain. The research seeks to fill the gap in the economic literature on the barley crop, which is a potential strength for Apulian farms and firms, thanks to its sustainability and healthiness properties.
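The CUB model named above is a two-parameter mixture of a shifted Binomial component (feeling, parameter ξ) and a discrete Uniform component (uncertainty, with mixture weight 1 − π). A minimal sketch of its probability mass function, with hypothetical parameter values (note that parameterizations vary slightly across the CUB literature):

```python
from math import comb

def cub_pmf(r, m, pi, xi):
    """CUB model: P(R = r) = pi * shifted Binomial(m-1, 1-xi)
    + (1 - pi) * Uniform over {1, ..., m}."""
    shifted_binomial = comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
    return pi * shifted_binomial + (1 - pi) / m

m, pi, xi = 7, 0.8, 0.3  # hypothetical estimates for a 7-point rating item
probs = [cub_pmf(r, m, pi, xi) for r in range(1, m + 1)]
print(round(sum(probs), 6))  # -> 1.0 (the pmf sums to one)
```

In practice π and ξ are estimated by maximum likelihood from the observed ratings, and covariates can be linked to both parameters.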
APA, Harvard, Vancouver, ISO, and other styles
46

Ratti, S. "THE INFLUENCE OF ANIMAL NUTRITION ON MEAT QUALITY." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/259595.

Full text
Abstract:
The Food and Agriculture Organization of the United Nations (FAO) defines meat quality in terms of compositional quality and palatability factors such as visual appearance, smell, firmness, juiciness, tenderness, and flavour. Recently, attention has focused on the improvement of meat quality parameters. Protecting lipids from oxidative phenomena is fundamental to preserve meat color and nutritional quality and to increase shelf life, enhancing consumer acceptability. It is also possible to enhance the nutritional quality of meat and meat products by modifying fat content and fatty acid profile. Animal feeding is a key factor for improving meat quality parameters through dietary supplementation with different additives. Antioxidants are one of the major strategies for preventing lipid oxidation in meat products, decreasing oxidative phenomena and increasing shelf life. The search for safe and effective natural antioxidants has focused on plant extracts. Dietary supplementation with an antioxidant mixture containing polyphenols in swine enhanced color parameters and oxidative stability in the Longissimus dorsi muscle (trials 1 and 2). Moreover, the same plant extract improved oxidative stability and sensory parameters in donkey and horse Longissimus lumborum muscle (trial 3). Genetic type also significantly affects the chemical profile of cooked ham, without affecting other quality parameters (trial 4). The nutritional quality of pork products is improved by dietary linseed supplementation, enhancing muscle and adipose tissue n-3 PUFA content. Meta-analysis data provide information to find the optimal dosage, source and length of linseed dietary supplementation in relation to slaughter weight and genetic type (trial 5).
Our results point out the importance of dietary supplementation with plant extract and linseed for the oxidative stability and the physical, chemical and sensory parameters of different types of meat and meat products.
APA, Harvard, Vancouver, ISO, and other styles
47

CHEMLA, ROMEU SANTOS AXEL CLAUDE ANDRE'. "MANIFOLD REPRESENTATIONS OF MUSICAL SIGNALS AND GENERATIVE SPACES." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/700444.

Full text
Abstract:
Among the diverse research fields within computer music, synthesis and generation of audio signals epitomize the cross-disciplinarity of this domain, jointly nourishing both scientific and artistic practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving both with musical practices and scientific/technical advances. Moreover, some syn- thesis processes also naturally handle the reverse process, named analysis, such that synthesis parameters can also be partially or totally extracted from actual sounds, and providing an alternative representation of the analyzed audio signals. On top of that, the recent rise of machine learning algorithms earnestly questioned the field of scientific research, bringing powerful data-centred methods that raised several epistemological questions amongst researchers, in spite of their efficiency. Especially, a family of machine learning methods, called generative models, are focused on the generation of original content using features extracted from an existing dataset. In that case, such methods not only questioned previous approaches in generation, but also the way of integrating this methods into existing creative processes. While these new generative frameworks are progressively introduced in the domain of image generation, the application of such generative techniques in audio synthesis is still marginal. In this work, we aim to propose a new audio analysis-synthesis framework based on these modern generative models, enhanced by recent advances in machine learning. We first review existing approaches, both in sound synthesis and in generative machine learning, and focus on how our work inserts itself in both practices and what can be expected from their collation. 
Subsequently, we focus more closely on generative models, and on how modern advances in the domain can be exploited to learn complex sound distributions while remaining sufficiently flexible to be integrated into the creative flow of the user. We then propose an inference/generation process, mirroring the analysis/synthesis paradigms that are natural in the audio processing domain, using latent models based on a continuous higher-level space that we use to control the generation. We first provide preliminary results of our method applied to spectral information extracted from several datasets, and evaluate the obtained results both qualitatively and quantitatively. We then study how to make these methods more suitable for learning audio data, tackling three different aspects in turn. First, we propose two latent regularization strategies specifically designed for audio, based respectively on signal/symbol translation and on perceptual constraints. Then, we propose different methods to address the inner temporality of musical signals, based on the extraction of multi-scale representations and on prediction, which allow the obtained generative spaces to also model the dynamics of the signal. In a last chapter, we shift our scientific approach to a more research & creation-oriented point of view: first, we describe the architecture and design of our open-source library, vsacids, aiming to be used by expert and non-expert music makers as an integrated creation tool. Then, we propose a first musical use of our system through the creation of a real-time performance, called ægo, based jointly on our framework vsacids and on an explorative agent trained by reinforcement learning during the performance. Finally, we draw some conclusions on the different ways to improve and reinforce the proposed generation method, as well as on possible further creative applications.
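The inference/generation round trip described above (encode a sound into a continuous latent space, then decode a latent point back into a spectrum) can be sketched in a few lines. This is a minimal, untrained illustration of the latent-model idea only, not the thesis's actual architecture: the dimensions, random weights, and function names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a spectral frame (e.g. 513 magnitude bins)
# is mapped into a small continuous latent space used to drive generation.
N_BINS, N_LATENT = 513, 16

# Untrained random weights stand in for a learned encoder/decoder.
W_enc = rng.normal(scale=0.01, size=(N_BINS, 2 * N_LATENT))
W_dec = rng.normal(scale=0.01, size=(N_LATENT, N_BINS))

def encode(frame):
    """Inference ('analysis'): map a spectral frame to latent mean / log-variance."""
    h = frame @ W_enc
    return h[:N_LATENT], h[N_LATENT:]

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, the usual reparameterization trick."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Generation ('synthesis'): map a latent point back to a spectral frame."""
    return np.logaddexp(0.0, z @ W_dec)  # softplus keeps magnitudes non-negative

frame = np.abs(rng.normal(size=N_BINS))  # a stand-in magnitude spectrum
mu, logvar = encode(frame)
z = reparameterize(mu, logvar)
recon = decode(z)
print(recon.shape)  # (513,)
```

Because the latent space is continuous, a point `z` can also be chosen or moved directly, which is what makes such a space usable as a control surface for generation.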
Across the various research fields of computational music, the analysis and generation of audio signals are the perfect example of the cross-disciplinarity of this domain, simultaneously nourishing scientific and artistic practices since their creation. Integrated into computational music since its inception, sound synthesis has inspired numerous musical and scientific approaches, evolving hand in hand with the musical practices and the technological and scientific advances of its time. Moreover, some sound synthesis methods also allow the reverse process, called analysis, such that the synthesis parameters of a given generator can be partially or entirely obtained from given sounds, which can thus be regarded as an alternative representation of the analysed signals. In parallel, the growing interest raised by machine learning algorithms has keenly questioned the scientific world, bringing powerful data-analysis methods that prompt numerous epistemological questions among researchers, despite their practical effectiveness. In particular, a family of machine learning methods, called generative models, focuses on the generation of original content from features extracted directly from the analysed data. These methods question not only previous approaches to generation, but also the integration of these new methods into existing creative processes. Yet, while these new generative processes are progressively being integrated into the domain of image generation, the application of such techniques to audio synthesis remains marginal. In this thesis, we propose a new analysis-synthesis method based on these latest generative models, strengthened by modern advances in the field of machine learning.
First, we review the existing approaches in the domain of generative systems, how our work can fit into existing sound synthesis practices, and what can be expected from the hybridisation of these two approaches. Next, we focus more precisely on how the recent advances accomplished in this domain can be exploited to learn complex sound distributions, while being flexible enough to be integrated into the creative process of the user. We therefore propose an inference/generation process, mirroring the analysis-synthesis paradigms existing in the domain of audio generation, based on the use of continuous latent models that can be used to control generation. To this end, we first study the preliminary results obtained with this method on learning spectral distributions, taken from diverse datasets, adopting both a quantitative and a qualitative approach. We then propose audio-specific improvements to these methods along three distinct aspects. First, we propose two different regularization strategies for the analysis of audio signals: one based on signal/symbol translation, and another based on perceptual constraints. We then turn to the temporal dimension of these audio signals, proposing new methods based on the extraction of multi-scale temporal representations and on an additional prediction task, allowing the obtained generative spaces to model dynamic characteristics. Finally, we move from a scientific approach to a point of view more oriented towards research & creation. First, we present our open-source library, vsacids, intended to be used by expert and non-expert creators as an integrated tool.
Then, we propose a first musical use of our system through the creation of a real-time performance, called ægo, based both on our library and on an exploration agent trained dynamically by reinforcement during the performance. Finally, we draw conclusions from the work accomplished so far, concerning possible improvements and developments of the proposed synthesis method, as well as possible creative applications.
APA, Harvard, Vancouver, ISO, and other styles
48

VIRGILI, LUCA. "Graphs behind data: A network-based approach to model different scenarios." Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295088.

Full text
Abstract:
Nowadays, the number of contexts that can benefit from techniques for extracting knowledge from raw data has increased dramatically. Consequently, the definition of models capable of representing and managing highly heterogeneous data is a widely debated research topic in the literature. In this thesis, we propose a solution to address this problem. In particular, we argue that graph theory, and more specifically complex networks, together with their concepts and approaches, can represent a valid solution. In fact, we believe that complex networks can constitute a unique and unifying model for representing and managing highly heterogeneous data. Based on this premise, we show how the same concepts and approaches have the potential to successfully address many open problems in different contexts.
Nowadays, the amount and variety of scenarios that can benefit from techniques for extracting and managing knowledge from raw data have dramatically increased. As a result, the search for models capable of ensuring the representation and management of highly heterogeneous data is a hot topic in the data science literature. In this thesis, we aim to propose a solution to address this issue. In particular, we believe that graphs, and more specifically complex networks, as well as the concepts and approaches associated with them, can represent a solution to the problem mentioned above. In fact, we believe that they can be a unique and unifying model to uniformly represent and handle extremely heterogeneous data. Based on this premise, we show how the same concepts and/or approaches have the potential to address different open issues in different contexts.
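The premise above, that one graph model can uniformly hold very different kinds of entities and relations, can be illustrated with a toy sketch. The node and relation names below are invented for the example and do not come from the thesis:

```python
from collections import defaultdict

# A toy heterogeneous dataset: people, papers, and topics all become
# nodes of the same graph, with typed edges between them.
edges = [
    ("user:alice", "paper:p1", "authored"),
    ("user:bob",   "paper:p1", "cited"),
    ("paper:p1",   "topic:graphs", "about"),
    ("user:alice", "topic:graphs", "interested_in"),
]

adj = defaultdict(set)
for src, dst, _rel in edges:
    adj[src].add(dst)
    adj[dst].add(src)  # undirected view of the network

# Once everything is a node, standard network notions apply uniformly,
# e.g. degree, regardless of each entity's original type.
degree = {node: len(neigh) for node, neigh in adj.items()}
print(max(degree, key=degree.get))  # 'paper:p1'
```

The design point is that the heterogeneity is pushed into node/edge labels, so the same complex-network machinery (centrality, communities, paths) can be reused unchanged across scenarios.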
APA, Harvard, Vancouver, ISO, and other styles
49

CARUGATI, LAURA. "Molecular analysis of marine benthic biodiversity: methodological implementations and perspectives." Doctoral thesis, Università Politecnica delle Marche, 2017. http://hdl.handle.net/11566/245338.

Full text
Abstract:
Marine biodiversity regulates ecosystem functioning, which is responsible for the production of goods and services important for the biosphere and human wellbeing. Estimating biodiversity is one of the fundamental goals of science. Global changes and anthropogenic activities are having an increasing impact on ecosystems. Impact assessments and monitoring programmes rely on large species, neglecting small-sized organisms such as the meiofauna (20-500 µm), due to the difficulties associated with traditional morphological identification. The aim of this work is to implement methods for the study of the meiofauna, combining morphological and molecular approaches. In order to meet the needs of the Marine Strategy Framework Directive, a tool for the assessment of environmental status and various innovative sampling methodologies were tested in different marine ecosystems. Molecular tools, such as metabarcoding, together with new sampling systems, show great potential for the study of marine biodiversity. The molecular protocol developed here makes it possible to analyse the diversity of small-sized organisms in a rapid and cost-effective way, improving our ability to study meiofaunal biodiversity even in deep-sea ecosystems. Despite its many advantages, applying metabarcoding to monitoring requires overcoming some limitations. The incompleteness of databases and the presence of multiple gene copies in individuals of the same species hamper data interpretation and prevent quantitative estimates of biodiversity. To ensure that metabarcoding analyses can be further validated, it is crucial to preserve expertise in morphological identification.
The traditional and the molecular approach provide different information, so both should be used to obtain accurate biodiversity estimates.
Marine biodiversity regulates ecosystem functions that are responsible for the production of goods and services for the entire biosphere and for human wellbeing. Censusing the species that inhabit the oceans is among the most fundamental tasks in science. Global changes and human activities are having an increasing impact on ocean biodiversity and ecosystem functioning. Impact assessments and monitoring programmes are mostly based on large and conspicuous species. Small and cryptic organisms, including the eukaryotic meiobenthic fauna (20-500 µm), remain overlooked due to the difficulties associated with traditional, morphology-based identification. We here aim to implement methods for the study of meiofaunal biodiversity, combining classical morphological and molecular approaches. In order to answer the needs of the Marine Strategy Framework Directive, an environmental status assessment tool and a variety of innovative sampling methodologies have been tested in different ecosystems. Along with new sampling approaches, molecular tools such as metabarcoding have great potential to innovate the analysis of marine biodiversity. The molecular protocol set up in this work allows the diversity of tiny organisms to be analysed in a rapid and cost-effective way, thus increasing our ability to investigate meiofaunal biodiversity, also in the deep sea. Beyond its many advantages, the routine application of metabarcoding for monitoring requires overcoming some limitations. The incompleteness of public databases and the presence of multi-copy genes within meiofaunal species hamper the interpretation of metabarcoding data and do not allow quantitative biodiversity estimates to be inferred. It is thus critically important to maintain expertise in morphological identification to ensure that metabarcoding analyses can be further validated. Morphological and molecular approaches provide different information, thus they should be combined to obtain better evaluations of the actual marine biodiversity.
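The two computational limitations named above (incomplete reference databases and multi-copy genes) can be made concrete with a heavily simplified sketch of taxonomic assignment. Real metabarcoding pipelines align reads against public reference databases; here the database, the sequences, and the taxon names are all invented, and matching is reduced to an exact lookup purely for illustration:

```python
# Hypothetical reference database mapping barcode sequences to taxa.
reference = {
    "ACGTACGT": "Nematoda sp. A",
    "TTGCAAGC": "Copepoda sp. B",
}

reads = ["ACGTACGT", "ACGTACGT", "GGGGCCCC", "TTGCAAGC"]

assigned, unassigned = {}, 0
for read in reads:
    taxon = reference.get(read)
    if taxon is None:
        unassigned += 1  # database incompleteness: the read cannot be named
    else:
        assigned[taxon] = assigned.get(taxon, 0) + 1

print(assigned)    # {'Nematoda sp. A': 2, 'Copepoda sp. B': 1}
print(unassigned)  # 1
```

Note that even the per-taxon read counts are not abundances: since species carry the marker gene in different (and often unknown) copy numbers, the counts cannot be translated into quantitative biodiversity estimates, which is exactly why the abstract argues for retaining morphological expertise alongside the molecular pipeline.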
APA, Harvard, Vancouver, ISO, and other styles
50

SIMONETTI, Andrea. "Development of statistical methods for the analysis of textual data." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/574870.

Full text
APA, Harvard, Vancouver, ISO, and other styles