Dissertations on the topic "Model Analysi"

To view other types of publications on this topic, follow the link: Model Analysi.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Explore the top 50 dissertations for research on the topic "Model Analysi".

Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if the corresponding data are available in the metadata.

Browse dissertations across a wide range of disciplines and format your bibliography correctly.

1

SANTORO, MAURO. "Inference of behavioral models that support program analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19514.

Full text of the source
Abstract:
The use of models to study the behavior of systems is common to all fields. A behavioral model formalizes and abstracts the view of a system and gives insight into the behavior of the system being developed. In the software field, behavioral models can support software engineering tasks. In particular, behavioral models play a relevant role in all the main analysis and testing activities: they are used in program comprehension to complement the information available in specifications, in testing to ease test case generation, as oracles to verify the correctness of executions, and as failure detectors to automatically identify anomalous behaviors. When behavioral models are not part of the specifications, automated approaches can derive them from programs. The degree of completeness and soundness of the generated models depends on the kind of inferred model and on the quality of the data available for the inference. When model inference techniques do not work well, or the data available for the inference are poor, the many testing and analysis techniques based on these models will necessarily provide poor results. This PhD thesis concentrates on the problem of inferring Finite State Automata (FSAs, likely the model most used to describe the behavior of software systems) that describe the behavior of programs and components and can support testing and analysis activities.
The thesis contributes to the state of the art by: (1) Empirically studying the effectiveness of techniques for the inference of FSAs when a variable amount of information (from scarce to good) is available for the inference; (2) Empirically comparing the effectiveness of techniques for the inference of FSAs and Extended FSAs; (3) Proposing a white-box technique that infers FSAs from service-based applications by starting from a complete model and then refining the model by incrementally removing inconsistencies; (4) Proposing a black-box technique that infers FSAs by starting from a partial model and then incrementally producing additional information to increase the completeness of the model.
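As a generic illustration of the inference problem the thesis addresses (and not of the specific techniques it studies or proposes), a common starting point for deriving an FSA from execution traces is a prefix tree acceptor, where each observed trace prefix becomes a state; the event names below are invented:

```python
# Build a prefix tree acceptor (PTA) from positive execution traces.
# Each distinct trace prefix becomes a state; inference techniques then
# generalize by merging states (e.g. k-tails). Illustrative sketch only.

def build_pta(traces):
    transitions = {}      # (state, event) -> successor state
    next_state = 1        # state 0 is the initial state
    for trace in traces:
        state = 0
        for event in trace:
            if (state, event) not in transitions:
                transitions[(state, event)] = next_state
                next_state += 1
            state = transitions[(state, event)]
    return transitions

# Two hypothetical traces of a file-handling component
traces = [["open", "read", "close"],
          ["open", "write", "close"]]
pta = build_pta(traces)
# The shared "open" prefix leads both traces through the same transition
assert pta[(0, "open")] == 1
```

A real inference technique would then merge PTA states with similar continuations to obtain a smaller, more general FSA; as the abstract points out, the completeness of the result depends on how well the traces cover the program's behavior.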
APA, Harvard, Vancouver, ISO, and other styles
2

Guiotto, Annamaria. "Development of a gait analysis driven finite element model of the diabetic foot." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423117.

Full text of the source
Abstract:
Diabetic foot is an invalidating complication of diabetes mellitus, a chronic disease increasingly frequently encountered in the aging population. The global prevalence of diabetes is predicted to double by the year 2030, from 2.8% to 4.4%. The prevalence of foot ulceration among patients with diabetes mellitus ranges from 1.3% to 4.8%. Several studies have highlighted that biomechanical factors play a crucial role in the aetiology, treatment and prevention of diabetic foot ulcers. Recent literature on the diabetic foot indicates that mechanical stresses, i.e. high plantar pressures and/or high tangential stresses, acting within the soft tissues of the foot can contribute to the formation of neuropathic ulcers. While it is important to study the in-vivo diabetic foot-to-floor interactions during gait, models for simulating deformations and stresses in the diabetic plantar pad are required to predict high-risk areas or to investigate the performance of different insole designs for optimal pressure relief. Finite element (FE) models make it possible to take into account the critical aspects of the diabetic foot, namely the movement, the morphology, the tissue properties and the loads. Several 2-dimensional (2D) and 3-dimensional (3D) foot models have been developed recently to study the biomechanical behavior of the human foot and ankle. However, to the author's knowledge, a geometrically detailed and subject-specific 3D FE model of the diabetic neuropathic foot and ankle has not been reported. Furthermore, 2D and 3D state-of-the-art FE foot models are rarely combined with subject-specific gait analysis data, both in terms of ground reaction forces and kinematics as input parameters and of plantar pressures for validation purposes. The purpose of the study presented here was to simulate the biomechanical behavior of both a healthy and a diabetic neuropathic foot in order to predict the areas characterized by excessive stresses on the plantar surface.
To achieve this, an FE model of the foot was developed by applying the loading and boundary conditions given by subject-specific, integrated and synchronized kinematic-kinetic data, acquired during gait analysis trials, to a subject-specific FE model (the geometry was obtained from subject-specific magnetic resonance images, MRI). Thus, an integrated kinematic-kinetic protocol for gait analysis, which evaluates the 3D kinematics and kinetics of foot subsegments, is described here together with two comprehensive FE models of a healthy and a diabetic neuropathic foot and ankle. In order to establish the feasibility of this approach, a 2D FE model of the hindfoot was first developed, taking into account the bone and plantar pad geometry, the soft tissue material properties, and the kinematics and kinetics of both a healthy and a diabetic neuropathic foot acquired at three different instants of the stance phase of gait. Once the advantage of such an approach in developing 2D FE foot models was demonstrated, 3D FE models of the whole foot of the same subjects were developed and the simulations were run at several instants of the stance phase of gait. The FE simulations were validated by comparing the simulated plantar pressures with the subject-specific experimental ones acquired during gait at the corresponding instants of the stance phase. A secondary aim of the study was to drive the healthy and the diabetic neuropathic FE foot models with the gait analysis data of 10 healthy and 10 diabetic neuropathic subjects, respectively, in order to verify the possibility of extending the results of the subject-specific FE models to a wider population. The validity of this approach was also established by comparing the simulated plantar pressures with the subject-specific experimental ones acquired during gait at different instants of the stance phase.
A comparison was also made between the errors obtained when the FE simulations were run with the subject-specific geometry (obtained from MRI data) and the errors obtained when the FE simulations were run with the data of the 20 subjects.
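The validation step described above, comparing simulated and measured plantar pressures, amounts to computing an error metric over corresponding foot regions. A minimal sketch with made-up pressure values, not data from the study:

```python
# Hypothetical peak plantar pressures (kPa) per foot region:
# FE-simulated values vs. values measured during gait.
simulated = {"heel": 310.0, "midfoot": 95.0, "metatarsals": 270.0}
measured = {"heel": 290.0, "midfoot": 110.0, "metatarsals": 255.0}

def rmse(sim, exp):
    """Root-mean-square error over the regions present in the simulated map."""
    sq_errors = [(sim[r] - exp[r]) ** 2 for r in sim]
    return (sum(sq_errors) / len(sq_errors)) ** 0.5

error = rmse(simulated, measured)  # about 16.8 kPa for these numbers
```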
APA, Harvard, Vancouver, ISO, and other styles
3

VIRGILI, LUCA. "Graphs behind data: A network-based approach to model different scenarios." Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295088.

Full text of the source
Abstract:
Nowadays, the amount and variety of scenarios that can benefit from techniques for extracting and managing knowledge from raw data have dramatically increased. As a result, the search for models capable of ensuring the representation and management of highly heterogeneous data is a hot topic in the data science literature. In this thesis, we propose a solution to address this issue. In particular, we believe that graphs, and more specifically complex networks, together with the concepts and approaches associated with them, can represent a solution to the problem mentioned above. In fact, we believe that they can be a unique and unifying model to uniformly represent and handle extremely heterogeneous data. Based on this premise, we show how the same concepts and approaches have the potential to address different open issues in different contexts.
APA, Harvard, Vancouver, ISO, and other styles
4

CHIESA, DAVIDE. "Development and experimental validation of a Monte Carlo simulation model for the Triga Mark II reactor." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50064.

Full text of the source
Abstract:
In recent years, many computer codes, based on Monte Carlo methods or deterministic calculations, have been developed to separately analyze different aspects of nuclear reactors. Nuclear reactors are very complex systems, which require an integrated analysis of all the variables that are intrinsically correlated: neutron fluxes, reaction rates, neutron moderation and absorption, thermal and power distributions, heat generation and transfer, criticality coefficients, fuel burnup, etc. For this reason, one of the main challenges in the analysis of nuclear reactors is the coupling of neutronics and thermal-hydraulics simulation codes, with the purpose of achieving a good modeling and comprehension of the mechanisms which rule the transient phases and the dynamic behavior of the reactor. This is very important to guarantee control of the chain reaction and safe operation of the reactor. In developing simulation tools, benchmark analyses are needed to prove the reliability of the simulations. Experimental measurements conceived to be compared with the results of the simulations are very precious and can provide useful information to improve the description of the physics phenomena in the simulation models. My PhD research activity was carried out in this framework, as part of the research project Analysis of Reactor COre (ARCO), promoted by INFN, whose task was the development of modern, flexible and integrated tools for the analysis of nuclear reactors, relying on the experimental data collected at the TRIGA Mark II research reactor installed at the Applied Nuclear Energy Laboratory (LENA) of the University of Pavia. Once the effectiveness and reliability of these tools for modeling an experimental reactor have been demonstrated, they could be applied to develop new-generation systems.
In this thesis, I present the complete neutronic characterization of the TRIGA Mark II reactor, which was analyzed in different operating conditions through experimental measurements and the development of a Monte Carlo simulation tool (based on the MCNP code) able to take into account the ever increasing complexity of the conditions to be simulated. First of all, after giving an overview of some theoretical concepts which are fundamental for nuclear reactor analysis, a model that reconstructs the first working period of the TRIGA Mark II reactor, in which the "fresh" fuel was not yet heavily contaminated with fission products, is described. In particular, all the geometries and materials are described in the MCNP simulation model in good detail, in order to reconstruct the reactor criticality and all the effects on the neutron distributions. The very good results obtained from the simulations of the reactor in the low-power condition (in which the fuel elements can be considered to be in thermal equilibrium with the surrounding water) are then used to implement a model for simulating the full-power condition (250 kW), in which the effects arising from the temperature increase in the fuel-moderator must be taken into account. The MCNP simulation model was exploited to evaluate the reactor power distribution, and a dedicated experimental campaign was performed to measure the water temperature within the reactor core. In this way, through a thermal-hydraulic calculation tool, it has been possible to determine the temperature distribution within the fuel elements and to include the description of the thermal effects in the MCNP simulation model. Thereafter, since the neutron flux is a crucial parameter affecting the reaction rates and thus the fuel burnup, its energy and space distributions are analyzed, presenting the results of several neutron activation measurements.
In particular, the neutron flux was first measured in the reactor's irradiation facilities through the neutron activation of many different isotopes. Then, in order to analyze the energy flux spectra, I implemented an analysis tool, based on Bayesian statistics, which makes it possible to combine the experimental data from the different activated isotopes and reconstruct a multi-group flux spectrum. Subsequently, the spatial neutron flux distribution within the core was measured by activating several aluminum-cobalt samples in different core positions, thus allowing the determination of the integral and fast flux distributions from the analysis of cobalt and aluminum, respectively. Finally, I present the results of the fuel burnup calculations, which were performed to simulate the current core configuration after 48 years of operation. The good accuracy reached in the simulation of the neutron fluxes, as confirmed by the experimental measurements, made it possible to evaluate the burnup of each fuel element from the knowledge of the operating hours and of the different positions occupied in the core over the years. In this way, it has been possible to exploit the MCNP simulation model to determine a new optimized core configuration which could ensure, at the same time, a higher reactivity and the use of fewer fuel elements. This configuration was realized in September 2013, and the experimental results confirm the high quality of the work done. The results of this PhD thesis highlight that it is possible to implement analysis tools (ranging from Monte Carlo simulations to fuel burnup time-evolution software, from neutron activation measurements to the Bayesian statistical analysis of flux spectra, and from temperature measurements to thermal-hydraulic models) which can be appropriately exploited to describe and comprehend the complex mechanisms ruling the operation of a nuclear reactor.
In particular, the effectiveness and reliability of these tools were demonstrated in the case of an experimental reactor, where it was possible to collect many precious data for benchmark analyses. Therefore, now that these tools have been developed and implemented, they can be used to analyze other reactors and, possibly, to design and develop new-generation systems, which will make it possible to decrease the production of high-level nuclear waste and to exploit the nuclear fuel with improved efficiency.
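The multi-group flux reconstruction mentioned above can be pictured, in heavily simplified form, as inverting the activation equations R_i = sum_g sigma[i][g] * phi[g]: with as many isotopes as energy groups the system can be solved directly, whereas the thesis combines many isotopes in a Bayesian fit. All numbers below are invented for illustration, not reactor data:

```python
# Toy two-group flux unfolding from two activation measurements.
# sigma[i][g]: cross section of isotope i in energy group g (arbitrary units)
sigma = [[50.0, 1.0],    # isotope A: (thermal, fast)
         [2.0, 10.0]]    # isotope B
rates = [520.0, 140.0]   # measured reaction rates (arbitrary units)

def solve_2x2(a, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    x1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return x0, x1

phi_thermal, phi_fast = solve_2x2(sigma, rates)
# The recovered group fluxes reproduce the first measured rate
assert abs(50.0 * phi_thermal + 1.0 * phi_fast - 520.0) < 1e-9
```

With more isotopes than groups the system becomes overdetermined and noisy, which is why a statistical (here Bayesian) fit is used instead of a direct inversion.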
APA, Harvard, Vancouver, ISO, and other styles
5

Ferrari, Rosalba (ORCID:0000-0002-3989-713X). "An elastoplastic finite element formulation for the structural analysis of truss frames with application to a historical iron arch bridge." Doctoral thesis, Università degli studi di Bergamo, 2013. http://hdl.handle.net/10446/28959.

Full text of the source
Abstract:
This doctoral thesis presents a structural analysis of the Paderno d’Adda Bridge, an impressive iron arch viaduct built in 1889 and located in the Lombardy region of Italy. The thesis falls within the context of a research activity started at the University of Bergamo in 2005, which is still ongoing and aims to evaluate the present state of conservation of the bridge. In fact, the bridge is still in service, and its important position in the transport network will soon raise questions about its future destination, with particular attention to the evaluation of its residual performance capacity. To this end, an inelastic structural analysis of the Paderno d’Adda Bridge has been performed, up to failure. This analysis has been conducted with an autonomous computer code for 3D frame structures that runs in the MATLAB environment and has been developed within the classical framework of Limit Analysis and the Theory of Plasticity. The algorithm applies the "exact", stepwise holonomic, step-by-step analysis method. It has proven very capable of tracking the limit structural behaviour of the bridge, reaching convergence with smooth runs up to the true limit load and the corresponding collapse displacements. The main ingredients of its elastoplastic FEM formulation are: beam finite elements; perfectly plastic joints (as an extension of classical plastic hinges); piecewise-linear yield domains; "exact" time integration. The following original features have been implemented in the algorithm: treatment of mutual connections by static condensation and Gaussian elimination; determination of the tangent stiffness through Gaussian elimination. These contributions are presented in detail in this thesis.
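As a loose illustration of the elastic-perfectly-plastic behaviour underlying such a formulation (the thesis works with 3D beam elements, perfectly plastic joints and piecewise-linear yield domains, not this scalar sketch), here is a one-dimensional stress update with an elastic predictor and a plastic corrector, using assumed steel-like parameters:

```python
# 1D elastic-perfectly-plastic stress update (return mapping).
E = 210e3        # Young's modulus, MPa (assumed)
sigma_y = 235.0  # yield stress, MPa (assumed, S235-like steel)

def stress_history(strain_increments):
    stress, plastic_strain = 0.0, 0.0
    history = []
    for d_eps in strain_increments:
        trial = stress + E * d_eps             # elastic predictor
        if abs(trial) <= sigma_y:
            stress = trial                      # step stays elastic
        else:
            sign = 1.0 if trial > 0 else -1.0
            plastic_strain += (abs(trial) - sigma_y) / E
            stress = sign * sigma_y             # return to the yield surface
        history.append(stress)
    return history, plastic_strain

# Three equal strain increments: the third one crosses the yield stress,
# so the stress is capped at sigma_y and plastic strain accumulates
history, plastic = stress_history([0.0005, 0.0005, 0.0005])
```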
APA, Harvard, Vancouver, ISO, and other styles
6

Rosso, T. "METODI STATISTICI PER L'ANALISI E LA PREVISIONE DELLA MORTALITA' PER TUMORE." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/344554.

Full text of the source
Abstract:
The introduction of time series modeling techniques has made it possible to analyze the different factors underlying the changes in mortality and incidence rates over time, both for analytic and predictive purposes. Age-period-cohort analyses contribute to the etiologic purpose of descriptive epidemiology by making inference from the group to the individual possible. They refer to a family of statistical techniques that study the temporal trends of outcomes, such as mortality and incidence, in terms of three temporal variables: the subject's age, the calendar period and the subject's birth cohort. Useful as it is, the age-period-cohort model is marred by a structural identifiability problem: the variables age, period and cohort have an exact linear dependence, i.e. "age = period - cohort". Predicting a future event is a complex and insidious process; however, it is a useful endeavor in most human activities. The information gained on probable future trends, even if unreliable or imprecise, is highly valuable. Predicted future cancer incidence and mortality rates are essential tools for both epidemiology and health planning. Numerous methods to carry out age-period-cohort analysis are described in the literature; three of these are illustrated in detail and compared by applying them to real data (the WHO mortality database): a method based on penalized likelihood, one using generalized additive models (GAM) and one based on partial least squares (PLS) techniques. Predictive analysis techniques are also presented and compared using observed mortality data. Short-term age-period prediction methods based on joinpoint analysis and Bayesian modelling, and a long-term technique which uses a Bayesian age-period-cohort model, are reviewed. In detail, predictions with the age-period method based on joinpoint analysis are carried out applying linear, Poisson and log-linear regression models.
In the age-period-cohort analysis comparison, the penalized likelihood and GAM methods produce similar results, while effect estimates from the PLS model are noticeably different. These differences can be explained by looking at how the three models solve the issue of perfect collinearity between the age, period and cohort parameters. On the one hand, the penalized likelihood and GAM methods use different techniques to distribute the linear drift between the period and cohort effects. The PLS method, on the other hand, solves the identifiability problem through the generalized inverse, minimizing the variance-covariance matrix of the estimated parameters. Without a formal simulation analysis, comments are limited to stating that the two models based on linear drift distribution are more suitable for epidemiological comparisons, where the effects of age are well defined (as in the case of cancer mortality) and the major problems reside in untangling the period and cohort effects. The PLS model, on the other hand, may hypothetically prove to be a useful method to predict future trends. Age-period-cohort analysis is thus an extremely useful tool in the study of mortality data, particularly for cohort effect analysis, but it should be used with due caution since it is relatively easy to draw erroneous conclusions. The predictive method comparison shows that estimates from the different models are similar, especially for the Poisson and log-linear models. However, the linear model has a tendency to underestimate, while the other considered models seem to overestimate, particularly as the forecasting time period grows larger. Overall, the Bayesian age-period model seems to be less suitable for short- and medium-term mortality predictions, while the other models do not show large performance differences.
From these limited tests, the linear model and the Bayesian age-period-cohort model seem to provide better estimates when mortality values are low, whereas in the case of greater numbers the Poisson and log-linear models seem like better choices. Finally, the unknown underlying distribution shape of the analyzed data determines which model predicts more successfully. However, all the studied models are appropriate for predicting data over short periods (up to 5 years), while none of them performs well over the medium term. Prediction of future trends will always be a complex and insidious exercise, albeit an extremely useful one; furthermore, the obtained estimates should be taken with caution and regarded only as a general indication of potential interest for epidemiology and health planning.
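The structural identifiability problem mentioned in this abstract ("age = period - cohort") can be demonstrated directly: adding a linear drift d to the age and cohort effects while subtracting it from the period effect leaves every fitted log-rate unchanged, so no fit can separate the three linear slopes. A small check with arbitrary effect values:

```python
# Because cohort = period - age, the reparameterization
# (a, p, c) -> (a + d, p - d, c + d) leaves all fitted values unchanged.
d = 0.05  # arbitrary drift

def log_rate(age, period, a, p, c):
    """Fitted log-rate under purely linear age, period and cohort effects."""
    cohort = period - age
    return a * age + p * period + c * cohort

for age in (40, 50, 60):
    for period in (1990, 2000, 2010):
        base = log_rate(age, period, a=0.08, p=0.01, c=-0.02)
        shifted = log_rate(age, period, a=0.08 + d, p=0.01 - d, c=-0.02 + d)
        assert abs(base - shifted) < 1e-9  # indistinguishable from data
```

This is why APC methods must impose an extra constraint (such as how the drift is distributed), and why, as the abstract notes, different constraints yield different effect estimates.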
APA, Harvard, Vancouver, ISO, and other styles
7

ZANOTTI, FRAGONARA LUCA. "Dynamic models for ancient heritage structures." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2502121.

Full text of the source
Abstract:
Risks to cultural heritage and the related losses should be mitigated before disasters such as earthquakes happen. Risks can be addressed by various means, from raising the cultural attention of authorities to documenting the artistic or historical value of an object. The main contribution of structural engineering to cultural heritage concerns regular maintenance and monitoring for risk reduction. Risk mitigation of historical buildings, as a part of the more general concept of conservation, involves different disciplines. Teams need to be multidisciplinary, and information deriving from historical, metric, stylistic, structural, seismic, geotechnical and physical analyses may contribute to an overall comprehension of cultural assets. The synergic action of characterization and monitoring techniques is essential to understand, on the one hand, the mechanisms and consequences of degradation and, on the other hand, to provide reliable and well-grounded guidelines for the definition of technical interventions to prevent or stop the degradation phenomena, to restore the functionality and use of the historical building or artifact, or to predict, mitigate and even control the response to accidental events, including strong motions. In this field, an important role is played not only by the analytical aspects, but also by the development and validation of innovative materials and systems for conservation. International deontological guidelines on the conservation of cultural heritage describe the structural rehabilitation of heritage structures as the cure of a sick person, hence "the heritage structures require anamnesis, diagnosis, therapy and controls, corresponding respectively to the searches for significant data and information, individuation of the causes of damage and decay, choice of the remedial measures and control of the efficiency of the interventions".
Moreover, the same codes state that "the best therapy is preventive maintenance", which can only be achieved via monitoring of the structure. In this thesis, a few topical issues of the structural modelling, monitoring and assessment of historic masonry buildings are addressed, with particular emphasis on dynamic testing and identification. The possible connections with other disciplines are analysed and discussed throughout the text. In this framework, the thesis opens with an introductory first chapter in which the context established by the most recent codes and guidelines concerning architectural heritage conservation is reviewed and analysed. The importance of attaining a thorough knowledge of the structure is also discussed. The second chapter sets the scene, introducing the principal issues of seismic risk and the safety assessment of architectural heritage. Firstly, a brief overview is given of seismic risk and of the geological and geotechnical aspects related to ancient heritage. Subsequently, the viability of performance-based approaches for the seismic assessment of architectural heritage is discussed, also in the light of a few recent proposals. In this context, the fundamentals of structural health monitoring are also reported. Chapter 3 is intended to stress the importance of modal testing as an effective tool for the characterisation of ancient structures, so it starts with a state of the art of linear system identification methods, with emphasis on output-only techniques. In particular, time-domain and joint time-frequency-domain identification techniques are introduced and analysed in depth. Model updating is then addressed and its connection with operational modal analysis is underlined. Finally, a few noteworthy examples of linear identification and model updating of architectural heritage structures are reported. Chapter 4 is about the dynamic and seismic behaviour of domes.
The coverage focuses on three ideal benchmarks for reconciling geometric survey with dynamic monitoring. The analyses concerned structures with oval-shaped domes: the Sanctuary of Vicoforte, S. Caterina in Casale Monferrato and S. Agostino in L’Aquila. The final products are virtual models able to predict the linear dynamic response under earthquake excitation. Chapter 5 examines modelling strategies suited for masonry under intense seismic excitations. The state-of-the-art review covers both models for equivalent static analysis and models which operate in dynamics. A model allowing for stiffness degradation, pinching and hysteresis is then proposed, whose formulation admits extensions to multiple-degree-of-freedom systems. The proposal originates from the well-known Bouc-Wen model. Chapter 6 deals with non-linear identification methods. Non-linear identification, too, is expected to become a powerful tool in the context of structural and seismic reliability assessment, especially in the light of the increasing levels of knowledge and prediction capability which recent standards strive for. Unfortunately, non-linear identification is to date a specialized and challenging matter, and it has seldom been applied to full-scale structures. In this chapter, special emphasis is given to on-line implementations, with several numerical examples showing the potential of non-linear as well as hysteretic system identification. The last chapter presents an experimental application of non-linear identification. A scaled model of a two-span masonry arch bridge was artificially damaged and monitored at each damage step. A non-linear identification was performed on shaker test data. Results of the experimental campaign will be used to corroborate a non-linear and hysteretic model of the bridge endowed with prediction capabilities.
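The abstract cites the well-known Bouc-Wen model as the starting point of the proposed hysteretic formulation. As a hedged sketch only (the parameter values below are arbitrary illustrations, not those calibrated in the thesis, and the thesis's extensions for stiffness degradation and pinching are omitted), a single-degree-of-freedom Bouc-Wen oscillator can be integrated explicitly as:

```python
import math

def bouc_wen_step(x, v, z, f, dt, m=1.0, c=0.05, k=1.0, alpha=0.5,
                  A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """One explicit Euler step of a SDOF Bouc-Wen oscillator.

    Restoring force = alpha*k*x (elastic part) + (1-alpha)*k*z (hysteretic part),
    with the hysteretic variable z governed by the classic Bouc-Wen evolution law.
    """
    a = (f - c * v - alpha * k * x - (1.0 - alpha) * k * z) / m
    zdot = (A * v
            - beta * abs(v) * (abs(z) ** (n - 1.0)) * z
            - gamma * v * (abs(z) ** n))
    return x + dt * v, v + dt * a, z + dt * zdot

def simulate(T=60.0, dt=0.001, amp=1.0, omega=1.2):
    """Harmonically forced response; returns displacement and hysteretic histories."""
    x = v = z = 0.0
    xs, zs = [], []
    for i in range(int(T / dt)):
        f = amp * math.sin(omega * i * dt)  # illustrative harmonic forcing
        x, v, z = bouc_wen_step(x, v, z, f, dt)
        xs.append(x)
        zs.append(z)
    return xs, zs
```

With n = 1 and these coefficients, the hysteretic variable is analytically bounded by A/(beta + gamma) = 1, which is a convenient sanity check on the loop shape.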
APA, Harvard, Vancouver, ISO, and other styles
8

GUARNERA, DANIELE. "Refined one-dimensional models applied to biostructures and fluids." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2729363.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

D'ALESSANDRO, ANNAMARIA. "Characterization of protein degradation arrest inducted by Epoxomicin in a neuroblastoma cell line model." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2008. http://hdl.handle.net/2108/385.

Full text of the source
Abstract:
Maintenance of homeostasis and the ability of cells to respond to the external environment depend on the degradation of regulatory proteins. The two main protein degradation systems in eukaryotic cells are the ubiquitin-proteasome system (UPS) and autophagy (ALP). Although the UPS is more efficient at degradation than autophagy, under particular conditions (proteasome inhibition) autophagy becomes the preferred route. Many substances, both synthetic and natural (e.g. Epoxomicin), have been described in animal models as reminiscent of neurodegenerative pathologies. These findings led us to carry out a molecular characterization of the effects of Epoxomicin on neuroblastoma (NB) cells. The aim of the research was to examine the biological effects of this injuring drug on SH-SY5Y, a human NB cell line (morphological changes, apoptosis, accumulation of polyubiquitinated proteins and activation of autophagy), to clarify through functional proteomics the impact of the drug on the NB proteome, and to characterize the information obtained through the study of protein networks. In particular, the phenotypic, structural and functional characterization of Epoxomicin on NB cells was carried out using different proteomic approaches (two-dimensional electrophoresis combined with Peptide Mass Fingerprinting, and liquid chromatography coupled to tandem and exponential mass spectrometry). All the differentially expressed proteins identified (control vs. treated) were studied and grouped into their respective functional categories. Some of them were validated by western blotting on different human NB cell lines, as well as on primary murine neurons, characterized by different phenotypic and genetic backgrounds. The proteomic results were then analysed by bioinformatics.
On the basis of a knowledge-based database approach, we built functional networks comprising the identified proteins and found that many of them are connected with beta-estradiol, known for its neuroprotective role. To confirm this evidence, we treated our NB cell model with beta-estradiol before exposing the cells to Epoxomicin. The results showed a reduction of apoptosis and the resumption of the cell cycle, associated with a marked reduction of ubiquitinated inclusions and the induction of autophagy. These data therefore suggest a protective role of estradiol in the removal of protein aggregates. Further studies will be carried out in animal models to define the mechanisms through which the identified proteins are involved in the response to Epoxomicin.
Maintenance of cellular homeostasis and the ability of cells to respond to their environment depend on the orderly degradation of key regulatory proteins. The two main routes of protein clearance in eukaryotic cells are the ubiquitin-proteasome system (UPS) and the autophagy-lysosome pathway (ALP). Even if the UPS is more efficient than macroautophagy, in particular conditions (i.e. inhibition of the proteasome) autophagy becomes the major clearance route. A variety of compounds, both synthetic analogs and natural products (e.g. Epoxomicin), have been described in animal models as reminiscent of neurodegenerative syndromes. This evidence suggested the need for a better characterization of the molecular effects induced by Epoxomicin. Our investigation sought to examine the biological effect of this injuring drug on SH-SY5Y cells, a human neuroblastoma (NB) cell line (cell morphological changes, induction of apoptosis, accumulation of polyubiquitinated proteins and activation of autophagy), to clarify by functional proteomics its impact on the NB cell proteome, and to characterize the obtained information through protein networks. The phenotypic, structural and functional characterization of the impact of Epoxomicin on the NB cell proteome was carried out using different functional proteomic approaches (2DE combined with Peptide Mass Fingerprinting, Liquid Chromatography-Tandem Mass Spectrometry and nano-LC/MSE). All the distinct differentially expressed proteins (ctrl vs. treated) were examined for their known biological function and grouped into the respective functional categories. Some of them were also validated by western blotting on different human NB cell lines and on primary murine neurons, characterized by different genetic and phenotypic backgrounds. A more comprehensive analysis of the proteomic results was performed by a bioinformatic approach. 
Applying a knowledge-based database approach, we drew functional networks including the identified proteins and found that several of them are linked to beta-estradiol, known for its neuroprotective properties. To confirm the central role played by estradiol, we treated our NB cell model with beta-estradiol before the exposure to Epoxomicin. The results showed apoptosis reduction and cell cycle resumption, associated with a strong reduction of the ubiquitinated inclusions and with autophagy induction. These data suggest a protective role played directly by beta-estradiol in the removal of protein aggregates. Further investigation will be necessary to define the in vivo mechanism by which the identified proteins are involved in the response to Epoxomicin.
APA, Harvard, Vancouver, ISO, and other styles
10

RAMAZZOTTI, DANIELE. "A Model of Selective Advantage for the Efficient Inference of Cancer Clonal Evolution." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/100453.

Full text of the source
Abstract:
Recently, there has been a resurgence of interest in rigorous and scalable algorithms for the efficient inference of cancer progression from genomic patient data. The motivations are manifold: (i) rapidly growing NGS and single-cell data from cancer patients, (ii) a long-felt need for novel Data Science and Machine Learning algorithms well-suited to inferring models of cancer progression, and finally, (iii) a desire to understand the temporal and heterogeneous structure of tumors so as to tame their natural progression through the most efficacious therapeutic intervention. This thesis presents a multi-disciplinary effort to algorithmically and efficiently model tumor progression involving the successive accumulation of genetic alterations, each resulting in populations manifesting a novel cancer phenotype. The framework presented in this work, along with the efficient algorithms derived from it, represents a novel and versatile approach for inferring cancer progression, whose accuracy and convergence rates surpass other existing techniques. The approach derives its power from many insights from, and contributes to, several fields including machine learning algorithms, the theory of causality, and cancer biology. Furthermore, an optimal, versatile and modular pipeline to extract ensemble-level progression models from cross-sectional sequenced cancer genomes is also proposed. The pipeline combines state-of-the-art techniques for sample stratification, driver selection, identification of fitness-equivalent exclusive alterations and progression model inference. Finally, the results are rigorously validated using synthetic data created with realistic generative models, and empirically interpreted in the context of real cancer datasets; in the latter case, biologically significant conclusions revealed by the reconstructed progressions are also highlighted. 
Specifically, the pipeline's ability to reproduce much of the current knowledge on colorectal cancer progression, as well as to suggest novel experimentally verifiable hypotheses, is also demonstrated. Lastly, it is also shown that the proposed framework can be applied, mutatis mutandis, to reconstructing the evolutionary history of cancer clones in single patients, as illustrated by an example with multiple biopsy data from clear cell renal carcinomas.
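The abstract grounds the framework in the theory of causality. One prima facie ingredient that such approaches build on is a Suppes-style probability-raising score over cross-sectional binary alteration data: event i "raises the probability" of event j when P(j | i) > P(j | not i). The sketch below is a toy illustration on synthetic data, not the thesis's actual inference algorithm (which also handles temporal priority, statistical testing and model selection):

```python
import numpy as np

def probability_raising(D):
    """Pairwise probability-raising scores from a binary samples x alterations matrix.

    Entry (i, j) is P(j | i present) - P(j | i absent); positive values flag
    prima facie candidate precedence relations between alterations.
    """
    n = D.shape[1]
    scores = np.zeros((n, n))
    for i in range(n):
        present, absent = D[:, i] == 1, D[:, i] == 0
        if present.sum() == 0 or absent.sum() == 0:
            continue  # score undefined when i is always or never observed
        for j in range(n):
            if i != j:
                scores[i, j] = D[present, j].mean() - D[absent, j].mean()
    return scores

# Toy cross-sectional dataset: alteration 0 tends to accompany alteration 1.
D = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [0, 0, 0],
    [1, 1, 1],
])
S = probability_raising(D)
```

Here S[0, 1] = P(1 | 0) - P(1 | not 0) = 0.75 - 0.0, a strong prima facie signal that alteration 0 precedes alteration 1 in this toy cohort.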
APA, Harvard, Vancouver, ISO, and other styles
11

CHEMLA, ROMEU SANTOS AXEL CLAUDE ANDRE'. "MANIFOLD REPRESENTATIONS OF MUSICAL SIGNALS AND GENERATIVE SPACES." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/700444.

Full text of the source
Abstract:
Among the various research fields in computer music, the synthesis and generation of audio signals embodies the cross-disciplinarity of this domain, jointly nourishing scientific and musical practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving with musical practices and with technological and scientific advances. Moreover, some synthesis processes also allow the reverse process, called analysis, so that synthesis parameters can be partially or totally extracted from sounds, providing an alternative representation of the analysed signals. Furthermore, the recent rise of machine learning algorithms has strongly questioned scientific research, providing powerful data-centred methods that raised several epistemological questions, their effectiveness notwithstanding. In particular, one family of machine learning methods, called generative models, focuses on generating original content using features extracted from the analysed data. In this case, these models have questioned not only previous generation methods, but also the way such algorithms can be integrated into artistic practices. While these methods are progressively being introduced in image processing, their application to audio synthesis is still very marginal. In this work, our goal is to propose a new audio synthesis method based on these new types of generative models, strengthened by recent advances in machine learning. First, we review the existing approaches in generative systems and sound synthesis, focusing on where our work stands with respect to these disciplines and on what can be expected from their combination. 
Next, we study generative models in more detail, and how recent advances can be used to learn complex sound distributions in a way that is flexible and fits the creative flow of the user. We then propose an inference / generation process, mirroring the analysis/synthesis processes widely used in audio signal processing, using latent models based on a continuous high-level space that we use to control the generation. We first study the preliminary results obtained with spectral information extracted from different types of data, which we evaluate qualitatively and quantitatively. Subsequently, we study how to make these methods more suitable for audio signals, addressing three different aspects. First, we propose two different regularization methods for this generative space, specifically developed for audio: a strategy based on signal / symbol translation, and one based on perceptual constraints. Then, we propose several methods to address the temporal aspect of audio signals, based on the extraction of multi-scale representations and on prediction, which allow the obtained generative spaces to also model the dynamic aspect of these signals. Finally, we shift our scientific approach to a viewpoint more inspired by the idea of research and creation. First, we describe the architecture and design of our open-source library, vsacids, developed to allow expert and non-expert musicians to try these new synthesis methods. Then, we propose a first use of our model with the creation of a real-time performance, called ægo, based jointly on our vsacids library and on the use of an exploration agent, learning by reinforcement during the composition. 
Finally, we draw from the presented work some conclusions on the different ways to improve and strengthen the proposed synthesis method, as well as on possible artistic applications.
Among the diverse research fields within computer music, the synthesis and generation of audio signals epitomize the cross-disciplinarity of this domain, jointly nourishing both scientific and artistic practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving both with musical practices and with scientific and technical advances. Moreover, some synthesis processes also naturally handle the reverse process, named analysis, such that synthesis parameters can be partially or totally extracted from actual sounds, providing an alternative representation of the analyzed audio signals. On top of that, the recent rise of machine learning algorithms earnestly questioned the field of scientific research, bringing powerful data-centred methods that raised several epistemological questions amongst researchers, in spite of their efficiency. In particular, a family of machine learning methods, called generative models, is focused on the generation of original content using features extracted from an existing dataset. In that case, such methods question not only previous approaches to generation, but also the way of integrating these methods into existing creative processes. While these new generative frameworks are progressively being introduced in the domain of image generation, the application of such generative techniques to audio synthesis is still marginal. In this work, we aim to propose a new audio analysis-synthesis framework based on these modern generative models, enhanced by recent advances in machine learning. We first review existing approaches, both in sound synthesis and in generative machine learning, and focus on how our work inserts itself into both practices and on what can be expected from their combination. 
Subsequently, we focus more closely on generative models, and on how modern advances in the domain can be exploited to learn complex sound distributions, while remaining sufficiently flexible to be integrated into the creative flow of the user. We then propose an inference / generation process, mirroring the analysis/synthesis paradigms that are natural in the audio processing domain, using latent models based on a continuous higher-level space that we use to control the generation. We first provide preliminary results of our method applied to spectral information extracted from several datasets, and evaluate the obtained results both qualitatively and quantitatively. Subsequently, we study how to make these methods more suitable for learning audio data, tackling successively three different aspects. First, we propose two latent regularization strategies specifically designed for audio, based on signal / symbol translation and on perceptual constraints. Then, we propose different methods to address the inner temporality of musical signals, based on the extraction of multi-scale representations and on prediction, which allow the obtained generative spaces to also model the dynamics of the signal. As a last chapter, we shift our scientific approach to a more research & creation-oriented point of view: first, we describe the architecture and the design of our open-source library, vsacids, aiming to be used by expert and non-expert music makers as an integrated creation tool. Then, we propose a first musical use of our system through the creation of a real-time performance, called ægo, based jointly on our framework vsacids and on an explorative agent using reinforcement learning, trained during the performance. Finally, we draw some conclusions on the different manners to improve and reinforce the proposed generation method, as well as on possible further creative applications.
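The abstract describes inference over a continuous latent space used to control generation, but does not specify the architecture. Purely as a hedged sketch, assuming a variational-autoencoder-style model is a fair stand-in for such a latent inference/generation process, a minimal NumPy forward pass (linear encoder/decoder, reparameterization trick, ELBO, no training) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Map input frames (e.g. spectral frames) to the parameters of a
    # Gaussian posterior over the continuous latent space (inference half).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps so the sampling step stays differentiable.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Map a latent point back to signal space (generation half).
    return z @ W_dec

def elbo(x, x_hat, mu, logvar):
    rec = -np.mean((x - x_hat) ** 2)  # Gaussian log-likelihood up to a constant
    kl = -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return rec - kl

d_in, d_lat = 128, 8  # illustrative dimensions, not those of the thesis
W_mu = rng.standard_normal((d_in, d_lat)) * 0.05
W_logvar = rng.standard_normal((d_in, d_lat)) * 0.05
W_dec = rng.standard_normal((d_lat, d_in)) * 0.05

x = rng.standard_normal((16, d_in))  # a toy batch of 16 "spectral frames"
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, W_dec)
```

The point of the sketch is the round trip: analysis maps a frame into the latent space, and any point of that space, whether inferred or chosen by the user, can be decoded back into signal space.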
Across the various research fields of computational music, the analysis and generation of audio signals are a perfect example of the cross-disciplinarity of this domain, simultaneously nourishing scientific and artistic practices since their creation. Part of computational music since its inception, sound synthesis has inspired numerous musical and scientific approaches, evolving together with the musical practices and the technological and scientific advances of its time. Moreover, some sound synthesis methods also allow the reverse process, called analysis, such that the synthesis parameters of a given generator can be partially or entirely obtained from given sounds, which can thus be considered an alternative representation of the analysed signals. In parallel, the growing interest raised by machine learning algorithms has strongly questioned the scientific world, bringing powerful data analysis methods that raised many epistemological questions among researchers, despite their practical effectiveness. In particular, a family of machine learning methods, called generative models, addresses the generation of original content from features extracted directly from the analysed data. These methods question not only previous approaches, but also the integration of these new methods into existing creative processes. Yet, while these new generative processes are progressively being integrated into image generation, the application of such techniques to audio synthesis remains marginal. In this thesis, we propose a new analysis-synthesis method based on these latest generative models, strengthened by modern advances in machine learning. 
First, we examine the existing approaches in the field of generative systems, how our work can fit into existing sound synthesis practices, and what can be expected from the hybridization of these two approaches. Next, we focus more precisely on how the recent advances achieved in this field can be exploited to learn complex sound distributions, while remaining flexible enough to be integrated into the creative process of the user. We therefore propose an inference / generation process, mirroring the analysis-synthesis paradigms existing in the audio generation domain, based on the use of continuous latent models that can be used to control the generation. To this end, we first study the preliminary results obtained by this method on learning spectral distributions, taken from diverse datasets, adopting both a quantitative and a qualitative approach. We then propose audio-specific improvements of these methods on three distinct aspects. First, we propose two different regularization strategies for the analysis of audio signals: one based on signal / symbol translation, and another based on perceptual constraints. We then turn to the temporal dimension of these audio signals, proposing new methods based on the extraction of multi-scale temporal representations and on an additional prediction task, allowing the obtained generative spaces to model dynamic features. Finally, we move from a scientific approach to a viewpoint more oriented towards research and creation. First, we present our open-source library, vsacids, intended to be used by expert and non-expert creators as an integrated tool. 
Then, we propose a first musical use of our system through the creation of a real-time performance, named ægo, based both on our library and on an exploration agent learned dynamically by reinforcement during the performance. Finally, we draw conclusions from the work accomplished so far, concerning possible improvements and developments of the proposed synthesis method, as well as possible creative applications.
APA, Harvard, Vancouver, ISO, and other styles
12

DI, MARIA Chiara. "Longitudinal mediation analysis with structural and multilevel models: associational and causal perspectives." Doctoral thesis, Università degli Studi di Palermo, 2022. http://hdl.handle.net/10447/533485.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

MASTRODONATO, STEFANO LUIGI. "Geographic representation in location intelligence problems analysis: the geo-element mapping chart." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/1061.

Full text of the source
Abstract:
Purpose - The present research has three major aims: to examine the concept of geographic information in business applications through a critical review of the different definitions and conceptualizations that, from several viewpoints, the literature and applied business sectors present; to identify a logical framework supporting the decomposition of the spatial analysis models used to support business decision making, together with a conceptualization scheme helping the user/analyst to gain insight into the geographic representation inherent in location intelligence applications; and finally, to apply the proposed framework to some common location intelligence problem statements to evaluate its meaningfulness. Design/methodology/approach - This research critically reviews the existing literature on business applications of Geographic Information Systems; it adopts the Beguin-Thisse framework of geographic space to focus on how the representation of geography is included in the spatial analysis techniques and models used to address location intelligence problems. The proposed logical framework is then applied to some analytical business approaches: trade area analysis models, retail location models, location allocation models, and spatial allocation models. Findings - This research has identified a logical framework, named the geo-element mapping chart (GEMC), to map and make practically evident the “geographic dimension” (distance, direction, connectivity, and shape) inside spatial analysis models used to explore some specific business problems. The general conclusion is that traditional spatial analysis approaches simplify their representation of geography, relying principally on the “classic distance dimension”. The GEMC has shown that other dimensions, such as connectivity and shape, can be present in some models, but their practical conceptualization and subsequent implementation in more insightful spatial modelling approaches require multidisciplinary competencies and computational expertise. 
Research implications/limitations - The idea on which the proposed framework (GEMC) is based is that, for business applications, every spatial analysis model can be decomposed into elementary model building blocks, which, in turn, can contain in their definition a “geographic dimension” or represent an element of the geographic space upon which the model conceptually works. The GEMC has been applied only to some case studies; its implementation therefore needs to be extended to other modelling contexts, such as spatial statistics and spatial econometrics, to provide more general considerations and conclusions. Practical implications - Understanding the use and the value of geography and geographic information in business decision making, i.e. the GEMC's major purpose, can support further development of specific GIS-based support tools and related spatial analysis techniques. A framework to decompose models and make evident the representation of the geographic elements and dimensions inherent in a problem can support a more useful management of spatial analytical models, helping a potential user to build new location intelligence models by reusing existing modelling approaches together with their “geographical meaning”, and facilitating more intelligent model selection in a complex problem-solving environment (such as Knowledge Based Spatial Decision Support Systems and Knowledge Based Planning Support Systems). In other words, generalizing the GEMC application to other spatial analysis approaches used to model different location intelligence problems could help to build a kind of “library” (model library) of the different approaches used to model the several geographic components, inherent in business problems, that have in the spatial dimension an important variable of their definition and of their effective solution. 
Originality/value - This research organizes and proposes a framework integrating the different definitions related to the use of geographic information and Geographic Information Systems in the business sector. It attempts to formalize and test, in some specific contexts, a logical approach to evaluate the geographic representation in spatial analysis models used to support decision-making processes. The GEMC is intended to be a flexible approach to highlight where geography comes into play during spatial model formulation. The dissertation offers an original applied examination of some issues that have an impact on many aspects of location intelligence applications. By adopting the notion of the GEMC, this research provides a detailed analysis of some methodologies used to model specific spatial business problems. The author is not aware of this logical approach having been applied elsewhere in research or application.
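The abstract does not specify how a GEMC is represented operationally. As a purely hypothetical sketch (the model and building-block names below are invented for illustration, not taken from the thesis), the chart can be thought of as a mapping from each model's building blocks to the geographic dimensions (distance, direction, connectivity, shape) they actually use:

```python
# Hypothetical geo-element mapping chart (GEMC): each spatial analysis model is
# decomposed into elementary building blocks, and each block is tagged with the
# geographic dimensions it uses. All names here are illustrative assumptions.
GEMC = {
    "huff_trade_area": {
        "attraction_term": {"distance"},
        "store_choice_probability": {"distance"},
    },
    "location_allocation": {
        "demand_assignment": {"distance", "connectivity"},
        "facility_siting": {"distance", "shape"},
    },
}

def dimensions_used(model):
    """Union of the geographic dimensions across a model's building blocks."""
    return set().union(*GEMC[model].values())
```

Queried this way, the chart makes the paper's point concrete: a classic trade-area model turns out to rely on distance alone, while richer models expose connectivity or shape in specific blocks.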
APA, Harvard, Vancouver, ISO, and other styles
14

MORADI, MONA. "Development of lumped parameters models for aerostatic gas bearings." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2733954.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Favaretto, Chiara. "Development of a model for the assessment of Coastal Flooding Vulnerability: an application to the Venetian littoral." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3424876.

Full text of the source
Abstract:
In recent years, marine flooding and its impacts have become a question of growing interest in the scientific community as well as for managing authorities, since coastal areas are the most heavily populated and developed land zones in the world. Under climate change, sea levels are rising and storm surge intensity is possibly increasing as well. It is therefore expected that the occurrence probability of extreme coastal flooding events will increase. This major hazard requires urgent adaptations in order to increase the resistance and resilience of an area to coastal floods. The motivation of this research arises from the practical need, highlighted by local managers of the Veneto region, for rapid (possibly GIS-integrated) tools to simulate the whole complexity of the problem of mapping the risk of coastal flooding by wave overtopping in urban areas at large scale. The aim of this thesis is to develop a methodology for defining flood risk maps by analysing different scenarios at different time and spatial scales, combining both marine forcing and flood propagation in the hinterland. After a thorough theoretical study and an accurate bibliographic review, a numerical flood propagation model was implemented. In order to exploit GPU acceleration, i) the Shallow Water Equations (SWEs) are simplified by linearising bottom friction and neglecting advection, and ii) an appropriate vectorization method is adopted. The numerical model for coastal flooding propagation was tested against four well-known benchmarks (two analytical solutions of the SWEs and two experimental tests) and applied to a real case of coastal flooding that occurred at Caorle (VE) in December 2008. The methodology was finally applied to the coast of the Veneto Region, thanks to an extensive geomorphological and hydraulic knowledge of the area (Ruol et al. 2016, 2018). 
Combining i) a bivariate statistical analysis of marine forcing (waves and sea levels), ii) a model of wave transformation from offshore to onshore and iii) a reliability analysis, coastal flooding hazard maps were produced for three stretches of the Veneto littoral: Valle Vecchia, Caorle and Cavallino-Treporti.
Coastal flooding is a highly topical issue that in recent years has attracted strong attention from both the scientific community and from land administrators and managers. Mean sea level rise due to climate change and the increased frequency of extreme storms point to a higher probability of marine ingression events along the coast. Growing urbanization and the ever-higher share of people living on the coast increase the value exposed to coastal flooding, which must therefore be studied in depth to mitigate the risk of economic losses, of damage to the artistic/cultural heritage and to the environment, and to avert danger to the people living in these territories. The motivation for this research arose from the need, expressed by the bodies managing and planning the Veneto coast, to draw up flood risk maps that include among the causes of flooding not only river flooding but also flooding of coastal origin, and thus to have a rapid, scientifically based tool for responding to the Floods Directive (2007/60/EC) in a unified and homogeneous way for the whole littoral. To this end, a methodology was devised to define flood risk maps through the analysis of different scenarios at different temporal and spatial scales. The first step was to implement, after a detailed theoretical study and an accurate literature review, a numerical model that solves the equations of motion (i.e., the shallow water equations) in simplified form to simulate the propagation of flooding inland.
Two main simplifications were applied to the equations (in particular to the momentum conservation equation): i) the advective terms were neglected, since they are of little importance in the phenomenon analyzed; ii) the friction term, fundamental for describing the propagation, was linearized. Several numerical techniques were implemented to guarantee the positivity and stability of the solution, thereby avoiding spurious oscillations. The simplified equations are well suited to parallel computation; the proposed model therefore uses algorithms designed for GPUs, capable of analyzing large maps in reduced computation times and of working directly at the pixel scale (using high-resolution Digital Elevation Models, DEMs) without the need to create a mesh. In the present study an Nvidia Tesla K80 GPU with 4992 cores and 12 GB of memory was used, achieving computation times, for very large domains, equal to 3% of those required by a conventional CPU. The numerical flooding model was tested against four well-known benchmarks from the literature (two analytical solutions of the shallow water equations and two laboratory experiments). It was also applied to a real case of coastal flooding that occurred in Caorle (VE) in December 2008, comparing the results with a map of flooded areas reconstructed from a video recorded during the extreme event. The model was in good agreement with the analytical solutions, the laboratory measurements and the available information. The methodology was finally applied to the coast of the Veneto Region, taking advantage of the extensive geomorphological and hydraulic knowledge of the territory gained from in-depth research carried out on the most recent data and measurements available for the coastal zone (Ruol et al. 2016, 2018).
Starting from the wave height and sea level data measured at the CNR "Acqua Alta" tower, a bivariate statistical analysis was carried out to evaluate the exceedance probability associated with pairs of wave height and sea level. The final objective, i.e., the production of flooding maps, relies on a level II reliability analysis (FORM). The results are therefore the exceedance probabilities of a given water level for each pixel of the available DEM. This translates into a result of great scientific and practical interest: coastal flooding hazard maps over time horizons of 1 and 10 years. Three stretches of the Veneto coast (in the province of Venice), between 4 and 15 km long, were analyzed: the Valle Vecchia, Caorle and Cavallino littorals. The production of these maps, and the resulting identification of the most critical zones, is intended to provide valid support for the planning and design of coastal protection works against the risk of marine ingression along the littoral.
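The simplified scheme described above (advective terms dropped, friction linearized, positivity enforced) can be sketched as a minimal explicit update on a DEM grid. This is an illustrative sketch only: the discretization details, interface-depth choice and friction constant are assumptions, not the thesis code.

```python
import numpy as np

def flood_step(h, z, dt, dx, g=9.81, k_fric=0.5):
    """One explicit step of a simplified shallow-water flood model on a DEM:
    advective terms are neglected and friction enters as a linear damping
    factor. Hypothetical sketch, not the GPU implementation of the thesis."""
    eta = z + h                                   # free-surface elevation
    hx = np.maximum(h[:, :-1], h[:, 1:])          # depth at x-interfaces
    hy = np.maximum(h[:-1, :], h[1:, :])          # depth at y-interfaces
    qx = -g * hx * np.diff(eta, axis=1) / dx / (1.0 + k_fric)
    qy = -g * hy * np.diff(eta, axis=0) / dx / (1.0 + k_fric)
    dh = np.zeros_like(h)
    dh[:, :-1] -= qx * dt / dx                    # mass leaves the left cell
    dh[:, 1:]  += qx * dt / dx                    # ... and enters its neighbour
    dh[:-1, :] -= qy * dt / dx
    dh[1:, :]  += qy * dt / dx
    return np.maximum(h + dh, 0.0)                # enforce positive depths
```

With a small enough time step the update conserves volume and lets a mound of water spread symmetrically over a flat DEM, pixel by pixel, which is the behaviour the GPU model exploits at scale.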
Styles APA, Harvard, Vancouver, ISO, etc.
16

SIVORI, DANIELE. "Ambient vibration tools supporting the model-based seismic assessment of existing buildings." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1045713.

Full text source
Abstract:
The technological advancements of the last decades are making dynamic monitoring an efficient and widespread resource for investigating the safety and health of engineering structures. In the wake of these developments, this thesis proposes methodological tools supporting the seismic assessment of existing buildings through the use of ambient vibration tests. In this context, the literature highlights considerable room to broaden the ongoing research, especially regarding masonry buildings. Recent earthquakes have once again highlighted the significant vulnerability of this structural typology, an important part of our built heritage, underscoring the importance of risk mitigation strategies at the territorial scale. The thesis builds upon a simplified methodology recently proposed in the literature, conceived to assess the post-seismic serviceability of strategic buildings based on their operational modal parameters. The original contributions of the work pursue the theoretical and numerical validation of its basic simplifying assumptions, in structural modelling (such as in-plane rigid behaving floor diaphragms) and in seismic analysis (related to the nonlinear fundamental frequency variations induced by earthquakes). These strategies are commonly employed in the seismic assessment of existing buildings, but require further development for masonry buildings. The novel proposal of the thesis takes advantage of ambient vibration data to establish direct and inverse mechanical problems in the frequency domain targeted at, first, qualitatively distinguishing between rigid and nonrigid behaving diaphragms and, second, quantitatively identifying their in-plane shear stiffness, a mechanical feature playing a primary role in the seismic behaviour of masonry buildings. The application of these tools to real case studies points out their relevance in the updating and validation of structural models for seismic assessment purposes. 
In the light of these achievements, a model-based computational framework is proposed to develop frequency decay-damage control charts for masonry buildings, which exploit ambient vibration measurements for quick damage evaluations in post-earthquake scenarios. The results of the simulations, finally, highlight the generally conservative nature of ambient vibration-based simplified methodologies, confirming their suitability for the serviceability assessment of existing masonry buildings.
Styles APA, Harvard, Vancouver, ISO, etc.
17

DA, SILVA PEREIRA DANIEL FILIPE. "Qualitative modelling of ecological systems: Extending calculation procedures and applications." Doctoral thesis, Università degli studi di Ferrara, 2020. http://hdl.handle.net/11392/2487971.

Full text source
Abstract:
The aim of this Ph.D. was to contribute to the discipline of ecosystem networks, in particular to loop analysis, by improving on the current algorithm implementations, with particular emphasis on developing an approach that couples a system's quantitative information to the analytical procedures of loop analysis, and through it explores the mechanisms behind a system's responsiveness to perturbations, that is, the importance of the variables, of the structure of linkages between them, and of the intensity of those linkages. In this thesis, after a presentation of loop analysis and of its main drawback, the inherent lack of associated link intensities and the repercussions this has on the system's responsiveness, three chapters follow. In Chapter 3, the LevinsAnalysis R package is presented. The improved code and its applications are explained and demonstrated by applying the package functions to a case study, the Savannah Fires model (Bodini & Clerici, 2016). This case was specifically selected to demonstrate the potential of the package and its novel approach to identifying the importance of linkage strength and to path analysis. In Chapter 4, I explore the Caspian Sea network prior to the Mnemiopsis leidyi invasion, with the aim of investigating the mechanisms behind the changes observed in multiple species and their relative importance: the role that different species, the strength of interaction of the links, and the paths connecting them might have played in the system's response to the different pressures it suffered. This analysis points to the importance of both kilkas and bony fish in the system's response to perturbations such as overfishing. Phytoplankton also emerges as potentially playing an important role in the system; in particular, a possible negative input on this variable seems important in describing the changes observed in the system. 
This chapter also shows how the strength of the interplay between variables, and hence the strength of the pathways connecting the system, plays a central role in the Caspian Sea system and in its response to press perturbations. In Chapter 5, the viability and potential use of loop analysis is discussed for the study of systems whose variables span the social and the ecological domains: species populations, predators and prey, but also governmental organizations, human dynamics and social mechanisms.
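Loop analysis turns the sign structure of a community matrix into qualitative predictions of how each variable responds to a sustained (press) perturbation: the prediction table is proportional to the negative inverse of the community matrix. A minimal sketch on an invented resource-consumer-predator chain (the matrix entries are illustrative, not the Caspian Sea model):

```python
import numpy as np

# Hypothetical sign-structured community matrix for a three-level chain
# (resource -> consumer -> predator); entries are illustrative only.
A = np.array([
    [-1.0, -1.0,  0.0],   # resource: self-damped, consumed by the consumer
    [ 1.0,  0.0, -1.0],   # consumer: feeds on the resource, eaten by predator
    [ 0.0,  1.0,  0.0],   # predator: feeds on the consumer
])

# At equilibrium, the response to a sustained positive input on variable j
# is proportional to column j of -inv(A) (the loop-analysis prediction table).
response = -np.linalg.inv(A)
predictions = np.sign(np.round(response, 12))   # keep only qualitative signs
```

Boosting the predator (third column) predicts more resource, fewer consumers and more predators, the classic trophic cascade that the qualitative algorithm recovers from signs alone.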
Styles APA, Harvard, Vancouver, ISO, etc.
18

Cairo, V. "MOTIVAZIONI, VALUTAZIONE E PROSPETTIVE NELLA PARTECIPAZIONE DEGLI AGRICOLTORI ALLE MISURE AGRO-AMBIENTALI:ANALISI QUALI-QUANTITATIVA SU UN CAMPIONE DI AZIENDE LOMBARDE." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/341976.

Full text source
Abstract:
Agro-environmental measures (AEMs) are European Union policy instruments that pay farmers for voluntary environmental commitments and for the protection of the European countryside. The first AEMs were introduced by Reg. 2078/1992, thanks to the MacSharry Reform. At that time they were "accompanying measures", used to sustain rural income after decoupling and the abolition of internal price support. In the following programming period it became mandatory for every Member State to include agro-environmental measures in its Rural Development Programme, and they became one of the most important EU instruments for rural areas. Investigating the literature on AEMs, we find that the determinants of farmers' participation are to be sought not only in farm structure and farmers' characteristics, but also in personal attitudes, following Ajzen's Theory of Planned Behavior (1991). We collected 227 questionnaires from farmers participating in agro-environmental measures in Lombardy during the last programming period, in order to evaluate the respondents' perceptions of the policy and to explore the motivations that drive farmers' participation, evaluating both farm structural factors and farmers' attitudes. The study is composed of two main parts: one focused on constructing the identikit of the "standard participant" through a Likert-scale survey and a qualitative analysis, and the other focused on modelling the factors affecting the subscription of agro-environmental contracts. In the first part, farmers answer questions concerning their perceptions of the role of conventional and environmentally friendly agriculture, the impact of AEMs on their daily practices, and the economic aspects associated with them. They identify the reasons that push them to participate, the functions of the farm, and the future they imagine for their business. 
Through a classification of farmers based on personal and farm characteristics, we subdivided the sample and tried to understand how these parameters influence the answers, so as to typify the AEM participant. In the second part we implemented a logit model to answer the question "which are the determinants of participation in agro-environmental measures in the next programming period?", matching farm characteristics and farmers' personal attitudes. Farmers choose to participate in AEMs for environmental reasons and to add value to their own production on the market. Most of them are interested in increasing their income through the measures. They are strongly aware of agriculture's role as environmental manager and producer of public goods, but they are not satisfied with the recognition given by the decision-maker. In particular, farmers criticize the Administration for its procedures, bureaucracy and inspections, but they nonetheless want to continue participating in AEMs. The factors affecting participation are linked to farm characteristics, such as UAA or organic-farming membership, and to farmers' perceptions of issues such as the stiffness of controls and satisfaction with the environmental performance of the measures.
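A logit specification of this kind can be illustrated with a small simulation. The covariates (UAA and an organic-farming dummy), the true coefficients and the Newton-Raphson fit below are invented for the sketch; they are not the thesis estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated participation data: probability of joining an AEM modelled as a
# logit of farm size (UAA, ha) and an organic-farming dummy (hypothetical).
n = 500
uaa = rng.uniform(5, 100, n)                     # utilized agricultural area
organic = rng.integers(0, 2, n).astype(float)
true_logit = -2.0 + 0.03 * uaa + 1.2 * organic
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Maximum-likelihood fit of the logit model by Newton-Raphson
X = np.column_stack([np.ones(n), uaa, organic])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))              # fitted probabilities
    grad = X.T @ (y - p)                         # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X    # observed information
    beta += np.linalg.solve(hess, grad)
```

The recovered coefficients have the expected signs: larger and organic farms are more likely to participate, mirroring the role the abstract attributes to UAA and organic membership.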
Styles APA, Harvard, Vancouver, ISO, etc.
19

PASTOR, ELIANA. "Pattern-based algorithms for Explainable AI." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2942116.

Full text source
Styles APA, Harvard, Vancouver, ISO, etc.
20

AUGELLO, RICCARDO. "Advanced FEs for the micropolar and geometrical nonlinear analyses of composite structures." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2872330.

Full text source
Styles APA, Harvard, Vancouver, ISO, etc.
21

AREZZO, DAVIDE. "An innovative framework for Vibration Based Structural Health Monitoring of buildings through Artificial Intelligence approaches ​." Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/299822.

Full text source
Abstract:
Structural health monitoring encompasses the processes aimed at assessing the safety of a structure. These processes found their first application in aerospace and mechanical engineering, to assess the performance and occurrence of damage in mechanical components of vehicles and rotating industrial machinery. Over time, the need to assess the health of structures has led to the use of these techniques in civil engineering as well, in particular vibration-based monitoring through the application of Operational Modal Analysis (OMA) techniques. These techniques are well established, rest on solid theoretical foundations, and are implemented in numerous structural health monitoring frameworks. However, defining and implementing an effective dynamic monitoring system capable of detecting damage requires a high degree of multidisciplinarity and the contribution of specialists from different fields: mechanical measurements, computer science, electronic engineering, dynamic identification, structural engineering, and data science. During the PhD activities, a framework for a Vibration-Based Structural Health Monitoring (VB-SHM) system was developed in all its parts, aiming at replicability of the system and at its effectiveness in correctly tracking the health condition of the structure over time. Replicability is crucial to promote the widest possible adoption of this kind of monitoring. The framework was developed starting from the results of three main case studies monitored during the PhD activities. The case study of the Santa Maria in Via Church in Camerino deals with the problems of dynamic identification, model updating and optimal sensor placement. Due to the complexity of the finite element model, model updating was carried out with the aid of the Particle Swarm Optimization algorithm. Thereafter, the monitoring results of a reinforced-concrete school building in Camerino, monitored during the 2016 seismic sequence, are presented. Throughout the monitoring period, the response of the building to several low- to medium-intensity earthquakes was recorded. The building, despite the absence of damage, showed a time-varying dynamic behaviour that made it difficult to track its frequencies during the seismic response. By applying a linearisation procedure, frequencies could be tracked even during strong motions. Finally, the monitoring results of the Engineering Tower of the Università Politecnica delle Marche are reported. The Tower has been monitored since 2017 and, despite some interruptions, allowed the observation of a marked dependence of its eigenfrequencies on environmental parameters, especially temperature and wind. These effects were effectively removed through the implementation of an artificial neural network.
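The environmental-cleansing step can be sketched with a toy single-hidden-layer network trained by plain gradient descent: predict the natural frequency from temperature, then keep the residual as the environment-free signal. The temperature-frequency relation, network size and learning rate below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a natural frequency that drifts with temperature,
# mimicking the environmental dependence observed on the monitored tower.
temp = rng.uniform(0.0, 30.0, 400)
freq = 2.00 - 0.005 * temp + rng.normal(0.0, 0.002, 400)

# Minimal one-hidden-layer network (a stand-in for the thesis ANN), trained
# with full-batch gradient descent on standardized data.
x = (temp - temp.mean()) / temp.std()
t = (freq - freq.mean()) / freq.std()
W1, b1 = rng.normal(0, 0.5, 8), np.zeros(8)
w2, b2 = rng.normal(0, 0.5, 8), 0.0
lr = 0.05
for _ in range(4000):
    h = np.tanh(np.outer(x, W1) + b1)            # hidden activations (n, 8)
    pred = h @ w2 + b2
    err = pred - t
    gw2 = h.T @ err / len(x)                     # output-layer gradients
    gb2 = err.mean()
    gh = np.outer(err, w2) * (1 - h**2)          # backprop through tanh
    gW1 = gh.T @ x / len(x)
    gb1 = gh.mean(axis=0)
    w2 -= lr * gw2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# "Cleansed" frequency: residual after removing the predicted thermal effect
cleansed = t - (np.tanh(np.outer(x, W1) + b1) @ w2 + b2)
```

After training, the residual is far less correlated with temperature than the raw frequency, which is what makes it usable for damage-sensitive tracking.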
Styles APA, Harvard, Vancouver, ISO, etc.
22

GIOVANNELLI, ALESSANDRO. "Nonlinear forecasting using a large number of predictors." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2010. http://hdl.handle.net/2108/1333.

Full text source
Abstract:
This dissertation introduces a nonlinear model for forecasting macroeconomic time series using a large number of predictors, namely the Feedforward Neural Network - Dynamic Factor Model (FNN-DF). The technique used to summarize the predictors in a small number of factors is the Generalized Dynamic Factor Model (GDFM), while the method used to capture nonlinearity is an artificial neural network, specifically a feedforward neural network. Commonly in the GDFM literature, forecasts are made using linear models. However, linear techniques are often misspecified, and the resulting forecasts provide only a poor approximation to the best possible forecast. In an effort to address this issue, we propose FNN-DF. To determine the practical usefulness of the model, we conducted several pseudo forecasting exercises on 8 series of the United States economy, grouped into real and nominal categories. Forecasts were constructed at 1-, 3-, 6-, 9- and 12-month horizons for monthly U.S. economic variables using 131 predictors. The empirical study shows that FNN-DF predicts the variables under study well, especially in the period before the start of the "Great Moderation", namely 1984. After 1984, FNN-DF has the same forecasting accuracy as the benchmark.
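The two-stage idea (condense a large panel into a few common factors, then forecast a target series from the lagged factors) can be sketched with static principal components as a simplified stand-in for the GDFM. The nonlinear neural stage is replaced by a linear map for brevity, and all sizes and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy panel: N = 50 monthly series driven by r = 2 common AR(1) factors.
T, N, r = 200, 50, 2
F = np.zeros((T, r))
for t in range(1, T):
    F[t] = 0.8 * F[t - 1] + rng.normal(0, 1, r)      # persistent factors
loadings = rng.normal(0, 1, (N, r))
X = F @ loadings.T + rng.normal(0, 0.5, (T, N))      # observed panel

# Stage 1: estimate factors as the first r principal components of the panel
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :r] * s[:r]                             # estimated factor scores

# Stage 2: one-step-ahead forecast of series 0 from the lagged factors
Z, y = F_hat[:-1], Xc[1:, 0]
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_hat = Z @ coef
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Because the factors are persistent, the lagged estimated factors carry real predictive content for every series in the panel; the FNN-DF model replaces the linear second stage with a feedforward network.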
Styles APA, Harvard, Vancouver, ISO, etc.
23

Benatti, Serena. "Study and preparation of space missions for Asteroseismology." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3423219.

Full text source
Abstract:
The PhD project presented in this thesis aims to exploit the great potential of asteroseismology combined with the high-precision photometry of present and future space satellites. The ESA PLATO (PLAnetary Transits and Oscillations of stars) space mission (Catala et al. 2008) has been proposed as a next-generation planet finder, whose particular strength is the characterization of the host stars through asteroseismic analysis. The present work includes the feasibility study of PLATO, with particular attention to the analysis of simulated images, in order to evaluate the photometric quality of the optical design. Procedures developed to perform seismic analysis then allow us to measure asteroseismic observables which provide valuable information about stellar structure. Finally, we were able to constrain fundamental parameters of stars through the computation of theoretical stellar models supported by space-based observations with the NASA Kepler satellite (Borucki et al. 2009). In the framework of Kepler and PLATO these results are of great importance, because knowledge of the global stellar parameters is the only way to characterize an extrasolar planet.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Zreik, Rawya. "Analyse statistique des réseaux et applications aux sciences humaines." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E061/document.

Full text source
Abstract:
Over the last two decades, network structure analysis has experienced rapid growth, with applications in many fields such as communication networks, financial transaction networks, gene regulatory networks, disease transmission networks, and mobile telephone networks. Social networks are now commonly used to represent interactions between groups of people; we ourselves, together with our professional colleagues, friends and family, are often part of online networks such as Facebook, Twitter and email. In a network, many factors can exert influence or make analyses easier to understand. Among these, we find two important ones: the time factor and the network context. The former involves the evolution of connections between nodes over time. The network context can be characterized by different types of information, such as text messages (emails, tweets, Facebook posts, etc.) exchanged between nodes, categorical information on the nodes (age, gender, hobbies, status, etc.), interaction frequencies (e.g., the number of emails sent or comments posted), and so on. Taking these factors into consideration can lead to the capture of increasingly complex and hidden information from the data. The aim of this thesis is to define new models for graphs which take the two factors mentioned above into consideration, in order to develop the analysis of network structure and allow the extraction of hidden information from the data. These models aim at clustering the vertices of a network depending on their connection profiles and on network structures that are either static or dynamically evolving. The starting point of this work is the stochastic block model, or SBM. This is a mixture model for graphs which was originally developed in the social sciences. It assumes that the vertices of a network are spread over different classes, so that the probability of an edge between two vertices depends only on the classes they belong to.
Styles APA, Harvard, Vancouver, ISO, etc.
25

RADICIONI, Tommaso. "All the ties that bind. A socio-semantic network analysis of Twitter political discussions." Doctoral thesis, Scuola Normale Superiore, 2021. http://hdl.handle.net/11384/109224.

Full text source
Abstract:
Social media play a crucial role in what contemporary sociological reflections define as a ‘hybrid media system’. Online spaces created by social media platforms resemble global public squares hosting large-scale social networks populated by citizens, political leaders, parties and organizations, journalists, activists and institutions that establish direct interactions and exchange contents in a disintermediated fashion. In the last decade, a growing number of studies by researchers from different disciplines has approached the study of the manifold facets of citizen participation in online political spaces. In most cases, these studies have focused on the investigation of direct relationships amongst political actors. Conversely, relatively less attention has been paid to the study of the contents that circulate during online discussions and how their diffusion contributes to building political identities. Even more rarely has the study of social media contents been investigated in connection with social interactions amongst online users. To fill this gap, my thesis proposes a methodological procedure consisting of a network-based, data-driven approach both to infer communities of users with a similar communication behavior and to extract the most prominent contents discussed within those communities. More specifically, my work focuses on Twitter, a social media platform that is widely used during political debates. Groups of users with a similar retweeting behavior - hereafter referred to as discursive communities - are identified starting from the bipartite network of Twitter verified users retweeted by non-verified users. Once the discursive communities are obtained, the corresponding semantic networks are identified by considering the co-occurrences of the hashtags present in the tweets sent by their members.
The identification of discursive communities and the study of the related semantic networks represent the starting point for exploring in more detail two specific conversations that took place in the Italian Twittersphere: the former occurred during the electoral campaign before the 2018 Italian general elections and in the two weeks after Election day; the latter centered on the issue of migration during the period May-November 2019. Regarding the social analysis, the main result of my work is the identification of a behavior-driven picture of discursive communities induced by the retweeting activity of Twitter users, rather than determined by prior information on their political affiliation. Although these communities do not necessarily match the political orientation of their users, they are closely related to the evolution of the Italian political arena. As for the semantic analysis, this work sheds light on the symbolic dimension of partisan dynamics. Different discursive communities are, in fact, characterized by peculiar conversational dynamics at both the daily and the monthly time-scale. From a purely methodological standpoint, semantic networks have been analyzed by employing three (increasingly restrictive) benchmarks. The k-shell decomposition of both filtered and non-filtered semantic networks reveals the presence of a core-periphery structure, providing information on the most debated topics within each discursive community and characterizing the communication strategy of the corresponding political coalition.
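The k-shell decomposition used to expose core-periphery structure can be illustrated on a toy hashtag co-occurrence network (the tweets, tags, and plain-Python peeling routine below are a sketch, not the author's pipeline):

```python
import itertools

# Toy corpus: each tweet is a set of hashtags; hashtags co-occurring in the
# same tweet are linked (tweets and tags are invented for illustration).
tweets = [
    {"#elections", "#vote", "#debate"},
    {"#elections", "#vote", "#polls"},
    {"#elections", "#debate", "#polls"},
    {"#vote", "#debate", "#polls"},
    {"#weather", "#rain"},
]

adj = {}
for tags in tweets:
    for tag in tags:
        adj.setdefault(tag, set())
    for a, b in itertools.combinations(tags, 2):
        adj[a].add(b)
        adj[b].add(a)

def core_numbers(adj):
    """Textbook k-core peeling: a node's core number is the largest k
    such that it survives the removal of all nodes of degree < k."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    core, k = {}, 0
    while adj:
        k += 1
        while True:
            weak = [v for v, nbrs in adj.items() if len(nbrs) < k]
            if not weak:
                break
            for v in weak:
                core[v] = k - 1
                for u in adj.pop(v):
                    if u in adj:
                        adj[u].discard(v)
    return core

shells = core_numbers(adj)
kmax = max(shells.values())
innermost = {v for v, k in shells.items() if k == kmax}
print(kmax, innermost)
```

Here the four densely co-occurring political hashtags form the innermost 3-shell, while the peripheral weather hashtags sit in the 1-shell, mirroring the core-periphery reading of the most debated topics.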
APA, Harvard, Vancouver, ISO, and other styles
26

MAININI, LAURA. "Multidisciplinary and multi-fidelity optimization environment for wing integrated design." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2500000.

Full text source
Abstract:
The Ph.D. program focused on the development of a multidisciplinary integrated environment for the design of a wing that is allowed to undergo large changes in shape during flight, in order to better adapt to the different flight segments. The first phase of the study was dedicated to the investigation of a proper Multidisciplinary Design Optimization (MDO) architecture for the integrated management of the design process, and a multilevel solution was proposed and implemented. This framework involves several disciplinary analysis and optimization loops: in particular, aerodynamic analysis, structural analysis, material optimization, and mission and performance evaluation are the main components considered for the preliminary design of such a “morphing” wing. This stage basically addressed the multidisciplinarity and interdisciplinarity issues. The second phase was dedicated to the investigation of possible techniques for reducing the computational burden that typically characterizes this kind of integrated design process. For this purpose, multi-fidelity analysis techniques involving the use of surrogate models were considered. In particular, attention was focused on the study of a proper methodology to build an approximated model for the estimation of the aerodynamic coefficients used for performance evaluation in the mission optimization stage. The proposed procedure involves a variable-screening phase, a data-fit surrogate model evaluation and assessment phase, and a final, crucial global correction phase applied to the best surrogate model.
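A data-fit surrogate of the kind described can be sketched as follows (an illustrative example only: the "expensive" parabolic drag polar, the sample points, and the use of SciPy's `RBFInterpolator` are assumptions, not the thesis' actual models):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in for an expensive aerodynamic analysis: a simple parabolic drag
# polar CD = CD0 + k * CL^2 (coefficients invented for illustration).
def expensive_cd(alpha_deg):
    cl = 0.1 * alpha_deg
    return 0.02 + 0.05 * cl**2

# Run the costly model at a handful of design points only...
alphas = np.linspace(0.0, 10.0, 6).reshape(-1, 1)   # angle of attack, deg
cds = expensive_cd(alphas.ravel())
surrogate = RBFInterpolator(alphas, cds)            # data-fit surrogate

# ...and query the cheap surrogate anywhere else in the design space.
alpha_query = np.array([[3.7]])
cd_pred = surrogate(alpha_query)[0]
print(cd_pred, expensive_cd(3.7))
```

In a mission-optimization loop, the surrogate replaces the costly solver inside the inner iterations, and a final correction step checks it against the true model at the optimum.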
APA, Harvard, Vancouver, ISO, and other styles
27

RONCALLO, LUCA. "Evolutionary spectral model for thunderstorm outflows and application to the analysis of the dynamic response of structures." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1080956.

Full text source
Abstract:
Thunderstorms are destructive mesoscale phenomena with an extension of a few kilometres and a short duration, potentially dangerous for mid- and low-rise structures. The nonstationary nature of the wind field generated by thunderstorm outflows makes most of the theory and models developed for extra-tropical cyclones unsuitable, and their small extension makes them difficult to detect with a single anemometer. These circumstances prevent the collection of valuable data on which research can be carried out and the development of robust models for rapid engineering calculations shared by the scientific community. Therefore, a unified and reliable analytical model for the assessment of the maximum dynamic response to thunderstorms, coherent with the techniques commonly adopted in wind engineering, is not yet available. In this framework, the thesis introduces an Evolutionary Power Spectral Density (EPSD) model of the wind velocity of thunderstorm outflows, consistent with full-scale records, and studies its application to calculate the alongwind dynamic response of structures and its maximum from an operative perspective. The EPSD model is derived starting from the analysis of 129 full-scale thunderstorm records, assuming the turbulent fluctuations uniformly modulated and the turbulence intensity constant. The reliability of these assumptions is verified on the basis of the available data. Three analytical models for the modulating function of the slowly-varying mean wind velocity are proposed. The models are based on the functions extracted from the records and include parameters with a physical meaning for the thunderstorm outflow. Moreover, the possibility of adopting the classical spectral models of synoptic winds to model the stationary part of the turbulence is verified.
Subsequently, the EPSD model is adopted to calculate the dynamic response of a set of linear elastic point-like SDOF systems with variable fundamental frequency and damping ratio, both accounting for and neglecting the effects of the transient dynamics. In this framework, a closed-form solution of the Evolutionary Frequency Response Function (EFRF) is derived. The mean value of the maximum response is estimated based on an Equivalent Parameter Technique (EPT) from the literature, generalizing Davenport’s gust factor technique. The effects of the Poisson hypothesis are investigated and mitigated by introducing an equivalent expected frequency. The results are validated against those obtained in the time domain starting from the available real thunderstorm records. A sensitivity analysis is then carried out to assess the influence on the maximum dynamic response of the parameters that shape the modulating function of the velocity. A closed-form solution for the equivalent parameters and the gust factor is introduced. The comparison with alternative formulations proposed in the literature demonstrates the improved accuracy of the proposed one. Finally, the formulation is extended to the analysis of slender vertical structures, adopting a vertical profile for the mean wind velocity from the literature and the equivalent wind spectrum technique. Two case studies of vertical slender structures are analysed, and a comparison with synoptic wind loading conditions is outlined, showing that the proposed model constitutes a valid and handy tool for the evaluation of the wind loading exerted on structures by thunderstorm outflows.
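The uniformly modulated assumption behind such an evolutionary spectrum, S(f,t) = |a(t)|² S(f) with constant turbulence intensity, can be sketched numerically; the bell-shaped modulating function, spectrum shape, and all parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Slowly-varying mean wind velocity of a thunderstorm outflow: an invented
# bell-shaped modulating function peaking at 25 m/s.
t = np.linspace(0.0, 600.0, 6001)                  # 10-minute record, s
a = 25.0 * np.exp(-((t - 300.0) / 120.0) ** 2)

# Stationary, unit-variance turbulence by spectral representation: superpose
# harmonics with amplitudes drawn from a target one-sided spectrum S(f).
f = np.linspace(0.005, 1.0, 200)                   # Hz
df = f[1] - f[0]
S = 1.0 / (1.0 + (10.0 * f) ** (5.0 / 3.0))        # illustrative spectrum shape
S /= S.sum() * df                                  # normalize to unit variance
phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
turb = np.sum(np.sqrt(2.0 * S * df)[:, None]
              * np.cos(2.0 * np.pi * f[:, None] * t + phases[:, None]), axis=0)

# Uniformly modulated, nonstationary wind velocity with constant I_u:
I_u = 0.12
v = a * (1.0 + I_u * turb)
print(round(v.max(), 1))
```

The same modulation |a(t)|² applied to the stationary spectrum is what carries the nonstationarity of the record into the frequency-domain response calculation.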
APA, Harvard, Vancouver, ISO, and other styles
28

Chauvet, Pierre. "Elements d'analyse structurale des fai-k a 1 dimension." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0070.

Full text source
Abstract:
The structural information of an IRF-k (intrinsic random function of order k, "FAI-k") defined on a regular one-dimensional grid is contained in its increments of order k+1. We seek to establish the relation between the covariances of the increments (experimental quantities) and the generalized covariance (the model), and to use this relation in both directions. An attempt to express the generalized covariance explicitly in terms of the generalized variogram was unsuccessful.
APA, Harvard, Vancouver, ISO, and other styles
29

LI, GUOHONG. "Variable Kinematic Finite Element Formulations Applied to Multi-layered Structures and Multi-field Problems." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2729361.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
30

Volinsky, Christopher T. "Bayesian model averaging for censored survival models /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8944.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Lingzhu. "Model checking for general parametric regression models." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/654.

Full text source
Abstract:
Model checking for regressions has drawn considerable attention in the last three decades. Compared with global smoothing tests, local smoothing tests, which are more sensitive to high-frequency alternatives, can only detect local alternatives distinct from the null model at a much slower rate when the dimension of the predictor is high. When the number of covariates is large, the nonparametric estimations used in local smoothing tests lack efficiency. The corresponding tests then have trouble maintaining the significance level and detecting the alternatives. To tackle this issue, we propose two methods under a high but fixed dimension framework. Further, we investigate a model checking test under divergent dimension, where the numbers of covariates and unknown parameters diverge with the sample size n. The first proposed test is constructed upon a typical kernel-based local smoothing test using a projection method. By means of projection and integration, the resulting test statistic has a closed form that depends only on the residuals and the distances between the sample points. A merit of the developed test is that the distance is easy to implement compared with kernel estimation, especially when the dimension is high. Moreover, the test inherits some features of local smoothing tests owing to its construction. Although it is eventually similar in spirit to an Integrated Conditional Moment test, it leads to a test with a weight function that helps to collect more information from the samples than the Integrated Conditional Moment test. Simulations and real data analysis justify the power of the test. The second test, which is a synthesis of local and global smoothing tests, aims at solving the slow convergence rate caused by nonparametric estimation in local smoothing tests. A significant feature of this approach is that it allows nonparametric estimation-based tests, under the alternatives, to also share the merits of existing empirical process-based tests.
The proposed hybrid test can detect local alternatives at the fastest possible rate, like the empirical process-based tests, and, simultaneously, retains the sensitivity to high-frequency alternatives of the nonparametric estimation-based ones. This feature is achieved by utilizing an indicative dimension in the field of dimension reduction. As a by-product, we present a systematic study of a residual-related central subspace for model adaptation, showing when alternative models can be indicated and when they cannot. Numerical studies are conducted to verify its application. Since data volumes are nowadays increasing, the numbers of predictors and unknown parameters may diverge as the sample size n goes to infinity. Model checking under divergent dimension, however, is almost uncharted in the literature. In this thesis, an adaptive-to-model test is proposed to handle the divergent dimension, based on the two previously introduced tests. Theoretical results show that, to obtain the asymptotic normality of the parameter estimator, the number of unknown parameters should be of order o(n^(1/3)). Also, as a spinoff, we demonstrate the asymptotic properties of the estimators of the residual-related central subspace and the central mean subspace under different hypotheses.
APA, Harvard, Vancouver, ISO, and other styles
32

FILIPPIS, G. DE. "CALIBRATION OF THE GROUNDWATER FLOW MODEL AND ASSESSMENT OF THE SALTWATER INTRUSION IN A MULTI-LAYERED AQUIFER SYSTEM OF THE IONIAN COASTAL AREA (TARANTO GULF, SOUTHERN ITALY)." Doctoral thesis, Università degli Studi di Milano, 2016. http://hdl.handle.net/2434/362522.

Full text source
Abstract:
In some Mediterranean karst areas, groundwater is often the only available supply of freshwater. Besides the contamination induced by human activities, coastal aquifers often suffer from saltwater intrusion, which can be enhanced both by extensive withdrawals and by climatic changes. Establishing an effective set of regulatory and management measures to ensure the sustainability of coastal aquifers requires deep knowledge of the natural and anthropic stresses involved in groundwater dynamics. In this regard, a prior conceptualization of aquifer systems and a deeper characterization of the balance terms through mathematical modelling are of paramount importance. In the Gulf of Taranto (southern Italy), these issues are particularly pressing, as the multi-layered carbonate aquifer is the only available freshwater resource and supports most human water-related activities. Especially during the last decades, proper management plans and decisions have become compelling, as the national government included Taranto in the list of contaminated sites of national importance, due to the presence of highly polluting activities near the Mar Grande and Mar Piccolo seawater bodies, whose relationship with the underground resources is a matter of concern, as they host important freshwater springs. Furthermore, the Taranto area is particularly sensitive to seawater intrusion, both for its specific hydrostratigraphic configuration and for the presence of highly water-demanding industrial activities. These problems, strictly related to the protection and preservation of groundwater quality and quantity, have triggered several actions. Among them, the Flagship Project RITMARE (la Ricerca Italiana per il Mare - the Italian Research for the Sea) took into account criticalities involving several environmental components within the Mar Piccolo ecosystem, including groundwater.
In this thesis, a full characterization of the multi-layered aquifer system of the whole Province of Taranto is presented, with the purpose of supporting monitoring activities, land-use plans and management decisions. The preliminary outcomes concern the identification of the conceptual model, namely the reconstruction of the hydrostratigraphic structure of the subsurface and the qualitative assessment of the groundwater dynamics. The subsequent development of a numerical model makes it possible to produce a tool for quantifying the hydrogeological balance and simulating the system response to climate- or man-induced changes. Generally speaking, a thorough evaluation of model adequacy and/or accuracy is an important step in the study of environmental systems, due to the uncertainties in hydrodynamic properties and boundary conditions and to the scarcity of good-quality field data. This commonly results in groundwater models being calibrated, and it often leads to the development of many candidate models that differ in the analysed processes, the representation of boundary conditions, the distribution of system characteristics, and the parameter values. In this framework, the calibration of alternative models allowed the identification of the main challenges that limit the reliability of model outcomes and the testing of model adequacy, while proposing a new calibration methodology, which represents the major scientific contribution of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
33

Da, Silva Frédéric. "Méthodologies de réduction de modèles multiphysiques pour la conception et la commande d’une chaîne de traction électrique." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLC022/document.

Full text source
Abstract:
Numerical simulations are widely used during the design phase of a product but also for the validation of an innovative system. For example, during the design of an electric vehicle's powertrain, numerical simulations can be used to select the appropriate electric motor technology or to develop control strategies with respect to decision criteria such as the vehicle's range, but also its cost and performance. As systems become ever more complex, they require increasingly fine simulations in order to better understand the phenomena involved - for example, the study of iron losses in an electric machine. 3D simulations provide very accurate results at the scale of a single component but are still not suitable today for the study of large-scale systems (i.e., with many degrees of freedom, many optimization parameters, and several domains of physics involved). Indeed, the computational cost of 3D simulations grows with the number of degrees of freedom of the model under study. This is why, in recent years, model order reduction techniques have spurred many developments: they guarantee a good compromise between computation time and the accuracy of the results produced by the reduced models. In this study, we are interested in the use of these techniques in an industrial context along two axes: the study of thermal phenomena (in power electronics modules), and the study of electromagnetic phenomena (in electric machines).
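Among model order reduction techniques, proper orthogonal decomposition (POD) via the SVD of a snapshot matrix is a common starting point; the sketch below (an illustration with a synthetic field, not tied to the thesis' thermal or electromagnetic models) compresses a 500-DOF state into 3 modal coordinates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshot matrix: a field sampled at 500 nodes over 40 "time
# steps", built from 3 smooth spatial modes (all invented for illustration).
x = np.linspace(0.0, 1.0, 500)
modes = np.stack([np.sin(np.pi * x), np.sin(2.0 * np.pi * x), x * (1.0 - x)])
snapshots = rng.normal(size=(40, 3)) @ modes       # shape (40, 500)

# POD: the SVD ranks spatial structures by their energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = Vt[:r]                                     # orthonormal reduced basis

# Project a full-order state to r coordinates and reconstruct it.
full_state = snapshots[0]
reduced = basis @ full_state                       # 3 numbers instead of 500
reconstructed = basis.T @ reduced
error = np.linalg.norm(full_state - reconstructed)
print(error)
```

The computational gain comes from solving the governing equations in the r-dimensional reduced coordinates instead of the full set of degrees of freedom.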
APA, Harvard, Vancouver, ISO, and other styles
34

DELLA, MARCA ROSSELLA. "Problemi di controllo in epidemiologia matematica e comportamentale." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1237622.

Full text source
Abstract:
Despite major achievements in eliminating long-established infections (as in the very well-known case of smallpox), recent decades have seen the continual emergence or re-emergence of infectious diseases (last but not least, COVID-19). They are not only threats to global health: the direct and indirect costs generated by human and animal epidemics are responsible for significant economic losses worldwide. Mathematical models of infectious disease spread have played a significant role in infection control. On the one hand, they have made an important contribution to the biological and epidemiological understanding of disease outbreak patterns; on the other hand, they have helped to determine how and when to apply control measures in order to contain epidemics quickly and most effectively. Nonetheless, in order to shape local and global public health policies, it is essential to gain a better and more comprehensive understanding of effective actions to control diseases, by finding ways to employ new layers of complexity. This was the main focus of the research I carried out during my PhD; the products of this research are collected and connected in this thesis. However, because they are out of context, other problems I was interested in have been excluded from this collection: they lie in the fields of autoimmune diseases and landscape ecology. We start with an introductory chapter, which traces the history of epidemiological models, their rationales and their incremental advances. We focus on two critical aspects: i) the qualitative and quantitative assessment of control strategies specific to the problem at hand (via, e.g., optimal control or threshold policies); ii) the incorporation into the model of human behavioral changes in response to disease dynamics. In this framework, our studies are inserted and contextualized. Hereafter, a specific chapter is devoted to each of them.
The techniques used include the construction of appropriate models given by non-linear ordinary differential equations, their qualitative analysis (via, e.g., stability and bifurcation theory), and their parameterization and validation with available data. Numerical tests are performed with advanced simulation methods for dynamical systems. As far as optimal control problems are concerned, the formulation follows the classical approach by Pontryagin, while both direct and indirect optimization methods are adopted for the numerical resolution. In Chapter 1, within a basic Susceptible-Infected-Removed model framework, we address the problem of minimizing simultaneously the epidemic size and the eradication time via optimal vaccination or isolation strategies. A two-patch metapopulation epidemic model, describing the dynamics of Susceptibles and Infected in wildlife diseases, is formulated and analyzed in Chapter 2. Here, two types of localized culling strategies are considered and compared: proactive and reactive. Chapter 3 concerns a model for the transmission of vaccine-preventable childhood diseases, where newborn vaccination follows an imitation game dynamics and is affected by awareness campaigns by the public health system. Vaccination is also incorporated in the model of Chapter 4. Here, it addresses susceptible individuals of any age and depends on the information and rumors circulating about the disease. Further, the vaccine effectiveness is assumed to be partial and to wane over time. The last chapter, Chapter 5, is devoted to the ongoing COVID-19 pandemic. We build an epidemic model with information-dependent contact and quarantine rates. The model is applied to the Italian case and explicitly incorporates the progressive lockdown restrictions.
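The Susceptible-Infected-Removed backbone of Chapter 1 can be sketched as a system of ordinary differential equations (the parameter values and final-size check below are illustrative, not the thesis' calibrated model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical SIR model on a normalized population:
#   S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I
beta, gamma = 0.5, 0.1          # illustrative transmission / removal rates

def sir(t, y):
    S, I, R = y
    new_infections = beta * S * I
    removals = gamma * I
    return [-new_infections, new_infections - removals, removals]

sol = solve_ivp(sir, (0.0, 200.0), [0.99, 0.01, 0.0], rtol=1e-8, atol=1e-10)
S, I, R = sol.y
print(R[-1])                    # epidemic size: final fraction ever infected
```

An optimal control formulation would add a time-dependent vaccination or isolation rate to these equations and minimize a cost functional over it, following Pontryagin's approach.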
APA, Harvard, Vancouver, ISO, and other styles
35

VALLARINO, GIULIA. "A new formulation of ellagic acid and pomegranate peel extract for dietary supplementation in an animal model of multiple sclerosis." Doctoral thesis, Università degli studi di Genova, 2023. https://hdl.handle.net/11567/1105298.

Full text source
Abstract:
My Ph.D. project was dedicated to demonstrating the beneficial effects elicited by the therapeutic administration of a new formulation of ellagic acid (Ellagic Acid microdispersion, EAm) and pomegranate peel extract (Pomegranate peel Extract microdispersion, PEm) in an animal model of multiple sclerosis (EAE mice), with particular attention to its impact on “in vivo” and “in vitro” parameters at the acute stage of the disease, to support its translation to clinical studies in patients suffering from multiple sclerosis. My thesis is composed of two sections: the first focuses on the characterization of the EAE model and the analysis of the health-promoting properties of the formulations in this model; the second investigates a potential therapeutic target of ellagic acid. The study led to two recent publications in Molecules and Antioxidants and was presented in the poster sessions of the national and international congresses reported in the last part of the thesis. The thesis also briefly describes other studies I was involved in during the three-year Ph.D. program.
APA, Harvard, Vancouver, ISO, and other styles
36

GARCIA, DE MIGUEL ALBERTO. "Hierarchical component-wise models for enhanced stress analysis and health monitoring of composites structures." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2729658.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
37

Inzoli, S. "EXPERIMENTAL AND STATISTICAL METHODS TO IMPROVE THE RELIABILITY OF SPECTRAL INDUCED POLARIZATION TO INFER LITHO-TEXTURAL PROPERTIES OF ALLUVIAL SEDIMENTS." Doctoral thesis, Università degli Studi di Milano, 2016. http://hdl.handle.net/2434/360596.

Full text source
Abstract:
The characterization of the shallow subsurface constitutes a challenging issue in several applications of science and engineering. Among other disciplines, hydrogeophysics deals with the use of geophysical methods for the exploration, management, and monitoring of soil and groundwater. One of its main topics is the study of the petrophysical relationships between electrical properties and hydraulic conductivity, mainly through the dependence of such physical parameters on textural properties. The general aim of this work is an investigation of porous materials typical of alluvial environments with the spectral induced polarization (SIP) method. The driving question of the research is the feasibility of using SIP to characterize both the textural assemblage of the sediments and the fluid properties, in the presence of interacting effects related to particle mineralogy, organic matter, sediment fabric, etc. The sample set consists of 19 unconsolidated materials collected in four sites of the Po plain south of Milano (Orio Litta, Senna Lodigiana, and Landriano) and west of Milano (Lozzolo), saturated with seven NaCl-water solutions with electrical resistivity varying from 0.9 Ωm to 315 Ωm. The textural composition of the samples varies between slightly-sandy mud and gravelly sand, and the porosity of the repacked samples between 0.26 and 0.63. The measurements are performed with an experimental system designed and built at the Laboratory of Hydrogeophysics of the Università degli Studi di Milano. The resistivity amplitude and phase spectra are first modelled with single-relaxation models (Cole-Cole and generalized Cole-Cole) in a bounded low-frequency interval.
Besides a traditional optimization based on the root-mean-square error, an original multi-objective optimization approach with separate amplitude and phase errors is tested to obtain a set of optimal solutions and an uncertainty interval for each model parameter, in order to avoid misinterpreting petrophysical relationships with scarcely reliable parameters. Significant relationships are identified between DC resistivity and water resistivity, and between chargeability and mud content. The base-10 logarithm of the relaxation time is inversely correlated with a characteristic diameter of the sample. On the other hand, a Debye-decomposition, multi-relaxation model is applied to identify several polarization processes, characterized by different relaxation times, over the whole frequency interval. In order to retain the whole spectral information also in the search for electrical-textural relationships, a combination of cluster analysis (CA) and principal component analysis (PCA) is adopted. This constitutes a new approach to relating spectral electrical behaviour to litho-textural properties, avoiding the selection of individual parameters or of an individual investigation frequency. CA permits the classification of the samples on the basis of their electrical behaviour, and PCA allows the variability within the database to be interpreted in terms of a series of parameters ordered by importance. A textural characterization (characteristic diameters, gravel and mud contents, uniformity coefficients) is associated with each cluster, based on the characteristics of the corresponding samples. Analogously, a typical range of water resistivity is attributed to each cluster. This association of variability ranges of electrical and sedimentological properties is then used to infer the sediment properties of samples external to the input database, with satisfactory results.
The high flexibility of hierarchical clustering also allows evaluating the differences in the inferred properties according to the number of selected clusters. Finally, some preliminary SIP tests are performed in the field; field and laboratory results are not completely comparable, due to differences in porosity, water content, and scale of investigation. However, some distinctive features of the laboratory spectra are recognized in the corresponding field spectra, thus supporting a future application of the proposed methodology to interpret the resistivity amplitude and phase distribution in the subsurface.
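The single-relaxation fitting step described above can be sketched as follows. This is a minimal, hypothetical example of fitting a Cole-Cole complex-resistivity model to a synthetic spectrum with `scipy`, not the experimental pipeline of the thesis; all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole model."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

def stacked(omega, rho0, m, tau, c):
    # curve_fit works on real vectors: stack amplitude and phase spectra
    z = cole_cole(omega, rho0, m, tau, c)
    return np.concatenate([np.abs(z), np.angle(z)])

# synthetic noiseless spectrum over a bounded low-frequency interval
omega = 2 * np.pi * np.logspace(-2, 2, 40)
true = (100.0, 0.1, 0.05, 0.5)   # rho0 [ohm m], chargeability, tau [s], exponent
data = stacked(omega, *true)

popt, _ = curve_fit(stacked, omega, data, p0=(80.0, 0.2, 0.01, 0.4),
                    bounds=([1, 0, 1e-4, 0.1], [1000, 1, 10, 1]))
```

A multi-objective variant in the spirit of the thesis would keep the amplitude and phase misfits separate instead of stacking them into one residual vector.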
APA, Harvard, Vancouver, ISO, and other styles
38

Páleník, Petr. "Lávka pro pěší přes rychlostní komunikaci." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2013. http://www.nusl.cz/ntk/nusl-226444.

Full text of the source
Abstract:
The aim of this master's thesis is the design of a pedestrian bridge across a highway. The bridge is formed by a slab structure of 6 spans with lengths from 9 to 51 m. The main spans are suspended from a V-shaped pylon. The deck of the span across the highway is assembled from precast segments and a composite deck slab. In the longitudinal direction the deck follows a parabolic arch. The model of the structure was built in the ANSYS software and solved with a non-linear analysis. The design and assessment follow the European standards.
APA, Harvard, Vancouver, ISO, and other styles
39

NARBAEV, TIMUR. "Forecasting cost at completion with growth models and Earned Value Management." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2506248.

Full text of the source
Abstract:
Reliable forecasting of the final cost at completion is one of the vital components of project monitoring. Accuracy and stability of the forecast for an ongoing project are critical criteria that ensure the project's on-budget and timely completion. The purpose of this dissertation is to develop a new Cost Estimate at Completion (CEAC) methodology to assist project managers in forecasting the final cost at completion of ongoing projects. This forecasting methodology interpolates intrinsic characteristics of an S-shaped growth model and combines the Earned Schedule (ES) concepts into its equation to provide more accurate and stable cost estimates. Widely used conventional index-based methods for CEAC have inherent limitations such as reliance on past performance only, unreliable forecasts in the early stages of a project's life, and the lack of forecasting statistics. To achieve its purpose the dissertation carried out five tasks. First, it developed the method's equation based on the integration of four candidate S-shaped models and the earned schedule concepts. Second, the models' equations were tested on past projects to assess their applicability, and the accuracy of the CEACs was compared with those found by the Cost Performance Index (CPI)-based formula. The third task consisted in comparing CEACs found by the statistically valid and most accurate Gompertz model (GM)-based equation against those computed with the CPI-based method at each time point of the projects' life. Then, a stability test was performed to determine whether the method whose performance indices achieve stability earlier provides a more accurate CEAC. Finally, an analysis was conducted to determine the existence of a correlation between schedule progress and CEAC accuracy.
Based on the research results, it was determined that the GM-based method is the only valid model for cost estimates in all three stages and that it provides more accurate estimates than the CPI-based formula does. Further comparative analysis showed that, of the two (GM- and CPI-based) methods, the one whose performance index achieved stability earlier provided more accurate CEACs; finally, the new methodology takes into account the schedule impact as a factor of cost performance in forecasting the CEAC. The developed methodology enhances the forecasting capabilities of the existing Earned Value Management methods by refining the traditional index-based approach through nonlinear regression analysis. The main novelty of the research is that this is a cost-schedule integrated approach which interpolates the characteristics of a sigmoidal growth model with the ES technique to calculate a project's CEAC. Two major contributions are brought to Project Management. First, the dissertation extends the body of knowledge by introducing a methodology which combines in one statistical technique two methods that, so far, have been considered as two separate streams of project management research. Second, this technique advances project management practice, as it is a practical cost-schedule integrated approach that takes into account schedule progress (advance/delay) as a factor of cost behavior in the calculation of CEAC.
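The growth-model idea can be illustrated with a minimal sketch, using invented data rather than anything from the dissertation: fit a Gompertz curve to the cumulative cost observed so far against elapsed schedule fraction; the asymptote of the fitted curve then plays the role of the cost estimate at completion.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, alpha, beta, gamma):
    """S-shaped Gompertz growth curve; alpha is the asymptote."""
    return alpha * np.exp(-beta * np.exp(-gamma * x))

# hypothetical cumulative cost observed over the first 60% of the schedule
x = np.linspace(0.1, 0.6, 6)            # fraction of planned duration elapsed
cost = gompertz(x, 55.0, 4.0, 6.0)      # synthetic "actual cost" observations

popt, _ = curve_fit(gompertz, x, cost, p0=(50.0, 5.0, 5.0), maxfev=10000)
ceac = popt[0]   # asymptote of the fitted curve = cost estimate at completion
```

The actual methodology additionally rescales the time axis with Earned Schedule quantities before fitting, which this sketch omits.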
APA, Harvard, Vancouver, ISO, and other styles
40

Lipkovich, Ilya A. "Bayesian Model Averaging and Variable Selection in Multivariate Ecological Models." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/11045.

Full text of the source
Abstract:
Bayesian Model Averaging (BMA) is a new area in modern applied statistics that provides data analysts with an efficient tool for discovering promising models and obtaining estimates of their posterior probabilities via Markov chain Monte Carlo (MCMC). These probabilities can be further used as weights for model-averaged predictions and estimates of the parameters of interest. As a result, variance components due to model selection are estimated and accounted for, contrary to the practice of conventional data analysis (such as, for example, stepwise model selection). In addition, variable activation probabilities can be obtained for each variable of interest. This dissertation is aimed at connecting BMA and various ramifications of the multivariate technique called Reduced-Rank Regression (RRR). In particular, we are concerned with Canonical Correspondence Analysis (CCA) in ecological applications where the data are represented by a site-by-species abundance matrix with site-specific covariates. Our goal is to incorporate the multivariate techniques, such as Redundancy Analysis and Canonical Correspondence Analysis, into the general machinery of BMA, taking into account such complicating phenomena as outliers and clustering of observations within a single data-analysis strategy. Traditional implementations of model averaging are concerned with selection of variables. We extend the methodology of BMA to selection of subgroups of observations and implement several approaches to cluster and outlier analysis in the context of the multivariate regression model. The proposed algorithm of cluster analysis can accommodate restrictions on the resulting partition of observations when some of them form sub-clusters that have to be preserved when larger clusters are formed.
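The core averaging step can be sketched with the common BIC approximation to posterior model probabilities, a shortcut often used in place of full MCMC (the dissertation's actual machinery is MCMC-based); the BIC and estimate values below are invented.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior probabilities over the candidate models."""
    b = np.asarray(bics, dtype=float)
    delta = b - b.min()              # shift for numerical stability
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def bma_estimate(estimates, bics):
    """Model-averaged point estimate of a parameter of interest."""
    return float(np.dot(bma_weights(bics), estimates))

w = bma_weights([100.0, 102.0, 110.0])          # three candidate models
est = bma_estimate([1.0, 2.0, 3.0], [100.0, 102.0, 110.0])
```

The weights concentrate on the lowest-BIC model but never discard the others outright, which is exactly how BMA accounts for model-selection uncertainty that stepwise selection ignores.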
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Ning. "Price sensitivity to the exponent in the CEV model." Thesis, Uppsala universitet, Analys och tillämpad matematik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180977.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

DAGNES, NICOLE. "3D Human Face Analysis for recognition applications and motion capture." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2790163.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

REGNI, MARCO. "The Role of Soil-Structure Interaction in Interpretation of Vibration Measurements on Continuous Viaducts." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263550.

Full text of the source
Abstract:
The scope of this thesis is to identify the significance of soil-structure interaction (SSI) and site response on the dynamic behaviour of continuous multi-span reinforced concrete viaducts, based on ambient vibration tests (AVTs) and numerical simulations with finite element models. For this purpose, a long bridge located in Central Italy, founded on piles in an eluvial-colluvial soil deposit, was instrumented, and AVTs together with geophysical investigations were performed. Experimental modal properties were evaluated by means of operational modal analysis on recorded data, and the role of SSI in the interpretation of the tests was detected by means of finite element models characterised by different accuracy in addressing the interaction problem. In the SSI models, the local site conditions in correspondence with each bridge pier were considered in the definition of the soil-foundation impedances. Comparison of the experimental results obtained from AVTs on the free field and on the viaduct deck permits the identification of both the predominant period of the site and the fundamental periods of the structure. In addition, comparisons of the results obtained from the different numerical models with the measured dynamic response of the viaduct, in terms of fundamental frequencies and mode shapes, allow the identification of the contribution of different SSI aspects such as pile-soil-pile interaction, the radiation problem, and the pile cap embedment, as well as the variability of the soil stratigraphy along the longitudinal direction of the viaduct. Regarding the transverse behaviour, some tests were performed in correspondence with one pier, measuring accelerations of the foundation cap and the pier bent, to identify the contribution to the transverse modal displacement due to the elastic deflection of the pier and the foundation rocking. In addition, two other viaducts, with characteristics different from the previous one, are presented, to extend the study of SSI.
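The first step of such an output-only identification can be sketched as follows — a minimal, hypothetical example of picking the dominant frequency from an ambient-vibration record via a Welch power spectral density, not the operational modal analysis actually used in the thesis; the signal is synthetic.

```python
import numpy as np
from scipy.signal import welch

# synthetic ambient-vibration record: two modal responses buried in noise
fs = 100.0                                  # sampling frequency [Hz]
t = np.arange(0, 120, 1 / fs)               # 2 minutes of data
rng = np.random.default_rng(1)
acc = (np.sin(2 * np.pi * 1.7 * t)          # hypothetical first mode at 1.7 Hz
       + 0.5 * np.sin(2 * np.pi * 4.3 * t)  # hypothetical second mode at 4.3 Hz
       + 0.3 * rng.standard_normal(t.size)) # measurement noise

f, pxx = welch(acc, fs=fs, nperseg=4096)
peak = f[np.argmax(pxx)]                    # dominant (fundamental) frequency
```

Full operational modal analysis would go further, extracting mode shapes and damping from cross-spectra of many sensors rather than one auto-spectrum.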
APA, Harvard, Vancouver, ISO, and other styles
45

SCOZZESE, FABRIZIO. "AN EFFICIENT PROBABILISTIC FRAMEWORK FOR SEISMIC RISK ANALYSIS OF STRUCTURAL SYSTEMS EQUIPPED WITH LINEAR AND NONLINEAR VISCOUS DAMPERS." Doctoral thesis, Università degli Studi di Camerino, 2018. http://hdl.handle.net/11581/429547.

Full text of the source
Abstract:
Seismic passive protection with supplemental damping devices represents an efficient strategy to produce resilient structural systems with improved seismic performance and notably reduced post-earthquake consequences. Such a strategy indeed offers several advantages with respect to the ordinary seismic design philosophy: structural damage is prevented; the safety of the occupants is ensured and the system remains operational both during and right after the earthquake; no major retrofit interventions are needed but only a post-earthquake inspection (and, if necessary, replacement) of the dissipation devices; and a noticeable reduction of both direct and indirect costs is achieved. However, structural systems equipped with seismic control devices (dampers) may show potentially limited robustness, since an unexpected early disruption of the dampers may lead to a progressive collapse of the actually non-ductile system. Although the most advanced international seismic codes acknowledge this issue and require dampers to have higher safety margins against failure, they only provide simplified approaches to cope with the problem, often consisting of general demand amplification rules which are not tailored to the actual needs of different device typologies and which lead to reliability levels that are not explicitly declared. The research activity carried out within this Thesis stems from the need to fill the gaps still present in the international regulatory framework and to respond to the scarcity of specific probabilistic studies geared to characterizing and understanding the probabilistic seismic response of such systems down to very low failure probabilities. In particular, as a first step towards this goal, the present work addresses the seismic risk of structures with fluid viscous dampers, a simple and widely used class of dissipation devices.
A robust probabilistic framework has been defined for the purposes of the present work, combining an advanced probabilistic tool for solving reliability problems, namely Subset Simulation (with Markov chain Monte Carlo and Metropolis-like algorithms), with a stochastic ground motion model for statistical seismic hazard characterization. The seismic performance of the system is described by means of demand hazard curves, providing the mean annual frequency of exceeding any specified threshold demand value for all the relevant global and local Engineering Demand Parameters (EDPs). A wide range of performance levels is monitored, encompassing the serviceability conditions and the ultimate limit states, up to very rare performance demand levels (with a mean annual frequency of exceedance around 10^-6) at which the seismic reliability shall be checked in order to confer on the system an adequate level of safety margins against seismic events rarer than the design one. Some original contributions regarding the methodological approaches have been obtained by an efficient combination of the common conditional probabilistic methods (i.e., multiple-stripe and cloud analysis) with a stochastic earthquake model, in which subset simulation is exploited to efficiently generate both the seismic hazard curve and the ground motion samples for structural analysis purposes. The accuracy of the proposed strategy is assessed by comparing the achieved seismic risk estimates with those provided via Subset Simulation, the latter being assumed as the reference reliability method. Furthermore, a reliability-based optimization method is proposed as a powerful tool for investigating the sensitivity of the seismic risk to variable model parameters. Such a method proves particularly useful when a proper statistical characterization of the model parameters is not available.
The proposed probabilistic framework is applied to a set of single-degree-of-freedom damped models to carry out an extensive parametric analysis, and to a multi-story steel building with linear and nonlinear viscous dampers for a deeper investigation. The influence of the nonlinearity level of the viscous dampers on the seismic risk of such systems is investigated. The variability of the viscous constitutive parameters due to the tolerance allowed in the devices' quality control and production tests is also accounted for, and the consequent effects on the seismic performance are evaluated. The reliability of the simplified approaches proposed by the main international seismic codes for damper design is assessed, the main regulatory gaps are highlighted, and proposals for improvement are given as well. The results of this probabilistic investigation contribute to the development of more reliable design procedures for seismic passive protection strategies.
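The reliability engine named above can be sketched in miniature. Below is a minimal, hypothetical Subset Simulation for estimating a small exceedance probability of a standard-normal variable — a toy limit state, not the thesis's structural model — using a simple Metropolis resampling step between intermediate levels; the tuning values (sample size, level probability, proposal scale) are invented.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, threshold=0.0, seed=0):
    """Estimate P[g(X) >= threshold] for X ~ N(0, I) by Subset Simulation:
    chain intermediate levels, each hit with conditional probability ~p0."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(50):                       # cap on the number of levels
        idx = np.argsort(y)[::-1]             # sort samples by g, descending
        n_seed = int(p0 * n)
        level = y[idx[n_seed - 1]]            # intermediate threshold
        if level >= threshold:                # final level reached
            return prob * np.mean(y >= threshold)
        prob *= p0
        seeds = x[idx[:n_seed]]
        xs, ys = [], []
        for s in seeds:                       # grow a Markov chain per seed
            cur, gcur = s.copy(), g(s)
            for _ in range(n // n_seed):
                cand = cur + 0.8 * rng.standard_normal(dim)
                # Metropolis accept for N(0, I), restricted to {g >= level}
                ratio = float(np.exp(0.5 * (cur @ cur - cand @ cand)))
                if rng.random() < min(1.0, ratio) and g(cand) >= level:
                    cur, gcur = cand, g(cand)
                xs.append(cur.copy())
                ys.append(gcur)
        x, y = np.array(xs), np.array(ys)
    return prob

# rare event: a standard normal component exceeding 4 (true P ~ 3.2e-5)
p_hat = subset_simulation(lambda v: v[0], dim=2, threshold=4.0)
```

Plain Monte Carlo would need millions of samples to see this event; Subset Simulation reaches it here with a few thousand model evaluations, which is why it scales to the ~10^-6 frequencies monitored in the thesis.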
APA, Harvard, Vancouver, ISO, and other styles
46

Ren, Zhen. "Modular model assembly from finite element models of components." Diss., Connect to online resource - MSU authorized users, 2008.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
47

HUSSAIN, RADI RADI MOHAMMED ABDUL. "Structural construction and economic benefits for precast concrete high-rise housing buildings." Doctoral thesis, Università Politecnica delle Marche, 2016. http://hdl.handle.net/11566/242978.

Full text of the source
Abstract:
Precast concrete technology is considered one of the most important systems for multifamily housing buildings. The concept of manufacturing, production, and construction makes this technology different from cast-in-place concrete and often more attractive and suitable. The objective of this thesis is to investigate the possibilities and capabilities of precast concrete technology in terms of structural performance and construction cost of high-rise housing buildings, to ensure the optimal use of this technology to solve the housing crisis in the Middle East and in areas affected by natural disasters and wars. The study includes a review of the systems most widely used in this field, such as precast concrete frame systems and the large panel system, which have therefore been studied intensively by considering real cases. The practical applications and experiences of housing projects and real case studies, in addition to the codes, constitute an important part of this thesis. Comparisons between precast concrete systems are conducted to find the most suitable system; the comparison depends on the characteristics, rules, and constraints of each system. In order to compare different precast structural housing systems for a number of case studies, the seismic performance and the construction costs are assumed as criteria for the assessment and selection of the system. The seismic performance is obtained with non-linear push-over analyses, whereas the construction cost is estimated as the total cost of each case study; the results obtained for the various case studies are then compared. Precast concrete frame structural systems represent a suitable solution for high-rise housing buildings in terms of seismic performance and construction cost. Furthermore, the results show that this system is a good economic alternative for structural buildings not only in non-seismic or low-seismicity areas but also in areas of high seismicity.
APA, Harvard, Vancouver, ISO, and other styles
48

Jawaid, Hassan. "Applications of the Heath, Jarrow and Morton (HJM) model to energy markets." Thesis, Uppsala universitet, Analys och tillämpad matematik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176611.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

TELESCA, ALESSIO. "ADVANCED MODELLING OF OVER-STROKE DISPLACEMENT CAPACITY FOR CURVED SURFACE SLIDER DEVICES." Doctoral thesis, Università degli studi della Basilicata, 2022. http://hdl.handle.net/11563/153765.

Full text of the source
Abstract:
This doctoral dissertation reports on the research work carried out and provides a contribution to the field of seismic base isolation. Since its introduction, the base isolation strategy has proved to be an effective solution for the protection of structures and their components from earthquake-induced damage, enhancing their resilience and implying a significant decrease in the time and cost of repair compared to a conventional fixed-base structure. Sliding isolation devices feature some important characteristics over other devices that make them particularly suitable for application in the retrofit of existing buildings, such as a high displacement capacity combined with limited plan dimensions. Even though these devices have become more widespread worldwide in recent years, a full understanding of their performance and limits, as well as of their behaviour under real seismic excitations, has not yet been completely achieved. When Curved Surface Sliders reach their displacement capacity, they enter the so-called over-stroke sliding regime, which is characterized by an increase in stiffness and friction coefficient. While in the over-stroke displacement regime, however, sliding isolators are still capable, up to certain threshold values, of preserving their ability to support gravity loads. In this doctoral dissertation, the influence of Curved Surface Sliding devices on different structures and under different configurations is analysed, and a tool to help professionals in the design phase is provided. The main focuses of the research are: i) the numerical investigation of the influence of over-stroke displacement on base-isolated structures; ii) the numerical investigation of the influence of displacement-retaining elements on base-isolated structures; iii) the development of a mechanical model and an algebraic solution describing the over-stroke sliding regime and the associated limit displacements.
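The over-stroke behaviour described above can be sketched with a simple force-displacement law for a curved surface slider: a pendulum restoring term plus friction, with an added stiffening branch beyond the design displacement capacity. All numbers and the stiffening factor are hypothetical illustrations, not the mechanical model developed in the dissertation.

```python
import numpy as np

def css_force(d, v, W=1000.0, R=3.0, mu=0.05, d_max=0.4, k_os_factor=10.0):
    """Horizontal force of a curved surface slider: restoring term (W/R)*d
    plus friction mu*W*sign(v); beyond the displacement capacity d_max the
    device enters the over-stroke regime, modelled here with an increased
    stiffness (hypothetical multiplier k_os_factor)."""
    k = W / R                                    # pendulum restoring stiffness
    f = k * d + mu * W * np.sign(v)              # ordinary sliding regime
    if abs(d) > d_max:                           # over-stroke contribution
        over = abs(d) - d_max
        f += np.sign(d) * k_os_factor * k * over
    return f

f_in = css_force(0.2, 1.0)    # within the design capacity
f_os = css_force(0.5, 1.0)    # 0.1 m into the over-stroke regime
```

The kink at `d_max` reproduces the qualitative stiffness increase of the over-stroke regime; the dissertation's model also tracks the friction increase and the residual gravity-load capacity, which this sketch omits.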
APA, Harvard, Vancouver, ISO, and other styles
50

Gandy, Axel. "Directed model checks for regression models from survival analysis." Berlin Logos-Ver, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2766731&prov=M&dok_var=1&dok_ext=htm.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
