Theses on the topic "Innovative modelling"


Create a correct reference in APA, MLA, Chicago, Harvard and several other citation styles.


Consult the 50 best theses for your research on the topic "Innovative modelling".


You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rehman, S. « Knowledge-based cost modelling for innovative design ». Thesis, Cranfield University, 2000. http://hdl.handle.net/1826/3971.

Full text
Abstract:
The contribution to new knowledge from this research is a novel method for modelling production costs throughout the design phase of a product's lifecycle, from conceptual to detail design. The provision of cost data throughout the design phase allows management to make more accurate bid estimates and encourages designers to design to cost, leading to a reduction in the amount of design rework and in the product's time to market. The cost modelling strategy adopted incorporates the use of knowledge-based and case-based approaches. Cost estimation is automated by linking design knowledge, required for predicting design features from incomplete design descriptions, to production knowledge. The link between the different paradigms is achieved through the blackboard framework of problem solving, which incorporates both case-based and rule-based reasoning. The method described is aimed at innovative design activities in which original designs are produced that are similar, to some extent, to past design solutions. The method is validated through a prototyping approach. Tests conducted on the prototype confirm that the designed method models costs sufficiently accurately within the range of its own knowledge base. It can therefore be inferred that the designed cost modelling methodology sets out a feasible approach to cost estimation throughout the design phase.
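The blackboard architecture mentioned in the abstract can be pictured with a small sketch. The example below is purely illustrative and hypothetical (the case base, per-feature cost rules, and averaging control strategy are invented for this illustration and are not taken from the thesis): a case-based knowledge source retrieves the most similar past design, a rule-based source applies simple per-feature cost rules, and both post their estimates to a shared blackboard.

```python
# Illustrative sketch (not the thesis's system): a minimal "blackboard" that
# combines a case-based estimate (nearest past design) with a rule-based
# estimate (per-feature cost rules) for an incomplete design description.
from dataclasses import dataclass

@dataclass
class Design:
    features: dict            # e.g. {"holes": 4, "pockets": 2}
    known_cost: float = None  # filled in for past cases only

PAST_CASES = [
    Design({"holes": 4, "pockets": 1}, known_cost=120.0),
    Design({"holes": 8, "pockets": 3}, known_cost=260.0),
]

COST_RULES = {"holes": 10.0, "pockets": 45.0}  # hypothetical cost per feature

def case_based_estimate(new: Design) -> float:
    # Retrieve the most similar past case (smallest feature-count distance)
    # and reuse its recorded cost.
    def distance(case):
        keys = set(new.features) | set(case.features)
        return sum(abs(new.features.get(k, 0) - case.features.get(k, 0)) for k in keys)
    return min(PAST_CASES, key=distance).known_cost

def rule_based_estimate(new: Design) -> float:
    # Apply per-feature cost rules to the features predicted so far.
    return sum(COST_RULES.get(k, 0.0) * n for k, n in new.features.items())

def blackboard_estimate(new: Design) -> float:
    # Each knowledge source posts its partial estimate to the blackboard;
    # a simple control strategy averages them here.
    blackboard = {"case_based": case_based_estimate(new),
                  "rule_based": rule_based_estimate(new)}
    return sum(blackboard.values()) / len(blackboard)

print(blackboard_estimate(Design({"holes": 6, "pockets": 2})))
```

In a real system of this kind, the control strategy would decide which knowledge source to trigger next as the design description becomes more complete, rather than simply averaging two estimates.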
2

Lamberto, Giuliano. « An innovative approach to tibiofemoral joint modelling ». Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/18081/.

Full text
Abstract:
Musculoskeletal models allow non-invasive prediction of forces exchanged within the human body in motion that cannot be measured directly. Although this information has many potential applications, actual adoption of current models is impeded by limitations related to the insufficient number of validation studies and the drastic modelling assumptions often made. This thesis aims to address these limitations by developing an innovative approach to the mechanical modelling of the tibiofemoral joint. To achieve this, three main sections are presented. Effects of the soft tissue artefact on current musculoskeletal models – this study used a statistical approach to develop a realistic distribution of soft tissue artefact, which was used to assess the sensitivity of the estimates of three publicly available musculoskeletal models. Results showed joint-dependent variations, decreasing from hip to ankle, providing awareness for the research community on the investigated models and indications for better interpreting simulation outcomes. Modelling the mechanical behaviour of the tibiofemoral joint using compliance matrices – this part of the thesis proposed a method to characterise the mechanical behaviour of the tibiofemoral joint using a discrete set of compliance matrices. Model calibration and validation were performed using data from ex vivo testing. Accurate results were found in close proximity to where the model was calibrated, opening the way to more biofidelic joint representations. The developed model was then included in the calculation pipeline to estimate joint kinematics using a penalty-based method; validation of these estimates against in vivo data was promising, providing a remarkable alternative to traditional methods. A force-based approach to personalised tibiofemoral models – this section showed, on an ex vivo dataset, that the model based on compliance matrices can be personalised using data from clinical tests. Since the latter are usually performed in vivo, this opens the way to exciting future applications.
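As a point of reference for the compliance-matrix idea, the load-displacement behaviour of a joint can be written in the generic linearised form below (a textbook formulation given only for orientation, not the thesis's calibrated model):

\[
\Delta\mathbf{u}(\theta) = \mathbf{C}(\theta)\,\mathbf{w},
\qquad
\mathbf{C}(\theta) \in \mathbb{R}^{6\times 6},
\]

where \(\mathbf{w}\) collects the three force and three moment components applied to the tibia, \(\Delta\mathbf{u}(\theta)\) the corresponding relative translations and rotations with respect to the femur, and a discrete set of matrices \(\mathbf{C}(\theta_1),\dots,\mathbf{C}(\theta_n)\) covers the flexion range, consistent with the "discrete set of compliance matrices" described above.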
3

Silva, Neander F. « Hybrid system for innovative design ». Thesis, University of Strathclyde, 1996. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21250.

Full text
Abstract:
The thesis focuses on two vital and interrelated aspects of modelling design support systems: how innovative solutions may arise, and how the knowledge base is extended and maintained. The dilemma of 'reproduction versus creativity' is identified as one of the main deadlocks that the design methods debate, research in Computer Aided Architectural Design (CAAD) and Artificial Intelligence (AI) have faced over the last thirty years. A hybrid approach is then proposed as a means of overcoming these difficulties, and a rudimentary evolving design support environment is developed. It draws inspiration from three areas of Artificial Intelligence: knowledge-based systems, connectionist models, and case-based reasoning (CBR). However, although strongly inspired by these underlying theories, it differs fundamentally in its architecture from conventional knowledge-based systems, connectionist models and CBR tools. The main benefits and contributions of this hybrid system are an incremental self-extending feature able to substantially minimise the dependency on knowledge-engineer intervention, and interactive support for innovation by augmenting the designer's creativity.
4

Liu, Zhao. « Innovative lesion modelling for computer-assisted diagnosis of melanoma ». Thesis, University of the West of England, Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589390.

Full text
Abstract:
Malignant melanoma is a relatively rare disease, but it is the most fatal form of skin cancer. In the UK alone, it represents 10% of all skin cancers and its incidence has increased four-fold during the last three decades. The survival rate of melanoma is inversely proportional to the tumour thickness, so early detection with surgical removal of thin lesions is vital for successful treatment. Many dermatologists have advocated the development of computer-assisted diagnosis (CAD) for early detection of melanoma, due to its objectivity and the potential it could provide for self-screening. Numerous computer-based methods have been developed for melanoma diagnosis. However, better accuracy and robustness are demanded before computer-based systems can be trusted for routine clinical applications. With the aim of improving existing CAD systems, the present study develops several innovative techniques to achieve accurate and reliable computer-based diagnosis of melanoma. Based on clinical evidence, a new sub-segmentation scheme for cutaneous lesions is first proposed to allow the simultaneous isolation of normal skin, suspicious lesion areas, and interesting darker areas inside the lesion. This scheme differs markedly from traditional segmentation techniques, in which only lesion and non-lesion areas are separated. The isolated darker areas within the lesion were found to be of great diagnostic importance and useful in characterising the malignancy of cutaneous lesions. The melanin index and erythema index, which respectively characterise the amount and distribution of the melanin and haemoglobin components within human skin, are computed from conventional RGB images according to the optical theory of human skin. These biological indices are employed to generate new colour variegation descriptors for melanoma diagnosis. Experiments show that the derived chromophore indices can accurately describe pigmentation distributions and tend to be less influenced by external imaging complexities (e.g. lighting conditions and optical sensor parameters) than conventional image intensities such as RGB colour values. A novel asymmetry analysis algorithm based on global point signatures (GPSs) is developed to quantify the shape and pigmentation asymmetry of cutaneous lesions simultaneously. In contrast to existing methods for asymmetry analysis, the newly proposed method results in only one pair of reflective symmetry axes. This is consistent with the asymmetry of cutaneous lesions as perceived by the human eye, and circumvents the problem of yielding two different pairs of symmetry axes when shape and colour asymmetries are evaluated separately. In addition, the new asymmetry descriptor is shown to be invariant to rigid transformations and robust to non-rigid deformations. This suggests that the GPS-based asymmetry descriptor not only benefits the computer-assisted diagnosis of melanoma, but also has good potential for follow-up monitoring of suspicious cutaneous lesions in the high-risk Caucasian population. Finally, an innovative CAD system for melanoma is developed using the extended Laplacian Eigenmap. This system incorporates, for the first time, clinically important metadata into a completely automatic classification process for melanoma diagnosis. Algorithm performance is evaluated on both 2D dermoscopy images and 3D data obtained from our Skin Analyser device.
The proposed CAD system achieves 90.97% sensitivity and 86.42% specificity with the dermoscopy database, and 94.12% sensitivity and 88.99% specificity with the Skin Analyser database. The diagnostic accuracy obtained by our system is superior to most of the results from other existing CAD systems for melanoma.
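The chromophore indices above are derived in the thesis from the optical theory of human skin. As a rough illustration of the general idea only (not the thesis's formulation, whose derivation, coefficients and calibration differ), per-channel log-inverse-reflectance maps can be computed from an RGB image, with the red-channel absorbance dominated by melanin and an erythema-like signal approximated by the excess of green-channel absorbance over red:

```python
# Illustrative sketch only: generic log-inverse-reflectance chromophore maps.
# The simple channel combinations below are a textbook-style approximation and
# are NOT the thesis's melanin and erythema index formulas.
import numpy as np

def chromophore_maps(rgb: np.ndarray):
    """rgb: float array in [0, 1], shape (H, W, 3), treated as reflectance."""
    eps = 1e-6                                    # avoid log(0)
    absorbance = -np.log10(np.clip(rgb, eps, 1.0))
    a_r, a_g = absorbance[..., 0], absorbance[..., 1]
    melanin_like = a_r          # red absorbance: haemoglobin absorbs little in red
    erythema_like = a_g - a_r   # green absorbance corrected for the melanin baseline
    return melanin_like, erythema_like

# Usage on a dummy image:
img = np.random.rand(4, 4, 3)
m, e = chromophore_maps(img)
print(m.shape, e.shape)
```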
5

Lantto, Johanna, and Willie Wiholm. « Innovative communication strategies and modelling of robust sensor functions ». Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139941.

Full text
Abstract:
The aim of this thesis was to create a resilient network capable of handling link failures without affecting the data flow. This was done using graph theory and three mathematical models. A generic system was created, to which the models were applied. The mathematical models were path diversity, edge protection and path restoration. These models were tested to evaluate whether they could create a robust system, and they were compared with each other to identify the best-performing one. It was concluded that it is possible to construct a resilient network using these types of mathematical modelling, and that the models provided different results in terms of cost and robustness. The report ends with suggestions for future work on how studies can be conducted to create realistic systems.
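The kind of resilience check described above can be illustrated with a small graph-theory sketch. The topology below is invented, and the snippet is not the thesis's path diversity, edge protection or path restoration model; it simply counts edge-disjoint paths between a source and a sink and verifies that a route survives a single link failure:

```python
# Illustrative sketch, not the thesis's models: basic resilience checks on a
# small invented topology (edge-disjoint paths and single-link-failure rerouting).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "D", 1),   # primary route A-B-D
    ("A", "C", 2), ("C", "D", 2),   # diverse backup route A-C-D
])

# Path diversity: number of edge-disjoint paths between source and sink.
print("edge-disjoint paths A->D:", nx.edge_connectivity(G, "A", "D"))

# Path restoration: remove a link and recompute the cheapest surviving route.
H = G.copy()
H.remove_edge("A", "B")
if nx.has_path(H, "A", "D"):
    print("restored route:", nx.shortest_path(H, "A", "D", weight="weight"))
else:
    print("no route survives the failure")
```

Mathematical-programming formulations of the three models would add cost terms and capacity constraints on top of feasibility checks like these.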
6

Gasparini, Michele. « Innovative methods for modelling and design of compression drivers ». Doctoral thesis, Università Politecnica delle Marche, 2014. http://hdl.handle.net/11566/242755.

Full text
Abstract:
Modern industrial activity is constantly subject to demands for high quality and short time to market. A good design has to cope with these fundamental aspects, making the engineer's work increasingly demanding. For this reason, advanced tools have been developed in several fields to support and aid the design activity. Loudspeaker design is a very complex task, since it involves different physical phenomena that have to be taken into account. Unfortunately, a mathematical description of the driver is quite complex and not always available; for this reason, specific design tools for compression drivers are not common. Considering this scenario, a complete analysis of the compression driver design process is reported in this dissertation. The driver design has to cover several aspects. First of all, the desired acoustical output has to be achieved, usually with high efficiency and regularity of the response. Since effective mathematical models may not be available, more advanced techniques should be employed. Simulation is nowadays a very common approach to many engineering problems, thanks to the high computational power of modern computers and to the diffusion of complete commercial software products. Most of them are based on the finite element method (FEM), a widespread technique for solving systems of partial differential equations (PDEs). The behaviour of compression drivers is described by a set of equations that involves not only acoustics but also mechanics and electromagnetism, so specific simulators capable of handling the interactions between different physics have to be used. These software products are commonly called multiphysics simulators. The accuracy of FEM simulators can also be exploited to optimize the driver design process, making it possible to increase product quality and to reduce the time needed for the solution of the model. In particular, modern optimization methods can be applied to analyze the simulation results and to find an optimal configuration of the driver design parameters according to a given evaluation function. Based on this idea, a complete tool to support the compression driver design process is presented in this thesis. The proposed approach applies an evolution-strategies algorithm to the results obtained by simulating the driver with commercial multiphysics FEM software. Depending on the particular decision function chosen, the algorithm can improve specific aspects of the driver design. Almost every physical or geometrical quantity involved in the driver operation and included in the FEM model can be used as a design parameter. The proposed algorithm has been tested on a real driver design under different conditions and proved effective in enhancing the quality of the driver frequency response, without any a priori information and without the need for human supervision, thus reducing the design time. Another important aspect in understanding the compression driver is the study of its nonlinear behaviour. Due to its small size, the driver is affected by specific distortions that are not present in traditional loudspeakers. The correct identification of these nonlinearities is fundamental, since it makes it possible to detect flaws in the driver design or in its production process. Moreover, a suitable nonlinear model allows the application of linearization techniques in order to improve the driver response.
Three different methods for the identification of nonlinear systems are presented in this thesis, both fixed and adaptive. The first is based on the dynamic convolution method and exploits principal component analysis to improve the efficiency of the algorithm. The other two are adaptive techniques that exploit the properties of particular orthogonal polynomials and of cubic splines, respectively, to model the unknown nonlinear behaviour. All the algorithms have been tested and compared with other methods already reported in the literature.
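The optimisation loop outlined above can be sketched in a few lines. The example below is a minimal, hypothetical illustration: the fitness function merely stands in for the multiphysics FEM simulation of the driver, and the target parameter values are invented for this example:

```python
# Minimal (mu + lambda) evolution-strategy sketch. The objective is a stand-in
# for the FEM multiphysics simulation; in the thesis the fitness would come
# from the simulated frequency response, not from this toy function.
import random

def simulated_fitness(params):
    # Hypothetical placeholder: pretend the ideal design lies at (1.0, 2.0, 0.5).
    target = (1.0, 2.0, 0.5)
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(dim=3, mu=5, lam=20, sigma=0.3, generations=50):
    parents = [[random.uniform(0, 3) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            base = random.choice(parents)
            offspring.append([x + random.gauss(0, sigma) for x in base])
        pool = parents + offspring                 # (mu + lambda) selection
        pool.sort(key=simulated_fitness, reverse=True)
        parents = pool[:mu]
    return parents[0]

print("best design parameters:", evolve())
```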
7

Bouklas, Stavros. « Innovative design of decarbonisation scenarios for multi-sectoral modelling frameworks ». Thesis, KTH, Energisystemanalys, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217807.

Full text
8

Boz, Buket. « Complementary experiments, modelling, and simulations of innovative Li-ion batteries ». Doctoral thesis, Università degli studi di Brescia, 2021. http://hdl.handle.net/11379/544081.

Full text
9

Bronchinetti, Lorenzo. « Experimental characterization and numerical modelling of an innovative hydropneumatic suspension system ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
Today most vehicles are equipped with a suspension system to guarantee a good comfort level while providing adequate handling capability. All suspension systems in which a shock absorber and a spring are employed are subject to the well-known "ride-handling compromise", since comfort and handling require opposite settings of the suspension parameters. This thesis deals with an innovative suspension system, the AirTender, which makes it possible to overcome this compromise by introducing a hydropneumatic spring in series with the coil spring of the original suspension structure. With the two springs working in series, the equivalent spring rate changes dynamically during the shock absorber travel. The main aim of this thesis is to develop a mathematical model capable of capturing the fundamental dynamics of the AirTender system, in order to study how it works under different initial conditions and how the main physical parameters change. Furthermore, focusing on the implementation of the system on a Honda CRF 1000 motorcycle, this thesis also investigates the effects of the AirTender on the vehicle dynamics. The research work started with an experimental test campaign aimed at characterizing the shock absorber installed on the motorbike and the AirTender system, in order to obtain all the data needed to study the system and develop the mathematical model; indeed, some parameters of the model cannot be computed directly and need to be estimated from the experimental data. The physical model of the AirTender suspension system was then developed starting from the fundamental equations describing the behaviour of the coil and hydropneumatic springs and comparing the numerical simulations with the experimental data, leading in the end to a linear model able to represent the AirTender dynamics with good accuracy.
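For reference, the series arrangement described above obeys the standard series-spring relation (a textbook identity, not a result of the thesis): with a coil spring of stiffness \(k_c\) in series with a hydropneumatic spring of stiffness \(k_h\), the equivalent rate is

\[
k_{eq} = \frac{k_c\,k_h}{k_c + k_h},
\]

which is always lower than either individual stiffness; since the stiffness of a gas spring varies with its compression, \(k_{eq}\) changes dynamically along the shock absorber travel, as noted in the abstract.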
10

Shehab, Esam. « An intelligent knowledge based cost modelling system for innovative product development ». Thesis, De Montfort University, 2001. http://hdl.handle.net/2086/11605.

Full text
Abstract:
This research work aims to develop an intelligent knowledge-based system for product cost modelling and design for automation at an early design stage of the product development cycle, which would enable designers and manufacturing planners to make more accurate estimates of the product cost and, consequently, to respond more quickly to customers' expectations. The main objectives of the research are to: (1) develop a prototype system that assists an inexperienced designer in estimating the manufacturing cost of the product, (2) advise designers on how to eliminate design- and manufacturing-related conflicts that may arise during the product development process, (3) recommend the most economic assembly technique for the product, so that this technique can be considered during the design process, and provide design improvement suggestions to simplify the assembly operations (i.e. to provide an opportunity for designers to design for assembly (DFA)), (4) apply a fuzzy logic approach to certain cases, and (5) evaluate the developed prototype system through five case studies. The developed system for cost modelling comprises a CAD solid modelling system, a material selection module, a knowledge-based system (KBS), a process optimisation module, a design-for-assembly module, a cost estimation technique module, and a user interface. In addition, the system encompasses two types of databases, permanent (static) and temporary (dynamic). These databases are categorised into five separate groups: a Feature database, a Material database, a Machinability database, a Machine database, and a Mould database. The system development process passed through four major steps: firstly, constructing the knowledge-based and process optimisation system; secondly, developing a design-for-assembly module; thirdly, integrating the KBS with both the material selection database and a CAD system; and finally, developing and implementing a fuzzy logic approach to generate reliable cost estimates and to handle the uncertainty in the cost estimation model that cannot be addressed by traditional analytical methods. Besides estimating the total cost of a product, the developed system has the capability to: (1) select a material as well as the machining processes, their sequence and the machining parameters, based on a set of design and production parameters that the user provides to the system, and (2) recommend the most economic assembly technique for a product and provide design improvement suggestions, in the early stages of the design process, based on a design feasibility technique. It provides recommendations when a design cannot be manufactured with the available manufacturing resources and capabilities. In addition, a feature-by-feature cost estimation report is generated by the system to highlight the features with high manufacturing cost. The system can be applied without the need for detailed design information, so that it can be implemented at an early design stage; consequently, costly redesign and longer lead times can be avoided. One of the tangible advantages of this system is that it warns users of features that are costly and difficult to manufacture. In addition, the system is developed in such a way that users can modify the product design at any stage of the design process. This research dealt with cost modelling of both machined components and injection moulded components.
The developed cost-effective design environment was evaluated on real products, including a scientific calculator, a telephone handset, and two machined components. Conclusions drawn from the system indicated that the developed prototype could help companies reduce product cost and lead time by estimating the total product cost, including assembly cost, throughout the entire product development cycle. Case studies demonstrated that designing a product using the developed system is more cost-effective than using traditional systems. The cost estimated for a number of products used in the case studies was almost 10 to 15% lower than the cost estimated by the traditional system, since the latter does not take into consideration process optimisation, design alternatives, or design-for-assembly issues.
11

Smith, Melvyn Lionel. « The integration of innovative vision and graphic modelling techniques for surface inspection ». Thesis, University of the West of England, Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387938.

Full text
12

Zafeirakis, Dimitrios. « Design, modelling and valuation of innovative dispatch strategies for energy storage systems ». Thesis, University of East Anglia, 2015. https://ueaeprints.uea.ac.uk/54413/.

Full text
Abstract:
Energy storage has in recent years attracted considerable interest, mainly owing to its potential to support the large-scale integration of renewable energy sources (RES). At the same time, however, energy storage technologies are called on to take on multiple roles across the entire electricity sector, introducing modern applications for both private actors and system operators. In this context, the current thesis focuses on the valuation of emerging energy storage applications, while also designing and modelling novel dispatch strategies and developing financial instruments and support measures for the market uptake of energy storage technologies. Emphasis is placed on mature, bulk energy storage technologies able to support energy management applications; these include pumped hydro storage, compressed air energy storage and battery technologies. The energy storage applications and dispatch strategies examined are divided into three main categories, focusing on private actors, autonomous electricity grids and utility-scale systems. For private energy storage actors, active, profit-seeking participation in energy markets is examined through the evaluation of high-risk arbitrage strategies. Furthermore, the interplay of energy storage and demand side management (DSM) is studied for private actors exposed to increased electricity prices and energy insecurity, also highlighting the potential for combined strategies of arbitrage and DSM. To reduce the investment risks associated with participation in energy markets, a novel aspect of collaboration between energy storage and RES is then investigated for energy storage investors, proposing the use of storage for the delivery of guaranteed RES power during peak demand periods and stimulating the development of state support instruments such as feed-in tariffs. Next, attention is given to the introduction of energy storage systems in autonomous island grids. Such autonomous systems constitute ideal test-benches for energy storage and smart grids, owing to the technical challenges they present on the one hand (e.g. low levels of energy diversity and limited grid balancing capacity) and, on the other, the high electricity production cost that characterises the local energy sector (due to the need for oil imports). To this end, combined operation of RES with energy storage could, under the assumption of appreciable RES potential, prove cost-effective in comparison with the current solution of expensive, oil-based thermal power generation. Moreover, considering the limited balancing capacity of such autonomous grids, which dictates the oversizing of the storage components in order to achieve increased energy autonomy, the trade-off between DSM and energy storage is studied next, becoming increasingly important as the quality of the RES potential decays. With regard to utility-scale energy storage applications, the potential of bulk energy storage to support a base-load RES contribution is investigated, showing that the intermittent character of RES power generation could be eliminated. This implies increased energy security at the level of national grids, while also challenging the prospect of grid parity for such energy schemes. Furthermore, the market-regulating capacity of utility-scale energy storage is reflected through the examination of different market-efficiency criteria, providing system operators with a valuable asset for the improved operation of electricity markets.
Finally, the role of utility-scale energy storage in the optimum management of national electricity trade is investigated, highlighting the underlying problem of the exchange of embodied carbon dioxide emissions over cross-border transmission and paving the way for the consideration of energy storage aspects in electricity grid planning.
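As an illustration of the simplest form of the arbitrage strategies mentioned above (a toy example with invented numbers, not one of the dispatch strategies developed in the thesis), a storage unit can be dispatched against a known hourly price series by charging in the cheapest hours and discharging in the most expensive ones:

```python
# Toy price-arbitrage dispatch for a storage unit (illustrative only; ignores
# intertemporal ordering and state-of-charge limits for brevity, and is far
# simpler than the dispatch strategies and valuation methods in the thesis).
prices = [32, 28, 25, 30, 45, 60, 72, 55]   # hypothetical hourly prices, EUR/MWh
capacity_mwh, power_mw, efficiency = 4.0, 1.0, 0.85

hours = sorted(range(len(prices)), key=lambda h: prices[h])
n = int(capacity_mwh / power_mw)            # hours needed for a full cycle
charge_hours = set(hours[:n])               # cheapest hours: buy energy
discharge_hours = set(hours[-n:])           # most expensive hours: sell energy

revenue = sum(prices[h] * power_mw * efficiency for h in discharge_hours)
cost = sum(prices[h] * power_mw for h in charge_hours)
print(f"gross arbitrage margin: {revenue - cost:.1f} EUR")
```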
13

Emadi, Bagher. « Experimental studies and modelling of innovative peeling processes for tough-skinned vegetables ». Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16212/1/Bagher_Emadi_Thesis.pdf.

Full text
Abstract:
Tough-skinned vegetables such as pumpkin and melon are currently peeled either semi-automatically or automatically. The main limitation of both methods, especially for varieties with an uneven surface, is high peeling losses. The main objectives of this research were to improve current mechanical peeling methods and to develop new mechanical methods for tough-skinned vegetables that come close to the "ideal" peeling conditions by exploiting the mechanical properties of the product. This research has developed four innovative mechanical peeling methods on the basis of the mechanical properties of tough-skinned vegetables. For the first time, an abrasive-cutter brush has been introduced as the best peeling method for tough-skinned vegetables. This device simultaneously applies abrasive and cutting forces to remove the peel. The same peeling efficiency at concave and convex areas, in addition to high productivity, constitutes the main advantage of the developed method. The developed peeling method is environmentally friendly, as it minimises water consumption and peeling wastes. The peeling process using this method has been simulated in a mathematical model, and the significant influencing parameters have been determined. The parameters relate either to the product or to the peeler, and appear as the coefficients of a linear regression model. The coefficients have been determined for Jap and Jarrahdale, two varieties of pumpkin. The mathematical model has been verified against experimental results. The successful implementation of this research has provided essential information for the design and manufacture of a commercial peeler for tough-skinned vegetables. It is anticipated that the abrasive-cutting method and the mathematical model will be put into practical use in the food processing industry, enabling the peeling of tough-skinned vegetables to be optimised and potentially saving the food industry millions of dollars in tough-skinned vegetable peeling processes.
14

Emadi, Bagher. « Experimental studies and modelling of innovative peeling processes for tough-skinned vegetables ». Queensland University of Technology, 2006. http://eprints.qut.edu.au/16212/.

Full text
15

Barbagallo, Raffaele. « Innovative characterization and elastoplastic modelling of metals under static and dynamic conditions ». Doctoral thesis, Università di Catania, 2018. http://hdl.handle.net/10761/4170.

Full text
Abstract:
Material characterization and modelling are fundamental in both industry and research in order to obtain reliable FEM simulations. In this research, such procedures are investigated for quasi-static and dynamic tests on metals. According to several experiments reported in the literature, the static elastoplastic behaviour of such materials depends not only on the first stress invariant (triaxiality) for ductile damage and on the second stress invariant (equivalent von Mises stress) for yielding, but also on the third stress invariant (normalized Lode angle X), which may affect both yielding and ductile failure. In this research, a new, accurate and easy-to-calibrate yield model is presented, in which the yield surface depends on the Lode angle and, possibly, also on the triaxiality ratio. The proposed model has been tested and validated against experimental data from the literature on the titanium alloy Ti6Al4V. Moreover, a new methodology is proposed and experimentally validated for translating the engineering curves obtained from tensile tests into the true curves via material-independent mathematical tools named MVB functions, which depend only on the necking initiation strain and on the aspect ratio of the undeformed cross section. Regarding the dynamic behaviour of metals, this research aims at the quantitative evaluation of the error levels in characterization via Hopkinson bar tensile tests, run according to the classical strain-gauge-based experimental procedure and to the enhanced high-speed-camera-assisted procedure. In addition, the effect of specimen slenderness is investigated to check the sensitivity of both of the above techniques to different specimen geometries. Lastly, this work analyses the necking-induced freezing of the strain rate effect via experimental data and numerical simulations, with reference to materials exhibiting both early and late necking initiation, and the consequences of this phenomenon for the characterization process via Hopkinson bar tensile tests.
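For context, the engineering-to-true conversion that the MVB functions extend starts from the standard uniform-deformation relations (textbook identities valid only up to necking initiation; the thesis's contribution lies in going beyond this point):

\[
\sigma_{\mathrm{true}} = \sigma_{\mathrm{eng}}\,(1 + \varepsilon_{\mathrm{eng}}),
\qquad
\varepsilon_{\mathrm{true}} = \ln(1 + \varepsilon_{\mathrm{eng}}),
\]

which hold while the cross section deforms uniformly; after necking initiation, stress and strain localise and a correction such as the MVB functions mentioned above is required.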
16

Tardif, Derek. « Experimental investigation and numerical modelling of innovative FRP reinforced concrete bridge deck panels ». Master's thesis, Université de Sherbrooke, 2005. http://savoirs.usherbrooke.ca/handle/11143/1298.

Full text
Abstract:
The possibility of designing steel-free reinforced concrete structures offers an interesting approach to overcoming deterioration problems associated with the corrosion of steel reinforcement. This research project investigates steel-free precast prestressed concrete bridge deck systems, with the objectives of increasing the service life of new bridge decks and understanding their structural behaviour. The project was initiated after a major repair of the bridge deck systems on the Jacques-Cartier Bridge in Montreal, which was carried out during summer 2001-02. The final solution for the replacement of the bridge deck was the use of precast prestressed concrete decks with conventional steel tendons and reinforcing bars. The bridge decks were rebuilt because the existing steel tendons and reinforcing bars had been severely damaged over the years by corrosion. Full-scale concrete bridge panel prototypes reinforced with carbon fibre-reinforced polymer (CFRP) prestressing tendons and glass fibre-reinforced polymers (GFRPs) as stirrups and slab reinforcement were constructed and tested up to failure. The results for these prototypes were compared to those of conventional steel-reinforced and steel-prestressed concrete panel specimens. Each prototype was loaded under three different configurations until cracks first appeared, to study the service state. It was then loaded at mid-span until failure to study the ultimate state. The behaviour of the panels was characterized by the mid-span and third-point deflections and by the strains obtained from the gauges fixed on the tendons, concrete, reinforcing bars and stirrups. Based on the testing of four full-scale panels, the thesis presents comparisons of serviceability, ultimate strength, and failure modes. The experimental results are also compared to theoretical equations and to predictions from two finite element programs. The numerical and theoretical results are shown to be in good agreement with the experiments.
17

Moretti, Linda. « Analysis and modelling of innovative technologies on natural gas transportation and distribution networks ». Doctoral thesis, Università degli studi di Cassino, 2022. https://hdl.handle.net/11580/90999.

Full text
Abstract:
Renewable energy sources (RESs), such as wind, solar and biomass, are the keystone of EU energy policy for fulfilling the target of a carbon-neutral economy by 2050. However, the integration of a significant share of RESs poses significant challenges to EU energy systems, as it requires, on the one hand, storing large energy volumes to match intermittent renewable supply with the pattern of energy consumption and, on the other hand, transporting renewable energy from where it can be most efficiently and feasibly produced to where it is consumed. To overcome such challenges, it will be effective to consider the reuse of the existing natural gas (NG) infrastructure. The latter will play a crucial role in the development of a decarbonized energy system based on extensive use of RESs, owing to its widespread presence and its capacity to provide a cost-effective option for transporting and storing large amounts of energy over long periods, exploiting the NG transportation and distribution networks as well as the storage complexes of the existing NG infrastructure. This thesis aims to address, with a multi-thematic approach, the issue of innovative uses of, as well as the development of innovative technologies on, NG transport and distribution networks. To this aim, five case studies were investigated to: i) evaluate fault management strategies for NG distribution networks in order to minimize the disruption of service and define possible structural improvement measures; ii) assess the technical feasibility of the Power-to-Gas concept for storing intermittent RESs; iii) analyse the impact of hydrogen injection on NG networks; iv) evaluate the effectiveness of equilibrium gasification models as a tool for the design and optimization of biomass gasification systems integrated into polygeneration plants coupled with energy networks. The results of this thesis provide useful insights for researchers, designers and policy makers, filling some of the gaps highlighted in the existing scientific literature in all the analysed areas.
18

Ferrari, I. « Modelling the performances of traditional and innovative fats in a plum cake formulation ». Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/169033.

Full text
Abstract:
Finding alternatives to animal and saturated fats is an attractive challenge in the bakery sector. In the first part of the research, the effects of fat melting characteristics and fat quantity in a plum cake formulation were studied by means of a Central Composite Design (CCD), using palm oil/palm olein blends as the fat. Cakes produced with butter (BC) or anhydrous butter (ABC) were taken as the references. Response surface models demonstrated that cake texture was significantly affected (p<0.01) both by fat content and by the percentage of olein in the fat blend, while volume was influenced (p<0.01) only by fat content. An optimized cake formulation (OF) was obtained. The rheology of the OF, BC and ABC fats (and of the corresponding batters) was studied, as well as the cake characteristics during storage. Fat properties influenced the viscoelastic behaviour of the batter and its creaming performance. A higher content of unsaturated fatty acids improved baking performance: the OF, containing 19.7% olein in the fat blend, attained good structural properties, suggesting that an aerated structure can be achieved even with a minimal solid fat content. In the second part, the effect of structured fats in plum cake was studied, using an organogel (OG) made of sunflower oil with β-sitosterol and γ-oryzanol as gelators. The OG showed higher G' with increasing gelator concentration. From the results of a CCD (with OG content in the cake and gelator percentage in the OG as factors), OG properties showed no significant impact (p>0.05) on the final cake, although the reference cake, produced with liquid sunflower oil, showed higher consistency (p>0.05). During storage trials, OG cakes showed slightly different behaviour, but further study of OG structuring in cake is needed.
19

Piedrafita, Francos Daniel. « Designing, testing and modelling two innovative non-conventional buckling restrained braces for seismic resistant buildings ». Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/284738.

Full text
Abstract:
In this thesis, two all-steel BRBs have been designed, manufactured and tested, and both satisfy the testing protocols required by EU and US codes. They are composed of a steel slotted restraining unit which stabilizes the steel core. The first one, the Modular Buckling Restrained Brace (MBRB), is composed of several modules in series, each containing several dissipation units connected in parallel that yield under shear forces. Although it has a good hysteretic response and high ductility, its core is heavy and expensive to manufacture. The second one, the Slotted Buckling Restraining Brace (SBRB), solves these two shortcomings. It yields under axial forces, like conventional BRBs, but the usual solid core is replaced by a perforated plate. The core is a one-piece element composed of two lateral bands of nearly uniform section, designed to yield, connected by stabilizing bridges that remain elastic. Buckling of the lateral bands is prevented by the restraining unit and the stabilizing bridges. Design expressions have been proposed for both devices, and a numerical material model has been formulated and implemented in commercial finite element software to simulate the behaviour of the braces numerically, reducing the need for full-scale tests in their design.
20

Khorgami, M. H. « A conceptually innovative and practical approach to the modelling of household activities and travel behaviour ». Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1477519/.

Full text
Abstract:
Operational activity-based models have been developed, and are being used, in North America and some mainland European cities for policy analysis; in the UK the practical application of activity-based modelling is much more limited, despite the fact that available sources of travel and activity data enable such models to be developed. An extensive review of the literature reveals that most operational models do not fully embody the principles of an activity-based approach. In particular, (i) the models mainly concentrate on out-of-home activities, (ii) there is no explicit trade-off between taking part in an activity at home or out of home, (iii) they assume that only one activity takes place at each non-home location, and (iv) the basic unit of analysis in these models is the trip tour or the activity pattern. This research proposes an approach for the analysis and modelling of activities which addresses these shortcomings, using individual daily activities as the basic unit and starting point. The approach recognises that activity participation is influenced by physiological constraints as well as socio-economic characteristics. It also considers time availability and the overall time budget as further constraining factors. Inter-personal linkages and the trade-off between in-home and out-of-home activity time allocation are also incorporated in the proposed activity-based modelling framework. The model has two major sequential components: (1) an activity generation and household allocation model system, developed in depth, followed by (2) an outline activity/travel scheduling model system. The proposed modelling framework incorporates the possibility of engaging in multiple activities per non-home stop, and allows for trade-offs between in-home and out-of-home discretionary activity participation. The framework also allows for the joint engagement of household members in out-of-home activities. The predictive capability of the activity generation-allocation model system and some applications of the model in transport planning and policy are demonstrated.
21

Lamiani, Pietro. « Innovative approaches and instruments in modelling and monitoring the shelf life of packaged perishable foods ». Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/169035.

Full text
Abstract:
Shelf life models are mathematical equations that describe the relationship between food, package, and environment. Shelf life models are useful for predicting the shelf life of food, for designing food packages and for providing useful information. The accuracy with which these relationships are measured influences the reliability of the models. It is also necessary to know the relationships among the variables that play a significant role in defining food quality. Many chemical, biological, and physical changes depend on the moisture content or water activity of foods, whose changes may be dictated by the protection from the environment given by packaging in the distribution channel. Water activity is governed by the moisture content of the food, which may change with time through the permeation of water vapour across the packaging film. The equilibrium relationship between moisture content and water activity is described by the sorption/desorption isotherm, which is obtained mostly from experiments determining the moisture content of food samples equilibrated under different relative humidity conditions. Many attempts have been made to express this equilibrium relationship in mathematical form; the most widely used are the equations of Smith, Halsey, Henderson, Oswin, Iglesias, and Chirife. These equations are used in forecasting models to determine the shelf life of moisture-dependent products. All these models assume that the relative humidity of the packaging headspace (which changes as water vapour enters from outside) is immediately in equilibrium with the product. This condition has not been verified experimentally and is based on theoretical considerations. The aim of this PhD thesis is the study of innovative approaches and instruments to model and monitor the shelf life of packaged perishable foods. The crucial part of this work is to verify the assumption of immediate equilibrium made by moisture-dependent prediction models. To achieve this, an acquisition system based on moisture sensors, a recording unit and dedicated data-collection software was developed and built. The system was tested on different products and packages in order to define the proper materials and working conditions for the experimental plan and the system settings. A model system based on "Fetta biscottata" (a typical Italian bakery product chosen because of its regular shape, geometry, and uniformity among different batches) packaged in rigid EPS trays of different volumes was chosen. This model system made it possible to study how different quantities of product, different headspace volumes, and different packaging solutions influence the equilibrium between the relative humidity of the headspace and that of the product, and how these differences influence the shelf life prediction. The model system studied in the present work demonstrated that, in this case, the assumption generally used in shelf life modelling, which states immediate equilibrium between the headspace and product relative humidities, is not verified. These relative humidities are not immediately in equilibrium but show an increasing difference correlated with the headspace volume/product weight ratio. This correlation has been studied and described by a linear equation in the considered range. The application of a shelf life prediction model to the studied system has shown that a correction is needed to avoid an underestimation of the shelf life.
The application of this correction will be particularly interesting for the shelf life modelling of moisture-dependent products characterized by packaging solutions with high headspace volumes. In the second part of the work, the possibility of simplifying shelf life prediction for a category of products was studied, applying a single shelf life model to all the products in the category. Shelf life prediction is labour- and time-intensive work, and a simplification might be useful in real case studies, especially if the category includes a large number of references. The study of 14 different types of "Fetta biscottata" demonstrated that the work can be simplified only with simplifications based on a good knowledge of statistical indices and their correct application.
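To make the modelling approach discussed above concrete, the sketch below implements a standard, simplified moisture-gain shelf-life calculation with an Oswin isotherm and the usual instantaneous-equilibrium assumption that the thesis sets out to test. All parameter values are invented for illustration, and none of it is taken from the thesis.

```python
# Illustrative, textbook-style moisture-gain shelf-life estimate. Moisture
# uptake through the film is integrated until the product reaches a critical
# moisture content, assuming the headspace and product are instantaneously in
# equilibrium -- exactly the assumption questioned in the thesis.

P0 = 3.17          # kPa, saturation water-vapour pressure at 25 degC (approx.)
aw_out = 0.75      # storage relative humidity (fraction)
kx = 0.4           # film permeance, g water / (m2 * day * kPa)   [hypothetical]
A = 0.03           # package area, m2                              [hypothetical]
Ws = 100.0         # dry solids in the package, g                  [hypothetical]
C, n = 0.06, 0.45  # Oswin isotherm constants, g water / g solids  [hypothetical]
m, m_crit = 0.03, 0.08   # initial and critical moisture content, g/g

def aw_from_m(m):
    # Inverted Oswin isotherm: m = C * (aw / (1 - aw)) ** n
    r = (m / C) ** (1.0 / n)
    return r / (1.0 + r)

t, dt = 0.0, 0.1   # time in days
while m < m_crit:
    flux = kx * A * (aw_out - aw_from_m(m)) * P0   # g water per day
    m += flux / Ws * dt
    t += dt
print(f"estimated shelf life: {t:.0f} days")
```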
22

Aubret, Mathilde. « Development of innovative multi-physics solutions to optimize biological measure performance ». Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279160.

Full text
Abstract:
Paper strips are becoming widely used for blood testing, particularly with embedded immunoassays. Among so-called "Point of Care" devices, the association of LFIA tests with a reading technology able to capture the strip and analyze its composition to output the biological result is advantageous in many ways. In this thesis, a biosensor composed of an LFIA strip embedded in a reading device is investigated. The variability of the optical system used in the device is assessed, and the dynamics of the capillary flow in the LFIA strip are investigated in order to build a model of the flow field, to understand the test physically and to further optimize it. Statistical methods based on an available database as well as experimental data are used to assess the optical system variability. The camera variability in positioning, zoom and sharpness is investigated, as well as the reproducibility and repeatability of the LED lighting system. The camera shows a zoom variability of 6% when computed on the database of available devices, and a sharpness variability of around 40%. This reveals that sharpness is the most sensitive camera parameter, because images of different sharpness give varying results in the signal processing measurement method used to output the biological result. The lighting system has a variability below 1%, a reproducibility of 4%, and a repeatability below 1%; these low values support the reliability of the lighting system during the measurement process. The quantitative results for the optical system performance give valuable insight into the signal processing methods subsequently used to analyze the strip composition. The paper strip is composed of four different substrips placed horizontally in series, with different lengths and cross sections. The method consists of setting up a model for the flow field by combining Darcy's law and mass conservation at the junctions between strips. A relation between the fluid velocity and the travelled distance is deduced. The four paper parameters are determined experimentally and input into the model equations. The flow field model agrees well with the experimental results for the flow in the strip. The model exhibits velocity jumps at the junctions between strips, and the importance of paper parameters such as cross section, porosity, permeability and capillary pressure in governing the flow field is stressed. The microfluidic flow field model demonstrates that Darcy's law has to be modified in order to account for the pad superposition. This project led to a quantification of the optical system variability sources, which helps in choosing the signal processing methods embedded in the software used to read and analyse the LFIA strip test. These results will also be used for designing device quality tests to ensure that the device is able to output a reliable biological result. The microfluidic model, built and combined with experimental flow data, helps to understand and optimize the flow in the strip, which is responsible for bringing the analyte of interest towards the reaction zone on the strip. This model could be enhanced by linking it to a biochemical reaction kinetics model, and could further be integrated into a more general biosensor model to predict the biosensor's behavior physically and biologically.
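The flow-field model combines Darcy's law with mass conservation at the junctions between pads. In generic form, these are the standard relations for capillary-driven flow in porous strips, written here only as a reference and not as the thesis's exact formulation:

\[
v = -\frac{\kappa}{\mu\,\phi}\,\frac{\partial p}{\partial x},
\qquad
\phi_1 A_1 v_1 = \phi_2 A_2 v_2 \ \text{at a junction},
\]

where \(\kappa\) is the permeability, \(\phi\) the porosity, \(\mu\) the fluid viscosity, \(A\) the cross section, and the driving pressure difference is set by the capillary pressure of the wetted pad; the abrupt change in \(\phi A\) from one substrip to the next produces the velocity jumps mentioned in the abstract.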
Styles APA, Harvard, Vancouver, ISO, etc.
23

Schumacher, Katja. « Innovative energy technologies in energy-economy models ». Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2007. http://dx.doi.org/10.18452/15654.

Texte intégral
Résumé :
Die Einführung neuartiger Energietechnologien wird allgemein als der Schlüssel zur Senkung klimaschädlicher Treibhausgase angesehen. Allerdings ist die Abbildung derartiger Technologien in numerischen Modellen zur Simulation und ökonomischen Analyse von energie- und klimaschutzpolitischen Maßnahmen vielfach noch rudimentär. Die Dissertation entwickelt neue Ansätze zur Einbindung von technologischen Innovationen in energie-ökonomische allgemeine Gleichgewichtsmodelle, mit dem Ziel den Energiesektor realitätsnäher abzubilden. Die Dissertation adressiert einige der Hauptkritikpunkte an allgemeinen Gleichgewichtsmodellen zur Analyse von Energie- und Klimapolitik: Die fehlende sektorale und technologische Disaggregation, die beschränkte Darstellung von technologischem Fortschritt, und das Fehlen von einem weiten Spektrum an Treibhausgasminderungsoptionen. Die Dissertation widmet sich zwei Hauptfragen: (1) Wie können technologische Innovationen in allgemeine Gleichgewichtsmodelle eingebettet werden? (2) Welche zusätzlichen und politikrelevanten Informationen lassen sich durch diese methodischen Erweiterungen gewinnen? Die Verwendung eines sogenannten Hybrid-Ansatzes, in dem neuartige Technologien für Stromerzeugung und Eisen- und Stahlherstellung in ein dynamisch multi-sektorales CGE Modell eingebettet werden, zeigt, dass technologiespezifische Effekte von großer Bedeutung sind für die ökonomische Analyse von Klimaschutzmaßnahmen, insbesondere die Effekte hinsichtlich von Technologiewechsel und dadurch bedingten Änderungen der Input- und Emissionsstrukturen. Darüber hinaus zeigt die Dissertation, dass Lerneffekte auf verschiedenen Stufen der Produktionskette abgebildet werden müssen: Für regenerative Energien, zum Beispiel, nicht nur bei der Anwendung von Stromerzeugungsanlagen, sondern ebenso auf der vorgelagerten Produktionsstufe bei der Herstellung dieser Anlagen. Die differenzierte Abbildung von Lerneffekten in Exportsektoren, wie zum Beispiel Windanlagen, verändert die Wirtschaftlichkeit und die Wettbewerbsfähigkeit und hat wichtige Implikationen für die ökonomische Analyse von Klimapolitik.
Energy technologies and innovation are considered to play a crucial role in climate change mitigation. Yet the representation of technologies in energy-economy models, which are used extensively to analyze the economic, energy and environmental impacts of alternative energy and climate policies, is rather limited. This dissertation presents advanced techniques for including technological innovations in energy-economy computable general equilibrium (CGE) models. New methods are explored and applied to improve the realism of energy production and consumption in such top-down models. The dissertation addresses some of the main criticisms of general equilibrium models in the field of energy and climate policy analysis: the lack of detailed sectoral and technical disaggregation, the restricted view of innovation and technological change, and the lack of extended greenhouse gas mitigation options. The dissertation reflects on the questions of (1) how to introduce innovation and technological change into a computable general equilibrium model and (2) what additional, policy-relevant information is gained from using these methodologies. Employing a new hybrid approach that incorporates technology-specific information for electricity generation and iron and steel production in a dynamic multi-sector computable general equilibrium model, it can be concluded that technology-specific effects are crucial for the economic assessment of climate policy, in particular the effects relating to process shifts and fuel input structure. Additionally, the dissertation shows that learning-by-doing in renewable energy takes place in the renewable electricity sector but is equally important in upstream sectors that produce technologies, i.e., machinery and equipment, for renewable electricity generation. The differentiation of learning effects in export sectors, such as renewable energy technologies, matters for the economic assessment of climate policies because of effects on international competitiveness and economic output.
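For readers unfamiliar with how learning-by-doing is usually represented in such hybrid settings, a common single-factor learning (experience) curve — given here as a generic textbook form, not necessarily the exact specification used in the dissertation — relates unit cost to cumulative installed capacity x:

\[ C(x) \;=\; C_0 \left(\frac{x}{x_0}\right)^{-b}, \qquad \mathrm{LR} \;=\; 1 - 2^{-b}, \]

so each doubling of cumulative capacity lowers unit cost by the learning rate LR. Placing such a curve in the upstream equipment-producing sectors, rather than only in the electricity-generating sector, is precisely the kind of differentiation whose economic consequences are discussed above.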
Styles APA, Harvard, Vancouver, ISO, etc.
24

NICODEMO, GIUSEPPE. « Exposure modelling and loss estimation for seismic risk assessment of residential buildings : innovative methods and applications ». Doctoral thesis, Università degli studi della Basilicata, 2021. http://hdl.handle.net/11563/146804.

Texte intégral
Résumé :
Defining the seismic hazard, assessing the vulnerability of the main components of the built environment and, consequently, estimating the expected losses are key steps for setting up effective post-event emergency plans as well as medium- to long-term mitigation strategies. Despite the significant advancements in knowledge achieved in recent years, several points need to be developed further. Among them, the collection of reliable building inventories, the selection of appropriate measures of seismic intensity and the definition of accurate loss estimation models still pose challenges for the scientific community. The present PhD thesis aims at providing a contribution in this direction. After a comprehensive state-of-the-art review of the seismic risk components, along with a literature review focused on the main models used to estimate expected seismic losses, some new procedures related to hazard, exposure and loss estimation have been proposed and applied. Firstly, a model aimed at estimating direct economic losses (i.e., building repair costs) has been developed by improving the models currently available in the literature. These models generally account only for the severity of damage (i.e., the maximum damage level), while damage extension and distribution, especially along the building height, are considered only implicitly in the repair cost values. While the assessment of the safety condition depends essentially on damage severity, damage extension strongly affects the estimation of the economic impact. In this regard, the proposed model allows both damage severity and its distribution along the building height to be considered explicitly. The model is applicable to both Reinforced Concrete (RC) and masonry building types. It requires the determination of the most frequent damage distributions along the building height. At the current state, the procedure has been specifically implemented for existing Reinforced Concrete (RC) building types by performing Non-Linear Dynamic Analyses (NLDAs). As for the seismic hazard, correlations between macroseismic intensities and ground motion parameters have been derived by processing data related to Italian earthquakes that occurred in the last 40 years. Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV) and Housner Intensity (IH) have been considered as instrumental measures, and the European Macroseismic Scale (EMS-98) and the Mercalli-Cancani-Sieberg (MCS) scale as macroseismic measures. The correlations can be used both to adopt empirical damage estimation methods (e.g., Damage Probability Matrices) and to convert the macroseismic data of historical earthquakes into instrumental intensity values, more suitable for risk analyses and design practice. Concerning exposure, an innovative methodology has been developed to convert the information on the typological characteristics collected through the AeDES form (currently used in Italy in post-earthquake usability surveys) into recognized international standards such as the taxonomy proposed by the Global Earthquake Model (GEM) and the EMS-98 building types. The methodology makes it possible to fully exploit the exposure and vulnerability data of post-earthquake surveys related to the Italian built environment and to define an exposure model in terms of risk-oriented classes, more suitable for large-scale risk assessments.
Furthermore, an approach based on the integration of data collected with the CARTIS procedure (i.e., a protocol used in Italy for the typological-structural characterization of buildings at the regional scale) and with the RRVS web-based platform (i.e., a remote visual screening based on satellite images) has been proposed and specifically applied to the village of Calvello (Basilicata region, Southern Italy). This approach represents a useful tool for compiling residential building inventories in a quick and inexpensive way, thus being very suitable for data-poor and economically developing countries. To better illustrate the proposed methodological developments, some applications are provided in the last part of the thesis. The first one proposes a comparison among the results obtained by applying some casualty estimation models available in the literature, using the vulnerability and damage data collected in the L'Aquila urban area after the 2009 earthquake (data available on the Observed Damage Database Da.D.O. platform). Then, using the same data source, an exposure model in terms of EMS-98 types based on the 2009 post-earthquake data has been implemented for the residential buildings of L'Aquila town and the surrounding municipalities involved in the usability assessment surveys. The third, more extensive, application deals with the seismic risk assessment of the Val d'Agri area (Basilicata region, Southern Italy). This area has a strategic role for Italy due to the large quantities of oil extracted from local deposits, which make available large resources deriving from royalties. Specifically, earthquake damage scenarios for the residential building stock of 19 villages have been prepared. Considering a seismic vulnerability distribution obtained from the integration of a building-by-building inventory and information collected with the CARTIS and RRVS approaches, the expected losses deriving from a seismic event with an exceedance probability of 10% in 50 years (475-year return period) have been determined. Finally, an action plan for seismic risk mitigation, essentially based on the reduction of the vulnerability of the building stock through a structural strengthening program, has been proposed and specifically applied to one of the villages in the area under study.
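As a check on the hazard level quoted above, the 10% exceedance probability in 50 years and the 475-year return period are linked by the standard Poisson (memoryless) assumption used in seismic hazard analysis:

\[ P \;=\; 1 - e^{-t/T_R} \qquad\Rightarrow\qquad T_R \;=\; -\frac{t}{\ln(1-P)} \;=\; -\frac{50}{\ln(0.9)} \;\approx\; 475 \ \text{years}. \]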
Styles APA, Harvard, Vancouver, ISO, etc.
25

Meng, Fanlin. « The impact of innovative effluent permitting policy on urban wastewater system performance ». Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/19393.

Texte intégral
Résumé :
This thesis investigates innovative effluent point-source permitting approaches from an integrated urban wastewater system (UWWS) perspective, and demonstrates that three proposed permitting approaches based on optimal operational or control strategies of the wastewater system are effective in delivering multiple and balanced environmental benefits (water quality, GHG emissions) in a cost-efficient manner. Traditional permitting policy and current flexible permitting practices are first reviewed, and opportunities for permitting from an integrated UWWS perspective are identified. An operational strategy-based permitting approach is then developed through a four-step permitting framework. Based on integrated UWWS modelling, operational strategies are optimised with objectives including minimisation of operational cost, variability of treatment efficiency and environmental risk, subject to compliance with environmental water quality standards. As trade-offs exist between the three objectives, the optimal solutions are screened according to the decision-makers' preferences and permits are derived from the selected solutions. The advantages of this permitting approach over the traditional regulatory method are that: a) cost-effectiveness is considered in decision-making, and b) permitting based on operational strategies is more reliable in delivering the desired environmental outcomes. In the studied case, the selected operational strategies achieve over 78% lower environmental risk with at least 7% lower operational cost than the baseline scenario; in comparison, traditional end-of-pipe limits can lead to expensive solutions with no better environmental water quality. The developed permitting framework facilitates the derivation of sustainable solutions because: a) stakeholders are involved at all points of the decision-making process, so that the various impacts of the operation of the UWWS can be considered, and b) a multi-objective optimisation algorithm and a visual analytics tool are employed to efficiently optimise and select high-performance operational solutions. The second proposed permitting approach is based on optimal integrated real-time control (RTC) strategies. Permits are developed through a three-step decision-making analysis framework similar to the first approach. An off-line model-based predictive aeration control strategy is investigated for the case study, and further benefits (9% lower environmental risk and 0.6% less cost) are achieved by an optimal RTC strategy exploiting the dynamic assimilation capacity of the environment. A similar permitting approach, simpler than the first two methods, is developed to derive operational/control strategy-based permits through an integrated cost-risk analysis framework. Less comprehensive modelling and optimisation skills are needed, as it couples a dynamic wastewater system model with a stochastic permitting model and uses sensitivity and scenario analysis to optimise operational/control strategies; this approach can therefore be a good option for developing risk-based, cost-effective permits without intensive resources. Finally, roadmaps for the implementation of the three innovative permitting approaches are discussed. Current performance-based regulations and self-monitoring schemes are used as examples to illustrate the new way of permitting. The viability of the proposed methods as alternative regulation approaches is evaluated against the core competencies of modern policy-making.
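To make the screening step described above more concrete, the sketch below shows how a set of candidate operational strategies, each scored on the three objectives mentioned (operational cost, variability of treatment efficiency, environmental risk), could be reduced to its Pareto-optimal subset before decision-makers apply their preferences. This is a minimal illustration of the general idea, not the optimisation framework actually used in the thesis; strategy names and objective values are invented.

    # Minimal Pareto-front filter over candidate operational strategies (all objectives minimised).
    def dominates(a, b):
        """True if objective vector a is at least as good as b everywhere and strictly better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        """Keep only the candidates that no other candidate dominates."""
        return [
            (name, obj) for name, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates if other != obj)
        ]

    # Hypothetical strategies: (operational cost, treatment-efficiency variability, environmental risk)
    strategies = [
        ("baseline",           (100.0, 0.30, 1.00)),
        ("optimised aeration", ( 93.0, 0.25, 0.22)),
        ("storage rerouting",  ( 95.0, 0.20, 0.35)),
        ("aggressive dosing",  (110.0, 0.15, 0.20)),
    ]
    for name, obj in pareto_front(strategies):
        print(name, obj)   # "baseline" is dominated and drops out; the other three remain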
Styles APA, Harvard, Vancouver, ISO, etc.
26

DUO, Enrico. « Innovative Approaches for the Evaluation of Coastal Risk on Sandy Mediterranean Beaches ». Doctoral thesis, Università degli studi di Ferrara, 2018. http://hdl.handle.net/11392/2487960.

Texte intégral
Résumé :
The increase in frequency and intensity of extreme coastal storms and the continuous exponential development of the world's coasts are threatening coastal communities, exposing them to higher levels of risk. Although future projections are affected by large uncertainty, coastal managers, as recommended by the United Nations and the European Union, need to properly evaluate coastal risk in order to propose adequate risk reduction plans for current and future climate change scenarios. This should be done while considering all the components that influence risk: hazard, vulnerability and exposure. The involvement of local stakeholders and the adoption of multi-disciplinary approaches, including the social sciences, are becoming very common in coastal risk studies, supporting the idea that the same should be done at the management level to properly address coastal risk issues. The work of this PhD thesis aimed at applying innovative approaches for the evaluation of coastal risk, at different scales, on Mediterranean sandy beaches. The innovations relate to fieldwork methodologies, numerical applications and coastal risk assessment. Part of the work was done in the framework of the EU FP7 RISC-KIT project, which aimed at providing tools in support of coastal managers in order to increase the resilience of coastal communities. The approaches were implemented at locations along the Emilia-Romagna (Italy) and Catalunya (Spain) coasts. The first part of this thesis focuses on fieldwork activities. Post-storm and seasonal surveys were implemented based on up-to-date low-cost drones and photogrammetric post-processing techniques. The approach made it possible to collect local-scale high-resolution data used to integrate regional post-storm assessments, including qualitative information collected by involving the local community, and to detect significant changes of the beach due to the influence of coastal storms and winds. Numerical models were used to analyse the propagation of errors due to the use of synthetic wave time-series in a process-based chain of models used to simulate erosion and flooding hazards. Results were analysed with a Bayesian-based approach. The use of synthetic time-series can produce significant errors in the hazard assessment, compared with the use of real ones, and can influence the risk assessment. Then, two coastal risk assessments are presented, at the regional and local levels respectively. The studies were implemented using the RISC-KIT tools: the Coastal Risk Assessment Framework (CRAF) Phase 1 for the identification of critical areas (hotspots) at the regional level, and the Bayesian-based Hotspot tool for testing local disaster-reduction measures under current and future scenarios. The CRAF Phase 1 was validated on the Emilia-Romagna coast, confirming that it is able to detect well-known hotspots. The Hotspot tool provided useful insights on the tested measures at the two analysed case study sites, in Italy and Spain. The applications confirmed that the RISC-KIT approach for regional- and local-scale assessments is valuable for coastal managers in order to propose adequate solutions for risk reduction. An interesting aspect of this PhD work is that the majority of the applications were implemented by including local people and managers in the process.
Large parts of the integrated risk assessments were supported by a strong collaboration between physical and social scientists, confirming that a multi-disciplinary approach is a key aspect of properly understanding and reducing coastal risk. Coastal managers should take into account all the aspects analysed in this PhD thesis that can affect risk assessments, from fieldwork to deskwork. They should be able to properly address risk by interacting with physical and social scientists, as well as with local communities, if they want to provide effective and acceptable risk reduction strategies.
Le coste del mondo sono minacciate dall'incremento, in frequenza ed intensità, delle mareggiate e dello sviluppo costiero. Pertanto, le comunità costiere sono esposte a livelli di rischio sempre più elevati. Le Nazioni Unite e l'Unione Europea richiedono ai manager costieri di valutare il rischio delle coste per proporre piani di riduzione adeguati, sia per lo scenario attuale, sia per quello futuro, considerando i possibili effetti del cambiamento climatico, sebbene le proiezioni future siano caratterizzate da incertezze non trascurabili. Le valutazioni di rischio devono essere basate considerando pericolosità, vulnerabilità ed esposizione. Dovrebbero essere svolte adottando approcci multi-disciplinari e coinvolgendo i portatori di interesse. Il lavoro oggetto di questa tesi è stato svolto applicando approcci innovativi per la valutazione del rischio su spiagge sabbiose del Mediterraneo, a diverse scale spaziali. Le applicazioni riguardano diversi campi: rilievi di spiaggia, modellazione numerica e valutazione integrata del rischio. Parte del lavoro è stato svolto nell'ambito del progetto EU FP7 RISC-KIT, il cui obiettivo era fornire ai manager costieri strumenti utili alla riduzione del rischio ed all'incremento della resilienza delle comunità costiere. Gli approcci sono stati applicati in località costiere in Emilia-Romagna (Italia) e Catalogna (Spagna). La prima parte di questa tesi riguarda aspetti di misure sul campo. Sono stati utilizzati droni a basso costo e tecniche di fotogrammetria per rilievi post-evento e stagionali. Sono stati raccolti a scala locale dati ad alta risoluzione utili sia ad integrare rilievi post-evento a scala regionale, includendo informazioni qualitative ottenute coinvolgendo la comunità locale, sia all'analisi delle variazioni della spiaggia dovute alle mareggiate e ai venti. Sono stati utilizzati modelli numerici per analizzare, tramite approccio Bayesiano, la propagazione degli errori dovuti all'utilizzo di mareggiate sintetiche in input ad una catena di modelli per la simulazione dell'erosione ed inondazione costiera. L'uso di input sintetici produce errori significativi nella valutazione dei pericoli, se confrontato con l'uso di serie temporali reali, e può avere effetti importanti sulle successive analisi del rischio. Inoltre, si presentano due valutazioni di rischio, a scala regionale e locale. Sono stati applicati gli strumenti forniti da RISC-KIT, il Coastal Risk Assessment Framework (CRAF) Phase 1 per l'identificazione delle aree critiche (hotspot) a livello regionale e l'Hotspot tool, un approccio Bayesiano per l'analisi dell'efficacia di misure di riduzione del rischio, per gli scenari attuali e futuri. Il CRAF Phase 1 è stato validato per la costa dell'Emilia-Romagna dimostrandosi efficace nell'identificare aree critiche note. L'Hotspot tool ha fornito informazioni utili alla caratterizzazione delle misure in entrambi i casi studio, in Italia e Spagna. Le analisi hanno dimostrato che l'approccio di RISC-KIT è utile ai manager costieri per la preparazione di piani adeguati di riduzione del rischio. Un aspetto interessante di questo lavoro riguarda il coinvolgimento dei portatori di interesse e dei manager costieri nella maggior parte delle analisi svolte. La collaborazione tra studiosi delle scienze naturali e sociali è stata di estrema importanza per l'appropriata valutazione del rischio costiero a diverse scale spaziali. Questo conferma che l'approccio multi-disciplinare è un aspetto chiave per comprendere e ridurre il rischio costiero. 
I manager costieri dovrebbero tenere in considerazione tutti gli aspetti che influenzano le valutazioni di rischio presentati in questa tesi, dalle misure sul campo al lavoro alla scrivania. Dovrebbero interagire maggiormente con i ricercatori che studiano le coste, sia dal punto di vista fisico sia sociale, e con i portatori di interesse, per fornire strategie per la riduzione del rischio che siano efficaci e condivisibili.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Haq, Izhar Ul. « Innovative configurable and collaborative approach to automation systems engineering for automotive powertrain assembly ». Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/15178.

Texte intégral
Résumé :
Presently the automotive industry is facing enormous pressure due to global competition and ever-changing legislative, economic and customer demands. Both agility and reconfiguration are widely recognised as important attributes for manufacturing systems to satisfy the needs of competitive global markets. To facilitate and accommodate unforeseen business changes within the automotive industry, a new proactive methodology is urgently required for the design, build, assembly and reconfiguration of automation systems. There is also a need for the promotion of new technologies and engineering methods to enable true engineering concurrency between product and process development. Virtual construction and testing of new automation systems prior to build is now identified as a crucial requirement to enable system verification and to allow the investigation of design alternatives prior to building and testing physical systems. The main focus of this research was to design and develop reconfigurable assembly systems within the powertrain sector of the automotive industry by capturing and modelling the relevant business and engineering processes. This research has proposed and developed a more process-efficient and robust approach to automation system design, build and implementation via new engineering services and a standard library of reusable mechanisms. Existing research at Loughborough had created the basic technology for a component-based approach to automation. However, no research had previously been undertaken on the application of this approach in a user engineering and business context. The objective of this research was therefore to utilise this prototype method and the associated engineering tools, and to devise novel business and engineering processes to enable the component-based approach to be applied in industry. This new approach has been named Configurable and Collaborative Automation Systems (COAS). In particular, this new research has studied the implications of migration to a COAS approach in terms of 1) the necessary changes to the end-users' business processes, 2) the potential to improve the robustness of the resultant system and 3) the potential for improved efficiency and greater collaboration across the supply chain.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Cazorla, Marín Antonio. « MODELLING AND EXPERIMENTAL VALIDATION OF AN INNOVATIVE COAXIAL HELICAL BOREHOLE HEAT EXCHANGER FOR A DUAL SOURCE HEAT PUMP SYSTEM ». Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/125696.

Texte intégral
Résumé :
[ES] La energía geotérmica de baja entalpía es una alternativa eficiente y renovable a los sistemas convencionales para proporcionar calefacción, refrigeración y producir agua caliente sanitaria (ACS) de forma sostenible. El proyecto GEOTeCH plantea el desarrollo de sistemas con bomba de calor geotérmica más eficientes y con un coste menor en comparación con el mercado. Para ello, se ha desarrollado un nuevo tipo de intercambiador enterrado coaxial con flujo helicoidal en el tubo externo que presenta una mayor eficiencia y permite reducir la longitud de intercambiador a instalar, así como una bomba de calor dual con compresor de velocidad variable, capaz de trabajar con el terreno o el aire como fuente/sumidero, seleccionando la que proporcione un mejor rendimiento del sistema. El principal objetivo es desarrollar un sistema eficiente y replicable para proporcionar calefacción, refrigeración y producir ACS en el sector de mercado de pequeños edificios con un tamaño menor en el campo de intercambiadores enterrados y un aumento de la eficiencia. Para demostrar la aplicabilidad de estos sistemas, se han construido tres instalaciones demostración en tres países europeos. En esta tesis doctoral se ha desarrollado un modelo dinámico completo del sistema en el software TRNSYS, capaz de reproducir el comportamiento de los diferentes componentes y del sistema en general. Este modelo constituye una herramienta útil para el desarrollo y análisis de diferentes estrategias de control sin la necesidad de implementarlas en instalaciones reales, así como analizar el comportamiento del sistema funcionando bajo condiciones diferentes. Para este propósito, es necesario desarrollar modelos detallados de los nuevos componentes desarrollados en el proyecto: el intercambiador enterrado coaxial helicoidal y la bomba de calor dual; para poder acoplarlos al resto de componentes en el modelo completo del sistema. Por ello, se ha desarrollado un modelo dinámico del nuevo intercambiador, capaz de reproducir con precisión el comportamiento a corto plazo del intercambiador, enfocado a la evolución de la temperatura del fluido, y se ha validado con datos experimentales en diferentes condiciones de operación. Para poder reproducir no solo el comportamiento dinámico del intercambiador enterrado, sino también la respuesta a largo plazo del terreno y la interacción entre intercambiadores en un campo, se ha desarrollado otro modelo en TRNSYS que realiza esta función. De esta manera, al acoplar ambos modelos es posible reproducir el comportamiento a corto plazo del intercambiador enterrado a la vez que la respuesta a largo plazo del terreno. Por otro lado, se ha implementado en TRNSYS un modelo de la bomba de calor dual desarrollado. Con este modelo es posible calcular la capacidad de la bomba de calor dependiendo del modo de operación en que esté funcionando, de la frecuencia del compresor y otras variables y condiciones de operación. El modelo del sistema dual en TRNSYS se ha utilizado para hacer un análisis de su comportamiento funcionando en diferentes climas, para ello se han seleccionado tres ciudades en España y en Europa con diferentes climas y se han realizado simulaciones del sistema funcionando en cada ciudad. Por otro lado, también se ha modelado en TRNSYS una de las instalaciones demostración del proyecto GEOTeCH, incluyendo el edificio climatizado y el acoplamiento con los fan coils. 
Con este modelo se estudia una nueva estrategia para controlar la frecuencia del compresor en base a la temperatura de las habitaciones, en lugar de controlarla en base a la temperatura de suministro, con el objetivo de reducir el consumo del compresor cuando ya se haya conseguido el confort. Además, otras estrategias de optimización se han analizado con el modelo.Por tanto, los modelos desarrollados constituyen herramientas útiles para ayudar en el diseño del sistema y los diferentes componentes, el análisis de su comportamiento y el d
[CAT] L'energia geotèrmica de baixa entalpia es planteja com una alternativa eficient i renovable als sistemes convencionals per proporcionar calefacció, refrigeració i produir aigua calenta sanitària (ACS) de forma sostenible. El projecte GEOTeCH planteja el desenvolupament de sistemes amb bomba de calor geotèrmica més eficients i amb un cost menor en comparació amb el mercat. Per a això, s'ha desenvolupat un nou tipus d'intercanviador enterrat coaxial amb flux helicoïdal en el tub extern que presenta una major eficiència i permet reduir la longitud a instal·lar, així com una bomba de calor dual amb compressor de velocitat variable, capaç de treballar amb el terreny o l'aire com a font, seleccionant la que proporcione un millor rendiment. Aquests components s'utilitzen en el nou sistema amb bomba de calor dual. El principal objectiu és desenvolupar un sistema eficient i replicable per proporcionar calefacció, refrigeració i produir ACS en edificis xicotets amb una grandària menor d'intercanviadors soterrats i un augment de l'eficiència. Per demostrar l'aplicabilitat d'aquests sistemes, s'han construït tres instal·lacions demostració en Itàlia, Països Baixos i Regne Unit. En aquesta tesi s'ha desenvolupat un model dinàmic complet del sistema en TRNSYS, capaç de reproduir el comportament dels components i del sistema en general. Aquest model constitueix una eina útil per al desenvolupament i anàlisi de diferents estratègies de control sense la necessitat d'implementar-les en instal·lacions reals, així com analitzar el comportament del sistema funcionant en condicions diferents. Per a això, cal desenvolupar models detallats dels nous components desenvolupats en el projecte: l'intercanviador enterrat i la bomba de calor dual; per poder acoblar-los a la resta de components. Per això, s'ha desenvolupat un model dinàmic del nou intercanviador enterrat, capaç de reproduir amb precisió el comportament a curt termini de l'intercanviador, enfocat a l'evolució de la temperatura del fluid, i s'ha validat amb dades experimentals en diferents condicions d'operació. Per a poder reproduir no només el comportament dinàmic de l'intercanviador soterrat, sinó també la resposta a llarg termini del terreny i la interacció entre intercanviadors en un camp, s'ha desenvolupat un altre model en TRNSYS que realitza aquesta funció. D'aquesta manera, en acoblar els dos models és possible reproduir el comportament a curt termini de l'intercanviador enterrat, al mateix temps que la resposta a llarg termini del terreny. D'altra banda, s'ha implementat en TRNSYS un model de la bomba de calor. Amb aquest model és possible calcular la capacitat de la bomba de calor depenent del mode d'operació en què estiga funcionant, de la freqüència del compressor i altres variables i condicions d'operació. El model del sistema dual en TRNSYS s'ha utilitzat per a fer una anàlisi del seu comportament funcionant en diferents climes, per a això s'han seleccionat tres ciutats a Espanya i tres a Europa amb diferents climes i s'han realitzat simulacions del sistema funcionant en cada ciutat durant un any. S'ha analitzat l'eficiència del sistema en cada ciutat, així com l'ús de cadascuna de les fonts (aire / terreny). D'altra banda, també s'ha modelat en TRNSYS una de les instal·lacions demostració del projecte GEOTeCH, incloent l'edifici d'oficines climatitzat i l'acoblament amb els fan coils. 
Amb aquest model es pretén estudiar una nova estratègia per a controlar la freqüència del compressor d'acord amb la temperatura de les habitacions, en lloc de controlar-la en base a la temperatura de subministrament, amb l'objectiu de reduir el consum del compressor quan les habitacions ja es troben en condicions de confort. A més, altres estratègies d'optimització s'han analitzat amb el model. Per tant, els models desenvolupats constitueixen eines útils per ajudar en el disseny del sistema i els diferents components, l'anàlisi del
[EN] Low enthalpy geothermal energy is considered as an efficient and renewable alternative to conventional systems to provide heating, cooling and Domestic Hot Water (DHW) production in a sustainable way. In this context, the GEOTeCH project proposes the development of more efficient geothermal heat pump systems with a lower cost compared to the market. To this end, a new type of coaxial Borehole Heat Exchanger (BHE) with helical flow through the outer tube has been developed, which presents a higher efficiency and allows to reduce the length of the heat exchanger to be installed, as well as a Dual Source Heat Pump (DSHP) with variable speed compressor, capable of working with the ground or air as a source / sink, selecting the one that provides the best performance of the system. These components are used in the new DSHP system developed. The main objective is to develop efficient and replicable systems to provide heating, cooling and DHW in the market sector of small buildings with a smaller size of the BHE field and an increase in the efficiency. To demonstrate the applicability of these systems, three demonstration facilities have been installed in Italy, the Netherlands and the UK. In this thesis, a complete dynamic model of the system has been developed in the TRNSYS software, capable of reproducing the behavior of the different components and the system in general. This model is a useful tool for the development and analysis of different control strategies without the need to implement them in real installations, as well as analyses the behavior of the system operating under different conditions. For this purpose, it is necessary to develop detailed models of the new components developed in the project: the BHE and the DSHP; to couple them to the rest of the components of the system. For this reason, a dynamic model of the new BHE was developed, able to accurately reproduce its short-term behavior, focused on the evolution of the fluid temperature, and validated with experimental data in different operating conditions. In order to reproduce not only the dynamic behavior of the BHE, but also the long-term response of the ground and the interaction between BHEs in a field, another model was developed in TRNSYS. In this way, by coupling both models, it is possible to reproduce the short-term behavior of the BHE as well as the long-term response of the ground. On the other hand, a model of the DSHP was implemented in TRNSYS. With this model, it is possible to calculate the capacity of the heat pump depending on the operating mode in which it is operating, the frequency of the compressor and other variables and operating conditions. The model of the hybrid system in TRNSYS has been used to make an analysis of its behavior working in different climatic conditions, for which three cities have been selected in Spain and three in Europe, with different climates. So, different simulations of the system have been carried out in each city for one year. The efficiency of the system in each city has been analyzed, as well as the use of each of the sources (air / ground). On the other hand, one of the demo-sites of the GEOTeCH project, including the conditioned office building and the coupling with the fan coils, has also been modelled in TRNSYS. 
With this model, it is studied a new strategy to control the frequency of the compressor based on the temperature of the rooms, instead of controlling it based on the supply temperature, with the aim of reducing the consumption of the compressor when the rooms are already in comfort conditions. In addition, other optimization strategies have been analyzed with the model. Therefore, the models developed, both for the BHE and the system, are able to reproduce their operation and can be used as virtual installations, constituting useful tools to help in the design of the system and the different components, the analysis of their behavior and the development of optimization strategies.
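As an illustration of the dual-source selection logic described above, the sketch below imagines the controller choosing whichever source (air or ground) is expected to give the better coefficient of performance at the current source temperatures. This is only a simplified sketch of the decision structure, not the control law actually implemented in the GEOTeCH system, and the COP correlations are invented placeholders.

    # Simplified dual-source selection: pick air or ground depending on the expected COP.
    # The COP models below are made-up linear placeholders; only the decision structure matters here.
    def cop_air_source(t_air_c):
        return 2.5 + 0.08 * t_air_c        # hypothetical: improves with warmer outdoor air

    def cop_ground_source(t_ground_c):
        return 3.0 + 0.05 * t_ground_c     # hypothetical: the ground is more stable year-round

    def select_source(t_air_c, t_ground_c):
        cop_air = cop_air_source(t_air_c)
        cop_ground = cop_ground_source(t_ground_c)
        return ("air", cop_air) if cop_air > cop_ground else ("ground", cop_ground)

    # Example (heating mode): mild afternoon vs cold night
    print(select_source(t_air_c=15.0, t_ground_c=12.0))   # mild air -> the air source wins
    print(select_source(t_air_c=-2.0, t_ground_c=11.0))   # cold air -> the ground source wins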
I would like to acknowledge the financial support that has made this PhD thesis possible. The present work has been supported by the European Community Horizon 2020 Program for European Research and Technological Development (2014-2020) inside the framework of the project 656889 – GEOTeCH (Geothermal Technology for Economic Cooling and Heating), also by the Generalitat Valenciana inside the program “Ayudas para la contratación de personal investigador en formación de carácter predoctoral (ACIF/2016/131)” and by the Institute for Energy Engineering of the Universitat Politècnica de València.
Cazorla Marín, A. (2019). MODELLING AND EXPERIMENTAL VALIDATION OF AN INNOVATIVE COAXIAL HELICAL BOREHOLE HEAT EXCHANGER FOR A DUAL SOURCE HEAT PUMP SYSTEM [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/125696
Styles APA, Harvard, Vancouver, ISO, etc.
29

PAVLOVIC, MILORAD. « Multi scale procedures for the structural identification of historic Masonry Bell Towers by innovative monitoring and structural modelling techniques ». Doctoral thesis, Università IUAV di Venezia, 2017. http://hdl.handle.net/11578/278741.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

ARENA, SIMONE. « Modelling, design and analysis of innovative thermal energy storage systems using PCM for industrial processes, heat and power generation ». Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266776.

Texte intégral
Résumé :
The topic of this PhD thesis is framed within the study and analysis of thermal energy storage (TES) systems based on phase change materials (PCM) to be used as a back-up for intermediate-temperature applications (up to 250 °C). The work is divided into two parts: the first part presents the development of numerical models of latent heat thermal energy storage (LHTES) devices. Different models are developed by means of 2D and 3D numerical simulation codes specifically implemented in the COMSOL Multiphysics environment. The design of LHTES devices requires knowledge of the heat transfer process within them, as well as of the phase change behaviour of the PCM used. To simulate the PCM, two approaches are used: the first approach takes into account only heat transfer by conduction during the entire process, also when the PCM is in the liquid phase. In the second, the energy equation considering both heat conduction and natural convection is solved to predict the behaviour of the PCM. Different PCM materials, geometries and configurations of the storage device are considered and tested in order to evaluate the effectiveness of the adopted numerical codes under different working conditions. Finally, the models are validated using experimental data obtained from tests carried out on a finned double-tube heat exchanger with Rubitherm® RT35 paraffin as the PCM. The tests were conducted in the laboratories of the University of Lleida (Spain) by the research group GREA Innovació Concurrent. The second part of this work concerns the design and implementation of a test rig, specifically built for the experimental investigation of heat storage devices in the laboratory for TES technologies of the University of Cagliari. An accurate study and selection of both the test rig layout and all the needed equipment was carried out in order to perform the experimental analysis. The test bench is composed of an electrical heater, which heats the HTF up to the operating temperature, an air cooler, which simulates the thermal demand during the discharge phase, an HTF circulating pump, two test sections for thermal energy storage systems, and a preliminary TES device consisting of a shell-and-tube heat exchanger, where the HTF flows in the tubes while the PCM is placed on the shell side. At this stage, the thermal energy storage system, the measuring devices and the data acquisition system are under implementation.
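A common way of embedding the phase change in the energy equation of such models — given here only as a generic sketch, since the exact formulation adopted in the thesis is not detailed in this summary — is the apparent heat capacity method, in which the latent heat L is smeared over the melting range [T_s, T_l]:

\[ \rho\, c_{p,\mathrm{app}}(T)\,\frac{\partial T}{\partial t} \;=\; \nabla\cdot\big(k\,\nabla T\big), \qquad c_{p,\mathrm{app}}(T) \;=\; c_p + \frac{L}{T_l - T_s} \quad \text{for } T_s \le T \le T_l. \]

The conduction-only approach corresponds to solving this equation with the liquid at rest, while the second approach adds an advective term (or an enhanced effective conductivity) in the molten region to represent natural convection.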
Styles APA, Harvard, Vancouver, ISO, etc.
31

GRASSI, DAVIDE. « Tecnologie innovative per il consolidamento di substrati di fondazione e opere geotecniche ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2022. http://hdl.handle.net/10281/366246.

Texte intégral
Résumé :
The PhD research activities are focused on ground improvement applications. The objective is to study the permeation grouting technique in granular soils and to understand the hydro-mechanical properties that can be obtained using non-conventional injection materials. During the project, an injection apparatus and a monitoring system of the grouting process have been developed, which are used to study the permeation process and to develop a theoretical, and then analytical and numerical, approach. Furthermore, the injection apparatus is used to simulate the permeation of different injection materials in soils and to generate specimens for hydraulic and mechanical evaluations. The experimental set-up comprises a clear extruded polycarbonate tube, 50 mm in inner diameter and of variable length (up to a maximum of 1.6 m), and two floating caps at the extremities with a double sealing system. Once it has been filled with a selected soil, it is inserted in a rigid steel chassis, and the whole injection system can withstand an injection pressure of 100 bar. To simulate the soil micro-mechanical behaviour, a loading system has been created, composed of a screw that can apply a confining pressure to the soil sample. The column is then instrumented with:
• two load cells to control the confining pressure;
• five pressure transducers to control the imposed pressure;
• two pressure switches to impose pressure on the fluid;
• laser ranging distance sensors to measure the flow rate;
• a vision camera to detect the advancement of the fluid front.
The data from the sensors are collected using Arduino processors, and all the results are processed and displayed in real time with LabVIEW software. Some of the sensors were properly calibrated after being installed, and all the measurements and the injection apparatus are validated before starting the tests. The injection apparatus has been used to test different conventional and non-conventional grouts with different rheological properties: sodium silicate, acrylic resin, colloidal silica, cement and micro-cement grouts. For some of these injection materials, a time-dependent rheological law has been defined. All these tests have been used to understand the permeation phenomena and to define an analytical and numerical predictive model that could be valid for all soils in all conditions. By using this approach and knowing the hydraulic properties of the soil involved in a new geotechnical project, it should be possible to indicate to designers the type of grout, the injection parameters and the injection geometry for the specific ground improvement application. Finally, a hydro-mechanical investigation of the different injection materials has been performed, consisting of the following tests: permeability, unconfined compressive strength, triaxial and Brazilian tests. For each injection test, the difference in mechanical behaviour between the bottom and the top of the column has been evaluated; this results from a variable soil saturation during the permeation process which, in a ground with spherical grout propagation, can be related to the radial distance from the injection point. Furthermore, a mechanical comparison, in terms of friction angle and cohesion, is performed for the different grout types and, for some of them, at different curing times.
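A back-of-the-envelope relation behind the spherical-propagation remark above — assuming the grout fully saturates the pore space of porosity n and spreads spherically from the injection point — links the injected volume to the treated radius:

\[ V_{\mathrm{grout}} \;=\; \frac{4}{3}\pi R^{3} n \qquad\Rightarrow\qquad R \;=\; \left(\frac{3\,V_{\mathrm{grout}}}{4\pi n}\right)^{1/3}, \]

so the treated radius grows only with the cube root of the injected volume; this is the basis for relating the position along the injected laboratory column to an equivalent radial distance from the injection point in a field application.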
Styles APA, Harvard, Vancouver, ISO, etc.
32

NONINI, LUCA. « ASSESSMENT OF WOOD BIOMASS AND CARBON STOCK AND EVALUATION OF MACHINERY CHAINS PERFORMANCES IN ALPINE FORESTRY CONDITIONS : AN INNOVATIVE MODELLING APPROACH ». Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/846415.

Texte intégral
Résumé :
The PhD thesis focuses on two topics: (i) the assessment of forest wood and carbon (C) stock and (ii) the forestry mechanization applicable at the forest stand level for any given conditions among those found in the Italian Alpine and pre-Alpine mountainous areas. Both topics aim to improve the use of forestry resources for climate change mitigation, starting from a bottom-up approach scaled on the information made available by Forest Management Plans (FMPs). After an introduction to the topics given in chapter 1, the first topic (assessment of forest wood and C stock) is investigated in chapters 2, 3, 4 and 5, taking the Valle Camonica District (Lombardy Region, Italy) as the Case Study Area. The aim is to develop a stand-level model to estimate the mass of wood (t·yr^-1 dry matter, DM) and C (t·yr^-1 C) in aboveground wood biomass, belowground wood biomass and dead organic matter (i.e., deadwood and litter), quantifying, at the same time, the mass of logging residues (i.e., branches and tops; t·yr^-1 DM) potentially available for energy generation and the corresponding potentially generated energy (GJ·yr^-1), under the assumption that wood replaces non-renewable energy sources. Chapter 2 presents the first version of the model, called "WOody biomass and Carbon ASsessment" (WOCAS v1), aimed at the quantification of the mass of wood and C in the forest pools in a predefined reference year, using a methodology already applied at the regional and national level. The model was tested on a dataset of 2019 public forest stands extracted from 45 FMPs (area: 37000 ha), covering the period from 1984 (the year in which the oldest FMP came into force) to 2016 (the most recent available data from the local FMPs). Preliminary results showed that, in 2016, the total C stock (given by the sum of the C stock in aboveground wood biomass, belowground wood biomass and dead organic matter) reached 76.02 t·ha^-1 C. The model also makes it possible to analyse future scenarios based on the continuation of the current management practices rather than on improved practices, in order to define a possible mitigation strategy for the activation of a local Voluntary Carbon Market. WOCAS v1 was developed into a second version (WOCAS v2) by introducing, first of all, an improved methodology to calculate the mass of wood (t·yr^-1 DM) and C (t·yr^-1 C) within the forest pools from the year in which the FMPs entered into force until a predefined reference year (chapter 3). The main innovative aspect of the improved methodology is that the gross annual increment of each stand is calculated through an age-independent theoretical non-linear growth function based on the merchantable stem mass, overcoming the limitation of WOCAS v1 in which the gross annual increment of the stand is assumed to be constant, as reported by the FMPs. This improved methodology was applied to the same dataset used for WOCAS v1 (i.e., 2019 forest stands, 45 FMPs; forest area: 37000 ha; period: 1984-2016). The total weighted average wood yield, calculated as the sum of the wood yield in all the above-mentioned forest pools, ranged from 53.36±53.13 t·ha^-1·yr^-1 DM (1984) to 156.38±79.76 t·ha^-1·yr^-1 DM (2016). The total weighted average C yield ranged from 26.63±26.80 t·ha^-1·yr^-1 C (1984) to 77.45±40.19 t·ha^-1·yr^-1 C (2016). The average C yield for the whole analysed period (1984-2016) was 66.04 t·ha^-1 C. Of this, the C yield in the aboveground wood biomass, belowground wood biomass and dead organic matter was equal to 72.0%, 15.8% and 12.2%, respectively.
Validation of the results at the stand level was performed by comparing the value of the gross annual increment provided by the FMPs with the one predicted by WOCAS v2. The model produced, in some cases, an overestimation and, in other cases, an underestimation. For example, for Larix decidua Mill. and for Picea abies L., the Pearson coefficient of correlation (r2) between predicted and provided increments was r2 = 0.69 and r2 = 0.46, respectively. This was due to the fact that the methodology currently implemented in WOCAS v2 is based on average values of growth parameters valid for the whole Lombardy Region, and does not consider the productivity class of the stands, since this specific information was not always made available by the FMPs. WOCAS v2 also includes an innovative methodology (chapters 4 and 5) to quantify – as an additional climate change mitigation strategy – the mass of residues (t·yr^-1 DM) potentially available for energy generation, the potentially generated heat and electricity (GJ·yr^-1) and the potentially avoided CO2 emissions to the atmosphere related to the final combustion process (t·yr^-1 CO2), under the assumption that wood substitutes non-renewable energy sources. In chapter 4, since not all the required data were initially available for the Case Study Area, the mass of residues was computed by considering only the stand's function and management system, covering the period from 1994 (the year in which the first wood cut was performed) to 2016. The calculation was then improved (chapter 5) by also taking into account the stand's accessibility, the transitability of the forest roads and the energy market demand. Information on topographic features, landscape morphology and characteristics of the forest roads was collected by combining the FMP data coming from WOCAS v2 and a Digital Elevation Model (DEM) in a Geographic Information System (GIS) software environment. The georeferenced stands consisted of both single contiguous areas (single stands) and non-contiguous areas (sub-stands). Overall, 2157 polygons – consisting of both single stands and sub-stands – were analysed, covering the period from 2009 (the most recent available data on forest roads' transitability) to 2016. The mass of potentially available residues calculated for the analysed period was used to estimate the current sustainable supply (i.e., 1.82·10^3 ± 6.61·10^2 t·yr^-1 DM). Under the hypothesis that these residues were processed into woodchips to feed the Organic Rankine Cycle (ORC) unit of the local centralized heating plant of Ponte di Legno, the potentially generated heat and electricity (GJ·yr^-1) and the potentially avoided CO2 emissions to the atmosphere (t·yr^-1 CO2) for the final combustion process were estimated by assuming that: (i) the heat generated by the ORC unit replaced that produced by natural gas-based heating plants; (ii) the electricity generated by the ORC unit replaced that generated by the Italian natural gas-based plant mix for combined heat and electricity production and distributed through the national grid. Results showed that if only the current sustainable mass of residues were used to feed the ORC unit of the plant, the potentially generated heat and electricity would represent at most 28.7% of that generated by the unit in the year 2019. The thermal and electric power would be equal to 0.70 MW and 0.17 MW, with an average power load of the ORC unit of 23.6%.
Experimental tests are needed to collect information on the harvesting method and on the machines and technologies used – which considerably affect the mass of available residues – as well as on the mass of residues currently harvested, for the validation of the results; up to now this has not been possible since no measured data are yet available at the stand level. The second topic (forestry mechanization) is investigated in chapter 6. The aim is to develop an innovative approach in order to: (i) select the most suitable Forestry Machinery Chain (FMC) to adopt at the stand level for wood collection (harvesting and transport) and (ii) compute the economic costs (€·h^-1; €·t^-1 DM; €) of the selected FMC. To make the selection feasible, a user-friendly stand-level model called "FOREstry MAchinery chain selection" (FOREMA v1) was developed. FOREMA v1 supports the user in selecting the FMC according to seven technical parameters that characterize the stand. For each FMC, the model defines the sequence of operations and the types of machines that can be used. The economic costs of the selected FMC are then quantified by taking into account the fixed and the variable costs. The approach was applied to a case study concerning the collection of woodchips for energy generation from a coppice stand in the Italian Alps. The analysed FMC was made up of the following operations: (i) felling, (ii) bunching and extraction, (iii) chipping and (iv) loading and transport. For the whole FMC, the cost per unit of time was 669.3 €·h^-1; the cost per unit of product was 113.0 €·t^-1 DM, whereas the cost of production amounted to 6893.2 €. The results provided by FOREMA v1 still need to be validated; experimental tests are required to collect information on the operating conditions in which the machines are actually used and, consequently, on the corresponding economic costs. The results obtained for the costs of the operations were compared with those reported in the literature for studies performed under similar forestry and operating conditions.
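For readers less familiar with the biomass-to-carbon bookkeeping used above, the conversions typically follow standard (IPCC-style) factors — reported here as a generic sketch, since the exact coefficients adopted in WOCAS v2 are not given in this summary:

\[ m_{C} \;\approx\; CF \cdot m_{DM}, \qquad m_{CO_2} \;=\; \frac{44}{12}\, m_{C} \;\approx\; 3.67\, m_{C}, \]

where m_DM is the dry-matter wood mass and CF ≈ 0.5 is the carbon fraction of dry wood; the avoided-emission figures then follow from the natural gas emissions displaced by the wood-based heat and electricity.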
Styles APA, Harvard, Vancouver, ISO, etc.
33

Perera, Nilakshi. « Structural behaviour and design of innovative hollow flange steel plate girders ». Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/123310/1/Liyanage%20Nilakshi%20Piyahasi_Perera_Thesis.pdf.

Texte intégral
Résumé :
This thesis proposes a new Hollow Flange Steel Plate Girder (HFSPG), made by welding industrially available cold-formed Rectangular Hollow Sections (RHS) to a web plate, for use in long span construction. Design procedures presented in national and international design guidelines were reviewed, and detailed experimental and numerical studies of the unique structural behaviour of HFSPGs were undertaken so that suitable improvements could be made to accurately predict their behaviour and capacities. Local buckling/yielding, global buckling and local-global interaction failures were all considered in this thesis.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Brusa, Alessandro <1992>. « Development and testing of innovative methodologies for modelling and control of normal and knocking combustion and implementation of novel rapid prototyping solutions ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9667/1/corpo_tesi_BRUSA.pdf.

Texte intégral
Résumé :
The thesis work deals with topics that led to the development of innovative control-oriented models and control algorithms for modern gasoline engines. Knock in boosted spark ignition engines is the widest topic discussed in this document because it remains one of the most limiting factors for maximizing combustion efficiency in this kind of engine. The first chapter therefore focuses on knock, and a wide literature review summarizes the preliminary knowledge that forms the background and reference for the activities discussed. The most relevant results achieved during the PhD course in the field of knock modelling and control are then presented, describing every control-oriented model that led to the development of an adaptive model-based combustion control system. The complete controller was developed in the context of the collaboration with Ferrari GT and made it possible to completely redefine knock intensity evaluation as well as combustion phase control. The second chapter focuses on a prototype Port Water Injection system that was developed and tested on a turbocharged spark ignition engine, within the collaboration with Magneti Marelli. This system and the effects of the injected water on the combustion process were then modeled in a 1-D simulation environment (GT Power). The third chapter presents the development and validation of a control-oriented model for the real-time calculation of exhaust gas temperature, which represents another important limitation to performance increase in modern boosted engines. Indeed, modelling of exhaust gas temperature and thermocouple behavior are themes that play a key role in the optimization of combustion and catalyst efficiency.
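As an illustration of the kind of signal processing that underlies knock intensity evaluation, the sketch below computes MAPO (Maximum Amplitude of Pressure Oscillations), a common knock index; it is not the adaptive index developed in the thesis, and the filter band, sampling rate and synthetic pressure trace are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mapo(cylinder_pressure, fs, band=(4e3, 20e3)):
    """Maximum Amplitude of Pressure Oscillations: band-pass the in-cylinder
    pressure trace and take the peak absolute oscillation (a common knock index)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    oscillations = filtfilt(b, a, cylinder_pressure)
    return np.max(np.abs(oscillations))

# Toy usage with a synthetic pressure trace (assumed 100 kHz sampling rate).
fs = 100e3
t = np.arange(0, 0.01, 1 / fs)
pressure = 40e5 * np.exp(-((t - 0.005) / 0.002) ** 2)            # smooth combustion pressure, Pa
pressure += 0.5e5 * np.sin(2 * np.pi * 8e3 * t) * (t > 0.005)    # superimposed knock-like ringing
print(f"MAPO = {mapo(pressure, fs):.0f} Pa")
```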
Styles APA, Harvard, Vancouver, ISO, etc.
35

MESTRINER, DANIELE. « Lightning-induced voltages on power lines : advances in modelling, computational effort optimization and innovative tools for the protection of overhead distribution lines ». Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1008296.

Texte intégral
Résumé :
The continuous growth of MV overhead distribution systems requires constant improvement in terms of security and power quality. One of the most critical events that can cause a fault on a distribution line is the atmospheric discharge and, among these, the most dangerous is undoubtedly the lightning stroke. In transmission and distribution systems, lightning transients can be caused by either direct or indirect strikes. Indirect strikes are much more frequent than direct strikes and can cause flashovers, especially when the line insulation level is low. The computation of lightning-induced voltages (i.e. those related to indirect strikes, which represent the most critical issue in distribution systems) is a very complicated task for two main reasons: 1) the number of uncertain parameters is high: it involves a correct representation of the current that flows in the lightning channel as well as a correct representation of the soil conductivity where the power line is located; 2) the computational complexity of the calculations that provide the final overvoltage is high, because in this case we are dealing with the computation of electromagnetic fields and with the effect of such fields on the power line. Concerning protective measures, the most widely employed are shield wires, surge arresters and an increase in the line insulation level. This thesis addresses the problem of lightning-induced voltages on overhead distribution lines in terms of three main concepts: 1) innovation of the existing models, 2) optimization of the computational effort and 3) introduction of innovative tools for the protection scheme. In this framework, the thesis proposes a new channel-base current model (1), an analytical technique for the computation of the electromagnetic fields (2), a new scheme for the computation of lightning-induced voltages (2), a new approach for reducing the computational effort of the lightning performance computation (2 and 3) and an innovative approach for the evaluation of the mitigation effect of shield wires on lightning-induced voltages (3).
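A widely used channel-base current representation in this field is the Heidler function; the sketch below implements it with assumed, typical first-stroke parameters, purely to illustrate the kind of waveform a channel-base current model describes. It is not the new model proposed in the thesis.

```python
import numpy as np

def heidler(t, i0=30e3, tau1=1.8e-6, tau2=95e-6, n=2):
    """Heidler channel-base current waveform i(t); i0, tau1, tau2 and n are
    assumed, typical first-stroke values, not those used in the thesis."""
    eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))  # peak-correction factor
    x = (t / tau1) ** n
    return (i0 / eta) * (x / (1.0 + x)) * np.exp(-t / tau2)

t = np.linspace(0.0, 200e-6, 2001)
i = heidler(t)
print(f"Peak current: {i.max() / 1e3:.1f} kA at t = {t[i.argmax()] * 1e6:.1f} µs")
```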
Styles APA, Harvard, Vancouver, ISO, etc.
36

Kim, Jimmy. « Development of modular building systems made of innovative steel sections and wall configurations ». Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/127327/1/Jimmy_Kim_Thesis.pdf.

Texte intégral
Résumé :
This study presents a thorough review of steel Modular Building Systems (MBS), including case studies of real-world MBS projects, to establish an understanding of the current development and shortcomings of this emerging technology, for which innovative solutions are later introduced. The review determined that the major limitations of this technology include a lack of structurally efficient designs, poor control of construction tolerances, impractical-to-construct designs and a lack of measures to address fire-resisting performance. Several innovative design concepts were incorporated into a complete MBS module and proposed to address these shortcomings.
Styles APA, Harvard, Vancouver, ISO, etc.
37

SELICATI, VALERIA. « Innovative thermodynamic hybrid model-based and data-driven techniques for real time manufacturing sustainability assessment ». Doctoral thesis, Università degli studi della Basilicata, 2022. http://hdl.handle.net/11563/157566.

Texte intégral
Résumé :
This doctoral thesis is the result of the supervision and collaboration of the University of Basilicata, the Polytechnic of Bari, and the enterprise Master Italy s.r.l. The main research lines explored and discussed in the thesis are: sustainability in general and, more specifically, manufacturing sustainability; the Industry 4.0 paradigm linked to smart (green) manufacturing; model-based assessment techniques for manufacturing processes; and data-driven analysis methodologies. These seemingly unrelated topics are handled throughout the thesis in a way that reveals how strongly interwoven and transversal they are. The goal of the PhD programme was to design and validate innovative assessment models in order to investigate the nature of manufacturing processes and rationalize the relationships and correlations between the different stages of the process. This composite model may be utilized as a tool in policy decision-making about the long-term development of industrial processes and the continuous improvement of manufacturing processes. The overarching goal of this research is to provide strategies for real-time monitoring of manufacturing performance and sustainability based on hybrid first- and second-law thermodynamic models, as well as on data and machine learning. The proposed model is tested on a real industrial case study using a systemic approach: the phases of requirement identification, data inventory (material, energetic, geometric, physical, economic, social, qualitative, quantitative), modelling, analysis, ad hoc algorithm adjustment (tuning), implementation, and validation are developed for the aluminium alloy die-casting processes of Master Italy s.r.l., a southern Italian SME which has designed and produced accessories and metal components for windows since 1986. The thesis digs into the topic of the sustainability of smart industrial processes from every perspective, including both the quantity and the quality of the resources used throughout the manufacturing process's life cycle. Traditional sustainability analysis models (such as life cycle analysis, LCA) are combined with approaches based on the second law of thermodynamics (exergetic analysis); they are then complemented by models based on information technology (big-data analysis). A full analysis of the potential of each strategy, whether executed alone or in combination, is provided. Following a summary of the metrics relevant for determining the degree of sustainability of industrial processes, the case study is demonstrated through modelling and extensive analysis of the process, namely aluminium alloy die casting. After assessing the sustainability of the production processes using a model-based approach, we move on to the real-time application of machine learning analyses, with the goal of identifying downtime and failures during the production cycle and predicting their occurrence well in advance from real-time process thermodynamic parameter values and automatic learning. Finally, the thesis suggests the use of the integrated models on further case studies, such as laser deposition processes and the renovation of existing buildings, to demonstrate the multidisciplinarity and transversality of these issues. The thesis reveals fascinating findings derived from the use of a hybrid method for assessing the sustainability of manufacturing processes, combining exergetic analysis with life cycle assessment.
The proposed theme is fully current and relevant to the most recent developments in the field of industrial sustainability, combining traditional model-based approaches with innovative approaches based on the collection of big data and its analysis through the most appropriate machine learning methodologies. Furthermore, the thesis demonstrates a highly promising application of machine learning approaches to real-time data in order to identify fault sources in the manufacturing line, starting from sustainability measures derived from exergetic analysis and life cycle analysis. As such, it unquestionably represents an advancement over the earlier state of the art. Indeed, manufacturing companies that implement business strategies based on smart models and key enabling technologies today have a higher market value in terms of quality, customisation, flexibility, and sustainability.
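To illustrate the second-law side of such a hybrid assessment, the sketch below computes the specific physical (flow) exergy of an ideal-gas stream relative to a dead state, the elementary quantity on which an exergetic analysis of a process stream is built. The temperatures, pressures and gas properties are assumed values, not data from the die-casting case study.

```python
import numpy as np

def flow_exergy_ideal_gas(T, p, T0=298.15, p0=101325.0, cp=1005.0, R=287.0):
    """Specific physical (flow) exergy of an ideal-gas stream, J/kg:
    ex = cp*(T - T0) - T0*(cp*ln(T/T0) - R*ln(p/p0)); cp and R default to air."""
    return cp * (T - T0) - T0 * (cp * np.log(T / T0) - R * np.log(p / p0))

# Toy example: a hot air stream leaving a process at 600 K and 3 bar (assumed values).
print(f"ex = {flow_exergy_ideal_gas(600.0, 3e5) / 1e3:.1f} kJ/kg")
```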
Styles APA, Harvard, Vancouver, ISO, etc.
38

Lemoussu, Sophie. « A model-based framework for innovative Small and Medium-sized Enterprises (SMEs) in Aeronautics ». Thesis, Toulouse, ISAE, 2020. http://www.theses.fr/2020ESAE0014.

Texte intégral
Résumé :
Le marché de l'aviation fait face aujourd'hui à une croissance rapide des technologies innovantes. Les drones cargo, les taxis drones, les dirigeables, les ballons stratosphériques, pour n'en citer que quelques-uns, pourraient faire partie de la prochaine génération de transport aérien. Dans le même temps, les Petites et Moyennes Entreprises (PMEs) s'impliquent de plus en plus dans la conception et le développement de nouvelles formes de système aéroporté, passant du rôle traditionnel de fournisseur à celui de concepteur et intégrateur. Cette situation modifie considérablement la portée de la responsabilité des PMEs. En tant qu'intégrateurs, elles deviennent responsables de la certification des composants et du processus de fabrication, domaine dans lequel elles n’ont encore que peu d'expérience. La certification, qui requiert une connaissance très spécifique des réglementations, des normes et standards, demeure un processus obligatoire et une activité critique pour les entreprises de l'industrie aéronautique. C’est aussi un défi majeur pour les PMEs qui doivent assumer cette responsabilité de certification avec des moyens limités. Dans cette thèse, deux besoins majeurs sont identifiés: le soutien méthodologique n'est pas facilement disponible pour les PMEs; et les exigences de certification ne sont pas facilement compréhensibles et adaptables à chaque situation. Nous examinons donc des voies alternatives pour réduire la complexité de la situation des PMEs. L'objectif est de fournir un soutien afin qu'elles puissent être plus efficaces pour comprendre et intégrer les règles, les législations et les lignes directrices à leurs processus internes de manière plus simple. Cette thèse propose ainsi une approche méthodologique pour soutenir ces organisations. Développée en étroite collaboration avec une PME française, l'approche est composée d'un ensemble de modèles (métamodèle, modèles structurels et comportementaux) couverts par un mécanisme de gouvernance
The aviation market is nowadays facing a fast growth of innovative airborne systems. Drone cargo, drone taxis, airships and stratospheric balloons, to cite a few, could be part of the next generation of air transportation. At the same time, Small and Medium-sized Enterprises (SMEs) are becoming more and more involved in designing and developing new forms of air transportation, transitioning from the traditional role of supplier to those of system designer and integrator. This situation drastically changes the scope of SMEs' responsibility. As integrators they become responsible for certification of the components and the manufacturing process, an area in which they have little experience. Certification demands very specific knowledge of regulations, norms and standards. Certification is a mandatory process and a critical activity for enterprises in the aerospace industry. It constitutes a major challenge for SMEs, which have to take on this certification responsibility with only limited resources. In this thesis, two major needs are identified: methodological support is not easily available for SMEs; and certification requirements are not easily comprehensible or adaptable to each situation. We examine alternative paths that reduce this complexity and bring innovative SMEs one step closer to solving the problem. The objective is to provide support so that they can more efficiently understand and integrate rules, legislation and guidelines into their internal processes in a simpler way. This thesis therefore proposes a methodological approach to support such organisations. Developed in close cooperation with a French SME in this situation, the approach is composed of a set of models (metamodel, structural, and behavioural models) covered by a certification governance mechanism.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Langley, Ivor. « Impact assessment of new tuberculosis diagnostic tools and algorithms to support policy makers in low and middle income countries : an innovative modelling approach ». Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/87877/.

Texte intégral
Résumé :
In many low and middle income countries the infectious disease tuberculosis is a leading and persistent cause of death, sickness and hardship. This is despite an effective and readily available treatment regimen. Better diagnostics and more rapid initiation of patients onto treatment are essential if the high burden of tuberculosis in these settings is to be substantially reduced, as there is currently no effective vaccine. There is an encouraging pipeline of improved diagnostic tools and algorithms being developed, some of which have been endorsed by the World Health Organization (e.g. Xpert MTB/RIF). These new diagnostic tools have the potential to overcome many of the weaknesses of the present processes; however, they might substantially increase the demands on scarce resources and funds. In addition, whether these new diagnostics should replace existing methods or be used in combination with them is unclear. Before national tuberculosis programmes can scale up new diagnostics, policy makers need to understand the effects on patients, the health system, and the wider population. Failure to do so could lead to poor performance outcomes, unsustainable implementation, and wasted resources. An innovative linked modelling approach is proposed that brings together detailed operational models of patient pathways with transmission models to provide the comprehensive projections required. The studies that make up this research first explore the concept of linked modelling, then in the second study develop a detailed operational model incorporating cost-effectiveness analysis. The third study uses the linked modelling approach to explore eight alternative diagnostic algorithms in Tanzania. It provides comprehensive projections of patient, health system and community impacts, including cost-effectiveness analysis, from which the national tuberculosis programme can develop a strategy for scale-up of new diagnostics across the country. Having shown how the approach of linked operational and transmission modelling can assist policy makers, the fourth and fifth studies review the process of impact assessment and recommend how it can be improved, and how the lessons from this research in tuberculosis diagnostics might apply to other health decisions in low and middle income countries. The linked modelling approach is feasible and relevant in supporting rational decision making for tuberculosis diagnostics in low and middle income countries. The results from using the approach in Tanzania show that full scale-up of Xpert MTB/RIF is a cost-effective option with an incremental cost-effectiveness ratio of US$169 per DALY averted (95% credible interval, 104–265), and has the potential to significantly reduce the national tuberculosis burden. Substantial levels of funding would need to be mobilised to translate this into clinical practice. In the context of Tanzania, targeting Xpert MTB/RIF to HIV-positive patients only was not cost-effective compared to rollout of LED fluorescence microscopy with two samples collected on the same day. Review of the Impact Assessment Framework and operational modelling used in these studies found that the approaches had many other potential applications, for example for decisions around human parasitic disease diagnostics and tuberculosis treatment. In Tanzania, full scale-up of Xpert MTB/RIF should be progressed in districts where resources and funding are available. LED fluorescence microscopy using two samples collected on the same day should be considered in other districts.
Tuberculosis programmes should use the operational modelling approach to prioritise the implementation of new diagnostics by district. The operational and linked operational and transmission modelling approaches have many other potential applications in other contexts and disease areas and these should be further researched.
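The cost-effectiveness result quoted above rests on the standard incremental cost-effectiveness ratio (ICER), the extra cost per extra DALY averted of one option over its comparator. The sketch below shows that calculation with made-up cost and DALY figures; they are not the Tanzanian inputs behind the US$169 per DALY averted estimate.

```python
def icer(cost_new, cost_comparator, dalys_averted_new, dalys_averted_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra DALY averted."""
    return (cost_new - cost_comparator) / (dalys_averted_new - dalys_averted_comparator)

# Hypothetical illustration only: scale-up of a new diagnostic vs. the current algorithm.
cost_new, cost_current = 4.2e6, 2.9e6          # programme costs, US$
dalys_new, dalys_current = 21000.0, 14000.0    # DALYs averted
print(f"ICER = {icer(cost_new, cost_current, dalys_new, dalys_current):.0f} US$ per DALY averted")
```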
Styles APA, Harvard, Vancouver, ISO, etc.
40

Visalli, Roberto. « Innovative numerical petrological methods for definition of metamorphic timescale events of southern European Variscan relicts via thermodynamic and diffusion modelling of zoned garnets ». Doctoral thesis, Università di Catania, 2017. http://hdl.handle.net/10761/4089.

Texte intégral
Résumé :
Innovative numerical petrology methods have been developed, using several computer programming languages, to investigate the chemical-physical properties of metamorphic rocks at the microscale. These methods can help users to analyse the final aspect of metamorphic rocks, which derives from the counterbalancing factors controlled by deformation vs. recovery processes, through a better quantification of the rock fabric parameters (e.g., grain and mineral size distribution) as well as of the rock volumes and the specific compositions that take part in the reactions during each metamorphic evolutionary stage. In this perspective, a grain boundary detection tool (i.e., Grain Size Detection - GSD) was created to draw grain boundaries and create polygon features in a Geographic Information System (GIS) platform using thin section optical scans as input images. Such a tool allows users to obtain several pieces of information from the investigated samples, such as grain surfaces and sizes displayed as derivative maps. These maps were then integrated with the mineralogical distribution map of the entire thin section classified from the micro X-ray maps. This step was performed to enhance the grain size distribution analysis by associating a mineral label with each polygon feature, through a further tool called Min-GSD (i.e., Mineral-Grain Size Distribution). The image analysis of rocks at the microscale was further improved by introducing a new multilinear regression technique within a previous image analysis software (i.e., X-ray Map Analyser - XRMA), with the aim of calibrating the X-ray maps for each classified mineral of the selected thin section microdomain. This enhancement (called Quantitative X-ray Map Analyser - Q-XRMA) made it possible to compute: (a) the elemental concentration within a single phase, expressed in a.p.f.u.; (b) maps of the end-member fractions defining the potential zoning patterns of solid solution mineral phases. Moreover, the classification with this new method of one or several microdomains per thin section, able to describe the potential sequence of recognized metamorphic equilibria, was used here for a better definition of the effective bulk rock chemistries underlying a more robust thermodynamic modelling, providing more reliable thermobaric constraints. These thermobaric constraints were here converted for the first time into Pressure-Temperature (PT) maps through the development of an add-on (i.e., Diffusion Coefficient Map Creator - DCMC) to the previous tool (Q-XRMA) for creating maps of compositionally-dependent diffusion coefficients, integrating diffusion data from the literature. As a result, an articulated Local Information System (LIS) for the investigated mineral, involving data on composition, grain size, modal amounts and kinetic rates, is created, which is potentially useful for detailed investigations such as, for instance, the determination of the timescales of metamorphic events. All of the methods mentioned above can be considered part of the Petromatics discipline, here defined for the first time as the science that integrates new computer technologies with different techno-scientific sectors related to the detection and handling of spatial minerochemical data characterising rocks at the microscale. Furthermore, the quantification of the rock parameters at the microscale laid the groundwork for the development of an innovative numerical petrological workflow, here called Metamorphic Petrology Information System (MetPetIS).
The latter is a new LIS able to store, manage and process multidisciplinary and multiscale data collections from metamorphic basement rocks within a single cyber-infrastructure.
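Compositionally and temperature dependent diffusion coefficients of the kind mapped by DCMC are typically expressed through an Arrhenius relation; the sketch below shows that relation with placeholder pre-exponential factor and activation energy, not the literature values integrated in the tool.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius_d(T_kelvin, d0=6.4e-4, q=254e3):
    """Diffusion coefficient D = D0 * exp(-Q / (R*T)); D0 (m^2/s) and Q (J/mol)
    are placeholder values for illustration, not those adopted in DCMC."""
    return d0 * np.exp(-q / (R_GAS * T_kelvin))

for T_c in (600.0, 700.0, 800.0):
    print(f"T = {T_c:.0f} °C -> D = {arrhenius_d(T_c + 273.15):.3e} m^2/s")
```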
Styles APA, Harvard, Vancouver, ISO, etc.
41

Al-Hilo, Naeem A. « Novel Sound Absorbing Materials Made From Elastomeric Waste : Compounding And Structuring Of Elastomeric Waste Crumb And Fibers With Binders Into Innovative Noise Insulation Materials ». Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/17383.

Texte intégral
Résumé :
Elastomeric wastes plague our time, polluting our environment and requiring urgent upcycling solutions. This research contributes to this agenda using an important source of waste, car tyre shred fibre residue (TSFR). It is demonstrated how, using non-foaming (SBR) and foaming (PU) binders, we can transform this TSFR into structured porous acoustic-thermal insulation materials suitable as underlay, cavity wall and pipe insulation. These structures were fabricated in purpose-designed moulds and characterised for their porosity, tortuosity, flow resistivity and density. Their acoustic absorption performance was measured using industrial standards, and the measurements were underpinned by the Johnson-Champoux-Allard (JCA) model. For the under-layer materials, thermal insulation was also measured. The results were as follows: (i) 40%/60% SBR/TSFR was an optimal composition for the underlay, with the addition of 15% w/w bumper crumb of size > 1 mm enhancing both impact sound and thermal insulation; (ii) PU was found to produce well-performing wall cavity insulation, particularly when vacuum pressure was applied, allowing micro and macro pores to be formed; (iii) PU applied with a controlled amount of water to control foaming CO2 formation produced super-performing (compared with Armacell System B) stratified pipe cladding insulation, optimal at a porosity stratification of 90%, 83%, and 70%; (iv) very good agreement was observed with predictions using the JCA model, allowing further research to be carried out with these now well-characterised sound insulations. In addition to the developed materials, a novel technique for measuring the sound absorption of pipe cladding was developed that could replace the expensive standard method using a reverberation chamber.
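The link between an equivalent-fluid description such as the JCA model and the measured absorption can be illustrated by the standard rigid-backed-layer calculation: given a characteristic impedance Zc and wavenumber k of the porous medium (which a JCA model would supply), the normal-incidence absorption coefficient follows from the surface impedance. The Zc and k values below are arbitrary placeholders, not fitted JCA outputs from the thesis.

```python
import numpy as np

RHO0, C0 = 1.204, 343.0   # air density (kg/m^3) and speed of sound (m/s)

def absorption_rigid_backing(zc, k, d):
    """Normal-incidence absorption of a porous layer of thickness d on a rigid wall:
    Zs = -1j * Zc / tan(k*d), alpha = 1 - |(Zs - Z0)/(Zs + Z0)|^2."""
    z0 = RHO0 * C0
    zs = -1j * zc / np.tan(k * d)
    r = (zs - z0) / (zs + z0)
    return 1.0 - np.abs(r) ** 2

# Placeholder complex Zc and k at a single frequency (would come from a JCA model).
zc = (1.8 - 0.6j) * RHO0 * C0
k = (2.0 - 0.35j) * 2 * np.pi * 1000.0 / C0   # at 1 kHz
print(f"alpha(1 kHz) = {absorption_rigid_backing(zc, k, d=0.05):.2f}")
```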
Styles APA, Harvard, Vancouver, ISO, etc.
42

Sagha, Hossein. « Development of innovative robust stability enhancement algorithms for distribution systems containing distributed generators ». Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/91052/1/Hossein_Sagha_Thesis.pdf.

Texte intégral
Résumé :
This project was a step forward in improving the voltage profile of traditional low voltage distribution networks with high photovoltaic generation or high peak demand. As a practical and economical solution, the developed methods use a Dynamic Voltage Restorer (DVR), a series voltage compensator, for continuous and communication-less power quality enhancement. The placement of the DVR in the network is optimised in order to minimise its power rating and cost. In addition, new approaches were developed for grid synchronisation and control of the DVR, which are integrated with the voltage quality improvement algorithm for stable operation.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Henkel, Johannes [Verfasser], et Georg [Akademischer Betreuer] Erdmann. « Modelling the Diffusion of Innovative Heating Systems in Germany – Decision Criteria, Influence of Policy Instruments and Vintage Path Dependencies / Johannes Henkel. Betreuer : Georg Erdmann ». Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2012. http://d-nb.info/1021976628/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Samanta, Irene. « Modelling the relationships in B2B under e-marketing practices : moving from the traditional business environment to innovative e-commerce. The case of Greek firms ». Thesis, University of the West of Scotland, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.729429.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Alkhoury, William [Verfasser], Martin [Akademischer Betreuer] Sauter et Elias [Akademischer Betreuer] Salameh. « Hydrological modelling in the meso scale semiarid region of Wadi Kafrein, Jordan - The use of innovative techniques under data scarcity / William Alkhoury. Gutachter : Martin Sauter ; Elias Salameh. Betreuer : Martin Sauter ». Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2011. http://d-nb.info/1042264554/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Weber, Denis [Verfasser]. « Measuring and predicting the effects of time-variable exposure of pesticides on populations of green algae : combination of flow-through studies and ecological modelling as an innovative tool for refined risk assessments / Denis Weber ». Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2013. http://d-nb.info/103111565X/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Wang, Shu. « What Motivates Marketing Innovation and Whether Marketing Innovation Varies across Industry Sectors ». Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/33008.

Texte intégral
Résumé :
Innovativeness is one of the fundamental instruments of growth strategies that provide companies with a competitive edge. Only a few recent studies have examined marketing innovation and the factors that might encourage its adoption. This study investigates the factors that motivate marketing innovation and examines whether the occurrence of marketing innovation varies across industry sectors. This study uses data from surveys and a nationwide census conducted by Statistics Canada. They include: the Survey of Innovation and Business Strategies (SIBS) 2009, the Survey of Innovation and Business Strategies (SIBS) 2012, the Business Registry (BR) and the General Index of Financial Information (GIFI). Multilevel (random-intercept) logistic regression modelling is employed. The results show that if a firm has a strategic focus on new marketing practices, maintains marketing within its enterprise, acquires or expands marketing capacity, has competitor and customer orientations, and adopts advanced technology then it is more likely to carry out marketing innovation. However, breadth of long-term strategic objectives and competitive intensity do not have significant impacts on marketing innovation. In addition, product innovation and organizational innovation occur simultaneously with marketing innovation, but process innovation may not. Lastly, the occurrence of marketing innovation is found to vary across industry sectors. The theoretical and empirical implications of the results are discussed within this study.
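The multilevel (random-intercept) logistic regression used in this study can be sketched as follows: the marginal likelihood integrates a firm-level logit over a normally distributed industry-sector intercept, here approximated with Gauss-Hermite quadrature on simulated data. Variable names and data are invented for illustration; the actual analysis uses the Statistics Canada microdata described above.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulate toy firm-level data nested in sectors (purely illustrative).
n_sectors, n_firms = 20, 40
sector = np.repeat(np.arange(n_sectors), n_firms)
X = np.column_stack([np.ones(n_sectors * n_firms),
                     rng.normal(size=n_sectors * n_firms)])   # intercept + one covariate
u_true = rng.normal(scale=0.8, size=n_sectors)                # sector random intercepts
y = rng.binomial(1, expit(X @ np.array([-0.5, 1.0]) + u_true[sector]))

nodes, weights = hermgauss(15)  # Gauss-Hermite nodes/weights for the weight exp(-x^2)

def neg_loglik(params):
    beta, sigma = params[:-1], np.exp(params[-1])
    eta = X @ beta
    ll = 0.0
    for g in range(n_sectors):
        m = sector == g
        u = np.sqrt(2.0) * sigma * nodes                     # re-scale nodes to N(0, sigma^2)
        p = expit(eta[m, None] + u[None, :])                 # (firms in sector, nodes)
        lik = np.prod(np.where(y[m, None] == 1, p, 1.0 - p), axis=0)
        ll += np.log(max(weights @ lik / np.sqrt(np.pi), 1e-300))
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
beta_hat, sigma_hat = fit.x[:2], np.exp(fit.x[2])
print("fixed effects:", np.round(beta_hat, 2), " sector SD:", round(sigma_hat, 2))
```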
Styles APA, Harvard, Vancouver, ISO, etc.
48

To, Chester Kin-man. « Modelling innovation activity processes for global fashion marketplaces ». Thesis, De Montfort University, 2001. http://hdl.handle.net/2086/4254.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Tran, Martino. « Modelling innovation diffusion in complex energy-transport systems ». Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:30ab651b-7c5a-4a4b-a905-6d86b5507042.

Texte intégral
Résumé :
Global sustainable energy and environmental policies have increased the need to understand how new energy innovations diffuse into the market. The transport sector is currently a major source of unsustainable energy use, contributing ~20-25% of global CO2 emissions. Although the potential benefits of alternative fuel vehicle (AFV) technologies to reduce CO2 emissions and fossil fuel dependency have been demonstrated, many uncertainties exist in their market diffusion. It is also not well understood how policy can influence rapid diffusion of AFVs. To transition to a more sustainable energy-transport system, we need to understand the market conditions and factors necessary for triggering widespread adoption of new energy innovations such as AFVs. Modelling the diffusion of innovations is one way to explain why some ideas and technologies spread through society successfully, while others do not. These diffusion processes are characterized by non-linear interactions between heterogeneous agents in complex networked systems. Diffusion theory has typically been applied to consumer durable goods but has found less application to new energy and environmental innovations. There is much scope for advanced diffusion methods to inform energy policy. This depends upon understanding how consumer behaviour and technologies interact and can influence each other over time. There is also need to understand the underlying mechanisms that influence adoption behaviour among heterogeneous agents. This thesis tackles the above issues using a combination of empirical data analysis, scenarios, and simulation modelling as follows: 1) We first develop the empirical basis for assessing innovation diffusion from a technology-behavioural perspective, where we explicitly account for interactions between consumer preferences and technological performance across different spatial and temporal scales; 2) Scenarios are then used to disaggregate consumer markets and analyze the technological and behavioural factors that might trigger large-scale adoption of AFVs; 3) We then case-analyze the UK transport sector and develop a model of the dynamics between how vehicle technologies and consumer preferences can change and influence the diffusion process; 4) Finally, we develop exploratory simulations to assess how social network effects can influence individual adoption behaviour; 5) We close with policy implications of our findings, contributions and limitations of the thesis, and possible avenues for taking the research forward.
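A canonical starting point for diffusion modelling of this kind is the Bass model, in which adoption is driven by an innovation coefficient p and an imitation coefficient q. The sketch below uses illustrative parameter values and a hypothetical market size, not estimates from the thesis for AFVs.

```python
def bass_diffusion(p, q, market_size, periods):
    """Discrete-time Bass model: new adopters per period are
    (p + q * N/M) * (M - N), with N the cumulative adopters so far."""
    cumulative, path = 0.0, []
    for _ in range(periods):
        new_adopters = (p + q * cumulative / market_size) * (market_size - cumulative)
        cumulative += new_adopters
        path.append(cumulative)
    return path

# Illustrative values only: p = innovation coefficient, q = imitation coefficient,
# for a hypothetical AFV market of one million vehicles.
trajectory = bass_diffusion(p=0.03, q=0.38, market_size=1_000_000, periods=15)
for year, n in enumerate(trajectory, start=1):
    print(f"year {year:2d}: {n:9.0f} cumulative adopters")
```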
Styles APA, Harvard, Vancouver, ISO, etc.
50

Hollberg, Philipp. « Swarm grids - Innovation in rural electrification ». Thesis, KTH, Energisystemanalys, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172846.

Texte intégral
Résumé :
Access to clean and affordable energy is a prerequisite for human development. In order to achieve access to sustainable energy for all, innovation in rural electrification is needed. Decentralized renewable energy technologies in the form of Solar Home Systems (SHSs) and Mini-grids have the potential to electrify, with locally available energy sources, a large number of rural households which cannot be connected to the national grid. However, the deployment of Mini-grids faces barriers such as a lack of private investment. By building on already existing SHSs, swarm grids can enable households to trade electricity and use their excess electricity to supply additional loads. Swarm grids, as an evolutionary bottom-up approach to electrification, can overcome some of the obstacles regular Mini-grids face and play a vital role in improving electricity access. As part of this thesis a model has been developed which allows the electricity flow, including line losses, in swarm grids of any size to be simulated on an hourly basis. The model helps in gaining a better understanding of the impact that global parameters (e.g. distance between households) have on the feasibility of swarm grids. A field trip to Bangladesh was undertaken in order to obtain input data for simulating different cases with the model created. The simulations performed indicate that, in a swarm grid, the excess energy generated by SHSs, which so far is wasted, can supply the demand of households without an SHS as well as commercial loads such as irrigation pumps. Overall, the results point towards swarm grids being an innovation with the potential to improve rural electricity access by building on existing infrastructure.
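A minimal sketch of the hourly electricity-exchange logic such a swarm-grid model captures: in each hour, households with a PV surplus export to households with a deficit, with a flat loss factor standing in for the per-line loss calculation of the actual model. All generation and demand figures below are invented, not data collected in Bangladesh.

```python
# Toy hourly swarm-grid balance: surplus households export to deficit households,
# with a single flat loss factor standing in for the line-loss calculation of the
# real model. Generation and demand profiles below are invented values.

generation = {  # kWh generated per household in one hour
    "hh1": 0.30, "hh2": 0.25, "hh3": 0.00, "hh4": 0.05,
}
demand = {      # kWh demanded per household in the same hour
    "hh1": 0.10, "hh2": 0.05, "hh3": 0.20, "hh4": 0.25,
}
LOSS_FACTOR = 0.08  # assumed share of traded energy lost in the lines

surplus = {h: max(generation[h] - demand[h], 0.0) for h in generation}
deficit = {h: max(demand[h] - generation[h], 0.0) for h in generation}

available = sum(surplus.values()) * (1.0 - LOSS_FACTOR)  # energy reaching the grid
needed = sum(deficit.values())
served_fraction = min(available / needed, 1.0) if needed else 1.0

for h, d in deficit.items():
    if d:
        print(f"{h}: demand covered by swarm grid = {served_fraction * d:.2f} kWh of {d:.2f} kWh")
print(f"total line losses: {sum(surplus.values()) * LOSS_FACTOR:.2f} kWh")
```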
Styles APA, Harvard, Vancouver, ISO, etc.