To see the other types of publications on this topic, follow the link: Inc Engineering Information.

Dissertations / Theses on the topic 'Inc Engineering Information'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Inc Engineering Information.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Krugh, Lisa S. "Report on a MTSC Internship at Golder Associates Inc." Miami University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=miami1258382636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

DAVID, STEFANO. "Ontology engineering methodologies for information integration." Doctoral thesis, Università Politecnica delle Marche, 2009. http://hdl.handle.net/11566/242246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gharib, Mohamad. "Information Quality Requirements Engineering: a Goal-based Modeling and Reasoning Approach." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/369026.

Full text
Abstract:
Information Quality (IQ) has always been a growing concern for most organizations, since they depend on information for managing their daily tasks, delivering services to their customers, making important decisions, etc., and relying on low-quality information may negatively influence their overall performance, or even lead to disasters in the case of critical systems (e.g., air traffic management systems, healthcare systems). Although several techniques for dealing with IQ-related problems exist in the literature (e.g., checksums, integrity constraints), most of them propose solutions that address the technical aspects of IQ and seem to be limited in addressing social and organizational aspects. In other words, these techniques do not satisfy the needs of current complex systems, such as socio-technical systems, where humans and their interactions are considered an integral part of the system along with the technical elements (e.g., healthcare systems, smart cities). This introduces the need for analyzing the social and organizational context where the system will eventually operate, since IQ-related problems might manifest themselves in the actors' interactions and dependencies. Moreover, considering IQ requirements from the early phases of system development (the requirements phase) can prevent revising the system to accommodate such needs after deployment, which might be too costly. Despite this, most Requirements Engineering (RE) frameworks and approaches either loosely define, or simply ignore, IQ requirements. To this end, we propose a goal-oriented framework for modeling and reasoning about IQ requirements from the early phases of system development. The proposed framework consists of (i) a modeling language that provides concepts and constructs for modeling IQ requirements; (ii) a set of analysis techniques that support system designers in performing the analyses required to verify the correctness and consistency of the IQ requirements model; (iii) an engineering methodology to assist designers in using the framework for capturing IQ requirements; and (iv) automated tool support, namely the ST-IQ Tool. In addition, we empirically evaluated the framework to demonstrate its applicability, usefulness, and the scalability of its reasoning techniques by successfully applying it to a case study concerning a stock market system.
APA, Harvard, Vancouver, ISO, and other styles
4

ARNALDI, PIETRO. "Engineered biopolymeric systems for tissue engineering and drug delivery applications." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1090662.

Full text
Abstract:
The advent of biodegradable polymers constituted an important development for the realization of modern systems that can be used in biomedicine. Biodegradable polymers are essential when easily workable materials with properties suitable for obtaining an excellent biological response are needed; for this reason they have found applications in a wide range of tissue engineering and drug delivery systems. The main limitation of biopolymers, however, lies in the properties of the materials themselves, which are sometimes too poor for the application field in which they need to be used to provide efficient support or therapy. The systems presented in this thesis aim to provide useful tools not only for the improvement of previously developed polymeric systems but also for the achievement of new objectives in the field of neuronal cultures and controlled drug release. Specifically, chitosan (CHI) has been used as a bulk material to produce engineered neuronal networks both at the two-dimensional level and, in the currently essential passage towards biologically more relevant models, at the 3D one. Gold nanorods (GNRs), thanks to their good interaction with chitosan, have been used to provide thermo-plasmonic properties to a composite ink designed to be printable with a commercial ink-jet printer, with the aim of creating a platform for simple and scalable stimulation of neuronal networks for potential studies to better understand brain diseases (such as epilepsy). Moreover, chitosan was used to manufacture porous microparticles by means of an air-assisted jetting technique and phase-inversion gelation. These systems can be used in various fields, such as tissue engineering, as bottom-up 3D scaffolds, or in drug delivery for local drug release. Precisely in these two directions I worked during my PhD research activity, developing systems that, using chitosan as a base, exploit interactions with other materials to improve the properties of the biopolymer. Interactions between CHI and graphitic materials have been exploited to provide scaffolds, formed by assemblies of neurons and chitosan microspheres, with electrical conductivity, mechanical strength, and degradative resistance in physiological and/or injury conditions. With this in mind, graphite oxide and graphite nanoplatelets were used both as filler and through electrostatic surface interaction, evaluating their different impact on the bulk properties of CHI and on the material-cell interface. Afterwards, with a conservative approach, I used CHI microparticles as a potential carrier for drug release in the gastrointestinal tract. The poor degradative resistance of CHI in harsh conditions made it necessary to apply a surface coating. Poly-(styrene-co-maleic anhydride) (PSMA), a biocompatible synthetic polymer already widely used in drug delivery, made it possible, thanks to its strong grafting reaction with CHI, to obtain a system with a limited burst effect in the release of molecules during the first hour of administration. The overall findings of this thesis support the effort to establish novel bio-fabricated systems as greatly promising tools for tissue engineering and controlled drug delivery. Specifically, the interaction between biopolymers and synthetic polymers can introduce interesting innovations in the field of drug delivery, while interactions between biopolymers and carbon-based materials could be a key point for the coming years in neuro-engineering.
APA, Harvard, Vancouver, ISO, and other styles
5

Gharib, Mohamad. "Information Quality Requirements Engineering: a Goal-based Modeling and Reasoning Approach." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1535/1/phd-thesis.pdf.

Full text
Abstract:
Information Quality (IQ) has always been a growing concern for most organizations, since they depend on information for managing their daily tasks, delivering services to their customers, making important decisions, etc., and relying on low-quality information may negatively influence their overall performance, or even lead to disasters in the case of critical systems (e.g., air traffic management systems, healthcare systems). Although several techniques for dealing with IQ-related problems exist in the literature (e.g., checksums, integrity constraints), most of them propose solutions that address the technical aspects of IQ and seem to be limited in addressing social and organizational aspects. In other words, these techniques do not satisfy the needs of current complex systems, such as socio-technical systems, where humans and their interactions are considered an integral part of the system along with the technical elements (e.g., healthcare systems, smart cities). This introduces the need for analyzing the social and organizational context where the system will eventually operate, since IQ-related problems might manifest themselves in the actors' interactions and dependencies. Moreover, considering IQ requirements from the early phases of system development (the requirements phase) can prevent revising the system to accommodate such needs after deployment, which might be too costly. Despite this, most Requirements Engineering (RE) frameworks and approaches either loosely define, or simply ignore, IQ requirements. To this end, we propose a goal-oriented framework for modeling and reasoning about IQ requirements from the early phases of system development. The proposed framework consists of (i) a modeling language that provides concepts and constructs for modeling IQ requirements; (ii) a set of analysis techniques that support system designers in performing the analyses required to verify the correctness and consistency of the IQ requirements model; (iii) an engineering methodology to assist designers in using the framework for capturing IQ requirements; and (iv) automated tool support, namely the ST-IQ Tool. In addition, we empirically evaluated the framework to demonstrate its applicability, usefulness, and the scalability of its reasoning techniques by successfully applying it to a case study concerning a stock market system.
APA, Harvard, Vancouver, ISO, and other styles
6

Mancini, Martina <1980>. "Rehabilitation Engineering in Parkinson's disease." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1638/2/Mancini_Martina_tesi.pdf.

Full text
Abstract:
Impairment of postural control is a common consequence of Parkinson's disease (PD) that becomes more and more critical with the progression of the disease, in spite of the available medications. Postural instability is one of the most disabling features of PD: it induces difficulties with postural transitions, initiation of movements, gait disorders and the inability to live independently at home, and it is the major cause of falls. Falls are frequent (with over 38% of patients falling each year) and may induce adverse consequences like soft tissue injuries, hip fractures, and immobility due to fear of falling. As the disease progresses, both postural instability and fear of falling worsen, which leads patients with PD to become increasingly immobilized. The main aims of this dissertation are to: 1) detect and assess, in a quantitative way, impairments of postural control in PD subjects, and investigate the central mechanisms that control such motor performance and how these mechanisms are affected by levodopa; 2) develop and validate a protocol, using wearable inertial sensors, to measure postural sway and postural transitions prior to step initiation; 3) find quantitative measures sensitive to impairments of postural control in early stages of PD and quantitative biomarkers of disease progression; and 4) test the feasibility and effects of a recently developed audio-biofeedback system in maintaining balance in subjects with PD. In the first set of studies, we showed how PD reduces functional limits of stability as well as the magnitude and velocity of postural preparation during voluntary forward and backward leaning while standing. Levodopa improves the limits of stability but not the postural strategies used to achieve the leaning. Further, we found a strong relationship between backward voluntary limits of stability and the size of the automatic postural response to backward perturbations in control subjects and in PD subjects ON medication. Such a relation might suggest that the central nervous system presets postural response parameters based on perceived maximum limits, and that this presetting is absent in PD patients OFF medication but restored with levodopa replacement. Furthermore, we investigated how the size of preparatory postural adjustments (APAs) prior to step initiation depends on initial stance width. We found that patients with PD did not scale up the size of their APA with stance width as much as control subjects, so they had much more difficulty initiating a step from a wide stance than from a narrow stance. This result supports the hypothesis that subjects with PD maintain a narrow stance as a compensation for their inability to sufficiently increase the size of their lateral APA to allow speedy step initiation in a wide stance. In the second set of studies, we demonstrated that it is possible to use wearable accelerometers to quantify postural performance during quiet stance and step initiation balance tasks in healthy subjects. We used a model to predict center of pressure displacements associated with accelerations at the upper and lower back and thigh. This approach allows the measurement of balance control without the use of a force platform, outside the laboratory environment. We used wearable accelerometers on a population of early, untreated PD patients, and found that postural control in stance and postural preparation prior to a step are impaired early in the disease, when the typical balance and gait initiation symptoms are not yet clearly manifested. These novel results suggest that technological measures of postural control can be more sensitive than clinical measures. Furthermore, we assessed spontaneous sway and step initiation longitudinally across 1 year in patients with early, untreated PD. We found that changes in trunk sway, and especially movement smoothness, measured as jerk, could be used as an objective measure of PD and its progression. In the third set of studies, we assessed the feasibility of adapting an existing audio-biofeedback device to improve balance control in patients with PD. Preliminary results showed that PD subjects found the system easy to use and helpful, and they were able to correctly follow the audio information when available. Audio-biofeedback improved the properties of trunk sway during quiet stance. Our results have many implications for i) understanding the central mechanisms that control postural motor performance and how these mechanisms are affected by levodopa; ii) the design of innovative protocols for the measurement and remote monitoring of motor performance in the elderly or in subjects with PD; and iii) the development of technologies for improving balance, mobility, and consequently quality of life in patients with balance disorders, such as PD, through augmented biofeedback paradigms.
APA, Harvard, Vancouver, ISO, and other styles
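The abstract above mentions movement smoothness, measured as jerk from wearable trunk accelerometers, as an objective marker of PD and its progression. The following is a minimal, purely illustrative sketch of how a jerk-based smoothness score could be computed from tri-axial acceleration data; the sampling rate, synthetic signals and normalization are assumptions and do not reproduce the exact metric used in the thesis.

```python
# Illustrative (hypothetical) jerk-based smoothness measure for trunk sway.
# Sampling rate, synthetic signals and normalization are assumptions,
# not the thesis's actual protocol.
import numpy as np

def jerk_score(acc, fs=100.0):
    """Mean squared jerk of a tri-axial acceleration signal.

    acc : ndarray of shape (n_samples, 3), trunk acceleration in m/s^2
    fs  : sampling frequency in Hz (assumed value)
    """
    dt = 1.0 / fs
    jerk = np.diff(acc, axis=0) / dt                   # derivative of acceleration
    return float(np.mean(np.sum(jerk ** 2, axis=1)))   # larger value = less smooth sway

# Toy usage with synthetic data: noisier sway yields a higher (worse) score.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 100.0)
smooth_sway = np.column_stack([0.05 * np.sin(2 * np.pi * 0.3 * t)] * 3)
noisy_sway = smooth_sway + 0.02 * rng.standard_normal(smooth_sway.shape)
print(jerk_score(smooth_sway), jerk_score(noisy_sway))
```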
7

Mancini, Martina <1980>. "Rehabilitation Engineering in Parkinson's disease." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1638/.

Full text
Abstract:
Impairment of postural control is a common consequence of Parkinson's disease (PD) that becomes more and more critical with the progression of the disease, in spite of the available medications. Postural instability is one of the most disabling features of PD: it induces difficulties with postural transitions, initiation of movements, gait disorders and the inability to live independently at home, and it is the major cause of falls. Falls are frequent (with over 38% of patients falling each year) and may induce adverse consequences like soft tissue injuries, hip fractures, and immobility due to fear of falling. As the disease progresses, both postural instability and fear of falling worsen, which leads patients with PD to become increasingly immobilized. The main aims of this dissertation are to: 1) detect and assess, in a quantitative way, impairments of postural control in PD subjects, and investigate the central mechanisms that control such motor performance and how these mechanisms are affected by levodopa; 2) develop and validate a protocol, using wearable inertial sensors, to measure postural sway and postural transitions prior to step initiation; 3) find quantitative measures sensitive to impairments of postural control in early stages of PD and quantitative biomarkers of disease progression; and 4) test the feasibility and effects of a recently developed audio-biofeedback system in maintaining balance in subjects with PD. In the first set of studies, we showed how PD reduces functional limits of stability as well as the magnitude and velocity of postural preparation during voluntary forward and backward leaning while standing. Levodopa improves the limits of stability but not the postural strategies used to achieve the leaning. Further, we found a strong relationship between backward voluntary limits of stability and the size of the automatic postural response to backward perturbations in control subjects and in PD subjects ON medication. Such a relation might suggest that the central nervous system presets postural response parameters based on perceived maximum limits, and that this presetting is absent in PD patients OFF medication but restored with levodopa replacement. Furthermore, we investigated how the size of preparatory postural adjustments (APAs) prior to step initiation depends on initial stance width. We found that patients with PD did not scale up the size of their APA with stance width as much as control subjects, so they had much more difficulty initiating a step from a wide stance than from a narrow stance. This result supports the hypothesis that subjects with PD maintain a narrow stance as a compensation for their inability to sufficiently increase the size of their lateral APA to allow speedy step initiation in a wide stance. In the second set of studies, we demonstrated that it is possible to use wearable accelerometers to quantify postural performance during quiet stance and step initiation balance tasks in healthy subjects. We used a model to predict center of pressure displacements associated with accelerations at the upper and lower back and thigh. This approach allows the measurement of balance control without the use of a force platform, outside the laboratory environment. We used wearable accelerometers on a population of early, untreated PD patients, and found that postural control in stance and postural preparation prior to a step are impaired early in the disease, when the typical balance and gait initiation symptoms are not yet clearly manifested. These novel results suggest that technological measures of postural control can be more sensitive than clinical measures. Furthermore, we assessed spontaneous sway and step initiation longitudinally across 1 year in patients with early, untreated PD. We found that changes in trunk sway, and especially movement smoothness, measured as jerk, could be used as an objective measure of PD and its progression. In the third set of studies, we assessed the feasibility of adapting an existing audio-biofeedback device to improve balance control in patients with PD. Preliminary results showed that PD subjects found the system easy to use and helpful, and they were able to correctly follow the audio information when available. Audio-biofeedback improved the properties of trunk sway during quiet stance. Our results have many implications for i) understanding the central mechanisms that control postural motor performance and how these mechanisms are affected by levodopa; ii) the design of innovative protocols for the measurement and remote monitoring of motor performance in the elderly or in subjects with PD; and iii) the development of technologies for improving balance, mobility, and consequently quality of life in patients with balance disorders, such as PD, through augmented biofeedback paradigms.
APA, Harvard, Vancouver, ISO, and other styles
8

Hasan, Md Rashedul. "Semantic Aware Representing and Intelligent Processing of Information in an Experimental domain: the Seismic Engineering Research Case." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367650.

Full text
Abstract:
Seismic Engineering research projects' experiments generate an enormous amount of data that would benefit the researchers and experimentalists of the community if they could be shared together with their semantics. Semantics is the meaning of a data element and of a term alike. For example, the semantics of the term 'experiment' is scientific research performed to conduct a controlled test or investigation. Ontology is a key technique by which one can annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. The development of a domain ontology requires expertise both in the domain to be modeled and in ontology development. This means that people from very different backgrounds, such as Seismic Engineering and Computer Science, should be involved in the process of creating the ontology. With the invention of the Semantic Web, the computing paradigm is experiencing a shift from databases to Knowledge Bases (KBs), in which ontologies play a major role in enabling reasoning power that can make implicit facts explicit, to produce better results for users. To enable an ontology and a dataset to automatically explore relevant ontologies and datasets from external sources, they can be linked to the Linked Open Data (LOD) cloud, an online repository of a large number of interconnected datasets published in RDF. Throughout the past few decades, database technologies have been advancing continuously and showing their potential in dealing with large collections of data, but they were not originally designed to deal with the semantics of data. Managing data with Semantic Web tools offers a number of advantages over database tools, including classifying, matching, mapping and querying data. Hence, we translated our database-based system, which managed the data of Seismic Engineering research projects and experiments, into a KB-based system. In addition, we also linked our ontology and datasets to the LOD cloud. In this thesis, we have been working to address the following issues. To the best of our knowledge, the Semantic Web still lacks an ontology that can be used for representing information related to Seismic Engineering research projects and experiments. Publishing a vocabulary in this domain has largely been overlooked, and no suitable vocabulary has yet been developed in this very domain to model data in RDF. Such a vocabulary is an essential component that provides support to a data engineer when modeling data in RDF for inclusion in the LOD cloud. Ontology integration is another challenge that we had to tackle. To manage the data of a specific field of interest, domain-specific ontologies provide essential support. However, on their own they can hardly be sufficient to also assign meaning to the generic terms that often appear in a data source. This necessitates the use of the integrated knowledge of a generic ontology and the domain-specific one. To address the aforementioned issues, this thesis presents the development of a Seismic Engineering Research Projects and Experiments Ontology (SEPREMO), with a focus on the management of research projects and experiments. We used the DERA methodology for ontology development. The developed ontology was evaluated by a number of domain experts. Data originating from scientific experiments, such as cyclic and pseudodynamic tests, were also published in RDF. We exploited the power of Semantic Web technologies, namely the Jena, Virtuoso and VirtGraph tools, in order to publish, store and manage RDF data, respectively. Finally, a system was developed, with the full integration of ontology, experimental data and tools, to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes. For ontology integration with WordNet, we implemented a semi-automatic facet-based algorithm. We also present an approach for publishing both the ontology and the experimental data into the LOD cloud. In order to model the concepts complementing the vocabulary needed for the experimental data representation, we suitably extended the SEPREMO ontology. Moreover, the work focuses on a technique for interlinking RDF datasets by aligning concepts and entities scattered over the cloud.
APA, Harvard, Vancouver, ISO, and other styles
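The abstract above describes publishing seismic-engineering experimental data in RDF under the SEPREMO ontology, using Jena and Virtuoso. As a rough, hypothetical illustration of what such an RDF description could look like, the sketch below builds a tiny graph with Python's rdflib; the namespace, class and property names are invented for the example and are not the ontology's actual terms.

```python
# Minimal, hypothetical sketch of describing a seismic-engineering experiment
# in RDF. The namespace, class and property names are invented for illustration;
# the thesis itself used Jena/Virtuoso rather than Python's rdflib.
from rdflib import Graph, Namespace, Literal, RDF, XSD

SEP = Namespace("http://example.org/sepremo#")  # assumed namespace, not the real one

g = Graph()
g.bind("sep", SEP)

exp = SEP["experiment_42"]
g.add((exp, RDF.type, SEP.PseudodynamicTest))                  # hypothetical class
g.add((exp, SEP.partOfProject, SEP["project_retrofitting"]))   # hypothetical property
g.add((exp, SEP.peakGroundAcceleration,
       Literal(0.35, datatype=XSD.double)))                    # hypothetical property

# Serialize the small graph in Turtle, ready to be loaded into a triple store.
print(g.serialize(format="turtle"))
```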
9

Palumbo, Pierpaolo <1986>. "Biomedical engineering for healthy ageing. Predictive tools for falls." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6960/1/Palumbo_Pierpaolo_tesi.pdf.

Full text
Abstract:
Falls are common and burdensome accidents among the elderly. About one third of the population aged 65 years or more experiences at least one fall each year. Fall risk assessment is believed to be beneficial for fall prevention. This thesis is about prognostic tools for falls for community-dwelling older adults. We provide an overview of the state of the art. We then take different approaches: we propose a theoretical probabilistic model to investigate some properties of prognostic tools for falls; we present a tool whose parameters were derived from data in the literature; and we train and test a data-driven prognostic tool. Finally, we present some preliminary results on the prediction of falls through features extracted from wearable inertial sensors. Heterogeneity in validation results is expected from theoretical considerations and is observed in empirical data. Differences in study design hinder comparability and collaborative research. In accordance with the multifactorial etiology of falls, assessment of multiple risk factors is needed in order to achieve good predictive accuracy.
APA, Harvard, Vancouver, ISO, and other styles
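The abstract above discusses data-driven prognostic tools that combine multiple fall risk factors. The sketch below is a minimal illustration of that idea, fitting a logistic-regression prognostic model on synthetic data; the chosen risk factors, coefficients and model are assumptions for demonstration and are not the tools developed in the thesis.

```python
# Illustrative, data-driven prognostic sketch for multifactorial fall risk.
# Risk factors, synthetic data and model choice are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical risk factors: age, previous falls, gait-speed deficit, sway amplitude.
X = np.column_stack([
    rng.normal(75, 6, n),        # age (years)
    rng.poisson(0.5, n),         # falls in the previous year
    rng.normal(0.2, 0.1, n),     # gait-speed deficit (m/s below norm)
    rng.normal(1.0, 0.4, n),     # postural sway amplitude (arbitrary units)
])
# Synthetic outcome "fell within one year", generated from an assumed risk model.
logit = -8 + 0.07 * X[:, 0] + 0.8 * X[:, 1] + 2.0 * X[:, 2] + 0.6 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC on held-out data:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```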
10

Palumbo, Pierpaolo <1986>. "Biomedical engineering for healthy ageing. Predictive tools for falls." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6960/.

Full text
Abstract:
Falls are common and burdensome accidents among the elderly. About one third of the population aged 65 years or more experiences at least one fall each year. Fall risk assessment is believed to be beneficial for fall prevention. This thesis is about prognostic tools for falls for community-dwelling older adults. We provide an overview of the state of the art. We then take different approaches: we propose a theoretical probabilistic model to investigate some properties of prognostic tools for falls; we present a tool whose parameters were derived from data in the literature; and we train and test a data-driven prognostic tool. Finally, we present some preliminary results on the prediction of falls through features extracted from wearable inertial sensors. Heterogeneity in validation results is expected from theoretical considerations and is observed in empirical data. Differences in study design hinder comparability and collaborative research. In accordance with the multifactorial etiology of falls, assessment of multiple risk factors is needed in order to achieve good predictive accuracy.
APA, Harvard, Vancouver, ISO, and other styles
11

Hasan, Md Rashedul. "Semantic Aware Representing and Intelligent Processing of Information in an Experimental domain: the Seismic Engineering Research Case." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/369088.

Full text
Abstract:
Seismic Engineering research projects' experiments generate an enormous amount of data that would benefit the researchers and experimentalists of the community if they could be shared together with their semantics. Semantics is the meaning of a data element and of a term alike. For example, the semantics of the term 'experiment' is scientific research performed to conduct a controlled test or investigation. Ontology is a key technique by which one can annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. The development of a domain ontology requires expertise both in the domain to be modeled and in ontology development. This means that people from very different backgrounds, such as Seismic Engineering and Computer Science, should be involved in the process of creating the ontology. With the invention of the Semantic Web, the computing paradigm is experiencing a shift from databases to Knowledge Bases (KBs), in which ontologies play a major role in enabling reasoning power that can make implicit facts explicit, to produce better results for users. To enable an ontology and a dataset to automatically explore relevant ontologies and datasets from external sources, they can be linked to the Linked Open Data (LOD) cloud, an online repository of a large number of interconnected datasets published in RDF. Throughout the past few decades, database technologies have been advancing continuously and showing their potential in dealing with large collections of data, but they were not originally designed to deal with the semantics of data. Managing data with Semantic Web tools offers a number of advantages over database tools, including classifying, matching, mapping and querying data. Hence, we translated our database-based system, which managed the data of Seismic Engineering research projects and experiments, into a KB-based system. In addition, we also linked our ontology and datasets to the LOD cloud. In this thesis, we have been working to address the following issues. To the best of our knowledge, the Semantic Web still lacks an ontology that can be used for representing information related to Seismic Engineering research projects and experiments. Publishing a vocabulary in this domain has largely been overlooked, and no suitable vocabulary has yet been developed in this very domain to model data in RDF. Such a vocabulary is an essential component that provides support to a data engineer when modeling data in RDF for inclusion in the LOD cloud. Ontology integration is another challenge that we had to tackle. To manage the data of a specific field of interest, domain-specific ontologies provide essential support. However, on their own they can hardly be sufficient to also assign meaning to the generic terms that often appear in a data source. This necessitates the use of the integrated knowledge of a generic ontology and the domain-specific one. To address the aforementioned issues, this thesis presents the development of a Seismic Engineering Research Projects and Experiments Ontology (SEPREMO), with a focus on the management of research projects and experiments. We used the DERA methodology for ontology development. The developed ontology was evaluated by a number of domain experts. Data originating from scientific experiments, such as cyclic and pseudodynamic tests, were also published in RDF. We exploited the power of Semantic Web technologies, namely the Jena, Virtuoso and VirtGraph tools, in order to publish, store and manage RDF data, respectively. Finally, a system was developed, with the full integration of ontology, experimental data and tools, to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes. For ontology integration with WordNet, we implemented a semi-automatic facet-based algorithm. We also present an approach for publishing both the ontology and the experimental data into the LOD cloud. In order to model the concepts complementing the vocabulary needed for the experimental data representation, we suitably extended the SEPREMO ontology. Moreover, the work focuses on a technique for interlinking RDF datasets by aligning concepts and entities scattered over the cloud.
APA, Harvard, Vancouver, ISO, and other styles
12

AROYO, ALEXANDER MOIS. "Bringing Human Robot Interaction towards Trust and Social Engineering." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/940915.

Full text
Abstract:
Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives: from industrial robots, through entertainment robots, to social robotics in fields like healthcare or education. An important aspect of social robotics is the human counterpart; there is, therefore, an interaction between humans and robots. Interactions among humans are often taken for granted because, from childhood, we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for a successful incorporation of robots into society. Human robot interaction (HRI) is the domain that works on improving these interactions. HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots can take part, the trust they can generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, an insufficient alignment between the projected trust and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, conmen or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial for the victims. This thesis tries to shed light on the development of trust towards robots, on how this trust could become overtrust, and on how it can be exploited by social engineering techniques. More precisely, the following experiments have been carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework where it gathered personal information from the participants, improved the trust and rapport with them, and, at the end, exploited that trust by manipulating participants into performing a risky action. (ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most of the participants realized that the requests were morally wrong, but eventually they succumbed to the robot's authority while holding the robot morally responsible. (iii) Detective iCub, in which it was evaluated whether the robot could be endowed with the ability to detect when the human partner was lying. Deception detection is an essential skill for social engineers and for professionals in the domains of education, healthcare and security. The robot achieved 75% accuracy in lie detection. Slight differences were also found in the behaviour exhibited by the participants when interacting with a human or a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology in our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and it is possible that this information is transmitted via the internet without the knowledge of the user. This is an important aspect to consider, since a violation of privacy can heavily impact trust. Summarizing, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up different human cues to detect deception, which can help both social engineers and professionals in the human sector. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and the other sectors involved aware that social robots can be highly beneficial for humans, but that they could also be exploited for malicious purposes.
APA, Harvard, Vancouver, ISO, and other styles
13

Lovecchio, Joseph <1987>. "Development of an innovative bioreactor system for human bone tissue engineering." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8676/1/Tesi_Joseph_Lovecchio.pdf.

Full text
Abstract:
In the last decades, significant progress has been made in the development of engineered tissues, thanks to three fundamental components being taken into account: the cells that drive tissue formation, a scaffold serving as a substrate for tissue growth and development, and growth factors and/or biomechanical stimuli to direct the differentiation of cells within the scaffold. In particular, mechanical stimuli are known to play a key role in bone tissue formation and mineralization. Mechanical actuators, namely bioreactor systems, can be used to enhance the in vitro culture steps in the overall cell-based tissue engineering strategy of expanding a stem cell source in vitro to be cultured and differentiated on a three-dimensional scaffold, with the aim of implanting this scaffold in vivo. The purpose of this study is thus to design a stand-alone perfusion/compression bioreactor system. The developed prototype system allows physical stimuli mimicking native loading regimens to be applied. The results obtained with human bone marrow stem cells (hBMSCs) on a 3D graphene/chitosan scaffold indicate that their exposure to a controlled dynamic environment is suitable for driving bone tissue commitment.
APA, Harvard, Vancouver, ISO, and other styles
14

Vergne, Matthieu. "Expert Finding for Requirements Engineering." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/369206.

Full text
Abstract:
Requirements Engineering (RE) revolves around requirements, from their discovery to their satisfaction, passing through their formalisation, modification, and traceability with other project artefacts, like preliminary interviews or the resulting source code. Although it is clear to many that involving knowledgeable people is an important aspect of many RE tasks, no proper focus has been given to Expert Finding (EF) systems, and only a few related works exist in the field. Our work attempts to fill this gap by investigating several dimensions of EF: conceptual, by analysing the literature about expertise and its evaluation; formal, by revising the usual representation of expert rankings; and practical, by designing an EF system. As a result, we provide (i) a metamodel grounded in the Psychology literature to identify requirements for EF systems, (ii) a novel formalisation of expert rankings which solves limitations observed in usual EF measures, (iii) two variants of an EF system which build on usual RE indicators (accessible knowledge and social recognition), and (iv) an enriched evaluation process which investigates more deeply the consistency and correctness of an EF system.
APA, Harvard, Vancouver, ISO, and other styles
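The abstract above refers to the usual representation and measures of expert rankings in Expert Finding. As a small illustration of the kind of rank-agreement measure commonly used in that setting, the sketch below compares a hypothetical system ranking against an assumed reference ranking with Kendall's tau; it is not the novel formalisation proposed in the thesis, and the data are invented.

```python
# Toy comparison of an Expert Finding system's ranking against a reference
# ranking using Kendall's tau, a standard rank-correlation measure.
# Experts and rank positions are invented for illustration.
from scipy.stats import kendalltau

experts = ["ana", "bob", "carla", "dario", "eva"]
reference_rank = {"ana": 1, "bob": 2, "carla": 3, "dario": 4, "eva": 5}  # assumed gold ranking
system_rank = {"ana": 2, "bob": 1, "carla": 3, "dario": 5, "eva": 4}     # hypothetical system output

tau, p_value = kendalltau(
    [reference_rank[e] for e in experts],
    [system_rank[e] for e in experts],
)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.2f})")  # tau = 1.0 means perfect agreement
```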
15

Hasan, Md Rashedul. "Semantic Aware Representing and Intelligent Processing of Information in an Experimental domain: the Seismic Engineering Research Case." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1492/1/Thesis_Hasan.pdf.

Full text
Abstract:
Seismic Engineering research projects' experiments generate an enormous amount of data that would benefit the researchers and experimentalists of the community if they could be shared together with their semantics. Semantics is the meaning of a data element and of a term alike. For example, the semantics of the term 'experiment' is scientific research performed to conduct a controlled test or investigation. Ontology is a key technique by which one can annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. The development of a domain ontology requires expertise both in the domain to be modeled and in ontology development. This means that people from very different backgrounds, such as Seismic Engineering and Computer Science, should be involved in the process of creating the ontology. With the invention of the Semantic Web, the computing paradigm is experiencing a shift from databases to Knowledge Bases (KBs), in which ontologies play a major role in enabling reasoning power that can make implicit facts explicit, to produce better results for users. To enable an ontology and a dataset to automatically explore relevant ontologies and datasets from external sources, they can be linked to the Linked Open Data (LOD) cloud, an online repository of a large number of interconnected datasets published in RDF. Throughout the past few decades, database technologies have been advancing continuously and showing their potential in dealing with large collections of data, but they were not originally designed to deal with the semantics of data. Managing data with Semantic Web tools offers a number of advantages over database tools, including classifying, matching, mapping and querying data. Hence, we translated our database-based system, which managed the data of Seismic Engineering research projects and experiments, into a KB-based system. In addition, we also linked our ontology and datasets to the LOD cloud. In this thesis, we have been working to address the following issues. To the best of our knowledge, the Semantic Web still lacks an ontology that can be used for representing information related to Seismic Engineering research projects and experiments. Publishing a vocabulary in this domain has largely been overlooked, and no suitable vocabulary has yet been developed in this very domain to model data in RDF. Such a vocabulary is an essential component that provides support to a data engineer when modeling data in RDF for inclusion in the LOD cloud. Ontology integration is another challenge that we had to tackle. To manage the data of a specific field of interest, domain-specific ontologies provide essential support. However, on their own they can hardly be sufficient to also assign meaning to the generic terms that often appear in a data source. This necessitates the use of the integrated knowledge of a generic ontology and the domain-specific one. To address the aforementioned issues, this thesis presents the development of a Seismic Engineering Research Projects and Experiments Ontology (SEPREMO), with a focus on the management of research projects and experiments. We used the DERA methodology for ontology development. The developed ontology was evaluated by a number of domain experts. Data originating from scientific experiments, such as cyclic and pseudodynamic tests, were also published in RDF. We exploited the power of Semantic Web technologies, namely the Jena, Virtuoso and VirtGraph tools, in order to publish, store and manage RDF data, respectively. Finally, a system was developed, with the full integration of ontology, experimental data and tools, to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes. For ontology integration with WordNet, we implemented a semi-automatic facet-based algorithm. We also present an approach for publishing both the ontology and the experimental data into the LOD cloud. In order to model the concepts complementing the vocabulary needed for the experimental data representation, we suitably extended the SEPREMO ontology. Moreover, the work focuses on a technique for interlinking RDF datasets by aligning concepts and entities scattered over the cloud.
APA, Harvard, Vancouver, ISO, and other styles
16

Siena, Alberto. "Engineering Law-Compliant Requirements: the Nomos Framework." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/369218.

Full text
Abstract:
In modern societies, both business and private life are deeply pervaded by software and information systems. Using software has extended human capabilities, allowing information to cross physical and ethical barriers. To handle the dangers of misuse, governments are increasingly laying down new laws and introducing obligations, rights and responsibilities concerned with the use of software. As a consequence, laws are assuming a steering role in the specification of software requirements, which must be compliant in order to avoid fines and penalties. This work proposes a model-based approach to the problem of law compliance of software requirements. It aims at extending state-of-the-art goal-oriented requirements engineering techniques with the capability to argue about compliance, through the use and analysis of models. It is based on a language for modelling legal prescriptions. On top of the language, compliance can be defined as a condition that depends on a set of properties. Such a condition is achieved through an iterative modelling process. Specifically, we investigated the nature of legal prescriptions to capture their conceptual language. From the jurisprudence literature, we adopted a taxonomy of legal concepts, which was elaborated and translated into a conceptual meta-model. Moreover, this meta-model was integrated with the meta-model of a goal-oriented modelling language for requirements engineering, in order to provide a common legal-intentional meta-model. Requirements models built with the proposed language consist of graphs, which ultimately can be verified automatically. Compliance then amounts to a set of properties that the graph must have. The compliance condition gains relevance in two cases: firstly, when a requirements model has already been developed and needs to be reconciled with a set of laws; secondly, when requirements have to be modelled from scratch and need to be compliant. In both cases, compliance results from a design process. The proposed modelling language, as well as the compliance condition and the corresponding design process, have been applied to two case studies. The obtained results confirm the validity of the approach and point out interesting research directions for the future.
APA, Harvard, Vancouver, ISO, and other styles
17

Morales, Ramirez Itzel. "Exploiting Online User Feedback in Requirements Engineering." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367785.

Full text
Abstract:
User feedback is mainly defined as an information source for evaluating customers' satisfaction with a given good, service or software application. Due to the wide diffusion of the Internet and the proliferation of mobile devices, users access a myriad of software services and applications, at any time and in any place. In this context, users can provide feedback on their experience in using software through dedicated software applications or web forms. This online user feedback is a powerful source of information for improving the software service or application. Specifically, in software engineering, user feedback is recognized as a source of requests for change in a system, so it can contribute to the evolution of software systems. Indeed, user feedback is gaining more attention from the requirements engineering research community, and dedicated buzzwords have been introduced to refer to research studies in RE, i.e. mass RE and crowd RE. Building on this premise, the possibility of exploiting user feedback is worth investigating in requirements engineering, by addressing open challenges in the collection as well as the analysis of online feedback. The research work described in this thesis starts with a state-of-the-art literature analysis, which revealed that the definition of user feedback as an artifact, as well as the characterization and understanding of its process of elaboration and communication, were still unexplored, especially from the requirements engineering perspective. We adopted a multidisciplinary approach by borrowing concepts and techniques from ontologies, philosophy of language, natural language processing, requirements engineering and human-computer interaction. The main research contributions are: an ontology of user feedback, the characterization of user feedback as speech acts in order to apply a semantic analysis, and the proposal of a new way of gathering and filtering user feedback by applying an argumentation framework.
APA, Harvard, Vancouver, ISO, and other styles
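The abstract above characterizes online user feedback as speech acts to support semantic analysis. The toy sketch below tags feedback messages with coarse categories (request, problem report, praise) using simple keyword rules; the categories, rules and examples are invented and far simpler than the analysis proposed in the thesis, serving only to make the general idea concrete.

```python
# Toy, rule-based tagging of online user feedback with coarse speech-act-like
# categories. The categories, keyword patterns and example messages are invented
# and do not reproduce the semantic analysis proposed in the thesis.
import re

RULES = [
    ("request", r"\b(please add|could you|would be (nice|great)|i wish|feature request)\b"),
    ("problem_report", r"\b(crash(es|ed)?|bug|does(n't| not) work|error|freez(e|es|ing))\b"),
    ("praise", r"\b(love|great|awesome|excellent|well done)\b"),
]

def tag_feedback(text: str) -> str:
    """Return the first matching category, or 'other' if no rule fires."""
    lowered = text.lower()
    for label, pattern in RULES:
        if re.search(pattern, lowered):
            return label
    return "other"

for feedback in [
    "Please add an offline mode, it would be great for commuting.",
    "The app crashes every time I open the settings page.",
    "Love the new dashboard, well done!",
]:
    print(tag_feedback(feedback), "->", feedback)
```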
18

Li, Fenglin. "Desiree - a Refinement Calculus for Requirements Engineering." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/368013.

Full text
Abstract:
The requirements elicited from stakeholders suffer from various afflictions, including informality, incompleteness, ambiguity, vagueness, inconsistencies, and more. It is the task of requirements engineering (RE) processes to derive from these an eligible (formal, complete enough, unambiguous, consistent, measurable, satisfiable, modifiable and traceable) requirements specification that truly captures stakeholder needs. We propose Desiree, a refinement calculus for systematically transforming stakeholder requirements into an eligible specification. The core of the calculus is a rich set of requirements operators that iteratively transform stakeholder requirements by strengthening or weakening them, thereby reducing incompleteness, removing ambiguities and vagueness, eliminating unattainability and conflicts, turning them into an eligible specification. The framework also includes an ontology for modeling and classifying requirements, a description-based language for representing requirements, as well as a systematic method for applying the concepts and operators in order to engineer an eligible specification from stakeholder requirements. In addition, we define the semantics of the requirements concepts and operators, and develop a graphical modeling tool in support of the entire framework. To evaluate our proposal, we have conducted a series of empirical evaluations, including an ontology evaluation by classifying a large public requirements set, a language evaluation by rewriting the large set of requirements using our description-based syntax, a method evaluation through a realistic case study, and an evaluation of the entire framework through three controlled experiments. The results of our evaluations show that our ontology, language, and method are adequate in capturing requirements in practice, and offer strong evidence that with sufficient training, our framework indeed helps people conduct more effective requirements engineering.
APA, Harvard, Vancouver, ISO, and other styles
19

CAVO, MARTA MARIA. "Cancer Tissue Engineering: development of new 3D models and technologies to support cancer research." Doctoral thesis, Università degli studi di Genova, 2018. http://hdl.handle.net/11567/930263.

Full text
Abstract:
The research activity of this thesis builds on the observation that appropriate models reproducing the in vivo tumor microenvironment are essential for improving cancer biology knowledge and for testing new anticancer compounds. Animal models have proven not to be entirely compatible with the human system, and the success rates between animal and human studies are still unsatisfactory. On the other hand, 2D cell cultures fail to reproduce some aspects of the tumor system. These limitations carry significant weight especially during the screening of novel antitumor drugs, as it has been demonstrated that cells are less sensitive to treatments when in contact with their microenvironment. To obtain the same tumor cell inhibition levels observed in vivo, the culture environment has to reflect the 3D natural environment. Natural and synthetic hydrogels have shown successful outcomes in mimicking the ECM environment. During this PhD, I developed different gel-based scaffolds to be used as substrates for the culture of breast cancer cells. In detail, I developed different gels for low and highly aggressive cancer cell lines (i.e. MCF-7 and MDA-MB-231), obtaining significant results regarding the reproduction of key features normally present in the in vivo environment. Considering the importance of the metastasis process in breast cancer evolution, I then focused on a new set-up for the observation of cancer cell motility and invasion. In particular, I combined a bioreactor-based bioengineering approach with single-cell analysis of Circulating Tumor Cells (CTCs). This part of the work was carried out at the Department of Biomedicine of the University of Basel (CH), which, among its equipment, has a CellCelector machine for single-cell analysis. At the end of this work, I provided a proof of concept that the approach can work, as well as evidence that the cells can be extracted from the device and used for molecular analysis.
APA, Harvard, Vancouver, ISO, and other styles
20

Vergne, Matthieu. "Expert Finding for Requirements Engineering." Doctoral thesis, University of Trento, 2016. http://eprints-phd.biblio.unitn.it/1703/1/vergne_final_thesis.pdf.

Full text
Abstract:
Requirements Engineering (RE) revolves around requirements, from their discovery to their satisfaction, passing through their formalisation, modification, and traceability with other project artefacts, like preliminary interviews or the resulting source code. Although it is clear to many that involving knowledgeable people is an important aspect of many RE tasks, no proper focus has been given to Expert Finding (EF) systems, and only a few related works exist in the field. Our work attempts to fill this gap by investigating several dimensions of EF: conceptual, by analysing the literature about expertise and its evaluation; formal, by revising the usual representation of expert rankings; and practical, by designing an EF system. As a result, we provide (i) a metamodel grounded in the Psychology literature to identify requirements for EF systems, (ii) a novel formalisation of expert rankings which solves limitations observed in usual EF measures, (iii) two variants of an EF system which build on usual RE indicators (accessible knowledge and social recognition), and (iv) an enriched evaluation process which investigates more deeply the consistency and correctness of an EF system.
APA, Harvard, Vancouver, ISO, and other styles
21

Salnitri, Mattia. "Secure Business Process Engineering: a socio-technical approach." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/368502.

Full text
Abstract:
Dealing with security is a central activity for today's organizations. Security breaches impact the activities executed in organizations, preventing them from executing their business processes and, therefore, causing millions of dollars in losses. Security-by-design principles underline the importance of considering security as early as the design of organizations, in order to avoid expensive fixes during later phases of their lifecycle. However, the design of secure business processes cannot take into account only security aspects of the sequences of activities. Security reports in recent years demonstrate that security breaches are more and more often caused by attacks that take advantage of social vulnerabilities. Therefore, those aspects should be analyzed in order to design business processes that are robust to technical and social attacks. Still, the mere design of business processes does not guarantee their correct execution: such business processes have to be correctly implemented and performed. We propose SEcure Business process Engineering (SEBE), a method that considers social and organizational aspects for designing and implementing secure business processes. SEBE provides an iterative and incremental process and a set of verification and transformation rules, supported by a software tool, that integrate the different modeling languages used to specify social security aspects, business processes and the implementation code. In particular, SEBE provides a new modeling language which permits the specification of business processes with security concepts and complex security constraints. We evaluated the effectiveness of SEBE for engineering secure business processes with two empirical evaluations and by applying the method to three real scenarios.
APA, Harvard, Vancouver, ISO, and other styles
22

Asprino, Luigi <1988&gt. "Engineering Background Knowledge for Social Robots." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/9020/1/asprino_luigi_tesi.pdf.

Full text
Abstract:
Social robots are embodied agents that continuously perform knowledge-intensive tasks involving several kinds of information coming from different heterogeneous sources. Providing a framework for engineering robots' knowledge raises several problems, such as identifying sources of information and modeling solutions suitable for robots' activities, integrating knowledge coming from different sources, evolving this knowledge with information learned during robots' activities, grounding perceptions in robots' knowledge, assessing robots' knowledge with respect to humans' knowledge, and so on. In this thesis we investigated the feasibility and benefits of engineering the background knowledge of Social Robots with a framework based on Semantic Web technologies and Linked Data. This research has been supported and guided by a case study that provided a proof of concept through a prototype tested in a real socially assistive context.
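As a loose illustration of the Semantic Web style of knowledge access the thesis builds on, the sketch below stores a few facts in an rdflib graph and answers a SPARQL query a robot might need. The vocabulary (ex:locatedIn, ex:partOf) is made up; the thesis relies on its own ontologies and Linked Data sources.

```python
# Minimal sketch of querying a robot's background knowledge base with rdflib.
# The vocabulary is invented for illustration purposes only.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/robot#")
kb = Graph()
kb.add((EX.cup, EX.locatedIn, EX.kitchen))
kb.add((EX.kitchen, EX.partOf, EX.apartment))

# Ask where an object can be found, so the robot can ground a user request.
results = kb.query("""
    PREFIX ex: <http://example.org/robot#>
    SELECT ?place WHERE { ex:cup ex:locatedIn ?place }
""")
for row in results:
    print(row.place)
```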
APA, Harvard, Vancouver, ISO, and other styles
23

Siena, Alberto. "Engineering Law-Compliant Requirements: the Nomos Framework." Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/230/1/AlbertoSiena-PhD-Thesis.pdf.

Full text
Abstract:
In modern societies, both business and private life are deeply pervaded by software and information systems. Using software has extended human capabilities, allowing information to cross physical and ethical barriers. To handle the dangers of misuse, governments are increasingly laying down new laws and introducing obligations, rights and responsibilities concerned with the use of software. As a consequence, laws are assuming a steering role in the specification of software requirements, which must be compliant to avoid fines and penalties. This work proposes a model-based approach to the problem of law compliance of software requirements. It aims at extending state-of-the-art goal-oriented requirements engineering techniques with the capability to argue about compliance, through the use and analysis of models. It is based on a language for modelling legal prescriptions. Upon this language, compliance can be defined as a condition that depends on a set of properties. Such a condition is achieved through an iterative modelling process. Specifically, we investigated the nature of legal prescriptions to capture their conceptual language. From the jurisprudence literature, we adopted a taxonomy of legal concepts, which has been elaborated and translated into a conceptual meta-model. Moreover, this meta-model was integrated with the meta-model of a goal-oriented modelling language for requirements engineering, in order to provide a common legal-intentional meta-model. Requirements models built with the proposed language consist of graphs, which ultimately can be verified automatically. Compliance then amounts to a set of properties the graph must have. The compliance condition gains relevance in two cases: first, when a requirements model has already been developed and needs to be reconciled with a set of laws; second, when requirements have to be modelled from scratch and need to be compliant. In both cases, compliance results from a design process. The proposed modelling language, as well as the compliance condition and the corresponding design process, have been applied to two case studies. The obtained results confirm the validity of the approach, and point out interesting research directions for the future.
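To make the idea of compliance as properties of a graph concrete, the sketch below checks a toy property: every duty extracted from the law must be realized by at least one goal in the requirements model. The data structure and property are invented for illustration and are not the actual Nomos notation.

```python
# Hypothetical sketch of "compliance as properties of a requirements graph":
# every legal duty must be realized by at least one goal in the model.
# This illustrates the idea only; it is not the Nomos property set.

model = {
    "duties": ["notify data breach", "obtain consent"],
    "goals": ["manage accounts", "send breach notification"],
    "realizes": [("send breach notification", "notify data breach")],
}

def uncovered_duties(model):
    realized = {duty for goal, duty in model["realizes"]}
    return [d for d in model["duties"] if d not in realized]

print(uncovered_duties(model))  # ['obtain consent'] -> model not yet compliant
```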
APA, Harvard, Vancouver, ISO, and other styles
24

Morales, Ramirez Itzel. "Exploiting Online User Feedback in Requirements Engineering." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1581/2/MoralesRamirezItzel.pdf.

Full text
Abstract:
User feedback is mainly defined as an information source for evaluating customers' satisfaction with a given good, service or software application. Due to the wide diffusion of the Internet and the proliferation of mobile devices, users access a myriad of software services and applications, at any time and in any place. In this context users can provide feedback on their experience in using software, through dedicated software applications or web forms. This online user feedback is a powerful source of information for improving the software service or application. Specifically, in software engineering, user feedback is recognized as a source of requests for change in a system, so it can contribute to the evolution of software systems. Indeed, user feedback is gaining more attention from the requirements engineering research community, and dedicated buzzwords have been introduced to refer to research studies in RE, i.e. mass RE and crowd RE. Arguing from this premise, the possibility of exploiting user feedback is worth investigating in requirements engineering, by addressing open challenges in the collection as well as in the analysis of online feedback. The research work described in this thesis starts with a state-of-the-art literature analysis, which revealed that the definition of user feedback as an artifact, as well as the characterization and understanding of its process of elaboration and communication, were still unexplored, especially from the requirements engineering perspective. We adopted a multidisciplinary approach by borrowing concepts and techniques from ontologies, philosophy of language, natural language processing, requirements engineering and human-computer interaction. The main research contributions are: an ontology of user feedback, the characterization of user feedback as speech acts for applying a semantic analysis, and the proposal of a new way of gathering and filtering user feedback by applying an argumentation framework.
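As a crude stand-in for the semantic analysis of feedback as speech acts, the sketch below tags feedback sentences with speech-act-like labels using keyword cues. The rules and labels are invented; the thesis applies a much richer ontology- and argumentation-based analysis.

```python
# Naive, purely illustrative tagger that assigns a speech-act-like label to a
# feedback sentence using keyword cues.
import re

RULES = [
    ("request", r"\b(please|could you|would be nice|add|support)\b"),
    ("complaint", r"\b(crash\w*|bug|broken|slow|annoying|fail\w*)\b"),
    ("praise", r"\b(love|great|awesome|thanks|well done)\b"),
]

def tag_feedback(sentence):
    for label, pattern in RULES:
        if re.search(pattern, sentence.lower()):
            return label
    return "other"

for s in ["Please add a dark mode", "The app crashes on startup", "Love the new UI"]:
    print(s, "->", tag_feedback(s))
```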
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Fenglin. "Desiree - a Refinement Calculus for Requirements Engineering." Doctoral thesis, University of Trento, 2016. http://eprints-phd.biblio.unitn.it/1642/1/Thesis.Feng-Lin.Li.pdf.

Full text
Abstract:
The requirements elicited from stakeholders suffer from various afflictions, including informality, incompleteness, ambiguity, vagueness, inconsistencies, and more. It is the task of requirements engineering (RE) processes to derive from these an eligible (formal, complete enough, unambiguous, consistent, measurable, satisfiable, modifiable and traceable) requirements specification that truly captures stakeholder needs. We propose Desiree, a refinement calculus for systematically transforming stakeholder requirements into an eligible specification. The core of the calculus is a rich set of requirements operators that iteratively transform stakeholder requirements by strengthening or weakening them, thereby reducing incompleteness, removing ambiguities and vagueness, eliminating unattainability and conflicts, turning them into an eligible specification. The framework also includes an ontology for modeling and classifying requirements, a description-based language for representing requirements, as well as a systematic method for applying the concepts and operators in order to engineer an eligible specification from stakeholder requirements. In addition, we define the semantics of the requirements concepts and operators, and develop a graphical modeling tool in support of the entire framework. To evaluate our proposal, we have conducted a series of empirical evaluations, including an ontology evaluation by classifying a large public requirements set, a language evaluation by rewriting the large set of requirements using our description-based syntax, a method evaluation through a realistic case study, and an evaluation of the entire framework through three controlled experiments. The results of our evaluations show that our ontology, language, and method are adequate in capturing requirements in practice, and offer strong evidence that with sufficient training, our framework indeed helps people conduct more effective requirements engineering.
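A toy rendering of the refinement idea, under invented names, is sketched below: a vague requirement is strengthened by attaching a metric and a bound, and weakened by relaxing the bound. The Desiree operators and requirement ontology are considerably richer than this.

```python
# Toy illustration of refinement operators in the spirit described above.
# Operator names and the requirement structure are invented for the example.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Requirement:
    text: str
    metric: str = None       # what to measure
    threshold: float = None  # acceptable bound

def make_measurable(req, metric, threshold):
    """Strengthen: attach a metric and bound, removing vagueness."""
    return replace(req, metric=metric, threshold=threshold)

def weaken_threshold(req, new_threshold):
    """Weaken: relax the bound, e.g. to resolve unsatisfiability."""
    return replace(req, threshold=new_threshold)

r0 = Requirement("Search results shall be returned quickly")
r1 = make_measurable(r0, metric="response time (s)", threshold=1.0)
r2 = weaken_threshold(r1, 2.0)
print(r2)
```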
APA, Harvard, Vancouver, ISO, and other styles
26

Paja, Elda. "STS: a Security Requirements Engineering methodology for socio-technical Systems." Doctoral thesis, Università degli studi di Trento, 2014. https://hdl.handle.net/11572/368991.

Full text
Abstract:
Today’s software systems are situated within larger socio-technical systems, wherein they interact — by exchanging data and delegating tasks — with other technical components, humans, and organisations. The components (actors) of a socio-technical system are autonomous and loosely controllable. Therefore, when interacting, they may endanger security by, for example, disclosing confidential information, breaking the integrity of others’ data, and relying on untrusted third parties, among others. The design of a secure software system cannot disregard its collocation within a socio-technical context, where security is threatened not only by technical attacks, but also by social and organisational threats. This thesis proposes a tool-supported model-driven methodology, namely STS, for conducting security requirements engineering for socio-technical systems. In STS, security requirements are specified — using the STS-ml requirements modelling language — as social contracts that constrain the social interactions and the responsibilities of the actors in the socio-technical system. A particular feature of STS-ml is that it clearly distinguishes information from its representation — in terms of documents, and separates information flow from the permissions or prohibitions actors specify to others over their interactions. This separation allows STS-ml to support a rich set of security requirements. The requirements models of STS-ml have a formal semantics which enables automated reasoning for detecting possible conflicts among security requirements as well as conflicts between security requirements and actors’ business policies — how they intend to achieve their objectives. Importantly, automated reasoning techniques are proposed to calculate the impact of social threats on actors’ information and their objectives. Modelling and reasoning capabilities are supported by STS-Tool. The effectiveness of STS methodology in modelling, and ultimately specifying security requirements for various socio-technical systems, is validated with the help of case studies from different domains. We assess the scalability for the implementation of the conflict identification algorithms conducting a scalability study using data from one of the case studies. Finally, we report on the results from user-oriented empirical evaluations of the STS methodology, the STS-ml modelling language, and the STS-Tool. These studies have been conducted over the past three years starting from the initial proposal of the methodology, language, and tool, in order to improve them after each evaluation.
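A minimal sketch of the kind of conflict the STS reasoning detects is given below: an actor's business policy that transmits a document clashes with a non-disclosure requirement over the same document. The representation is invented and much simpler than the STS-ml semantics.

```python
# Illustrative-only conflict check: a policy that transmits a document
# conflicts with a non-disclosure requirement on that document.
# The real STS-ml semantics (goals, delegations, authorisations) is richer.

security_requirements = [
    {"type": "non-disclosure", "actor": "hospital", "document": "patient record"},
]
business_policies = [
    {"actor": "hospital", "action": "transmit", "document": "patient record",
     "to": "insurance company"},
    {"actor": "hospital", "action": "read", "document": "patient record", "to": None},
]

def find_conflicts(requirements, policies):
    conflicts = []
    for req in requirements:
        for pol in policies:
            if (req["type"] == "non-disclosure" and pol["action"] == "transmit"
                    and pol["actor"] == req["actor"]
                    and pol["document"] == req["document"]):
                conflicts.append((req, pol))
    return conflicts

print(find_conflicts(security_requirements, business_policies))
```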
APA, Harvard, Vancouver, ISO, and other styles
27

PAGLIARI, LORENZO. "Performance engineering of Cyber-Physical Systems." Doctoral thesis, Gran Sasso Science Institute, 2020. http://hdl.handle.net/20.500.12571/14181.

Full text
Abstract:
Cyber-Physical Systems (CPS) are a consequence of today's technological progress. They were born as an evolution of a very large family of systems, including embedded systems, complex systems, systems of systems and many others. Technological improvements such as greater computational power, more powerful telecommunication architectures and more efficient software and algorithms underlie the smart features that have become so widespread in recent years. CPS are complex systems, and the evaluation of their performance is a crucial task. Waiting for the implementation to assess system performance is quite costly, whereas at the development stage refactoring actions may have lower costs. Moreover, CPS, due to their nature, have to work while satisfying different types of requirements, e.g., safety-critical or real-time constraints. When the system is complex and refactoring actions are costly if performed at the implementation stage, the capability to evaluate system performance in the early stages of the development process is very important. CPS have raised a large set of challenges in many activities of the engineering and design process. The communication and collaboration features among system agents and components have made it difficult to apply the well-established design and performance evaluation methodologies of previous system generations, such as embedded systems. In the literature, there are formalisms and tools specialised in modelling and evaluating software, hardware or physical components, as well as communication infrastructures and protocols, and so forth. However, a modelling formalism for software has difficulties in modelling hardware features such as laws of physics, and a simulation tool for software struggles to accurately simulate mechanical, hydraulic or electric dynamics. Vice versa, the situation is very similar: formalisms and tools for hardware and physical components have little or no capability of modelling and evaluating software elements. This doctoral work aims to investigate new possibilities regarding these research challenges. In particular, the main topic of this research work is to present a novel methodology for the performance engineering of Cyber-Physical Systems, exploiting and leveraging, where possible, state-of-the-art ideas and concepts as well as well-established formalisms, tools, and techniques. The main goals of this work are to be a step forward towards innovation in engineering CPS and to propose a novel methodology that would help to establish future innovative performance engineering and model-based approaches for CPS.
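The mismatch described above can be made concrete with a toy co-simulation: a continuous physical quantity is advanced with a forward-Euler step while a software controller acts on it only after a processing delay. All numbers and the model are invented; the point is only that the cyber and physical parts must be evaluated together.

```python
# Toy co-simulation sketch: a tank temperature follows a simple ODE while a
# software controller's commands take effect only after a processing delay.
# Parameters and model are invented for illustration.

def simulate(steps=200, dt=0.1, processing_delay=0.5):
    temp, heater_on, t = 20.0, False, 0.0
    pending = []  # (time the decision becomes effective, heater command)
    for _ in range(steps):
        # Physical part: forward-Euler step of dT/dt = heating - heat loss.
        temp += dt * ((5.0 if heater_on else 0.0) - 0.1 * (temp - 20.0))
        # Cyber part: the controller samples now, but its command becomes
        # effective only after a processing/communication delay.
        pending.append((t + processing_delay, temp < 40.0))
        while pending and pending[0][0] <= t:
            heater_on = pending.pop(0)[1]
        t += dt
    return temp

print(round(simulate(), 2))  # a longer delay lets the temperature overshoot more
```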
APA, Harvard, Vancouver, ISO, and other styles
28

Salnitri, Mattia. "Secure Business Process Engineering: a socio-technical approach." Doctoral thesis, University of Trento, 2016. http://eprints-phd.biblio.unitn.it/1851/1/MattiaSalnitri_thesisFinal.pdf.

Full text
Abstract:
Dealing with security is a central activity for today's organizations. Security breaches affect the activities executed in organizations, preventing them from executing their business processes and, therefore, causing millions of dollars of losses. Security-by-design principles underline the importance of considering security as early as the design of organizations, to avoid expensive fixes during later phases of their lifecycle. However, the design of secure business processes cannot take into account only security aspects of the sequences of activities. Security reports of the last years demonstrate that security breaches are more and more often caused by attacks that take advantage of social vulnerabilities. Therefore, those aspects should be analyzed in order to design business processes that are robust to technical and social attacks. Still, the mere design of business processes does not guarantee their correct execution: such business processes have to be correctly implemented and performed. We propose SEcure Business process Engineering (SEBE), a method that considers social and organizational aspects for designing and implementing secure business processes. SEBE provides an iterative and incremental process and a set of verification and transformation rules, supported by a software tool, that integrate the different modeling languages used to specify social security aspects, business processes and the implementation code. In particular, SEBE provides a new modeling language which permits specifying business processes with security concepts and complex security constraints. We evaluated the effectiveness of SEBE for engineering secure business processes with two empirical evaluations and applications of the method to three real scenarios.
APA, Harvard, Vancouver, ISO, and other styles
29

Russo, Daniel <1988&gt. "Socio–Technical Software Engineering: a Quality–Architecture–Process Perspective." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8782/1/FINAL-PhD_Russo_757115.pdf.

Full text
Abstract:
This dissertation provides a model, which focuses on Quality, Architecture, and Process aspects, to manage software development lifecycles in a sustainable way. Here, sustainability means a context-aware approach to IT, which considers all relevant socio-technical units of analysis. Both social (e.g., at the level of the stakeholder community, organization, team, individual) and technical (e.g., technological environment, coding standards, language) dimensions play a key role in developing IT systems which respond to contingent needs and may implement future requirements in a flexible manner. We used different research methods and analyzed the problem from several perspectives, in a pragmatic way, to deliver useful insights both to the research and practitioner communities. The Software Quality, Architecture, and Process (SQuAP) model highlights the key critical factors for developing systems in a sustainable way. The model was first induced and then deduced from a longitudinal study of the financial sector. To support the model, SQuAP-ont, an OWL ontology, was developed as a managerial and assessment tool. A real-world case study within a mission-critical environment shows how these dimensions are critical for the development of IT applications. Relevant concerns of IT managers were also covered, with reference to software reuse and contracting problems. Finally, a long-term contribution for the educational community presents actionable teaching styles and models to train future professionals to act in a Cooperative Thinking fashion.
APA, Harvard, Vancouver, ISO, and other styles
30

Paja, Elda. "STS: a Security Requirements Engineering methodology for socio-technical Systems." Doctoral thesis, University of Trento, 2014. http://eprints-phd.biblio.unitn.it/1312/1/Paja-May2014.pdf.

Full text
Abstract:
Today’s software systems are situated within larger socio-technical systems, wherein they interact — by exchanging data and delegating tasks — with other technical components, humans, and organisations. The components (actors) of a socio-technical system are autonomous and loosely controllable. Therefore, when interacting, they may endanger security by, for example, disclosing confidential information, breaking the integrity of others’ data, and relying on untrusted third parties, among others. The design of a secure software system cannot disregard its collocation within a socio-technical context, where security is threatened not only by technical attacks, but also by social and organisational threats. This thesis proposes a tool-supported model-driven methodology, namely STS, for conducting security requirements engineering for socio-technical systems. In STS, security requirements are specified — using the STS-ml requirements modelling language — as social contracts that constrain the social interactions and the responsibilities of the actors in the socio-technical system. A particular feature of STS-ml is that it clearly distinguishes information from its representation — in terms of documents, and separates information flow from the permissions or prohibitions actors specify to others over their interactions. This separation allows STS-ml to support a rich set of security requirements. The requirements models of STS-ml have a formal semantics which enables automated reasoning for detecting possible conflicts among security requirements as well as conflicts between security requirements and actors’ business policies — how they intend to achieve their objectives. Importantly, automated reasoning techniques are proposed to calculate the impact of social threats on actors’ information and their objectives. Modelling and reasoning capabilities are supported by STS-Tool. The effectiveness of STS methodology in modelling, and ultimately specifying security requirements for various socio-technical systems, is validated with the help of case studies from different domains. We assess the scalability for the implementation of the conflict identification algorithms conducting a scalability study using data from one of the case studies. Finally, we report on the results from user-oriented empirical evaluations of the STS methodology, the STS-ml modelling language, and the STS-Tool. These studies have been conducted over the past three years starting from the initial proposal of the methodology, language, and tool, in order to improve them after each evaluation.
APA, Harvard, Vancouver, ISO, and other styles
31

Bavota, Gabriele. "Using Structural and Semantic Information to Support Software Refactoring." Doctoral thesis, Universita degli studi di Salerno, 2015. http://hdl.handle.net/10556/2022.

Full text
Abstract:
2011 - 2012
In the software life cycle, the internal structure of the system undergoes continuous modifications. These changes push the source code away from its original design, often reducing its quality. In such cases, refactoring techniques can be applied to improve the design quality of the system. Approaches existing in the literature mainly exploit structural relationships present in the source code, e.g., method calls, to support the software engineer in identifying refactoring solutions. However, semantic information is also embedded in the source code by the developers, e.g., the terms used in the comments. This research investigates the usefulness of combining structural and semantic information to support software refactoring. In particular, a framework of approaches supporting different refactoring operations, i.e., Extract Class, Move Method, Extract Package, and Move Class, is presented. All the approaches have been empirically evaluated. Particular attention has been devoted to evaluations conducted with software developers, to understand whether the refactoring operations suggested by the proposed approaches are meaningful from their point of view. [edited by Author]
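As an illustration of combining the two kinds of information (not the thesis's exact measures), the sketch below scores the similarity of two methods by mixing call-set overlap (structural) with cosine similarity over identifier/comment terms (semantic).

```python
# Illustrative combination of structural and semantic similarity between two
# methods. Structural: Jaccard overlap of called methods. Semantic: cosine
# similarity over the terms found in identifiers and comments.
import math
from collections import Counter

def cosine(terms_a, terms_b):
    a, b = Counter(terms_a), Counter(terms_b)
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def jaccard(calls_a, calls_b):
    union = calls_a | calls_b
    return len(calls_a & calls_b) / len(union) if union else 0.0

def method_similarity(m1, m2, w_struct=0.5, w_sem=0.5):
    return (w_struct * jaccard(m1["calls"], m2["calls"])
            + w_sem * cosine(m1["terms"], m2["terms"]))

m1 = {"calls": {"getTotal", "applyDiscount"}, "terms": ["order", "total", "discount"]}
m2 = {"calls": {"applyDiscount", "logEvent"}, "terms": ["discount", "order", "price"]}
print(round(method_similarity(m1, m2), 2))
```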
XI n.s.
APA, Harvard, Vancouver, ISO, and other styles
32

Qureshi, Nauman Ahmed. "Requirements Engineering for Self-Adaptive Software: Bridging the Gap Between Design-Time and Run-Time." Doctoral thesis, Università degli studi di Trento, 2011. https://hdl.handle.net/11572/367869.

Full text
Abstract:
Self-Adaptive Software systems (SAS) adapt at run-time in response to changes in users' needs, operating contexts, and resource availability, requiring minimal to no involvement of system administrators. The ever-greater reliance on software with qualities such as flexibility and easy integrability, and the associated increase in design and maintenance effort, is raising interest in research on SAS. Taking the perspective of Requirements Engineering (RE), we investigate in this thesis how RE for SAS departs from more conventional RE for non-adaptive systems. The thesis has two objectives. First, to define a systematic approach to support the analyst in engineering requirements for SAS at design-time, which starts at early requirements (elicitation and analysis) and ends with the specification of the system that will satisfy those requirements. Second, to realize software holding a representation of its requirements at run-time, thus enabling run-time adaptation in a user-oriented, goal-driven manner. To fulfill the first objective, a conceptual and theoretical framework is proposed. The framework is founded on a core ontology for RE with revised elements that are needed to support RE for SAS. On this basis, a practical and systematic methodology in support of the requirements engineer is defined. It exploits a new aggregate type of requirement, called adaptive requirements, together with a visual modeling language (called Adaptive Requirements Modeling Language, ARML) to encode requirements into a design-time artifact. Adaptive requirements not only encompass functional and non-functional requirements but also specify properties for control loop functionalities such as monitoring specifications, decision criteria and adaptation actions. An experiment involving human subjects is conducted to provide a first assessment of the effectiveness of the proposed modeling concepts and approach. To support the second objective, a Continuous Adaptive RE (CARE) framework is proposed. It is based on a service-oriented architecture, mainly adopting concepts from service-based applications to support run-time analysis and refinement of requirements by the system itself. The key contribution in achieving this objective is enabling the CARE framework to involve the end-user in the adaptation at run-time, when needed. As a validation of this framework, we perform a research case study by developing a proof-of-concept application, which rests on CARE's conceptual architecture. This thesis contributes to research on requirements engineering for SAS by proposing: (1) a conceptual core ontology with the necessary concepts and relations to support the formulation of a dynamic RE problem, i.e. finding an adaptive requirements specification both at design-time and run-time; (2) a systematic methodology to support the analyst in modeling and operationalizing adaptive requirements at design-time; (3) a framework to perform continuous requirements engineering at run-time by the system itself, involving the end-user.
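A toy rendering of an adaptive requirement bundling the three elements mentioned above (monitoring specification, decision criterion, adaptation actions) is sketched below. The structure and names are invented; ARML and CARE define these notions precisely.

```python
# Toy "adaptive requirement" with a monitoring specification, a decision
# criterion and adaptation actions, plus a minimal control loop around it.
# Names and structure are invented for illustration.

adaptive_requirement = {
    "goal": "search results delivered responsively",
    "monitor": lambda ctx: ctx["response_time_s"],   # monitoring specification
    "violated": lambda value: value > 2.0,           # decision criterion
    "adaptations": ["switch to cached index", "ask user to narrow the query"],
}

def control_loop(requirement, context):
    observed = requirement["monitor"](context)
    if requirement["violated"](observed):
        # In CARE the choice may also involve the end-user at run-time.
        return requirement["adaptations"][0]
    return "no adaptation needed"

print(control_loop(adaptive_requirement, {"response_time_s": 3.4}))
```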
APA, Harvard, Vancouver, ISO, and other styles
33

PASQUALI, DARIO. "Social Engineering Defense Solutions Through Human-Robot Interaction." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1092333.

Full text
Abstract:
Social Engineering is the science of using social interaction to influence others into taking computer-related actions in the attacker's interest. It is used to steal credentials, money, or people's identities. After being left unchecked for a long time, social engineering is raising increasing concerns. Despite its social nature, state-of-the-art defense systems mainly focus on engineering factors: they detect technical features specific to the medium employed in the attack (e.g., phishing emails), or they train final users to detect them. However, the crucial aspects of social engineering are humans, their vulnerabilities, and how attackers leverage them to gain victims' compliance. Recent solutions involved victims' explicit perception and judgment in technical defenses (the Humans-as-a-Security-Sensor paradigm). However, humans also communicate implicitly: gaze, heart rate, sweating, body posture, and voice prosody are physiological and behavioral cues that implicitly disclose humans' cognitive and emotional state. In the literature, expert social engineers report continuously monitoring such cues in their victims to adapt their strategy (e.g., in face-to-face attacks); they also stress the importance of controlling their own cues to avoid revealing the attacker's malicious intentions. This thesis studies how to leverage such behavioral and physiological cues to defend against social engineering. Moreover, it investigates humanoid social robots - more precisely the iCub and Furhat robotic platforms - as novel agents in the cybersecurity field. Humans' trust in robots and their role are still debated: attackers could hijack and control robots to perform face-to-face attacks from a safe distance. However, this thesis speculates that robots could be helpers, everyday companions able to warn users against social engineering attacks better than traditional notification vectors can. Finally, this thesis explores leveraging game-based, entertaining human-robot interactions to collect more realistic, less biased data. For this purpose, I performed four studies concerning different aspects of social engineering. First, I studied how the trust between attackers and victims evolves and can be exploited. In a Treasure Hunt game, players had to decide whether to trust the hints of iCub. The robot showed four mechanical failures designed to undermine its perceived reliability in the game, and could provide transparent motivations for them. The study showed that players' trust in iCub decreased only if they perceived all the faults or the robot explained them, i.e., if they perceived the risk of relying on a faulty robot. Then, I researched novel physiological-based methods to unmask malicious social engineers. In a Magic Trick card game, autonomously led by the iCub robot, players lied or told the truth about gaming card descriptions. iCub leveraged an end-to-end deception detection architecture to identify lies based on players' pupil dilation alone. The architecture enables iCub to learn customized deception patterns, improving the classification over prolonged interactions. In the third study, I focused on victims' behavioral and physiological reactions during social engineering attacks, and on how to evaluate their awareness. Participants played an interactive storytelling game designed to challenge them with social engineering attacks from virtual agents and the humanoid robot iCub. Post-hoc, I trained three Random Forest classifiers to detect whether participants perceived the risk and uncertainty of social engineering attacks and to predict their decisions. Finally, I explored how social humanoid robots should intervene to prevent victims' compliance with social engineering. In a refined version of the interactive storytelling game, the Furhat robot contrasted players' decisions with different strategies, changing their minds. Preliminary results suggest the robot effectively affected participants' decisions, motivating further studies toward closing the social engineering defense loop in human-robot interaction. Summing up, this thesis provides evidence that humans' implicit cues and social robots can help against social engineering; it offers practical defensive solutions and architectures supporting further research in the field and discusses them with a view to concrete applications.
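The classifier family named above can be sketched with scikit-learn, training a Random Forest on synthetic "pupil dilation" features. Data and features are fabricated for illustration; the thesis uses real recordings from the game sessions and its own feature engineering.

```python
# Sketch of a Random Forest trained on synthetic physiological features
# (mean and peak pupil dilation) to separate two classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
# Class 1 ("at risk"/"deceptive") gets a slightly larger mean dilation.
dilation_mean = np.concatenate([rng.normal(3.0, 0.4, n), rng.normal(3.4, 0.4, n)])
dilation_peak = dilation_mean + rng.normal(0.5, 0.2, 2 * n)
X = np.column_stack([dilation_mean, dilation_peak])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 2))
```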
APA, Harvard, Vancouver, ISO, and other styles
34

MARCHESI, LODOVICA. "Software Engineering Practices applied to Blockchain Technology and Decentralized Applications." Doctoral thesis, Università degli Studi di Cagliari, 2022. http://hdl.handle.net/11584/333449.

Full text
Abstract:
Blockchain software development is becoming more and more important for any modern software developer and IT startup. Nonetheless, blockchain software production still lacks a disciplined, organized and mature development process. The goal of my research is to study innovative software engineering techniques applicable to the development of blockchain applications. I focused on the use of agile practices, because these are suitable for developing systems whose requirements are not fully understood from the beginning, or tend to change, as is the case with most blockchain-based applications. In particular, I contributed to the proposal of a method to guide software development, called ABCDE, which stands for Agile BlockChain Dapp Engineering. It takes into account the substantial difference between developing traditional software and developing smart contracts, and separates the two activities. It considers the software integration among the blockchain components and the off-chain components, which together constitute a complete dApp system. The method also addresses specific activities for both security assessment and gas optimization, two of the main issues of dApp development, through the systematic use of patterns and checklists. Agile methodologies aim to reduce software development risk; however, the risk of project failure or time and budget overruns is still a relevant problem. Therefore, I studied and developed a new approach to model some key risk factors in agile development, using software process simulation modeling (SPSM). The approach includes modeling the agile process, gathering data from the tool used for project management, and performing Monte Carlo simulations of the process, to get insights about the expected time and effort to complete the project, and about their distributions. While the simulator has not been specifically applied to blockchain projects yet, it has all the features needed to do so. I also proposed an evaluation framework to compare public and permissioned blockchains, specifically suited for industrial applications. Then, I presented a complete solution based on Ethereum to implement a decentralized application, putting together in an original way components and patterns already used and proven. The proposed approach has the same transparency and immutability as a public blockchain, while largely reducing its drawbacks. The key reason to use a blockchain is trust. There is a growing demand from customers and governments for transparency across the agri-food supply chain. The adoption of blockchain technology to enable secure traceability for agri-food supply chain management, provide information such as the provenance of a food product, and prevent food fraud is rapidly emerging. However, developing correct smart contracts for these use cases is still a challenge. My research also focused on defining a novel approach for easily customizing and composing general Ethereum-based smart contracts designed for the agri-food industrial domain, in order to reuse code and modules and automate the process, shortening development time while keeping the result secure and trusted. Starting from the definition of the real production process, I aimed to automatically generate both the smart contracts that manage the system and the user interfaces to interact with them, thus producing a working system in a semi-automated way. Another supply chain in which blockchain technology can be applied with potential advantages is shipping logistics. With the support of a SWOT analysis, I explored the application prospects and the practical impacts, benefits, pros and cons, and economic and technical barriers related to the use of blockchain to support the creation of an interport community. Finally, I included in this thesis, albeit marginally, a study of techniques for forecasting time series, in particular daily closing price series of different cryptocurrencies.
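The Monte Carlo use of SPSM mentioned above can be illustrated with a toy simulation of sprint-by-sprint progress; the backlog size and velocity distribution are invented parameters, not the thesis's calibrated model.

```python
# Toy Monte Carlo estimate of project completion time, in the spirit of the
# SPSM-based risk analysis described above (not the actual simulator).
import random
import statistics

def simulate_project(backlog_points=120, velocity_mean=20, velocity_sd=6):
    done, sprints = 0, 0
    while done < backlog_points:
        done += max(1, random.gauss(velocity_mean, velocity_sd))  # points per sprint
        sprints += 1
    return sprints

random.seed(42)
runs = [simulate_project() for _ in range(10_000)]
print("expected sprints:", round(statistics.mean(runs), 1))
print("90th percentile:", sorted(runs)[int(0.9 * len(runs))])
```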
APA, Harvard, Vancouver, ISO, and other styles
35

Hasan, Md Rashedul. "Semantic Aware Representing and Intelligent Processing of Information in an Experimental domain:the Seismic Engineering Research Case." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1545/1/Thesis_Hasan.pdf.

Full text
Abstract:
Experiments in Seismic Engineering research projects generate an enormous amount of data that would benefit researchers and experimentalists of the community if it could be shared together with its semantics. Semantics is the meaning of a data element and of a term alike. For example, the semantics of the term experiment is a scientific procedure performed to conduct a controlled test or investigation. Ontology is a key technique by which one can annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. The development of a domain ontology requires expertise both in the domain to be modeled and in ontology development. This means that people from very different backgrounds, such as Seismic Engineering and Computer Science, should be involved in the process of creating the ontology. With the invention of the Semantic Web, the computing paradigm is experiencing a shift from databases to Knowledge Bases (KBs), in which ontologies play a major role in enabling reasoning power that can make implicit facts explicit, in order to produce better results for users. To enable an ontology and a dataset to automatically explore relevant ontologies and datasets from external sources, these can be linked to the Linked Open Data (LOD) cloud, an online repository of a large amount of interconnected datasets published in RDF. Throughout the past few decades, database technologies have been advancing continuously and showing their potential in dealing with large collections of data, but they were not originally designed to deal with the semantics of data. Managing data with Semantic Web tools offers a number of advantages over database tools, including classifying, matching, mapping and querying data. Hence we translate our database-based system, which managed the data of Seismic Engineering research projects and experiments, into a KB-based system. In addition, we also link our ontology and datasets to the LOD cloud. In this thesis, we address the following issues. To the best of our knowledge, the Semantic Web still lacks an ontology that can be used for representing information related to Seismic Engineering research projects and experiments. Publishing a vocabulary in this domain has largely been overlooked, and no suitable vocabulary has yet been developed to model data in RDF. The vocabulary is an essential component that can support a data engineer when modeling data in RDF to include them in the LOD cloud. Ontology integration is another challenge that we had to tackle. To manage the data of a specific field of interest, domain-specific ontologies provide essential support. However, they alone can hardly be sufficient to assign meaning also to the generic terms that often appear in a data source. That necessitates the use of the integrated knowledge of a generic ontology and the domain-specific one. To address the aforementioned issues, this thesis presents the development of a Seismic Engineering Research Projects and Experiments Ontology (SEPREMO), with a focus on the management of research projects and experiments. We used the DERA methodology for ontology development. The developed ontology was evaluated by a number of domain experts. Data originating from scientific experiments, such as cyclic and pseudodynamic tests, were also published in RDF. We exploited the power of Semantic Web technologies, namely the Jena, Virtuoso and VirtGraph tools, in order to publish, store and manage RDF data, respectively. Finally, a system was developed with the full integration of ontology, experimental data and tools, to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes. For ontology integration with WordNet, we implemented a semi-automatic facet-based algorithm. We also present an approach for publishing both the ontology and the experimental data into the LOD cloud. In order to model the concepts complementing the vocabulary that we need for the experimental data representation, we suitably extended the SEPREMO ontology. Moreover, the work focuses on an RDF dataset interlinking technique based on aligning concepts and entities scattered over the cloud.
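A minimal sketch of publishing one experiment record as RDF with rdflib is shown below; the property names are invented placeholders rather than the actual SEPREMO vocabulary, and linking to the LOD cloud would add further links (e.g., owl:sameAs) to external datasets.

```python
# Minimal sketch: turn one experiment record into RDF triples and serialise
# them as Turtle. Property names are invented placeholders, not SEPREMO terms.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/sepremo#")
g = Graph()
g.bind("ex", EX)

exp = EX["experiment/42"]
g.add((exp, RDF.type, EX.PseudodynamicTest))
g.add((exp, EX.specimen, Literal("reinforced concrete frame")))
g.add((exp, EX.peakDisplacementMm, Literal(12.7)))

print(g.serialize(format="turtle"))
```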
APA, Harvard, Vancouver, ISO, and other styles
36

Poggi, Francesco <1982&gt. "Structural patterns for document engineering: from an empirical bottom-up analysis to an ontological theory." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/7123/4/Poggi_Francesco_Tesi.pdf.

Full text
Abstract:
This thesis aims at investigating a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors do naturally converge to a reasonable use of markup languages and that extreme, yet valid instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) persisting across documents and distilling the conceptualization of the documents and their components, and may give ground for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e. the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e. the ability to capture any possible situation in any document) and expressiveness (i.e. the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
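A crude illustration of pattern-style classification of XML elements by their content model is sketched below with the standard library; the actual theory distinguishes eight patterns plus sub-patterns, whereas this only separates a few coarse classes.

```python
# Coarse classification of XML elements by content model, as a toy stand-in
# for the eight-pattern theory described above.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<article><title>On patterns</title>"
    "<p>Some <em>mixed</em> content.</p>"
    "<meta/></article>"
)

def classify(elem):
    has_text = bool((elem.text or "").strip()) or any((c.tail or "").strip() for c in elem)
    has_children = len(elem) > 0
    if has_text and has_children:
        return "mixed content"
    if has_children:
        return "container (element-only)"
    if has_text:
        return "text-only"
    return "empty (milestone/meta)"

for elem in doc.iter():
    print(elem.tag, "->", classify(elem))
```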
APA, Harvard, Vancouver, ISO, and other styles
37

Poggi, Francesco <1982&gt. "Structural patterns for document engineering: from an empirical bottom-up analysis to an ontological theory." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/7123/.

Full text
Abstract:
This thesis aims at investigating a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors do naturally converge to a reasonable use of markup languages and that extreme, yet valid instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) persisting across documents and distilling the conceptualization of the documents and their components, and may give ground for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e. the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e. the ability to capture any possible situation in any document) and expressiveness (i.e. the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
APA, Harvard, Vancouver, ISO, and other styles
38

Qureshi, Nauman Ahmed. "Requirements Engineering for Self-Adaptive Software: Bridging the Gap Between Design-Time and Run-Time." Doctoral thesis, University of Trento, 2011. http://eprints-phd.biblio.unitn.it/649/1/naumanqureshiPhDThesis2011.pdf.

Full text
Abstract:
Self-Adaptive Software systems (SAS) adapt at run-time in response to changes in users' needs, operating contexts, and resource availability, requiring minimal to no involvement of system administrators. The ever-greater reliance on software with qualities such as flexibility and easy integrability, and the associated increase in design and maintenance effort, is raising interest in research on SAS. Taking the perspective of Requirements Engineering (RE), we investigate in this thesis how RE for SAS departs from more conventional RE for non-adaptive systems. The thesis has two objectives. First, to define a systematic approach to support the analyst in engineering requirements for SAS at design-time, which starts at early requirements (elicitation and analysis) and ends with the specification of the system that will satisfy those requirements. Second, to realize software holding a representation of its requirements at run-time, thus enabling run-time adaptation in a user-oriented, goal-driven manner. To fulfill the first objective, a conceptual and theoretical framework is proposed. The framework is founded on a core ontology for RE with revised elements that are needed to support RE for SAS. On this basis, a practical and systematic methodology in support of the requirements engineer is defined. It exploits a new aggregate type of requirement, called adaptive requirements, together with a visual modeling language (called Adaptive Requirements Modeling Language, ARML) to encode requirements into a design-time artifact. Adaptive requirements not only encompass functional and non-functional requirements but also specify properties for control loop functionalities such as monitoring specifications, decision criteria and adaptation actions. An experiment involving human subjects is conducted to provide a first assessment of the effectiveness of the proposed modeling concepts and approach. To support the second objective, a Continuous Adaptive RE (CARE) framework is proposed. It is based on a service-oriented architecture, mainly adopting concepts from service-based applications to support run-time analysis and refinement of requirements by the system itself. The key contribution in achieving this objective is enabling the CARE framework to involve the end-user in the adaptation at run-time, when needed. As a validation of this framework, we perform a research case study by developing a proof-of-concept application, which rests on CARE's conceptual architecture. This thesis contributes to research on requirements engineering for SAS by proposing: (1) a conceptual core ontology with the necessary concepts and relations to support the formulation of a dynamic RE problem, i.e. finding an adaptive requirements specification both at design-time and run-time; (2) a systematic methodology to support the analyst in modeling and operationalizing adaptive requirements at design-time; (3) a framework to perform continuous requirements engineering at run-time by the system itself, involving the end-user.
APA, Harvard, Vancouver, ISO, and other styles
39

TRAINI, LUCA. "Performance Engineering in Agile/DevOps Development Processes: Ensuring Software Performance While Moving Fast." Doctoral thesis, Università degli Studi dell'Aquila, 2021. http://hdl.handle.net/11697/177843.

Full text
Abstract:
Agile principles and DevOps practices play a pivotal role in modern software development. These methodologies aim to improve software organization productivity while preserving the quality of the produced software. Unfortunately, the assessment of important non-functional software properties, such as performance, can be challenging in these contexts. Frequent code changes and software releases make the use of classical performance assurance approaches impractical. Moreover, many performance issues require highly specific conditions to be detected, which may be difficult to replicate in a testing environment. This thesis investigates and tackles problems related to the performance assessment of software systems in the context of Agile/DevOps development processes. Specifically, it focuses on three aspects. The first aspect concerns practical and management problems in handling performance requirements. These problems were investigated through a 6-month industry collaboration with a large software organization that adopts an Agile software development process. The research was conducted in line with ethnographic research, building knowledge from participatory observations, unstructured interviews and reviews of documentation. The study identified a set of management and practical challenges that arise from the adoption of Agile methodologies. The second aspect concerns the impact of refactoring activities on software performance. Refactoring is a fundamental activity in modern software development, and it is a core development phase of many Agile methodologies (e.g., Test-Driven Development and Extreme Programming). Nevertheless, there is little knowledge about the impact of refactoring operations on software performance. This thesis aims to fill this gap by presenting the largest study to date investigating the impact of refactoring on software performance, in terms of execution time. The change history of 20 Java open-source systems was mined with the goal of identifying commits in which developers implemented refactoring operations impacting code components that are exercised by performance benchmarks. Through a quantitative and qualitative analysis, the impact of (different types of) refactoring on execution time was unveiled. The results showed that the impact of refactoring on execution time varies depending on the refactoring type, with none of them being 100% "safe" in ensuring that there is no performance regression. Some refactoring types can result in substantial performance regressions and, as such, should be carefully considered when refactoring performance-critical parts of a system. The third aspect concerns the introduction of techniques for performance assessment in the context of DevOps processes. Due to the fast-paced release cycle and the inherently non-deterministic nature of software performance, it is often unfeasible to proactively detect performance issues. For these reasons, today, the diagnosis of performance issues in production is a fundamental activity for maintaining high-quality software systems. This activity can be time-consuming, since it may require thorough inspection of large volumes of traces and performance indices. This thesis introduces two novel techniques for the automated diagnosis of performance issues in service-based systems, which can be easily integrated into DevOps processes. These techniques are evaluated, in terms of effectiveness and efficiency, on a large number of datasets generated for two case study systems, and they are compared to two state-of-the-art techniques and three general-purpose clustering algorithms. The results showed that the baselines were outperformed, with better and more stable effectiveness. Moreover, the presented techniques proved more efficient on large datasets when compared to the most effective baseline.
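For the refactoring part of the study, a simple way to flag a performance regression between "before" and "after" benchmark timings is a bootstrap confidence interval on the difference of means, sketched below. This is only an illustration of the decision, not the mining and analysis pipeline used in the thesis.

```python
# Flag a performance regression with a bootstrap confidence interval on the
# difference of mean execution times (illustrative only).
import random

def bootstrap_mean_diff(before, after, reps=5000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    diffs = []
    for _ in range(reps):
        b = [rng.choice(before) for _ in before]
        a = [rng.choice(after) for _ in after]
        diffs.append(sum(a) / len(a) - sum(b) / len(b))
    diffs.sort()
    lo = diffs[int(alpha / 2 * reps)]
    hi = diffs[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

before = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4]   # execution times (ms)
after = [11.0, 11.2, 10.9, 11.3, 11.1, 10.8]
lo, hi = bootstrap_mean_diff(before, after)
print("regression" if lo > 0 else "no clear regression", (round(lo, 2), round(hi, 2)))
```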
APA, Harvard, Vancouver, ISO, and other styles
40

GENUZIO, MARCO. "ENGINEERING COMPRESSED STATIC FUNCTIONS AND MINIMAL PERFECT HASH FUNCTIONS." Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/547316.

Full text
Abstract:
Static functions are data structures meant to store arbitrary mappings from finite sets to integers; that is, given a universe of items $U$ and a set of $n \in \mathbb{N}$ pairs $(k_i,v_i)$, where $k_i \in S \subset U$, $|S|=n$, and $v_i \in \{0, 1, \ldots, m-1\}$, $m \in \mathbb{N}$, a static function will retrieve $v_i$ given $k_i$ (usually in constant time). When every key is mapped to a different value, this function is called a perfect hash function, and when $n=m$ the data structure yields an injective numbering $S \to \{0,1,\ldots,n-1\}$; this mapping is called a minimal perfect hash function (MPHF). Big data brought back one of the most critical challenges that computer scientists have been tackling during the last fifty years, that is, analyzing big amounts of data that do not fit in main memory. While for small key sets these mappings can easily be implemented using hash tables, this solution does not scale well to bigger sets. Static functions and MPHFs break the information-theoretical lower bound on storing the set $S$ because they are allowed to return any value if the queried key is not in the original key set. The classical construction techniques can achieve just $O(nb)$ bits of space for static functions, where $b=\log(m)$, and $O(n)$ bits of space for MPHFs (always with constant access time). All these features make static functions and MPHFs powerful techniques when handling, for instance, large sets of strings, and they are essential building blocks of space-efficient data structures such as (compressed) full-text indexes, monotone MPHFs, Bloom filter-like data structures, and prefix-search data structures. The biggest challenge of this construction technique involves lowering the multiplicative constants hidden inside the asymptotic space bounds while keeping construction times feasible. In this thesis, we take advantage of recent results in random linear systems theory, regarding the ratio between the number of variables and the number of equations, and in perfect hash data structures, to achieve practical static functions with the lowest space bounds so far, and construction time comparable with widely used techniques. The new results, however, require solving linear systems that need more than a simple triangulation process, as happens in current state-of-the-art solutions. The main challenge in making such structures usable is mitigating the cubic running time of Gaussian elimination at construction time. To this purpose, we introduce novel techniques based on broadword programming and a heuristic derived from structured Gaussian elimination. We obtained data structures that are significantly smaller than commonly used hypergraph-based constructions while maintaining or improving the lookup times and still providing feasible construction times. We then apply these improvements to another kind of structure: compressed static functions. The theoretical construction technique for this kind of data structure uses prefix-free codes with variable length to encode the set of values. Adopting this solution, we can reduce the space usage of each element to (essentially) the entropy of the list of output values of the function. Indeed, we need to solve an even bigger linear system of equations, and the time required to build the structure increases. In this thesis, we present the first engineered implementation of compressed static functions. For example, we were able to store a function with geometrically distributed output, with parameter $p=0.5$, in just $2.28$ bits per key, independently of the key set, with a construction time double that of a state-of-the-art non-compressed function, which requires $\approx \log\log n$ bits per key, where $n$ is the number of keys, and with similar lookup time. We can also store a function whose output follows a Zipfian distribution with parameter $s=2$ and $N=10^6$ in just $2.75$ bits per key, whereas a non-compressed function would require more than $20$, with a threefold increase in construction time and significantly faster lookups.
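The linear-system view described above can be illustrated with a toy static function: each key hashes to three table cells and its value is the XOR of those cells, with the table obtained by solving a random system over GF(2). The sketch uses plain online elimination, with none of the broadword or structured-elimination optimisations and no compression of the values.

```python
# Toy static function: each key hashes to three table positions and the stored
# value is the XOR of those cells; the table is found by solving a random
# linear system over GF(2) (illustration only, far from the engineered version).
import hashlib

def positions(key, m, seed, k=3):
    out = []
    for i in range(k):
        h = hashlib.blake2b(f"{seed}:{i}:{key}".encode(), digest_size=8).digest()
        out.append(int.from_bytes(h, "big") % m)
    return out

def build(pairs, ratio=1.23, seed=0):
    m = max(3, int(ratio * len(pairs)) + 1)
    pivots = {}                                   # pivot variable -> (mask, rhs)
    for key, value in pairs:
        mask, rhs = 0, value
        for p in positions(key, m, seed):
            mask ^= 1 << p                        # coefficients live in GF(2)
        while mask:
            p = mask.bit_length() - 1
            if p not in pivots:
                pivots[p] = (mask, rhs)
                break
            pmask, prhs = pivots[p]
            mask, rhs = mask ^ pmask, rhs ^ prhs
        else:
            if rhs:                               # inconsistent: retry another seed
                return build(pairs, ratio, seed + 1)
    table = [0] * m
    for p in sorted(pivots):                      # back-substitution, low vars first
        mask, rhs = pivots[p]
        v, rest = rhs, mask ^ (1 << p)
        while rest:
            q = rest.bit_length() - 1
            v ^= table[q]
            rest ^= 1 << q
        table[p] = v
    return table, m, seed

def lookup(table, m, seed, key):
    v = 0
    for p in positions(key, m, seed):
        v ^= table[p]
    return v            # arbitrary value for keys outside the original set

pairs = [("apple", 3), ("pear", 1), ("plum", 2), ("fig", 0)]
table, m, seed = build(pairs)
print([lookup(table, m, seed, k) for k, _ in pairs])   # [3, 1, 2, 0]
```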
APA, Harvard, Vancouver, ISO, and other styles
41

Sedin, Jonas. "A comparison of Polar Code Constructions and Punctur-ing methods for AWGN and Fading channels." Thesis, KTH, Teknisk informationsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-212307.

Full text
Abstract:
Today, 5G and other wireless standards are being developed for the future of our society. The different use cases of future wireless services are going to be ever more demanding, whether vehicular communication or low-powered sensor networks. High rate, ultra-reliability and low power are future requirements that will also affect the coding schemes being used. A relatively recent coding scheme, called polar codes, has the potential to fulfill all of these requirements if the coding scheme applied is well designed. In this thesis we focus on practical algorithms for the implementation of polar codes at medium-sized block lengths. Polar codes are very different from other modern coding schemes. The code construction is rather unique in that it is dependent on the underlying channel: the code construction can change with the Signal-to-Noise Ratio of the AWGN channel. The puncturing of polar codes is also non-trivial compared to other coding schemes. Since polar codes are dependent on the underlying channel, their fading channel performance is important to consider. In this thesis we aim to show through simulations how these different concepts affect the Block Error Rate (BLER) performance. Specifically, we compare how different code constructions perform over the AWGN channel, how the code construction affects the BLER performance with puncturing, and how puncturing affects the performance over fading channels. We find that an appropriate code construction, in our case using the Gaussian Approximation, is very important for optimal performance over the AWGN channel with puncturing. We also find that different puncturing methods have vastly different performances for different rates over the AWGN and Rayleigh fading channels, and that applying an interleaver is very important for optimal performance.
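As an aside to the abstract above: the channel dependence of polar code construction can be illustrated with the simplest standard construction, the exact Bhattacharyya-parameter recursion for a binary erasure channel. The thesis itself uses the Gaussian Approximation for the AWGN channel; the sketch below (function name and parameters are ours) only shows how an information set is derived from a channel parameter and is not the thesis's method.

```python
def bec_polar_construction(n, k, eps):
    """Return the k most reliable bit-channel indices of a length-2**n polar code over a BEC(eps)."""
    z = [eps]                                     # Bhattacharyya parameter of the raw channel
    for _ in range(n):
        nxt = []
        for a in z:
            nxt += [2 * a - a * a, a * a]         # W^- (degraded) and W^+ (upgraded) children
        z = nxt
    order = sorted(range(len(z)), key=lambda i: z[i])
    return sorted(order[:k])                      # information set: smallest Bhattacharyya parameters

# The information set is derived from the channel parameter, which is why constructions
# matched (or mismatched) to the operating channel behave differently in the simulations.
print(bec_polar_construction(3, 4, 0.3))
print(bec_polar_construction(3, 4, 0.5))
```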
APA, Harvard, Vancouver, ISO, and other styles
42

Novi, Daniele. "Knowledge management and Discovery for advanced Enterprise Knowledge Engineering." Doctoral thesis, Universita degli studi di Salerno, 2014. http://hdl.handle.net/10556/1466.

Full text
Abstract:
2012 - 2013
The research work mainly addresses issues related to the adoption of models, methodologies and knowledge management tools that make pervasive use of the latest Semantic Web technologies for the improvement of business processes and Enterprise 2.0 applications. The first phase of the research focused on the study and analysis of the state of the art and the problems of Knowledge Discovery in Databases, paying particular attention to data mining systems. The most innovative approaches investigated for "Enterprise Knowledge Engineering" are then presented. In detail, the problems analyzed are those relating to architectural aspects and to the integration of systems, legacy or otherwise. The intended research contribution consists in the identification and definition of a uniform and general model, a "Knowledge Enterprise Model", which is original with respect to the canonical approaches of enterprise architecture (for example, with respect to the Object Management Group (OMG) standards). The introduction of the tools and principles of Enterprise 2.0 into the company has been investigated and, at the same time, appropriate Semantic-Enterprise-based solutions have been defined for the problem of information fragmentation and for the improvement of knowledge discovery and functional knowledge sharing. All studies and analyses are finalized and validated by defining a methodology, and related supporting software tools, for the improvement of processes related to the life cycles of best practices across the enterprise. Collaborative tools, knowledge modeling, algorithms, and knowledge discovery and extraction are applied synergistically to support these processes. [edited by author]
XII n.s.
APA, Harvard, Vancouver, ISO, and other styles
43

Johansson, Adam, and Tim Johansson. "Code generation for programmable logic controllers : Evaluating model-based engineering practices in a real-world context." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18714.

Full text
Abstract:
The industrial manufacturing of today is achieved through the use of programmable logic controllers (PLC). The way PLCs are programmed has remained largely unchanged since their conception 40 years ago, but complexity and software size have increased and requirements have become more elaborate. Model-driven engineering (MDE) practices, formal verification and automated testing could help manage these challenges. This study seeks to improve development practices in the context of a company that delivers automation projects. Through a design science methodology, the state of the field is investigated and an artefact is developed. The artefact shows potential benefits resulting from the introduction of model-driven code generation, which is evaluated through an experiment with engineers. Our results indicate that the engineers may benefit from incorporating generated code in their work.
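As a minimal illustration of the model-driven code generation evaluated in the study (the artefact itself is not reproduced here), the sketch below turns a small, hypothetical declarative model of a start/stop interlock into IEC 61131-3 Structured Text. The model format, names and rule pattern are assumptions, not the study's artefact.

```python
# Hypothetical declarative model of one function block with a latching start/stop rule.
model = {
    "name": "ConveyorControl",
    "inputs": {"StartPB": "BOOL", "StopPB": "BOOL", "MotorOverload": "BOOL"},
    "outputs": {"MotorRun": "BOOL"},
    "rules": [
        # (output, set condition, reset condition)
        ("MotorRun", "StartPB AND NOT MotorOverload", "StopPB OR MotorOverload"),
    ],
}

def generate_structured_text(m):
    """Emit an IEC 61131-3 Structured Text function block from the model dictionary."""
    lines = [f"FUNCTION_BLOCK {m['name']}", "VAR_INPUT"]
    lines += [f"    {name} : {typ};" for name, typ in m["inputs"].items()]
    lines += ["END_VAR", "VAR_OUTPUT"]
    lines += [f"    {name} : {typ};" for name, typ in m["outputs"].items()]
    lines += ["END_VAR", ""]
    for out, set_cond, reset_cond in m["rules"]:
        # classic set/reset latch expressed as a single assignment
        lines.append(f"{out} := ({out} OR ({set_cond})) AND NOT ({reset_cond});")
    lines.append("END_FUNCTION_BLOCK")
    return "\n".join(lines)

print(generate_structured_text(model))
```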
APA, Harvard, Vancouver, ISO, and other styles
44

ZELADA, VALDIVIESO HÉCTOR MIGUEL. "COMPUTATIONAL MODEL TO ALERT EARLY THE LEVEL OF ACHIEVEMENT OF THE GRADUATION PROFILE OF UNIVERSITY ENGINEERING STUDENTS." Doctoral thesis, Università degli Studi di Trieste, 2022. http://hdl.handle.net/11368/3030921.

Full text
Abstract:
Today, the achievement of the graduate profile competencies is very important evidence of the quality of university programs. However, universities must wait for their students to finish or reach the end of their studies in order to accurately measure this achievement, which creates the problem that, when some students do not reach the expected levels, there is no longer enough time to take corrective action. In the literature there are several investigations related to the graduate profile, and others focused on predicting academic performance in face-to-face or virtual courses, student dropout, students' academic motivation, job placement, the final overall grade average of university students, or delays in graduation, but no studies have directly addressed the prediction of the level of achievement of the graduation profile of university engineering students. For this reason, the general objective of the research was to develop a computational model based on machine learning that provides an early warning of the level of achievement of the graduation profile of engineering students, so that timely information is available to take corrective decisions in advance rather than waiting until the end of the studies to obtain this result. The CRISP-DM methodology was followed, data were collected from 1,982 graduates of different engineering programs of a Peruvian university and, using Matlab and machine learning algorithms, a fairly accurate model (accuracy: 96.7%) was created with the SVM algorithm. It was found that the characteristics related to the grades obtained up to the IV cycle of studies in the Mathematics and Physics courses are the best predictors of the level of achievement of the graduation profile of engineering students.
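For illustration only, the sketch below mirrors the modelling step described in the abstract, an SVM classifier trained on early-cycle grade features. The thesis used Matlab and real records of the graduates; here scikit-learn and synthetic stand-in data are used, and the feature names, data and labels are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in data: average grades in Mathematics and Physics up to cycle IV,
# plus an overall average, with a synthetic achievement label (0 = below expected, 1 = expected).
X = rng.uniform(5, 20, size=(500, 3))
y = (0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 1.5, 500) > 13).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```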
APA, Harvard, Vancouver, ISO, and other styles
45

Bengtsson, Jonas, and Mikael Grönkvist. "Performing Geographic Information System Analyses on Building Information Management Models." Thesis, KTH, Geodesi och satellitpositionering, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208922.

Full text
Abstract:
As the usage of both BIM (Building Information Modelling) and 3D-GIS (Three-Dimensional Geographic Information Systems) has increased within the field of urban development and construction, so has the interest in connecting these two tools. One possibility of integration is the potential of visualising BIM models together with other spatial data in 3D. Another is to be able to perform spatial 3D analyses on the models. Both of these can be achieved through use of GIS software. This study explores what an integration of BIM and GIS could look like. The goal was to perform typical GIS analyses in 3D on BIM models. Previous research points towards some success within the field through use of the indicated standard format for each tool – IFC (Industry Foundation Classes) for BIM and CityGML (City Geographic Markup Language) for GIS. Transformation between the formats took place through use of the BIM software Revit, the transformation tool FME and the GIS software ArcGIS. A couple of reviewed applications of GIS analyses were chosen for testing on the converted models – indoor network analysis, visibility analysis and spatial analysis for 3D buildings. The input data in the study consisted of several BIM models, both models created for real-life usage and others that only serve as sample data within the different software. From the results of the practical work it can be concluded that a simple, automated and full-scale integration does not seem to be within reach quite yet. Most transformations between IFC and CityGML failed to some extent, especially the more detailed and complex ones. In some test cases, the file could not be imported into ArcGIS, and in others geometries were missing, or present even though they should not be. There were also examples where geometries had been moved during the process. As a consequence of these problems, most analyses failed or did not give meaningful results. A few of the original analyses did give positive results. Combining (flawed) CityGML models with other spatial data for visualisation purposes worked rather well. Both the shadow-volume and sightline analyses also gave reasonable results, which indicates that there might be a future for those applications. The obstacles for a full-scale integration identified during the work were divided into four different categories. The first is BIM usage and routines, where the created models need to be of high quality if the final results are to be correct. The second comprises problems concerning the level of detail, especially the lack of common definitions for the amount of detail and information. The third category concerns the connection between local and global coordinate systems, where a solution in the form of updates to IFC might already be in place. The fourth, and largest, category contains the problems surrounding the different formats and software used. Here, focus should lie on the transformation between IFC and CityGML. There is plenty of possible future work concerning these different problems. There is also potential in developing custom tools for the integration or in performing different analyses than those chosen for this thesis.
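One of the obstacle categories above, the connection between local and global coordinate systems, comes down to a simple planar transformation once the survey point and the rotation to true north are known (in IFC4 this information can, to our understanding, be carried by entities such as IfcMapConversion). The sketch below is a hedged illustration with made-up numbers and hypothetical parameter names, not a replacement for a proper georeferencing workflow.

```python
import math

def local_to_global(x, y, z, easting, northing, height, rotation_deg):
    """Rotate a local BIM coordinate about the vertical axis and translate it to the projected CRS."""
    a = math.radians(rotation_deg)
    gx = easting  + x * math.cos(a) - y * math.sin(a)
    gy = northing + x * math.sin(a) + y * math.cos(a)
    gz = height + z
    return gx, gy, gz

# Example: a column corner at (12.0, 3.5, 0.0) m in the model, with a hypothetical survey point.
print(local_to_global(12.0, 3.5, 0.0,
                      easting=674_000.0, northing=6_580_000.0,
                      height=12.0, rotation_deg=17.5))
```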
APA, Harvard, Vancouver, ISO, and other styles
46

CHAOUCH, CHAKIB. "Cyber-physical systems in the framework of audio song recognition and reliability engineering." Doctoral thesis, Chakib Chaouch, 2021. http://hdl.handle.net/11570/3210939.

Full text
Abstract:
Music Information Retrieval (MIR) is the interdisciplinary field of extracting information from music, and it is the topic of our research. A MIR system faces significant issues in dealing with various genres of music. Music retrieval aims at helping end users search for and find a desired piece of music in an extensive database. In other words, music information retrieval tries to make music information more accessible to listeners, musicians, and data scientists. The challenges that an audio recognition system faces in everyday use come in a variety of forms: robustness to near-identical originals, noise, and spectral or temporal distortions; the minimal length of a song track required for identification; retrieval speed; and processing load. To overcome these problems, Short-Time Power Spectral Density (ST-PSD) fingerprinting is proposed as an innovative, efficient, highly accurate and exact fingerprinting approach. To maintain high accuracy and specificity on hard datasets, we propose matching features based on an efficient Hamming-distance search over binary fingerprints, followed by a verification step for match hypotheses. We gradually improve this system by adding further components such as a Mel-frequency filter bank and a progressive probability evaluation score. In addition, we introduce a new fingerprint generation method, present the fundamentals for generating fingerprints, and show that they are robust in the song recognition process. We then evaluate the performance of the proposed methods using a scoring measure based on the classification accuracy over thousands of songs. Our purpose is to demonstrate the effectiveness of the fingerprints generated with the two proposed approaches; we show that, even without an optimized search algorithm, the accuracy obtained in recognizing pieces of songs is very good, making the proposed approach a good candidate for an effective song recognition process. A second area of research, carried out during a period abroad at Duke University, USA, as part of an exchange program, concerns reliability engineering. The first part focuses on the reliability and interval reliability of phased-mission systems (PMS) with repairable components and disconnected phases, using analytical state-space-oriented modeling based on continuous-time Markov chains (CTMC). The second part focuses on PMS with non-repairable multi-state components, for which a practical case study of a spacecraft satellite is used to demonstrate the proposed PMS-BDD method, implemented with the SHARPE tool on a fault tree (FT) configuration, in order to evaluate the system's reliability and unreliability.
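The matching step described above, a Hamming-distance search over binary fingerprints with acceptance below a bit-error threshold, can be sketched as follows. The fingerprint extraction here is a deliberately simple placeholder (signs of band-energy differences), not the thesis's ST-PSD method, and the threshold, shapes and names are assumptions.

```python
import numpy as np

def binary_fingerprint(power_spectrogram):
    """power_spectrogram: (frames, bands) band energies -> (frames-1, bands-1) array of bits."""
    e = np.log(power_spectrogram + 1e-12)
    # bit = 1 if the energy difference between adjacent bands increases from one frame to the next
    d = (e[1:, 1:] - e[1:, :-1]) - (e[:-1, 1:] - e[:-1, :-1])
    return (d > 0).astype(np.uint8)

def hamming_search(query_bits, database, max_bit_error_rate=0.35):
    """database: dict song_id -> bit array with the same shape as query_bits."""
    best_id, best_rate = None, 1.0
    for song_id, ref_bits in database.items():
        rate = np.count_nonzero(query_bits ^ ref_bits) / query_bits.size
        if rate < best_rate:
            best_id, best_rate = song_id, rate
    return (best_id, best_rate) if best_rate <= max_bit_error_rate else (None, best_rate)

# Usage with synthetic data: a slightly noisy copy of song "a" should match "a".
rng = np.random.default_rng(1)
db_specs = {s: rng.random((64, 33)) for s in ("a", "b", "c")}
database = {s: binary_fingerprint(p) for s, p in db_specs.items()}
query = binary_fingerprint(db_specs["a"] * (1 + 0.05 * rng.standard_normal((64, 33))))
print(hamming_search(query, database))
```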
APA, Harvard, Vancouver, ISO, and other styles
47

Wistedt, Johan. "Digital secondary substations with auto-configuration of station monitoring through IEC 61850 and CIM." Thesis, Uppsala universitet, Elektricitetslära, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-360513.

Full text
Abstract:
This thesis explores the possibility of automating the configuration of secondary substation monitoring and control. By using a network information system (NIS), information about secondary substations can be extracted, such as feeder naming and primary equipment type, rating and model. From this information an automated configuration process for the secondary substation is possible, which opens up the possibility of cost-efficiently digitalising the distribution grid. In the project, the IEC 61850 standard for configuration of communication between intelligent electronic devices was used to automate and standardise the process. The process starts with an IEC 61970 CIM file extracted from the NIS. The CIM file is converted into an IEC 61850 SCL file through a system engineering tool. The configuration is based on information from the NIS, where the models and types of the equipment decide what functionality is needed for the secondary substation. With the help of the created SCL file, the hardware and the human-machine interface (HMI) were configured, creating a fully functional system for the secondary substation monitoring and control equipment. The use of 400 V-capable input modules together with bus couplers, configured through IEC 61850, reduces the configuration needed for the hardware. The use of SCL files also helps automate the creation of the HMI for the secondary substation through IEC 61850-based tools in SCADA software, creating views of both single-line diagrams and digital representations of the secondary substation's outgoing feeders with measured values on display. The results of the project show that NIS information is sufficient, and the standards mature enough, to allow an almost fully automated system, lowering the total time spent on each station's configuration to around two hours and paving the way for future development of automation software for the configuration of secondary substations.
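The conversion step described above (an NIS/CIM export turned into an IEC 61850 SCL skeleton) can be sketched, in heavily simplified form, as a small XML-to-XML mapping. Real IEC 61970 CIM RDF/XML and SCL files are far richer than this; the toy input, the element and attribute choices, and the single XSWI logical node below are illustrative assumptions, not the thesis's engineering tool.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified CIM-style export of one substation with two feeders.
cim_xml = """<Substation name="NS_1234">
    <Bay name="Feeder01"><Equipment type="LoadBreakSwitch" model="ABC-400"/></Bay>
    <Bay name="Feeder02"><Equipment type="Fuse" model="XYZ-200"/></Bay>
</Substation>"""

def cim_to_scl(cim_text):
    """Map the toy CIM-style tree onto a skeleton SCL-style tree."""
    substation = ET.fromstring(cim_text)
    scl = ET.Element("SCL")
    st = ET.SubElement(scl, "Substation", name=substation.get("name"))
    ied = ET.SubElement(scl, "IED", name="MonitoringUnit")
    for bay in substation.findall("Bay"):
        bay_el = ET.SubElement(st, "Bay", name=bay.get("name"))
        for eq in bay.findall("Equipment"):
            ET.SubElement(bay_el, "ConductingEquipment",
                          name=eq.get("model"), type=eq.get("type"))
            # one logical node per switching device, e.g. switch position indication
            ET.SubElement(ied, "LN", lnClass="XSWI", prefix=bay.get("name"))
    return ET.tostring(scl, encoding="unicode")

print(cim_to_scl(cim_xml))
```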
APA, Harvard, Vancouver, ISO, and other styles
48

Bataw, Anas. "On the integration of Building Information Modelling in undergraduate civil engineering programmes in the United Kingdom." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/on-the-integration-of-building-information-modelling-in-undergraduate-civil-engineering-programmes-in-the-united-kingdom(6e1827f3-1178-4ef7-9608-f0a2f106a03b).html.

Full text
Abstract:
The management of data, information and knowledge through the project life cycle of buildings and civil infrastructure projects is becoming increasingly complex. In an attempt to drive efficiencies and address this complexity, the United Kingdom (UK) Government has mandated that Building Information Modelling (BIM) methods must be adopted in all public sector construction projects from 2016. Emerging from the US Department of Defence, BIM is an approach to the co-ordination of design and production data using object-oriented principles as described in ISO 29481-1:2010. The underlying philosophy of BIM is to ensure the "provision of a single environment to store shared asset data and information, accessible to all individuals who are required to produce, use and maintain it" (PAS 1192-2:2013). A key aspect of BIM lies in the notion of 'interoperability' between various software applications used in the design and construction process and a common data format for the efficient exchange of design information and knowledge. Protagonists of BIM argue that this interoperability provides an effective environment for collaboration between actors in the construction process and creates accurate, reliable, repeatable and high-quality information exchange. This UK Government mandate presents numerous challenges to the architecture, engineering and construction (AEC) professions; in particular, the characteristics of BIM Level 2 remain explicitly undefined and this has created a degree of uncertainty amongst the promoters and those professionals charged with delivering projects. This uncertainty is further reflected in UK higher education; contemporary undergraduate programmes in civil engineering across the UK are, on the whole, at the bottom of the BIM 'maturity curve'. UK higher education institutions are increasingly being challenged to embrace BIM through appropriate pedagogies and teaching practices but the supporting guidance is emergent and variable. In the case of civil engineering programmes in the UK, the Joint Board of Moderators (JBM) has issued a 'good practice guide', as has the Higher Education Academy (HEA) under the auspices of the 'BIM Academic Forum'. Nevertheless, a clear demand for further research to explore the technical and pedagogical issues associated with BIM integration into degree programmes remains. The research described in this thesis casts a critical lens on the current literature in the domains of object-oriented modelling of infrastructure and the associated implications for procurement and project management. A mixed-methods approach using questionnaire analysis, focus groups and secondary case study analysis was used to enact an inductive research approach that captures a range of data on pedagogic issues and considerations associated with the integration of BIM into the design of a new civil engineering curriculum. The findings include recommendations for the 'up-skilling' of university teachers and academics, enhancing student employability and the development of suitable learning and teaching techniques. A framework for the incorporation of BIM principles, concepts and technologies into civil engineering programmes is proposed. The findings of the research suggest that the first two years of study in a typical, accredited civil engineering degree programme should focus on the technical concepts relating to design from a modelling and analysis perspective.
The latter years of the degree should focus on the development of ‘soft-skills’ required to enable effective teamwork and collaboration within a multidisciplinary project environment. Further studies should seek to test the proposed framework in a ‘live’ environment, particularly in the context of the necessity to balance the demands of summative and formative assessment regimes.
APA, Harvard, Vancouver, ISO, and other styles
49

Venugopal, Manu. "Formal specification of industry foundation class concepts using engineering ontologies." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42868.

Full text
Abstract:
Architecture, Engineering, Construction (AEC) and Facilities Management (FM) involve domains that require a very diverse set of information and model exchanges to fully realize the potential of Building Information Modeling (BIM). Industry Foundation Classes (IFC) provides a neutral and open schema for interoperability. Model View Definitions (MVD) provide a common subset for specifying the exchanges using IFC, but are expensive to build, test and maintain. A semantic analysis of the IFC data schema illustrates the complexities of embedding semantics in model views. A software engineering methodology based on the formal specification of shared resources, reusable components and standards applicable to the AEC-FM industry is adopted in this research for the development of a Semantic Exchange Module (SEM) structure for the IFC schema. This SEM structure is based on engineering ontologies that enable the development of more consistent MVDs. In this regard, an ontology is considered a machine-readable set of definitions that creates a taxonomy of classes and subclasses and the relationships between them. Typically, the ontology contains a hierarchical description of the important entities used in IFC, along with their properties and business rules. This ontological framework, similar to that of the Semantic Web, makes IFC more formal and consistent, as it is capable of providing precise definitions of terms and vocabulary. The outcome of this research, a formal classification structure for IFC implementations in the precast/prestressed concrete industry, provides, when implemented by software developers, the mechanism for applications such as modular MVDs, smart and complex querying of product models, and transaction-based services, based on the idea of testable and reusable SEMs. It can be extended, and it also supports the consistent implementation of rule languages across different domains within AEC-FM, making data sharing across applications simpler with limited rework. This research is expected to impact the overall interoperability of applications in the BIM realm.
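As a small, hedged illustration of the ontology idea described above (a machine-readable taxonomy of classes, subclasses and relationships), the sketch below builds a tiny RDFS fragment for precast elements with rdflib. The namespace, class and property names are hypothetical and far simpler than the SEM structure developed in the dissertation.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/precast#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# taxonomy of classes and subclasses
for cls in ("BuildingElement", "PrecastElement", "DoubleTee", "HollowCoreSlab"):
    g.add((EX[cls], RDF.type, RDFS.Class))
g.add((EX.PrecastElement, RDFS.subClassOf, EX.BuildingElement))
g.add((EX.DoubleTee, RDFS.subClassOf, EX.PrecastElement))
g.add((EX.HollowCoreSlab, RDFS.subClassOf, EX.PrecastElement))

# a property with a domain and a label, standing in for an exchange requirement
g.add((EX.hasEmbedPlate, RDF.type, RDF.Property))
g.add((EX.hasEmbedPlate, RDFS.domain, EX.PrecastElement))
g.add((EX.hasEmbedPlate, RDFS.label, Literal("has embed plate")))

print(g.serialize(format="turtle"))
```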
APA, Harvard, Vancouver, ISO, and other styles
50

Carlquist, Johan. "Evaluating the use of ICN for Internet of things." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-343368.

Full text
Abstract:
The market for IoT devices, as well as for constrained wireless sensor networks, continues to grow at a rapid pace. Today, the main network paradigm is host-centric, where users have to specify which host they want to receive their data from. Information-centric networking (ICN) is a new paradigm for the future internet, based on named data instead of named hosts. With ICN, a user sends a request for a particular piece of data in order to retrieve it. Once the request is sent, any participant in the network, router or server, that holds the data will respond to it. In order to achieve low latency between data creation and consumption, and to be able to follow data that is produced sequentially at a fixed rate, an algorithm was developed. This algorithm calculates and determines when to send the next interest message towards the sensor. It uses a 'one-time subscription' approach to send its interest message in advance of the creation of the data, thereby enabling low latency from data creation to consumption. The results show that a consumer can retrieve the data with minimal latency from its creation by the sensor over an extended period of time, without using a publish/subscribe system such as MQTT, which pushes data towards its consumers. The performance evaluation of the Content Centric Networking application on the sensor shows that the application has little impact on the overall round-trip time in the network. Based on the results, this thesis concludes that the ICN paradigm, together with a 'one-time subscription' model, can be a suitable option for communication within the IoT domain where consumers ask for sequentially produced data.
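The scheduling idea described above, sending the next interest just before the next fixed-rate data item is produced, can be sketched as follows. This is a simplified stand-in for the thesis's 'one-time subscription' algorithm: the half-RTT lead time, the class and all names are assumptions.

```python
import time

class OneTimeSubscriptionScheduler:
    """Schedules the next interest for data produced sequentially at a fixed, known rate."""

    def __init__(self, production_period_s, rtt_estimate_s):
        self.period = production_period_s
        self.rtt = rtt_estimate_s

    def next_send_time(self, last_data_timestamp):
        """Send one period after the last item's creation, minus half an RTT, so the
        interest reaches the producer at (or just after) the moment the new item exists."""
        return last_data_timestamp + self.period - self.rtt / 2

    def wait_and_request(self, last_data_timestamp, send_interest):
        delay = self.next_send_time(last_data_timestamp) - time.time()
        if delay > 0:
            time.sleep(delay)
        return send_interest()   # send_interest() is the application's ICN request call (a stub here)

# Usage: data produced every 2 s, measured RTT of 80 ms.
scheduler = OneTimeSubscriptionScheduler(production_period_s=2.0, rtt_estimate_s=0.08)
print(scheduler.next_send_time(last_data_timestamp=1_000_000.0))
```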
APA, Harvard, Vancouver, ISO, and other styles