Dissertations on the topic "Lignes de Produit Logiciel"
Format sources in APA, MLA, Chicago, Harvard and other citation styles
Explore the top 50 dissertations for research on the topic "Lignes de Produit Logiciel".
Istoan, Paul. "Méthodologie pour la dérivation comportementale de produits dans une ligne de produit logicielle." PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926141.
Urli, Simon. "Processus flexible de configuration pour lignes de produits logiciels complexes." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4002/document.
The necessity of producing high-quality software and specific software market needs have given rise to new approaches such as Software Product Lines (SPL). However, to satisfy the growing requirements of new information systems, we need to consider those systems as a composition of many interconnected sub-systems called systems-of-systems. As an SPL, this implies supporting the modularity and the large variability of such systems, from the definition of sub-systems to their composition, while ensuring the consistency of the final systems. To support the design and usage of such a complex SPL, we propose a new approach based on (i) the definition of an SPL domain model, (ii) the formalization of variability using feature models (FM) and (iii) the representation of dependencies between those different FMs. To manage the complexity of this SPL, we complete our approach with, on the one hand, algorithms ensuring the consistency of the SPL and, on the other hand, the definition of a configuration process which guarantees the consistency of products without imposing an order on user choices and which allows any choice to be cancelled. This thesis presents a formalization of these works and demonstrates the expected properties of such SPLs, such as the control of product-line consistency with incremental algorithms exploiting the domain model topology, the formal definition and proof of the flexibility of the configuration process, and the consistency concepts of the process itself. On this basis, we propose a first implementation and validate our work on an SPL dedicated to an industrial-scale system-of-systems for producing digital signage systems.
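The flexibility property described in this abstract can be illustrated with a minimal sketch: a partial configuration is consistent as long as it can still be completed into a valid product, so choices can be made in any order and cancelled at will. The feature names and constraints below are invented for illustration, not taken from the thesis, and a real tool would use a solver rather than brute-force enumeration.

```python
from itertools import product

# Toy feature model for a digital-signage-like product line (names invented).
FEATURES = ["Base", "Player", "Schedule", "RemoteAdmin"]

def is_valid(cfg):
    """A complete configuration is a dict feature -> bool meeting all constraints."""
    return (
        cfg["Base"]                                    # root feature is mandatory
        and (not cfg["Schedule"] or cfg["Player"])     # Schedule requires Player
        and (not cfg["RemoteAdmin"] or cfg["Player"])  # RemoteAdmin requires Player
    )

def extensible(partial):
    """True iff some complete valid configuration agrees with the partial choices.
    Checking this after every user choice keeps the process order-independent;
    cancelling a choice is just removing the key from `partial`."""
    free = [f for f in FEATURES if f not in partial]
    for values in product([False, True], repeat=len(free)):
        candidate = dict(partial, **dict(zip(free, values)))
        if is_valid(candidate):
            return True
    return False

# Choosing Schedule before Player is allowed: the partial config is extensible.
assert extensible({"Schedule": True})
# Refusing the root dead-ends immediately and would be rejected.
assert not extensible({"Base": False})
```

The same check also detects when a combination of choices has become contradictory, which is the point at which a flexible process would ask the user to retract one of them.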
Ziadi, Tewfik. "Manipulation de lignes de produits en UML." Rennes 1, 2004. http://www.theses.fr/2004REN10152.
Nebut, Clémentine. "Génération automatique de tests à partir des exigences et application aux lignes de produits logicielles." Rennes 1, 2004. http://www.theses.fr/2004REN10099.
Arboleda, Jiménez Hugo Fernando. "Fine-grained configuration et dérivation de lignes de produit logiciels dirigé par les modèles." Nantes, 2009. http://www.theses.fr/2009NANT2117.
We present FieSta, an approach based on Model-Driven Development ideas to create Software Product Lines (SPLs). In Model-Driven SPL approaches, the derivation of a product starts from a domain application model. This model is transformed through several stages, reusing model transformation rules, until a product is obtained. Transformation rules are selected according to variants included in configurations created by product designers. Configurations include variants from variation points, which are relevant characteristics representing the variability of a product line. FieSta (1) provides mechanisms to improve the expression of variability of Model-Driven SPLs by allowing designers to create fine-grained configurations of products, and (2) integrates a product derivation process which uses decision models and Aspect-Oriented Programming, facilitating the reuse, adaptation and composition of model transformation rules. We introduce constraint models which make it possible for product line architects to capture the scope of product lines using the concepts of constraint, cardinality property and structural dependency property. To configure products, we create domain models and binding models, which are sets of bindings between model elements and variants and satisfy the constraint models. We define a decision model as a set of aspects. An aspect maintains information on when transformation rules that generate commonalities of products must be intercepted (joinpoints) and which transformation rules (advices) that generate variable structures must be executed instead. Our strategy keeps variants uncoupled from model transformation rules. This solves problems related to the modularization, coupling, flexibility and maintainability of transformation rules because they are completely separated from variants; thus, they can evolve independently.
Ghabach, Eddy. "Prise en charge du « copie et appropriation » dans les lignes de produits logiciels." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4056/document.
A Software Product Line (SPL) manages the commonalities and variability of a family of related software products. This approach is characterized by systematic reuse that reduces development cost and time to market and increases software quality. However, building an SPL requires an expensive initial investment. Therefore, organizations that cannot afford such an up-front investment tend to develop a family of software products using simple and intuitive practices. Clone-and-own (C&O) is an approach widely adopted by software developers to construct new product variants from existing ones. However, the efficiency of this practice degrades as the product family grows and becomes difficult to manage. In this dissertation, we propose a hybrid approach that utilizes both SPL and C&O to develop and evolve a family of software products. An automatic mechanism for identifying the correspondences between product features and software artifacts allows the migration of product variants developed with C&O into an SPL. The originality of this work is then to support the derivation of new products by proposing different scenarios of C&O operations to be performed to derive a new product from the required features. The developer can then narrow down these possibilities by expressing her preferences (e.g. products, artifacts) and using the proposed cost estimations on the operations. We realized our approach by developing SUCCEED, a framework for SUpporting Clone-and-own with Cost-EstimatEd Derivation. We validate our work on a case study of families of web portals.
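The scenario-ranking idea in this abstract (clone an existing variant, then add or remove features at an estimated cost) can be sketched as below. Variant names, feature sets and per-operation costs are invented; SUCCEED's actual cost model and operation set are richer than this.

```python
# Hypothetical web-portal variants and the features each one implements.
VARIANTS = {
    "portal_v1": {"news", "search"},
    "portal_v2": {"news", "search", "forum"},
    "portal_v3": {"news", "gallery"},
}
ADD_COST, REMOVE_COST = 3, 1   # assumed per-feature operation costs

def derivation_scenarios(required):
    """Rank clone-and-own scenarios (which variant to clone) by estimated cost:
    adding a missing feature costs more than stripping an unwanted one."""
    scenarios = []
    for name, feats in VARIANTS.items():
        cost = ADD_COST * len(required - feats) + REMOVE_COST * len(feats - required)
        scenarios.append((cost, name))
    return sorted(scenarios)

# Cheapest scenario for a product that needs news, search and gallery.
best_cost, best_variant = derivation_scenarios({"news", "search", "gallery"})[0]
```

Presenting the whole sorted list, rather than only the minimum, matches the abstract's point that the developer filters the proposed scenarios using her own preferences.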
Bécan, Guillaume. "Metamodels and feature models : complementary approaches to formalize product comparison matrices." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S116/document.
Product Comparison Matrices (PCMs) abound on the Web. They provide a simple representation of the characteristics of a set of products. However, the lack of formalization and the large diversity of PCMs challenge the development of software for processing these matrices. In this thesis, we develop two complementary approaches for the formalization of PCMs. The first consists in a precise description of the structure and semantics of PCMs in the form of a metamodel. We also propose an automated transformation from PCMs to PCM models conformant to the metamodel. The second consists in synthesizing attributed feature models from a class of PCMs. With these contributions, we propose a generic and extensible approach for the formalization and exploitation of PCMs.
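A naive version of the second approach (synthesizing an attributed feature model from a PCM) can be sketched as follows: yes/no columns become mandatory or optional features, and other columns become attributes. The matrix content is invented, and real Web PCMs are far messier than this uniform table.

```python
# A tiny product comparison matrix as rows of product -> cell values (invented).
PCM = {
    "Product A": {"License": "GPL", "Plugins": "Yes", "Cloud Sync": "No"},
    "Product B": {"License": "MIT", "Plugins": "Yes", "Cloud Sync": "Yes"},
    "Product C": {"License": "GPL", "Plugins": "Yes", "Cloud Sync": "No"},
}

def synthesize_features(pcm):
    """Naive attributed-feature-model synthesis from a clean PCM:
    a Yes/No column that is 'Yes' everywhere is a mandatory feature,
    a mixed Yes/No column is optional, any other column is an attribute."""
    headers = next(iter(pcm.values())).keys()
    mandatory, optional, attributes = [], [], []
    for h in headers:
        values = {row[h] for row in pcm.values()}
        if values == {"Yes"}:
            mandatory.append(h)
        elif values <= {"Yes", "No"}:
            optional.append(h)
        else:
            attributes.append(h)
    return mandatory, optional, attributes

mandatory, optional, attributes = synthesize_features(PCM)
```

The thesis's contribution is precisely what this sketch skips: interpreting the heterogeneous cell values found in real matrices before any such classification is possible.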
Al-Msie', Deen Ra'Fat. "Construction de lignes de produits logiciels par rétro-ingénierie de modèles de caractéristiques à partir de variantes de logiciels : l'approche REVPLINE." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20024/document.
The idea of the Software Product Line (SPL) approach is to manage a family of similar software products in a reuse-based way. Reuse avoids repetition, which helps reduce development and maintenance effort, shorten time-to-market and improve the overall quality of software. To migrate from existing software product variants to an SPL, one has to understand how they are similar and how they differ from one another. Companies often develop a set of software variants that share some features and differ in others to meet specific requirements. To exploit existing software variants and build a software product line, a feature model must be built as a first step. To do so, it is necessary to extract mandatory and optional features and to associate each feature with its name. It is then important to organize the mined and documented features into a feature model. In this context, this thesis proposes three contributions. The first contribution is a new approach to mine features from the object-oriented source code of a set of software variants based on Formal Concept Analysis, code dependency and Latent Semantic Indexing. The novelty of our approach is that it exploits commonality and variability across software variants, at the source code level, to run Information Retrieval methods in an efficient way. The second contribution consists in documenting the mined feature implementations based on Formal Concept Analysis, Latent Semantic Indexing and Relational Concept Analysis. We propose a complementary approach which aims to document the mined feature implementations by giving them names and descriptions, based on the feature implementations and use-case diagrams of the software variants. The novelty of this approach is that it exploits commonality and variability across software variants, at the feature-implementation and use-case levels, to run Information Retrieval methods in an efficient way.
In the third contribution, we propose an automatic approach to organize the mined documented features into a feature model. Features are organized in a tree which highlights mandatory features, optional features and feature groups (and, or, xor groups). The feature model is completed with requirement and mutual-exclusion constraints. We rely on Formal Concept Analysis and software configurations to mine a unique and consistent feature model. To validate our approach, we applied it to three case studies: ArgoUML-SPL, Health complaint-SPL and the Mobile media software product variants. The results of this evaluation validate the relevance and performance of our proposal, as most of the features and their constraints were correctly identified.
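The grouping at the heart of such FCA-based feature mining can be sketched as follows: code elements are clustered by the exact set of variants that contain them, so each cluster approximates one feature implementation. This is a heavy simplification of REVPLINE (no lattice, no Latent Semantic Indexing), and the variant and element names are invented.

```python
from collections import defaultdict

# Formal context: software variants (objects) x code elements (attributes).
CONTEXT = {
    "Variant1": {"core", "draw", "print"},
    "Variant2": {"core", "draw"},
    "Variant3": {"core", "print"},
}

def atomic_blocks(context):
    """Group code elements by the exact set of variants containing them.
    Elements that always occur together form one block; the block shared
    by every variant is the commonality, the others are candidate features."""
    where = defaultdict(set)
    for variant, elements in context.items():
        for e in elements:
            where[e].add(variant)
    blocks = defaultdict(set)
    for element, variants in where.items():
        blocks[frozenset(variants)].add(element)
    return dict(blocks)

blocks = atomic_blocks(CONTEXT)
common = blocks[frozenset(CONTEXT)]   # elements present in every variant
```

In the full approach these blocks correspond to concepts of the variant/element lattice, and textual similarity is then used to name and document them.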
Cipriano, Mota Sousa Gustavo. "A software product lines-based approach for the setup and adaptation of multi-cloud environments." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I090.
Cloud computing is characterized by a model in which computing resources are delivered as services in a pay-as-you-go manner, which eliminates the need for upfront investments, reducing time to market and opportunity costs. Despite its benefits, cloud computing brought new concerns about provider dependence and data confidentiality, which led to a growing trend of consuming resources from multiple clouds. However, building multi-cloud systems is still very challenging and time consuming due to the heterogeneity of cloud providers' offerings and the high variability in the configuration of cloud providers. This variability is reflected in the large number of available services and the many different ways in which they can be combined and configured. In order to ensure the correct setup of a multi-cloud environment, developers must be aware of service offerings and configuration options from multiple cloud providers. To tackle this problem, this thesis proposes a software product line-based approach for managing the variability of cloud environments in order to automate the setup and adaptation of multi-cloud environments. The contributions of this thesis make it possible to automatically generate a configuration or reconfiguration plan for a multi-cloud environment from a description of its requirements. The conducted experiments assess the impact of the approach on the automated analysis of feature models and the feasibility of the approach for automating the setup and adaptation of multi-cloud environments.
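The reconfiguration-plan generation mentioned above can be caricatured as a set difference between the current and target service selections; in the thesis these selections come from feature-model configurations, and ordering and dependency handling are much more involved. The service names are invented.

```python
# Current and target service selections of a multi-cloud environment (invented).
current = {"aws:vm", "aws:storage"}
target = {"aws:vm", "gcp:storage", "gcp:queue"}

def reconfiguration_plan(current, target):
    """Emit the operations turning `current` into `target`.
    Provisioning is listed before decommissioning so a migration can copy
    data across before the old service disappears; real plans must also
    respect inter-service dependencies."""
    plan = [("provision", s) for s in sorted(target - current)]
    plan += [("decommission", s) for s in sorted(current - target)]
    return plan

plan = reconfiguration_plan(current, target)
```

An empty difference in both directions yields an empty plan, i.e. the environment already satisfies its requirements.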
Al-Msie'Deen, Ra'Fat. "Construction de lignes de produits logiciels par rétro-ingénierie de modèles de caractéristiques à partir de variantes de logiciels : l'approche REVPLINE." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2014. http://tel.archives-ouvertes.fr/tel-01015102.
Possompès, Thibaut. "Configuration par modèle de caractéristiques adapté au contexte pour les lignes de produits logiciels : application aux Smart Buildings." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20237/document.
Software product lines aim at reusing documents, source code, architectures and, more generally, all artifacts created during software development in a given domain. Nowadays, we use "feature models" to facilitate the reuse of such elements. The approach consists in describing, in a feature model, artifacts and their usage constraints, and then identifying representative features for creating a new product. In some situations, a feature represents an artifact associated with a context element that must be handled by the product. Such a feature, and its related constraints, can be cloned for each occurrence of instances of this element in a given context. In this thesis, we try to determine the impact of a product's execution context on the features of a future product. We first explore different ways of representing feature models and a product context. Then, we propose a generic method to adapt a feature model to context elements. This thesis was carried out in the context of the RIDER project (Research for IT Driven EneRgy efficiency). This project aims at reducing energy waste due to inappropriate management of energy sources and needs. The heterogeneity of building equipment and the specificities of each building require adapting energy optimization software. We propose to apply a software product line approach to this project. More precisely, we apply our feature-model context adaptation methodology in order to adapt energy optimization software to each building's specific context.
Méndez, Acuña David Fernando. "Leveraging software product lines engineering in the construction of domain specific languages." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S136/document.
The use of domain-specific languages (DSLs) has become a successful technique in the development of complex systems because it provides benefits such as abstraction, separation of concerns, and improved productivity. Nowadays, we can find a large variety of DSLs providing support in various domains. However, the construction of these languages is an expensive task. Language designers must invest a considerable amount of time and effort in the definition of formal specifications and tooling for DSLs that address the requirements of their companies. The construction of DSLs becomes even more challenging in multi-domain companies that provide several products. In this context, DSLs often have to be adapted to diverse application scenarios, so language development projects address the construction of several variants of the same DSL. At this point, language designers face the challenge of building all the required variants by reusing, as much as possible, the commonalities existing among them. The objective is to leverage previous engineering efforts to minimize implementation from scratch. To deal with this challenge, recent research in software language engineering has proposed the use of product line engineering techniques to facilitate the construction of DSL variants. This led to the notion of language product lines, i.e., software product lines where the products are languages. Similarly to software product lines, language product lines can be built through two different approaches: top-down and bottom-up. In the top-down approach, a language product line is designed and implemented through a domain analysis process. In the bottom-up approach, the language product line is built up from a set of existing DSL variants through reverse-engineering techniques. In this thesis, we provide support for the construction of language product lines according to both approaches.
On the one hand, we propose facilities in terms of language modularization and variability management to support the top-down approach. Those facilities are accompanied by methodological insights intended to guide the domain analysis process. On the other hand, we introduce a reverse-engineering technique to support the bottom-up approach. This technique includes a mechanism to automatically recover a modular design for the language product line, as well as a strategy to synthesize a variability model that can later be used to configure concrete DSL variants. The ideas presented in this thesis are implemented in a well-engineered language workbench. This implementation facilitates the validation of our contributions in three case studies. The first case study validates our language modularization approach which, as we explain later in this document, is the backbone of any approach supporting language product lines. The second and third case studies validate our contributions on top-down and bottom-up language product lines respectively.
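The top-down side can be caricatured as selecting language modules from a configuration and composing their constructs into a DSL variant. Module and construct names are invented, and a real language workbench composes syntax and semantics, not flat sets.

```python
# Language modules of a hypothetical DSL family, each providing constructs.
MODULES = {
    "expressions": {"literal", "add"},
    "statements": {"assign", "while"},
    "concurrency": {"spawn", "join"},
}

def compose_variant(selected):
    """Build a DSL variant as the union of the constructs of the selected
    modules; selecting modules plays the role of configuring the
    variability model of the language product line."""
    constructs = set()
    for module in selected:
        constructs |= MODULES[module]
    return constructs

# One configuration of the language product line: a small sequential DSL.
mini_dsl = compose_variant(["expressions", "statements"])
```

The bottom-up approach would instead start from several such `mini_dsl`-like variants and recover the `MODULES` decomposition and its variability model by reverse engineering.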
Galindo, Duarte José Ángel. "Evolution, testing and configuration of variability intensive systems." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S008/document.
The large number of configurations that a feature model can encode makes the manual analysis of feature models an error-prone and costly task. Computer-aided mechanisms therefore appeared as a solution for extracting useful information from feature models. This process is known as "Automated Analysis of Feature Models"; it has been one of the main areas of research in recent years, and more than thirty analysis operations have been proposed. In this dissertation we looked for different tendencies in the automated analysis field and found several research opportunities. Driven by real-world scenarios such as the smartphone and video-surveillance domains, we contributed by applying, adapting or extending automated analysis operations to the evolution, testing and configuration of variability-intensive systems.
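Two of the classic automated-analysis operations mentioned here (counting products and detecting dead features) can be sketched by brute-force enumeration over a toy feature model; real analyses use SAT/CSP solvers. The feature names are invented, and the two Flash constraints are deliberately contradictory so the example exhibits a dead feature.

```python
from itertools import product

# Toy smartphone-flavoured feature model (names invented).
FEATURES = ["Phone", "Camera", "Flash", "GPS"]

def valid(cfg):
    """Constraints of a deliberately flawed model: the two Flash constraints
    contradict each other, which makes Flash a dead feature."""
    return (cfg["Phone"]                               # root is mandatory
            and (not cfg["Flash"] or cfg["Camera"])    # Flash requires Camera
            and not (cfg["Flash"] and cfg["Camera"]))  # Flash excludes Camera

def analyze(features, valid_fn):
    """Two classic operations: number of products and dead-feature detection."""
    configs = [dict(zip(features, bits))
               for bits in product([False, True], repeat=len(features))]
    products = [c for c in configs if valid_fn(c)]
    dead = [f for f in features if not any(c[f] for c in products)]
    return len(products), dead

n_products, dead = analyze(FEATURES, valid)
```

Reporting `dead` back to the modeler is exactly the kind of feedback that makes automated analysis valuable: the contradiction would be very hard to spot by inspecting a large model manually.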
Yu, Jianqi. "Ligne de produits dynamique pour les applications à services." PhD thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM034.
Application development by composition of dynamic and heterogeneous services, that is to say, services implemented with different technologies, is the main subject of this thesis. We believe that the service-oriented approach brings considerable changes to software engineering and can bring significant benefits in terms of cost reduction, quality improvement and shorter time-to-market. Service-based technologies have already penetrated many industry sectors and met some of the expectations they raised. Application development by composing heterogeneous services remains very complex, however, for several reasons. Firstly, the various existing technologies employ very different mechanisms for declaration, discovery and binding. The services themselves are often described following different structures. Therefore, development effort and technical knowledge are necessary to correctly combine services using different technological bases. Secondly, dynamism management is complex. The principle of the service-oriented approach is to allow late service binding and, in some cases, the change of bindings as the context evolves. This requires very precise synchronization algorithms and is difficult to develop and test. We are well aware that in many cases the benefits of the service-oriented approach are not fully achieved for lack of appropriate dynamism management. Finally, services are essentially described using a logical syntax. Therefore, we cannot guarantee, in the general case, the compatibility of several services or, more simply, the correctness of their global behavior. This is even more difficult when services have complex interactions that are not restricted to a single call to obtain information. In this thesis, we bring a domain-specific dimension to service composition. The domain definition restricts the possible compositions of services, at both the technical and the semantic level.
We have thus found great complementarity between software product line approaches and service-oriented approaches. Dynamism is a natural characteristic of service-based technologies, that is to say, the ability to bind services as late as possible, until runtime. Software product lines, in turn, define approaches for planned reuse. Specifically, this thesis provides a three-phase, tool-supported approach for the development of service compositions, namely: the definition of a domain in the form of services and reference architectures, the definition of an application in the form of a service-based architecture, and the autonomic execution of applications following the application architecture. This thesis is validated in a collaborative project in the home healthcare domain.
Yu, Jianqi. "Ligne de produits dynamique pour les applications à services." PhD thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00493355.
Niang, Boubou Thiam. "A Model-Driven Engineering and Software Product Line Approach to Support Interoperability in Systems of Information Systems." Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20005.
Modern information systems consist of various components that require seamless communication and coordination. Organizations face difficulties adapting to dynamic changes while engaging with diverse industry partners. The challenge arises from the fact that interoperability mechanisms are often created manually and in an ad hoc manner. These mechanisms must be reusable to avoid time-consuming and error-prone processes in an ever-changing environment. The lack of reusability stems from interoperability mechanisms often being integrated into business-logic components, which creates strong coupling between components and makes maintenance difficult without affecting the overall operation of the system. Berger-Levrault, our industrial partner, primarily serves public institutions. The company actively maintains interoperability, especially in the context of frequent reforms in the local public sector. In addition, the company has grown through acquisitions, resulting in a diverse range of legacy applications with variations in language, architecture, norms, and industry standards. The primary goal is to create an adaptable interoperability solution that facilitates seamless communication between the components of the information system and the external environment. Challenges include adapting to changing rules and standards, managing variable data volumes, and integrating connected objects within public institutions. This thesis examines data-exchange flows between constituents and systems, analyzes their characteristics and requirements, and proposes cost-effective approaches for implementing and evolving interoperability mechanisms while minimizing the impact on overall information-system operations. The methodology adopted begins with a reified vision of interoperability mechanisms, where exchange mechanisms are extracted from the business-logic constituents and considered first-class constituents called interoperability connectors.
To achieve this, reverse engineering extracts functionality from existing interoperability mechanisms and reifies it as a tangible constituent, the connector, within the information system. For the analysis, we create a repository of transparently selected projects, guaranteeing a minimal number of projects while covering all the Enterprise Integration Patterns from different sources. The proposed metamodel confirms and validates this reification with regard to completeness and extensibility. The completeness of the connector metamodel is validated through a well-defined process, while another process guarantees the metamodel's extensibility. The extensible metamodel reveals connectors as common entities, leading to the ConPL approach, a software product line framework adapted to connectors. The PhaDOP tool was used to implement this approach, and a proof of concept was demonstrated on a specific use case. Performance tests were conducted on the proposed connector representation structure. The ConPL framework is validated through an industrial use case.
El, Amraoui Yassine. "Faciliter l'inclusion humaine dans le processus de science des données : de la capture des exigences métier à la conception d'un workflow d'apprentissage automatique opérationnel." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4017.
When data scientists need to create machine learning workflows to solve a problem, they first understand the business needs, analyze the data, and then experiment to find a solution. They judge the success of each attempt using metrics like accuracy, recall, and F-score. If these metrics meet expectations on the test data, it's a success; otherwise, it's considered a failure. However, they often don't pinpoint why a workflow fails before trying a new one. This trial-and-error process can involve many attempts because it's not guided and relies on the preferences and knowledge of the data scientist. This intuitive method leads to varying trial counts among data scientists. Also, evaluating solutions on a test set doesn't guarantee performance on real-world data. So, when models are deployed, additional monitoring is needed. If a workflow performs poorly, the whole process might need restarting, with adjustments based on new data. Furthermore, each data scientist learns from their own experiences without sharing knowledge. This lack of collaboration can lead to repeated mistakes and oversights. Additionally, the interpretation of similarity between use cases can vary among practitioners, making the process even more subjective. Overall, the process lacks structure and heavily depends on the individual knowledge and decisions of the data scientists involved. In this work, we present how to mutualize data science knowledge related to anomaly detection in time series to help data scientists generate machine learning workflows by guiding them along the phases of the process. To this aim, we propose three main contributions. Contribution 1: Integrating Data, Business Requirements, and Solution Components in ML Workflow Design. While automatic approaches focus on data, our approach considers the dependencies between the data, the business requirements, and the solution components.
This holistic approach ensures a more comprehensive understanding of the problem and guides the development of appropriate solutions. Contribution 2: Customizing Workflows for Tailored Solutions by Leveraging Partial and Modular Configurations. Our approach assists data scientists in customizing workflows for their specific problems. We achieve this by employing various variability models and a constraint system. This setup enables users to receive feedback based on their data and business requirements, which may be only partially identified. Additionally, we showed that users can access previous experiments based on problem settings or create entirely new ones. Contribution 3: Enhancing Software Product Line Knowledge through New Product Exploitation. We propose a practice-driven approach to building an SPL as a first step toward designing generic solutions for detecting anomalies in time series, while capturing new knowledge and capitalizing on existing knowledge when dealing with new experiments or use cases. The incremental acquisition of knowledge and the instability of the domain are supported by the SPL through its structuring and the exploitation of partial configurations associated with past use cases. As far as we know, this is the first application of the SPL paradigm in such a context and with a knowledge-acquisition objective. By capturing practices in partial descriptions of the problems and descriptions of the solutions implemented, we obtain the abstractions needed to reason about datasets, solutions, and business requirements. The SPL is then used to produce new solutions, compare them to past solutions, and identify knowledge that was not explicit. The growing abstraction supported by the SPL also brings other benefits. In knowledge sharing, we have observed a shift in the approach to creating ML workflows, focusing on analyzing problems before looking for similar applications.
Le, Moulec Gwendal. "Synthèse d'applications de réalité virtuelle à partir de modèles." Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0010/document.
Development practices in Virtual Reality (VR) are not optimized: for example, each company uses its own methods. The goal of this PhD thesis is to automate the development and evaluation of VR software with the use of Model-Driven Engineering (MDE) techniques. The existing approaches in VR do not take advantage of software commonalities. These lacks of reuse and abstraction are known problems in MDE, which proposes the Software Product Line (SPL) concept to automate the production of software belonging to the same family by reusing common components. However, this approach is not adapted to software based on a scenario, as in VR. We propose two frameworks that respectively address these lacks in MDE and VR: SOSPL (scenario-oriented software product line) and VRSPL (VR SPL). SOSPL is based on a scenario model that handles a software variability model (feature model, FM). Each scenario step matches a configuration of the FM. VRSPL is based on SOSPL; its scenario manages the manipulation of virtual objects, the objects being generated automatically from a model. We implemented these frameworks in tools that have been tried on examples and evaluated by their target users. The results support the use of these frameworks for producing scenario-based software.
Ridene, Youssef. "Ingéniérie dirigée par les modèles pour la gestion de la variabilité dans le test d'applications mobiles." Thesis, Pau, 2011. http://www.theses.fr/2011PAUU3010/document.
Mobile applications have increased substantially in volume with the emergence of smartphones. Ensuring high quality and a successful user experience is crucial to the success of such applications. Only an efficient test procedure allows developers to meet these requirements. In the context of embedded mobile applications, testing is costly and repetitive, mainly due to the large number of different mobile devices. In this thesis, we describe MATeL, a Domain-Specific Modeling Language (DSML) for designing test scenarios for mobile applications. Its abstract syntax, i.e. a metamodel and OCL constraints, enables the test designer to manipulate mobile application testing concepts such as tester, mobile device, outcomes and results. It also enables him/her to enrich these scenarios with variability points in the spirit of Software Product Line engineering, which can specify variations in the test according to the characteristics of one mobile device or a set of them. The concrete syntax of MATeL, inspired by UML sequence diagrams, and its Eclipse-based environment allow the user to easily develop scenarios. MATeL is built upon an industrial platform (a test bed) in order to be able to run scenarios on several different phones. The approach is illustrated in this thesis through use cases and experiments that verify and validate our contribution.
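A test scenario containing a variability point, resolved differently per device characteristic, can be sketched as below. The step names and device characteristics are invented, and MATeL itself is a graphical DSML on an industrial test bed, not a Python API.

```python
# A scenario is a list of steps; a variability point is modeled here as a
# dict mapping a device characteristic to the concrete step to use.
SCENARIO = [
    "launch_app",
    {"keyboard": "type_with_keys", "touchscreen": "type_on_screen"},
    "check_result",
]

def resolve(scenario, device):
    """Derive a concrete, device-specific scenario by resolving each
    variability point against the device's input characteristic."""
    return [step[device["input"]] if isinstance(step, dict) else step
            for step in scenario]

# One abstract scenario yields a concrete run per target device.
steps_touch = resolve(SCENARIO, {"input": "touchscreen"})
```

Writing the scenario once and deriving one concrete run per phone is what removes the costly repetition described in the abstract.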
Creff, Stephen. "Une modélisation de la variabilité multidimensionnelle pour une évolution incrémentale des lignes de produits." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926119.
Повний текст джерела
Ben, Rhouma Aouina Takoua. "Composition des modèles de lignes de produits logiciels." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00772257.
Повний текст джерела
Ben, Rhouma Takoua. "Composition des modèles de lignes de produits logiciels." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112299/document.
Повний текст джерела
Software Product Line (SPL) engineering aims at modeling and developing a set of software systems with similarities rather than individual software systems. The modeling task can, however, be tedious or even infeasible for large-scale and complex SPLs. To address this problem, the modeling task is distributed among different stakeholders. At the end, the separately developed models have to be composed in order to obtain the global SPL model. Composing SPL models is not a trivial task; the variability information of model elements has to be treated during the composition, as well as the variability constraints. Similarly, the model structure and the composition semantics are key points that have to be considered during the composition. This thesis aims at providing specific mechanisms to compose SPL models. Therefore, we propose two composition mechanisms: the merge and the aggregation mechanisms. The merge mechanism aims at combining models with structural similarities. The aggregation mechanism, however, intends to compose models without any structural similarity but with possible constraints across their structural elements. We focus on UML composite structures of SPLs and use specific annotations to identify variable elements. Our composition mechanisms deal with the variability information of structural elements, the variability constraints associated with the variable elements, as well as the structures of the manipulated models. We also specify a set of semantic properties that have to be considered during the composition process and show how to preserve them. Finally, we have carried out an assessment of the proposals and have shown their ability to compose SPL models in a reasonable time. We have also shown how model consolidation is important in reducing the number of products having an incomplete structure
Taffo, Tiam Raoul. "Modèles opérationnels de processus métier et d'exigences variables pour le développement de lignes de produits logiciels." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS268.
Повний текст джерела
Any organization involved in software engineering has to deal with the reduction of production time and cost in order to face the competitiveness challenge. This imperative of thinking about the software economy resulted in the goal of getting better control over developer productivity. Software reuse is a preferred way to increase productivity, particularly when it is systematized. Two types of activities should be considered to improve software reuse: development for reuse and development by reuse. Several solutions have been proposed to contribute to and improve development for reuse. For its part, the product line approach is distinguished by its contribution to development by reuse through the support and automation of selection, configuration, and derivation of new products. However, although this approach has positioned reuse as a core activity in its engineering process, it remains difficult to realize in many situations, for example due to the lack of specification or management of the variability which may occur in the artifacts of each step of the engineering process. In this context, the general issue of this thesis is the industrialization of software product lines, by contributing to the systematization of reuse in each step and the automation of transitions between those steps. To better support business agility, our first goal is the specification of variability within business process models, in order to make them directly usable in a software factory. Our second goal is to introduce variability specification into requirements engineering, enabling systematic reuse of requirements models and establishing traceability links with the previous models of variable business processes. Thus, an architecture model (service oriented) can be generated in the software factory, as an implementation of the modeled business processes complying with the specified requirements
Dumitrescu, Cosmin. "CO-OVM: Une approche pratique pour la modélisation de la variabilité en Ingénierie Système." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2014. http://tel.archives-ouvertes.fr/tel-01011186.
Повний текст джерела
Eyal, Salman Hamzeh. "Recovering traceability links between artifacts of software variants in the context of software product line engineering." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20008/document.
Повний текст джерела
Software Product Line Engineering (SPLE) is a software engineering discipline providing methods to promote systematic software reuse for developing short time-to-market and quality products in a cost-efficient way. SPLE leverages what Software Product Line (SPL) members have in common and manages what varies among them. The idea behind SPLE is to build core assets consisting of all reusable software artifacts (such as requirements, architecture, components, etc.) that can be leveraged to develop SPL products in a prescribed way. Creating these core assets is driven by the features provided by SPL products. Unfortunately, building SPL core assets from scratch is a costly task and requires a long time, which increases time-to-market and up-front investment. To reduce these costs, existing similar product variants developed by ad-hoc reuse should be re-engineered to build SPLs. In this context, our thesis proposes three contributions. Firstly, we proposed an approach to recover traceability links between features and their implementing source code in a collection of product variants. This helps to understand the source code of product variants and facilitates new product derivation from the SPL's core assets. The proposed approach is based on Information Retrieval (IR) for recovering such traceability links. In our experimental evaluation, we showed that our approach outperforms the conventional application of IR as well as the most recent and relevant work on the subject. Secondly, we proposed an approach, based on the traceability links recovered in the first contribution, to study feature-level Change Impact Analysis (CIA) for changes made to the source code of features of product variants. This approach helps to conduct change management from an SPL manager's point of view, allowing them to decide which change strategy should be executed, as there is often more than one change that can solve the same problem.
In our experimental evaluation, we proved the effectiveness of our approach in terms of the most commonly used metrics on the subject. Finally, based on the traceability recovered in the first contribution, we proposed an approach to contribute to building the Software Product Line Architecture (SPLA) and linking its elements with features. Our focus is to identify mandatory components and the variation points of components. Therefore, we proposed a set of algorithms to identify this commonality and variability across a given collection of product variants. According to the experimental evaluation, the efficiency of these algorithms mainly depends on the available product configurations
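The IR machinery behind such feature-to-code traceability recovery can be sketched with plain term-frequency vectors and cosine similarity. This is a minimal illustration of the general technique, not the thesis's exact pipeline; the feature description and code identifiers below are invented:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Split camelCase identifiers and non-letters into lowercase terms."""
    spaced = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
    return [w.lower() for w in re.findall(r"[A-Za-z]+", spaced)]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A feature description vs. candidate source code units (identifiers, comments).
feature = Counter(tokenize("withdraw money from account balance"))
units = {
    "Account.withdraw": Counter(tokenize("withdrawMoney checkBalance account")),
    "Report.print":     Counter(tokenize("printReport formatPage")),
}

# Rank code units by textual similarity to the feature description.
links = sorted(units, key=lambda u: cosine(feature, units[u]), reverse=True)
print(links[0])  # → Account.withdraw
```

Ranked similarity scores like these are what a traceability recovery approach thresholds or refines to decide which code units implement which feature.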
Pham, Thi-Kim-Dung. "Development of Correct-by-Construction Software using Product Lines." Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1138/document.
Повний текст джерела
We began the thesis by surveying the literature on SPLE and CbyC approaches in the State of the Art. Based on the overview and the insights obtained, we analyzed the existing problems and suggested ways to solve them for our main goal. We proposed in Chapter 2 a methodology to develop product lines such that the generated products are correct-by-construction. Our main intention is that a user does not need to know the product generation process but can receive a correct final product by selecting a configuration of features. Using the methodology, the final products are generated automatically and their correctness is guaranteed. Following this proposal, we moved in Chapter 3 to define the FFML language that is used for writing modules. The reuse and modification mechanism, defined for the language and applied to all kinds of artifacts (specification, code and correctness proof), reduces the programming effort. In Chapter 4, we focused on defining the composition mechanisms for composing FFML modules and embedded them into the FFML Product Generator tool. The evaluation of our methodology is performed through the development of two software product lines, the Bank Account SPL and the Poker SPL, the latter being a bit more complex than the former. In the evaluation, we highlighted the advantages and the limitations of our methodology
Martinez, Jabier. "Exploration des variantes d'artefacts logiciels pour une analyse et une migration vers des lignes de produits." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066344/document.
Повний текст джерела
Software Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets to allow systematic reuse. Capitalizing on existing variants by extracting the common and varying elements is referred to as an extractive approach for SPL adoption. Feature identification is needed to analyse the domain variability. Also, to identify the implementation elements associated with the features, their location is needed. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations. Then, the reusable assets associated with the features should be constructed. And finally, a comprehensive feature model needs to be synthesized. This dissertation presents Bottom-Up Technologies for Reuse (BUT4Reuse), a unified, generic and extensible framework for mining software artefact variants. Special attention is paid to model-driven development scenarios. We also focus on benchmarks and on the analysis of variants, in particular on benchmarking feature location techniques and on identifying families of variants in the wild for experimenting with feature identification techniques. We present visualisation paradigms to support domain experts in feature naming and in feature constraint discovery. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space guided by end-user assessments
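The extraction of common and varying elements across variants can be sketched by grouping elements according to the exact set of variants they occur in; this is a simplified view of such block identification, with invented variant contents:

```python
from collections import defaultdict

# Each variant is described as a set of implementation elements.
variants = {
    "v1": {"core", "gui", "export_pdf"},
    "v2": {"core", "gui", "export_csv"},
    "v3": {"core", "cli", "export_csv"},
}

# Record, for each element, the set of variants it occurs in.
occurrences = defaultdict(set)
for name, elements in variants.items():
    for e in elements:
        occurrences[e].add(name)

# Elements with identical occurrence sets form one block: elements shared by
# all variants are the common block, the rest are candidate feature blocks.
blocks = defaultdict(set)
for element, in_variants in occurrences.items():
    blocks[frozenset(in_variants)].add(element)

common = blocks[frozenset(variants)]
print(sorted(common))  # → ['core']
```

Blocks like these are then the raw material for feature identification and for constructing the reusable assets.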
Martinez, Jabier. "Exploration des variantes d'artefacts logiciels pour une analyse et une migration vers des lignes de produits." Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066344.
Повний текст джерела
Software Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets to allow systematic reuse. Capitalizing on existing variants by extracting the common and varying elements is referred to as an extractive approach for SPL adoption. Feature identification is needed to analyse the domain variability. Also, to identify the implementation elements associated with the features, their location is needed. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations. Then, the reusable assets associated with the features should be constructed. And finally, a comprehensive feature model needs to be synthesized. This dissertation presents Bottom-Up Technologies for Reuse (BUT4Reuse), a unified, generic and extensible framework for mining software artefact variants. Special attention is paid to model-driven development scenarios. We also focus on benchmarks and on the analysis of variants, in particular on benchmarking feature location techniques and on identifying families of variants in the wild for experimenting with feature identification techniques. We present visualisation paradigms to support domain experts in feature naming and in feature constraint discovery. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space guided by end-user assessments
Collet, Philippe. "Taming Complexity of Large Software Systems: Contracting, Self-Adaptation and Feature Modeling." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00657444.
Повний текст джерела
Carbonnel, Jessie. "L'analyse formelle de concepts : un cadre structurel pour l'étude de la variabilité de familles de logiciels." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS057/document.
Повний текст джерела
Software families often arise from reuse practices such as cloning existing software products, which are then enhanced or pruned to fulfill new requirements. With time, these variants grow in number and in complexity, and become more and more difficult to maintain. Software product line engineering gathers a set of methods that aim at facilitating the management and development of such collections of existing variants. Documenting variability is the central point of this paradigm; this variability is represented in variability models that support a large part of software product line engineering processes. The partial or complete migration from software families to a product line approach eases their exploitation. Reverse-engineering, modeling and managing variability are known to be crucial tasks of the migration: therefore, numerous methods have been proposed to study descriptions of software families with this goal. Some of them are based on formal concept analysis, a mathematical framework for hierarchical clustering which organises sets of objects and their descriptions in canonical structures naturally highlighting their commonalities and variability. In this thesis, we defend that formal concept analysis, more than a tool, is a relevant structural, reusable and extensible framework to study the variability of software families. First, we propose an overview of the variability information which is highlighted thanks to this framework, and we discuss its scope of applicability. We study the common points between the conceptual structures of formal concept analysis and variability models. Then, we show how to use these conceptual structures to support search and modeling operations. Finally, we broaden the scope of this study to take into account more complex information about extended variability. We evaluate our method on data taken from the SPLOT repository, fork-insight and product comparison matrices from Wikipedia
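The canonical structures in question can be illustrated by naively enumerating the formal concepts of a small product-by-feature context. Real FCA tools use dedicated algorithms and build the full concept lattice; the context below is invented:

```python
from itertools import combinations

# Formal context: products (objects) described by the features they offer.
context = {
    "p1": {"core", "gui"},
    "p2": {"core", "gui", "stats"},
    "p3": {"core", "cli"},
}

def extent(features):
    """Products having all the given features."""
    return {p for p, fs in context.items() if features <= fs}

def intent(products):
    """Features shared by all the given products."""
    fs = [context[p] for p in products]
    return set.intersection(*fs) if fs else set.union(*context.values())

# A formal concept is a pair (extent, intent) closed under both maps;
# closing every subset of products enumerates them all (naively).
all_products = set(context)
concepts = set()
for r in range(len(all_products) + 1):
    for subset in combinations(sorted(all_products), r):
        ext = extent(intent(set(subset)))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

# The top concept exposes the commonality of the whole family.
print(sorted(intent(all_products)))  # → ['core']
```

Each concept groups products with the features they all share, which is exactly how the lattice makes commonalities and variability explicit.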
Pham, Thi-Kim-Dung. "Development of Correct-by-Construction Software using Product Lines." Electronic Thesis or Diss., Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1138.
Повний текст джерела
We began the thesis by surveying the literature on SPLE and CbyC approaches in the State of the Art. Based on the overview and the insights obtained, we analyzed the existing problems and suggested ways to solve them for our main goal. We proposed in Chapter 2 a methodology to develop product lines such that the generated products are correct-by-construction. Our main intention is that a user does not need to know the product generation process but can receive a correct final product by selecting a configuration of features. Using the methodology, the final products are generated automatically and their correctness is guaranteed. Following this proposal, we moved in Chapter 3 to define the FFML language that is used for writing modules. The reuse and modification mechanism, defined for the language and applied to all kinds of artifacts (specification, code and correctness proof), reduces the programming effort. In Chapter 4, we focused on defining the composition mechanisms for composing FFML modules and embedded them into the FFML Product Generator tool. The evaluation of our methodology is performed through the development of two software product lines, the Bank Account SPL and the Poker SPL, the latter being a bit more complex than the former. In the evaluation, we highlighted the advantages and the limitations of our methodology
Boufedji, Dounia. "Vers une approche d’ingénierie multi-agents à base de lignes de produits logiciels." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS438.
Повний текст джерела
Multi-Agent Systems (MAS) represent an ideal solution that has already proven effective for the modelling of complex systems. AOSE (Agent-Oriented Software Engineering) offers different methodologies, meta-models, templates and reuse patterns that facilitate their development and accelerate their acceptance within the software industry. However, the existing approaches to MAS engineering do not allow the management and development of similar applications known as MAS families. These applications have some commonalities, as well as differences called variability. The management of variability can be done at different levels such as design and development, but it is not taken into account in existing approaches. In order to compensate for the lack of variability management within multi-agent families at the level of agent-oriented approaches, SPL (Software Product Line) engineering turns out to be the appropriate solution, for which the management of variability remains a key element. In this context, the exploitation of SPL engineering techniques within the framework of multi-agent approaches is known as Multi-Agent System Product Line (MAS-PL) engineering. This thesis is part of this line of MAS-PL approaches meant to enhance the management of variability within families of MAS, which consequently improves the aspects of reuse revolving around variability. Our approach, based on the general SPL process, pushes the limits of current MAS-PL approaches
Tërnava, Xhevahire. "Gestion de la variabilité au niveau du code : modélisation, traçabilité et vérification de cohérence." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4114/document.
Повний текст джерела
When large software product lines are engineered, a combined set of traditional techniques, such as inheritance or design patterns, is likely to be used for implementing variability. In these techniques, the concept of feature, as a reusable unit, does not have a first-class representation at the implementation level. Further, an inappropriate choice of techniques becomes the source of variability inconsistencies between the domain and the implemented variabilities. In this thesis, we study the diversity of the majority of variability implementation techniques and provide a catalog that covers an enriched set of them. Then, we propose a framework to explicitly capture and model, in a fragmented way, the variability implemented by several combined techniques into technical variability models. These models use variation points and variants, with their logical relations and binding times, to abstract the implementation techniques. We show how to extend the framework to trace features to their respective implementations. In addition, we use this framework and provide a tooled approach to check the consistency of the implemented variability. Our method uses slicing to partially check the corresponding propositional formulas at the domain and implementation levels in the case of 1-to-m mappings. It offers early and automatic detection of inconsistencies. As validation, we report on the implementation in Scala of the framework as an internal domain-specific language, and of the consistency checking method. These implementations have been applied to a real feature-rich system and to three product line case studies, showing the feasibility of the proposed contributions
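The kind of inconsistency detected, a product realizable in the implementation but forbidden by the domain feature model, can be sketched with a brute-force check over all assignments. The thesis uses slicing to keep such checks tractable on partial formulas; the formulas below are invented:

```python
from itertools import product

FEATURES = ["a", "b", "c"]

def domain(cfg):
    # Domain feature model: feature b requires feature a.
    return not cfg["b"] or cfg["a"]

def implementation(cfg):
    # Implemented variability: the requires-constraint was not encoded.
    return True

# An inconsistency is a configuration realizable in the code but
# invalid with respect to the domain feature model.
inconsistent = [
    cfg
    for values in product([False, True], repeat=len(FEATURES))
    for cfg in [dict(zip(FEATURES, values))]
    if implementation(cfg) and not domain(cfg)
]
print(len(inconsistent))  # → 2 (b without a, with or without c)
```

Detecting such witnesses early is the point of the automated consistency check: each one is a product the code can assemble but the domain forbids.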
Filho, João Bosco Ferreira. "Leveraging model-based product lines for systems engineering." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S080/document.
Повний текст джерела
Systems engineering is a complex and expensive activity in several kinds of companies; it forces stakeholders to deal with massive pieces of software and their integration with several hardware components. To ease the development of such systems, engineers adopt a divide-and-conquer approach: each concern of the system is engineered separately, with several domain-specific languages (DSLs) and stakeholders. The current practice for making DSLs is to rely on Model-Driven Engineering (MDE). On the other hand, systems engineering companies also need to construct slightly different versions/variants of a same system; these variants share commonalities and variabilities that can be managed using a Software Product Line (SPL) approach. A promising approach is to ally MDE with SPL, as Model-based SPLs (MSPLs), in a way that the products of the SPL are expressed as models conforming to a metamodel and well-formedness rules. The Common Variability Language (CVL) has recently emerged as an effort to standardize and promote MSPLs. Engineering an MSPL is extremely complex for an engineer: the number of possible products is exponential; the derived product models have to conform to numerous well-formedness and business rules; and the realization model that connects a variability model and a set of design models can be very expressive, especially in the case of CVL. Managing variability models and design models is a non-trivial activity. Connecting both parts, and therefore managing all the models, is a daunting and error-prone task. Added to these challenges, we have the multiple different modeling languages of systems engineering. Each time a new modeling language is used for developing an MSPL, the realization layer should be revised accordingly.
The objective of this thesis is to assist the engineering of MSPLs in the systems engineering field, considering the need to support it as early as possible and without compromising the existing development process. To achieve this, we provide a systematic and automated process, based on CVL, to randomly search the space of MSPLs for a given language, generating counterexamples that can serve as antipatterns. We then provide ways to specialize CVL's realization layer (and derivation engine) based on the knowledge acquired from the counterexamples. We validate our approach with four modeling languages, one of them acquired from industry; the approach generates counterexamples efficiently, and we could make initial progress in increasing the safety of the MSPL mechanisms for those languages by implementing antipattern detection rules. Besides, we also analyse big Java programs, assessing the adequacy of CVL to deal with complex languages; it is also a first step towards qualitatively assessing the counterexamples. Finally, we provide a methodology to define the processes and roles to leverage MSPL engineering in an organization
Cornieles, Ernesto. "Étude théorique et expérimentale d'un résonateur ultrasonore composé : développement d'un logiciel basé sur l'analogie des lignes électriques." Thèse, Université du Québec à Trois-Rivières, 1992. http://depot-e.uqtr.ca/5233/1/000601770.pdf.
Повний текст джерела
El, Gueder Jawhar. "Modèle et logiciel KBE pour l'intégration du chainage des opérations de fabrication en conception de produit." Troyes, 2012. http://www.theses.fr/2012TROY0018.
Повний текст джерела
In a CAD-centred design process, information related to manufacturing is only selected and assessed subsequently. This work presents an original approach in mechanical design to integrate manufacturing constraints into the CAD model while it is being built. A specific user interface is proposed to integrate manufacturing constraints (for example tolerances) and manufacturing consequences (for example roughness or hardening) into the model. The tool includes three modules: a knowledge-based engineering database for manufacturing processes, a geometric modeller and a link to a finite element code. It is then possible to consider several manufacturing processes and chain the different operations, and hence to build the model of the part knowing its manufacturing history. One of the important consequences of manufacturing processes is the generation of residual stresses in the part. An appropriate tool has been developed and used as a design tool to manage the global evolution of residual stresses in the mechanical part. To support the model, a database is used for the integration of residual stresses from stored experimental results and analytical or numerical calculations. An example of a metal sheet deformed by shot peening is treated; the deformed geometry is rebuilt and compared to the experimental results
Picard, Aubry. "Analyse du couplage électromagnétique produit par un objet installé dans une cellule TEM 3D." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-73.pdf.
Повний текст джерела
Habhouba, Dounia. "Assistance à la prise de decision dans le processus de modification d'un produit en utilisant la technologie "Agent logiciel"." Thèse, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1826.
Повний текст джерелаLefebvre, Frédéric. "Contribution à la modélisation pour le diagnostic des systèmes complexes : application à la signalisation des lignes à grande vitesse." Valenciennes, 2000. https://ged.uphf.fr/nuxeo/site/esupversions/dd5382c4-64ce-45c5-aa00-21f80af16cf3.
Повний текст джерела
Istoan, Paul. "Methodology for the derivation of product behaviour in a Software Product Line." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925479.
Повний текст джерела
Ahmed-Nacer, Mohamed. "Un modèle de gestion et d'évolution de schéma pour les bases de données de génie logiciel." Grenoble INPG, 1994. http://www.theses.fr/1994INPG0067.
Повний текст джерела
We first review the work on schema evolution and on the evolution of software process models; we define evolution criteria and show that the main approaches do not satisfy the needs of software engineering.
We then present our model: it allows the simultaneous existence of several viewpoints on the object base, the composition of schemas and, finally, the expression of different evolution policies for these schemas, each application being able to define its desired evolution policy.
The management of viewpoints is based on versioning of the metabase. The consistency of the object base and of the overall schema management and evolution system is ensured by expressing constraints at the metabase level. Schema composition uses a software configuration technique applied to types, and the definition of evolution policies uses the capabilities of the active database of the Adèle system
Martinez-Leal, Jorge. "Développement d’outils d’aide à la décision en conception pilotés par l’analyse multicritère de la valorisabilité du produit et l’outillage des lignes directrices d’écoconception pour la fin de vie." Thesis, Paris, ENSAM, 2019. http://www.theses.fr/2019ENAM0062.
Повний текст джерела
Current regulations encourage designers and manufacturers to engage in circular economy and eco-design approaches in order to mitigate the environmental impact of their products. Today, design choices are mainly driven by product recoverability assessment. Guidelines associated with design-for-X approaches are then used as a decision-making tool for finding solutions. It is therefore necessary to establish a link between the assessed recoverability and eco-design guidelines so that designers can better interpret the information they are given and simplify their design process accordingly. An inventory of these guidelines, combined with an eco-design for end-of-life approach, makes it possible to identify the associated levers for action. However, there is only minimal correlation between recoverability indicators and the levers associated with the guidelines. Therefore, this proposal aims to build an indicator-based decision-making design methodology based on a multi-criteria analysis of the product's recoverability. It is enhanced by a tooled guide based on eco-design for end-of-life guidelines. The proposed approach has been validated through the study of a Fairphone® included in the WEEE recycling chain
Huynh, Ngoc Tho. "A development process for building adaptative software architectures." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0026/document.
Повний текст джерела
Adaptive software is a class of software which is able to modify its own internal structure, and hence its behavior, at runtime in response to changes in its operating environment. Adaptive software development has been an emerging research area of software engineering in the last decade. Many existing approaches use techniques from software product lines (SPLs) to develop adaptive software architectures. They propose tools, frameworks or languages to build adaptive software architectures but do not guide developers in the process of using them. Moreover, they suppose that all elements specified in the SPL are available in the architecture for adaptation. Therefore, the adaptive software architecture may embed unnecessary elements (components that will never be used), thus limiting the possible deployment targets. On the other hand, component replacement at runtime remains a complex task since it must ensure the validity of the new version, in addition to preserving the correct completion of ongoing activities. To cope with these issues, this thesis proposes an adaptive software development process where tasks, roles, and associated artifacts are explicit. The process aims at specifying the necessary information for building adaptive software architectures. The result of such a process is an adaptive software architecture that only contains the elements necessary for adaptation. On the other hand, an adaptation mechanism is proposed based on transaction management for ensuring consistent dynamic adaptation. Such adaptation must guarantee the system state and ensure the correct completion of ongoing transactions. In particular, transactional dependencies are specified at design time in the variability model. Then, based on such dependencies, components in the architecture include the necessary mechanisms to manage transactions at runtime consistently
Verdier, Frédéric. "COMpOSER : a model-driven software product line approach for an effective management of software reuse within software product families and populations." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS117.
Повний текст джерела
Software systems are constantly increasing in size and complexity, forcing the software industry to migrate their hand-crafted development processes, slowly realizing each product, to more systematized and automated ones, mass-producing software at lower costs. This migration process, which we call industrialization, can be achieved through the integration of systematic software reuse and automation into development processes. Their combination results in the realization of sounder products at lower costs. Existing approaches combining Model-Driven Architecture (MDA) and Software Product Line Engineering (SPLE) partially automate development processes with systematic software reuse by capitalizing on their compatible benefits. While MDA permits developers to define highly reusable assets and automatic operations to perform on them, SPLE systematizes software reuse by relying on the commonalities and variabilities of a set of related products named a software product family. However, these approaches suffer from two major restrictions that can be a brake for companies aiming to industrialize their development processes using these solutions. Firstly, they have difficulties to fully manage variability at different levels of abstraction because of the rapidly increasing complexity of the operations performed on assets as new variation points are added. Secondly, existing combinations of MDA and SPLE are limited to the management of variability in a software product family. But, in some contexts such as IT services companies, variability can relate to sets of products more heterogeneous than families, named software product populations.
Although some existing works propose to manage variability in a software product population, these approaches, which compose independent software product lines, are limited to the composition, and by extension the reuse, of coarse-grained assets. In this PhD thesis, we propose a new approach named COMpOSER (CrOss-platform MOdel-driven Software product line EngineeRing), which defines an efficient way to compose MDA and SPLE in order to fully manage variability in a software product family, but also in a software product population, without reducing its reuse capabilities. To do so, COMpOSER introduces a new characterization of variability that organizes reusable assets along three dimensions: the business dimension, the architecture dimension, and the technological ecosystem dimension. Additionally, this characterization distinguishes inter-domain variability, organizing the different software product families of a population, from intra-domain variability, organizing assets within a single software product family. To properly organize reusable assets, COMpOSER defines a model of fine-grained core assets compatible with its characterization of variability. In parallel, our approach defines partially automated operations to produce new software through systematic reuse, which make it possible to fully manage variability without scaling-up issues as new variation points are added. Thanks to our collaboration with an industrial partner, we were able to experiment with COMpOSER by applying our propositions to help the company industrialize its development processes. To this end, we implemented a framework that supports our approach while taking into account the specificities of our industrial context. This framework embeds the principles of COMpOSER in a form that is easier to understand for developers with little knowledge of SPLE and MDA. We observed that the framework facilitated the adoption of our solutions by the company's development teams.
Using the COMpOSER framework, we obtained results demonstrating how our approach improves systematic software reuse compared to concurrent approaches. These results stem from empirical experiments performed on concrete industrial case studies.
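The three-dimensional characterization of variability described above can be illustrated with a toy asset catalog. This is a hypothetical Python sketch, not the COMpOSER model itself: the asset names, the dimension values, and the wildcard convention `"*"` for cross-family (inter-domain) assets are all invented for illustration.

```python
# Illustrative sketch: each reusable asset is indexed along three dimensions
# (business, architecture, technological ecosystem); deriving a product means
# selecting the fine-grained assets matching one configuration. An asset whose
# business dimension is "*" is shared across product families.

assets = [
    {"name": "InvoiceModel", "business": "billing", "arch": "mvc",      "eco": "jvm"},
    {"name": "InvoiceModel", "business": "billing", "arch": "mvc",      "eco": "dotnet"},
    {"name": "CatalogModel", "business": "catalog", "arch": "mvc",      "eco": "jvm"},
    {"name": "RestGateway",  "business": "*",       "arch": "services", "eco": "jvm"},
]

def derive(business, arch, eco):
    """Select the assets matching a product configuration."""
    return [a["name"] for a in assets
            if a["business"] in (business, "*")
            and a["arch"] == arch and a["eco"] == eco]

# The same business asset exists in two technological ecosystems,
# so switching ecosystems changes the selected variant, not the design.
assert derive("billing", "mvc", "jvm") == ["InvoiceModel"]
assert derive("billing", "mvc", "dotnet") == ["InvoiceModel"]
assert derive("billing", "services", "jvm") == ["RestGateway"]
```

The point of the sketch is the indexing scheme: separating the three dimensions keeps the catalog flat, so adding an ecosystem adds rows rather than multiplying the operations performed on assets.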
Bouarar, Selma. "Vers une conception logique et physique des bases de données avancées dirigée par la variabilité." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2016. http://www.theses.fr/2016ESMA0024/document.
Full text of the source
The evolution of computer technology has strongly impacted the database (DB) design process, which henceforth requires more time and resources to encompass the diversity of DB applications. Designers rely on their talent and knowledge, which have proven insufficient to face the increasing diversity of design choices, raising the problem of the reliability and completeness of this knowledge. This problem is well known as variability management in software engineering. While some works manage the variability of the physical and conceptual phases, very few have focused on logical design. Moreover, these works treat the design phases separately and thus ignore their interdependencies. In this thesis, we first present a methodology to manage the variability of the whole DB design process using the technique of software product lines, so that (i) interdependencies between design phases can be considered, (ii) the designer is given a holistic vision, and (iii) process automation is increased. Given the scope of the study, we implement this vision step by step, through a case study that shows: (i) the importance of logical design variability; (ii) its impact on physical design (multi-phase management); (iii) the evaluation of logical design; and (iv) the impact of logical variability on physical design (materialized view selection) in terms of non-functional requirements: execution time, energy consumption, and storage space.
Mer, Stéphane. "Les mondes et les outils de la conception : pour une approche socio-technique de la conception du produit." Grenoble INPG, 1998. http://www.theses.fr/1998INPG0146.
Full text of the source
Ferreira, Leite Alessandro. "A user-centered and autonomic multi-cloud architecture for high performance computing applications." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112355/document.
Full text of the source
Cloud computing has been seen as an option for executing high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources, generally with unpredictable performance, at low financial cost. Furthermore, in a cloud environment, failures are part of normal operation. To overcome the limits of a single cloud, clouds can be combined, forming a cloud federation, often at minimal additional cost for the users. A cloud federation can help both cloud providers and cloud users achieve their goals, such as reducing execution time, minimizing cost, increasing availability, and reducing power consumption. Hence, cloud federation can be an elegant solution to avoid over-provisioning, reducing operational costs in an average-load situation and removing resources that would otherwise remain idle and waste power. However, cloud federation increases the range of resources available to users. As a result, cloud or system administration skills may be demanded of the users, as well as considerable time to learn about the available options. In this context, several questions arise: (a) which cloud resource is appropriate for a given application? (b) how can users execute their HPC applications with acceptable performance and financial cost, without needing to re-engineer the applications to fit the clouds' constraints? (c) how can non-cloud specialists maximize the features of the clouds without being tied to a cloud provider? and (d) how can cloud providers use the federation to reduce the power consumption of the clouds while still giving service-level agreement (SLA) guarantees to the users?
Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federation. Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between clouds, simulation results show that our approach can reduce power consumption by up to 46% while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost; moreover, we decreased the execution time by 22.55% compared to the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% compared to a user's configuration. Finally, this thesis presents a product line engineering (PLE) process to handle the variabilities of Infrastructure-as-a-Service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this process to configure resources and handle failures autonomously. The PLE process uses an extended feature model (EFM) with attributes to describe the resources and to select them based on users' objectives. Experiments realized with two different cloud providers show that, using the proposed model, users can execute their applications in a cloud federation environment without needing to know the variabilities and constraints of the clouds.
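The attribute-based selection over an extended feature model mentioned above can be sketched as follows. This is a hedged illustration, not the thesis's EFM machinery: the offer names, prices, and the single objective (minimize hourly cost under a memory constraint) are invented, and no real provider's catalog is represented.

```python
# Illustrative sketch: cloud "features" (VM offers) carry attributes
# (memory, cost); a user objective drives the selection, so the user
# never inspects the providers' variability directly.

offers = [
    {"name": "small",  "memory_gb": 4,  "cost_per_hour": 0.10},
    {"name": "medium", "memory_gb": 16, "cost_per_hour": 0.35},
    {"name": "large",  "memory_gb": 64, "cost_per_hour": 1.20},
]

def select(min_memory_gb):
    """Cheapest offer satisfying the user's memory requirement, or None."""
    feasible = [o for o in offers if o["memory_gb"] >= min_memory_gb]
    return min(feasible, key=lambda o: o["cost_per_hour"]) if feasible else None

assert select(8)["name"] == "medium"   # cheapest offer with >= 8 GB
assert select(128) is None             # no offer satisfies the constraint
```

In the full approach, cross-tree constraints of the feature model would also prune combinations of features, and objectives could mix cost, performance, and availability rather than a single attribute.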
Dudret, Stéphane. "Modèles de convection-diffusion pour les colonnes de distillation : application à l'estimation et au contrôle des procédés de séparation cryogéniques des gaz de l'air." PhD thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00874677.
Full text of the source
Harani, Yasmina. "Une approche multi-modèles pour la capitalisation des connaissances dans le domaine de la conception." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0178.
Повний текст джерелаHe, Junkai. "Effective models and methods for stochastic disassembly line problems." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE009.
Full text of the source
Studying the disassembly of End-of-Life (EOL) products under uncertainty is becoming a hot research topic due to its benefits in reducing waste, saving non-renewable resources, and protecting the environment. Existing disassembly line works assume that stochastic information can be estimated as probability distributions or functions, and most of them focus on stochastic disassembly line balancing problems. However, it is not always possible to obtain complete stochastic information, due to a lack of historical data or an excessive data volume, and the integrated disassembly line problem has rarely been addressed. In this thesis, four novel stochastic disassembly line problems with only partial stochastic information are investigated. The purpose is to propose effective models and solution methods for the considered problems. The main works of this thesis are as follows.
Firstly, a new stochastic disassembly line balancing problem (SDLBP) is studied to minimize the disassembly line cost under stochastic task processing times, given only the mean, the standard deviation, and an upper bound on the change rate. For this problem, a chance-constrained model is first formulated, which is then approximately transformed into a distribution-free model via property analysis. A fast heuristic is devised to solve the transformed model. Experimental results demonstrate that the distribution-free model can effectively solve the SDLBP with only partial stochastic information.
In most existing literature, the cycle time, which represents the maximum completion time among workstations, is given. However, the disassembly line cost and the cycle time are two conflicting performance criteria that impact each other. In this thesis, a new bi-objective distribution-free SDLBP is studied to minimize the disassembly line cost and the cycle time, requiring only partial information on task processing times.
For this problem, a bi-objective distribution-free model is constructed, and an improved ε-constraint method is designed. Numerical experiments show that the proposed method reduces the number of computation rounds by more than 90% compared with the basic ε-constraint method.
Disassembly lines may generate pollution when separating EOL products, but this factor has not been considered in previous SDLBP works. In this thesis, we study a new green-oriented distribution-free SDLBP to minimize the disassembly line cost and the pollution emission simultaneously, in which workstations with different purchase prices can have different amounts of pollution emission. For this problem, a new bi-objective model is formulated and a problem-specific ε-constraint method is devised. Experimental results show that appropriately selecting workstations can effectively reduce the pollution emission of a disassembly line. Some managerial insights are also discussed.
The integrated optimization of disassembly line balancing and planning, which had not been studied before, may enhance the efficiency of the disassembly system and reduce its expenses. In this thesis, an integrated stochastic disassembly line balancing and planning problem (ISDLBPP) is addressed to minimize the overall system cost, where component demands and component yield ratios are assumed to be uncertain. For this problem, a two-stage stochastic programming model is established, and valid inequalities are devised to reduce the search space. The sample average approximation (SAA) method and the L-shaped method are then applied to solve the model. Numerical experiments show that the L-shaped method saves more than 60% of the computation time compared with the SAA method, without sacrificing solution quality.
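The basic ε-constraint scheme underlying the bi-objective methods above can be shown on a toy instance. This is an illustrative Python sketch under invented data (three tasks, two workstations, a flat station cost), solved by brute force; the thesis's improved variant additionally prunes redundant ε values, which this sketch does not attempt.

```python
# Illustrative ε-constraint loop for a toy bi-objective line balancing
# instance: minimize line cost subject to cycle time <= ε, sweeping ε
# over the achievable cycle-time values to trace a Pareto front.
from itertools import product

tasks = [3, 2, 4]          # processing times of three tasks
station_cost = 5           # cost of opening one workstation
n_stations = 2

def evaluate(assignment):
    """Return (line cost, cycle time) for a task-to-station assignment."""
    loads = [0] * n_stations
    for t, s in zip(tasks, assignment):
        loads[s] += t
    used = len(set(assignment))            # workstations actually opened
    return used * station_cost, max(loads)  # (cost, cycle time)

def eps_constraint():
    # Brute-force the (cost, cycle time) outcomes of all assignments.
    points = {evaluate(a) for a in product(range(n_stations), repeat=len(tasks))}
    front = []
    for eps in sorted({ct for _, ct in points}, reverse=True):
        feasible = [(c, ct) for c, ct in points if ct <= eps]
        best = min(feasible)               # cheapest solution under this ε
        if best not in front:
            front.append(best)
    return front

front = eps_constraint()
# One cheap slow solution and one costly fast solution; neither dominates.
assert front == [(5, 9), (10, 5)]
```

Tightening ε trades cost for cycle time, which is exactly the conflict between the two criteria described in the abstract; each ε value yields one constrained single-objective subproblem.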