To view the other types of publications on this topic, follow this link: Iterative change.

Dissertations on the topic "Iterative change"


Consult the top 15 dissertations for research on the topic "Iterative change".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Jeoffroy, Matthew. „Internet protocol - based information systems : an investigation into integration issues and iterative organisational change strategies“. Thesis, Kingston University, 2001. http://eprints.kingston.ac.uk/20681/.

Annotation:
Internet-based electronic commerce is a rapidly evolving phenomenon. Organisations have reacted to the opportunities presented by electronic commerce as a new class of strategic information system that can be defined as an Internet Protocol Based Information System (IPBIS). As the demand for IPBIS grows, organisations are looking for ways to use them to leverage strategic advantage within their given markets. However, IPBIS are not yet established, and there are many unknowns surrounding their use and the change effects they may have on adopting organisations. Research is emerging that answers some of the organisational and electronic market questions being posed by organisations, questions not addressed by the increasing amount of non-academic hyperbole in evidence. This study was conducted using a mixed mode of case study research within a grounded theory framework to explore the role of IPBIS as a contributing factor to organisational change. Twelve cases were studied using semi-structured interviews and observation to assess technology implementation strategies, change effects, and management-of-change strategies. The study revealed that organisations follow a staged model of integration that may start as a tentative venture with simple email facilities and then move through a set of discrete stages to potential full integration with internal information systems, which may be outsourced to third-party solution providers. The evidence supports a substantive theory of 'Push-Pull Decision Taking', developed to provide an explanatory framework showing that organisations reach a stage of risk analysis and information elicitation and then feel compelled to participate in IPBIS electronic commerce initiatives that are not always in the immediate interests of the organisation. As a result of this decision taking, the organisation and its actors try to develop appropriate management strategies, which typically support incremental change. The resulting model of change and a series of working propositions provide a basis for practitioner work and for further academic research in this domain.
2

Gudjonsson, Knutur. „Iterative Business Model Innovation : Exploring a Holistic Framework in Order to Create and Capture New Value“. Thesis, Linköpings universitet, Företagsekonomi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97540.

Annotation:
Background: There is an increasing number of arguments that new business models are the solution when companies and industries face radical changes in the environment. To be able to prosper in the long run, organizations must reinvent themselves over and over again. Many authors (e.g. Abernathy & Utterback, 1978; Christensen, 1997; Kim & Mauborgne, 2005; Ries, 2011) claim that big, radical reconfigurations are needed in order to prosper in the long term. Theories, concepts and frameworks have been developed to answer how this reconfiguration should happen within organizations. However, the concepts derived are just parts of the solution, and none takes a holistic approach covering them in a practical framework that could be used by organizations. Aim: The aim of the thesis is to propose a framework that enables organizations to systemize their innovation processes, making them flexible enough to repeatedly seize opportunities through business model innovation where new value can be created and captured. The proposed framework aims to enable organizations to start discussing how they should create and capture new value, to give them a more pragmatic view of the innovation process, and to act as a starting point for future research. Methodology: The thesis follows March & Smith's (1995) design science methodology in order to build and evaluate the framework. This is done in three steps: first, a model is built from theory; second, the emergence of business models in three different case companies is compared and investigated qualitatively; lastly, the model and the factors derived from the data are contrasted, and a framework is built and evaluated. Findings & Conclusion: The derived framework proposes four big steps to change, and to create and capture new value: analyze the basis of competition in the macro and micro environment, analyze and experiment with different non-customer tiers, experiment with the creation of value, and experiment with and analyze the capture of the value created. More tangible tools are proposed for each of these steps. Actually testing the framework, and further evaluating and theorizing it, are proposed as future research directions.
3

Lee, Sang Hyun 1973. „Dynamic Planning and control Methodology : understanding and managing iterative error and change cycles in large-scale concurrent design and construction projects“. Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34672.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2006.
Vita.
Includes bibliographical references (v. 1, leaves 174-180).
Construction projects are uncertain and complex in nature. One of the major driving forces behind these characteristics is the iterative cycles caused by errors and changes. Errors and changes worsen project performance and, consequently, make schedule and cost overruns prevalent. These iterative cycles are particularly detrimental when large-scale concurrent design and construction is applied. To address these issues, this research proposes the Dynamic Planning and control Methodology (DPM) as a robust design and construction planning methodology for large-scale concurrent design and construction. The proposed DPM is composed of: 1) an error and change management framework that enables understanding of the construction processes associated with errors and changes and how they affect construction performance; 2) a proactive buffering strategy for reducing sensitivity to iterative error and change cycles; 3) a System Dynamics-based construction project model which provides policy guidelines for the planning and control of projects; and 4) a web-based error and change management system, which supports coordination of errors and changes among contractors and design professionals without hardware and software compatibility issues. Applying all research components to a couple of real-world case projects, this research concludes that a concurrently developed project can benefit by: 1) adding realism to planning by taking iterative error and change cycles into account; 2) implementing a proactive mechanism to look and act ahead against uncertainties; 3) making appropriate policies with the help of the System Dynamics-based simulation model; and 4) facilitating coordination through the IT-supported management system, even if the time frame of the project is shortened. Future research opportunities extending the findings of this research are also discussed.
by SangHyun Lee.
Ph.D.
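The rework cycle at the heart of such System Dynamics project models is easy to illustrate. Below is a minimal discrete-time sketch (a generic rework-cycle toy with invented rates and delays, not the DPM model from the thesis): work is performed at a fixed rate, a fraction of it is flawed, and flaws re-enter the backlog once discovered.

```python
# Minimal discrete-time sketch of a "rework cycle": errors re-enter the
# backlog after a discovery delay. Illustrative only; all parameters invented.
def simulate_rework(total_work=100.0, rate=5.0, error_frac=0.2,
                    discovery_delay=4, max_weeks=60):
    backlog = total_work      # work units still to be performed
    undiscovered = []         # flawed work waiting to be discovered
    done = 0.0                # defect-free work completed
    for week in range(max_weeks):
        progress = min(rate, backlog)
        flawed = progress * error_frac
        backlog -= progress
        done += progress - flawed
        undiscovered.append(flawed)
        if week >= discovery_delay:
            backlog += undiscovered.pop(0)   # discovered errors become rework
        if backlog < 1e-9 and sum(undiscovered) < 1e-9:
            return week + 1, done
    return max_weeks, done

weeks, done = simulate_rework()
print(f"finished after ~{weeks} weeks; {done:.1f} units completed defect-free")
```

Raising `error_frac` or `discovery_delay` shows the disproportionate schedule damage that iterative error and change cycles cause, which is the intuition behind DPM's proactive buffering.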
4

Todaro, Valeria. „Advanced techniques for solving groundwater and surface water problems in the context of inverse methods and climate change“. Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/166439.

Annotation:
This work investigates advanced techniques for handling groundwater and surface water problems in the framework of inverse methods and climate change. Ensemble Kalman filter methods, with particular attention to the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), are extensively analyzed and improved for the solution of different types of inverse problems. In particular, the main novelty is the application of these methods to the identification of time series. In the first part of the thesis, after the description of the ES-MDA method, the development of a Python software package for the application of the proposed methodology is presented. It is designed with a flexible workflow that can easily be adapted to implement different variants of the Ensemble Kalman filter and applied to the solution of various types of inverse problems. A complementary tool package provides several functionalities that allow the algorithm configuration to be tailored to the specific problem analyzed. The first novel application of the ES-MDA method is the solution of the reverse flow routing problem. The objective of the inverse procedure is the estimation of an unknown inflow hydrograph to a hydraulic system on the basis of information collected downstream and a given forward routing model that relates the inflow hydrograph to the downstream observations. The procedure is tested by means of two synthetic examples and a real case study; the impact of ensemble size and the application of covariance localization and inflation techniques are also investigated. The tests show the capability of the proposed method to solve this type of problem; the performance of ES-MDA improves, especially for small ensemble sizes, when covariance localization and inflation techniques are applied. The second application, in the context of surface water, concerns the calibration of a hydrological-hydraulic model that simulates rainfall-runoff processes. ES-MDA is coupled with the numerical model in parallel for the estimation of roughness and infiltration coefficients based on the knowledge of a discharge hydrograph at the basin outlet. The results of two synthetic tests and a real case study demonstrate the capability of the proposed method to calibrate the hydrological-hydraulic model within a reasonable computational time. In the groundwater field, ES-MDA is applied for the first time to simultaneously identify the source location and the release history of a contaminant spill in an aquifer from a sparse set of concentration data collected at a few points of the aquifer. The impacts of the concentration sampling scheme, the ensemble size and the use of covariance localization and inflation techniques are tested; furthermore, a new procedure to perform a spatiotemporal iterative localization is presented. The methodology is tested by means of an analytical example and a study case that uses real data collected in a laboratory sandbox. ES-MDA leads to a good estimation of the investigated parameters; a well-designed monitoring network and the use of covariance corrections improve the performance of the method and help to minimize ill-posedness and equifinality. A further part of the thesis investigates the impact of climate change on groundwater availability. A surrogate model that describes the response of groundwater levels to meteorological variables up to 2100 is presented. It is a simple statistical approach based on the correlations between groundwater levels and two drought indices that depend on precipitation and temperature data. The presented method is used to evaluate the impact of climate change on groundwater resources in a study area located in Northern Italy, using historical and regional climate model data. The results indicate a progressive increase of groundwater droughts in the investigated area.
Todaro, V. (2021). Advanced techniques for solving groundwater and surface water problems in the context of inverse methods and climate change [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/166439
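Since ES-MDA is central to this thesis, a compact sketch of the update it performs may help the reader. The following toy implementation follows the textbook ES-MDA scheme (a generic forward model `g`, an invented noise-free linear test problem, and no localization or inflation refinements); it is not the thesis' software package.

```python
import numpy as np

def es_mda(g, M0, d_obs, C_D, alphas=(4.0, 4.0, 4.0, 4.0), seed=0):
    """Textbook ES-MDA sketch. The inflation coefficients must satisfy
    sum(1/alpha) == 1. M0 is an (n_param, n_ens) prior ensemble."""
    rng = np.random.default_rng(seed)
    M = M0.copy()
    n_ens = M.shape[1]
    for a in alphas:
        D = np.column_stack([g(m) for m in M.T])        # predicted data
        E = rng.multivariate_normal(np.zeros(len(d_obs)), a * C_D, n_ens).T
        dM = M - M.mean(axis=1, keepdims=True)
        dD = D - D.mean(axis=1, keepdims=True)
        C_MD = dM @ dD.T / (n_ens - 1)                  # cross-covariance
        C_DD = dD @ dD.T / (n_ens - 1)                  # data covariance
        K = C_MD @ np.linalg.inv(C_DD + a * C_D)        # Kalman-like gain
        M = M + K @ (d_obs[:, None] + E - D)            # assimilate
    return M

# Toy problem: recover m_true from linear data d = A @ m
A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
m_true = np.array([2.0, -1.0])
M0 = np.random.default_rng(1).normal(0.0, 2.0, size=(2, 50))
M_post = es_mda(lambda m: A @ m, M0, A @ m_true, 0.01 * np.eye(3))
print("posterior mean:", M_post.mean(axis=1))           # close to m_true
```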
5

Wilkerson, Jerod W. „Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming“. Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/195160.

Annotation:
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and were asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits. The second part of the dissertation explores a method of applying mutation analysis to TDD to close the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS), based on CHA, is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful in other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method that incorporates mutation analysis is proposed, together with a set of future research directions for developing tools to support mutation-analysis-enhanced TDD and to continue improving the TDD method.
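The mutation-analysis loop on which the enhanced TDD method builds can be shown in a few lines: perturb one operator in the code under test, rerun the tests, and check whether the mutant is "killed". A minimal sketch (the function, the test, and the mutation operator are invented; this is not the dissertation's CHA-AS algorithm):

```python
import ast

SOURCE = """
def price_with_discount(price, rate):
    return price - price * rate
"""

def run_tests(ns):
    # a tiny stand-in "test suite" for the function under test
    f = ns["price_with_discount"]
    return f(100.0, 0.1) == 90.0 and f(0.0, 0.5) == 0.0

class FlipSubToAdd(ast.NodeTransformer):
    """Mutation operator: replace the first '-' with '+'."""
    def __init__(self):
        self.done = False
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Sub) and not self.done:
            self.done = True
            node.op = ast.Add()
        return node

mutant = FlipSubToAdd().visit(ast.parse(SOURCE))
ast.fix_missing_locations(mutant)
ns = {}
exec(compile(mutant, "<mutant>", "exec"), ns)
print("mutant killed" if not run_tests(ns) else "mutant survived")
```

A surviving mutant marks a gap in the test suite; change impact analysis such as CHA-AS cuts the cost of this loop by rerunning mutation analysis only where the code changed.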
6

Neubert, Peer. „Superpixels and their Application for Visual Place Recognition in Changing Environments“. Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-190241.

Annotation:
Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications including object detection, 3d reconstruction and semantic segmentation. While there are various approaches to create such segmentations, there is a lack of knowledge about their properties; in particular, there are contradicting results published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime as important properties of superpixel segmentation algorithms. While for some of these properties there are established evaluation methodologies available, this is not the case for segmentation stability and compactness. Therefore, this thesis presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80 % of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation. Operating autonomous systems based on visual navigation over the course of days, weeks or months requires repeated recognition of places despite severe appearance changes, as induced, for example, by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods. Therefore, the second part of this thesis presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes: instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in the presence of viewpoint changes. Therefore, this thesis presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach to incorporating the spatial arrangement of local image features into the computation of image similarities. It is based on star graph models and Hough voting and is particularly suited for local features with low spatial precision and high outlier rates, as are expected in the presence of appearance changes. The novel landmarks are a combination of local region detectors and descriptors based on convolutional neural networks. The thesis presents and evaluates several new approaches to incorporating superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, the combination with regions obtained from the novel multiscale superpixel grid in particular performs superior to state-of-the-art methods - a promising basis for practical applications.
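For orientation, plain SLIC, the established algorithm that Preemptive SLIC extends, is available in scikit-image. A minimal usage sketch of superpixel oversegmentation follows (parameter values are arbitrary; the preemption criterion and the Compact Watershed proposed in the thesis are not part of this library call):

```python
# Oversegment an image into compact superpixels with plain SLIC.
from skimage import color, data, segmentation

img = data.astronaut()                          # sample RGB image
labels = segmentation.slic(img, n_segments=400, compactness=10,
                           start_label=0)       # one integer id per superpixel
print("superpixels:", labels.max() + 1)

# Visualise: replace each superpixel by its mean colour and draw boundaries.
mean_img = color.label2rgb(labels, img, kind="avg")
outlined = segmentation.mark_boundaries(mean_img, labels)
```

The `compactness` parameter trades boundary adherence against the regular, compact shapes whose importance the thesis' benchmark makes measurable.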
7

Fujii, Taku. „Studies on Measurement Techniques of Artifact Changes under Iterative Development Process“. 京都大学 (Kyoto University), 2002. http://hdl.handle.net/2433/149386.

8

Tomasin, Martina. „We Grow Wild : Experimenting and learning about wild botanical allies to reclaim our food sovereignty“. Thesis, Linnéuniversitetet, Institutionen för design (DE), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105372.

Annotation:
The biology and the patterns of wild environments and their organisms hold solutions to the many environmental, social and economic challenges that we are facing globally. As an emerging designer, I believe that the tendencies of ecological environments can be analyzed, mimicked and implemented by designers in different socio-cultural systems. In my design process I have been exploring practices that promote food sovereignty as a right that every living being should have. The result of my exploration is a guide that helps people learn about and from wild edibles in order to deepen our connection with nature. My design includes my own process and iteration as well as one designed for those who are interested in exploring foraging practices. This project recognizes the different spheres and complexities of sustainability. It analyzes how our cultural and social practices impact the ecological environment while, at the same time, it brings practical examples to understand the effects that our economy has on the overall well-being of the ecology, and it suggests that we can all be beneficial participants as and in nature. The title "We grow wild" refers to the plants which grow wild in parks, hedgerows, paths and forests, and it encourages us to rediscover the wild nature that re-emerges in us through active participation in the ecological environment we inhabit.
9

Greffier, Joël. „Reconstruction itérative en scanographie : optimisation de la qualité image et de la dose pour une prise en charge personnalisée“. Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT043/document.

Annotation:
The increasing number of CT scanners and the cumulative dose delivered lead to a potential risk of stochastic effects. To minimize this risk, optimization of CT usage should be rigorously pursued. Optimization aims to deliver the lowest dose while maintaining image quality sufficient for an accurate diagnosis. This is a complex task, which requires setting the compromise between the dose delivered and the resulting image quality. To achieve this goal, several CT technological evolutions have been developed. Two predominant developments are Tube Current Modulation and Iterative Reconstruction (IR): the former adapts the dose to the patient's attenuation, the latter relies on advanced mathematical approaches. Using IR makes it possible to maintain equivalent image quality while reducing the dose. However, IR changes the composition and texture of the image and requires appropriate metrics for evaluation. The aim of this thesis was to evaluate the impact of using IR on dose reduction and image quality, in order to propose in routine, for all patients, protocols with the lowest possible dose and an image quality suitable for diagnosis. The first part of the thesis addresses the compromise between delivered dose and image quality in CT: the image quality metrics and dosimetric indicators to be used, as well as the principle and contribution of iterative reconstruction, are reviewed. The second part describes the three steps performed in this thesis to achieve the objectives. The third part consists of a scientific production of seven papers. The first paper presents the global optimization methodology for the establishment of routine Low Dose protocols using moderate levels of IR. The second paper assesses the impact and contribution of IR on the image quality obtained at very low dose levels. The third and fourth papers show the interest of adapting or proposing optimized protocols according to the patient's morphology. Finally, the last three papers illustrate the development of Very Low Dose protocols for structures with high spontaneous contrast; for these protocols, the doses are close to those of radiographic examinations, with high levels of IR. The implemented optimization process reduced doses considerably. Despite the change in the texture and composition of the images, the image quality obtained for all protocols was judged satisfactory for diagnosis by the radiologists. However, the routine use of IR requires specific evaluation and a learning period for radiologists.
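The IR algorithms evaluated in the thesis are proprietary vendor products, but the underlying principle of iterative reconstruction can be illustrated with the textbook algebraic reconstruction technique (ART, the Kaczmarz method). A toy sketch with an invented 2x2 "image" probed by four ray sums:

```python
import numpy as np

def art(A, b, sweeps=50, relax=0.5):
    """Kaczmarz/ART: iteratively project onto each ray equation.
    Illustrates the iterative principle only, not clinical IR products."""
    x = np.zeros(A.shape[1])
    norms = (A * A).sum(axis=1)          # squared norm of each ray's row
    for _ in range(sweeps):
        for i in range(len(b)):
            x += relax * (b[i] - A[i] @ x) / norms[i] * A[i]
    return x

# 2x2 image probed by row sums and column sums
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
print(np.round(art(A, A @ x_true), 2))   # ~[1. 2. 3. 4.]
```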
10

Schröder, Thomas. „Sustainability in practice : a study of how reflexive agents negotiate multiple domains of consumption, enact change, and articulate visions of the 'good life'“. Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/sustainability-in-practice-a-study-of-how-reflexive-agentsnegotiate-multiple-domains-of-consumption-enact-change-andarticulate-visions-of-the-good-life(c19dc146-1b93-402e-b3b5-cbbd3f6778be).html.

Annotation:
A small proportion of people claim to live and consume in ways they consider more sustainable in social and environmental terms. As yet, we do not know how many exactly, but possibly no more than 5-10% of the population. The thesis intentionally focuses on this minority, finding at least three reasons why it is interesting to do so. First, because they are all but ignored in sociologies of practice in the context of sustainable consumption, which consider this minority insignificant and focus almost exclusively on the 'mainstream' majority that more closely maps onto the stereotype of 'consumer society'. Second, because we can learn much from juxtaposing this group empirically against the spectrum of theories of practice to devise a more robust and appropriate theoretical explanation of how these subjects, in the context of everyday practice, negotiate the many interpretations and contradictions involved in trying to put 'sustainability' into practice. Third, because by understanding them better we can reflect on theoretical, empirical and policy implications for nudging this minority of the population to a higher percentage. The thesis sits at one end of a spectrum of positions in theories of practice applied to consumption, with a normative interest in sustainable consumption in particular. It aligns with those who seek to re-insert the reflexive agent into accounts of practice, with particular reference to the conceptual construct of the 'citizen-consumer' and the context of political consumption (Spaargaren & Oosterveer 2010). Referring to theories of consumption, the thesis adds perspectives on how people negotiate multiple domains of consumption simultaneously, since everyday practice involves interactions across multiple domains (such as eating, mobility, householding), and yet in theories of practice these are typically separated artificially into single domains. The study therefore considers the implications which domains have for how particular practices are carried out, first separately (per domain) and then as they come together (in a cross-cutting domain perspective). The study then takes theories of practice as a springboard to develop a theoretical position and framework which better fits the narrated accounts of the 37 subjects who participated in this study. In iteratively co-developing a theoretical framework and multiple 'stages' of empirical research (using grounded theory methodology), the study seeks to explain theoretically how subjects justify their 'doings' (drawing on 'conventions' and 'orders of worth' (Boltanski & Thévenot 2006)); how they appear to muddle through as best they can (introducing 'bricolage' (Lévi-Strauss 1972)); and how subjects appear to devise decision short-cuts when approaching decisions characterised by the multiple contradictions of sustainable consumption and incomplete or 'too much' information (introducing heuristics (Gigerenzer & Gaissmaier 2011)). In joining calls to re-insert the reflexive agent to account for how, when and why subjects enact changes towards trajectories which they consider 'more sustainable' in their own terms, the study takes inspiration from Margaret Archer's morphogenesis approach (1998) and explores her model of multiple modes of reflexivity, announcing certain modes as 'better fitting' the conditions of late modernity. The study finally finds that, contrary to a notion of the un-reflexive agent, the citizen-consumer is able to articulate visions of the 'good life'. In addition, she is able to fold these visions back onto everyday practices performed in the past, present and future, laying out normative guidelines and positive accounts of how to achieve personal or societal well-being and happiness. The overarching positioning of the study is much inspired by Andrew Sayer's (2011; 2000) 'normative turn', calling upon the social sciences to re-instate research into the things about which people care. The study is therefore guided by the overarching question of how people translate their environmental and/or social concerns into the ways in which they live and consume.
11

El, Baaklini Isabelle. „Outil de simulation de propagation des creux de tension dans les réseaux industriels“. PhD thesis, Grenoble INPG, 2001. http://tel.archives-ouvertes.fr/tel-00549736.

Annotation:
This thesis addresses the problem of voltage sags in industrial power networks and the development of a dedicated simulation tool. An introduction covers power quality and, in particular, voltage sags, their causes and their most notable effects. An iterative method for modelling tree-structured networks is then proposed, allowing fast load-flow computation. The induction machine, the main load present in industrial networks, is then modelled statically using the minimal data likely to be available in the field. Dynamic models of this machine of various orders are developed and compared in order to combine simplicity and accuracy. The last part concerns the validation of the developed software, which relies on a dynamic load-flow computation; it is tested on a representative network comprising 14 machines and 3 transformers. The results, in both accuracy and speed, are very satisfactory.
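The fast iterative computation on a tree-structured network described here can be sketched with a classic backward/forward sweep load flow. A toy three-node radial feeder (invented impedances and loads, not the thesis' actual tool) shows the idea:

```python
import numpy as np

# Radial feeder: source node 0 feeds node 1, which feeds node 2.
parent = {1: 0, 2: 1}
z_line = {1: 0.01 + 0.02j, 2: 0.015 + 0.03j}   # line impedance into node (p.u.)
s_load = {1: 0.5 + 0.2j, 2: 0.8 + 0.3j}        # constant-power loads (p.u.)

v = {0: 1.0 + 0j, 1: 1.0 + 0j, 2: 1.0 + 0j}    # flat start
for _ in range(20):                            # iterate until converged
    # backward sweep: accumulate load currents from the leaves upward
    i_branch = {n: np.conj(s_load[n] / v[n]) for n in (2, 1)}
    i_branch[1] += i_branch[2]                 # branch into 1 also feeds node 2
    # forward sweep: update voltages from the source downward
    for n in (1, 2):
        v[n] = v[parent[n]] - z_line[n] * i_branch[n]

print({n: round(abs(v[n]), 4) for n in v})     # converged voltage magnitudes
```

A voltage-sag study would perturb the source voltage or inject a fault current and rerun the sweeps, which is why a fast per-iteration solve matters.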
12

Casadei, Astrid. „Optimisations des solveurs linéaires creux hybrides basés sur une approche par complément de Schur et décomposition de domaine“. Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0186/document.

Annotation:
In this thesis, we focus on the parallel solution of large sparse linear systems. Our main interest is in direct-iterative hybrid solvers such as HIPS, MaPHyS, PDSLIN or ShyLU, which rely on domain decomposition and Schur complement approaches. Although these solvers are not as time and space consuming as direct methods, they still suffer from serious overheads. In a first part, we thus present the existing techniques for reducing the memory consumption, and we present a new method which does not impact the numerical robustness of the preconditioner. This technique reduces the memory peak by a special scheduling of computation, allocation, and freeing tasks, in particular in the Schur coupling blocks of the matrix. In a second part, we focus on the load balancing of the domain decomposition in a parallel context. This problem consists in partitioning the adjacency graph of the matrix into as many domains as desired. We point out that a good load balancing for the most expensive steps of a hybrid solver such as MaPHyS relies on the balancing of both interior nodes and interface nodes of the domains. Until now, however, graph partitioners such as MeTiS or Scotch have optimized only the first criterion (i.e., the balancing of interior nodes) in the context of sparse matrix ordering. We propose different variations of the existing algorithms to improve the balancing of interface nodes and interior nodes simultaneously. All our changes are implemented in the Scotch partitioner, and we present our results on a large collection of matrices coming from real industrial cases.
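The double balance criterion argued for above, interior nodes and local interface nodes per domain, is easy to measure for a given partition. A toy sketch on a six-node graph (illustrative bookkeeping only, not Scotch's partitioning algorithms):

```python
from collections import defaultdict

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 5)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}    # node -> domain

adj = defaultdict(set)
for u, w in edges:
    adj[u].add(w)
    adj[w].add(u)

interior = defaultdict(int)
interface = defaultdict(int)
for n, d in part.items():
    if any(part[m] != d for m in adj[n]):
        interface[d] += 1      # touches another domain: local interface node
    else:
        interior[d] += 1       # purely internal node

print("interior per domain :", dict(interior))   # {0: 1, 1: 1}
print("interface per domain:", dict(interface))  # {0: 2, 1: 2}
```

A partitioner that balances only total nodes per domain can still produce very uneven `interface` counts, which is precisely the imbalance the thesis targets.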
13

Staley, Andrew W. „Patterns of morphologic change and iterative evolution in the gastropod genus Melanopsis from the late Miocene Pannonian basin“. 1992. http://catalog.hathitrust.org/api/volumes/oclc/31716289.html.

Annotation:
Thesis (M.S.)--University of Wisconsin--Madison, 1992.
Typescript. Includes bibliographical references (leaves 65-71).
14

Bardhan, Jaydeep P., J. H. Lee, Shihhsien Kuo, Michael D. Altman, Bruce Tidor and Jacob K. White. „Fast Methods for Biomolecular Charge Optimization“. 2003. http://hdl.handle.net/1721.1/3711.

Annotation:
We report a Hessian-implicit optimization method to quickly solve the charge optimization problem over protein molecules: given a ligand and its complex with a receptor, determine the ligand charge distribution that minimizes the electrostatic free energy of binding. The new optimization couples the boundary element method (BEM) with a primal-dual interior point method (PDIPM); initial results suggest that the method scales much better than previous methods. The quadratic objective function is the electrostatic free energy of binding, where the Hessian matrix serves as an operator that maps the charges to the potential. The unknowns are the charge values at the charge points, and they are limited by equality and inequality constraints that model physical considerations, i.e. conservation of charge. In previous approaches, a finite-difference method is used to model the Hessian matrix, which requires significant computational effort to remove grid-based inaccuracies. In the novel approach, BEM is used instead, with precorrected FFT (pFFT) acceleration to compute the potential induced by the charges. This part will be explained in detail by Shihhsien Kuo in another talk. Even though the Hessian matrix can be calculated an order of magnitude faster than in previous approaches, it is still quite expensive to form explicitly. Instead, the KKT condition is solved by a PDIPM, and a Krylov-based iterative solver is used to find the Newton direction at each step. Hence, only Hessian-vector products are necessary, and these can be evaluated quickly using pFFT. The new method with proper preconditioning solves a 500-variable problem nearly 10 times faster than techniques that must form the Hessian matrix explicitly. Furthermore, the algorithm scales nicely because the number of IPM iterations is robust to the size of the problem. The significant reduction in cost allows the analysis of much larger molecular systems than could be solved in a reasonable time using previous methods.
Singapore-MIT Alliance (SMA)
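The Hessian-implicit idea, optimizing a quadratic objective while touching the Hessian only through matrix-vector products, can be sketched with SciPy's trust-region constrained solver. Here a small dense SPD matrix stands in for the BEM/pFFT operator, and only the charge-conservation equality constraint is kept; all problem data are invented:

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

rng = np.random.default_rng(0)
n = 50
B = rng.normal(size=(n, n))
H = B @ B.T + n * np.eye(n)       # SPD stand-in for the binding-energy Hessian
c = rng.normal(size=n)

fun = lambda q: 0.5 * q @ (H @ q) + c @ q      # quadratic "binding energy"
jac = lambda q: H @ q + c
hessp = lambda q, v: H @ v        # the solver's ONLY access to H (a matvec)

# conservation of total charge: sum(q) == 0
conserve = LinearConstraint(np.ones((1, n)), 0.0, 0.0)

res = minimize(fun, np.zeros(n), jac=jac, hessp=hessp,
               method="trust-constr", constraints=[conserve])
print("optimal energy:", round(res.fun, 4),
      "| total charge:", round(res.x.sum(), 9))
```

In the paper's setting the matvec `H @ v` is what pFFT-accelerated BEM provides, so the optimizer never needs the dense Hessian that this sketch builds for convenience.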
15

Rodrigues, Sara Xavier Reis Gonçalves. „On the iteration of Katsuno and Mendelzon update“. Master's thesis, 2015. http://hdl.handle.net/10400.13/1164.

Annotation:
In this dissertation we present a model for the iteration of Katsuno and Mendelzon's update, inspired by the developments on iteration in AGM belief revision. We adapt Darwiche and Pearl's postulates of iterated belief revision to update (as well as the independence postulate proposed in [BM06, JT07]) and show two families of such operators, based on natural [Bou96] and lexicographic revision [Nay94a, NPP03]. In all cases, we provide a possible-worlds semantics of the models.
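Of the two operator families mentioned, lexicographic revision has a particularly compact possible-worlds form: the worlds satisfying the new information move ahead of all others, with relative order preserved inside each group. A toy sketch over two propositional atoms (illustrative of the general mechanism, not the dissertation's update operators):

```python
from itertools import product

# Worlds over atoms (p, q); a ranking lists worlds from most to least
# plausible (ties are ignored in this toy sketch).
worlds = list(product([True, False], repeat=2))

def lex_revise(ranking, phi):
    """Lexicographic revision: phi-worlds first, relative order preserved."""
    return [w for w in ranking if phi(w)] + [w for w in ranking if not phi(w)]

ranking = worlds[:]                                   # (p=T,q=T) most plausible
ranking = lex_revise(ranking, lambda w: not w[0])     # learn "not p"
ranking = lex_revise(ranking, lambda w: w[1])         # then learn "q"
print("most plausible world:", ranking[0])            # (False, True)
```

Katsuno-Mendelzon update differs from revision in that it selects closest worlds pointwise, per model of the current beliefs, rather than globally, which is the distinction the adapted Darwiche-Pearl postulates are designed to respect.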