Dissertations on the topic "Scalable modeling and control"


Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Scalable modeling and control".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and organise your bibliography correctly.

1

Jordan, Philip [Verfasser]. "Scalable Modelling of Aircraft Environmental Control Systems / Philip Jordan." München : Verlag Dr. Hut, 2019. http://d-nb.info/118151441X/34.

2

Kumar, Vibhore. "Enabling scalable self-management for enterprise-scale systems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24788.

Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2008.
Committee Chair: Schwan, Karsten; Committee Member: Cooper, Brian F.; Committee Member: Feamster, Nick; Committee Member: Liu, Ling; Committee Member: Sahai, Akhil.
3

Chuku, Ejike E. "Security and Performance Engineering of Scalable Cognitive Radio Networks. Sensing, Performance and Security Modelling and Analysis of 'Optimal' Trade-offs for Detection of Attacks and Congestion Control in Scalable Cognitive Radio Networks." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18448.

Abstract:
A Cognitive Radio Network (CRN) is a technology that allows unlicensed users to utilise licensed spectrum by detecting an idle band through sensing. However, most research studies on CRNs have been carried out without considering the impact of sensing on the performance and security of CRNs. Sensing is essential for secondary users (SUs) to get hold of a free band without interfering with the signal generated by primary users (PUs). However, excessive sensing time for the detection of free spectrum for SUs, as well as extended periods of CRNs in an insecure state, have adverse effects on network performance. Moreover, a CRN is very vulnerable to attacks as a result of its wireless nature and other unique characteristics such as spectrum sensing and sharing. These attacks may attempt to eavesdrop on or modify the contents of packets being transmitted, and they could also deny legitimate users the opportunity to use the band, leading to underutilization of the spectrum space. In this context, it is often challenging to differentiate networks under Denial of Service (DoS) attacks from networks experiencing congestion. This thesis employs a novel Stochastic Activity Network (SAN) model as an effective analytic tool to represent and study sensing vs performance vs security trade-offs in CRNs. Specifically, an investigation is carried out focusing on sensing vs security vs performance trade-offs, leading to the optimization of the spectrum band's usage. Moreover, consideration is given both to the case where a CRN is experiencing congestion and to the case where it is under attack. Consequently, the packet delivery ratio (PDR) is employed to determine whether the network is under a DoS attack or experiencing congestion. In this context, the packet loss probability, queue length and throughput of the transmitter are used to measure the PDR with reference to the interarrival times of PUs. Furthermore, this thesis takes into consideration the impact of scalability on the performance of the CRN. Due to the unpredictable nature of PUs' activities on the spectrum, it is imperative for SUs to swiftly utilize the band as soon as it becomes available. Unfortunately, the CRN models proposed in the literature are static and unable to respond effectively to changes in service demands. To this end, a numerical simulation experiment is carried out to determine the impact of scalability on the enhancement of nodal CRN sensing, security and performance. At the instant the band becomes idle and there are requests by SUs waiting for encryption and transmission, additional resources are dynamically released in order to utilize the spectrum space as fully as possible before the reappearance of PUs. These additional resources make the same service provision, such as encryption and intrusion detection, as the initial resources. A SAN model is proposed in order to investigate the impact of scalability on the performance of the CRN, and typical numerical simulation experiments are carried out, based on the application of the Mobius Petri Net Package, to determine the performance of scalable CRNs (SCRNs) in comparison with unscalable CRNs (UCRNs), with associated interpretations.
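The thesis itself reasons via SAN models; as a loose illustration of the PDR-based discrimination idea described in this abstract, the following Python sketch classifies a monitoring window as congestion or a suspected DoS attack. All names and threshold values here are assumptions invented for the sketch, not taken from the thesis.

```python
# Hypothetical illustration of distinguishing a DoS attack from congestion
# using the packet delivery ratio (PDR). Thresholds are made up for the
# sketch; the thesis derives its conclusions from SAN models instead.

from dataclasses import dataclass

@dataclass
class WindowStats:
    packets_sent: int       # packets handed to the transmitter
    packets_delivered: int  # packets acknowledged at the receiver
    queue_length: int       # transmitter queue occupancy at window end

def pdr(stats: WindowStats) -> float:
    """Packet delivery ratio over one monitoring window."""
    return stats.packets_delivered / max(stats.packets_sent, 1)

def classify(stats: WindowStats,
             pdr_floor: float = 0.85,
             queue_limit: int = 50) -> str:
    """Low PDR with a long queue suggests congestion (packets back up);
    low PDR with a short queue suggests losses elsewhere, e.g. a DoS attack."""
    if pdr(stats) >= pdr_floor:
        return "normal"
    return "congestion" if stats.queue_length > queue_limit else "suspected DoS"

if __name__ == "__main__":
    print(classify(WindowStats(1000, 720, 120)))  # -> congestion
    print(classify(WindowStats(1000, 700, 8)))    # -> suspected DoS
```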
4

May, Brian 1975. "Scalable access control." Monash University, School of Computer Science and Software, 2001. http://arrow.monash.edu.au/hdl/1959.1/8043.

5

Aroua, Ayoub. "Mise à l'échelle des entraînements électromécaniques pour la conception au niveau système dans les premières phases de développement des véhicules électriques." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILN042.

Abstract:
The automotive industry is required to accelerate the development and deployment of electrified vehicles at a faster pace than ever, to align the transportation sector with the climate goals. Reducing the development time of electric vehicles becomes an urgent priority. On the other hand, the industry is challenged by the increasing complexity and large design space of the emerging electrified powertrains. The existing approaches to component design, such as numerical methods exemplified by the finite element method, computational fluid dynamics, etc., are based on a detailed design process. This leads to a heavy computational burden when trying to incorporate them at system level. Speeding up the early development phases of electrified vehicles necessitates new methodologies and tools supporting the exploration of the system-level design space. These methodologies should allow different sizing choices of electrified powertrains to be assessed in the early development phases, both efficiently in terms of computational time and with reliable results in terms of energy consumption at system level. To address this challenge, this Ph.D. thesis aims to develop a scaling methodology for electric axles, allowing system-level investigation of electric vehicles with different power ratings. The electric axle considered in this thesis comprises a voltage source inverter, an electric machine, a gearbox, and a control unit. The scaling procedure aims to predict the data of a newly defined design of a given component with different specifications based on a reference design, without redoing time- and effort-consuming steps. For this purpose, different derivations of scaling laws for the electric axle components are thoroughly discussed and compared at component level in terms of power loss scaling. A particular emphasis is placed on examining the linear losses-to-power scaling method, which is widely employed in system-level studies: this method rests on questionable assumptions and has not previously been the subject of a comprehensive examination. A key contribution of the presented work is the derivation of power loss scaling laws for gearboxes, which had been identified as a gap in the current literature. This is achieved through an intensive experimental campaign using commercial gearboxes. To incorporate the scaling laws at system level and study the interaction between the scaled components, the energetic macroscopic representation formalism is employed. The novelty of the proposed method lies in structuring a scalable model and control for a reference electric axle to be used in system-level simulation. The novel organization consists of a reference model and control complemented by two power adaptation elements on the electrical and mechanical sides. These latter elements account for the scaling effects, including the power losses. The methodology is applied to different case studies of battery electric vehicles, ranging from light- to heavy-duty vehicles. Particular attention is paid to assessing the impact of the linear losses-to-power scaling method on the energy consumption, considering different power scaling factors and driving cycles, as compared to high-fidelity scaling methods.
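A minimal sketch of the linear losses-to-power scaling idea that this abstract scrutinizes: a homothetic scaling in which a design whose rated power is k times the reference is assumed to lose k times as much at the homothetic operating point. The reference loss map and the factor k below are purely illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Reference power-loss map of a component (e.g., an electric machine),
# expressed as loss [W] versus output power [W]. Purely illustrative values.
p_ref = np.linspace(0.0, 50e3, 6)  # operating points of a 50 kW reference
loss_ref = np.array([200., 600., 1100., 1700., 2400., 3200.])

def loss_linear_scaling(p_out: float, k: float) -> float:
    """Linear losses-to-power scaling: a design rated k times the reference
    is assumed to lose k times as much at the homothetic operating point
    p_out / k. This is the assumption the thesis puts to the test."""
    return float(k * np.interp(p_out / k, p_ref, loss_ref))

# A 100 kW axle (k = 2) evaluated at 40 kW output:
print(loss_linear_scaling(40e3, k=2.0))  # 2 * loss_ref(20 kW) = 2200 W
```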
6

TUCCI, MICHELE. "Scalable control of islanded microgrids." Doctoral thesis, Università degli studi di Pavia, 2018. http://hdl.handle.net/11571/1214890.

Abstract:
In recent years, the increasing penetration of renewable energy sources has motivated a growing interest in microgrids, energy networks composed of interconnected Distributed Generation Units (DGUs) and loads. Microgrids are self-sustained electric systems that can operate either connected to the main grid or detached from it. In this thesis, we focus on the latter case, thus dealing with so-called Islanded microGrids (ImGs). We propose scalable control design methodologies for both AC and DC ImGs, allowing DGUs and loads to be connected in general topologies and to enter/leave the network over time. In order to ensure safe and reliable operation, we mirror the flexibility of ImG structures in their primary and secondary control layers. Notably, off-line control design hinges on Plug-and-Play (PnP) synthesis, meaning that the computation of individual regulators is complemented by local optimization-based tests for denying dangerous plug-in/out requests. The solutions presented in this work aim to address some of the key challenges arising in the control of AC and DC ImGs, while overcoming the limitations of existing approaches. More precisely, this thesis comprises the following main contributions: (i) the development of decentralized primary control schemes for load-connected networks (i.e. where local loads appear only at the output terminals of each DGU) ensuring voltage stability in DC ImGs, and voltage and frequency stability in AC ImGs. In contrast with the most commonly used control strategies available in the literature, our regulators guarantee offset-free tracking of reference signals. Moreover, the proposed primary local controllers can be designed or updated on the fly when DGUs are plugged in/out, and the closed-loop stability of the ImG is always preserved. (ii) Novel approximate network reduction methods for handling totally general interconnections of DGUs and loads in AC ImGs. We study and exploit Kron reduction in order to derive an equivalent load-connected model of the original ImG and to design stabilizing voltage and frequency regulators, independently of the ImG topology. (iii) Distributed secondary control schemes, built on top of the primary layers, for accurate reactive power sharing in AC ImGs, and current sharing and voltage balancing in DC ImGs. In the latter case, we prove that the desired coordinated behaviors are achieved in a stable fashion, and we describe how to design secondary regulators in a PnP manner when DGUs are added to or removed from the network. (iv) Theoretical results are validated through extensive simulations, and some of the proposed design algorithms have been successfully tested on real ImG platforms.
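Kron reduction, which the abstract exploits to obtain an equivalent load-connected model, eliminates interior (load) nodes of a network by taking the Schur complement of the nodal admittance matrix. A minimal sketch follows; the 3-node admittance matrix is an invented example, purely for illustration.

```python
import numpy as np

def kron_reduce(Y: np.ndarray, keep: list[int]) -> np.ndarray:
    """Eliminate the nodes not in `keep` from the admittance matrix Y
    via the Schur complement: Y_red = Y_kk - Y_ki @ inv(Y_ii) @ Y_ik."""
    drop = [i for i in range(Y.shape[0]) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yki = Y[np.ix_(keep, drop)]
    Yik = Y[np.ix_(drop, keep)]
    Yii = Y[np.ix_(drop, drop)]
    return Ykk - Yki @ np.linalg.solve(Yii, Yik)

# Illustrative 3-node network: nodes 0 and 1 are DGU terminals,
# node 2 is an interior load node to be eliminated.
Y = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  3.0, -2.0],
              [-1.0, -2.0,  3.0]])
print(kron_reduce(Y, keep=[0, 1]))  # 2x2 equivalent load-connected model
```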
7

Liu, Xin. "Scalable online simulation for modeling grid dynamics /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3158471.

8

Gramsamer, Ferdinand. "Scalable flow control for interconnection networks /." [Zürich] : [Institut für Technische Informatik und Kommunikationsnetze TIK, ETH Zürich], 2003. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15020.

9

Gevros, Panagiotis. "Congestion control mechanisms for scalable bandwidth sharing." Thesis, University College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249696.

10

Roman, Alexandru Bogdan. "Scalable cross-layer wireless medium access control." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609506.

11

Santiago, Alexandre José Batista. "HEVC scalable analysis : performance and bitrate control." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/21689.

Abstract:
Master's in Electronics and Telecommunications Engineering
This dissertation provides a study of the High Efficiency Video Coding standard (HEVC) and its scalable extension, SHVC. SHVC provides better performance when encoding several layers simultaneously than using an HEVC encoder in a simulcast configuration. Both reference encoders, in the base layer and in the enhancement layer, use the same rate control model, the R-λ model, which was optimized for HEVC. No optimal bitrate partitioning amongst layers is proposed in the scalable HEVC (SHVC) test model (SHM 8). We derived a new R-λ model for the enhancement layer and for the spatial-scalability case, which led to a BD-rate gain of 1.81% and a BD-PSNR gain of 0.025 relative to the rate-distortion model of SHM-SHVC. Nevertheless, we also show in this dissertation that the proposed R-λ model should not be used in the lower (base) layer of SHVC and, consequently, not in HEVC.
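For context, the R-λ rate control model used in the HEVC reference software relates the Lagrange multiplier λ to the target bits per pixel through a power law, λ = α·bpp^β, and then maps λ to a quantization parameter. A minimal sketch follows; the α, β values and the ln(λ)-to-QP mapping constants are the commonly cited reference-software defaults, not the dissertation's re-derived enhancement-layer model.

```python
import math

def rlambda_qp(target_bits: int, width: int, height: int,
               alpha: float = 3.2003, beta: float = -1.367) -> tuple[float, int]:
    """R-lambda model as in the HEVC reference software: compute lambda from
    the target bits per pixel, then derive QP from lambda. The default
    alpha/beta are common initial values; encoders update them adaptively."""
    bpp = target_bits / (width * height)          # bits per pixel for one frame
    lam = alpha * (bpp ** beta)                   # lambda = alpha * bpp^beta
    qp = round(4.2005 * math.log(lam) + 13.7122)  # standard lambda-to-QP mapping
    return lam, max(0, min(51, qp))

# Target 500 kbit for one 1080p frame:
print(rlambda_qp(500_000, 1920, 1080))
```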
12

Bourki, Amine. "Towards scalable, multi-view urban modeling using structure priors." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1062/document.

Abstract:
In this thesis, we address the problem of 3D reconstruction from a sequence of calibrated street-level photographs, with a simultaneous focus on scalability and the use of structure priors in Multi-View Stereo (MVS). While both aspects have been studied broadly, existing scalable MVS approaches do not handle well the ubiquitous, yet simple, structural regularities of man-made environments. On the other hand, structure-aware 3D reconstruction methods are slow, scale poorly with the size of the input sequences, and may even require additional restrictive information. The goal of this thesis is to reconcile scalability and structure awareness within a common MVS framework using soft, generic priors which encourage: (i) piecewise planarity, (ii) alignment of object boundaries with image gradients, (iii) alignment with vanishing directions (VDs), and (iv) object co-planarity. To do so, we present the novel "Patchwork Stereo" framework, which integrates photometric stereo from a handful of wide-baseline views with a sparse 3D point cloud, combining robust 3D plane extraction and top-down image partitioning in a unified 2D-3D analysis cast as a principled Markov Random Field energy minimization. We evaluate our contributions quantitatively and qualitatively on challenging urban datasets and illustrate results which are at least on par with state-of-the-art methods in terms of geometric structure, but are achieved several orders of magnitude faster, paving the way for photo-realistic city-scale modeling.
13

Biatek, Thibaud. "Efficient rate control strategies for scalable video coding." Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0007/document.

Abstract:
High Efficiency Video Coding (HEVC/H.265) is the latest video coding standard, finalized in January 2013 as the successor of Advanced Video Coding (AVC/H.264). Its scalable extension, called SHVC, was released in October 2014 and enables spatial, bit-depth, color-gamut (CGS) and even standard scalability (AVC to HEVC). SHVC is a good candidate for introducing new services thanks to its backward compatibility with legacy HEVC receivers through the base-layer (BL) stream, while next-generation receivers are served through the BL plus an enhancement layer (EL). In addition, SHVC saves substantial bitrate with respect to simulcast coding (independent coding of the layers); it is also considered by DVB for UHD introduction and is included in ATSC 3.0. In this context, this thesis aims at designing efficient rate-control strategies for HEVC and its scalable extension SHVC in the context of the introduction of new UHD formats. First, we investigated the ρ-domain approach, which consists in linking the number of non-zero transformed and quantized residual coefficients to the bitrate in a linear way, to achieve straightforward rate control. After validating it for HEVC and SHVC coding, we developed an innovative Coding Tree Unit (CTU)-level rate-control algorithm using the ρ-domain. For each CTU and its associated targeted bitrate, our method accurately estimates the most appropriate quantization parameter (QP) based on neighborhood indicators, with a bitrate error below 4%. Then, we proposed a deterministic way of estimating the ρ-domain model which avoids the implementation of look-up tables and achieves a model-estimation accuracy above 90%. Second, we explored the impact of the bitrate ratio between layers on SHVC performance for spatial, CGS and SDR-to-HDR scalability. Based on statistical observations, we built adaptive rate-control (ARC) algorithms. We first proposed an ARC scheme which optimizes coding performance by selecting the optimal ratio within a fixed ratio interval, under a global bitrate constraint (BL+EL). This method is adaptive, considers the content and the type of scalability, and enables a coding gain of 4.25% compared to fixed-ratio encoding. This method was then enhanced with quality and bandwidth constraints in each layer instead of a fixed interval. This second method was tested on hybrid delivery of HD/UHD services and on backward-compatible SHVC encoding of UHD1-P1/UHD1-P2 services (DVB use case), where it enables significant coding gains of 7.51% and 8.30%, respectively. Finally, the statistical multiplexing of SHVC programs was investigated. We proposed a first approach which adjusts both the global bitrate allocated to each program and the ratio between BL and EL so as to optimize coding performance. In addition, the proposed method smooths quality variations and enforces quality homogeneity between programs. Applied to a database of pre-encoded bitstreams, it reduces the scalability overhead from 11.01% to 7.65% compared to constant-bitrate encoding, while maintaining good accuracy and acceptable quality variations among programs.
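The ρ-domain model referenced above (originally due to He and Mitra) postulates that the bitrate is linear in the fraction of non-zero quantized coefficients, R ≈ θ·(1−ρ), where ρ is the fraction of zero coefficients. Below is a minimal sketch of how such a model can drive QP selection; the candidate-QP loop and the way ρ is approximated here are illustrative assumptions, not the CTU-level algorithm of the thesis.

```python
import numpy as np

def rho(coeffs: np.ndarray, qstep: float) -> float:
    """Fraction of transform coefficients quantized to zero at step qstep
    (a dead-zone-style proxy, for illustration only)."""
    return float(np.mean(np.abs(coeffs) < qstep))

def pick_qp(coeffs: np.ndarray, target_bits: float, theta: float,
            qp_candidates=range(0, 52)) -> int:
    """rho-domain rate control: R(QP) ~ theta * (1 - rho(QP)). Pick the QP
    whose predicted rate is closest to the target. theta is calibrated from
    previously coded blocks (bits per non-zero coefficient)."""
    best_qp, best_err = 0, float("inf")
    for qp in qp_candidates:
        qstep = 2 ** ((qp - 4) / 6)  # H.264/HEVC-style QP-to-step mapping
        rate = theta * coeffs.size * (1.0 - rho(coeffs, qstep))
        if abs(rate - target_bits) < best_err:
            best_qp, best_err = qp, abs(rate - target_bits)
    return best_qp

rng = np.random.default_rng(0)
block = rng.laplace(scale=4.0, size=(32, 32))  # stand-in transform residual
print(pick_qp(block, target_bits=800.0, theta=4.0))
```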
14

Wang, Zhikui. "Congestion control with scalable stability analysis and implementation /." Diss., Restricted to subscribing institutions, 2004. http://proquest.umi.com/pqdweb?did=828453671&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

15

Dai, Min. "Rate-distortion analysis and traffic modeling of scalable video coders." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3143.

Abstract:
In this work, we focus on two important goals of the transmission of scalable video over the Internet. The first goal is to provide high quality video to end users and the second one is to properly design networks and predict network performance for video transmission based on the characteristics of existing video traffic. Rate-distortion (R-D) based schemes are often applied to improve and stabilize video quality; however, the lack of R-D modeling of scalable coders limits their applications in scalable streaming. Thus, in the first part of this work, we analyze R-D curves of scalable video coders and propose a novel operational R-D model. We evaluate and demonstrate the accuracy of our R-D function in various scalable coders, such as Fine Granular Scalable (FGS) and Progressive FGS coders. Furthermore, due to the time-constraint nature of Internet streaming, we propose another operational R-D model, which is accurate yet with low computational cost, and apply it to streaming applications for quality control purposes. The Internet is a changing environment; however, most quality control approaches only consider constant bit rate (CBR) channels and no specific studies have been conducted for quality control in variable bit rate (VBR) channels. To fill this void, we examine an asymptotically stable congestion control mechanism and combine it with our R-D model to present smooth visual quality to end users under various network conditions. Our second focus in this work concerns the modeling and analysis of video traffic, which is crucial to protocol design and efficient network utilization for video transmission. Although scalable video traffic is expected to be an important source for the Internet, we find that little work has been done on analyzing or modeling it. In this regard, we develop a frame-level hybrid framework for modeling multi-layer VBR video traffic. In the proposed framework, the base layer is modeled using a combination of wavelet and time-domain methods and the enhancement layer is linearly predicted from the base layer using the cross-layer correlation.
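The dissertation's specific R-D function is not reproduced here; as a generic illustration of what an operational R-D model is, the sketch below fits the classical exponential model D(R) ≈ a·2^(−bR) to empirical rate-distortion samples by least squares in the log domain. Both the model form and the sample values are illustrative assumptions.

```python
import numpy as np

# Empirical (rate, distortion) samples from trial encodings (illustrative).
rates = np.array([0.25, 0.5, 1.0, 2.0, 4.0])    # bits per pixel
mse   = np.array([95.0, 52.0, 17.0, 3.1, 0.4])  # distortion (MSE)

# Fit D(R) = a * 2^(-b R)  <=>  log2 D = log2 a - b R (linear least squares).
A = np.vstack([np.ones_like(rates), -rates]).T
log2a, b = np.linalg.lstsq(A, np.log2(mse), rcond=None)[0]
a = 2.0 ** log2a

def predicted_distortion(r: float) -> float:
    """Operational R-D model: predict distortion at an untried rate."""
    return a * 2.0 ** (-b * r)

print(f"fitted a={a:.1f}, b={b:.2f}, D(1.5 bpp)={predicted_distortion(1.5):.2f}")
```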
16

Mansour, Hassan Bader. "Modeling of scalable video content for multi-user wireless transmission." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/12550.

Abstract:
This thesis addresses different aspects of wireless video transmission of scalable video content to multiple users over lossy and under-provisioned channels. Modern wireless video transmission systems, such as the Third Generation Partnership Project (3GPP)'s high speed packet access (HSPA) networks and IEEE 802.11-based wireless local area networks (WLANs) allow sharing common bandwidth resources among multiple video users. However, the unreliable nature of the wireless link results in packet losses and fluctuations in the available channel capacity. This calls for flexible encoding, error protection, and rate control strategies implemented at the video encoder or base station. The scalable video coding (SVC) extension of the H.264/AVC video standard delivers quality scalable video bitstreams that help define and provide quality of service (QoS) guarantees for wireless video transmission applications. We develop real-time rate and distortion estimation models for the coarse/medium granular scalability (CGS/MGS) features in SVC. These models allow mobile video encoders to predict the packet size and corresponding distortion of a video frame using only the residual mean absolute difference (MAD) and the quantization parameter (QP). This thesis employs different cross layer resource allocation techniques that jointly optimize the video bit-rate, error protection, and latency control algorithms in pre-encoded and real-time streaming scenarios. In the first scenario, real-time multi-user streaming with dynamic channel throughput and packet losses is solved by controlling the base and enhancement layer quality as well as unequal erasure protection (UXP) overhead to minimize the frame-level distortion. The second scenario considers pre-encoded scalable video streaming in capacity limited wireless channels suffering from latency problems and packet losses. We develop a loss distortion model for hierarchical predictive coders and employ dynamic UXP allocation with a delay-aware non-stationary rate-allocation streaming policy. The third scenario addresses the problem of efficiently allocating multi-rate IEEE 802.11-based network resources among multiple scalable video streams using temporal fairness constraints. We present a joint link-adaptation at the physical (PHY) layer and a dynamic packet dropping mechanism in the network or medium access control (MAC) layer for multi-rate wireless networks. We demonstrate that these methods result in significant performance gains over existing schemes.
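The thesis's own CGS/MGS rate model is not reproduced here; as a hedged illustration of predicting frame size from only the residual mean absolute difference (MAD) and the QP, the sketch below uses the classic quadratic rate model R = a·MAD/Q + b·MAD/Q² with an H.264-style QP-to-quantizer-step mapping. The coefficients a and b would be regressed online in a real encoder and are invented here.

```python
def qstep(qp: int) -> float:
    """H.264/AVC-style quantizer step: doubles every 6 QP units."""
    return 2.0 ** ((qp - 4) / 6.0)

def predict_frame_bits(mad: float, qp: int,
                       a: float = 25_000.0, b: float = 90_000.0) -> float:
    """Classic quadratic rate model: bits = a*MAD/Q + b*MAD/Q^2.
    a and b are illustrative; encoders estimate them from coded frames."""
    q = qstep(qp)
    return a * mad / q + b * mad / (q * q)

# A frame with residual MAD of 6.5 coded at QP 30:
print(f"predicted size: {predict_frame_bits(6.5, 30):,.0f} bits")
```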
17

Wachtel, Amanda M. (Amanda Marie). "A scalable methodology for modeling cities as systems of systems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82418.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 144-146).
As cities evolve in size and complexity, their component systems become more interconnected. Comprehensive modeling and simulation is needed to capture interactions and correctly assess the impact of changes. This thesis presents a methodology for modeling cities from a systems-of-systems perspective. The framework supplies general modeling guidelines and key steps. Also addressed are the importance of stakeholder interactions, creating the model structure, using smart-city sensor data, and applying the methodology to larger, traditional cities. As an initial step, four city-modeling software programs (CityNet, CityOne, SimCity 4, and SoSAT) were evaluated from both a user and a mathematical perspective. From the assessments, a list was developed of features critical to successful city-modeling software, including visualization, a streamlined user interface, accurate mathematics, the ability to specify systems and attributes, and the ability to model interconnections between systems. SoSAT was selected as the modeling tool for the case study, which involved modeling the Army's Base Camp Integration Laboratory. A model of the camp's baseline configuration was built, and the camp was simulated for 30 days with results recorded at one-hour intervals. 100 trials were run, with results averaged by time interval and over the total simulation time. Results were presented at all levels of structural aggregation. Two sensitivity analyses were conducted to analyze the impact of maintenance personnel and of the frequency of potable water deliveries. Adding or subtracting a maintenance person impacted the availability of the generator systems being serviced, in turn impacting the performance of the micro grid. Extending the time between deliveries by 24 and 48 hours revealed that two systems experienced resource depletions. Lastly, two technology insertion cases were conducted to assess the impact of adding a laundry water reuse system (LWRS) and a solar-powered hot water heater (SHWH). The LWRS provided 70% of the laundry system's water needs, significantly reducing dependency upon deliveries. The SHWH was expected to decrease electricity consumption and increase fuel consumption. However, the reduction in energy demand meant fewer generators were needed to power the micro grid, and both electricity and fuel consumption decreased.
by Amanda M. Wachtel.
S.M.
18

Li, Yuliang. "Congestion control for scalable video transmission over IP networks." Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441312.

19

Jarratt, Marie Claire. "Readout and Control: Scalable Techniques for Quantum Information Processing." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21572.

Abstract:
Quantum mechanics allows for the processing of information in entirely new ways, surpassing the computational limits set by classical physics. Termed `quantum information processing', scaling this scheme relies on simultaneously increasing the number of qubits -- the fundamental unit of quantum computation -- whilst reducing their error rates. With this comes a variety of challenges, including the ability to read out the quantum state of large numbers of qubits, as well as to control their evolution in order to mitigate errors. This thesis aims to address these challenges by developing techniques for the readout and control of quantum systems. The first series of experiments focuses on the readout of GaAs/AlGaAs semiconductor quantum systems, primarily relating to the technique of dispersive gate sensing (DGS). DGS is used to probe electron transmission in an open system, a quantum point contact, demonstrating an ability to resolve characteristic features of a one-dimensional ballistic channel in the limit where transport is not possible. DGS is also used to observe anomalous signals in the potential landscape of quantum-dot-defining gate electrodes. A technique for time-domain multiplexing is also presented, which allows readout resources, in the form of microwave components, to be shared between multiple qubits, increasing the capacity of a single readout line. The second series of experiments validates control techniques using trapped 171Yb+ ions. Classical error models are engineered using high-bandwidth IQ modulation of the microwave source used to drive qubit rotations. Reductions in the coherent lifetime of the quantum system are shown to match well with quantitative models. This segues into developing techniques to understand and suppress noise in the system, achieved using the filter-transfer-function approach, which casts arbitrary quantum control operations on qubits as noise spectral filters.
20

Zhang, Yueping. "Stable and scalable congestion control for high-speed heterogeneous networks." Diss., Texas A&M University, 2008. http://hdl.handle.net/1969.1/85909.

Abstract:
For any congestion control mechanism, the most fundamental design objectives are stability and scalability. However, achieving both properties is very challenging in as heterogeneous an environment as the Internet. From the end-users' perspective, heterogeneity is due to the fact that different flows have different routing paths and therefore different communication delays, which can significantly affect the stability of the entire system. In this work, we successfully address this problem by first proving a necessary and sufficient condition for a system to be stable under arbitrary delay. Utilizing this result, we design a series of practical congestion control protocols (MKC and JetMax) that achieve stability regardless of delay, as well as many additional appealing properties. From the routers' perspective, the system is heterogeneous because the incoming traffic is a mixture of short- and long-lived, TCP and non-TCP flows. This imposes a severe challenge on traditional buffer sizing mechanisms, which are derived using the simplistic model of a single or multiple synchronized long-lived TCP flows. To overcome this problem, we take a control-theoretic approach and design a new intelligent buffer sizing scheme called Adaptive Buffer Sizing (ABS), which, based on the current incoming traffic, dynamically sets the optimal buffer size under the target performance constraints. Our extensive simulation results demonstrate that ABS exhibits quick response to changes in traffic load, scalability to a large number of incoming flows, and robustness to generic Internet traffic.
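As a toy illustration of why feedback delay threatens the stability of congestion control (the phenomenon the thesis's delay-independent condition addresses), the sketch below simulates a single flow adjusting its rate from capacity feedback that arrives d steps late. The update law is generic, not MKC or JetMax, and the gain and delay values are arbitrary assumptions.

```python
def simulate(gain: float, delay: int, capacity: float = 100.0,
             steps: int = 60) -> list[float]:
    """Rate update x(t+1) = x(t) + gain * (capacity - x(t - delay)).
    With delayed feedback, too large a gain makes the loop oscillate;
    delay-independent stability requires a more careful design."""
    x = [10.0] * (delay + 1)  # initial rate history
    for _ in range(steps):
        feedback = capacity - x[-1 - delay]
        x.append(max(0.0, x[-1] + gain * feedback))
    return x

stable = simulate(gain=0.3, delay=2)    # damped, settles near capacity
unstable = simulate(gain=0.9, delay=2)  # sustained oscillation around capacity
print(f"stable end: {stable[-1]:.1f}, unstable last values: "
      f"{[round(v, 1) for v in unstable[-4:]]}")
```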
21

Casas, Escoda Adrià. "An Erlang Implementation of a Scalable Node B Control Unit." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37235.

Abstract:
The demand for mobile data traffic is increasing due to the popularization of advanced mobile devices such as smartphones and tablets and to the generalization of mobile Internet use. The Node B is one of the main elements of the control plane of the UMTS network. It is responsible for the tasks directly connected to the radio interface and provides the physical radio link between the mobile devices and the network. This master's thesis presents a design of the Node B control unit that can handle multiple requests concurrently and scale with both the number of cores and the number of cards. Additionally, it analyzes the suitability of using a high-level language such as Erlang for implementing the Node B control unit. To achieve these objectives, a prototype of the Node B control unit that can handle requests concurrently and scale with the number of cores and cards has been designed and implemented in Erlang. The developed prototype shows that implementing a concurrent and scalable Node B control unit in Erlang is completely feasible, and the tests that have been carried out demonstrate that the performance and scalability of the system are good. Furthermore, some realistic deployment scenarios of an Erlang implementation of the Node B control unit on the real hardware used in the Radio Base Station at Ericsson have been discussed, and they show that it is entirely possible to use Erlang for implementing the Node B control unit.
22

Li, Ming. "Resource discovery and fair intelligent admission control over scalable Internet /." Electronic version, 2004. http://adt.lib.uts.edu.au/public/adt-NTSM20050314.180037/index.html.

23

Zhang, Zhixiong. "Scalable role & organization based access control and its administration." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3110.

Abstract:
Thesis (Ph.D.)--George Mason University, 2008.
Vita: p. 121. Thesis directors: Ravi S. Sandhu, Daniel Menascé. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed July 7, 2008). Includes bibliographical references (p. 113-120). Also issued in print.
24

Bischof, Jonathan Michael. "Interpretable and Scalable Bayesian Models for Advertising and Text." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11400.

Abstract:
In the era of "big data", scalable statistical inference is necessary to learn from new and growing sources of quantitative information. However, many commercial and scientific applications also require models to be interpretable to end users in order to generate actionable insights about quantities of interest. We present three case studies of Bayesian hierarchical models that improve the interpretability of existing models while also maintaining or improving the efficiency of inference. The first paper is an application to online advertising that presents an augmented regression model interpretable in terms of the amount of revenue a customer is expected to generate over his or her entire relationship with the company---even if complete histories are never observed. The resulting Poisson Process Regression employs a marginal inference strategy that avoids specifying customer-level latent variables used in previous work that complicate inference and interpretability. The second and third papers are applications to the analysis of text data that propose improved summaries of topic components discovered by these mixture models. While the current practice is to summarize topics in terms of their most frequent words, we show significantly greater interpretability in online experiments with human evaluators by using words that are also relatively exclusive to the topic of interest. In the process we develop a new class of topic models that directly regularize the differential usage of words across topics in order to produce stable estimates of the combined frequency-exclusivity metric as well as proposing efficient and parallelizable MCMC inference strategies.
Statistics
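The combined frequency-exclusivity summary described in this entry's abstract can be illustrated with a small sketch: for each topic, words are scored by the harmonic mean of their frequency rank and their exclusivity rank, in the spirit of the FREX metric associated with this line of work. The 0.5 weight and the use of empirical-CDF ranks below are assumptions of this sketch, not necessarily the thesis's exact formulation.

```python
import numpy as np

def frex(phi: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Harmonic-mean frequency-exclusivity score for a K x V topic-word
    probability matrix phi. exclusivity[k, v] = phi[k, v] / sum_k phi[k, v];
    both ingredients enter via their empirical-CDF rank within each topic."""
    exclusivity = phi / phi.sum(axis=0, keepdims=True)

    def ecdf(row: np.ndarray) -> np.ndarray:  # rank of each value in (0, 1]
        return (np.argsort(np.argsort(row)) + 1) / row.size

    freq_rank = np.apply_along_axis(ecdf, 1, phi)
    excl_rank = np.apply_along_axis(ecdf, 1, exclusivity)
    return 1.0 / (w / excl_rank + (1.0 - w) / freq_rank)

phi = np.array([[0.30, 0.25, 0.25, 0.10, 0.10],   # topic 0
                [0.30, 0.05, 0.05, 0.35, 0.25]])  # topic 1
print(frex(phi).round(2))  # word 0 is frequent in both topics, so less exclusive
```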
25

Utermöhlen, Fabian [Verfasser]. "Modeling of MEMS Microbolometers : A Physics-Based Scalable Compact Model / Fabian Utermöhlen." Aachen : Shaker, 2015. http://d-nb.info/1070151580/34.

26

Rusert, Thomas [Verfasser]. "Distortion Modeling for Rate-Constrained Optimization of Scalable Video Coding / Thomas Rusert." Aachen : Shaker, 2007. http://d-nb.info/1164340301/34.

27

Behrens, Diogo, Marco Serafini, Sergei Arnautov, Flavio Junqueira, and Christof Fetzer. "Scalable error isolation for distributed systems: modeling, correctness proofs, and additional experiments." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-203622.

28

Chang, Hung-Ching. "Measuring, modeling, and optimizing counterintuitive performance phenomena in power-scalable, parallel systems." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51682.

Abstract:
The demands of exascale computing systems and applications have pushed for a rapid, continual design paradigm coupled with increasing design complexity from the interaction between the application, the middleware, and the underlying system hardware, which forms a breeding ground for inefficiency. This work seeks to improve system efficiency by exposing the root causes of unexpected performance slowdowns (e.g., lower performance at higher processor speeds) that occur more frequently in power-scalable systems, where raw processor speed varies. More precisely, we perform an exhaustive empirical study that conclusively shows that increasing processor speed often reduces performance and wastes energy. Our experimental work shows that the frequency of occurrence and the magnitude of slowdowns grow with clock frequency and parallelism, indicating that such slowdowns will be observed increasingly often given trends in processor and system design. Performance speedups at lower frequencies (or slowdowns at higher frequencies) have been anecdotally observed in the literature since 2004, but no research has explained or exploited this phenomenon. This work conclusively demonstrates that performance slowdowns during processor speedup phases can exceed 47% in common I/O workloads. Our hypothesis challenges (and ultimately debunks) a fundamental assumption in computer systems: that faster processor speeds result in the same or better performance. In this work, with the use of code and kernel instrumentation, exhaustive experiments, and deep insight into the inner workings of the Linux I/O subsystem, I overcome the aforementioned challenges of variance, complexity, and nondeterminism and identify I/O resource contention as the root cause of the slowdowns during processor speedup. Specifically, such contention arises in the Linux kernel when the journaling block device (JBD) interacts with the ext3/4 file system, introducing file write delays and file synchronization delays. To fully explain how such I/O contention causes these performance anomalies, I propose analytical models of resource contention among I/O threads that describe the root cause of the observed I/O slowdowns when processors speed up. To this end, I introduce LUC, a runtime system that limits the unintended consequences of power scaling, and demonstrate its effectiveness for two critical parallel transaction-oriented workloads: a mail server (varMail) and online transaction processing (oltp).
Ph. D.
29

Behrens, Diogo, Marco Serafini, Sergei Arnautov, Flavio Junqueira, and Christof Fetzer. "Scalable error isolation for distributed systems: modeling, correctness proofs, and additional experiments." Technische Universität Dresden, 2015. https://tud.qucosa.de/id/qucosa%3A29539.

30

Sjöberg, Bilstrup Katrin. "Predictable and Scalable Medium Access Control for Vehicular Ad Hoc Networks." Licentiate thesis, Halmstad University, Embedded Systems (CERES), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-5482.

Abstract:
This licentiate thesis investigates two medium access control (MAC) methods, when used for traffic safety applications over vehicular ad hoc networks (VANETs). The MAC methods are carrier sense multiple access (CSMA), as specified by the leading VANET standard IEEE 802.11p, and self-organizing time-division multiple access (STDMA), as used by the leading standard for transponders on ships. All vehicles in traffic safety applications periodically broadcast cooperative awareness messages (CAMs). The CAM-based data traffic imposes requirements on a predictable, fair and scalable medium access mechanism. The investigated performance measures are channel access delay, number of consecutive packet drops, and the distance between concurrently transmitting nodes. Performance is evaluated by computer simulations of a highway scenario in which all vehicles broadcast CAMs with different update rates and packet lengths. The obtained results show that nodes in a CSMA system can experience unbounded channel access delays, and further that there is a significant difference between the best-case and worst-case channel access delay that a node can experience. In addition, with CSMA there is a very high probability that several concurrently transmitting nodes are located close to each other. This occurs when nodes start their listening periods at the same time or when nodes choose the same backoff value, which results in nodes starting to transmit at the same instant. The CSMA algorithm is therefore both unpredictable and unfair, besides the fact that it scales badly for broadcast CAMs. STDMA, on the other hand, will always grant channel access to all packets before a predetermined time, regardless of the number of competing nodes. The STDMA algorithm is therefore predictable and fair. STDMA, using parameter settings adapted to the vehicular environment, is shown to outperform CSMA with respect to the distance between concurrently transmitting nodes. In CSMA the distance between concurrent transmissions is random, whereas STDMA uses the side information from the CAMs to properly schedule concurrent transmissions in space. The price paid for the superior performance of STDMA is the required network synchronization through a global navigation satellite system, e.g., GPS. That aside, since STDMA was shown to be scalable, predictable and fair, it is an excellent candidate for use in VANETs when the complex communication requirements of traffic safety applications must be met.
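As a toy illustration of the predictability argument above, the sketch below contrasts the channel access delay of a CSMA node that keeps finding the channel busy with that of an STDMA node, which is guaranteed a slot within one frame. The frame length, slot durations, and busy probability are invented parameters for the sketch, not values from the thesis.

```python
import random

def csma_access_delay(p_busy: float = 0.99, slot_us: int = 13,
                      cw: int = 15, rng=random.Random(1)) -> int:
    """CSMA-style access: keep backing off while the channel is sensed busy.
    The delay is unbounded in principle; long waits just become unlikely."""
    delay = 0
    while rng.random() < p_busy:  # channel busy -> defer and back off again
        delay += rng.randint(0, cw) * slot_us
    return delay

def stdma_access_delay(frame_slots: int = 100, slot_us: int = 500,
                       rng=random.Random(2)) -> int:
    """STDMA-style access: the node owns a slot somewhere in the frame,
    so the delay is bounded by one frame duration regardless of load."""
    return rng.randrange(frame_slots) * slot_us

csma = [csma_access_delay() for _ in range(10_000)]
stdma = [stdma_access_delay() for _ in range(10_000)]
print(f"CSMA  max access delay: {max(csma):>7} us (unbounded in principle)")
print(f"STDMA max access delay: {max(stdma):>7} us (< one frame = 50000 us)")
```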

31

Liu, Guanglei. "Management and Control of Scalable and Resilient Next-Generation Optical Networks." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14610.

Abstract:
Two research topics in next-generation optical networks with wavelength-division multiplexing (WDM) technologies were investigated: (1) the scalability of network management and control, and (2) the resilience/reliability of networks under faults and attacks. In scalable network management, the scalability of management information for inter-domain light-path assessment was studied. The light-path assessment was formulated as a decision problem based on decision theory and probabilistic graphical models. It was found that the partial information available can provide the desired performance, i.e., a small percentage of erroneous decisions can be traded off against a large saving in the amount of management information. In network resilience under malicious attacks, the resilience of all-optical networks under in-band crosstalk attacks was investigated with probabilistic graphical models. Graphical models provide an explicit view of the spatial dependencies in attack propagation, as well as computationally efficient approaches, e.g., the sum-product algorithm, for studying network resilience. With the proposed cross-layer model of attack propagation, key factors that affect the resilience of the network at the physical layer and the network layer were identified. In addition, analytical results on network resilience were obtained for typical topologies including ring, star, and mesh-torus networks. In network performance upon failures, traffic-based network reliability was systematically studied. First, a uniform deterministic traffic model at the network layer was adopted to analyze the impacts of network topology, failure dependency, and failure protection on network reliability. Then a random network-layer traffic model with Poisson arrivals was applied to further investigate the effect of network-layer traffic distributions on network reliability. Finally, asymptotic results for network reliability metrics with respect to the arrival rate were obtained for typical network topologies under the heavy-load regime. The main contributions of the thesis include: (1) a fundamental understanding of the scalable management and resilience of next-generation optical networks with WDM technologies; and (2) the innovative application of probabilistic graphical models, an emerging approach in machine learning, to the research of communication networks.
32

Liu, Yang. "Distributed Algorithms for Consensus and Formation Control in Scalable Robotic Swarms." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1528376075213318.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
33

Zhang, Xuan. "Scalable (re)design frameworks for optimal, distributed control in power networks." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:4121ae7d-d505-4d3d-8ea6-49efeb9ba048.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In this thesis, we develop scalable frameworks to (re)design a class of large-scale network systems with built-in control mechanisms, including electric power systems and the Internet, in order to improve their economic efficiency and performance while guaranteeing their stability and robustness. After a detailed introduction relating to power system control and optimization, as well as network congestion control, we turn our attention to merging primary and secondary frequency control for the power grid. We present modifications to the conventional generation control using a consensus design approach while considering the participation of controllable loads. The optimality, stability and delay robustness of the redesigned system are studied. Moreover, we extend the proposed control scheme to (i) networks with more complexity and (ii) the case where controllable loads are involved in the optimization. As a result, our controllers can balance power flow and drive the system to an economically optimal operating point in the steady state. We then study a real-time control framework that merges primary, secondary and tertiary frequency control in power systems. In particular, we consider a transmission-level network with tree topology. A distributed dynamic feedback controller is designed via a primal-dual decomposition approach and the stability of the overall system is studied. In addition, we introduce extra dynamics to improve system performance and emphasize the trade-off in choosing the gains of the extra dynamics. As a result, the proposed controller can balance supply and demand in the presence of disturbances and achieve optimal power flow in the steady state; furthermore, after introducing the extra dynamics, the transient performance of the system significantly improves. A redesign framework for network congestion control is developed next. Motivated by the augmented Lagrangian method, we introduce extra terms to the Lagrangian, which is used to redesign the primal-dual, primal and dual algorithms. We investigate how the gains resulting from the extra dynamics influence the stability and robustness of the system, and we show that the overall system can achieve added robustness to communication delays by appropriately tuning these gains. The meaning of these extra dynamics is also investigated, and a distributed proportional-integral-derivative controller for solving network congestion control problems is further developed. Finally, we concentrate on a reverse- and forward-engineering framework for distributed control of a class of linear network systems to achieve optimal steady-state performance. As a typical illustration, we use the proposed framework to solve the real-time economic dispatch problem in the power grid. We also provide a general procedure to modify control schemes for a special class of dynamic systems. To investigate how general the reverse- and forward-engineering framework is, we develop necessary and sufficient conditions under which a linear time-invariant system can be reverse-engineered as a gradient algorithm to solve an optimization problem. These conditions are characterized using properties of system matrices and relevant linear matrix inequalities. We conclude this thesis with an account of future research.
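The primal-dual gradient construction this abstract relies on has a standard schematic form; the generic statement below (notation invented here, not the thesis's) shows why the steady state is economically optimal.

$$\max_{x}\ \sum_i U_i(x_i)\quad \text{s.t.}\quad \sum_i x_i = d,\qquad L(x,\lambda)=\sum_i U_i(x_i)-\lambda\Big(\sum_i x_i - d\Big),$$

$$\dot{x}_i = k_i\,\big(U_i'(x_i)-\lambda\big),\qquad \dot{\lambda} = k_\lambda\Big(\sum_i x_i - d\Big).$$

At equilibrium, $U_i'(x_i)=\lambda$ for every unit and $\sum_i x_i = d$: marginal utilities are equalized while supply meets demand, which is the sense in which a frequency controller can double as an economic-dispatch solver.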
34

Müller, Paul [Verfasser], and Karlheinz [Akademischer Betreuer] Meier. "Modeling and Verification for a Scalable Neuromorphic Substrate / Paul Müller ; Betreuer: Karlheinz Meier." Heidelberg : Universitätsbibliothek Heidelberg, 2017. http://d-nb.info/1177688875/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
35

Yang, Weiwei. "Towards Scalable Nanomanufacturing: Modeling the Interaction of Charged Droplets from Electrospray using GPU." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5583.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Electrospray is an atomization method that has been intensely studied recently because of its monodispersity and the wide size range of droplets it can produce, from nanometers to hundreds of micrometers. This thesis focuses on numerical and theoretical modeling of the interaction of charged droplets from single and multiplexed electrospray. We study two typical scenarios: large-area film deposition using multiplexed electrospray, and fine-pattern printing assisted by linear electrostatic quadrupole focusing. Because of the high computational demands of the unsteady n-body problem, a graphics processing unit (GPU) delivering 10 teraflops of computing power is used to speed up the numerical simulation dramatically, efficiently, and at low cost. For large-area film deposition, both the spray profile and the deposition number density are studied for different arrangements of electrosprays and electrodes. Multiplexed electrospray with a hexagonal nozzle configuration cannot give uniform deposition even though it has the highest packing density. Uniform film deposition with less than 5% variation in thickness was observed with the linear nozzle configuration combined with relative motion between the ES source and the deposition substrate. For fine-pattern printing, a linear quadrupole is used to focus the droplets in the radial direction while maintaining a constant driving field in the axial direction. Simulations show that the linear quadrupole can quickly focus the droplets to a resolution of a few nanometers when the inter-droplet separation is larger than a certain value; below that value, resolution deteriorates drastically. This study sheds light on using electrospray as a scalable nanomanufacturing approach.
M.S.M.E.
Masters
Mechanical and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering; Thermofluids
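The computational core of such a simulation is the all-pairs Coulomb interaction between droplets; a minimal vectorized sketch of the O(N²) force kernel that GPUs accelerate so well (the Coulomb constant is standard physics, everything else is illustrative):

import numpy as np

K = 8.9875e9  # Coulomb constant, N·m²/C²

def coulomb_forces(pos, q):
    """Net electrostatic force on each of N charged droplets.
    pos: (N, 3) positions in m; q: (N,) charges in C."""
    r = pos[None, :, :] - pos[:, None, :]       # r[i, j] = pos[j] - pos[i]
    d = np.linalg.norm(r, axis=-1)              # pairwise distances
    np.fill_diagonal(d, np.inf)                 # exclude self-interaction
    pair = K * (q[:, None] * q[None, :])[..., None] * r / d[..., None] ** 3
    return -pair.sum(axis=1)                    # repulsive for like charges

Because every pair is independent, the kernel is embarrassingly data-parallel, which is why moving it to a GPU (e.g., via a numpy-compatible GPU array library) gives the large speedups the thesis exploits.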
36

Petrosyan, Vahan. "Fast, Robust and Scalable Clustering Algorithms with Applications in Computer Vision." Licentiate thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-238512.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In this thesis, we address a number of challenges in cluster analysis. We begin by investigating one of the oldest and most challenging problems: determining the number of clusters, k. For this problem, we propose a novel solution that, unlike previous techniques, delivers both the number of clusters and the clusters themselves in one shot (in contrast, conventional techniques run a given clustering algorithm several times for different values of k, and/or for several initializations with the same k). The second challenge we treat is the drawback, briefly mentioned above, of many conventional iterative clustering algorithms: how should they be initialized? We propose an initialization scheme that is applicable to multiple iterative clustering techniques widely used in practice (e.g., spectral clustering, EM-based, k-means). Numerical simulations demonstrate a significant improvement over many state-of-the-art initialization techniques. Third, we consider the computation of pairwise similarities between data points. A matrix of such similarities (the similarity matrix) constitutes the backbone of many clustering and unsupervised learning algorithms. In particular, for a given similarity metric, we propose a similarity transformation that promotes high similarity between points within a cluster and decreases the similarity in cluster-overlap regions. The transformation is particularly well suited to many clustering and dimensionality reduction techniques, which we demonstrate in extensive numerical experiments. Finally, we investigate the application of clustering algorithms to image and video datasets (also known as superpixel segmentation). We propose a segmentation algorithm that significantly outperforms current state-of-the-art algorithms, both in runtime and in standard accuracy metrics. Based on this algorithm, we develop a tool for fast and accurate image annotation. Our findings show that our annotation technique accelerates the annotation process by up to 20 times without compromising quality. This indicates a big opportunity to significantly speed up AI computer vision tasks, since image annotation forms a crucial step in creating training data.
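To indicate the flavor of the similarity transformation described above (the thesis's actual transformation is not reproduced here; the contrast exponent below is an invented stand-in), a sketch:

import numpy as np
from scipy.spatial.distance import cdist

def similarity_matrix(X, sigma=1.0):
    """Gaussian similarities: the backbone matrix many clustering and
    dimensionality-reduction methods share."""
    return np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))

def sharpen(S, p=2.0):
    """Illustrative contrast transformation: powering row-normalized
    similarities boosts strong within-cluster links and suppresses the
    weaker links that straddle cluster-overlap regions."""
    S = S / S.sum(axis=1, keepdims=True)
    S = S ** p
    return (S + S.T) / 2.0   # restore symmetry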
37

Harnischmacher, Gerrit. "Block structured modeling for control /." Düsseldorf : VDI-Verl, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016244726&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
38

Claflin, Robert Alan. "Modeling control in computer simulations." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/30927.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
This study outlines the design, implementation, and testing of the General Control Model as applied to the Future Theater-Level Model (FTLM) for the control of Joint and Allied Forces for all operational sides. The study develops a notion of battlefield control and describes the characteristics necessary to represent this notion in a computer simulation. Central to the implementation of the General Control Model is the robust capability for the user-analyst to describe any control relationship of research interest without having to alter the programming code. The user-analyst is given the capability to determine the cause-and-effect relationship of different control representations in a simulation. A full description of the model is complemented by an explanation of the implementation to facilitate the use of the General Control Model. A discussion of the initial test results leads to a more rigorous test, which confirms the intended behavior of the General Control Model in FTLM. Lastly, recommendations for future improvements to the General Control Model and FTLM are outlined to assist future research endeavors.
39

Pais, Gabriel Dias. "Order book modeling and control." Instituto Tecnológico de Aeronáutica, 2012. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2209.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
On 6 May 2010, the stock market experienced, within a few minutes, a large price decline and recovery collectively known as the Flash Crash. A series of events on the financial market before and during the Flash Crash created a high volume of transactions and an unbalanced order flow that collapsed into a lack of liquidity. The ultimate effect of this event was a blow to investors' confidence in the markets. Financial regulators, responsible for maintaining the integrity of the financial markets, must stay ahead of technological advances and be prepared to handle unbalanced order flow and illiquidity scenarios. This work presents a preliminary and innovative solution for financial regulators to manage unbalanced order flow and to control an order book based on concepts from control theory. Empirical simulations showed that, in a high-frequency world, algorithms could be used to control an order book and deal with automatic traders and market makers, regulating the economics of supply and demand by adjusting execution fees. Under a stress scenario, when the order book becomes too unbalanced, the control system may change the fees to induce the market makers to assume the role of counterpart in the order book. The new orders may then tend to balance the order flow and thereby prevent an imminent illiquidity scenario. Case studies show that order book control can be a useful tool to manage unbalanced order flow and to promote market integrity.
40

Young, Vinson. "Hardware-assisted security: bloom cache – scalable low-overhead control flow integrity checking." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53994.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Computers were not built with security in mind; security has often taken, and still takes, a back seat to performance. However, in an era in which so much sensitive data is being stored, with cloud storage and huge customer databases, much has to be done to keep this data safe from intruders. Control-flow hijacking attacks, ranging from basic code injection to return-into-libc and other code re-use attacks, are among the most dangerous. Currently available solutions, like Data Execution Prevention, which blocks execution of writable pages to prevent code injection, offer no efficient protection against code re-use attacks, which execute valid code in a malicious order. To protect against control-flow hijacking attacks, this work proposes an architecture that makes Control Flow Integrity, a solution that validates control flow against a pre-computed control-flow graph, practical. Current implementations of Control Flow Integrity have problems with code modularity, performance, or scalability, so I propose Dynamic Bloom Cache, a blocked-Bloom-filter-based approach, to solve current implementation issues.
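A Bloom-filter check of control-flow edges can be sketched in a few lines; the table size, hashing and edge encoding below are illustrative assumptions, not the Dynamic Bloom Cache design itself.

import hashlib

class BloomCFI:
    """Toy Bloom filter over valid (source, target) control-flow edges."""

    def __init__(self, m=1 << 16, k=3):
        self.bits = bytearray(m // 8)   # m-bit table
        self.m, self.k = m, k           # k independent hashes

    def _hashes(self, src, dst):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{src:x}->{dst:x}".encode()).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def allow(self, src, dst):
        """Offline: insert every edge of the pre-computed CFG."""
        for b in self._hashes(src, dst):
            self.bits[b // 8] |= 1 << (b % 8)

    def check(self, src, dst):
        """Runtime: valid edges always pass (no false negatives); a rare
        false positive admits an invalid edge, trading a little precision
        for O(1) checking time and constant memory."""
        return all(self.bits[b // 8] & (1 << (b % 8))
                   for b in self._hashes(src, dst))

Usage would be, for example, cfi.allow(0x400123, 0x400800) for each CFG edge during analysis, then cfi.check(src, dst) before every indirect control transfer.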
41

Ho, Qirong. "Modeling Large Social Networks in Context." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/543.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Today’s social and internet networks contain millions or even billions of nodes, and copious amounts of side information (context) such as text, attribute, temporal, image and video data. A thorough analysis of a social network should consider both the graph and the associated side information, yet we also expect the algorithm to execute in a reasonable amount of time on even the largest networks. Towards the goal of rich analysis on societal-scale networks, this thesis provides (1) modeling and algorithmic techniques for incorporating network context into existing network analysis algorithms based on statistical models, and (2) strategies for network data representation, model design, algorithm design and distributed multi-machine programming that, together, ensure scalability to very large networks. The methods presented herein combine the flexibility of statistical models with key ideas and empirical observations from the data mining and social networks communities, and are supported by software libraries for cluster computing based on original distributed systems research. These efforts culminate in a novel mixed-membership triangle motif model that easily scales to large networks with over 100 million nodes on just a few cluster machines, and can be readily extended to accommodate network context using the other techniques presented in this thesis.
42

Wu, Tao. "Off-network control processing for scalable routing in very large sensor networks." Diss., Connect to online resource - MSU authorized users, 2008.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
43

Seagraves, Andrew Nathan. "A scalable computational approach for modeling dynamic fracture of brittle solids in three dimensions." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62997.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 133-140).
In this thesis a new parallel computational method is proposed for modeling three-dimensional dynamic fracture of brittle solids. The method is based on a combination of the discontinuous Galerkin (DG) formulation of the continuum elastodynamic problem with Cohesive Zone Models (CZM) of fracture. In the proposed framework, discontinuous displacement jumps are allowed to occur at all element boundaries in the pre-fracture regime, in a manner similar to "intrinsic" cohesive element methods. However, owing to the DG framework, consistency and stability of the finite element solution are guaranteed prior to fracture. This is in stark contrast to intrinsic cohesive element methods, which suffer wave propagation and stability issues as a result of allowing discontinuous displacement jumps in the pre-fracture regime without properly accounting for them in the weak statement of the problem. In the new method, a fracture criterion is evaluated at all element boundaries throughout the calculation and, upon satisfaction of this criterion, cracks are allowed to nucleate and propagate in the finite element mesh, governed by a cohesive traction-separation law (TSL). This aspect of the method is similar to existing "extrinsic" cohesive element methods, which introduce new fracture surfaces in the mesh through the adaptive insertion of cohesive elements after the onset of fracture. Typically this requires the mesh topology to be modified on the fly, a process which is highly complex and hinders the scalability of parallel implementations. For the DG method, however, discontinuities exist at element boundaries from the start of the calculation, so modifications of the mesh topology are unnecessary for introducing new fracture surfaces. As a result, the parallel computational framework is highly scalable and algorithmically simple. In this thesis, the formulation and numerical implementation of the method are described in detail. The method is then applied to two practical problems. First, a ceramic spall test is simulated. In this example, the DG method is shown to accurately capture the propagation of longitudinal elastic waves and the formation of a spall plane. Mesh dependency of the predicted spall plane and the dissipated cohesive energy is investigated for refined meshes resolving the size of the fracture process zone, and the results are shown to be highly mesh-sensitive for the range of mesh sizes used. The spall test is also simulated using an existing intrinsic cohesive approach, which is shown to alter the propagation of elastic stress waves, leading to the spurious result that no spallation occurs. In a second numerical example, the proposed DG method is applied to simulate high-velocity impact of an unconfined ceramic plate by a rigid spherical projectile. The method is shown to capture some of the fundamental aspects of the impact response of unconfined ceramics, including the formation of conical and radial cracking patterns.
by Andrew Nathan Seagraves.
S.M.
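A representative linear-softening traction-separation law of the kind such methods are built on (the thesis's specific TSL may differ) is:

$$T(\delta)=\begin{cases}\sigma_c\left(1-\dfrac{\delta}{\delta_c}\right), & 0\le\delta\le\delta_c,\\[4pt] 0, & \delta>\delta_c,\end{cases}\qquad G_c=\int_0^{\delta_c} T(\delta)\,d\delta=\tfrac{1}{2}\,\sigma_c\,\delta_c .$$

Here $\sigma_c$ is the cohesive strength, $\delta$ the crack opening, $\delta_c$ the opening at which the crack faces become traction-free, and $G_c$ the fracture energy dissipated per unit crack area; the mesh sensitivity reported above enters through how well the elements resolve the process-zone length set by these parameters.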
44

Agbi, Clarence. "Scalable and Robust Designs of Model-Based Control Strategies for Energy-Efficient Buildings." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/333.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In the wake of rising energy costs, there is a critical need for sustainable energy management of commercial and residential buildings. Buildings consume approximately 40% of the total energy consumed in the US, and current methods to reduce this level of consumption include energy monitoring, smart sensing, and advanced integrated building control. However, the building industry has been slow to replace current PID and rule-based control strategies with more advanced strategies such as model-based building control. This is largely due to the additional cost of accurately modeling the dynamics of the building and the general uncertainty about whether model-based controllers can be reliably used in real conditions. The first half of this thesis addresses the challenge of constructing accurate grey-box building models for control using model identification. Current identification methods poorly estimate building model parameters because of the complexity of the building model structure, and fail to do so quickly because these methods are not scalable to large buildings. Therefore, we introduce the notion of parameter identifiability to determine those parameters in the building model that may not be accurately estimated, and we use this information to strategically improve the identifiability of the building model. Finally, we present a decentralized identification scheme to reduce the computational effort and time needed to identify large buildings. The second half of this thesis discusses the challenge of using uncertain building models to reliably control building temperature. Under real conditions, building models may not match the dynamics of the building, which directly causes increased building energy consumption and poor thermal comfort. To reduce the impact of model uncertainty on building control, we pose the model-based building control problem as a robust control problem using well-known H∞ control methods. Furthermore, we introduce a tuning law to reduce the conservativeness of a robust building control strategy in the presence of high model uncertainty, in both centralized and decentralized building control frameworks.
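The grey-box models at issue are typically low-order resistor-capacitor (RC) networks; a minimal first-order "1R1C" zone sketch (parameter values invented) shows what an identification scheme must estimate:

import numpy as np

def simulate_1r1c(T0, T_out, q_hvac, R=2e-3, C=1e7, dt=60.0):
    """Forward-Euler simulation of C * dT/dt = (T_out - T)/R + q_hvac.
    R [K/W] and C [J/K] are the grey-box parameters to identify from
    measured outdoor temperature T_out and HVAC heat input q_hvac."""
    T = np.empty(len(T_out) + 1)
    T[0] = T0
    for k in range(len(T_out)):
        T[k + 1] = T[k] + (dt / C) * ((T_out[k] - T[k]) / R + q_hvac[k])
    return T

Identifiability then asks which of the many R's and C's in a multi-zone network can actually be recovered from the available input-output data, and the decentralized scheme estimates them zone by zone.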
45

Li, Dong. "Scalable and Energy Efficient Execution Methods for Multicore Systems." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26098.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Multicore architectures impose great pressure on resource management. The exploration spaces available for resource management increase explosively, especially for large-scale high-end computing systems. The availability of abundant parallelism causes scalability concerns at all levels. Multicore architectures also impose pressure on power management: growth in the number of cores causes continuous growth in power. In this dissertation, we introduce methods and techniques to enable scalable and energy-efficient execution of parallel applications on multicore architectures. We study strategies and methodologies that combine dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) for the hybrid MPI/OpenMP programming model. Our algorithms yield substantial energy saving (8.74% on average and up to 13.8%) with either negligible performance loss or performance gain (up to 7.5%). To save additional energy for high-end computing systems, we propose a power-aware MPI task aggregation framework. The framework predicts the performance effect of task aggregation in both computation and communication phases and its impact in terms of execution time and energy of MPI programs. Our framework provides accurate predictions that lead to substantial energy saving through aggregation (64.87% on average and up to 70.03%) with tolerable performance loss (under 5%). As we aggregate multiple MPI tasks within the same node, we face the scalability concern of memory registration for high-performance networking. We propose a new memory registration/deregistration strategy to reduce registered memory on multicore architectures with helper threads. We investigate design policies and performance implications of the helper thread approach. Our method efficiently reduces registered memory (23.62% on average and up to 49.39%) and avoids memory registration/deregistration costs for reused communication memory. Our system enables the execution of application input sets that could not run to completion under the memory registration limitation.
Ph. D.
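In the simplest view, the combined DCT/DVFS decision is an energy-minimizing search over configurations; the schematic sketch below (the prediction models are assumed given, and this is not the dissertation's algorithm) conveys the idea.

def best_config(predict_time, predict_power, thread_counts, frequencies):
    """Pick the (threads, frequency) pair minimizing predicted energy,
    using energy = predicted average power * predicted execution time."""
    return min(((t, f) for t in thread_counts for f in frequencies),
               key=lambda cfg: predict_power(*cfg) * predict_time(*cfg))

A performance constraint (e.g., bounding the tolerated slowdown) would enter as a filter on the candidate set before the minimization.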
46

Helal, Ahmed Elmohamadi Mohamed. "Automated Runtime Analysis and Adaptation for Scalable Heterogeneous Computing." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96607.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In the last decade, there have been tectonic shifts in computer hardware as sequential CPU performance has reached its physical limits. As a consequence, current high-performance computing (HPC) systems integrate a wide variety of compute resources with different capabilities and execution models, ranging from multi-core CPUs to many-core accelerators. While such heterogeneous systems can enable dramatic acceleration of user applications, extracting optimal performance via manual analysis and optimization is a complicated and time-consuming process. This dissertation presents graph-structured program representations to reason about the performance bottlenecks on modern HPC systems and to guide novel automation frameworks for performance analysis and modeling and runtime adaptation. The proposed program representations exploit domain knowledge and capture the inherent computation and communication patterns in user applications, at multiple levels of computational granularity, via compiler analysis and dynamic instrumentation. The empirical results demonstrate that the introduced modeling frameworks accurately estimate the realizable parallel performance and scalability of a given sequential code when ported to heterogeneous HPC systems. As a result, these frameworks enable efficient workload distribution schemes that utilize all the available compute resources in a performance-proportional way. In addition, the proposed runtime adaptation frameworks significantly improve the end-to-end performance of important real-world applications which suffer from limited parallelism and fine-grained data dependencies. Specifically, compared to the state-of-the-art methods, such an adaptive parallel execution achieves up to an order-of-magnitude speedup on the target HPC systems while preserving the inherent data dependencies of user applications.
Doctor of Philosophy
Current supercomputers integrate a massive number of heterogeneous compute units with varying speed, computational throughput, memory bandwidth, and memory access latency. This trend represents a major challenge to end users, as their applications have been designed from the ground up to primarily exploit homogeneous CPUs. While heterogeneous systems can deliver several orders of magnitude speedup compared to traditional CPU-based systems, end users need extensive software and hardware expertise as well as significant time and effort to efficiently utilize all the available compute resources. To streamline such a daunting process, this dissertation presents automated frameworks for analyzing and modeling the performance on parallel architectures and for transforming the execution of user applications at runtime. The proposed frameworks incorporate domain knowledge and adapt to the input data and the underlying hardware using novel static and dynamic analyses. The experimental results show the efficacy of the introduced frameworks across many important application domains, such as computational fluid dynamics (CFD), and computer-aided design (CAD). In particular, the adaptive execution approach on heterogeneous systems achieves up to an order-of-magnitude speedup over the optimized parallel implementations.
47

Blanpain, Yannick. "A scalable and deployable approach for achieving fair rate allocation in high speed networks." Thesis, Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/14868.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
48

Fitje, Tryggve. "Monetary macroeconomic modeling, simulation and control." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10416.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:

In this report a dynamic macroeconomic model is built incrementally. The economic units are built on an intuitive tank analogy that can be described with first-order differential equations. The units are realised as block diagrams in Mathworks Simulink®. Simulink block diagrams give an explicit mathematical formulation of the model, making all of the equations the model consists of readily available. The economy's ability to service debt in different settings is tested. The first part of the model is kept simple, so as to make it understandable to people with different backgrounds. Model realism is then given increased priority, but it is still considered vital for the usefulness of the model that the reader truly understands it; the fundamental point of the thesis is made before model complexity becomes an issue. The foundation for the thesis, the work of Trond Andresen, is presented in a narrative manner, with simple mathematics at its base. The model is built by adding new elements throughout the report, giving an understanding of its individual parts before they are merged together and the complexity increases. A system that models economic mood is implemented; it amplifies the current trend in the economy and adds new dynamics because it lags the actual state of the economy. Real-economy deposits are implemented as a "counterweight" to the financial sector's loan issuing. Insolvency in both the real economy and the financial sector is implemented. Both make up balancing loops, setting limits on real-economy earnings on deposits and on banks' earnings on issued loans, and both prevent the model from displaying impossible dynamics under extreme conditions. Bank lending is limited by a capital requirement set by the Bank for International Settlements (BIS). A regulating agent is also implemented to guide the economy and keep it on track: it imposes a reserve requirement on the financial sector, which it makes the bank sector uphold through open-market operations. Finally, a stock market with innate oscillations, developed by Trond Andresen, is implemented in the model. It is not connected to the economy via vessel dynamics; rather, it influences the economy through the implemented mood system. This makes the economy less robust and induces oscillations in otherwise static settings. The economy appears very fragile: minor variations in interest rates make for large fluctuations in the debt burden. The model also exhibits path dependence; minor changes in parameters can lock the economy into states from which it does not recover. Modelling dynamic systems, especially non-technical ones, is an imperfect, trial-and-error science. The model will always contain flaws and shortcomings; the best we can do is minimize the flaws in the elements that matter for answering the questions we want answered.
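The tank analogy maps onto a one-line differential equation; a minimal sketch of one economic unit (all constants invented) is:

import numpy as np

def money_tank(inflow, tau=5.0, M0=100.0, dt=0.01, T=50.0):
    """Money stock M of one sector as a draining vessel:
    dM/dt = inflow(t) - M/tau, a first-order lag where outflow
    (spending) is proportional to the stock held."""
    n = int(T / dt)
    M = np.empty(n)
    M[0] = M0
    for k in range(n - 1):
        M[k + 1] = M[k] + dt * (inflow(k * dt) - M[k] / tau)
    return M

Chaining such vessels, one unit's outflow feeding another's inflow, is the kind of structure the report's Simulink block diagrams express graphically.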

49

Reite, Karl Johan. "Modeling and Control of Trawl Systems." Doctoral thesis, Norwegian University of Science and Technology, Department of Marine Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1706.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:

This thesis is motivated by the possible benefits of a more precise trawl control system with respect to both environmental impact and fishing efficiency. It considers how the control performance of a pelagic trawl system can be improved, partly by introducing a control architecture tailor-made for the trawl system subject to industrial requirements, and partly by developing a trawl door control concept.

A mathematical model of the trawl system is developed, including an accurate model of the hydrodynamic forces on the trawl doors. This model estimates both the steady state and the transient forces on trawl doors moving in six degrees of freedom. The steady state hydrodynamic forces are based on wind tunnel experiments. To estimate the transient forces, a software code based on potential theory is developed. This software estimates the time-dependency of the forces from circulation about the foil, the angular damping forces, and the forces from relative accelerations between the fluid and the trawl door.

Various concepts for trawl door control are evaluated, both analytically, by simulations, and by towing tank experiments. Based on the results, a new trawl door control concept is proposed. The concept is developed to fulfill the demands on energy consumption, robustness and control performance. Because of the contradictory demands on performance, stability and energy efficiency, the control concept is improved using numerical optimization. The optimization is based on time-domain simulations of the trawl system.

The design of an overall trawl control architecture taking advantage of the trawl door control system is presented. It takes industrial constraints into account, such as the energy supply on the trawl doors. The control system is based on model predictive control and accommodates complex objectives, constraints and process models. The use of model predictive control is made possible by letting PID plant controllers act as a layer between the model predictive controller and the trawl system; the model predictive controller thus operates on a stable and predictable system with no fast dynamics. To reduce the energy consumption of the trawl doors, conventional feedback control is avoided on this part of the system, and stepwise feedforward control is employed instead.
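The layering described here can be sketched compactly; the gains, signals and feedforward term below are invented placeholders rather than the thesis's tuned design.

class PID:
    """Fast inner-loop plant controller; the MPC above it only issues
    setpoints, so the MPC sees slow, stable dynamics."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.e_prev = 0.0

    def update(self, e):
        self.i += e * self.dt
        d = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.kp * e + self.ki * self.i + self.kd * d

def layered_step(setpoint, pid, y_meas, u_ff=0.0):
    """One sample of the architecture: track the MPC's setpoint with PID,
    plus a stepwise feedforward term u_ff for the energy-constrained
    trawl doors, where continuous feedback is avoided."""
    return pid.update(setpoint - y_meas) + u_ff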

The main contributions in this work are the mathematical modeling of the hydrodynamic forces on a trawl door, the design of a control architecture tailor-made for trawl system control and the method for optimization of the trawl door control concept.

50

Agi, Egemen. "Mathematical Modeling Of Gate Control Theory." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611468/index.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
The purpose of this thesis is to model gate control theory, which explains the modulation of pain signals, with the motivation of finding possible new targets for pain treatment and novel control algorithms for engineering practice. The current study differs from previous modeling attempts in that the morphologies of the neurons that constitute the gate control system are also included in the model, so that the structure-function relationship can be observed. A model of an excitable neuron is constructed and its response to different perturbations is investigated. The simulation results of the excitable cell model are in good agreement with experimental findings obtained using crayfish. The model encodes stimulation intensity as firing frequency and, like real neurons, it can sum sub-threshold inputs and fire action potentials. Moreover, the model predicts depolarization block. The absolute refractory period of the single-cell model is found to be 3.7 ms. The developed model produces no action potentials when the sodium channels are blocked by tetrodotoxin, and the frequency and amplitude of generated action potentials increase when the reversal potential of Na is increased. In addition, signal propagation along myelinated and unmyelinated fibers is simulated and input-current intensity-frequency relationships are constructed for both fiber types. The myelinated fiber starts to conduct when the input current is about 400 pA, whereas the minimum threshold for the unmyelinated fiber is around 1100 pA. The propagation velocity in a 1 cm long unmyelinated fiber is found to be 0.43 m/s, whereas the velocity along a myelinated fiber of the same length is 64.35 m/s. The developed synapse model exhibits the summation and tetanization properties of real synapses while simulating the time dependence of neurotransmitter concentration in the synaptic cleft. Morphometric analysis of the neurons that constitute the gate control system is performed in order to derive electrophysiological properties from the neurons' dimensions. All of the individual parts of the gate control system are then connected and the whole system is simulated. For different connection configurations, the simulations predict the observed phenomena of pain suppression: if the myelinated fiber is dissected, the projection neuron generates action potentials that would be conveyed to the brain and elicit pain, whereas if the unmyelinated fiber is dissected, the projection neuron remains silent. All simulations in this study are performed using Simulink.
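The rate coding this abstract reports is the generic behavior of leaky excitable-cell models; a far simpler leaky integrate-and-fire sketch (all constants invented, much cruder than the thesis's morphology-based model) reproduces it.

def lif_firing_rate(I, tau=10e-3, R=1e8, V_th=0.015, dt=1e-4, T=1.0):
    """Leaky integrate-and-fire: tau * dV/dt = -V + R*I, with
    spike-and-reset at V_th. Stronger input current I [A] gives a
    higher firing frequency: stimulus intensity encoded as rate."""
    V, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        V += (dt / tau) * (-V + R * I)
        if V >= V_th:
            V, spikes = 0.0, spikes + 1
    return spikes / T  # firing frequency in Hz

With these invented constants the rheobase is V_th/R = 150 pA: below it the model stays silent, above it the intensity-frequency curve rises, the same qualitative relationship the thesis constructs for its myelinated and unmyelinated fibers.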

To the bibliography