Dissertations on the topic "MM models"

To see other types of publications on this topic, follow the link: MM models.

Format your citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 dissertations for your research on the topic "MM models".

Next to every entry in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when this information is available in the metadata.

Browse dissertations from many different disciplines and organise your bibliography correctly.

1

GRECO, CLAUDIO. "A DFT and QM/MM Investigation on Models Related to the [FeFe]-Hydrogenase Active Site." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2007. http://hdl.handle.net/10281/45775.

Full text available
Abstract:
In the present thesis, a theoretical investigation is described regarding hydrogenases - enzymes that are able to catalyze the reversible oxidation of molecular hydrogen: H2 ⇌ 2H+ + 2e−. This very simple reaction could have fundamental importance for the possible future development of a hydrogen-based economy. However, the current approaches to molecular hydrogen oxidation imply the use of very expensive platinum-containing catalysts, while H2 production at the industrial level still depends on hydrocarbons. In this framework, hydrogenases represent a model for the development of new-generation catalysts, as they contain only inexpensive transition metal cofactors (iron and/or nickel ions) and are able to evolve hydrogen directly from acidic aqueous solutions supplied with a convenient source of electrons. The present work deals with the characterization of a specific class of hydrogenases, termed [FeFe]-hydrogenases. These enzymes contain in their active site a peculiar Fe6S6 cluster - the so-called H-cluster - which can be ideally subdivided into two distinct portions: a classical Fe4S4 moiety, and a Fe2S2 subcluster (commonly termed [2Fe]H) bearing CO and CN− ligands; these subclusters are linked to each other through the sulphur atom of a cysteine residue. The two iron atoms of the binuclear sub-site are termed proximal (Fep) or distal (Fed), depending on their positions with respect to the Fe4S4 moiety. Notably, one of the carbonyl groups included in the [2Fe]H subsite bridges the Fep and Fed centers, and it moves to a semibridging position when the enzyme is in its completely reduced form. The coordination environment of the iron ions included in the binuclear cluster is completed by a bidentate ligand which has been proposed to correspond either to a di(thiomethyl)amine (DTMA) or to a propanedithiolate (PDT) residue.
Direct metal-hydrogen interaction at the binuclear sub-site is required for the enzymatic activity of [FeFe]-hydrogenases; however, there is still some debate about the way in which the interaction takes place, and about the catalytic mechanism leading to H2 splitting/formation. In fact, despite the large number of theoretical and experimental investigations carried out to clarify the catalytic mechanism of [FeFe]-hydrogenases, a direct comparison between the two more plausible routes for dihydrogen evolution/oxidation - i.e. a path involving the formation of metal-bound terminal hydrides, as opposed to a route that implies the presence of a hydride bridging Fep and Fed - was still lacking. Such a study has therefore been carried out in our laboratories, using computational models of the H-cluster binuclear subsite in the context of a Density Functional Theory (DFT) representation; this work is presented in Chapter 2. It turns out that H2 formation can take place according to reaction pathways that imply initial protonation of the Fe(I)-Fe(I) form of [2Fe]H, leading to a formal Fe(II)-Fe(II) hydride species, subsequent monoelectron reduction to an Fe(II)-Fe(I) species, further protonation, and H2 release. A comparison of pathways involving either the initial protonation of Fed or protonation of the Fep-Fed bond also shows that the former pathway is characterized by smaller activation barriers, as well as a downhill free-energy profile, suggesting that it could be the H2 production pathway operative in the enzyme. The next chapter of the present thesis is devoted to the characterization of CO-mediated enzyme inhibition; indeed, the enzyme active site is able to bind exogenous carbon monoxide, and such an interaction impairs the catalytic process of H2 production/oxidation.
Experimental and computational studies have converged towards the assignment of a Fe(I)Fe(II) state to the CO-inhibited binuclear sub-cluster, while there is still much debate about the disposition of CO and CN− ligands around Fed in this form. Our analysis is carried out using a hybrid quantum mechanical/molecular mechanical (QM/MM) approach; this means that an all-atom model of the enzyme is used for studying different geometrical configurations of the active site. This allows us to show that the protein environment surrounding the H-cluster plays a crucial role in influencing the mechanism of CO-inhibition; as a result, the CO-inhibited H-cluster is expected to be characterized by a terminal CO ligand trans to the μ-CO group on Fed. A QM/MM approach is also used in order to unravel key issues regarding the activation of the enzyme from its completely oxidized inactive state (Hox-inact, an enzyme form in which the [2Fe]H subcluster attains the Fe(II)Fe(II) redox state), and the influence of the protein environment on the structural and catalytic properties of the H-cluster (see Chapter 4). Our results show that, in Hox-inact, a water molecule is bound to Fed. The computed QM/MM energy values for water binding to the diferrous subsite are in fact over 17 kcal mol−1; however, the affinity towards water decreases by one order of magnitude after a one-electron reduction of Hox-inact, thus leading to release of the coordinated water from the H-cluster.
The investigation of a catalytic cycle of the [FeFe]-hydrogenase that implies formation of a terminal hydride ion and a DTMA molecule acting as acid/base catalyst indicates that all steps have reasonable reaction energies, and that the influence of the protein on the thermodynamic profile of H2 production catalysis is not negligible; QM/MM results show that the interactions between the Fe2S2 subsite and the protein environment could give rise to structural rearrangements of the H-cluster that are functional for catalysis, provided that the bidentate ligand that bridges the iron atoms in the binuclear subsite is actually a DTMA residue. In the last two studies included in the present thesis (Chapter 5 and Chapter 6), DFT investigations are presented regarding the characterization of two synthetic complexes that represent structural and functional models of the [2Fe]H cluster: Fe2(S2C3H6)(CO)6 and (S2C3H6)[Fe2(CO)5P(NC4H8)3]. Both of them are known to be able to catalyze proton reduction in an electrochemical cell, but the details of the electrocatalytic mechanisms leading to H2 production needed clarification. As for Fe2(S2C3H6)(CO)6 (a), it is shown that, in the early stages of the catalytic cycle, a neutral μ-H adduct is formed; mono-electron reduction and subsequent protonation can give rise to a diprotonated neutral species (a-μH-SH), which is characterized by a μ-H group, a protonated sulfur atom and a CO group bridging the two iron centers, in agreement with experimental IR data indicating the formation of a long-lived μ-CO species. H2 release from a-μH-SH and its less stable isomer a-H2 is kinetically unfavourable, while the corresponding monoanionic compounds (a-μH-SH− and a-H2−) are more reactive in terms of dihydrogen evolution, in agreement with experimental data.
As far as (S2C3H6)[Fe2(CO)5P(NC4H8)3] (A) is concerned, experimental results have suggested that the presence of the electron-donor P(NC4H8)3 ligand in A could favour the formation of a μ-CO species similar to that observed in the enzymatic cluster. However, insight into the structural features of key catalytic intermediates deriving from reduction and protonation of A was still lacking. Thus, in Chapter 6 we present results obtained using Density Functional Theory to evaluate structures, relative stabilities and spectroscopic properties of several species relevant for the electrocatalytic H2 evolving process. The results enable us to unravel the structure of the μ-CO complex experimentally detected after monoelectronic reduction of A. Moreover, we show that the introduction of the large electron-donor ligand P(NC4H8)3 in the biomimetic complex does not favour the stabilization of terminal-hydride adducts, which are expected to be very reactive in terms of H2 production. The comparison of our findings with previous theoretical and experimental results obtained on similar model complexes suggests that the introduction of an electron-donor ligand as good as P(NC4H8)3, but less sterically demanding, could represent a better choice to facilitate the formation of μ-CO complexes more closely resembling the structure of the enzymatic cluster.
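The quoted water-binding energy of over 17 kcal/mol and the order-of-magnitude drop in affinity on reduction can be related through elementary thermodynamics. As a rough illustration (assuming the values behave like binding free energies at 298 K, which the abstract does not state), each factor of ten in an equilibrium binding constant corresponds to RT·ln(10), about 1.4 kcal/mol:

```python
import math

R = 1.987204e-3   # gas constant in kcal/(mol*K)
T = 298.15        # room temperature in K

def binding_constant(delta_g):
    """Equilibrium binding constant from a binding free energy
    (kcal/mol, more negative = tighter binding)."""
    return math.exp(-delta_g / (R * T))

# Free energy corresponding to one order of magnitude in affinity:
per_decade = R * T * math.log(10.0)
print(f"{per_decade:.2f} kcal/mol per factor of 10 in affinity")  # ~1.36
```

On this scale, a tenfold affinity change is a small fraction of the reported 17 kcal/mol binding energy, which is why the reduced state can still retain, then release, the water ligand.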
2

Walker, Christina H. "Monte Carlo radiation transfer studies of protoplanetary environments." Thesis, University of St Andrews, 2007. http://hdl.handle.net/10023/218.

Full text available
Abstract:
Monte Carlo radiation transfer provides an efficient modelling tool for probing the dusty local environment of young stars. Within this thesis, such theoretical models are used to study the disk structure of objects across the mass spectrum - young low-mass Brown Dwarfs, solar-mass T-Tauri stars, intermediate-mass Herbig Ae stars, and candidate B-stars with massive disks. A Monte Carlo radiation transfer code is used to model images and photometric data in the UV to mm wavelength range. These models demonstrate how modelling techniques have been updated in an attempt to reduce the number of unknown parameters and extend the diversity of objects that can be studied.
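The core of a Monte Carlo radiation transfer code of this kind is the random sampling of photon free paths from the exponential attenuation law. A minimal 1-D slab sketch (not the thesis code; the geometry and scattering are deliberately simplified, and photons leaving the illuminated face are simply counted as lost):

```python
import math, random

def escape_fraction(total_tau, albedo, n_photons=100_000, seed=1):
    """Monte Carlo transmission of a uniform 1-D slab of optical depth
    total_tau with isotropic scattering probability `albedo`.
    Free paths are sampled from exp(-tau): tau = -ln(1 - xi)."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        x, mu = 0.0, 1.0            # optical-depth coordinate, direction cosine
        while True:
            x += mu * -math.log(1.0 - rng.random())   # random free path
            if x >= total_tau:      # left through the far face
                escaped += 1
                break
            if x < 0.0 or rng.random() > albedo:
                break               # back-scattered out, or absorbed
            mu = 2.0 * rng.random() - 1.0             # isotropic re-emission
    return escaped / n_photons

# With no scattering this reduces to pure attenuation, exp(-tau):
print(escape_fraction(1.0, 0.0))   # ~0.368
```

Real dust codes extend the same loop to 3-D density grids, wavelength-dependent opacities and anisotropic scattering phase functions.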
3

Lange, Adrian W. "Multi-layer Methods for Quantum Chemistry in the Condensed Phase: Combining Density Functional Theory, Molecular Mechanics, and Continuum Solvation Models." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1329752615.

Full text available
4

Caricato, Marco. "Theoretical Models to describe Time-Dependent and Nonequilibrium Solvation." Doctoral thesis, Scuola Normale Superiore, 2005. http://hdl.handle.net/11384/85801.

Full text available
5

Huynh, Bao Tuyen. "Estimation and feature selection in high-dimensional mixtures-of-experts models." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC237.

Full text available
Abstract:
This thesis deals with the problem of modeling and estimation of high-dimensional mixture-of-experts (MoE) models, towards effective density estimation, prediction and clustering of such heterogeneous and high-dimensional data. We propose new strategies based on regularized maximum-likelihood estimation (MLE) of MoE models to overcome the limitations of standard methods, including MLE with Expectation-Maximization (EM) algorithms, and to simultaneously perform feature selection so that sparse models are encouraged in such a high-dimensional setting. We first introduce a mixture-of-experts parameter estimation and variable selection methodology, based on l1 (lasso) regularizations and the EM framework, for regression and clustering suited to high-dimensional contexts. Then, we extend the method to regularized mixture-of-experts models for discrete data, including classification. We develop efficient algorithms to maximize the proposed l1-penalized observed-data log-likelihood function. Our proposed strategies enjoy the efficient monotone maximization of the optimized criterion and, unlike previous approaches, they do not rely on approximations of the penalty functions, avoid matrix inversion, and exploit the efficiency of the coordinate ascent algorithm, particularly within the proximal Newton-based approach.
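The coordinate-ascent updates described here hinge on the proximal operator of the l1 penalty, the soft-thresholding function. A minimal single-expert stand-in (plain lasso coordinate descent on a linear model, not the thesis' MoE algorithm) shows the mechanism:

```python
import numpy as np

def soft_threshold(z, gamma):
    """Proximal operator of the l1 penalty: the exact coordinate-wise
    solution used inside penalized EM / coordinate-ascent schemes."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam*||b||_1.
    Each coordinate update is exact; no smoothing of the penalty is needed."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta
```

In the penalized-EM setting of the thesis, updates of this form appear inside the M-step for the gating and expert parameters; the monotonicity of the overall criterion follows from each coordinate update being an exact maximizer.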
6

Pérez, María del Carmen Marín. "Benchmarking and applications of a computational photobiology tool for design of novel and highly fluorescent rhodopsin proteins." Doctoral thesis, Università di Siena, 2019. http://hdl.handle.net/11365/1070289.

Full text available
Abstract:
In recent years, the world economy and technological advancement have been transformed by genomics, which allows us to study, design and build biologically relevant molecules. Genomics is already deeply embedded in industries as diverse as pharmaceutical, food and agricultural, environmental and bio-tech in general. Fast and cheap tools for gene sequencing, protein expression and analysis are commonly used for high-throughput genomic-related studies. However, due to experimental difficulties and long time scales (e.g., protein crystallization), protein structure determination, and thus the fundamental structure-function rationalization, cannot presently be performed at the same fast pace: a fact that is slowing down the discovery of proteins with new features, as well as ex novo design. These difficulties are particularly felt in the field of photobiology, where the crystal structure of bovine rhodopsin (Rh, the retinal dim-light visual photoreceptor) still remains the only structure of a vertebrate photoreceptor available for photobiological studies since the year 2000. Rhodopsins constitute a class of light-triggered proteins that can be found throughout the whole spectrum of living organisms, and represent the perfect blueprint for building light-activated bio-molecular machines. In principle, the problem of not having a sufficient number of rhodopsin molecular structures could be circumvented and overcome with the construction of accurate atomistic computer models of the set of studied photoreceptors, which would allow: (i) in silico fundamental structure-function characterization, (ii) thorough and detailed screening of mutant series, and even (iii) ex novo design. Nevertheless, such models should also be constructed using a fast, relatively cheap, reliable and standardized protocol of known accuracy.
In this thesis, we refine and test the Automatic Rhodopsin Modeling (ARM) computational protocol, which we demonstrate to be capable of helping to address the above issues. The protocol has the primary target of generating congruous quantum mechanical/molecular mechanical (QM/MM) models of rhodopsins, with the aim of facilitating systematic rhodopsin-mutant studies. The cornerstone of this thesis is the validation of the ARM protocol as a successful attempt to provide a basis for the standardization and reproducibility of rhodopsin QM/MM models, aimed at studying the behaviour of photoactive molecules. First, we validate the ARM protocol, which employs a CASPT2//CASSCF/AMBER scheme, for a benchmark set of rhodopsins from different biological kingdoms. We show that ARM is able to reproduce and predict absorption trends in rhodopsin protein sets, with blue-shifted values not much displaced (a few kcal/mol) from the observed data. Second, we show how to use this protocol for the better design of novel mutations for applications in optogenetics, an innovative biological tool aimed at visualizing and controlling neuron signals through light. Two different microbial rhodopsins are studied: Krokinobacter eikastus rhodopsin 2 (KR2), a light-driven outward sodium pump, and Anabaena sensory rhodopsin (ASR), a light sensor. In both cases, the qualitative and quantitative information acquired from the ARM-obtained QM/MM models reveals the nature (electrostatic or steric) and the extent of the mutation-induced changes in the retinal configuration, which, in turn, cause the shift in the absorption wavelength of the corresponding mutants. Finally, we explore the fluorescence of ASR mutants, particularly useful for the visualization of neuronal activity.
The target of this work is to use QM/MM simulations to understand the opposite behaviour observed in two blue-shifted ASR mutants: one presents negligible fluorescence, while the other displays fluorescence enhanced by one order of magnitude with respect to the wild-type protein. Our QM/MM models show that specific electrostatic and steric interactions control the character mixing of different electronic states, opening a path to the rational engineering of highly fluorescent rhodopsins. In conclusion, within the limits of its automation, the ARM protocol allows the study of ground and excited states of a specific class of photoactive proteins: rhodopsins. This opens the way to an improved molecular-level understanding of rhodopsin photochemistry and photobiology. The results obtained highlight the importance of having a standardized, effective and automatic protocol, which renders this kind of study more efficient and accessible by drastically shortening the time required to produce accurate and congruous QM/MM models. For the above reasons, the author of the present thesis believes that ARM stands as an important cogwheel in the virtuous cycle between experimental and theoretical work, aimed at preparing the photobiological tools for tomorrow's needs.
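Absorption trends of the kind discussed here are usually reported as vertical excitation energies, and shifts "of a few kcal/mol" translate directly into wavelength shifts. A small illustrative conversion (the 500 nm reference value is only an approximation for bovine rhodopsin, whose absorption maximum is near 498 nm):

```python
HC_EV_NM = 1239.84193    # Planck constant x speed of light, in eV*nm
KCAL_PER_EV = 23.0605    # 1 eV expressed in kcal/mol

def excitation_to_lambda(e_kcal):
    """Absorption wavelength (nm) from a vertical excitation energy (kcal/mol)."""
    return HC_EV_NM / (e_kcal / KCAL_PER_EV)

# A pigment absorbing near 500 nm has an excitation energy of ~57 kcal/mol;
# a +2 kcal/mol blue shift moves its absorption maximum to roughly:
e0 = HC_EV_NM / 500.0 * KCAL_PER_EV
print(round(excitation_to_lambda(e0 + 2.0), 1))  # ~483 nm
```

This is why errors of only a few kcal/mol in a QM/MM excitation energy already correspond to clearly measurable (tens of nm) displacements of the absorption maximum.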
7

Mohamed, Imran. "The application of negative refractive index metamaterials to mm and sub-mm wavelength instrumentation." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/the-application-of-negative-refractive-index-metamaterials-to-mm-and-submm-wavelength-instrumentation(2f650eb9-27bb-4865-81a5-d7f086c6cf22).html.

Full text available
Abstract:
The manipulation of electromagnetic radiation via the use of periodic arrays of sub-wavelength metallic structures (unit cells), nowadays named "metamaterials", has been known in the microwave engineering community for over fifty years. In the last decade, interest in such sub-wavelength structures grew, mainly due to their ability to interact with radiation in ways natural materials could not, e.g. by producing a negative refractive index (NRI). This project sought to determine whether NRI metamaterials could provide benefits to the mm and sub-mm wavelength astronomical instrumentation currently in use. To aid rapid design and optimisation of devices made from a cascaded set of metamaterial unit cells, a hybridised Transmission Line (TL) model was developed, where the matrix components used in the TL model were "seeded" with data taken from a Finite Element Method (FEM) model of a simpler structure. A comparison between the two found that the TL model was capable of providing results that differed from the FEM model by no more than ~10^-4 for the transmitted intensity, |S21|^2, and by less than 1° for the transmitted phase, arg(S21). A slab of material with a refractive index n = −1 can exhibit an effect known as "superlensing". A three-unit-cell-thick NRI slab was designed, manufactured and experimentally tested. It was found to be capable of producing an NRI across a fractional band of at least 21%, producing a refractive index of n = −1 at around 90 GHz. The experimental and simulated transmission and reflection data show good agreement with each other. A highly birefringent air-gap Half Wave Plate (HWP) was designed, manufactured and experimentally tested. Defining its useful bandwidth as the region where the phase difference is equal to (−180 ± 3)°, a single HWP had a fractional bandwidth of 0.3%. The bandwidth was extended by using the Pancharatnam method, developed in the 1950s to produce highly achromatic optical wave plates.
The method, however, is applicable to other frequencies and polarisation control technologies. By optimising a three-HWP TL-based Pancharatnam model, the HWP's modelled fractional bandwidth increased to 6.6%. Experimental data agree with the model, showing a plateauing of the phase difference at −180°. A highly birefringent polypropylene-embedded Quarter Wave Plate (QWP) was also designed, manufactured and tested. Defining its useful bandwidth as the region where the differential phase is (90 ± 2)°, a single QWP produced a fractional bandwidth of 0.6%. By optimising a four-QWP TL-based Pancharatnam model, the QWP's performance was improved to 7.8%. Experimental data, whilst not in complete agreement with the model, do show a reduction in the gradient of the phase difference where it crosses 90°. It was found that current designs for NRI metamaterials fall short of the standards required for use in quasi-optical astronomical instrumentation, due to high dispersion and absorption. The high dispersion limits NRI metamaterials to uses in instruments built for narrowband applications. Whilst the Pancharatnam method can increase bandwidths where a flat differential phase response is required, this comes at the cost of increased absorption. To reach their full potential, NRI metamaterials' lossiness must be reduced, possibly by cryogenic means or by the use of "active" metamaterials.
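The hybridised Transmission Line model rests on standard two-port cascading: each unit cell is represented by an ABCD matrix, cells are chained by matrix multiplication, and S21 is read off at the end. A generic sketch of that machinery (the element values are illustrative, not taken from the thesis):

```python
import numpy as np

Z0 = 377.0  # free-space wave impedance (ohms) for normal-incidence quasi-optics

def line_abcd(beta_l, z_line):
    """ABCD matrix of a lossless transmission-line section with electrical
    length beta_l (radians) and characteristic impedance z_line."""
    return np.array([[np.cos(beta_l), 1j * z_line * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / z_line, np.cos(beta_l)]])

def shunt_abcd(y):
    """ABCD matrix of a shunt admittance, e.g. one metamaterial grid layer."""
    return np.array([[1.0, 0.0], [y, 1.0]])

def s21(abcd, z0=Z0):
    """Transmission coefficient of a two-port between matched z0 feeds."""
    a, b, c, d = abcd.ravel()
    return 2.0 / (a + b / z0 + c * z0 + d)

# A cascaded stack of identical unit cells is just a matrix product:
cell = shunt_abcd(2e-3j) @ line_abcd(0.3, Z0)
stack = np.linalg.matrix_power(cell, 3)
print(abs(s21(stack)) ** 2)  # transmitted intensity |S21|^2 of the stack
```

"Seeding" the TL model, as described above, amounts to replacing these idealised matrix entries with values extracted from an FEM simulation of one cell, after which the cheap matrix cascade replaces the expensive full-wave solve.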
8

Laino, Teodoro. "Multigrid QM/MM approaches in ab initio molecular dynamics." Doctoral thesis, Scuola Normale Superiore, 2006. http://hdl.handle.net/11384/85799.

Full text available
9

Orrego, Rodrigo Barbosa Sousa. "CORE-MM: um modelo de crowdsourcing para cidades inteligentes baseado em gamificação." Universidade do Vale do Rio dos Sinos, 2017. http://www.repositorio.jesuita.org.br/handle/UNISINOS/6813.

Full text available
Abstract:
The emergence of cities that use state-of-the-art technology concepts in various areas has been made possible by advances in the development of information and communication technology systems. Advances in wireless communication technologies, and in information and communication technologies in general, offer opportunities for creating a crowdsourcing model, based on gamification for smart cities, to register and update a city's resources, aiming to broaden the independence of the people who need those resources and to improve the quality of life of citizens. This study addresses the problem of resource management for smart cities using crowdsourcing combined with gamification. A model called CORE-MM is proposed, which allows the use of crowdsourcing techniques so that the management of city resources is done by the interested citizens themselves, without necessarily having to rely on an organization or on the public administration, together with gamification techniques to encourage participation in this resource-management process. CORE-MM proposes the use of crowdsourcing integrated with gamification to manage the resources of a smart city, with two interdependent objectives: to motivate the use of the system by its users, and to encourage their participation in the sharing and management of information. The name CORE-MM stands for COllaborative REsource Management Model.
10

Rahman, Farhana. "Performance evaluation of 4.75-mm NMAS Superpave mixture." Diss., Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/3893.

Full text available
11

Kyösti, P. (Pekka). "Radio channel modelling for 5G telecommunication system evaluation and over the air testing." Doctoral thesis, Oulun yliopisto, 2018. http://urn.fi/urn:isbn:9789526219035.

Full text available
Abstract:
This thesis discusses radio channel modelling for evolving radio access, and the use of those models in practice in a setup for the radiated testing of radio devices. New telecommunication systems are constantly being developed to address continuously increasing wireless communication needs. The first pieces of the intensively developed fifth generation (5G) networks and devices are planned to be available, in some form, approximately by 2020. An interesting feature of 5G concerning propagation, and consequently channel modelling, is the expected utilization of frequencies clearly higher than those of legacy cellular systems. This work aims to define a channel model for the simulated evaluation of the coming 5G systems. New requirements for the channel model are identified and addressed. Further, over the air (OTA) testing of 5G devices in fading conditions is examined and a new setup is proposed. The test environment aims to reconstruct a time-variant electromagnetic (EM) field around a device under test (DUT), considering the spatial, polarimetric, Doppler, and delay dispersion characteristics specified by the target channel model. Components and key design parameters of the setup are designated. It was found that the proposed map-based channel model is capable of fulfilling the identified requirements. Furthermore, the simulations indicate that the state-of-the-art geometry-based stochastic channel model (GSCM) may give over-optimistic multi-user MIMO (MU-MIMO) performance in an urban micro-cellular environment. The sectored OTA concept was shown to be appropriate for device testing. Key parameters, such as the physical dimensions of the multi-probe anechoic chamber (MPAC) OTA configuration and the number of active probes, were determined by simulations. The 3GPP is the main forum working towards 5G standards. A channel model for 5G evaluations has recently (2016) been specified. The baseline model is a GSCM inherited from the 4G models.
However, the author expects that the proposed map-based models will gain popularity in the future, despite the current widespread use of GSCMs. In the 3GPP working group RAN4 (Radio performance and protocol aspects), the test methods for 5G user equipment (UE) are currently (2017) under investigation. The proposed sectored MPAC method has been contributed to 3GPP and is under consideration there. Even if it is not approved for UE testing, the author expects it to be useful for the performance testing of base stations (BSs).
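The delay dispersion that such a channel model must specify is often illustrated with a tapped delay line, in which each multipath component contributes a delayed, complex-weighted copy of the transmitted signal. A static sketch (no Doppler or time variation, and unrelated to the thesis' map-based model; the tap values are illustrative):

```python
import numpy as np

def apply_tdl(signal, delays, gains, fs):
    """Pass a baseband signal through a static tapped-delay-line channel:
    each multipath component adds a delayed, complex-weighted copy."""
    n_extra = int(round(max(delays) * fs)) + 1
    out = np.zeros(len(signal) + n_extra, dtype=complex)
    for tau, g in zip(delays, gains):
        k = int(round(tau * fs))          # tap delay in samples
        out[k:k + len(signal)] += g * np.asarray(signal, dtype=complex)
    return out

fs = 100e6                            # 100 MHz sampling rate
delays = [0.0, 50e-9, 120e-9]         # tap delays in seconds (illustrative)
gains = [1.0, 0.5j, -0.25]            # complex tap gains (illustrative)
y = apply_tdl(np.ones(4), delays, gains, fs)   # channel output for a probe
```

A fading model would additionally make each gain a random process with a prescribed Doppler spectrum, and a spatial model such as a GSCM or a map-based model would attach an angle of departure and arrival to every tap.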
12

Thuresson, Marcus. "Development and evaluation of a six degrees of freedom model of a 155 mm artillery projectile." Thesis, KTH, Maskinkonstruktion (Inst.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-182753.

Full text of the source
Abstract:
In this master's thesis, the author evaluates a six-degrees-of-freedom model of a 155 mm artillery projectile commonly used by the Swedish Armed Forces and compares it to a modified point-mass trajectory model of the same projectile. Uncertainties were introduced into the six-degrees-of-freedom model to evaluate its precision against the precision of real fire data. The models were simulated in the FLAMES software, which uses a spherical earth model, terrain data, and measured atmospheric conditions. The real fire data come from a number of fire series performed by the Swedish Armed Forces in 2001. The results showed that the six-degrees-of-freedom model was accurate in range but had an error of up to 35% in drift when verified against a firing table. When the six-degrees-of-freedom model was compared to a modified point-mass model and to real fire impacts, the mean distance to target was about 250 m. A plausible reason for this large mean distance is that the input data used in this thesis were of too low accuracy. The thesis also showed a large difference in angle of attack between the models in high-elevation simulations as well as in the presence of wind. For the six-degrees-of-freedom model with uncertainties, 90% of all projectiles hit within a 50 m x 75 m ellipse at a simulated fire distance of about 16 km.
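For contrast with the six-degrees-of-freedom model discussed above, a point-mass trajectory model reduces the projectile to its center of mass under gravity and drag. The sketch below is a generic flat-earth point-mass integrator, not the modified point-mass model or the FLAMES setup used in the thesis; the muzzle velocity and the lumped drag constant are illustrative figures only:

```python
import math

def simulate_trajectory(v0, elevation_deg, drag_coeff=0.0, dt=0.01, g=9.81):
    """Flat-earth point-mass trajectory with optional quadratic drag.

    drag_coeff lumps 0.5 * rho * Cd * A / m into a single constant.
    Integrates with explicit Euler steps and returns the horizontal
    range at ground impact.
    """
    th = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax = -drag_coeff * v * vx          # drag opposes velocity
        ay = -g - drag_coeff * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x

vacuum = simulate_trajectory(827.0, 45.0)                   # no drag
dragged = simulate_trajectory(827.0, 45.0, drag_coeff=1e-4)  # with drag
```

Without drag the computed range approaches the textbook value v0^2/g at 45 degrees of elevation; adding drag shortens it considerably, which is why realistic six-degrees-of-freedom or modified point-mass models are needed for fire-control work.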
13

Margalef, rovira Marc. "Design of mm-wave Reflection-Type Phase Shifters with Oscillation-Based Test capabilities." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT025.

Full text of the source
Abstract:
This work focuses on the design of on-silicon mm-wave reflection-type phase shifters (RTPS) with oscillation-based test (OBT) capabilities. For consistency, a single technology was considered, the STM 55-nm BiCMOS. First, the theory and practical implementations of 3-dB couplers are discussed. Particular attention is paid to the coupled slow-wave coplanar waveguide (CS-CPW) topology, owing to its good performance. Using this topology, the measurements of two 3-dB couplers are reported: (i) a 120-GHz and (ii) a 185-GHz coupler. Next, the existing topologies of integrated varactors are discussed. Measurement results are reported for an inversion-mode MOS (I-MOS) varactor from 1 up to 325 GHz. Additionally, the common-source MOS (CS-MOS) varactor architecture is proposed, and measurement results from 1 to 145 GHz for this architecture are reported. Then, the theory of the RTPS is presented, and CS-CPW-based couplers together with accumulation-mode MOS (A-MOS), I-MOS, and CS-MOS varactors are used in the design of four RTPS. The measurement and simulation results of these RTPS, with central frequencies ranging from 60 to 200 GHz, are presented. Subsequently, the theory and measurement results of the OBT on an integrated 60-GHz RTPS are discussed. Finally, a mm-wave TRL calibration compaction technique using machine-learning tools is described.
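The phase-shifting mechanism of an RTPS can be illustrated with ideal elements: the 3-dB coupler routes the signal to two identical reflective loads, so the transmission phase follows the reflection phase of a tunable varactor termination. The sketch below assumes a lossless coupler and a purely capacitive load; the 60 GHz frequency matches one of the designs, but the capacitance tuning range is an invented assumption:

```python
import cmath, math

def reflection_phase(cap_farads, freq_hz, z0=50.0):
    """Phase (degrees) of the reflection coefficient of an ideal
    varactor, modelled as a pure capacitance terminating a line of
    characteristic impedance z0.

    In an ideal reflection-type phase shifter the transmission phase
    tracks arg(Gamma) of the two identical reflective loads.
    """
    zl = 1.0 / (1j * 2.0 * math.pi * freq_hz * cap_farads)
    gamma = (zl - z0) / (zl + z0)
    return math.degrees(cmath.phase(gamma))

f = 60e9  # 60 GHz, one of the designs' center frequencies
# Hypothetical 30 fF to 90 fF varactor tuning range:
shift = reflection_phase(30e-15, f) - reflection_phase(90e-15, f)
```

Tuning the capacitance sweeps arg(Gamma), and hence the phase shift, while the lossless-load magnitude stays at unity; real varactor losses reduce it, which is one of the design trade-offs in the measured circuits.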
14

Vieira, Vinícius Manzoni. "Estudo das propriedades espectroscópicas de moléculas orgânicas em solução utilizando a combinação do modelo contínuo PCM e o método sequencial QM/MM." Universidade Federal de Alagoas, 2010. http://www.repositorio.ufal.br/handle/riufal/1687.

Full text of the source
Abstract:
As most spectroscopic measurements are performed in solution, the solvent effect on the spectral properties of molecular systems is of great scientific interest. In particular, when hydrogen bonds form between the solute and solvent molecules, the traditional effective dielectric solvent models provide inadequate results in some situations of interest. This is the case for diazines in aqueous solution. In this work, we study the solvent effects on the low-lying n→π* absorption transition and on the 15N chemical shielding of diazines in water using a combination of Monte Carlo simulations and quantum mechanical calculations. As the classical simulations and the quantum mechanical calculations are uncoupled, special attention is devoted to the solute polarization. First, this is included by a previously developed iterative procedure in which the solute is electrostatically equilibrated with the solvent. In addition, we examine the simple yet unexplored alternative of combining the polarizable continuum model (PCM) with the hybrid QM/MM method. In this procedure, we use PCM to obtain the average solute polarization and include it in the MM part of the sequential QM/MM methodology, PCM-MM/QM. These procedures are compared and further used in discrete and explicit solvent models. As the solute geometry is unchanged throughout, we also study the influence of small differences in molecular conformation on the spectroscopic properties, using as prototypes some recently synthesized quinone-based molecules with trypanocidal activity. This study showed that small changes in the molecular conformation have a small impact on the studied spectroscopic properties. The two procedures implemented in this work were systematically applied to each diazine and used in discrete and explicit solvent models for the calculation of the spectroscopic properties of each diazine in water.
The results show good agreement between the values obtained by the two polarization methods. Given the simplicity of the PCM relative to the iterative polarization method, this is an important result, and the computational savings point to the possibility of dealing with larger solute molecules. The PCM-MM/QM approach thus conciliates the simplicity of the PCM model with the reliability of combined QM/MM approaches.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Conselho Nacional de Desenvolvimento Científico e Tecnológico
15

Chelda, Samir. "Simulation du parcours des électrons élastiques dans les matériaux et structures. Application à la spectroscopie du pic élastique multi-modes MM-EPES." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00629659.

Full text of the source
Abstract:
EPES (Elastic Peak Electron Spectroscopy) measures the percentage ηe of electrons elastically backscattered from the surface of a sample subjected to electron bombardment. It is a non-destructive method that is extremely surface-sensitive. The objective of this work is to model the path of elastic electrons in matter through a computer simulation based on the Monte Carlo method. This simulation contributes essentially to the understanding and interpretation of experimental results obtained by EPES spectroscopy. We have, moreover, adapted this simulation to various surfaces patterned at the micrometre and nanometre scales. Using an original method based on a layer-by-layer description of the material, I developed a computer program (MC1) that tracks the path of elastic electrons through the different layers of the material. The number of electrons leaving the surface depends on many parameters, such as the nature of the material under study, the energy of the incident electrons, the angle of incidence, and the collection angles of the analyzers. I also investigated the effect of surface roughness and showed that it plays a decisive role in the intensity of the elastic peak. Then, by combining EPES spectroscopy with Monte Carlo simulation, I deduced the growth modes of gold on silver and copper substrates. The effects of atomic arrangement and surface energy losses were then studied. For this purpose, a second simulation, MC2, taking these two parameters into account, was developed, making it possible to study surfaces at the nanometre scale. These parameters, not accounted for in the MC1 simulation, play an essential role in the elastic intensity.
I then obtained a simple, usable formulation for interpreting the MC2 simulation results for an RFA analyzer. To validate the various MC2 results, I fabricated nanostructured silicon surfaces using anodic aluminium oxide masks produced electrochemically. I was able to create nano-pores on silicon surfaces by ion bombardment under ultra-high vacuum. To monitor the surface morphology, I performed ex-situ SEM imaging. The developed Monte Carlo simulation, combined with experimental EPES results, makes it possible to estimate the depth, diameter, and morphology of the pores without resorting to other ex-situ techniques. The MC2 simulation thus provides knowledge of the studied surface at the nanometre scale.
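The Monte Carlo idea behind such simulations can be caricatured in a few lines: electrons travel exponential free paths, are removed at inelastic events, and contribute to the elastic peak if they re-cross the surface having scattered only elastically. This toy model (isotropic deflections, arbitrary mean free paths, no relation to the MC1/MC2 codes of the thesis) estimates the elastic backscattered fraction:

```python
import random

def elastic_backscatter_fraction(n_electrons=20000, lam_el=1.0,
                                 lam_inel=2.0, seed=3):
    """Toy Monte Carlo estimate of the elastic-peak fraction.

    Electrons enter a semi-infinite solid along +z. Flight lengths
    between events are exponential with the combined mean free path;
    an event is elastic with probability (1/lam_el) / (1/lam_el +
    1/lam_inel), otherwise the electron is lost inelastically.
    Electrons re-crossing z = 0 after only elastic events count
    toward the elastic peak.
    """
    rng = random.Random(seed)
    rate = 1.0 / lam_el + 1.0 / lam_inel   # inverse combined mean free path
    p_elastic = (1.0 / lam_el) / rate
    escaped = 0
    for _ in range(n_electrons):
        z, cz = 0.0, 1.0          # depth, direction cosine (+ = into solid)
        while True:
            z += cz * rng.expovariate(rate)
            if z <= 0.0:          # back out through the surface
                escaped += 1
                break
            if rng.random() >= p_elastic:
                break             # inelastic loss: removed from the peak
            cz = rng.uniform(-1.0, 1.0)  # crude isotropic elastic deflection
    return escaped / n_electrons

frac = elastic_backscatter_fraction()
```

Even this crude model reproduces the qualitative point of the thesis: the elastic peak samples only the first few mean free paths below the surface, which is what makes EPES so surface-sensitive.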
16

Klimeš, Ivo. "Messagingová infrastruktura a produktová analýza trhu." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-10481.

Full text of the source
Abstract:
This diploma thesis considers the modern messaging architectural concepts SOA and EDA. It presents the basic principles of these paradigms and places them in the wider context of business processes and IT governance. The aim of the thesis is to compare two preselected software solutions for operational monitoring, one per architectural style, using predefined comparison criteria. The thesis is divided into five consecutive parts. The first part places the modern architectures in their historical context, which is the point of departure for the modern architectural styles. The following part focuses closely on the concepts of SOA and EDA and on a comparison of these architectural styles. The next chapters connect the concepts with business processes and maturity models, all of which influence successful implementation and governance. These chapters flow into the last theoretical part of the thesis, IT governance, which describes the elements involved in successfully operating IT systems based on the SOA or EDA paradigm. The practical part builds on all the previous chapters: two software solutions are selected, described, and then compared using the predefined criteria. The conclusion summarizes the knowledge acquired during the comparison of the paradigms and of the selected monitoring products.
17

Sánchez, Malpartida Jose Luis. "Diseño de un sistema de control difuso de posición e implementación de un generador de órdenes para un cañon 76/62 mm de una corbeta misilera clase pr-72p en aplicaciones de tiro contra costa." Master's thesis, Pontificia Universidad Católica del Perú, 2019. http://hdl.handle.net/20.500.12404/15142.

Full text of the source
Abstract:
The PR-72P-class missile corvettes currently carry the 76/62 mm mounts as their main artillery, fitted with the Vega II fire-control system and its hybrid ballistic computer (BCH) with analog interfaces, which solves the fire-control problem through printed circuit boards and synchro systems. This system and its components are no longer manufactured, and after more than 35 years of service their repair is very costly and unreliable. Shore bombardment is performed with the aforementioned mount to provide naval gunfire support against land targets, a task that nowadays demands high precision to avoid collateral damage and therefore high fidelity from the control system. The original system provides two shore-fire methods, the chronometric method and the reference-point method; both estimate the ship's true position, so the error grows as time passes. Today the satellite positioning system (GPS) can provide the ship's position in real time, which considerably reduces the error. Advanced control techniques also exist that compensate for errors and disturbances caused by component wear or noise, increasing the reliability and precision of the system. For these reasons, the mathematical model of the gun is obtained and a fuzzy position control system is designed for a 76/62 mm gun in shore-fire applications; compared with the current system, it shows improved responses and robustness.
Finally, a gun order generator is implemented, based on software running on a National Instruments PXI modular industrial computer, which captures sensor data and processes it in real time to solve the fire-control problem against land targets, graphically presents the main data of interest to the operator, and performs dynamic tests on the mount through a SCADA.
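A fuzzy position controller of the kind designed in the thesis maps the laying error to a drive command through membership functions and rules. The following is a deliberately minimal three-rule Mamdani-style sketch; the membership breakpoints and the output scale are invented for illustration and are not taken from the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_command(error_deg):
    """Minimal Mamdani-style fuzzy controller for a gun-laying error.

    Three rules map the position error (degrees) to a normalized drive
    command in [-1, 1] via a weighted average of the rule consequents.
    """
    mu_neg = tri(error_deg, -20.0, -10.0, 0.0)
    mu_zero = tri(error_deg, -10.0, 0.0, 10.0)
    mu_pos = tri(error_deg, 0.0, 10.0, 20.0)
    # Saturate outside the universe of discourse:
    if error_deg <= -10.0:
        mu_neg = 1.0
    if error_deg >= 10.0:
        mu_pos = 1.0
    w = mu_neg + mu_zero + mu_pos
    if w == 0.0:
        return 0.0
    # Rule consequents: negative error -> -1, zero -> 0, positive -> +1
    return (mu_neg * -1.0 + mu_zero * 0.0 + mu_pos * 1.0) / w
```

The command varies smoothly and symmetrically with the error and saturates for large errors, which is the qualitative behaviour that makes fuzzy controllers attractive for plants with wear and noise like an aging gun mount.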
Thesis
18

Jiao, Feiran. "High-dimensional inference of ordinal data with medical applications." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/6150.

Full text of the source
Abstract:
Ordinal response variables abound in scientific and quantitative analyses. Their outcomes comprise a few categorical values that admit a natural ordering, so they are often represented by non-negative integers, for instance, pain score (0-10) or disease severity (0-4) in medical research. Ordinal variables differ from ratio variables in that their values delineate qualitative rather than quantitative differences. In this thesis, we develop new statistical methods for variable selection in a high-dimensional cumulative link regression model with an ordinal response. Our study is partly motivated by the need to explore the association structure between disease phenotypes and high-dimensional medical covariates. The cumulative link regression model specifies that the ordinal response of interest results from an order-preserving quantization of some latent continuous variable that bears a linear regression relationship with a set of covariates. Commonly used error distributions in the latent regression include the normal, logistic, Cauchy, and standard (minimum) Gumbel distributions. The cumulative link model with normal (logit, Gumbel) errors is also known as the ordered probit (logit, complementary log-log) model. While the likelihood function has a closed form for these error distributions, its strong nonlinearity can cause direct optimization of the likelihood to fail. To mitigate this problem, and to facilitate the extension to penalized likelihood estimation, we propose specific minorization-maximization (MM) algorithms for maximum likelihood estimation of a cumulative link model for each of the four error distributions. Penalized ordinal regression models come into play when variable selection needs to be performed.
In some applications, covariates may be grouped in some meaningful way, and some groups may be mixed in that they contain both relevant and irrelevant variables, i.e., variables whose coefficients are non-zero and zero, respectively. It is therefore pertinent to develop a consistent method for simultaneously selecting the relevant groups and the relevant variables within each selected group, which constitutes the so-called bi-level selection problem. We propose a penalized maximum likelihood approach with a composite bridge penalty to solve the bi-level selection problem in a cumulative link model. An MM algorithm, specific to each of the four error distributions, was developed to implement the proposed method. The proposed approach is shown to enjoy a number of desirable theoretical properties, including bi-level selection consistency and oracle properties, under suitable regularity conditions. Simulations demonstrate that the proposed method enjoys good empirical performance. We illustrate the proposed methods with several real medical applications.
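The cumulative link model underlying this work specifies P(Y <= k) = F(theta_k - x'beta) for ordered cutpoints theta_k. As a small illustration with the logistic link (the ordered logit case), category probabilities are successive differences of the cumulative probabilities; the linear predictor and cutpoint values below are arbitrary:

```python
import math

def ordered_logit_probs(x_beta, cutpoints):
    """Category probabilities in a cumulative link (ordered logit) model.

    P(Y <= k) = logistic(theta_k - x'beta) for each cutpoint theta_k;
    the probability of category k is the difference of consecutive
    cumulative probabilities, with the last category absorbing the rest.
    """
    def logistic(t):
        return 1.0 / (1.0 + math.exp(-t))

    cum = [logistic(c - x_beta) for c in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cum:
        probs.append(c - prev)
        prev = c
    return probs

# Hypothetical linear predictor 0.5 with three cutpoints -> 4 ordered categories
p = ordered_logit_probs(0.5, [-1.0, 0.0, 1.0])
```

The log-likelihood maximized by the MM algorithms in the thesis is simply the sum of log of such category probabilities over observations; the strong nonlinearity the abstract mentions comes from these nested differences of link functions.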
19

Casasnovas, Perera Rodrigo. "Theoretical studies on pyridoxal 5’-phosphate-catalyzed reactions of biological relevance." Doctoral thesis, Universitat de les Illes Balears, 2014. http://hdl.handle.net/10803/133519.

Full text of the source
Abstract:
Pyridoxal phosphate (PLP) is a cofactor of more than a hundred enzymes that catalyze amino acid reactions such as racemizations, transaminations, and decarboxylations, among others. All PLP-catalyzed reactions entail at least one protonation/deprotonation step at the Cα carbon of the amino acid or the C4' carbon of the PLP. The experimental study of carbon acidities involves significant difficulties. Several computational strategies for pKa prediction were designed in this work, yielding a methodology that provides pKas with uncertainties comparable to experiment. The activation energies of the protonation/deprotonation of Cα by diverse enzymatic residues were calculated, and metadynamics simulations of the enzyme Ornithine decarboxylase made it possible to understand how PLP-dependent enzymes achieve good reaction specificities. The results indicate that PLP-dependent enzymes control the specificity of the desired reaction by favoring certain protonation states of the PLP cofactor.
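The link between a computed deprotonation free energy and a pKa is the standard thermodynamic relation ΔG = RT ln(10) · pKa. A one-line helper shows that final conversion step (the thesis's actual pKa protocols involve considerably more than this, e.g. thermodynamic cycles and solvation treatments):

```python
import math

def pka_from_free_energy(delta_g_kcal, temp_k=298.15):
    """pKa from a deprotonation free energy (kcal/mol) via
    Delta G = R * T * ln(10) * pKa."""
    R = 1.987204e-3  # gas constant, kcal/(mol*K)
    return delta_g_kcal / (R * temp_k * math.log(10))
```

At 298.15 K the factor RT ln(10) is about 1.364 kcal/mol, so each pKa unit corresponds to roughly 1.36 kcal/mol of deprotonation free energy, which is why sub-kcal accuracy is needed to match experimental pKa uncertainties.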
20

Refulio, Sánchez Ronald Percy, and Gutiérrez Diana Paola Rodríguez. "Propuesta de un modelo de control y gestión de existencias en una empresa de telecomunicaciones usando como herramienta de soporte los módulos MM-WM del sistema SAP R/3." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2011. http://tesis.pucp.edu.pe/repositorio/handle/123456789/1144.

Full text of the source
Abstract:
The primary objective of this research work, which develops an inventory control and management model for a telecommunications company, is to propose a supply and inventory-control strategy and methodology that guarantees significant savings for the company through efficient management of material availability in terms of quantity, quality, service level, and timeliness, using the MM and WM modules of the SAP R/3 system as the supporting tool. In line with this objective, the proposal is developed in four chapters: first, the theoretical foundations for defining and analyzing the company's planning and control processes are established; next, the current situation is described, identifying the critical points amenable to improvement together with their economic impact in terms of savings; the improvement proposal is then presented and finally supported by a benefit-cost analysis. The methodology used for the analysis and diagnosis of the company's current situation started by establishing the target values for inventory turnover (4.00), inventory record accuracy (99.00%), and fill rate (90.00%) that the company should reach to be more competitive in the market. Based on these expected values, the inventory control and management model was formulated, increasing turnover on the supply-management side through the implementation of a standardized material classification loaded into SAP R/3, and increasing inventory record accuracy by embedding its philosophy in warehouse operations. The increase in both indicators had a direct impact on the fill rate, driving its increase as well.
Finally, through the benefit-cost analysis, which states the final savings obtained as a consequence of the increase in the three performance indicators evaluated in this study, the proposal is concluded to be viable.
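The three performance indicators targeted in the study (inventory turnover 4.00, inventory record accuracy 99.00%, fill rate 90.00%) have simple definitions, sketched below with the thesis's target values used as example figures; the monetary inputs are invented:

```python
def inventory_turnover(annual_cogs, avg_inventory_value):
    """Inventory turnover: annual cost of goods sold over average inventory value."""
    return annual_cogs / avg_inventory_value

def inventory_record_accuracy(matching_records, total_records):
    """IRA: share of SKUs whose system count matches the physical count."""
    return matching_records / total_records

def fill_rate(units_served_from_stock, units_demanded):
    """Fill rate: fraction of demanded units served directly from stock."""
    return units_served_from_stock / units_demanded

# Target values from the study, with invented supporting figures:
turnover = inventory_turnover(4_000_000, 1_000_000)   # target 4.00
ira = inventory_record_accuracy(99, 100)              # target 99.00%
fr = fill_rate(90, 100)                               # target 90.00%
```

The interaction the study exploits is visible in the definitions: better record accuracy and faster turnover both improve the ability to serve demand from stock, which is what the fill rate measures.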
Thesis
21

Carbajal, Olivera Carlos Humberto. "Metodología para inspección visual remota en juntas soldadas de tuberías de acero inoxidable mayores a 4" de diámetro y con espesores hasta 3 mm empleando el videoscopio lenox modelo PVS." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2017. http://tesis.pucp.edu.pe/repositorio/handle/123456789/8022.

Full text of the source
Abstract:
The Materials Laboratory of the Pontificia Universidad Católica del Perú (PUCP) has provided services to industry since 1973. Today the laboratory enjoys a reputation earned over its history and provides non-destructive testing services, such as visual inspection, liquid penetrant, magnetic particle, industrial radiography, and ultrasonic testing, to the most important companies in the country. Companies in the food sector, in particular Backus, need to verify that the welded joints of the stainless-steel piping through which thousands of liters of beer are transported are properly made. Industrial radiography can confirm the soundness of a welded joint, but it does not detect the possible oxidation (carbonization) that can occur when adequate argon shielding inside the pipe is not provided before welding. With the Lenox videoscope of the PUCP Materials Laboratory this can be detected by remote visual inspection, but no methodology existed for inspection with this equipment. The objective of this thesis is to develop a methodology for the remote visual inspection of welded joints in stainless-steel pipes larger than 4" in diameter and with wall thicknesses up to 3 mm, using the Lenox model PVS videoscope. To achieve this objective, the thesis details the acceptance and rejection criteria for welded joints in stainless-steel piping, which serve to produce the reference images of discontinuities not permitted by the AWS D18.1 code.
Two prototypes were prepared: one welded without argon purging, which made it possible to capture the reference image of oxidation on a welded joint inside the pipe, and a second welded with prior argon purging, which demonstrated that purging stainless-steel pipes is indispensable to guarantee the soundness of the welded joint, despite the cost implied by the lengths of piping that food-sector companies typically operate. The technical characteristics of each component of the Lenox model PVS videoscope and its most important parts are described, allowing familiarization with the equipment before use. Finally, the procedure for using the Lenox model PVS videoscope in the remote visual inspection of stainless-steel pipes larger than 4" in diameter and with wall thicknesses up to 3 mm is developed, and the reference images of discontinuities not permitted by the AWS D18.1 code are produced using the prepared prototypes.
Thesis
22

Machado, Renato Ferraz. "MM-GSTI : proposta de um modelo de maturidade em gerenciamento de serviços de TI com foco nas pequenas e médias empresas / Renato Ferraz Machado ; orientadora, Sheila Reinehr ; co-orientadora, Andreia Malucelli." reponame:Biblioteca Digital de Teses e Dissertações da PUC_PR, 2011. http://www.biblioteca.pucpr.br/tede/tde_busca/arquivo.php?codArquivo=2387.

Full text of the source
Abstract:
Master's dissertation - Pontifícia Universidade Católica do Paraná, Curitiba, 2011
Bibliography: p. 162-167
Alocar as atividades de negócio para uma organização externa, na tentativa de reduzir custos e ganhar em qualidade, não é um conceito novo. A rápida globalização dos negócios e o crescimento do foco nas competências principais, iniciada nos anos oitenta e
Allocating business activities to an external organization in an attempt to reduce costs and gain in quality is not a new concept. The rapid globalization of business and the growing focus on core competencies, which began in the eighties and nineties, led fir
Стилі APA, Harvard, Vancouver, ISO та ін.
23

O'Lone, Christopher Edward. "Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102027.

Повний текст джерела
Анотація:
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location) a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. 
In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two NLOS paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. 
This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. 
Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network.
Doctor of Philosophy
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. 
There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. 
Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability that the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. 
This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
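The dissertation's central idea, characterizing a CRLB-style error benchmark over random anchor placements rather than one fixed setup, can be approximated numerically. Below is a minimal Monte Carlo sketch (not from the dissertation; window size, anchor density and ranging noise are all illustrative) that draws anchor positions from a Poisson point process and accumulates the TOA CRLB across realisations:

```python
import numpy as np

rng = np.random.default_rng(0)

def toa_crlb(anchors, target, sigma=1.0):
    """CRLB (m^2) on 2-D position error for TOA ranging with
    independent Gaussian range errors of standard deviation `sigma`."""
    diff = anchors - target
    u = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit vectors to anchors
    fim = u.T @ u / sigma**2                                # Fisher information matrix
    return np.trace(np.linalg.inv(fim))                     # CRLB = tr(FIM^-1)

# Aggregate the bound over many PPP realisations of anchor positions
# (illustrative parameters: 1 km x 1 km window, ~20 anchors expected).
area, density = 1000.0**2, 20e-6
target = np.array([500.0, 500.0])
samples = []
for _ in range(2000):
    n = rng.poisson(density * area)
    if n < 3:
        continue                      # too few anchors for a reliable 2-D TOA fix
    anchors = rng.uniform(0.0, 1000.0, size=(n, 2))
    samples.append(toa_crlb(anchors, target, sigma=5.0))

samples = np.sqrt(np.array(samples))  # RMS position-error bound in metres
print(f"P(error bound < 5 m) ~ {np.mean(samples < 5.0):.2f}")
```

The empirical distribution of `samples` plays the role of the analytical CRLB distribution derived in the dissertation, and directly answers the "what percentage of the time" question posed above.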
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Villaron, Emilie. "Modèles aléatoires harmoniques pour les signaux électroencéphalographiques." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4815.

Повний текст джерела
Анотація:
Cette thèse s'inscrit dans le contexte de l'analyse des signaux biomédicaux multicapteurs par des méthodes stochastiques. Les signaux auxquels nous nous intéressons présentent un caractère oscillant transitoire bien représenté par les décompositions dans le plan temps-fréquence c'est pourquoi nous avons choisi de considérer non plus les décours temporels de ces signaux mais les coefficients issus de la décomposition de ces derniers dans le plan temps-fréquence. Dans une première partie, nous décomposons les signaux multicapteurs sur une base de cosinus locaux (appelée base MDCT) et nous modélisons les coefficients à l'aide d'un modèle à états latents. Les coefficients sont considérés comme les réalisations de processus aléatoires gaussiens multivariés dont la distribution est gouvernée par une chaîne de Markov cachée. Nous présentons les algorithmes classiques liés à l'utilisation des modèles de Markov caché et nous proposons une extension dans le cas où les matrices de covariance sont factorisées sous forme d'un produit de Kronecker. Cette modélisation permet de diminuer la complexité des méthodes de calcul numérique utilisées tout en stabilisant les algorithmes associés. Nous appliquons ces modèles à des données électroencéphalographiques et nous montrons que les matrices de covariance représentant les corrélations entre les capteurs et les fréquences apportent des informations pertinentes sur les signaux analysés. Ceci est notamment illustré par un cas d'étude sur la caractérisation de la désynchronisation des ondes alpha dans le contexte de la sclérose en plaques
This thesis addresses the problem of multichannel biomedical signal analysis using stochastic methods. EEG signals exhibit specific features that are both time and frequency localized, which motivates the use of time-frequency signal representations. In this document the (time-frequency labelled) coefficients are modelled as multivariate random variables. In the first part of this work, multichannel signals are expanded using a local cosine basis (called the MDCT basis). The approach we propose models the distribution of time-frequency coefficients (here MDCT coefficients) in terms of latent variables through a hidden Markov model. In the framework of application to EEG signals, the latent variables describe some hidden mental state of the subject; they control the covariance matrices of the Gaussian vectors of multi-channel, multi-frequency MDCT coefficients at fixed times. After presenting classical algorithms to estimate the parameters, we define a new model in which the (space-frequency) covariance matrices are expanded as tensor products (also named Kronecker products) of frequency and channel matrices. Inference for the proposed model is developed and yields estimates for the model parameters, together with maximum likelihood estimates for the sequences of latent variables. The model is applied to electroencephalogram data, and it is shown that variance-covariance matrices labelled by sensor and frequency indices can yield relevant information on the analyzed signals. This is illustrated with a case study, namely the detection of alpha waves in rest EEG for multiple sclerosis patients and control subjects
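The Kronecker factorisation of the space-frequency covariance described above can be sketched numerically. The snippet below (illustrative sizes and random stand-in matrices, not the thesis' EEG data) shows the parameter saving and the factorised log-determinant that make the Gaussian HMM likelihood cheaper and numerically stabler:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_freqs = 32, 16          # illustrative sizes, not the thesis' values

def random_spd(n):
    """A random symmetric positive-definite matrix (stand-in covariance)."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

cov_channels = random_spd(n_channels)  # channel (sensor) covariance factor
cov_freqs = random_spd(n_freqs)        # frequency covariance factor

# Full space-frequency covariance as a Kronecker (tensor) product.
cov_full = np.kron(cov_channels, cov_freqs)

# Parameter saving: one big SPD matrix vs two small factors.
d = n_channels * n_freqs
full_params = d * (d + 1) // 2                              # 131328
kron_params = (n_channels * (n_channels + 1) // 2
               + n_freqs * (n_freqs + 1) // 2)              # 664

# det(A (x) B) = det(A)^q * det(B)^p, so the log-determinant needed by the
# Gaussian likelihood factorises into two small computations.
logdet_kron = (n_freqs * np.linalg.slogdet(cov_channels)[1]
               + n_channels * np.linalg.slogdet(cov_freqs)[1])
logdet_full = np.linalg.slogdet(cov_full)[1]
print(full_params, kron_params)
```

The same factorisation applies to the inverse, which is what reduces the complexity of the forward-backward recursions in the HMM.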
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Petersen, Fazlyn. "The realisation of business benefits when implementing mandatory software in an IT department: a case study in a South African financial services organisation." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1565_1365670947.

Повний текст джерела
Анотація:

No organisation has an endless and unlimited supply of money, especially in a recessive economy, and therefore decisions have to be made as to which areas an organisation will invest in. As organisations, such as YZ¹ financial services organisation, are focused on seeing returns on investment (ROI), implementing software that is not being used will not render any benefits to the organisation. Research problem: Project Managers (PMs) in YZ organisation's IT department need to perform mandated processes, as defined in their centralised repository. PMs need to use Financial and Planning Software (FPS²) to perform certain project management activities, as required by their job function. However, it was found that MPP³ software, another tool, was used for more detailed project schedules, as well as for activities that were not strictly enforced by management, the Project Office or the Quality Assurance team. From this discovery, it was therefore not clear whether the intended benefit of implementing this mandatory software (FPS) was being realised, since implementing software that is not being utilised fully would not deliver the intended benefits to the IT department (Devaraj & Kohli, 2003), even if the software is termed 'mandatory'. Objective: The primary objective of this research was to explore and optimise the key success factors for an effective implementation of mandatory software in a department, in order to derive the intended business benefits.
Literature Review: Literature was reviewed in the search for models or theories that explore the relationship between the use of mandatory software and the achievement of business benefits. The Information Management Body of Knowledge (IMBOK) was selected, as this framework defines the relationship between IT and the realisation of business benefits, and ultimately the achievement of any business strategy. The literature review focused predominantly on the level of user involvement, change management, and the factors that influence the usage of mandatory software by individuals. Focus was given to organisational factors affecting usage, such as top management support and organisational processes. A model was compiled using unique constructs from the Technology Acceptance Model (TAM and TAM2), the Motivational Model (MM) and the Model of PC Utilisation (MPCU), in order to test user acceptance of mandatory software. The literature study concludes with a review of an approach to benefits management comprising five stages, namely: identifying and structuring benefits, planning for the realisation of benefits, executing the plan, together with the evaluation and the review.
Research design and methodology: A case study was used in this research, as it examined the phenomenon in its natural setting, employing multiple methods of data collection to gather information from a few entities (groups and data sources). In this way the study was not limited to only qualitative or quantitative approaches, but utilised mixed methods instead. A mixed-methods approach was used in order to elaborate, enhance and clarify the results from the qualitative research through the results of the quantitative analysis.
Findings: The main finding, based on the compilation of three models of user acceptance, proved that FPS was not being utilised as intended. There was also no evidence of an improvement in business operations; therefore, benefits management was negatively impacted. Organisational processes were identified as the most important organisational factor influencing the usage of FPS software. Own technological capability was considered to be the least important factor, as respondents believed that they had sufficient IT skills to learn how to use FPS software. Change management was rated negatively, and as a result it impacted the usage of FPS, as users were not involved in the decision to implement and had limited interaction in the implementation process. In addition, no evidence was found that benefits management was conducted in the IT department; therefore, the impact of using alternative software could not be quantitatively assessed.
Recommendations: In concluding this research, it is recommended that the "best practice" derived from the pertinent literature should be followed more diligently if YZ organisation is to benefit from the implementation of mandatory software. For example, in this research it was found that top management's support of FPS (the second most important organisational factor influencing use) was lacking, despite the literature suggesting that senior management involvement in changing technology is crucial for organisational commitment. It is suggested that a more formal approach to benefits management should be implemented. It is also recommended that further study be conducted in order to explore the applicability of the Japanese framing (achieving benefits from IT software through the concept of strategic instinct, rather than strategic alignment) in the context of a developing country such as South Africa.
1. The name of the organisation utilised has been changed. Refer to Ethical Consideration. 2. The name of the tools utilised has been changed. Refer to Ethical Consideration and list of acronyms. 3. The name of the tools utilised has been changed. Refer to Ethical Consideration and list of acronyms.

Стилі APA, Harvard, Vancouver, ISO та ін.
26

Weststrate, Marnus. "LC-ladder and capacitive shunt-shunt feedback LNA modelling for wideband HBT receivers." Thesis, University of Pretoria, 2011. http://hdl.handle.net/2263/26615.

Повний текст джерела
Анотація:
Although the majority of wireless receiver subsystems have moved to digital signal processing over the last decade, the low noise amplifier (LNA) remains a crucial analogue subsystem in any design, being the dominant subsystem in determining the noise figure (NF) and dynamic range of the receiver as a whole. In this research a novel LNA configuration, namely the LC-ladder and capacitive shunt-shunt feedback topology, was proposed for use in the implementation of very wideband LNAs. This was done after a thorough theoretical investigation of LNA configurations available in the body of knowledge from which it became apparent that for the most part narrowband LNA configurations are applied to wideband applications with suboptimal results, and also that the wideband configurations that exist have certain shortcomings. A mathematical model was derived to describe the new configuration and consists of equations for the input impedance, input return loss, gain and NF, as well as an approximation of the worst case IIP3. Compact design equations were also derived from this model and a design strategy was given which allows for electronic design automation of a LNA using this configuration. A process for simultaneously optimizing the circuit for minimum NF and maximum gain was deduced from this model and different means of improving the linearity of the LNA were given. This proposed design process was used successfully throughout this research. The accuracy of the mathematical model has been verified using simulations. Two versions of the LNA were also fabricated and the measured results compared well with these simulations. The good correlation found between the calculated, simulated and measured results proves the accuracy of the model, and some comments on how the accuracy of the model could be improved even further are provided as well. 
The simulated results of a LNA designed for the 1 GHz to 18 GHz band in the IBM 8HP process show a gain of 21.4 dB and a minimum NF of only 1.7 dB, increasing to 3.3 dB at the upper corner frequency while maintaining an input return loss below -10 dB. After steps were taken to improve the linearity, the IIP3 of the LNA is -14.5 dBm with only a small degradation in NF now 2.15 dB at the minimum. The power consumption of the respective LNAs are 12.75 mW and 23.25 mW and each LNA occupies a chip area of only 0.43 mm2. Measured results of the LNA fabricated in the IBM 7WL process had a gain of 10 dB compared to an expected simulated gain of 20 dB, however significant path loss was introduced by the IC package and PCB parasitics. The S11 tracked the simulated response very well and remained below -10 dB over the feasible frequency range. Reliable noise figure measurements could not be obtained. The measured P1dB compression point is -22 dBm. A 60 GHz LNA was also designed using this topology in a SiGe process with ƒT of 200 GHz. A simulated NF of 5.2 dB was achieved for a gain of 14.2 dB and an input return loss below -15 dB using three amplifier stages. The IIP3 of the LNA is -8.4 dBm and the power consumption 25.5 mW. Although these are acceptable results in the mm-wave range it was however found that the wideband nature of this configuration is redundant in the unlicensed 60 GHz band and results are often inconsistent with the design theory due to second order effects. The wideband results however prove that the LC-ladder and capacitive shunt-shunt feedback topology is a viable means for especially implementing LNAs that require a very wide operating frequency range and also very low NF over that range.
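The way gain and noise figure accumulate across the three amplifier stages mentioned above follows Friis' cascade formula, where each stage's excess noise is divided by the gain preceding it. A short sketch with purely illustrative stage values (not the thesis' measured figures):

```python
import math

def friis_nf(stages):
    """Cascade noise figure (dB) via Friis' formula.
    `stages` is a list of (noise_figure_dB, gain_dB) tuples, first stage first."""
    total_f, cum_gain = 1.0, 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = 10.0 ** (nf_db / 10.0)           # noise factor (linear)
        if i == 0:
            total_f = f
        else:
            total_f += (f - 1.0) / cum_gain  # later stages divided by preceding gain
        cum_gain *= 10.0 ** (g_db / 10.0)
    return 10.0 * math.log10(total_f)

# Purely illustrative three-stage line-up (NOT the thesis' measured values):
stages = [(3.5, 6.0), (4.0, 5.0), (5.0, 4.0)]
print(f"cascade NF = {friis_nf(stages):.2f} dB")
```

This is why the first stage dominates the receiver NF, as the abstract notes: the cascade NF stays close to the first-stage NF when the first-stage gain is high.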
Thesis (PhD(Eng))--University of Pretoria, 2011.
Electrical, Electronic and Computer Engineering
unrestricted
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Čechová, Pavla. "Vztah typologie a konstrukční soustavy." Doctoral thesis, Vysoké učení technické v Brně. Fakulta architektury, 2015. http://www.nusl.cz/ntk/nusl-233257.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Parameswaran, Sreeja. "Solar Energy Conversion in Plants and Bacteria Studied Using FTIR Difference Spectroscopy and Quantum Chemical Computational Methodologies." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/phy_astr_diss/32.

Повний текст джерела
Анотація:
This dissertation presents a study of the molecular mechanism underlying the highly efficient solar energy conversion processes that occur in the Photosystem I (PS I) reaction centers in plants and bacteria. The primary electron donor P700 is at the heart of the solar energy conversion process in PS I and the aim is to obtain a better understanding of the electronic and structural organization of P700 in the ground and excited states. Static Fourier Transform Infra-Red (FTIR) difference spectroscopy (DS) in combination with site directed mutagenesis and Density Functional Theory (DFT) based vibrational frequency simulations were used to investigate how protein interactions such as histidine ligation and hydrogen bonding modulate this organization. (P700+-P700) FTIR DS at 77K were obtained from a series of mutants from the cyanobacterium Synechocystis sp. 6803 (S. 6803) in which the amino acid residues near the C=O groups of the two chlorophylls of P700 were specifically changed. (P700+-P700) FTIR DS was also obtained for a set of mutants from C. reinhardtii where the axial ligand to A0-, the primary electron acceptor in PS I, was modified. The FTIR DS obtained from these mutants provides information on the axial ligands, the hydrogen bonding status as well as the polarity of the environment of specific functional groups that are part of the chlorophyll molecules that constitute P700. Assignment of the FTIR bands to vibrational modes in specific types of environment is very difficult. In order to assist the assignment of the difference bands in experimental spectra, DFT-based vibrational mode frequency calculations were undertaken for Chl-a and Chl-a+ model molecular systems under different sets of conditions: in the gas phase, in solvents using the Polarizable Continuum Model (PCM), in the presence of explicit solvent molecules using QM/MM methods, and in the presence of axial ligands and hydrogen bonds. 
DFT methods were also used to calculate the charge, spin and redox properties of Chl-a/Chl-a’ dimer models that are representative of P700, the primary electron donor in PS I.
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Urban, Marek. "Návrh zavěšení kol Formule Student." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417557.

Повний текст джерела
Анотація:
This thesis deals with the design of the wheel-suspension kinematics of both axles. Based on analyses of driving data, multi-body simulations in Adams Car, simulations in Matlab and analytical calculations in Mathcad, a number of changes are proposed with the aim of improving the handling of the Formula Student car; these changes are then implemented in the CAD model of the car. Each kinematic change is made on the basis of an analysis of the specific problem it is intended to solve. One issue is the packaging of the rear-wheel suspension and spring system, where the goal is to reduce mass, the height of the centre of gravity and the moment of inertia. Another is the geometry of the front wheel, where the goal is to improve tyre utilisation and reduce steering forces. The thesis further deals with simulations of the elastokinematics of the rear axle, including the design of a measuring rig. In the last part, the influence of the implemented changes and of the elastokinematics on the steady-state driving dynamics of the car is investigated using the MM method, simulated with a full-vehicle model in Adams Car and post-processed in Matlab.
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Gonçalves, Fabricio Mota. "Evapotranspiração e coeficientes de cultivo (KC) da cana-de-açúcar irrigada por gotejamento subsuperficial." reponame:Repositório Institucional da UFC, 2010. http://www.repositorio.ufc.br/handle/riufc/18596.

Повний текст джерела
Анотація:
GONÇALVES, Fabricio Mota. Evapotranspiração e coeficientes de cultivo (KC) da cana-de-açúcar irrigada por gotejamento subsuperficial. 2010. 64 f. Dissertação (Mestrado em engenharia agrícola)- Universidade Federal do Ceará, Fortaleza-CE, 2010.
Sugar cane is one of the crops with the highest water consumption, with a high evapotranspiration demand throughout most of its cycle. Knowledge of the sugar cane water demand, of when irrigation should be applied, and of how to irrigate the crop is becoming increasingly important for the sustainable development of irrigated sugar cane cultivation, especially in areas of low water availability such as northeastern Brazil. The experiment was carried out at the Curu Experimental Field (property of Embrapa Tropical Agroindustry), located in the Curu-Paraipaba Irrigation Project, city of Paraipaba-CE (3°29'20''S, 39°9'45''W, elevation 30 m), in order to determine the evapotranspiration and crop coefficients (Kc) of subsurface drip-irrigated sugar cane (Saccharum officinarium L.). The SP 6949 sugar cane variety was used, planted at a spacing of 1.8 m between double rows and 0.4 m between rows within the double row. For irrigation, one lateral water line was used for each double row, buried at a depth of 0.15 m, with self-compensating drip emitters with a flow rate of 1 L h-1, spaced 0.5 m along the line. The crop was irrigated daily and fertigated three times per week. The irrigation depth was periodically adjusted, keeping the soil water potential between -8 kPa and -20 kPa. The crop evapotranspiration (ETc) was determined using a weighing lysimeter with a surface area of 2.25 m2. The reference evapotranspiration (ETo) was estimated by the FAO Penman-Monteith method. The durations of the phenological stages of sugar cane were estimated through analysis of soil cover, with the use of digital images. Durations of 31, 49, 237 and 118 days were observed for the initial, crop development, mid-season and late season stages, respectively. The yields achieved within and outside the lysimeter were 144.4 and 108.8 t ha-1, respectively.
The total ETc observed during the sugar cane cycle was 1074.1 mm, with maximum values of 6.6 mm d-1 during the mid-season stage. Observed Kc values for the initial, mid-season and late season stages were 0.23, 1.03 and 0.50, respectively. In the crop development stage, the relationship between Kc and crop soil cover was represented with good accuracy by a negative quadratic model.
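The crop-coefficient approach underlying these results computes crop evapotranspiration as ETc = Kc × ETo (the FAO-56 single-coefficient procedure). A minimal sketch using the stage Kc values reported above; the daily ETo input is a hypothetical illustration, not measured data:

```python
# FAO-56 single crop coefficient approach: ETc = Kc * ETo.
# Stage Kc values are those reported in the abstract; the daily
# ETo figure used below is hypothetical, for illustration only.

STAGE_KC = {"initial": 0.23, "mid-season": 1.03, "late-season": 0.50}

def daily_etc(eto_mm: float, stage: str) -> float:
    """Crop evapotranspiration (mm/day) for a given growth stage."""
    return STAGE_KC[stage] * eto_mm

# e.g. with a hypothetical reference ET of 6.0 mm/day at mid-season:
print(round(daily_etc(6.0, "mid-season"), 2))  # 6.18
```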
APA, Harvard, Vancouver, ISO, and other styles
31

(5930258), Zhou Shen. "Two component semiparametric density mixture models with a known component." Thesis, 2019.

Find full text
Abstract:
Finite mixture models have been successfully used in many applications, such as classification, clustering, and many others. As opposed to classical parametric mixture models, nonparametric and semiparametric mixture models often provide more flexible approaches to the description of inhomogeneous populations. As an example, in the last decade a particular two-component semiparametric density mixture model with a known component has attracted substantial research interest. Our thesis provides an innovative way of estimation for this model based on minimization of a smoothed objective functional, conceptually similar to the log-likelihood. The minimization is performed with the help of an EM-like algorithm.  We show that the algorithm is convergent and the minimizers of the objective functional, viewed as estimators of the model parameters, are consistent. 

More specifically, in our thesis, a semiparametric mixture of two density functions is considered where one of them is known while the weight and the other function are unknown. In the first part, a new sufficient identifiability condition for this model is derived, and a specific class of distributions describing the unknown component is given for which this condition is mostly satisfied. A novel approach to estimation of this model is then developed. That approach is based on the idea of using a smoothed likelihood-like functional as an objective functional in order to avoid the ill-posedness of the original problem. Minimization of this functional is performed using an iterative Majorization-Minimization (MM) algorithm that estimates all of the unknown parts of the model. The algorithm possesses a descent property with respect to the objective functional. Moreover, we show that the algorithm converges even when the unknown density is not defined on a compact interval. We then study properties of the minimizers of this functional viewed as estimators of the mixture model parameters. Their convergence to the true solution with respect to a bandwidth parameter is justified by reconsidering the problem in the framework of a Tikhonov-type functional, and they also turn out to be large-sample consistent, which is justified using an empirical minimization approach. The third part of the thesis contains a series of simulation studies, a comparison with another method, and a real data example, all of which show the good performance of the proposed algorithm in recovering unknown components from data.
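The general setup can be illustrated with a generic smoothed-EM sketch for g(x) = (1−p)·f0(x) + p·f(x), where f0 is known and the weight p and density f are estimated, f via a weighted kernel density estimate. This is a conceptually related illustration under assumed ingredients (standard-normal known component, Gaussian kernel), not the specific MM algorithm or objective functional of the thesis:

```python
import numpy as np

# Smoothed EM-like sketch for a two-component mixture with one known
# component. Assumptions (not from the thesis): f0 is standard normal,
# f is re-estimated each step by a weighted Gaussian KDE.

def phi(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def fit_mixture(x, h=0.3, iters=100, p=0.5):
    n = len(x)
    K = phi((x[:, None] - x[None, :]) / h) / h   # fixed kernel matrix
    w = np.full(n, p)                            # responsibilities
    for _ in range(iters):
        f_hat = K @ w / w.sum()                  # weighted KDE at the data points
        num = p * f_hat
        w = num / ((1 - p) * phi(x) + num)       # E-step: posterior of unknown component
        p = w.mean()                             # M-step: update the weight
    return p, w

rng = np.random.default_rng(0)
# synthetic data: 30% known N(0,1) component, 70% unknown N(3,1) component
x = np.concatenate([rng.normal(0, 1, 600), rng.normal(3, 1, 1400)])
p_hat, _ = fit_mixture(x)
```

With well-separated components the estimated weight lands near the true 0.7; with heavy overlap this naive scheme is biased, which is one motivation for the more careful objective-functional treatment in the thesis.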
APA, Harvard, Vancouver, ISO, and other styles
32

Obeidat, Huthaifa A. N., Rameez Asif, N. T. Ali, O. A. Obeidat, N. T. Ali, Steven M. R. Jones, Wafa S. A. Shuaieb, et al. "An Indoor Path Loss Prediction Model using Wall Correction Factors for WLAN and 5G Indoor Networks." 2018. http://hdl.handle.net/10454/15503.

Full text
Abstract:
A modified indoor path loss prediction model, the Effective Wall Loss Model (EWLM), is presented. The modified model is compared to other indoor path loss prediction models using simulation data and real-time measurements. Different operating frequencies and antenna polarizations are considered to verify the observations. In the simulations, EWLM shows the best performance among the compared models, outperforming the dual-slope model (the second best) by a factor of two. Similar observations were recorded in the experimental results. The linear attenuation and one-slope models behave similarly, and the parameters of both depend on the operating frequency and antenna polarization.
The full-text of this article will be released for public view at the end of the publisher embargo on 3 Oct 2018.
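EWLM itself is the paper's contribution; as general background, indoor multi-wall models of this family add a per-wall correction to a one-slope distance law, PL(d) = PL(d0) + 10·n·log10(d/d0) + Σ walls·L_wall. A sketch with illustrative parameter values, not the fitted coefficients of EWLM:

```python
import math

# Generic multi-wall indoor path-loss sketch (COST-231 style):
#   PL(d) = PL(d0) + 10*n*log10(d/d0) + sum(count_i * L_i)
# Reference loss, path-loss exponent, and per-wall losses below are
# illustrative assumptions, not the parameters from the paper.

def path_loss_db(d_m, pl0_db=40.0, n=2.0, d0_m=1.0, walls=()):
    """walls: iterable of (wall count, per-wall loss in dB)."""
    wall_loss = sum(count * loss for count, loss in walls)
    return pl0_db + 10 * n * math.log10(d_m / d0_m) + wall_loss

# 10 m path through two 5 dB walls:
print(path_loss_db(10, walls=[(2, 5.0)]))  # 70.0
```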
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Shih-ping, and 黃詩蘋. "Conditional (t, k)-diagnosis under the MM* model." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/41510935455489586644.

Full text
Abstract:
Master's thesis
National Central University
Department of Computer Science and Information Engineering
101
System-level diagnosis is used to identify faulty processors. Each processor in the system can test its neighboring processors and declare them fault-free or faulty, and (t, k)-diagnosis is a generalization of sequential diagnosis: provided there are at most t faulty processors, at least k of them (or all of them, if fewer than k remain) are identified and repaired in each iteration. In this thesis, it is assumed that each processor is adjacent to at least one fault-free processor. We prove the conditional (t, k)-diagnosability of r-regular graphs under the MM* model.
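Under the MM* (comparison) model, each node compares the responses of every pair of its neighbors: a fault-free comparator outputs 0 iff both compared nodes are fault-free, while a faulty comparator's output is unreliable. A sketch of syndrome generation for a given fault set, under the simplifying assumption that any test involving a faulty unit yields 1 (real faulty comparators may output arbitrary values):

```python
from itertools import combinations

# Syndrome generation under the MM* comparison model (sketch).
# Comparator w tests each pair (u, v) of its neighbors. A fault-free
# comparator outputs 0 iff both u and v are fault-free. Simplifying
# assumption here: any test touching a faulty unit yields 1.

def mm_star_syndrome(adj, faulty):
    syndrome = {}
    for w, neighbors in adj.items():
        for u, v in combinations(sorted(neighbors), 2):
            ok = w not in faulty and u not in faulty and v not in faulty
            syndrome[(w, u, v)] = 0 if ok else 1
    return syndrome

# 4-cycle 0-1-2-3-0 with node 2 faulty:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
syn = mm_star_syndrome(adj, faulty={2})
```

A diagnosis algorithm then works backwards: given only `syn`, it must identify a fault set consistent with the observed comparison results.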
APA, Harvard, Vancouver, ISO, and other styles
34

Yao, Lihong. "Evolving Starburst Model of FIR/sub-mm/mm Line Emission and Its Applications to M82 and Nearby Luminous Infrared Galaxies." Thesis, 2009. http://hdl.handle.net/1807/26484.

Full text
Abstract:
This thesis presents a starburst model for far-infrared/sub-millimeter/millimeter (FIR/sub-mm/mm) line emission of molecular and atomic gas in an evolving starburst region, which is treated as an ensemble of non-interacting hot bubbles driving spherical shells of swept-up gas into a surrounding uniform gas medium. These bubbles and shells are driven by winds and supernovae within massive star clusters formed during an instantaneous starburst. The underlying stellar radiation from the evolving clusters affects the properties and structure of photodissociation regions (PDRs) in the shells, and hence the spectral energy distributions (SEDs) of the molecular and atomic line emission from these swept-up shells and the associated parent giant molecular clouds (GMCs) contain a signature of the stage of evolution of the starburst. The physical and chemical properties of the shells and their structure are computed using a simple, well-known similarity solution for the shell expansion, a stellar population synthesis code, and a time-dependent PDR chemistry model. The SEDs for several molecular and atomic lines ($^{12}$CO and its isotope $^{13}$CO, HCN, HCO$^+$, C, O, and C$^+$) are computed using a non-local thermodynamic equilibrium (non-LTE) line radiative transfer model. By comparing our models with the available observations of nearby infrared-bright galaxies, especially M 82, we constrain the models and, in the case of M 82, provide estimates for the age of the recent starburst activity. We also derive the total H$_2$ gas mass in the measured regions of the central 1 kpc starburst disk of M 82. In addition, we apply the model to represent various stages of starburst evolution in a well-known sample of nearby luminous infrared galaxies (LIRGs).
In this way, we interpret the relationship between the degree of molecular excitation and the ratio of FIR to CO luminosity as possibly reflecting different stages in the evolution of star-forming activity within their nuclear regions. We conclude with an assessment of the strengths and weaknesses of this approach to dating starbursts, and suggest future work for improving the model.
APA, Harvard, Vancouver, ISO, and other styles
35

Chang, Pei-Lun, and 張培倫. "Using CTPN to Model a Hybrid Material Handling System in a 300 mm Fab." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/70070981111778167301.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
93
This study investigates transportation problems in semiconductor fabrication. Due to the weight and size of a 300 mm wafer lot, it is difficult for operators to transport lots, and therefore the automated material handling system (AMHS) is widely employed. Nevertheless, in practice there are several special situations where operators perform better than the AMHS. To date, fully automated transportation within an intrabay is still difficult to achieve, so a hybrid transportation system (HTS) is generally adopted. Hence, this study focuses on the optimal design of an HTS that incorporates overhead hoist transports (OHTs) with operators. The proposed methodology exploits colored timed Petri nets (CTPN) to model the HTS; since the model can be applied to diverse areas of the intrabay system, the modeling tool for analyzing the HTS has to be generalized. The model structure is separated from the model configuration, which avoids the need to modify the model structure for a different problem configuration. The HTS comprises two sub-systems, the transportation system and the production system. In constructing the transportation system, one must simultaneously consider the different behavior of operators and OHTs and the different transportation situations, as well as the different production behavior in the different fabrication areas. The two sub-systems are then combined, and the CTPN-based hybrid transport system is transformed into a three-phase discrete event simulation system for further analysis. After simulating different scenarios, the response surface method (RSM) is used to obtain the optimal allocation of OHTs and operators. The results of this study provide the information needed to support allocation decisions for OHTs and operators.
Besides, the CTPN-based HTS simulator is a general model which can be applied to diverse areas of semiconductor fabrication.
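At its core, the CTPN machinery rests on the Petri-net firing rule: a transition is enabled when every input place holds enough tokens, and firing moves tokens from input to output places. A minimal (uncolored, untimed) sketch; the place and transition names are illustrative, not taken from the thesis model:

```python
# Minimal Petri-net firing rule (uncolored, untimed). The colored,
# timed extensions used for the HTS model attach token data and
# clocks on top of exactly this mechanism. Names are illustrative.

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# A hypothetical 'dispatch' transition: consumes an idle OHT and a
# waiting lot, produces a busy OHT carrying the lot.
m0 = {"oht_idle": 1, "lot_waiting": 2}
m1 = fire(m0, pre={"oht_idle": 1, "lot_waiting": 1}, post={"oht_busy": 1})
```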
APA, Harvard, Vancouver, ISO, and other styles
36

Barbosa, Patrícia Manuela da Silva. "Análise e monitorização estratégica de mercado na MM+A Branding." Master's thesis, 2014. http://hdl.handle.net/10400.14/36128.

Full text
Abstract:
Contrary to what one might think, strategic market analysis and monitoring is one of the most important pillars of the formulation and implementation of a successful strategy. The rapid evolution of the market, the intensification of competition between firms, and changes in consumer behaviour are major challenges that can be primary sources of opportunities and know-how. The internship at mm+a branding aims to contribute to the improvement of the process of strategic market analysis and monitoring implemented by the company. The work developed during the internship culminates in a case study that enabled the development of a sound methodology for strategic market analysis, the 1'Strategic Analysis Model (1'SAM), a strategic tool with real contributions to the effectiveness of brand management. This model not only adds value to future branding projects of mm+a branding, but also maximizes the value delivered to its customers.
APA, Harvard, Vancouver, ISO, and other styles
37

Dykstra, Daniel William. "Elucidating binding modes of zuonin A enantiomers to JNK1 via in silico methods." Thesis, 2013. http://hdl.handle.net/2152/25261.

Full text
Abstract:
Aberrant JNK signaling can result in two main forms of disease in humans: 1) neurological, coronary, hepatobiliary, and respiratory diseases and 2) autoimmune, inflammatory, and cancer conditions. Enantiomers of the lignan zuonin A, (-)-zuonin A and (+)-zuonin A, have been shown to bind to JNK isoforms with similar affinity and disrupt protein-protein interactions at JNK's D-recruitment site, making them good candidates for specific non-ATP-competitive inhibitors. However, (-)-zuonin A inhibits 80% of JNK-catalyzed reactions at saturating levels, while (+)-zuonin A inhibits only 15%. Molecular docking and molecular dynamics simulations were performed to gain a better understanding of how these inhibitors interact with JNK. The results of this study provide an alternative binding mode for (-)-zuonin A, compared to the one proposed in a previous study, in which (-)-zuonin A interacts with JNK via an induced-fit mechanism by forming a larger pocket for itself near the highly conserved [phi]A-X-[phi]B recognition site, a dynamic move not seen in (+)-zuonin A simulations, which may help explain their different inhibition patterns.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Mujiangshan. "Reliability of interconnection networks." Thesis, 2019. http://hdl.handle.net/1959.13/1404487.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
A graph is a type of mathematical model used to study the relationships among entities; the theory of graphs is called graph theory. It began in 1736, with Leonhard Euler's paper on the Seven Bridges of Königsberg, and thus has 283 years of history. In computer science, the term "interconnection networks" refers to a set of interconnected elements, for example a computer network where computers are connected by wires, or the Internet of Things (IoT), connected wirelessly. There are two types of network: static and dynamic. Static networks are hard-wired and their configurations do not change; the structure, also called the topology, signifies that the nodes are arranged in a specific shape which is maintained throughout the network. In this thesis, we focus on static networks. In graph theory, graphs are used to model the topology of networks, whether networks of communication, data organization, computational devices, or the flow of computation. For instance, the link structure of a local area network can be represented by an undirected graph in which the vertices represent computers and edges represent connections between two computers. A similar approach can be applied to problems in social media, travel, biology, computer design, mapping the progression of neurodegenerative diseases, and many other fields. Graph models can be directed, undirected, or weighted, depending on the properties of the network being studied. Fault-tolerance of networks is an important property: it enables a system to continue operating properly in the event of the failure of one or more of its components, and is particularly sought after in high-availability or life-critical systems. We are interested in the fault-tolerance of networks.
Considering the corresponding graph model of a network, the connectivity of the graph measures how resistant the graph is to node (link) removal. In graph theory there is a set of fault-tolerance-related parameters, such as restricted connectivity, extra connectivity, etc., which give refined information about how robust a network is. The performance of a distributed system is significantly determined by the choice of network topology. Desirable properties of an interconnection network include low degree, low diameter, symmetry, low congestion, high connectivity, and high fault-tolerance. For the past several decades, there has been active research on a class of graphs called Cayley graphs, because this type of graph possesses many of the above properties. Many Cayley graphs based on permutation groups have proven suitable for designing interconnection networks, such as star graphs [1, 2, 47], hypercubes [8], pancake graphs [2, 79], the shuffle-exchange permutation network [50], and the rotation-exchange network [110]. These graphs are symmetric, regular, and share the desirable properties described above. In this thesis, we study the connectivity and diagnosability of some popular network structures, for instance Cayley graphs generated by transpositions, the expanded k-ary n-cube, and the locally twisted cube.
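Vertex connectivity, the central robustness measure mentioned above, is the smallest number of vertices whose removal disconnects the graph. A brute-force sketch, feasible only for tiny graphs, shown on the 3-dimensional hypercube Q3 (whose connectivity equals its degree, 3):

```python
from itertools import combinations

# Brute-force vertex connectivity: smallest k such that removing some
# k vertices disconnects the graph (n-1 for a complete graph).
# Exponential in the graph size; illustration only.

def connected(vertices, adj):
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in adj[v] if u in vertices and u not in seen)
    return seen == set(vertices)

def vertex_connectivity(adj):
    vs = set(adj)
    for k in range(len(vs) - 1):
        for cut in combinations(vs, k):
            if not connected(vs - set(cut), adj):
                return k
    return len(vs) - 1

# Q3: vertices are the integers 0..7; edges join integers whose
# binary representations differ in exactly one bit.
q3 = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
print(vertex_connectivity(q3))  # 3
```

Removing the three neighbors of any vertex isolates it, and no smaller cut works, matching the classical result that the n-dimensional hypercube is n-connected.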
APA, Harvard, Vancouver, ISO, and other styles
39

Chuang, Sua-hua, and 莊淑華. "Discussing the Mathematics theorem of System Dynamic by the transition between MM and SDM to help validating model." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/kv2eww.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management
97
Mathematical model-building is a kind of mathematical thinking that uses mathematical symbols and methods to build a tool which can describe and solve practical problems. System Dynamics, in turn, solves complicated non-linear dynamic problems: it uses diagram-based model-building software to run dynamic simulations, behind which lies the computation of a set of mathematical equations. The purpose of this research is, via the transition between the Mathematical Model and the System Dynamics Model, to discuss the mathematical principles of System Dynamics and to avoid distorting the model, and making model validation harder, through the model-builder's misuse of functions. Using the conclusions of the transition will help validate the model and raise its reliability and efficiency. From the conclusions of this research, we find that a System Dynamics model is a non-linear dynamic mathematical model, and that the two forms are interchangeable. From the transition we also find that the graph function used in System Dynamics model-building is a piecewise-linear approximation of a function, and that the numerical settings of the graph function can influence the system behavior. After the transition from a System Dynamics model to a mathematical model, we can use other mathematical software to draw the phase-plane plot or phase portrait, which clearly analyzes the system behavior in any situation and validates the correctness of the model construction.
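The "graph function" identified above as a piecewise-linear approximation can be sketched as a lookup table with linear interpolation, clamped at the end points (the table values here are illustrative):

```python
from bisect import bisect_right

# A System Dynamics "graph function" is a lookup table interpreted by
# piecewise-linear interpolation, clamped outside the table range.
# The table values below are illustrative, not from the thesis.

def graph_function(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    def f(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect_right(xs, x) - 1
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])
    return f

effect = graph_function([(0, 1.0), (1, 0.8), (2, 0.5)])
print(effect(0.5))  # 0.9
```

Because the simulated behavior depends on these tabulated points, as the thesis notes, checking them against the underlying mathematical model is part of model validation.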
APA, Harvard, Vancouver, ISO, and other styles
40

Chiang, Cheng-Tao, and 江政道. "Validation of the crew of 400-mm sanitary sewer by productivity model-using Pingtung country and Dashu Township as an example." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/03023873141745208265.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
98
At present, empirical rules are mostly used to estimate the construction schedule of sanitary sewers in Taiwan. In recent years, although many researchers have studied crew construction rates based on Thomas's productivity theory and multiple regression models, few studies have verified and modified the crew-scheduling productivity model. Thus, this study estimates the difference between model-predicted and actual productivity, using Pingtung City and Dashu Township as the actual cases. This research used the crew of a 700-mm jacking pipe as the model-predicted productivity (7.57 wh/m) to predict the productivity of case A and case B (400-mm jacking pipe). The results indicated that both model-predicted productivities are higher than the actual productivity. We then calculated the ratios of optimal to model-predicted productivity (0.4465 and 0.2867, respectively). A comparison of construction conditions shows that the overestimation could be related to the smaller pipe-jacking diameter, the longer pipe-jacking length, and the better geological conditions. Therefore, if the factors driving overestimation can be controlled before construction, the adjustment factor of the crew-scheduling productivity model will help the contractor accurately estimate construction time and manpower arrangements, and construction cost and efficiency can be controlled effectively.
APA, Harvard, Vancouver, ISO, and other styles
41

Hwang, Meng-Ruey, and 黃盟瑞. "Validation of the crew of 300-mm sanitary sewer by productivity model -using Pingtung country and Dashu Township as an example." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/06146310151055140297.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
98
At present, empirical rules are mostly used to estimate the construction schedule of sanitary sewers in Taiwan. In recent years, although many researchers have studied crew construction rates based on Thomas's productivity theory and multiple regression models, few studies have verified and modified the crew-scheduling productivity model. Thus, this study estimates the difference between model-predicted and actual productivity, using Pingtung City and Dashu Township as the actual cases. This research used the crew of a 700-mm jacking pipe as the model-predicted productivity (7.57 wh/m) to predict the productivity of case I and case II (300-mm jacking pipe). The results indicated that both model-predicted productivities are higher than the actual productivity. We then calculated the ratios of actual to model-predicted productivity (0.515 and 0.211, respectively). A comparison of construction conditions shows that the overestimation could be related to the smaller pipe-jacking diameter, the longer pipe-jacking length, and the better geological conditions. Therefore, if the factors driving overestimation can be controlled before construction, the adjustment factor of the crew-scheduling productivity model will help the contractor accurately estimate construction time and manpower arrangements, and construction cost and efficiency can be controlled effectively.
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Ku-Yeh, and 林谷燁. "Validation of the crew of 500 mm sanitary sewer by Productivity Model-using Pingtung country A and B as an example." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/48726486854185434658.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
98
At present, empirical rules are mostly used to estimate the construction schedule of sanitary sewers in Taiwan. In recent years, although many researchers have studied crew construction rates based on Thomas's productivity theory and multiple regression models, few studies have verified and modified the crew-scheduling productivity model. Thus, this study estimates the difference between model-predicted and actual productivity, using Pingtung cases A and B as the actual cases. This research used the crew of a 700-mm jacking pipe as the model-predicted productivity (7.57 wh/m) to predict the productivity of case A and case B (500-mm jacking pipe). The results indicated that both model-predicted productivities are higher than the actual productivity. We then calculated the ratios of actual to model-predicted productivity (0.7398 and 0.3567, respectively). A comparison of construction conditions shows that the overestimation could be related to the smaller pipe-jacking diameter, the shorter pipe-jacking length, and the better geological conditions. Therefore, if the factors driving overestimation can be controlled before construction, the adjustment factor of the crew-scheduling productivity model will help the contractor accurately estimate construction time and manpower arrangements, and construction cost and efficiency can be controlled effectively.
APA, Harvard, Vancouver, ISO, and other styles
43

Chou, Chien-Hung, and 周建宏. "Validation of the crew of 800-mm sanitary sewer by productivity model--using Nanzih and Fengshan districts of Kaohsiung as an example." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/38314698229437304236.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
99
In recent years, although many researchers have studied crew construction rates based on Thomas's productivity theory and multiple regression models, few studies have verified and modified the crew-scheduling productivity model under different geological conditions. Thus, this study estimates the difference between model-predicted and actual productivity, using the Nanzih and Fengshan districts of Kaohsiung City as the actual cases. This research used the crew of a 700-mm jacking pipe as the model-predicted productivity (7.57 wh/m) to predict the productivity of case A and case B (800-mm jacking pipe). The results indicated that both model-predicted productivities are higher than the actual productivity. We then calculated the ratios of optimal to model-predicted productivity (0.4465 and 0.2867, respectively). A comparison of construction conditions shows that the overestimation could be related to the deeper groundwater level and the better geological conditions.
APA, Harvard, Vancouver, ISO, and other styles
44

Chiu, Hou-Chun, and 邱厚郡. "Validation of the pipe jacking crew of 300-mm sanitary sewer by productivity model-using Rende district of Tainan city as an example." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/dnufa7.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
105
This research used the Fongshan and Niaosong districts of Kaohsiung City as cases and the crew of a 700-mm jacking pipe as a control model for productivity prediction (7.57 wh/m). The investigation focuses especially on the effect of gravel formations on pipe-jacking construction. The actual productivity was calculated by statistical analysis: the optimal productivities of Fongshan district (case A) and Niaosong district (case B) are 7.96 wh/m and 4.99 wh/m, respectively. Comparing the two practical cases with the control model shows that only case A is higher than the control model, although both are lower once diameter and distance are taken into account, from which it is inferred that pipe-jacking construction in gravel formations is more difficult. Consequently, when the control model is used to estimate actual productivity in the future, the correction coefficients are 1.051 (case A) and 0.660 (case B), respectively.
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Chi-Chiuan, and 吳啟全. "Validation of the crew of 300-mm sanitary sewer by productivity model-using Cijin and Cushan districts of Kaohsiung city as an example." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/56951261097661314086.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
100
This research used the Cijin and Cushan districts of Kaohsiung as cases and the crew of a 700-mm jacking pipe as the model-predicted productivity (7.57 wh/m) to predict the productivity of Cijin district (case A) and Cushan district (case B), both 300-mm jacking pipe. The subject of investigation is the size distribution of the geological material and the irregular shell fragments (Lao-Gu stone). The results indicated that both model-predicted productivities are higher than the actual productivity, but lower when compared at the same diameter and distance. We then calculated the optimal productivities as 3.01 wh/m and 4.22 wh/m, with correction coefficients of 0.398 and 0.558, respectively. The lower values are attributed to the geology, which reduces productivity by 1.42 wh/m and 2.63 wh/m.
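The correction coefficient used across these pipe-jacking studies is simply the ratio of observed (optimal) productivity to the 700-mm control model's 7.57 wh/m. Using the figures quoted in this abstract:

```python
# Correction coefficient = actual (optimal) productivity / model prediction.
# Figures from the abstract: control model 7.57 wh/m (700-mm pipe);
# optimal productivities 3.01 and 4.22 wh/m for the two cases.

MODEL_WH_PER_M = 7.57

def correction_coefficient(actual_wh_per_m: float) -> float:
    return actual_wh_per_m / MODEL_WH_PER_M

print(round(correction_coefficient(3.01), 3))  # 0.398
print(round(correction_coefficient(4.22), 3))  # 0.557 (the abstract reports 0.558)
```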
46

Matunová, Petra. "Studium termodynamických a kinetických parametrů interakcí oligomerních modelů DNK s organokovovými komplexy aktivními v protirakovinné léčbě stanovených metodami kvantové chemie a kombinovanými QM/MM metodami." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-352318.

Full text of the source
Abstract:
Platinum and ruthenium complexes have been proven active in anticancer treatment. The common chemotherapeutics currently in use have many side effects; therefore, drugs with fewer negative impacts are being intensively sought. The first part of the thesis focuses on the study of cis-[Pt(NH3)2Cl2] (cisplatin, DDP) and four potential platinum anticancer agents: PtCl2(diaminocyclohexane), PtCl2(NH3)(cyclohexylamine) (JM118), cis-[PtCl2(NH3)(piperidine)] and trans-[PtCl2(NH3)(thiazole)]. Thermodynamic and kinetic parameters of the reactions of these complexes, in semi-hydrated and fully hydrated form, with guanine were studied using QM methods; the reaction with guanine is the key process initiating the anticancer activity. Analyses of electron density were performed at the B3LYP/6-311++G(2df,2pd) level of theory with the IEF-PCM model. The second part of the thesis studies the reaction of the so-called 'piano stool' Ru(II) transition-metal complex [Ru(II)(η6-p-cymene)(nalidixic acid)(H2O)]2+, first with guanine using QM methods and then with a ds-DNA model using QM/MM methods. The reaction site, described by the QM method, comprises two consecutive guanines and the Ru(II) complex. Analyses of thermodynamic and kinetic parameters and of electron density were performed at the B97D/6-31G* level of theory. All the...
47

Chen, Zn-Wen, and 陳咨汶. "Validation of The Pipe Jacking Crew of 300-mm Sanitary Sewer by Productivity Model-using The Fifth Redevelopment Zone in Tainan City as An Example." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/8rn6n4.

Full text of the source
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
105 (ROC academic year)
This research used the Fifth Redevelopment Zone of Tainan City as cases and the crew of a 700-mm pipe-jacking operation as a control model for productivity prediction (7.57 wh/m). The investigation focuses in particular on the effect of gravel formations on pipe-jacking construction. The actual productivities were calculated by statistical analysis. The optimal productivities of case A and case B in Tainan are 3.417 wh/m and 1.937 wh/m respectively; compared with the control model, and after accounting for pipe diameter and jacking distance, both are lower. This suggests that pipe-jacking construction in gravel formations is more difficult. Consequently, the control model can be used to estimate actual productivity in future work; the correction coefficients are 0.451 (case A) and 0.256 (case B) respectively.
48

Huang, Yao-Nan, and 黃耀南. "Validation of the pipe jacking crew in gravel formation of 300-mm sanitary sewer by productivity model-using Fongshan and Niaosong districts of Kaohsiung city as an example." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/65079842797665101510.

Full text of the source
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Civil Engineering and Disaster Prevention Technology
102 (ROC academic year)
This research used the Fongshan and Niaosong districts of Kaohsiung City as cases and the crew of a 700-mm pipe-jacking operation as a control model for productivity prediction (7.57 wh/m). The investigation focuses in particular on the effect of gravel formations on pipe-jacking construction. The actual productivities were calculated by statistical analysis. The optimal productivities of Fongshan district (case A) and Niaosong district (case B) are 8.07 wh/m and 5.25 wh/m respectively. Comparing the two practical cases with the control model shows that only case A's optimal productivity exceeds the control model's, although both cases are lower once pipe diameter and jacking distance are taken into account. This suggests that pipe-jacking construction in gravel formations is more difficult. Consequently, the control model can be used to estimate actual productivity in future work; the correction coefficients are 1.066 (case A) and 0.693 (case B) respectively.
49

Nejatbakhsh, Esfahani Nazereh. "Interoperability of Traffic Infrastructure Planning and Geospatial Information Systems." Doctoral thesis, 2016. https://tud.qucosa.de/id/qucosa%3A31143.

Full text of the source
Abstract:
Building Information Modelling (BIM), as a model-based design approach, makes it possible to investigate multiple solutions in the infrastructure planning process. The most important reason for implementing model-based design is to support designers and to increase communication between the different design parties. It decentralizes and coordinates team collaboration and enables faster, lossless exchange and management of project data across extended teams and external partners over the project lifecycle. Infrastructure comprises the fundamental facilities, services and installations needed for the functioning of a community or society, such as transportation, roads, communication systems, water and power networks, as well as power plants. Geospatial Information Systems (GIS), as the digital representation of the world, are systems for maintaining, managing, modelling, analyzing and visualizing world data, including infrastructure. High-level infrastructure suites mostly support analyzing an infrastructure design against international or user-defined standards. This regulation-based design minimizes errors, reduces costly design conflicts, saves time and provides consistent project quality, yet mostly in standalone solutions. Infrastructure tasks usually require both model-based and regulation-based design packages and deal with cross-domain information, but the corresponding data is split across several domain models. Moreover, infrastructure projects demand many decisions at both governmental and private levels, each considering different data models. A lossless flow of project data, as well as of documents such as regulations, across the project team, stakeholders, and governmental and private levels is therefore highly important. Yet infrastructure projects were largely absent from product-modelling discourse for a long time. Thus, as explained in chapter 2, interoperability is needed in infrastructure processes.
A Multimodel (MM) is an interoperability method that bundles heterogeneous data models from various domains into a container while keeping their original formats. Existing interoperability methods, including existing MM solutions, cannot satisfactorily fulfill the typical demands of infrastructure information processes, such as dynamic data resources and a huge number of inter-model relations. Chapter 3 therefore investigates, as the concept of infrastructure information modelling, a method for the loose, rule-based coupling of exchangeable heterogeneous information spaces. The hypothesis extends the existing MM to a rule-based Multimodel, named extended Multimodel (eMM), with semantic rules instead of static links; the semantic rules describe relations between data elements of various models dynamically in a link database. Much of the confusion about geospatial data models arises from their diversity: in some data models spatial IDs are the basic identities of entities, while in others there are no IDs at all. For geospatial data, the data structure is therefore more important than the data model, and there are always spatial indexes that enable access to the geodata. The most important commonality of the data models involved in infrastructure projects is their spatiality. As explained in chapter 4, the infrastructure information modelling method for interoperation in spatial domains generates interlinks through the spatial identity of entities. Match finding through spatial links enables any data models sharing a spatial property to be interlinked. Through such spatial links, each entity receives from the other data models the spatial information related to it by virtue of sharing an equivalent spatial index; this information becomes the virtual properties of the object. The thesis uses the nearest-neighbour algorithm for spatial match finding and applies filtering and refining approaches.
For the abstraction of the spatial matching results, hierarchical filtering techniques are used to refine the virtual properties. These approaches focus on two main application areas: the product model and the Level of Detail (LoD). For the eMM suggested in this thesis, a rule-based interoperability method between arbitrary data models of the spatial domain has been developed; its implementation enables lossless transactions of data in spatial domains. The system architecture and the implementation, applied to the case study of this thesis, namely infrastructure and geospatial data models, are described in chapter 5. Achieving the aforementioned aims reduces whole-lifecycle project costs, increases the reliability of the comprehensive fundamental information, and consequently supports independent, cost-effective, aesthetically pleasing and environmentally sensitive infrastructure design.
Table of contents: Introduction (general view; problem statement; objectives; approach; structure of thesis). Interoperability in infrastructure engineering (state of interoperability; interoperability of GIS and BIM; interoperability of GIS and infrastructure; main challenges and related work; infrastructure modelling in geospatial context: LandXML infrastructure data standards, CityGML geospatial data standards, LandXML and CityGML; interoperability and Multimodel technology; limitations of existing approaches). Infrastructure information modelling (Multimodel for geospatial and infrastructure data models; linking approach, querying and filtering; virtual properties via link model; Multimodel as an interdisciplinary method; using Level of Detail (LoD) for filtering). Spatial modelling and processing (spatial identifiers; spatial indexes; tree-based spatial indexes; nearest neighbourhood as a basic link method; hierarchical filtering; other functional link methods; advances and limitations of functional link methods). Implementation of the proposed IIM method (implementation; case study). Conclusion (summary; discussion of results; future work). Bibliography.
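The nearest-neighbour link method described above can be sketched as follows; the entity layout, names and the brute-force search are illustrative assumptions, not the thesis implementation (which would use a tree-based spatial index):

```python
import math
from typing import NamedTuple

class Entity(NamedTuple):
    id: str   # identifier within its own data model (e.g. a LandXML or CityGML object)
    x: float  # shared spatial property: planar coordinates
    y: float

def nearest_neighbour_links(model_a, model_b, max_dist=5.0):
    """Link each entity of model_a to its spatially nearest entity of model_b.

    Brute-force O(n*m) search for clarity; a tree-based spatial index would
    replace this loop in practice. Links beyond max_dist are filtered out.
    """
    links = []
    for a in model_a:
        b_near = min(model_b, key=lambda b: math.hypot(a.x - b.x, a.y - b.y))
        if math.hypot(a.x - b_near.x, a.y - b_near.y) <= max_dist:
            links.append((a.id, b_near.id))  # one entry of the link model
    return links

# Hypothetical entities from two heterogeneous models sharing only spatiality:
roads = [Entity("road-17", 10.0, 4.0)]
buildings = [Entity("bldg-3", 11.0, 5.0), Entity("bldg-9", 80.0, 80.0)]
print(nearest_neighbour_links(roads, buildings))  # -> [('road-17', 'bldg-3')]
```

The resulting (id, id) pairs stand in for link-model entries through which one entity exposes the other's attributes as virtual properties; the distance cut-off plays the role of the filtering step.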
50

Трунін, К. С., та Kostiantyn S. Trunin. "Математична модель динаміки гнучкого зв’язку морської прив’язної системи з урахуванням впливу кручення гнучкого зв’язку на його силу розтягування". Thesis, 2021. http://eir.nuos.edu.ua/xmlui/handle/123456789/5035.

Full text of the source
Abstract:
Трунін, К. С. Математична модель динаміки гнучкого зв’язку морської прив’язної системи з урахуванням впливу кручення гнучкого зв’язку на його силу розтягування = The mathematical model of flexible link marine tethered system dynamic’s with account of torsion to it tensile force / К. С. Трунін // Матеріали XII міжнар. наук.-техн. конф. "Інновації в суднобудуванні та океанотехніці". – Миколаїв : НУК, 2021. – С. 115–119.
An important characteristic of a flexible link (FL) is its resistance to torsion, which arises as the link runs onto a block and bends over it, and which must be taken into account under operating conditions. A method for determining the vectors of the generalized torsion forces of the FL is proposed. The effect of FL torsion on its tensile force is investigated on specific examples; in a number of cases, FL torsion noticeably affects the motion of the underwater tethered system as a whole. The development of a mathematical model (MM) of the dynamics of a marine tethered system (MTS) that accounts for torsion is therefore an important and topical task.
An important characteristic of a flexible link (FL) is its rigidity in bending (RB), which should be taken into account under regular service conditions. In service, the elements of a rope (wire) undergo tension and bending combined with torsion. A method for calculating the vectors of the generalized bending forces of the FL is proposed. One cause of torsional stresses in the power elements of an Underwater Tethered System (UTS) is interaction with ship equipment: spiral winding onto the winch drum, friction on the flanges of pulleys or winch drums, and bends over various blocks and rolls all induce torsion. Torsional stresses in the FL may also have technological causes related to manufacture, storage, transportation and placement on the drums of the ship's winch.