To view other types of publications on this topic, follow the link: Optimisation of experimental protocols.

Dissertations on the topic "Optimisation of experimental protocols"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Optimisation of experimental protocols".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, where the relevant parameters are available in the metadata.

Browse dissertations from many different fields of study and compile a correctly formatted bibliography.

1

Pavkovic, Bogdan. „Vers le futur Internet d'Objets au travers d'une optimisation intercouche des protocoles standardisés“. PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00804540.

Full text of the source
Annotation:
The Internet of Things (IoT) paradigm envisages enriching the current Internet with a large number of intelligent communicating devices. Wireless sensor networks (WSNs) exploit devices with limited energy resources, equipped with sensors, to collect real-time measurements (such as temperature, radioactivity or CO2); such networks are particularly relevant for monitoring, telemetry and the prevention of natural disasters. However, this type of network poses major problems, such as the efficient use of limited energy resources and the transparent handling of failed nodes without human intervention. The Internet of Things will only be able to integrate autonomous sensor networks if the protocols are standardised and scale well. The contributions of this thesis are as follows: (1) we experimentally characterised a multi-hop radio network by statistically analysing a large volume of measurements from an experimental platform operated by Orange; our analysis covers the characterisation of a link and its quality, as well as the dynamics of the network; (2) we proposed modifications to the IEEE 802.15.4 standard so that it can coexist efficiently with RPL, the current standard routing protocol of the Internet of Things; in particular, we propose to use a directed acyclic graph structure in order to exploit a mesh topology and compensate for the possible failure of a node, and we also proposed simple distributed superframe-scheduling algorithms suited to this topology; (3) since the choice of parents at the MAC level in a directed acyclic graph structure is decisive for the quality of the routes available at the network layer, we proposed a parent-selection algorithm based on several metrics, arriving at a structure that balances the load, limits congestion points, uses good-quality radio links and limits congestion at the MAC level; (4) finally, we presented mechanisms for providing quality of service in a stack based on IEEE 802.15.4 and RPL; our opportunistic, multipath routing extension improves packet delivery before a deadline while minimising overhead and energy consumption compared with the original version of RPL.
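As an illustration of the multi-metric parent selection sketched in this abstract, the following minimal example scores candidate parents by a weighted combination of link quality, queue load and DAG rank. The weights, fields and scoring rule are assumptions for illustration, not the algorithm from the thesis:

```python
# Illustrative sketch of multi-metric MAC-level parent selection in a DAG
# topology (hypothetical weights; not the thesis's actual algorithm).
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: int
    etx: float        # expected transmission count (link quality; lower is better)
    load: int         # queued packets at the candidate (congestion proxy)
    rank: int         # DAG rank (hop distance towards the sink)

def parent_score(c: Candidate, w_etx=0.5, w_load=0.3, w_rank=0.2) -> float:
    # Lower score is better; the weights are illustrative assumptions.
    return w_etx * c.etx + w_load * c.load + w_rank * c.rank

def choose_parent(candidates: list[Candidate]) -> Candidate:
    # Pick the candidate minimising the combined metric, which tends to
    # balance load while preferring good links and short routes.
    return min(candidates, key=parent_score)

if __name__ == "__main__":
    neighbours = [Candidate(1, etx=1.2, load=4, rank=2),
                  Candidate(2, etx=1.8, load=1, rank=2),
                  Candidate(3, etx=1.1, load=9, rank=3)]
    print(choose_parent(neighbours).node_id)
```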
APA, Harvard, Vancouver, ISO, and other citation styles
2

Gross, Viktoriia. „An integrative approach to characterize and predict cell death and escape to beta-lactam antibiotic treatments“. Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASB035.

Full text of the source
Annotation:
Resistance to first-line antimicrobial drugs is now commonly encountered. In particular, the increasing fraction of commensal and pathogenic Escherichia coli expressing extended-spectrum beta-lactamases and/or carbapenemases is alarming. E. coli is a major cause of common infections such as urinary tract infections, affecting over 150 million people worldwide. Importantly, many infections relapse. Therefore, an in-depth understanding of the susceptibility of E. coli clinical isolates to beta-lactams is essential for proposing effective treatments. Bacteria can escape treatments in many different ways. Resistant bacteria grow and divide normally in the presence of antibiotics; their characterisation is easy using standard diagnostic tests. Resilient bacteria merely survive in the presence of antibiotics and regrow when the antibiotic is removed or degraded. This biphasic behaviour complicates the prediction of treatment outcomes. Resilience to treatment is notably observed in collective antibiotic tolerance, where dead cells release beta-lactamases that degrade the antibiotic in the environment. Standard approaches are not suited to quantifying and understanding the role of resistance and/or resilience. Our main objectives are to quantify the dynamics of cell death during repeated treatments and to quantify the impact of different growth conditions on cell death. First, we developed novel protocols to address variability issues in optical density measurements and to perform colony-forming-unit assays efficiently. Using these techniques, we generated an extensive dataset describing the impact of repeated treatments on different clinical isolates. We calibrated a model of the population response to antibiotics and of the evolution of the environment under collective antibiotic tolerance, previously developed by the team, against this dataset, and we showed that the model accounts for the temporal evolution of both biomass and live cell counts. Further, we demonstrated that this model can predict live cell numbers from biomass measurements. In addition, this work highlighted the in vitro/in vivo gap by assessing the effect of different growth conditions on cell survival. To address this challenge, we studied the bacterial response in human urine and in Mueller-Hinton medium (the medium used for standard antibiotic susceptibility tests), as well as in a defined medium with different carbon sources. First, we observed better survival in urine than in Mueller-Hinton medium, although this result varied with the strain and the antibiotic concentration. Interestingly, the experimental data showed that nutrient concentration had no effect on growth rate but a strong effect on carrying capacity and antibiotic response. Through model calibration and analysis of the identified parameter values, we identified biological processes that could explain the differences in bacterial behaviour between media.
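The calibrated model itself is not reproduced in the abstract, but the qualitative mechanism of collective antibiotic tolerance can be sketched with a toy ODE system: live cells grow logistically and are killed by the antibiotic, while enzyme released by dead cells degrades the drug, producing the biphasic decay-then-regrowth behaviour. All state variables, rates and values here are illustrative assumptions, not the team's model:

```python
# Toy model of collective antibiotic tolerance (illustrative, not the
# thesis's calibrated model): live cells L, dead cells D, antibiotic A.
import numpy as np
from scipy.integrate import solve_ivp

def cat_model(t, y, r, K, kill, deg):
    L, D, A = y
    dL = r * L * (1 - (L + D) / K) - kill * A * L   # logistic growth, drug kill
    dD = kill * A * L                               # killed cells accumulate
    dA = -deg * D * A                               # dead cells release enzyme
    return [dL, dD, dA]                             # that degrades the drug

t = np.linspace(0, 24, 241)                         # 24 h treatment window
sol = solve_ivp(cat_model, (0, 24), [0.01, 0.0, 1.0], t_eval=t,
                args=(0.8, 1.0, 5.0, 2.0))
print(f"live biomass after 24 h: {sol.y[0, -1]:.3f}")  # regrowth after decay
```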
APA, Harvard, Vancouver, ISO, and other citation styles
3

Al-Mahrooqi, Khalid Mohammed Salim. „The Optimisation of Routine Paediatric CT Scanning Protocols“. Thesis, Curtin University, 2015. http://hdl.handle.net/20.500.11937/2552.

Full text of the source
Annotation:
This study presents one of the first comprehensive paediatric CT optimisation processes, approached systematically through three main objectives. The optimised paediatric protocols devised here could have profound implications for clinical practices that use 64-slice or greater CT scanners. The study provides a better understanding of the influence of each acquisition and reconstruction parameter on image quality and radiation dose during routine paediatric imaging.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Atanackov, Nataša. „Trend forecasting by constrained optimisation and method selection protocols“. Thesis, Brunel University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400597.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Donaldson, Stephen Richard. „Global optimisation of communication protocols for bulk synchronous parallel computation“. Thesis, University of Oxford, 1999. http://ora.ox.ac.uk/objects/uuid:5bb12320-92ac-4e9f-a9f1-b9b23947c8ae.

Full text of the source
Annotation:
In the Bulk Synchronous Parallel (BSP) model of parallel communication represented by BSPlib, the relaxed coupling of global computation, communication and synchronisation, whilst providing a definite semantics, does not prescribe exactly when and where communication is to be carried out during the computation. It merely states that communication cannot happen before the application requests it, and that at certain points local computation cannot proceed until updates have been applied from the other participating processors. The nature of the computation and this framework are open to exploitation by the runtime system implementation, and can be made to suit particular physical environments without requiring application program changes. This bulk and global view of parallel computation can be used to implement protocols that both maintain and take into account global state for optimising performance. Such global protocols can provide performance improvements that are not easily achieved with local and greedy strategies, and may in turn be locally sub-optimal. This global perspective and the exploitable nature of BSP computation are applied to congestion avoidance, transport-layer protocols suitable for BSP computation, global stable checkpointing, and work process placement and migration, to achieve better overall performance. An important consideration for composing parallel computer systems into larger systems is that, for the composite to exhibit good performance, the individual components must also do so. However, it is not obvious how the individual components contribute to the global performance. As already mentioned, non-locally optimal strategies might lead to globally optimal performance; but the variance observed at the local level also influences performance. A number of decisions in the transport protocol design and implementation were made so that the observed variance in the protocol's behaviour is minimised, and the BSP model is used to demonstrate why this is required. The analysis also suggests a regression technique which can be applied to sampled global performance data.
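For context, BSP performance arguments of this kind are usually phrased in the standard BSP cost model, in which a superstep costs w + g·h + l. A small sketch (with illustrative parameter values) shows why one overloaded processor sets the global cost, which is what motivates globally balanced communication protocols:

```python
# Standard BSP superstep cost: w + g*h + l, where w is the maximum local
# work, h the maximum number of words any processor sends or receives,
# g the per-word communication cost, and l the barrier synchronisation cost.
def superstep_cost(work: list[int], sent: list[int], recv: list[int],
                   g: float, l: float) -> float:
    w = max(work)                         # the slowest processor dominates
    h = max(max(sent), max(recv))         # h-relation of the superstep
    return w + g * h + l

# Four processors; the single overloaded node (h = 900) sets the global cost.
print(superstep_cost([100, 120, 90, 110], [200, 900, 150, 100],
                     [300, 250, 400, 400], g=4.0, l=50.0))  # -> 3770.0
```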
APA, Harvard, Vancouver, ISO, and other citation styles
6

Hadjitheodosiou, Michael H. „Performance optimisation of multiple access protocols for multiservice VSAT networks“. Thesis, University of Surrey, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362639.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Wyre, Christopher John. „Recombinant protein production in Escherichia coli : optimisation of improved protocols“. Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5743/.

Full text of the source
Annotation:
Recombinant protein production (RPP) is a cornerstone of bioprocessing. This study presents novel analytical techniques and production protocols for RPP in E. coli, particularly for industrial applications. Flow cytometry (FCM) was used to monitor cell physiology and RPP during production of a fluorescent model protein, CheY::GFP, and further FCM applications for monitoring RPP were developed: the amyloidophilic dye Congo red was used to identify inclusion bodies produced under high-stress conditions. FCM analysis of transformants on agar plates identified three populations of varying fluorescence intensity and the progressive transfer of cells from the high-fluorescence population to one of intermediate fluorescence and low culturability; Congo red staining showed this was due to amyloid inclusion-body formation. RPP conditions that minimise physiological stress by reducing temperature and inducer concentration can increase product yields, solubility and biomass yields. The original fermentation protocol used for stress-minimised RPP proved unsuitable for industrial use. Applying stress minimisation to an industrially derived protocol, using early- or late-phase induction and glucose or glycerol as carbon source, generated high yields of biomass, total CheY::GFP and soluble CheY::GFP. These protocols improved biomass generation, product formation and reproducibility over the original stress-minimised and unmodified industrially derived protocols; stress minimisation is therefore of potential industrial use.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Lamoureux, Louis-Philippe. „Theoretical and experimental aspects of quantum cryptographic protocols“. Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210776.

Full text of the source
Annotation:
Quantum mechanics is without doubt the best-verified theory that has ever existed. Looking back, we see that a century of quantum theory has not only changed our perception of the universe we live in, but is also responsible for several technological concepts with the potential to revolutionise our world.

The present dissertation aims to put these potentials forward, in both the theoretical and the experimental domains. More precisely, we first study quantum communication protocols and show that they offer security advantages without equal in classical communication. We then study three specific problems in quantum cloning, where each solution could, in its own way, be exploited in a quantum communication problem.

We begin by describing theoretically the first quantum communication protocol, whose purpose is the distribution of a secret key between two distant parties. This chapter introduces several concepts and theoretical tools needed in the subsequent chapters. The following chapter also serves as an introduction, but leaning towards the experimental side: we present an elegant technique that allows quantum communication protocols to be implemented simply, and then describe original quantum communication experiments based on this technique. More precisely, we introduce the concept of error filtration and use this technique to implement a noisy quantum key distribution that could not be secured without it. We then demonstrate experiments implementing quantum coin tossing and quantum identification.

Secondly, we study quantum cloning problems based on the formalism introduced in the opening chapter. Since it will not always be possible to prove the optimality of our solutions, we introduce a numerical technique that allows us to substantiate our results.

Doctorate in Sciences, specialisation in Physics
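The "first quantum communication protocol" described here is quantum key distribution in the style of BB84. A minimal, idealised sketch of its sifting step (noiseless channel, no eavesdropper, no error filtration) might look as follows:

```python
# Idealised BB84 sketch: random bits and bases, measurement, basis sifting.
import random

def bb84_sift(n_bits: int, seed: int = 0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]   # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]
    # Bob reads the bit faithfully when bases match, a random bit otherwise.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: keep only positions where the bases agree (~half on average).
    return [(a, b) for a, b, ab, bb
            in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]

sifted = bb84_sift(1024)
print(len(sifted), all(a == b for a, b in sifted))  # ~512 bits, all matching
```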

APA, Harvard, Vancouver, ISO, and other citation styles
9

Al-Janabi, Thair. „Design of energy efficient protocols-based optimisation algorithms for IoT networks“. Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17121.

Full text of the source
Annotation:
The increased globalisation of information and communication technologies has transformed the world into the Internet of Things (IoT), realised within the resources of wireless sensor networks (WSNs). Future IoT networks will therefore consist of a high density of connected nodes that suffer from resource limitations, especially energy, and are distributed randomly over harsh, large-scale areas. Accordingly, the contributions of this thesis focus on the development of energy-efficient protocols based on optimisation algorithms, with consideration of resource limitations, adaptability, scalability, node density and the random distribution of nodes over the geographical area. One MAC protocol and two routing protocols, with both a static and a mobile sink, are proposed. The first proposed protocol is an energy-efficient hybrid MAC protocol with a dynamic sleep/wake-up extension to the IEEE 802.15.4 MAC, namely HSW-802.15.4. The model automates the network, enabling it to work flexibly in low- and high-density networks with fewer collisions. A frame structure that offers enhanced exploitation of the TDMA time slots is provided: the hybrid protocol first schedules the TDMA slots and then allocates each slot to a group of devices. A three-dimensional Markov chain is developed to describe the proposed model theoretically. Simulation results show an enhancement in energy conservation of 40%-60% in comparison to the IEEE 802.15.4 MAC protocol. Secondly, an efficient centralised clustering-based whale optimisation algorithm (CC-WOA) is proposed, which employs the concept of software-defined networking (SDN) in its mechanism. The cluster formation process in this algorithm considers the random diversification of node density over the geographical area and includes both sensor resource restrictions and node density in the fitness function. The results show efficient energy conservation in comparison to other protocols. Another clustering algorithm, called the centralised load-balancing clustering algorithm (C-LBCA), is also developed; it uses particle swarm optimisation (PSO) and provides robust load balancing for data gathering in IoT. However, in large-scale networks the nodes, especially the cluster heads (CHs), suffer from higher energy exhaustion. Hence, a centralised load-balancing and scheduling protocol utilising optimisation algorithms is proposed for large-scale IoT networks, named optimised mobile-sink-based load balancing (OMS-LB). This model combines the determination of the optimal path for the mobile sink (MSOpath) and an adjustable set of data aggregation points (SDG) with the cluster formation process to define an optimised routing protocol suitable for large-scale networks. Simulation results show an improvement in network lifespan of up to 54% over the other approaches.
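As an illustration of the kind of fitness function a centralised clustering optimiser (WOA or PSO) might evaluate, the sketch below scores a candidate set of cluster heads by residual energy and average member-to-head distance. The weighting and node representation are assumptions for illustration, not the thesis's actual CC-WOA fitness:

```python
# Illustrative cluster-head fitness for a centralised clustering scheme
# (hypothetical weighting; nodes are (x, y, residual_energy) tuples).
import math

def fitness(ch_candidates, nodes, w_energy=0.6, w_dist=0.4):
    """Score a candidate set of cluster heads: favour high residual energy
    and short member-to-head distances (a density/congestion proxy)."""
    energy_term = sum(e for (_, _, e) in ch_candidates) / len(ch_candidates)
    dist_term = 0.0
    for (x, y, _) in nodes:
        dist_term += min(math.hypot(x - cx, y - cy) for (cx, cy, _) in ch_candidates)
    dist_term /= len(nodes)
    # Higher is better: plenty of energy, small average distance.
    return w_energy * energy_term - w_dist * dist_term

nodes = [(0, 0, 0.9), (10, 0, 0.8), (0, 10, 0.7), (10, 10, 0.95)]
print(fitness([(0, 0, 0.9), (10, 10, 0.95)], nodes))
```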
APA, Harvard, Vancouver, ISO, and other citation styles
10

Vernon, A. J. „Optimisation of ex vivo expansion protocols for cultivated limbal epithelial transplantation“. Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1463939/.

Full text of the source
Annotation:
Limbal epithelial stem cells (LESCs) produce progeny (human limbal epithelial cells, hLECs) responsible for corneal repair and maintenance. The loss, or dysfunction, of LESCs results in LESC deficiency (LESCD), characterised by corneal opacification, neovascularisation and vision loss. Current treatments include LESC transplantation through cultivated limbal epithelial transplantation (CLET). A number of manufacturing and regulatory challenges remain with CLET; this research was therefore performed to address these challenges and improve existing protocols. The aims of this thesis were: first, to improve the starting number of LESCs by investigating to what extent corneal storage media preserve LESCs; secondly, to decrease the risk of zoonotic agent transmission through animal-derived materials by assessing the potential of human-derived feeders (MRC5 fibroblasts) and serum for hLEC expansion; and finally, to examine the feasibility of collagen-based tissue equivalents (TEs) containing surrogate niche cells (human dermal fibroblasts, hDFs) as an alternative delivery method to current protocols utilising human amniotic membrane, a substrate predisposed to inter- and intra-donor variability. Experiments showed that different corneal storage formulations can preserve poorly differentiated cells; however, investigation of other factors (donor age, hLEC isolation) may further improve the starting number of LESCs. Human-derived serum was effective in maintaining hLECs and is an adequate replacement for animal-derived serum in CLET protocols. However, human-derived MRC5 feeders should not replace animal-derived feeders in CLET manufacture, owing to the unfavourable hLEC characteristics observed; future investigation of the cytokines expressed by MRC5 feeders may elucidate factors influencing hLEC behaviour. Finally, changes in hLEC characteristics and increased expression of inflammation-associated mediators demonstrated that hDFs were not a suitable TE surrogate niche cell, although the potential role of such mediators in epithelial-stromal interactions should be explored. In conclusion, these experiments have improved existing CLET protocols, highlighted that lower-risk materials may not always be effective, and shown that the surrounding culture environment is integral to CLET graft success.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Abdullah, Kamarul Amin. „Optimisation of CT protocols for cardiac imaging using three-dimensional printing technology“. Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/18607.

Full text of the source
Annotation:
Objective: This thesis investigates the application of 3D-printing technology for optimising coronary CT angiography (CCTA) protocols, using iterative reconstruction (IR) as a dose optimisation strategy. Methods: In phase one, a novel 3D-printed cardiac insert phantom for the Lungman phantom was developed, and the attenuation values of the printed phantom were compared to CCTA patient and Catphan® 500 images. In phase two, the printed phantom was scanned at multiple dose levels, the datasets were reconstructed using different IR strengths, and image quality characteristics were measured to determine the dose reduction potential. In phase three, the influence of IR strengths combined with low tube voltage was investigated for dose optimisation: the printed phantom and the Catphan® 500 were scanned at different tube currents and voltages, and the results were compared to patient datasets to measure the agreement between phantoms and patients. Results: In phase one, the attenuation values were consistent between the printed phantom, patient and Catphan® 500 images. In phase two, decreasing dose levels significantly increased image noise (p<0.001), while applying various IR strengths yielded a stepwise improvement in image noise, with a dose reduction potential of up to 40%. In phase three, there was a significant interaction between the effects of low tube voltage and IR strength on image quality (all p<0.001) but not on attenuation values, and the mean differences between patient and phantom datasets were small. The optimised CT protocols allowed up to 57% dose reduction in CCTA protocols while maintaining image quality. Conclusions: The 3D-printed cardiac insert phantom can be used to evaluate the effect of IR on dose reduction and image quality. This thesis proposes and validates a new method of developing phantoms for CCTA dose optimisation studies.
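Phantom studies of this kind typically quantify noise as the standard deviation of HU values in a region of interest and express protocol changes as a percentage dose reduction. A minimal sketch of both calculations (illustrative, not the thesis's analysis pipeline):

```python
# Sketch: image-noise and dose-reduction metrics for a phantom ROI
# (illustrative values; the mock image stands in for a CT slice in HU).
import numpy as np

def roi_noise(image: np.ndarray, y: int, x: int, half: int = 10) -> float:
    """Image noise as the standard deviation of HU values in a square ROI."""
    roi = image[y - half:y + half, x - half:x + half]
    return float(roi.std())

def dose_reduction(ref_ctdi: float, test_ctdi: float) -> float:
    """Percent dose reduction of a test protocol relative to a reference."""
    return 100.0 * (ref_ctdi - test_ctdi) / ref_ctdi

rng = np.random.default_rng(0)
image = rng.normal(40.0, 12.0, size=(256, 256))   # mock uniform-phantom slice
print(round(roi_noise(image, 128, 128), 1))       # ~12 HU
print(dose_reduction(10.0, 6.0))                  # a 40% lower-dose protocol
```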
APA, Harvard, Vancouver, ISO, and other citation styles
12

Quintana, Carla Lucía Soto. „Development and optimisation of protocols for surface cleaning of cultural heritage metals“. Master's thesis, Universidade de Évora, 2016. http://hdl.handle.net/10174/20580.

Full text of the source
Annotation:
The conservation and valorisation of cultural heritage is of fundamental importance for our society, since it bears witness to the legacies of human societies. In the case of metallic artefacts, because corrosion is a never-ending problem, the correct strategies for their cleaning and preservation must be chosen. The aim of this project was therefore the development of protocols for cleaning archaeological copper artefacts by laser and plasma cleaning, since these allow artefacts to be treated in a controlled and selective manner. Additionally, electrochemical characterisation of the artificial patinas was performed in order to obtain information on the protective properties of the corrosion layers. Reference copper samples with different artificial corrosion layers were used to evaluate the tested parameters. Laser cleaning tests resulted in partial removal of the corrosion products, but the laser-material interactions melted the corrosion layers that were meant to be preserved. The main obstacle for this process is that the materials that must be preserved show lower ablation thresholds than the undesired layers, which makes proper elimination of dangerous corrosion products very difficult without damaging the artefacts. Different protocols should be developed for different patinas, and real artefacts should be characterised prior to any treatment to determine the best course of action. Low-pressure hydrogen plasma cleaning treatments were performed on two kinds of patinas; in both cases the corrosion layers were partially removed. Total removal of the undesired corrosion products can probably be achieved by increasing the treatment time, the applied power or the hydrogen pressure; since the process is non-invasive and does not modify the bulk material, modifying the cleaning parameters is easy. EIS measurements show that, for the artificial patinas, the impedance increases while the patina is growing on the surface and then drops, probably due to diffusion reactions and a slow dissolution of copper; the dissolution of copper appears to be heavily influenced by diffusion phenomena and the porosity of the corrosion product film. Both techniques show good cleaning results, as long as the proper parameters are used; these depend on the nature of the artefact and the corrosion layers found on its surface.
APA, Harvard, Vancouver, ISO, and other citation styles
13

Masquelier, Michèle. „Leukemia chemotherapy : experimental studies on pharmacological optimisation /“. Stockholm, 2004. http://diss.kib.ki.se/2004/91-7140-046-X/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
14

Nordström, Erik. „Challenged Networking : An Experimental Study of new Protocols and Architectures“. Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9002.

Full text of the source
Annotation:
With the growth of the Internet, the underlying protocols are increasingly challenged by new technologies and applications. The original Internet protocols were, however, not designed for wireless communication, mobility, long disconnection times and varying bandwidths. In this thesis, we study challenged networking and how well old and new protocols operate under such constraints. Our study is experimental: we build network testbeds and measure the performance of alternative protocols and architectures, and we develop novel methodologies for repeatable experiments that combine emulations, simulations and real-world experiments. Based on our results we suggest modifications to existing protocols, and we also develop a new network architecture that matches the constraints of a challenged network, in our case an opportunistic network. One of our most important contributions is the Ad hoc Protocol Evaluation (APE) testbed, which has been used successfully worldwide. The key to its success is that it significantly lowers the barrier to repeatable experiments involving wireless and mobile computing devices. Using APE, we present side-by-side performance comparisons of IETF MANET routing protocols. A somewhat surprising result is that some ad hoc routing protocols perform a factor of 10 worse in the testbed than predicted by a common simulation tool (ns-2). We find that this discrepancy is mainly related to the protocols' sensing abilities, e.g., how accurately they can infer their neighborhood in a real radio environment, and we propose and implement improvements to these protocols based on the results. Our novel network architecture, Haggle, is another important contribution. It is based on content addressing and searching: mobile devices in opportunistic networks exchange content whenever they detect each other, and we suggest that the exchange should be based on interests and searches rather than on destination names and addresses. We argue that content binding should be done late in challenged networks, something which our search approach supports well.
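The reported simulator-versus-testbed gap can be made concrete with a trivial packet-delivery-ratio comparison; the counts below are invented purely to reproduce the order of magnitude mentioned in the abstract:

```python
# Sketch: side-by-side packet-delivery-ratio (PDR) comparison between a
# simulator run and a testbed run (counts are illustrative assumptions).
def pdr(sent: int, received: int) -> float:
    return received / sent if sent else 0.0

runs = {"ns-2": pdr(1000, 950), "APE testbed": pdr(1000, 95)}
factor = runs["ns-2"] / runs["APE testbed"]
print(f"simulation/testbed PDR ratio: {factor:.1f}x")  # ~10x, as reported
```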
APA, Harvard, Vancouver, ISO, and other citation styles
15

Lundgren, Henrik. „Implementation and Experimental Evaluation of Wireless Ad hoc Routing Protocols“. Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4806.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
16

Rani, Kaddour. „Stratégies d’optimisation des protocoles en scanographie pédiatrique“. Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0282/document.

Full text of the source
Annotation:
For the last ten years, computed tomography (CT) procedures and their increased use have been a major source of concern in the scientific community. This concern has been the starting point for several studies aiming to optimise the dose while maintaining diagnostic image quality. It is particularly important to pay attention to dose levels for children (here, from newborn babies to 16-year-old patients): children are more sensitive to ionising radiation and have a longer life expectancy. Optimising CT protocols is a very difficult process owing to the complexity of the acquisition parameters, starting with the individual patient characteristics and taking into account the available CT device and the required diagnostic image quality. This PhD project contributes to the advancement of knowledge by: (1) developing a new approach that minimises the number of test CT examinations while building a predictive mathematical model allowing radiologists to anticipate prospectively how changes in protocols will affect image quality and delivered dose, for four CT scanner models; (2) setting up a Generic Optimised Protocol (based on the CATPHAN 600 phantom) for the four scanner models; (3) developing a methodology to adapt the Generic Optimised Protocol to five sizes of paediatric patient using Size-Specific Dose Estimate (SSDE) calculations; (4) evaluating subjective and objective image quality between the size-based optimised CT protocols and age-based CT protocols; and (5) developing a CT protocol optimisation tool and a tutorial to help radiologists in the optimisation process.
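The SSDE calculation mentioned in point (3) follows AAPM Report 204: the scanner-reported CTDIvol is multiplied by a size-dependent conversion factor. A sketch using the report's exponential fit for the 32 cm body phantom (the example values are illustrative):

```python
# Sketch: Size-Specific Dose Estimate (SSDE) from CTDIvol and patient size,
# using the exponential fit from AAPM Report 204 (32 cm body phantom).
import math

def ssde(ctdi_vol_mgy: float, effective_diameter_cm: float) -> float:
    # Conversion factor f decreases with patient size; coefficients per AAPM 204.
    f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return ctdi_vol_mgy * f

# A small child (~17 cm effective diameter) receives roughly twice the dose
# suggested by the scanner-reported CTDIvol.
print(round(ssde(5.0, 17.0), 1))  # ~9.9 mGy from a reported 5.0 mGy
```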
APA, Harvard, Vancouver, ISO, and other citation styles
17

Uwase, Marie-Paule. „Experimental Comparison of Radio Duty Cycling Protocols for Wireless Sensor Networks“. Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/277807.

Full text of the source
Annotation:
Wireless sensor networks are often battery powered, and therefore their power consumption is of critical importance. Power requirements can be reduced by switching off radios when they are not needed and by using multi-hop communications to reduce the length of the radio links. Multi-hop communications, however, require message routing through the network. The Routing Protocol for lossy networks (RPL) was designed by the Internet Engineering Task Force (IETF) for seamless integration of wireless sensor networks into the Internet. For switching radios on and off, radio duty cycling (RDC) protocols have been added to the traditional medium access control (MAC) protocols. Despite the fact that they belong to different layers of the communications stack, it is intuitively clear that the choice of a specific RDC protocol for saving energy can influence the performance of RPL. Exploring this influence experimentally was the initial goal of this research. A 25-node wireless sensor network using Zolertia Z1 motes and the Contiki software was used for this investigation. Performance measurements without an RDC protocol and with the three different RDC protocols readily available in Contiki were organised, and the results of the experiments were compared. Unfortunately, with all three RDC protocols, serious malfunctions obscured the experimental results. Those malfunctions did not show up in the absence of an RDC protocol, and they could not be reproduced in our simulation studies. To tackle this issue, the behaviour of the RDC protocols was scrutinised by means of experimental set-ups that eliminated as far as possible all non-RDC-related issues. Many, quite varied, malfunctions were discovered, any of which could have caused the observed RPL issues. Further research and better experimental set-ups made clear that all the discovered RDC malfunctions could be attributed to two real-world facts that were not considered by the implementers of the Contiki RDC protocols. The first cause is the small frequency difference between the hardware real-time clocks of stand-alone motes. The second is that the threshold built into the receiver to detect radio activity is much higher than the minimum signal level that the same receiver can decode. Work-arounds were designed for the observed malfunctions and tested by means of a systematic comparison of the performance of the three modified RDC protocols.
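The first root cause, clock frequency differences between motes, is easy to quantify: a fixed parts-per-million offset between two real-time clocks accumulates into a wake-window misalignment that eventually exceeds the listen window of a duty-cycled protocol. A sketch with illustrative numbers (not Contiki's actual timer code):

```python
# Sketch: drift between two motes' real-time clocks and its effect on a
# duty-cycled rendezvous (illustrative numbers).
def misalignment_ms(elapsed_s: float, ppm_offset: float) -> float:
    """Accumulated wake-up misalignment for a given frequency offset (ppm)."""
    # elapsed_s * ppm gives the drift in microseconds; /1000 converts to ms.
    return elapsed_s * ppm_offset / 1000.0

# A 30 ppm offset misaligns wake windows by ~108 ms per hour, far more than
# a listen window of a few ms, so periodic resynchronisation is required.
print(misalignment_ms(3600, 30))  # -> 108.0
```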
Doctorate in Engineering Sciences and Technology
APA, Harvard, Vancouver, ISO, and other citation styles
18

Martínez, Pérez Paula. „Development and Optimization of Experimental Biosensing Protocols Using Porous Optical Transducers“. Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/172541.

Full text of the source
Annotation:
Biosensors are analytical devices with applications in diverse fields and with several advantages over conventional analytical methods, such as the use of small volumes of sample and reagents, their sensitivity and their fast response, without the need for sample pretreatment, expensive equipment or specialised technicians. Nevertheless, this is a relatively new research field with a long way still to go. This doctoral thesis aims to contribute to the field by studying the potential of different porous materials as transducers for the development of real-time, label-free optical biosensors. The proposed materials range from artificially synthesised ones, such as porous silicon (pSi), polymeric nanofibres (NFs) and commercial polymeric membranes, to natural materials with photonic properties that had not yet been exploited for sensing, such as the biosilica exoskeletons of diatoms. All of them have in common their simple production, avoiding expensive and laborious nanofabrication processes. For each material, the optical response is analysed and, where that response allows detection experiments, strategies for biofunctionalisation and implementation in biosensing experiments are developed as well. For pSi and NFs, the fabrication parameters were optimised to obtain an optical response suitable for interrogation. Their surfaces were then functionalised by covalent and non-covalent methods, with different bioreceptors (DNA aptamers and antibodies), to study their potential and their constraints as biosensors. For the commercial membranes and the biosilica exoskeletons of diatoms, the optical response was characterised and refractive-index sensing experiments were carried out to study their sensitivity. Additionally, a biofunctionalisation method for the surface of the diatom exoskeleton was developed, based on the use of cationic polyelectrolytes. As a result, the potential of NFs for the development of biosensors was demonstrated, as was that of commercial membranes for sensors whose application requires low cost rather than high sensitivity. Furthermore, the great potential of the biosilica exoskeleton of diatoms for the development of sensors based on its optical response was revealed. By contrast, the constraints found in the development of pSi-based biosensors illustrate the importance of careful study and optimisation of the porous material structure before it is used for (bio)sensing.
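Refractive-index sensing experiments of the kind described here are commonly summarised by a bulk sensitivity in nm per refractive index unit (nm/RIU), estimated from the spectral shift between two liquids of known index. A minimal sketch with assumed example values:

```python
# Sketch: bulk refractive-index sensitivity of an optical transducer,
# estimated from the spectral shift between two test liquids (illustrative).
def bulk_sensitivity(shift_nm: float, n_low: float, n_high: float) -> float:
    """Sensitivity in nm per refractive index unit (nm/RIU)."""
    return shift_nm / (n_high - n_low)

# E.g. a 1.5 nm fringe shift between water (n = 1.333) and a ~5% NaCl
# solution (n ~ 1.342) gives roughly 167 nm/RIU.
print(round(bulk_sensitivity(1.5, 1.333, 1.342)))
```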
Martínez Pérez, P. (2021). Development and Optimization of Experimental Biosensing Protocols Using Porous Optical Transducers [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172541
APA, Harvard, Vancouver, ISO, and other citation styles
19

Prater, Brock Andrew. „Experimental Comparison of ACR and ICAMRL Magnetic Resonance Imaging Accreditation Protocols“. Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1284754038.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
20

Buchaly, Carsten. „Experimental investigation, analysis and optimisation of hybrid separation processes /“. München : Verl. Dr. Hut, 2009. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=017369025&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
21

Buchaly, Carsten. „Experimental investigation, analysis and optimisation of hybrid separation processes“. München Verl. Dr. Hut, 2008. http://d-nb.info/993260381/04.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
22

Haynes, Rhona Claire. „Internal hip fracture fixation systems : analysis of implant performance for the optimisation of test protocols“. Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320560.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
23

Friginal, López Jesús. „An experimental methodology to evaluate the resilience of ad hoc routing protocols“. Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/18483.

Full text of the source
Annotation:
Friginal López, J. (2013). An experimental methodology to evaluate the resilience of ad hoc routing protocols [Doctoral thesis]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/18483
APA, Harvard, Vancouver, ISO, and other citation styles
24

Mariotti, Erika. „Experimental protocols and analysis methods for real-time assessment of cardiac metabolism“. Thesis, King's College London (University of London), 2014. https://kclpure.kcl.ac.uk/portal/en/theses/experimental-protocols-and-analysis-methods-for-realtime-assessment-of-cardiac-metabolism(1d8de5c5-3d55-43fa-b3f6-38595fce1188).html.

Full text of the source
Annotation:
Cardiovascular disease remains the primary cause of death worldwide, and medical imaging plays a critical and increasing role in both its diagnosis and characterisation. Recent advances in hyperpolarised 13C Magnetic Resonance (MR) allow, for example, imaging of an injected molecule and its downstream metabolites to uncover biochemical changes in the myocardium. Radionuclide imaging using Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT), on the other hand, has significantly higher sensitivity than MR, but the signals from injected tracers and their metabolites are indistinguishable. All these imaging modalities produce dynamic data containing information on the kinetics of their respective tracers, and mathematical models have to be used in order to relate the measured signal or activity to the underlying physiological or biochemical processes. In this thesis, experimental protocols were developed, and semi-quantitative and quantitative methods were evaluated, for the analysis of hyperpolarised 13C dynamic time-series and time-activity curves of PET tracers acquired ex vivo from Langendorff-perfused rat hearts. A number of compartmental models were explored to fit in vitro and ex vivo hyperpolarised 13C time-series acquired for pyruvate and its downstream metabolites, deriving apparent rates of the enzymatic reactions involved in pyruvate metabolic pathways. Compartmental modelling was also used in combination with Monte Carlo simulations to explore the detection limits of transmural cardiac ischaemia in vivo in small rodents using hyperpolarised 13C spectroscopic imaging. The feasibility of using a model-free maximum entropy/nonlinear least squares (MEM/NLS) method for the kinetic analysis of hyperpolarised 13C dynamic data was explored in this thesis for the first time and validated using Monte Carlo simulations and experimental hyperpolarised 13C in vitro time-series. Finally, the feasibility of extending the analysis methods validated for in vivo PET experiments (a spectral-based algorithm, the Patlak graphical plot and the semi-quantitative index RATIO) to the kinetic analysis of time-activity curves of PET tracers acquired ex vivo was also assessed.
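Compartmental fitting of hyperpolarised pyruvate-lactate time-series is commonly done with a two-site exchange model with longitudinal relaxation. A sketch of that standard form (the rate constants and relaxation times here are illustrative, not fitted values from the thesis):

```python
# Sketch: two-site exchange model for hyperpolarised [1-13C]pyruvate -> lactate
# with T1 relaxation losses (parameters are illustrative assumptions).
import numpy as np
from scipy.integrate import solve_ivp

def two_site(t, y, k_pl, k_lp, r1p, r1l):
    p, l = y
    dp = -(k_pl + r1p) * p + k_lp * l   # pyruvate: exchange out, relaxation loss
    dl = k_pl * p - (k_lp + r1l) * l    # lactate: gain from pyruvate, own losses
    return [dp, dl]

t = np.linspace(0, 60, 121)             # 60 s acquisition window
sol = solve_ivp(two_site, (0, 60), [1.0, 0.0], t_eval=t,
                args=(0.05, 0.01, 1/30, 1/25))  # rates in s^-1, T1 ~ 25-30 s
print(sol.y[1].max())                   # peak lactate signal
```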
APA, Harvard, Vancouver, ISO, and other citation styles
25

Ashenden, Peter J. „An experimental system for evaluating cache coherence protocols in shared memory multiprocessors /“. Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09pha824.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
26

Woolhouse, Leanne. „An experimental study of individual differences in intuition : preference and process“. Thesis, University of East London, 1996. http://roar.uel.ac.uk/1261/.

Full text of the source
Annotation:
This research investigated two aspects of intuition: preference and process. The underlying basis of preference for intuition defined by Jung in his theory of psychological types and measured by the sensing-intuition (SN) scale of the Myers-Briggs Type Indicator was explored in two areas: performance on ability tests and individual differences in use of intuition. Process of intuition is defined as the use of unconscious associations to guide decision making. A thinking-aloud protocol technique was used to investigate differences in strategy between sensing and intuitive types on two ability tests. Test instructions and conditions were varied to investigate whether preference or ability underlies this difference. Results indicated that the SN difference is best characterised as a focus on different types of information: concrete reality vs. looking beyond reality to patterns, connections and possibilities. The finding that sensing types could modify their style suggested that this is due to a personality preference that can be overridden rather than an underlying ability difference. The nature of the SN difference was further explored by examining the differences predicted by type theory between the types in the use of intuition. This prediction contrasts with some process theories of intuition which expect few or no individual differences. Results indicated that intuitive types were more accurate and more likely to choose to use intuition than sensing types. Results suggested that preference for different types of information led to use of different strategies on the tasks. Intuitive types tended to focus on feelings of familiarity, which resulted in their accessing intuition in the form of unconsciously learnt associations. Sensing types preferred to focus on concrete information such as conscious memory of prior experience. The research has made contributions by evaluating the theory of psychological types, validating the sensing-intuition scale, and also by demonstrating the existence of individual differences in certain measures of intuition.
APA, Harvard, Vancouver, ISO, and other citation styles
27

Cheng, Qixiang. „Design and experimental characterisation of scalable, low-energy optical switches“. Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708708.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
28

Matassa, Vincenzo. „Optimisation of experimental design and analysis for sugarcane variety trials /“. [St. Lucia, Qld.], 2003. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17336.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
29

Puthiran, Mayuratheepan [Verfasser]. „Membrane chromatography - process analysis, experimental investigation and optimisation / Mayuratheepan Puthiran“. München : Verlag Dr. Hut, 2015. http://d-nb.info/1070800082/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
30

Medici, Davide. „Experimental studies of wind turbine wakes : power optimisation and meandering“. Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-598.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
31

Yan, Yu. „Performance optimisation of HFC refrigerants by experimental and mathematical methods“. Thesis, University of Strathclyde, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248796.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
32

Sim, Lik Fang. „Numerical and experimental optimisation of a high performance heat exchanger“. Thesis, Sheffield Hallam University, 2007. http://shura.shu.ac.uk/20362/.

Full text of the source
Annotation:
The aim of this research is to scrutinise, numerically and experimentally, the thermal performance of a typical heat exchanger fitted in a domestic condensing boiler. The optimisation process considered the pin geometry (circular and elliptical pins), pin spacing, pitch distance, the pressure drop across the heat chamber and the occurrence of thermal hot spots. The first part of the study focused on the effect of altering the circular pin spacing and pin pitch distance of the heat exchanger. Computational Fluid Dynamics (CFD) was used to scrutinise the thermal performance and the air-flow properties of each model as these two parameters were changed. In total, 13 circular-pin models were investigated. Numerical modelling was used to analyse the performance of each model in a three-dimensional computational domain. For comparison, all models shared the same boundary conditions and maintained the same pin height of 35 mm and pin diameter of 8 mm. The results showed that, at a given flow rate, the total heat transfer rate is more sensitive to a change in pin spacing than to a change in pin pitch, and that an optimum spacing of circular pins can increase the heat transfer rate by up to 10%. The second part of the study focused on the thermal performance of elliptical pins. Four elliptical-pin set-ups were created to study the thermal performance and the air-flow properties. In comparison with circular pins, the simulation results showed that optimum use of the eccentricity of elliptical pins could increase the total energy transfer by up to 23% and reduce the pressure drop by 55%. To validate the acquired CFD results, a Thermal Wind Tunnel (TWT) was designed, built and commissioned. The experimental results showed that the numerical simulation under-predicted the circular models' core temperatures but over-predicted the elliptical models' core temperatures; this effect is due to the default values of the standard k-ε transport-equation model used in the numerical study. Both numerical and experimental results showed that the elliptical models performed better than their circular-pin counterparts. The study also showed that heat exchanger optimisation can be carried out within a fixed physical geometry through effective use of CFD.
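A first-order intuition for the elliptical-pin result is that, for equal cross-sectional area, an elliptical pin has a larger perimeter and hence more lateral surface area for convection. The sketch below makes only that geometric point with q = h·A·ΔT and assumed values for h and ΔT; the thesis itself relied on full CFD, not this shortcut:

```python
# Sketch: first-order pin-fin comparison via q = h * A * dT with perimeter-
# based lateral area (illustrative; assumed h = 25 W/m^2K and dT = 60 K).
import math

def pin_area(perimeter: float, height: float) -> float:
    return perimeter * height                       # lateral surface area, m^2

def heat_rate(h: float, area: float, d_t: float) -> float:
    return h * area * d_t                           # W

circ_per = math.pi * 0.008                          # 8 mm circular pin
# Ellipse of roughly equal cross-sectional area (a*b ~ (d/2)^2), using
# Ramanujan's perimeter approximation.
a, b = 0.006, 0.00267                               # semi-axes, m
ellipse_per = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
for name, per in [("circular", circ_per), ("elliptical", ellipse_per)]:
    print(name, round(heat_rate(25, pin_area(per, 0.035), 60), 2), "W")
```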
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

White, Bradley Michael. „Experimental Development of Automated Search Techniques for Discrete Combinatorial Optimisation“. Thesis, Griffith University, 2009. http://hdl.handle.net/10072/365420.

Der volle Inhalt der Quelle
Annotation:
A suite of techniques for finding the optimal solutions for a set of discrete combinatorial problems was developed. An experimental approach was used, with a suitable test-bed found in a class of word puzzles. The crux of such research is that seeking optimal solutions to discrete combinatorial problems requires the use of deterministic algorithms. Attention was focused on the development of new techniques capable of exhausting the search space more efficiently. Although research was restricted to tractable problems, exhaustion of the search space was recognised to be practically infeasible for all but small problem instances. Thus the size and complexity of the problems examined were necessarily restricted. On these grounds the selection of an appropriate test-bed was fundamental to the research. Complex word problems were used because they encompass a wide range of discrete combinatorial problems, but have only a small literature. The specific puzzle examples employed as test-beds had all been used in public competitions with solutions submitted by thousands of humans, with the winning solutions and scores published. This allowed a simple and independent initial benchmark of success. The techniques developed could be judged at least partially successful in that they were able to equal, and in some cases beat, the highest recorded scores. The general problem of benchmarking is discussed. It was observed that small changes to the test-bed puzzles or to the techniques would often impact dramatically on the results. In an attempt to isolate the reasons for this, a focused view of the search algorithms was adopted. Complex holistic algorithms were broken into smaller sub-algorithmic categories, such as: node selection, domain maintenance, forward tracking, backtracking, branch-and-bound, primary slot selection, variable ordering, value ordering, and constraint ordering. Within each of these categories a range of variations is presented. Techniques for removing inconsistencies prior to search were also experimented with. These consistency pre-processors were found to have a minimal and at times detrimental effect on search times when a good selection of search techniques was used. However, they were found to offer considerable benefits in instances where a poor selection of search techniques was chosen. As such, these consistency pre-processors may be viewed as useful in terms of a risk management strategy for solving these problems. Whilst not the primary focus of this research, experimentation with stochastic techniques within a deterministic framework was performed to gauge the impact of generating good solutions prior to an exhaustive search. One technique developed was observed to frequently improve the time taken to find an optimal solution, and to improve the total time taken to exhaust the search space. While the major effort in the research was necessarily spent in developing and testing these algorithms and their implementations, specific attention was paid to the methodological problems inherent in experimental approaches to program development.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith Business School
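Several of the sub-algorithmic categories listed above (node selection, value ordering, backtracking) can be seen in miniature in a generic constraint-satisfaction search. The following sketch is a deliberately small stand-in with an invented all-different constraint, not one of the competition word puzzles studied in the thesis.

# Minimal deterministic, exhaustive backtracking search over a toy CSP.
def backtrack(assignment, variables, domains, constraint):
    if len(assignment) == len(variables):
        yield dict(assignment)  # a complete, consistent solution
        return
    var = next(v for v in variables if v not in assignment)  # node selection
    for value in sorted(domains[var]):                       # value ordering
        assignment[var] = value
        if constraint(assignment):                           # consistency check
            yield from backtrack(assignment, variables, domains, constraint)
        del assignment[var]                                  # backtrack

all_different = lambda a: len(set(a.values())) == len(a)
doms = {v: range(3) for v in "xyz"}
print(list(backtrack({}, "xyz", doms, all_different)))  # 6 permutations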
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Dobeli, Karen L. „The influence of slice thickness and reconstruction kernel on dose optimisation of hepatic lesion computed tomography (CT) protocols“. Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11842.

Der volle Inhalt der Quelle
Annotation:
Purpose: To determine if phantom-based methodologies for dose optimisation of CT liver protocols require randomisation of lesion location, and to explore the combined influence of current-time product (mAs), slice thickness and reconstruction algorithm on hepatic lesion detection with CT. Methods: A CT liver phantom containing opacities (diameters 9.5 mm, 4.8 mm, 2.4 mm; CT density 10 HU below background) was scanned at different mAs levels and the data reconstructed at 5 mm, 3 mm and 1 mm slice thicknesses and with three reconstruction algorithms: standard filtered backprojection (FBP), smoothing FBP and hybrid FBP/iterative reconstruction. To investigate the first aim, observer performance for the detection of the opacities was compared between two image sets: the first set contained images with known opacity location; the second set contained images with randomly positioned opacities. For the second aim, observer performance for opacity detection at the various combinations of mAs, slice thickness and reconstruction algorithm was compared to that for the standard setting. Results: When conditions were unambiguous (i.e. lesions were very obvious or invisible), the lesion presentation method (known or unknown location) did not affect observer performance. Under difficult perception conditions, known lesion location was associated with poor observer agreement. A slice thickness of 5 mm provided better dose optimisation than 3 mm and 1 mm slice thicknesses. Standard FBP performed better than smoothing FBP and hybrid FBP/iterative at 5 mm, similar to hybrid FBP/iterative and better than smoothing FBP at 3 mm, and similar to both alternate methods at 1 mm. Conclusions: For routine hepatic protocols, dose optimisation can be performed with fixed lesion location. For protocols aimed at detecting very small lesions or providing very low dose, randomised lesion location is warranted. To optimise patient doses for routine liver examinations it is recommended to avoid the use of very thin slice thicknesses. A smoothing FBP kernel and a hybrid FBP/iterative kernel did not provide dose optimisation superior to the standard FBP algorithm.
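Observer agreement of the kind reported above is commonly quantified with Cohen's kappa; the thesis does not specify its statistic here, so the sketch below is only one plausible choice, with invented ratings for two readers scoring a lesion present (1) or absent (0).

# Cohen's kappa: chance-corrected agreement between two raters.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)     # expected
              for l in labels)
    return (p_o - p_e) / (1 - p_e)

a = [1, 1, 0, 1, 0, 0, 1, 1]  # placeholder ratings, not thesis data
b = [1, 0, 0, 1, 0, 1, 1, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.47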
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Pryde, Meinwen. „Evolutionary computation and experimental design“. Thesis, University of South Wales, 2001. https://pure.southwales.ac.uk/en/studentthesis/evolutionary-computation-and-experimental-design(acc0a9a5-aa01-4d4a-aa4e-836ee5190a48).html.

Der volle Inhalt der Quelle
Annotation:
This thesis describes the investigations undertaken to produce a novel hybrid optimisation technique that combines both global and local searching to produce good solutions quickly. Many evolutionary computation and experimental design methods are considered before genetic algorithms and evolutionary operation are combined to produce novel optimisation algorithms. A novel piece of software is created to run two- and three-factor evolutionary operation experiments. A range of new hybrid small-population genetic algorithms is created that contain evolutionary operation in all generations (static hybrids) or in a controlled number of generations (dynamic hybrids). A large number of empirical tests is carried out to determine the influence of operators and the performance of the hybrids over a range of standard test functions. For very small populations, twenty or fewer individuals, stochastic universal sampling is demonstrated to be the most suitable method of selection. The performance of very-small-population evolutionary operation hybrid genetic algorithms is shown to improve with larger generation gaps on simple functions, while on more complex functions increasing the generation gap does not degrade performance. As a result of the testing carried out for this study, a generation gap of 0.7 is recommended as a starting point for empirical searches using small-population genetic algorithms and their hybrids. Due to the changing presence of evolutionary operation, the generation gap has less influence on dynamic hybrids than on static hybrids. The evolutionary operation (local search) element is shown to positively influence the performance of the small-population genetic algorithm search, giving the greatest improvement in performance when present in the middle generations or with a progressively greater presence. A recommendation for the information required to be reported for benchmarking genetic algorithm performance is also presented; this includes processor, platform and software information, genetic algorithm parameters such as population size, number of generations, crossover method and selection operators, and results of testing on a set of standard test functions.
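To make the recommended selection operator concrete, here is a minimal sketch of stochastic universal sampling: n evenly spaced pointers are laid over the cumulative fitness, giving lower-variance selection than n independent roulette-wheel spins. The population and fitness values are illustrative placeholders.

# Stochastic universal sampling (SUS) selection.
import random

def sus_select(population, fitnesses, n):
    total = sum(fitnesses)
    step = total / n
    start = random.uniform(0, step)             # single random offset
    pointers = [start + i * step for i in range(n)]
    selected, cum, idx = [], 0.0, 0
    for p in pointers:                          # sweep once over the wheel
        while cum + fitnesses[idx] < p:
            cum += fitnesses[idx]
            idx += 1
        selected.append(population[idx])
    return selected

pop = ["a", "b", "c", "d"]
fit = [4.0, 2.0, 1.0, 1.0]  # "a" expects ~half of the selections
print(sus_select(pop, fit, 4))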
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Scott, Linda. „Optimisation of protocols for detection of free cholesterol and Niemann-Pick type C 1 and 2 protein“. Thesis, Uppsala University, Department of Medical Biochemistry and Microbiology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126147.

Der volle Inhalt der Quelle
Annotation:

The purpose of this project was to optimise the protocols for detection of free cholesterol and the NPC 1 and NPC 2 proteins. Paraffin-embedded human and rat tissues, cell blocks and cytospins of HepG2 and HeLa cells were used for immunohistochemistry to determine the best antibody dilutions and antigen-unmasking method. Adrenal tissue was used to stain lipids with Filipin. The dilution that worked best for NPC 1 was 1:150, with EDTA unmasking. For NPC 2 a dilution of 1:100 was optimal, with citrate as the unmasking method. NPC 1 was highly expressed in ovary tissue, stomach epithelium, HeLa cells, and rat kidney and liver, while NPC 2 was highly expressed in neurons and astrocytes in Alzheimer's disease, seminiferous tubules in testis, neurons in intestine, neurons in healthy brain tissue, and HeLa cells. The cholesterol-inducing chemical U18666A was applied to HepG2 cells, but no alteration in lipid staining was observed and NPC protein expression was similar at all doses applied. Filipin staining worked well at a concentration of 250 μg/mL, and Propidium Iodide (1 mg/mL) for nuclear staining was optimal at a dilution of 1:1000. The fixation of cells before lipid staining and immunoperoxidase staining has to be evaluated further, as the fixatives used, 10% formalin and acetone, had adverse effects on the antigen. In this project, methods were optimised for lipid and NPC protein staining for further application in disease investigations.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

David, Christopher A. W. „Assessing the relationship between nanoparticle physicochemical characteristics and biological interactions : optimisation of in vitro techniques and protocols“. Thesis, University of Liverpool, 2016. http://livrepository.liverpool.ac.uk/3007382/.

Der volle Inhalt der Quelle
Annotation:
The development and implementation of nanomaterials for a variety of clinical applications is increasing as their utility in improving healthcare is demonstrated. However, consideration must be given to appropriate pre-clinical testing to fully translate these materials into clinical use. A library of 22 nanomaterials, both commercially available and developed in-house, was subjected to an assay cascade forming the basis of a preclinical in vitro assessment which utilised a broad and widely accessible range of techniques. The library comprised numerous material classes: metallic (gold, silver, iron oxide, titanium dioxide, zinc oxide), non-metal (silica), and polymeric (polystyrene, liposome, emulsion, polydendron), varying in manufacturer-stated particle size, charge, and functionalization. Chapter 2 details characterisation of the size and zeta potential of the nanomaterial library in biologically relevant matrices. When combined with information provided by the manufacturers regarding stabilisation and surface functionalization, where available, these measures allowed associations to be made between nanoparticle physicochemical characteristics and the biological effects observed in subsequent chapters. Inherent optical properties of the nanomaterials in biologically relevant matrices and sample sterility were assessed in order to gain an indication of any potential incompatibility with subsequent assays. The haemocompatibility of nanomaterials is of primary concern in their application as nanomedicines, especially those administered intravenously. The work presented in Chapter 3 assessed the haemolytic potential of a subset of nanomaterials. All nanomaterial treatments were found to result in a lower level of complement activation compared to untreated cells, and cases of prolongation or reduction in plasma coagulation times via the extrinsic, intrinsic, and common pathways were observed. In Chapter 4 the impact of nanomaterials on pro-inflammatory and anti-inflammatory cytokine secretion by primary immune cells was demonstrated. Endotoxin was shown to exacerbate the inflammatory responses toward tested nanoparticles. Further to this, the inhibitory effect of polystyrene nanoparticles on caspase-1 activity described in the literature was confirmed. Proliferation in primary human leukocytes was shown to be significantly affected by certain nanomaterials, where particular variants of silver and silica nanoparticles had antiproliferative and proliferative effects, respectively. The work presented in Chapter 5 describes the development and utilisation of screening methodologies to investigate the influence of nanomaterials on reactive oxygen species generation, reduced glutathione and autophagy. Trends were observed within assays, e.g. the reduction in levels of autophagy appears to be linked with the surface charge of the nanomaterials, with the most negative having the greatest effect. Chapter 6 details the application of methods optimised throughout the thesis to perform a preclinical assessment on a novel class of polymeric nanomaterial termed polydendrons. It was found that variants composed of a higher ratio of novel G2' initiator demonstrated less immunogenic potential than those with an equal ratio to PEG. Given the heterogeneity of engineered nanomaterials in terms of composition, coatings, particle characteristics and functionalization, the identification of particle characteristics that influence biological interactions will enable the rational design of future nanomaterials.
The work presented in this thesis has found associations between nanoparticle characteristics and biological effects. These included concentration-dependent correlations between zeta potential and reactive oxygen species generation, and between nanoparticle size and autophagic impact. Additionally, the need for thorough physicochemical characterisation, to generate as many parameters as possible for determining structure-activity relationships, has been presented. The methodologies used and developed throughout this thesis will aid future preclinical characterisation of novel nanomaterials.
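As one concrete example of the haemocompatibility work described in Chapter 3, percent haemolysis is conventionally normalised against positive and negative controls (e.g. as in ASTM E2524); the sketch below shows that normalisation with invented absorbance values, not data from the thesis.

# Percent haemolysis relative to negative (buffer) and positive
# (full-lysis) controls; absorbances are placeholders.
def percent_haemolysis(a_sample, a_neg, a_pos):
    return 100.0 * (a_sample - a_neg) / (a_pos - a_neg)

print(percent_haemolysis(a_sample=0.31, a_neg=0.05, a_pos=1.20))  # ~22.6%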
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Abdelalim, Kaoutar. „Study and optimisation of IEEE 802.11 PHY and MAC protocols towards a new generation integrated in 5G“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0163.

Der volle Inhalt der Quelle
Annotation:
The high growth of wireless applications brings greater challenges to wireless technologies and calls for more improvements and better efficiency. This is particularly true for dense environments, for which per-user performance is a key issue. The IEEE 802.11ax amendment was launched in that context, to improve the physical (PHY) and medium access control (MAC) layer protocols of Wi-Fi networks in dense environments while maintaining backward compatibility with previous standards. The work of this thesis shares the same perspectives as IEEE 802.11ax, namely the improvement of existing protocols or the proposal of new ones. To that aim, three main contributions are proposed: first, we provide a deep analysis of the IEEE 802.11 standard, from its first version in 1999 to the ongoing IEEE 802.11ax amendment. Second, we expose a PHY layer improvement with a new method for resource allocation in orthogonal frequency division multiple access (OFDMA) in random access mode. Third, we develop a MAC layer evolution with an adaptive negotiation mode for the block acknowledgement session (AN-BA).
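The random-access OFDMA mode mentioned in the second contribution can be illustrated with a toy simulation in the spirit of the 802.11ax UORA procedure: each station draws an OFDMA back-off (OBO) counter, decrements it by the number of random-access resource units (RUs) advertised in each trigger frame, and transmits on a randomly chosen RU once the counter is exhausted. This is a simplified sketch with illustrative parameters, not the allocation method proposed in the thesis.

# Toy UORA-style random access; collisions occur when two stations
# pick the same RU in the same round.
import random

def uora_round(obos, n_ru, ocw):
    """One trigger-frame round; returns (successful stations, new OBOs)."""
    choices = {}
    for sta, obo in obos.items():
        obo -= n_ru                                   # decrement by eligible RUs
        if obo <= 0:
            choices.setdefault(random.randrange(n_ru), []).append(sta)
            obo = random.randint(0, ocw)              # redraw for next attempt
        obos[sta] = obo
    # an RU succeeds only if exactly one station chose it
    return [stas[0] for stas in choices.values() if len(stas) == 1], obos

obos = {f"sta{i}": random.randint(0, 7) for i in range(10)}
for t in range(5):
    ok, obos = uora_round(obos, n_ru=4, ocw=7)
    print(f"round {t}: successful stations {ok}")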
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Rani, Kaddour. „Stratégies d’optimisation des protocoles en scanographie pédiatrique“. Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0282.

Der volle Inhalt der Quelle
Annotation:
For the last 10 years, computed tomography (CT) procedures and their increased use have been a major source of concern in the scientific community. This concern has been the starting point for several studies aiming to optimise the dose while maintaining a diagnostic image quality. In addition, it is important to pay special attention to dose levels for children (age range considered to be from a newborn baby to a 16-year-old patient). Indeed, children are more sensitive to ionising radiation, and they have a longer life expectancy. Optimising CT protocols is a very difficult process due to the complexity of the acquisition parameters, starting with the individual patient characteristics, and taking into account the available CT device and the required diagnostic image quality. This PhD project contributes to the advancement of knowledge by: (1) developing a new approach that can minimise the number of test CT scan examinations while developing a predictive mathematical model allowing radiologists to prospectively anticipate how changes in protocols will affect the image quality and the delivered dose for four models of CT scanner; (2) setting up a Generic Optimised Protocol (based on the size of the Catphan 600 phantom) for four models of CT scanner; (3) developing a methodology to adapt the GOP to five sizes of paediatric patient using Size Specific Dose Estimate (SSDE) calculation; (4) evaluating subjective and objective image quality between the size-based optimised CT protocol and age-based CT protocols; (5) developing a CT protocol optimisation tool and a tutorial helping radiologists in the process of optimisation.
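Step (3) relies on the Size Specific Dose Estimate; a minimal sketch of that calculation is shown below, using the exponential fit to the AAPM Report 204 conversion factors for the 32 cm reference phantom (the coefficients should be verified against the report, and the patient dimensions are illustrative placeholders).

# SSDE = size-dependent conversion factor x CTDIvol (AAPM Report 204).
import math

def ssde(ctdi_vol_mgy, ap_cm, lat_cm):
    d_eff = math.sqrt(ap_cm * lat_cm)             # effective diameter (cm)
    f = 3.704369 * math.exp(-0.03671937 * d_eff)  # Report 204 fit, 32 cm phantom
    return f * ctdi_vol_mgy

# A 5-year-old-sized abdomen (illustrative dimensions):
print(f"SSDE = {ssde(ctdi_vol_mgy=4.0, ap_cm=15, lat_cm=19):.1f} mGy")  # ~8 mGy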
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Sidahmed, Gaffour. „Advanced control, hybrid modelling and optimisation for an experimental hot-rolling mill“. Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.541693.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Reuter, Katherine J. „EXPERIMENTAL TAPHONOMY OF PENAEID SHRIMP: ANALYSES OF MORPHOLOGICAL DECAY IN DIFFERENT SEDIMENTARY CONDITIONS AND OF METHODOLOGICAL PROTOCOLS“. Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1587036410016952.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Zhu, Shaoling. „Experimental Study on Low Power Wireless Sensor Network Protocols with Native IP Connectivity for Building Automation“. Thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175792.

Der volle Inhalt der Quelle
Annotation:
The recent development of wired and wireless communication technologies makes building automation the next battlefield of the Internet of Things. Multiple standards have been drafted to accommodate the complex environment and minimize the resource consumption of wireless sensor networks. This Master Thesis presents a thorough experimental evaluation, with the latest Contiki network stack and the TI CC2650 platform, of network performance indicators, including signal coverage, round trip time, packet delivery ratio and power consumption. The Master Thesis also provides a comparison of the network protocols for low power operations, the existing operating systems for wireless sensor networks, and the chips that operate on various network protocols. The results show that the CC2650 is a promising competitor for future development in the market of building automation.
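Two of the indicators evaluated above, packet delivery ratio and round trip time, reduce to simple log post-processing; the sketch below shows one way to compute them from send/receive records, with invented log data rather than measurements from the thesis.

# Packet delivery ratio (PDR) and mean round trip time (RTT) from logs.
def pdr(sent_ids, received_ids):
    sent = set(sent_ids)
    return 100.0 * len(sent & set(received_ids)) / len(sent)

def mean_rtt(echo_log):
    """echo_log: list of (t_request, t_reply) timestamps in seconds."""
    rtts = [reply - req for req, reply in echo_log]
    return sum(rtts) / len(rtts)

print(pdr(range(100), range(0, 100, 2)))           # 50.0 (%)
print(mean_rtt([(0.000, 0.021), (1.000, 1.018)]))  # ~0.0195 s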
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Kapolka, Andrzej. „The extensible run-time infrastructure (XRTI) : an experimental implementation of proposed improvements to the high level architecture“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FKapolka.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Modeling, Virtual Environments and Simulation)--Naval Postgraduate School, December 2003.
Thesis advisor(s): Michael Zyda, Bret Michael. Includes bibliographical references (p. 111-114). Also available online.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Keller, Tobias [Verfasser]. „Reactive Distillation for Multiple-reaction Systems: Experimental Investigation, Modelling and Optimisation / Tobias Keller“. München : Verlag Dr. Hut, 2013. http://d-nb.info/103728948X/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Ghanbari, Fatemeh. „Optimisation of in vitro experimental design for human cytochrome P450 mechanisms-based inhibition“. Thesis, University of Sheffield, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489050.

Der volle Inhalt der Quelle
Annotation:
Critical evaluation of the published literature showed that there are considerable variations in the experimental procedures and methods of data analysis used to characterise MBI. These variations may have contributed to reports of differing values of MBI kinetic parameters for a number of compounds. The influence of applying different values of the MBI kinetic parameters on the predicted maximal fold reduction in in vivo intrinsic clearance is likely to be greatest for compounds of intermediate inhibitory potency. However, such an effect also depends on the fraction of the net clearance of the substrate subject to MBI, and on the pre-systemic exposure to the inhibitor.
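The maximal fold reduction referred to above follows from the standard static (net-effect) model for mechanism-based inhibition, in which enzyme activity at steady state is set by the balance between inactivation and natural enzyme degradation. The sketch below implements that standard model; the parameter values are illustrative placeholders, not results from the thesis.

# Static MBI model: fold reduction in intrinsic clearance, where fm is
# the fraction of clearance via the inhibited enzyme, and the maximal
# effect (fm = 1, [I] -> infinity) is 1 + kinact/kdeg.
def fold_reduction_clint(i_conc, k_inact, k_i, k_deg, fm=1.0):
    inactivation = (k_inact * i_conc) / (k_deg * (k_i + i_conc))
    return 1.0 / (fm / (1.0 + inactivation) + (1.0 - fm))

# e.g. kinact = 0.05 /min, KI = 1 uM, kdeg = 0.0005 /min, [I] = 2 uM
print(f"{fold_reduction_clint(2.0, 0.05, 1.0, 0.0005, fm=0.9):.1f}-fold")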
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Johansson, Andreas. „Design principles for noise reduction in hydraulic piston pumps : simulation, optimisation and experimental verification /“. Linköping : Univ, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/tek965s.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Morton, Karen M. „Optimisation of a high-energy loss control valve trim using computational and experimental techniques“. Thesis, University of Manchester, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603418.

Der volle Inhalt der Quelle
Annotation:
The focus of the present study has been directed towards the development and optimisation of a new high-energy-loss / severe-service control valve cage. From this, an advanced design methodology has been created which is able to relate the effects of strong three-dimensionality in the flow to the specific geometrical features of the flow path design through the valve cage, thus allowing the design of the cage to be optimised and aiding the prediction of troublesome phenomena such as cavitation. The chosen cage design, which consisted of a number of flow paths in which were located a series of densely packed, low-aspect-ratio, staggered cylinder arrays, was evaluated against existing cage designs of this type. From this point the critical geometrical features of the flow path through the cage were identified and used to define the basis for a parametric study. This study was carried out using computational fluid dynamics (CFD) to simulate the flow through 140 different cylinder array configurations. The results from this were used to develop a series of analytical expressions able to represent the effect of each geometrical feature on the properties of the fluid flow. These were then compiled into a design methodology which could be used to size the valve cage. To validate the computational predictions, a series of experimental measurements of the velocity distributions within five representative models of the cylinder arrays considered in the parametric study was taken using particle image velocimetry (PIV). This required the use of a specially built flow rig and the selection of a line fluid able to provide a near refractive-index match with the Perspex cylinders. The chosen line fluid was liquid paraffin BP. A prototype cage was installed into a real process environment to test the performance of the new design methodology and its ability to predict the onset of cavitation. The performance of the prototype cage showed a good agreement with the computational predictions. As a consequence of this study, the author has developed a new approach to the design of a control valve cage which allows the cage to be sized and optimised against a given set of process conditions. It is envisaged that this new method will be of benefit in the future design of control valves, leading to an improved level of performance across the industry.
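For orientation, "high-energy loss" in a trim of this kind is typically quantified through a non-dimensional loss coefficient relating pressure drop to dynamic pressure. The sketch below shows that basic calculation; the numbers are illustrative placeholders, not results from the study.

# Loss coefficient K = dp / (0.5 * rho * v^2), a standard measure of
# how much energy a trim stage dissipates.
def loss_coefficient(dp_pa, rho_kg_m3, v_m_s):
    return dp_pa / (0.5 * rho_kg_m3 * v_m_s ** 2)

# e.g. an 8 bar drop across a stage with 20 m/s mean path velocity in water
print(loss_coefficient(dp_pa=8e5, rho_kg_m3=998.0, v_m_s=20.0))  # ~4.0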
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Muhammad, Ashfaq. „Design and Development of a Database for the Classification of Corynebacterium glutamicum Genes, Proteins, Mutants and Experimental Protocols“. Thesis, University of Skövde, School of Humanities and Informatics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-23.

Der volle Inhalt der Quelle
Annotation:

Coryneform bacteria are widely distributed in nature and are rod-like, aerobic soil bacteria capable of growing on a variety of sugars and organic acids. Corynebacterium glutamicum is a non-pathogenic species of Coryneform bacteria used for the industrial production of amino acids. There are three main publicly available genome annotations for C. glutamicum: Cg, Cgl and NCgl. These three annotations have different numbers of protein-coding genes and varying numbers of overlaps of similar genes, and the original data is only available in text files. In this format it was not easy to search and compare the data among the different annotations, and it was impossible to make an extensive, multidimensional, customised formal search against different protein parameters. Comparison of all genome annotations for constructing deletion and over-expression mutants, graphical representation of genome information (such as gene locations, neighbouring genes, orientation on the direct or complementary strand, overlapping genes and gene lengths), and graphical output relating structure to function by comparison of predicted trans-membrane domains (TMD), functional protein domains and protein motifs were not possible while the data remained inconsistent and redundant across various publicly available biological database servers. There was therefore a need for a system for managing the data for mutants and experimental setups. In spite of the fact that the genome sequence is known, until now no databank providing such a complete set of information has been available. We solved these problems by developing a standalone relational database application covering data processing, protein and DNA sequence extraction, and management of lab data. The result of the study is an application named CORYNEBASE, a software that meets our aims and objectives.
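For illustration, a relational design of the kind the abstract describes could reconcile the three annotations along the following lines; the schema below is a hypothetical sketch (table and column names are invented, not taken from CORYNEBASE).

# Hypothetical relational schema for genes across the Cg/Cgl/NCgl annotations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE annotation_source (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL              -- 'Cg', 'Cgl' or 'NCgl'
);
CREATE TABLE gene (
    id        INTEGER PRIMARY KEY,
    source_id INTEGER NOT NULL REFERENCES annotation_source(id),
    locus_tag TEXT NOT NULL,
    strand    TEXT CHECK (strand IN ('+', '-')),
    start_pos INTEGER,
    end_pos   INTEGER
);
-- links genes that different annotations describe at overlapping loci
CREATE TABLE gene_overlap (
    gene_a INTEGER NOT NULL REFERENCES gene(id),
    gene_b INTEGER NOT NULL REFERENCES gene(id),
    PRIMARY KEY (gene_a, gene_b)
);
""")
conn.execute("INSERT INTO annotation_source (name) VALUES ('Cg'), ('Cgl'), ('NCgl')")
print(conn.execute("SELECT name FROM annotation_source").fetchall())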

APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Shi, Shuai. „Development of experimental protocols for a heterogeneous bioscaffold-chondrocyte construct with application to a tissue engineered spinal disc“. University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1271444483.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Lindstrom, Zachary Kendall. „Design and Experimental Testing of Nanoinjection Protocols for Delivering Molecules into HeLa Cells with a Bio-MEMS Device“. BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4035.

Der volle Inhalt der Quelle
Annotation:
Delivering foreign molecules into living cells is a broad and ongoing area of research. Gene therapy, or delivering nucleic acids into cells via non-viral or viral pathways, is an especially promising area for pharmaceutics. All gene therapy methods have their respective advantages and disadvantages, including limited delivery efficiency and low viability. Nanoinjection, or delivering molecules into cells using a solid lance, has proven to be highly efficient while maintaining high viability levels. In this thesis, an array of solid silicon lances was tested by nanoinjecting tens of thousands of HeLa cancer cells simultaneously. Several molecule types were injected in different tests to understand cell uptake efficiency and cell viability. Voltage was used to determine the impact of an electric field on molecule delivery. Propidium iodide, a dye that fluoresces when bound to nucleic acids and does not fluoresce when unbound, was delivered into cells using the lance array. Results show that the lance array delivers propidium iodide into up to 78% of a nanoinjected HeLa cell culture, while maintaining 78%-91% viability. Using a protocol similar to that of the propidium iodide experiments, plasmid DNA containing the code for a fluorescent protein was nanoinjected into HeLa cells, resulting in an average expression rate of up to 0.21%. Since gene expression only occurs in cells which have integrated DNA into the genome in the nucleus, a different DNA detection method was developed to determine the total DNA count in cells following nanoinjection. DNA strands tagged with a radioactive isotope were nanoinjected into HeLa cells. Liquid scintillation was employed to quantify and discriminate between DNA delivered to cells and DNA that remained in solution around cells following nanoinjection. The largest average amount of DNA delivered to cells was 20.0 x 10^3 DNA molecules per cell. Further development of the radioactive nanoinjection process is needed to more fully understand the parameters that affect DNA delivery efficiency. In all experiments with propidium iodide and DNA molecules, a low accumulation voltage coupled with a short pulsed release voltage resulted in the greatest molecule delivery efficiencies when compared to tests without voltage or with a constant voltage only. Lastly, an automated nanoinjection system was developed to eliminate variability in user-applied nanoinjection force. The automated system was found to reduce variability in average propidium iodide uptake values by 56%. In conclusion, experimental testing of the multi-cell nanoinjection process has shown promising molecule delivery results into human cells, suggesting that further optimization of the process would have positive implications in the field of academic and clinical gene therapy.
APA, Harvard, Vancouver, ISO und andere Zitierweisen