
Dissertations / Theses on the topic 'BoBW model'

Consult the top 26 dissertations / theses for your research on the topic 'BoBW model.'

1

Sessions, Blake A. "A computational bow-spring model." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/65303.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, February 2011.
"May 2010." Cataloged from PDF version of thesis.
Bow-springs find few applications in industry; principally, they are used in archery. They have also found some use in a compression-spring mode in biomechatronics, to emulate elastic human legs. Their mechanical behavior (characterized by deflected shape and deformation force) is difficult to model because both the internal forces and moments and the geometry are unknown, and the only closed-form solutions to such systems are of little practical use to a mechanical engineer. This work comprises an iterative model, developed in MATLAB, that computes the mechanical behavior of buckled-beam (bow-spring) sections over a range of parameters and geometries, to be used in the development and testing of compression bow-springs as parallel loading systems for the human leg.
by Blake A. Sessions.
S.B.
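The closed-form elastica solution that the author's iterative model is meant to supersede can still be sketched numerically. The following Python fragment evaluates the classical pinned-pinned post-buckling formulas (Timoshenko's elastica) via complete elliptic integrals; the parameter values EI, L, and the end-rotation sweep are illustrative, not values from the thesis.

```python
import numpy as np
from scipy.special import ellipk, ellipe

def bow_spring_response(EI, L, end_angles_deg):
    """Classical post-buckled pinned-pinned elastica.

    For each end-rotation angle alpha, return the axial load P,
    the end shortening d, and the midspan lateral deflection w.
    """
    results = []
    for alpha in np.deg2rad(np.atleast_1d(end_angles_deg)):
        k = np.sin(alpha / 2.0)            # elliptic modulus
        m = k * k                          # SciPy's ellipk/ellipe take m = k**2
        K, E = ellipk(m), ellipe(m)
        P = EI * (2.0 * K / L) ** 2        # axial compressive force
        d = L * (2.0 - 2.0 * E / K)        # end shortening of the chord
        w = L * k / K                      # midspan bow deflection
        results.append((P, d, w))
    return results

# As alpha -> 0, P tends to the Euler buckling load pi^2 * EI / L^2.
```

This gives the load-deflection curve of an idealized uniform strip; the thesis's iterative model exists precisely because real bow-spring sections vary in geometry along their length, where these closed-form results no longer apply.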
2

Cantone, Daniel. "Bob Johnson: Coach, Leader, Role Model, Community Servant." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etd/1155.

Abstract:
Many things are known about Coach Bob Johnson, including his military background and dynamic coaching career, but there are still many more facts that are unknown. By most accounts he was a dynamic leader who was able to motivate, influence, and lead over the course of his 27-year career coaching and teaching at Emory and Henry College. The success of his career is visible through the success of his players and teams, the number of wins, and the many accomplishments, awards, and recognitions he received. The purpose of this qualitative study was to describe Coach Johnson’s life by examining his roles as a coach, teacher, administrator, and individual to help demonstrate his leadership and examine events that led to his impact and influence at Emory and Henry College. This study was based on 5 research questions: 1. What was his leadership style? 2. What type of person was he? 3. What type of coach was he? 4. What are the interviewees’ perceptions of how he influenced their lives? 5. What are the interviewees’ perceptions of his life and work? Findings from these questions helped provide answers that demonstrated the leadership and influence of Coach Johnson. The findings were consistent with Leithwood, Riehl, and the National College for School Leadership’s (2003) 3 core leadership practices for successful leadership in educational settings, which are setting directions, developing people, and developing the organization. The findings also fit into the Leadership Challenge Model (Kouzes & Posner, 1997), which consists of challenging the process, inspiring a shared vision, enabling others to act, modeling the way, and encouraging the heart. As there is no published research on Coach Johnson, this study is significant. The data were gathered by conducting semistructured interviews with those who knew Coach Johnson well. The results provide insight into leadership and how one can influence others.
3

Vakkalanka, Suryanarayana. "Simplified Bow Model for a Striking Ship in Collision." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/32974.

Abstract:
The serious consequences of ship collisions necessitate the development of regulations and requirements for the subdivision and structural design of ships to reduce damage and environmental pollution, and improve safety. Differences in striking ship bow stiffness, draft, bow height and shape have an important influence on the allocation of absorbed energy between striking and struck ships. The energy absorbed by the striking ship may be significant. The assumption of a "rigid" striking bow may no longer hold, and typical simplifying assumptions may not be sufficient. The bow collision process is simulated by developing a striking ship bow model that uses Pedersen's super-element approach and the explicit non-linear FE code LS-DYNA. This model is applied to a series of collision scenarios. Results are compared with conventional FE model results, closed-form calculations, DAMAGE, DTU, ALPS/SCOL and SIMCOL. The results demonstrate that the universal assumption of a rigid striking ship bow is not valid. Bow deformation should be included in future versions of SIMCOL. A simplified bow model is proposed which approximates the results predicted by the three collision models (closed-form, conventional, and intersection elements) to a reasonable degree of accuracy. This simplified bow model can be used in further calculations and damage predictions. A single stiffness can be defined for all striking ships in collision, irrespective of size.
Master of Science
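The idea behind replacing the rigid-bow assumption with a single striking-bow stiffness can be illustrated with a toy calculation (the function and numbers below are illustrative, not from the thesis): two linear springs in series carry the same contact force, so they split the absorbed deformation energy in inverse proportion to their stiffnesses.

```python
def split_absorbed_energy(k_bow, k_struck, total_energy):
    """Energy split between two linear springs in series.

    Both springs see the same contact force F and each stores
    F**2 / (2 * k), so the softer structure absorbs the larger share.
    """
    e_bow = total_energy * k_struck / (k_bow + k_struck)
    e_struck = total_energy * k_bow / (k_bow + k_struck)
    return e_bow, e_struck
```

A rigid-bow assumption corresponds to letting k_bow grow without bound, which drives the bow's share of the energy to zero; a finite bow stiffness shifts part of the collision energy into the striking ship, which is the effect the thesis quantifies.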
4

Mims, Patricia A. "Agricultural change in the urban-rural fringe: a test of the perimetropolitan bow wave model." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/43133.

Abstract:
The urban-rural fringe in the United States is constantly shifting outward from the center of the metropolis, and urban land uses are displacing agriculture. Geographer John Fraser Hart developed the Perimetropolitan Bow Wave model to examine the movement of agriculture within a fifty-mile radius of New York City. He concluded that agricultural activities differ in their rates of movement through four identified zones around the urban center. This thesis presents case studies of the movement of agricultural activities around two cities of different size--Washington, D.C. and Richmond, Virginia--to examine the validity of the bow wave phenomenon. The findings of this research are that Hart's model is only partially useful when examining other cities and that the individual size and characteristics of the urban area must also be considered when analyzing agricultural change.
Master of Science
5

Zawadzki, Alexi. "A sensitivity analysis of the hydrology of the Bow Valley above Banff, Alberta using the UBC Watershed Model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq24395.pdf.

6

Weymouth, Gabriel David. "Physics and learning based computational models for breaking bow waves based on new boundary immersion approaches." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44754.

Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008.
Includes bibliographical references (p. 215-219).
A ship moving on the free surface produces energetic breaking bow waves which generate spray and air entrainment. Present experimental, analytic, and numerical studies of this problem are costly, inaccurate and not robust. This thesis presents new cost-effective and accurate computational tools for the design and analysis of such ocean systems through a combination of physics-based and learning-based models. Methods which immerse physical boundaries on Cartesian background grids can model complex topologies and are well suited to study breaking bow waves. However, current methods such as Volume of Fluid and Immersed Boundary methods have numerical and modeling limitations. This thesis advances the state of the art in Cartesian-grid methods through development of a new conservative Volume-of-Fluid algorithm and the Boundary Data Immersion Method, a new approach to the formulation and implementation of immersed bodies. The new methods are simple, robust and shown to outperform existing approaches for a wide range of canonical test problems relevant to ship wave flows. The new approach is used to study breaking bow waves through 2D+T and 3D simulations. The 2D+T computations compare well with experiments, and breaking bow wave metrics are shown to be highly sensitive to the ship geometry. 2D+T breaking bow wave predictions are compared quantitatively to 3D computations and shown to be accurate only for certain flow features and very slender high-speed vessels. Finally, the thesis formalizes the study and development of physics-based learning models (PBLM) for complex engineering systems. A new generalized PBLM architecture is developed based on combining fast simple physics-based models with available high-fidelity data.
(cont.) Models are developed and trained to accurately predict the wave field and breaking bow waves of a ship orders of magnitude faster than standard methods. Built on the new boundary immersion approaches, these computational tools are sufficiently cost-effective and robust for use in practical design and analysis.
by Gabriel David Weymouth.
Sc.D.
7

Ghaffari, Reza [Verfasser], Roger Andrew [Akademischer Betreuer] Sauer, and Robert [Akademischer Betreuer] Svendsen. "Continuum mechanical models of atomistic crystalline surface manifolds / Reza Ghaffari ; Akademische Betreuer: Roger Andrew Sauer, Bob Svendsen." Aachen : Universitätsbibliothek RWTH Aachen, 2019. http://d-nb.info/1194111513/34.

8

Tauscher, Helga [Verfasser], Raimar J. [Akademischer Betreuer] [Gutachter] Scherer, Bob [Gutachter] Martens, and Frank [Gutachter] Petzold. "Configurable nD-visualization for complex Building Information Models / Helga Tauscher ; Gutachter: Raimar J. Scherer, Bob Martens, Frank Petzold ; Betreuer: Raimar J. Scherer." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://d-nb.info/1144283493/34.

9

Bellodi, Daniele Manfrim. "Influência da inclinação do garfo de mordida do arco facial na montagem do modelo superior em articulador semi-ajustável do tipo arcon e não-arcon." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/58/58131/tde-29102008-180252/.

Abstract:
During the teaching/learning process in Dentistry, inclinations of the bite fork relative to the patient's maxillary occlusal plane commonly occur while students take the face-bow record. This study evaluated the influence of anterior and posterior inclinations of the bite fork relative to the occlusal plane on the mounting of the maxillary cast in arcon and non-arcon semi-adjustable articulators. Twenty casts of the maxillary arch were obtained from twenty patients, on which two points were marked: one on the vestibular edge of the right canine and another on the vestibular edge of the mesiovestibular cusp of the right first molar. Records on the bite fork were made in three positions: parallel to the casts' occlusal plane, inclined to anterior, and inclined to posterior (both 5°). Each position was repeated five times, with the articulator/face-bow set fixed in a special support for photographic recording. Each photograph was imported into the AutoCAD program, yielding reference lines for the collection of five measures (one angular and four linear). The data were subjected to parametric statistical analysis (Analysis of Variance). The results showed no statistically significant difference for inclinations of the bite fork, either anterior or posterior, relative to the cast's occlusal plane, for either the arcon or the non-arcon semi-adjustable articulator. The work suggests that small inclinations of the bite fork do not compromise the face-bow record or its transfer to semi-adjustable articulators.
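The one-way analysis of variance used in this study can be reproduced in outline with SciPy; the data below are synthetic stand-ins for the five repeated measurements per fork position (the means and spread are invented for illustration, not the study's measurements).

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
true_mean = 12.0  # hypothetical angular measure, degrees

# Five repeats per bite-fork position: parallel, 5 deg anterior, 5 deg posterior.
parallel = true_mean + rng.normal(0.0, 0.2, 5)
anterior = true_mean + rng.normal(0.0, 0.2, 5)
posterior = true_mean + rng.normal(0.0, 0.2, 5)

# One-way ANOVA across the three groups; a large p-value would mirror the
# study's finding of no significant effect of small fork inclinations.
f_stat, p_value = f_oneway(parallel, anterior, posterior)
```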
10

Hascoët, Nicolas. "Méthodes pour l'interprétation automatique d'images en milieu urbain." Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0004/document.

Abstract:
This thesis presents a study of the automatic interpretation of urban images. We propose an application for the retrieval of different landmarks in images representing complex scenes. The main issue is to differentiate the local information extracted from the key-points of the desired building from all the points extracted within the entire image. Urban imagery is specific in the public nature of the scenes depicted: the object to be identified sits among various other objects that can interfere with it. We first present a state of the art of image recognition and retrieval methods, focusing on local points of interest, together with the databases that can be used during experimentation. We retain the bag-of-words (BoW) model applied to local SIFT (Scale-Invariant Feature Transform) descriptors. We then propose a local data classification approach based on the Support Vector Machine (SVM) model; the advantage of this approach is the small amount of data required during the training phase. Different training and classification strategies are discussed. A third part adds a geometric correction to the classification obtained previously, yielding a classification of not only the local information but also the visual information, enforcing geometric consistency in the distribution of the points of interest. A final chapter presents the experimental results obtained, in particular on buildings in Paris and Oxford.
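The bag-of-words step at the heart of this kind of pipeline reduces each image's set of local SIFT descriptors to a fixed-length histogram over a visual vocabulary. A minimal sketch follows, using a made-up 2-D "descriptor" space and a fixed two-word vocabulary; a real system would use 128-D SIFT vectors, a vocabulary learned by clustering, and an SVM on top of the histograms.

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    """Quantise local descriptors to their nearest visual word (L2 distance)
    and return an L1-normalised word-frequency histogram."""
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                       # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Two visual words; an "image" whose descriptors mostly cluster near word 0.
vocab = np.array([[0.0, 0.0], [10.0, 10.0]])
img_descriptors = np.array([[0.1, -0.2], [0.3, 0.1], [9.8, 10.1]])
h = bow_histogram(img_descriptors, vocab)
```

Because the histogram length is fixed by the vocabulary size rather than by the number of key-points, images with different numbers of descriptors become directly comparable, which is what makes a standard classifier such as an SVM applicable.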
11

Baraka, Suleiman. "Étude de l'interaction entre le vent solaire et la magnétosphère de la Terre : Modèle théorique et application sur l'analyse de données de l'événement du Halloween d'octobre 2003." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00138416.

Abstract:
A new approach, using a 3D electromagnetic particle-in-cell (PIC) code, is presented to study the sensitivity of the Earth's magnetosphere to solar wind variability. Starting from a solar wind impinging on a magnetized Earth, the system was allowed to evolve until a steady-state magnetosphere was reached. An impulsive perturbation was then applied by changing the solar wind velocity to simulate a depression in its dynamic pressure, for zero, southward, and northward interplanetary magnetic field (IMF). The applied perturbation produced an air-gap effect, a rarefied region about 15 Re wide, for all IMF configurations. As soon as the gap struck the bow shock of the quiet magnetosphere, reconnection between the Earth's magnetic field and the southward IMF was observed at the dayside magnetopause (MP). During the expansion phase of the system, the outer boundary of the dayside MP became indented when IMF = 0, whereas it kept its bullet shape when a southward or northward IMF was included. The relaxation time of the MP was studied for the three IMF cases. The code was then applied to study the Halloween event of October 2003. Our simulation produced a new kind of gap, a rarefied region generated after a strong impinging IMF gradient. Such a feature is quite similar to observed hot flow anomalies and may have the same origin.
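The particle advance inside an electromagnetic PIC code is typically done with the Boris scheme. A minimal single-particle sketch is shown below (the field values, charge-to-mass ratio, and time step are illustrative, not parameters from this thesis); with a pure magnetic field the scheme rotates the velocity without changing its magnitude.

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step for the velocity v (3-vector) of a particle with
    charge-to-mass ratio qm in fields E and B. The magnetic part is a
    pure rotation, so it conserves kinetic energy exactly."""
    v_minus = v + 0.5 * qm * dt * E          # first half electric kick
    t = 0.5 * qm * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + 0.5 * qm * dt * E        # second half electric kick

# Pure gyration test: with E = 0 the speed must be conserved step after step.
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    v = boris_push(v, np.zeros(3), B, qm=1.0, dt=0.1)
```

A full PIC code wraps this push in a loop that also deposits charge and current onto the grid and solves the field equations, but the pusher above is the kernel that advances each of the simulation particles.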
12

Reydellet, Itto. "Effet de la rhizosphère du maïs sur la minéralisation brute de l'azote dans un sol ferrugineux tropical (Burkina Faso)." Vandoeuvre-les-Nancy, INPL, 1997. http://www.theses.fr/1997INPL083N.

Abstract:
The objective of this work is to assess the importance of the rhizosphere effect on nitrogen mineralization and to determine its mechanisms in the West African savanna zone. The use of the isotopic technique (15N) together with a model of nitrogen dynamics makes it possible to estimate gross mineralization and immobilization fluxes. The technique was applied in situ on a soil of the Bobo Dioulasso region of Burkina Faso. Measurements were made on bare soil and on soil cropped with maize. The soil nitrogen fluxes suggest that the mineralization-immobilization turnover is very large. The fluxes show strong spatial and temporal variability, exceeding the differences between treatments. It is nevertheless emphasized that the classical isotopic methods for measuring gross fluxes, which ignore the heterogeneity induced by roots, can, when applied to cropped soil, underestimate the increase in gross mineralization due to the rhizosphere effect. A plant-based approach is proposed to estimate the share of rhizosphere mineralization in maize nitrogen uptake. The results suggest that, when soil nitrate contents are very low, 35% of the nitrogen absorbed comes from rhizosphere mineralization. A method for estimating the gross fluxes of nitrogen dynamics in the root environment was developed, based on a device that allows soil sampling at different distances from the root and on a simulation model of nitrogen flux dynamics in the root environment. Gross mineralization in the presence of active roots is at least twice that of soil outside the influence of roots. This result confirms the hypothesis of a rhizosphere effect stimulating gross nitrogen mineralization.
13

Loguercio, Salvatore. "Reductionist and Integrative approaches to explore the H.pylori genome." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425099.

Abstract:
The reductionist approach of decomposing biological systems into their constituent parts has dominated molecular biology for half a century. Since organisms are composed solely of atoms and molecules without the participation of extraneous forces, it has been assumed that it should be possible to explain biological systems on the basis of the physico-chemical properties of their individual components, down to the atomic level. However, despite the remarkable success of methodological reductionism in analyzing individual cellular components, it is now generally accepted that the behavior of complex biological systems cannot be understood by studying their individual parts in isolation. To tackle the complexity inherent in understanding large networks of interacting biomolecules, the integrative viewpoint emphasizes cybernetic and systems theoretical methods, using a combination of mathematics, computation and empirical observation. Such an approach is beginning to become feasible in prokaryotes, combining an almost complete view of the genome and transcriptome with a reasonably extensive picture of the proteome. Pathogenic bacteria are undoubtedly the most investigated subjects among prokaryotes. A paradigmatic example is the human pathogen H.pylori, a causative agent of severe gastroduodenal disorders that infects almost half of the world population. In this thesis, we investigated various aspects of Helicobacter pylori molecular physiology using both reductionist and integrative approaches. In Section I, we have employed a reductionist, bottom-up perspective in studying the Cysteine oxidised/reduced state and the disulphide bridge pattern of an unusual GroES homolog expressed by H.pylori, Heat Shock protein A (HspA). This protein possesses a high Cys content, is involved in nickel binding and exhibits an extended subcellular localization, ranging from cytoplasm to cell surface.
We have produced and characterized a recombinant HspA and mutants Cys94Ala and C94A/C111A. The disulphide bridge pattern has been assigned by integrating biochemical methodologies with mass spectrometry. All Cys are engaged in disulphide bonds that force the C-term domain to assume a peculiar closed loop structure, prone to host nickel ions. This novel Ni binding structural arrangement can be related to the Ni uptake/delivery to the extracellular urease, essential for the bacterium survival. In Section II, we combined different computational methods with two main goals: 1) Analyze the H.pylori biomolecular interaction network in an attempt to select new molecular targets against H.pylori infection (Chapters 4 & 5); 2) Model and simulate the signaling perturbations induced by invading H.pylori proteins in the host epithelial cells (Chapter 6). Chapter 4 explores the 'robust yet fragile' feature of the H.pylori cell, viewed as a complex system in which robustness in response to certain perturbations is inevitably associated with fragility in response to other perturbations. With this in mind, we developed a general strategy aimed at identifying control points in bacterial metabolic networks, which could be targets for novel drugs. The methodology is implemented on Helicobacter pylori 26695. The entire metabolic network of the pathogen is analyzed to find biochemically critical points, e.g. enzymes which uniquely consume and/or produce a certain metabolite. Once identified, the list of critical enzymes is filtered in order to find candidate targets which are non-homologous with the human enzymes. Finally, the essentiality of the identified targets is cross-validated by in silico deletion studies using flux-balance analysis (FBA) on a recent genome-scale metabolic model of H. pylori. Following this approach, we identified some enzymes which could be interesting targets for inhibition studies of H.pylori infection.
The study reported in Chapter 5 extends the previously described approach in light of recent theoretical studies on biological networks. These studies suggested that multiple weak attacks on selected targets are inevitably more efficient than the knockout of a single target, thus providing a conceptual framework for the recent success of multi-target drugs. We used this concept to exploit H.pylori metabolic robustness through multiple weak attacks on selected enzymes, therefore directing us toward target-set discovery for combinatorial therapies. We used the known metabolic and protein interaction data to build an integrated biomolecular network of the pathogen. The network was subsequently screened to find central elements of network communication, e.g. hubs, bridges with high betweenness centrality and overlaps of network communities. The selected enzymes were then classified on the basis of available data about cellular function and essentiality in an attempt to predict successful target combinations. In order to evaluate the network effect triggered by the partial inactivation of candidate targets, robustness analysis was performed on small groups of selected enzymes using flux balance analysis (FBA) on a recent genome-scale metabolic model of H.pylori. In particular, the FBA simulation framework allowed prediction of the growth phenotype associated with every partial inactivation set. The preliminary results obtained so far may help to restrict the initial target pool in search of target sets for novel combinatorial drugs against H.pylori persistence. However, our long-term goal is to better understand the indirect network effects that lie at the heart of multi-target drug action and, ultimately, how multiple weak hits can perturb complex biological systems. H.pylori produces a cytotoxic protein, CagA, that interferes with a very important host signaling pathway, i.e. the epidermal growth factor receptor (EGFR) signaling network.
EGFR signaling is one of the most extensively studied areas of signal transduction, since it regulates growth, survival, proliferation and differentiation in mammalian cells. In Chapter 6, we attempted to build an executable model of the EGFR-signaling core process using a process algebra approach. In the EGFR network, the core process is the heart of its underlying hour-glass architecture, as it plays a central role in downstream signaling cascades to gene expression through activation of multiple transcription factors. It consists of a dense array of molecules and interactions which are tightly coupled to each other. In order to build the executable model, a small set of EGFR core molecules and their interactions is tentatively translated into a BetaWB model. BetaWB is a framework for modelling and simulating biological processes based on the Beta-binders language and its stochastic extension. Once obtained, the computational model of the EGFR core process can be used to test and compare hypotheses regarding the principles of operation of the signaling network, i.e. how the EGFR network generates different responses for each set of combinatorial stimuli. In particular, probabilistic model checking can be used to explore the states and possible state changes of the computational model, whereas stochastic simulation (corresponding to the execution of the BetaWB model) may give quantitative insights into the dynamic behaviour of the system in response to different stimuli. Information from the above techniques allows model validation through comparison with the experimental data available in the literature. The inherent compositionality of the process algebra modeling approach enables further expansion of the EGFR core model, as well as the study of its behavior under specific perturbations, such as invading H.pylori proteins.
This latter aspect might be of great value for H.pylori pathogenesis research, as signaling through the EGF receptors is intricately involved in gastric cancer and in many other gastroduodenal diseases.
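Flux balance analysis, used above for the in silico deletion studies, reduces to a linear program: maximise a biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction network (invented for illustration, not the genome-scale H. pylori model) makes the knockout logic concrete:

```python
import numpy as np
from scipy.optimize import linprog

# Metabolites A, B; reactions R1: -> A (uptake), R2: A -> B, R3: B -> (biomass)
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

def fba(upper_bounds):
    """Maximise the biomass flux v3 subject to S v = 0 and 0 <= v <= ub."""
    c = [0.0, 0.0, -1.0]  # linprog minimises, so negate the objective
    res = linprog(c, A_eq=S, b_eq=[0.0, 0.0],
                  bounds=[(0.0, ub) for ub in upper_bounds])
    return -res.fun

growth = fba([10.0, 100.0, 100.0])   # wild type: limited by the uptake bound
knockout = fba([10.0, 0.0, 100.0])   # "delete" R2 by forcing its flux to zero
```

Setting a reaction's upper bound to zero is exactly how gene deletions are simulated: if the deleted reaction is essential for routing flux to biomass, the optimal growth value collapses to zero, flagging the enzyme as a candidate target.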
14

Gajová, Veronika. "Automatické třídění fotografií podle obsahu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236565.

Abstract:
The purpose of this thesis is to design and implement a tool for automatic categorization of photos. The proposed tool is based on the Bag of Words classification method and is realized as a plug-in for the XnView image viewer. The plug-in can classify a selected group of photos into predefined image categories; the assigned category is then written directly into the picture's IPTC metadata as a keyword.
15

Singla, Swati. "Honest Majority and Beyond: Efficient Secure Computation over Small Population." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/5051.

Abstract:
Secure Multi-Party Computation for small population has witnessed notable practically-efficient works in the setting of both honest majority and dishonest majority. While honest majority provides the promise of stronger security goals of fairness (either all parties get the output or none of them do) and guaranteed output delivery (honest parties always get the output irrespective of adversary’s behaviour), the best that dishonest majority can offer is unanimous abort (either all honest parties get the output or none of them do). In this work, we consider the computation among 4 parties in two different threat models. To avoid clutter and enable ease of understanding, we segregate the thesis into two parts (one for each threat model). Part I considers the standard honest majority (i.e. 1 corruption), where we provide constant-round (low-latency) protocols in a minimal model of pairwise private channels. Improving over the state-of-the-art work of Byali et al. (ACM CCS ’18), we present two instantiations that efficiently achieve: (a) fairness in 3 rounds using 2 garbled circuits (GC); (b) guaranteed output delivery (GOD) in 3 rounds using 4 GCs. Further, improving the efficiency of the 2-round 4PC feasibility result of Ishai et al. (CRYPTO ’15) that achieves GOD at the expense of 12 GCs, we achieve GOD in 2 rounds with 8 GCs, thus saving 4 GCs over that of Ishai et al. Under a mild one-time setup, the GC count can further be reduced to 6, which is half of what the prior work attains. This widely-followed demarcation of the world of MPC into the classes of honest and dishonest majority suffers from a worrisome shortcoming: one class of protocols does not seem to withstand the threat model of the other.
Specifically, an honest-majority protocol promising fairness or GOD violates the primary notion of privacy as soon as half (or more) of the parties are corrupted, while a dishonest-majority protocol does not promise fairness or GOD even against a single corruption, let alone a minority. The promise of the unconventional yet much sought-after brand of MPC, termed Best-of-Both-Worlds (BoBW), is to offer the best possible security in the same protocol depending on the actual corruption scenario. With this motivation in mind, part II presents two practically-efficient 4PC protocols in the BoBW model, which achieve: (1) guaranteed output delivery against 1 corruption and unanimous abort against 2 corruptions; (2) fairness against 1 corruption and unanimous abort against arbitrary corruptions. The thresholds are optimal considering the feasibility result given in the work of Ishai et al. (CRYPTO '06), which marked the inauguration of the BoBW setting. We provide elaborate empirical results through implementation that support the theoretical claims made for all our protocols. We emphasize that this work is the first of its kind in providing practically-efficient constructions with implementation in the BoBW model. Also, the quality of constant rounds makes all protocols in this work suitable for high-latency networks such as the Internet.
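The per-corruption guarantees claimed for the two BoBW protocols above can be summarized as a small lookup (the protocol labels below are ours, for illustration only; they are not the thesis's names):

```python
# Guarantees of the two BoBW 4PC protocols as stated in the abstract.
def guarantee(protocol, corruptions):
    """Return the security notion achieved for a given corruption count."""
    if protocol == "god-ua":                  # protocol (1) in the abstract
        if corruptions <= 1:
            return "guaranteed output delivery"
        if corruptions == 2:
            return "unanimous abort"
    elif protocol == "fair-ua":               # protocol (2) in the abstract
        if corruptions <= 1:
            return "fairness"
        return "unanimous abort"              # arbitrary corruptions
    return "no guarantee"
```

This is exactly the "best of both worlds" point: the same protocol degrades gracefully from the honest-majority guarantee to the dishonest-majority one as the corruption count grows.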
APA, Harvard, Vancouver, ISO, and other styles
16

Kalaidjian, Alex. "Automated Landscape Painting in the Style of Bob Ross." Thesis, 2007. http://hdl.handle.net/10012/2761.

Full text
Abstract:
This thesis presents a way of automatically generating a landscape painting in the artistic style of Bob Ross. First, a relatively simple, yet effective and versatile, painting model is presented. The brushes of the painting model can be used on their own for creative applications or as a lower layer to the software components responsible for automation. Next, the brush strokes and parameters used to automatically paint eight different landscape features, each with its own adjustable attributes and randomized forms, are described. Finally, the placement of all of the automated landscape features required to achieve the layout of one of Bob Ross's landscape paintings is shown.
APA, Harvard, Vancouver, ISO, and other styles
17

Jordaan, Ian Jacques. "Ultimate loads and modes of failure for circular-arc bow girders." Thesis, 2015. http://hdl.handle.net/10539/17482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Shabani, B. "The effects of tunnel height and centre bow length on motions and slam loads in large wave piercing catamarans." Thesis, 2017. https://eprints.utas.edu.au/23923/1/Shabani_whole_thesis.pdf.

Full text
Abstract:
An above-water centre bow in wave piercing catamarans provides additional reserve buoyancy for minimising deck diving in following seas. The centre bow entry in waves, however, contributes to the severity of the wet-deck arch slam loads and slam-induced bending moments. In this thesis, the effects of the centre bow length and wet-deck height/tunnel clearance on the motions and slam loads in large wave piercing catamarans are investigated through model tests to establish a framework for preliminary design analysis. The model tests were performed in regular head seas using a 2.5 m hydroelastic segmented catamaran model designed with an adjustable wet-deck and a changeable centre bow segment. Five different centre bow and wet-deck configurations were considered and over 500 towing tests were performed at two model speeds, in three wave heights and at various wave encounter frequencies. The catamaran model was comprehensively instrumented to measure the pitch and heave, centre bow loads, centre bow accelerations, wet-deck arch slam pressures and the vertical bending moments at two segment cuts located in each port and starboard demihull. Motion analyses showed that both heave and pitch increased over a wide range of encounter wave frequencies as the wet-deck height of the catamaran model increased. Increasing the length of the centre bow showed an increase in the pitch but a decrease in the heave for a particular range of encounter wave frequencies. The vertical motions along the model length indicated that the positions of minimum vertical displacements and accelerations were aft of the LCG, between 20% and 38% of the overall length from the transom. The increase in wet-deck height, and consequently in archway clearance, also resulted in an increase in relative vertical displacement in the centre bow area. This indicated that although the wet-deck height had been increased, the consequent increase in motion still caused slamming to occur.
In dynamic analyses, the maximum force acting on the centre bow segment during a slam event was decomposed into a bow entry force and a slam force. It was found that the slamming force, the centre bow entry force and the slam-induced bending moment increase as the centre bow length increases. Increasing the wet-deck height reduced the maximum slam load and pressure in moderate waves, but not in large waves. A correlation analysis between the slam force and slam pressure showed that the slam loads increase in the longer centre bows because of the increase in the impact area. The location of maximum pressures along the centre bow length was related more to the encounter wave frequency than to the centre bow configuration. The distribution of the peak pressures within the centre bow archways showed that the inboard peak pressures were higher than the top arch and outboard peak pressures. The slam occurrence and severity for different centre bow and wet-deck configurations were analysed by considering the centre bow immersion depth and relative velocity at slam, using the undisturbed water profile. It was found that the undisturbed immersion depth for severe slam loads was greater than the immersion depth at the wet-deck level. Therefore, the cross structure between the demihulls and the centre bow can be modified to reduce the slamming pressure. The results obtained by the comparison of the various centre bow and wet-deck configurations demonstrate the significance of considering the effect of centre bow and archway slamming in structural design, and suggest that the class rules currently available for wave piercing catamarans with a flat deck structure could be modified accordingly. Amongst the tested centre bow configurations, the shortest centre bow was the best design for slam load alleviation.
Ideally, a trade-off amongst the centre bow buoyancy in waves, slam loads and the catamaran motions should be made to optimise the centre bow design according to the vessel's operating conditions.
APA, Harvard, Vancouver, ISO, and other styles
19

Jelínek, Karel. "Dynamika okolozemní rázové vlny a magnetopauzy." Doctoral thesis, 2012. http://www.nusl.cz/ntk/nusl-309848.

Full text
Abstract:
Title: Dynamics of the bow shock and magnetopause Author: Karel Jelínek Department: Department of Surface and Plasma Science Supervisor: Prof. RNDr. Zdeněk Němeček, DrSc., Department of Surface and Plasma Science e-mail address: zdenek.nemecek@mff.cuni.cz Abstract: The interplanetary space is a unique laboratory which allows us to discover (i) the behavior of plasma under different conditions, (ii) the origin of its instabilities, and (iii) its interaction with obstacles such as the Earth's magnetosphere. The present thesis analyzes the outer Earth's magnetosphere. The results are based on in situ sensing by a variety of spacecraft (e.g., IMP-8, INTERBALL-1, MAGION-4, Geotail, Cluster-II and Themis). The solar wind, currently monitored by the WIND and ACE spacecraft near the Lagrange point L1, affects by its dynamic pressure the Earth's magnetic field, which acts as a counter-pressure; the boundary where these pressures are balanced is the magnetopause. Due to the supersonic solar wind speed, the bow shock forms in front of the magnetopause, and the region in between, where plasma flows around the obstacle, is named the magnetosheath. The thesis contributes to a deeper understanding of the dependence of magnetopause and bow shock shapes and positions, especially, (1) on the orientation of the inter-...
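The pressure balance that defines the magnetopause can be illustrated with a standard back-of-envelope estimate (typical solar wind values assumed; this simple calculation is textbook physics, not taken from the thesis itself):

```python
# Magnetopause standoff distance: the point where the magnetic pressure of
# the (compressed) dipole field balances the solar wind dynamic pressure.
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]
B_EQ = 3.1e-5                 # Earth's equatorial dipole field [T]
M_P = 1.67e-27                # proton mass [kg]

def standoff_distance(n_cm3, v_kms, compression=2.0):
    """Standoff distance in Earth radii for given solar wind density/speed."""
    p_dyn = (n_cm3 * 1e6 * M_P) * (v_kms * 1e3) ** 2      # rho * v^2 [Pa]
    p_mag_at_1re = (compression * B_EQ) ** 2 / (2 * MU0)  # at r = 1 R_E
    # dipole field falls as r^-3, so magnetic pressure falls as r^-6
    return (p_mag_at_1re / p_dyn) ** (1 / 6)

print(round(standoff_distance(5.0, 400.0), 1))  # ~10 Earth radii, quiet wind
```

The r^-6 dependence is why the magnetopause moves only modestly even for large swings in dynamic pressure, while faster or denser solar wind still pushes it measurably earthward.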
APA, Harvard, Vancouver, ISO, and other styles
20

Chen, Guan-Jhih, and 陳冠智. "Action Segmentation and Recognition Based on Bag of Visual Words Model (BOVW) Using Self-Growing and Self-Organized Neural Gas (SGONG) Clustering." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/68280876805927076117.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 104 (2015)
Nowadays, recording and sharing videos has become part of everyday life. Besides preserving memories, the information extracted from videos is useful in diverse applications. Action recognition and segmentation are important techniques in the field of computer vision. We propose an action segmentation system that can segment a video into a number of shots according to the action the actor performs within the video. At the same time, the proposed system also recognizes the action class of those segmented shots. Our system estimates the number of actions in the video by using the distribution of visual word vectors without any predefined information, and adjusts the window step size automatically instead of relying on manual selection. Moreover, we adopt the BOVW model with the SGONG clustering method to construct the action word dictionary used to recognize the video shots. The experimental results demonstrate the performance of our segmentation method: the system achieves a segmentation precision rate of around 94.2% and a recognition rate of 90%. These results show that the performance is better than previous similar works.
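The segmentation idea described above can be sketched in a few lines: once each frame has been quantized to a visual word (the thesis uses SGONG clustering for that step, which is not reproduced here), action boundaries show up as jumps in the word distribution between adjacent windows. Everything below is an illustrative simplification, not the thesis's actual algorithm:

```python
# Detect action boundaries as spikes in the distance between visual-word
# histograms of adjacent sliding windows.
from collections import Counter

def histogram(words, vocab_size):
    c = Counter(words)
    n = len(words) or 1
    return [c[w] / n for w in range(vocab_size)]

def l1(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def segment(frame_words, vocab_size, window=10, threshold=1.0):
    """Return frame indices where the visual-word distribution changes."""
    boundaries = []
    for t in range(window, len(frame_words) - window):
        left = histogram(frame_words[t - window:t], vocab_size)
        right = histogram(frame_words[t:t + window], vocab_size)
        if l1(left, right) >= threshold:
            boundaries.append(t)
    return boundaries

# two synthetic "actions": word 0 dominates, then word 3 takes over
video = [0] * 40 + [3] * 40
cuts = segment(video, vocab_size=4)
```

In practice the run of adjacent detections around the true cut (frame 40 here) would be collapsed to a single boundary, e.g. by keeping only the local maximum of the distance.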
APA, Harvard, Vancouver, ISO, and other styles
21

Silva, Francisco António Coelho e. "A qualitative approach to risk assessment and control in engineered nanoparticles occupational exposure." Doctoral thesis, 2016. http://hdl.handle.net/1822/41863.

Full text
Abstract:
Doctoral Dissertation for PhD degree in Industrial and Systems Engineering
The existing research effort and the common use of nanomaterials, while an opportunity for economic growth, pose health and safety problems. Research on the health effects of nanoparticles performed during the last decade shows the possible harmfulness of several nanoparticles, including those already present in everyday products, so workers' health and safety are critical to the development of nanotechnology applications. Despite the increasing knowledge in the nanotoxicology field and in occupational safety and hygiene, the uncertainties related to exposure to nanoparticles and the related effects are important. Qualitative risk assessment methods and design-based approaches are considered useful when dealing with those uncertainties, and their improvement is a relevant research issue. This work included, among other things, three individualized, although related, studies. In the first one, the risk of exposure to TiO2 nanoparticles was assessed in a research laboratory using a quantitative exposure method and qualitative risk assessment methods. It was found that the results from the direct-reading Condensation Particle Counter (CPC) equipment and the CB Nanotool seem to be related and aligned, while the results obtained from the use of the Stoffenmanager Nano seem to indicate a higher risk level. The main objective of the second study was to analyse and compare different qualitative risk assessment methods during the production of polymer mortars with nanomaterials. It was verified that the different methods applied also produce different final results. Accordingly, it is necessary to improve the use of qualitative methods by defining narrow criteria for method selection in each assessed situation, bearing in mind that the uncertainties are also a relevant factor when dealing with the risk related to nanotechnologies.
The third study described the application of the Systematic Design Analysis Approach, based on the hazard process model (bow-tie) and on a design analysis of the production process, during a development project to produce a new type of ceramic tile with photocatalytic properties. Applying the Systematic Design Analysis Approach to the production process made it possible to identify the emission and exposure scenarios and the related barriers, based on the different technological options of the production process. The proposed intervention model will allow occupational safety and hygiene to be integrated into new production process development projects involving a multidisciplinary team. The current thesis aims to contribute to the improvement of occupational risk assessment and risk control in nanotechnologies, improving the use of qualitative risk assessment methods by drawing attention to the importance of the information available on the nanomaterials and to the differences obtained when using different methods for the same task, and discussing possible ways to obtain more reliable results. The obtained results also showed that, when using a design-based approach, it is possible to reduce risks for workers in the workplace by changing the production process, reducing or eliminating nanoparticle emission and consequently reducing workers' exposure.
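The bow-tie hazard model used in the third study can be sketched as a small data structure: threats on the left of the top event, consequences on the right, and barriers on each path. All concrete names below are illustrative, not taken from the thesis:

```python
# Minimal bow-tie sketch: paths with no barrier are the first candidates
# for redesign of the production process.
from dataclasses import dataclass, field

@dataclass
class Path:
    event: str                                    # a threat or a consequence
    barriers: list = field(default_factory=list)  # preventive / mitigating

@dataclass
class BowTie:
    top_event: str
    threats: list = field(default_factory=list)       # left side
    consequences: list = field(default_factory=list)  # right side

    def unbarriered(self):
        """Paths with no barrier at all."""
        return [p.event for p in self.threats + self.consequences
                if not p.barriers]

tile_line = BowTie(
    top_event="nanoparticle release during tile coating",
    threats=[Path("dry powder handling", ["closed transfer system"]),
             Path("spray application", [])],
    consequences=[Path("worker inhalation exposure",
                       ["local exhaust ventilation", "respiratory protection"])])
```

Walking the structure this way mirrors the study's use of the bow-tie to enumerate emission and exposure scenarios and the barriers available for each technological option.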
APA, Harvard, Vancouver, ISO, and other styles
22

"Assessing the Tradeoffs of Water Allocation: Design and Application of an Integrated Water Resources Model." Thesis, 2015. http://hdl.handle.net/10388/ETD-2015-11-2300.

Full text
Abstract:
The Bow River Basin in Southern Alberta is a semi-arid catchment, with surface water provided from the Rocky Mountains. Water resources in this basin, primarily surface water, are allocated to a variety of users: industry, municipalities, agriculture, energy and environmental needs. The largest consumptive use is by agriculture (80%), and several large dams at the headwaters provide over 800,000 MWh of hydropower. This water is managed under the 1990 Water Act, which distributes water via licenses following the "first in time, first in right" principle. Currently, the basin is over-allocated and closed to any new licenses. Conflicts between different water users have consequences for the economy and the environment. By using an integrated water resources model, these conflicts can be further examined and solutions can be investigated and proposed. In this research an integrated water resources model, referred to as the Sustainability-oriented Water Allocation Management and Planning Model applied to the Bow Basin (SWAMPB), is developed to emulate Alberta's Water Resources Management Model (WRMM). While having the same allocation structure as WRMM, SWAMPB instead provides a simulation environment, linking allocation with dynamic irrigation and economic sub-models. SWAMPB is part of a much larger framework, SWAMP, intended to simulate the water resources systems of the entire South Saskatchewan River Basin (SSRB). SWAMPB integrates economics with a water resources allocation model as well as an irrigation model, all developed using the system dynamics approach. Water is allocated following the allocation structure provided in WRMM, through the operating rules of reservoirs and diversions to water users. The irrigation component calculates the water balance of farms, determining the crop water demand and crop yields. An economic valuation is provided for both crops and hydropower generation through the economic component.
The structure of SWAMPB is verified through several phases. First, the operation of reservoirs with fixed (known) inflows and modeled releases is compared against WRMM for a historical simulation period (1928-2001). Further verifications compare the operation of SWAMPB as a whole, without any fixed flows but with fixed demands, to identify errors in the system water allocation. A final verification then compares both models against historical flows and reservoir levels to assess the validity of each model. SWAMPB, although found to have some minor differences in model structure due to the system dynamics modeling environment, is evaluated as an acceptable emulator. SWAMPB is applied to assess a variety of management and policy solutions for mitigating the environmental flow deficit. Solutions include increasing irrigation efficiency (S1), requiring more summer release from hydropower reservoirs at the headwaters (S2), a combination of the previous two (S3), implementing the In-Stream Flow Needs (S4) and implementing Water Conservation Objectives (S5). The solutions are examined not only by their ability to restore river flows, but also with respect to the economic consequences and the effects on hydropower, irrigation, and municipalities. It is found that the three technical solutions (S1, S2, and S3) provide economic gains and allow more efficient water use, but do little to restore streamflows. Conversely, the two policy solutions (S4 and S5) are more effective at restoring river flow, but have severe consequences for the economy and for water availability for irrigation and municipal uses. This analysis does not recommend a particular solution, but provides a quantification of the tradeoffs that can be used by stakeholders to make decisions. Further work on the SWAMP methodology is foreseen, to link SWAMPB with other models, enabling a comprehensive analysis across the entire SSRB.
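The "first in time, first in right" licensing principle the basin operates under can be illustrated with a toy allocation routine (illustrative only; SWAMPB's actual allocation mirrors WRMM's operating rules, which are far richer than this):

```python
# Prior-appropriation water allocation: senior licences are filled in full
# before any junior licence receives water.
def allocate(supply, licences):
    """licences: (name, seniority_year, demand); returns volume per user."""
    out = {}
    for name, year, demand in sorted(licences, key=lambda l: l[1]):
        take = min(demand, supply)
        out[name] = take
        supply -= take
    return out

licences = [("irrigation district", 1910, 60),
            ("city", 1955, 30),
            ("new industry", 1998, 20)]
print(allocate(100, licences))  # ample supply: everyone partly or fully met
print(allocate(70, licences))   # shortage: junior licences are shorted first
```

This is why, in an over-allocated basin closed to new licences, shortages fall disproportionately on junior users and on the environment, the conflict the thesis's scenarios (S1-S5) try to mitigate.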
APA, Harvard, Vancouver, ISO, and other styles
23

Babu, B. Sathish. "A Dynamic Security And Authentication System For Mobile Transactions : A Cognitive Agents Based Approach." Thesis, 2009. https://etd.iisc.ac.in/handle/2005/1029.

Full text
Abstract:
In a world of high mobility, there is a growing need for people to communicate with each other and have timely access to information regardless of the location of the individuals or the information. This need is supported by advances in the technologies of networking, wireless communications, and portable computing devices, which, together with the reduction in the physical size of computers, have led to the rapid development of mobile communication infrastructure. Hence, mobile and wireless networks present many challenges to application, hardware, software and network designers and implementers. One of the biggest challenges is to provide a secure mobile environment. Security plays a more important role in mobile communication systems than in systems that use wired communication. This is mainly because the ubiquitous nature of the wireless medium makes it more susceptible to security attacks than wired communications. The aim of the thesis is to develop an integrated dynamic security and authentication system for mobile transactions. The proposed system operates at the transaction level of a mobile application, intelligently selecting the suitable security technique and authentication protocol for the ongoing transaction. To do this, we have designed two schemes: the transactions-based security selection scheme and the transactions-based authentication selection scheme. These schemes use transaction sensitivity levels and the usage context, which includes user behaviors, the network used, the device used, and so on, to decide the required security and authentication levels. Based on this analysis, the requisite security technique and authentication protocols are applied for the transaction in process. The Behaviors-Observations-Beliefs (BOB) model is developed using cognitive agents to supplement the working of the security and authentication selection schemes. A transaction classification model is proposed to classify transactions into various sensitivity levels.
The BOB model: The BOB model is a cognitive-theory-based model that generates beliefs about a user by observing the various behaviors the user exhibits during transactions. The BOB model uses two types of Cognitive Agents (CAs): mobile CAs (MCAs) and static CAs (SCAs). The MCAs are deployed on the client devices to formulate beliefs by observing the various behaviors of a user during transaction execution. The SCA performs belief analysis and identifies belief deviations w.r.t. established beliefs. We have developed four constructs to implement the BOB model, namely: the behaviors identifier, observations generator, beliefs formulator, and beliefs analyser. The BOB model is developed with emphasis on minimum computation and minimum code size, keeping in mind the resource restrictiveness of mobile devices and infrastructure. The knowledge organisation using cognitive factors helps in selecting a rational approach for deciding the legitimacy of a user or a session. It also reduces the solution search space by consolidating user behaviors into high-level data such as beliefs; as a result, the decision-making time reduces considerably. The transactions classification model: This model is proposed to classify a given set of transactions of an application service into four sensitivity levels. The grouping of transactions is based on the operations they perform and the amount of risk/loss involved if they are misused. The four levels are, namely, transactions whose execution may cause no damage (level-0), minor damage (level-1), significant damage (level-2) and substantial damage (level-3). A policy-based transaction classifier is developed and incorporated in the SCA to decide the transaction sensitivity level of a given transaction. Transactions-based security selection scheme (TBSS-Scheme): The traditional security schemes at the application level are either session, transaction, or event based.
They secure the application data with prefixed security techniques for mobile transactions or events. Mobile transactions generally possess different security risk profiles, so, empirically, there is a need for various levels of data security schemes in the mobile communications environment, which faces resource insufficiency in terms of bandwidth, energy, and computation capability. We have proposed an intelligent security technique selection scheme at the application level, which dynamically decides the security technique to be used for a given transaction in real time. The TBSS-Scheme uses the BOB model and the transactions classification model while deciding the required security technique. The selection is purely based on the transaction sensitivity level and user behaviors. The security techniques repository used in the proposed scheme is organised into three levels based on the complexity of the security techniques. The complexities are decided based on time and space complexities and on the strength of the security technique against some of the latest security attacks. The credibility factors, computed using the credibility module over the transaction network and the transaction device, are also used while choosing the security technique from a particular level of the security repository. Analytical models are presented for belief analysis, security threat analysis, and the average security cost incurred during a transaction session. The results of this scheme are compared with regular schemes, and the advantages and limitations of the proposed scheme are discussed. A case study on the application of the proposed security selection scheme is conducted on a mobile banking application, and the results are presented. Transactions-based authentication selection scheme (TBAS-Scheme): The authentication protocols/schemes are used at the application level to authenticate the genuine users/parties and devices used in the application.
Most of these protocols challenge the user/device to get the authentication information, rather than deploying methods to identify the validity of a user/device. Therefore, there is a need for an authentication scheme which intelligently authenticates a user by continuously monitoring the genuineness of the activities/events/behaviors/transactions throughout the session. The transactions-based authentication selection scheme provides a new dimension in authenticating users of services. It enables strong authentication at the transaction level, based on the sensitivity level of the given transaction and user behaviors. The proposed approach intensifies the authentication procedure by selecting authentication schemes using the BOB model and the transactions classification model. It provides an effective authentication solution by relieving conventional authentication systems from being dependent only on the strength of authentication identifiers. We have made a performance comparison between the transactions-based authentication selection scheme and a session-based authentication scheme in terms of the identification of various active attacks, and the average authentication delay and average authentication costs are analysed. We have also shown the working of the proposed scheme in inter-domain and intra-domain hand-off scenarios, and discussed the merits of the scheme by comparing it with the mobile IP authentication scheme. A case study on the application of the proposed authentication selection scheme for authenticating personalized multimedia services is presented. Implementation of the TBSS and TBAS schemes for a mobile commerce application: We have implemented the integrated working of both the TBSS and TBAS schemes for a mobile commerce application. Details on identifying vendor selection, day of purchase, time of purchase, transaction value, and frequency-of-purchase behaviors are given.
A sample list of mobile commerce transactions is presented along with their classification into various sensitivity levels. The working of the system is discussed using three cases of purchases, and the results on transaction distribution, deviation factor generation, security technique selection, and authentication challenge generation are presented. In summary, we have developed an integrated dynamic security and authentication system using the above-mentioned selection schemes for mobile transactions, incorporating the BOB model, the transactions classification model, and the credibility modules. We have successfully implemented the proposed schemes using cognitive-agents-based middleware. The results of the experiments suggest that incorporating user behaviors and transaction sensitivity levels brings dynamism and adaptiveness to the security and authentication system, through which mobile communication security can be made more robust to attacks and more resource savvy, in terms of reduced bandwidth and computation requirements, by using an appropriate security and authentication technique/protocol.
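The core selection idea of the thesis, classify a transaction into a sensitivity level, then pick a security level that also reflects the user's belief deviation, can be sketched as follows (the policy rules, thresholds and names below are illustrative assumptions, not the author's actual classifier):

```python
# Toy policy-based transaction classifier and security selector, in the
# spirit of the four sensitivity levels (0-3) described in the abstract.
def sensitivity(txn):
    """Map a transaction to a damage-potential level 0-3."""
    if txn["kind"] == "balance-view":
        return 0                      # no damage
    if txn["kind"] == "bill-payment" and txn["amount"] < 100:
        return 1                      # minor damage
    if txn["kind"] == "transfer" and txn["amount"] < 10000:
        return 2                      # significant damage
    return 3                          # substantial damage

def select_security(level, belief_deviation):
    """Higher sensitivity, or behavior deviating from the established
    beliefs, selects a stronger security suite."""
    if belief_deviation > 0.5:        # user behaves unlike their profile
        level = 3
    return ["none", "light", "standard", "strong"][level]
```

For example, a routine mid-size transfer from a well-behaved user gets the "standard" suite, while even a harmless balance view from a session whose behavior deviates sharply from the user's beliefs is escalated to "strong".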
APA, Harvard, Vancouver, ISO, and other styles
24

Babu, B. Sathish. "A Dynamic Security And Authentication System For Mobile Transactions : A Cognitive Agents Based Approach." Thesis, 2009. http://hdl.handle.net/2005/1029.

Full text
Abstract:
In the world of high mobility, there is a growing need for people to communicate with each other and have timely access to information regardless of the location of the individuals or the information. This need is supported by the advances in the technologies of networking, wireless communications, and portable computing devices with reduction in the physical size of computers, lead to the rapid development in mobile communication infrastructure. Hence, mobile and wireless networks present many challenges to application, hardware, software and network designers and implementers. One of the biggest challenge is to provide a secure mobile environment. Security plays a more important role in mobile communication systems than in systems that use wired communication. This is mainly because of the ubiquitous nature of the wireless medium that makes it more susceptible to security attacks than wired communications. The aim of the thesis is to develop an integrated dynamic security and authentication system for mobile transactions. The proposed system operates at the transactions-level of a mobile application, by intelligently selecting the suitable security technique and authentication protocol for ongoing transaction. To do this, we have designed two schemes: the transactions-based security selection scheme and the transactions-based authentication selection scheme. These schemes use transactions sensitivity levels and the usage context, which includes users behaviors, network used, device used, and so on, to decide the required security and authentication levels. Based on this analysis, requisite security technique, and authentication protocols are applied for the trans-action in process. The Behaviors-Observations-Beliefs (BOB) model is developed using cognitive agents to supplement the working of the security and authentication selection schemes. A transaction classification model is proposed to classify the transactions into various sensitivity levels. 
The BOB model
The BOB model is a cognitive-theory-based model that generates beliefs about a user by observing the various behaviors the user exhibits during transactions. The BOB model uses two types of Cognitive Agents (CAs): mobile CAs (MCAs) and static CAs (SCAs). The MCAs are deployed on client devices to formulate beliefs by observing a user's behaviors during transaction execution. The SCA performs belief analysis and identifies belief deviations with respect to established beliefs. We have developed four constructs to implement the BOB model, namely the behaviors identifier, observations generator, beliefs formulator, and beliefs analyser. The BOB model is developed with emphasis on minimal computation and minimal code size, keeping in mind the resource restrictiveness of mobile devices and infrastructure. Organising knowledge using cognitive factors helps in selecting a rational approach for deciding the legitimacy of a user or a session. It also reduces the solution search space by consolidating user behaviors into high-level data such as beliefs; as a result, decision-making time reduces considerably.
The transactions classification model
This model is proposed to classify the given set of transactions of an application service into four sensitivity levels. The grouping of transactions is based on the operations they perform and the amount of risk/loss involved if they are misused. The four levels are: transactions whose execution may cause no damage (level-0), minor damage (level-1), significant damage (level-2), and substantial damage (level-3). A policy-based transaction classifier is developed and incorporated in the SCA to decide the sensitivity level of a given transaction.
Transactions-based security selection scheme (TBSS-Scheme)
Traditional security schemes at the application level are either session, transaction, or event based.
They secure the application data with prefixed security techniques for mobile transactions or events. Mobile transactions generally possess different security risk profiles, so empirically we may find a need for various levels of data security in the mobile communications environment, which faces resource insufficiency in terms of bandwidth, energy, and computation capabilities. We have proposed an intelligent security technique selection scheme at the application level, which dynamically decides the security technique to be used for a given transaction in real time. The TBSS-Scheme uses the BOB model and the transaction classification model while deciding the required security technique. The selection is based purely on the transaction sensitivity level and user behaviors. The security techniques repository used in the proposed scheme is organised into three levels based on the complexity of the security techniques. The complexities are decided based on time and space complexities and the strength of each security technique against some of the latest security attacks. Credibility factors, computed by the credibility module over the transaction network and transaction device, are also used while choosing the security technique from a particular level of the security repository. Analytical models are presented for belief analysis, security threat analysis, and the average security cost incurred during a transaction session. The results of this scheme are compared with regular schemes, and the advantages and limitations of the proposed scheme are discussed. A case study applying the proposed security selection scheme to a mobile banking application is conducted, and results are presented.
Transactions-based authentication selection scheme (TBAS-Scheme)
Authentication protocols/schemes are used at the application level to authenticate the genuine users/parties and devices used in the application.
Most of these protocols challenge the user/device to obtain authentication information, rather than deploying methods to identify the validity of a user/device. Therefore, there is a need for an authentication scheme that intelligently authenticates a user by continuously monitoring the genuineness of activities/events/behaviors/transactions throughout the session. The transactions-based authentication selection scheme provides a new dimension in authenticating users of services. It enables strong authentication at the transaction level, based on the sensitivity level of the given transaction and user behaviors. The proposed approach intensifies the authentication procedure by selecting authentication schemes using the BOB model and the transaction classification model. It provides an effective authentication solution by relieving conventional authentication systems from depending only on the strength of authentication identifiers. We have made a performance comparison between the transactions-based authentication selection scheme and a session-based authentication scheme in terms of the identification of various active attacks, and the average authentication delay and average authentication cost are analysed. We have also shown the working of the proposed scheme in inter-domain and intra-domain hand-off scenarios, and discussed its merits in comparison with the Mobile IP authentication scheme. A case study applying the proposed authentication selection scheme to authenticating personalized multimedia services is presented.
Implementation of the TBSS and TBAS schemes for a mobile commerce application
We have implemented the integrated working of both the TBSS and TBAS schemes for a mobile commerce application. Details are given on identifying the vendor-selection, day-of-purchase, time-of-purchase, transaction-value, and frequency-of-purchase behaviors.
A sample list of mobile commerce transactions is presented along with their classification into various sensitivity levels. The working of the system is discussed using three cases of purchases, and results on transaction distribution, deviation factor generation, security technique selection, and authentication challenge generation are presented. In summary, we have developed an integrated dynamic security and authentication system for mobile transactions using the above-mentioned selection schemes, incorporating the BOB model, the transaction classification model, and the credibility modules. We have successfully implemented the proposed schemes using a cognitive-agents-based middleware. The results of experiments suggest that incorporating user behaviors and transaction sensitivity levels brings dynamism and adaptiveness to the security and authentication system, through which mobile communication security can be made more robust to attacks and resource savvy, with reduced bandwidth and computation requirements, by using an appropriate security and authentication technique/protocol.
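The integrated selection described in this abstract can be sketched as follows. The function name, thresholds, and escalation rule are illustrative assumptions, not the thesis code; the inputs stand in for the SCA's transaction sensitivity level and belief-deviation factor:

```python
# Hypothetical sketch: choose a security-repository level (0-2) and decide
# whether to issue a strong authentication challenge, from the transaction
# sensitivity level and the belief-deviation factor produced by the SCA.

def select_security_and_auth(sensitivity: int, deviation: float):
    """sensitivity: 0-3; deviation: 0.0 (matches beliefs) .. 1.0 (anomalous)."""
    # Escalate the repository level when the user's behavior deviates.
    level = min(2, sensitivity // 2 + (1 if deviation > 0.5 else 0))
    # Strong re-authentication for sensitive or anomalous transactions.
    challenge = sensitivity >= 2 or deviation > 0.7
    return level, challenge

print(select_security_and_auth(0, 0.1))  # (0, False): cheap technique, no challenge
print(select_security_and_auth(3, 0.8))  # (2, True): strongest level plus challenge
```

The point of such a rule is the one the abstract makes: low-risk, well-behaved sessions avoid the bandwidth and computation cost of the strongest techniques, while sensitive or anomalous transactions are escalated.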
APA, Harvard, Vancouver, ISO, and other styles
25

Venugopal, Thushara. "Sensitivity of Sea Surface Temperature Intraseasonal Oscillation to Diurnal Atmospheric Forcings in an OGCM." Thesis, 2013. http://etd.iisc.ac.in/handle/2005/3347.

Full text
Abstract:
The diurnal cycle is a dominant mode of sea surface temperature (SST) variability in tropical oceans that influences air-sea interaction and climate processes. Diurnal variability of SST generally ranges from ~0.1 to 2.0°C and is controlled by atmospheric fluxes of heat and momentum. In the present study, the response of intraseasonal variability (ISV) of SST in the Bay of Bengal (BoB) to diurnal atmospheric forcings during the summer monsoon of 2007 has been examined using an Ocean General Circulation Model (OGCM). The model is based on the Modular Ocean Model Version 4 (MOM4p0), with a horizontal resolution of 0.25° and 40 vertical levels, including a fine resolution of 5 m in the upper 60 m. Numerical experiments were conducted by forcing the model with daily and hourly atmospheric forcings to examine the modulation of SST-ISV by the diurnal cycle. Additional experiments were performed to determine the relative roles of the diurnal cycle in solar radiation and winds on SST and mixed layer depth (MLD). Since salinity, which is decisive in SST variability, varies meridionally in the BoB, two locations were selected for analysis: one in the northern bay at 89°E, 19°N, where salinity is lower, and the other in the southern bay at 90°E, 8°N, where salinity is higher and where observations are available from a Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA) buoy for comparison with the model simulation. Diurnal atmospheric forcings modify SST-ISV in both the southern and northern bay. SST-ISV in the southern bay is dominantly controlled by the diurnal cycle of insolation, while in the northern bay the diurnal cycles of insolation and winds have comparable contributions. The diurnal cycle enhanced the amplitude of the 3 selected intraseasonal events in the southern bay and 3 out of the 6 events in the northern bay during the study period.
In the southern bay, simulated SST variability with hourly forcing was closer to the observations from RAMA, implying that incorporating the diurnal cycle in model forcing rectifies SST-ISV. Moreover, SST obtained with diurnal forcing contains additional fluctuations at higher frequencies within and between intraseasonal events; such fluctuations are absent with daily forcing. The diurnal variability of SST is significant during the warming phase of intraseasonal events and reduces during the cooling phase. The diurnal amplitude of SST decreases with depth, with the depth dependence also being larger during the warming phase. SST-ISV modulation with diurnal forcing results from the diurnal cycle of upper-ocean heat fluxes and vertical mixing. Diurnal warming and cooling result in a net gain or loss of heat in the mixed layer after a day's cycle. When the retention (loss) of heat in the mixed layer increases with diurnal forcing during the warming (cooling) phase of intraseasonal events, the daily mean SST rise (fall) becomes higher, amplifying the intraseasonal warming (cooling). In the southern bay, SST-ISV amplification is mainly controlled by the diurnal variability of MLD, which modifies the heat fluxes. Increased intraseasonal warming with diurnal forcing results from the increase in radiative heating due to the shoaling of the daytime mixed layer. Amplified intraseasonal cooling is dominantly controlled by the strengthening of sub-surface processes, due to the nocturnal deepening of the mixed layer and increased temperature gradients below the mixed layer. In the northern bay, SST-ISV modulation with diurnal forcing is not as large as in the southern bay. The mean increase in SST-ISV amplitude with diurnal forcing is ~0.16°C in the southern bay, while it is only ~0.03°C in the northern bay. The reduced response of SST-ISV to diurnal forcings in the northern bay is related to the weaker diurnal variability of MLD.
Salinity stratification limits the diurnal variability of the mixed layer in the northern bay, unlike in the southern bay. The seasonal (June-September) mean diurnal amplitude of MLD is ~15 m in the southern bay, while it is reduced to ~1.5 m in the northern bay. Diurnal variability of MLD spanning only a few meters is not sufficient to create large modifications in mixed layer heat fluxes and SST-ISV in the northern bay. The vertical resolution of the model limits the shallowing of the mixed layer to 7.5 m, thus restricting the diurnal variability of the simulated MLD.
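As a minimal illustration of the diurnal amplitude discussed above (the daily maximum minus the daily minimum of SST), the following sketch uses a synthetic hourly SST series; the 28.5 °C mean and 0.4 °C sinusoidal amplitude are assumptions for illustration, not values from the thesis:

```python
# Illustrative calculation (not the thesis code): diurnal amplitude of SST
# from an hourly series, and the daily mean that daily forcing would see.
import math

hourly_sst = [28.5 + 0.4 * math.sin(2 * math.pi * (h - 6) / 24)
              for h in range(24)]  # synthetic one-day cycle peaking near noon

diurnal_amplitude = max(hourly_sst) - min(hourly_sst)  # max minus min, ~0.8 °C
daily_mean = sum(hourly_sst) / 24                      # diurnal cycle averaged out

print(round(diurnal_amplitude, 2), round(daily_mean, 2))
```

The contrast between the two quantities is the essence of the daily-versus-hourly forcing experiments: a model forced with the daily mean alone never sees the within-day warming and nocturnal cooling that modulate mixed-layer heat fluxes.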
APA, Harvard, Vancouver, ISO, and other styles
