Dissertations on the topic "Computational practice"

To view other types of publications on this topic, follow the link: Computational practice.

Cite a source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations for your research on the topic "Computational practice".

Next to each work in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the record's metadata.

Browse dissertations across a wide range of disciplines and compile an accurate bibliography.

1

Barbaresi, Mattia. "Computational mechanics: from theory to practice." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15649/.

Abstract:
In the last fifty years, computational mechanics has gained the attention of a large number of disciplines, ranging from physics and mathematics to biology, involving all the disciplines that deal with complex systems or processes. With ϵ-machines, computational mechanics provides powerful models that can help characterize these systems. To date, an increasing number of studies concern the use of such methodologies; nevertheless, an attempt to make this approach more accessible in practice is still lacking. Starting from this point, this thesis investigates a more practical approach to computational mechanics so as to make it suitable for applications in a wide spectrum of domains. ϵ-machines are analyzed in a robotics setting, to understand whether they can be exploited in contexts with typically complex dynamics, such as swarms; experiments are conducted with random-walk behavior and the aggregation task. Statistical complexity is first studied and tested on the logistic map and then exploited, as a more applied case, in the analysis of electroencephalograms as a classification parameter, resulting in the discrimination between patients (with different sleep disorders) and healthy subjects. The number of applications that may benefit from such a technique is enormous, and this work aims to broaden the prospects for more applied uses.
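The abstract above does not include implementation details; purely as an illustration of the kind of pipeline it describes, the sketch below (Python, with invented parameters) generates a binary symbolic sequence from the logistic map and estimates block-entropy rates, the raw ingredients on which statistical-complexity measures built from ϵ-machines rely.

```python
import numpy as np
from collections import Counter

def logistic_series(r=4.0, x0=0.4, n=20000, burn=1000):
    """Iterate the logistic map x -> r*x*(1-x) and binarize around 0.5."""
    x = x0
    symbols = []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            symbols.append(1 if x > 0.5 else 0)
    return symbols

def block_entropy(symbols, L):
    """Shannon entropy (bits) of length-L blocks in the symbol sequence."""
    blocks = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

series = logistic_series()
for L in range(1, 6):
    # Entropy-rate estimate h(L) = H(L) - H(L-1); for L = 1 just H(1).
    h = block_entropy(series, L) - block_entropy(series, L - 1) if L > 1 else block_entropy(series, 1)
    print(f"L={L}: block entropy rate estimate {h:.3f} bits/symbol")
```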
2

Kelly, David. "Computational mechanics in practice : mathematical adaptions and experimental applications." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.570852.

Abstract:
The definition and quantification of complexity is a source of debate. A promising answer, from Crutchfield, Shalizi and co-workers, identifies complexity with the amount of information required to optimally predict the future of a process. Computational mechanics computes this quantity for discrete time series, quantifying the complexity and generating minimal, optimally predictive models. Here we adapt and apply these methods to two very different problems. First, we extend computational mechanics to continuous data which cluster around discrete values. This is applied to the analysis of single-molecule experimental data, allowing us to infer hidden Markov models without the necessity of assuming a model architecture and allowing for the inference of degenerate states, giving advantages over previous analysis methods. The new analysis methods are demonstrated to perform well on both simulated data, in high-noise and sparse-data conditions, and experimental data, namely fluorescence resonance energy transfer spectra of Holliday junction conformational dynamics. Secondly, we apply computational mechanics to investigations of the HP model of protein folding. Computational mechanics was used to investigate the properties of the sequence sets folding to the highly designable structures. A hypothesised correlation between a structure's designability and the statistical complexity of its sequence set was unsupported. However, methods were developed to succinctly encapsulate the non-local statistical regularities of sequence sets and used to accurately predict the structure of designing and randomly generated sequences. Finally, limitations of the standard algorithm for reconstructing predictive models are addressed. The algorithm can fail due to pair-wise comparisons of conditional distributions. A clustering method, considering all distributions simultaneously, has been developed; this also makes clear when the algorithm may be effectively employed. A second issue concerns a class of processes for which computational mechanics cannot infer the correct, optimally predictive models. Adaptations to allow the inference of these processes have been devised.
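The reconstruction algorithm criticised in the last paragraph groups histories by their conditional next-symbol distributions. The following toy sketch (assumed, not taken from the thesis) shows that core step for a discrete series: estimating P(next symbol | history) and comparing two histories with a total-variation distance, the kind of pair-wise comparison the thesis replaces with a clustering over all distributions at once.

```python
from collections import Counter, defaultdict

def conditional_distributions(symbols, L):
    """Estimate P(next symbol | preceding length-L history) from a sequence."""
    counts = defaultdict(Counter)
    for i in range(L, len(symbols)):
        history = tuple(symbols[i - L:i])
        counts[history][symbols[i]] += 1
    dists = {}
    for history, ctr in counts.items():
        total = sum(ctr.values())
        dists[history] = {s: c / total for s, c in ctr.items()}
    return dists

def total_variation(p, q):
    """Total-variation distance between two next-symbol distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

# Toy usage: histories whose distance falls below a threshold would be
# merged into the same candidate causal state.
seq = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
dists = conditional_distributions(seq, L=2)
for h1 in dists:
    for h2 in dists:
        print(h1, h2, round(total_variation(dists[h1], dists[h2]), 3))
```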
3

Garg, Shilpa. "Computational haplotyping: theory and practice." Supervised by Tobias Marschall. Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2018. http://d-nb.info/116249607X/34.

4

Joyce, Sam. "Performance driven design systems in practice." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.687303.

Abstract:
This thesis is concerned with the application of computation in the context of professional architectural practice, specifically towards defining complex buildings that are highly integrated with respect to design and engineering performance. The thesis represents applied research undertaken whilst in practice at Foster + Partners. It reviews the current state of the art of computational design techniques to quickly but flexibly model and analyse building options. The application of parametric design tools to active design projects is discussed with respect to real examples, as well as methods to link the geometric definitions to structural engineering analysis to provide performance data in near real time. The practical interoperability between design software and engineering tools is also examined. The role of performance data in design decision making is analysed by comparing manual workflows with methods assisted by computation. This extends to optimisation methods which, by making use of design automation, actively make design decisions to return optimised results. The challenges and drawbacks of using these methods effectively in real design situations are discussed, especially the limitations of these methods with respect to incomplete problem definitions and design exploration resulting in modified performance requirements. To counter these issues a performance-driven design workflow is proposed. This is a mixed-initiative approach whereby designer-centric understanding and decisions are computer-assisted. Flexible meta-design descriptions that encapsulate the variability of the design space under consideration are explored and compared with existing optimisation approaches. Computation is used to produce and visualise the performance data from the large design spaces generated by parametric design descriptions and associated engineering analysis. Novel methods are introduced that define a design and performance space using cluster computing methods to speed up the generation of large numbers of options. The use of data visualisation is applied to design problems, showing how in real situations it can aid design orientation and decision making using the large amount of data produced. Strategies to enable these workflows are discussed and implemented, focusing on re-appropriating existing web design paradigms using a modular approach concentrating on scalable data creation and information display.
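As a hypothetical illustration of the performance-driven workflow sketched above, the snippet below sweeps an invented parametric description, scores each option with a stand-in performance function (a real project would call structural analysis software here), and ranks the options for the designer. All names and numbers are placeholders, not Foster + Partners tooling.

```python
import itertools

def roof_option(span, depth, segments):
    """Hypothetical parametric description of a truss-like roof option."""
    return {"span": span, "depth": depth, "segments": segments}

def performance(option):
    """Stand-in performance score: favour deeper, more refined options.

    A real workflow would replace this with engineering analysis results."""
    stiffness_proxy = option["depth"] / option["span"]
    refinement = option["segments"] / option["span"]
    return stiffness_proxy + 0.1 * refinement

spans = [20.0, 30.0, 40.0]
depths = [1.5, 2.0, 2.5]
segments = [10, 20]

options = [roof_option(s, d, n) for s, d, n in itertools.product(spans, depths, segments)]
ranked = sorted(options, key=performance, reverse=True)
for opt in ranked[:5]:
    print(opt, round(performance(opt), 3))
```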
5

Harding, John. "Meta-parametric design : developing a computational approach for early stage collaborative practice." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.646138.

Abstract:
Computational design is the study of how programmable computers can be integrated into the process of design. It is not simply the use of pre-compiled computer aided design software that aims to replicate the drawing board, but rather the development of computer algorithms as an integral part of the design process. Programmable machines have begun to challenge traditional modes of thinking in architecture and engineering, placing further emphasis on process ahead of the final result. Just as Darwin and Wallace had to think beyond form and inquire into the development of biological organisms to understand evolution, so computational methods enable us to rethink how we approach the design process itself. The subject is broad and multidisciplinary, with influences from design, computer science, mathematics, biology and engineering. This thesis begins similarly wide in its scope, addressing both the technological aspects of computational design and its application on several case study projects in professional practice. By learning through participant observation in combination with secondary research, it is found that design teams can be most effective at the early stage of projects by engaging with the additional complexity this entails. At this concept stage, computational tools such as parametric models are found to have insufficient flexibility for wide design exploration. In response, an approach called Meta-Parametric Design is proposed, inspired by developments in genetic programming (GP). By moving to a higher level of abstraction as computational designers, a Meta-Parametric approach is able to adapt to changing constraints and requirements whilst maintaining an explicit record of process for collaborative working.
6

Tsoukalas, Kyriakos. "On Affective States in Computational Cognitive Practice through Visual and Musical Modalities." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104069.

Abstract:
Learners' affective states correlate with learning outcomes. A key aspect of instructional design is the choice of modalities by which learners interact with instructional content. The existing literature focuses on quantifying learning outcomes without quantifying learners' affective states during instructional activities. An investigation of how learners feel during instructional activities will inform the instructional systems design methodology of a method for quantifying the effects of individually available modalities on learners' affect. The objective of this dissertation is to investigate the relationship between affective states and learning modalities of instructional computing. During an instructional activity, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in three distinct modalities. The modalities concentrate on visual and musical computing for the practice of computational thinking. An affective model for the practice of computational thinking through musical expression was developed and validated. This dissertation begins with a literature review of relevant theories on embodied cognition, learning, and affective states. It continues with designing and fabricating a prototype instructional apparatus and its virtual simulation as a web service, both for the practice of computational thinking through musical expression, and concludes with a study investigating participants' affective states before and after four distinct online computing activities. This dissertation builds on and contributes to extant literature by validating an affective model for computational thinking practice through self-expression. It also proposes a nomological network for the construct of computational thinking for future exploration of the construct, and develops a method for the assessment of instructional activities based on predefined levels of skill and knowledge.
Doctor of Philosophy
This dissertation investigates the role of learners' affect during instructional activities of visual and musical computing. More specifically, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in four distinct ways. The computing activities are based on a prototype instructional apparatus, which was designed and fabricated for the practice of computational thinking. A study was performed using a virtual simulation accessible via internet browser. The study suggests that maintaining enjoyment during instructional activities is a more direct path to academic motivation than excitement.
7

Hudson, Roland. "Strategies for parametric design in architecture : an application of practice led research." Thesis, University of Bath, 2010. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524059.

Abstract:
A new specialist design role is emerging in the construction industry. The primary task related to this role is focused on the control, development and sharing of geometric information with members of the design team in order to develop a design solution. Individuals engaged in this role can be described as parametric designers. Parametric design involves the exploration of multiple solutions to architectural design problems using parametric models. In the past these models have been defined by computer programs; now, commercially available parametric software provides a simpler means of creating these models. It is anticipated that the role of the parametric designer will spread, and a deeper understanding of the role is required. This thesis is aimed at establishing a detailed understanding of the tasks related to this new specialism and at developing a set of considerations that should be made when undertaking these tasks. The position of the parametric designer in architectural practice presents new opportunities in the design process; this thesis also aims to capture these. Developments in this field of design are driven by practice. It is proposed that a generalised understanding of applied parametric design is primarily developed through the study of practical experience. Two bodies of work inform this study. First, a detailed analytical review of published work that focuses on the application of parametric technology and originates from practice; this material concentrates on the documentation of case studies from a limited number of practices. Second, a series of case studies involving the author as participant and observer in the context of contemporary practice. This primary research into the applied use of parametric tools is documented in detail and generalised findings are extracted. Analysis of the literature from practice and generalisations based on the case studies are contrasted with a review of relevant design theory. Based on this, a series of strategies for the parametric designer are identified and discussed.
8

Van, der Beek Nick. "From practice to theory : computational studies on fluorescence detection and laser therapy in dermatology." Thesis, University of Wales Trinity Saint David, 2017. http://repository.uwtsd.ac.uk/819/.

Abstract:
Computational studies on light-tissue interactions in medical treatment and diagnosis have offered deeper insights into the processes underlying laser treatments and fluorescence measurements. I apply this approach in the study of fluorescence detection and of laser therapy. First, I investigate three methods of fluorescence detection and the reported contrast between healthy skin and malignant tissue. I varied the concentration of haemoglobin in the target, the concentration of melanin in the epidermis, the scattering of light in the skin, the depth at which the target is located in the skin, the width of the target, the thickness of the target, the concentration of photosensitizer in the target, and the concentration of photosensitizer in the skin. My findings confirm previous clinical studies in that the auto-fluorescence-corrected fluorescence detection method generally shows a higher contrast than the other methods. The results support earlier clinical studies and are in accordance with expert experience. Second, I study laser therapy for psoriasis. In a series of simulations, I analyse three types of pulsed dye laser systems and one IPL system. The investigated biological effects are heat shock protein expression, hyperthermic tissue damage and vasoconstriction of the microvasculature. The changes in the skin concern blood volume, blood oxygenation and scattering in the epidermis. The calculations show that there are some notable differences in the effect that changes in the composition of psoriatic tissue have on the efficacy of laser and IPL therapy. Still, inter-device variance was more prominent than intra-geometry variance. My study adds to the understanding of fluorescence detection of keratinocyte skin cancers, as well as that of laser therapy for psoriasis. Additionally, it offers potential avenues for increasing the efficacy and efficiency of these therapies.
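The abstract does not state which damage model was used; hyperthermic tissue damage in laser-therapy simulations is commonly quantified with an Arrhenius-type integral, reproduced below as general background rather than as the author's specific formulation (A, E_a, R and T are the frequency factor, activation energy, gas constant and absolute temperature).

```latex
% Standard Arrhenius thermal damage integral (assumed form, not taken from the thesis):
\Omega(t) = \int_{0}^{t} A \exp\!\left(-\frac{E_a}{R\,T(\tau)}\right)\mathrm{d}\tau ,
\qquad \Omega \ge 1 \ \text{is commonly taken as the threshold of irreversible damage.}
```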
9

Selby, Cynthia Collins. "How can the teaching of programming be used to enhance computational thinking skills?" Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/366256/.

Abstract:
The use of the term computational thinking, introduced in 2006 by Jeannette Wing, is having repercussions in the field of education. The term brings into sharp focus the concept of thinking about problems in a way that can lead to solutions that may be implemented in a computing device. Implementation of these solutions may involve the use of programming languages. This study explores ways in which programming can be employed as a tool to teach computational thinking and problem solving. Data is collected from teachers, academics, and professionals, purposively selected because of their knowledge of the topics of problem solving, computational thinking, or the teaching of programming. This data is analysed following a grounded theory approach. A Computational Thinking Taxonomy is developed. The relationships between cognitive processes, the pedagogy of programming, and the perceived levels of difficulty of computational thinking skills are illustrated by a model. Specifically, a definition for computational thinking is presented. The skills identified are mapped to Bloom's Taxonomy: Cognitive Domain. This mapping concentrates computational skills at the application, analysis, synthesis, and evaluation levels. Analysis of the data indicates that the less difficult computational thinking skills for beginner programmers are generalisation, evaluation, and algorithm design. Abstraction of functionality is less difficult than abstraction of data, but both are perceived as difficult. The most difficult computational thinking skill is reported as decomposition. This ordering of difficulty for learners is a reversal of the cognitive complexity predicted by Bloom's model. The plausibility of this inconsistency is explored. The taxonomy, model, and the other results of this study may be used by educators to focus learning onto the computational thinking skills acquired by the learners, while using programming as a tool. They may also be employed in the design of curriculum subjects, such as ICT, computing, or computer science.
10

Bridge, Catherine. "Computational case-based redesign for people with ability impairment: rethinking, reuse and redesign learning for home modification practice." Connect to full text, 2005. http://hdl.handle.net/2123/707.

Abstract:
Thesis (Ph. D.)--University of Sydney, 2006.
Title from title screen (viewed 30 May 2008). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Architecture, Design Science and Planning, Faculty of Architecture. Degree awarded 2006; thesis submitted 2005. Includes bibliographical references. Also available in print form.
11

Bridge, Catherine Elizabeth. "Computational case-based redesign for people with ability impairment: Rethinking, reuse and redesign learning for home modification practice." Thesis, The University of Sydney, 2005. http://hdl.handle.net/2123/707.

Abstract:
Home modification practice for people with impairments of ability involves redesigning existing residential environments, as distinct from the creation of a new dwelling. A redesigner alters existing structures, fittings and fixtures to better meet the occupant's ability requirements. While research on case-based design reasoning and healthcare informatics is well documented, the reasoning and process of redesign and its integration with individual human functional abilities remain poorly understood. Developing a means of capturing redesign knowledge in the form of case documentation online provides a means for integrating and learning from individual case-based redesign episodes where assessment and interventions are naturally linked. A key aim of the research outlined in this thesis was to gain a better understanding of the redesign of spaces for individual human ability with a view to computational modelling. Consequently, the foundational knowledge underpinning the model development includes design, redesign, case-based building design and human functional ability. Case-based redesign, as proposed within the thesis, is a method for capturing the redesign context, the residential environment, the modification and the transformational knowledge involved in the redesign. Computational simulation methods are traditionally field dependent. Consequently, part of the research undertaken within this thesis involved the development of a framework for analysing cases within an online case-studies library to validate redesign for individuals, and a method of acquiring reuse information so as to be able to estimate the redesign needs of a given population based on either their environment or ability profile. As home modification for people with functional impairments was a novel application field, an explorative action-based methodological approach using computational modelling was needed to underpin a case-based reasoning method. The action-based method involved a process of articulating and examining existing knowledge, suggesting new case-based computational practices, and evaluating the results. This cyclic process led to an improvement cycle that included theory, computational tool development and practical application. The rapid explosion of protocols and online redesign communities that utilise Web technologies meant that a web-based prototype capable of acquiring cases directly from home modification practitioners online and in context was both desirable and achievable. The first online version, in 1998-99, encoded home modification redesigns using static WebPages and hyperlinks. This motivated the full-scale, more dynamic and robust HMMinfo casestudies prototype, whose action-based development is detailed within this thesis. The home modification casestudies library results from the development and integration of a novel case-based redesign model in combination with a Human-Activity-Space computational ontology. These two models are then integrated into a relational database design to enable online case acquisition, browsing, case reuse and redesign learning. The application of the redesign ontology illustrates case reuse and learning, and presents some of the implementation issues and their resolution. Original contributions resulting from this work include: extending case-based design theory to encompass redesign and redesign models, distinguishing the importance of human ability in redesign, and the development of the Human-Activity-Space ontology.
Additionally, all data models were combined and their associated inter-relationships evaluated within a prototype made available to redesign practitioners. Reflective and practitioner-based evaluation contributed an enhanced understanding of redesign case contribution dynamics in an online environment. Feedback from redesign practitioners indicated that gaining informed consent to share cases from consumers of home modification and maintenance services, in combination with the additional time required to document a case online and a reticence to go public for fear of critical feedback, all contributed to lower than expected case library growth. This is despite considerable interest in the HMMinfo casestudies website, as evidenced by web usage statistics. Additionally, the redesign model described in this thesis has practical implications for all design practitioners and educators who seek to create new work by reinterpreting, reconstructing and redesigning spaces.
12

Ranandeh Kalankesh, Leila. "Exploring nature of the structured data in GP electronic patient records." Thesis, University of Manchester, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529222.

13

Ramraj, Varun. "Computational analysis of clinical practice guidelines : development of a software suite and document standard for storage and analysis of care maps." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/28760.

Abstract:
Clinical Practice Guidelines (CPGs) guide optimal utilization of clinical delivery of health care through evidence-based medicine, where care procedures are rigorously evaluated and improved through the examination of evidence. Care mapping is the technique of using flowcharts to graphically capture CPGs as discrete, actionable steps. Health professionals can create and use care maps to expedite and ensure excellence in optimal process workflow in patient care. Analysis of care maps would provide insight into similarities and differences in care procedures. However, quantitative analysis of care maps is difficult to perform manually, and becomes impossible as the set of care maps for comparison increases. Computational methods could be employed to obtain the required quantitative data, but current document standards for developing, sharing and visualizing care maps are not rigorous enough for computational analysis to take place. By using Bioinformatics approaches, we can solve these problems. Firstly, we can develop a standard care map file format for electronic storage. Systems Biology Markup Language (SBML), a document format used to describe biological pathways, can be used to develop the required file format. This method works because care maps are notionally very similar to biological pathways. It allows use of multiple alignment algorithms (traditionally used to align and cluster biological pathways) with these transformed care maps in order to derive quantitative data. This project involved the development of a software suite that is able to generate care maps in the SBML format and align them using an existing global multiple pathway alignment algorithm. It is part of a larger project that examines efficacy of CPGs. This would allow for two important studies to be conducted: a breadth study across multiple EDs and a longitudinal study over time within a single ED to see how it has been able to implement and adapt to the CPGs. By utilizing Bioinformatics approaches in care mapping, two important objectives were realized: the creation of a document standard for care maps, and computational comparison and contrast of CPGs. This opens up the exciting new field of Translational Informatics, which applies existing Bioinformatics concepts to e-Health, e-Medicine and Health Informatics.
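The SBML encoding and the global multiple-alignment algorithm are not reproduced here; as a rough, assumed illustration of why a structured encoding enables quantitative comparison at all, the sketch below represents two hypothetical care maps as sets of directed step-to-step edges and computes a simple Jaccard overlap, a much cruder measure than the pathway alignment the thesis actually uses.

```python
def care_map(steps):
    """Encode a care map as a set of directed edges (step -> next step)."""
    return set(zip(steps, steps[1:]))

def jaccard(edges_a, edges_b):
    """Edge-overlap similarity between two care maps (1.0 = identical flow)."""
    union = edges_a | edges_b
    return len(edges_a & edges_b) / len(union) if union else 1.0

# Invented example care maps for two emergency-department workflows.
triage_a = care_map(["arrival", "triage", "assessment", "imaging", "treatment", "discharge"])
triage_b = care_map(["arrival", "triage", "assessment", "treatment", "discharge"])

print(f"similarity = {jaccard(triage_a, triage_b):.2f}")
```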
14

Adedoyin, Adetokunbo Adelana. "Determination of best practice guidelines for performing large eddy simulation of flows in configurations of engineering interest." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-06222007-140721.

15

Hemström, B., P. Mühlbauer, J. A. Lycklama à Nijeholt, I. Farkas, I. Boros, A. Aszodi, M. Scheuerer, et al. "The European project FLOMIX-R: Fluid mixing and flow distribution in the reactor circuit - Final summary report." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-28619.

Abstract:
The project was aimed at describing the mixing phenomena relevant for both safety analysis, particularly in steam line break and boron dilution scenarios, and mixing phenomena of interest for economic operation and structural integrity. Measurement data from a set of mixing experiments, gained by using advanced measurement techniques with enhanced resolution in time and space, help to improve the basic understanding of turbulent mixing and provide data for Computational Fluid Dynamics (CFD) code validation. Slug mixing tests simulating the start-up of the first main circulation pump are performed with two 1:5 scaled facilities: the Rossendorf coolant mixing model ROCOM and the VATTENFALL test facility, modelling a German Konvoi type and a Westinghouse type three-loop PWR, respectively. Additional data on slug mixing in a VVER-1000 type reactor, gained at a 1:5 scaled metal mock-up at EDO Gidropress, are provided. Experimental results on mixing of fluids with density differences obtained at ROCOM and the FORTUM PTS test facility are made available. Concerning mixing phenomena of interest for operational issues and thermal fatigue, flow distribution data available from commissioning tests (Sizewell-B for PWRs, Loviisa and Paks for VVERs) are used together with the data from the ROCOM facility as a basis for the flow distribution studies. The test matrix on flow distribution and steady state mixing performed at ROCOM comprises experiments with various combinations of running pumps and various mass flow rates in the working loops. Computational fluid dynamics calculations are accomplished for selected experiments with two different CFD codes (CFX-5, FLUENT). Best practice guidelines (BPG) are applied in all CFD work when choosing computational grid, time step, turbulence models, modelling of internal geometry, boundary conditions, numerical schemes and convergence criteria. The BPG contain a set of systematic procedures for quantifying and reducing numerical errors. The knowledge of these numerical errors is a prerequisite for the proper judgement of model errors. The strategy of code validation based on the BPG and a matrix of CFD code validation calculations have been elaborated. Besides the benchmark cases, additional experiments were calculated by new partners and observers joining the project later. Based on the "best practice solutions", conclusions on the applicability of CFD for turbulent mixing problems in PWRs were drawn and recommendations on CFD modelling were given. The high importance of proper grid generation was outlined. In general, second-order discretization schemes should be used to minimise numerical diffusion; first-order schemes can provide physically wrong results. With optimised "production meshes" reasonable results were obtained, but due to the complex geometry of the flow domains, no fully grid-independent solutions were achieved. Therefore, with respect to turbulence models, no final conclusions can be given. However, first-order turbulence models like k-ε or SST k-ω are suitable for momentum-driven slug mixing. For buoyancy-driven mixing (PTS scenarios), Reynolds stress models provide better results.
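To make the remark about discretization schemes concrete, a textbook modified-equation analysis (general background, not taken from the report) shows why first-order schemes smear out mixing fronts: the first-order upwind scheme for the linear advection equation effectively adds an artificial diffusion term proportional to the grid spacing.

```latex
% First-order upwind applied to  u_t + a u_x = 0  (a > 0)  solves, to leading order,
u_t + a\,u_x = \frac{a\,\Delta x}{2}\left(1 - \frac{a\,\Delta t}{\Delta x}\right) u_{xx} + \mathcal{O}(\Delta x^{2}),
% so this numerical diffusion only vanishes as the mesh and time step are refined,
% whereas second-order schemes remove the leading diffusive error term altogether.
```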
16

Kalua, Amos. "Framework for Integrated Multi-Scale CFD Simulations in Architectural Design." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/105013.

Abstract:
An important aspect of the process of architectural design is the testing of solution alternatives in order to evaluate their appropriateness within the context of the design problem. Computational Fluid Dynamics (CFD) analysis is one of the approaches that have gained popularity in the testing of architectural design solutions, especially for evaluating the performance of natural ventilation strategies in buildings. Natural ventilation strategies can reduce the energy consumption in buildings while ensuring the good health and wellbeing of the occupants. In order for natural ventilation strategies to perform as intended, a number of factors interact, and these factors must be carefully analysed. CFD simulations provide an affordable platform for such analyses to be undertaken. Traditionally, these simulations have largely followed the direction of Best Practice Guidelines (BPGs) for quality control. These guidelines are built around certain simplifications due to the high computational cost of CFD modelling. However, while the computational cost has fallen and is predicted to continue to drop, the BPGs have largely remained without significant updates. The need to develop a CFD simulation framework that leverages contemporary, and anticipates future, computational cost and capacity can therefore not be overemphasised. When conducting CFD simulations during the process of architectural design, the variability of the wind flow field, including the wind direction and its velocity, constitutes an important input parameter. Presently, however, in many simulations the wind direction is largely used in a steady-state manner: it is assumed that the direction of flow downwind of a meteorological station remains constant. This assumption may compromise the integrity of CFD modelling as, in reality, the wind flow field is bound to be dynamic from place to place. In order to improve the accuracy of CFD simulations for architectural design, it is therefore necessary to adequately account for this variability. This study was a two-pronged investigation with the ultimate objective of improving the accuracy of the CFD simulations that are used in the architectural design process, particularly for the design and analysis of natural ventilation strategies. Firstly, a framework for integrated meso-scale and building-scale CFD simulations was developed. Secondly, the newly developed framework was implemented by deploying it to study the variability of the wind flow field between a reference meteorological station, the Virginia Tech Airport, and a selected localized building-scale site on the Virginia Tech campus. The findings confirmed that the wind flow field varies from place to place and showed that the newly developed framework was able to capture this variation, ultimately generating a wind flow field characterization representative of the conditions prevalent at the localized building site. This framework can be particularly useful when undertaking de-coupled CFD simulations to design and analyse natural ventilation strategies in the building design process.
Doctor of Philosophy
The use of natural ventilation strategies in building design has been identified as one viable pathway toward minimizing energy consumption in buildings. Natural ventilation can also reduce the prevalence of Sick Building Syndrome (SBS) and enhance the productivity of building occupants. This research study sought to develop a framework that can improve the usage of Computational Fluid Dynamics (CFD) analyses in the architectural design process for purposes of enhancing the efficiency of natural ventilation strategies in buildings. CFD is a branch of computational physics that studies the behaviour of fluids as they move from one point to another. The usage of CFD analyses in architectural design requires the input of wind environment data such as direction and velocity. Presently, this data is obtained from a weather station, and there is an assumption that it remains the same even for a building site located at a considerable distance from the weather station. This potentially compromises the accuracy of the CFD analyses, as studies have shown that, due to a number of factors such as the urban built form, vegetation, terrain and others, the wind environment is bound to vary from one point to another. This study sought to develop a framework that quantifies this variation and provides a way of translating the wind data obtained from a weather station into data that more accurately characterizes a local building site. With this accurate site wind data, the CFD analyses can then provide more meaningful insights into the use of natural ventilation in the process of architectural design. This newly developed framework was deployed on a study site at Virginia Tech. The findings showed that the framework was able to demonstrate that the wind flow field varies from one place to another, and it also provided a way to capture this variation, ultimately generating a wind flow field characterization that was more representative of the local conditions.
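The framework itself couples meso-scale and building-scale CFD; as a much simpler point of reference, the sketch below shows the kind of single-point correction often used instead, carrying a wind speed measured at an airport mast to another height and roughness class with the neutral logarithmic profile. The roughness lengths, heights and speeds are illustrative assumptions, not the study's data.

```python
import math

def log_profile_speed(u_ref, z_ref, z, z0):
    """Wind speed at height z from a reference measurement, neutral log law."""
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)

def transfer_between_sites(u_ref, z_ref, z0_met, z0_site, z_site, z_blend=60.0):
    """Carry the met-station speed up to a blending height over open terrain,
    then back down over the rougher local terrain."""
    u_blend = log_profile_speed(u_ref, z_ref, z_blend, z0_met)
    return u_blend * math.log(z_site / z0_site) / math.log(z_blend / z0_site)

# Illustrative numbers: 10 m anemometer over open terrain (z0 = 0.03 m),
# target at 20 m over a built-up site (z0 = 0.5 m).
u_site = transfer_between_sites(u_ref=5.0, z_ref=10.0, z0_met=0.03, z0_site=0.5, z_site=20.0)
print(f"estimated site wind speed: {u_site:.2f} m/s")
```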
17

Hemström, B., P. Mühlbauer, J. A. Lycklama à Nijeholt, I. Farkas, I. Boros, A. Aszodi, M. Scheuerer, et al. "The European project FLOMIX-R: Fluid mixing and flow distribution in the reactor circuit - Final summary report." Forschungszentrum Rossendorf, 2005. https://hzdr.qucosa.de/id/qucosa%3A21688.

Abstract:
The project was aimed at describing the mixing phenomena relevant for both safety analysis, particularly in steam line break and boron dilution scenarios, and mixing phenomena of interest for economic operation and structural integrity. Measurement data from a set of mixing experiments, gained by using advanced measurement techniques with enhanced resolution in time and space, help to improve the basic understanding of turbulent mixing and provide data for Computational Fluid Dynamics (CFD) code validation. Slug mixing tests simulating the start-up of the first main circulation pump are performed with two 1:5 scaled facilities: the Rossendorf coolant mixing model ROCOM and the VATTENFALL test facility, modelling a German Konvoi type and a Westinghouse type three-loop PWR, respectively. Additional data on slug mixing in a VVER-1000 type reactor, gained at a 1:5 scaled metal mock-up at EDO Gidropress, are provided. Experimental results on mixing of fluids with density differences obtained at ROCOM and the FORTUM PTS test facility are made available. Concerning mixing phenomena of interest for operational issues and thermal fatigue, flow distribution data available from commissioning tests (Sizewell-B for PWRs, Loviisa and Paks for VVERs) are used together with the data from the ROCOM facility as a basis for the flow distribution studies. The test matrix on flow distribution and steady state mixing performed at ROCOM comprises experiments with various combinations of running pumps and various mass flow rates in the working loops. Computational fluid dynamics calculations are accomplished for selected experiments with two different CFD codes (CFX-5, FLUENT). Best practice guidelines (BPG) are applied in all CFD work when choosing computational grid, time step, turbulence models, modelling of internal geometry, boundary conditions, numerical schemes and convergence criteria. The BPG contain a set of systematic procedures for quantifying and reducing numerical errors. The knowledge of these numerical errors is a prerequisite for the proper judgement of model errors. The strategy of code validation based on the BPG and a matrix of CFD code validation calculations have been elaborated. Besides the benchmark cases, additional experiments were calculated by new partners and observers joining the project later. Based on the "best practice solutions", conclusions on the applicability of CFD for turbulent mixing problems in PWRs were drawn and recommendations on CFD modelling were given. The high importance of proper grid generation was outlined. In general, second-order discretization schemes should be used to minimise numerical diffusion; first-order schemes can provide physically wrong results. With optimised "production meshes" reasonable results were obtained, but due to the complex geometry of the flow domains, no fully grid-independent solutions were achieved. Therefore, with respect to turbulence models, no final conclusions can be given. However, first-order turbulence models like k-ε or SST k-ω are suitable for momentum-driven slug mixing. For buoyancy-driven mixing (PTS scenarios), Reynolds stress models provide better results.
18

Kaninda, Tshitwala Lynda. "Analyse des pratiques computationnelles anormes des enseignants du primaire en République Démocratique du Congo : réflexions pour une théorie des pratiques retournées." Electronic Thesis or Diss., Bordeaux 3, 2023. http://www.theses.fr/2023BOR30039.

Abstract:
This research work examines the computational practices of Congolese primary school teachers who have been introduced to computer tools and services as part of the Francophone initiative for distance teacher training (IFADEM). It proposes a discourse that distances itself from any a priori judgement of conformity or non-conformity to the norms of use prescribed by the body of trainers in information and communication technologies for education (ICTE). The aim of this doctoral project is to understand the "reversed" mechanisms, as singular as they are individual and as novel as they are unstable, by which each practitioner reinvents the use of ICTE (abnormal computational usage practices). What are the representational, contextual, pedagogical and transliterative factors by which some of these users act as practitioner-refusers? To answer this question, we carried out a microsociological analysis. Our methodology is based on a research-action-training approach, using both qualitative (interviews and screen tracking) and quantitative (statistical processing of questionnaires) techniques. Contrary to our initial hypotheses, the conclusions we reached are rather surprising.
19

CICCOLELLA, SIMONE. "Practical algorithms for Computational Phylogenetics." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2022. http://hdl.handle.net/10281/364980.

Abstract:
In this manuscript we describe the main computational challenges of the cancer phylogenetics field and propose solutions to its three main problems: (i) reconstructing the progression of a tumor sample, (ii) clustering SCS data to allow a cleaner and faster inference, and (iii) comparing different phylogenies. Furthermore, we combine these solutions into a usable pipeline to allow for a faster analysis.
20

Gruen, John. "Atlas: Practical Computational Literacy for Designers." Research Showcase @ CMU, 2013. http://repository.cmu.edu/theses/49.

Abstract:
"Atlas: Practical Computational Literacy for Designers" is a master's thesis exploring the relationship between design learners and code. In this document, models of learning and motivation serve as a launch point for discussing the goals and fears of designers as they relate to code and programming. Using these models and interviews with design learners, design professionals, computer scientists, and developers, the document isolates three critical choke points for many design learners relating to code: lack of immediacy and feedback, lack of orientation, and a priori fear of failure. It goes on to explain how only one of these, lack of immediacy, can be fully compensated for within code-learning contexts, and suggests that the other two originate in contradictions in design learners' attitudes about code's evolving relationship to design practice. The document concludes by introducing Atlas, a system to help designers model real-world computational systems and better articulate their goals and interests relating to these systems.
21

Poon, Phillip K. "Practical Considerations In Experimental Computational Sensing." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/623022.

Abstract:
Computational sensing has demonstrated the ability to ameliorate or eliminate many trade-offs in traditional sensors. Rather than attempting to form a perfect image, sampling at the Nyquist rate, and reconstructing the signal of interest prior to post-processing, the computational sensor attempts to utilize a priori knowledge and active or passive coding of the signal of interest, combined with a variety of algorithms, to overcome the trade-offs or to improve various task-specific metrics. While it is a powerful approach to radically new sensor architectures, published research tends to focus on architecture concepts and positive results. Little attention is given to the practical issues faced when implementing computational sensing prototypes. I will discuss the various practical challenges that I encountered while developing three separate applications of computational sensors. The first is a compressive-sensing-based object tracking camera, the SCOUT, which exploits the sparsity of motion between consecutive frames while using no moving parts to create a pseudo-random shift-variant point-spread function. The second is a spectral imaging camera, the AFSSI-C, which uses a modified version of Principal Component Analysis with a Bayesian strategy to adaptively design spectral filters for direct spectral classification using a digital micro-mirror device (DMD) based architecture. The third demonstrates two separate architectures to perform spectral unmixing, using either an adaptive algorithm or a hybrid technique combining Maximum Noise Fraction with random filter selection, on a liquid-crystal-on-silicon based computational spectral imager, the LCSI. All of these applications illustrate a variety of challenges that have been addressed or continue to challenge the computational sensing community. One issue is calibration, since many computational sensors require an inversion step and, in the case of compressive sensing, lack redundancy in the measurement data. Another issue is over-multiplexing: as more light is collected per sample, the finite dynamic range and quantization resolution can begin to degrade the recovery of the relevant information. A priori knowledge of the sparsity and/or other statistics of the signal or noise is often used by computational sensors to outperform their isomorphic counterparts. This is demonstrated in all three of the sensors I have developed. These challenges and others will be discussed using a case-study approach through these three applications.
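None of the three instruments' forward models appear in the abstract; the snippet below is only a generic compressive-sensing toy (random Gaussian measurement matrix, ISTA with soft thresholding) illustrating the measure-then-reconstruct pattern referred to, and why calibration of the measurement matrix A matters, since reconstruction uses A explicitly. All sizes and the regularisation weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground-truth signal: n samples, k nonzeros.
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Measurement: y = A x + noise. In a real instrument A comes from calibration.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam=0.02, iters=500):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("support recovered:", np.nonzero(np.abs(x_hat) > 0.05)[0])
print("true support:     ", np.sort(np.nonzero(x_true)[0]))
```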
22

Aghaei, Meibodi Mania. "Generative Design Exploration : Computation and Material Practice." Doctoral thesis, KTH, Arkitekturteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-182194.

Abstract:
Today, computation serves as an important intermediary agent for the integration of analyses and the constraints of materialisation into design processes. Research efforts in the field have emphasised digital continuity and conformity between different aspects of a building project. Such an approach can limit the potential for significant discoveries, because the expression of architectural form is reduced to the varying tones of one fabrication technique and simulation at a time. This dissertation argues that disparate sets of digital and physical models are needed to incorporate multiple constraints into the exploration, and that the way the designer links them to one another significantly impacts the potential for arriving at significant discoveries. Discoveries are made in the moment of bridging between models, representational mediums, and affiliated processes. This dissertation examines the capacity of algorithm—as a basis for computation—to diversify and expand the design exploration by enabling the designer to link disparate models and different representational mediums. It is developed around a series of design experiments that question how computation and digital fabrication can be used to diversify design ideation, foster significant discoveries, and at the same time increase flexibility for the designer’s operation in the design process. The experiments reveal the interdependence of the mediums of design—algorithm, geometry, and material—and the designer’s mode of operation. They show that each medium provides the designer with a particular way of incorporating constraints into the exploration. From the way the designer treats these mediums and the design process, two types of exploration are identified: goal oriented and open-ended. In the former, the exploration model is shaped by the designer’s objective to reach a specified goal through the selection of mediums, models, and tools. In the latter, the design process itself informs the designer’s intention. From the kinds of interdependencies that are created between mediums in each experiment, three main exploration models emerge: circular and uniform, branched and incremental, and parallel and bidirectional. Finally, this dissertation argues that the theoretical case for integral computational design and fabrication must be revised to go beyond merely applying established computational processes to encompass the designer and several design mediums. The new model of design exploration is a cooperation between algorithm, geometry, materials, tools, and the designer. For the exploration to be novel, the designer must play a significant role by choosing one medium over another when formulating the design problem and establishing design drivers from the set of constraints, by linking the design mediums, by translating between design representations, and by describing the key aspects of the exploration in terms of algorithms.


23

Yoder, Theodore J. "Practical fault-tolerant quantum computation." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115680.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 190-201).
For the past two and a half decades, a subset of the physics community has been focused on building a new type of computer, one that exploits the superposition, interference, and entanglement of quantum states to compute faster than a classical computer on select tasks. Manipulating quantum systems requires great care, however, as they are quite sensitive to many sources of noise. Surpassing the limits of hardware fabrication and control, quantum error-correcting codes can reduce error-rates to arbitrarily low levels, albeit with some overhead. This thesis takes another look at several aspects of stabilizer code quantum error-correction to discover solutions to the practical problems of choosing a code, using it to correct errors, and performing fault-tolerant operations. Our first result looks at limitations on the simplest implementation of fault-tolerant operations, transversality. By defining a new property of stabilizer codes, the disjointness, we find transversal operations on stabilizer codes are limited to the Clifford hierarchy and thus are not universal for computation. Next, we address these limitations by designing non-transversal fault-tolerant operations that can be used to universally compute on some codes. The key idea in our constructions is that error-correction is performed at various points partway through the non-transversal operation (even at points when the code is not-necessarily still a stabilizer code) to catch errors before they spread. Since the operation is thus divided into pieces, we dub this pieceable fault-tolerance. In applying pieceable fault tolerance to the Bacon-Shor family of codes, we find an interesting tradeoff between space and time, where a fault-tolerant controlled-controlled-Z operation takes less time as the code becomes more asymmetric, eventually becoming transversal. Further, with a novel error-correction procedure designed to preserve the coherence of errors, we design a reasonably practical implementation of the controlled-controlled-Z operation on the smallest Bacon-Shor code. Our last contribution is a new family of topological quantum codes, the triangle codes, which operate within the limits of a 2-dimensional plane. These codes can perform all encoded Clifford operations within the plane. Moreover, we describe how to do the same for the popular family of surface codes, by relation to the triangle codes.
by Theodore J. Yoder.
Ph. D.
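As background for readers unfamiliar with stabilizer error correction, the sketch below simulates the textbook three-qubit bit-flip repetition code (not one of the codes studied in the thesis): the parities corresponding to the stabilizer generators Z1Z2 and Z2Z3 form a syndrome that pinpoints any single bit-flip error.

```python
def encode(bit):
    """Repetition encoding |0> -> 000, |1> -> 111 (bit-flip code, classical view)."""
    return [bit, bit, bit]

def syndrome(codeword):
    """Parities corresponding to the stabilizer generators Z1Z2 and Z2Z3."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

# Map each single-qubit flip to its syndrome, then decode by lookup.
lookup = {}
for q in range(3):
    word = encode(0)
    word[q] ^= 1
    lookup[syndrome(word)] = q
lookup[(0, 0)] = None  # trivial syndrome: no correction needed

def correct(codeword):
    q = lookup[syndrome(codeword)]
    if q is not None:
        codeword[q] ^= 1
    return codeword

for q in range(3):
    noisy = encode(1)
    noisy[q] ^= 1                      # single bit-flip error on qubit q
    print(f"error on qubit {q}: syndrome {syndrome(noisy)} -> corrected {correct(noisy)}")
```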
24

Cuffel, Terry. "LINKING PLACE VALUE CONCEPTS WITH COMPUTATIONAL PRACTICES IN THIRD GRADE." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3374.

Abstract:
In an attempt to examine student understanding of place value with third graders, I conducted action research with a small group of girls to determine if my use of instructional strategies would encourage the development of conceptual understanding of place value. Strategies that have been found to encourage conceptual development of place value, such as use of the candy factory, were incorporated into my instruction. Instructional strategies were adjusted as the study progressed to meet the needs of the students and the development of their understanding of place value. Student explanations of their use of strategies contributed to my interpretation of their understanding. Additionally, I examined the strategies that the students chose to use when adding or subtracting multidigit numbers. Student understanding was demonstrated through group discussion and written and oral explanations. My observations, anecdotal records and audio recordings allowed me to further analyze student understanding. The results of my research seem to corroborate previous research studies that emphasize the difficulty that many students have in understanding place value at the conceptual level.
M.Ed.
Department of Teaching and Learning Principles
Education
K-8 Math and Science MEd
25

Carlsson, Jimmy. "A practical approach toward architectures for open computational systems." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5821.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
By means of a systemic approach toward analysis and design of complex systems, we introduce the issue of implementing open computational systems on service-oriented architectures. We start by studying general systems theory, as it accounts for analysis and modeling of complex systems, and then compare three different strategies for system implementation. As such, the comparison is grounded in the notion of supporting architectures and, more specifically, in the practical case of a service-oriented layered architecture for communicating entities (SOLACE).
More material can be found on http://www.soclab.bth.se
26

Phinjaroenphan, Panu, and s2118294@student.rmit.edu.au. "An Efficient, Practical, Portable Mapping Technique on Computational Grids." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080516.145808.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Grid computing provides a powerful, virtual parallel system known as a computational Grid on which users can run parallel applications to solve problems quickly. However, users must be careful to allocate tasks to nodes properly because improper allocation of only one task could result in lengthy executions of applications, or even worse, applications could crash. This allocation problem is called the mapping problem, and an entity that tackles this problem is called a mapper. In this thesis, we aim to develop an efficient, practical, portable mapper. To study the mapping problem, researchers often make unrealistic assumptions such as that nodes of Grids are always reliable, that execution times of tasks assigned to nodes are known a priori, or that detailed information of parallel applications is always known. As a result, the practicality and portability of mappers developed in such conditions are uncertain. Our review of related work suggested that a more efficient tool is required to study this problem; therefore, we developed GMap, a simulator researchers/developers can use to develop practical, portable mappers. The fact that nodes are not always reliable leads to the development of an algorithm for predicting the reliability of nodes and a predictor for identifying reliable nodes of Grids. Experimental results showed that the predictor reduced the chance of failures in executions of applications by half. The facts that execution times of tasks assigned to nodes are not known a priori and that detailed information of parallel applications is not always known lead to the evaluation of five nearest-neighbour (nn) execution time estimators: k-nn smoothing, k-nn, adaptive k-nn, one-nn, and adaptive one-nn. Experimental results showed that adaptive k-nn was the most efficient one. We also implemented the predictor and the estimator in GMap. Using GMap, we could reliably compare the efficiency of six mapping algorithms: Min-min, Max-min, Genetic Algorithms, Simulated Annealing, Tabu Search, and Quick-quality Map, with none of the preceding unrealistic assumptions. Experimental results showed that Quick-quality Map was the most efficient one. As a result of these findings, we achieved our goal in developing an efficient, practical, portable mapper.
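As a hedged illustration of the execution-time estimators evaluated in this entry, the sketch below implements a plain k-nearest-neighbour estimator in Python; the feature set (task size and node load), the sample observations, and the fixed k are assumptions made here for illustration, not details of GMap itself.

# Minimal k-nearest-neighbour execution-time estimator (illustrative sketch).
# Each observation is (features, runtime_seconds); the features used here,
# task size and node load, are assumptions, not the thesis's feature set.
import math

def knn_estimate(history, query, k=3):
    """Predict a runtime as the mean of the k most similar past observations."""
    if not history:
        raise ValueError("no past observations to estimate from")
    dists = []
    for features, runtime in history:
        d = math.dist(features, query)      # Euclidean distance in feature space
        dists.append((d, runtime))
    dists.sort(key=lambda pair: pair[0])
    nearest = dists[:min(k, len(dists))]
    return sum(r for _, r in nearest) / len(nearest)

# Example: (task_size_units, node_load) -> observed runtime in seconds.
history = [((100, 0.2), 12.0), ((200, 0.1), 20.5), ((150, 0.8), 30.0), ((120, 0.3), 15.0)]
print(knn_estimate(history, (130, 0.25), k=3))

The adaptive variants named in the abstract would additionally tune k from the history (for example by cross-validation) rather than fixing it in advance.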
27

Seah, Tsu Tham Tommy. "Exploring historical Russian pianism in Sergei Lyapunov’s Twelve Transcendental Études, Op. 11: The development of a performance edition." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2018. https://ro.ecu.edu.au/theses/2131.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The genre of the Étude emerged with the increased popularity of the piano in the nineteenth century. It was often composed to provide practice material for perfecting a particular performance technique. Frederic Chopin (1810 – 1849) and Franz Liszt (1811 – 1886) are among the earliest composers whose Études are well established in the canon of concert piano repertory. Later there emerged a new genre of Étude considered worthy of the concert platform: the Concert Étude. Born in 1859 in Yaroslavl, Russia, Sergei Mikhailovich Lyapunov (1859 – 1924) was a highly regarded composer, pianist, conductor, editor, and teacher of his time. Lyapunov's 'Douze Études d'Execution Transcendantes', Op. 11, was composed between 1897 and 1905 in memory of Franz Liszt, and is considered to be one of Lyapunov's most significant compositional achievements. The influence of Liszt and of composers of the Russian Mighty Five is evident in Op. 11. Overshadowed by his younger contemporaries who experimented with new styles and techniques, Lyapunov and his music are largely forgotten today. This research project explores the performance practice of three Russian pianists – with a focus on pianists associated with the Moscow Conservatory at the turn of the twentieth century – with an aim to quantify aspects of style of Russian pianism of the time. The methodology employed in this research includes both qualitative discourse of literary sources, and a series of systematic computational tempo analyses of reproducing piano roll recordings by pianists that fit the research criteria. Results revealed a number of stylistic aspects such as unnotated arpeggiation, notes inégales, and other aspects of rubato. These aspects of style, alongside historiographical findings, are corroborated, reconsidered through reflexive practice, and finally compiled in a performance edition of Lyapunov's 'Douze Études d'Execution Transcendantes', Op. 11.
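To make the kind of computational tempo analysis mentioned above concrete, here is a minimal Python sketch that turns a list of beat-onset times (as might be extracted from a reproducing piano roll) into a local tempo curve; the onset times are invented, and the thesis's actual analysis pipeline is certainly richer than this.

# Sketch: derive a local tempo curve (beats per minute) from successive
# beat-onset times, one simple building block of tempo analysis.
# The onset times below are invented for illustration.

def tempo_curve(beat_onsets_seconds):
    """Return a list of local tempi, one per inter-onset interval."""
    tempi = []
    for earlier, later in zip(beat_onsets_seconds, beat_onsets_seconds[1:]):
        ioi = later - earlier              # inter-onset interval in seconds
        tempi.append(60.0 / ioi)           # one beat per IOI -> BPM
    return tempi

onsets = [0.00, 0.52, 1.01, 1.55, 2.15, 2.70]   # hypothetical beat times
print([round(t, 1) for t in tempo_curve(onsets)])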
28

Thaler, Justin R. "Practical Verified Computation with Streaming Interactive Proofs." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11086.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
As the cloud computing paradigm has gained prominence, the need for verifiable computation has grown urgent. Protocols for verifiable computation enable a weak client to outsource difficult computations to a powerful, but untrusted, server. These protocols provide the client with a (probabilistic) guarantee that the server performed the requested computations correctly, without requiring the client to perform the computations herself.
Engineering and Applied Sciences
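As a self-contained taste of probabilistic verification of outsourced computation (not the streaming interactive proofs developed in this thesis), the sketch below shows Freivalds' check of a claimed matrix product: the client verifies in roughly O(n^2) work per round instead of redoing the O(n^3) multiplication.

# Freivalds' algorithm: probabilistically verify a claimed matrix product
# C = A @ B without recomputing it. Illustrative only; this is not the
# streaming interactive proof machinery of the thesis.
import random

def freivalds_check(A, B, C, rounds=10):
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]          # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False            # definitely wrong
    return True                     # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(freivalds_check(A, B, C_good), freivalds_check(A, B, C_bad))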
29

Sakellariou, C. "A practical hardware implementation of systemic computation." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1416837/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
It is widely accepted that natural computation, such as brain computation, is far superior to typical computational approaches addressing tasks such as learning and parallel processing. As conventional silicon-based technologies are about to reach their physical limits, researchers have drawn inspiration from nature to found new computational paradigms. Such a newly-conceived paradigm is Systemic Computation (SC). SC is a bio-inspired model of computation. It incorporates natural characteristics and defines a massively parallel non-von Neumann computer architecture that can model natural systems efficiently. This thesis investigates the viability and utility of a Systemic Computation hardware implementation, since prior software-based approaches have proved inadequate in terms of performance and flexibility. This is achieved by addressing three main research challenges regarding the level of support for the natural properties of SC, the design of its implied architecture and methods to make the implementation practical and efficient. Various hardware-based approaches to Natural Computation are reviewed and their compatibility and suitability, with respect to the SC paradigm, is investigated. FPGAs are identified as the most appropriate implementation platform through critical evaluation and the first prototype Hardware Architecture of Systemic computation (HAoS) is presented. HAoS is a novel custom digital design, which takes advantage of the inbuilt parallelism of an FPGA and the highly efficient matching capability of a Ternary Content Addressable Memory. It provides basic processing capabilities in order to minimize time-demanding data transfers, while the optional use of a CPU provides high-level processing support. It is optimized and extended to a practical hardware platform accompanied by a software framework to provide an efficient SC programming solution. The suggested platform is evaluated using three bio-inspired models and analysis shows that it satisfies the research challenges and provides an effective solution in terms of efficiency versus flexibility trade-off.
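The matching step that HAoS delegates to a Ternary Content Addressable Memory can be sketched in software as follows; the patterns are invented, and a real TCAM performs all comparisons in parallel in a single cycle, which is exactly the advantage the hardware design exploits.

# Software sketch of TCAM-style ternary matching: every stored pattern is a
# string over {'0', '1', '#'}, where '#' matches either bit.

def ternary_match(pattern, word):
    return len(pattern) == len(word) and all(
        p == '#' or p == w for p, w in zip(pattern, word))

def lookup(tcam_patterns, word):
    """Return indices of all stored patterns that match the word."""
    return [i for i, pat in enumerate(tcam_patterns) if ternary_match(pat, word)]

patterns = ["1#0#", "00##", "1100"]       # hypothetical systemic-context schemata
print(lookup(patterns, "1101"))            # -> [0]
print(lookup(patterns, "0011"))            # -> [1]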
30

Nordahl, Emily Rose. "Best Practices in Computational Fluid Dynamics Modeling of Cerebral Aneurysms using ANSYS CFX." Thesis, North Dakota State University, 2015. https://hdl.handle.net/10365/27810.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Today many researchers are looking toward computational fluid dynamics (CFD) as a tool that can help doctors understand and predict the severity of aneurysms, but there has yet to be any conclusive proof of the accuracy or the ease of implementation of this CFD analysis. To help solve this issue, CFD simulations were conducted to compare different setup practices in order to find the most accurate and computationally efficient setup. These simulation comparisons were applied over two CFD group challenges from the CFD community whose goal was not only to assess modeling accuracy, but also to analyse clinical use and the hemodynamics of rupture. The methodology compared included mesh style and refinement, timestep size, steady versus unsteady flow, flow rate amplitude, inlet flow profile conditions, and outlet boundary conditions. The "Best Practice" setup gave good overall results compared with other challenge participants' results and in-vitro data.
31

Erdmann, Alexander. "Practical Morphological Modeling: Insights from Dialectal Arabic." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1598006284544079.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Gómez-Mateu, Moisés. "Composite endpoints in clinical trials : computational tools, practical guidelines and methodological extensions." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/396263.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The conclusions from randomized clinical trials (RCT) rely on the primary endpoint (PE), which is chosen at the design stage of the study; thus, it is of utmost importance to select it appropriately. In RCT, there should generally be only one PE, and it should be able to provide the most clinically relevant and scientific evidence regarding the potential efficacy of the new treatment. Composite endpoints (CE) consist of the union of two or more outcomes and are often used in RCT. When the focus is time-to-event analysis, CE refer to the elapse time from randomization until the first component of the CE. In oncology trials, for instance, progression-free survival is defined as the time to disease progression or death. The decision on whether to use a CE versus a single component as the PE is controversial. The advantages and drawbacks regarding the use of CE have been extensively discussed in the literature. Gómez and Lagakos develop a statistical methodology to evaluate the convenience of using a relevant endpoint RE versus a CE consisting of the union of the RE plus another additional endpoint (AE). Their strategy is based on the value of the asymptotic relative efficiency (ARE), which relates the efficiency of using the logrank test based on the RE versus the efficiency based on the CE. The ARE is expressed as a function of the marginal laws of the time to each component RE and AE, the probabilities of observing each component in the control group, the hazard ratios measured by each component of the CE between the two treatment groups, and the correlation between components. This thesis explores, elaborates on, implements and applies the ARE method. We have also developed a new online platform named CompARE that facilitates the practical use of this method. The ARE method has been applied to cardiovascular studies. We have made further progress into the theoretical meaning of the ARE and have explored how to handle the probability and the hazard ratio of a combination of endpoints. In cardiovascular trials, it is common to use CE. We systematically examine the use of CE in this field by means of a literature search and the discussion of several case studies. Based on the ARE methodology, we provide guidelines for the informed choice of the PE. We prove that the usual interpretation of the ARE as the ratio of sample sizes holds and that it can be applied to evaluate the efficiency of the RE versus the CE. Furthermore, we carry out a simulation study to empirically check the proximity between the ratio of finite sample sizes and the ARE. We discuss how to derive the probabilities and hazard ratios when they come from a combination of several components. Furthermore, it is shown that the combined hazard ratio (HR*) is, in general, not constant over time, even if the hazard ratio of the marginal components are. This non-constant behaviour might have a strong influence on the interpretation of treatment effect and on sample size assessment. We evaluate the behaviour of the HR* in respect to the marginal parameters, and we study its departure from constancy, depending on different scenarios. This thesis has implemented the ARE methodology on the online platform CompARE. Clinicians and biostatisticians can use CompARE to study the performance of different endpoints in a variety of scenarios. CompARE has an intuitive interface and it is a convenient tool for better informed decisions regarding the PE. Results from different parameter settings are shown immediately by means of tables and plots. 
CompARE is extended to quantify specific values for the combined probability and hazard ratios. When the user cannot anticipate some of the needed parameters, CompARE provides a range of plausible values. Moreover, the departure from constancy of a combined hazard ratio can be explored by visualizing its shape over time. Sample size computations are implemented as well.
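The claim that the combined hazard ratio HR* is generally not constant over time can be illustrated numerically. The sketch below assumes two independent Weibull components, each with its own constant hazard ratio between arms; independence and the particular parameter values are assumptions made only for this illustration (the ARE framework itself also covers dependent components via copulas).

# Numerical sketch: with two independent Weibull components, each having a
# constant (proportional) hazard ratio between arms, the hazard ratio of the
# composite "time to first event" still drifts over time whenever the shape
# parameters differ. Parameter values below are invented for illustration.

def weibull_hazard(t, lam, beta):
    # hazard of the survival function S(t) = exp(-lam * t**beta)
    return lam * beta * t ** (beta - 1)

def composite_hazard_ratio(t, lam1, beta1, hr1, lam2, beta2, hr2):
    h_control = weibull_hazard(t, lam1, beta1) + weibull_hazard(t, lam2, beta2)
    h_treat = hr1 * weibull_hazard(t, lam1, beta1) + hr2 * weibull_hazard(t, lam2, beta2)
    return h_treat / h_control

# Component 1: early-occurring endpoint with a strong effect; component 2: later, weaker.
for t in [0.25, 0.5, 1.0, 2.0, 4.0]:
    hr_star = composite_hazard_ratio(t, lam1=0.30, beta1=0.8, hr1=0.60,
                                     lam2=0.05, beta2=2.0, hr2=0.90)
    print(f"t = {t:4.2f}  HR*(t) = {hr_star:.3f}")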
33

Atkinson, Katie Marie. "What should we do? : computational representation of persuasive argument in practical reasoning." Thesis, University of Liverpool, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426134.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Raja, Sahdia Tabassum. "Integrating practical and computational approaches to understand morphogenesis of the vertebrate limb." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/30666.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Optimisation of the experimental technique (BrdU-IddU double-staining) and development of new computational tools have allowed, for the first time, a comprehensive spatio-temporal map of quantitative cell cycle times in the early vertebrate limb. A key question of limb morphogenesis is how genes create the digit pattern. An example of such a gene is Sox9, which is an early marker of chondrogenesis and is, therefore, assumed to follow a pattern similar to early stages of digit patterning. Classical chondrogenic experiments suggest digital regions are patterned by the intermediate formation of a "digital arch" from which the digits arise in a posterior to anterior order. In contrast, a thorough analysis of a large number of Sox9 in situs revealed that digital regions 1, 2 and 3 branch from a region reminiscent of the tibia (anterior zeugopod) and digital regions 4 and 5 branch from a fibula-like region (posterior zeugopod). Moreover, the Sox9 pattern first arises in digital regions 2, 3 and 4, followed by digital regions 5 and 1. The Sox9 in situ analysis was achieved using newly developed software for the 3D analysis of optical projection tomographic (OPT) images at a very high spatial resolution. These studies have highlighted the importance of integrating practical and computational tools in order to close the gaps in our knowledge and understanding of limb development, and developmental processes as a whole. The computational tools generated for the proliferation studies are valuable in offering a thorough means of analysis of cell cycle times, and the new OPT software will be invaluable for the study of both weak and strong gene expression patterns in whole embryos. In the future, the proliferation data and 3D Sox9 in situ data can be incorporated into simulation software, the results of which should shed light upon the interactive effects of different factors upon the process of limb morphogenesis.
35

Weinberg, Zasha. "Accurate annotation of non-coding RNAs in practical time /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6893.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Affenzeller, Michael. "Population genetics and evolutionary computation : theoretical and practical aspects /." Linz : Trauner, 2005. http://www.gbv.de/dms/ilmenau/toc/490631479affen.PDF.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Li, Lin 1971. "A practical MHP information computation for concurrent Java programs /." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84096.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
With the development of multi-processors, multi-threaded programs and programming languages have become more and more popular. This requires extending the scope of program analysis and compiler optimization from traditional, sequential programs to concurrent programs.
Naumovich et al. proposed May Happen in Parallel (MHP) analysis that determines which program statements may be executed concurrently. From this information, compiler optimization improvements, as well as analysis data on potential program problems such as dataraces can be analyzed or discovered.
Unfortunately, MHP analysis has some limitations with respect to practical use. In this thesis we present an implementation of MHP analysis for Java that attempts to address some of the practical implementation concerns of the original work. We describe a design that incorporates techniques for aiding a feasible implementation and expanding the range of acceptable inputs.
The MHP analysis requires a particular internal representation in order to run. By using a combination of techniques, we are able to compact that representation, and thus significantly improve MHP execution time without affecting accuracy. We also provide experimental results showing the utility and impact of our approach and optimizations using a variety of concurrent benchmarks. The results show that our optimizations are effective, and allow more and larger benchmarks to be analyzed. For some benchmarks, our optimizations have very impressive results, speeding up MHP analysis by several orders of magnitude.
38

Widing, Björn, and Jimmy Jansson. "Valuation Practices of IFRS 17." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-224211.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This research assesses the IFRS 17 Insurance Contracts standard from a mathematical and actuarial point of view. Specifically, a valuation model that complies with the standard is developed in order to investigate implications of the standard on financial statements of insurance companies. This includes a deep insight into the standard, construction of a valuation model of a fictitious traditional life insurance product, and an investigation of the outcomes of the model. The findings show, first, that an investment strategy favorable for valuing insurance contracts according to the standard may conflict with the firm's Asset & Liability Management; second, that a low risk adjustment increases the contractual service margin (CSM) and hence the possibility of smoothing profits over time; and third, that the policy for releasing the CSM should take both risk-neutral and real-world assumptions into account.
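A toy calculation makes the second finding concrete: at initial recognition the CSM absorbs the expected gain on the contract, so a smaller risk adjustment leaves a larger CSM. All figures below are invented, and the mechanics are heavily simplified relative to both IFRS 17 and the thesis's model.

# Toy illustration of why a lower risk adjustment gives a larger contractual
# service margin (CSM) at initial recognition. Figures are invented.

def present_value(cash_flows, rate):
    # cash_flows[t] is the net outflow (claims + expenses - premiums) in year t+1
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def csm_at_initial_recognition(net_outflows, rate, risk_adjustment):
    fulfilment_cash_flows = present_value(net_outflows, rate) + risk_adjustment
    # A profitable contract has negative fulfilment cash flows; the CSM absorbs
    # that gain so no profit is recognised on day one. If the CSM would be
    # negative, the contract is onerous and the loss is taken immediately.
    return max(0.0, -fulfilment_cash_flows)

net_outflows = [-200.0, 50.0, 50.0, 45.0]     # premiums received first, claims later
for ra in (5.0, 10.0, 20.0):
    csm = csm_at_initial_recognition(net_outflows, rate=0.02, risk_adjustment=ra)
    print(f"risk adjustment {ra:5.1f} -> CSM {csm:6.2f}")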
39

Alperin-Sheriff, Jacob. "Towards practical fully homomorphic encryption." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53951.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Fully homomorphic encryption (FHE) allows for computation of arbitrary func- tions on encrypted data by a third party, while keeping the contents of the encrypted data secure. This area of research has exploded in recent years following Gentry’s seminal work. However, the early realizations of FHE, while very interesting from a theoretical and proof-of-concept perspective, are unfortunately far too inefficient to provide any use in practice. The bootstrapping step is the main bottleneck in current FHE schemes. This step refreshes the noise level present in the ciphertexts by homomorphically evaluating the scheme’s decryption function over encryptions of the secret key. Bootstrapping is necessary in all known FHE schemes in order to allow an unlimited amount of computation, as without bootstrapping, the noise in the ciphertexts eventually grows to a point where decryption is no longer guaranteed to be correct. In this work, we present two new bootstrapping algorithms for FHE schemes. The first works on packed ciphertexts, which encrypt many bits at a time, while the second works on unpacked ciphertexts, which encrypt a single bit at a time. Our algorithms lie at the heart of the fastest currently existing implementations of fully homomorphic encryption for packed ciphertexts and for single-bit encryptions, respectively, running hundreds of times as fast for practical parameters as the previous best implementations.
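The noise growth that motivates bootstrapping can be seen in a toy LWE-style symmetric scheme that is only additively homomorphic; this is purely illustrative and is neither Gentry's construction nor any scheme from the thesis.

# Toy LWE-style symmetric encryption, additively homomorphic only, used here
# solely to show noise growth: each homomorphic addition adds the ciphertexts'
# noise terms, and once the accumulated noise exceeds q/4 decryption fails.
import random

q, n, fresh_noise = 2 ** 10, 16, 8
secret = [random.randint(0, 1) for _ in range(n)]

def encrypt(bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-fresh_noise, fresh_noise)
    b = (sum(ai * si for ai, si in zip(a, secret)) + e + bit * (q // 2)) % q
    return a, b

def add(c1, c2):                       # homomorphic XOR of the plaintext bits
    return [(x + y) % q for x, y in zip(c1[0], c2[0])], (c1[1] + c2[1]) % q

def decrypt(c):
    a, b = c
    v = (b - sum(ai * si for ai, si in zip(a, secret))) % q
    # decode to whichever of 0 and q/2 the noisy value is closer to
    return 1 if q // 4 <= v < 3 * q // 4 else 0

acc, plain = encrypt(0), 0
for i in range(1, 20001):
    bit = random.randint(0, 1)
    acc, plain = add(acc, encrypt(bit)), plain ^ bit
    if decrypt(acc) != plain:
        print(f"noise overwhelmed the ciphertext after {i} additions")
        break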
40

Spengler, Ryan Michael. "Mechanisms Of MicroRNA evolution, regulation and function: computational insight, biological evaluation and practical application." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2636.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
MicroRNAs (miRNAs) are an abundant and diverse class of small, non-protein coding RNAs that guide the post-transcriptional repression of messenger RNA (mRNA) targets in a sequence-specific manner. Hundreds, if not thousands of distinct miRNA sequences have been described, each of which has the potential to regulate a large number of mRNAs. Over the last decade, miRNAs have been ascribed roles in nearly all biological processes in which they have been tested. More recently, interest has grown in understanding how individual miRNAs evolved, and how they are regulated. In this work, we demonstrate that Transposable Elements are a source for novel miRNA genes and miRNA target sites. We find that primate-specific miRNA binding sites were gained through the transposition of Alu elements. We also find that remnants of Mammalian Interspersed Repeat transposition, which occurred early in mammalian evolution, provide highly conserved functional miRNA binding sites in the human genome. We also provide data to support that long non-coding RNAs (lncRNAs) can provide a novel miRNA binding substrate which, rather than inhibiting the miRNA target, inhibits the miRNA. As such, lncRNAs are proposed to function as endogenous miRNA "sponges," competing for miRNA binding and reducing miRNA-mediated repression of protein-coding mRNA targets. We also explored how dynamic changes to miRNA binding sites can occur by A-to-I editing of the 3 `UTRs of mRNA targets. These works, together with knowledge gained from the regulatory activity of endogenous and exogenously added miRNAs, provided a platform for algorithm development that can be used in the rational design of artificial RNAi triggers with improved target specificity. The cumulative results from our studies identify and in some cases clarify important mechanisms for the emergence of miRNAs and miRNA binding sites on large (over eons) and small (developmental) time scales, and help in translating these gene silencing processes into practical application.
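One elementary building block behind miRNA target analysis is scanning a 3' UTR for seed matches, i.e. for the reverse complement of miRNA bases 2-8. The sketch below does exactly that; the UTR fragment is invented, the miRNA is a let-7-like sequence used only as an example, and real target prediction, including the algorithms discussed in this dissertation, weighs far more than seed complementarity.

# Sketch: find canonical 7-mer seed matches for a miRNA in a 3' UTR.
# The target site is the reverse complement of miRNA bases 2-8 (the "seed").
# Sequences below are illustrative.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def seed_sites(mirna, utr):
    """Return 0-based UTR positions matching the miRNA seed (bases 2-8)."""
    site = reverse_complement(mirna[1:8])          # 7-mer target sequence
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"                   # let-7-like miRNA (example)
utr = "AAGCUACCUCAAUCCUACCUCAGGA"                  # hypothetical 3' UTR fragment
print(seed_sites(mirna, utr))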
41

Morgan, Geoffrey Robert. "An analysis of the nature and function of mental computation in primary mathematics curricula." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16011/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study was conducted to analyse aspects of mental computation within primary school mathematics curricula and to formulate recommendations to inform future revisions to the Number strand of mathematics syllabuses for primary schools. The analyses were undertaken from past, contemporary, and futures perspectives. Although this study had syllabus development in Queensland as a prime focus, its findings and recommendations have an international applicability. Little has been documented in relation to the nature and role of mental computation in mathematics curricula in Australia (McIntosh, Bana, & Farrell, 1995,p. 2), despite an international resurgence of interest by mathematics educators. This resurgence has arisen from a recognition that computing mentally remains a viable computational alternative in a technological age, and that the development of mental procedures contributes to the formation of powerful mathematical thinking strategies (R. E. Reys, 1992, p. 63). The emphasis needs to be placed upon the mental processes involved, and it is this which distinguishes mental computation from mental arithmetic, as defined in this study. Traditionally, the latter has been concerned with speed and accuracy rather than with the mental strategies used to arrive at the correct answers. In Australia, the place of mental computation in mathematics curricula is only beginning to be seriously considered. Little attention has been given to teaching, as opposed to testing, mental computation. Additionally, such attention has predominantly been confined to those calculations needed to be performed mentally to enable the efficient use of the conventional written algorithms. Teachers are inclined to associate mental computation with isolated facts, most commonly the basic ones, rather than with the interrelationships between numbers and the methods used to calculate. To enhance the use of mental computation and to achieve an improvement in performance levels, children need to be encouraged to value all methods of computation, and to place a priority on mental procedures. This requires that teachers be encouraged to change the way in which they view mental computation. An outcome of this study is to provide the background and recommendations for this to occur. The mathematics education literature of relevance to mental computation was analysed, and its nature and function, together with the approaches to teaching, under each of the Queensland mathematics syllabuses from 1860 to 1997 were documented. Three distinct time-periods were analysed: 1860-1965, 1966-1987, and post-1987. The first of these was characterised by syllabuses which included specific references to calculating mentally. To provide insights into the current status of mental computation in Queensland primary schools, a survey of a representative sample of teachers and administrators was undertaken. The statements in the postal, self-completion opinionnaire were based on data from the literature review. This study, therefore, has significance for Queensland educational history, curriculum development, and pedagogy. The review of mental computation research indicated that the development of flexible mental strategies is influenced by the order in which mental and written techniques are introduced. Therefore, the traditional written-mental sequence needs to be reevaluated. As a contribution to this reevaluation, this study presents a mental-written sequence for introducing each of the four operations. 
However, findings from the survey of Queensland school personnel revealed that a majority disagreed with the proposition that an emphasis on written algorithms should be delayed to allow increased attention on mental computation. Hence, for this sequence to be successfully introduced, much professional debate and experimentation needs to occur to demonstrate its efficacy to teachers. Of significance to the development of efficient mental techniques is the way in which mental computation is taught. R. E. Reys, B. J. Reys, Nohda, and Emori (1995, p. 305) have suggested that there are two broad approaches to teaching mental computation: a behaviourist approach and a constructivist approach. The former views mental computation as a basic skill and is considered an essential prerequisite to written computation, with proficiency gained through direct teaching. In contrast, the constructivist approach contends that mental computation is a process of higher-order thinking in which the act of generating and applying mental strategies is significant for an individual's mathematical development. Nonetheless, this study has concluded that there may be a place for the direct teaching of selected mental strategies. To support syllabus development, a sequence of mental strategies appropriate for focussed teaching for each of the four operations has been delineated. The implications for teachers with respect to these recommendations are discussed. Their implementation has the potential to severely threaten many teachers' sense of efficacy. To support the changed approach to developing competence with mental computation, aspects requiring further theoretical and empirical investigation are also outlined.
42

Morgan, Geoffrey Robert. "An analysis of the nature and function of mental computation in primary mathematics curricula." Thesis, Queensland University of Technology, 1999. https://eprints.qut.edu.au/16011/1/Geoffrey_Morgan_Thesis.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study was conducted to analyse aspects of mental computation within primary school mathematics curricula and to formulate recommendations to inform future revisions to the Number strand of mathematics syllabuses for primary schools. The analyses were undertaken from past, contemporary, and futures perspectives. Although this study had syllabus development in Queensland as a prime focus, its findings and recommendations have an international applicability. Little has been documented in relation to the nature and role of mental computation in mathematics curricula in Australia (McIntosh, Bana, & Farrell, 1995,p. 2), despite an international resurgence of interest by mathematics educators. This resurgence has arisen from a recognition that computing mentally remains a viable computational alternative in a technological age, and that the development of mental procedures contributes to the formation of powerful mathematical thinking strategies (R. E. Reys, 1992, p. 63). The emphasis needs to be placed upon the mental processes involved, and it is this which distinguishes mental computation from mental arithmetic, as defined in this study. Traditionally, the latter has been concerned with speed and accuracy rather than with the mental strategies used to arrive at the correct answers. In Australia, the place of mental computation in mathematics curricula is only beginning to be seriously considered. Little attention has been given to teaching, as opposed to testing, mental computation. Additionally, such attention has predominantly been confined to those calculations needed to be performed mentally to enable the efficient use of the conventional written algorithms. Teachers are inclined to associate mental computation with isolated facts, most commonly the basic ones, rather than with the interrelationships between numbers and the methods used to calculate. To enhance the use of mental computation and to achieve an improvement in performance levels, children need to be encouraged to value all methods of computation, and to place a priority on mental procedures. This requires that teachers be encouraged to change the way in which they view mental computation. An outcome of this study is to provide the background and recommendations for this to occur. The mathematics education literature of relevance to mental computation was analysed, and its nature and function, together with the approaches to teaching, under each of the Queensland mathematics syllabuses from 1860 to 1997 were documented. Three distinct time-periods were analysed: 1860-1965, 1966-1987, and post-1987. The first of these was characterised by syllabuses which included specific references to calculating mentally. To provide insights into the current status of mental computation in Queensland primary schools, a survey of a representative sample of teachers and administrators was undertaken. The statements in the postal, self-completion opinionnaire were based on data from the literature review. This study, therefore, has significance for Queensland educational history, curriculum development, and pedagogy. The review of mental computation research indicated that the development of flexible mental strategies is influenced by the order in which mental and written techniques are introduced. Therefore, the traditional written-mental sequence needs to be reevaluated. As a contribution to this reevaluation, this study presents a mental-written sequence for introducing each of the four operations. 
However, findings from the survey of Queensland school personnel revealed that a majority disagreed with the proposition that an emphasis on written algorithms should be delayed to allow increased attention on mental computation. Hence, for this sequence to be successfully introduced, much professional debate and experimentation needs to occur to demonstrate its efficacy to teachers. Of significance to the development of efficient mental techniques is the way in which mental computation is taught. R. E. Reys, B. J. Reys, Nohda, and Emori (1995, p. 305) have suggested that there are two broad approaches to teaching mental computation: a behaviourist approach and a constructivist approach. The former views mental computation as a basic skill and is considered an essential prerequisite to written computation, with proficiency gained through direct teaching. In contrast, the constructivist approach contends that mental computation is a process of higher-order thinking in which the act of generating and applying mental strategies is significant for an individual's mathematical development. Nonetheless, this study has concluded that there may be a place for the direct teaching of selected mental strategies. To support syllabus development, a sequence of mental strategies appropriate for focussed teaching for each of the four operations has been delineated. The implications for teachers with respect to these recommendations are discussed. Their implementation has the potential to severely threaten many teachers' sense of efficacy. To support the changed approach to developing competence with mental computation, aspects requiring further theoretical and empirical investigation are also outlined.
43

Blocker, Alexander W. "Distributed and multiphase inference in theory and practice| Principles, modeling, and computation for high-throughput science." Thesis, Harvard University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3566820.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:

The rise of high-throughput scientific experimentation and data collection has introduced new classes of statistical and computational challenges. The technologies driving this data explosion are subject to complex new forms of measurement error, requiring sophisticated statistical approaches. Simultaneously, statistical computing must adapt to larger volumes of data and new computational environments, particularly parallel and distributed settings. This dissertation presents several computational and theoretical contributions to these challenges.

In chapter 1, we consider the problem of estimating the genome-wide distribution of nucleosome positions from paired-end sequencing data. We develop a modeling approach based on nonparametric templates that controls for variability due to enzymatic digestion. We use this to construct a calibrated Bayesian method to detect local concentrations of nucleosome positions. Inference is carried out via a distributed HMC algorithm that scales linearly in complexity with the length of the genome being analyzed. We provide MPI-based implementations of the proposed methods, stand-alone and on Amazon EC2, which can provide inferences on an entire S. cerevisiae genome in less than 1 hour on EC2.

We then present a method for absolute quantitation from LC-MS/MS proteomics experiments in chapter 2. We present a Bayesian model for the non-ignorable missing data mechanism induced by this technology, which includes an unusual combination of censoring and truncation. We provide a scalable MCMC sampler for inference in this setting, enabling full-proteome analyses using cluster computing environments. A set of simulation studies and actual experiments demonstrate this approach's validity and utility.

We close in chapter 3 by proposing a theoretical framework for the analysis of preprocessing under the banner of multiphase inference. Preprocessing forms an oft-neglected foundation for a wide range of statistical and scientific analyses. We provide some initial theoretical foundations for this area, including distributed preprocessing, building upon previous work in multiple imputation. We demonstrate that multiphase inferences can, in some cases, even surpass standard single-phase estimators in efficiency and robustness. Our work suggests several paths for further research into the statistical principles underlying preprocessing.

44

Blocker, Alexander Weaver. "Distributed and Multiphase Inference in Theory and Practice: Principles, Modeling, and Computation for High-Throughput Science." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10977.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The rise of high-throughput scientific experimentation and data collection has introduced new classes of statistical and computational challenges. The technologies driving this data explosion are subject to complex new forms of measurement error, requiring sophisticated statistical approaches. Simultaneously, statistical computing must adapt to larger volumes of data and new computational environments, particularly parallel and distributed settings. This dissertation presents several computational and theoretical contributions to these challenges. In chapter 1, we consider the problem of estimating the genome-wide distribution of nucleosome positions from paired-end sequencing data. We develop a modeling approach based on nonparametric templates that controls for variability due to enzymatic digestion. We use this to construct a calibrated Bayesian method to detect local concentrations of nucleosome positions. Inference is carried out via a distributed HMC algorithm that scales linearly in complexity with the length of the genome being analyzed. We provide MPI-based implementations of the proposed methods, stand-alone and on Amazon EC2, which can provide inferences on an entire S. cerevisiae genome in less than 1 hour on EC2. We then present a method for absolute quantitation from LC-MS/MS proteomics experiments in chapter 2. We present a Bayesian model for the non-ignorable missing data mechanism induced by this technology, which includes an unusual combination of censoring and truncation. We provide a scalable MCMC sampler for inference in this setting, enabling full-proteome analyses using cluster computing environments. A set of simulation studies and actual experiments demonstrate this approach's validity and utility. We close in chapter 3 by proposing a theoretical framework for the analysis of preprocessing under the banner of multiphase inference. Preprocessing forms an oft-neglected foundation for a wide range of statistical and scientific analyses. We provide some initial theoretical foundations for this area, including distributed preprocessing, building upon previous work in multiple imputation. We demonstrate that multiphase inferences can, in some cases, even surpass standard single-phase estimators in efficiency and robustness. Our work suggests several paths for further research into the statistical principles underlying preprocessing.
Statistics
45

Kumar, Nishant. "Towards practical implementation of computational solution of the Kinematic -wave Model for simulating traffic-flow scenarios." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/1037.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The Kinematic-wave model is one of the models proposed to simulate vehicular traffic. It has not received widespread use because of poor understanding of associated interface conditions and the early use of incorrect numerical schemes. This thesis analyzes mathematically correct boundary and interface conditions in the context of the Godunov method as the numerical scheme for the simulation software created. This thesis simulates a set of scenarios originally proposed by Ross to verify the validity of the simulation. The results of the simulation are compared against the corresponding results of Ross, and against intuitive expectations of the behavior of actual traffic under the scenarios. Our results tend either to agree with or improve upon those reported by Ross, who used alternate models.
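For readers unfamiliar with the method named above, a minimal Godunov update for the LWR kinematic-wave model with a Greenshields (parabolic) flux looks roughly as follows; the demand-supply form of the numerical flux is standard for a concave fundamental diagram, while the parameters, grid, and initial jam are invented and this is not the thesis's simulation code.

# Sketch: one Godunov step for the LWR kinematic-wave model with the
# Greenshields flux q(rho) = v_f * rho * (1 - rho / rho_max). The numerical
# flux at each interface is min(demand(left cell), supply(right cell)).
# Parameters and the initial jam are invented for illustration.

V_F, RHO_MAX = 30.0, 0.2           # free-flow speed (m/s), jam density (veh/m)
RHO_C = RHO_MAX / 2.0              # critical density where flow is maximal

def flux(rho):
    return V_F * rho * (1.0 - rho / RHO_MAX)

def demand(rho_left):              # maximum flow the upstream cell can send
    return flux(rho_left) if rho_left <= RHO_C else flux(RHO_C)

def supply(rho_right):             # maximum flow the downstream cell can receive
    return flux(RHO_C) if rho_right <= RHO_C else flux(rho_right)

def godunov_step(rho, dx, dt):
    """Advance cell densities by one time step (zero-gradient boundaries)."""
    padded = [rho[0]] + rho + [rho[-1]]
    interface_flux = [min(demand(padded[i]), supply(padded[i + 1]))
                      for i in range(len(padded) - 1)]
    return [rho[i] - dt / dx * (interface_flux[i + 1] - interface_flux[i])
            for i in range(len(rho))]

# Light traffic running into a queue; the CFL condition here is dt <= dx / V_F.
rho = [0.03] * 20 + [0.18] * 10
for _ in range(50):
    rho = godunov_step(rho, dx=50.0, dt=1.0)
print([round(r, 3) for r in rho[::5]])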
46

Rumming, Madis [Verfasser], Alexander [Akademischer Betreuer] Sczyrba, and Leonid [Akademischer Betreuer] Chindelevitch. "Metadata-driven computational (meta)genomics. A practical machine learning approach / Madis Rumming ; Alexander Sczyrba, Leonid Chindelevitch." Bielefeld : Universitätsbibliothek Bielefeld, 2018. http://d-nb.info/1155922220/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Zeliha, Işıl Vural. "Sports Data Journalism: Data driven journalistic practices in Spanish newspapers." Doctoral thesis, Universitat Ramon Llull, 2021. http://hdl.handle.net/10803/672394.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Working with data has always been an important part of journalism, but its combination with technology is an innovation for newspapers. In recent years, newspapers have started to adapt data journalism, and it has become part of newsrooms, in contrast to the traditional journalism environment of Spanish newspapers. This thesis aims to analyse sports data journalism practices in Spain with a quantitative and qualitative approach: a content analysis of 1068 data journalism articles published by 6 newspapers (Marca, Mundo Deportivo, AS, El Mundo, El Periódico, El País) between 2017 and 2019, and interviews with 15 participants from 6 newspapers (Marca, Mundo Deportivo, AS, El Mundo, El Confidencial, El País). Both the quantitative and qualitative analyses focus on how data journalism is being adapted in Spain, its current situation and technical features, and the opportunities and threats in its development.
48

Büscher, Niklas [Verfasser], Stefan [Akademischer Betreuer] Katzenbeisser, and Florian [Akademischer Betreuer] Kerschbaum. "Compilation for More Practical Secure Multi-Party Computation / Niklas Büscher ; Stefan Katzenbeisser, Florian Kerschbaum." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1179361792/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Sridaran, Kris Shrishak [Verfasser], Michael [Akademischer Betreuer] Waidner, and Jean-Pierre [Akademischer Betreuer] Seifert. "Practical Secure Computation for Internet Infrastructure / Kris Shrishak Sridaran ; Michael Waidner, Jean-Pierre Seifert." Darmstadt : Universitäts- und Landesbibliothek, 2021. http://d-nb.info/123291486X/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Donley, Kevin Scott. "Coding in the Curriculum: Learning Computational Practices and Concepts, Creative Problem Solving Skills, and Academic Content in Ten to Fourteen-Year-Old Children." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/514678.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Educational Psychology
Ph.D.
The fundamentals of computer science are increasingly important to consider as critical educational and occupational competencies, as evidenced by the rapid growth of computing capabilities and the proliferation of the Internet in the 21st century, combined with reimagined national education standards. Despite this technological and social transformation, the general education environment has yet to embrace widespread incorporation of computational concepts within traditional curricular content and instruction. Researchers have posited that exercises in computational thinking can result in gains in other academic areas (Baytak & Land, 2011; Olive, 1991), but their studies aimed at identifying any measurable educational benefits of teaching computational concepts to school age children have often lacked both sufficient experimental control and inclusion of psychometrically sound measures of cognitive abilities and academic achievement (Calao, Moreno-León, Correa, & Robles, 2015). The current study attempted to shed new light on the question of whether using a graphically based computer coding environment and semi-structured curriculum (the Creative Computing Course in the Scratch programming language) can lead to demonstrable and significant changes in problem solving, creative thinking, and knowledge of computer programming concepts. The study introduced 24 youth in a summer educational program in Philadelphia, PA to the Scratch programming environment through structured lessons and open-ended projects for approximately 25 hours over the course of two weeks. A delayed-treatment control trial design was utilized to measure problem solving ability with a modified version of the Woodcock-Johnson Tests of Cognitive Abilities, Fourth Edition (WJ-IV), Concept Formation subtest, and the Kaufman Tests of Educational Achievement, Third Edition (KTEA-3) Math Concepts and Applications subtest. Creative problem solving was measured using a consensual assessment technique (Amabile, 1982). A pre-test and post-test of programming conceptual knowledge was used to understand how participants' computational thinking skills influenced their learning. In addition, two questionnaires measuring computer use and the Type-T (Thrill) personality characteristic were given to participants to examine the relationship between risk-taking or differences in children's usage of computing devices and their problem solving ability and creative thinking skills. There were no differences found between experimental and control groups on problem solving or creative thinking, although a substantial number of factors limited and qualified interpretation of the results. There was also no relationship between performance on a pre-test of computational thinking and a post-test measuring specific computational thinking skills and curricular content. There were, however, significant, moderate to strong correlations among academic achievement as measured by state standardized test scores, the KTEA-3 Math Concepts and Applications subtest, and both the pre and post Creative Problem Solving test developed for the study. Also, higher levels of the Type T, or thrill-seeking, personality characteristic were associated with lower behavioral reinforcement token computer "chips," but there were no significant relationships between computer use and performance on assessments.
The results of the current study supported retention of the null hypothesis, but were limited by small sample size, environmental and motivational issues, and problems with the implementation of the curriculum and selected measures. The results should, therefore, not be taken as conclusive evidence to support the notion that computer programming activities have no impact on other areas of cognitive functioning, mathematical conceptual knowledge, or creative thinking. Instead, the results may help future researchers to further refine their techniques, both to deliver effective instruction in the Scratch programming environment and to target assessments that more accurately measure learning.
Temple University--Theses
