Dissertations / Theses on the topic 'Biology Research Computer programs'

Consult the top 50 dissertations / theses for your research on the topic 'Biology Research Computer programs.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Kiehl, Debra Elisabeth. "A comparison of traditional animal dissection and computer simulation dissection." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Iturrioz, Amaia Bernaras. "A method for understanding experimental computer programs in artificial intelligence research." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/20026.

Full text
Abstract:
This thesis is concerned with the use of Software Engineering abstraction constructs to help in the process of understanding computer programs that are built as part of experiments in the Symbolic Paradigm. It is also concerned with developing and testing a method to analyse these programs in an organised and structured way. In a series of three experiments, the use of abstraction constructs to help the process of transforming a program to a more abstract form, and how to do this in a structured way, was incrementally investigated. This involved first exploring the use of abstraction constructs to achieve higher degrees of abstraction in a small example; the next step was to use the understanding of their use and of how to transform a program in a bigger example, from which a more clearly defined role of the abstraction constructs, and an initial scheme for transforming a program, was achieved; the last step involved investigating a complete form of an analysis procedure to analyse experimental programs built by incremental prototyping, and that is supported by the use of abstraction constructs. The result is the Prototype Analysis Method (PAM): a static analysis method to help in the understanding of incrementally built experimental computer programs in AI. An essential part of this method is a transformation process that is supported by the use of Software Engineering abstraction constructs, and of test sets from dynamic analysis for validation. This research clearly demonstrates that the successful application of Software Engineering abstraction constructs is an important aspect of AI research. Results from this research also point to further interesting issues, such as the relation between Knowledge Level descriptions and abstract Symbol Level descriptions.
APA, Harvard, Vancouver, ISO, and other styles
3

Joseph, Arokiya Louis Monica. "Sequence Similarity Search portal." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3124.

Full text
Abstract:
This project brings the bioinformatics community a new development concept in which users can access all data and applications hosted in the main research centers as if they were installed on their local machines, providing seamless integration between disparate services. The project integrates the sequence similarity searching services at EBI and NCBI using web services. It also allows molecular biologists to save their searches, acting as a log book for their sequence similarity searches, and lets them share their sequences and results with others.
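As a rough illustration of the kind of web-service call such a portal might make, here is a minimal Python sketch using Biopython's public NCBI BLAST wrapper. The example sequence, the choice of program and database, and the simple hit listing are assumptions for demonstration; they are not taken from the project described above.

```python
# Minimal sketch (not the project's code): submitting a remote BLAST search from a
# portal back end using Biopython's NCBI web-service wrapper, then listing the hit
# titles so a user could revisit the search later.
from Bio.Blast import NCBIWWW, NCBIXML

def run_similarity_search(sequence: str, program: str = "blastp", database: str = "nr"):
    """Submit a sequence to NCBI BLAST over the web and return the top hits."""
    handle = NCBIWWW.qblast(program, database, sequence)   # remote web-service call
    record = NCBIXML.read(handle)                          # parse the XML response
    return [(aln.title, aln.hsps[0].expect) for aln in record.alignments[:10]]

if __name__ == "__main__":
    example_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # made-up peptide
    for title, evalue in run_similarity_search(example_seq):
        print(f"{evalue:.2e}\t{title}")
```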
APA, Harvard, Vancouver, ISO, and other styles
4

Rahim, Humaira. "Athena: An online proposal development system." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2856.

Full text
Abstract:
Athena - Online Proposal Development System was the first version of a system envisioned by Dr. Richard Botting, Professor, Department of Computer Science at California State University, San Bernardino. The program, a JSP-based system incorporating a MySQL database, moves the writing, review, and annotation of project proposals into the digital environment. It allows Computer Science Master's students to submit their project proposals online for review and annotation by their committee members.
APA, Harvard, Vancouver, ISO, and other styles
5

McIntosh, Cecilia A. "Building and Sustaining Research at East Tennessee State University." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Beyers, Ronald Noel. "Selecting educational computer software and evaluating its use, with special reference to biology education." Thesis, Rhodes University, 1992. http://hdl.handle.net/10962/d1003649.

Full text
Abstract:
In the field of Biology there is a reasonable amount of software available for educational use, but in the researcher's experience there are few teachers who take the computer into the classroom/laboratory. Teachers will make use of video machines and tape recorders quite happily, but a computer is a piece of apparatus which they are not prepared to use in the classroom/laboratory. This thesis is an attempt to devise an educational package, consisting of a Selection Form and an Evaluation Form, which can be used by teachers to select and evaluate educational software in the field of Biology. The forms were designed specifically for teachers to use in preparation of a computer lesson. The evaluation package also provides the teacher with a means of identifying whether the lesson has achieved its objectives or not. The teacher may also be provided with feedback about the lesson. The data are gathered by means of a questionnaire which the pupils complete. It would appear that teachers are uncertain as regards the purchase of software for their subject from the many catalogues that are available. The evaluation package implemented in this research can be regarded as the beginnings of a database for the accumulation of information to assist teachers with details on which software to select. Evidence is provided in this thesis for the practical application of the Selection and Evaluation Forms, using Biology software.
APA, Harvard, Vancouver, ISO, and other styles
7

Lira, Bonates Eduardo Jorge. "Analysis of truckshovel dispatching policies using computer simulation." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Kun-Che. "Extending the solicitation management system: User interface improvement and system administration support." CSUSB ScholarWorks, 2008. https://scholarworks.lib.csusb.edu/etd-project/3398.

Full text
Abstract:
The main purpose of this project is to develop new functionalities for the Solicitation Management System (SMS) to support the Office of Technology Transfer and Commercialization (OTTC) at California State University San Bernardino (CSUSB) and the Center for the Commercialization of Advanced Technology (CCAT) at San Diego State University (SDSU) for the 2008 solicitation, which opened on 28 Jan 2008. SMS is a system built to facilitate the processing of grant proposal solicitations. The SMS was first built in 2004 and was primarily used by the OTTC, CSUSB for its solicitation activities. The new version of the SMS is more user-friendly, making it easier for users to understand and operate. The purpose of this software is to aid the processing of a solicitation for organizations that conduct solicitations for grant proposals.
APA, Harvard, Vancouver, ISO, and other styles
9

Beedell, David C. (David Charles). "The effect of sampling error on the interpretation of a least squares regression relating phosphorus and chlorophyll." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22720.

Full text
Abstract:
Least squares linear regression is a common tool in ecological research. One of the central assumptions of least squares linear regression is that the independent variable is measured without error. But this variable is measured with error whenever it is a sample mean. The significance of such contraventions is not regularly assessed in ecological studies. A simulation program was made to provide such an assessment. The program requires a hypothetical data set, and using estimates of s² it scatters the hypothetical data to simulate the effect of sampling error. A regression line is drawn through the scattered data, and SSE and r² are measured. This is repeated numerous times (e.g. 1000) to generate probability distributions for r² and SSE. From these distributions it is possible to assess the likelihood of the hypothetical data resulting in a given SSE or r². The method was applied to survey data used in a published TP-CHLa regression (Pace 1984). Beginning with a hypothetical, linear data set (r² = 1), simulated scatter due to sampling exceeded the SSE from the regression through the survey data about 30% of the time. Thus chances are 3 out of 10 that the level of uncertainty found in the surveyed TP-CHLa relationship would be observed if the true relationship were perfectly linear. If this is so, more precise and more comprehensive models will only be possible when better estimates of the means are available. This simulation approach should apply to all least squares regression studies that use sampled means, and should be especially relevant to studies that use log-transformed values.
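The simulation strategy described above can be sketched in a few lines. The following Python snippet is an illustrative reconstruction rather than the original program; the hypothetical TP-CHLa relationship, the variance estimate s², and the number of replicates are invented placeholders.

```python
# Sketch of the simulation idea: start from a perfectly linear hypothetical TP-CHLa
# data set, scatter the x-values with sampling error of variance s2, refit the
# regression, and build distributions of SSE and r^2 over many repetitions.
import numpy as np

rng = np.random.default_rng(0)
tp = np.linspace(5, 100, 30)          # hypothetical total phosphorus means
chla = 0.4 * tp                       # perfectly linear hypothetical relationship (r^2 = 1)
s2 = 25.0                             # assumed sampling variance of the TP means

def one_replicate():
    tp_obs = tp + rng.normal(0.0, np.sqrt(s2), size=tp.size)   # simulated sampling error
    slope, intercept = np.polyfit(tp_obs, chla, 1)
    fitted = slope * tp_obs + intercept
    sse = np.sum((chla - fitted) ** 2)
    r2 = 1.0 - sse / np.sum((chla - chla.mean()) ** 2)
    return sse, r2

results = np.array([one_replicate() for _ in range(1000)])
print("median SSE:", np.median(results[:, 0]), " median r^2:", np.median(results[:, 1]))
print("P(r^2 < 0.8):", np.mean(results[:, 1] < 0.8))
```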
APA, Harvard, Vancouver, ISO, and other styles
10

Gregory, Victor Paul. "Monte Carlo computer simulation of sub-critical Lennard-Jones particles." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-11242009-020125/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Chia-Chi. "Online solicitation management system for the Office of Technology Transfer and Commercialization." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2950.

Full text
Abstract:
The Online Solicitation Management System (OSMS) is a web-based system designed for California State University, San Bernardino's Office of Technology Transfer and Commercialization (OTTC) to run grant proposal solicitations more efficiently. The system accepts grant proposals, finds the best-matched evaluators, calculates evaluation scores, and generates reports. Users in the system are divided into five (5) different roles: system administrator, program officer, staff, evaluator and applicant.
APA, Harvard, Vancouver, ISO, and other styles
12

Montgomery, David Eric. "An interactive PHIGS+ model rendering system applied to postprocessing of spatial mechanisms." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-03242009-040504/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Chen, Zhuo. "Smart Sequence Similarity Search (S⁴) system." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2458.

Full text
Abstract:
Sequence similarity searching is commonly used to help clarify the biochemical and physiological features of newly discovered genes or proteins. An efficient similarity search relies on the choice of tools and their associated subprograms and numerous parameter settings. To assist researchers in selecting optimal programs and parameter settings for efficient sequence similarity searches, the web-based expert system Smart Sequence Similarity Search (S4) was developed.
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Huayang. "Cell Proliferation Control: from Intrinsic Transcriptional Programs to Extrinsic Stromal Networks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1430953475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Miner, Roy. "Analysis of operational pace versus logistical support rate in the ground combat element of a Marine Expeditionary Brigade." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FMiner.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Arnold Buss, David Schrady. "September 2006." Includes bibliographical references (p. 75). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
16

DeSa, Colin Joseph. "Distributed problem solving environments for scientific computing." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-08042009-040307/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Qiu, Shuhao. "Computational Simulation and Analysis of Mutations: Nucleotide Fixation, Allelic Age and rare Genetic Variations in population." University of Toledo Health Science Campus / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=mco1430494327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Orsatti, Joanne. Information Systems, Technology & Management, Australian School of Business, UNSW. "Characterising scholarly identities: a citation identity analysis of the field of the scientific study of consciousness." University of New South Wales, Information Systems, Technology & Management, 2007. http://handle.unsw.edu.au/1959.4/40472.

Full text
Abstract:
The professional profile of researchers is established through communication of scientific work practices, leading to the establishment of a scholarly identity. Understanding scholarly identities is currently addressed through a conceptualisation of research narrative mechanisms. Citation and citing practices are a central component of scientific communication work practices. Therefore understanding these formal communication practices of researchers through their citing behaviours may contribute to the building of scholarly identity. This study is undertaken to understand whether scholarly identity could be informed through the use of citation identities. Studies on the citation identities of individuals were conducted, using authors working in the area of Consciousness, which provided a diverse field of participants for the testing of citation analysis techniques. This is accomplished through methodological development and further examined using a combination of field-level and individual-level analyses. A new methodology was developed for the generation of citing identities, based on the calculation of the Gini coefficient and the citee-citation ratio of authors' citing profiles. The resulting relationship was found to have high levels of consistency across a heterogeneous set of researchers. An exploration of identification of author characteristics was subsequently undertaken using the new methodology and existing citation analysis techniques. The techniques were successful in identifying departures from conventional citation practice, highlighting idiosyncrasies well, but otherwise understanding of scholarly identity through citation analysis was only marginally successful. A portion of the difficulty of achieving clarity was the complexity of the Consciousness author set, which was useful for establishing broad applicability of a new methodology, but poor for judging its successful application. In summary, definition of citing identity type offers possibilities for improving the understanding of scholarly identity, but will require further methodology development to reach its full potential.
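For readers unfamiliar with the two quantities paired in that methodology, the sketch below shows how a Gini coefficient and a citee-citation ratio could be computed for a single author's citing profile. The citation counts are invented, and the code is only an illustration of the general idea, not the thesis procedure.

```python
# Illustrative sketch: the Gini coefficient of an author's citing profile (how
# unevenly citations are spread over cited authors) and the citee-to-citation ratio.
import numpy as np

def gini(counts):
    """Gini coefficient of a vector of citation counts (0 = even, 1 = concentrated)."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# invented example: times each distinct cited author (citee) is cited by one citing author
citing_profile = [12, 7, 5, 3, 2, 2, 1, 1, 1, 1, 1, 1]
citee_citation_ratio = len(citing_profile) / sum(citing_profile)

print(f"Gini coefficient:     {gini(citing_profile):.3f}")
print(f"Citee/citation ratio: {citee_citation_ratio:.3f}")
```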
APA, Harvard, Vancouver, ISO, and other styles
19

Smith, Brian G. "Two Highly Diverse Studies In Computing: A Vitruvian Framework For Distribution And A Search Approach To Cancer Therapies." DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/136.

Full text
Abstract:
Solid cancer tumors must recruit new blood vessels for growth and maintenance. Discovering drugs that block this tumor-induced development of new blood vessels (angiogenesis) is an important approach in cancer treatment. However, the complexity of angiogenesis and the difficulty in implementing and evaluating medical changes prevent the discovery of novel and effective new therapies. This paper presents a massively parallel computational search-based approach for the discovery of novel potential cancer treatments, using a high fidelity simulation of angiogenesis. Discovering new therapies is viewed as multi-objective combinatorial optimization over two competing objectives: minimizing the medical cost of the intervention while minimizing the oxygen provided to the cancer tumor by angiogenesis. Results show the effectiveness of the search process in finding simple interventions that are currently in use and more interestingly, discovering some new approaches that are counterintuitive yet effective. Distributed systems are becoming more prevalent as the demand for connectivity increases. Developers are faced with the challenge of creating software systems that meet these demands and adhere to good software practices. Technologies of today aid developers in this, but they may cause applications to suffer performance problems and require developers to abandon basic software concepts, such as modularization, performance, and maintainability. This work presents the Vitruvian framework that provides solutions to common distribution goals, and distributes applications using replication and transparency at varying stages of application development.
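A toy sketch of the two-objective framing described above is given below. The high-fidelity angiogenesis simulator is replaced by an invented surrogate scoring function; only the Pareto-dominance bookkeeping over the competing objectives (medical cost versus oxygen supplied to the tumour) is illustrated.

```python
# Toy sketch of multi-objective search over candidate interventions. The evaluate()
# function is an invented surrogate, not the angiogenesis simulation from the thesis.
import random

def evaluate(intervention):
    """Surrogate for the simulator: returns (medical cost, oxygen delivered to tumour)."""
    cost = sum(intervention)                          # pretend each dose unit adds cost
    oxygen = 100.0 / (1.0 + 0.5 * sum(intervention))  # more intervention, less oxygen
    return cost, oxygen

def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(1)
candidates = [[random.randint(0, 3) for _ in range(5)] for _ in range(200)]
scored = [(c, evaluate(c)) for c in candidates]
pareto = [c for c, f in scored
          if not any(dominates(g, f) for _, g in scored if g != f)]
print("Non-dominated interventions found:", len(pareto))
```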
APA, Harvard, Vancouver, ISO, and other styles
20

Vũ, John Huân. "Software Internationalization: A Framework Validated Against Industry Requirements for Computer Science and Software Engineering Programs." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/248.

Full text
Abstract:
View John Huân Vũ's thesis presentation at http://youtu.be/y3bzNmkTr-c. In 2001, the ACM and IEEE Computing Curriculum stated that it was necessary to address "the need to develop implementation models that are international in scope and could be practiced in universities around the world." With increasing connectivity through the internet, the move towards a global economy and growing use of technology places software internationalization as a more important concern for developers. However, there has been a "clear shortage in terms of numbers of trained persons applying for entry-level positions" in this area. Eric Brechner, Director of Microsoft Development Training, suggested five new courses to add to the computer science curriculum due to the growing "gap between what college graduates in any field are taught and what they need to know to work in industry." He concludes that "globalization and accessibility should be part of any course of introductory programming," stating: A course on globalization and accessibility is long overdue on college campuses. It is embarrassing to take graduates from a college with a diverse student population and have to teach them how to write software for a diverse set of customers. This should be part of introductory software development. Anything less is insulting to students, their family, and the peoples of the world. There is very little research into how the subject of software internationalization should be taught to meet the major requirements of the industry. The research question of the thesis is thus, "Is there a framework for software internationalization that has been validated against industry requirements?" The answer is no. The framework "would promote communication between academia and industry ... that could serve as a common reference point in discussions." Since no such framework for software internationalization currently exists, one will be developed here. The contribution of this thesis includes a provisional framework to prepare graduates to internationalize software and a validation of the framework against industry requirements. The requirement of this framework is to provide a portable and standardized set of requirements for computer science and software engineering programs to teach future graduates.
APA, Harvard, Vancouver, ISO, and other styles
21

SACRAMENTO, JOSE M. N. "Ferramentas para adequação das linhas de pesquisa de institutos de pesquisa: o exemplo do IPEN." Repositório Institucional do IPEN, 2011. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9975.

Full text
Abstract:
Thesis (Doctorate)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
22

Banda, Peter. "Novel Methods for Learning and Adaptation in Chemical Reaction Networks." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2329.

Full text
Abstract:
State-of-the-art biochemical systems for medical applications and chemical computing are application-specific and cannot be re-programmed or trained once fabricated. The implementation of adaptive biochemical systems that would offer flexibility through programmability and autonomous adaptation faces major challenges because of the large number of required chemical species as well as the timing-sensitive feedback loops required for learning. Currently, biochemistry lacks a systems vision of what the user-level programming interface and abstraction, with a subsequent translation to chemistry, should look like. By developing adaptation in chemistry, we could replace multiple hard-wired systems with a single programmable template that can be (re)trained to match a desired input-output profile, benefiting smart drug delivery, pattern recognition, and chemical computing. I aimed to address these challenges by proposing several approaches to learning and adaptation in Chemical Reaction Networks (CRNs), a type of simulated chemistry, where species are unstructured, i.e., they are identified by symbols rather than molecular structure, and their dynamics or concentration evolution are driven by reactions and reaction rates that follow mass-action and Michaelis-Menten kinetics. Several CRN and experimental DNA-based models of neural networks exist. However, these models successfully implement only the forward pass, i.e., the input-weight integration part of a perceptron model. Learning is delegated to a non-chemical system that computes the weights before converting them to molecular concentrations. Autonomous learning, i.e., learning implemented fully inside chemistry, has been absent from both theoretical and experimental research. The research in this thesis offers the first constructive evidence that learning in CRNs is, in fact, possible. I have introduced the original concept of a chemical binary perceptron that can learn all 14 linearly separable logic functions and is robust to the perturbation of rate constants. That shows learning is universal and substrate-free. To simplify the model I later proposed and applied the "asymmetric" chemical arithmetic, providing a compact solution for representing negative numbers in chemistry. To tackle more difficult tasks and to serve more complicated biochemical applications, I introduced several key modular building blocks, each addressing certain aspects of chemical information processing and learning. These parts organically combined into gradually more complex systems. First, instead of simple static Boolean functions, I tackled analog time-series learning and signal processing by modeling an analog chemical perceptron. To store past input concentrations as a sliding window, I implemented a chemical delay line, which feeds the values to the underlying chemical perceptron. That allows the system to learn, e.g., the linear moving average and to some degree predict a highly nonlinear NARMA benchmark series. Another important contribution to the area of chemical learning, which I have helped to shape, is the composability of perceptrons into larger multi-compartment networks. Each compartment hosts a single chemical perceptron and compartments communicate with each other through a channel-mediated exchange of molecular species. Besides the feedforward pass, I implemented the chemical error backpropagation analogous to that of feedforward neural networks.
Also, after applying mass-action kinetics for the catalytic reactions, I succeeded in systematically analyzing the ODEs of my models and deriving the closed exact and approximate formulas for both the input-weight integration and the weight update with learning rate annealing. I proved mathematically that the formulas of certain chemical perceptrons equal the formal linear and sigmoid neurons, essentially bridging neural networks and adaptive CRNs. For all my models the basic methodology was to first design species and reactions, and then set the rate constants either "empirically" by hand, automatically by a standard genetic algorithm (GA), or analytically if possible. I performed all simulations in my COEL framework, which is the first cloud-based chemistry modeling tool, accessible at http://coel-sim.org. I minimized the number of required molecular species and reactions to make wet chemical implementation possible. I applied an automated mapping technique, Soloveichik's CRN-to-DNA-strand-displacement transformation, to the chemical linear perceptron and the manual signalling delay line, and obtained their full DNA-strand-specified implementations. As an alternative DNA-based substrate, I mapped these two models also to deoxyribozyme-mediated cleavage reactions, reducing the size of the displacement variant to a third. Both DNA-based incarnations could directly serve as blueprints for wet biochemicals. Besides an actual synthesis of my models and conducting an experiment in a biochemical laboratory, the most promising future work is to employ so-called reservoir computing (RC), which is a novel machine learning method based on recurrent neural networks. The RC approach is relevant because for time-series prediction it is clearly superior to classical recurrent networks. It can also be implemented in various ways, such as electrical circuits, physical systems (for example, a colony of Escherichia coli), and water. RC's loose structural assumptions therefore suggest that it could be expressed in a chemical form as well. This could further enhance the expressivity and capabilities of chemically embedded learning. My chemical learning systems may have applications in the area of medical diagnosis and smart medication, e.g., concentration signal processing and monitoring, and the detection of harmful species, such as chemicals produced by cancer cells in a host (cancer miRNAs) or the detection of a severe event, defined as a linear or nonlinear temporal concentration pattern. My approach could replace hard-coded solutions and would allow one to specify, train, and reuse chemical systems without redesigning them. With time-series integration, biochemical computers could keep a record of changing biological systems and act as diagnostic aids and tools in preventative and highly personalized medicine.
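To make the mass-action setting concrete, the snippet below integrates the ODEs of a tiny two-reaction network. The species and rate constants are invented for illustration; this is not one of the chemical perceptrons from the thesis.

```python
# Minimal sketch of mass-action CRN dynamics: species concentrations evolve by ODEs
# whose terms are rate constants times reactant concentrations.
import numpy as np
from scipy.integrate import odeint

# Toy reactions:  X1 + W -> Y  (rate k1),   Y -> X1 + W  (rate k2)
k1, k2 = 0.5, 0.1

def crn(state, t):
    x1, w, y = state
    v1 = k1 * x1 * w          # mass-action rate of the forward reaction
    v2 = k2 * y               # mass-action rate of the reverse reaction
    return [-v1 + v2, -v1 + v2, v1 - v2]

t = np.linspace(0, 50, 500)
trajectory = odeint(crn, [1.0, 0.8, 0.0], t)   # initial concentrations are invented
print("steady-state concentrations [X1, W, Y]:", trajectory[-1].round(3))
```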
APA, Harvard, Vancouver, ISO, and other styles
23

Fan, Yao-Long. "Re-engineering the solicitation management system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3179.

Full text
Abstract:
The scope of this project includes a re-engineering of the internal architecture of the Solicitation Management System (SMS), a web-based application that facilitates the running of grant proposal solicitations for the Office of Technology Transfer and Commercialization at California State University San Bernardino (CSUSB). A goal of the project is to increase consistency and efficiency of the code base of the system, making it easier to understand, maintain, and extend. The previous version of SMS was written to rely on the Spring and Hibernate frameworks. The project includes a restructuring of the system to remove reliance on the Spring framework, but maintain reliance on Hibernate. The result is an updated version of the SMS. The system was written using current technologies such as Java, JSP, and CSS.
APA, Harvard, Vancouver, ISO, and other styles
24

Omar, Yunus. "Comparative analysis of selected Personal Bibliographic Management Software (PBMS) with special reference to the requirements of researchers at a University of Technology." Thesis, 2005. http://hdl.handle.net/10019/1339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Warner, Robert L. "A computational model of human emotion." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-12302008-063852/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Demetriou, Christodoulos S. "A PC implemented kinematic synthesis system for planar linkages." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/101343.

Full text
Abstract:
The purpose of this thesis is to develop a PC implemented kinematic synthesis system for four-bar and six-bar planar linkages using Turbo Pascal. CYPRUS is an interactive program that calculates and displays graphically the designed four-bar and six-bar linkages. This package can be used for three and four position synthesis of path generation, path generation with input timing, body guidance, and body guidance with input timing linkages. The package can also be used for function generation linkages where the user may enter a set of angle pairs or choose one of the following functions: tangent, cosine, sine, exponential, logarithmic, and natural logarithmic. The above syntheses can be combined to design linkages that produce more complex motion. For each kinematic synthesis case the code calculates a certain number of solutions. Then the designer chooses the most suitable solution for the particular application at hand. After a mechanism is synthesized, it can be animated for a check of the mechanical action. Watching this animation allows the designer to judge criteria such as clearances, forces, velocities and acceleration of the moving links. The software operates on an IBM PC or any other PC compatible. The language used is Turbo Pascal, an extremely effective tool and one of the fastest high level languages in compilation and execution time.
M.S.
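For readers who want to see what such a position analysis step involves computationally, the following Python sketch solves the vector loop of a four-bar linkage for one crank angle. The link lengths and helper function are illustrative assumptions; the code is not part of CYPRUS.

```python
# Hedged sketch of four-bar position analysis: given ground, crank, coupler and
# rocker lengths plus the crank angle, solve for the rocker angle and joint positions.
import math

def fourbar_position(ground, crank, coupler, rocker, theta2, open_branch=True):
    # crank pivot at the origin, rocker pivot at (ground, 0)
    bx, by = crank * math.cos(theta2), crank * math.sin(theta2)
    dx, dy = bx - ground, by            # vector from rocker pivot to crank tip
    f = math.hypot(dx, dy)              # the floating diagonal
    # law of cosines for the angle at the rocker pivot
    cos_psi = (f**2 + rocker**2 - coupler**2) / (2 * f * rocker)
    if abs(cos_psi) > 1.0:
        raise ValueError("linkage cannot be assembled at this crank angle")
    psi = math.acos(cos_psi)
    delta = math.atan2(dy, dx)
    theta4 = delta + psi if open_branch else delta - psi
    cx = ground + rocker * math.cos(theta4)
    cy = rocker * math.sin(theta4)
    return (bx, by), (cx, cy), theta4

joint_b, joint_c, theta4 = fourbar_position(10.0, 4.0, 8.0, 7.0, math.radians(60))
print("crank tip:", joint_b, "rocker tip:", joint_c, "rocker angle (deg):", math.degrees(theta4))
```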
APA, Harvard, Vancouver, ISO, and other styles
27

Moore, Michael Courtney. "The CADET Training Program Versus the Student Certification Program: A Study of IT- Support Training Programs at Western Kentucky University." TopSCHOLAR®, 2014. http://digitalcommons.wku.edu/theses/1435.

Full text
Abstract:
Technology is a critical component of modern-day success. Advancements in technology have improved communication between individuals and companies. Technological advancements have allowed students to earn college degrees online. People who habitually use technology expect a high level of performance and support. As new technologies are implemented, such as complex web services or new operating systems, the demand for information technology (IT) support grows. Even learning curves can be cumbersome without proper assistance from IT professionals. Companies and institutions must accommodate user needs by implementing fast, efficient, and friendly support. In order to offer optimal customer support, representatives must be knowledgeable of the products and services that are supported. At Western Kentucky University's (WKU) IT Helpdesk, a training program called Consultant Accelerated Development and Education in Technology (CADET) focuses on software, hardware, customer service, and procedures mandated by the IT Division. Prior to CADET, the Student Certification program was used to train student consultants. The Student Certification program was developed to satisfy training needs that allowed consultants to support end-user technical issues. CADET was developed in 2008 to replace the Student Certification program. This study explored the question of whether CADET training is more effective in preparing consultants to do their jobs than the Student Certification program. The study investigated the effectiveness of CADET training compared to the Student Certification program by surveying IT Helpdesk student consultants. The survey results indicated which program was more adequate. Both programs contained the same training content, but training delivery methods differed. A t-test was used to compare both programs and determine the outcome of the study's hypotheses. The Student Certification program did not accommodate different learning styles. The teaching methods only included traditional classroom-style delivery. CADET training did accommodate different learning styles, delivering training through a wide variety of formats including video, audio, assessment, assignment, and face-to-face training. The research focused on the importance of addressing different learning behaviors. The study suggested that CADET is more adequate in preparing students to do their job duties. When the Student Certification survey and the CADET survey were compared, CADET training was rated as more adequate in 26 out of the 27 training sessions. The results suggested that learning style accommodation is directly related to the success of the CADET training program over the Student Certification program.
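The statistical comparison described above is essentially a two-sample t-test. The sketch below shows how such a comparison could be run; the survey scores are fabricated placeholders, not data from the WKU study.

```python
# Illustrative sketch of a two-sample comparison of training-program ratings.
from scipy import stats

# hypothetical per-session adequacy ratings from the two surveys
student_certification_scores = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7]
cadet_scores                 = [3.9, 4.1, 3.8, 4.3, 4.0, 3.7, 4.2, 3.9]

t_stat, p_value = stats.ttest_ind(cadet_scores, student_certification_scores,
                                  equal_var=False)   # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between the training programs is statistically significant.")
```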
APA, Harvard, Vancouver, ISO, and other styles
28

Lin, Yu-Luen. "Solicitation Management System." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/2976.

Full text
Abstract:
This project updated the California State University, San Bernardino's Office of Technology Transfer and Commercialization's Solicitation Management System (SMS) software, used to facilitate the processing of grant proposal solicitations. The SMS software update improved the interface so that it is more user-friendly, increased the processing speed, and added additional functions necessary to comply with new requirements. The software was rewritten using the Spring and Hibernate frameworks.
APA, Harvard, Vancouver, ISO, and other styles
29

Ferro, Milene [UNESP]. "Desenvolvimento de Pipelines para análise de EST, bibliotecas ITS e 16S para estudos genômicos em formigas Attini." Universidade Estadual Paulista (UNESP), 2013. http://hdl.handle.net/11449/100547.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
The Attini ants are agricultural pests and invertebrate models for studies on genomics, metagenomics, molecular systematics and evolution. These studies have generated thousands of nucleotide sequences whose analysis requires the use of different Bioinformatics tools and automated processing. We developed automated sequence annotation protocols for the analysis of ESTs and of ITS and 16S libraries from Attini ants. We developed an architecture based on the J2EE and MVC patterns which is responsible for all client-server processes, and each tool in the pipeline is a REST web service coupled to this architecture. The bESTscan pipeline performs EST transcriptome annotation, while 16Scan and ITScan carry out microbial ecology analyses for bacteria (16S sequences) and for fungi (ITS sequences) respectively, as well as identifying sequences through searches of public databases. The computer system developed automates annotation both for a small volume of sequences, such as those obtained by Sanger sequencing, and for the large volumes of data generated by Next Generation Sequencers (NGS), reducing processing time and facilitating the analysis of results. All pipelines were developed and validated using Attini sequences, obtained by Sanger and Illumina sequencing in different projects from our laboratory, and are available on the web at http://evol.rc.unesp.br/lem/?q=bioinformatics
APA, Harvard, Vancouver, ISO, and other styles
30

Ferro, Milene. "Desenvolvimento de Pipelines para análise de EST, bibliotecas ITS e 16S para estudos genômicos em formigas Attini /." Rio Claro, 2013. http://hdl.handle.net/11449/100547.

Full text
Abstract:
Advisor: Maurício Bacci Júnior
Committee member: André Rodrigues
Committee member: Henrique Ferreira
Committee member: Flávio Henrique da Silva
Committee member: Claudio José Von Zuben
Abstract: The Attini ants are agricultural pests and invertebrate models for studies on genomics, metagenomics, molecular systematics and evolution. These studies have generated thousands of nucleotide sequences whose analysis requires the use of different Bioinformatics tools and automated processing. We developed automated sequence annotation protocols for the analysis of ESTs and of ITS and 16S libraries from Attini ants. We developed an architecture based on the J2EE and MVC patterns which is responsible for all client-server processes, and each tool in the pipeline is a REST web service coupled to this architecture. The bESTscan pipeline performs EST transcriptome annotation, while 16Scan and ITScan carry out microbial ecology analyses for bacteria (16S sequences) and for fungi (ITS sequences) respectively, as well as identifying sequences through searches of public databases. The computer system developed automates annotation both for a small volume of sequences, such as those obtained by Sanger sequencing, and for the large volumes of data generated by Next Generation Sequencers (NGS), reducing processing time and facilitating the analysis of results. All pipelines were developed and validated using Attini sequences, obtained by Sanger and Illumina sequencing in different projects from our laboratory, and are available on the web at http://evol.rc.unesp.br/lem/?q=bioinformatics
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
31

Bebek, Gurkan. "Functional Characteristics of Cancer Driver Genes in Colorectal Cancer." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1495012693440067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mark, Rutenberg Richard. "COMPUTER IMAGE ANALYSIS BASED QUANTIFICATION OF COMPARATIVE IHC LEVELS OF P53 AND SIGNALING ASSOCIATED WITH THE DNA DAMAGE REPAIR PATHWAY DISCRIMINATES BETWEEN INFLAMMATORY AND DYSPLASTIC CELLULAR ATYPIA." Cleveland State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=csu1586182859848301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Yuepeng. "Integrative methods for gene data analysis and knowledge discovery on the case study of KEDRI's brain gene ontology a thesis submitted to Auckland University of Technology in partial fulfilment of the requirements for the degree of Master of Computer and Information sciences, 2008 /." Click here to access this resource online, 2008. http://hdl.handle.net/10292/467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Choudhury, Salimur Rashid. "Approximation algorithms for a graph-cut problem with applications to a clustering problem in bioinformatics." Thesis, Lethbridge, Alta.: University of Lethbridge, Department of Mathematics and Computer Science, 2008. http://hdl.handle.net/10133/774.

Full text
Abstract:
Clusters in protein interaction networks can potentially help identify functional relationships among proteins. We study the clustering problem by modeling it as graph cut problems. Given an edge weighted graph, the goal is to partition the graph into a prescribed number of subsets obeying some capacity constraints, so as to maximize the total weight of the edges that are within a subset. Identification of a dense subset might shed some light on the biological function of all the proteins in the subset. We study integer programming formulations and exhibit large integrality gaps for various formulations. This is indicative of the difficulty in obtaining constant factor approximation algorithms using the primal-dual schema. We propose three approximation algorithms for the problem. We evaluate the algorithms on the database of interacting proteins and on randomly generated graphs. Our experiments show that the algorithms are fast and have good performance ratio in practice.
xiii, 71 leaves : ill. ; 29 cm.
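As a concrete illustration of the underlying optimization problem, the sketch below greedily assigns vertices to a fixed number of capacity-bounded parts so as to favour heavy intra-part edges. It is a simple stand-in, not one of the three approximation algorithms proposed in the thesis; the random graph and the capacity are invented.

```python
# Toy sketch: greedily assign vertices to k capacity-bounded parts so as to
# maximize the total weight of edges kept inside a part.
import random

def greedy_partition(weights, n, k, capacity):
    """weights: dict {(u, v): w} with u < v; returns list part[i] for each vertex."""
    part = [None] * n
    load = [0] * k
    # place heavily connected vertices first
    order = sorted(range(n), key=lambda u: -sum(w for e, w in weights.items() if u in e))
    for u in order:
        def gain(p):
            return sum(w for (a, b), w in weights.items()
                       if (a == u and part[b] == p) or (b == u and part[a] == p))
        feasible = [p for p in range(k) if load[p] < capacity]
        best = max(feasible, key=gain)
        part[u] = best
        load[best] += 1
    return part

def intra_weight(weights, part):
    return sum(w for (u, v), w in weights.items() if part[u] == part[v])

random.seed(0)
n, k = 12, 3
weights = {(u, v): random.random() for u in range(n) for v in range(u + 1, n)
           if random.random() < 0.4}
part = greedy_partition(weights, n, k, capacity=4)
print("intra-cluster weight:", round(intra_weight(weights, part), 3))
```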
APA, Harvard, Vancouver, ISO, and other styles
35

Kuntala, Prashant Kumar. "Optimizing Biomarkers From an Ensemble Learning Pipeline." Ohio University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1503592057943043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Stanfield, Zachary. "Comprehensive Characterization of the Transcriptional Signaling of Human Parturition through Integrative Analysis of Myometrial Tissues and Cell Lines." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1562863761406809.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Robison, Pamula J. "Mathematical Modelling of Biofilm Growth and Decay Through Various Deliveries of Antimicrobial." University of Akron / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=akron1258769688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Giedt, Randy James. "Mitochondrial Network Dynamics in Vascular Endothelial Cells Exposed to Mechanochemical Stimuli: Experimental and Mathematical Analysis." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1333985787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sweeney, Deacon John. "A Computational Tool for Biomolecular Structure Analysis Based On Chemical and Enzymatic Modification of Native Proteins." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1316440232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Goyal, Ayush. "Vasculature reconstruction from 3D cryomicrotome images." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:aa9e500b-a0a4-48f3-8cb8-e75bbcc775e9.

Full text
Abstract:
Background: Research in heart disease can be aided by modelling myocardial hemodynamics with knowledge of coronary pressure and vascular resistance measured from the geometry and morphometry of coronary vasculature. This study presents methods to automatically reconstruct accurate detailed coronary vascular anatomical models from high-resolution three-dimensional optical fluorescence cryomicrotomography image volumes for simulating blood flow in coronary arterial trees. Methods: Images of fluorescent cast and bead particles perfused into the same heart comprise the vasculature and microsphere datasets, employed in a novel combined approach to measure vasculature and simulate a flow model on the extracted coronary vascular tree for estimating regional myocardial perfusion. The microspheres are used in two capacities - as fiducial biomarker point sources for measuring the image formation in order to accurately measure the vasculature dataset and as flowing particles for measuring regional myocardial perfusion through the reconstructed vasculature. A new model-based template-matching method of vascular radius estimation is proposed that incorporates a model of the optical fluorescent image formation measured from the microspheres and a template of the vessels’ tubular geometry. Results: The new method reduced the error in vessel radius estimation from 42.9% to 0.6% in a 170 micrometer vessel as compared to the Full-Width Half Maximum method. Whole-organ porcine coronary vascular trees, automatically reconstructed with the proposed method, contained on the order of 92,000+ vessel segments in the range 0.03 – 1.9 mm radius. Discrepancy between the microsphere perfusion measurements and regional flow estimated with a 1-D steady state linear static blood flow simulation on the reconstructed vasculature was modelled with daughter-to-parent area ratio and branching angle as the parameters. Correcting the flow simulation by incorporating this model of disproportionate distribution of microspheres reduced the error from 24% to 7.4% in the estimation of fractional microsphere distribution in oblique branches with angles of 100°-120°.
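One building block of such a flow model can be illustrated with Poiseuille resistances for a short chain of vessel segments, as in the Python sketch below. The radii, lengths, viscosity and pressure drop are invented values, and the snippet is not the study's simulation code.

```python
# Minimal sketch of 1-D steady-state flow through series vessel segments using
# Poiseuille's law; all numbers are illustrative assumptions.
import math

MU = 3.5e-3   # blood viscosity, Pa*s (assumed constant here)

def poiseuille_resistance(radius_m, length_m):
    """Hydraulic resistance of a cylindrical segment, R = 8*mu*L / (pi * r^4)."""
    return 8.0 * MU * length_m / (math.pi * radius_m ** 4)

# a short path of parent-to-daughter segments: (radius in m, length in m)
segments = [(0.9e-3, 10e-3), (0.5e-3, 8e-3), (0.2e-3, 5e-3)]
total_R = sum(poiseuille_resistance(r, l) for r, l in segments)
pressure_drop = 40 * 133.322                 # 40 mmHg converted to Pa
flow_m3_s = pressure_drop / total_R
print(f"series resistance = {total_R:.3e} Pa*s/m^3, flow = {flow_m3_s * 6e7:.3f} mL/min")
```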
APA, Harvard, Vancouver, ISO, and other styles
41

Nakrani, Sunil. "Biomimetic and autonomic server ensemble orchestration." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.534214.

Full text
Abstract:
This thesis addresses orchestration of servers amongst multiple co-hosted internet services such as e-Banking, e-Auction and e-Retail in hosting centres. The hosting paradigm entails levying fees for hosting third party internet services on servers at guaranteed levels of service performance. The orchestration of the server ensemble in hosting centres is considered in the context of maximising the hosting centre's revenue over a lengthy time horizon. The inspiration for the server orchestration approach proposed in this thesis is drawn from nature and is generally classed as swarm intelligence: specifically, the sophisticated collective behaviour of social insects, borne out of primitive interactions amongst members of the group, that solves problems beyond the capability of individual members. Consequently, the approach is self-organising, adaptive and robust. A new scheme for server ensemble orchestration is introduced in this thesis. This scheme exploits the many similarities between server orchestration in an internet hosting centre and forager allocation in a honeybee (Apis mellifera) colony. The scheme mimics the way a honeybee colony distributes foragers amongst flower patches to maximise nectar influx, to orchestrate servers amongst hosted internet services to maximise revenue. The scheme is extended by further exploiting inherent feedback loops within the colony to introduce self-tuning and energy-aware server ensemble orchestration. In order to evaluate the new server ensemble orchestration scheme, a collection of server ensemble orchestration methods is developed, including a classical technique that relies on past history to make time varying orchestration decisions and two theoretical techniques that omnisciently make optimal time varying orchestration decisions or an optimal static orchestration decision based on complete knowledge of the future. The efficacy of the new biomimetic scheme is assessed in terms of adaptiveness and versatility. The performance study uses representative classes of internet traffic stream behaviour, service users' behaviour, demand intensity, multiple services co-hosting as well as differentiated hosting fee schedules. The biomimetic orchestration scheme is compared with the classical and the theoretical optimal orchestration techniques in terms of revenue stream. This study reveals that the new server ensemble orchestration approach is adaptive in widely varying external internet environments. The study also highlights the versatility of the biomimetic approach over the classical technique. The self-tuning scheme improves on the original performance. The energy-aware scheme is able to conserve significant energy with minimal revenue performance degradation. The simulation results also indicate that the new scheme is competitive with, or better than, the classical and static methods.
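The forager-inspired allocation idea can be caricatured in a few lines: servers follow revenue the way foragers follow nectar. The sketch below is a simplified invention for illustration; the actual scheme in the thesis, with its feedback mechanisms, is considerably richer.

```python
# Toy sketch of honeybee-style allocation: servers are re-allocated to co-hosted
# services in proportion to each service's recent revenue per server, the way
# foragers concentrate on more profitable flower patches. Names and numbers are invented.
def reallocate(servers_total, revenue_per_server):
    """revenue_per_server: dict service -> recent revenue earned per server."""
    total_profitability = sum(revenue_per_server.values())
    allocation = {s: max(1, round(servers_total * r / total_profitability))
                  for s, r in revenue_per_server.items()}
    # fix rounding so the allocation uses exactly servers_total servers
    while sum(allocation.values()) > servers_total:
        allocation[max(allocation, key=allocation.get)] -= 1
    while sum(allocation.values()) < servers_total:
        allocation[max(revenue_per_server, key=revenue_per_server.get)] += 1
    return allocation

demand = {"e-Banking": 4.2, "e-Auction": 1.1, "e-Retail": 2.7}   # revenue per server
print(reallocate(50, demand))
```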
APA, Harvard, Vancouver, ISO, and other styles
42

Braman, Nathaniel. "Novel Radiomics and Deep Learning Approaches Targeting the Tumor Environment to Predict Response to Chemotherapy." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586546527544791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Blomquist, Thomas M. "Development of Bimodal Gene Expression Analysis and Allele-Specific Competitive PCR for Investigation of Complex Genetic Traits, Lung Cancer Risk." University of Toledo Health Science Campus / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=mco1277151230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ausdenmoore, Benjamin D. "Synaptic contact localization in three dimensional space using a Center Distance Algorithm." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1320866829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Krohn, Jonathan Jacob Pastushchyn. "Genes contributing to variation in fear-related behaviour." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:1e8e40bd-9a98-405f-9463-e9423f0a60ca.

Full text
Abstract:
Anxiety and depression are highly prevalent diseases with common heritable elements, but the particular genetic mechanisms and biological pathways underlying them are poorly understood. Part of the challenge in understanding the genetic basis of these disorders is that they are polygenic and often context-dependent. In my thesis, I apply a series of modern statistical tools to ascertain some of the myriad genetic and environmental factors that underlie fear-related behaviours in nearly two thousand heterogeneous stock mice, which serve as animal models of anxiety and depression. Using a Bayesian method called Sparse Partitioning and a frequentist method called Bagphenotype, I identify gene-by-sex interactions that contribute to variation in fear-related behaviours, such as those displayed in the elevated plus maze and the open field test, although I demonstrate that the contributions are generally small. Also using Bagphenotype, I identify hundreds of gene-by-environment interactions related to these traits. The interacting environmental covariates are diverse, ranging from experimenter to season of the year. With gene expression data from a brain structure associated with anxiety called the hippocampus, I generate modules of co-expressed genes and map them to the genome. Two of these modules were enriched for key nervous system components — one for dendritic spines, another for oligodendrocyte markers — but I was unable to find significant correlations between them and fear-related behaviours. Finally, I employed another Bayesian technique, Sparse Instrumental Variables, which takes advantage of conditional probabilities to identify hippocampus genes whose expression appears not just to be associated with variation in fear-related behaviours, but cause variation in those phenotypes.
APA, Harvard, Vancouver, ISO, and other styles
46

REIS JUNIOR, JOSE S. B. "Métodos e softwares para análise da produção científica e detecção de frentes emergentes de pesquisa." Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26929.

Full text
Abstract:
Progress in earlier projects highlighted the need to address the problem of software for detecting emerging research and development trends from databases of scientific publications. A shortage of efficient computational applications dedicated to this purpose became evident, even though such tools are highly useful for better planning of research and development programs in institutions. A review of the currently available software was therefore carried out in order to clearly delineate the opportunity to develop new tools. As a result, an application called Citesnake was implemented, designed specifically to support the detection and study of emerging trends from the analysis of several types of networks extracted from scientific databases. Using this robust and effective computational tool, analyses of emerging research and development fronts were conducted in the area of Generation IV Nuclear Energy Systems, in order to identify, among the reactor types selected as most promising by the GIF - Generation IV International Forum, those that have developed the most over the last ten years and that currently appear most capable of fulfilling the promises made about their innovative concepts.
Dissertation (Master's in Nuclear Technology)
IPEN/D
Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
47

Boisson, Jean-Charles. "Modélisation et résolution par métaheuristiques coopératives : de l'atome à la séquence protéique." PhD thesis, Lille 1, 2008. http://tel.archives-ouvertes.fr/tel-00842054.

Full text
Abstract:
Throughout this thesis, we show the importance of modelling and of cooperation between metaheuristics for solving real problems in bioinformatics. To this end, two problems were studied: the first in the field of proteomics, for protein identification from spectral data, and the second in the field of molecular structural analysis, for the flexible molecular docking problem. For the first problem, a new model based on a direct comparison of protein databases with the raw experimental data was put in place. The associated approach was integrated into a peptide mass fingerprinting identification engine called ASCQ_ME. This identification model then made it possible to propose and validate a model for the "de novo protein sequencing" problem, which consists of recovering the sequence of a protein from the experimental data alone. It is a three-step model called SSO, for "Sequence", "Shape" and "Order". After a study of each of these steps, SSO was implemented and tested through three metaheuristics collaborating sequentially. For the second problem, a study of new multi-objective models was carried out and led to the definition of a set of eight different models tested with parallel multi-objective genetic algorithms. A dozen configurations of genetic operators were tested in order to demonstrate the effectiveness of hybridizing genetic algorithms with local searches. For each part, the implementation and the setting up of the collaborations were made possible by the ParadisEO platform, and in particular by my contributions to the ParadisEO-MO part dedicated to single-solution-based metaheuristics. This work was supported by the PPF BioInformatique of the Université des Sciences et Technologies de Lille and the ANR Dock project.
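As an illustration of the peptide mass fingerprinting comparison that an identification engine such as ASCQ_ME builds on, the sketch below scores candidate proteins by counting matched peptide masses. The masses, tolerance and scoring are invented simplifications, not the engine's actual model.

```python
# Toy sketch of peptide mass fingerprinting: score each candidate protein by how
# many of its theoretical peptide masses match peaks in the observed spectrum.
def pmf_score(observed_masses, theoretical_masses, tol=0.5):
    """Count observed peaks matching a theoretical peptide mass within +/- tol Da."""
    return sum(any(abs(o - t) <= tol for t in theoretical_masses) for o in observed_masses)

spectrum = [842.5, 1045.6, 1179.6, 1475.8, 2211.1]          # invented peptide masses (Da)
candidates = {
    "protein_A": [842.51, 1045.55, 1300.2, 1475.79, 2211.08],
    "protein_B": [900.4, 1179.64, 1502.7, 2100.9],
}
ranked = sorted(candidates, key=lambda p: pmf_score(spectrum, candidates[p]), reverse=True)
for p in ranked:
    print(p, pmf_score(spectrum, candidates[p]))
```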
APA, Harvard, Vancouver, ISO, and other styles
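
The thesis above builds its cooperative metaheuristics on the ParadisEO C++ framework; as a rough, language-agnostic illustration of the hybridization it mentions (a genetic algorithm whose offspring are refined by local search), the Python sketch below runs a minimal memetic algorithm on a toy bit-string objective. The OneMax objective and all parameter values are placeholders, not the thesis's actual protein-sequencing or docking objectives.

import random

TARGET_LEN = 40

def fitness(bits):
    # Toy objective (OneMax): count of 1s. In a setting like the one
    # described, fitness would instead score candidate sequences or
    # docking poses against experimental data.
    return sum(bits)

def local_search(bits, tries=10):
    # Simple first-improvement hill climbing: flip one bit at a time.
    best = list(bits)
    for _ in range(tries):
        i = random.randrange(len(best))
        candidate = list(best)
        candidate[i] ^= 1
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def memetic_ga(pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(local_search(crossover(a, b)))  # hybridization step
        pop = parents + children
    return max(pop, key=fitness)

print("best fitness:", fitness(memetic_ga()))

The point of the sketch is only the structure: selection and crossover explore globally while the embedded local search intensifies each offspring, which is the kind of GA/local-search cooperation the abstract reports as effective.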
48

Desai, Akshay A. "Data analysis and creation of epigenetics database." Thesis, 2014. http://hdl.handle.net/1805/4452.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
This thesis is aimed at creating a pipeline for analyzing DNA methylation epigenetics data and at creating a data model structured well enough to store the analysis results of that pipeline. In addition to storing the results, the model is designed to hold information that helps researchers derive meaningful epigenetic insight from them. Current major epigenetics resources such as PubMeth, MethyCancer, MethDB and NCBI's Epigenomics database fail to provide a holistic view of epigenetics. They provide datasets produced by different analysis techniques, which raises the important issue of data integration, and they omit numerous factors defining the epigenetic nature of a gene. Some of these resources are also struggling to keep the data stored in their databases up to date, which has diminished their validity and their coverage of epigenetics data. In this thesis we tackle a major branch of epigenetics: DNA methylation. As a case study to demonstrate the effectiveness of our pipeline, we used stage-wise DNA methylation and expression raw data for lung adenocarcinoma (LUAD) from the TCGA data repository. The pipeline helped us identify progressive methylation patterns across different stages of LUAD and pointed to some key targets with potential as drug targets. Along with the results from the methylation data analysis pipeline, we combined data from various online data reserves such as the KEGG, GO, UCSC and BioGRID databases, which helped us overcome the shortcomings of existing data collections and present a resource as a complete solution for studying DNA methylation epigenetics data.
APA, Harvard, Vancouver, ISO, and other styles
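
As a hedged sketch of one step such a pipeline might contain, the snippet below computes a simple delta-beta effect size between early- and late-stage samples with pandas. The probe IDs, sample names, thresholds and values are invented for illustration; the actual pipeline described in the thesis works on full TCGA LUAD matrices and adds statistics and annotation from KEGG, GO, UCSC and BioGRID on top of a step like this.

import pandas as pd

# Hypothetical input: rows are CpG probes, columns are methylation beta
# values (0 = unmethylated, 1 = fully methylated) for samples grouped by stage.
beta = pd.DataFrame(
    {
        "stage_I_s1": [0.10, 0.80, 0.45],
        "stage_I_s2": [0.12, 0.78, 0.50],
        "stage_III_s1": [0.55, 0.30, 0.48],
        "stage_III_s2": [0.60, 0.25, 0.52],
    },
    index=["cg0001", "cg0002", "cg0003"],
)

early = beta.filter(like="stage_I_").mean(axis=1)
late = beta.filter(like="stage_III_").mean(axis=1)

# Delta-beta is a common, simple effect size for differential methylation.
delta = (late - early).rename("delta_beta")
hyper = delta[delta > 0.2]    # progressively hypermethylated probes
hypo = delta[delta < -0.2]    # progressively hypomethylated probes
print(hyper, hypo, sep="\n")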
49

Van, der Vyver John. "Prescriptive computer-assisted learning environments from a teaching perspective." Thesis, 2014. http://hdl.handle.net/10210/9517.

Full text
Abstract:
D.Ed. (Media Science)
The education system in South Africa is currently in a state of flux, and various strategies are being investigated to address and redress inequities in the system. Many would see the computer as playing a significant role in this process. The problem, however, is not whether but how to use the computer effectively and appropriately in the classroom. Should the computer merely be used as a record-keeping facility, as a tool to assist the teacher, or as an instrument for helping learners develop their full potential as human beings (Schostak, 1988:147; HSRC, 1983a:38; HSRC, 1983b:163)? Computer-assisted learning environments have variously been described as the best of learning environments and the worst of learning environments. No doubt, opinions as to the value of such environments can be found at every point along the continuum that joins these two contradictory viewpoints (Doulton, 1984; Hart, 1984; Roach, 1984; Merrill, Tolman, Christensen, Hammons, Vincent & Reynolds, 1986:279). The purpose of this study is to systematically examine the literature regarding one of these computer-assisted learning environments and to describe its underlying theoretical assumptions, in order to assess its significance for education and to provide guidelines for the development and evaluation of software that can be used in that learning environment...
APA, Harvard, Vancouver, ISO, and other styles
50

Levine, Jacob Harrison. "Learning cell states from high-dimensional single-cell data." Thesis, 2016. https://doi.org/10.7916/D88052DM.

Full text
Abstract:
Recent developments in single-cell measurement technologies have yielded dramatic increases in throughput (measured cells per experiment) and dimensionality (measured features per cell). In particular, the introduction of mass cytometry has made possible the simultaneous quantification of dozens of protein species in millions of individual cells in a single experiment. The raw data produced by such high-dimensional single-cell measurements provide unprecedented potential to reveal the phenotypic heterogeneity of cellular systems. In order to realize this potential, novel computational techniques are required to extract knowledge from these complex data. Analysis of single-cell data is a new challenge for computational biology, as early development in the field was tailored to technologies that sacrifice single-cell resolution, such as DNA microarrays. The challenges for single-cell data are quite distinct and require multidimensional modeling of complex population structure; particular challenges include nonlinear relationships between measured features and non-convex subpopulations. This thesis integrates methods from computational geometry and network analysis to develop a framework for identifying the population structure in high-dimensional single-cell data. At the center of this framework is PhenoGraph, an algorithmic approach to defining subpopulations, which, when applied to healthy bone marrow data, was shown to reconstruct known immune cell types automatically and without prior information. PhenoGraph demonstrated superior accuracy, robustness, and efficiency compared with other methods. The data-driven approach becomes truly powerful when applied to less characterized systems, such as malignancies, in which the tissue diverges from its healthy population composition. Applying PhenoGraph to bone marrow samples from a cohort of acute myeloid leukemia (AML) patients, the thesis presents several insights into the pathophysiology of AML, extracted by virtue of the computational isolation of leukemic subpopulations. For example, it is shown that leukemic subpopulations diverge from healthy bone marrow but not without bound: leukemic cells are apparently free to explore only a restricted phenotypic space that mimics normal myeloid development. Further, the phenotypic composition of a sample is associated with its cytogenetics, demonstrating a genetic influence on the population structure of leukemic bone marrow. The thesis goes on to show that the functional heterogeneity of leukemic samples can be computationally inferred from molecular perturbation data. Using a variety of methods that build on PhenoGraph's foundations, the thesis presents a characterization of leukemic subpopulations based on an inferred stem-like signaling pattern. Through this analysis, it is shown that surface phenotypes often fail to reflect the true underlying functional state of the subpopulation, and that this functional stem-like state is in fact a powerful predictor of survival in large, independent cohorts. Altogether, the thesis takes the existence and importance of cellular heterogeneity as its starting point and presents a mathematical framework and computational toolkit for analyzing samples from this perspective. It shows that phenotypic and functional heterogeneity are robust characteristics of acute myeloid leukemia, with clinically significant ramifications.
APA, Harvard, Vancouver, ISO, and other styles
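
PhenoGraph, as described above, builds a k-nearest-neighbour graph over cells and partitions it with community detection. The Python sketch below reproduces only the general shape of that idea on random toy data, using scikit-learn for the kNN graph and networkx's greedy modularity heuristic in place of the Jaccard-weighted Louvain procedure PhenoGraph itself uses; the marker count, cell count and k are arbitrary choices for the example.

import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy "cells": 200 cells x 10 markers, drawn from two shifted Gaussians
# standing in for two phenotypic subpopulations.
cells = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 10)),
    rng.normal(3.0, 1.0, size=(100, 10)),
])

# Step 1: k-nearest-neighbour graph over cells in marker space.
k = 15
nn = NearestNeighbors(n_neighbors=k).fit(cells)
_, idx = nn.kneighbors(cells)

graph = nx.Graph()
graph.add_nodes_from(range(len(cells)))
for i, neighbours in enumerate(idx):
    for j in neighbours[1:]:          # skip the cell itself
        graph.add_edge(i, int(j))

# Step 2: partition the graph into communities (candidate subpopulations).
communities = nx.algorithms.community.greedy_modularity_communities(graph)
for c, members in enumerate(communities):
    print(f"community {c}: {len(members)} cells")

On data shaped like this, the two planted subpopulations should emerge as the two largest communities; the published method differs in its edge weighting and partitioning algorithm but follows the same graph-then-community-detection structure.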
