Dissertations / Theses on the topic 'Brain – Computer simulation'

To see the other types of publications on this topic, follow the link: Brain – Computer simulation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 28 dissertations / theses for your research on the topic 'Brain – Computer simulation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Stetner, Michael E. "Improving decoding in intracortical brain-machine interfaces." Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=case1254235417.

Full text
2

Mundy, Andrew. "Real time Spaun on SpiNNaker : functional brain simulation on a massively-parallel computer architecture." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/real-time-spaun-on-spinnaker--functional-brain-simulation-on-a-massivelyparallel-computer-architecture(fcf5388c-4893-4b10-a6b4-577ffee2d562).html.

Full text
Abstract:
Model building is a fundamental scientific tool. Increasingly there is interest in building neurally-implemented models of cognitive processes with the intention of modelling brains. However, simulation of such models can be prohibitively expensive in both the time and energy required. For example, Spaun - "the world's first functional brain model", comprising 2.5 million neurons - required 2.5 hours of computation for every second of simulation on a large compute cluster. SpiNNaker is a massively parallel, low power architecture specifically designed for the simulation of large neural models in biological real time. Ideally, SpiNNaker could be used to facilitate rapid simulation of models such as Spaun. However, the Neural Engineering Framework (NEF), with which Spaun is built, maps poorly to the architecture - to the extent that models such as Spaun would consume vast portions of SpiNNaker machines and still not run as fast as biology. This thesis investigates whether real time simulation of Spaun on SpiNNaker is at all possible. Three techniques which facilitate such a simulation are presented. The first reduces the memory, compute and network loads consumed by the NEF. Consequently, it is demonstrated that simulating a core component of the Spaun network requires only a twentieth of the cores that would otherwise have been needed. The second technique uses a small number of additional cores to significantly reduce the network traffic required to simulate this core component. As a result, simulation in real time is shown to be feasible. The final technique is a novel logic minimisation algorithm which reduces the size of the routing tables that are used to direct information around the SpiNNaker machine. This last technique is necessary to allow the routing of models of the scale and complexity of Spaun. Together these provide the ability to simulate the Spaun model in biological real time - representing a speed-up of 9000 times over previously reported results - with room for much larger models on full-scale SpiNNaker machines.
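
A rough numerical illustration of why the first technique helps (a minimal numpy sketch with hypothetical population sizes; the real NEF and SpiNNaker details differ): an NEF connection weight matrix factors as W = E·D, so a simulator can store and communicate the two small factors and a d-dimensional decoded value instead of the full matrix.

    import numpy as np

    n_pre, n_post, d = 1000, 1000, 4   # hypothetical population sizes, low dimension

    E = np.random.randn(n_post, d)     # encoders of the receiving population
    D = np.random.randn(d, n_pre)      # decoders of the sending population
    a = np.random.rand(n_pre)          # firing activities of the sending population

    # Full-weight approach: store and apply an n_post x n_pre matrix.
    W = E @ D
    j_full = W @ a

    # Factored approach: decode a d-dimensional value, then encode it.
    x = D @ a                          # d values cross the network, not n_post currents
    j_fact = E @ x

    assert np.allclose(j_full, j_fact)
    # Storage: n_post * n_pre = 1,000,000 weights vs (n_post + n_pre) * d = 8,000
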
3

Quek, Melissa. "The role of simulation in developing and designing applications for 2-class motor imagery brain-computer interfaces." Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4503/.

Full text
Abstract:
A Brain-Computer Interface (BCI) can be used by people with severe physical disabilities such as Locked-in Syndrome (LiS) as a channel of input to a computer. The time-consuming nature of setting up and using a BCI, together with individual variation in performance and limited access to end users makes it difficult to employ techniques such as rapid prototyping and user centred design (UCD) in the design and development of applications. This thesis proposes a design process which incorporates the use of simulation tools and techniques to improve the speed and quality of designing BCI applications for the target user group. Two different forms of simulation can be distinguished: offline simulation aims to make predictions about a user’s performance in a given application interface given measures of their baseline control characteristics, while online simulation abstracts properties of interaction with a BCI system which can be shown to, or used by, a stakeholder in real time. Simulators that abstract properties of BCI control at different levels are useful for different purposes. Demonstrating the use of offline simulation, Chapter 3 investigates the use of finite state machines (FSMs) to predict the time to complete tasks given a particular menu hierarchy, and compares offline predictions of task performance with real data in a spelling task. Chapter 5 aims to explore the possibility of abstracting a user’s control characteristics from a typical calibration task to predict performance in a novel control paradigm. Online simulation encompasses a range of techniques from low-fidelity prototypes built using paper and cardboard, to computer simulation models that aim to emulate the feel of control of using a BCI without actually needing to put on the BCI cap. Chapter 4 details the development and evaluation of a high fidelity BCI simulator that models the control characteristics of a BCI based on the motor-imagery (MI) paradigm. The simulation tools and techniques can be used at different stages of the application design process to reduce the level of involvement of end users while at the same time striving to employ UCD principles. It is argued that prioritising the level of involvement of end users at different stages in the design process is an important strategy for design: end user input is paramount particularly at the initial user requirements stage where the goals that are important for the end user of the application can be ascertained. The interface and specific interaction techniques can then be iteratively developed through both real and simulated BCI with people who have no or less severe physical disabilities than the target end user group, and evaluations can be carried out with end users at the final stages of the process. Chapter 6 provides a case study of using the simulation tools and techniques in the development of a music player application. Although the tools discussed in the thesis specifically concern a 2-class Motor Imagery BCI which uses the electroencephalogram (EEG) to extract brain signals, the simulation principles can be expected to apply to a range of BCI systems.
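
As a minimal illustration of the offline-simulation idea, the sketch below (our construction, not the thesis's actual FSM; the two-level menu, selection accuracy and trial duration are assumed values) treats the menu hierarchy as an absorbing Markov chain and predicts expected task completion time from a user's baseline selection accuracy.

    import numpy as np

    # Hypothetical 2-class BCI menu: two correct selections reach the target;
    # each selection is recognised with probability p, an error drops back a level.
    p = 0.85

    # Transient states: 0 = root, 1 = submenu; the target state is absorbing.
    Q = np.array([[1 - p, p],      # from root: error stays at root
                  [1 - p, 0.0]])   # from submenu: error returns to root
    # Expected selections from each state: t = (I - Q)^(-1) * 1
    t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    secs_per_selection = 5.0       # assumed duration of one selection
    print(f"predicted task time from root: {t[0] * secs_per_selection:.1f} s")

Varying p in such a model gives a quick offline estimate of how strongly a user's control accuracy shapes task time before any online session is run.
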
4

Grieve, Stuart Michael. "Development of fast magnetic resonance imaging methods for investigation of the brain." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365824.

Full text
5

Stetner, Michael E. "Improving decoding in intracortical brain-machine interfaces." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1254235417.

Full text
6

Hutt, Axel. "The study of neural oscillations by traversing scales in the brain." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00603975.

Full text
7

Hashemi, Fatemeh Sadat. "Sampling Controlled Stochastic Recursions: Applications to Simulation Optimization and Stochastic Root Finding." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/76740.

Full text
Abstract:
We consider unconstrained Simulation Optimization (SO) problems, that is, optimization problems where the underlying objective function is unknown but can be estimated at any chosen point by repeatedly executing a Monte Carlo (stochastic) simulation. SO, introduced more than six decades ago through the seminal work of Robbins and Monro (and later by Kiefer and Wolfowitz), has recently generated much attention. Such interest is primarily because of SO's flexibility, allowing the implicit specification of functions within the optimization problem, thereby providing the ability to embed virtually any level of complexity. The result of such versatility has been evident in SO's ready adoption in fields as varied as finance, logistics, healthcare, and telecommunication systems. While SO has become popular over the years, Robbins and Monro's original stochastic approximation algorithm and its numerous modern incarnations have seen only mixed success in solving SO problems. The primary reason for this is stochastic approximation's explicit reliance on a sequence of algorithmic parameters to guarantee convergence. The theory for choosing such parameters is now well-established, but most such theory focuses on asymptotic performance. Automatically choosing parameters to ensure good finite-time performance has remained vexingly elusive, as evidenced by continuing efforts six decades after the introduction of stochastic approximation! The other popular paradigm to solve SO is what has been called sample-average approximation. Sample-average approximation, more a philosophy than an algorithm to solve SO, attempts to leverage advances in modern nonlinear programming by first constructing a deterministic approximation of the SO problem using a fixed sample size, and then applying an appropriate nonlinear programming method. Sample-average approximation is reasonable as a solution paradigm but again suffers from finite-time inefficiency because of the simplistic manner in which sample sizes are prescribed. It turns out that in many SO contexts, the effort expended to execute the Monte Carlo oracle is the single most computationally expensive operation. Sample-average approximation essentially ignores this issue since, irrespective of where in the search space an incumbent solution resides, prescriptions for sample sizes within sample-average approximation remain the same. Like stochastic approximation, notwithstanding beautiful asymptotic theory, sample-average approximation suffers from the lack of automatic implementations that guarantee good finite-time performance. In this dissertation, we ask: can advances in algorithmic nonlinear programming theory be combined with intelligent sampling to create solution paradigms for SO that perform well in finite time while exhibiting asymptotically optimal convergence rates? We propose and study a general solution paradigm called Sampling Controlled Stochastic Recursion (SCSR). Two simple ideas are central to SCSR: (i) use any recursion, particularly one that you would use (e.g., Newton and quasi-Newton, fixed-point, trust-region, and derivative-free recursions) if the functions involved in the problem were known through a deterministic oracle; and (ii) estimate objects appearing within the recursions (e.g., function derivatives) using Monte Carlo sampling to the extent required. The idea in (i) exploits advances in algorithmic nonlinear programming. The idea in (ii), with the objective of ensuring good finite-time performance and optimal asymptotic rates, minimizes Monte Carlo sampling by attempting to balance the estimated proximity of an incumbent solution with the sampling error stemming from Monte Carlo. This dissertation studies the theoretical and practical underpinnings of SCSR, leading to implementable algorithms to solve SO. We first analyze SCSR in a general context, identifying various sufficient conditions that ensure convergence of SCSR's iterates to a solution. We then analyze the nature of such convergence. For instance, we demonstrate that in SCSRs which guarantee optimal convergence rates, the speed of the underlying (deterministic) recursion and the extent of Monte Carlo sampling are intimately linked, with faster recursions permitting a wider range of Monte Carlo effort. With the objective of translating such asymptotic results into usable algorithms, we formulate a family of SCSRs called Adaptive SCSR (A-SCSR) that adaptively determines how much to sample as a recursion evolves through the search space. A-SCSRs are dynamic algorithms that identify sample sizes to balance the estimated squared bias and variance of an incumbent solution. This makes the sample size (at every iteration of A-SCSR) a stopping time, thereby substantially complicating the analysis of the behavior of A-SCSR's iterates. That A-SCSR works well in practice is not surprising: the use of an appropriate recursion and the careful sample size choice ensure this. Remarkably, however, we show that A-SCSRs are convergent to a solution and exhibit asymptotically optimal convergence rates under conditions that are no less general than what has been established for stochastic approximation algorithms. We end with the application of a certain A-SCSR to a parameter estimation problem arising in the context of brain-computer interfaces (BCI). Specifically, we formulate and reduce the problem of probabilistically deciphering the electroencephalograph (EEG) signals recorded from the brain of a paralyzed patient attempting to perform one of a specified set of tasks. Monte Carlo simulation in this context takes a more general view, as the act of drawing an observation from a large dataset accumulated from the recorded EEG signals. We apply A-SCSR to nine such datasets, showing that in most cases A-SCSR achieves correct prediction rates that are between 5 and 15 percent better than competing algorithms. More importantly, due to the incorporated adaptive sampling strategies, A-SCSR tends to exhibit dramatically better efficiency rates for comparable prediction accuracies.
Ph. D.
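
A toy sketch of the sampling-controlled idea follows (ours, not the dissertation's algorithm; the objective, step size and growth factor are illustrative). A deterministic gradient recursion is run on Monte Carlo estimates whose sample size grows as the iterate nears the solution, so early iterations stay cheap and later ones stay accurate.

    import numpy as np

    rng = np.random.default_rng(0)

    def grad_estimate(x, m):
        """Monte Carlo estimate of f'(x) for f(x) = E[(x - Z)^2] / 2, Z ~ N(1, 1).
        The true gradient is x - 1; each individual draw is noisy."""
        z = rng.normal(1.0, 1.0, size=m)
        return np.mean(x - z)

    x, m = 5.0, 2
    for k in range(30):
        x -= 0.5 * grad_estimate(x, m)   # deterministic recursion: a gradient step
        m = int(np.ceil(m * 1.2))        # sampling control: grow the sample size
    print(x)                             # approaches the true minimiser 1.0
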
8

Hetherington, Phil A. (Phillip Alan). "Hippocampal function and spatial information processing : computational and neural analyses." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=28778.

Full text
Abstract:
The hippocampus is necessary for normal memory in rodents, birds, monkeys, and people. Damage to the hippocampus can result in the inability to learn new facts, defined by the relationship among stimuli. In rodents, spatial learning involves learning about the relationships among stimuli, and exemplifies the kind of learning that requires the hippocampus. Therefore, understanding the neural mechanisms underlying spatial learning may elucidate basic memory processes. Many hippocampal neurons fire when behaving rats, cats, or monkeys are in circumscribed regions (place fields) of an environment. The neurons, called place cells, fire in relation to distal stimuli, but can persist in signaling location when the stimuli are removed or lights are turned off (memory fields). In this thesis, computational models of spatial information processing simulated many of the defining properties of hippocampal place cells, including memory fields. Furthermore, the models suggested a neurally plausible mechanism of goal-directed spatial navigation which involved the encoding of distances in the connections between place cells. To navigate using memory fields, the models required an excitatory, distributed, and plastic association system among place cells. Such properties are well characterized in area CA3 of the hippocampus. In this thesis, a new electrophysiological study provides evidence that a second system in the dentate gyrus has similar properties. Thus, two circuits in the hippocampus meet the requirements of the models. Some predictions of the models were then tested in a single-unit recording experiment in behaving rats. Place fields were more likely to occur in information-rich areas of the environment, and removal of single cues altered place fields in a way consistent with the distance encoding mechanism suggested by the models. It was concluded that a distance encoding theory of rat spatial navigation has much descriptive and predictive utility, but most of its predic
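
A minimal sketch of the two ingredients the abstract describes, under assumed Gaussian place fields and a distance-dependent Hebbian weight (illustrative parameters, not the thesis's model):

    import numpy as np

    def place_rate(pos, centre, width=0.15, peak=20.0):
        """Firing rate (Hz) of a toy place cell with a Gaussian place field."""
        return peak * np.exp(-np.sum((pos - centre) ** 2) / (2 * width ** 2))

    centres = np.array([[0.2, 0.3], [0.25, 0.35], [0.8, 0.9]])  # field centres (m)
    print(place_rate(np.array([0.22, 0.31]), centres[0]))       # near-peak firing

    # Distance encoding: cells with nearby fields are co-active more often, so a
    # Hebbian connection between them grows stronger; weight decays with distance.
    d = np.linalg.norm(centres[:, None] - centres[None, :], axis=2)
    w = np.exp(-d / 0.2)
    print(np.round(w, 2))   # strong weight between cells 0 and 1, weak to cell 2
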
9

Skare, Stefan. "Optimisation strategies in diffusion tensor MR imaging." Stockholm, 2002. http://diss.kib.ki.se/2002/91-7349-175-6.

Full text
10

Nease, Stephen Howard. "Contributions to neuromorphic and reconfigurable circuits and systems." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44923.

Full text
Abstract:
This thesis presents a body of work in the field of reconfigurable and neuromorphic circuits and systems. Three main projects were undertaken. The first was using a Field-Programmable Analog Array (FPAA) to model the cable behavior of dendrites using analog circuits. The second was to design, lay out, and test part of a new FPAA, the RASP 2.9v. The final project was to use floating-gate programming to remove offsets in a neuromorphic FPAA, the RASP Neuron 1D.
11

Eisenträger, Almut. "Finite element simulation of a poroelastic model of the CSF system in the human brain during an infusion test." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:372f291f-cf36-48ef-8ce8-d4c102bce9e3.

Full text
Abstract:
Cerebrospinal fluid (CSF) fills a system of cavities at the centre of the brain, known as ventricles, and the subarachnoid space surrounding the brain and the spinal cord. In addition, CSF is in free communication with the interstitial fluid of the brain tissue. Disturbances in CSF dynamics can lead to diseases that cause severe brain damage or even death. So-called infusion tests are frequently performed in the diagnosis of such diseases. In this type of test, changes in average CSF pressure are related to changes in CSF volume through infusion of known volumes of additional fluid. Traditionally, infusion tests are analysed with single compartment models, which treat all CSF as part of one compartment and balance fluid inflow, outflow and storage through a single ordinary differential equation. Poroelastic models of the brain, on the other hand, have been used to simulate spatial changes with disease, particularly of the ventricle size, on larger time scales of days, weeks or months. Wirth and Sobey (2008) developed a two-fluid poroelastic model of the brain in which CSF pressure pulsations are linked to arterial blood pressure pulsations. In this thesis, this model is developed further and simulation results are compared to clinical data. At first, the functional form of the compliance, which governs the storage of CSF in single compartment models, is examined by comparison of two different compliance models with clinical data. The derivations of a single-fluid and a two-fluid poroelastic model of the brain in spherical symmetry are laid out in detail and some of the parameters are related to the compliance functions considered earlier. The finite element implementation of the two-fluid model is described and finally simulation results of the average CSF pressure response and the pressure pulsations are compared to clinical data.
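
A minimal sketch of the single-compartment balance such models solve (a Marmarou-style formulation; all parameter values are illustrative, not the thesis's): compliance-scaled pressure change equals inflow minus pressure-dependent outflow, C(p) dp/dt = I_in - (p - p_ss)/R_out.

    # Production and its resting outflow are taken as balanced at p_ss, so only
    # the infusion perturbs the compartment (a simplification).
    I_inf = 1.5      # infusion rate during the test (ml/min)
    R_out = 10.0     # outflow resistance (mmHg min/ml)
    p_ss = 10.0      # resting CSF pressure (mmHg)
    k = 0.18         # elastance coefficient: compliance C(p) = 1 / (k * p)

    p, dt, trace = p_ss, 0.01, []
    for step in range(int(30 / dt)):                 # 30 minutes, explicit Euler
        t = step * dt
        inflow = I_inf if 5 <= t <= 20 else 0.0      # infuse from minute 5 to 20
        p += (inflow - (p - p_ss) / R_out) * (k * p) * dt   # dp/dt = net / C(p)
        trace.append(p)
    print(f"plateau pressure ~ {max(trace):.1f} mmHg")      # ~ p_ss + I_inf * R_out
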
12

Pillette, Léa. "Redefining and Adapting Feedback for Mental-Imagery based Brain-Computer Interface User Training to the Learners’ Traits and States." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0377/document.

Full text
Abstract:
Mental-Imagery based Brain-Computer Interfaces (MI-BCIs) present new opportunities to interact with digital technologies, such as neuroprostheses or videogames, only by performing mental imagery tasks, such as imagining an object rotating. The recognition of the command sent to the system relies on the analysis of the brain activity of the user. Users must learn to produce brain activity patterns that are recognizable by the system in order to control MI-BCIs. However, current training protocols do not enable 10 to 30% of people to acquire the skills required to use MI-BCIs. This lack of reliability limits the development of the technology outside of research laboratories. This thesis investigates how the feedback provided throughout training can be improved and adapted to the traits and states of the users. First, we examine the role currently given to feedback in MI-BCI applications and training protocols, and analyse the theories and experimental contributions discussing its role and usefulness in the process of learning to control neurophysiological correlates. Then, we review the different types of feedback that have been used to train MI-BCI users, focusing on three main characteristics of feedback: its content, its modality of presentation and its timing. For each of these characteristics, we review the literature to assess which types of feedback have been tested and what impact they have on training, and we analyse which traits or states of the learners were shown to influence training outcome. Based on these reviews, we hypothesise that different characteristics of feedback could be leveraged to improve training depending on the learners' traits or states, and we report the results of our experimental contributions for each characteristic of feedback. Finally, we present recommendations and challenges regarding each characteristic of feedback, together with potential solutions to meet these recommendations in the future.
13

Picot, Alexis. "2P optogenetics : simulation and modeling for optimized thermal dissipation and current integration Temperature rise under two-photon optogenetics brain stimulation." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCB227.

Full text
Abstract:
Over the past fifteen years, optogenetics has revolutionized neuroscience research by enabling control of neuronal circuits. The recent development of several illumination approaches, combined with new photosensitive proteins called opsins, has paved the way to neuronal control with single-cell precision. The new ambition to use these approaches to activate tens, hundreds or thousands of cells in vivo has raised many questions, in particular concerning possible photoinduced damage and the optimization of the choice of the illumination/opsin pair. During my PhD, I developed an experimentally verified simulation that calculates, under all current illumination protocols, the temperature rise in brain tissue due to the absorption of light. In parallel, I modeled, from electrophysiology recordings, the intracellular currents observed during these photostimulations for three different opsins, allowing me to simulate them. These models will allow researchers to optimize their illumination protocols to keep heating in the sample as low as possible, while helping to generate photocurrent dynamics suited to experimental requirements.
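
A one-dimensional sketch of the kind of calculation described (our simplification; the geometry and the optical and thermal parameters are illustrative assumptions): the absorbed light enters an explicit finite-difference heat equation as a Beer-Lambert source term.

    import numpy as np

    nx, dx, dt = 200, 5e-6, 1e-5   # 1 mm tissue column, 5 um grid, 10 us steps
    alpha = 1.4e-7                  # thermal diffusivity of brain tissue (m^2/s)
    mu_a = 50.0                     # absorption coefficient (1/m)
    rho_c = 3.7e6                   # volumetric heat capacity (J/(m^3 K))
    I0 = 5e5                        # incident irradiance (W/m^2)

    z = np.arange(nx) * dx
    source = mu_a * I0 * np.exp(-mu_a * z) / rho_c   # K/s, Beer-Lambert decay
    T = np.zeros(nx)                                 # temperature rise (K)
    for _ in range(int(0.1 / dt)):                   # 100 ms of illumination
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
        T += dt * (alpha * lap + source)
        T[0] = T[-1] = 0.0           # both ends held at baseline temperature
    print(f"peak temperature rise ~ {T.max():.2f} K")
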
14

Stretton, Erin. "Simulation de modèles personnalisés de gliomes pour la planification de thérapies." Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0064/document.

Full text
Abstract:
Tumor growth models based on the Fisher-Kolmogorov (FK) reaction-diffusion equation have shown convincing results in reproducing and predicting the invasion patterns of glioma brain tumors. In this thesis we use different FK model formulations to i) assess the need for patient-specific diffusion tensor images (DTIs) when modeling low-grade gliomas (LGGs), ii) study cancer cell infiltration after tumor resections, and iii) define a metric to determine progressive disease for LGGs. DTIs have been suggested to model the anisotropic diffusion of tumor cells in brain white matter. However, patient-specific DTIs are expensive and often acquired with low resolution, which compromises the accuracy of the tumor growth models' results. We used an FK formulation describing the evolution of the visible boundary of the tumor to investigate the impact of replacing the patient DTI by i) an isotropic diffusion map or ii) an anisotropic high-resolution DTI atlas formed by averaging the DTIs of multiple patients. We quantify the impact of replacing the patient DTI using synthetic tumor growth simulations and tumor evolution predictions on a clinical case. This study suggests that modeling glioma growth with tissue-based differential motility (not using a DTI) yields slightly less accurate results than using a DTI; however, refraining from using a DTI would be sufficient when modeling LGGs. Therefore, any of these DTI options are valid in an FK formulation to model LGG growth with the purpose of aiding clinicians in therapy planning. After a brain resection, medical professionals want to know what the best type of follow-up treatment would be for a particular patient, i.e., chemotherapy for diffuse tumors or a second resection after a given amount of time for bulky tumors. We propose a method to leverage FK reaction-diffusion glioma growth models on post-operative cases showing brain distortions to estimate tumor cell infiltration beyond the visible boundaries in FLAIR MRIs. Our method addresses two modeling challenges: i) brain parenchyma movement after surgery, with a non-linear registration technique, and ii) incomplete post-operative tumor segmentations, by combining two infiltration maps, one simulated from a pre-operative image and one estimated from a post-operative image. We used the data of two patients with LGG to demonstrate the effectiveness of the proposed three-step method. We believe that it could help clinicians anticipate tumor regrowth after a resection and better characterize the radiologically non-visible infiltrative extent of a tumor to plan therapy. For LGGs captured on FLAIR/T2 MRIs, there is a substantial amount of debate on selecting a definite threshold for size-based metrics to determine progressive disease (PD), and this is still an open item for the Response Assessment in Neuro-Oncology (RANO) Working Group. We propose an approach to assess PD of LGG using tumor growth speed estimates from an FK formulation that takes into consideration irregularities in tumor shape, differences in growth speed between gray matter and white matter, and volumetric changes. Using the FLAIR MRIs of nine patients, we compare the PD estimates of our proposed approach to i) those calculated using 1D, 2D, and 3D manual tumor growth speed estimates and ii) those calculated using a set of well-established size-based criteria (RECIST, Macdonald, and RANO). We conclude from our comparison results that our proposed approach is promising for assessing PD of LGG from a limited number of MRI scans. It is our hope that this model's tumor growth speed estimates could one day be used as another parameter in clinical therapy planning.
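
The FK model underlying all three contributions reduces, in one dimension, to the sketch below (illustrative coefficients; the thesis works in 3D with DTI-derived diffusion tensors): tumor cell density u diffuses with tissue-dependent motility D and proliferates logistically at rate rho, and the faster white-matter front emerges directly.

    import numpy as np

    # du/dt = d/dx(D du/dx) + rho * u * (1 - u), explicit finite differences
    nx, dx, dt = 300, 0.05, 0.01   # 15 cm of tissue (cm), time step (days)
    D = np.where(np.arange(nx) < 150, 0.01, 0.002)  # white vs grey (cm^2/day)
    rho = 0.012                     # proliferation rate (1/day)

    u = np.zeros(nx)                # normalised tumor cell density
    u[145:155] = 0.5                # seed at the tissue interface
    for _ in range(int(180 / dt)):  # six months of growth
        flux = D[:-1] * np.diff(u) / dx        # diffusive flux between cells
        div = np.zeros(nx)
        div[1:-1] = np.diff(flux) / dx
        u += dt * (div + rho * u * (1 - u))
    front = np.where(u > 0.1)[0]    # "visible" boundary at a detection threshold
    print(f"visible tumor extent: {(front[-1] - front[0]) * dx:.1f} cm")
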
15

Hurdal, Monica Kimberly. "Mathematical and computer modelling of the human brain with reference to cortical magnification and dipole source localisation in the visual cortex." Thesis, Queensland University of Technology, 1998.

Find full text
16

Wheeler, Katie, Kelsey N. Shubert, Marissa R. Kellicut, David B. Ryan, and Eric W. Sellers. "Simulating random eye-movement in a P300-based brain-computer interface." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/asrf/2018/schedule/9.

Full text
Abstract:
People who suffer from amyotrophic lateral sclerosis (ALS) eventually lose all voluntary muscle control. In the late stages of the disease, traditional augmentative and alternative communication (AAC) devices fail to provide adequate levels of communication. Brain-computer interface (BCI) technology has provided effective communication after all other AAC devices have failed. Nonetheless, EEG-based BCI devices may also fail for people with late-stage ALS due to loss of voluntary eye movement. Specifically, some people may suffer from random eye movement (nystagmus) and/or drooping of the eyelids (ptosis). Presently, it is unclear in the literature whether BCI operation requires voluntary control of eye movement. The current study attempts to simulate involuntary random eye movement in able-bodied individuals employing the P300-based BCI. To simulate involuntary random eye movement, the stimuli shift in the X and Y dimensions. Stimulus movement ‘Jitter’ occurs between each stimulus presentation in increments of 1-5 pixels (Jitter 1), 5-10 pixels (Jitter 2), 10-15 pixels (Jitter 3), or a no-movement control condition. Data collected from a previous study of 22 participants that compared the control condition to Jitter 1 and Jitter 2 indicated higher accuracy for control and Jitter 1 than for Jitter 2. No significant differences were found in selections per minute or bitrate. Waveform analysis indicated significantly higher P300 amplitude for the control condition and Jitter 1 than for Jitter 2. Preference survey scores showed a preference for Jitter 1 as compared to control and Jitter 2. This finding was unexpected and may be due to the slight movement of Jitter 1 forcing participants to be vigilant, but not distracted. Based on those findings, the current study examines the amount of pixel movement that could lead to reductions in performance. Participants completed a control condition and the three levels of Jitter in a counter-balanced design. Preliminary data for the current study were collected from 15 participants. No significant differences were observed between the three conditions in measures of BCI accuracy, selections per minute, and bitrate. Furthermore, preference survey scores indicated no significant difference in condition preference. Based on the findings of the first study, as well as the data collected so far in the current study, it appears that random movement does not have a significant impact on the ability of healthy participants to operate the BCI system. This could indicate that individuals with random eye movement should be able to operate the system with high rates of accuracy.
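
A minimal sketch of how the jitter conditions could be generated (the pixel ranges come from the abstract; the function and the screen coordinates are our assumptions):

    import random

    JITTER_RANGES = {"control": (0, 0), "jitter1": (1, 5),
                     "jitter2": (5, 10), "jitter3": (10, 15)}

    def jittered_position(x, y, condition):
        """Shift a stimulus by a random X and Y offset between two flashes."""
        lo, hi = JITTER_RANGES[condition]
        dx = random.randint(lo, hi) * random.choice((-1, 1))
        dy = random.randint(lo, hi) * random.choice((-1, 1))
        return x + dx, y + dy

    print(jittered_position(512, 384, "jitter2"))   # e.g. (505, 391)
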
17

Ren, Wuwei. "Brain Imaging with a Coded Pinhole Mask." Thesis, KTH, Medicinsk teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101911.

Full text
18

Hunter, Russell I. "Improving associative memory in a network of spiking neurons." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6177.

Full text
Abstract:
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to the neurophysiological characteristics found, such as recurrent collaterals, strong and sparse synapses from external inputs and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks which were first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified model neurons (two-state). These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited due to restrictions caused by the simplicity of the hard-limiting nodes which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry which is formed by the interconnection of large numbers of neurons. A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons which have proportionally smaller numbers compared to the pyramidal cells, and these form connections with pyramidal cells and other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity and the effect of incorporating spatially dependent connectivity on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall using methods inspired by the work of Graham and Willshaw (1995), where they apply mathematical transforms to an artificial neural network to improve the recall quality within the network. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity applied and a partial cue instantiated, and with a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition which will proportionalise the excitatory post-synaptic potentials and provide a fast-acting reversal potential which should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, implementing a persistent sodium channel in the cell body which will act to non-linearise the activation threshold, whereby after a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted to push cells which receive slightly more excitation (most likely high units) over the firing threshold. Finally, implementing spatial characteristics of the dendritic tree will allow a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss in potential in synapses found in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models with differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for efficacy on recall quality and the effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the role of inhibition and cellular dynamics is pivotal in learning and memory.
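
A binary Willshaw-style sketch of the storage and recall procedure that such auto-associative models build on (ours; the thesis uses conductance-based spiking cells and realistic inhibition rather than two-state units):

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, n_pats = 100, 10, 15                 # cells, active cells per pattern
    pats = np.zeros((n_pats, n), dtype=int)
    for p in pats:
        p[rng.choice(n, k, replace=False)] = 1

    W = np.zeros((n, n), dtype=int)            # binary Hebbian weight matrix
    for p in pats:
        W |= np.outer(p, p)                    # clipped at 1

    cue = pats[0].copy()
    cue[rng.choice(np.where(cue == 1)[0], 5, replace=False)] = 0  # partial cue
    dend = W @ cue                             # dendritic sums
    recalled = (dend >= dend.max()).astype(int)  # winners-take-all threshold
    print("overlap with stored pattern:", recalled @ pats[0])

Roughly speaking, the transforms of Graham and Willshaw replace this fixed threshold with one normalised by each unit's input activity, which the disynaptic-inhibition and sodium-channel methods above approximate biophysically.
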
19

Araujo, Carlos Eduardo de. "Implante neural controlado em malha fechada." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1687.

Full text
Abstract:
One of the challenges proposed to biomedical engineers by researchers in neuroscience is brain-machine interaction. The nervous system communicates by interpreting electrochemical signals, and implantable circuits make decisions in order to interact with the biological environment. It is well known that Parkinson's disease is related to a deficit of the neurotransmitter dopamine (DA). Different methods have been employed to control dopamine concentration, such as magnetic or electrical stimulators and drugs. In this work the neurotransmitter concentration was controlled automatically, something not currently done. To that end, four systems were designed and developed: deep brain stimulation (DBS), transmagnetic stimulation (TMS), infusion pump control (IPC) for drug delivery, and fast-scan cyclic voltammetry (FSCV) (a sensing circuit that detects the varying concentrations of neurotransmitters such as dopamine caused by these stimulations). Software was also developed for data display and analysis in synchrony with ongoing events in the experiments. This facilitates the use of infusion pumps, and the flexibility is such that DBS or TMS can be used manually or automatically, alongside other stimulation techniques such as lights, sounds, etc. The developed system automatically controls the concentration of DA. The resolution of the system is about 0.4 µmol/L, with the concentration-correction time adjustable between 1 and 90 seconds. The system controls DA concentrations between 1 and 10 µmol/L, with an error of about +/- 0.8 µmol/L. Although designed to control DA concentration, the system can be used to control the concentration of other substances. It is proposed to continue the closed-loop development with FSCV and DBS (or TMS, or infusion) using parkinsonian animal models.
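
A schematic sketch of the closed loop described (the setpoint range, resolution and error figures come from the abstract; the sensor and stimulator interfaces are hypothetical stubs):

    import random
    import time

    SETPOINT = 5.0     # target dopamine concentration (umol/L), within 1-10 range
    TOLERANCE = 0.8    # reported control error (umol/L)

    def read_fscv():               # hypothetical FSCV stub; resolution ~0.4 umol/L
        return random.gauss(4.0, 0.4)

    def trigger_stimulation():     # hypothetical DBS / TMS / infusion-pump stub
        print("stimulating / infusing to raise dopamine")

    for _ in range(5):             # correction interval: 1-90 s in the thesis
        da = read_fscv()
        if da < SETPOINT - TOLERANCE:
            trigger_stimulation()
        time.sleep(1)
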
20

Grychtol, Bartlomiej. "A virtual reality electric powered wheelchair simulator : a research platform for brain computer interface experimentation." Thesis, University of Strathclyde, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.549419.

Full text
21

Norelius, Jenny, and Antonello Tacchi. "Evaluating data structures for range queries in brain simulations." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229767.

Full text
Abstract:
Our brain and nervous system are vital organs, since they are where our thoughts, personalities, and other mental capacities originate. Within the field of neuroscience a common method of study is to build and run large-scale brain simulations in which up to a hundred thousand neurons are used to produce a model of a brain in three-dimensional space. To find all neurites within a specific area is to perform a range query. A vast number of range queries are required when running brain simulations, which makes it important that the data structure used to store the simulated neurons is efficient. This study evaluates three common data structures, also called spatial indices: the R-tree, the Quadtree and the R*-tree. We test their performance for range queries with regard to execution time, incurred reads, build time, size of data and density of data. The data used is modelled on a typical neuron so that the characteristics of the data set are preserved. The results show that the R*-tree is significantly more efficient than the other indices, with the R-tree having slightly worse performance than the Quadtree. The time it takes to build the index is almost identical for all implementations.
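
For reference, the operation being benchmarked is simply the following (a brute-force sketch; the whole point of the R-tree, Quadtree and R*-tree is to answer the same query without scanning every point):

    import numpy as np

    rng = np.random.default_rng(2)
    pts = rng.uniform(0, 1000, size=(100_000, 3))   # neuron positions (um)

    def range_query(pts, lo, hi):
        """Return all points inside the axis-aligned box [lo, hi] (linear scan)."""
        mask = np.all((pts >= lo) & (pts <= hi), axis=1)
        return pts[mask]

    hits = range_query(pts, np.array([100.0] * 3), np.array([150.0] * 3))
    print(len(hits))   # a spatial index answers this without touching every point
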
22

Liao, James Yu-Chang. "Evaluating Multi-Modal Brain-Computer Interfaces for Controlling Arm Movements Using a Simulator of Human Reaching." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1404138858.

Full text
23

Qian, Kai. "Development of Electroencephalography based Brain Controlled Switch and Nerve Conduction Study Simulator Software." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2320.

Full text
Abstract:
This thesis investigated the development of an EEG-based brain-controlled switch and the design of software for nerve conduction studies. For the EEG-based brain-controlled switch, we proposed a novel paradigm for an online brain-controlled switch based on Event-Related Desynchronizations (ERDs) following external sync signals. Furthermore, the ERD feature was enhanced by 3 event-related moving averages and the performance was tested online. Subjects were instructed to perform an intended motor task following an external sync signal in order to turn on a virtual switch. Meanwhile, the beta-band (16-20 Hz) relative ERD power (ERD in reverse value order) of a single EEG Laplacian channel from the primary motor area was calculated and filtered by 3 event-related moving averages in real time. The computer continuously monitored the filtered relative ERD power level until it exceeded a pre-set threshold, selected based on observations of the ERD power range, to turn on the virtual switch. Four right-handed healthy volunteers participated in this study. The false positive rates encountered among the four subjects during operation of the virtual switch were 0.8±0.4%, while the response time delay was 36.9±13.0 s, and the subjects required approximately 12.3±4.4 s of active urging time to perform repeated attempts in order to turn on the switch in the online experiments. The aim of the nerve conduction study simulator software is to serve as a medical simulator and education tool to train novice physicians in nerve conduction study tests. The real response waveforms of 10 different upper-limb nerves in conduction studies were obtained from the equipment used in real patient studies. A waveform generation model was built to generalize the response waveform near the standard stimulus site within the region of interest, based on the extracted waveforms, the normal reference parameters of each study, and the stimulus site coordinates. Finally, based on the model, a software interface was created to simulate 10 different nerve conduction studies of the upper limb with 9 pathological conditions.
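
A toy sketch of the detection rule described (a synthetic beta-band signal and an illustrative threshold; the real system used a Laplacian EEG channel over the primary motor area):

    import numpy as np

    rng = np.random.default_rng(3)
    FS = 256                                # sampling rate (Hz), illustrative

    def beta_power(epoch):
        """Mean 16-20 Hz band power of one post-sync EEG epoch."""
        spec = np.abs(np.fft.rfft(epoch)) ** 2
        freqs = np.fft.rfftfreq(len(epoch), 1 / FS)
        return spec[(freqs >= 16) & (freqs <= 20)].mean()

    t = np.arange(2 * FS) / FS              # a 2 s epoch after each sync signal
    baseline = beta_power(np.sin(2 * np.pi * 18 * t))   # resting beta level
    history, THRESHOLD = [], 0.3            # threshold tuned per subject

    for trial in range(20):
        amp = max(0.0, 1.0 - 0.1 * trial)   # beta desynchronises during imagery
        epoch = amp * np.sin(2 * np.pi * 18 * t) + rng.normal(0, 0.1, len(t))
        rel_erd = (baseline - beta_power(epoch)) / baseline
        history = (history + [rel_erd])[-3:]            # 3-event moving average
        if len(history) == 3 and np.mean(history) > THRESHOLD:
            print(f"switch ON at trial {trial}")
            break
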
24

"COMPUTER SIMULATION SYSTEM FOR BRAIN AND CRANIOFACIAL SURGERIES." Thesis, 1989. http://hdl.handle.net/2237/11463.

Full text
25

Yasuda, Takami (安田孝美). "COMPUTER SIMULATION SYSTEM FOR BRAIN AND CRANIOFACIAL SURGERIES." Thesis, 1989. http://hdl.handle.net/2237/11463.

Full text
26

Krebs, Peter Rudolf, School of History & Philosophy of Science, UNSW. "Artificial neural nets: a critical analysis of their effectiveness as empirical technique for cognitive modelling." 2007. http://handle.unsw.edu.au/1959.4/40475.

Full text
Abstract:
This thesis is concerned with the computational modelling and simulation of physiological structures and cognitive functions of brains through the use of artificial neural nets. While the structures of these models are loosely related to neurons and physiological structures observed in brains, the extent to which we can accept claims about how neurons and brains really function based on such models depends largely on judgments about the fitness of (virtual) computer experiments as empirical evidence. The thesis examines the computational foundations of neural models, neural nets, and some computational models of higher cognitive functions in terms of their ability to provide empirical support for theories within the framework of Parallel Distributed Processing (PDP). Models of higher cognitive functions in this framework are often presented in forms that hybridise top-down (e.g. employing terminology from Psychology or Linguistics) and bottom-up (neurons and neural circuits) approaches to cognition. In this thesis I argue that the use of terminology from either approach can blind us to the highly theory-laden nature of the models, and that this tends to produce overly optimistic evaluations of the empirical value of computer experiments on these models. I argue, further, that some classes of computational models and simulations based on methodologies that hybridise top-down and bottom-up approaches are ill-designed. Consequently, many of the theoretical claims based on these models cannot be supported by experiments with such models. As a result, I question the effectiveness of computer experiments with artificial neural nets as an empirical technique for cognitive modelling.
27

Givon, Lev E. "An Open Pipeline for Generating Executable Neural Circuits from Fruit Fly Brain Data." Thesis, 2016. https://doi.org/10.7916/D8P26Z34.

Full text
Abstract:
Despite considerable progress in mapping the fly’s connectome and elucidating the patterns of information flow in its brain, the complexity of the fly brain’s structure and the still-incomplete state of knowledge regarding its neural circuitry pose significant challenges beyond satisfying the computational resource requirements of current fly brain models that must be addressed to successfully reverse engineer the information processing capabilities of the fly brain. These include the need to explicitly facilitate collaborative development of brain models by combining the efforts of multiple researchers, and the need to enable programmatic generation of brain models that effectively utilize the burgeoning amount of increasingly detailed publicly available fly connectome data. This thesis presents an open pipeline for modular construction of executable models of the fruit fly brain from incomplete biological brain data that addresses both of the above requirements. This pipeline consists of two major open-source components respectively called Neurokernel and NeuroArch. Neurokernel is a framework for collaborative construction of executable connectome-based fly brain models by integration of independently developed models of different functional units in the brain into a single emulation that can be executed upon multiple Graphics Processing Units (GPUs). Neurokernel enforces a programming model that enables functional unit models that comply with its interface requirements to communicate during execution regardless of their internal design. We demonstrate the power of this programming model by using it to integrate independently developed models of the fly retina and lamina into a single vision processing system. We also show how Neurokernel’s communication performance can scale over multiple GPUs, number of functional units in a brain emulation, and over the number of communication ports exposed by a functional unit model. Although the increasing amount of experimentally obtained biological data regarding the fruit fly brain affords brain modelers a potentially valuable resource for model development, the actual use of this data to construct executable neural circuit models is currently challenging because the disparate nature of different data sources, the range of storage formats they use, and the limited query features of those formats complicate the process of inferring executable circuit designs from biological data. To overcome these limitations, we created a software package called NeuroArch that defines a data model for concurrent representation of both biological data and model structure and the relationships between them within a single graph database. Coupled with a powerful interface for querying both types of data within the database in a uniform high-level manner, this representation enables construction and dispatching of executable neural circuits to Neurokernel for execution and evaluation. We demonstrate the utility of the NeuroArch/Neurokernel pipeline by using the packages to generate an executable model of the central complex of the fruit fly brain from both published and hypothetical data regarding overlapping neuron arborizations in different regions of the central complex neuropils. We also show how the pipeline empowers circuit model designers to devise computational analogues to biological experiments such as parallel concurrent recording from multiple neurons and emulation of genetic mutations that alter the fly’s neural circuitry.
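
A deliberately generic sketch of the port-based programming model the abstract describes (our illustration only; Neurokernel's actual API, class names and GPU execution machinery differ): functional units hide their internals and interact solely through declared ports, so independently developed models remain interchangeable.

    class FunctionalUnit:
        """Toy functional-unit model: internals are private, I/O goes via ports."""
        def __init__(self, name, in_ports, out_ports):
            self.name, self.out_ports = name, out_ports
            self.inbox = {p: 0.0 for p in in_ports}

        def step(self):
            # Placeholder internal dynamics: echo the summed input to every output.
            total = sum(self.inbox.values())
            return {p: total for p in self.out_ports}

    def run(units, wiring, steps):
        """Exchange port values between units once per simulation step."""
        by_name = {u.name: u for u in units}
        for _ in range(steps):
            outputs = {u.name: u.step() for u in units}
            for (src, port), (dst, dport) in wiring.items():
                by_name[dst].inbox[dport] = outputs[src][port]

    retina = FunctionalUnit("retina", [], ["R1"])
    lamina = FunctionalUnit("lamina", ["L1"], [])
    run([retina, lamina], {("retina", "R1"): ("lamina", "L1")}, steps=5)
    print(lamina.inbox)
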
28

Tan, Wilson Hor Keong, Timothy Lee, and Chi-Hwa Wang. "Delivery of Etanidazole to Brain Tumor from PLGA Wafers." 2003. http://hdl.handle.net/1721.1/3954.

Full text
Abstract:
This paper presents the computer simulation results on the delivery of Etanidazole (a radiosensitiser) to the brain tumor and examines several factors affecting the delivery. The simulation consists of a 3D model of the tumor with poly(lactide-co-glycolide) (PLGA) wafers of 1% Etanidazole loading implanted in the resected cavity. A zero-order release device will produce a concentration profile in the tumor which increases with time until the drug in the carrier is depleted. This causes toxicity complications during the later stages of drug treatment. However, for wafers of similar loading, such release results in a higher drug penetration depth and therapeutic index as compared to the double drug burst profile. The numerical accuracy of the model was verified by the similar results obtained in the two-dimensional and three-dimensional models.
Singapore-MIT Alliance (SMA)
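
A one-dimensional sketch of the transport such simulations solve (our simplification; the paper's 3D geometry, loading and kinetics differ): diffusion from a wafer that saturates the cavity wall, with first-order elimination, which yields a finite penetration depth.

    import numpy as np

    nx, dx, dt = 100, 0.02, 1e-3   # 2 cm of tissue (cm), time step (days)
    D = 5e-3                        # effective diffusivity (cm^2/day)
    k_e = 1.0                       # first-order elimination rate (1/day)

    c = np.zeros(nx)                # drug concentration, normalised
    for _ in range(int(5 / dt)):    # 5 days of zero-order release
        c[0] = 1.0                  # wafer keeps the cavity wall saturated
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
        c += dt * (D * lap - k_e * c)
    depth = np.argmax(c < 0.01) * dx   # depth where c falls to 1% of the wall
    print(f"penetration depth ~ {depth:.2f} cm")
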