Dissertations / Theses on the topic 'Multi-dimensional signals'



Consult the top 35 dissertations / theses for your research on the topic 'Multi-dimensional signals.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Larkin, Kieran Gerard. "Topics in Multi-dimensional Signal Demodulation." University of Sydney. Physics, 2001. http://hdl.handle.net/2123/367.

Full text
Abstract:
Problems in the demodulation of one, two, and three-dimensional signals are investigated. In one-dimensional linear systems the analytic signal and the Hilbert transform are central to the understanding of both modulation and demodulation. However, it is shown that an efficient nonlinear algorithm exists which is not explicable purely in terms of an approximation to the Hilbert transform. The algorithm is applied to the problem of finding the envelope peak of a white light interferogram. The accuracy of peak location is then shown to compare favourably with conventional, but less efficient, techniques. In two dimensions (2-D) the intensity of a wavefield yields to a phase demodulation technique equivalent to direct phase retrieval. The special symmetry of a Helmholtz wavefield allows a unique inversion of an autocorrelation. More generally, a 2-D (non-Helmholtz) fringe pattern can be demodulated by an isotropic 2-D extension of the Hilbert transform that uses a spiral phase signum function. The range of validity of the new transform is established using the asymptotic method of stationary phase. Simulations of the algorithm confirm that deviations from the ideal occur where the fringe pattern curvature is larger than the fringe frequency. A new self-calibrating algorithm for arbitrary sequences of phase-shifted interferograms is developed using the aforementioned spiral phase transform. The algorithm is shown to work even with discontinuous fringe patterns, which are known to seriously hamper other methods. Initial simulations of the algorithm indicate an accuracy of 5 milliradians is achievable. Previously undocumented connections between the demodulation techniques are uncovered and discussed.
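A note for orientation: the spiral phase signum function mentioned in this abstract has a standard frequency-domain form (the symbols below are our notation, not quoted from the thesis):

$S(u,v) = \frac{u + iv}{\sqrt{u^2 + v^2}} = e^{i\phi(u,v)}, \qquad \phi(u,v) = \operatorname{atan2}(v, u),$

so the isotropic 2-D Hilbert transform of a fringe pattern $f(x,y)$ is obtained as $\mathcal{F}^{-1}\{ S \cdot \mathcal{F}\{f\} \}$, where $\mathcal{F}$ denotes the 2-D Fourier transform.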
APA, Harvard, Vancouver, ISO, and other styles
2

Larkin, Kieran Gerard. "Topics in Multi-dimensional Signal Demodulation." Thesis, The University of Sydney, 2000. http://hdl.handle.net/2123/367.

Full text
Abstract:
Problems in the demodulation of one, two, and three-dimensional signals are investigated. In one-dimensional linear systems the analytic signal and the Hilbert transform are central to the understanding of both modulation and demodulation. However, it is shown that an efficient nonlinear algorithm exists which is not explicable purely in terms of an approximation to the Hilbert transform. The algorithm is applied to the problem of finding the envelope peak of a white light interferogram. The accuracy of peak location is then shown to compare favourably with conventional, but less efficient, techniques. In two dimensions (2-D) the intensity of a wavefield yields to a phase demodulation technique equivalent to direct phase retrieval. The special symmetry of a Helmholtz wavefield allows a unique inversion of an autocorrelation. More generally, a 2-D (non-Helmholtz) fringe pattern can be demodulated by an isotropic 2-D extension of the Hilbert transform that uses a spiral phase signum function. The range of validity of the new transform is established using the asymptotic method of stationary phase. Simulations of the algorithm confirm that deviations from the ideal occur where the fringe pattern curvature is larger than the fringe frequency. A new self-calibrating algorithm for arbitrary sequences of phase-shifted interferograms is developed using the aforementioned spiral phase transform. The algorithm is shown to work even with discontinuous fringe patterns, which are known to seriously hamper other methods. Initial simulations of the algorithm indicate an accuracy of 5 milliradians is achievable. Previously undocumented connections between the demodulation techniques are uncovered and discussed.
APA, Harvard, Vancouver, ISO, and other styles
3

Khandani, Amir K. (Amir Keyvan). "Shaping multi-dimensional signal spaces." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70268.

Full text
Abstract:
In selecting the boundary of a signal constellation used for data transmission, the objective is to minimize the average energy of the set for a given number of points from a given packing. The reduction in average energy obtained by using the region $\mathcal{C}$ as the boundary instead of a hypercube is called the shape gain of $\mathcal{C}$. The price to be paid for shaping is: (i) an increase in the factor CER$_s$ (Constellation-Expansion-Ratio), (ii) an increase in the factor PAR (Peak-to-Average-power-Ratio), and (iii) an increase in the addressing complexity. In this thesis, the structure of the region which optimizes the tradeoff between the shape gain and the CER$_s$, and also between the shape gain and the PAR, in a finite dimensional space is found. Analytical expressions are derived for the optimum tradeoff. The optimum shaping region can be mapped to a hypercube truncated within a simplex. This mapping has properties which facilitate the addressing of the signal points. We introduce several addressing schemes with low complexity and good performance. The concept of unsymmetrical shaping is discussed. This is the selection of the boundary of a constellation which has different values of power along different dimensions. The rate of the constellation is maximized subject to some constraints on its power spectrum. This spectral shaping also involves the selection of an appropriate basis (modulating waveform) for the space. Finally, we discuss the selection of a signal constellation for signaling over a partial-response channel. In the continuous approximation, we introduce a method to select the nonempty dimensions. This method is based on minimizing the degradation caused by the channel memory. In the discrete case, shaping and coding depend on each other, so a combined shaping and coding method is used. This concerns the joint selection of the shaping and coding to minimize the probability of symbol error.
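As a worked reference point for the shape gain discussed above (standard coding-theory background, not a result quoted from this thesis): with both regions normalized to the same volume, the shape gain of a region $\mathcal{C} \subset \mathbb{R}^N$ over the $N$-cube compares per-dimension average energies,

$\gamma_s(\mathcal{C}) = \dfrac{\frac{1}{12} V(\mathcal{C})^{2/N}}{\frac{1}{N\,V(\mathcal{C})} \int_{\mathcal{C}} \lVert x \rVert^2 \, dx},$

and the $N$-sphere attains the asymptotic limit $\gamma_s \to \pi e / 6 \approx 1.53$ dB as $N \to \infty$.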
APA, Harvard, Vancouver, ISO, and other styles
4

Larkin, Kieran Gerard. "Topics in multi-dimensional signal demodulation." Connect to full text, 2000. http://hdl.handle.net/2123/367.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2000.
Title from title screen (viewed Apr. 23, 2008). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Physics, Faculty of Science. Includes bibliography. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
5

Costa, João Paulo Carvalho Lustosa da. "Parameter estimation techniques for multi-dimensional array signal processing." Aachen : Shaker, 2010. http://d-nb.info/1000960765/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Randeny, Tharindu D. "Multi-Dimensional Digital Signal Processing in Radar Signature Extraction." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1451944778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abewardana, Wijenayake Chamith K. "Multi-dimensional Signal Processing And Circuits For Advanced Electronically Scanned Antenna Arrays." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1415358304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gianto, Gianto. "Multi-dimensional Teager-Kaiser signal processing for improved characterization using white light interferometry." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD026/document.

Full text
Abstract:
The use of white light interference fringes as an optical probe in microscopy is of growing importance in materials characterization, surface metrology and medical imaging. Coherence Scanning Interferometry (CSI, also known as White Light Scanning Interferometry, WLSI) is well known for surface roughness and topology measurement [1]. Full-Field Optical Coherence Tomography (FF-OCT) is the version used for the tomographic analysis of complex transparent layers. Both techniques generally make use of some sort of fringe scanning along the optical axis and the acquisition of a stack of xyz images. Image processing is then used to identify the fringe envelopes along z at each pixel in order to measure the positions of either a single surface or of multiple scattering objects within a layer. In CSI, the measurement of surface shape generally requires peak or phase extraction of the one-dimensional fringe signal. Most of the methods are based on an AM-FM signal model, which represents the variation in light intensity measured along the optical axis of an interference microscope [2]. We have demonstrated earlier [3, 4] the ability of 2D approaches to compete with some classical methods used in the field of interferometry, in terms of robustness and computing time. In addition, whereas most methods only take into account the 1D data, it would seem advantageous to take into account the spatial neighborhood using multidimensional approaches (2D, 3D, 4D), including the time parameter, in order to improve the measurements. The purpose of this PhD project is to develop new n-D approaches that are suitable for improved characterization of more complex surfaces and transparent layers. In addition, we will enrich the field of study by means of heterogeneous image processing from multiple sensor sources (heterogeneous data fusion). Applications considered will be in the fields of materials metrology, biomaterials and medical imaging.
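For background, the Teager-Kaiser energy operator named in the title has a well-known discrete one-dimensional form (textbook material, not quoted from the thesis); the n-D extensions studied here apply it along several axes. For a sampled signal $x(n)$,

$\Psi[x(n)] = x^2(n) - x(n-1)\,x(n+1),$

and for an AM-FM signal $x(n) = a(n)\cos(\phi(n))$ the operator output approximates $a^2(n)\,\Omega^2(n)$, with $\Omega$ the instantaneous frequency, which is what makes it useful for extracting the fringe envelope $a(n)$.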
APA, Harvard, Vancouver, ISO, and other styles
9

Son, Kyung-Im. "A multi-class, multi-dimensional classifier as a topology selector for analog circuit design / by Kyung-Im Son." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/5919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Carvalho Lustosa da Costa, Joao P. [Verfasser]. "Parameter Estimation Techniques for Multi-Dimensional Array Signal Processing / Joao P Carvalho Lustosa da Costa." Aachen : Shaker, 2010. http://d-nb.info/112254653X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pulipati, Sravan Kumar. "Electronically-Scanned Wideband Digital Aperture Antenna Arrays using Multi-Dimensional Space-Time Circuit-Network Resonance." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1499440141479455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Steinwandt, Jens [Verfasser], Martin [Akademischer Betreuer] Haardt, Marius [Gutachter] Pesavento, and Sergiy A. [Gutachter] Vorobyov. "Advanced array signal processing algorithms for multi-dimensional parameter estimation / Jens Steinwandt ; Gutachter: Marius Pesavento, Sergiy A. Vorobyov ; Betreuer: Martin Haardt." Ilmenau : TU Ilmenau, 2019. http://d-nb.info/1177298449/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Cheema, Sher Ali [Verfasser], Martin [Akademischer Betreuer] Haardt, Mario [Gutachter] Huemer, and Eduard Axel [Gutachter] Jorswieck. "Advanced signal processing concepts for multi-dimensional communication systems / Sher Ali Cheema ; Gutachter: Mario Huemer, Eduard Axel Jorswieck ; Betreuer: Martin Haardt." Ilmenau : TU Ilmenau, 2018. http://d-nb.info/1178128989/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Cheng, Yao [Verfasser], Martin [Akademischer Betreuer] Haardt, Didier [Gutachter] Le Ruyet, and Ana Isabel [Gutachter] Pérez-Neira. "Advanced multi-dimensional signal processing for wireless systems / Yao Cheng ; Gutachter: Didier Le Ruyet, Ana Isabel Pérez-Neira ; Betreuer: Martin Haardt." Ilmenau : TU Ilmenau, 2016. http://d-nb.info/1178170934/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Steinwandt, Jens [Verfasser], Martin [Akademischer Betreuer] Haardt, Marius [Gutachter] Pesavento, and Sergiy A. [Gutachter] Vorobyov. "Advanced array signal processing algorithms for multi-dimensional parameter estimation / Jens Steinwandt ; Gutachter: Marius Pesavento, Sergiy A. Vorobyov ; Betreuer: Martin Haardt." Ilmenau : TU Ilmenau, 2019. http://d-nb.info/1177298449/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Li, Ting. "Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00768315.

Full text
Abstract:
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. As an example, for predicting salvageable tissue, ischemic studies in which combinations of multiple MRI imaging modalities (DWI, PWI) are used produce more conclusive results than studies made using a single modality. However, the multi-modality approach necessitates the use of more advanced algorithms to perform otherwise regular image processing tasks such as filtering, segmentation and clustering. A robust method for addressing the problems associated with processing data obtained from multi-modality imaging is Mean Shift, which is based on feature space analysis and on non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the mean shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise in processing the feature space and how Mean Shift can be tuned for optimal de-noising and also to reduce blurring. The large success of Mean Shift is mainly due to the intuitive tuning of bandwidth parameters which describe the scale at which features are analyzed. Based on univariate Plug-In (PI) bandwidth selectors of kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI for Mean Shift filtering. We study the interest of using diagonal and full bandwidth matrices with experiments on synthesized and natural images. We propose a new and automatic volume-based segmentation framework which combines Mean Shift filtering and Region Growing segmentation as well as Probability Map optimization. The framework is developed using synthesized MRI images as test data and yielded a perfect segmentation with DICE similarity measurement values reaching the highest value of 1. Testing is then extended to real MRI data obtained from animals and patients with the aim of predicting the evolution of the ischemic penumbra several days following the onset of ischemia, using only information obtained from the very first scan. The results obtained are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patient MRI scans; the reference images for both cases were manually segmented by a team of expert medical staff. In addition, the most relevant combination of parameters for the MRI modalities is determined.
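To make the filtering step concrete for readers outside the field, here is a minimal mean shift sketch with a single scalar bandwidth and a Gaussian kernel (illustrative only: the data and bandwidth are placeholders, and the thesis's multivariate Plug-In bandwidth-matrix selector is not reproduced):

import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=20):
    # Iteratively move each point toward the local mode of the kernel density estimate.
    modes = points.copy()
    for _ in range(n_iter):
        for i, x in enumerate(modes):
            d2 = np.sum((points - x) ** 2, axis=1)   # squared distances to all samples
            w = np.exp(-0.5 * d2 / bandwidth ** 2)   # Gaussian kernel weights
            modes[i] = w @ points / w.sum()          # weighted mean = mean shift update
    return modes

# Toy usage: two well-separated clusters in a 2-D feature space.
pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
print(mean_shift(pts, bandwidth=1.0)[:3])

Points belonging to the same density mode converge to (almost) the same location, which is what makes the procedure usable for clustering and feature-preserving filtering.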
APA, Harvard, Vancouver, ISO, and other styles
18

Weis, Martin [Verfasser], Peter [Akademischer Betreuer] Husar, Giovanni [Akademischer Betreuer] Del Galdo, and João Paulo Carvalho Lustosa da [Akademischer Betreuer] Costa. "Multi-Dimensional Signal Decomposition Techniques for the Analysis of EEG Data / Martin Weis. Gutachter: Giovanni Del Galdo ; João Paulo Carvalho Lustosa da Costa. Betreuer: Peter Husar." Ilmenau : Universitätsbibliothek Ilmenau, 2015. http://d-nb.info/1074870891/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Weis, Martin [Verfasser], Peter [Akademischer Betreuer] Husar, Giovanni [Akademischer Betreuer] Del Galdo, and João Paulo Carvalho Lustosa da [Akademischer Betreuer] Costa. "Multi-Dimensional Signal Decomposition Techniques for the Analysis of EEG Data / Martin Weis. Gutachter: Giovanni Del Galdo ; João Paulo Carvalho Lustosa da Costa. Betreuer: Peter Husar." Ilmenau : Universitätsbibliothek Ilmenau, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:ilm1-2015000127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Merhej, Dany. "Intégration de connaissances a priori dans la reconstruction des signaux parcimonieux : Cas particulier de la spectroscopie RMN multidimensionnelle." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00782465.

Full text
Abstract:
The work in this thesis concerns the design of algorithmic tools for integrating prior knowledge into the reconstruction of sparse signals, the main goal being to improve the reconstruction of these signals from a set of measurements far below what the famous Shannon-Nyquist theorem predicts. In a first part we propose, in the context of the recent theory of compressed sensing (CS), the NNOMP algorithm (Neural Network Orthogonal Matching Pursuit), a modified version of the OMP algorithm in which the correlation step is replaced by a suitably trained neural network. The aim is to better reconstruct sparse signals possessing additional structure, i.e., belonging to a particular model of sparse signals. For the experimental validation of NNOMP, three simulated models of sparse signals with additional structure were considered, as well as a practical application in a setup similar to single pixel imaging. In a second part, we propose a new sub-sampling method for multidimensional NMR spectroscopy (including NMR spectroscopic imaging), applicable when the spectra of the corresponding lower-dimensional acquisitions, e.g. one-dimensional, are intrinsically sparse. In this method, the data acquisition process and the reconstruction of the multidimensional spectra are modeled by a system of linear equations. Prior knowledge about the non-zero locations in the multidimensional spectra is then used to remove the underdetermination induced by the sub-sampling of the data. This prior knowledge is obtained from the spectra of the lower-dimensional acquisitions, e.g. one-dimensional. The achievable sub-sampling is all the more significant as these one-dimensional spectra are sparse. The proposed method is evaluated on synthetic and experimental data, in vitro and in vivo.
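For orientation, NNOMP modifies the correlation step of standard Orthogonal Matching Pursuit (OMP). A minimal sketch of plain OMP follows (the textbook algorithm, not the thesis's neural-network variant; matrix sizes and the sparsity level are illustrative):

import numpy as np

def omp(A, y, k):
    # Greedy recovery of a k-sparse x from y ~ A @ x.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))     # correlation step (the part NNOMP replaces)
        support.append(j)
        A_s = A[:, support]
        x_s, *_ = np.linalg.lstsq(A_s, y, rcond=None)  # least squares on the current support
        residual = y - A_s @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Toy usage: a 3-sparse length-100 vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
print(np.nonzero(omp(A, A @ x_true, k=3))[0])   # expected: [ 5 40 77]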
APA, Harvard, Vancouver, ISO, and other styles
21

Xu, Yanli. "Une mesure de non-stationnarité générale : Application en traitement d'images et du signaux biomédicaux." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0090/document.

Full text
Abstract:
The intensity variation is often used in signal or image processing algorithms after being quantified by a measurement method. The method for measuring and quantifying the intensity variation is called a « change measure », which is commonly used in methods for signal change detection, image edge detection, edge-based segmentation models, feature-preserving smoothing, etc. In these methods, the « change measure » plays such an important role that their performance is greatly affected by the result of the measurement of changes. The existing « change measures » may provide inaccurate information on changes when processing biomedical images or signals, due to the high noise level or the strong randomness of the signals. This leads to various undesirable phenomena in the results of such methods. On the other hand, new medical imaging techniques bring out new data types and require new change measures. How to robustly measure changes in these tensor-valued data becomes a new problem in image and signal processing. In this context, a « change measure », called the Non-Stationarity Measure (NSM), is improved and extended to become a general and robust « change measure » able to quantify changes existing in multidimensional data of different types, with respect to different statistical parameters. A NSM-based change detection method and a NSM-based edge detection method are proposed and respectively applied to detect changes in ECG and EEG signals, and to detect edges in cardiac diffusion weighted (DW) images. Experimental results show that the NSM-based detection methods can provide more accurate positions of change points and edges and can effectively reduce false detections. A NSM-based geometric active contour (NSM-GAC) model is proposed and applied to segment ultrasound images of the carotid. Experimental results show that the NSM-GAC model provides better segmentation results with fewer iterations than comparative methods and can reduce false contours and leakages. Last and most important, a new feature-preserving smoothing approach called « Non-stationarity Adaptive Filtering (NAF) » is proposed and applied to enhance human cardiac DW images. Experimental results show that the proposed method achieves a better compromise between the smoothing of homogeneous regions and the preservation of desirable features such as boundaries, thus leading to homogeneously consistent tensor fields and consequently more coherent reconstructed fibers.
APA, Harvard, Vancouver, ISO, and other styles
22

Hamrouni-Chtourou, Sameh. "Approches variationnelles statistiques spatio-temporelles pour l'analyse quantitative de la perfusion myocardique en IRM." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00814577.

Full text
Abstract:
The quantitative analysis of myocardial perfusion, i.e., estimating segmental perfusion indices and comparing them with normative values, is a major challenge for the screening, treatment and follow-up of ischemic cardiomyopathies, which are among the leading causes of death in Western countries. Over the last decade, perfusion magnetic resonance imaging (perfusion MRI) has become the preferred modality for the non-invasive dynamic exploration of cardiac perfusion. Perfusion MRI consists in acquiring time series of short-axis cardiac images at several slice levels along the long axis of the heart during the transit of a vascular contrast agent through the cardiac cavities and muscle. The resulting perfusion MRI exams exhibit strong non-linear contrast variations and cardio-respiratory motion artifacts. Under these conditions, the quantitative analysis of myocardial perfusion faces the complex problems of registering and segmenting non-rigid cardiac structures in perfusion MRI exams. This thesis aims to automate the quantitative analysis of myocardial perfusion by developing an unsupervised computer-aided diagnosis tool dedicated to first-pass cardiac perfusion MRI, comprising four processing steps: (1) automatic selection of a region of interest centered on the heart; (2) non-rigid compensation of cardio-respiratory motion over the entire exam; (3) segmentation of the cardiac contours; (4) quantification of myocardial perfusion. The answers we provide to the challenges identified in each step are built around a common idea: exploiting the information related to the transit kinematics of the contrast agent in the tissues to discriminate the anatomical structures and to guide the data registration process. The latter constitutes the central work of this thesis. Non-rigid image registration methods based on the optimization of information measures are a reference in medical imaging. Their usual application setting is the alignment of image pairs by statistical matching of luminance distributions, handled through their marginal and joint probability densities estimated by kernel methods. While effective for joint densities exhibiting well-individualized classes or reducible to simple mixtures, these approaches reach their limits for non-linear mixtures where pixel luminance proves too crude an attribute to allow a discriminating statistical decision, for mono-modal data with non-linear variations, and for multi-modal data. This thesis introduces a generic multi-attribute/multi-view informational registration model that addresses the identified challenges: (i) simultaneous alignment of the entire analyzed perfusion MRI exam with an atlas, natural or synthetic, in which the heart is motionless, using the pixel enhancement curves as a dense set of primitives; and (ii) the ability to integrate high-dimensional composite image primitives, spatial or spatio-temporal. This model, available in the classical Shannon framework and in the generalized Ali-Silvey framework, is based on new geometric k-nearest-neighbour estimators of information measures that are consistent in arbitrary dimension. We study their variational optimization by deriving analytical expressions of their gradients on finite- and infinite-dimensional spaces of regular spatial transformations, and by proposing efficient numerical gradient-descent schemes and algorithms. This general-purpose model is then instantiated in the targeted medical setting, and its performance, notably in terms of accuracy and robustness, is evaluated within both a qualitative and a quantitative experimental protocol.
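For context on the information measures mentioned above (textbook definitions, not the thesis's new estimators): registration by mutual information seeks the spatial transformation $T$ maximizing

$I(X;Y) = H(X) + H(Y) - H(X,Y) = \iint p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy,$

where $X$ is an image attribute (here, a pixel enhancement curve) sampled from the reference and $Y$ the corresponding attribute from the exam warped by $T$; the thesis's contribution is to estimate such measures with k-nearest-neighbour estimators that remain consistent when the attributes are high-dimensional.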
APA, Harvard, Vancouver, ISO, and other styles
23

Mehlenbacher, Alan. "Multiagent system simulations of sealed-bid, English, and treasury auctions." Thesis, 2007. http://hdl.handle.net/1828/255.

Full text
Abstract:
I have developed a multiagent system platform that provides a valuable complement to the alternative research methods. The platform facilitates the development of heterogeneous agents in complex environments. The first application of the multiagent system is to the study of sealed-bid auctions with two-dimensional value signals from pure private to pure common value. I find that several auction outcomes are significantly nonlinear across the two-dimensional value signals. As the common value percent increases, profit, revenue, and efficiency all decrease monotonically, but they decrease in different ways. Finally, I find that forcing revelation by the auction winner of the true common value may have beneficial revenue effects when the common-value percent is high and there is a high degree of uncertainty about the common value. The second application of the multiagent system is to the study of English auctions with two-dimensional value signals using agents that learn a signal-averaging factor. I find that signal averaging increases nonlinearly as the common value percent increases, decreases with the number of bidders, and decreases at high common value percents when the common value signal is more uncertain. Using signal averaging, agents increase their profit when the value is more uncertain. The most obvious effect of signal averaging is on reducing the percentage of auctions won by bidders with the highest common value signal. The third application of the multiagent system is to the study of the optimal payment rule in Treasury auctions using Canadian rules. The model encompasses the when-issued, auction, and secondary markets, as well as constraints for primary dealers. I find that the Spanish payment rule is revenue inferior to the Discriminatory payment rule across all market price spreads, but the Average rule is revenue superior. For most market-price spreads, Uniform payment results in less revenue than Discriminatory, but there are many cases in which Vickrey payment produces more revenue.
APA, Harvard, Vancouver, ISO, and other styles
24

Mehlenbacher, Alan. "Multiagent system simulations of sealed-bid, English, and treasury auctions." Thesis, 2007. http://hdl.handle.net/1828/255.

Full text
Abstract:
I have developed a multiagent system platform that provides a valuable complement to the alternative research methods. The platform facilitates the development of heterogeneous agents in complex environments. The first application of the multiagent system is to the study of sealed-bid auctions with two-dimensional value signals from pure private to pure common value. I find that several auction outcomes are significantly nonlinear across the two-dimensional value signals. As the common value percent increases, profit, revenue, and efficiency all decrease monotonically, but they decrease in different ways. Finally, I find that forcing revelation by the auction winner of the true common value may have beneficial revenue effects when the common-value percent is high and there is a high degree of uncertainty about the common value. The second application of the multiagent system is to the study of English auctions with two-dimensional value signals using agents that learn a signal-averaging factor. I find that signal averaging increases nonlinearly as the common value percent increases, decreases with the number of bidders, and decreases at high common value percents when the common value signal is more uncertain. Using signal averaging, agents increase their profit when the value is more uncertain. The most obvious effect of signal averaging is on reducing the percentage of auctions won by bidders with the highest common value signal. The third application of the multiagent system is to the study of the optimal payment rule in Treasury auctions using Canadian rules. The model encompasses the when-issued, auction, and secondary markets, as well as constraints for primary dealers. I find that the Spanish payment rule is revenue inferior to the Discriminatory payment rule across all market price spreads, but the Average rule is revenue superior. For most market-price spreads, Uniform payment results in less revenue than Discriminatory, but there are many cases in which Vickrey payment produces more revenue.
APA, Harvard, Vancouver, ISO, and other styles
25

Edussooriya, Chamira Udaya Shantha. "Low-Complexity Multi-Dimensional Filters for Plenoptic Signal Processing." Thesis, 2015. http://hdl.handle.net/1828/6894.

Full text
Abstract:
Five-dimensional (5-D) light field video (LFV) (also known as plenoptic video) is a more powerful form of representing information of dynamic scenes compared to conventional three-dimensional (3-D) video. In this dissertation, the spectra of moving objects in LFVs are analyzed, and it is shown that such moving objects can be enhanced based on their depth and velocity by employing 5-D digital filters, defined here as depth-velocity filters. In particular, the spectral region of support (ROS) of a Lambertian object moving with constant velocity and at constant depth is shown to be a skewed 3-D hyperfan in the 5-D frequency domain. Furthermore, it is shown that the spectral ROS of a Lambertian object moving at non-constant depth can be approximated as a sequence of ROSs, each of which is a skewed 3-D hyperfan, in the 5-D continuous frequency domain. Based on the spectral analysis, a novel 5-D finite-extent impulse response (FIR) depth-velocity filter and a novel ultra-low complexity 5-D infinite-extent impulse response (IIR) depth-velocity filter are proposed for enhancing objects moving with constant velocity and at constant depth in LFVs. Furthermore, a novel ultra-low complexity 5-D IIR adaptive depth-velocity filter is proposed for enhancing objects moving at non-constant depth in LFVs. Also, an ultra-low complexity 3-D linear-phase IIR velocity filter that can be incorporated to design 5-D IIR depth-velocity filters is proposed. To the best of the author's knowledge, the proposed 5-D FIR and IIR depth-velocity filters and the proposed 5-D IIR adaptive depth-velocity filter are the first such 5-D filters applied for enhancing moving objects in LFVs based on their depth and velocity. Numerically generated LFVs and LFVs of real scenes, generated by means of a commercially available Lytro light field (LF) camera, are used to test the effectiveness of the proposed 5-D depth-velocity filters. Numerical simulation results indicate that the proposed 5-D depth-velocity filters outperform the 3-D velocity filters and the four-dimensional (4-D) depth filters in enhancing moving objects in LFVs. More importantly, the proposed 5-D depth-velocity filters are capable of exposing heavily occluded parts of a scene and of attenuating noise significantly. Considering their ultra-low complexity, the proposed 5-D IIR depth-velocity filter and the proposed 5-D IIR adaptive depth-velocity filter have significant potential to be employed in real-time applications.
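As background for the velocity component of these filters (standard motion-spectrum geometry, not the thesis's 5-D derivation): a pattern translating rigidly with constant velocity $(v_x, v_y)$ in a 3-D video, $f(x - v_x t,\, y - v_y t)$, has its spectrum confined to the plane

$v_x\,\omega_x + v_y\,\omega_y + \omega_t = 0,$

so a velocity filter is one whose passband hugs this plane; adding the two angular light-field dimensions and the depth constraint is what turns this planar ROS into the skewed 3-D hyperfans in the 5-D frequency domain described above.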
APA, Harvard, Vancouver, ISO, and other styles
26

Lin, Ming-Huang, and 林明煌. "A Sensor Array System for Multi-dimensional Signal Detection." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/28819282560905862120.

Full text
Abstract:
Master's thesis
Feng Chia University
Department of Automatic Control Engineering
87 (ROC calendar year)
In this paper, we develop a sensor array measurement system that combines a serial-network circuit with man-machine interface programs for the detection of multi-dimensional signals. In a two-dimensional measurement application, 16 PT-100 thermistors are mounted uniformly in a 4-by-4 grid on the surface of a metal plate. As signals from the sensor array pass through signal conditioning and cross-bilinear interpolation analysis, the dynamic temperature variation across the metal surface can be obtained. The system can be very useful for composite material design applications. In a three-dimensional measurement application, five solar cells are attached to the surfaces of a cube, one per face except the bottom. Through a reverse-angle algorithm, the relation between azimuth, zenith angle and sunlight intensity can be analyzed, which helps to improve the efficiency of a solar power system. Most sensor array systems are developed for a specific purpose, at high cost and with low expandability. In contrast, this multi-purpose design, with the easy connection and synchronous triggering that sensor array measurement requires, offers flexible parameter design and excellent expandability. The online measurement software, developed under the LabVIEW environment, is capable of multi-dimensional real-time signal detection and simulation. Although the system was developed under a limited budget, its performance is acceptable for common applications. Based on multi-point measurement data, the measured results consistently coincide with the simulation results. The user-friendly system is easy to operate, and its low cost makes this multi-dimensional signal processing system attractive for industrial and commercial applications.
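A minimal sketch of the bilinear interpolation step that such a 4-by-4 sensor grid calls for (illustrative Python; the grid values, coordinates and function name are ours, not the thesis's):

import numpy as np

def bilinear(grid, x, y):
    # Interpolate a value at fractional coordinates (x, y); grid[row, col] = grid[y, x].
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, grid.shape[1] - 1)
    y1 = min(y0 + 1, grid.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
    bottom = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
    return (1 - fy) * top + fy * bottom

# Toy usage: 4-by-4 thermistor readings (deg C), queried between sensors.
temps = np.array([[20.0, 21.0, 22.0, 23.0],
                  [20.5, 21.5, 22.5, 23.5],
                  [21.0, 22.0, 23.0, 24.0],
                  [21.5, 22.5, 23.5, 24.5]])
print(bilinear(temps, 1.25, 2.5))   # temperature estimate between the 16 sensors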
APA, Harvard, Vancouver, ISO, and other styles
27

Chang, Yu-Chu, and 張佑竹. "Vector Quantization and its Application to multi-dimensional Digital Signal Processing." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/46496766224358760087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Srinivasan, Sabeshan. "Object tracking in distributed video networks using multi-dimensional signatures /." 2006. http://www.library.umaine.edu/theses/pdf/SrinivasanSX2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Su, Hung, and 蘇弘. "Multi-dimensional Histogram-based Watermarking Scheme for Resisting Geometric and Signal Processing Attacks." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/22594619266851566270.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Computer Science and Engineering
92 (ROC calendar year)
Many digital watermarking schemes have been proposed recently for copyright protection, owing to the rapid growth of multimedia data distribution. Robustness is one of the crucial issues in watermarking, yet most traditional digital watermarking schemes cannot resist both geometric distortion and signal processing attacks well. There are two types of solutions for resisting geometric attacks: non-blind and blind methods. With the non-blind approach, the original image is available, so the problem can be resolved by searching between the geometrically attacked and unattacked images. The blind solution, which does not use the original image during watermark extraction, is obviously more challenging. In this research, we propose a blind watermarking scheme based on histogram properties. We propose a novel scheme that defines a lattice structure over the color space of the host image for embedding the watermark data. We compute histograms of various properties of the host image and partition each histogram space into several divisions with dynamic intervals, so that each division contains an equal number of pixels. We then embed the watermark data by modifying the distribution within each division. Experimental results show that the algorithm is robust against common geometric attacks and high-quality JPEG compression at the same time.
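A minimal sketch of the equal-population histogram partitioning described above (quantile binning; the variable names and bin count are illustrative, not the thesis's notation):

import numpy as np

def equal_population_bins(values, n_bins):
    # Bin edges at empirical quantiles, so each division holds ~len(values)/n_bins pixels.
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    labels = np.clip(np.searchsorted(edges, values, side="right") - 1, 0, n_bins - 1)
    return edges, labels

# Toy usage: luminance values of a synthetic image.
rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=10_000)
edges, labels = equal_population_bins(pixels, n_bins=8)
print(np.bincount(labels))   # roughly equal pixel counts per division

Because the divisions track the data's own quantiles, they deform together with global intensity changes, which is one intuition for why histogram-based embedding tolerates many geometric and signal processing attacks.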
APA, Harvard, Vancouver, ISO, and other styles
30

Sevcenco, Ioana Speranta. "Multi-dimensional digital signal integration with applications in image, video and light field processing." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9915.

Full text
Abstract:
Multi-dimensional digital signals have become an intertwined part of day to day life, from digital images and videos used to capture and share life experiences, to more powerful scene representations such as light field images, which open the gate to previously challenging tasks, such as post capture refocusing or eliminating visible occlusions from a scene. This dissertation delves into the world of multi-dimensional signal processing and introduces a tool of particular use for gradient based solutions of well-known signal processing problems. Specifically, a technique to reconstruct a signal from a given gradient data set is developed in the case of two dimensional (2-D), three dimensional (3-D) and four dimensional (4-D) digital signals. The reconstruction technique is multiresolution in nature, and begins by using the given gradient to generate a multi-dimensional Haar wavelet decomposition of the signals of interest, and then reconstructs the signal by Haar wavelet synthesis, performed on successive resolution levels. The challenges in developing this technique are non-trivial and are brought about by the applications at hand. For example, in video content replacement, the gradient data from which a video sequence needs to be reconstructed is a combination of gradient values that belong to different video sequences. In most cases, such operations disrupt the conservative nature of the gradient data set. The effects of the non-conservative nature of the newly generated gradient data set are attenuated by using an iterative Poisson solver at each resolution level during the reconstruction. A second and more important challenge is brought about by the increase in signal dimensionality. In a previous approach, an intermediate extended signal with symmetric region of support is obtained, and the signal of interest is extracted from it. This approach is reasonable in 2-D, but becomes less appealing as the signal dimensionality increases. To avoid generating data that is then discarded, a new approach is proposed, in which signal extension is no longer performed. Instead, different procedures are suggested to generate a non-symmetric Haar wavelet decomposition of the signals of interest. In the case of 2-D and 3-D signals, ways to obtain this decomposition exactly from the given gradient data and the average value of the signal are proposed. In addition, ways to approximate a subset of decomposition coefficients are introduced and the visual consequences of such approximations are studied in the special case of 2-D digital images. Several ways to approximate the same subset of decomposition coefficients are developed in the special case of 4-D light field images. Experiments run on various 2-D, 3-D and 4-D test signals are included to provide an insight on the performance of the reconstruction technique. The value of the multi-dimensional reconstruction technique is then demonstrated by including it in a number of signal processing applications. First, an efficient algorithm is developed with the purpose of combining information from the gradient of a set of 2-D images with different regions in focus or different exposure times, with the purpose of generating an all-in-focus image or revealing details that were lost due to improper exposure setting. Moving on to 3-D signal processing applications, two video editing problems are studied and gradient based solutions are presented. 
In the first one, the objective is to seamlessly place content from one video sequence in another, while in the second one, to combine elements from two video sequences and generate a transparency effect. Lastly, a gradient based technique for editing 4-D scene representations (light fields) is presented, as well as a technique to combine information from two light fields with the purpose of generating a light field with more details of the imaged scene. All these applications show that the developed technique is a reliable tool for gradient domain based solutions of signal processing problems.
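For comparison with the multiresolution Haar technique described above, gradient-domain reconstruction is often demonstrated with an FFT-based Poisson solver; the following sketch (a standard approach assuming periodic boundaries, not the thesis's method) recovers an image, up to a constant, from its forward-difference gradients:

import numpy as np

def poisson_reconstruct(gx, gy):
    # Recover an image (up to a constant) from gradient fields gx, gy.
    H, W = gx.shape
    # Divergence of the gradient field via backward differences = discrete Laplacian of the image.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    wx = 2.0 * np.pi * np.fft.fftfreq(W)
    wy = 2.0 * np.pi * np.fft.fftfreq(H)
    denom = (2.0 * np.cos(wx)[None, :] - 2.0) + (2.0 * np.cos(wy)[:, None] - 2.0)
    denom[0, 0] = 1.0                       # avoid division by zero at DC
    F = np.fft.fft2(div) / denom
    F[0, 0] = 0.0                           # pin the free constant: zero-mean solution
    return np.real(np.fft.ifft2(F))

# Toy usage: rebuild a smooth periodic surface from its forward differences.
y, x = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * x / 64) * np.cos(2 * np.pi * y / 64)
gx = np.roll(img, -1, axis=1) - img
gy = np.roll(img, -1, axis=0) - img
print(np.allclose(poisson_reconstruct(gx, gy), img - img.mean(), atol=1e-8))

When the gradient field has been edited and is no longer conservative, such a solver returns the least-squares-consistent image, which is the role the iterative Poisson corrections play at each resolution level of the thesis's method.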
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, Guan-Wen, and 陳冠汶. "Real-time measurement of neuron signal -the development of two dimensional multi-electrode array chip and signal measuring system." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/32037629428674826392.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
99 (ROC calendar year)
In this research, we set up a measurement platform for recording neuron ensembles using semiconductor technology. A 2D multi-electrode array (MEA) chip, interface circuits, and a signal recording system were designed and implemented. The MEA chips are fabricated in a clean room using photolithography and thin film deposition. We fabricated two kinds of MEA chips: one with an ITO electrode array and one with a gold electrode array. These chips are composed of a glass substrate, ITO or gold thin film electrodes, and a silicon dioxide insulation layer. The ITO chips comprise ten electrodes; each ITO electrode is 500 μm long, 200 nm wide and 150 nm thick. The gold chips comprise eighteen electrodes arranged in a 9x2 matrix; each gold electrode is 200 μm long, 40 μm wide and 100 nm thick. Connecting these chips to probes enables transmission of the detected neuronal signals to the signal processing system. The signal processing system includes an interface circuit, composed of an instrumentation amplifier, a Fliege notch filter, an MFB high-pass filter and an MFB low-pass filter, together with an analog-to-digital converter card with a LabVIEW interface. Bio-electrical signals from embryonic rat neurons were successfully detected with this system.
APA, Harvard, Vancouver, ISO, and other styles
32

Vlok, Jacobus David. "Sparse graph codes on a multi-dimensional WCDMA platform." Diss., 2007. http://upetd.up.ac.za/thesis/available/etd-07042007-155428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chang, An-Ni, and 張安妮. "Determination of the solution structure of the Chromo2 domain of the chloroplast signal recognition particle using multi-dimensional NMR spectroscopy." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/65357759930250636454.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Chemistry
90 (ROC calendar year)
The signal recognition particle (SRP) is a ubiquitous system for the targeting of membrane and secreted proteins. The chloroplast SRP (cpSRP) is unique among SRPs in that it possesses no RNA and is functional in post-translational as well as co-translational targeting. cpSRP is present within two pools in the chloroplast: a co-translationally active SRP54 homologue (cpSRP54) and a post-translationally active cpSRP, which is a complex of cpSRP54 and the novel SRP component cpSRP43. The presence or absence of cpSRP43 seems to determine the targeting activity of cpSRP54. cpSRP43 has the following modules: Chromo1, Ank1-4, Chromo2 and Chromo3. Recent studies show that Chromo2 is required for both targeting complex formation and integration, because this module interacts with cpSRP54. Hence, Chromo2 plays a crucial role in the formation of the "transit complex" containing cpSRP54, cpSRP43 and Lhcb1, which is responsible for targeting proteins into the chloroplast and into the thylakoid membrane. In this context, we attempted to solve the solution structure of the Chromo2 domain using multi-dimensional NMR spectroscopy. We carried out a variety of three-dimensional NMR experiments, such as 15N-HSQC NOESY, 15N-HSQC TOCSY, HNCA, HN(CO)CA and HNHA, and also performed 2D NOESY and TOCSY. Assignments of all protons, 15N and 13Ca have been made using the 2D NOESY and TOCSY, 15N-HSQC NOESY, 15N-HSQC TOCSY, HNCA and HN(CO)CA datasets. TALOS and CSI were used to predict the dihedral angles and secondary structure of Chromo2. The structure is being calculated by a dynamical simulated annealing protocol using ARIA (structure calculation software generously provided by Michael Nilges). We also characterized the Chromo2 domain using biophysical techniques such as UV-CD, fluorescence and FPLC. The results will be discussed in greater detail.
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Ting. "Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data." Thesis, 2012. http://www.theses.fr/2012ISAL0030/document.

Full text
Abstract:
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. As an example, for predicting salvageable tissue, ischemic studies in which combinations of multiple MRI imaging modalities (DWI, PWI) are used produce more conclusive results than studies made using a single modality. However, the multi-modality approach necessitates the use of more advanced algorithms to perform otherwise regular image processing tasks such as filtering, segmentation and clustering. A robust method for addressing the problems associated with processing data obtained from multi-modality imaging is Mean Shift, which is based on feature space analysis and on non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the mean shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise in processing the feature space and how Mean Shift can be tuned for optimal de-noising and also to reduce blurring. The large success of Mean Shift is mainly due to the intuitive tuning of bandwidth parameters which describe the scale at which features are analyzed. Based on univariate Plug-In (PI) bandwidth selectors of kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI for Mean Shift filtering. We study the interest of using diagonal and full bandwidth matrices with experiments on synthesized and natural images. We propose a new and automatic volume-based segmentation framework which combines Mean Shift filtering and Region Growing segmentation as well as Probability Map optimization. The framework is developed using synthesized MRI images as test data and yielded a perfect segmentation with DICE similarity measurement values reaching the highest value of 1. Testing is then extended to real MRI data obtained from animals and patients with the aim of predicting the evolution of the ischemic penumbra several days following the onset of ischemia, using only information obtained from the very first scan. The results obtained are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patient MRI scans; the reference images for both cases were manually segmented by a team of expert medical staff. In addition, the most relevant combination of parameters for the MRI modalities is determined.
APA, Harvard, Vancouver, ISO, and other styles
35

Freeman, Kim Renee. "In situ three-dimensional reconstruction of mouse heart sympathetic innervation by two-photon excitation fluorescence imaging." Thesis, 2014. http://hdl.handle.net/1805/4030.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The sympathetic nervous system strongly modulates the contractile and electrical function of the heart. The anatomical underpinnings that enable a spatially and temporally coordinated dissemination of sympathetic signals within the cardiac tissue are only incompletely characterized. In this work we took the first step of unraveling the in situ 3D microarchitecture of the cardiac sympathetic nervous system. Using a combination of two-photon excitation fluorescence microscopy and computer-assisted image analyses, we reconstructed the sympathetic network in a portion of the left ventricular epicardium from adult transgenic mice expressing a fluorescent reporter protein in all peripheral sympathetic neurons. The reconstruction revealed several organizational principles of the local sympathetic tree that synergize to enable a coordinated and efficient signal transfer to the target tissue. First, synaptic boutons are aligned with high density along much of axon-cell contacts. Second, axon segments are oriented parallel to the main, i.e., longitudinal, axes of their apposed cardiomyocytes, optimizing the frequency of transmitter release sites per axon/per cardiomyocyte. Third, the local network was partitioned into branched and/or looped sub-trees which extended both radially and tangentially through the image volume. Fourth, sub-trees arrange to not much overlap, giving rise to multiple annexed innervation domains of variable complexity and configuration. The sympathetic network in the epicardial border zone of a chronic myocardial infarction was observed to undergo substantive remodeling, which included almost complete loss of fibers at depths >10 µm from the surface, spatially heterogeneous gain of axons, irregularly shaped synaptic boutons, and formation of axonal plexuses composed of nested loops of variable length. In conclusion, we provide, to the best of our knowledge, the first in situ 3D reconstruction of the local cardiac sympathetic network in normal and injured mammalian myocardium. Mapping the sympathetic network connectivity will aid in elucidating its role in sympathetic signal transmission and processing.
APA, Harvard, Vancouver, ISO, and other styles