Follow this link to see other types of publications on the topic: Reconstruction par Patch.

Journal articles on the topic "Reconstruction par Patch"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 40 journal articles for your research on the topic "Reconstruction par Patch".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Rabii, R., F. Saint, L. Salomon, A. Hoznek, D. K. Chopin, and C. C. Abbou. "La reconstruction artérielle par patch aortique détubulé dans la greffe simultanée rein–pancréas". Annales d'Urologie 36, no. 3 (2002): 168–70. http://dx.doi.org/10.1016/s0003-4401(02)00094-3.

Full text
2

Park, Hyeseung, and Seungchul Park. "Improving Monocular Depth Estimation with Learned Perceptual Image Patch Similarity-Based Image Reconstruction and Left–Right Difference Image Constraints". Electronics 12, no. 17 (September 4, 2023): 3730. http://dx.doi.org/10.3390/electronics12173730.

Full text
Abstract
This paper introduces a novel approach for self-supervised monocular depth estimation. The model is trained on stereo–image (left–right pair) data and incorporates carefully designed perceptual image quality assessment-based loss functions for image reconstruction and left–right image difference. The fidelity of the reconstructed images, obtained by warping the input images using the predicted disparity maps, significantly influences the accuracy of depth estimation in self-supervised monocular depth networks. The suggested LPIPS (Learned Perceptual Image Patch Similarity)-based evaluation of image reconstruction accurately emulates human perceptual mechanisms to quantify the quality of reconstructed images, serving as an image reconstruction loss. Consequently, it facilitates the gradual convergence of the reconstructed images toward a greater similarity with the target images during the training process. A stereo–image pair often exhibits slight discrepancies in brightness, contrast, color, and camera angle due to factors like lighting conditions and camera calibration inaccuracies. These factors limit the improvement of image reconstruction quality. To address this, the left–right difference image loss is introduced, aimed at aligning the disparities between the actual left–right image pair and the reconstructed left–right image pair. Due to the tendency of distant pixel values to approach zero in the difference images derived from the left and right source images of stereo pairs, this loss progressively steers the distant pixel values of the reconstructed difference images toward convergence with zero. Hence, the use of this loss has demonstrated its efficacy in mitigating distortions in distant regions while enhancing overall performance. The primary objective of this study is to introduce and validate the effectiveness of LPIPS-based image reconstruction and left–right difference image losses in the context of monocular depth estimation. To this end, the proposed loss functions have been seamlessly integrated into a straightforward single-task stereo–image learning framework, incorporating simple hyperparameters. Notably, our approach achieves superior results compared to other state-of-the-art methods, even those adopting more intricate hybrid data and multi-task learning strategies.
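For illustration, a minimal PyTorch sketch of the two losses this abstract describes; the lpips package is a public implementation of the metric, while the tensor names and the 0.1 weighting are assumptions, not the authors' code:

```python
import torch
import lpips  # pip install lpips; reference implementation of the LPIPS metric

lpips_fn = lpips.LPIPS(net="alex")  # expects NCHW tensors scaled to [-1, 1]

def reconstruction_loss(target, recon):
    # Perceptual distance between the warped (reconstructed) view and the target view.
    return lpips_fn(recon, target).mean()

def lr_difference_loss(left, right, left_rec, right_rec):
    # L1 gap between the real and the reconstructed left-right difference images;
    # distant pixels of both difference images tend toward zero.
    return torch.abs((left - right) - (left_rec - right_rec)).mean()

# Dummy stereo batch (B, C, H, W) standing in for network inputs/outputs.
left = torch.rand(2, 3, 64, 64) * 2 - 1
right = torch.rand(2, 3, 64, 64) * 2 - 1
left_rec, right_rec = left.clone(), right.clone()

loss = reconstruction_loss(left, left_rec) + 0.1 * lr_difference_loss(left, right, left_rec, right_rec)
```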
3

Tarchouli, Marwa, Marc Riviere, Thomas Guionnet, Wassim Hamidouche, Meriem Outtas, and Olivier Deforges. "Patch-Based Image Learned Codec using Overlapping". Signal & Image Processing: An International Journal 14, no. 1 (February 27, 2023): 1–21. http://dx.doi.org/10.5121/sipij.2023.14101.

Full text
Abstract
End-to-end learned image and video codecs, based on an auto-encoder architecture, adapt naturally to image resolution thanks to their convolutional aspect. However, while coding high resolution images, these codecs face hardware problems such as memory saturation. This paper proposes a patch-based image coding solution based on an end-to-end learned model, which aims to remedy the hardware limitation while maintaining the same quality as full resolution image coding. Our method consists of coding overlapping patches of the image and reconstructing them into a decoded image using a weighting function. This approach manages to be on par with the performance of full resolution image coding using an end-to-end learned model, and even slightly outperforms it, while being adaptable to different memory sizes. Moreover, this work undertakes a full study on the effect of the patch size on this solution's performance, and consequently determines the best patch resolution in terms of coding time and coding efficiency. Finally, the method introduced in this work is also compatible with any learned codec based on a conv/deconvolutional autoencoder architecture without having to retrain the model.
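A minimal NumPy sketch of the overlap-and-blend idea described above; the linear-ramp weighting is an assumption standing in for the paper's unspecified weighting function:

```python
import numpy as np

def split_overlapping(img, patch=256, overlap=32):
    """Yield (y, x, tile) covering an (H, W, C) image with overlapping tiles."""
    step = patch - overlap
    H, W = img.shape[:2]
    for y in range(0, max(H - overlap, 1), step):
        for x in range(0, max(W - overlap, 1), step):
            yield y, x, img[y:y + patch, x:x + patch]

def merge_weighted(tiles, shape, patch=256, overlap=32):
    """Blend decoded tiles with a linear ramp over the overlap bands."""
    out = np.zeros(shape, dtype=np.float64)
    acc = np.zeros(shape[:2], dtype=np.float64)
    ramp = np.minimum(np.arange(1, patch + 1), overlap) / overlap  # assumed weighting
    for y, x, t in tiles:
        h, w = t.shape[:2]
        wmap = np.outer(np.minimum(ramp[:h], ramp[:h][::-1]),
                        np.minimum(ramp[:w], ramp[:w][::-1]))
        out[y:y + h, x:x + w] += t * wmap[..., None]
        acc[y:y + h, x:x + w] += wmap
    return out / np.maximum(acc[..., None], 1e-8)

# Round trip: each tile would pass through the learned codec before blending.
img = np.random.rand(512, 768, 3)
tiles = [(y, x, t) for y, x, t in split_overlapping(img)]
rec = merge_weighted(tiles, img.shape)
```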
4

Wilén, Nina. "Entre souveraineté copartagée et coopération conditionnelle". Études internationales 42, no. 2 (September 13, 2011): 159–77. http://dx.doi.org/10.7202/1005824ar.

Full text
Abstract
This article examines the reconstruction of the Liberian state by external actors, focusing in particular on the Governance and Economic Management Assistance Program (GEMAP). This program relies on co-signatures by international experts for state expenditures. The argument made here is that Liberia finds itself in a paradoxical situation, halfway between conditional cooperation and a co-sovereignty with its international partners that is neither voluntary nor clearly imposed. To understand this phenomenon, the author adopts a theoretical framework based on path dependency. The article therefore examines the foundation and functioning of GEMAP in order to illustrate this argument.
5

Dandi, Yatin, Homanga Bharadhwaj, Abhishek Kumar, and Piyush Rai. "Generalized Adversarially Learned Inference". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7185–92. http://dx.doi.org/10.1609/aaai.v35i8.16883.

Full text
Abstract
Allowing effective inference of latent vectors while training GANs can greatly increase their applicability in various downstream tasks. Recent approaches, such as ALI and BiGAN frameworks, develop methods of inference of latent variables in GANs by adversarially training an image generator along with an encoder to match two joint distributions of image and latent vector pairs. We generalize these approaches to incorporate multiple layers of feedback on reconstructions, self-supervision, and other forms of supervision based on prior or learned knowledge about the desired solutions. We achieve this by modifying the discriminator's objective to correctly identify more than two joint distributions of tuples of an arbitrary number of random variables consisting of images, latent vectors, and other variables generated through auxiliary tasks, such as reconstruction and inpainting or as outputs of suitable pre-trained models. We design a non-saturating maximization objective for the generator-encoder pair and prove that the resulting adversarial game corresponds to a global optimum that simultaneously matches all the distributions. Within our proposed framework, we introduce a novel set of techniques for providing self-supervised feedback to the model based on properties, such as patch-level correspondence and cycle consistency of reconstructions. Through comprehensive experiments, we demonstrate the efficacy, scalability, and flexibility of the proposed approach for a variety of tasks. The appendix of the paper can be found at the following link: https://drive.google.com/file/d/1i99e682CqYWMEDXlnqkqrctGLVA9viiz/view?usp=sharing
6

Kwok, Ka Hei Martin, Matti Kortelainen, Giuseppe Cerati, Alexei Strelchenko, Oliver Gutsche, Allison Reinsvold Hall, Steve Lantz, et al. "Application of performance portability solutions for GPUs and many-core CPUs to track reconstruction kernels". EPJ Web of Conferences 295 (2024): 11003. http://dx.doi.org/10.1051/epjconf/202429511003.

Full text
Abstract
Next generation High-Energy Physics (HEP) experiments are presented with significant computational challenges, both in terms of data volume and processing power. Using compute accelerators, such as GPUs, is one of the promising ways to provide the necessary computational power to meet the challenge. The current programming models for compute accelerators often involve using architecture-specific programming languages promoted by the hardware vendors and hence limit the set of platforms that the code can run on. Developing software with platform restrictions is especially unfeasible for HEP communities as it takes significant effort to convert typical HEP algorithms into ones that are efficient for compute accelerators. Multiple performance portability solutions have recently emerged and provide an alternative path for using compute accelerators, which allow the code to be executed on hardware from different vendors. We apply several portability solutions, such as Kokkos, SYCL, C++17 std::execution::par, Alpaka, and OpenMP/OpenACC, on two mini-apps extracted from the mkFit project: p2z and p2r. These apps include basic kernels for a Kalman filter track fit, such as propagation and update of track parameters, for detectors at a fixed z or fixed r position, respectively. The two mini-apps explore different memory layout formats. We report on the development experience with different portability solutions, as well as their performance on GPUs and many-core CPUs, measured as the throughput of the kernels from different GPU and CPU vendors such as NVIDIA, AMD and Intel.
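The kernels mentioned here are standard Kalman filter steps; a plain NumPy version of the measurement update (shown outside any portability layer, purely for reference; shapes and values are illustrative) looks like this:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Textbook Kalman filter measurement update, the kind of per-track kernel
    the p2z/p2r mini-apps run on GPUs and many-core CPUs."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # updated track parameters
    P_new = (np.eye(len(x)) - K @ H) @ P     # updated covariance
    return x_new, P_new

# Toy 6-parameter track state measured in 3 coordinates.
x, P = np.zeros(6), np.eye(6)
H = np.eye(3, 6)
x, P = kalman_update(x, P, z=np.array([0.1, -0.2, 0.05]), H=H, R=0.01 * np.eye(3))
```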
7

MacKenzie, Hector M. "The Path to Temptation: The Negotiation of Canada's Reconstruction Loan to Britain in 1946". Historical Papers 17, no. 1 (April 26, 2006): 196–220. http://dx.doi.org/10.7202/030891ar.

Full text
Abstract
During the Second World War, Canada's economic dependence on international trade increased considerably. By 1944, Canadian exports accounted for 31 percent of national income. A good share of these exports, however, went to the United Kingdom and consisted above all of Canadian contributions to what was then called the war effort. Consequently, throughout the war years the British market exerted a definite, often decisive, influence on Canada's external economic policies. It is therefore understandable that Canada closely monitored Whitehall's trade policies, preoccupied as it was with the problems of financing trade between the two countries during the transition from war to peace and with the need to secure a stable and prosperous market for the future. The author examines one of the agreements that resulted from these concerns: the loan granted by Canada to the United Kingdom in 1946 for its reconstruction. Placing the question in the context of the economic and political interests of the Canadian government of the day, the author reviews the background of the initiative and traces in detail the various negotiations that led to the final agreement.
8

Gapon, Nikolay, Roman Sizyakin, Marina Zhdanova, Oksana Balabaeva, and Yigang Cen. "Modified Depth-Map Inpainting Method Using the Neural Network". EPJ Web of Conferences 224 (2019): 04005. http://dx.doi.org/10.1051/epjconf/201922404005.

Full text
Abstract
This paper proposes a method for reconstructing a depth map obtained from a stereo image pair. The proposed approach is based on a geometric model for the synthesis of patches. The entire image is preliminarily divided into blocks of different sizes, where large blocks are used to restore homogeneous areas and small blocks are used to restore details of the image structure. Lost pixels are recovered by copying the pixel values from the source based on a similarity criterion. We used a trained neural network to select the "best like" patch. Experimental results show that the proposed method gives better results than other modern methods, in both subjective and objective measurements, for reconstructing a depth map.
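A small sketch of the similarity-based patch selection described above, with plain SSD standing in for the paper's trained network (illustrative only; sizes and names are assumptions):

```python
import numpy as np

def best_patch(target, mask, candidates):
    """Return the candidate patch most similar to the known pixels of `target`.
    The paper delegates this choice to a trained neural network; plain SSD on
    the valid pixels is used here as a stand-in similarity criterion."""
    known = ~mask  # mask is True where depth values were lost
    scores = [np.sum((c[known] - target[known]) ** 2) for c in candidates]
    return candidates[int(np.argmin(scores))]

# Toy usage: fill an 8x8 hole from a bank of candidate patches.
target = np.random.rand(8, 8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
candidates = [np.random.rand(8, 8) for _ in range(50)]
patch = best_patch(target, mask, candidates)
target[mask] = patch[mask]  # copy pixel values from the best source patch
```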
9

Mehedi, Ibrahim M., K. Prahlad Rao, Ubaid M. Al-Saggaf, Hadi Mohsen Alkanfery, Maamar Bettayeb, and Rahtul Jannat. "Intelligent Tomographic Microwave Imaging for Breast Tumor Localization". Mathematical Problems in Engineering 2022 (May 25, 2022): 1–9. http://dx.doi.org/10.1155/2022/4090351.

Full text
Abstract
Researchers are continuously exploring the potential use of microwave imaging in the early detection of breast cancer. The technique offers a promising alternative to mammography, a standard clinical imaging procedure today. The contrast in dielectric properties between normal and cancerous tissues makes microwave imaging a viable technique for detecting breast cancer. Experimental results are presented in this paper that demonstrate the detection of breast cancer using microwaves operating at 2.4 GHz. The procedure involves antenna fabrication, phantom tissue development, and image reconstruction. The design and fabrication of the patch antenna used in the study are described in detail. The patch antenna pair is used for transmitting and receiving source waves. Tissue-mimicking models were developed from paraffin wax and glycerin with dielectric constants of 9 and 47, representing the tissue and tumor, respectively. Further, AI-based tomographic images were obtained by implementing a filtered back-projection algorithm on a computer. In the results, the presence of the tumor is quantitatively analyzed.
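For reference, a toy filtered back-projection run using scikit-image's radon/iradon; the phantom values 9 and 47 mirror the dielectric constants mentioned above, everything else (grid size, angles) is assumed:

```python
import numpy as np
from skimage.transform import radon, iradon  # classical filtered back-projection

# Toy dielectric map: background permittivity ~9 with a high-contrast (~47)
# inclusion standing in for the tumor probed at 2.4 GHz.
phantom = np.full((128, 128), 9.0)
phantom[60:68, 60:68] = 47.0

angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(phantom, theta=angles)                      # simulated projections
image = iradon(sinogram, theta=angles, filter_name="ramp")   # FBP reconstruction
```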
10

Stathopoulou, E. K., and F. Remondino. "MULTI-VIEW STEREO WITH SEMANTIC PRIORS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W15 (August 26, 2019): 1135–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w15-1135-2019.

Full text
Abstract
Patch-based stereo is nowadays a commonly used image-based technique for dense 3D reconstruction in large scale multi-view applications. The typical steps of such a pipeline can be summarized in stereo pair selection, depth map computation, depth map refinement and, finally, fusion in order to generate a complete and accurate representation of the scene in 3D. In this study, we aim to support the standard dense 3D reconstruction of scenes as implemented in the open source library OpenMVS by using semantic priors. To this end, during the depth map fusion step, along with the depth consistency check between depth maps of neighbouring views referring to the same part of the 3D scene, we impose extra semantic constraints in order to remove possible errors and selectively obtain segmented point clouds per label, boosting automation towards this direction. In order to ensure semantic coherence between neighbouring views, additional semantic criteria can be considered, aiming to eliminate mismatches of pixels belonging to different classes.
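A minimal sketch of the extra check described above: a point survives fusion only if neighbouring views agree on both depth and semantic label (thresholds and names are assumptions, not the OpenMVS implementation):

```python
import numpy as np

def fuse_point(depths, labels, depth_tol=0.01, min_views=3):
    """Accept a candidate 3D point only if enough neighbouring views agree on
    depth AND on the semantic class of the corresponding pixels."""
    d0, l0 = depths[0], labels[0]
    agree = sum(abs(d - d0) / d0 < depth_tol and l == l0
                for d, l in zip(depths[1:], labels[1:]))
    return agree + 1 >= min_views

# Depths/labels of the same scene point seen from five views.
print(fuse_point(np.array([10.0, 10.02, 10.05, 9.99, 10.01]),
                 ["building"] * 4 + ["vegetation"]))  # True: 4 views agree
```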
11

Xu, Wang, Kehai Chen, and Tiejun Zhao. "Document-Level Relation Extraction with Reconstruction". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14167–75. http://dx.doi.org/10.1609/aaai.v35i16.17667.

Full text
Abstract
In document-level relation extraction (DocRE), graph structure is generally used to encode relation information in the input document to classify the relation category between each entity pair, and has greatly advanced the DocRE task over the past several years. However, the learned graph representation universally models relation information between all entity pairs regardless of whether there are relationships between these entity pairs. Thus, entity pairs without relationships disperse the attention of the encoder-classifier DocRE model away from those with relationships, which may further hinder the improvement of DocRE. To alleviate this issue, we propose a novel encoder-classifier-reconstructor model for DocRE. The reconstructor manages to reconstruct the ground-truth path dependencies from the graph representation, to ensure that the proposed DocRE model pays more attention to encoding entity pairs with relationships in training. Furthermore, the reconstructor is regarded as a relationship indicator to assist relation classification in inference, which can further improve the performance of the DocRE model. Experimental results on a large-scale DocRE dataset show that the proposed model can significantly improve the accuracy of relation extraction on a strong heterogeneous graph-based baseline. The code is publicly available at https://github.com/xwjim/DocRE-Rec.
12

JAROMERSKA, SLAVKA, PETR PRAUS, and YOUNG-RAE CHO. "DISTANCE-WISE PATHWAY DISCOVERY FROM PROTEIN–PROTEIN INTERACTION NETWORKS WEIGHTED BY SEMANTIC SIMILARITY". Journal of Bioinformatics and Computational Biology 12, no. 01 (January 28, 2014): 1450004. http://dx.doi.org/10.1142/s0219720014500048.

Full text
Abstract
Reconstruction of signaling pathways is crucial for understanding cellular mechanisms. A pathway is represented as a path of a signaling cascade involving a series of proteins to perform a particular function. Since a protein pair involved in signaling and response have a strong interaction, putative pathways can be detected from protein–protein interaction (PPI) networks. However, predicting directed pathways from the undirected genome-wide PPI networks has been challenging. We present a novel computational algorithm to efficiently predict signaling pathways from PPI networks given a starting protein and an ending protein. Our approach integrates topological analysis of PPI networks and semantic analysis of PPIs using Gene Ontology data. An advanced semantic similarity measure is used for weighting each interacting protein pair. Our distance-wise algorithm iteratively selects an adjacent protein from a PPI network to build a pathway based on a distance condition. On each iteration, the strength of a hypothetical path passing through a candidate edge is estimated by a local heuristic. We evaluate the performance by comparing the resultant paths to known signaling pathways on yeast. The results show that our approach has higher accuracy and efficiency than previous methods.
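As a simple baseline for the same start-to-end search, a Dijkstra run over a similarity-weighted toy network with networkx; the protein names and weights are illustrative, and the paper's own algorithm is iterative and heuristic rather than plain shortest-path:

```python
import networkx as nx

# Toy PPI network (names loosely from the yeast pheromone pathway);
# edge weight = 1 - semantic_similarity, so similar pairs cost less.
G = nx.Graph()
G.add_weighted_edges_from([
    ("STE2", "STE4", 0.2), ("STE4", "STE5", 0.1), ("STE5", "STE11", 0.3),
    ("STE11", "STE7", 0.2), ("STE7", "FUS3", 0.1), ("STE4", "FUS3", 0.9),
])

path = nx.dijkstra_path(G, "STE2", "FUS3", weight="weight")
print(path)  # ['STE2', 'STE4', 'STE5', 'STE11', 'STE7', 'FUS3'] (total 0.9 vs 1.1 direct)
```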
13

RAYNER, J. M. V., and H. D. J. N. ALDRIDGE. "Three-Dimensional Reconstruction of Animal Flight Paths and the Turning Flight of Microchiropteran Bats". Journal of Experimental Biology 118, no. 1 (September 1, 1985): 247–65. http://dx.doi.org/10.1242/jeb.118.1.247.

Full text
Abstract
A method is presented by which a microcomputer is used to reconstruct the structure of a three-dimensional object from images obtained with a pair of non-metric cameras when the images contain the vertices of a cube as a test pattern and the camera-object configuration satisfies straightforward geometrical conditions. With a still camera and stroboscopic or repeating flash illumination, or with a cine camera, this method provides a simple and economic means of recording the flight path and wing movements of a flying animal accurately and reliably. Numerical methods for the further analysis of three-dimensional position data to determine velocity, acceleration, energy and curvature, and to interpolate and to correct for distortion due to inaccurate data records are described. The method is illustrated by analysis of a slow, powered turn of the bat Plecotus auritus (L.). Accurate reconstruction of the flight path permits mechanical forces and accelerations acting on the bat during the turn to be estimated: turning speed and radius in a narrow space are restricted by the bat's ability to generate sufficient lift to support the weight in nonlinear level flight.
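The numerical analysis step (velocity and acceleration from sampled 3D positions) reduces to finite differences; a NumPy sketch on a synthetic path, with an assumed frame rate:

```python
import numpy as np

fs = 200.0                                  # assumed frame rate of the recording
t = np.arange(0.0, 1.0, 1.0 / fs)
# Synthetic 3D flight path standing in for reconstructed (x, y, z) positions.
pos = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.1 * t], axis=1)

vel = np.gradient(pos, 1.0 / fs, axis=0)    # central-difference velocity
acc = np.gradient(vel, 1.0 / fs, axis=0)    # and acceleration
speed = np.linalg.norm(vel, axis=1)
turn_radius = speed**2 / np.linalg.norm(acc, axis=1)  # crude radius estimate
```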
14

Mukherjee, Saswati, Matangini Chattopadhyay, Samiran Chattopadhyay, and Pragma Kar. "Wormhole Detection Based on Ordinal MDS Using RTT in Wireless Sensor Network". Journal of Computer Networks and Communications 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/3405264.

Full text
Abstract
In wireless communication, wormhole attack is a crucial threat that deteriorates the normal functionality of the network. Invasion of wormholes destroys the network topology completely. However, most of the existing solutions require special hardware or synchronized clock or long processing time to defend against long path wormhole attacks. In this work, we propose a wormhole detection method using range-based topology comparison that exploits the local neighbourhood subgraph. The Round Trip Time (RTT) for each node pair is gathered to generate neighbour information. Then, the network is reconstructed by ordinal Multidimensional Scaling (MDS) followed by a suspicion phase that enlists the suspected wormholes based on the spatial reconstruction. Iterative computation of MDS helps to visualize the topology changes and can localize the potential wormholes. Finally, a verification phase is used to remove falsely accused nodes and identify real adversaries. The novelty of our algorithm is that it can detect both short path and long path wormhole links. Extensive simulations are executed to demonstrate the efficacy of our approach compared to existing ones.
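A minimal sketch of the reconstruction step with scikit-learn's non-metric (ordinal) MDS; the node layout, the RTT model, and the injected wormhole link are all synthetic assumptions:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(20, 2))                 # hypothetical sensor layout
rtt = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)  # RTT proxy: distance

rtt[3, 15] = rtt[15, 3] = 1.0   # a wormhole makes two far-apart nodes look adjacent

# Ordinal (non-metric) MDS rebuilds coordinates from the RTT matrix; nodes whose
# neighbourhood is badly distorted in the embedding become wormhole suspects.
emb = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0).fit_transform(rtt)
```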
15

Deng, Yuxin, and Jiayi Ma. "SDGMNet: Statistic-Based Dynamic Gradient Modulation for Local Descriptor Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1510–18. http://dx.doi.org/10.1609/aaai.v38i2.27916.

Full text
Abstract
Rescaling the backpropagated gradient of contrastive loss has made significant progress in descriptor learning. However, current gradient modulation strategies have no regard for the varying distribution of global gradients, so they would suffer from changes in training phases or datasets. In this paper, we propose a dynamic gradient modulation, named SDGMNet, for contrastive local descriptor learning. The core of our method is formulating modulation functions with dynamically estimated statistical characteristics. Firstly, we introduce angle as the distance measure after a deep analysis of the backpropagation of pair-wise losses. On this basis, auto-focus modulation is employed to moderate the impact of statistically uncommon individual pairs in stochastic gradient descent optimization; probabilistic margin cuts off the gradients of proportional triplets that have achieved enough optimization; power adjustment balances the total weights of negative pairs and positive pairs. Extensive experiments demonstrate that our novel descriptor surpasses previous state-of-the-art methods in several tasks including patch verification, retrieval, pose estimation, and 3D reconstruction.
16

McCain, Megan L., Thomas Desplantez, Nicholas A. Geisse, Barbara Rothen-Rutishauser, Helene Oberer, Kevin Kit Parker, and Andre G. Kleber. "Cell-to-cell coupling in engineered pairs of rat ventricular cardiomyocytes: relation between Cx43 immunofluorescence and intercellular electrical conductance". American Journal of Physiology-Heart and Circulatory Physiology 302, no. 2 (January 2012): H443–H450. http://dx.doi.org/10.1152/ajpheart.01218.2010.

Full text
Abstract
Gap junctions are composed of connexin (Cx) proteins, which mediate intercellular communication. Cx43 is the dominant Cx in ventricular myocardium, and Cx45 is present in trace amounts. Cx43 immunosignal has been associated with cell-to-cell coupling and electrical propagation, but no studies have directly correlated Cx43 immunosignal to electrical cell-to-cell conductance, g_j, in ventricular cardiomyocyte pairs. To assess the correlation between Cx43 immunosignal and g_j, we developed a method to determine both parameters from the same cell pair. Neonatal rat ventricular cardiomyocytes were seeded on micropatterned islands of fibronectin. This allowed formation of cell pairs with reproducible shapes and facilitated tracking of cell pair locations. Moreover, cell spreading was limited by the fibronectin pattern, which allowed us to increase cell height by reducing the surface area of the pattern. Whole cell dual voltage clamp was used to record g_j of cell pairs after 3–5 days in culture. Fixation of cell pairs before removal of patch electrodes enabled preservation of cell morphology and offline identification of patched pairs. Subsequently, pairs were immunostained, and the volume of junctional Cx43 was quantified using confocal microscopy, image deconvolution, and three-dimensional reconstruction. Our results show a linear correlation between g_j and Cx43 immunosignal within a range of 8–50 nS.
17

Mirchia, Kanish, Sixuan Pan, Emily Payne, Rachel Agoglia, Marc Arribas-Layton, and Harish Vasudevan. "PATH-54. HIGH-THROUGHPUT SINGLE NUCLEAR DNA SEQUENCING OF HUMAN SPORADIC NF1 MUTANT IDH-WILDTYPE GLIOBLASTOMAS REVEALS PATTERNS OF TUMOR EVOLUTION AND RECURRENCE". Neuro-Oncology 26, Supplement_8 (November 1, 2024): viii191. http://dx.doi.org/10.1093/neuonc/noae165.0753.

Full text
Abstract
The advent of single cell techniques has advanced our understanding of glioblastoma (GBM) evolution and cell lineage relationships. However, study of mutational co-occurrence and clonal phylogeny has been hampered by limitations in single cell genotyping. Here, we perform semi-automated, rapid single nuclear dissociation followed by targeted single nucleus DNA sequencing (snDNA-seq) of eleven retrospectively identified somatic NF1 mutant IDH-wildtype glioblastomas. Bulk DNA sequencing was performed as part of routine clinical care using a CLIA-certified targeted sequencing panel. For snDNA-seq, tissue cores with greater than 30% tumor were dissociated on a S2 Genomics S100 Singulator followed by snDNA-seq using a targeted 361 amplicon panel (Tapestri, Mission Bio, USA) to sequence for recurrently observed single nucleotide substitutions and copy number alterations. A mean of 3,661 cells were recovered per sample with a mean of 96 reads per cell per amplicon and 87% mean panel uniformity. SnDNA-seq validated point mutations and copy number alterations observed in bulk DNA sequencing and revealed novel alterations and subclonal copy number alterations not detected on bulk analysis. Within individual samples, snDNA-seq based phylogenetic reconstruction identified mutually exclusive patterns of mutation between NF1 and PI3K signaling alterations, suggesting these events may occur in distinct tumor subclones. With regard to copy number alterations (CNAs), snDNA-seq demonstrated intra-tumoral heterogeneity for chromosome 9p loss, chromosome 7 gain, and chromosome 10 monosomy. Subclonal CNA clones not detected by bulk sequencing were identified in 5 samples. Analysis of a matched primary-recurrent tumor pair revealed expansion of a specific mutational clone (loss of 9p, 10p, gains of 1p, 7, 15, and 19q) at recurrence. SnDNA-seq recapitulated bulk DNA sequencing alterations and identified distinct patterns of mutational co-occurrence, CNAs, and tumor evolution over time. Reconstructing GBM phylogeny at single cell resolution has potential translational implications for understanding tumor evolution and treatment resistance.
18

Gorbunov, Konstantin, and Vassily Lyubetsky. "Multiplicatively Exact Algorithms for Transformation and Reconstruction of Directed Path-Cycle Graphs with Repeated Edges". Mathematics 9, no. 20 (October 14, 2021): 2576. http://dx.doi.org/10.3390/math9202576.

Full text
Abstract
For any weighted directed path-cycle graphs, a and b (referred to as structures), and any equal costs of operations (intermergings and duplication), we obtain an algorithm which, by successively applying these operations to a, outputs b if the first structure contains no paralogs (i.e., edges with a repeated name) and the second has no more than two paralogs for each edge. In finding the shortest sequence of operations to be applied to pass from a to b, the algorithm has a multiplicative error of at most 13/9 + ε, where ε is any strictly positive number, and its runtime is of the order of n^O(ε^−2.6), where n is the size of the input pair of graphs. In the case of no paralogs, equal sets of names in the structures, and equal operation costs, we have considered the following conditions on the transformation of a into b: all structures in them are from one cycle; all structures are from one path; all structures are from paths. For each of the conditions, we have obtained an exact (i.e., zero-error) quadratic time algorithm for finding the shortest transformation of a into b. For another list of operations (join and cut of a vertex, and deletion and insertion of an edge) over structures and for arbitrary costs of these operations, we have obtained an algorithm for the extension of structures specified at the leaves of a tree onto its interior vertices. The algorithm is exact if the tree is a star—in this case, structures in the leaves may even have unequal sets of names or paralogs. The runtime of the algorithm is of the order of n^Χ + n^2·log(n), where n is the number of names in the leaves, and Χ is an easily computable characteristic of the structures in the leaves. In the general case, a cubic time algorithm finds a locally minimal solution.
19

Walby, A. Peter. "Scala Tympani Measurement". Annals of Otology, Rhinology & Laryngology 94, no. 4 (July 1985): 393–97. http://dx.doi.org/10.1177/000348948509400413.

Full text
Abstract
The length and cross-sectional height of the scala tympani are relevant to the design of cochlear implants. The lengths and heights of the scalae tympani in ten pairs of serially sectioned temporal bones were measured by an adaptation of the serial section method of cochlear reconstruction. The study found the middle segments of individual pairs of scalae tympani to be very similar in height, but each pair varied slightly from other pairs. The height decreased overall from the base to the apex, but there was a small expansion at the junction of the basal and middle turns where the interscalar septum originated. The theoretical relationships of different diameter electrodes to the organ of Corti were plotted for one cochlea. The size of the electrode and the path it followed were shown in theory to alter considerably its position in relation to the organ of Corti.
20

Meng, Fanjing, Kun Liu, and Tao Qin. "Experimental investigations of force transmission characteristics in granular flow lubrication". Industrial Lubrication and Tribology 70, no. 7 (September 10, 2018): 1151–57. http://dx.doi.org/10.1108/ilt-07-2017-0211.

Full text
Abstract
Purpose: Granular lubrication is a new lubrication method that can be used in extreme working conditions; however, the problem of force transmission characteristics urgently needs to be solved to fully understand the mechanical and bearing mechanisms of granular lubrication. Design/methodology/approach: A flat sliding friction cell is developed to study the force transmission behaviors of granules under shearing. Granular material, sliding velocity, granule size and granule humidity are considered in these experiments. The measured normal and shear force, which is transmitted from the bottom friction pair to the top friction pair via the granular lubrication medium, reveals the influence of these controlling parameters on the force transmission characteristics of granules. Findings: Experimental results show that a low sliding velocity, a large granule size and a low granular humidity increase the measured normal force and shear force. Besides, a comparison experiment with other typical lubrication styles is also carried out. The force transmission under granular lubrication is mainly dependent on the force transmission path, which is closely related to the deconstruction and reconstruction of the force chains in the granule assembly. Originality/value: These findings reveal the force transmission mechanism of granular lubrication and can also offer a helpful reference for the design of new granular lubrication bearings.
21

Borisagar, Viral H., and Mukesh A. Zaveri. "Disparity Map Generation from Illumination Variant Stereo Images Using Efficient Hierarchical Dynamic Programming". Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/513417.

Full text
Abstract
A novel hierarchical stereo matching algorithm is presented which gives a disparity map as output from an illumination-variant stereo pair. Illumination difference between two stereo images can lead to undesirable output. Stereo image pairs often experience illumination variations due to many factors, such as real and practical situations, spatially and temporally separated camera positions, environmental illumination fluctuation, and changes in the strength or position of the light sources. Window matching and dynamic programming techniques are employed for disparity map estimation. A good quality disparity map is obtained with the optimized path. Homomorphic filtering is used as a preprocessing step to lessen illumination variation between the stereo images. Anisotropic diffusion is used to refine the disparity map to give a high quality disparity map as the final output. The robust performance of the proposed approach is suitable for real life circumstances where there will always be illumination variation between the images. The matching is carried out in a sequence of images representing the same scene, however in different resolutions. The hierarchical approach adopted decreases the computation time of the stereo matching problem. This algorithm can be helpful in applications like robot navigation, extraction of information from aerial surveys, 3D scene reconstruction, and military and security applications. The SAD similarity measure is often sensitive to illumination variation and produces unacceptable disparity map results for illumination-variant left and right images. Experimental results show that our proposed algorithm produces quality disparity maps for a wide range of both illumination-variant and illumination-invariant stereo image pairs.
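For reference, the window-matching stage on its own: a brute-force SAD block matcher in NumPy. Window size and disparity range are assumptions; the paper runs this hierarchically, after homomorphic filtering, precisely because raw SAD is illumination-sensitive:

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    """Brute-force SAD block matching for one pyramid level (grayscale images)."""
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

left = np.random.rand(64, 96)
right = np.roll(left, -4, axis=1)               # synthetic shift of 4 pixels
disp = sad_disparity(left, right, max_disp=8)   # interior values cluster at 4
```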
22

Chang, Jung-Im. "The origin and the development of 焉 yān in Old Chinese". Language and Linguistics / 語言暨語言學 23, no. 4 (September 12, 2022): 601–43. http://dx.doi.org/10.1075/lali.00117.cha.

Full text
Abstract
The Old Chinese function word 焉 yān is frequently interpreted as a fusion of [於 ('at/on/in') + 此 (near demonstrative pronoun)] in terms of its meaning. Ever since Kennedy (1940a, b; 1953) argued that 焉 is a fusion of [於 + *an (third-person pronoun)], it has been controversial as to exactly which third-person pronoun/demonstrative pronoun *an corresponds to in Old Chinese. There is no third-person pronoun/demonstrative pronoun that is appropriate for this reconstruction. This paper illustrates how 焉 *Ɂan is a fusion of 於 *Ɂa and *niɁ; *nih or *nɔɁ; *nɔh, which means 'this' in Proto-Austroasiatic (PAA). The demonstrative is borrowed into Chinese through language contact in the Early Archaic Chinese period (10th to 6th c. BC). This fusion is plausible in historical and phonological terms, while the grammaticalization path of 焉 also accords with that of [於 + demonstrative]. The grammaticalization path of 焉 is examined by analyzing all occurrences of it in the Bronze Inscriptions (BI), The book of odes (Shījīng 詩經), The book of documents (Shàngshū 尚書), and Zuo's commentary (Zuǒzhuàn 左傳). Also, the usages of its etymological doublet 爰, which is considered to be a fusion of [于 *wa ('at/on/in') + near demonstrative pronoun], are analyzed in order to strengthen the argument.
23

Perera, Saavidra, Jörg-Uwe Pott, Julien Woillez, Martin Kulas, Wolfgang Brandner, Sylvestre Lacour, and Felix Widmann. "Piston Reconstruction Experiment (P-REx) – II. Off-line performance evaluation with VLTI/GRAVITY". Monthly Notices of the Royal Astronomical Society 511, no. 4 (January 8, 2022): 5709–17. http://dx.doi.org/10.1093/mnras/stab3813.

Full text
Abstract
For sensitive optical interferometry, it is crucial to control the evolution of the optical path difference (OPD) of the wavefront between the individual telescopes of the array. The OPD between a pair of telescopes is induced by differential optical properties such as atmospheric refraction, telescope alignment, etc. This has classically been measured using a fringe tracker that provides corrections to a piston actuator to account for this difference. An auxiliary method, known as the Piston Reconstruction Experiment (P-REx), has been developed to measure the OPD, or differential 'piston' of the wavefront, induced by the atmosphere at each telescope. Previously, this method was outlined and results obtained from Large Binocular Telescope adaptive optics data for a single telescope aperture were presented. P-REx has now been applied off-line to previously acquired Very Large Telescope Interferometer (VLTI) GRAVITY Coudé Infrared Adaptive Optics wavefront sensing data to estimate the atmospheric OPD for the six baselines. Comparisons with the OPD obtained from the VLTI GRAVITY fringe tracker were made. The results indicate that the telescope and instrumental noise of the combined VLTI and GRAVITY systems dominates over the atmospheric turbulence contributions. However, good agreement between simulated and on-sky P-REx data indicates that if the telescope and instrumental noise were reduced to atmospheric piston noise levels, P-REx has the potential to reduce the OPD root mean square of piston turbulence by up to a factor of 10 for frequencies down to 1 Hz. In such conditions, P-REx will assist in pushing the sensitivity limits of optical fringe tracking with long baseline interferometers.
24

Truong-Hong, L., D. F. Laefer, and R. C. Lindenbergh. "AUTOMATIC DETECTION OF ROAD EDGES FROM AERIAL LASER SCANNING DATA". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1135–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1135-2019.

Full text
Abstract
When aerial laser scanning (ALS) is deployed with targeted flight path planning, urban scenes can be captured in point clouds with both high vertical and horizontal densities to support a new generation of urban analysis and applications. As an example, this paper proposes a hierarchical method to automatically extract data points describing road edges, which are then used for reconstructing road edges and identifying accessible passage areas. The proposed approach is a cell-based method consisting of 3 main steps: (1) filtering rough ground points, (2) extracting cells containing data points of the road curb, and (3) eliminating incorrect road curb segments. The method was tested on a pair of 100 m × 100 m tiles of ALS data of Dublin, Ireland's city center with a horizontal point density of about 325 points/m². Results showed that the data points of the road edges were extracted properly, with an average distance error of 0.07 m and a ratio between the extracted road edges and the ground truth of 73.2%.
25

Möller, Gregor, and Daniel Landskron. "Atmospheric bending effects in GNSS tomography". Atmospheric Measurement Techniques 12, no. 1 (January 3, 2019): 23–34. http://dx.doi.org/10.5194/amt-12-23-2019.

Full text
Abstract
In Global Navigation Satellite System (GNSS) tomography, precise information about the tropospheric water vapor distribution is derived from integral measurements like ground-based GNSS slant wet delays (SWDs). Therefore, the functional relation between observations and unknowns, i.e., the signal paths through the atmosphere, has to be accurately known for each station–satellite pair involved. For GNSS signals observed above a 15° elevation angle, the signal path is well approximated by a straight line. However, since electromagnetic waves are prone to atmospheric bending effects, this assumption is not sufficient anymore for lower elevation angles. Thus, in the following, a mixed 2-D piecewise linear ray-tracing approach is introduced and possible error sources in the reconstruction of the bended signal paths are analyzed in more detail. Especially if low elevation observations are considered, unmodeled bending effects can introduce a systematic error of up to 10–20 ppm, on average 1–2 ppm, into the tomography solution. Thereby, not only the ray-tracing method but also the quality of the a priori field can have a significant impact on the reconstructed signal paths, if not reduced by iterative processing. In order to keep the processing time within acceptable limits, a bending model is applied for the upper part of the neutral atmosphere. It helps to reduce the number of processing steps by up to 85 % without significant degradation in accuracy. Therefore, the developed mixed ray-tracing approach allows not only for the correct treatment of low elevation observations but is also fast and applicable for near-real-time applications.
26

Tisseur, David, Navnina Bhatia, Nicolas Estre, Léonie Berge, Daniel Eck, and Emmanuel Payan. "Evaluation of a scattering correction method for high energy tomography". EPJ Web of Conferences 170 (2018): 06006. http://dx.doi.org/10.1051/epjconf/201817006006.

Full text
Abstract
One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the scattered photons due to the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. This effect is seen as an overestimation of the measured intensity, thus corresponding to an underestimation of absorption, and it results in artifacts like cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias into quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range with large objects, due to the higher Scatter to Primary Ratio (SPR). Additionally, incident high energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of the photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps in order to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions, and since no extra hardware is required, it is particularly advantageous in cases where experimental complexities must be avoided. It has previously been tested successfully in the energy range of 100 keV – 6 MeV. In this paper, the kernels are simulated using MCNP in order to take into account both photon and electron processes in the scattered radiation contribution. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
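Schematically, SKS amounts to convolving the projection with thickness-selected kernels; a NumPy/SciPy sketch with discrete thickness bins standing in for the paper's continuously adapted kernels (all names and values are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def scatter_estimate(projection, thickness, kernels, bins):
    """Scatter Kernel Superposition, schematically: convolve the projection
    with a kernel selected per pixel by the traversed material thickness."""
    est = np.zeros_like(projection, dtype=np.float64)
    for lo, hi, k in zip(bins[:-1], bins[1:], kernels):
        m = (thickness >= lo) & (thickness < hi)       # pixels in this bin
        est += fftconvolve(projection * m, k, mode="same")
    return est

proj = np.random.rand(64, 64)
thick = np.full((64, 64), 12.0)          # traversed thickness map (arbitrary units)
kernel = np.ones((9, 9)) / 81.0          # placeholder scatter kernel
corrected = proj - scatter_estimate(proj, thick, [kernel], bins=[0.0, 50.0])
```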
27

Koether, V., A. Dupont, J. Labreuche, P. Felloni, T. Perez, P. Degroote, E. Hachulla, J. Remy, M. Remy-Jardin, and D. Launay. "AB0401 CAN DUAL-ENERGY CT LUNG PERFUSION DETECT ABNORMALITIES AT THE LEVEL OF LUNG CIRCULATION IN SYSTEMIC SCLEROSIS (SSC)? PRELIMINARY EXPERIENCE IN 101 PATIENTS". Annals of the Rheumatic Diseases 80, Suppl 1 (May 19, 2021): 1229.1–1229. http://dx.doi.org/10.1136/annrheumdis-2021-eular.69.

Full text
Abstract
Background: Systemic sclerosis (SSc) is an autoimmune disorder characterized by an interplay of vascular abnormalities, immune system activation and an uncontrolled fibrotic response associated with interstitial lung disease (ILD) affecting about 40% of patients. Identification of ILD relies on high-resolution CT that identifies features suggestive of the histologic patterns of SSc (1). CT is used to determine the pattern and extent of ILD and participates in the prediction of ILD progression (2). All groups of pulmonary hypertension (PH) may occur, with an overall prevalence reported in up to one fifth of patients. Whereas extensive SSc-ILD can be responsible for PH, PH can also be seen as a consequence of myocardial abnormalities or as primarily affecting small pulmonary arteries and classified as pulmonary arterial hypertension. The introduction of dual-energy CT offers perspectives in the evaluation of SSc-related pulmonary manifestations. While these are not strictly perfusion images, they have been reported as adequate surrogate markers of lung perfusion (3). In the field of PH, detection of perfusion defects highly concordant with V/Q scintigraphic findings has been reported in the diagnostic approach of CTEPH, but also in the differential diagnosis between PAH and peripheral forms of CTEPH (4). Objectives: To investigate lung perfusion abnormalities in patients with SSc. Methods: The study population included 101 patients who underwent dual-energy CT (DECT) angiography in the follow-up of SSc. CT examinations were obtained on a 3rd-generation dual-source CT system with reconstruction of morphologic and perfusion images. All patients underwent pulmonary function tests within two months of the follow-up CT scan. Fifteen patients had right heart catheterization-proven PH. Results: Our population included patients without SSc lung involvement (Group 1; n=37), patients with SSc-related ILD (Group 2; n=56) of variable extent (Group 2a: ≤10%: n=17; Group 2b: between 11-50%: n=31; Group 2c: >50%: n=8) and patients with PVOD/PCH (Group 3; n=8). Lung perfusion was abnormal in 8 patients in Group 1 (21.6%), 14 patients in Group 2 (25%) and 7 patients in Group 3 (87.5%). Perfusion changes were mainly composed of bilateral perfusion defects, including patchy, PE-type perfusion defects and areas of hypoperfusion of variable size. In Groups 1 and 2a (n=54): (a) patients with abnormal lung perfusion (n=14) had a significantly higher proportion of NYHA III/IV scores of dyspnea (p=0.031), a shorter mean walking distance at the 6MWT (p=0.042) and a trend towards lower mean DLCO% (p=0.055) when compared to patients with normal lung perfusion (n=40); (b) a negative albeit weak correlation was found between the iodine concentration in both lungs and the DLCO% (r=-0.27; p=0.059), whereas no correlation was found with PAPs (r=0.16; p=0.29) and walking distance during the 6MWT (r=-0.029; p=0.84). Conclusion: DECT lung perfusion provides complementary information to HRCT scans, depicting perfusion changes in SSc patients with normal or minimally infiltrated lung parenchyma. References: [1] Kim EA, Lee KS, Johkoh T, Kim TS, Suh GY, Kwon OJ, et al. Interstitial lung diseases associated with collagen vascular diseases: radiologic and histopathologic findings. Radiogr Rev Publ Radiol Soc N Am Inc. 2002 Oct;22 Spec No:S151-165. [2] Goh NSL, Desai SR, Veeraraghavan S, Hansell DM, Copley SJ, Maher TM, et al. Interstitial lung disease in systemic sclerosis: a simple staging system. Am J Respir Crit Care Med. 2008 Jun 1;177(11):1248–54. [3] Fuld MK, Halaweish AF, Haynes SE, Divekar AA, Guo J, Hoffman EA. Pulmonary perfused blood volume with dual-energy CT as surrogate for pulmonary perfusion assessed with dynamic multidetector CT. Radiology. 2013 Jun;267(3):747–56. [4] Giordano J, Khung S, Duhamel A, Hossein-Foucher C, Bellèvre D, Lamblin N, et al. Lung perfusion characteristics in pulmonary arterial hypertension and peripheral forms of chronic thromboembolic pulmonary hypertension: Dual-energy CT experience in 31 patients. Eur Radiol. 2017 Apr;27(4):1631–9. Disclosure of Interests: None declared
28

Gkioulekas, Ioannis, Steven J. Gortler, Louis Theran, and Todd Zickler. "Trilateration Using Unlabeled Path or Loop Lengths". Discrete & Computational Geometry, November 25, 2023. http://dx.doi.org/10.1007/s00454-023-00605-x.

Full text
Abstract
Let $\mathbf{p}$ be a configuration of $n$ points in $\mathbb{R}^d$ for some $n$ and some $d \ge 2$. Each pair of points defines an edge, which has a Euclidean length in the configuration. A path is an ordered sequence of the points, and a loop is a path that begins and ends at the same point. A path or loop, as a sequence of edges, also has a Euclidean length, which is simply the sum of its Euclidean edge lengths. We are interested in reconstructing $\mathbf{p}$ given a set of edge, path and loop lengths. In particular, we consider the unlabeled setting where the lengths are given simply as a set of real numbers, and are not labeled with the combinatorial data describing which paths or loops gave rise to these lengths. In this paper, we study the question of when $\mathbf{p}$ will be uniquely determined (up to an unknowable Euclidean transform) from some given set of path or loop lengths through an exhaustive trilateration process. Such a process has already been used for the simpler problem of reconstruction using unlabeled edge lengths. This paper also provides a complete proof that this process must work in that edge-setting when given a sufficiently rich set of edge measurements and assuming that $\mathbf{p}$ is generic.
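For context, the elementary trilateration step the paper builds on: recovering a point from distances to known anchors via a standard least-squares linearization (this is not the authors' algorithm, just the basic operation):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position from distances to >= 3 known anchor points
    in the plane, by subtracting the first sphere equation from the rest."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
p_true = np.array([1.0, 2.0])
dists = np.linalg.norm(anchors - p_true, axis=1)
print(trilaterate(anchors, dists))   # -> [1. 2.]
```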
29

Furuya, Takahiko, Wujie Liu, Ryutarou Ohbuchi, and Zhenzhong Kuang. "Hyperplane patch mixing-and-folding decoder and weighted chamfer distance loss for 3D point set reconstruction". Visual Computer, September 7, 2022. http://dx.doi.org/10.1007/s00371-022-02652-6.

Full text
Abstract
3D point set reconstruction is an important and challenging 3D shape analysis task. Current state-of-the-art algorithms for 3D point set reconstruction employ a deep neural network (DNN) having an encoder–decoder architecture. Recently, the decoder DNNs that transform multiple 2D planar patches to reconstruct a 3D shape have seen some success. These "patch-folding" decoders are adept at approximating smooth surfaces in 3D objects. However, 3D point sets generated by these decoders often lack local geometrical details, as 2D planar patches tend to overly constrain the patch folding process. In this paper, we propose a novel decoder DNN for 3D point sets called Hyperplane Mixing and Folding Net (HMF-Net). HMF-Net uses less constrained hyperplane, not 2D plane, patches as its input to the folding process. HMF-Net has, as its core building block, a stack of token-mixing layers to effectively learn global consistency among the hyperplane patches. In addition to HMF-Net, we also propose a novel loss for 3D point set reconstruction called Weighted Chamfer Distance (WCD). WCD tries to weight, or amplify, loss from parts of the shape that are highly variable across training samples by emphasizing higher point-pair distance values between a generated point set and a ground-truth point set. This helps the decoder DNN learn shape details better. We comprehensively evaluate our algorithm under three 3D point set reconstruction scenarios: shape completion, shape upsampling, and shape reconstruction from 2D images. Experimental results demonstrate that our algorithm yields accuracies higher than the existing algorithms for 3D point set reconstruction.
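A PyTorch sketch of a chamfer loss re-weighted toward large point-pair distances, in the spirit of WCD; the exact weighting scheme used by the authors is not reproduced here and the power-law form is an assumption:

```python
import torch

def weighted_chamfer(pred, gt, alpha=2.0):
    """Chamfer distance whose per-point terms are amplified where the
    nearest-neighbour distance is large (poorly reconstructed regions)."""
    d = torch.cdist(pred, gt)            # (B, Np, Ng) pairwise distances
    d_pg = d.min(dim=2).values           # pred -> nearest ground truth
    d_gp = d.min(dim=1).values           # ground truth -> nearest pred
    w_pg = 1.0 + d_pg.detach() ** alpha  # assumed weighting, stopped gradient
    w_gp = 1.0 + d_gp.detach() ** alpha
    return (w_pg * d_pg).mean() + (w_gp * d_gp).mean()

pred, gt = torch.rand(4, 1024, 3), torch.rand(4, 2048, 3)
loss = weighted_chamfer(pred, gt)
```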
30

Huber, Katharina T., and Mike Steel. "Tree Reconstruction from Triplet Cover Distances". Electronic Journal of Combinatorics 21, no. 2 (May 2, 2014). http://dx.doi.org/10.37236/3388.

Full text
Abstract
It is a classical result that any finite tree with positively weighted edges, and without vertices of degree 2, is uniquely determined by the weighted path distance between each pair of leaves. Moreover, it is possible for a (small) strict subset $\mathcal{L}$ of leaf pairs to suffice for reconstructing the tree and its edge weights, given just the distances between the leaf pairs in $\mathcal{L}$. It is known that any set ${\mathcal L}$ with this property for a tree in which all interior vertices have degree 3 must form a cover for $T$ - that is, for each interior vertex $v$ of $T$, ${\mathcal L}$ must contain a pair of leaves from each pair of the three components of $T-v$. Here we provide a partial converse of this result by showing that if a set ${\mathcal L}$ of leaf pairs forms a cover of a certain type for such a tree $T$ then $T$ and its edge weights can be uniquely determined from the distances between the pairs of leaves in ${\mathcal L}$. Moreover, there is a polynomial-time algorithm for achieving this reconstruction. The result establishes a special case of a recent question concerning 'triplet covers', and is relevant to a problem arising in evolutionary genomics.
31

Sola, Yesmin Naji, Luna Jeannie Alves Mangueira, Pedro Miranda Portugal Junior, Robson Xavier Ferro Filho, Nayme Naji Sola, and Gustavo Teixeira Leão. "BONE RECONSTRUCTION IN THE TREATMENT OF TIBIAL HEMIMELIA: AN ALTERNATIVE TO AMPUTATION?" Acta Ortopédica Brasileira 32, spe1 (2024). http://dx.doi.org/10.1590/1413-785220243201e268462.

Full text
Abstract
Objective: To evaluate the advantages and disadvantages of bone reconstruction and lengthening compared to amputation in the treatment of tibial hemimelia for patients and their families. Methods: Systematic review of articles published in English and Portuguese between 1982 and 2022 in the MEDLINE, PubMed, Cochrane and SciELO databases. The variables of interest were: year of publication, sample characteristics, classification of tibial hemimelia according to Jones, treatment outcome and follow-up time. Results: A total of eleven articles were included in the scope of this review. The studies involved 131 patients, 53.4% male and 46.6% female. The age of the patients who underwent a surgical procedure ranged from 1 year and 10 months to 15 years. The most common type was Jones' I (40.9%). The most recurrent complications in the reconstruction treatment were: infection along the external fixator path, flexion contracture (mainly of the knee), and reduced range of motion of the knee and ankle. Conclusion: We did not find enough relevant studies in the literature to prove the superiority of reconstruction. Amputation remains the gold standard treatment for tibial hemimelia to this day. Level of Evidence III; systematic review of level III studies.
32

Arici, M. Kaan, and Nurcan Tuncbag. "Performance Assessment of the Network Reconstruction Approaches on Various Interactomes". Frontiers in Molecular Biosciences 8 (October 5, 2021). http://dx.doi.org/10.3389/fmolb.2021.666705.

Full text
Abstract
Beyond listing molecules, it is necessary to consider multiple sets of omic data collectively and to reconstruct the connections between the molecules. Pathway reconstruction, in particular, is crucial to understanding disease biology, because abnormal cellular signaling may be pathological. The main challenge is how to integrate the data accurately. In this study, we comparatively analyze the performance of a set of network reconstruction algorithms on multiple reference interactomes. We first explored several human protein interactomes, including PathwayCommons, OmniPath, HIPPIE, iRefWeb, STRING, and ConsensusPathDB. The comparison is based on the coverage of each interactome in terms of cancer driver proteins, structural information on protein interactions, and the bias toward well-studied proteins. We then used these interactomes to evaluate the performance of network reconstruction algorithms, including all-pairs shortest path, heat diffusion with flux, personalized PageRank with flux, and prize-collecting Steiner forest (PCSF) approaches. Each approach has its own merits and weaknesses. Among them, PCSF had the most balanced performance in terms of precision and recall when 28 pathways from NetPath were reconstructed using the listed algorithms. Additionally, the reference interactome affects the performance of the network reconstruction approaches. The coverage and disease or tissue specificity of each interactome may vary, which can result in differences in the reconstructed networks.
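As a minimal illustration of one of the compared approaches, the sketch below runs personalized PageRank over a toy interactome with networkx. The gene names and edge confidences are invented for the example, and the flux post-processing used in the paper is omitted.

```python
import networkx as nx

# Toy weighted interactome; a real run would load e.g. STRING or OmniPath edges
G = nx.Graph()
G.add_weighted_edges_from([
    ("EGFR", "GRB2", 0.9), ("GRB2", "SOS1", 0.8),
    ("SOS1", "KRAS", 0.9), ("KRAS", "BRAF", 0.7),
    ("EGFR", "PIK3CA", 0.6), ("PIK3CA", "AKT1", 0.8),
])

# Restart the random walk at the known pathway members (hypothetical seed set)
seeds = {n: (1.0 if n == "EGFR" else 0.0) for n in G}
scores = nx.pagerank(G, alpha=0.85, personalization=seeds, weight="weight")

# Keep the top-scoring nodes as the reconstructed subnetwork
top = sorted(scores, key=scores.get, reverse=True)[:4]
print(sorted(G.subgraph(top).edges()))
```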
33

LAI, Wenxin, Shuang Xu, Zechen LUO, Paixin CHEN, Honglin YAN, and Kai Kai WANG. "Reconstruction of direct waves between passive flexible sensors using diffuse ultrasonic waves". e-Journal of Nondestructive Testing 29, no. 7 (July 2024). http://dx.doi.org/10.58286/29686.

Full text
Abstract
A sensing system that can both generate and sense guided ultrasonic waves (GUWs) is a cornerstone of GUW-based structural health monitoring. Traditional sensing systems are constrained by inherent limitations, including substantial added mass and limited flexibility, which has led to growing research effort on emerging nano-composite sensors. Despite their superior lightness and flexibility, nano-composite sensors are incapable of active wave excitation, so waves propagating directly between sensors cannot be obtained. To address this deficiency, an approach grounded in the representation theorem is proposed to reconstruct signals using diffuse ultrasonic waves. In this approach, the waves propagating between two passive sensors are approximated by cross-correlating wave signals in a diffuse state. On this basis, the passive sensors can be virtually converted into active actuators, enabling the acquisition of waves virtually excited and captured by a pair of passive sensors. In experimental validations, the wave signals between passive sensors are retrieved and damage close to the sensing path is detected. The proposed method enhances the signal acquisition capability of sensing networks based on flexible passive sensors and opens avenues for monitoring previously inaccessible blind areas within complex structures.
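A minimal sketch of the core operation on synthetic data: two passive recordings of the same diffuse field are cross-correlated, and the correlation peak recovers the inter-sensor travel time, which is the essence of retrieving the direct wave between the two sensors. The signal length, noise level, and delay are arbitrary illustrative values.

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)
n = 10_000   # samples per diffuse-field recording (toy value)
delay = 40   # true inter-sensor travel time, in samples

# Synthetic "diffuse" recordings at two passive sensors: the same random
# wavefield observed with a relative travel-time delay, plus sensor noise
field = rng.normal(size=n)
s1 = field + 0.2 * rng.normal(size=n)
s2 = np.roll(field, delay) + 0.2 * rng.normal(size=n)

# Cross-correlating the diffuse recordings approximates the wave travelling
# directly between the sensors; the peak lag estimates the travel time
xc = correlate(s2, s1, mode="full")
lags = np.arange(-n + 1, n)
print("estimated delay:", lags[np.argmax(xc)], "samples")  # ~ 40
```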
34

Zhang, Zhongjie, Liang Zeng, and Nan Zhang. "Damage imaging using multipath-scattered Lamb waves under a sparse reconstruction framework". Structural Health Monitoring, October 10, 2023. http://dx.doi.org/10.1177/14759217231203241.

Full text
Abstract
This paper presents a sparse damage imaging method using multipath-scattered Lamb waves. It leverages the large number of echoes and reverberations in the recorded signal that are usually ignored in conventional methods. First, reflections of Lamb waves at free edges are viewed as waves transmitted from a virtual transducer located at the mirror point of the actual one. On this basis, an optimized transducer-layout strategy is proposed based on the multipath propagation model of the Lamb wave. Benefiting from this, the direct damage-scattered wave and several waves scattered by both the damage and the edges can be separately identified in the time domain, and each wave can be matched with a sensing path (either actual or virtual) in the expanded sensor network. Subsequently, a dictionary is constructed from the Lamb wave propagation and scattering model. By solving the sparse reconstruction problem, the pixel value of each point in the region of interest is obtained, and the whole area can finally be visualized. The proposed method is validated in experiments conducted on an aluminum plate with simulated damage. Results show that the damage can be correctly detected and accurately localized with only a single transmitter–receiver pair.
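The imaging step is a standard sparse-recovery problem y = Dx, where x is a sparse vector of pixel values. The sketch below solves it with orthogonal matching pursuit from scikit-learn over a random toy dictionary; in the paper the dictionary columns come from the Lamb-wave propagation and scattering model rather than from random numbers.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_samples, n_pixels = 200, 400  # signal length and image-grid size (toy)

# Dictionary: one column per pixel, holding the waveform that a scatterer at
# that pixel would produce (random here; model-based in the paper)
D = rng.normal(size=(n_samples, n_pixels))

# Ground truth: damage at two pixels only, i.e. a 2-sparse pixel vector
x_true = np.zeros(n_pixels)
x_true[[37, 251]] = [1.0, 0.6]
y = D @ x_true + 0.01 * rng.normal(size=n_samples)  # measured scattered signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(D, y)
print("recovered damage pixels:", np.nonzero(omp.coef_)[0])  # ~ [37, 251]
```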
35

Cyril Lynch, Christian Edward. "TEORIA PÓS-COLONIAL E PENSAMENTO BRASILEIRO NA OBRA DE GUERREIRO RAMOS: o pensamento sociológico (1953-1955)". Caderno CRH 28, no. 73 (September 30, 2015). http://dx.doi.org/10.9771/ccrh.v28i73.19847.

Full text
Abstract
This article argues that the work of Guerreiro Ramos in the 1950s was developed according to a deliberate plan to elaborate a post-colonial theory applied to Brazil, a theory in which the critical study of Brazilian social thought plays a fundamental role. To demonstrate this hypothesis, I reconstruct the intellectual path taken by Guerreiro during his work with IBESP, combining the method of logical reconstruction with a historical-systematic method, and ascertain the connections between his social theory and his critical texts on Brazilian sociological thinking. Keywords: Guerreiro Ramos; post-colonial theory; Brazilian social thought; Brazilian political thought.
36

De Almeida, Wilson Mesquita. "ESTUDANTES COM DESVANTAGENS ECONÔMICAS E EDUCACIONAIS E FRUIÇÃO DA UNIVERSIDADE". Caderno CRH 20, no. 49 (August 2, 2007). http://dx.doi.org/10.9771/ccrh.v20i49.18855.

Full text
Abstract
This text discusses the main results of an investigation whose objective was to understand the use of the resources and spaces of the University of São Paulo by a group of students with economic and educational disadvantages. Starting from a critical review of the national and foreign literature on the trajectories of students from society's poorer strata who reach higher education, and from the interpretation of the empirical data gathered, it reflects on what these students effectively gain from the structure provided by the university. The investigation examines how the socialization process took place in the family environment, and reconstructs the students' path into and through the university environment by looking at daily life, adaptation to academic language, the accomplishment of academic tasks, and contact with individuals of similar origin as well as from other social strata. The research used a qualitative methodology in two phases: focus groups, and semi-structured interviews aimed at refining the main categories that emerged. The paper aims to contribute to current debates on social inclusion in higher education by integrating into the analysis of access to the university a discussion of effective permanence, where the focus becomes a more detailed study of the differences in the quality of the education received by the various social segments present in the public university. Keywords: educational inequalities; higher education; social and economic elites; public university; social inclusion.
37

Cegla, Adam, Witold Rohm, Gregor Moeller, Paweł Hordyniec, Estera Trzcina, and Natalia Hanna. "GNSS signal ray-tracing algorithm for the simulation of satellite-to-satellite excess phase in the neutral atmosphere". Journal of Geodesy 98, no. 5 (May 2024). http://dx.doi.org/10.1007/s00190-024-01847-0.

Full text
Abstract
Traditionally, GNSS space-based and ground-based estimates of tropospheric conditions are performed separately. This leads to limitations in the horizontal resolution (e.g., a single space-based radio occultation profile covers a 300 km slice of the troposphere) and the vertical resolution (e.g., ground-based estimates of tropospheric conditions have a spacing equal to the station distribution) of the tropospheric products. The first stage in achieving an integrated model is to create an effective 3D ray-tracing algorithm for satellite-to-satellite (radio occultation) path reconstruction. We verify the consistency of the simulated data with RO observations from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC-1) Data Analysis and Archive Center (CDAAC) in terms of excess phase and bending angle. The results show that our solution provides an effective RO excess phase, with a relative error varying from 35% at heights of 25–30 km (1.0–1.5 m) to 0.5% at heights of 5–10 km (0.1–1 m), and from 14% to 2% at heights below 5 km (2–14 m). The bending angle retrieval on simulated data attained, for high-resolution ray-tracing, a bias lower than 2% with respect to the observed bending angle. The optimal solution takes about 1 s for one transmitter–receiver pair with a tangent point below 5 km altitude; the high-resolution solution takes 3 times longer.
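For orientation, the excess phase is essentially the integral of refractivity along the ray, delta_L = 1e-6 * integral of N ds. The sketch below evaluates this over a straight-line path with a crude exponential refractivity profile; the paper's contribution is precisely the 3D bent-ray version of this computation, and the constants here (surface refractivity N0, scale height H) are assumed illustrative values.

```python
import numpy as np

def excess_phase_straight_line(heights_m, ds_m, N0=300.0, H=7000.0):
    """Toy excess-phase estimate along a sampled straight ray path.

    Integrates the refractivity model N(h) = N0 * exp(-h / H) (a crude
    exponential atmosphere; N0 and H are assumed values, not the paper's)
    to obtain delta_L = 1e-6 * sum(N * ds), ignoring ray bending entirely.
    """
    N = N0 * np.exp(-np.asarray(heights_m) / H)
    return 1e-6 * np.sum(N * ds_m)  # metres of excess path

# A 300 km path sampled every 1 km at a constant 10 km tangent height
ds = 1000.0
heights = np.full(300, 10_000.0)
print(f"excess phase ~ {excess_phase_straight_line(heights, ds):.1f} m")
```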
38

Kalid, Thiago E., Everton Trento Jr., Tatiana de Almeida Prado, Gustavo P. Pires, Giovanni A. Guarneri, Thiago Alberto Rigo Passarin, and Daniel Rodrigues Pipa. "Virtual encoder: a two-dimension visual odometer for NDT". Research and Review Journal of Nondestructive Testing 1, no. 1 (August 2023). http://dx.doi.org/10.58286/28119.

Full text
Abstract
Odometer information is an important feature of NDT systems for inspection procedures that involve mechanical scanning. Knowledge of the transducer position during the inspection allows for the localization of flaws and the fusion (stitching) of several images into more sophisticated representations of the inspected object. Commercial encoders provide NDT systems with accurate real-time displacement information that can be integrated to obtain odometry. However, this information is typically limited to a single axis. Although composite schemes with more than one encoder can be built to provide 2-D or 3-D spatial information, they are mechanically intricate and lack flexibility and ease of use. We propose a 2-D position-tracking solution based on image processing. A miniature camera continuously captures images of the external surface of the inspected object, which are fed to an algorithm that detects and stores the 2-D displacement between each pair of consecutive images. Additionally, the orientation quaternions provided by an Inertial Measurement Unit are stored, allowing for posterior 3-D path reconstruction over objects of known geometry, such as oil pipes. Besides logging position and orientation histories, the device also provides real-time displacement information to the NDT system, where it is perceived as a set of single-axis encoders, and is thus termed "the virtual encoder". We demonstrate the applicability of the device to both contact and immersion ultrasonic inspections. The results show that the concept is promising, despite being based on simple principles and being relatively easy to implement. The source code is provided as additional material at https://github.com/thiagokalid/VirtualEncoder-ECNDT-2023.
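A common way to implement the per-frame displacement step is phase correlation; the sketch below uses scikit-image on a synthetic pair of frames. This is one plausible implementation of the idea, not necessarily the algorithm used in the virtual encoder itself.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(2)

# Synthetic surface texture standing in for one camera frame, and a second
# frame displaced by (+3, -2) pixels (circular shift keeps the example exact)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))

# Phase correlation returns the shift that registers frame1 back onto frame0,
# i.e. roughly the negative of the displacement between consecutive frames;
# accumulating these per-pair shifts yields the 2-D odometry track
shift, _, _ = phase_cross_correlation(frame0, frame1)
print("registration shift:", shift)  # ~ [-3.,  2.]
```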
39

Rico-Preciado, Erick, Mayte H. Laureano, and Hiram Calvo. "Multidigraph representation of analogies from oneiric stories". Journal of Intelligent & Fuzzy Systems, July 31, 2022, 1–16. http://dx.doi.org/10.3233/jifs-211895.

Full text
Abstract
Learning relationships between nodes in a directed graph is a widely studied task that has been applied to a large number of topics and research areas. We define a particular kind of relationship, called an analogy, in a directed multigraph. An analogy can be defined for a given pair of concepts, and the paths connecting them are called explanations of the analogy. We experiment with a structure built from real oneiric stories obtained from psychoanalytic descriptions (e.g., a mother is represented as a bull; a book represents power). Analogies found by the analysts are automatically identified by means of linguistically motivated patterns. Analogies have degrees of similarity based on the words used to describe them: represents, is a, is like a, can be a, refers to, etc. Once identified and graded, they are represented in the multidigraph, allowing us to propose different hypotheses about how these analogies can be explained. To enrich the concept graph, we added information from ConceptNet and WordNet. In addition, we propose a learning method for association rules that, given the degree of the analogy and a starting concept, allows a destination concept to be reached. For example, starting from "dream", we obtain the path <dream, psychic, neurosis, symptom>, where "dream is a symptom" is a description previously given by a psychoanalyst that was not included when training the algorithm. We evaluated 100 analogies on 171 concepts with 8,034 properties using leave-one-out cross-validation, and found that the correct analogy was found among all possible paths for 94% of the analogies, dropping to 85% if only the top 20% of possible paths are considered. This implies that, using our method, it is possible to learn analogies between two concepts by reconstructing paths of different lengths based on local decisions that consider concept, property, and degree of analogy.
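A minimal sketch of the underlying representation: a networkx MultiDiGraph whose parallel edges carry the linguistic pattern and the degree of the analogy, with simple paths between two concepts serving as candidate explanations. The concepts, patterns, and degrees below are illustrative, not the paper's dataset.

```python
import networkx as nx

# Toy multidigraph of graded analogy relations; parallel edges are allowed,
# each carrying its linguistic pattern and degree of similarity
G = nx.MultiDiGraph()
G.add_edge("dream", "psychic", pattern="refers to", degree=0.6)
G.add_edge("psychic", "neurosis", pattern="is like a", degree=0.7)
G.add_edge("neurosis", "symptom", pattern="is a", degree=0.9)
G.add_edge("dream", "wish", pattern="represents", degree=0.8)

# Every simple path between two concepts is a candidate "explanation"
# of the analogy between them
for path in nx.all_simple_paths(G, "dream", "symptom", cutoff=4):
    print(path)  # ['dream', 'psychic', 'neurosis', 'symptom']
```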
40

Emilio Faroldi. "The architecture of differences". TECHNE - Journal of Technology for Architecture and Environment, May 26, 2021, 9–15. http://dx.doi.org/10.36253/techne-11023.

Full text
Abstract
Following in the footsteps of the protagonists of the Italian architectural debate is a mark of culture and proactivity. The synthesis deriving from the artistic-humanistic factors, combined with the technical-scientific component, comprises the very root of the process that moulds the architect as an intellectual figure capable of governing material processes in conjunction with their ability to know how to skilfully select schedules, phases and actors: these are elements that – when paired with that magical and essential compositional sensitivity – have fuelled this profession since its origins. The act of X-raying the role of architecture through the filter of its “autonomy” or “heteronomy”, at a time when the hybridisation of different areas of knowledge and disciplinary interpenetration is rife, facilitates an understanding of current trends, allowing us to bring the fragments of a debate carved into our culture and tradition up to date. As such, heteronomy – as a condition in which an acting subject receives the norm of its action from outside itself: the matrix of its meaning, coming from ancient Greek, the result of the fusion of the two terms ἕτερος éteros “different, other” and νόμος nómos “law, ordinance” – suggests the existence of a dual sentiment now pervasive in architecture: the sin of self-reference and the strength of depending on other fields of knowledge. Difference, interpreted as a value, and the ability to establish relationships between different points of observation become moments of a practice that values the process and method of affirming architecture as a discipline. The term “heteronomy”, used in opposition to “autonomy”, has – from the time of Kant onwards – taken on a positive value connected to the mutual respect between reason and creativity, exact science and empirical approach, contamination and isolation, introducing the social value of its existence every time that it returns to the forefront. At the 1949 conference in Lima, Ernesto Nathan Rogers spoke on combining the principle of “Architecture is an Art” with the demands of a social dimension of architecture: «Alberti, in the extreme precision of his thought, admonishes us that the idea must be translated into works and that these must have a practical and moral purpose in order to adapt harmoniously ‘to the use of men’, and I would like to point out the use of the plural of ‘men’, society. The architect is neither a passive product nor a creator completely independent of his era: society is the raw material that he transforms, giving it an appearance, an expression, and the consciousness of those ideals that, without him, would remain implicit. Our prophecy, like that of the farmer, already contains the seeds for future growth, as our work also exists between heaven and earth. Poetry, painting, sculpture, dance and music, even when expressing the contemporary, are not necessarily limited within practical terms. But we architects, who have the task of synthesising the useful with the beautiful, must feel the fundamental drama of existence at every moment of our creative process, because life continually puts practical needs and spiritual aspirations at odds with one another. We cannot reject either of these necessities, because a merely practical or moralistic position denies the full value of architecture to the same extent that a purely aesthetic position would: we must mediate one position with the other» (Rogers, 1948). 
Rogers discusses at length the relationship between instinctive forces and knowledge acquired through culture, along with his thoughts on the role played by study in an artist’s training. It is in certain debates that have arisen within the “International Congresses of Modern Architecture” that the topic of architecture as a discipline caught between self-sufficiency and dependence acquires a certain centrality within the architectural context: in particular, in this scenario, the theme of the “autonomy” and “heteronomy” of pre-existing features of the environment plays a role of strategic importance. Arguments regarding the meaning of form in architecture and the need for liberation from heteronomous influences did not succeed in undermining the idea of an architecture capable of influencing the governing of society as a whole, thanks to an attitude very much in line with Rogers’ own writings. The idea of a project as the result of the fusion of an artistic idea and pre-existing features of an environment formed the translation of the push to coagulate the antithetical forces striving for a reading of the architectural work that was at once autonomous and heteronomous, as well as linked to geographical, cultural, sociological and psychological principles. The CIAM meeting in Otterlo was attended by Ignazio Gardella, Ernesto Nathan Rogers, Vico Magistretti and Giancarlo De Carlo as members of the Italian contingent: the architects brought one project each to share with the conference and comment on as a manifesto. Ernesto Nathan Rogers, who presented the Velasca Tower, and Giancarlo De Carlo, who presented a house in Matera in the Spine Bianche neighbourhood, were openly criticised as none of the principles established by the CIAM were recognisable in their work any longer, and De Carlo’s project represented a marked divergence from a consolidated method of designing and building in Matera. In this cultural condition, Giancarlo De Carlo – in justifying the choices he had made – even went so far as to say: «my position was not at all a flight from architecture, for example in sociology. I cannot stand those who, paraphrasing what I have said, dress up as politicians or sociologists because they are incapable of creating architecture. Architecture is – and cannot be anything other than – the organisation and form of physical space. It is not autonomous, it is heteronomous» (De Carlo, 2001). Even more so than in the past, it is not possible today to imagine an architecture encapsulated entirely within its own enclosure, autoimmune, averse to any contamination or relationships with other disciplinary worlds: architecture is the world and the world is the sum total of our knowledge. Architecture triggers reactions and phenomena: it is not solely and exclusively the active and passive product of a material work created by man. «We believed in the heteronomy of architecture, in its necessary dependence on the circumstances that produce it, in its intrinsic need to exist in harmony with history, with the happenings and expectations of individuals and social groups, with the arcane rhythms of nature. We denied that the purpose of architecture was to produce objects, and we argued that its fundamental role was to trigger processes of transformation of the physical environment that are capable of contributing to the improvement of the human condition» (De Carlo, 2001). 
Productive and cultural reinterpretations place the discipline of architecture firmly at the centre of the critical reconsideration of places for living and working. Consequently, new interpretative models continue to emerge which often highlight the instability of built architecture with the lack of a robust theoretical apparatus, demanding the sort of “technical rationality” capable of restoring the centrality of the act of construction, through the contribution of actions whose origins lie precisely in other subject areas. Indeed, the transformation of the practice of construction has resulted in direct changes to the structure of the nature of the knowledge of it, to the role of competencies, to the definition of new professional skills based on the demands emerging not just from the production system, but also from the socio-cultural system. The architect cannot disregard the fact that the making of architecture does not burn out by means of some implosive dynamic; rather, it is called upon to engage with the multiple facets and variations that the cognitive act of design itself implies, bringing into play a theory of disciplines which – to varying degrees and according to different logics – offer their significant contribution to the formation of the design and, ultimately, the work. As Álvaro Siza claims, «The architect is not a specialist. The sheer breadth and variety of knowledge that practicing design encompasses today – its rapid evolution and progressive complexity – in no way allow for sufficient knowledge and mastery. Establishing connections – pro-jecting [from Latin proicere, ‘to stretch out’] – is their domain, a place of compromise that is not tantamount to conformism, of navigation of the web of contradictions, the weight of the past and the weight of the doubts and alternatives of the future, aspects that explain the lack of a contemporary treatise on architecture. The architect works with specialists. The ability to chain things together, to cross bridges between fields of knowledge, to create beyond their respective borders, beyond the precarity of inventions, requires a specific education and stimulating conditions. [...] As such, architecture is risk, and risk requires impersonal desire and anonymity, starting with the merging of subjectivity and objectivity. In short, a gradual distancing from the ego. Architecture means compromise transformed into radical expression, in other words, a capacity to absorb the opposite and overcome contradiction. Learning this requires an education in search of the other within each of us» (Siza, 2008). We are seeing the coexistence of contrasting, often extreme, design trends aimed at recementing the historical and traditional mould of construction by means of the constant reproposal of the characteristics of “persistence” that long-established architecture, by its very nature, promotes, and at decrypting the evolutionary traits of architecture – markedly immaterial nowadays – that society promotes as phenomena of everyday living. Speed, temporariness, resilience, flexibility: these are just a few fragments. In other words, we indicate a direction which immediately composes and anticipates innovation as a characterising element, describing its stylistic features, materials, languages and technologies, and only later on do we tend to outline the space that these produce: what emerges is a largely anomalous path that goes from “technique” to “function” – by way of “form” – denying the circularity of the three factors at play. 
The threat of a short-circuit deriving from discourse that exceeds action – in conjunction with a push for standardisation aimed at asserting the dominance of construction over architecture, once again echoing the ideas posited by Rogers – may yet be able to find a lifeline cast through the attempt to merge figurative research with technology in a balanced way, in the wake of the still-relevant example of the Bauhaus or by emulating the thinking of certain masters of modern Italian architecture who worked during that post-war period so synonymous with physical – and, at the same time, moral – reconstruction. These architectural giants’ aptitude for technical and formal transformation and adaptation can be held up as paradigmatic examples of methodological choice consistent with their high level of mastery over the design process and the rhythm of its phases. In all this exaltation of the outcome, the power of the process is often left behind in a haze: in the uncritical celebration of the architectural work, the method seems to dissolve entirely into the finished product. Technical innovation and disciplinary self-referentiality would seem to deny the concepts of continuity and transversality by means of a constant action of isolation and an insufficient relationship with itself: conversely, the act of designing, as an operation which involves selecting elements from a vast heritage of knowledge, cannot exempt itself from dealing in the variables of a functional, formal, material and linguistic nature – all of such closely intertwined intents – that have over time represented the energy of theoretical formulation and of the works created. For years, the debate in architecture has concentrated on the synergistic or contrasting dualism between cultural approaches linked to venustas and firmitas. Kenneth Frampton, with regard to the interpretative pair of “tectonics” and “form”, notes the existence of a dual trend that is both identifiable and contrasting: namely the predisposition to favour the formal sphere as the predominant one, rejecting all implications on the construction, on the one hand; and the tendency to celebrate the constructive matrix as the generator of the morphological signature – emphasised by the ostentation of architectural detail, including that of a technological matrix – on the other. The design of contemporary architecture is enriched with sprawling values that are often fundamental, yet at times even damaging to the successful completion of the work: it should identify the moment of coagulation within which the architect goes in pursuit of balance between all the interpretative categories that make it up, espousing the Vitruvian meaning, according to which practice is «the continuous reflection on utility» and theory «consists of being able to demonstrate and explain the things made with technical ability in terms of the principle of proportion» (Vitruvius Pollio, 15 BC). Architecture will increasingly be forced to demonstrate how it represents an applied and intellectual activity of a targeted synthesis, of a complex system within which it is not only desirable, but indeed critical, for the cultural, social, environmental, climatic, energy-related, geographical and many other components involved in it to interact proactively, together with the more spatial, functional and material components that are made explicit in the final construction itself through factors borrowed from neighbouring fields that are not endogenous to the discipline of architecture alone.
Within a unitary vision that exists parallel to the transcalarity that said vision presupposes, the technology of architecture – as a discipline often called upon to play the role of a collagen of skills, binding them together – acts as an instrument of domination within which science and technology interpret the tools for the translation of man’s intellectual needs, expressing the most up-to-date principles of contemporary culture. Within the concept of tradition – as inferred from its evolutionary character – form, technique and production, in their historical “continuity” and not placed in opposition to one other, make up the fields of application by which, in parallel, research proceeds with a view to ensuring a conforming overall design. The “technology of architecture” and “technological design” give the work of architecture its personal hallmark: a sort of DNA to be handed down to future generations, in part as a discipline dedicated to amalgamating the skills and expertise derived from other dimensions of knowledge. In the exercise of design, the categories of urban planning, composition, technology, structure and systems engineering converge, the result increasingly accentuated by multidisciplinary nuances in search of a sense of balance between the parts: a setup founded upon simultaneity and heteronomous logic in the study of variables, by means of translations, approaches and skills as expressions of multifaceted identities. «Architects can influence society with their theories and works, but they are not capable of completing any such transformation on their own, and end up being the interpreters of an overbearing historical reality under which, if the strongest and most honest do not succumb, that therefore means that they alone represent the value of a component that is algebraically added to the others, all acting in the common field» (Rogers, 1951). Construction, in this context, identifies the main element of the transmission of continuity in architecture, placing the “how” at the point of transition between past and future, rather than making it independent of any historical evolution. Architecture determines its path within a heteronomous practice of construction through an effective distinction between the strength of the principles and codes inherent to the discipline – long consolidated thanks to sedimented innovations – and the energy of experimentation in its own right. Architecture will have to seek out and affirm its own identity, its validity as a discipline that is at once scientific and poetic, its representation in the harmonies, codes and measures that history has handed down to us, along with the pressing duty of updating them in a way that is long overdue. The complexity of the architectural field occasionally expresses restricted forms of treatment bound to narrow disciplinary areas or, conversely, others that are excessively frayed, tending towards an eclecticism so vast that it prevents the tracing of any discernible cultural perimeter. In spite of the complex phenomenon that characterises the transformations that involve the status of the project and the figure of the architect themselves, it is a matter of urgency to attempt to renew the interpretation of the activity of design and architecture as a coherent system rather than a patchwork of components. «Contemporary architecture tends to produce objects, even though its most concrete purpose is to generate processes. 
This is a falsehood that is full of consequences because it confines architecture to a very limited band of its entire spectrum; in doing so, it isolates it, exposing it to the risks of subordination and delusions of grandeur, pushing it towards social and political irresponsibility. The transformation of the physical environment passes through a series of events: the decision to create a new organised space, detection, obtaining the necessary resources, defining the organisational system, defining the formal system, technological choices, use, management, technical obsolescence, reuse and – finally – physical obsolescence. This concatenation is the entire spectrum of architecture, and each link in the chain is affected by what happens in all the others. It is also the case that the cadence, scope and intensity of the various bands can differ according to the circumstances and in relation to the balances or imbalances within the contexts to which the spectrum corresponds. Moreover, each spectrum does not conclude at the end of the chain of events, because the signs of its existence – ruins and memory – are projected onto subsequent events. Architecture is involved with the entirety of this complex development: the design that it expresses is merely the starting point for a far-reaching process with significant consequences» (De Carlo, 1978). The contemporary era proposes the dialectic between specialisation, the coordination of ideas and actions, the relationship between actors, phases and disciplines: the practice of the organisational culture of design circumscribes its own code in the coexistence and reciprocal exploitation of specialised fields of knowledge and the discipline of synthesis that is architecture. With the revival of the global economy on the horizon, the dematerialisation of the working practice has entailed significant changes in the productive actions and social relationships that coordinate the process. Despite a growing need to implement skills and means of coordination between professional actors, disciplinary fields and sectors of activity, architectural design has become the emblem of the action of synthesis. This is a representation of society which, having developed over the last three centuries, from the division of social sciences that once defined it as a “machine”, an “organism” and a “system”, is now defined by the concept of the “network” or, more accurately, by that of the “system of networks”, in which a person’s desire to establish relationships places them within a multitude of social spheres. The “heteronomy” of architecture, between “hybridisation” and “contamination of knowledge”, is to be seen not only an objective fact, but also, crucially, as a concept aimed at providing the discipline with new and broader horizons, capable of putting it in a position of serenity, energy and courage allowing it to tackle the challenges that the cultural, social and economic landscape is increasingly throwing at the heart of our contemporary world.