Academic literature on the topic 'Distance de barrière minimale' (minimum barrier distance)

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distance de barrière minimale.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Distance de barrière minimale"

1

Dedecker, Jérôme, and Clémentine Prieur. "Couplage pour la distance minimale." Comptes Rendus Mathematique 338, no. 10 (May 2004): 805–8. http://dx.doi.org/10.1016/j.crma.2004.03.015.

2

Canillos, Thibaud. "Les bâtiments de stockage de denrées agricoles tardo-antiques de la Barrière 2 à Servian (Hérault)." Revue archéologique de Narbonnaise 53, no. 1 (2020): 345–62. http://dx.doi.org/10.3406/ran.2020.2017.

Abstract:
In 2017, a preventive excavation at the Barrière 2 site in the commune of Servian revealed a large ancient building with masonry bases. It is structured by two walls oriented northeast/southwest, five masonry foundation blocks, and eight robber pits linked to other blocks. These share the same orientation as the walls and appear to form an integral part of a single complex. The plan is incomplete, continuing south and north of the excavation area. Its minimum surface area is 375 m², and the building must have undergone at least one extension and/or refurbishment: several of the blocks intersect, and robbing occurred when the site was abandoned. At least two phases have been documented, the first dating broadly to the 4th century, while the second seems to have been abandoned from the mid-5th century onward. The building's plan suggests a link with the large granaries with internal pillars, known chiefly in northern Gaul, which are characterized by their large surface area and buttressed walls.
3

Aubry, Christophe. "Estimateur de la distance minimale en moyenne pour un modèle régulier." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 325, no. 8 (October 1997): 899–902. http://dx.doi.org/10.1016/s0764-4442(97)80134-6.

4

Hénaff, Sylvain. "Asymptotique de l'estimateur de distance minimale du paramètre du processus d'Ornstein-Uhlenbeck." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 325, no. 8 (October 1997): 911–14. http://dx.doi.org/10.1016/s0764-4442(97)80137-1.

5

Gauthier, M., J. Lebon, A. B. Tanguay, and F. Bégin. "MP007: Constats de décès à distance et disponibilité des services préhospitalier d’urgence." CJEM 18, S1 (May 2016): S68. http://dx.doi.org/10.1017/cem.2016.148.

Abstract:
Introduction: The Unité de coordination clinique des services pré-hospitaliers d'urgence (UCCSPU) is a clinical unit attached to the CSSS Alphonse-Desjardins (CHAU Hôtel-Dieu de Lévis) that provides remote medical support to patients transported by ambulance in the Chaudières-Appalaches (CA) region. In 2011, an innovative project, later turned into a program, was launched to perform remote death pronouncements (CDD). The program aims to reduce the number of deceased patients transported to hospitals so that the ambulance crew can be returned to service quickly. The goal of the study is to describe and compare the CDD rate and the time saved in returning the ambulance crew to service before and after implementation of the CDD program in two geographic regions (Chaudières-Appalaches and Saguenay-Lac-St-Jean (SLSJ)), and then to determine whether there is a minimum distance below which this time saving vanishes in each region. Methods: This is a retrospective study of 204 individuals divided into 4 groups: 2 control groups [CA pre-CDD (50) and SLSJ pre-CDD (50)] and 2 study groups [CA post-CDD (52) and SLSJ post-CDD (52)]. The percentage of successful CDDs (completion rate) per region and the time savings between groups (intra- and inter-region) as a function of distance from the hospital (CH) were calculated. Results: For the same number of patients, the CDD completion rate was similar in the two regions [CA = 80% (6 months) and SLSJ = 76% (4 months)]. The time to return ambulances to service differed between regions (p < 0.05), with mean time savings of 62 min (CA) and 28 min (SLSJ). Finally, the minimum distance at which the time saving is nil is less than 5 km in each region.
Conclusion: Implementing the CDD program saves time and allows prehospital emergency services to return to service faster when the distance between the CDD site and the hospital exceeds 5 km. Moreover, the time saved is proportional to the distance between the CDD site and the hospital.
6

Kleißen, Jasmin, Niko Balkenhol, and Heike Pröhl. "Landscape Genetics of the Yellow-Bellied Toad (Bombina variegata) in the Northern Weser Hills of Germany." Diversity 13, no. 12 (November 27, 2021): 623. http://dx.doi.org/10.3390/d13120623.

Abstract:
Anthropogenic influences such as deforestation, increased infrastructure, and general urbanization have led to a continuous loss of biodiversity. Amphibians are especially affected by these landscape changes. This study focuses on the population genetics of the endangered yellow-bellied toad (Bombina variegata) in the northern Weser Hills of Germany. Additionally, a landscape genetic analysis was conducted to evaluate the impact of eight different landscape elements on the genetic connectivity of the subpopulations in this area. Multiple individuals from 15 study sites were genotyped using 10 highly polymorphic species-specific microsatellites. Four genetic clusters were detected, with only two of them showing considerable genetic exchange. The average genetic differentiation between populations was moderate (global FST = 0.1). The analyzed landscape elements showed significant correlations with the migration rates and genetic distances between populations. Overall, anthropogenic structures had the greatest negative impact on gene flow, whereas wetlands, grasslands, and forests imposed minimal barriers in the landscape. The most remarkable finding was the positive impact of the underpasses of the motorway A2: this element seems to be the reason why some study sites on either side of the A2 showed little genetic distance even though their habitat has been separated by a strong dispersal barrier.
7

Rodier, François. "Estimation asymptotique de la distance minimale du dual des codes BCH et polynômes de Dickson." Discrete Mathematics 149, no. 1-3 (February 1996): 205–21. http://dx.doi.org/10.1016/0012-365x(94)00320-i.

8

Nugraha, Priyambada Cahya, I. Dewa Gede Hari Wisana, Dyah Titisari, and Farid Amrinsani. "Optimal Long Distance ECG Signal Data Delivery Using LoRa Technology." Journal of Biomimetics, Biomaterials and Biomedical Engineering 55 (March 28, 2022): 239–49. http://dx.doi.org/10.4028/p-6z381m.

Abstract:
Cardiovascular disease is the leading cause of death in the world and the number one killer in Indonesia, with a mortality rate of 17.05%. The target of this research is to increase the range of electrocardiograph (ECG) equipment using LoRa technology, so that data transmission runs effectively and produces an accurate ECG signal with minimal noise. The research method is to send a heart signal from the ECG simulator via the microcontroller over LoRa, receive it on a PC (personal computer), and display the ECG signal on the PC. The most optimal settings of sender-receiver distance and baud rate are determined by measuring data loss and delay. In this study, the simulated cardiac signal from the ECG phantom is fed to an analog signal-processing circuit; the signal is then digitized and digitally filtered on the microcontroller and sent via the LoRa HC-12 transceiver to a PC under various baud-rate, distance, and barrier settings. The results show that data transmission can be carried out at a distance of 175 meters without a barrier and 50 meters with a barrier. This remote ECG equipment can detect heart signals and send the results to a PC using LoRa. The implication is that transmission of ECG signal data via the LoRa HC-12 transceiver performs optimally at the 9600 baud setting.
9

Boureille, Patrick. "Les relations navales franco-roumaines (1919-1928) : les illusions perdues." Revue Historique des Armées 244, no. 3 (August 1, 2006): 50–59. http://dx.doi.org/10.3917/rha.244.0050.

Abstract:
At the end of the First World War, France hoped to rely on Romania to control the mouths of the Danube, keep Soviet Russia at a distance from the heart of Europe, and contain German revisionism. A decade later, Franco-Romanian relations had produced no concrete results. A notorious lack of funds and rivalries among Romanian leaders combined with the refounding of a multinational state, in the shifting diplomatic context of the 1920s, to make French hegemony illusory. Border security led Bucharest to seek a formal guarantee against the irredentism of neighboring countries, a guarantee Paris refused to give. For its part, France, which until 1925 saw Romania as a barrier against the USSR and a rear alliance, bet on collective security after Locarno. British and Italian competition finished off the French attempt to establish regional tutelage.
10

Hjalmarsson, Clara, Maria Ohlson, and Börje Haraldsson. "Puromycin aminonucleoside damages the glomerular size barrier with minimal effects on charge density." American Journal of Physiology-Renal Physiology 281, no. 3 (September 1, 2001): F503–F512. http://dx.doi.org/10.1152/ajprenal.2001.281.3.f503.

Abstract:
Puromycin aminonucleoside (PAN) has been suggested to reduce glomerular charge density, to create large glomerular “leaks,” or not to affect the glomerular barrier. Therefore, we analyzed glomerular charge and size selectivity in vivo and in isolated kidneys perfused at 8°C (cIPK) in control and PAN-treated rats. The fractional clearances (θ) for albumin and Ficoll of similar hydrodynamic size were 0.0017 ± 0.0004 and 0.15 ± 0.02, respectively, in control cIPKs. Two-pore analysis gave similar results in vivo and in vitro, with small- and large-pore radii of 47–52 and 85–105 Å, respectively, in controls. Puromycin increased the number of large pores 40–50 times, the total pore area over diffusion distance decreased by a factor of 25–30, and the small-pore radius increased by 33% ( P < 0.001 for all comparisons of size selectivity and θ). The effect of PAN was less dramatic on the estimated wall charge density, which was 73% of that of controls. We conclude that puromycin effectively destroys the glomerular size barrier with minimal effects on charge density.

Dissertations / Theses on the topic "Distance de barrière minimale"

1

On, Vu Ngoc Minh. "A new minimum barrier distance for multivariate images with applications to salient object detection, shortest path finding, and segmentation." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS454.

Abstract:
Hierarchical image representations are widely used in image processing to model the content of an image as a multi-scale structure. A well-known hierarchical representation is the tree of shapes (ToS), which encodes the inclusion relationship between connected components obtained at different threshold levels. This kind of tree is self-dual, invariant to contrast changes, and popular in the computer vision community. In this work, we use this representation to compute a new distance belonging to the domain of mathematical morphology. Distance transforms and the saliency maps they induce are widely used in image processing, computer vision, and pattern recognition. One of the most commonly used distance transforms is the geodesic one. Unfortunately, this distance does not always achieve satisfying results on noisy or blurred images. Recently, a new pseudo-distance, called the minimum barrier distance (MBD), which is more robust to pixel fluctuations, was introduced. Some years later, Géraud et al. proposed a good and fast-to-compute approximation of this distance: the Dahu pseudo-distance. Since this distance was initially developed for grayscale images, we propose an extension of the transform to multivariate images, which we call the vectorial Dahu pseudo-distance. This new distance is easily and efficiently computed thanks to the multivariate tree of shapes (MToS). We propose an efficient way to compute this distance and the saliency map it induces, and we investigate its behavior in the presence of noise and blur, showing it to be robust to pixel fluctuations. To validate the new distance, we provide benchmarks demonstrating that the vectorial Dahu pseudo-distance is more robust than, and competitive with, other MBD-based distances. It is promising for salient object detection, shortest-path finding, and object segmentation. Moreover, we apply it to document detection in videos. Our method is a region-based approach that relies on the visual saliency deduced from the Dahu pseudo-distance, and its performance is competitive with state-of-the-art methods on the ICDAR Smartdoc 2015 Competition dataset.
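As a concrete illustration of the distance this work builds on: for a path π in a grayscale image f, the barrier cost is max f(π) − min f(π), and the minimum barrier distance between two pixels is the smallest barrier cost over all paths joining them. A minimal sketch for small integer-valued images (a plain Dijkstra search over (pixel, lo, hi) states, which is exact because a path's cost depends only on the interval it has spanned; this is illustrative only and is not the thesis's Dahu or tree-of-shapes algorithm):

```python
import heapq

def minimum_barrier_distance(img, seed):
    """Minimum barrier distance from `seed` to every pixel.

    For a path pi, the barrier cost is max(f on pi) - min(f on pi);
    the MBD is the minimum barrier cost over all connecting paths.
    Dijkstra runs over states (pixel, lo, hi): extending a path only
    widens its [lo, hi] interval, so edge costs are nonnegative.
    """
    h, w = len(img), len(img[0])
    sy, sx = seed
    v = img[sy][sx]
    dist = [[float("inf")] * w for _ in range(h)]
    heap = [(0, sy, sx, v, v)]  # (cost, y, x, lo, hi)
    seen = set()
    while heap:
        cost, y, x, lo, hi = heapq.heappop(heap)
        if (y, x, lo, hi) in seen:
            continue
        seen.add((y, x, lo, hi))
        dist[y][x] = min(dist[y][x], cost)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nlo = min(lo, img[ny][nx])
                nhi = max(hi, img[ny][nx])
                heapq.heappush(heap, (nhi - nlo, ny, nx, nlo, nhi))
    return dist

img = [[0, 9, 0],
       [0, 9, 0],
       [0, 0, 0]]
d = minimum_barrier_distance(img, (0, 0))
# Going around the column of 9s keeps the interval at [0, 0]:
print(d[0][2])  # 0
```

Note how this differs from a geodesic distance, which would accumulate cost along the detour; the MBD only cares about the gray-level range a path traverses, which is what makes it robust to noise and blur.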
2

Ngo, Quoc-Tuong. "Généralisation des précodeurs MIMO basés sur la distance euclidienne minimale." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00839594.

Abstract:
In this thesis, we used matrix theory and linear algebra to design new MIMO precoders based on the minimum Euclidean distance (max-dmin) between points of the received constellations. Because of the high complexity of the solution, driven by the number of antennas and the number of symbols in the constellation, this type of precoder previously existed only for 2 transmit streams and simple modulations. We first extended it to 16-QAM, before generalizing the concept to any QAM modulation. The use of trigonometric functions then allowed a new representation of the channel by means of two angles, opening the way to a dmin precoder for three data streams. Thanks to this scheme, a non-optimal extension of the max-dmin precoder to an odd number of symbol streams using QAM modulations is obtained. When maximum-likelihood detection is used, the number of neighbors providing the minimum distance is also very important for computing the BER. To take this parameter into account, a new precoder, with no possible rotation, is considered, leading to a less complex expression and a restricted solution space. Finally, an approximation of the minimum distance was derived by maximizing the minimum diagonal element of the SNR-maximizing matrix. The major advantage of this design is that the solution is available for any rectangular QAM modulation and any number of symbol streams.
3

Collin, Ludovic. "Optimisation de systèmes multi-antennes basée sur la distance minimale." Brest, 2002. http://www.theses.fr/2002BRES2041.

Abstract:
Multiple-Input Multiple-Output (MIMO) digital transmission systems currently attract more and more attention due to the very high spectral efficiencies they can achieve over rich scattering transmission channels, such as wireless local area networks (WLAN) or urban mobile wireless communications. In this thesis, in order to evaluate the bit error rate (BER) of MIMO communications quickly and efficiently, we propose to use the second-order Unscented Transformation method. Next, we propose a fast maximum-likelihood-based (MLB) decoder for a MIMO Rician fading channel. Lastly, we propose two new non-diagonal precoders based on the optimization of the minimum Euclidean distance. Comparisons with other known precoders, such as water-filling (WF), minimum mean square error (MMSE), and maximization of the minimum singular value of the global channel matrix, illustrate the significant BER improvement brought by the proposed precoders.
4

Cadic, Emmanuel. "Construction de Turbo Codes courts possédant de bonnes propriétés de distance minimale." Limoges, 2003. http://aurore.unilim.fr/theses/nxfile/default/2c131fa5-a15a-4726-8d49-663621bd2daf/blobholder:0/2003LIMO0018.pdf.

Abstract:
This thesis aims at building turbo codes with good minimum distances, delaying the "error floor" phenomenon, which corresponds to a threshold of about 10^-6 for the binary error rate below which the slope of the BER curve decreases significantly. This problem is alleviated by the use of duo-binary turbo codes [11], which guarantee better minimum distances. In order to obtain good minimum distances with short turbo codes (length below 512), the first construction used and studied is the one proposed by Carlach and Vervoux [26]. It yields very good minimum distances, but its decoding is unfortunately very difficult because of its structure. After identifying the reasons for this problem, we modified these codes by using graphical structures built from low-complexity component codes. The idea is to make this change without losing the minimum-distance properties; consequently, we had to understand why the minimum distances of this family of codes are good and define a new criterion for choosing "good" component codes. This criterion is independent of the minimum distance of the component codes because it is derived from their Input-Output Weight Enumerator (IOWE). It allows us to choose component codes of very low complexity, which are combined to provide 4-state tail-biting trellises. These trellises are then used to build multiple parallel concatenated and serial turbo codes with good minimum distances. Some extremal self-dual codes have been built in this way.
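For a linear code, the minimum distance discussed here is the smallest Hamming weight of a nonzero codeword. A brute-force sketch (illustrative only: it enumerates all 2^k messages, so it applies only to very short codes, unlike the structured constructions of the thesis):

```python
import itertools

def minimum_distance(G):
    """Minimum Hamming weight of a nonzero codeword of the binary
    linear code spanned by generator matrix G (brute force over all
    2^k messages, so only suitable for small k)."""
    k = len(G)
    n = len(G[0])
    best = n
    for msg in itertools.product((0, 1), repeat=k):
        if not any(msg):
            continue  # skip the all-zero codeword
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best

# The [7,4] Hamming code in systematic form has minimum distance 3.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(minimum_distance(G))  # 3
```

A minimum distance of d means the code corrects floor((d-1)/2) errors, which is why pushing d up for short turbo codes directly lowers the error floor.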
5

Ngo, Quoc-Tuong. "Généralisation des précodeurs basés sur la distance minimale pour les systèmes MIMO à multiplexage spatial." Rennes 1, 2012. http://www.theses.fr/2012REN1E001.

Abstract:
In this thesis, we studied efficient non-diagonal precoders based on the maximization of the minimum Euclidean distance (max-dmin) between two received data vectors. Because the complexity of the optimized solutions depends on the number of antennas and the modulation order, the max-dmin precoder was previously available in closed form only for two independent data streams with low-order modulations. We therefore first extended this solution to two 16-QAM symbols and then generalized the concept to any rectangular QAM modulation. Using trigonometric functions, a new virtual MIMO channel representation based on two channel angles allows the parameterization of the max-dmin precoder and the optimization of the distance for three parallel data streams. Thanks to this scheme, an extension to an odd number of data streams using QAM modulations is obtained. Not only the minimum Euclidean distance but also the number of neighbors providing it plays an important role in reducing the error probability when ML detection is used at the receiver. Aiming to reduce this number of neighbors, a new precoder in which the rotation parameter has no influence is proposed, leading to less complex processing and a smaller space of solutions. Finally, an approximation of the minimum distance was derived by maximizing the minimum diagonal element of the SNR-like matrix. The major advantage of this design is that the solution is available for all rectangular QAM modulations and any number of data streams.
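The quantity these precoders maximize can be illustrated by brute force: for a channel H, precoder F, and symbol alphabet, dmin is the smallest distance ‖HF(x − x′)‖ over distinct symbol vectors x, x′. A toy sketch (the channel, the rotation angle, and both precoders below are hypothetical examples for illustration, not the optimized max-dmin solutions derived in the thesis):

```python
import itertools
import numpy as np

def d_min(H, F, symbols, n_streams):
    """Minimum Euclidean distance between received constellation points
    H @ F @ x over all pairs of distinct symbol vectors x (brute force,
    so only practical for small alphabets and few streams)."""
    best = np.inf
    points = list(itertools.product(symbols, repeat=n_streams))
    for i, a in enumerate(points):
        for b in points[i + 1:]:
            d = np.linalg.norm(H @ F @ (np.array(a) - np.array(b)))
            best = min(best, d)
    return best

# Two BPSK streams over a toy diagonal channel with strongly unbalanced
# subchannels; compare the trivial identity precoder with an arbitrary
# power-normalized rotation (both normalized to unit total power).
H = np.diag([1.0, 0.2])
bpsk = [-1.0, 1.0]
F_id = np.eye(2) / np.sqrt(2)
theta = np.pi / 8
F_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]) / np.sqrt(2)
print(d_min(H, F_id, bpsk, 2), d_min(H, F_rot, bpsk, 2))
```

Even this arbitrary rotation mixes the weak subchannel with the strong one and noticeably increases dmin over the identity precoder, which is the intuition behind non-diagonal max-dmin precoding.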
6

Vrigneau, Baptiste. "Systèmes MIMO précodés optimisant la distance minimale : étude des performances et extension du nombre de voies." Phd thesis, Université de Bretagne occidentale - Brest, 2006. http://tel.archives-ouvertes.fr/tel-00481141.

Abstract:
Multiple-Input Multiple-Output (MIMO) systems in digital communications improve data transmission along two main, often antagonistic, parameters: the information rate and the transmission reliability, assessed via the average bit error probability (BEP). With such systems, channel state information (CSI) at the transmitter is a key point for lowering the BEP through different power-allocation strategies. A linear precoder at the transmitter, combined with a linear decoder at the receiver, can thus optimize a particular criterion using this information. This has produced an important family of precoders called "diagonal precoders": the MIMO system becomes equivalent to independent SISO subchannels. The optimized criteria include minimizing the mean square error (MMSE), maximizing capacity (WF), equalizing the BEP across all data streams (EE), maximizing the post-processing SNR (max-SNR), and quality of service (QoS). The TST team recently designed a new non-diagonal precoder based on maximizing the minimum distance between symbols of the received constellation (max-dmin). The aim of this thesis is to estimate the BEP performance of this new precoder and to compare it with existing methods, namely the Alamouti code and the diagonal precoders. We focused in particular on proving the maximum diversity order of the max-dmin precoder and then on deriving a good approximation of its BEP. The max-dmin precoder is then combined with polarization diversity, which reduces the cost and the spatial footprint of a MIMO system. Despite the correlation thus introduced, the performance of the max-dmin precoder remains attractive.
We then proposed an extension of the max-dmin precoder that removes the limitation to two subchannels: large MIMO systems are better exploited with more than two subchannels.
7

Vrigneau, Baptiste. "Système MIMO précodés optimisant la distance minimale : Etude des performances et extension du nombre de voies." Brest, 2006. http://www.theses.fr/2006BRES2033.

Full text
Abstract:
Les systèmes multi-antennaires (Multiple-Input Multiple-Ouput) dans le domaine des communications numériques permettent d'améliorer la transmission des données selon deux paramètres antagonistes : le débit d'information et la fiabilité de transmission (Probabilité d'Erreurs Binaire moyenne). Avec de tels systèmes, la connaissance du canal à l'émission (Channel State Information) est un point-clé pour diminuer la PEB grâce à différentes stratégies d'allocations de puissance. Ainsi, un précodeur linéaire à l'émission associé à un décodeur linéaire à la réception optimisent un critère pertinent grâce à cette information. Il en résulte une famille importante de précodeurs dénommée "précodeurs diagonaux" : le système MIMO est équivalent à des sous-canaux indépendants. Les critères optimisés sont par exemple la minimisation de l'erreur quadratique moyenne ou la maximisation de la capacité. L'équipe TST a récemment élaboré un nouveau précodeur non diagonal maximisant la distance minimale entre les symboles reçus (max-dmin). L'enjeu de cette thèse est d'estimer les performances en terme de PEB de ce nouveau précodeur et de les i comparer avec les méthodes existantes (code d'Alamouti et précodeurs diagonaux). Nous avons démontré l'ordre de diversité maximal du max-drain puis déterminé une bonne approximation de sa PEB. Le précodeur max-dmin est ensuite associé à de la diversité de polarisation permettant de réduire le coût et l'occupation spatiale d'un système MIMO. Malgré l'introduction de corrélation, les performances du max-dmin sont intéressantes. Nous avons ensuite proposé une extension du précodeur max-dmin permettant de supprimer la limitation à deux sous-canaux
In wireless communications, Multiple-Input Multiple-Output (MIMO) systems constitute an efficient way to significantly enhance data transmission according to two main, though antagonistic, parameters: the spectral efficiency and the reliability assessed from the average binary error probability (BEP). With such systems, knowledge of the channel state information (CSI) at the transmitter side is paramount to reduce the BEP through different strategies of power allocation. Indeed, once the CSI is fully (or perfectly) known, a linear precoder at the transmit side and a linear decoder at the receive side can be jointly designed by optimizing one of the following criteria: the minimum mean square error (MMSE) or the capacity. Their respective optimizations have led to a family of diagonal precoders: the MIMO system is equivalent to independent SISO subchannels. Recently, a new non-diagonal precoder designed within our laboratory optimizes the minimal Euclidean distance between received symbols. This thesis work was aimed at estimating the BEP of this precoder for comparison with other methods (Alamouti's code and diagonal precoders). We demonstrated the maximal diversity order of the max-dmin, and then gave a tight BEP approximation. Moreover, the spatial dimensions and the final cost of a MIMO device were reduced by associating the max-dmin precoder with polarization diversity. Despite the correlation induced by this system, the max-dmin performances are still worth considering. We also proposed an extension of the max-dmin to more than two subchannels in order to exploit larger MIMO systems.
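A brute-force sketch may help picture the quantity this precoder maximizes: for a fixed channel H, precoder F, and finite constellation, the minimum Euclidean distance between distinct received vectors y = HFs can be enumerated directly. The channel matrix, precoder, and BPSK constellation below are illustrative values, not taken from the thesis.

```python
import itertools
import numpy as np

def min_received_distance(H, F, constellation):
    """Smallest Euclidean distance between distinct received vectors
    y = H @ F @ s over all symbol vectors s (brute force)."""
    symbols = [np.array(s, dtype=float)
               for s in itertools.product(constellation, repeat=F.shape[1])]
    received = [H @ F @ s for s in symbols]
    return min(np.linalg.norm(a - b)
               for i, a in enumerate(received)
               for b in received[i + 1:])

# toy 2x2 channel, unit-power identity precoder, BPSK symbols
H = np.array([[1.0, 0.2], [0.1, 0.8]])
F = np.eye(2) / np.sqrt(2)
dmin = min_received_distance(H, F, [-1.0, 1.0])
```

A max-dmin design would search over precoders F (under a power constraint) for the one making this distance largest; the enumeration above is only the evaluation step.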
APA, Harvard, Vancouver, ISO, and other styles
8

Aubry, Christophe. "Estimation paramétrique par la méthode de la distance minimale pour les processus de Poisson et de diffusion." Le Mans, 1997. http://www.theses.fr/1997LEMA1005.

Full text
Abstract:
The aim of this thesis is to study certain aspects of the asymptotic theory of parametric estimation. It deals with the minimum distance method and the mean minimum distance method when one observes a weakly noisy diffusion process or a Poisson process. The first chapter states preliminary results: reminders on diffusion processes, Poisson processes, and statistics. The second chapter establishes the normality, under a particular asymptotic regime, of minimum distance estimators for the observation of a diffusion process. Chapter 3 addresses minimum distance estimation for the observation of Poisson processes whose intensity function is periodic; we establish consistency, the form of the limiting distribution of the estimator, convergence of moments, and normality under a particular asymptotic regime. Finally, chapter 4 introduces a class of Bayesian estimators called mean minimum distance estimators. For asymptotically normal observed models, we prove consistency, asymptotic normality, and convergence of moments. These results are then applied to the observation of diffusion processes defined as in chapter 2 and of Poisson processes defined as in chapter 3.
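As a toy illustration of the minimum distance idea for Poisson processes (a deliberately simplified homogeneous case; the thesis treats periodic intensities), one can estimate the rate by minimizing an L2 distance between the empirical mean count function and the model mean λt:

```python
import numpy as np

rng = np.random.default_rng(0)

def mde_rate(paths, T, grid):
    """Minimum-distance estimate of a homogeneous Poisson rate:
    pick the lambda whose mean function lambda * t is closest (in L2)
    to the empirical mean count function of the observed paths."""
    t = np.linspace(0.0, T, 200)
    # empirical mean number of arrivals up to each time u
    emp = np.mean([[np.searchsorted(p, u) for u in t] for p in paths], axis=0)
    dt = t[1] - t[0]
    dists = [np.sum((emp - lam * t) ** 2) * dt for lam in grid]
    return float(grid[int(np.argmin(dists))])

# simulate 50 paths of a rate-2 Poisson process observed on [0, 10]
lam_true, T = 2.0, 10.0
paths = [np.cumsum(rng.exponential(1.0 / lam_true, size=50)) for _ in range(50)]
lam_hat = mde_rate(paths, T, np.linspace(0.5, 4.0, 351))
```

The grid search stands in for the argmin over the parameter set; the asymptotic theory in the thesis concerns the behavior of such estimators as the number of observations grows.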
APA, Harvard, Vancouver, ISO, and other styles
9

Galand, Fabien. "Construction de codes Z_p^k-linéaires de bonne distance minimale et schémas de dissimulation fondés sur les codes de recouvrement." Caen, 2004. http://www.theses.fr/2004CAEN2047.

Full text
Abstract:
This thesis investigates two research directions based on codes, each focusing on a particular parameter. The first is error correction, where we are interested in the minimum distance of codes. Our goal is to construct codes over Fp with good minimum distance, using the Hensel lift together with Zpk-linearity. We give the minimum distance, at short lengths, of a generalization of the Kerdock and Preparata codes, as well as of lifts of quadratic residue codes. Among these codes, four match the best known linear codes. We also give a construction that increases the cardinality of Zpk-linear codes by adding cosets; this construction leads to an upper bound on the cardinality of Zpk-linear codes. The second direction, distinct from the first in its objective but related through the objects studied, is the construction of data-hiding schemes. We relate this steganographic problem to the construction of covering codes. We consider two models of hiding schemes, which are proved equivalent to covering codes; we use this equivalence to expose the structure of the coverings used in previously published work, to derive upper bounds on the capacity of such schemes, and, by giving constructions based on linear coverings, to obtain lower bounds.
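The link between hiding schemes and covering codes can be illustrated with the classic [7,4] Hamming code, whose covering radius of 1 lets one embed three message bits in seven cover bits while changing at most one of them. This is the standard matrix-embedding (syndrome-coding) construction, shown here as background rather than as the thesis's own scheme:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column i (1-based)
# is the binary expansion of i, so a nonzero syndrome names a position.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def embed(cover, message):
    """Hide 3 message bits in 7 cover bits, flipping at most one bit."""
    s = (H @ cover + message) % 2        # syndrome mismatch
    pos = int(s[0] + 2 * s[1] + 4 * s[2])
    stego = cover.copy()
    if pos:
        stego[pos - 1] ^= 1              # flip the single designated bit
    return stego

def extract(stego):
    """Recover the hidden bits as the syndrome of the stego block."""
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
message = np.array([1, 0, 1])
stego = embed(cover, message)
```

Any code with covering radius R yields the same trade-off: every syndrome is reachable by changing at most R positions, which bounds the embedding capacity discussed in the abstract.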
APA, Harvard, Vancouver, ISO, and other styles
10

Crozet, Sébastien. "Efficient contact determination between solids with boundary representations (B-Rep)." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM089/document.

Full text
Abstract:
With the development of advanced robotic systems and complex teleoperation tasks, there is a growing need to run simulations ahead of operations on the real systems, in particular for feasibility tests, training of human operators, motion planning, etc. These simulations must generally render the physical phenomena accurately, notably if the human operator is expected to face the same mechanical behaviors in the real world as in the virtual scene. Collision detection, i.e. the computation of contact points and contact normals between moving rigid objects likely to interact, takes up a significant share of the computation time of such simulations. The accuracy and continuity of this contact information are of primary importance for producing realistic behavior of the simulated objects. However, the quality of the computed contact information depends strongly on the geometric representation of the parts of the virtual scene directly involved in the mechanical simulation. On the one hand, geometric representations based on discrete volumes or tessellations allow extremely fast contact generation, but introduce numerical artifacts due to the approximation of the shapes in contact. On the other hand, smooth boundary representations (composed of smooth curves and surfaces) produced by CAD modelers eliminate this approximation problem.
However, these approaches are currently considered too slow in practice for real-time applications. This thesis is devoted to developing a first collision-detection framework for solids modeled by smooth boundary representations that is efficient enough to offer real-time performance for certain industrial applications requiring a high level of accuracy. These applications typically take the form of simulating insertion operations with small clearances. The proposed approach is based on a bounding-volume hierarchy and takes advantage of key features of industrial mechanical components, whose surfaces subject to functional contacts are generally modeled by canonical surfaces (cylinders, spheres, cones, planes, tori). Contacts on interpolation surfaces such as NURBS are generally accidental and encountered during maintenance and manufacturing operations. The bounding-volume hierarchy is enhanced by the identification of supermaximal features, to avoid redundant localization of contact points between canonical surfaces that are sometimes split into several distinct features. Moreover, the concept of polyhedral normal cones is defined to establish tighter normal bounds than the existing normal cones of revolution. Additionally, the framework is extended to support configurations including cables modeled as dilated Bézier curves. Finally, the exploitation of temporal coherence, together with the parallelization of the whole framework, enables real-time execution of certain industrial scenarios.
With the development of advanced robotic systems and complex teleoperation tasks, the need to perform simulations before operating on physical systems becomes of increasing interest for feasibility tests, training of the human operators, motion planning, etc. Such simulations usually need to be performed with great accuracy of physical phenomena if, e.g., the operator is expected to face the same ones in the real world and in the virtual scene. Collision detection, i.e., the computation of contact points and contact normals between interacting rigid bodies, occupies a time-consuming part of such a physical simulation. The accuracy and smoothness of such contact information is of primary importance to produce a realistic behavior of the simulated objects. However, the quality of the computed contact information strongly depends on the geometric representation of the parts of the virtual scene directly involved in the mechanical simulation. On the one hand, discrete-volume-based and tessellation-based geometric representations allow very fast contact generation at the cost of potential numerical artifacts due to the approximation of the interacting geometrical shapes. On the other hand, the use of boundary representations (issued by CAD modelers) composed of smooth curves and surfaces removes this approximation problem but is currently considered too slow in practice for real-time applications. This Ph.D. thesis focuses on developing a first complete collision detection framework for solids with smooth boundary representations that achieves real-time performance. Our goal is to allow the real-time simulation of industrial scenarios that require a high level of accuracy. Typical applications are insertion tasks with small mechanical clearances.
The proposed approach is based on a bounding-volume hierarchy and takes advantage of key features of industrial mechanical components, which are often modeled with surfaces describing functional contacts with canonical surfaces (cylinder, sphere, cone, plane, torus), while contacts over free-form surfaces like B-Splines are mostly accidental and encountered during operations of maintenance and manufacturing. We augment this hierarchy with the identification of supermaximal features in order to avoid redundant exact localization of contact points on canonical surfaces that may be represented as distinct features of the CAD model. In addition, we define polyhedral normal cones that offer tighter bounds of normals than existing normal cones of revolution. Moreover, we extend our method to handle configurations that involve beams modeled as deformable dilated Bézier curves. Finally, parallelization of the full approach allows industrial scenarios to be executed in real time.
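The broad-phase pruning that a bounding-volume hierarchy provides can be sketched in a few lines; the 2-D axis-aligned boxes below are a deliberate simplification of the hierarchies of canonical-surface bounds described in the thesis:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    lo: Tuple[float, float]              # AABB min corner (2-D for brevity)
    hi: Tuple[float, float]              # AABB max corner
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    leaf_id: Optional[int] = None        # set on leaves only

def overlap(a: Node, b: Node) -> bool:
    return all(a.lo[k] <= b.hi[k] and b.lo[k] <= a.hi[k] for k in range(2))

def collide(a: Node, b: Node, out: List[Tuple[int, int]]) -> None:
    """Descend two hierarchies simultaneously, pruning any subtree
    pair whose bounding boxes do not overlap."""
    if not overlap(a, b):
        return                            # whole subtrees culled here
    if a.leaf_id is not None and b.leaf_id is not None:
        out.append((a.leaf_id, b.leaf_id))
    elif a.leaf_id is None:
        collide(a.left, b, out)
        collide(a.right, b, out)
    else:
        collide(a, b.left, out)
        collide(a, b.right, out)

# a two-leaf hierarchy tested against a single leaf
a = Node((0.0, 0.0), (3.0, 1.0),
         left=Node((0.0, 0.0), (1.0, 1.0), leaf_id=0),
         right=Node((2.0, 0.0), (3.0, 1.0), leaf_id=1))
b = Node((0.5, 0.5), (0.6, 0.6), leaf_id=2)
pairs: List[Tuple[int, int]] = []
collide(a, b, pairs)
```

In the thesis the surviving leaf pairs would then go to an exact narrow-phase test between canonical surfaces; here the sketch stops at the candidate pairs.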
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Distance de barrière minimale"

1

Marzano, Gilberto. "Social Telerehabilitation." In Advanced Methodologies and Technologies in Medicine and Healthcare, 452–65. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7489-7.ch036.

Full text
Abstract:
Social telerehabilitation, which focuses on solving limitations and social issues associated with health conditions, represents a further specialization in telerehabilitation. Both telerehabilitation and social telerehabilitation are grounded in the delivery of rehabilitation services through telecommunication networks, especially by means of the internet. Essentially, telerehabilitation comprises methods of delivering rehabilitation services using ICT to minimize the barriers of distance, time, and cost. One can define social telerehabilitation as being the application of ICT to provide equitable access to social rehabilitation services, at a distance, to individuals who are geographically remote, and to those who are physically and economically disadvantaged.
APA, Harvard, Vancouver, ISO, and other styles
2

Marzano, Gilberto. "Social Telerehabilitation." In Encyclopedia of Information Science and Technology, Fourth Edition, 5930–40. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2255-3.ch516.

Full text
Abstract:
Social telerehabilitation, which focuses on solving limitations and social issues associated with health conditions, represents a further specialization in telerehabilitation. Both telerehabilitation and social telerehabilitation are grounded in the delivery of rehabilitation services through telecommunication networks, especially by means of the Internet. Essentially, telerehabilitation comprises methods of delivering rehabilitation services using ICT to minimize the barriers of distance, time, and cost. One can define social telerehabilitation as being the application of ICT to provide equitable access to social rehabilitation services, at a distance, to individuals who are geographically remote, and to those who are physically and economically disadvantaged.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Distance de barrière minimale"

1

Kostecki, Jodi, Matthew Edel, and John Montoya. "Importance of Connections in High-Pressure Barricade Design." In ASME 2018 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/pvp2018-84765.

Full text
Abstract:
High pressure testing and operation of industrial tools and equipment can be hazardous to personnel and property in the event of an accidental mechanical failure or release of contained pressure. Hazard barricades are commonly installed around equipment to protect nearby personnel or property from projectile impacts or overpressure. The ASME Standard PCC-2 “Repair of Pressure Equipment and Piping” allows the use of hazard barricades for this purpose when a safe standoff distance cannot be satisfied, but it currently provides minimal guidance for engineered design. Other references provide guidance for preventing projectile perforation of a barricade. While perforation prevention is a key component of shield design, properly anchoring a shield and inter-connecting the shield components will make the difference between an effective barricade application and a barrier that could potentially compound the consequences of an accidental failure. This paper investigates the importance of engineered structural connections and consideration of global structural response in the design of protective barricades. The structural models focus on impact loading of steel plates and bolted connections, and the results are directly compared to test results in terms of effective barrier response.
APA, Harvard, Vancouver, ISO, and other styles
2

Miers, Kevin T., Daniel L. Prillaman, and Nausheen M. Al-Shehab. "Optimized Impact Mitigation Barriers for Insensitive Munitions Compliance of a 120mm Warhead." In 2019 15th Hypervelocity Impact Symposium. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/hvis2019-105.

Full text
Abstract:
Abstract The U.S. Army Combat Capabilities Development Command (CCDC) Armaments Center at Picatinny Arsenal, NJ is working to develop technologies to mitigate the violent reaction of a 120 mm warhead, loaded with an aluminized HMX-based enhanced blast explosive, when subjected to the NATO Insensitive Munitions (IM) Fragment Impact (FI) test. As per NATO STANAG 4496, FI testing is conducted at 8300±300 ft/s with a 0.563” diameter, L/D~1, 160˚ conical nosed mild steel fragment. Reaction violence resulting from FI can be mitigated by the use of liners or barriers applied to the munition itself or its packaging, commonly referred to as a Particle Impact Mitigation Sleeves (PIMS). Previous development efforts for this item focused on a lightweight plastic warhead support which was able to reduce the severity of the input shock sufficiently to prevent high order detonation. However, violent sub-detonative responses were still observed which occurred over several hundred microseconds, consumed part of the explosive charge, and ejected hazardous debris over large distances. These responses are driven by rapid combustion coupled with damage to the explosive as well as mechanical confinement. Quantitative modeling of these scenarios is a challenging active research area. Prior experimental results and modeling guidance have shown that mitigation of these reactions requires a more substantial reduction in the overall mechanical insult to the explosive. In particular, steel and aluminum PIMS have been able to efficiently provide the necessary fragment velocity reduction, breakup and dispersion in typical packaging applications. Packaged warheads were tested at the GD-OTS Rock Hill facility with several PIMS designs incorporated into the ammunition containers. Several designs were demonstrated to provide benign reactions with minimal added weight. Future iterations will attempt to further improve the design using advanced lightweight barrier materials.
APA, Harvard, Vancouver, ISO, and other styles
3

Khan, Mohammed Yunus, Mohammed Taha Al-Murayri, Satish Kumar Eadulapally, Haya Ebrahim Al-Mayyan, Deema Alrukaibi, and Anfal Al-Kharji. "Strategies to Design Fit for Purpose EOR Pilots by Integrating Dynamic and Static Data in a Highly Heterogeneous Oolitic Carbonate Reservoir." In International Petroleum Technology Conference. IPTC, 2024. http://dx.doi.org/10.2523/iptc-23708-ms.

Full text
Abstract:
Abstract Umm Gudair Minagish Oolite is a heterogeneous carbonate reservoir with random intermittent micritic units forming low permeability barriers to fluid flow. The facies, permeability variations and barriers have limited lateral extension. Therefore, different strategies need to be designed to implement accelerated fit-for-purpose polymer injectivity pilots without compromising the proper assessment of key parameters such as polymer injectivity, polymer adsorption, resistance factor, in-situ rheological properties, volumetric sweep efficiency, incremental oil gains, and polymer breakthrough. The field is divided into geological sub-regions based on reservoir scale heterogeneities by integrating static and dynamic data. The pilot location for each region is selected such that it shows minimal variations in reservoir properties in terms of facies, permeability, and extension of barriers. Simulation results were analyzed for each considered pilot area based on injectivity, pilot duration, oil peak rate, overall polymer performance and economics. Using these parameters, pilot design and locations are ranked while emphasizing the need to reduce the number of additional required wells to de-risk polymer flooding as a precursor for commercial development. Based on time-lapse saturation logs different sweep zones are identified and correlated with the facies. The maximum oil swept is observed in clean Grainstones. The facies characterization along with production data were used for defining the geological sub-regions. The pilot performance was analyzed using high-resolution numerical simulation for each geological sub-region, using high-salinity produced water. Thereafter, pilot design and locations were ranked based on dynamic performance. The best performing polymer injectivity pilot, with limited well requirements, was selected for field implementation including one injector and one producer with an inter-well distance of 80m. 
The envisioned pilot duration is 6 months, showing promising incremental oil gains from polymer injection compared to water injection. Besides incremental oil gains, the utilization of produced water for polymer injection improves operational efficiency and cost optimization.
APA, Harvard, Vancouver, ISO, and other styles
4

Sachakamol, Punnamee, and Liming Dai. "Noise Prediction Model Development for the Traffic Noise on Asphalt Rubber Roads." In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-13327.

Full text
Abstract:
Traffic noise prediction techniques are important tools for assessing the effects of noise mitigation. A number of noise prediction models are available for predicting noise levels at a receptor point. Traditionally, these noise predictions are limited to roadside areas, where the effects of buildings and other infrastructure acting as barriers to noise propagation are not considered. This paper describes the application of simulation and modeling of a simplified traffic noise prediction method based on the U.S. FHWA highway model and existing traffic noise prediction models. The simplification has been achieved mainly by using the assumption that traffic flow speeds of various vehicle classes are correlated and similar in magnitude. Also, an assumption is made that ground attenuation depends not only on the type of ground cover but also on the horizontal distance between the source and the receiver. Finally, the research intends to numerically evaluate the tire-pavement noise of roads with Asphalt Rubber (AR) pavement to minimize the traffic noise generated by the pavement. The application of simulation and modeling by packaged software will be introduced for utilizing the results, planning purposes, and preliminary prediction of the traffic noise level on the AR pavement road section in Saskatchewan. This traffic noise prediction model will be simple to use by any end users, particularly environmental planners, acoustic engineers, and non-specialists.
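The distance-dependent part of such a prediction can be illustrated with a generic line-source attenuation formula, L(d) = L(d_ref) - 10(1 + alpha) log10(d / d_ref), where alpha models excess ground attenuation; the coefficients below are illustrative values, not the exact FHWA ones:

```python
import math

def receiver_level(l_ref, d_ref, d, alpha):
    """Sound level at distance d from a line source: geometric
    spreading plus an excess ground-attenuation factor alpha
    (generic textbook form, not the exact FHWA coefficients)."""
    return l_ref - 10.0 * (1.0 + alpha) * math.log10(d / d_ref)

# 75 dB(A) measured at 15 m over soft ground (alpha = 0.5): level at 60 m
level_60m = receiver_level(75.0, 15.0, 60.0, 0.5)
```

With alpha = 0 the formula reduces to the hard-ground 3 dB-per-doubling decay of a line source; larger alpha mimics the soft-ground cover the abstract mentions.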
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Y., L. Deng, A. Augustine, H. Thar, A. Al Ali, A. K. Anurag, A. Alkatheeri, A. Soliman, E. Elabsy, and M. Atia. "Maximize Sweep Efficiency Using Proactive Tar Boundary Detection to Effectively Place a Water Injector Well Using Multiple Workflows of Distance-To-Boundary Inversion Techniques." In ADIPEC. SPE, 2023. http://dx.doi.org/10.2118/216017-ms.

Full text
Abstract:
Abstract In most field developments, the major objectives of placing water injectors are to minimize the oil column below the well trajectory, maximize the sweep efficiency, and avoid drilling below tar, which acts as a permeability barrier reducing the effectiveness of the injectors. The case study showcases integration of two conceptual tar models in the well planning phase & implementation of multiple inversion workflows for a deep azimuthal resistivity (DAR) tool to enable successful detection & mapping of the tar surface. Two tar models were generated from seismic inversion and a petroleum system study. They highlight the depth and areal extension uncertainties for tar presence to be incorporated in well planning stages. A detailed Reservoir Navigation feasibility model constructed in the pre-drilling stage showed that the tar surface exhibits characteristic high resistivity in offset wells. This could be used for early detection and avoidance of tar by using a DAR tool, NMR-Density-Neutron, and mud logging, which were selected to effectively characterize and map the tar. While drilling, multiple inversion workflows based on azimuthal resistivity data were utilized in real-time to accurately assess several possible scenarios. More guidance was applied along sections with moderate change of environment, while open models allowed checking the global equivalency or finding solutions in regions where changes in resistivity were substantial. The integrated work from planning to execution stage not only saved a pilot hole and sidetrack cost but also maximized value for injector placement & subsurface characterization. The well was successfully landed above the modelled shallower tar surface. The tar presence was evaluated using joint analysis of resistivity inversion, NMR-Density-Neutron data, and cutting samples. It was key to have both conceptual models to provide a much smoother well profile for accessibility and to avoid any possible sidetrack.
For the subject well, inversion workflows with optimized parameter inputs provided good mapping of the tar mat. Linear uncertainty analysis of the resistivity data inversion results confirmed high accuracy of the distance-to-boundary estimation. The multiple inversion workflows allowed adjusting the variables based on the environment and provided an interpretation output that mapped the tar surface with high confidence, and Geosteering decisions were made in a timely manner. The resistive top tar layer was tagged for final confirmation and interpretation from cuttings & the NMR-Density/Neutron tool before TD. Proactive Geosteering recommendations allowed a gentle incident angle relative to the resistive tar layer below. The well was placed around 5 ft. TVD above the tar surface, thus minimizing the oil column between the tar and the wellbore and increasing sweep efficiency. The DAR tool enabled 5000 ft. of lateral mapping of the tar surface for the first time in the field and enhanced the subsurface understanding for the future field development plan. Utilization of real-time multiple-workflow inversion of resistivity data provided a higher confidence level & improved proactive Geosteering decision making, enabling the best well placement for this injector well while avoiding the shallower tar.
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Bo, Huafeng Ni, Zhongyuan Shi, Kecai Guo, Mingyu Lu, Yongdi Zhang, Haikun Ding, and John Zhou. "Defining Geologic Structure Encountered in Horizontal Well and Its Impact on Petrophysical Evaluation." In 2022 SPWLA 63rd Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2022. http://dx.doi.org/10.30632/spwla-2022-0076.

Full text
Abstract:
In horizontal wells, the traditional formation evaluation can be effectively carried out only after the geometric relationship between the well trajectory and the target reservoir is correctly interpreted. With the omnipresence of horizontal wells nowadays, a priority in petrophysical uncertainty evaluation is to control the spatial uncertainty between the well path and the target formation. Because of the cost and technology access constraints, the set of logs in a typical horizontal well alone may not be adequate in fully defining the geometry. Consequently, constraints from the offset/pilot well logs, seismic images, and other geologic information are utilized to achieve an integrated geometric understanding. Through the analysis of a set of horizontal wells with extensive suite of LWD logs in a pilot study, this paper discusses the workflow to determine the geologic structure and fluid contact around the wellbore under complex geologic environment. The project evaluates the effectiveness of various combinations of LWD measurements to understand the role of geologic constraints in place of additional measurements. The investigation starts with the commonly applied log-correlation between neighboring wells, progresses to the inclusion of images for the geometric relationship between the well and bedding, and the addition of the advanced boundary-detection curtain sections around the wellbore to quantify the reservoir thickness and continuity wherever feasible. The lateral variation observed in horizontal wells improves our understanding of the reservoir extension and spatial variation. Modeling and inversion of logging tool responses and the understanding of the underlying response characteristics in various geologic environments also enable us to correct the environmental effects on some measurements to minimize the uncertainty in reserves computation and to understand the continuity of the fluid barrier. 
Field examples are presented with the focus on accounting for the complexity of the reservoir. In particular, the reality of gradual change in the saturation and/or shalyness does not fit nicely with the distance-to-boundary or DTB inversion model of step-variation in resistivity. In this pilot study, the service company provided a complete set of azimuth propagation resistivity logs for boundary-detection purpose. Therefore, the contribution of information content to the petrophysical evaluation by using various types of data is experimented. The project tells that the petrophysical evaluation must consider the spatial location of the wellbore inside the reservoir. The study provides workflow and examples to guide the interpretation sequence not commonly practiced otherwise.
APA, Harvard, Vancouver, ISO, and other styles
7

Boucher, Andrew, Josef Shaoul, Inna Tkachuk, Mohammed Rashdi, Khalfan Bahri, and Cornelis Veeken. "Improving Proppant Placement Success in Horizontal Wells in Layered Reservoirs in the Sultanate of Oman." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/205919-ms.

Full text
Abstract:
Abstract A gas condensate field in the Sultanate of Oman has been developed since 1999 with vertical wells, with multiple fractures targeting different geological units. There were always issues with premature screenouts, especially when 16/30 or 12/20 proppant were used. The problems placing proppant were mainly in the upper two units, which have the lowest permeability and the most heterogeneous lithology, with alternating sand and shaly layers between the thick competent heterolith layers. Since 2015, a horizontal well pilot has been under way to determine if horizontal wells could be used for infill drilling, focusing on the least depleted units at the top of the reservoir. The horizontal wells have been plagued with problems of high fracturing pressures, low injectivity and premature screenouts. This paper describes a comprehensive analysis performed to understand the reasons for these difficulties and to determine how to improve the perforation interval selection criteria and treatment approach to minimize these problems in future horizontal wells. The method for improving the success rate of propped fracturing was based on analyzing all treatments performed in the first seven horizontal wells, and categorizing their proppant placement behavior into one of three categories (easy, difficult, impossible) based on injectivity, net pressure trend, proppant pumped and screenout occurrence. The stages in all three categories were then compared with relevant parameters, until a relationship was found that could explain both the successful and unsuccessful treatments. Treatments from offset vertical wells performed in the same geological units were re-analyzed, and used to better understand the behavior seen in the horizontal wells. The first observation was that proppant placement challenges and associated fracturing behavior were also seen in vertical wells in the two uppermost units, although to a much lesser extent. 
A strong correlation was found in the horizontal well fractures between the problems and the location of the perforated interval vertically within this heterogeneous reservoir. In order to place proppant successfully, it was necessary to initiate the fracture in a clean sand layer with sufficient vertical distance (TVT) to the heterolith (barrier) layers above and below the initiation point. The thickness of the heterolith layers was also important. Without sufficient "room" to grow vertically from where it initiates, the fracture appears to generate complex geometry, including horizontal fracture components that result in high fracturing pressures, large tortuosity friction, limited height growth and even poroelastic stress increase. This study has resulted in a better understanding of mechanisms that can make hydraulic fracturing more difficult in a horizontal well than a vertical well in a laminated heterogeneous low permeability reservoir. The guidelines given on how to select perforated intervals based on vertical position in the reservoir, rather than their position along the horizontal well, is a different approach than what is commonly used for horizontal well perforation interval selection.
APA, Harvard, Vancouver, ISO, and other styles
8

Lewis, Donald W. "U.S. Commercial Spent Fuel Storage Facilities: Public Health and Environmental Considerations." In ASME 2003 9th International Conference on Radioactive Waste Management and Environmental Remediation. ASMEDC, 2003. http://dx.doi.org/10.1115/icem2003-5004.

Full text
Abstract:
U.S. commercial reactor plants are installing spent fuel storage facilities, formally called Independent Spent Fuel Storage Installations (ISFSIs), to provide needed storage space for spent nuclear fuel assemblies. Although this might be a primary objective for the utility that owns the plant, the U.S. Nuclear Regulatory Commission (U.S. NRC) has other priorities, as addressed by the ISFSI regulations in Title 10 of the Code of Federal Regulations, Part 72. These regulations establish a number of criteria that ensure that, above all, the storage of spent nuclear fuel does not adversely affect the health and safety of the public or the environment. There are three primary ISFSI design activities that ensure the health and safety of the public and protection of the environment: site selection, storage system selection, and storage facility design. The regulatory requirements that address ISFSI site selection are found in 10 CFR 72, Subpart E, “Siting Evaluation Factors.” This section requires that potential ISFSI sites be assessed for impacts such as site characteristics that may affect safety or the environment, external natural and man-induced events, radiological and other environmental conditions, floodplains and natural phenomena, man-made facilities and activities that could endanger the ISFSI, and construction, operation, and decommissioning activities. All of these potential impacts must be carefully evaluated. First, the ISFSI capacity requirements should be determined. Potential sites should then be evaluated for siting impacts to ensure the site has adequate space, it can be licensed, it will minimize radiological doses to the general public and on-site workers, and construction, operation, and decommissioning won’t have a major effect on the environment or nearby population. 
The regulatory requirements that address storage system selection are found in 10 CFR 72, Subpart F, “General Design Criteria.” This section requires that the storage system be designed to withstand environmental conditions, natural phenomena, fires, and explosions, and that it include confinement barriers, retrievability measures, and criticality safety. In order to be licensed by the U.S. NRC, all spent fuel storage systems must be evaluated to show how they meet these requirements. U.S. NRC approval of the system ensures that the requirements have been met and, therefore, that the health and safety of the public and the environment are protected. The regulatory requirements that address the ISFSI design are also found in 10 CFR 72, Subpart F, as well as 10 CFR 72, Subpart H, “Physical Protection.” Like the storage systems, the ISFSI site must be designed to withstand environmental conditions, natural phenomena, fires, and explosions. But the design must also include security provisions. Security features protect the spent fuel from attack or sabotage and therefore protect the health and safety of the public and the environment. The primary potential impact of spent fuel storage is radiation dose. The key regulatory requirement that addresses radiation dose is found in 10 CFR 72.104. This section requires that the dose to any individual member of the public not exceed 0.25 mSv (25 mrem) to the whole body, 0.75 mSv (75 mrem) to the thyroid, and 0.25 mSv (25 mrem) to any other organ, from exposure to direct radiation from the ISFSI, radioactive liquid or gaseous effluents, and radiation from other nearby nuclear facilities. Design features of the storage system and ISFSI include shielding by the cask enclosure, distance, and berms as required to attenuate direct radiation, and confinement provisions to prevent radiological effluent leakage. The ISFSI must be located such that the cumulative doses from the ISFSI and reactor plant do not exceed regulatory requirements. 
Thus it can be seen that ISFSI site selection, storage system selection, and storage facility design all work together to ensure the health and safety of the public and environment are protected. Comments regarding the contents of this paper may be submitted to the author, Donald W. Lewis, Shaw Environmental & Infrastructure, 9201 E. Dry Creek Road, Centennial, Colorado, 80112, U.S.A.
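The 10 CFR 72.104 dose limits quoted in the abstract lend themselves to a simple compliance check. The sketch below hardcodes those regulatory limits; the function name and the example dose values are illustrative, not part of the regulation.

```python
# Annual dose limits to any individual member of the public from
# 10 CFR 72.104, in millisieverts (cumulative: ISFSI plus other
# nearby nuclear facilities).
LIMITS_MSV = {
    "whole_body": 0.25,   # 25 mrem
    "thyroid": 0.75,      # 75 mrem
    "other_organ": 0.25,  # 25 mrem, any other organ
}

def isfsi_dose_compliant(doses_msv):
    """Check projected annual doses against the 10 CFR 72.104 limits.

    doses_msv maps each category in LIMITS_MSV to a dose in mSv;
    missing categories are treated as zero.
    """
    return all(doses_msv.get(organ, 0.0) <= limit
               for organ, limit in LIMITS_MSV.items())
```

In siting practice this check would be applied to the combined contribution of direct radiation, effluents, and nearby facilities, as the abstract describes.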
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Distance de barrière minimale"

1

Hummels, David. Toward a Geography of Trade Costs. GTAP Working Paper, January 2003. http://dx.doi.org/10.21642/gtap.wp17.

Full text
Abstract:
What are the barriers that separate nations? While recent work provides intriguing clues, we have remarkably little concrete evidence as to the nature, size, and shape of barriers. This paper offers direct and indirect evidence on trade barriers, moving us toward a comprehensive geography of trade costs. There are three main contributions. One, we provide detailed data on freight rates for a number of importers. Rates vary substantially over exporters, and aggregate expenditures on freight are at the low end of the observed range. This suggests import choices are made so as to minimize transportation costs. Two, we estimate the technological relationship between freight rates and distance and use this to interpret the trade barriers equivalents of common trade barrier proxies taken from the literature. The calculation reveals implausibly large barriers. Three, we use a multi-sector model of trade to isolate channels through which trade barriers affect trade volumes. The model motivates an estimation technique that delivers direct estimates of substitution elasticities. This allows a complete characterization of the trade costs implied by trade flows and a partition of those costs into three components: explicitly measured costs (tariffs and freight), costs associated with common proxy variables, and costs that are implied but unmeasured. Acknowledgments: Thanks for the gracious provision of data go to Jon Haveman, Rob Feenstra, Azita Amjadi and the ALADI secretariat. Thanks for helpful suggestions on previous drafts go to seminar participants at the Universities of Chicago, Michigan, and Texas, Boston University, NBER and the 4th Annual EIIT Conference at Purdue University. Finally, Julia Grebelsky and Dawn Conner provided outstanding research assistance. This research was funded by a grant from the University of Chicago’s Graduate School of Business.
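The partition of trade costs into measured components (tariffs and freight) and an implied residual can be illustrated with a simple multiplicative decomposition, a common convention in this literature. The numbers and function names below are hypothetical, not estimates from the paper.

```python
# Multiplicative decomposition of ad valorem trade costs into measured
# components (tariff, freight) and an unmeasured residual.
# All rates are ad valorem fractions, e.g. 0.05 for 5%.
def total_trade_cost(tariff, freight, residual):
    """Gross cost factor t = (1+tariff)*(1+freight)*(1+residual) - 1."""
    return (1 + tariff) * (1 + freight) * (1 + residual) - 1

def implied_residual(total, tariff, freight):
    """Back out the unmeasured component from an observed total cost."""
    return (1 + total) / ((1 + tariff) * (1 + freight)) - 1
```

Given an estimate of the total trade cost implied by trade flows and substitution elasticities, `implied_residual` recovers the component not captured by tariffs and freight.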
APA, Harvard, Vancouver, ISO, and other styles
2

Petrie, John, Yan Qi, Mark Cornwell, Md Al Adib Sarker, Pranesh Biswas, Sen Du, and Xianming Shi. Design of Living Barriers to Reduce the Impacts of Snowdrifts on Illinois Freeways. Illinois Center for Transportation, November 2020. http://dx.doi.org/10.36501/0197-9191/20-019.

Full text
Abstract:
Blowing snow accounts for a large part of the Illinois Department of Transportation’s total winter maintenance expenditures. This project aims to develop recommendations on the design and placement of living snow fences (LSFs) to minimize snowdrift on Illinois highways. The research team examined historical IDOT data for resource expenditures, conducted a literature review and survey of northern agencies, developed and validated a numerical model, field tested selected LSFs, and used a model to assist LSF design. Field testing revealed that the proper snow fence setback distance should consider the local prevailing winter weather conditions, and that snow fences within the right-of-way could still be beneficial to agencies. A series of numerical simulations of flow around porous fences was performed using Flow-3D, a computational fluid dynamics software. The results of the simulations of the validated model were employed to develop design guidelines for siting LSFs on flat terrain and on mild slopes (< 15° from horizontal). Guidance is provided for determining fence setback, wind characteristics, fence orientation, and fence height and porosity. Fences comprising multiple rows are also addressed. For sites with embankments with steeper slopes, guidelines are provided that include a fence at the base and one or more fences on the embankment. The design procedure can use the available right-of-way at a site to determine the appropriate fence characteristics (e.g., height and porosity) to prevent snow deposition on the road. The procedure developed in this work provides an alternative that uses available setback to design the fence. This approach does not consider snow transport over an entire season and may be less effective in years with several large snowfall events, very large single events, or a sequence of small events with little snowmelt in between. 
However, this procedure is expected to be effective for more frequent snowfall events such as those that occurred over the field-monitoring period. Recommendations were made to facilitate the implementation of research results by IDOT. The recommendations include a proposed process flow for establishing LSFs for Illinois highways, LSF siting and design guidelines (along with a list of suitable plant species for LSFs), as well as other implementation considerations and identified research needs.
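A back-of-the-envelope sizing of the kind this guidance formalizes can be sketched with Tabler's widely cited empirical capacity relation for roughly 50%-porous fences. Note that this relation comes from the general snow-fence literature, not from this report, and the 35-heights setback factor is a common rule of thumb, not IDOT's recommendation.

```python
# Snow fence sizing sketch using Tabler's empirical storage relation for
# ~50%-porous fences: capacity Qc ~ 8.5 * H**2.2 (tonnes of snow per metre
# of fence length, H in metres). Illustrative only; actual LSF design per
# the report also accounts for wind, orientation, slope, and right-of-way.
def required_fence_height(seasonal_transport_t_per_m):
    """Smallest height H (m) whose capacity holds the seasonal snow transport."""
    return (seasonal_transport_t_per_m / 8.5) ** (1 / 2.2)

def recommended_setback(height_m, factor=35.0):
    """Rule-of-thumb setback upwind of the road, in fence heights."""
    return factor * height_m
```

For example, a site with a modest seasonal transport needs a fence on the order of 1 m tall set back a few tens of metres, which is why available right-of-way becomes the binding constraint in the alternative procedure the report describes.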
APA, Harvard, Vancouver, ISO, and other styles