Selected scientific literature on the topic "Estimation de normales"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Estimation de normales".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract online, if one is included in the work's metadata.

Journal articles on the topic "Estimation de normales":

1

Bolduc, Denis, and Mustapha Kaci. "Estimation des modèles probit polytomiques : un survol des techniques". Articles 69, no. 3 (March 23, 2009): 161–91. http://dx.doi.org/10.7202/602113ar.

Abstract:
Because it admits very general structures of interdependence among the alternatives, the multinomial probit (MNP) model provides one of the most attractive forms for modeling discrete choices that arise from random utility maximization. The major and well-known obstacle to estimating this type of model is the computational complexity that arises when the number of alternatives considered is large. This situation is essentially due to the presence of the multidimensional normal integrals that define the selection probabilities. Over the last two decades, considerable effort has gone into producing methods that circumvent the computational difficulties involved in estimating multinomial probit models. The aim of this paper is to provide a critical survey of the main methods put forward so far to make the MNP framework operational. We hope it will enlighten practitioners of these models as to which estimation technique to favor in the coming years.
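To make the computational obstacle concrete: each selection probability is a multidimensional normal integral, which simulation methods of the kind surveyed here approximate by sampling. Below is a minimal, hypothetical sketch (not from the paper) of the crude frequency simulator for MNP choice probabilities:

```python
import numpy as np

def mnp_choice_probs(V, Sigma, n_draws=100_000, seed=0):
    """Crude frequency simulator for multinomial probit choice
    probabilities P(j) = P(V_j + e_j >= V_k + e_k for all k),
    with e ~ N(0, Sigma): Monte Carlo sampling in place of the
    multidimensional normal integral."""
    rng = np.random.default_rng(seed)
    e = rng.multivariate_normal(np.zeros(len(V)), Sigma, size=n_draws)
    chosen = np.argmax(V + e, axis=1)  # utility-maximizing alternative per draw
    return np.bincount(chosen, minlength=len(V)) / n_draws

# Hypothetical example: three correlated alternatives
V = np.array([1.0, 0.8, 0.5])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
print(mnp_choice_probs(V, Sigma))  # estimated probabilities, summing to 1
```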
2

Camero Jiménez, Carlos W., Erick A. Chacón Montalvan, Vilma S. Romero Romero, and Luisa E. Quispe Ortiz. "METODOLOGÍA PARA LA ESTIMACIÓN DE ÍNDICES DE CAPACIDAD EN PROCESOS PARA DATOS NO NORMALES". Revista Cientifica TECNIA 24, no. 1 (February 6, 2017): 43. http://dx.doi.org/10.21754/tecnia.v24i1.32.

Abstract:
Globalization has intensified competition in many markets. To remain competitive, companies seek to satisfy the needs of customers by meeting market requirements. In this context, Process Capability Indices (PCI) play a crucial role in assessing the quality of processes. In the case of non-normal data there are two general approaches, based on transformations (Box-Cox and Johnson transformations) and on percentiles (Pearson's and Burr's distribution systems). However, previous studies comparing these methods reach different conclusions, so there is a need to clarify the differences between them in order to implement a proper estimation of these indices. In this paper, a simulation study is carried out to compare the above methods and to propose an appropriate methodology for estimating the PCI for non-normal data. Furthermore, it is concluded that the best method to use depends on the type of distribution, its level of asymmetry, and the PCI value. Keywords: approximation to frequency distributions, process capability indices, normality, data transformations, simulation.
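To illustrate the transformation-based family compared in the paper, here is a minimal sketch (not the authors' code) of a Box-Cox-based Cpk: the data and the specification limits are transformed with the same fitted lambda, and the normal-theory index is applied to the transformed values. Data and limits are hypothetical:

```python
import numpy as np
from scipy import stats

def cpk_via_boxcox(x, lsl, usl):
    """Transformation approach: fit a Box-Cox lambda to the data,
    transform data and spec limits alike, then apply the usual
    normal-theory Cpk to the transformed values."""
    xt, lam = stats.boxcox(x)                            # MLE of lambda
    lsl_t, usl_t = stats.boxcox(np.array([lsl, usl]), lam)
    mu, sigma = xt.mean(), xt.std(ddof=1)
    return min(usl_t - mu, mu - lsl_t) / (3.0 * sigma)

# Hypothetical right-skewed process with spec limits [0.1, 8.0]
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.5, sigma=0.4, size=500)
print(cpk_via_boxcox(x, lsl=0.1, usl=8.0))
```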
3

Notton, Gilles, Ionut Caluianu, Iolanda Colda, and Sorin Caluianu. "Influence d’un ombrage partiel sur la production électrique d’un module photovoltaïque en silicium monocristallin". Journal of Renewable Energies 13, no. 1 (October 25, 2023): 49–62. http://dx.doi.org/10.54966/jreen.v13i1.177.

Abstract:
The development of the photovoltaic market requires precise knowledge of the electrical production of these systems at different sites, in particular to estimate their economic profitability. Such an accurate estimate can only be made by taking shading effects into account, since they have dramatic consequences on the delivered electrical power. In this article, we test a double-diode model of I-V curves on a monocrystalline silicon photovoltaic module (BP585F) under normal illumination conditions; then, after a brief explanation of the electrical behavior of a totally or partially shaded cell, we present the experimental set-up. The tests carried out validate this model of PV module behavior under partial illumination. Finally, the power losses induced by such shading could be estimated.
4

García, Jhon Jario. "Los estudios de acontecimiento y la importancia de la metodología de estimación". Lecturas de Economía, no. 70 (September 11, 2009): 223–35. http://dx.doi.org/10.17533/udea.le.n70a2262.

Abstract:
This note reviews the literature on event studies. Its main focus is the importance of the methodology used for estimating long-run abnormal returns, since over this horizon event studies are sensitive to the return-generating process (Savickas, 2003; Aktas et al., 2007). We find that, in the long run, the impact of unrelated events must be controlled for when estimating the window for normal returns, and that the best estimation methodology, in terms of robustness, is the two-state market model. Keywords: event studies, abnormal returns. JEL classification: G14, G34, G38.
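For orientation, abnormal returns in the basic single-state market model are the residuals of an OLS fit over an estimation window; the sketch below, with hypothetical returns, shows that baseline (the note's preferred two-state variant adds a regime switch on top of it):

```python
import numpy as np

def abnormal_returns(r_stock, r_market, est, event):
    """Single-state market model: fit r_i = alpha + beta * r_m by OLS
    on the estimation window, then AR_t = r_i,t - (alpha + beta * r_m,t)
    on the event window."""
    beta, alpha = np.polyfit(r_market[est], r_stock[est], 1)
    return r_stock[event] - (alpha + beta * r_market[event])

# Hypothetical daily returns: 250-day estimation, 11-day event window
rng = np.random.default_rng(0)
r_m = rng.normal(0.0, 0.01, 300)
r_i = 0.0002 + 1.2 * r_m + rng.normal(0.0, 0.005, 300)
ar = abnormal_returns(r_i, r_m, slice(0, 250), slice(250, 261))
print(ar.sum())  # cumulative abnormal return over the event window
```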
5

Wu, Zhaohao, Deyun Zhong, Zhaopeng Li, Liguan Wang, and Lin Bi. "Orebody Modeling Method Based on the Normal Estimation of Cross-Contour Polylines". Mathematics 10, no. 3 (February 1, 2022): 473. http://dx.doi.org/10.3390/math10030473.

Abstract:
The normal estimation of cross-contour polylines largely determines the implicit orebody modeling result. However, traditional methods cannot estimate normals effectively due to the complex topological adjacency relationship of the cross-contour polylines manually interpreted in the process of exploration and production. In this work, we present an orebody implicit modeling method based on the normal estimation of cross-contour polylines. The improved method consists of three stages: (1) estimating the normals of cross-contour polylines by using the least square plane fitting method based on principal component analysis; (2) reorienting the normal directions by using the method based on the normal propagation; (3) using an implicit function to construct an orebody model. The innovation of this method is that it can automatically estimate the normals of the cross-contour polylines and reorient normal directions without manual intervention. Experimental results show that the proposed method has the advantages of a small amount of calculation, high efficiency and strong reliability. Moreover, this normal estimation method is useful to improve the automation of implicit orebody modeling.
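Stage (1), the PCA-based least-squares plane fit, is a standard construction: the estimated normal at a point is the eigenvector of its neighborhood's covariance matrix with the smallest eigenvalue. A minimal sketch, with a hypothetical neighborhood:

```python
import numpy as np

def estimate_normal(neighborhood):
    """Least-squares plane fit via PCA: the normal of the best-fitting
    plane is the eigenvector of the neighborhood covariance matrix
    associated with the smallest eigenvalue."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance

# Hypothetical neighborhood sampled near the plane z = 0
rng = np.random.default_rng(0)
nbhd = np.column_stack([rng.uniform(-1, 1, (30, 2)),
                        rng.normal(0, 0.01, 30)])
print(estimate_normal(nbhd))  # approximately (0, 0, +/-1); sign is undetermined
```

The sign ambiguity visible in the demo is exactly what the reorientation stage (2) resolves.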
6

MITRA, NILOY J., AN NGUYEN, and LEONIDAS GUIBAS. "ESTIMATING SURFACE NORMALS IN NOISY POINT CLOUD DATA". International Journal of Computational Geometry &amp; Applications 14, no. 04n05 (October 2004): 261–76. http://dx.doi.org/10.1142/s0218195904001470.

Abstract:
In this paper we describe and analyze a method based on local least square fitting for estimating the normals at all sample points of a point cloud data (PCD) set, in the presence of noise. We study the effects of neighborhood size, curvature, sampling density, and noise on the normal estimation when the PCD is sampled from a smooth curve in ℝ2 or a smooth surface in ℝ3, and noise is added. The analysis allows us to find the optimal neighborhood size using other local information from the PCD. Experimental results are also provided.
7

SAUVANT, D., P. CHAPOUTOT, and H. ARCHIMEDE. "La digestion des amidons par les ruminants et ses conséquences". INRAE Productions Animales 7, no. 2 (April 24, 1994): 115–24. http://dx.doi.org/10.20870/productions-animales.1994.7.2.4161.

Abstract:
It is now well established that ruminal starch digestion varies widely with the nature of the feed and its technological processing. The in sacco measurement of the theoretical degradation of starch (X, %) makes it possible to predict the pre-intestinal digestibility (Y, %) of this constituent: Y = 0.483X + 45.62. Differences in the parameters of starch digestion can affect microbial protein synthesis, the fermentation profile, and the stability of the ruminal ecosystem. The digestibility of starch entering the small intestine is only partial and seems to reveal a limit of the amylolytic capacity of this segment. Starch not digested in the small intestine is partly degraded by the microorganisms of the large intestine. The partial digestive balance results published to date make it possible to estimate the quantities of volatile fatty acids and glucose potentially made available to the animal by the starch of a diet. When the ration fed leads to normal values of the ruminal fermentation profile and of milk fat content, the rate of starch digestion and its distribution among the different digestive segments do not seem to have any important effect. A distinction appears, however, for diets richer in concentrates, which lead to marked propionic fermentation and a low milk fat content. In this case, choosing slowly degradable starch seems to provide a certain "fermentative safety" in the rumen and limits the drop in milk fat content. In general, further research is needed to better understand the nutritional and zootechnical consequences of the rate of degradation of dietary starch in the reticulo-rumen and to develop reliable recommendations.
8

Shi, Tiandong, Deyun Zhong, and Liguan Wang. "Geological Modeling Method Based on the Normal Dynamic Estimation of Sparse Point Clouds". Mathematics 9, no. 15 (July 31, 2021): 1819. http://dx.doi.org/10.3390/math9151819.

Abstract:
The effect of geological modeling largely depends on the normal estimation results of geological sampling points. However, due to the sparse and uneven characteristics of geological sampling points, the results of normal estimation have great uncertainty. This paper proposes a geological modeling method based on the dynamic normal estimation of sparse point clouds. The improved method consists of three stages: (1) using an improved local plane fitting method to estimate the normals of the point clouds; (2) using an improved minimum spanning tree method to redirect the normals of the point clouds; (3) using an implicit function to construct a geological model. The innovation of this method is an iterative estimation of the point cloud normal. The geological engineer adjusts the normal direction of some point clouds according to the geological law, and then the method uses these correct point cloud normals as a reference to estimate the normals of all point clouds. By continuously repeating the iterative process, the normal estimation result will be more accurate. Experimental results show that compared with the original method, the improved method is more suitable for the normal estimation of sparse point clouds by adjusting normals, according to prior knowledge, dynamically.
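The redirection step in such pipelines is classically a sign propagation over a minimum spanning tree; the paper improves on this with an iterative, expert-guided variant. A minimal sketch of the classical step that such methods start from, using scipy's graph routines:

```python
import numpy as np
from scipy.sparse.csgraph import breadth_first_order, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def reorient_normals(points, normals):
    """Classical propagation: build a Euclidean minimum spanning tree
    over the points, walk it from a seed, and flip any normal whose
    dot product with its parent's normal is negative."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    graph = mst + mst.T                                # undirected traversal
    order, parents = breadth_first_order(graph, i_start=0)
    out = normals.copy()
    for v in order[1:]:                                # seed keeps its orientation
        if np.dot(out[v], out[parents[v]]) < 0:
            out[v] = -out[v]
    return out

# Hypothetical check: points on a sphere, normals with random signs
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)      # true normal = point itself
flipped = pts * rng.choice([-1.0, 1.0], size=(200, 1))
d = (reorient_normals(pts, flipped) * pts).sum(axis=1)
print(np.ptp(d))  # near 0: all normals consistently +p, or all -p
```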
9

Wu, Xianyu, Penghao Li, Xin Zhang, Jiangtao Chen, and Feng Huang. "Three Dimensional Shape Reconstruction via Polarization Imaging and Deep Learning". Sensors 23, no. 10 (May 9, 2023): 4592. http://dx.doi.org/10.3390/s23104592.

Abstract:
Deep-learning-based polarization 3D imaging techniques, which train networks in a data-driven manner, are capable of estimating a target’s surface normal distribution under passive lighting conditions. However, existing methods have limitations in restoring target texture details and accurately estimating surface normals. Information loss can occur in the fine-textured areas of the target during the reconstruction process, which can result in inaccurate normal estimation and reduce the overall reconstruction accuracy. The proposed method enables extraction of more comprehensive information, mitigates the loss of texture information during object reconstruction, enhances the accuracy of surface normal estimation, and facilitates more comprehensive and precise reconstruction of objects. The proposed networks optimize the polarization representation input by utilizing the Stokes-vector-based parameter, in addition to separated specular and diffuse reflection components. This approach reduces the impact of background noise, extracts more relevant polarization features of the target, and provides more accurate cues for restoration of surface normals. Experiments are performed using both the DeepSfP dataset and newly collected data. The results show that the proposed model can provide more accurate surface normal estimates. Compared to the UNet architecture-based method, the mean angular error is reduced by 19%, calculation time is reduced by 62%, and the model size is reduced by 11%.
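The headline metric here, mean angular error, is conventionally the mean arccosine of the dot product between estimated and ground-truth unit normals; a small sketch (assuming unit-length inputs):

```python
import numpy as np

def mean_angular_error_deg(n_est, n_true):
    """Mean angular error in degrees between estimated and ground-truth
    surface normals; both inputs are assumed unit-length, shape (N, 3)."""
    cos = np.clip(np.sum(n_est * n_true, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

# Hypothetical example: slightly perturbed normals
rng = np.random.default_rng(0)
n_true = rng.normal(size=(100, 3))
n_true /= np.linalg.norm(n_true, axis=1, keepdims=True)
n_est = n_true + rng.normal(0, 0.05, size=(100, 3))
n_est /= np.linalg.norm(n_est, axis=1, keepdims=True)
print(mean_angular_error_deg(n_est, n_true))
```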
10

Zandy, Moe, Vicky Chang, Deepa P. Rao, and Minh T. Do. "Exposition à la fumée du tabac et sommeil : estimation de l’association entre concentration de cotinine urinaire et qualité du sommeil". Promotion de la santé et prévention des maladies chroniques au Canada 40, no. 3 (March 2020): 77–89. http://dx.doi.org/10.24095/hpcdp.40.3.02f.

Abstract:
Introduction: Most studies on tobacco smoke exposure and sleep quality rely on self-reported smoking, which can lead to exposure misclassification and self-reporting biases. The objective of this study was to examine the associations between urinary cotinine concentration, a biomarker of tobacco smoke exposure, and various measures of sleep quality, namely sleep duration, sleep continuity or efficiency, sleep satisfaction, and alertness during normal waking hours. Methods: Using data from a national sample of 10,806 adults (aged 18 to 79) from the Canadian Health Measures Survey (2007-2013), we performed binary logistic regression analyses to estimate the associations between urinary cotinine concentration and sleep quality measures, adjusting for potential confounders. In addition, we performed ordinal logistic regression to assess the associations between urinary cotinine concentration and a higher number of sleep disorders. Results: Overall, 28.7% of respondents to the survey of Canadian adults had urinary cotinine concentrations above the detection limit, and the prevalence of each sleep disorder ranged from 5.5% to 35.6%. A high urinary cotinine concentration (4th quartile versus a concentration below the detection limit) was associated with significantly higher odds of sleeping too little or too long (odds ratio [OR] = 1.41; 95% CI: 1.02 to 1.95; p-trend = 0.021), of having difficulty falling asleep or staying asleep (OR = 1.71; 95% CI: 1.28 to 2.27; p-trend = 0.003), of being dissatisfied with sleep (OR = 1.87; 95% CI: 1.21 to 2.89; p-trend = 0.011), and of having a higher number of sleep disorders (OR = 1.64; 95% CI: 1.19 to 2.26; p-trend = 0.001). The observed relationships were stronger in women than in men. Conclusion: Our study, which used a biomarker of tobacco smoke exposure, adds to the literature on the effect of exposure to environmental toxicants on sleep quality by showing a relationship between tobacco smoke exposure and poor sleep. Longitudinal studies are needed to address the limitations inherent in the cross-sectional study design and to better assess the long-term effects of tobacco smoke exposure on sleep quality.

Theses on the topic "Estimation de normales":

1

Criticou, Doukissa. "Estimateurs à rétrécisseurs (cas de distributions normales) : une classe d'estimateurs bayesiens". Rouen, 1986. http://www.theses.fr/1986ROUES050.

Abstract:
We consider the Bayesian treatment of the estimation of the mean in the Gaussian linear model when: 1) the variance is known up to a multiplicative factor; 2) the loss used is quadratic (inverse of the variance); 3) the prior probability is a mixture of Gaussian distributions.
2

Rieux, Frédéric. "Processus de diffusion discret : opérateur laplacien appliqué à l'étude de surfaces". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20201/document.

Abstract:
The context is discrete geometry in Zn. The aim is to describe discrete curves and surfaces composed of voxels: the usual definitions of thick discrete lines and planes behave poorly when applied to curved sets. How can good topological behavior and the required connectivities be guaranteed in a setting that generalizes discrete lines and planes? Computing data on these curves (normals, tangents, curvature, or more general functions) relies on averages computed with masks, and a natural question is the theoretical and practical pertinence of these masks. One avenue explored here is the computation of masks based on random walks. A random walk starting from a given center on a discrete curve or surface assigns to every other voxel a weight, the mean visit time. This kernel makes it possible to compute averages and, from them, derivatives. The study of the behavior of this diffusion process recovers classical geometric tools on meshed surfaces and provides efficient tangent and curvature estimators. The breadth of the application field of this diffusion process is highlighted, recovering classical methods of the discrete geometry community on a single theoretical basis. Keywords: Markov process, discrete geometry, tangent, normal and curvature estimators, diffusion kernel, image analysis.
3

Charton, Jerome. "Etude de caractéristiques saillantes sur des maillages 3D par estimation des normales et des courbures discrètes". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0333/document.

Abstract:
With the aim of improving and automating the object reproduction chain from acquisition to 3D printing, we sought to characterize saliency on 3D objects modeled by a 3D mesh structure. To this end, we surveyed the state of the art in methods for estimating differential properties, namely normals and curvature, on discrete surfaces in the form of 3D meshes. To compare the behavior of the different methods, we used a set of classic criteria in the domain: accuracy, convergence, and robustness with respect to variations of the neighbourhood, and we established a test protocol emphasizing these qualities. From this first comparison, it was found that all the existing methods have shortcomings with respect to these criteria. In order to obtain a more reliable and accurate estimation of the differential properties, we developed two new estimators.
4

Caracotte, Jordan. "Reconstruction 3D par stéréophotométrie pour la vision omnidirectionnelle". Electronic Thesis or Diss., Amiens, 2021. http://www.theses.fr/2021AMIE0031.

Abstract:
This thesis focuses on the photometric stereo problem and omnidirectional vision. Photometric stereo is a 3D-reconstruction technique that requires several pictures of a surface under different lighting conditions from a single point of view. Omnidirectional vision encompasses the devices and camera rigs that capture a large part of the environment around them in a single image. Building on four decades of research in the photometric stereo literature and using the unified model for central projection cameras, we try to merge these research fields. We first focus on techniques for estimating the normals to the surface and for integrating the depth gradients to retrieve the shape. Then, we introduce a new spherical irradiance equation, together with a modular processing chain, that we use to solve the photometric stereo problem with central projection cameras. The approach is validated using synthetic and real images from a perspective camera and a catadioptric imaging device. We later extend the approach to perform 3D reconstruction by photometric stereo from images of twin-fisheye cameras. Finally, we study some limitations of the approach and discuss ways to overcome them.
5

Terzakis, Demètre. "Estimateurs à rétrécisseurs (cas de distributions normales à variance connue à un facteur près) : contrôle de l'emploi de la différence entre l'observation et l'estimation des moindres carrés". Rouen, 1987. http://www.theses.fr/1987ROUES015.

Abstract:
In the general framework of estimating the mean (assumed to belong to V, a proper linear subspace of the observation space) of a multidimensional normal distribution Y whose variance is known up to a multiplicative factor, we seek conditions under which the least squares estimator t is dominated by shrinkage (James-Stein) estimators of the form t(y) - h(t(y), y - t(y)) c(t(y)), where c is an endomorphism of V. Many "classical" domination conditions appear as corollaries of ours in the case where t(y) and y - t(y) enter the expression of h only through the values taken by quadratic forms defined respectively on V and on W (the orthogonal complement of V with respect to the symmetric bilinear form associated with the inverse of the variance). We find that restrictive algebraic assumptions on c make it possible to avoid differentiability assumptions on h; when the latter are introduced, we distinguish according to whether one uses, with respect to the second variable in the expression of h, the partial differential (a map from V x W to the dual of W) or the partial derivative along a privileged vector of W.
6

Lemaire, Jacques. "Étude de propriétés asymptotiques en classification". Nice, 1990. http://www.theses.fr/1990NICE4379.

Abstract:
The objective of this thesis is the study of the convergence properties of the estimators constructed in a problem of classifying a finite number of data points, as this number tends to infinity. A probabilistic framework for this study is first specified. The underlying model is a decomposition of the probability distribution of the observations as a mixture. We cite some convergence results showing to what extent consistent estimates of the components of the mixture under study can produce asymptotically optimal classification rules. This result led us to a synthetic bibliographic study of methods for estimating the components of a mixture of probability density functions. We then turn to centroid methods, which are simpler to implement. In a set-theoretic setting, we attempt to generalize several earlier asymptotic results using the notion of normal integrand introduced by Berliochi and Lassry, and then, in a stochastic approximation framework, the Robbins-Siegmund lemma from martingale theory. The results obtained are illustrated by a numerical simulation. Four appendices complete this document; they mainly cover the mathematical tools used.
7

Valdivia, Paola Tatiana Llerena. "Correção de normais para suavização de nuvens de pontos". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-19032014-145046/.

Abstract:
In recent years, surface denoising has been a subject of intensive research in geometry processing. Most recent approaches to mesh denoising use a two-step scheme: normal filtering followed by a point updating step to match the corrected normals. In this work, we propose an adaptation of such two-step approaches to point-based surfaces, exploring three different weighting schemes for filtering normals. Moreover, we investigate three techniques for normal estimation, analyzing the impact of each normal estimation method on the whole point-set smoothing process. Towards a quantitative analysis, in addition to conventional visual comparison, we evaluate the effectiveness of different implementation choices using two measures, comparing our results against state-of-the-art point-based denoising techniques. Keywords: surface smoothing; point-based surface; normal estimation; normal filtering.
8

Wage, Kathleen E. "Adaptive estimation of acoustic normal modes". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12096.

9

Grip, Marcus. "Tyre Performance Estimation during Normal Driving". Thesis, Linköpings universitet, Fordonssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176558.

Abstract:
Driving with tyres not appropriate for the actual conditions can not only lead to tyre-related accidents, but can also harm the environment through the emission of rubber particles if the driving conditions cause an unexpectedly high amount of tread wear. Estimating tyre performance in an online setting is therefore of interest, and this thesis investigates the feasibility of estimating friction performance, velocity performance, and tread wear using the information available from automotive-grade sensors. For the friction performance, a trend analysis is performed to investigate the correlation between tyre stiffness and friction potential. Given that there is a correlation, a model with a stiffness parameter as input is derived from the trend in order to predict the friction performance. Tendencies towards a linear trend are shown, and a linear regression model is fitted to the data and evaluated by calculating a model fit and studying the residuals. With a model fit of 80%, the precision of the expected values from the proposed model is concluded to be fairly low, but still enough to roughly indicate friction performance in winter conditions. A tread wear model that can estimate the amount of abrasive wear is also derived; the proposed model uses only the information available from the automotive-grade sensors. Because the model has a parameter that is assumed to be highly tyre-specific, only a relative wear difference can be calculated. The model is evaluated in a simulation environment by its ability to indicate whether a tyre is subject to higher wear caused by a higher ambient temperature. The results indicate that the model is insufficient in an online setting and cannot accurately describe the phenomenon of softer tyres wearing more at high ambient temperatures than stiffer tyres. Lastly, a double lane change test (ISO 3888-2) is conducted to determine the critical velocity for cornering manoeuvres, which defines the velocity performance. The test was executed for six different sets of tyres, two of each type (winter, all-season, and summer). The approach to estimating velocity performance in an online setting is analogous to that for friction performance, and a trend analysis is performed to investigate the correlation between longitudinal tyre stiffness and critical velocity. The results are rather unexpected and show no substantial differences in velocity performance, even though the tyre-road grip felt distinctly worse for the softer tyres according to the driver. It is concluded that bias stemming from the professional driver's skills may have distorted the results, and that another approach may need to be considered in order to estimate this performance.
10

Fernandez-Abrevaya, Victoria. "Apprentissage à grande échelle de modèles de formes et de mouvements pour le visage 3D". Electronic Thesis or Diss., Université Grenoble Alpes, 2020. https://theses.hal.science/tel-03151303.

Abstract:
Data-driven models of the 3D face are a promising direction for capturing the subtle complexities of the human face, and a central component of numerous applications thanks to their ability to simplify complex tasks. Most data-driven approaches to date were built from either a relatively limited number of samples or by synthetic data augmentation, mainly because of the difficulty in obtaining large-scale and accurate 3D scans of the face. Yet, there is a substantial amount of information that can be gathered when considering publicly available sources that have been captured over the last decade, whose combination can potentially bring forward more powerful models. This thesis proposes novel methods for building data-driven models of the 3D face geometry, and investigates whether improved performance can be obtained by learning from large and varied datasets of 3D facial scans. In order to make efficient use of a large number of training samples, we develop novel deep learning techniques designed to effectively handle three-dimensional face data. We focus on several aspects that influence the geometry of the face: its shape components including fine details, its motion components such as expression, and the interaction between these two subspaces. We develop in particular two approaches for building generative models that decouple the latent space according to natural sources of variation, e.g. identity and expression. The first approach considers a novel deep autoencoder architecture that allows learning a multilinear model without requiring the training data to be assembled as a complete tensor. We next propose a novel non-linear model based on adversarial training that further improves the decoupling capacity. This is enabled by a new 3D-2D architecture combining a 3D generator with a 2D discriminator, where both domains are bridged by a geometry mapping layer. As a necessary prerequisite for building data-driven models, we also address the problem of registering a large number of 3D facial scans in motion. We propose an approach that can efficiently and automatically handle a variety of sequences while making minimal assumptions on the input data. This is achieved by the use of a spatiotemporal model as well as a regression-based initialization, and we show that we can obtain accurate registrations in an efficient and scalable manner. Finally, we address the problem of recovering surface normals from natural images, with the goal of enriching existing coarse 3D reconstructions. We propose a method that can leverage all available image and normal data, whether paired or not, thanks to a new cross-modal learning architecture. Core to our approach is a novel module that we call deactivable skip connections, which allows transferring the local details from the image to the output surface without hurting the performance when autoencoding modalities, achieving state-of-the-art results for the task.

Books on the topic "Estimation de normales":

1

Lacaze, B. Probabilités et statistique appliquées: Résumé de cours et illustrations. Toulouse: Cépaduès, 1997.

2

Cheng, Russell. Standard Asymptotic Theory. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0003.

Abstract:
This book relies on maximum likelihood (ML) estimation of parameters. Asymptotic theory assumes regularity conditions under which the ML estimator is consistent. Typically an additional third-derivative condition is assumed to ensure that the ML estimator is also asymptotically normally distributed. The standard asymptotic results that then hold are summarized in this chapter: for example, the asymptotic variance of the ML estimator is given by the Fisher information formula, and the log-likelihood ratio, Wald, and score statistics for testing the statistical significance of parameter estimates are all asymptotically equivalent. Also, the useful profile log-likelihood behaves exactly as a standard log-likelihood, only in a parameter space of just one dimension. Further, the model can be reparametrized to be locally orthogonal in the neighbourhood of the true parameter value. The large exponential family of models, for which a unified set of regularity conditions can be obtained, is briefly reviewed.
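Stated compactly, the standard results summarized in the chapter are consistency and asymptotic normality of the ML estimator with Fisher-information variance; in LaTeX:

```latex
% Standard first-order asymptotics under the regularity conditions
\[
  \hat\theta_n \xrightarrow{\;p\;} \theta_0,
  \qquad
  \sqrt{n}\,\bigl(\hat\theta_n - \theta_0\bigr) \xrightarrow{\;d\;}
  N\!\bigl(0,\; I(\theta_0)^{-1}\bigr),
  \qquad
  I(\theta) = -\,\mathbb{E}\!\left[
      \frac{\partial^2 \log f(X;\theta)}{\partial\theta\,\partial\theta^{\top}}
  \right].
\]
```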
3

Schreuder, Michiel F. Renal hypoplasia. Edited by Adrian Woolf. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199592548.003.0348.

Abstract:
In true renal hypoplasia, normal nephrons are formed but with a deficit in total numbers. As nephron number estimation is not possible in vivo, renal size is used as a marker. A widely used definition of renal hypoplasia is kidneys with a normal appearance on ultrasound but with a size less than two standard deviations below the mean for gender, age, and body size. A distinct and severe form of renal hypoplasia is called (congenital) oligomeganephronia, which is characterized by small but normal-shaped kidneys with a marked reduction in nephron numbers (to as low as 10–20% of normal), a distinct enlargement of glomeruli, and a reduced renal function. In many cases, the small kidney also shows signs of dysplasia on ultrasound, leading to the diagnosis of renal hypodysplasia. Based on the hyperfiltration hypothesis and clinical studies, glomerular hyperfiltration can be expected, resulting in hypertension, albuminuria, and renal injury, for which long-term follow-up of all patients with renal hypoplasia is desirable.
4

Walsh, Richard A. “I Am Not Sure If I Should Do DaT”. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190607555.003.0008.

Abstract:
Nuclear medicine-based imaging techniques can provide an estimation of nigrostriatal tract denervation based on radionucleotide uptake in the distal presynaptic terminals of dopaminergic neurons. Although unhelpful in differentiating between differing etiologies of denervation in varied neurodegenerative disorders associated with parkinsonism, this imaging is justified in situations in which parkinsonism is believed to be drug-induced or functional or in cases in which subclinical parkinsonism is suspected. The most common clinical situation in which dopamine transporter imaging is helpful is in the patient on neuroleptic therapy that cannot be stopped who has developed parkinsonism. Dopamine transporter imaging should be normal in drug-induced tremor.
5

Sainz, Jorge G., and Bradley P. Fuhrman. Basic Pediatric Hemodynamic Monitoring. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199918027.003.0005.

Abstract:
Physiological monitoring using a variety of technological advances supplements, but does not replace, our ability to distinguish normal from abnormal physiology traditionally gleaned from physical examination. Pulse oximetry uses the wavelengths of saturated and unsaturated hemoglobin to estimate arterial oxygenation noninvasively. Similar technology included on vascular catheters provides estimation of central or mixed venous oxygenation and helps assess the adequacy of oxygen delivered to tissues. End-tidal carbon dioxide measurements contribute to the assessment of ventilation. Systemic arterial blood pressure and central venous pressure measurements help evaluate cardiac performance, including the impact of ventilatory support. Intra-abdominal pressure may increase as a result of intraluminal air or fluid, abnormal fluid collections within the peritoneal cavity, or abnormal masses. Increased pressure may impede venous return to the heart and compromise intra-abdominal organ perfusion. Pressure measurement guides related management decisions.
6

Zinn-Justin, Paul, and Jean-Bernard Zuber. Multivariate statistics. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.28.

Abstract:
This article considers some classical and more modern results obtained in random matrix theory (RMT) for applications in statistics. In the classic paradigm of parametric statistics, data are generated randomly according to a probability distribution indexed by parameters. From this data, which is by nature random, the properties of the deterministic (and unknown) parameters may be inferred. The ability to infer properties of the unknown Σ (the population covariance matrix) will depend on the quality of the estimator. The article first provides an overview of two spectral statistical techniques, principal components analysis (PCA) and canonical correlation analysis (CCA), before discussing the Wishart distribution and normal theory. It then describes extreme eigenvalues and Tracy–Widom laws, taking into account the results obtained in the asymptotic setting of 'large p, large n'. It also analyses the results for the limiting spectra of sample covariance matrices.
7

McCleary, Richard, David McDowall, and Bradley J. Bartos. Noise Modeling. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190661557.003.0003.

Abstract:
Chapter 3 introduces the Box-Jenkins AutoRegressive Integrated Moving Average (ARIMA) noise modeling strategy. The strategy begins with a test of the Normality assumption using a Kolmogorov-Smirnov (KS) statistic; non-Normal time series are transformed with a Box-Cox procedure. A tentative ARIMA noise model is then identified from a sample AutoCorrelation Function (ACF). If the sample ACF identifies a nonstationary model, the time series is differenced. Integer orders p and q of the underlying autoregressive and moving average structures are then identified from the ACF and partial autocorrelation function (PACF). Parameters of the tentative ARIMA noise model are estimated with maximum likelihood methods. If the estimates lie within the stationary-invertible bounds and are statistically significant, the residuals of the tentative model are diagnosed to determine whether they are distinguishable from white noise. If the tentative model's residuals satisfy this assumption, the statistically adequate model is accepted. Otherwise, the identification-estimation-diagnosis ARIMA noise model-building strategy continues iteratively until it yields a statistically adequate model. The Box-Jenkins ARIMA noise modeling strategy is illustrated with detailed analyses of twelve time series. The example analyses include non-Normal time series, stationary white noise, autoregressive and moving average time series, nonstationary time series, and seasonal time series. The time series models built in Chapter 3 are re-introduced in later chapters. Chapter 3 concludes with a discussion and demonstration of auxiliary modeling procedures that are not part of the Box-Jenkins strategy. These auxiliary procedures include the use of information criteria to compare models, unit root tests of stationarity, and co-integration.
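One pass of the identification-estimation-diagnosis loop might look like the following sketch, using statsmodels and scipy as one plausible toolchain (the book itself is not tied to any package):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=400))   # hypothetical nonstationary series

# Step 1: Normality check (KS statistic on standardized values)
print(stats.kstest((y - y.mean()) / y.std(ddof=1), "norm"))

# Steps 2-3: the series is nonstationary, so difference once and fit a
# tentative ARIMA(p, d, q); here p = q = 0, d = 1, as identification from
# the ACF/PACF of this random walk would suggest
fit = ARIMA(y, order=(0, 1, 0)).fit()

# Step 4: diagnosis; residuals should be indistinguishable from white noise
print(acorr_ljungbox(fit.resid, lags=[20]))
```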
8

Galderisi, Maurizio, and Sergio Mondillo. Assessment of diastolic function. Oxford University Press, 2011. http://dx.doi.org/10.1093/med/9780199599639.003.0009.

Abstract:
Modern assessment of left ventricular (LV) diastolic function should be based on the estimation of the degree of LV filling pressure (LVFP), which is the true determinant of symptoms/signs and prognosis in heart failure. In order to achieve this goal, standard Doppler assessment of the mitral inflow pattern (E/A ratio, deceleration time, isovolumic relaxation time) should be combined with additional manoeuvres and/or ultrasound tools such as:
◆ Valsalva manoeuvre applied to the mitral inflow pattern.
◆ Pulmonary venous flow pattern.
◆ Velocity flow propagation by colour M-mode.
◆ Pulsed wave tissue Doppler of the mitral annulus (average of septal and lateral E′ velocities).
In intermediate doubtful situations, the two-dimensional determination of left atrial (LA) volume can be diagnostic, since LA enlargement is associated with a chronic increase of LVFP in the absence of mitral valve disease and atrial fibrillation. Some new echocardiographic technologies, such as speckle tracking-derived LV longitudinal strain and LV torsion, LA strain, and even the three-dimensional determination of LA volumes, can potentially add further information. In particular, the reduction of LV longitudinal strain in patients with LV diastolic dysfunction and normal ejection fraction demonstrates that a subclinical impairment of LV systolic function already exists under these circumstances.
9

Henry, Frances, and Dwaine Plaza, eds. Carnival Is Woman. University Press of Mississippi, 2019. http://dx.doi.org/10.14325/mississippi/9781496825445.001.0001.

Abstract:
What is most intriguing in the Carnivals today is the substantial increase in the number of women who play mas', with some figures estimating as much as 70% of all players. This volume, probably the first of its kind to concentrate solely on women in Carnival, normalizes the contemporary Carnival, especially as it is played in Trinidad and Tobago, by demonstrating not only women's numerical strength but also the kind of mas' that is featured. The bikini and beads, or bikini and feathers, or 'pretty mas' is the dominant mas' in today's Carnival. The players of today, mainly women, are signifying by this form of mas' their own newly found empowerment as females and their resistance to the older cultural norms of male oppression. Several chapters discuss in detail the commoditisation of Carnival, in which sex is used to enhance tourism and provide striking visual images for magazines and websites. Several put the emphasis on the unveiling of the female body and the hip-rolling sexual movements called "winin" or sometimes just "it", as in "use your it". What most of these chapters have in common, however, is the emphasis on the performance of scantily clad female bodies and their movements and gyrations. This volume provides a feminist perspective on the understanding of Carnival today.
10

Schelbert, Heinrich R. Image-Based Measurements of Myocardial Blood Flow. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199392094.003.0024.

Abstract:
Image-based measurements of myocardial blood flow afford the assessment of coronary circulatory function. They reflect functional consequences of coronary stenoses, diffuse epicardial vessel disease and microvascular dysfunction and structural changes and thus provide a measure of the total ischemic burden. Measured flows contain therefore clinically important predictive information. Fundamental to flow measurements are the tissue tracer kinetics, their description through tracer kinetic models, high spatial and temporal resolution imaging devices and accurate extraction of radiotracer tissue concentrations from dynamically acquired images for estimating true flows from the tissue time activity curves. A large body of literature on measurements of myocardial blood flow exists for defining in humans normal values for flow at baseline and during hyperemic stress as well as for the myocardial flow reserve. The role of PET for flow measurements has been well established; initial results with modern SPECT devices are encouraging. Responses of myocardial blood flow to specific challenges like pharmacologic vasodilation and to sympathetic stimulation can uncover functional consequences of focal epicardial coronary stenoses, of conduit vessel disturbances and disease and impairments of microvascular function. Apart from risk stratification, flow measurements may allow detection of early preclinical disease, influence treatment strategies and identify therapy responses.

Book chapters on the topic "Estimation de normales":

1

Maruyama, Yuzo, Tatsuya Kubokawa, and William E. Strawderman. "Estimation of a Normal Mean Vector Under Unknown Scale". In Stein Estimation, 45–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6077-4_3.

2

Maruyama, Yuzo, Tatsuya Kubokawa and William E. Strawderman. "Estimation of a Normal Mean Vector Under Known Scale". In Stein Estimation, 23–43. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6077-4_2.

3

Singh, Vijay P. "Normal Distribution". In Entropy-Based Parameter Estimation in Hydrology, 56–67. Dordrecht: Springer Netherlands, 1998. http://dx.doi.org/10.1007/978-94-017-1431-0_5.

4

Buchholz, Dirk. "Normal Map Based Pose Estimation". In Bin-Picking, 57–95. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26500-1_5.

5

Wan, Chengde, Angela Yao and Luc Van Gool. "Hand Pose Estimation from Local Surface Normals". In Computer Vision – ECCV 2016, 554–69. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46487-9_34.

6

Sugasawa, Shonosuke, and Tatsuya Kubokawa. "Small Area Models for Non-normal Response Variables". In Mixed-Effects Models and Small Area Estimation, 83–98. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9486-9_7.

7

Hand, David, and Martin Crowder. "Continuous non-normal measures: Gaussian estimation". In Practical Longitudinal Data Analysis, 91–107. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4899-3033-0_7.

8

Ladický, L’ubor, Bernhard Zeisl and Marc Pollefeys. "Discriminatively Trained Dense Surface Normal Estimation". In Computer Vision – ECCV 2014, 468–84. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10602-1_31.

9

Lachos Dávila, Víctor Hugo, Celso Rômulo Barbosa Cabral and Camila Borelli Zeller. "Maximum Likelihood Estimation in Normal Mixtures". In Finite Mixture of Skewed Distributions, 7–13. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-98029-4_2.

10

Chen, Jiahua. "Estimation Under Finite Normal Mixture Models". In ICSA Book Series in Statistics, 53–81. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6141-2_4.


Conference papers on the topic "Estimation de normales":

1

Huang, Xiaoyu, and Junmin Wang. "Payload Parameter Real-Time Estimation for Lightweight Vehicles". In ASME 2011 Dynamic Systems and Control Conference and Bath/ASME Symposium on Fluid Power and Motion Control. ASMEDC, 2011. http://dx.doi.org/10.1115/dscc2011-6045.

Abstract (summary):
This paper proposes a payload parameter estimation method for lightweight vehicles (LWVs), whose dynamics and control are substantially affected by payload variations due to the LWVs' significantly reduced sizes and weights. Accurate and real-time estimation of payload parameters, including the payload mass and its onboard planar location, will be helpful for controller design and load condition monitoring. The proposed payload parameter estimator (PPE) is divided into two parts: a tire nominal normal force estimator (NNFE) based on a recursive least squares (RLS) algorithm using signals measured during LWV constant-speed maneuvers, and a parameter calculator based on the estimated nominal normal forces. The prototype LWV is an electric ground vehicle with separable torque control of the four wheels by in-wheel motors, which allows redundant input injections in the designed maneuvers. Simulation results, based on a CarSim® model, show that the proposed PPE is capable of accurately and quickly estimating the payload parameters, and is independent of the road condition as long as the tire forces are kept within their linear ranges.
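
For orientation, the NNFE described above rests on a recursive least squares update. The following is a hedged, generic sketch only, not the authors' code: the regressor phi and measurement y are placeholders for whichever maneuver signals the estimator actually uses.

    import numpy as np

    class RecursiveLeastSquares:
        """Exponentially weighted RLS; theta holds the running parameter estimates."""
        def __init__(self, n_params, forgetting=0.99):
            self.theta = np.zeros(n_params)       # parameter estimates
            self.P = np.eye(n_params) * 1e3       # covariance, large initial uncertainty
            self.lam = forgetting                 # forgetting factor in (0, 1]

        def update(self, phi, y):
            phi = np.asarray(phi, dtype=float)    # regressor vector for this sample
            K = self.P @ phi / (self.lam + phi @ self.P @ phi)       # Kalman-like gain
            self.theta = self.theta + K * (y - phi @ self.theta)     # innovation update
            self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
            return self.theta

As a toy usage, estimating a mass m from noisy samples of F = m*a amounts to calling update([a], F) repeatedly; the payload estimator applies the same recursion to the vehicle's force-balance regressors.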
2

Yuan, Xiaocui, Qingjin Peng, Lushen Wu and Huawei Chen. "A Novel Method of Normal Estimation for 3D Surface Reconstruction". In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46484.

Abstract (summary):
A 3D object can be recovered from scanned point data, which requires accurately estimating the normal directions of the object surface from the cloud data. Many point cloud processing algorithms rely on accurate normals as input to generate an accurate 3D surface model. The neighborhood of a data point in a smooth region can be well approximated by a plane. However, the isotropic neighborhood employed for normal estimation at a feature point would enclose points belonging to different surface patches across a sharp feature. In this paper, isotropic neighborhoods are segmented to find anisotropic neighborhoods for accurate normal estimation. Normals and candidate feature points are first estimated by the principal component analysis (PCA) method. The neighborhood of a feature point is then mapped into a Gaussian image. A k-means clustering algorithm is applied to the Gaussian image to identify an anisotropic sub-neighborhood for the data point. The normal of the candidate feature point is finally estimated from the anisotropic neighborhood with the PCA method. The proposed method can accurately estimate normal directions while preserving sharp features of the object surface. Applications have demonstrated the effectiveness of the proposed method.
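
The PCA step the abstract starts from can be sketched as follows; this is a minimal sketch of the standard covariance-eigenvector formulation, and it deliberately omits the paper's Gaussian-image segmentation and k-means refinement.

    import numpy as np

    def pca_normal(neighborhood):
        """Estimate a point's normal as the eigenvector of the neighborhood
        covariance with the smallest eigenvalue (direction of least variance)."""
        pts = np.asarray(neighborhood, dtype=float)   # (k, 3) neighbor coordinates
        centered = pts - pts.mean(axis=0)
        cov = centered.T @ centered / len(pts)
        eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        return eigvecs[:, 0]                          # unit normal of the fitted plane

Near a sharp edge, the method described above instead clusters the neighbors' normals on the Gaussian sphere (e.g., with k-means) and re-runs this PCA step on the sub-neighborhood lying on the same smooth patch, which is what preserves the crease.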
3

Balaga, Sanjay Raghav, Mario labella and Kanwar Bharat Singh. "Real-Time Cornering Stiffness Estimation and Road Friction State Classification under Normal Driving Conditions". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2650.

Abstract (summary):
The tire cornering stiffness plays a vital role in the functionality of vehicle dynamics control systems, particularly when it comes to stability and path tracking controllers. This parameter relies on various external variables such as the tire/ambient temperature, tire wear condition, the road surface state, etc. Ensuring a reliable estimation of the cornering stiffness value is crucial for control systems. This ensures that these systems can accurately compute actuator requests in a wide range of driving conditions. In this paper, a novel estimation method is introduced that relies solely on standard vehicle sensor data, including data such as steering wheel angles, longitudinal acceleration, lateral acceleration, yaw rate, and vehicle speed, among others. Initially, the vehicle's handling characteristics are deduced by estimating the understeer gradient. Subsequently, real-time estimates of the cornering stiffness values are derived by adapting the previously obtained parameters, all based on readily available vehicle sensor data. To enhance the robustness of the cornering stiffness model, a model for estimating vehicle mass is employed. The validity of the estimation method is confirmed through testing involving both low and high G excitation maneuvers, as well as real-world driving data on public roads, encompassing different road surface driving conditions. Additionally, the approach's robustness is assessed across various tire types and degrees of tire wear. Finally, the estimates of tire cornering stiffness are applied within a model to ascertain the classification of the road friction state.
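
For context on why the understeer gradient is a route to the cornering stiffnesses: in the linear bicycle model, the steady-state steering relation is, in one standard notation (ours, not the paper's),

    \delta = \frac{L}{R} + K_{us}\, a_y, \qquad
    K_{us} = \frac{m}{L}\left(\frac{l_r}{C_{\alpha f}} - \frac{l_f}{C_{\alpha r}}\right)

where L is the wheelbase, R the turn radius, a_y the lateral acceleration, m the vehicle mass, l_f and l_r the CG-to-axle distances, and C_{\alpha f}, C_{\alpha r} the front and rear cornering stiffnesses. Regressing steering angle against lateral acceleration yields K_{us}, from which stiffness values can be adapted, matching the two-stage approach the abstract outlines.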
4

Kolansky, Jeremy, Corina Sandu and Schalk Els. "Tire-Ground Normal Force Estimation From Vehicle System Identification and Parameter Estimation". In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34812.

Abstract (summary):
Ground vehicle stability controllers can be significantly improved through knowledge of the vehicle's tire-ground normal forces. This work demonstrates a proof-of-concept study of such an estimator. The method builds on two previous methods that perform real-time estimation of the vehicle's mass and horizontal CG position. That previous work provides the foundation for a comprehensive method that estimates the tire-ground normal loads of a ground vehicle and is invariant with respect to the vehicle parameters.
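
A minimal sketch of why estimated mass and CG position pin down the normal loads, in the static, level-ground case (our simplification; the paper's estimator goes beyond this to handle dynamic conditions):

    def static_axle_loads(m, a, b, g=9.81):
        """Static front/rear axle normal loads for a rigid vehicle on level ground.
        m: vehicle mass [kg]; a, b: CG distances to front/rear axles [m]."""
        L = a + b                   # wheelbase
        Fz_front = m * g * b / L    # moment balance about the rear contact line
        Fz_rear = m * g * a / L     # moment balance about the front contact line
        return Fz_front, Fz_rear

With mass and CG location identified online, these balances give the nominal loads; accelerations then shift load between axles and sides.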
5

Bae, Gwangbin, Ignas Budvytis and Roberto Cipolla. "Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.01289.

6

Iseki, Toshio. "A Study on Akaike’s Bayesian Information Criterion in Wave Estimation". In ASME 2011 30th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2011. http://dx.doi.org/10.1115/omae2011-49170.

Abstract (summary):
A feasibility study of Bayesian wave estimation was carried out to investigate the relationship between the minimum Akaike's Bayesian Information Criterion (ABIC) and the estimated wave parameters. The ship response functions, which were used for the Bayesian wave estimation together with the ship motion cross spectra, were simply modified and compared with the normal response functions in terms of the accuracy of the estimated wave parameters. Moreover, the concept of ABIC surfaces was introduced to investigate the optimum estimates from both the stochastic and the physical viewpoint. As a result, it was revealed that the minimum ABIC did not always provide the best estimates from the viewpoint of wave estimation, and that the simply modified response functions could reduce the estimation errors in some cases. The reasons were considered to be that the estimation error at the sharp peak of the response amplitude operators was closely related to the existence of local minima of the ABIC surface, and that the simply modified response functions helped make the ABIC surface smoother. It is pointed out, as the conclusion of this report, that estimation errors in the ship response functions were not considered in the Bayesian modeling.
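
For reference, Akaike's Bayesian Information Criterion for a model with hyperparameters \lambda is conventionally defined as (the standard definition, not quoted from this paper):

    \mathrm{ABIC} = -2\,\max_{\lambda}\,\log \int L(\theta \mid \text{data})\,\pi(\theta \mid \lambda)\,d\theta \;+\; 2\,\dim(\lambda)

The ABIC surface discussed above plots this criterion over candidate parameter values; its local minima are what can trap the minimization away from the physically best wave estimate.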
7

Duchnowski, Robert, and Zbigniew Wisniewski. "Msplit and MP estimation. A wider range of robustness". In Environmental Engineering. VGTU Technika, 2017. http://dx.doi.org/10.3846/enviro.2017.185.

Abstract (summary):
Msplit and MP estimation are new methods for assessing the parameters of functional models of geodetic observations. The first method assumes that each observation can be assigned to one of several functional models which differ from each other in competitive parameters. The latter method is based on the assumption that the distribution of measurement errors differs from the normal one in asymmetry and excess kurtosis. The theoretical properties indicate that both methods are also robust against outliers. However, the sense of robustness is a little wider than in the case of M-estimation. In Msplit estimation the outliers are treated as variables with competitive functional models (in relation to the models of "good" observations), while the robustness of MP estimation depends on the mentioned parameters of the probabilistic models of the observations. This paper shows that, on the one hand, robustness is an interesting property of the methods in question, and on the other hand it broadens the possible applications of such estimation methods.
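
For the squared variant of Msplit estimation with two competitive parameter sets, the objective is commonly written as a cross-product of residuals (a sketch in our notation, not necessarily the paper's exact formulation):

    \min_{\beta_{(1)},\,\beta_{(2)}} \sum_{i} v_{i(1)}^{2}\, v_{i(2)}^{2},
    \qquad v_{i(j)} = y_i - \mathbf{a}_i^{\top}\beta_{(j)}

An observation that fits either model well drives its term toward zero, which is the sense in which an outlier can be absorbed by a competitive model rather than merely down-weighted as in M-estimation.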
8

Reid, Robert B., Mark E. Oxley, Michael T. Eismann and Matthew E. Goda. "Quantifying surface normal estimation". In Defense and Security Symposium, edited by Dennis H. Goldstein and David B. Chenault. SPIE, 2006. http://dx.doi.org/10.1117/12.664161.

9

Kusupati, Uday, Shuo Cheng, Rui Chen and Hao Su. "Normal Assisted Stereo Depth Estimation". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00226.

10

Lenssen, Jan Eric, Christian Osendorfer and Jonathan Masci. "Deep Iterative Surface Normal Estimation". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.01126.


Reports by organizations on the topic "Estimation de normales":

1

Eslinger, Paul W., and Wayne A. Woodward. Minimum Hellinger Distance Estimation for Normal Models. Fort Belvoir, VA: Defense Technical Information Center, October 1990. http://dx.doi.org/10.21236/ada228714.

2

Burriel, Pablo, Mar Delgado-Téllez, Camila Figueroa, Iván Kataryniuk and Javier J. Pérez. Estimating the contribution of macroeconomic factors to sovereign bond spreads in the euro area. Madrid: Banco de España, March 2024. http://dx.doi.org/10.53479/36257.

Abstract (summary):
This paper proposes a novel approach to estimating the contribution of macroeconomic factors to sovereign spreads in the euro area, defined as the spread level consistent with the country’s prevailing macroeconomic conditions. Despite the wealth of papers estimating sovereign spreads, model-dependency and lack of robustness remain key considerations. Accordingly, we propose a “thick modeling” empirical framework, based on the estimation of a wide range of models. We focus on 10-year sovereign bond yields for nine euro area countries, using a sample that covers the period January 2000 to December 2023. Our results show that observed spreads behave in line with macro-financial determinants in “normal” times. Macroeconomic determinants are also able to account for a significant fraction of the observed sovereign spread dynamics in most episodes of financial turbulence, such as the pandemic and the aftermath of the Russian invasion of Ukraine. However, we find evidence of some deviations of sovereign spreads from their estimated values during the 2010-2012 euro area sovereign debt crisis. In this period, macroeconomic indicators are able to explain at most 26% of the observed peaks in spreads among non-core countries.
3

Fraley, Chris, and Adrian E. Raftery. Bayesian Regularization for Normal Mixture Estimation and Model-Based Clustering. Fort Belvoir, VA: Defense Technical Information Center, August 2005. http://dx.doi.org/10.21236/ada454825.

4

Amengual, Dante, Xinyue Bei, Marine Carrasco and Enrique Sentana. Score-type tests for normal mixtures. CIRANO, January 2023. http://dx.doi.org/10.54932/uxsg1990.

Abstract (summary):
Testing normality against discrete normal mixtures is complex because some parameters become increasingly underidentified along alternative ways of approaching the null, others are inequality constrained, and several higher-order derivatives become identically zero. These problems make the maximum of the alternative model log-likelihood function numerically unreliable. We propose score-type tests, asymptotically equivalent to the likelihood ratio, computed as the larger of two simple intuitive statistics that only require estimation under the null. One novelty of our approach is that we treat symmetrically both ways of writing the null hypothesis without excluding any region of the parameter space. We derive the asymptotic distribution of our tests under the null and under sequences of local alternatives. We also show that their asymptotic distribution is the same whether applied to observations or to standardized residuals from heteroskedastic regression models. Finally, we study their power in simulations and apply them to the residuals of Mincer earnings functions.
5

Dueker, Michael J. Kalman Filtering with Truncated Normal State Variables for Bayesian Estimation of Macroeconomic Models. Federal Reserve Bank of St. Louis, 2005. http://dx.doi.org/10.20955/wp.2005.057.

6

Bonhomme, Stéphane, and Angela Denis. Estimating heterogeneous effects: applications to labor economics. Madrid: Banco de España, May 2024. http://dx.doi.org/10.53479/36556.

Abstract (summary):
A growing number of applications involve settings where, in order to infer heterogeneous effects, a researcher compares various units. Examples of research designs include children moving between different neighborhoods, workers moving between firms, patients migrating from one city to another, and banks offering loans to different firms. We present a unified framework for these settings, based on a linear model with normal random coefficients and normal errors. Using the model, we discuss how to recover the mean and dispersion of effects, other features of their distribution, and how to construct predictors of the effects. We provide moment conditions on the model’s parameters, and outline various estimation strategies. One of the main objectives of the paper is to clarify some of the underlying assumptions by highlighting their economic content, and to discuss and inform some of the key practical choices.
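
In one minimal notation (ours, assumed rather than quoted from the paper), the linear model with normal random coefficients and normal errors described above reads:

    y_{ij} = x_{ij}'\beta_i + \varepsilon_{ij}, \qquad
    \beta_i \sim \mathcal{N}(\mu, \Sigma), \qquad
    \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

where i indexes units (neighborhoods, firms, cities, banks) and j the observations compared across them; the objects of interest are the mean \mu and dispersion \Sigma of the effects, together with predictors of the individual \beta_i.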
7

Gupta, Shanti S., and Klaus J. Miescke. On Finding the Largest Normal Mean and Estimating the Selected Mean. Fort Belvoir, VA: Defense Technical Information Center, August 1989. http://dx.doi.org/10.21236/ada211628.

8

Balani, Suman, Hetashvi Sudani, Sonali Nawghare and Nitin Kulkarni. ESTIMATION OF FETAL WEIGHT BY CLINICAL METHOD, ULTRASONOGRAPHY AND ITS CORRELATION WITH ACTUAL BIRTH WEIGHT IN TERM PREGNANCY. World Wide Journals, February 2023. http://dx.doi.org/10.36106/ijar/6907486.

Abstract (summary):
Introduction: Accurate estimation of foetal weight is of paramount importance in modern obstetrics for the management of labour and delivery. During the past two decades, estimated foetal weight has been incorporated into the standard routine antepartum evaluation of high-risk pregnancies and deliveries. The present study was conducted to estimate fetal weight by a clinical method and by ultrasonography, and to find out their correlation with actual birth weight in term pregnancy. Material and Methods: The cross-sectional observational study was conducted in the outpatient and inpatient obstetric sections of the Department of Obstetrics & Gynaecology and the USG section of the Department of Radio-diagnosis of A.C.P.M. Medical College and Hospital, Dhule, Maharashtra. Observations & Results: Most of the study subjects were between 24 and 28 years of age (53.5%), with a mean age of 24.71 years. The mean Hadlock weight was 2705 ± 469 gm, while the actual birth weight was 2805 ± 465 gm. The difference was found to be statistically significant (p<0.05). The mean difference for Dare's clinical method was found to be 73.3 ± 49.8 gm, while the Hadlock difference was found to be 103.1 ± 77.4 gm. There was a very strong, positive, statistically significant correlation between the Dare weight and the actual weight (p<0.05). There was likewise a very strong, positive, statistically significant correlation between the Hadlock weight and the actual weight (p<0.05). Conclusion: The major finding from this study is that clinical estimation of fetal weight is as accurate as the ultrasonographic method of estimation within the normal birth weight range. Our study has important implications: in a developing country like India, ultrasound is not available in many health care delivery systems, especially in rural areas, where the clinical method is easy, cost effective, simple, accurate and can be used even by midwives.
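
Dare's clinical method referenced above is a simple product formula; as a sketch of the standard formulation (not taken from this paper's text):

    def dare_fetal_weight(sfh_cm, ag_cm):
        """Dare's formula: estimated fetal weight in grams is the product of the
        symphysio-fundal height and the abdominal girth, both in centimetres."""
        return sfh_cm * ag_cm

    # Example: SFH of 34 cm and abdominal girth of 90 cm give roughly 3060 g.
    print(dare_fetal_weight(34, 90))

The Hadlock estimate, by contrast, is a regression formula over ultrasound biometry (biparietal diameter, abdominal circumference, femur length), which is why it requires imaging equipment.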
9

Tywoniak, Jan, Kateřina Sojková and Zdenko Malík. Building Physics in Living Lab. Department of the Built Environment, 2023. http://dx.doi.org/10.54337/aau541565072.

Abstract (summary):
A team from the Czech Technical University in Prague participated in the prestigious international contest Solar Decathlon Europe 21-22. The topic of its FIRSTLIFE project was the extension of a student dormitory by adding new floors to the building, together with a retrofit of the existing part. The paper deals with the pedagogical context of this activity. Students got an extraordinary opportunity to actually implement their theoretical proposals based on calculations. They also received feedback on the extent to which detailed designs are feasible in normal construction practice. The new knowledge can be applied in the future to better estimate the effect of imperfections, for example in calculations of heat conduction, the effect of thermal bridges, leaks affecting moisture transport, and airtightness. Information about future research in the Living Lab is given at the end of the paper.
10

Soloviev, Vladimir, Andrii Bielinskyi, Oleksandr Serdyuk, Victoria Solovieva and Serhiy Semerikov. Lyapunov Exponents as Indicators of the Stock Market Crashes. [n.p.], November 2020. http://dx.doi.org/10.31812/123456789/4131.

Abstract (summary):
The frequent financial critical states that occur in our world have, over many centuries, attracted scientists from different areas. Such fluctuations continue to have a huge impact on the world economy, causing instability in it with respect to normal and natural disturbances [1]. The anticipation, prediction, and identification of such phenomena remain a huge challenge. To be able to prevent such critical events, we focus our research on the chaotic properties of stock market indices. In discussing recent papers devoted to chaotic behavior and complexity in the financial system, we find that the largest Lyapunov exponent and the spectrum of Lyapunov exponents can be evaluated to determine whether the system is completely deterministic or chaotic. Accordingly, we give a theoretical background on the methods for Lyapunov exponent estimation; specifically, we followed the methods proposed by J. P. Eckmann and Sano-Sawada to compute the spectrum of Lyapunov exponents. With Rosenstein's algorithm, we compute only the largest (maximal) Lyapunov exponent from an experimental time series, and we consider one of the measures from recurrence quantification analysis that, in a similar way to the largest Lyapunov exponent, detects highly non-monotonic behavior. Along with the theoretical material, we present empirical results which evidence that chaos theory and the theory of complexity provide a powerful toolkit for the construction of indicator-precursors of crisis events in financial markets.
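
A compact sketch of Rosenstein's largest-Lyapunov-exponent procedure, under simplifying assumptions (a full pairwise nearest-neighbor search, unit sampling time, illustrative parameter values); this is not the authors' implementation:

    import numpy as np

    def rosenstein_lle(x, dim=5, tau=1, min_tsep=50, n_steps=30):
        """Largest Lyapunov exponent (per sample) via Rosenstein's method:
        delay-embed the series, pair each point with its nearest neighbor
        outside a temporal exclusion window, and fit the slope of the mean
        log-divergence curve."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (dim - 1) * tau                  # number of embedded points
        Y = np.array([x[i:i + n] for i in range(0, dim * tau, tau)]).T
        m = n - n_steps                               # points trackable forward
        # Full pairwise distance matrix (O(m^2) memory, fine for a sketch)
        d = np.linalg.norm(Y[:m, None, :] - Y[None, :m, :], axis=2)
        for i in range(m):                            # exclude temporal neighbors
            d[i, max(0, i - min_tsep):i + min_tsep + 1] = np.inf
        nn = np.argmin(d, axis=1)                     # nearest neighbor of each point
        div = np.empty(n_steps)
        for k in range(n_steps):                      # mean log separation after k steps
            dist = np.linalg.norm(Y[np.arange(m) + k] - Y[nn + k], axis=1)
            div[k] = np.log(dist[dist > 0]).mean()
        slope, _ = np.polyfit(np.arange(n_steps), div, 1)
        return slope                                  # a positive slope suggests chaos

A sustained positive slope of the divergence curve indicates exponential separation of nearby trajectories, which is the signature this line of work monitors as a crash precursor.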
