Academic literature on the topic "Implicit regularization"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "Implicit regularization".

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Implicit regularization"

1

Ceng, Lu-Chuan, Qamrul Hasan Ansari, and Ching-Feng Wen. "Implicit Relaxed and Hybrid Methods with Regularization for Minimization Problems and Asymptotically Strict Pseudocontractive Mappings in the Intermediate Sense". Abstract and Applied Analysis 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/854297.

Full text
Abstract
We first introduce an implicit relaxed method with regularization for finding a common element of the set of fixed points of an asymptotically strict pseudocontractive mapping S in the intermediate sense and the set of solutions of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional in the setting of Hilbert spaces. The implicit relaxed method with regularization is based on three well-known methods: the extragradient method, viscosity approximation method, and gradient projection algorithm with regularization. We derive a weak convergence theorem for two sequences generated by this method. On the other hand, we also prove a new strong convergence theorem by an implicit hybrid method with regularization for the MP and the mapping S. The implicit hybrid method with regularization is based on four well-known methods: the CQ method, extragradient method, viscosity approximation method, and gradient projection algorithm with regularization.
APA, Harvard, Vancouver, ISO, and other styles
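For readers unfamiliar with the building blocks named in the Ceng, Ansari, and Wen entry above, the gradient projection algorithm with regularization can be sketched as a single update rule. This is a generic textbook form with a vanishing Tikhonov parameter, not necessarily the exact iteration analyzed in the paper:

\[
x_{n+1} = P_C\!\bigl(x_n - \lambda_n\,[\nabla f(x_n) + \alpha_n x_n]\bigr), \qquad \alpha_n \to 0,
\]

where P_C is the metric projection onto the feasible set C and the λ_n are step sizes; the paper combines steps of this type with extragradient and viscosity-approximation steps to handle the additional fixed-point constraint.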
2

FARGNOLI, H. G., A. P. BAÊTA SCARPELLI, L. C. T. BRITO, B. HILLER, MARCOS SAMPAIO, M. C. NEMES, and A. A. OSIPOV. "ULTRAVIOLET AND INFRARED DIVERGENCES IN IMPLICIT REGULARIZATION: A CONSISTENT APPROACH". Modern Physics Letters A 26, no. 04 (February 10, 2011): 289–302. http://dx.doi.org/10.1142/s0217732311034773.

Full text
Abstract
Implicit Regularization is a four-dimensional regularization initially conceived to treat ultraviolet divergences. It has been successfully tested in several instances in the literature, more specifically in those where Dimensional Regularization does not apply. In the present contribution, we extend the method to handle infrared divergences as well. We show that the essential steps which rendered Implicit Regularization adequate in the case of ultraviolet divergences have their counterpart for infrared ones. Moreover, we show that a new scale appears, typically an infrared scale which is completely independent of the ultraviolet one. Examples are given.
APA, Harvard, Vancouver, ISO, and other styles
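For context on the scheme named in the Fargnoli et al. entry above: Implicit Regularization works in the physical dimension of the theory and isolates ultraviolet divergences into basic divergent integrals that are independent of external momenta. A minimal sketch of the standard separation identity and of the basic logarithmically divergent integral (general IReg background, not a result specific to this paper) is:

\[
\frac{1}{(k+p)^2 - m^2} = \frac{1}{k^2 - m^2} - \frac{p^2 + 2\,p\cdot k}{(k^2 - m^2)\bigl[(k+p)^2 - m^2\bigr]},
\qquad
I_{\log}(m^2) \equiv \int \frac{d^4 k}{(2\pi)^4}\,\frac{1}{(k^2 - m^2)^2}.
\]

Applying the first identity recursively inside a loop integrand pushes all external-momentum dependence into finite integrals, leaving the ultraviolet divergence in momentum-independent integrals such as I_log(m^2); the paper argues that an analogous separation handles infrared divergences, with an infrared scale that is independent of the ultraviolet one.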
3

Sampaio, Marcos, A. P. Baêta Scarpelli, J. E. Ottoni, and M. C. Nemes. "Implicit Regularization and Renormalization of QCD". International Journal of Theoretical Physics 45, no. 2 (February 2006): 436–57. http://dx.doi.org/10.1007/s10773-006-9045-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Al-Tam, Faroq, António dos Anjos, and Hamid Reza Shahbazkia. "Iterative illumination correction with implicit regularization". Signal, Image and Video Processing 10, no. 5 (December 11, 2015): 967–74. http://dx.doi.org/10.1007/s11760-015-0847-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dandi, Yatin, Luis Barba, and Martin Jaggi. "Implicit Gradient Alignment in Distributed and Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6454–62. http://dx.doi.org/10.1609/aaai.v36i6.20597.

Full text
Abstract
A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients or mini-batches due to heterogeneity and stochasticity of the distributed data. In this work, we show that data heterogeneity can in fact be exploited to improve generalization performance through implicit regularization. One way to alleviate the effects of heterogeneity is to encourage the alignment of gradients across different clients throughout training. Our analysis reveals that this goal can be accomplished by utilizing the right optimization method that replicates the implicit regularization effect of SGD, leading to gradient alignment as well as improvements in test accuracies. Since the existence of this regularization in SGD completely relies on the sequential use of different mini-batches during training, it is inherently absent when training with large mini-batches. To obtain the generalization benefits of this regularization while increasing parallelism, we propose a novel GradAlign algorithm that induces the same implicit regularization while allowing the use of arbitrarily large batches in each update. We experimentally validate the benefits of our algorithm in different distributed and federated learning settings.
APA, Harvard, Vancouver, ISO, and other styles
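To make the gradient-alignment idea in the Dandi, Barba, and Jaggi entry above concrete, the sketch below penalizes the squared difference between the gradients of two simulated clients for a linear least-squares model. This is a hedged toy illustration, not the authors' GradAlign algorithm; the client data, the penalty weight lam, and the learning rate are made-up parameters.

import numpy as np

def client_grad(w, X, y):
    # Gradient of the mean-squared-error loss 0.5*||Xw - y||^2 / n
    return X.T @ (X @ w - y) / len(y)

def aligned_step(w, client1, client2, lr=0.05, lam=0.1):
    # One update on the average loss plus an alignment penalty 0.5*lam*||g1 - g2||^2.
    # For a linear model the penalty gradient is (H1 - H2) @ (g1 - g2), H = X^T X / n.
    X1, y1 = client1
    X2, y2 = client2
    g1, g2 = client_grad(w, X1, y1), client_grad(w, X2, y2)
    H1, H2 = X1.T @ X1 / len(y1), X2.T @ X2 / len(y2)
    align_grad = (H1 - H2) @ (g1 - g2)
    return w - lr * (0.5 * (g1 + g2) + lam * align_grad)

# Tiny synthetic federated setup with two heterogeneous clients
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for shift in (0.0, 1.0):  # heterogeneity via shifted feature distributions
    X = rng.normal(loc=shift, size=(32, 3))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=32)))

w = np.zeros(3)
for _ in range(200):
    w = aligned_step(w, clients[0], clients[1])
print("recovered weights:", w)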
6

Lin, Huangxing, Yihong Zhuang, Xinghao Ding, Delu Zeng, Yue Huang, Xiaotong Tu, and John Paisley. "Self-Supervised Image Denoising Using Implicit Deep Denoiser Prior". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1586–94. http://dx.doi.org/10.1609/aaai.v37i2.25245.

Full text
Abstract
We devise a new regularization for denoising with self-supervised learning. The regularization uses a deep image prior learned by the network, rather than a traditional predefined prior. Specifically, we treat the output of the network as a ``prior'' that we again denoise after ``re-noising.'' The network is updated to minimize the discrepancy between the twice-denoised image and its prior. We demonstrate that this regularization enables the network to learn to denoise even if it has not seen any clean images. The effectiveness of our method is based on the fact that CNNs naturally tend to capture low-level image statistics. Since our method utilizes the image prior implicitly captured by the deep denoising CNN to guide denoising, we refer to this training strategy as an Implicit Deep Denoiser Prior (IDDP). IDDP can be seen as a mixture of learning-based methods and traditional model-based denoising methods, in which regularization is adaptively formulated using the output of the network. We apply IDDP to various denoising tasks using only observed corrupted data and show that it achieves better denoising results than other self-supervised denoising methods.
APA, Harvard, Vancouver, ISO, and other styles
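A minimal sketch of the re-noise-then-denoise training loss described in the Lin et al. entry above, assuming a toy convolutional denoiser, Gaussian re-noising with level sigma, and an Adam optimizer; none of these choices are taken from the paper itself.

import torch
import torch.nn as nn

# Toy denoiser; the actual work uses a deep denoising CNN.
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
sigma = 0.1  # assumed re-noising level

noisy = torch.rand(4, 1, 32, 32)  # observed corrupted images (placeholder data)
for step in range(10):
    prior = denoiser(noisy).detach()            # current output treated as a fixed "prior"
    renoised = prior + sigma * torch.randn_like(prior)
    twice_denoised = denoiser(renoised)         # denoise again after re-noising
    loss = ((twice_denoised - prior) ** 2).mean()  # match twice-denoised image to its prior
    opt.zero_grad()
    loss.backward()
    opt.step()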
7

Liu, Yuan, Yanzhi Song, Zhouwang Yang, and Jiansong Deng. "Implicit surface reconstruction with total variation regularization". Computer Aided Geometric Design 52-53 (March 2017): 135–53. http://dx.doi.org/10.1016/j.cagd.2017.02.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Zhemin, Tao Sun, Hongxia Wang, and Bao Wang. "Adaptive and Implicit Regularization for Matrix Completion". SIAM Journal on Imaging Sciences 15, no. 4 (November 22, 2022): 2000–2022. http://dx.doi.org/10.1137/22m1489228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Belytschko, T., S. P. Xiao, and C. Parimi. "Topology optimization with implicit functions and regularization". International Journal for Numerical Methods in Engineering 57, no. 8 (2003): 1177–96. http://dx.doi.org/10.1002/nme.824.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rosado, R. J. C., A. Cherchiglia, M. Sampaio, and B. Hiller. "An Implicit Regularization Approach to Chiral Models". Acta Physica Polonica B Proceedings Supplement 17, no. 6 (2024): 1. http://dx.doi.org/10.5506/aphyspolbsupp.17.6-a15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "Implicit regularization"

1

Loy, Kak Choon. "Efficient Semi-Implicit Time-Stepping Schemes for Incompressible Flows". Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36442.

Full text
Abstract
The development of numerical methods for the incompressible Navier-Stokes equations has received much attention over the past 50 years. Finite element methods emerged given their robustness and reliability. In our work, we choose the P2-P1 finite element for the spatial approximation, which gives 2nd-order accuracy for velocity and 1st-order accuracy for pressure. Our research focuses on the development of several high-order semi-implicit time-stepping methods to compute unsteady flows. The methods investigated include backward difference formulae (SBDF) and a defect correction strategy (DC). Using the defect correction strategy, we investigate two variants, the first one being based on the high-order artificial compressibility and bootstrapping strategy proposed by Guermond and Minev (GM) and the other being a combination of GM methods with the sequential regularization method (GM-SRM). Both GM and GM-SRM methods avoid solving saddle-point problems, unlike the SBDF and DC methods. This approach reduces the complexity of the linear systems at the expense that many smaller linear systems need to be solved. Next, we propose several numerical improvements in terms of better approximations of the nonlinear advection term and high-order initialization for all methods. To further reduce the complexity of the resulting linear systems, we developed several new variants of grad-div splitting algorithms besides the one studied by Guermond and Minev. Splitting algorithms allow us to handle larger flow problems. We show that our new methods are capable of reproducing flow characteristics (e.g., lift and drag parameters and Strouhal numbers) published in the literature for the 2D lid-driven cavity and 2D flow around a cylinder. SBDF methods with grad-div stabilization terms are found to be very stable, accurate, and efficient when computing flows with high Reynolds numbers. Lastly, we showcase the robustness of our methods in carrying out 3D computations.
APA, Harvard, Vancouver, ISO, and other styles
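As background for the time-stepping schemes discussed in the Loy thesis above, a generic second-order semi-implicit backward-difference (SBDF2) step for the incompressible Navier-Stokes equations can be written as follows (a textbook form given as an illustration, not necessarily the thesis' exact discretization):

\[
\frac{3\,\mathbf{u}^{n+1} - 4\,\mathbf{u}^{n} + \mathbf{u}^{n-1}}{2\,\Delta t}
+ 2(\mathbf{u}^{n}\!\cdot\!\nabla)\mathbf{u}^{n} - (\mathbf{u}^{n-1}\!\cdot\!\nabla)\mathbf{u}^{n-1}
- \nu \Delta \mathbf{u}^{n+1} + \nabla p^{n+1} = \mathbf{f}^{n+1},
\qquad \nabla\!\cdot\!\mathbf{u}^{n+1} = 0,
\]

so the stiff viscous term is treated implicitly while the advection term is extrapolated explicitly from previous steps; the coupled divergence constraint is what produces the saddle-point systems that the GM and GM-SRM variants mentioned in the abstract are designed to avoid.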
2

Ayme, Alexis. "Supervised learning with missing data : a non-asymptotic point of view". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS252.

Full text
Abstract
Missing values are common in most real-world data sets due to the combination of multiple sources and inherently missing information, such as sensor failures or unanswered survey questions. The presence of missing values often prevents the application of standard learning algorithms. This thesis examines missing values in a prediction context, aiming to achieve accurate predictions despite the occurrence of missing data in both training and test datasets. The focus of this thesis is to theoretically analyze specific algorithms to obtain finite-sample guarantees. We derive minimax lower bounds on the excess risk of linear predictions in the presence of missing values. Such lower bounds depend on the distribution of the missing pattern and can grow exponentially with the dimension. We propose a very simple method consisting in applying the least-squares procedure on the most frequent missing patterns only. Such a simple method turns out to be a near minimax-optimal procedure, which departs from the least-squares algorithm applied to all missing patterns. Following this, we explore the impute-then-regress method, where imputation is performed using naive zero imputation, and the regression step is carried out via linear models whose parameters are learned via stochastic gradient descent. We demonstrate that this very simple method offers strong finite-sample guarantees in high-dimensional settings. Specifically, we show that the bias of this method is lower than the bias of ridge regression. As ridge regression is often used in high dimensions, this proves that the bias of missing data (via zero imputation) is negligible in some high-dimensional settings. These findings are illustrated using random features models, which help us to precisely understand the role of dimensionality. Finally, we study different algorithms to handle linear classification in the presence of missing data (logistic regression, perceptron, LDA). We prove that LDA is the only model that can be valid for both complete and missing data in some generic settings.
APA, Harvard, Vancouver, ISO, and other styles
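The impute-then-regress baseline analyzed in the Ayme thesis above can be illustrated with a toy experiment: impute missing entries with zeros, then fit a linear model by stochastic gradient descent. The sketch below uses synthetic data, an assumed missingness rate, and an assumed learning rate; it is not the thesis' experimental protocol.

import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
X_full = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X_full @ w_true + 0.1 * rng.normal(size=n)

# Introduce values missing completely at random, then impute with zeros
mask = rng.random((n, d)) < 0.3          # True = missing
X_obs = np.where(mask, np.nan, X_full)
X_imputed = np.nan_to_num(X_obs, nan=0.0)

# Plain SGD on the squared loss of the zero-imputed design matrix
w = np.zeros(d)
lr = 0.01
for epoch in range(20):
    for i in rng.permutation(n):
        x_i = X_imputed[i]
        w -= lr * (x_i @ w - y[i]) * x_i
print("training error on imputed data:", np.mean((X_imputed @ w - y) ** 2))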
3

Estecahandy, Elodie. "Contribution à l'analyse mathématique et à la résolution numérique d'un problème inverse de scattering élasto-acoustique". Phd thesis, Université de Pau et des Pays de l'Adour, 2013. http://tel.archives-ouvertes.fr/tel-00880628.

Full text
Abstract
Determining the shape of an elastic obstacle immersed in a fluid medium from measurements of the scattered wave field is a problem of great interest in many fields such as sonar, geophysical exploration, and medical imaging. Because of its nonlinear and ill-posed character, this inverse obstacle problem (IOP) is very difficult to solve, particularly from a numerical point of view. Moreover, its study requires an understanding of the theory of the associated direct scattering problem (DP) and a command of the corresponding solution methods. The work carried out here concerns the mathematical and numerical analysis of the elasto-acoustic DP and of the IOP. In particular, we have developed an efficient numerical simulation code for wave propagation in this type of media, based on a DG-type method that employs higher-order finite elements and curved elements at the interface in order to better represent the fluid-structure interaction, and we apply it to the reconstruction of objects through the implementation of a regularized Newton method.
APA, Harvard, Vancouver, ISO, and other styles
4

Pereira, Ana Isabel Costa. "Implicit Regularization in a QCD decay of the Higgs boson". Master's thesis, 2021. http://hdl.handle.net/10316/98040.

Full text
Abstract
Master's dissertation in Physics presented to the Faculdade de Ciências e Tecnologia
Perturbative Quantum Chromodynamics involves the appearance of divergences in the amplitudes of a process. However, physical observables must be finite and therefore all the divergences that emerge must cancel. The KLN theorem states that the infrared divergences that appear in a QCD decay rate or cross section must cancel when putting together the contributions from the virtual and real parts that contribute at the same order in perturbation theory. In this work, the main goal is to calculate the decay rate of the QCD decay of the Higgs boson into gluons, modeled by an effective Lagrangian in the limit of infinite top quark mass, and verify the KLN theorem, using Implicit Regularization (IReg) as opposed to Dimensional Regularization. We derive the Feynman rules of the effective Lagrangian to describe the interaction between gluons and the Higgs boson and use them to construct the amplitudes of the process' virtual and real diagrams. We then use IReg, which is a non-dimensional regularization scheme that works in the physical dimension of the theory and allows for the separation of the ultraviolet and infrared divergences of an amplitude. The ultraviolet divergent integrals are written as basic divergent integrals and the finite integrals are evaluated using the software Mathematica. We then use these integrals to compute the virtual decay rate of the process as a correction to the tree-level decay rate. We introduce the spin-helicity formalism to compute the real amplitude. We then study the explicit computation of the phase space of the real decay and integrate the squared real amplitude over the phase space to obtain the real decay rate. At last, we add the contributions from both the virtual and real decay rates to obtain the final result, which is finite as expected, reproducing known results in the literature.
Other - CERN/FIS-PAR/0040/2019
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Implicit regularization"

1

Shafrir, David, Nir A. Sochen, and Rachid Deriche. "Regularization of Mappings Between Implicit Manifolds of Arbitrary Dimension and Codimension". In Lecture Notes in Computer Science, 344–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11567646_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wahba, G. "Regularization and Cross Validation Methods for Nonlinear, Implicit, Ill-posed Inverse Problems". In Geophysical Data Inversion Methods and Applications, 3–13. Wiesbaden: Vieweg+Teubner Verlag, 1990. http://dx.doi.org/10.1007/978-3-322-89416-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zavarise, Giorgio, Laura De Lorenzis, and Robert L. Taylor. "On Regularization of the Convergence Path for the Implicit Solution of Contact Problems". In Recent Developments and Innovative Applications in Computational Mechanics, 17–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17484-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Usenov, Izat. "Combined Regularization Method for Solving an Implicit Operator Equation of the First Kind". In Lecture Notes in Networks and Systems, 24–33. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64010-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Menini, Anne, Pierre-André Vuissoz, Jacques Felblinger, and Freddy Odille. "Joint Reconstruction of Image and Motion in MRI: Implicit Regularization Using an Adaptive 3D Mesh". In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, 264–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33415-3_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Spieker, Veronika, Hannah Eichhorn, Jonathan K. Stelter, Wenqi Huang, Rickmer F. Braren, Daniel Rueckert, Francisco Sahli Costabal, et al. "Self-supervised k-Space Regularization for Motion-Resolved Abdominal MRI Using Neural Implicit k-Space Representations". In Lecture Notes in Computer Science, 614–24. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72104-5_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ehrhardt, Jan, and Heinz Handels. "Implicitly Solved Regularization for Learning-Based Image Registration". In Machine Learning in Medical Imaging, 137–46. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45673-2_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Banerjee, Ayan, and Sandeep K. S. Gupta. "Recovering Implicit Physics Model Under Real-World Constraints". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240556.

Full text
Abstract
Recovering a physics-driven model, i.e. a governing set of equations of the underlying dynamical system, from real-world data has been of recent interest. Most existing methods either operate on simulation data with unrealistically high sampling rates or require explicit measurements of all system variables, which is not amenable to real-world deployments. Moreover, they assume the timestamps of external perturbations to the physical system are known a priori, without uncertainty, implicitly discounting any sensor time-synchronization or human reporting errors. In this paper, we propose a novel liquid time constant neural network (LTC-NN) based architecture to recover the underlying model of physical dynamics from real-world data. The automatic differentiation property of LTC-NN nodes overcomes problems associated with low sampling rates, the input-dependent time constant in the forward pass of the hidden layer of LTC-NN nodes creates a massive search space of implicit physical dynamics, the physics model solver based data reconstruction loss guides the search for the correct set of implicit dynamics, and the use of dropout regularization in the dense layer ensures extraction of the sparsest model. Further, to account for the perturbation timing error, we utilize dense layer nodes to search through input shifts that result in the lowest reconstruction loss. Experiments on four benchmark dynamical systems, three with simulation data and one with real-world data, show that the LTC-NN architecture is more accurate in recovering implicit physics model coefficients than state-of-the-art sparse model recovery approaches. We also introduce four additional case studies (eight in total) on real-life medical examples, in simulation and with real-world clinical data, to show the effectiveness of our approach in recovering the underlying model in practice.
APA, Harvard, Vancouver, ISO, and other styles
9

Xiao, Jinying, Ping Li, and Jie Nie. "TED: Accelerate Model Training by Internal Generalization". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240823.

Full text
Abstract
Large language models have demonstrated strong performance in recent years, but the high cost of training drives the need for efficient methods to compress dataset sizes. We propose TED pruning, a method that addresses the challenge of overfitting under high pruning ratios by quantifying the model’s ability to improve performance on pruned data while fitting retained data, known as Internal Generalization (IG). TED uses an optimization objective based on Internal Generalization Distance (IGD), measuring changes in IG before and after pruning to align with true generalization performance and achieve implicit regularization. The IGD optimization objective was verified to allow the model to achieve the smallest upper bound on generalization error. The impact of small mask fluctuations on IG is studied through masks and Taylor approximation, and fast estimation of IGD is enabled. In analyzing continuous training dynamics, the prior effect of IGD is validated, and a progressive pruning strategy is proposed. Experiments on image classification, natural language understanding, and large language model fine-tuning show TED achieves lossless performance with 60-70% of the data. Upon acceptance, our code will be made publicly available.
APA, Harvard, Vancouver, ISO, and other styles
10

Thomas, Dominic. "Les Sans-papiers". In Postcolonial Realms of Memory, 255–66. Liverpool University Press, 2020. http://dx.doi.org/10.3828/liverpool/9781789620665.003.0024.

Full text
Abstract
Control and selection have been implicit dimensions of the history of immigration in France, shaping and defining the parameters of national identity over centuries. The year 1996 was a turning point when several hundred African sans-papiers sought refuge in the Saint-Bernard de la Chapelle church in the 18th arrondissement of Paris while awaiting a decision on their petition for amnesty and legalization. The church was later stormed by heavily armed police officers, and although there was widespread support for government policies intended to encourage legal paths to immigration, the police raids provoked outrage. This provided the impetus for social mobilization and the sans-papiers behaved contrary to expectations and decided to deliberately enter the public domain in order to shed light on their conditions. Emerging in this way from the dubious safety of legal invisibility, claims were made for more direct public representation and ultimately for regularization, while also countering popular misconceptions and stereotypes concerning their presence and role in French society. The sans-papiers movement is inspired by a shared memory of resistance and political representation that helps define a lieu de mémoire, a space which is, from a broadly postcolonial perspective, very much inscribed in collective memory.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Implicit regularization"

1

Xu, Qunzhi, Yi Yu, and Yajun Mei. "Quickest Detection in High-Dimensional Linear Regression Models via Implicit Regularization". In 2024 IEEE International Symposium on Information Theory (ISIT), 1059–64. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gunasekar, Suriya, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nathan Srebro. "Implicit Regularization in Matrix Factorization". In 2018 Information Theory and Applications Workshop (ITA). IEEE, 2018. http://dx.doi.org/10.1109/ita.2018.8503198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Milanesi, Paolo, Hachem Kadri, Stephane Ayache, and Thierry Artieres. "Implicit Regularization in Deep Tensor Factorization". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dupe, Francois Xavier, Sebastien Bougleux, Luc Brun, Olivier Lezoray, and Abderahim Elmoataz. "Kernel-Based Implicit Regularization of Structured Objects". In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yao, Tianyi, Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk, and Genevera I. Allen. "Minipatch Learning as Implicit Ridge-Like Regularization". In 2021 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2021. http://dx.doi.org/10.1109/bigcomp51126.2021.00021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Xiaoyang, Yi Zhang, Kai Chen, Teng Li, Wenjun Zhang, and Bingbing Ni. "Learning Shape Primitives via Implicit Convexity Regularization". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cherchiglia, A. "Systematizing Implicit Regularization for Multi-Loop Feynman Diagrams". In 4th International Conference on Fundamental Interactions. Trieste, Italy: Sissa Medialab, 2011. http://dx.doi.org/10.22323/1.124.0016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cataltepe, Zehra, Mahiye Uluyagmur, and Esengul Tayfur. "TV program recommendation using implicit feedback with adaptive regularization". In 2012 20th Signal Processing and Communications Applications Conference (SIU). IEEE, 2012. http://dx.doi.org/10.1109/siu.2012.6204780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kumar, Akshay, Akshay Malhotra, and Shahab Hamidi-Rad. "Group Sparsity via Implicit Regularization for MIMO Channel Estimation". In 2023 IEEE Wireless Communications and Networking Conference (WCNC). IEEE, 2023. http://dx.doi.org/10.1109/wcnc55385.2023.10118737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yao, Lina, Xianzhi Wang, Quan Z. Sheng, Wenjie Ruan, and Wei Zhang. "Service Recommendation for Mashup Composition with Implicit Correlation Regularization". In 2015 IEEE International Conference on Web Services (ICWS). IEEE, 2015. http://dx.doi.org/10.1109/icws.2015.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles