Selection of scientific literature on the topic "Self-Configuring lighting"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Self-Configuring lighting".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.
Journal articles on the topic "Self-Configuring lighting"
Luxman, Ramamoorthy, Marvin Nurit, Gaëtan Le Goïc, Franck Marzani, and Alamin Mansouri. "Next Best Light Position: A self configuring approach for the Reflectance Transformation Imaging acquisition process." Electronic Imaging 2021, no. 5 (January 18, 2021): 132-1. http://dx.doi.org/10.2352/issn.2470-1173.2021.5.maap-132.
Davoli, Luca, Mattia Antonini, and Gianluigi Ferrari. "DiRPL: A RPL-Based Resource and Service Discovery Algorithm for 6LoWPANs." Applied Sciences 9, no. 1 (December 22, 2018): 33. http://dx.doi.org/10.3390/app9010033.
Harley, Alexis. "Resurveying Eden." M/C Journal 8, no. 4 (August 1, 2005). http://dx.doi.org/10.5204/mcj.2382.
Dissertations on the topic "Self-Configuring lighting"
Ansarnia, Masoomeh. "Development and Test of Computer Vision and Deep Learning Methods for Dynamic Management of Urban Lighting." Electronic thesis or dissertation, Université de Lorraine, 2023. http://www.theses.fr/2023LORR0272.
This doctoral thesis has been conducted within the framework of a research contract between the French urban lighting design and manufacturing company Eclatec and the Jean Lamour Institute in Nancy. The overarching goal of this research is to enhance nighttime urban lighting while simultaneously reducing electrical consumption and light pollution. To achieve this, an RGB camera is integrated into the streetlamp's light source, serving as the primary data collection point. This choice necessitated the use of a wide-angle lens with a slight vertical tilt in its axis. Although this configuration allows for the observation of a significant portion of the illuminated area, it results in highly distorted images. From this system, four major research challenges were investigated:

1. The first challenge concerns video detection of individuals in close proximity to the luminaire under very low lighting conditions, with the aim of achieving dynamic lighting adjustment. This detection relies on deep learning models from the YOLO family, fine-tuned through transfer learning on a specific collection of images captured at various locations in the Nancy metropolitan area at heights ranging from 6 to 8 meters. Under conditions of 10 lux illumination, an aperture of f/3.5, and a fixed sensitivity of ISO 3200, the detection rate for pedestrians and vehicles exceeds 97%. The model, implemented on the embedded NVIDIA Jetson Nano GPU, achieves a frame rate of approximately 10 FPS, which proves adequate for our application.

2. The second research direction explores the recognition of the environment surrounding the luminaire through semantic segmentation of images. This segmentation will subsequently be employed to adapt the light distribution of the LED matrix to the encountered urban scenario. To accomplish this, we employed the OCR-HRNet neural network, which enhances high-resolution segmentation by incorporating a contextual representation based on pixel aggregation. This architecture is well suited to images of non-uniform surfaces, characteristic of the ground beneath the luminaire. The results demonstrate excellent identification of structures and vegetated areas. However, the distinction between sidewalk and road remains challenging, particularly when road surfaces exhibit similar reflectance and textures. A post-image virtual marking solution significantly improves segmentation accuracy, especially in sunny scenes with numerous shadowed areas.

3. In a third phase, we modeled the optical system to enable the estimation of the real-world positions of ground points based on their images. A simple Cam-To-World transformation is proposed, accounting for the extrinsic parameters of the viewpoint (height, pitch, and resolution) and the lens distortion function, approximated by an equidistant projection law. Given that stringent precision is not critical, a rigorous system calibration was not conducted. For an effective observation zone of 20 m × 50 m, the localization error is on the order of meters.

4. Finally, we propose an avenue for utilizing the lighting infrastructure to analyze traffic flow fluidity. The proposed method analyzes the apparent motion of users by estimating the mean optical flow within each bounding box detected by YOLO. Currently, optical flow determination is performed offline using the deep learning algorithm FlowNet2. In the range of 0 to 15 m/s, the estimated speed of the moving object exhibits an error of less than 1 m/s.
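The Cam-To-World transformation described in point 3 can be sketched as follows. The mounting height, pitch, focal length, and principal point below are illustrative assumptions, not values published in the thesis; the sketch only shows the geometry of inverting an equidistant projection (r = f·θ) and intersecting the tilted viewing ray with the ground plane:

```python
import numpy as np

# Illustrative parameters -- not values published in the thesis.
H = 7.0                    # luminaire mounting height in metres (thesis range: 6-8 m)
PITCH = np.deg2rad(10.0)   # slight vertical tilt of the optical axis (assumed)
F = 400.0                  # equidistant focal length in pixels (assumed)
CX, CY = 960.0, 540.0      # principal point, assumed at the image centre

def pixel_to_ground(u, v):
    """Map an image pixel to ground-plane coordinates in metres.

    Undoes the wide-angle distortion with the equidistant law r = f * theta,
    tilts the viewing ray by the camera pitch, and intersects it with the
    ground plane directly below the luminaire.
    """
    dx, dy = u - CX, v - CY
    theta = np.hypot(dx, dy) / F      # angle off the optical axis (equidistant law)
    phi = np.arctan2(dy, dx)          # azimuth around the optical axis
    # Unit viewing ray in the camera frame (z along the optical axis)
    ray = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)])
    # Nadir-looking camera (optical axis pointing down), tilted by PITCH about x
    c, s = np.cos(PITCH), np.sin(PITCH)
    d = np.array([ray[0],
                  c * ray[1] + s * ray[2],
                  s * ray[1] - c * ray[2]])   # world direction, z up
    t = H / -d[2]                             # scale until the ray reaches z = 0
    return t * d[0], t * d[1]                 # (X, Y) on the ground, camera at origin
```

With zero pitch the principal point maps to the point directly under the camera; with a forward tilt it maps to H·tan(pitch) metres ahead, which matches the expected geometry.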
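The speed estimate in point 4 (mean optical flow inside each detected bounding box, scaled to metres per second) can be sketched as below. The ground-sampling factor and frame rate are illustrative assumptions; in the pipeline the abstract describes, the flow field would come from FlowNet2 rather than the synthetic array used here:

```python
import numpy as np

def mean_bbox_speed(flow, box, metres_per_px, fps):
    """Estimate object speed from dense optical flow inside a detection box.

    flow:           H x W x 2 array of per-pixel displacement in px/frame
    box:            (x0, y0, x1, y1) bounding box from the detector
    metres_per_px:  assumed ground-sampling distance at the object
    fps:            frame rate of the camera
    """
    x0, y0, x1, y1 = box
    patch = flow[y0:y1, x0:x1].reshape(-1, 2)
    du, dv = patch.mean(axis=0)                    # mean apparent motion (px/frame)
    return np.hypot(du, dv) * metres_per_px * fps  # metres per second
```

Averaging the flow over the whole box suppresses per-pixel flow noise, at the cost of mixing in background pixels at the box edges.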
"Self-Configuring and Self-Adaptive Environment Control Systems for Buildings." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.36025.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2015