
Journal articles on the topic 'Luminance camera'



Consult the top 50 journal articles for your research on the topic 'Luminance camera.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Vaaja, M. T., M. Maksimainen, M. Kurkela, J. P. Virtanen, T. Rantanen, and H. Hyyppä. "APPROACHES FOR MAPPING NIGHT-TIME ROAD ENVIRONMENT LIGHTING CONDITIONS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2020 (August 3, 2020): 199–205. http://dx.doi.org/10.5194/isprs-annals-v-1-2020-199-2020.

Abstract:
The integration of 3D measurement techniques with luminance imaging has increased the potential for mapping night-time road lighting conditions. In this study, we present selected static and mobile approaches for this purpose. The measurement methods include conventional 2D imaging luminance photometry and the integration of luminance imaging with terrestrial and mobile laser scanning. In addition, we present our initial experiences with performing integrated luminance mapping and photogrammetric reconstruction from drone imagery. All of the presented methods require that the camera is calibrated with a reference luminance source. We present the results of the luminance calibration and demonstrate the feasibility of 3D luminance point clouds for evaluating road surface luminance. In addition, we discuss other potential applications, limitations and future research.
2

Czyżewski, Dariusz, and Irena Fryc. "Luminance Calibration and Linearity Correction Method of Imaging Luminance Measurement Devices." Photonics Letters of Poland 13, no. 2 (June 30, 2021): 25. http://dx.doi.org/10.4302/plp.v13i2.1094.

Abstract:
This paper shows that the opto-electrical characteristic of a typical CCD-based digital camera is nonlinear: the digital signal of the camera's CCD detector is not a linear function of the luminance reaching the camera's lens. The opto-electrical characteristic of a digital camera needs to be transformed into a linear function if the camera is to be used as a luminance distribution measurement device, known as an Imaging Luminance Measurement Device (ILMD). The article presents the methodology for obtaining the opto-electrical characteristic of a typical CCD digital camera and focuses on the nonlinearity correction method.
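The linearization the authors describe can be prototyped in a few lines. The sketch below is our illustration, not the paper's code: it assumes paired calibration measurements (known luminances from a reference source and the mean digital values the camera reports for them), fits a power-law opto-electrical characteristic, and inverts it so that corrected values are proportional to luminance.

```python
import numpy as np

# Paired calibration data (hypothetical values): luminance of a reference
# source in cd/m^2 and the mean digital value the camera reports for it.
luminance = np.array([10, 20, 50, 100, 200, 400, 800], dtype=float)
digital_value = np.array([31, 48, 88, 134, 205, 318, 492], dtype=float)

# Fit a power-law opto-electrical characteristic DV = a * L**g by linear
# regression in log-log space.
g, log_a = np.polyfit(np.log(luminance), np.log(digital_value), 1)
a = np.exp(log_a)

def linearize(dv):
    """Invert the fitted response; the result is proportional to luminance."""
    return (np.asarray(dv, dtype=float) / a) ** (1.0 / g)

# Check: linearized camera values should track the reference luminances.
print(np.round(linearize(digital_value), 1))
print(luminance)
```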
3

Bierings, RAJM, and NM Jansonius. "Luminance and pedestrians’ perceived ability to see after dark: Mapping the Netherlands using a citizen science network of smartphone users." Lighting Research & Technology 51, no. 2 (February 14, 2018): 231–42. http://dx.doi.org/10.1177/1477153518758355.

Abstract:
We studied pedestrians’ perception of their ability to see when outside after dark, the luminance of the pavement after dark and the association between perception and luminance. These data were captured by a citizen science network of smartphone users, with and without an eye disease. They used an app to report their ability to see when outside after dark in their own neighbourhood and measured the luminance of the pavement using the smartphone camera. Logistic regression was used to determine the influence of luminance, age, gender and eye disease on reported ability to see after dark. Amongst those respondents who did not report an eye disease, 11% reported visual conditions they perceived to make walking difficult; this increased to 40% for pedestrians who reported an eye disease. The recorded luminances were typically 0.01–0.1 cd/m2. For those respondents with healthy eyes, the percentage reporting difficult visual conditions increased especially below 0.01 cd/m2; for those with an eye disease, the increase started at higher luminances, which may limit their mobility after dark.
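For readers who want to reproduce this type of analysis, a minimal logistic-regression sketch is shown below; the data are synthetic and the coefficients invented, so it only illustrates the model structure (reported difficulty regressed on log luminance, age, gender and eye disease), not the study's findings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic predictors: log10 pavement luminance (cd/m^2), age, gender, eye disease.
log_lum = rng.uniform(-3, -1, n)
age = rng.uniform(20, 80, n)
gender = rng.integers(0, 2, n)
eye_disease = rng.integers(0, 2, n)

# Synthetic outcome: difficulty is more likely at low luminance and with an
# eye disease (all coefficients are invented for the example).
logit = -4.0 - 1.5 * log_lum + 1.8 * eye_disease + 0.2 * gender + 0.01 * age
difficulty = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([log_lum, age, gender, eye_disease])
model = LogisticRegression(max_iter=1000).fit(X, difficulty)
print(dict(zip(["log10_luminance", "age", "gender", "eye_disease"], model.coef_[0])))
```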
4

G Varghese, Susan, Ciji Pearl Kurian, V. I. George, and T. S. Sudheer Kumar. "Control and evaluation of room interior lighting using digital camera as the sensor." International Journal of Engineering & Technology 7, no. 2.21 (April 20, 2018): 99. http://dx.doi.org/10.14419/ijet.v7i2.21.11844.

Abstract:
This paper reports the results of measurements performed in a test room to examine how a digital camera can be used as a luminance meter, and thus to investigate a lighting control scheme based on inputs from the camera. An indoor lighting control scheme that adapts to daylight availability is presented. A camera calibration procedure based on the High Dynamic Range imaging technique is used to obtain the camera response function, which relates pixel values obtained from the image to photopic luminance values. Luminance gradient evaluation for the uniformity analysis of the test room is also discussed.
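A rough illustration of the calibration step (our sketch, not the authors' implementation): OpenCV's Debevec routines recover a camera response function from a bracketed exposure series, and a single spot-luminance reading then scales the merged radiance map to absolute values. The file names, the measured patch and the meter reading are placeholders.

```python
import cv2
import numpy as np

# Bracketed exposures of the same static scene and their exposure times in
# seconds (file names and values are placeholders).
files = ["exp_1_60.jpg", "exp_1_15.jpg", "exp_1_4.jpg", "exp_1_1.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([1 / 60, 1 / 15, 1 / 4, 1.0], dtype=np.float32)

# Recover the camera response function and merge into a relative radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
radiance = cv2.createMergeDebevec().process(images, times, response)

# Relative luminance from the linear radiance map (OpenCV stores BGR).
rel_lum = 0.0722 * radiance[:, :, 0] + 0.7152 * radiance[:, :, 1] + 0.2126 * radiance[:, :, 2]

# Anchor to absolute luminance with one spot-meter reading of a known patch.
meter_reading_cd_m2 = 120.0            # placeholder reference value
patch = rel_lum[500:520, 700:720]      # pixel region seen by the spot meter
luminance_map = rel_lum * (meter_reading_cd_m2 / patch.mean())
```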
5

Zou, Feng, and Shan Chen. "Problems about the Evaluating System of the Pixel Luminance Uniformity of LED Display Using a Digital Camera." Applied Mechanics and Materials 103 (September 2011): 361–65. http://dx.doi.org/10.4028/www.scientific.net/amm.103.361.

Abstract:
Digital cameras are widely used in systems for evaluating the pixel luminance uniformity of LED displays. An evaluation system based on a digital camera (DC) has many benefits, but its measuring accuracy is unsatisfactory. In this paper, the contributing factors are examined in detail.
6

Kruisselbrink, TW, R. Dangol, ALP Rosemann, and EJ van Loenen. "Spectral tuning of luminance cameras: A theoretical model and validation measurements." Lighting Research & Technology 52, no. 5 (October 9, 2019): 654–74. http://dx.doi.org/10.1177/1477153519880231.

Abstract:
Presently, luminance distribution measurement devices, using High Dynamic Range technology, are increasingly used as they provide a lot of relevant data related to the lit environment at once. However, the accuracy of these devices can be a concern. It is expected that the accuracy would be improved by incorporating the effect of the camera spectral responsivity and the spectral power distribution of the illuminant under which the measurements are conducted. This study introduces two optimization criteria incorporating these aspects to improve the spectral match and the performance of luminance distribution measurement devices. Both criteria are tested in a theoretical model and in practical measurements using two cameras and three illuminants: LED, halogen and fluorescent. Both methodologies support the hypothesis that the conventional method to determine the luminance introduces spectral mismatches that can be limited by optimizing relative to the spectral responsivity of the camera. Additionally, substantial evidence was found, by both the theoretical model and the validation measurements, that the spectral power distribution of the illuminant also has an effect on the performance.
7

Mead, AR, and KM Mosalam. "Ubiquitous luminance sensing using the Raspberry Pi and Camera Module system." Lighting Research & Technology 49, no. 7 (May 13, 2016): 904–21. http://dx.doi.org/10.1177/1477153516649229.

Abstract:
In this paper, the authors have calibrated a Raspberry Pi and Camera Module (RPiCM) for use as an absolute luminance sensor. The spectral response of the RPiCM chip as well as a linear mapping to the standard CIE-XYZ colour space have been measured, calculated and presented. The luminance values are anchored to absolute luminance measurements. Further, by using high dynamic range imaging techniques making use of different shutter speeds in a sequence of images, the measurement of luminance values from approximately 10 to 50,000 cd/m2 is possible. Lens correction for vignetting is also addressed, while pixel point spreading is ignored. This measurement goes beyond a single point measurement, economically and accurately allowing each element of the RPiCM sensor array to act as an individual luminance meter over the entire field of view of the camera system. Applications and limitations of the embedded camera system are discussed. An EnergyPlus model of a simple one-room, one-window space is constructed as a motivational application and simulated for a year using weather files from around the world. These simulations highlight the need for spatial luminance-based sensing within the built environment to counteract the experience of discomfort glare by building occupants.
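The multi-shutter-speed idea can be sketched with plain numpy under the assumption of already-linearized sensor data (this is our simplification, not the RPiCM calibration itself): each frame is divided by its exposure time, unsaturated pixels are averaged into a relative radiance map, and one reference reading anchors the result in cd/m2.

```python
import numpy as np

def merge_exposures(frames, shutter_times, saturation=0.95, floor=0.02):
    """Combine linearized frames (scaled to 0-1) taken at different shutter
    speeds into one relative radiance map, skipping clipped or noisy pixels."""
    acc = np.zeros_like(frames[0], dtype=float)
    weight = np.zeros_like(frames[0], dtype=float)
    for frame, t in zip(frames, shutter_times):
        valid = (frame < saturation) & (frame > floor)
        acc[valid] += frame[valid] / t
        weight[valid] += 1.0
    return acc / np.maximum(weight, 1.0)

# Hypothetical three-exposure stack of the luminance channel.
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, (480, 640))
times = [1 / 500, 1 / 60, 1 / 8]
stack = [np.clip(scene * t * 400, 0, 1) for t in times]

relative = merge_exposures(stack, times)
# Anchor with one absolute reading (placeholder) of a known pixel region.
calib_factor = 850.0 / relative[240:250, 320:330].mean()   # cd/m^2 per unit
luminance_cd_m2 = relative * calib_factor
```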
8

Zhang, Jian Chuan, Wu Bin Li, and Chang Hou Lu. "Design of Automatic Detection Device for Steel Bar Surface Defects." Advanced Materials Research 532-533 (June 2012): 390–93. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.390.

Abstract:
To inspect the surface quality of steel bars, we designed an automatic system consisting of a linear camera and a laser. After comparing several kinds of cameras, we selected a linear CCD for the system. The laser was likewise chosen for its high luminance and performance. Through a series of computations, we selected the appropriate camera lens for the device. Finally, we drew up the complete detection system. The device has performed well in use and provides a good foundation for subsequent image processing.
9

Li, Wen Juan, Chuan Liang, Ning Zhi Jin, and Mei Lan Zhou. "Modulation Transfer Function Measurement System at Different Contrasts for CCD Camera." Applied Mechanics and Materials 241-244 (December 2012): 648–52. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.648.

Abstract:
In order to obtain MTFs at different contrasts, an MTF measurement system was designed and developed. Two integrating spheres are used to illuminate the face and back of the test target uniformly. The target luminance and background luminance of the test target are regulated by adjusting the attenuators near the entrance of each integrating sphere. Experimental results from many groups of measurements indicate that the differences between the luminance values given by the system and those given by a first-level luminance meter, checked by the National Institute of Metrology, P. R. China, are within ±0.3 cd/m2. Thereby the measurement precision of the MTF test can be ensured. MTFs of a Sony camera and a Canon camera at different contrasts were measured with this system. The measurement values show that MTFs at different contrasts can demonstrate the imaging quality fully and objectively. This study provides an effective method to evaluate the imaging quality of visible imaging systems.
10

Hong, Sung De, and Jeong Tai Kim. "A Construction of Online Luminance Analysis Service Website." International Journal of Sustainable Lighting 17 (June 2, 2017): 21–26. http://dx.doi.org/10.26607/ijsl.v17i0.13.

Abstract:
In February 2012, the Light Pollution Prevention Act was enacted and implemented in Korea. Under the new law, architectural exterior lighting and advertisement lighting have to observe the permissible light emission levels set by the enforcement ordinance of the law. In accordance with Article 12 of the law, the businesses that are legally required to observe the permissible light emission levels should check, through luminance measurement, whether their installations satisfy the legal requirements. Measuring the luminance of architectural exterior lighting and advertisement lighting requires expensive equipment. For many small businesses, however, the measuring equipment is too expensive, and this kind of cost burden may cause resistance against the Light Pollution Prevention Act. In addition, the businesses often lack expertise relating to the measurement of luminance. Therefore, this study proposed a concept of online luminance analysis services for the general public, including the management and employees of the related industries. The online luminance analysis service adopts a method of analyzing luminance from HDR-imaging videos taken with a common DSLR camera. In addition, a website to provide the online services was constructed. The online luminance analysis service process proposed in this study includes the following steps: First, a website user should learn and follow the capture procedure and camera settings instructed on the website. Second, the architectural exterior lighting and advertisement lighting should be captured using a digital camera. Third, the captured image files have to be uploaded to the website. Fourth, the uploaded image files are analyzed by the website manager. Fifth, the analysis results are provided to the website users.
11

Rusu, Alexandru Viorel, Catalin Daniel Galatanu, Gheorghe Livint, and Dorin Dumitru Lucache. "Measuring Average Luminance for Road Lighting from Outside the Carriageway with Imaging Sensor." Sustainability 13, no. 16 (August 12, 2021): 9029. http://dx.doi.org/10.3390/su13169029.

Abstract:
The main quality criterion in street lighting is the luminance distribution. According to the literature, average luminance is the most important parameter to check. The standard BS EN 13201-3 requires that average luminance be calculated for an observer placed in the center of each circulating lane. As a consequence, according to these standards, the measurements can be done only on streets without traffic, and stopping the traffic on all lanes is very difficult. This paper proposes a solution for measuring the average luminance from outside the carriageway. The research was performed by simulations and calculations and was validated by field measurements. Imaging sensors were used to measure average luminance, while DIALux EVO 9.1 was used for the simulations. For symmetrical, opposite, and staggered lighting arrangements, average luminance measurements performed with a digital camera positioned outside of the traffic area, with the equipment placed at the edge of the carriageway, gave results similar to the standard measurements, with almost no difference. For single-sided lighting arrangements, the differences became unacceptable. In this case, the paper proposes a correction function to calculate the average luminance for the observer placed on the carriageway, based on measurements with a digital camera placed outside the traffic area.
12

Preciado, O. U., A. Décima, and J. B. Barraza. "PHOTOMETRIC CHARACTERIZATION OF A COMMERCIAL DSLR CAMERA FOR THE IMPLEMENTATION OF AN IMAGING LUMINANCE METER." Anales AFA 31, no. 4 (January 15, 2021): 143–49. http://dx.doi.org/10.31527/analesafa.2020.31.4.143.

Abstract:
In this paper we describe the procedure followed in the photometric characterization of a DSLR camera in order to implement an imaging luminance meter. The first step consisted of the experimental setup of a system to obtain the spectral response curves of the CMOS sensor for its three channels: red (R), green (G) and blue (B). Then, based on a linear combination of the RGB channel curves, we calculated an approximation of the CIE luminous efficiency function, V(λ), for the camera. We then characterized the camera lens, which involved measuring its spectral transmittance and evaluating the uniformity of the lens-sensor assembly to compensate for the loss of sensitivity at the image periphery (vignetting). Finally, we performed an absolute calibration in luminance and carried out a pilot test to create high dynamic range (HDR) images and luminance maps of a scene. The favourable results of the pilot test augur a successful implementation of the imaging luminance meter; however, it is still necessary to complete the development of the image-processing software and to perform more tests in order to validate its use in different situations or to establish the restrictions on its use.
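The linear-combination step reduces to a small least-squares problem. In the sketch below (ours), the measured R, G and B spectral responses and the CIE V(λ) curve are replaced by Gaussian stand-ins so the example runs without tabulated data; in a real characterization they would come from monochromator measurements and the published CIE table.

```python
import numpy as np

wl = np.arange(380, 781, 5, dtype=float)   # wavelength grid in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Stand-ins for the measured channel responses and the CIE V(lambda) curve.
R, G, B = gauss(600, 45), gauss(540, 45), gauss(460, 35)
V = gauss(555, 45)

# Solve min_w || [R G B] w - V ||^2 for the channel weights.
A = np.column_stack([R, G, B])
w, *_ = np.linalg.lstsq(A, V, rcond=None)
print("channel weights:", np.round(w, 3))

# Residual spectral mismatch after the fit (smaller is better).
print("relative RMS mismatch:", np.linalg.norm(A @ w - V) / np.linalg.norm(V))
```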
13

Kurkela, Matti, Mikko Maksimainen, Arttu Julin, Toni Rantanen, Juho-Pekka Virtanen, Juha Hyyppä, Matti Tapio Vaaja, and Hannu Hyyppä. "Utilizing a Terrestrial Laser Scanner for 3D Luminance Measurement of Indoor Environments." Journal of Imaging 7, no. 5 (May 10, 2021): 85. http://dx.doi.org/10.3390/jimaging7050085.

Abstract:
We aim to present a method to measure 3D luminance point clouds by applying the integrated high dynamic range (HDR) panoramic camera system of a terrestrial laser scanning (TLS) instrument for performing luminance measurements simultaneously with laser scanning. We present the luminance calibration of a laser scanner and assess the accuracy, color measurement properties, and dynamic range of luminance measurement achieved in the laboratory environment. In addition, we demonstrate the 3D luminance measuring process through a case study with a luminance-calibrated laser scanner. The presented method can be utilized directly as the luminance data source. A terrestrial laser scanner can be prepared, characterized, and calibrated to apply it to the simultaneous measurement of both geometry and luminance. We discuss the state and limitations of contemporary TLS technology for luminance measuring.
14

Uozumi, Takuji, Tomoko Ueno, and Kohji Kawakami. "Measurement of luminance distribution by Digital video camera." JOURNAL OF THE ILLUMINATING ENGINEERING INSTITUTE OF JAPAN 84, Appendix (2000): 240–41. http://dx.doi.org/10.2150/jieij1980.84.appendix_240.

15

Gbologah, Franklin E., Angshuman Guin, Roger Purcell, and Michael O. Rodgers. "Calibration of a Digital Camera for Rapid Auditing of In Situ Intersection Illumination." Transportation Research Record: Journal of the Transportation Research Board 2617, no. 1 (January 2017): 35–43. http://dx.doi.org/10.3141/2617-05.

Abstract:
The regular auditing of installed roadway lighting performance is essential in ensuring that in situ light levels are within design specifications despite the effects of lamp deterioration or changes in roadway functional class. However, existing guidelines for measuring roadway lighting performance are tedious and often impractical for transportation agencies and municipalities, which are already faced with time and resource constraints. A method for calibrating a digital single lens reflex camera for rapid assessment of illumination levels at roadway intersections is developed in this paper. The method uses an image analysis approach to extract pixel information in a digital image and link it to the scene luminance. It uses high-precision light meters to perform an initial calibration of the digital camera that has proved to be stable over long periods. The method was tested with field data, and the results indicate that average scene luminance derived from this method differs by less than 4% from the average observed scene luminance captured by high-precision luminance meters involving a rigorous field measurement methodology. The methodology developed in this study offers transportation agencies and municipalities a rapid, inexpensive, and efficient method for auditing the adequacy of roadway illumination.
16

Kaiho, Koichi, and Katashi Matsunawa. "Study on Measurement of Luminance by Digital Still Camera." JOURNAL OF THE ILLUMINATING ENGINEERING INSTITUTE OF JAPAN 79, Appendix (1995): 263–64. http://dx.doi.org/10.2150/jieij1980.79.appendix_263.

17

Kruisselbrink, Thijs, Myriam Aries, and Alexander Rosemann. "A Practical Device for Measuring the Luminance Distribution." International Journal of Sustainable Lighting 19, no. 1 (June 29, 2017): 75–90. http://dx.doi.org/10.26607/ijsl.v19i1.76.

Abstract:
Various applications in building lighting, such as automated daylight systems, dynamic lighting control systems, lighting simulations, and glare analyses, can be optimized using information on the actual luminance distributions of the surroundings. Currently, commercially available luminance distribution measurement devices are often not suitable for these kinds of applications or simply too expensive for broad application. This paper describes the development of a practical and autonomous luminance distribution measurement device based on a credit card-sized single-board computer and a camera system. The luminance distribution was determined by capturing High Dynamic Range images and translating the RGB information to the CIE XYZ color space. The High Dynamic Range technology was essential to accurately capture the data needed to calculate the luminance distribution because it allows the luminance ranges occurring in real scenarios to be captured. The measurement results were represented in accordance with established methods in the field of daylighting. Measurements showed that the accuracy of the luminance distribution measurement device ranged from 5% to 20% (worst case), which was deemed acceptable for practical measurements and broad applications in the building realm.
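The RGB-to-XYZ translation mentioned here is, in its simplest form, a fixed 3x3 matrix applied to linear RGB, with the Y row giving relative luminance and a calibration factor converting it to cd/m2. A minimal sketch (ours), assuming standard sRGB primaries rather than the device's measured matrix:

```python
import numpy as np

# Linear sRGB to CIE XYZ matrix (D65 white point).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def luminance_from_linear_rgb(rgb, calibration_factor=1.0):
    """rgb: (..., 3) array of linear RGB values from the merged HDR image.
    Returns luminance in the units set by calibration_factor (cd/m^2 per unit Y)."""
    Y = rgb @ M[1]                     # the second matrix row is the Y (luminance) row
    return calibration_factor * Y

# Example: a mid-grey pixel and a bright bluish pixel from an HDR capture,
# with a made-up calibration factor.
pixels = np.array([[0.2, 0.2, 0.2],
                   [0.1, 0.2, 0.9]])
print(luminance_from_linear_rgb(pixels, calibration_factor=179.0))
```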
18

Uetani, Yoshiaki. "Measurement of Luminance and Color Coordinates by a Digital Camera." JOURNAL OF THE ILLUMINATING ENGINEERING INSTITUTE OF JAPAN 82, Appendix (1998): 128–29. http://dx.doi.org/10.2150/jieij1980.82.appendix_128.

19

Moore, T., H. Graves, M. J. Perry, and D. J. Carter. "Approximate field measurement of surface luminance using a digital camera." Lighting Research and Technology 32, no. 1 (January 1, 2000): 1–11. http://dx.doi.org/10.1177/096032710003200101.

20

Tohsing, Korntip, Michael Schrempf, Stefan Riechelmann, Holger Schilke, and Gunther Seckmeyer. "Measuring high-resolution sky luminance distributions with a CCD camera." Applied Optics 52, no. 8 (March 5, 2013): 1564. http://dx.doi.org/10.1364/ao.52.001564.

21

Kolláth, Zoltán, and Anita Dömény. "Night sky quality monitoring in existing and planned dark sky parks by digital cameras." International Journal of Sustainable Lighting 19, no. 1 (June 28, 2017): 61–68. http://dx.doi.org/10.26607/ijsl.v19i1.70.

Abstract:
A crucial part of the qualification of international dark sky places (IDSPs) is the objective measurement of night time sky luminance or radiance. Modern digital cameras provide an alternative way to perform all sky imaging either by a fisheye lens or by a mosaic image taken by a wide angle lens. Here we present a method for processing raw camera images to obtain calibrated measurements of sky quality. The comparison of the night sky quality of different European locations is also presented to demonstrate the use of our technique.
22

Maksimainen, Mikko, Matti T. Vaaja, Matti Kurkela, Juho-Pekka Virtanen, Arttu Julin, Kaisa Jaalama, and Hannu Hyyppä. "Nighttime Mobile Laser Scanning and 3D Luminance Measurement: Verifying the Outcome of Roadside Tree Pruning with Mobile Measurement of the Road Environment." ISPRS International Journal of Geo-Information 9, no. 7 (July 19, 2020): 455. http://dx.doi.org/10.3390/ijgi9070455.

Abstract:
Roadside vegetation can affect the performance of installed road lighting. We demonstrate a workflow in which a car-mounted measurement system is used to assess the light-obstructing effect of roadside vegetation. The mobile mapping system (MMS) includes a panoramic camera system, laser scanner, inertial measurement unit, and satellite positioning system. The workflow and the measurement system were applied to a road section of Munkkiniemenranta, Helsinki, Finland, in 2015 and 2019. The relative luminance distribution on a road surface and the obstructing vegetation were measured before and after roadside vegetation pruning applying a luminance-calibrated mobile mapping system. The difference between the two measurements is presented, and the opportunities provided by the mobile 3D luminance measurement system are discussed.
23

Budak, Vladimir P., Viсtor S. Zheltov, Tatyana V. Meshkova, and Victor D. Chembaev. "Experimental Study of the New Criterion of Lighting Quality Based on Analysis of Luminance Distribution at Moscow Metro Stations." Light & Engineering, no. 03-2020 (June 2020): 98–105. http://dx.doi.org/10.33383/2019-044.

Abstract:
The article presents an experiment studying a new criterion of lighting quality, based on the spatial and angular distribution of luminance, proposed by the Lighting Engineering sub-department of NIU MPEI. The experiment examines the correlation between expert evaluations of lighting quality at 21 stations of the Moscow Metro and an analysis, based on the proposed criterion, of RAW-format luminance photographs of the stations taken with a camera and adjusted according to luminance measured with a luminance meter. The obtained photographs were processed using the proposed criterion. The article presents the design of the station models and the calculations made by means of DIALux software and the programme developed (as part of the work) on the basis of local evaluations. It is demonstrated that the proposed criterion allows us to take account of extended veiling reflections and may be considered an enhancement of the unified glare rating (UGR).
24

KIMURA, Hitoshi, and Tarow NOGUCHI. "MEASUREMENT OF EFFECTIVE LUMINANCE IN ACTUAL VISUAL FIELD USING DIGITAL CAMERA." Journal of Architecture and Planning (Transactions of AIJ) 67, no. 551 (2002): 23–27. http://dx.doi.org/10.3130/aija.67.23_1.

25

QUAN Xian-rong, LI Xian-sheng, LIU Ze-xun, YE Zhao, and WANG Zhi. "Nonuniformity Correction of TDI CCD Camera Based on Radiation Luminance Revises." Chinese Journal of Liquid Crystals and Displays 26, no. 3 (2011): 379–83. http://dx.doi.org/10.3788/yjyxs20112603.0379.

26

Noguchi, Tarow. "Utilization of a video camera for the measurements of luminance distribution." JOURNAL OF THE ILLUMINATING ENGINEERING INSTITUTE OF JAPAN 78, Appendix (1994): 318. http://dx.doi.org/10.2150/jieij1980.78.appendix_318.

27

Kurkela, Matti, Mikko Maksimainen, Matti Vaaja, Juho-Pekka Virtanen, Antero Kukko, Juha Hyyppä, and Hannu Hyyppä. "Camera preparation and performance for 3D luminance mapping of road environments." Photogrammetric Journal of Finland 25, no. 2 (2017): 1–23. http://dx.doi.org/10.17690/017252.1.

28

Dev, Soumyabrata, Florian M. Savoy, Yee Hui Lee, and Stefan Winkler. "High-dynamic-range imaging for cloud segmentation." Atmospheric Measurement Techniques 11, no. 4 (April 11, 2018): 2041–49. http://dx.doi.org/10.5194/amt-11-2041-2018.

Abstract:
Sky–cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg – an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
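Multi-exposure fusion of this kind can be prototyped with OpenCV's Mertens fusion, which blends a bracketed series without exposure times; the sketch below is a generic illustration (ours), not HDRCloudSeg itself, and the file names are placeholders.

```python
import cv2
import numpy as np

# Bracketed sky images from a ground-based camera (placeholder file names).
files = ["sky_under.jpg", "sky_mid.jpg", "sky_over.jpg"]
stack = [cv2.imread(f) for f in files]

# Mertens exposure fusion weights each pixel by contrast, saturation and
# well-exposedness, so the circumsolar region and the horizon both keep detail.
fused = cv2.createMergeMertens().process(stack)          # float32, roughly 0-1
cv2.imwrite("sky_fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```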
29

Atif, M., Z. H. Khand, S. Khan, F. Akhtar, and A. Rajput. "Storage Optimization using Adaptive Thresholding Motion Detection." Engineering, Technology & Applied Science Research 11, no. 2 (April 11, 2021): 6869–72. http://dx.doi.org/10.48084/etasr.3951.

Abstract:
Data storage is always an issue, especially for video data from CCTV cameras, which require huge amounts of storage. Moreover, monitoring past events is a laborious task. This paper proposes a motion detection method that requires fewer calculations and reduces the required data storage by up to 70%, as it stores only the informative frames, enabling security personnel to retrieve the required information more quickly. The proposed method utilizes a histogram-based adaptive threshold for motion detection, and therefore it can work in variable luminance conditions. The proposed method can be applied to streamed frames of any CCTV camera to efficiently store and retrieve informative frames.
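One plausible reading of the histogram-based adaptive threshold is sketched below (our interpretation, not the authors' code): the absolute difference between consecutive greyscale frames is thresholded with Otsu's histogram-derived value, and a frame counts as informative only if enough pixels changed. The video source and the changed-fraction cut-off are placeholders.

```python
import cv2
import numpy as np

def is_informative(prev_gray, curr_gray, min_changed_fraction=0.01):
    """True if the current frame differs enough from the previous one."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    # Otsu derives the threshold from the histogram of the difference image,
    # so the decision adapts to the current luminance conditions.
    _, motion = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return np.count_nonzero(motion) / motion.size > min_changed_fraction

cap = cv2.VideoCapture("cctv_stream.mp4")          # placeholder source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kept = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if is_informative(prev_gray, gray):
        kept += 1                                  # a real system would write the frame out
    prev_gray = gray
print("informative frames kept:", kept)
```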
30

Eisemann, Leon, Jan Froehlich, Axel Hartz, and Johannes Maucher. "Expanding dynamic range in a single-shot image through a sparse grid of low exposure pixels." Electronic Imaging 2020, no. 7 (January 26, 2020): 229–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.7.iss-229.

Abstract:
Camera sensors are physically restricted in the amount of luminance which can be captured at once. To achieve a higher dynamic range, multiple exposures are typically combined. This method comes with several disadvantages, like temporal or alignment aliasing. Hence, we propose a method to preserve high luminance information in a single-shot image. By introducing a grid of highlight-preserving pixels, which equals 1% of the total number of pixels, we are able to sustain information directly in-camera for later processing. To provide evidence that this number of pixels is enough for gaining additional dynamic range, we use a U-Net for reconstruction. For training, we make use of the HDR+ dataset, which we augment to simulate our proposed grid. We demonstrate that our approach can preserve high luminance information, which can be used for a visually convincing reconstruction close to the ground truth.
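The grid augmentation can be simulated on linear HDR frames for training; the sketch below is our guess at one straightforward way to do it (every tenth pixel in both directions, i.e. 1% of pixels, captured at a shorter exposure), not the paper's exact procedure.

```python
import numpy as np

def add_low_exposure_grid(hdr_frame, stride=10, exposure_ratio=1 / 16):
    """Simulate a sparse grid of highlight-preserving pixels on a linear frame.

    Every stride-th pixel in both directions (1 in stride**2, i.e. 1% for
    stride=10) is replaced by the value it would have at a shorter exposure."""
    out = hdr_frame.copy()
    out[::stride, ::stride, ...] = hdr_frame[::stride, ::stride, ...] * exposure_ratio
    return out

rng = np.random.default_rng(0)
frame = rng.uniform(0, 1, (256, 256, 3))           # stand-in for an HDR+ frame
augmented = add_low_exposure_grid(frame)
# Grid pixels are darker by the exposure ratio; all other pixels are untouched.
print(augmented[::10, ::10].mean() / frame[::10, ::10].mean())
```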
31

Duque, Carlos Andres Arango, Mekides Assefa Abebe, Muhammad Shahid, and Jon Yngve Hardeberg. "Color and Quality Enhancement of Videoconferencing Whiteboards." Journal of Perceptual Imaging 1, no. 1 (January 1, 2018): 10504–1. http://dx.doi.org/10.2352/j.percept.imaging.2018.1.1.010504.

Abstract:
Abstract Whiteboards are commonly used as a medium of instant illustration of ideas during several activities including presentations, lectures, meetings, and related others through videoconferencing systems. However, the acquisition of whiteboard contents is inhibited by issues inherent to the camera technologies, the whiteboard glossy surfaces along with other environmental issues such as room lighting or camera positioning. The contents of whiteboards are mostly invisible due to the low luminance contrast and other related color degradation problems. This article presents an account of a work aimed at extracting the whiteboard image and consequently enhancing its perceptual quality and legibility. Two different methods based on color balancing and color warping are introduced to improve the global and local luminance contrast as well as color saturation of the contents. The methods are implemented based on different general models of the videoconferencing environment for avoiding color shifts and unnaturalness of results. Our evaluations, through psycho-visual experiments, reveal the significance of the proposed method’s improvements over the state of the art methods in terms of visual quality and visibility.
32

Chun, Chanjun, Taehee Lee, Sungil Kwon, and Seung-Ki Ryu. "Classification and Segmentation of Longitudinal Road Marking Using Convolutional Neural Networks for Dynamic Retroreflection Estimation." Sensors 20, no. 19 (September 28, 2020): 5560. http://dx.doi.org/10.3390/s20195560.

Abstract:
Road markings constitute one of the most important elements of the road. Moreover, they are managed according to specific standards, including a criterion for a luminous contrast, which can be referred to as retroreflection. Retroreflection can be used to measure the reflection properties of road markings or other road facilities. It is essential to manage retroreflection in order to improve road safety and sustainability. In this study, we propose a dynamic retroreflection estimation method for longitudinal road markings, which employs a luminance camera and convolutional neural networks (CNNs). The images that were captured by a luminance camera were input into a classification and regression CNN model in order to determine whether the longitudinal road marking was accurately acquired. A segmentation model was also developed and implemented in order to accurately present the longitudinal road marking and reference plate if a longitudinal road marking was determined to exist in the captured image. The retroreflection was dynamically measured as a driver drove along an actual road; consequently, the effectiveness of the proposed method was demonstrated.
33

Hatakeyama, Yutaka, Akimichi Mitsuta, and Kaoru Hirota. "Detection Algorithm for Real Surveillance Cameras Using Geographic Information." Journal of Advanced Computational Intelligence and Intelligent Informatics 12, no. 1 (January 20, 2008): 4–9. http://dx.doi.org/10.20965/jaciii.2008.p0004.

Abstract:
A pedestrian detection algorithm is proposed for real surveillance systems, based on color similarity in dynamic color images under low illumination, where the proposed color similarity is defined by color change vectors in the L*a*b* color metric space and the time taken by pedestrians to pass between surveillance cameras. It provides continuous detection results across surveillance cameras under low luminance conditions in a real surveillance system. Experimental results for dynamic images taken under low illumination in streets show that the number of frames detected with the proposed algorithm increased by 20% compared to detection results without geographic information. The proposed algorithm is being considered for use in poorly secured areas of downtown Japan.
34

Chujo, Wataru, and Masahiro Kinoshita. "Rolling-shutter-based 16-QAM optical camera communication by spatial luminance distribution." IEICE Communications Express 8, no. 12 (2019): 566–71. http://dx.doi.org/10.1587/comex.2019gcl0055.

35

Glenn, Johnathan, G. Dodds, and R. Robinson. "Practical Limitations and Measurements for Camera Based Road Luminance/Lighting Standards Assessment." Journal of the Illuminating Engineering Society 28, no. 1 (January 1999): 64–70. http://dx.doi.org/10.1080/00994480.1999.10748253.

36

Hsu, Hsiang-Han, Yu-Ping Lan, and Chieh-Hung Lai. "P-84: Luminance and Color Measurement for LED Backlights with a Hyperspectral Camera." SID Symposium Digest of Technical Papers 39, no. 1 (2008): 1503. http://dx.doi.org/10.1889/1.3069441.

37

Kordecki, Andrzej, Henryk Palus, and Artur Bal. "Practical vignetting correction method for digital camera with measurement of surface luminance distribution." Signal, Image and Video Processing 10, no. 8 (July 21, 2016): 1417–24. http://dx.doi.org/10.1007/s11760-016-0941-2.

38

Dong, Xuan, Weixin Li, Xiaojie Wang, and Yunhong Wang. "Cycle-CNN for Colorization towards Real Monochrome-Color Camera Systems." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10721–28. http://dx.doi.org/10.1609/aaai.v34i07.6700.

Abstract:
Colorization in monochrome-color camera systems aims to colorize the gray image IG from the monochrome camera using the color image RC from the color camera as reference. Since monochrome cameras have better imaging quality than color cameras, the colorization can help obtain higher quality color images. Related learning based methods usually simulate the monochrome-color camera systems to generate the synthesized data for training, due to the lack of ground-truth color information of the gray image in the real data. However, the methods that are trained relying on the synthesized data may get poor results when colorizing real data, because the synthesized data may deviate from the real data. We present a new CNN model, named cycle CNN, which can directly use the real data from monochrome-color camera systems for training. In detail, we use the colorization CNN model to do the colorization twice. First, we colorize IG using RC as reference to obtain the first-time colorization result IC. Second, we colorize the de-colored map of RC, i.e. RG, using the first-time colorization result IC as reference to obtain the second-time colorization result R′C. In this way, for the second-time colorization result R′C, we use the original color map RC as ground-truth and introduce the cycle consistency loss to push R′C ≈ RC. Also, for the first-time colorization result IC, we propose a structure similarity loss to encourage the luminance maps between IG and IC to have similar structures. In addition, we introduce a spatial smoothness loss within the colorization CNN model to encourage spatial smoothness of the colorization result. Combining all these losses, we could train the colorization CNN model using the real data in the absence of the ground-truth color information of IG. Experimental results show that we can outperform related methods largely for colorizing real data.
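The combination of losses described above can be written out compactly. The sketch below uses plain numpy stand-ins instead of a differentiable framework, and a trivial placeholder in place of the colorization CNN, so it only shows which quantities are compared, not a trainable implementation.

```python
import numpy as np

def decolor(rgb):
    """Luminance map of an RGB image (Rec. 709 weights)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def colorize(gray, ref_color):
    """Placeholder for the colorization CNN: tints the gray image with the
    reference's mean chroma, just so the example runs end to end."""
    mean_rgb = ref_color.reshape(-1, 3).mean(axis=0)
    return np.clip(gray[..., None] * mean_rgb / max(decolor(mean_rgb), 1e-6), 0, 1)

def losses(I_G, R_C):
    I_C = colorize(I_G, R_C)        # first pass: colorize the mono image
    R_G = decolor(R_C)              # de-colored reference
    R_C2 = colorize(R_G, I_C)       # second pass, using I_C as the reference
    cycle = np.mean(np.abs(R_C2 - R_C))                 # cycle consistency
    structure = np.mean(np.abs(decolor(I_C) - I_G))     # luminance/structure term (simplified to L1)
    smooth = np.mean(np.abs(np.diff(I_C, axis=0))) + np.mean(np.abs(np.diff(I_C, axis=1)))
    return cycle, structure, smooth

rng = np.random.default_rng(0)
gray = rng.uniform(0, 1, (64, 64))
ref = rng.uniform(0, 1, (64, 64, 3))
print(losses(gray, ref))
```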
39

Ramanujam, E., and S. Padmavathi. "Real time fall detection using infrared cameras and reflective tapes under day/night luminance." Journal of Ambient Intelligence and Smart Environments 13, no. 4 (July 21, 2021): 285–300. http://dx.doi.org/10.3233/ais-210605.

Abstract:
Falls are the leading cause of injuries and death in elderly individuals who live alone at home. The core service of assistive living technology is to monitor elders’ activities through wearable devices, ambient sensors, and vision systems. Vision systems are among the best solutions, as their implementation and maintenance costs are the lowest. However, current vision systems are limited in their ability to handle cluttered environments, occlusion, illumination changes throughout the day, and monitoring without illumination. To overcome these issues, this paper proposes a 24/7 monitoring system for elders that uses retroreflective tape fabricated as part of conventional clothing, monitored through low-cost infrared (IR) cameras fixed in the living environment. IR camera records video even when there are changes in illumination or zero luminance. For classification among clutter and occlusion, the tape is considered as a blob instead of a human silhouette; the orientation angle, fitted through ellipse modeling, of the blob in each frame allows classification that detects falls without pretrained data. System performance was tested using subjects in various age groups and “fall” or “non-fall” were detected with 99.01% accuracy.
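A minimal sketch (ours) of the blob-orientation idea: the bright retroreflective tape is thresholded in an IR frame, the largest contour is fitted with an ellipse, and a roughly horizontal major axis is flagged as a potential fall. The threshold, the angle band and the input file are illustrative assumptions.

```python
import cv2

def tape_orientation(ir_gray, brightness_threshold=200):
    """Orientation angle (degrees) of the largest bright blob, or None."""
    _, mask = cv2.threshold(ir_gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if len(c) >= 5]      # fitEllipse needs >= 5 points
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.fitEllipse(blob)
    return angle

def is_fall(angle, horizontal_band=(65, 115)):
    """Flag a roughly horizontal tape axis as a potential fall; the band is
    illustrative and the fitEllipse angle convention should be checked against
    the actual camera mounting."""
    return angle is not None and horizontal_band[0] < angle < horizontal_band[1]

frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder input
print(is_fall(tape_orientation(frame)))
```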
40

Masuta, Hiroyuki, and Naoyuki Kubota. "An Integrated Perceptual System of Different Perceptual Elements for an Intelligent Robot." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 7 (November 20, 2010): 770–75. http://dx.doi.org/10.20965/jaciii.2010.p0770.

Abstract:
This paper discusses an integrated perceptual system for intelligent robots. Robots should be able to perceive environments flexibly enough to realize intelligent behavior. We focus on a perceptual system based on the perceiving-acting cycle discussed in ecological psychology. The perceptual system we propose consists of a retinal model and a spiking neural network realizing the perceiving-acting cycle concept. We apply our proposal to a robot arm with a three-dimensional (3D) range camera. We verified the feasibility of the perceptual system using a single input such as depth or luminance information. Our proposal integrates different perceptual elements to improve the accuracy of perception. Experimental results showed that our proposal perceives the targeted dish accurately by integrating different perceptual elements using the 3D range camera.
41

Rossini, Elton G., and Arno Krenzinger. "Maps of sky relative radiance and luminance distributions acquired with a monochromatic CCD camera." Solar Energy 81, no. 11 (November 2007): 1323–32. http://dx.doi.org/10.1016/j.solener.2007.06.013.

42

Akimoto, Manabu, Hideki Yamaguchi, and Hiroyuki Shinoda. "Calibration of Illumination Loss to Measure Luminance and Chromaticity Distribution by Using Digital Camera." i-Perception 2, no. 4 (May 2011): 379. http://dx.doi.org/10.1068/ic379.

43

Hatakeyama, Yutaka, Kazuhiko Kawamoto, Hajime Nobuhara, Shin-ichi Yoshida, and Kaoru Hirota. "Color Instance-Based Reasoning and its Application to Dynamic Image Restoration Under Low Luminance Conditions." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 6 (November 20, 2004): 639–48. http://dx.doi.org/10.20965/jaciii.2004.p0639.

Abstract:
Color instance-based reasoning is applied to dynamic image restoration under low luminance conditions by introducing color change vectors. The color change vectors are constructed based on image measurement, i.e., no optical lighting models are required. This property enables us to deal with low luminance conditions, under which it is difficult to capture color information using traditional optics-based methods. The proposed algorithm restores color values by adaptively modifying the color change vectors in response to a given dynamic image. Experiments are done with still and dynamic images to compare the proposed algorithm with the conventional one in terms of color difference. The experimental results show that the proposed algorithm decreases the color difference by up to 20% compared to the conventional algorithm. The proposed algorithm provides a foundation for identifying a person with a low-cost CCD camera in practical security systems.
44

Troscianko, T., C. A. Parraga, G. Brelstaff, D. Carr, and K. Nelson. "Spatio-Chromatic Information Content of Natural Scenes." Perception 25, no. 1_suppl (August 1996): 162. http://dx.doi.org/10.1068/v96l1009.

Abstract:
A common assumption in the study of the relationship between human vision and the visual environment is that human vision has developed in order to encode the incident information in an optimal manner. Such arguments have been used to support the 1/f dependence of scene content as a function of spatial frequency. In keeping with this assumption, we ask whether there are any important differences between the luminance and (r/g) chrominance Fourier spectra of natural scenes, the simple expectation being that the chrominance spectrum should be relatively richer in low spatial frequencies than the luminance spectrum, to correspond with the different shape of luminance and chrominance contrast sensitivity functions. We analysed a data set of 29 images of natural scenes (predominantly of vegetation at different distances) which were obtained with a hyper-spectral camera (measuring the scene through a set of 31 wavelength bands in the range 400 – 700 nm). The images were transformed to the three Smith — Pokorny cone fundamentals, and further transformed into ‘luminance’ (r+g) and ‘chrominance’ (r-g) images, with various assumptions being made about the relative weighting of the r and g components, and the form of the chrominance response. We then analysed the Fourier spectra of these images using logarithmic intervals in spatial frequency space. This allowed a determination of the total energy within each Fourier band for each of the luminance and chrominance representations. The results strongly indicate that, for the set of scenes studied here, there was no evidence of a predominance of low-spatial-frequency chrominance information. Two classes of explanation are possible: (a) that raw Fourier content may not be the main organising principle determining visual encoding of colour, and/or (b) that our scenes were atypical of what may have driven visual evolution. We present arguments in favour of both of these propositions.
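The band-energy comparison can be reproduced in outline as follows. This is our sketch: the cone-based luminance (r+g) and chrominance (r-g) images are replaced by correlated random arrays purely to demonstrate the logarithmic spatial-frequency binning, not to reproduce the study's data.

```python
import numpy as np

def band_energy(img, n_bands=8):
    """Total Fourier energy in logarithmically spaced spatial-frequency bands."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    h, w = img.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    radius = np.hypot(fx, fy)
    edges = np.logspace(np.log10(1.0 / max(h, w)), np.log10(0.5), n_bands + 1)
    return np.array([power[(radius >= lo) & (radius < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Correlated random stand-ins for r and g cone-response images of one scene.
rng = np.random.default_rng(0)
r = rng.random((256, 256))
g = 0.8 * r + 0.2 * rng.random((256, 256))

print("luminance (r+g) band energy:  ", np.round(band_energy(r + g), 1))
print("chrominance (r-g) band energy:", np.round(band_energy(r - g), 1))
```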
45

Bai, Yunfei. "Research on Auto-Exposure Algorithm Based on Image Big Data and Information Entropy." Modern Electronic Technology 3, no. 2 (March 12, 2020): 6. http://dx.doi.org/10.26549/met.v3i2.2216.

Abstract:
An Auto-Exposure (AE) algorithm based on image big data and information entropy is proposed. On the basis of the traditional algorithm for automatic exposure adjustment based on image brightness, image big data analysis is introduced for the first time. Through the combination of ambient luminance evaluation and image information entropy, the dimension of information acquisition of the automatic exposure system is improved, thus improving the image effect and scene adaptability of the camera. Especially in high dynamic range scenes, compared with the traditional algorithm, the effect is significantly improved.
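The combination of a brightness target and image information entropy can be made concrete with a small scoring function; the sketch below is our illustration of the general idea, with an invented synthetic scene, not the proposed algorithm itself.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def exposure_score(gray, target_mean=118, brightness_weight=0.02):
    """Higher is better: prefer high entropy and a mean level near the target."""
    return image_entropy(gray) - brightness_weight * abs(float(gray.mean()) - target_mean)

def choose_exposure(render, candidate_exposures):
    """Pick the candidate exposure whose rendered frame maximizes the score."""
    return max(candidate_exposures, key=lambda e: exposure_score(render(e)))

# Toy demonstration with a synthetic scene rendered at different exposures.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 4, (120, 160))                         # relative radiance
render = lambda e: np.clip(scene * e * 64, 0, 255).astype(np.uint8)
print("chosen relative exposure:", choose_exposure(render, [0.5, 1, 2, 4, 8]))
```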
46

KAIHO, Koichi, and Katashi MATSUNAWA. "USE OF DIGITAL STILL CAMERA FOR MEASUREMENT OF LUMINANCE AND APPLICATION TO LIGHT ENVIRONMENT PLANNING." AIJ Journal of Technology and Design 1, no. 1 (1995): 229–32. http://dx.doi.org/10.3130/aijt.1.229.

47

Yun, Su-In, and Kang-Soo Kim. "Sky Luminance Measurements Using CCD Camera and Comparisons with Calculation Models for Predicting Indoor Illuminance." Sustainability 10, no. 5 (May 14, 2018): 1556. http://dx.doi.org/10.3390/su10051556.

48

Siegmann, P., R. J. Lopez-Sastre, P. Gil-Jimenez, S. Lafuente-Arroyo, and S. Maldonado-Bascon. "Fundaments in Luminance and Retroreflectivity Measurements of Vertical Traffic Signs Using a Color Digital Camera." IEEE Transactions on Instrumentation and Measurement 57, no. 3 (March 2008): 607–15. http://dx.doi.org/10.1109/tim.2007.911643.

49

Zielinska-Dabkowska, Karolina M., and Kyra Xavia. "Global Approaches to Reduce Light Pollution from Media Architecture and Non-Static, Self-Luminous LED Displays for Mixed-Use Urban Developments." Sustainability 11, no. 12 (June 22, 2019): 3446. http://dx.doi.org/10.3390/su11123446.

Abstract:
Urban environments have become significantly brighter and more illuminated, and cities now consider media architecture and non-static, self-luminous LED displays an essential element of their strategy to attract residents, visitors, and tourists in the hours after dark. Unfortunately, most often, they are not designed with care, consideration, and awareness, nor do they support the visual wellbeing and circadian rhythms of humans. They also increase light pollution, which has an adverse effect on the environment. The aim of this study was to estimate the scale of the negative impact of 28 non-static, self-luminous LED shop window displays within a real-life city context along the main shopping street, Bahnhofstrasse, in Zurich, Switzerland. An experimental field measurement survey was performed to identify visual luminance with commonly available tools such as a luminance meter and a digital reflex camera for luminance photography. Moreover, the most important global approaches to reduce light pollution were evaluated in the form of existing guidelines, technical standards, and laws, all of which should be considered when specifying illuminated digital advertisements. A literature review and the survey results both confirmed the extent of the problem and highlighted the need to better measure, apply, and manage this new technology. The authors' proposal for improvements involves practical recommendations for the design and implementation of future projects, which can positively guide and direct this growing trend.
50

Yamamoto, T., H. Takeda, and Hiizu Hyakutake. "Failure for Notched Plates of Short Glass Fiber Reinforced Polypropylene under Static and Cyclic Loading." Key Engineering Materials 261-263 (April 2004): 1433–38. http://dx.doi.org/10.4028/www.scientific.net/kem.261-263.1433.

Abstract:
The validity of the idea of severity near the notch root of notched FRP plates is investigated experimentally. The investigation was accomplished by obtaining experimental data on the static and cyclic loading tests for notched plates of a glass fiber-reinforced polypropylene (GF/PP). To evaluate the damage near the notch root, we measured the luminance distributions by means of a luminance-measuring technique using a CCD camera. The experimental results for the static loading tests show that the configuration and the area of damaged zone near the notch root were determined by both the maximum elastic stress at the notch root, σmax and notch-root radius ρ. The maximum elastic stress at fracture, σmax,c is governed by the notch-root radius ρ and it is independent of notch depth. The number of loading cycles to fatigue damage initiation is determined by both the σmax and ρ. These experimental results confirm the validity of the failure criterion in static load and the fatigue failure criterion based on the idea of severity near the notch root.