Colorspace transformations - linear sRGB to HSV
Hello all,
I need a color space specialist :-)
I have a set of portraits; on every portrait a person is holding a gray card/color checker next to his/her cheek.
I am trying to determine a mean value of the skin colour (by comparison with the reference white) and then do some white balancing...
All the photos are in CR2 (RAW) format; after linearization, demosaicing etc. I do a color conversion of my raw images to (linear) sRGB color space.
Since I am interested in chromaticity, I would like to convert my (linear) sRGB to HSV... and then present, e.g., the skin-color gamut.
How to do that?
I tried some of the available tools, but it is not working...
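For reference, this is the kind of conversion I have in mind (a minimal Python sketch using only the standard library; since HSV is conventionally defined on gamma-encoded values, it applies the sRGB transfer function first and then does the standard RGB-to-HSV conversion):

```python
import colorsys

def srgb_encode(c):
    """Apply the sRGB transfer function to one linear channel in [0, 1]."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def linear_srgb_to_hsv(r, g, b):
    """Gamma-encode linear sRGB, then convert to HSV (all values in [0, 1])."""
    return colorsys.rgb_to_hsv(srgb_encode(r), srgb_encode(g), srgb_encode(b))

# Mid-gray (~18% linear reflectance) should come out with zero saturation
h, s, v = linear_srgb_to_hsv(0.18, 0.18, 0.18)
```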
Accepted Answer
Image Analyst
2014-3-2
I do calibrated color imaging all the time - it's my "day job". You're doing it backwards, and incorrectly. You white balance FIRST and THEN determine skin color, not the other way around. And you can't do white balancing with just gray cards. You need something like an X-rite Color Checker Chart. Otherwise how can you tell if there is some color shift in your illumination? You might have yellowish or bluish white and want to make sure that you have the true white you're looking for, for example D65.
And I wouldn't go into sRGB space. How can you do that? How can you get standard RGB values when you can get almost any RGB out of the camera that you want, just by changing the exposure and other things? I would just go directly from your actual RGB to XYZ and then to LAB. I have a primitive gamut visualization routine here (attached), but the best one is this one.
Actually first you do background correction to compensate for lens shading. Then you snap images of a series of gray cards taking up the whole image to determine the opto-electronic conversion function ("gamma"). Then you image the Color Checker chart and background correct and gamma correct it. Then you determine the RGB to XYZ transform. Then you can use the "book formulas" to go from XYZ to LAB, which is the "true" colors.
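The last step, XYZ to LAB with the "book formulas," can be sketched like this (the D65 white point used as the default here is an assumption; substitute whatever reference white you actually measured):

```python
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE XYZ -> CIE LAB via the standard formulas.
    The default white point is D65 (an assumption; use your measured reference)."""
    def f(t):
        eps = (6 / 29) ** 3
        # Cube root above the threshold, linear segment below it
        return t ** (1 / 3) if t > eps else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

# The white point itself must map to L=100 with zero chroma
L, a, b = xyz_to_lab(95.047, 100.0, 108.883)
```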
2 Comments
Image Analyst
2025-6-17
Yes you do background correction first. It depends on how you define vignetting. It will compensate for darkening as you get to the edge of the field of view. Mostly, in the center, this is due to lens shading and some people call this vignetting. However it won't compensate for true vignetting where the light is actually prevented from reaching the sensor because it hits the "stop" in the optical system, like the iris diaphragm which shows up as a black ring if you zoom out enough and widen your field of view enough to see it.
I'm not familiar with DCRaw and I don't know if it does a true linearization. Maybe it does - I don't know. With a linear system, twice the brightness of your object (scene) would give twice the gray level. You can take an object of known percent reflectances, such as a standard chart from Calibrite, and plot gray level vs. true reflectance. It should be a straight line going through (0,0). Note that if the plot is a straight line but does not go through the origin, it is NOT a linear system, and twice the brightness will not give twice the gray level. A linear system is best, but not having one does not mean you can't still intensity-calibrate your system. In the end you want to convert your image to the CIE LAB color system, because that is the color space where you can trace your measurements back to gold-standard spectrophotometers and get color differences. So you need a transform to convert measured RGB values into estimated calibrated LAB values. And you can't do this accurately with a simple 3x3 matrix or ICC profile. See my attached seminar for more info.
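The linearity check described above can be sketched like this (the reflectance and gray-level numbers here are made up purely for illustration; the point is the straight-line fit and the intercept test):

```python
# Hypothetical data: known chart reflectances vs. measured gray levels
reflectance = [0.03, 0.09, 0.18, 0.36, 0.59, 0.90]
gray_level  = [7.5, 22.5, 45.0, 90.0, 147.5, 225.0]  # illustrative: exactly linear

# Ordinary least-squares fit of gray_level = slope * reflectance + intercept
n = len(reflectance)
mx = sum(reflectance) / n
my = sum(gray_level) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(reflectance, gray_level))
         / sum((x - mx) ** 2 for x in reflectance))
intercept = my - slope * mx

# Linear system: straight line through the origin (intercept ~ 0)
is_linear = abs(intercept) < 1e-6 * max(gray_level)
```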
White balancing is necessary because a truly white object (flat spectrum) may not come out with R=G=B. This could be due to camera settings or to coloration of the incident light. White balancing is done best by the camera, before the picture is sent to the computer. What it does is make sure that the mean red, green, and blue signals are all the same, by adjusting the gains of the different color sensors in the camera. If white is not giving you equal signals for all color channels, then you can't trust the values it gives you for white or for any other color - it's just giving you wrong values. The RGB-to-LAB transform can compensate for this to some extent, but it's best to fix it optoelectronically first so the software transform does not have to be so extreme.
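A simple software version of the per-channel gain adjustment described above might look like this (a minimal sketch, assuming channel values normalized to [0, 1] and a measured neutral reference patch):

```python
def white_balance(pixel, white_patch):
    """Scale each channel so the measured white/neutral reference comes out
    with R=G=B. white_patch is the measured (R, G, B) of that reference."""
    target = max(white_patch)          # preserve the strongest channel
    gains = [target / c for c in white_patch]
    # Apply the gains and clip to the valid range
    return tuple(min(1.0, p * g) for p, g in zip(pixel, gains))

# Example: a bluish cast where the white reference measured (0.8, 0.9, 1.0).
# Balancing the reference against itself must yield a neutral result.
balanced = white_balance((0.8, 0.9, 1.0), (0.8, 0.9, 1.0))
```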