How can I segment fibers in textile photos?
1 view (last 30 days)
Hello
I want to segment photos of textiles, aiming at a colour-difference reader based on the LAB colour space. I don't know whether a simple colour threshold is enough, since my biggest problem is recognising light colours. Is it necessary to use a CNN to get proper filtering? I don't need the full shape of each fiber, just big chunks of it, because I average the colour over the pixels of each region. Attached are some photo samples and the full code.
First of all, I want to make the foreground stand out from the background. Then I erode the mask until the debris is removed and the mask is clean. Finally I take the two masks, read their respective colours, compare them, and compute the difference with dEcmc. Thanks in advance.



% Some picture samples below, from light to dark colours; the light ones give the most trouble.
% Read one sample at a time (the others are kept as alternatives):
% a = imread('IMG_20250430_174124109.jpg');
% a = imread('IMG_20250430_173858174.jpg');
% a = imread('IMG_20250430_174202501.jpg');
a = imread('IMG_20250430_174045678.jpg');
imshow(a)
calculo_de_color(a)
function calculo_de_color(rgbImage)
% Main function: colour comparison between two objects in an image
[BW, maskedRGBImage] = createMask(rgbImage);
imagenLab = rgb2lab(im2double(rgbImage));
% % Create a structuring element
% SE = strel("disk", 90);
%
% % Process the binary image
% BW2 = imerode(BW, SE);
% BW2 = imopen(BW2, SE); % Open after eroding
% Create a disk-shaped structuring element with radius r
SE = strel("disk", 70);
% BW3 = imopen(BW, SE);
BW4 = imclose(BW, SE);   % close to fill holes inside the fibre regions
SE = strel("disk", 90);
BW2 = imerode(BW4, SE);  % erode to pull the mask away from edges and debris
% Keep only the two largest regions if more than two objects remain
CC = bwconncomp(BW2);
if CC.NumObjects > 2
    BW2 = bwareafilt(BW2, 2); % retain the two largest connected components
end
% Label the objects
[etiquetas, numObjetos] = bwlabel(BW2);
% Check that there are exactly two objects
if numObjetos ~= 2
    error('Exactly two objects are required to compare colours.');
end
% Preallocate the colour array
coloresLab = zeros(numObjetos, 5); % [L a b chroma hue]
% Split the Lab channels
L = imagenLab(:,:,1);
a = imagenLab(:,:,2);
b = imagenLab(:,:,3);
% Loop over each detected object
for k = 1:numObjetos
    objetoMask = etiquetas == k;
    L_vals = L(objetoMask);
    a_vals = a(objetoMask);
    b_vals = b(objetoMask);
    % Compute chroma and hue (hue wrapped to [0, 360) degrees, as the CMC formula expects)
    c_vals = chroma(a_vals, b_vals);
    h_vals = mod(rad2deg(atan2(b_vals, a_vals)), 360);
    coloresLab(k, :) = [mean(L_vals), mean(a_vals), mean(b_vals), mean(c_vals), mean(h_vals)];
end
% Compute the colour difference
valor = calcular_cmc(coloresLab);
% Report the results
evaluar(valor);
evaluar_en_color(coloresLab);
% --- INTERNAL FUNCTIONS ---
function evaluar(resultado)
    if resultado < 1
        fprintf('ΔE_CMC = %.2f → ACCEPTABLE.\n', resultado);
    else
        fprintf('ΔE_CMC = %.2f → NOT acceptable.\n', resultado);
    end
end
function evaluar_en_color(coloresLab)
    [dL, da, db, dC, dH] = diferencias(coloresLab);
    if dL >= 0
        fprintf('The sample is lighter by %.2f.\n', abs(dL));
    else
        fprintf('The sample is darker by %.2f.\n', abs(dL));
    end
    if da >= 0
        fprintf('The sample is redder by %.2f.\n', abs(da));
    else
        fprintf('The sample is greener by %.2f.\n', abs(da));
    end
    if db >= 0
        fprintf('The sample is yellower by %.2f.\n', abs(db));
    else
        fprintf('The sample is bluer by %.2f.\n', abs(db));
    end
    if dC >= 0
        fprintf('The sample is more vivid by %.2f.\n', abs(dC));
    else
        fprintf('The sample is duller by %.2f.\n', abs(dC));
    end
    fprintf('The colour strength is %.2f %%.\n', (coloresLab(1,4) / coloresLab(2,4) * 100));
end
function [dL, da, db, dC, dH] = diferencias(coloresLab)
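    % Signed component differences, second object minus first; dH follows from dH^2 = da^2 + db^2 - dC^2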
dL = coloresLab(2,1) - coloresLab(1,1);
da = coloresLab(2,2) - coloresLab(1,2);
db = coloresLab(2,3) - coloresLab(1,3);
dC = coloresLab(2,4) - coloresLab(1,4);
dH = sqrt(max(0, da^2 + db^2 - dC^2));
end
function C = chroma(a, b)
C = sqrt(a.^2 + b.^2);
end
function dE_CMC = calcular_cmc(coloresLab)
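    % CMC(l:c) colour difference; l = 2, c = 1 is the usual acceptability weighting for textiles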
l = 2; c = 1;
L1 = coloresLab(1,1); C1 = coloresLab(1,4); H1 = coloresLab(1,5);
[dL, ~, ~, dC, dH] = diferencias(coloresLab);
SL = (L1 < 16) * 0.511 + (L1 >= 16) * ((0.040975 * L1) / (1 + 0.01765 * L1));
SC = (0.0638 * C1) / (1 + 0.0131 * C1) + 0.638;
if H1 >= 164 && H1 <= 345
T = 0.56 + abs(0.2 * cosd(168 + H1));
else
T = 0.36 + abs(0.4 * cosd(35 + H1));
end
F = sqrt(C1^4 / (C1^4 + 1900));
SH = SC * (F * T + 1 - F);
dE_CMC = sqrt((dL / (l * SL))^2 + (dC / (c * SC))^2 + (dH / SH)^2);
end
function [BW, maskedRGBImage] = createMask(RGB)
% Convert the RGB image to L*a*b*
I = rgb2lab(im2double(RGB));
% Segmentation thresholds (tune for your images)
channel1Min = 49.326;
channel1Max = 100.000;
channel2Min = -5.358;
channel2Max = 13.890;
channel3Min = -11.423;
channel3Max = 19.375;
% Build the mask
sliderBW = (I(:,:,1) >= channel1Min) & (I(:,:,1) <= channel1Max) & ...
(I(:,:,2) >= channel2Min) & (I(:,:,2) <= channel2Max) & ...
(I(:,:,3) >= channel3Min) & (I(:,:,3) <= channel3Max);
BW = ~sliderBW; % Invert the mask (depending on your application)
% Masked RGB image
maskedRGBImage = RGB;
maskedRGBImage(repmat(~BW, [1 1 3])) = 0;
end
end
0 Comments
Accepted Answer
Image Analyst
2025-5-6
I've spent the bulk of the last 35 years of my career doing color analysis of fabrics/textiles. I'm extremely familiar with it. I even helped develop the ASTM D4265 standard on the topic. You can do calibrated color measurement and color difference measurement, but you cannot do it with simple functions like rgb2lab alone.
First of all, your images are completely bad. You need to use a light booth where the camera is overhead and the front is not open to the room (have a door that shuts on the light booth to exclude room light). Secondly, you need to calibrate your system against a known target like the Calibrite ColorChecker chart.
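If you have a recent Image Processing Toolbox (R2020b or later), the colorChecker object can locate a ColorChecker Classic chart in a photo and report how far your measured patch colours are from the reference values. A minimal sketch, assuming a photo of the chart saved as 'chart_shot.jpg' (hypothetical file name):
A = imread('chart_shot.jpg');       % photo of the chart taken inside the light booth
chart = colorChecker(A);            % detect the chart
displayChart(chart)                 % visual check that all 24 patches were found
colorTable = measureColor(chart);   % measured RGB and reference L*a*b* for each patch
disp(colorTable)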
Third, the lens is extremely important. You must use a high-quality lens like those from Schneider. If you don't, you'll get horrendous lens shading, meaning the center of the image can be as much as 30-50% brighter than the corners (versus only about 10% brighter with a Schneider lens). Even with a good Schneider lens you'll still get some shading, so you'll need to do background correction; otherwise the color you measure will depend on where in the image you measure it.
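As an illustration of that background correction, here is a minimal sketch that divides each colour channel by a smoothed shot of an empty, uniformly lit target taken at the same exposure; the file names are hypothetical:
% Flat-field (shading) correction from a blank reference shot
sample = im2double(imread('fabric_shot.jpg'));   % fabric image (hypothetical file name)
blank  = im2double(imread('blank_shot.jpg'));    % empty light-booth floor, same exposure and lighting
corrected = zeros(size(sample));
for ch = 1:3
    ref = imgaussfilt(blank(:,:,ch), 25);        % smooth away texture and noise in the reference
    corrected(:,:,ch) = sample(:,:,ch) ./ ref * mean(ref(:));  % flatten while keeping overall brightness
end
corrected = min(max(corrected, 0), 1);           % clip to the valid [0, 1] range
imshow(corrected)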
Fourth, you need to use a scientific or industrial machine vision camera where you can turn off all automatic processing. Smartphone cameras, and even consumer or prosumer cameras (Nikon or Canon), should not be used (for several reasons I won't get into here).
Next you need to develop accurate transforms to convert arbitrary RGB into calibrated LAB values and you should not use the built-in rgb2lab -- it's not an accurate enough transform for high precision work with very small color differences. You need to develop your own higher order transform. See attached seminar.
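As a rough illustration of such a transform (not the exact method from the seminar), you can fit a second-order polynomial regression from the camera RGB of the chart patches to their known L*a*b* values with a least-squares solve. Here measuredRGB (24-by-3, values in [0, 1]) and referenceLAB (24-by-3) are assumed to have been collected already:
% Fit a 2nd-order polynomial RGB -> Lab transform on the ColorChecker patches
R = measuredRGB(:,1); G = measuredRGB(:,2); B = measuredRGB(:,3);
X = [ones(size(R)) R G B R.*G R.*B G.*B R.^2 G.^2 B.^2];   % polynomial terms
M = X \ referenceLAB;                                       % least-squares fit, 10-by-3 matrix
% Apply the transform to a whole (shading-corrected) image
img = im2double(imread('fabric_shot.jpg'));                 % hypothetical file name
r = img(:,:,1); g = img(:,:,2); b = img(:,:,3);
Ximg = [ones(numel(r),1) r(:) g(:) b(:) r(:).*g(:) r(:).*b(:) g(:).*b(:) r(:).^2 g(:).^2 b(:).^2];
labCal = reshape(Ximg * M, [size(img,1) size(img,2) 3]);    % calibrated L*, a*, b* planes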
Once you have calibrated LAB, then you can use the built-in imcolordiff. If you don't have a controlled lighting setup and calibrate to known standards, then you will get arbitrary and inaccurate LAB values and Delta E values that are not comparable to what you'd get from a spectrophotometer.
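For example, once the two mean colours are calibrated L*a*b* values, something like the following works (this computes CIEDE2000; as far as I know imcolordiff implements CIE94 and CIEDE2000 rather than ΔE_CMC, so for CMC you would keep your own calcular_cmc). The 1-by-3 colour inputs and the isInputLab option are assumed to be supported by your toolbox version:
lab1 = coloresLab(1,1:3);    % mean L*, a*, b* of the first object (from the code above)
lab2 = coloresLab(2,1:3);    % mean L*, a*, b* of the second object
dE00 = imcolordiff(lab1, lab2, "isInputLab", true, "Standard", "CIEDE2000");
fprintf('dE2000 = %.2f\n', dE00)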
2 Comments
Image Analyst
2025-5-8
I do not. A cell phone is not a good option. You'd be better off using a Canon or Nikon camera with their much larger sensors and ability to turn off most automatic processing. Do you know how much computational photography goes on in the camera in its attempt to present you a pleasing picture? A lot. Perhaps you can see if you can turn off any kind of processing and get just the raw image. Unless you turn off auto gain and exposure, you can never do color differencing that you can trust, because the camera is changing the brightness of the image all the time; you'd never know whether a measured color difference is a true difference between your samples or just the camera changing exposure. Plus, I'd expect that you have horrendous lens shading, unless the computational photography in the camera did something to compensate for it. Try taking a completely blank shot of a uniform background, like the floor of the light booth. Then use improfile to get an intensity profile across the whole image. Is it completely flat? If not, then the color you measure will depend on where the region of interest is in your field of view, which is something you don't want.
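A minimal sketch of that flatness check, assuming the blank shot is saved as 'blank_shot.jpg' (hypothetical file name):
blank = imread('blank_shot.jpg');                    % blank shot of the uniform background
gray  = rgb2gray(blank);
midRow = round(size(gray,1)/2);
improfile(gray, [1 size(gray,2)], [midRow midRow])   % intensity profile across the centre row
title('Horizontal intensity profile - should be flat')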
More Answers (0)