Hi Jack,
I understand you are finding that even a pure black screen registers approximately 20% red pixels, and you want to eliminate this behavior. This might be due to the way you’re defining and detecting “red” pixels. In your code, you’re considering any pixel with a red value greater than or equal to 0.35 as a “red” pixel. This threshold might be too low, especially considering that color values in an image can range from 0 to 1. Even a grayscale image will have some red value in it, because grayscale consists of equal amounts of red, green, and blue.
To address this, you could increase your threshold for what you consider to be a “red” pixel. Alternatively, you could change your approach to look for pixels where the red value is significantly higher than both the green and blue values, rather than just looking at the red value in isolation.
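As a quick illustration of why a fixed threshold flags gray pixels (the numbers below are made up purely for the example), a mid-gray pixel passes the 0.35 test but fails the ratio test:
p = [0.5 0.5 0.5];                    % [R G B] of a mid-gray pixel, normalized to 0..1
p(1) >= 0.35                          % old test: true, so this gray pixel is counted as "red"
p(1) > 1.2*p(2) && p(1) > 1.2*p(3)    % ratio test: false, correctly rejected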
Here’s a modified version of your code that implements the latter approach:
for i = 1:2:n
    A = imread(['Image' int2str(i) '.jpg']);
    [sz1, sz2, ~] = size(A);

    % split RGB layers and convert to double so the ratio comparison
    % below is not distorted by uint8 saturation and rounding
    R = double(A(:,:,1));
    G = double(A(:,:,2));
    B = double(A(:,:,3));

    % a pixel counts as "red" only if its red value clearly dominates
    % both the green and the blue value
    red_pixels = (R > 1.2*G) & (R > 1.2*B);

    amount_red_pixels(i) = nnz(red_pixels);
    fprintf('\n amount red pixels: %d \n', amount_red_pixels(i));

    % percentage of red pixels relative to the total pixel count sz1*sz2
    pc_red_pixels(i) = round(100*amount_red_pixels(i)/(sz1*sz2), 3);
    fprintf('\n percentage red pixels: %s %%\n', num2str(pc_red_pixels(i)));
end
In this version, the RGB layers are converted to double so the comparison is not affected by uint8 arithmetic, and a pixel is considered “red” only if its red value is more than 20% higher than both its green and its blue value. You can adjust the 1.2 multiplier as needed to fine-tune your definition of a “red” pixel.
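If very dark pixels (for example JPEG compression noise on a black screen) still slip through the ratio test, one option is to additionally require a minimum red intensity. The value 50 below is only a placeholder on the 0..255 scale that you would need to tune for your images:
min_red = 50;   % placeholder minimum red intensity (0..255), tune as needed
red_pixels = (R > 1.2*G) & (R > 1.2*B) & (R > min_red);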