Hi,
I understand that you’re comparing edge detection results against a reference dataset using metrics like True Positive Rate, Branching Factor, and Quality Percentage. The challenge is that the candidate and reference edge matrices differ in size and position because they cover different geographic extents, even though both are in the OSGB36 coordinate system.
I assume that both datasets come with associated world files (like .tfw) that define their spatial referencing, and that you’re looking to use this information to properly align the datasets for pixel-by-pixel comparison.
To do this comparison in the correct geographic context, you can follow these steps:
Step 1: Use the world files to read georeferencing information
Load the spatial referencing for each image from its world file, e.g. with worldfileread (Mapping Toolbox), using the 'planar' coordinate system type since OSGB36 / British National Grid is a projected system. This returns a map raster reference object that gives the world coordinates of every pixel.
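A minimal sketch of this step, assuming the rasters are TIFFs with accompanying .tfw world files (file names here are placeholders for your own):

```matlab
% Read the candidate and reference edge rasters. File names are
% placeholders; substitute your own.
cand = imread('candidate_edges.tif');
ref  = imread('reference_edges.tif');

% worldfileread (Mapping Toolbox) builds a map raster reference object
% from a world file; 'planar' is appropriate for a projected CRS such as
% OSGB36 / British National Grid.
Rcand = worldfileread('candidate_edges.tfw', 'planar', size(cand));
Rref  = worldfileread('reference_edges.tfw', 'planar', size(ref));
```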
Step 2: Align the datasets in a common spatial grid
Use the referencing objects to resample one dataset onto the other's spatial grid. Since both are already in OSGB36, no reprojection is needed; functions such as mapresize, mapinterp, or imwarp together with imref2d can resample the candidate to the same spatial extent and resolution as the reference.
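One way to sketch this, assuming cand and its map reference Rcand (candidate) and Rref (reference grid) from the previous step. Because the edges are binary, nearest-neighbour sampling keeps the values 0/1:

```matlab
% World coordinates of every reference pixel. worldGrid requires a
% recent Mapping Toolbox release (R2021a+); on older releases, build the
% grid with ndgrid from the reference object's world limits instead.
[xq, yq] = worldGrid(Rref);

% Sample the candidate raster at those locations; nearest-neighbour
% preserves the binary edge values.
candOnRef = mapinterp(double(cand), Rcand, xq, yq, 'nearest');
candOnRef(isnan(candOnRef)) = 0;   % points outside the candidate extent
candOnRef = logical(candOnRef);    % now the same size as ref
```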
Step 3: Crop or pad after spatial alignment
Once both matrices are spatially aligned via their world coordinates, crop or pad as necessary so they are the same size. Because the alignment is already geographic, this step will not shift the edges incorrectly.
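If both rasters already share the same resolution and only their extents differ, an alternative to resampling is to crop both to their overlapping world extent, e.g. with mapcrop (Mapping Toolbox, R2020b+), so the matrices come out the same size:

```matlab
% Intersection of the two world extents (Rcand, Rref are the map raster
% reference objects for the candidate and reference rasters).
xLimits = [max(Rcand.XWorldLimits(1), Rref.XWorldLimits(1)), ...
           min(Rcand.XWorldLimits(2), Rref.XWorldLimits(2))];
yLimits = [max(Rcand.YWorldLimits(1), Rref.YWorldLimits(1)), ...
           min(Rcand.YWorldLimits(2), Rref.YWorldLimits(2))];

% Crop each raster to the shared extent.
[candCrop, RcandCrop] = mapcrop(cand, Rcand, xLimits, yLimits);
[refCrop,  RrefCrop ] = mapcrop(ref,  Rref,  xLimits, yLimits);
```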
Step 4: Perform binary comparison using your existing metrics
Now that both binary edge matrices are on the same spatial grid, your confusion counts (TP, FP, FN, TN) and the metrics derived from them will be accurate and meaningful.
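A sketch of the metric computation, assuming candOnRef and ref are aligned logical matrices and using the common definitions of these metrics (adjust the formulas if yours differ):

```matlab
% Confusion counts between aligned binary edge maps.
TP = nnz( candOnRef &  ref);   % edge in both
FP = nnz( candOnRef & ~ref);   % edge only in candidate
FN = nnz(~candOnRef &  ref);   % edge only in reference
TN = nnz(~candOnRef & ~ref);   % edge in neither

TPR             = TP / (TP + FN);             % True Positive Rate
branchingFactor = FP / TP;                    % Branching Factor
qualityPercent  = 100 * TP / (TP + FP + FN);  % Quality Percentage
```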
Step 5: Validate results visually and statistically
As a sanity check, overlay the aligned edge maps (e.g. with imshowpair) or plot them on a basemap or reference image to verify correct alignment before computing the metrics.
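For example, a falsecolor overlay of the two aligned edge maps (candOnRef and ref as above) makes residual misalignment show up as colour fringes:

```matlab
% Falsecolor composite: one input shows green, the other magenta;
% coincident edges appear grey/white, misaligned edges show fringes.
figure
imshowpair(ref, candOnRef, 'falsecolor');
title('Reference vs candidate edges after alignment');
```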
Hope this helps!