issue in comparing two images

Asked by Sepid

I have written a test case to compare two images. Each image contains the same structure, but with different colors in some sections of the structure. I don't know why Sikuli considers these two images the same and passes the test case. It should not, because a different color has been used in each image.

Would you please help me with this problem?

Thanks.

Question information

Language:
English
Status:
Solved
For:
SikuliX
Assignee:
No assignee
Solved by:
Sepid
Revision history for this message
RaiMan (raimund-hocke) said :
#1

What exactly are you doing, to "compare two images"?

What about the similarity score you get?

in doubt send me the 2 images and the relevant snippet as zip silently to my mail at https://launchpad.net/~raimund-hocke

Eugene Maslov (emaslov1) said :
#2

Raimund, I also sometimes have a problem where Sikuli Python scripts don't distinguish between the glyphs on pressed and released toggle buttons, even if MinSimilarity is set to 1.00.

For example, two icons are the same for Sikuli:
http://www.hohmodrom.ru/upload/3/projimg/125631/files/toggled.png

Settings.MinSimilarity=1
click("unpressed.png")
click("pressed.png")

I understand that the color-insensitive mode is good and solves a lot of problems with random pixel differences in images, but sometimes a color-strict flag would be really useful. I heard that in the Java API this is already solved, so an implementation for Python would be really nice.

RaiMan (raimund-hocke) said :
#3

@Eugene
thanks for the example.

I do not know what you mean by "Java API it's already solved", but your example fails at all levels; there is, however, a workaround.

The reason has nothing to do with some "color-strict flag", since no such flag is available.

If you use your 2 images, you will get the following result (supposing the colored one is the pressed version):
pressed matches pressed with 1.00, but unpressed only with 0.97
unpressed matches both pressed and unpressed with 1.00

this leads to the workaround:
unpressed = Pattern("unpressed.png").exact()
pressed = Pattern("pressed.png").exact()

button = find(unpressed)
# now we can check the found button
if button.grow(10).exists(pressed, 0):
    print "button is pressed"
else:
    print "button is not pressed"

The .grow(10) extends the match region to make it robust against small differences between the shots (width/height).

Another possibility using only the pressed image (which makes it more robust with respect to the mentioned shot problem):
button = Pattern("pressed.png").similar(0.95)
pressed = Pattern(button).exact()
mButton = find(button)
# now we can check the found button
Settings.CheckLastSeen = False # see comment
if mButton.exists(pressed):
    print "button is pressed"
else:
    print "button is not pressed"
Settings.CheckLastSeen = True # see comment

comment:
version 1.1.0 has a feature that stores the last match for an image; the next search for that image first looks where it was found before, which speeds things up by a factor of 50 to 100.
In this special case it leads to a problem (I will fix that).
So for now we have to switch the feature off for the check and back on again afterwards.

I guess the real reason for the initial problem is that internally the score is rounded at the second decimal (0.99+ is treated as exact and reported as 1.00). In your case, the scores might only differ at some decimal position beyond the second.
In version 1.2 I will revise the score handling anyway, so it may well be that your case will then be handled as expected.
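
The effect of this rounding can be sketched in plain Python (this is not Sikuli code; the threshold and the score values are illustrative assumptions, not measured numbers):

```python
def is_exact(score, threshold=0.99):
    # Assumed Sikuli-style check: the score is rounded at the second
    # decimal, so anything at or above 0.99 passes as an "exact" match.
    return round(score, 2) >= threshold

# A true self-match and a slightly different image both pass the check,
# because their difference only shows up beyond the second decimal.
assert is_exact(0.9999997)       # image matched against itself
assert is_exact(0.9941331)       # image matched against the variant
assert not is_exact(0.9635)      # a clearly different image still fails
```

Working with the unrounded float values instead would let the 0.9941... case be told apart from a real 1.0 match.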

If more than one of these buttons is visible at that moment, you have to use findAll() and scan the matches.

Sepid (sce2020sahaf) said :
#4

@Raimund

Thanks for your message. I sent the images to your email address and explained everything there.

Anyone else with a suggestion can access the image files at the following link:
https://drive.google.com/folderview?id=0B0HOjfpxPc67TS13VXlPMEh5eWs&usp=sharing

and a summary of my problem:
I am trying to write an automated test case for the output of a visualization application; I want to verify the visualized output.
I am writing my code in C#, which is why I am using a Sikuli wrapper for .NET called SikuliIntegrator (a simple tutorial can be found here: http://qaagent.wordpress.com/2013/07/17/how-to-use-sikuli-in-my-c-coded-tests/).

I don't know what the default similarity score of the SikuliAction.Exists function is, but when I use it without setting any similarity score, these two images are considered the same. I assume that when using a similarity score, the result would be the same.

Thanks in advance for your help.

RaiMan (raimund-hocke) said :
#5

Basic information about the problem can be found in comment #3
... and it shows how one can work around it in most cases.

In the original case (mentioned also in comment #4) we have a similar situation:
we have 2 images that differ only in some part showing gray instead of red.
Checking the match score, one finds that one image matched against the other scores 0.98+, which results in a rounded value of 0.99 (shown in the match printout).
So in this case just using
Pattern(img).exact()
already works (no need to double-check).

Eugene Maslov (emaslov1) said :
#6

Sepid, take a look at what I found.

Your case is a strange one where the two images look the same as a whole: probably the pixel differences average out to nearly zero over the full image.
But the images are detected as different in their parts!

I cropped your image to the little key area of interest, and the difference is then detected quite reliably, even if similarity is reduced to 0.97.

I executed the following script over your red image, displayed on the screen:
http://www.hohmodrom.ru/upload/3/projimg/125631/files/color_confusion.gif

step 3 mistakenly passed...

but step 9 correctly failed! :)

Thus, by cropping the reference images, it's possible to catch the difference :)
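
The cropping effect can be shown with a toy example in plain Python (this is not Sikuli; the 10x10 "images" and the pixel-equality "score" below are made-up stand-ins for real screenshots and a real match score):

```python
# Two 10x10 single-channel "images" that differ only in a 2x2 key area.
W = H = 10
img_a = [[0] * W for _ in range(H)]
img_b = [[0] * W for _ in range(H)]
for y in (4, 5):
    for x in (4, 5):
        img_b[y][x] = 1  # the differing "key" pixels

def similarity(a, b):
    """Fraction of identical pixels -- a crude stand-in for a match score."""
    total = sum(len(row) for row in a)
    same = sum(pa == pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return same / float(total)

# Over the whole image the difference is diluted by the background ...
full = similarity(img_a, img_b)                      # 96/100 = 0.96
# ... but cropped to the 4x4 area of interest it stands out clearly.
crop = similarity([r[3:7] for r in img_a[3:7]],
                  [r[3:7] for r in img_b[3:7]])      # 12/16 = 0.75
```

With a similarity threshold of 0.95, the full images would still "match", while the cropped references would not.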

Eugene Maslov (emaslov1) said :
#7

Raimund, I meant above that I had heard that something in Java can deal with exact colors. I just didn't know that java.awt.Robot is accessible from Python.
So I have now solved my problem with pressed/released buttons by measuring the color inside the glyphs with jRobot.

import java.awt.Robot as jRobot
import java.awt.Color as Color

def findWithColor(img, reg=SCREEN, color=Color(255,255,255), probe=[0,0], debug=False):
    myRobot=jRobot()
    res=None
    reg.findAll(img)
    found=list(reg.getLastMatches())
    for but in found:
        probePoint=but.getCenter().offset(probe[0],probe[1])
        myColor=myRobot.getPixelColor(probePoint.x, probePoint.y)
        if debug:
            print "color at "+str(probePoint.x)+":" +str(probePoint.y)+":",myColor
        if myColor==color:
            res=but
            if not debug:
                break
    return(res)

Settings.MinSimilarity=0.85
#Stable click on grayed button:
findWithColor("unpressed.png",color=Color(204,219,233), probe=[2,2], debug=True).click()
#Stable click on white button
findWithColor("unpressed.png",color=Color(255,255,255)).click()

Thank you very much for the suggestions.

RaiMan (raimund-hocke) said :
#8

-- at comment #6
@Eugene
of course your approach is suitable in cases where you know the spots where 2 images might differ.
But in this case it is not necessary, since the similarity scores already differ (0.99 versus 1.00).

Generally your approach is only a workaround for the current weak implementation (score rounded to 2 decimals).
If 2 images differ in some pixels, this should be detectable.
The solution is to work with the original float values, which might show the fact "we are different" only at some decimal position beyond the 3rd (which is where the difference currently lies).

Since this cannot be fixed easily, but needs a revision in many places including native code, I have put it on the list for version 1.2.

BTW:
since we already have change detection with the observe feature (onChange), I guess it would be rather easy to add a feature that returns a list of areas containing the differences between 2 images (also something for 1.2).

Eugene Maslov (emaslov1) said :
#9

Raimund, by the way, in the script from #6
http://www.hohmodrom.ru/upload/3/projimg/125631/files/color_confusion.gif
if I reduce MinSimilarity to 0.9 so that the script passes OK, and then increase it to 1.00 and run again in the same IDE, it still runs successfully. Build 2014-09-18.

RaiMan (raimund-hocke) said :
#10

@comment #7
@Eugene
OK, now I understand what you mean by "solved in the Java API".

in class Location there is a method
getColor()

that internally uses the Robot feature you found and returns a java.awt.Color object.

So your solution can be made a bit smoother (no need for the Robot):

def findWithColor(img, reg=SCREEN, color=Color(255,255,255), probe=[0,0], debug=False):
    res=None
    reg.findAll(img)
    found=list(reg.getLastMatches())
    for but in found:
        probePoint=but.getCenter().offset(probe[0],probe[1])
        myColor=probePoint.getColor()
        if debug:
            print "color at "+str(probePoint.x)+":" +str(probePoint.y)+":",myColor
        if myColor==color:
            res=but
            if not debug:
                break
    return(res)

For convenient usage at the scripting level, getColor() should return a tuple of the RGB values, so you could simply write (with no reference to any Java stuff):

if (255, 255, 255) == somePoint.getColor():
    print "the spot is white"
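
As a sketch of that idea, a hypothetical helper could unpack a packed 0xRRGGBB integer (the low 24 bits of what java.awt.Color.getRGB() returns, ignoring the alpha byte) into a plain tuple; the helper name and the second color value are illustrative assumptions:

```python
def rgb_tuple(packed):
    # Extract the red, green and blue bytes from a packed 0xRRGGBB value,
    # so script code can compare against plain tuples like (255, 255, 255).
    return ((packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF)

assert rgb_tuple(0xFFFFFF) == (255, 255, 255)   # white
assert rgb_tuple(0xCCDBE9) == (204, 219, 233)   # the grayed-button color above
```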

--- One more thing:
Again, this is only a workaround for the fact that currently small differences between images (in the sense of OpenCV::matchTemplate()) cannot be detected with Sikuli's weak similarity score implementation (see the other comments above).

RaiMan (raimund-hocke) said :
#11

@comment #9
@Eugene
the problem is addressed in comment #3

comment:
version 1.1.0 has a feature that stores the last match for an image; the next search for that image first looks where it was found before, which speeds things up by a factor of 50 to 100.
In this special case it leads to a problem (I will fix that).
So for now we have to switch the feature off for the check and back on again afterwards.

... and it is fixed in the next build, on January 5th.

RaiMan (raimund-hocke) said :
#12

@Eugene
LOL, much ado about nothing ...

A major reason for the problems we are currently discussing, at least in version 1.1.0, is the above-mentioned bug in the CheckLastSeenFirst feature, which is now fixed (available January 5th).

So until then, this workaround helps in such cases (taking Eugene's pressed/unpressed example):

Settings.CheckLastSeen = False
pressed = "image of pressed button.png"
notpressed = "image of not pressed button.png"
if exists(Pattern(pressed).exact(), 0):
    print "button is pressed"
if exists(Pattern(notpressed).exact(), 0):
    print "button is not pressed"

The evaluated scores of each button image matched against the other button (pressed image on the unpressed button and vice versa) result in a rounded score of 0.97 (exactly 0.960901379585 and 0.96352738142), which is easily enough to distinguish them.

In the original example of this question the match scores are 0.99413305521 and 0.994715809822, which again are enough to distinguish with Pattern().exact(), but are "rather low" (a significant difference from 1.0000000000000...) because of the large areas of uniform color.
When most of the background is left out of the shots, the match score goes up to 0.999999...

Sepid (sce2020sahaf) said :
#13

Thank you very much everybody, especially RaiMan, for your great help.