About the onChange() function

Asked by larryli

On my side, I use version 2.0.2.

I want to detect a change in one region, i.e. trigger an action whenever there is any change.

def change_handler(event):
    print "something changed in ", event.region
    search_reg.stopObserver()
    search_reg.click()

search_reg.onChange(800, change_handler)
search_reg.observe(10, background=False)

Can I confirm one point: how should I decide the minChangedSize parameter? Is there a better way?

If I want to detect a big change in this region, can this function be used or not?
From my tests, it seems the change counter increases even if the change in one scan is not over minChangedSize. Is that right?

thank you very much

Question information

Language: English
Status: Solved
For: SikuliX
Assignee: No assignee
Solved by: larryli
RaiMan (raimund-hocke) said :
#1

If you want to detect usual GUI changes or similar, just try the default (simply leave the value out).

Using the value only makes sense in cases where you do not want to see changes smaller than the given number of pixels.
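
For example, a minimal sketch based on your snippet, just without the size argument (handler and region as in your code):

def change_handler(event):
    print "something changed in ", event.region

search_reg.onChange(change_handler)   # no minChangedSize given, use the default
search_reg.observe(10, background=False)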

larryli (larryli2020) said :
#2

thanks for your answer.

As I understand it, minChangedSize means: if the image of the current region differs from the last one and the difference is more than minChangedSize pixels, the onChange handler is triggered. Is my understanding correct?
So I plan to use onChange to detect GUI changes.
Is there a method to determine a suitable pixel count?

Settings.ObserveScanRate = 0.1
This setting changes the scan rate. Does onChange compare the image from before a scan with the one after a scan?
If the scan rate is too fast, maybe it cannot detect such a large difference. Is my understanding correct?

RaiMan (raimund-hocke) said :
#4

--- Settings.ObserveScanRate:
The value says how many scans happen per second: 0.1 hence means a scan starts every 10 seconds.
The default is 3, meaning that a scan is started every 333 milliseconds.
If a scan takes longer than 333 milliseconds in this case, the next scan is started when this one has ended.
The whole thing is strictly sequential in the current implementation,
even if you have more than one onChange defined:
scan1, scan2, scan3, ... and then the next scan1, scan2, scan3, ... So in such a case the scan rate usually does not mean anything, unless your search region is rather small (less than 100 x 100).

Lowering the scan rate only makes sense if you think your machine is too slow to handle this pure number-crunching process without influencing your workflow.

--- minChangedSize
Internally, in a first step, the pixels of the 2 region screenshots (last and current) are compared.
Next step: evaluate distinct rectangles containing the changed pixels, as small as possible.
Last step: sort out the rectangles with fewer pixels (width x height, not changed pixels) than minChangedSize.
Return the list of the remaining rectangles (which might be empty now). A rough sketch of these steps follows below.

Which value is suitable for your situation usually has to be evaluated with some typical cases.
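
A sketch of those steps in plain Python/OpenCV (not the actual SikuliX code; file names and the minChangedSize value are placeholders):

import cv2

MIN_CHANGED_SIZE = 50  # placeholder for minChangedSize

last = cv2.cvtColor(cv2.imread("last.png"), cv2.COLOR_BGR2GRAY)
current = cv2.cvtColor(cv2.imread("current.png"), cv2.COLOR_BGR2GRAY)

# step 1: compare the pixels of the two screenshots
diff = cv2.absdiff(last, current)
changed = cv2.threshold(diff, 3, 255, cv2.THRESH_BINARY)[1]

# step 2: evaluate distinct rectangles containing the changed pixels
contours = cv2.findContours(changed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
rects = [cv2.boundingRect(c) for c in contours]

# step 3: sort out rectangles smaller than minChangedSize (width x height)
rects = [(x, y, w, h) for (x, y, w, h) in rects if w * h >= MIN_CHANGED_SIZE]
print(rects)  # might be empty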

larryli (larryli2020) said :
#5

thanks for your answer.

For the release version, can we see a log of each scan?
That is, can we know the change value of the current region for each scan?

On my side, the region I defined is small, about 14x24.
If I could see the log of each scan, it would be easier to adjust the parameter.

thanks

larryli (larryli2020) said :
#6

--- minChangedSize
Can you explain it in more detail?

For example, if Settings.ObserveScanRate = 1, it means one scan per second.
As I understand it, when a scan starts, a screenshot of the current region is captured; after 1 second another screenshot is taken, the two are compared to get the differing pixels, and the result is then compared with minChangedSize. Is that right?

As per your description, it finds the smallest rectangles that contain the changed pixels, i.e. the differences are separated into distinct rectangles. Is that right?

Can you tell me where in the code this scan process is implemented?

thanks

RaiMan (raimund-hocke) said :
#7

You say:
"For example, if Settings.ObserveScanRate = 1, it means one scan per second. As I understand it, when a scan starts, a screenshot of the current region is captured; after 1 second another screenshot is taken, and the two are compared to get the differing pixels."

My answer: yes, correct.

About minChangedSize:
Looking into the implementation, I have to admit that a minChangedSize given with onChange() is simply ignored deep down in the code.
In fact, the fixed area of changed pixels is taken as 5x5 (hence 25 pixels).
The 2 images are compared after being turned to grayscale. A pixel is taken as changed if its gray value (0 .. 255) differs by more than 3.

The OpenCV based implementation can be found in Finder.Finder2.findChanges().

larryli (larryli2020) said :
#8

Thanks for your information, I understand now.

In your experience, is there a better way to detect the change when the change happens fast? I want to detect the timing of the change.

thanks

RaiMan (raimund-hocke) said :
#9

Finding changes itself is rather fast (less than some 10 msecs; the smaller the image, the faster).

With this you can build your own change detector:

# ... might be a region
image1 = SCREEN.capture(...)
# do something or simply wait
image2 = SCREEN.capture(...)

finder = Finder(image1)
start = time.time()
changes = finder.findChanges(image2)
print "elapsed:", time.time() - start
for change in changes:
    #if (change.w > nnn): continue # do some filtering
    change.highlightOn()
    print change
wait(5)

You might even combine smaller regions into larger ones afterwards: region1.union(region2)
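
A minimal sketch of that idea (assuming changes is the non-empty list returned by findChanges() above):

# merge all changed rectangles into one surrounding region
combined = changes[0]
for change in changes[1:]:
    combined = combined.union(change)
print "combined:", combined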

larryli (larryli2020) said :
#10

Thanks for your information.
I tried the following:

old_image = getScreen().capture(search_reg)
finder = Finder(old_image)

for x in range(0, 100):
    new_image = getScreen().capture(search_reg)
    start = time.time()
    changes = finder.findChanges(new_image)
    #print "elapsed:", time.time() - start
    for change in changes:
        print "**changed ", change, change.getW(), change.getH()
        changed = change.getW() * change.getH()
    wait(0.3)
    print("---We're on time %d" % (x))

According to the log, the change always seems to be 18x13, but when there is a big change in the region, it does not seem to be detected.
The search region is R[811,385 19x14]@S(0)
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 0
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 1
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 2
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 3
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 4
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 5
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 6
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 7
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 8
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 9
**changed R[0,0 18x13]@S(0) 18 13
---We're on time 10
**changed R[0,2 8x7]@S(0) 8 7

RaiMan (raimund-hocke) said :
#11

Sorry, no idea.

I tried something with your setup (always compare to a start image) and it worked as expected.

You have to find a way to track it down.

larryli (larryli2020) said :
#12

Thanks for your answer.
On my side, I tried to save each capture and then check whether there is a big change or not.

I tried to use the following:
new_image.saveInto(capture_fld)

I get the following error, but in the code I find that saveInto is public in the ScreenImage class, and capture also returns a ScreenImage. Does version 2.0.2 not support it?

[error] AttributeError ( 'org.sikuli.script.ScreenImage' object has no attribute

RaiMan (raimund-hocke) said :
#13

Not sure where you found that.

You have to use:

new_image.getFile(path, name)

to save a ScreenImage as <path>\name
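
For example (a minimal sketch; folder and name are placeholders):

img = SCREEN.capture(some_region)
# saves the ScreenImage as <path>\name
img.getFile("C:\\temp\\captures", "shot1")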

larryli (larryli2020) said :
#14

Dear RaiMan,

Thanks for your answer.
It saves the image correctly now.
As a result, I can see a completely different image by eye, but why can it not find the difference?

I cannot post a picture here, so I cannot add an attachment.

I have one more unclear point: from the user's view there seems to be no difference between two of the images, because I capture like this:

for x in range(0, 200):
    new_image = getScreen().capture(search_reg)
    new_image.getFile(capture_fld, str(x))
    changes = finder.findChanges(new_image)
    for change in changes:
        print "**changed ", change, change.getW(), change.getH()
    wait(0.1)

Why is there the following log? Does it mean the changed region is 18x13, or that there is no changed region? This log is the result of finder.findChanges:
changed R[0,0 18x13]@S(0) 18 13

thanks

larryli (larryli2020) said :
#15

Let me add one piece of information.

Maybe it is related to the threshold value. On my side, I tested it locally using the captured images, and it can detect the change.
Can this threshold be changed, or will you support that in the future?

for i in range(0, 110):
    file = folder + str(i) + ".png"
    print("start to compare file: ", file)
    duplicate = cv2.imread(file)
    difference = cv2.subtract(original, duplicate)
    #print(difference)
    dest = cv2.threshold(difference, thresh, maxValue, cv2.THRESH_BINARY)
    #print(dest[1])
    b, g, r = cv2.split(dest[1])
    if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
        pass  # print("The images are completely Equal", i)
    else:
        print("The images are not same on", i)

RaiMan (raimund-hocke) said :
#16

Send me the 2 images as a zip to my mail sikulix---at---outlook---dot---com

I have to analyse it with your example.

larryli (larryli2020) said :
#17

I sent the images to you; 999.png is old_img, the others are new_img.
Making the comparison, we can see that 97.png is different.

RaiMan (raimund-hocke) said :
#18

ok, got it. I will check.

RaiMan (raimund-hocke) said :
#19

What you want to know is when, in a small region that usually shows some background, a pattern appears that can be distinguished from the background (meaning: some to many of the pixels show a significantly different color).

The current implementation of findChanges is not really appropriate for your situation, since:
- both images are converted to grayscale
- pixels that differ by more than +-3 in the range 0 .. 255 are isolated (PIXEL_DIFF)
- if more than 5 pixels are detected, the evaluation of rectangles is done (IMAGE_DIFF)

In your situation, the pixels of the background are constantly changing in a way that is detected by findChanges.

I played around a bit with different DIFF settings:
PIXELDIFF = 20
IMAGDIFF = 50
come close to what you want (97 and 98 are detected, but also 101).

I will add the 2 settings as parameters to findChanges. Available in 2.0.4 in the next days.

BTW: your Python OpenCV example has a problem:
subtract will set results < 0 to 0 and > 255 to 255.
Hence it depends on the order of the 2 given mats.

We use absDiff to calculate the pixel diffs.
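
A small illustration of the difference in plain Python/OpenCV (values chosen just for illustration):

import cv2
import numpy as np

a = np.array([[10]], dtype=np.uint8)
b = np.array([[200]], dtype=np.uint8)

# saturating subtraction: negative results are clipped to 0,
# so the result depends on the argument order
print(cv2.subtract(a, b))  # [[0]]
print(cv2.subtract(b, a))  # [[190]]

# absolute difference: the order does not matter
print(cv2.absdiff(a, b))   # [[190]]
print(cv2.absdiff(b, a))   # [[190]]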

larryli (larryli2020) said :
#20

Dear RaiMan,

Thanks for your answer.
In my local environment I added a new function, similar to what you said, an isChanged() function:
public static boolean isChanged(FindInput2 findInput, int threshold)

It works as well.
I will wait for your new release.

For subtract: yes, but if the result is < 0, it starts from 255 again. It was just a sample test anyway.
I also use absdiff now.

thank you very much.

RaiMan (raimund-hocke) said :
#21

--- about subtract (from the OpenCV docs):

Difference between two arrays, when both input arrays have the same size and the same number of channels:
dst(I) = saturate(src1(I) − src2(I))

... and saturate exactly means: results < 0 become 0 and results > 255 become 255.

--- "if it is < 0, it starts from 255 again"
that is wrap-around, which is not what subtract does

larryli (larryli2020) said :
#22

Yes, you are right
thanks

larryli (larryli2020) said :
#23

Dear RaiMan,

I checked the GitHub code and found that you added two functions to set the pixel diff and the image diff: setFindChangesPixelDiff and setFindChangesImageDiff.

You also reset the values somewhere.
Can I confirm when to call these set functions?
finder.setFindChangesPixelDiff(30);
finder.setFindChangesImageDiff(50);
Because on my side, I tried to call these two functions before observe and added a debug log in the findChanges() function.

The values stay at 3 and 5.

thanks for your answer.

RaiMan (raimund-hocke) said :
#24

The usage is:

finder = Finder(someImage)
finder.setFindChangesImageDiff(xx)
finder.findChanges(someOtherImage)

It only works for this Finder object, because the values are reset to the defaults with a new Finder object.

For onChange() it does not do anything.

larryli (larryli2020) said :
#25

Oh, I understand now.
Thanks for your answer.

BTW: after the threshold, if I want to get r, g, b, how do I do that?
Imgproc.threshold(mDiffAbs, mDiffTresh, PIXEL_DIFF_THRESHOLD, 0.0, Imgproc.THRESH_TOZERO);

In the following code it seems mats.size() is 0; I think it should be 3 for r, g, b. Is it related to the threshold?
Because in Python I would use b, g, r = cv2.split(mDiffTresh[1])

List<Mat> mats = new ArrayList<Mat>();
Core.split(mDiffTresh, mats);
int count = (mats != null) ? mats.size() : 0;
Debug.action("isChanged "+threshold+" Counter = "+count);

Sorry, I am not familiar with how to debug a Mat in the SikuliX code. Do you know where the problem might be?
thanks

RaiMan (raimund-hocke) said :
#26

If you refer to the SikuliX source code in findChanges:

be aware:
      Imgproc.cvtColor(findInput.getBase(), previousGray, toGray);
      Imgproc.cvtColor(findInput.getTarget(), nextGray, toGray);
      Core.absdiff(previousGray, nextGray, mDiffAbs);
      Imgproc.threshold(mDiffAbs, mDiffTresh, PIXEL_DIFF_THRESHOLD, 0.0, Imgproc.THRESH_TOZERO);

... mDiffAbs, and hence mDiffTresh, is grayscale (only one channel)
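
A quick way to see this in plain Python/OpenCV (a sketch; the arrays stand in for the Mats above):

import cv2
import numpy as np

# a small grayscale image (single channel), standing in for mDiffTresh
gray = np.zeros((2, 2), dtype=np.uint8)
print(len(cv2.split(gray)))   # 1 -- there is nothing to split into b, g, r

# splitting into b, g, r only makes sense on a 3-channel (color) image
color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
b, g, r = cv2.split(color)
print(len(cv2.split(color)))  # 3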

larryli (larryli2020) said :
#27

Oh, sorry, now I understand it.