Cumulative per-build pass/fail statistics in image report

Asked by Vivia Nikolaidou

Hello,

I am trying to add some per-build pass/fail graphs to the Image Report (build_A: X pass, Y fail; build_B: M pass, N fail; etc.). However, I have not found a convenient way to get the data I need.

Right now, I have to parse each row of the table, adding passes and fails to the corresponding build as I go. I did a quick proof of concept in the JavaScript generated by the template:

// Accumulate per-build pass/fail totals on the client; the arrays are
// declared here so the snippet is self-contained.
var ddates = [], dpasses = [], dfails = [];
{% for col in cols %}
  ddates.push([{{ forloop.counter0 }}, {{ col.number }}]);
  dpasses[{{ forloop.counter0 }}] = 0;
  dfails[{{ forloop.counter0 }}] = 0;
{% endfor %}
{% for row_data in table_data %}
  {% for result in row_data %}
    {% if result.present %}
      {# forloop.counter0 is the inner loop's counter, i.e. the column index #}
      dpasses[{{ forloop.counter0 }}] += {{ result.passes }};
      dfails[{{ forloop.counter0 }}] += {{ result.total }} - {{ result.passes }};
    {% endif %}
  {% endfor %}
{% endfor %}

However, this is slow, and from examining the model and the view I didn't find a more convenient way to get this data.

It would be faster to move these calculations inside the view, but do you have a better recommendation? Or am I perhaps missing something obvious?

Thank you,

Vivia

Question information

Language: English
Status: Answered
For: LAVA Dashboard (deprecated)
Assignee: No assignee

Michael Hudson-Doyle (mwhudson) said (#1):

This looks reasonable to me. Why/where is it slow? During rendering or in the browser?

Vivia Nikolaidou (n-vivia) said (#2):

I really don't want the calculations in the JS:

1) Performance-wise, isn't this going to collapse anyway once I have a lot of data to deal with? (And I will.)
2) Architecturally, it seems wrong to make the client do these calculations; the server is the one that knows the logic.
3) I'd also like to add these graphs to "Image reports", with one graph per image. If I understand correctly how this works, with JS each graph's calculations would have to be redone every time the user clicks on an image, which wouldn't be the case if I moved them into the view (or whatever other solution you might recommend).

Michael Hudson-Doyle (mwhudson) said (#3):

1) I guess it kind of doubles the amount of work rendering the page does, which is perhaps too much.
2) I'm sorry, I don't care too much about that sort of thing :-)
3) In this case then yes, you will definitely want to do something different. It's probably not too hard to write a view that returns the data you need in json format for each image report -- the queries will be fairly complicated but nothing ridiculous (I hope). Do you have experience with the Django ORM?
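
For illustration, here is a rough sketch of what such a JSON view might look like. The helper get_image_report_data() and its return shape are made up for the example; the real version would query the dashboard models:

import json

from django.http import HttpResponse


def get_image_report_data(image_report_name):
    # Placeholder for the real dashboard queries: returns a list of
    # (build, passes, total) tuples for the named image report.
    return []


def image_report_graph_data(request, image_report_name):
    # Return per-build pass/fail counts for one image report as JSON,
    # ready to be consumed by the graphing code on the client.
    payload = [
        {"build": build, "passes": passes, "fails": total - passes}
        for build, passes, total in get_image_report_data(image_report_name)
    ]
    return HttpResponse(json.dumps(payload), content_type="application/json")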

Vivia Nikolaidou (n-vivia) said (#4):

OK, I came up with this code:

http://people.collabora.com/~vivia/image-report-graphs.patch

But the results on the graph are a bit weird; they're not the same as the ones in the table. What's going on? I can't figure out what it might be :(

Michael Hudson-Doyle (mwhudson) said (#5):

Which branch was that patch generated against? I think you must have some local changes, as it doesn't apply cleanly. Easy enough to apply by hand, but an unexpected road bump :-)

The graphs look reasonable to me: http://people.linaro.org/~mwh/image-report-graph.png

One thing that might cause oddness is if the filter the image report is based on specifies tests or test cases -- are you doing that?

Michael Hudson-Doyle (mwhudson) said (#6):

Oh, and having the graphs on the image-reports page makes it take nearly a minute to render for me. We'll need to do something different there :-)

Vivia Nikolaidou (n-vivia) said (#7):

Thanks a lot for your time! :)

I may have some local changes, as I'm using LAVA from the Collabora repositories... when it's finished I'll do it properly on bzr. :)

The filter seems to just define a particular stream, a particular user, and rootfs.type. In any case, it should work in the general case for any filter, right? I saw the code in get_test_runs_impl in models.py, and from what I understand I'll have to inject those conditions into my bundles.annotate(...) somehow...?

As for the image-reports page, I'm thinking it would also be much more helpful to have one cumulative graph for all images, with stacked bars containing the pass/fail statistics for each image. It would probably be much faster, too. I could generate a graph report (in views.py) for each image, get totalpasses and grandtotal for each one, and feed those to the graph. That would move a lot of work out of the JS and speed things up a bit. But can you think of a faster method (maybe getting the database to compute the sums for me)?
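
For what it's worth, one way to let the database do the summing is Django's aggregation API. This is only a sketch of the idea; the field names below are invented (loosely based on the names used in this thread) and would have to be mapped onto the real dashboard schema:

from django.db.models import Sum


def per_build_counts(test_runs):
    # Group the given test-run queryset by build and let the database add up
    # the denormalized pass/fail counts, one output row per build.
    # "build_number" and the denormalization field names are illustrative only.
    return (
        test_runs
        .values("build_number")
        .annotate(passes=Sum("denormalization__count_pass"),
                  fails=Sum("denormalization__count_fail"))
    )

# Usage sketch: per_build_counts(TestRun.objects.filter(...)), applying the
# filter's conditions to the queryset before the aggregation.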

Vivia Nikolaidou (n-vivia) said (#8):

Actually, what I meant was one graph _per image set_, covering all images in that image set. I made a proof of concept of what I described above (I also need a cleaner way to leave out a graph when there is no data). Could you tell me how it does performance-wise? (I don't have enough data locally to test that myself.)

http://people.collabora.com/~vivia/graphs2.patch

I'd also appreciate a screenshot of that. :)

Vivia Nikolaidou (n-vivia) said (#9):

21:56 < mwhudson> so i think i misled you a little with my advice last week
21:56 < mwhudson> report_for_graph goes from match to test run to bundle to test runs to result counts
21:56 < mwhudson> that's silly
21:56 < mwhudson> it should just go from match to test run to result counts
21:56 < mwhudson> for match in matches:
21:56 < mwhudson>     counts = defaultdict(int)
21:57 < mwhudson>     for test_run in match.test_runs:
21:57 < mwhudson>         counts['pass'] += test_run.denormalization.count_pass
21:57 < mwhudson>         counts['total'] += test_run.denormalization.count_all()
21:57 < mwhudson> vivia: something like that ^
21:58 < mwhudson> vivia: that should make it a bunch faster, and maybe fix the number mismatch too
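
Tidied up, the suggestion above amounts to something like this. It is only a sketch and assumes, as in the paste, that each match exposes test_runs whose denormalization has a count_pass field and a count_all() method:

from collections import defaultdict

# `matches` comes from the surrounding report code; each match groups the
# test runs belonging to one build.
for match in matches:
    counts = defaultdict(int)
    for test_run in match.test_runs:
        counts['pass'] += test_run.denormalization.count_pass
        counts['total'] += test_run.denormalization.count_all()
    counts['fail'] = counts['total'] - counts['pass']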

Vivia Nikolaidou (n-vivia) said (#10):

Wrong numbers bug fixed \o/

Essentially, my filter had rootfs.date as the "build number attribute". I looked at the code: if the same test was run twice for the same rootfs.date, only the newest run was kept in the table.

I fixed the graph by reversing match.test_runs before the loop and only adding count_pass and count_all() if test_run.test.test_id had not been seen yet.

http://people.collabora.com/~vivia/graphs3.patch

I also incorporated the suggestions you made on IRC (pasted above), so it should be faster now. Can you tell me whether the image-reports page is usable now?
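
For reference, a minimal sketch of that fix, under the same assumptions as the snippet above: iterate match.test_runs in reverse so the run the table keeps is the one that gets counted, and skip test_ids that have already been seen:

from collections import defaultdict

for match in matches:
    counts = defaultdict(int)
    seen_test_ids = set()
    for test_run in reversed(match.test_runs):
        # Count each test only once per build, mirroring the table's behaviour
        # when the same test was run twice for one rootfs.date.
        test_id = test_run.test.test_id
        if test_id in seen_test_ids:
            continue
        seen_test_ids.add(test_id)
        counts['pass'] += test_run.denormalization.count_pass
        counts['total'] += test_run.denormalization.count_all()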

Michael Hudson-Doyle (mwhudson) said (#11):

The patch looks good. I don't have time to try it (and I'm off until Christmas now, sorry), but I suspect the image-reports page will still be too slow on our data sets -- that's a reflection of how much work producing these reports takes, though, and it's probably something we need to fix anyway.
