
Issue 771919

Starred by 3 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 2
Type: Bug-Regression




video_VDAPerf.h264: need better CPU usage metric

Reported by acourbot@chromium.org (Project Member), Oct 5 2017

Issue description

The cpu_usage.kernel and cpu_usage.user metrics depend on the total execution time and were calibrated for a scenario where the video_decode_accelerator_unittest rendered on the screen. Since https://chromium-review.googlesource.com/c/chromium/src/+/654459 this is no longer the case, which increased the percentage of time spent in both kernel and user space relative to the total execution time of the test (see also crbug.com/770596).

Considering that we are now rendering offscreen, a better metric would be to simply measure the actual time spent running the benchmark. This bug tracks that task.
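For illustration, here is a minimal, self-contained sketch of a wall-clock-relative CPU usage metric computed from getrusage(); it is not the actual video_decode_accelerator_unittest code, only an assumption about the general shape of the computation. It shows why dropping on-screen rendering, which shortens the wall time while leaving CPU time roughly unchanged, inflates the reported percentages.

// Minimal sketch, assuming a getrusage()-based metric; not the actual
// video_decode_accelerator_unittest code. CPU usage is reported as CPU time
// divided by wall-clock time, so a shorter test run inflates the percentage.
#include <sys/resource.h>
#include <chrono>
#include <cstdio>

static double ToSeconds(const timeval& tv) {
  return tv.tv_sec + tv.tv_usec / 1e6;
}

int main() {
  const auto wall_start = std::chrono::steady_clock::now();

  // ... run the decode benchmark here ...

  const double wall_seconds = std::chrono::duration<double>(
      std::chrono::steady_clock::now() - wall_start).count();

  rusage usage{};
  getrusage(RUSAGE_SELF, &usage);
  printf("cpu_usage.user   = %.2f%%\n",
         100.0 * ToSeconds(usage.ru_utime) / wall_seconds);
  printf("cpu_usage.kernel = %.2f%%\n",
         100.0 * ToSeconds(usage.ru_stime) / wall_seconds);
  return 0;
}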
 
Cc: owenlin@chromium.org hiroh@chromium.org
Cc: posciak@chromium.org
When we calculate the CPU usage, we try to play the video at a specific speed (usually 30 fps). In the code, we drop frames to make sure the video is played back at the expected rate.

So the real_time should not change much as long as we can decode quickly enough (and we should be able to). Therefore, I believe this is a bug in rendering_helper. How many frames are in the video, and what is the duration of the playback?
owenlin@: Execution time fell from ~22.59 seconds to 13.39 seconds. The video has 500 frames, but IIUC we are playing it several times during the test. So it does seem like there may be an issue. Could you take a look locally, maybe?
Actually, the video is played only once for the cpu_usage analysis (at the given FPS).
It is played another time for the decode-time analysis (decoding as fast as it can).
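To make the pacing and frame-dropping behavior concrete, here is a minimal, self-contained sketch; it is not the actual rendering_helper code, and the 27 ms simulated decode time is only an assumption derived from the 500 frames / 13.39 s figure above.

// Minimal sketch, not the actual rendering_helper code: pace playback at a
// fixed target FPS and count a frame as dropped when it is not delivered by
// its scheduled display time.
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

// Stand-in for the decoder: pretend every frame takes 27 ms to decode
// (assumption, roughly 13.39 s / 500 frames).
Clock::time_point DecodeNextFrame(Clock::time_point previous_done) {
  return previous_done + std::chrono::milliseconds(27);
}

int main() {
  constexpr int kTargetFps = 50;     // The crowd2160 test data is paced at 50 fps.
  constexpr int kTotalFrames = 500;  // crowd2160 has 500 frames.
  const auto frame_interval = std::chrono::duration<double>(1.0 / kTargetFps);

  int dropped = 0;
  const auto start = Clock::now();
  auto delivered_at = start;
  for (int i = 0; i < kTotalFrames; ++i) {
    const auto display_deadline = start + i * frame_interval;
    delivered_at = DecodeNextFrame(delivered_at);  // delivery_time of frame i.
    if (delivered_at > display_deadline)
      ++dropped;  // Frame missed its slot: drop it to keep the playback rate.
  }
  printf("frame_drop_rate = %.2f\n",
         static_cast<double>(dropped) / kTotalFrames);
  return 0;
}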

According to crbug.com/770596, this is 4K H264 decoding: there are 500 frames in the video "crowd2160" and the FPS is set to 50.
So it should play back in 10 seconds.

Actually, I believe applying the 4k label to squawks is not correct. The device's performance cannot support such playback.
So the "cpu usage" we get here is not meaningful.
Good point, 4k playback on this device is painfully slow. But in that case the dropped frame metric should be much higher than what it currently is (< 1%), which still suggests that there is something wrong with the test.
Ah, turns out the dropped frame metric's range is 0-1. So a value of 0.96 here means 96% dropped frames, which is what we expect on squawks.

The label is reported as a percentage though, which is confusing. But maybe we can lift the confusion by choosing a better title for it?

Comment 9 by hiroh@chromium.org, Oct 8 2017

From the experiment data, squawks obviously doesn't support 4K H264 decoding.
avtest_label_detect says the supported width and height on squawks are both 4096, which looks strange.
VAVDA also checks whether the current video size is supported on a device, using the same method as avtest_label_detect. If it is not supported, VAVDA returns an error and doesn't start decoding. In fact, VAVDA starts decoding the 4K video without any error.
This means that either the checking method or the squawks platform is wrong.
I bet the squawks platform has a bug. This problem would be resolved by the label detection method using a static file, discussed in crbug.com/663285.
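For reference, here is a hedged sketch of querying the VA-API driver for the maximum supported decode resolution, which is roughly the kind of check avtest_label_detect and VAVDA rely on; the actual Chromium code paths and attributes may differ, so treat this only as an illustration of where a 4096x4096 answer can come from.

// Hedged sketch: ask the VA-API driver for the maximum surface size it claims
// to support for H.264 decode. Not the actual avtest_label_detect / VAVDA code.
#include <fcntl.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>
#include <cstdio>
#include <vector>

int main() {
  int drm_fd = open("/dev/dri/renderD128", O_RDWR);  // Render node path may vary.
  if (drm_fd < 0) return 1;

  VADisplay display = vaGetDisplayDRM(drm_fd);
  int major = 0, minor = 0;
  if (vaInitialize(display, &major, &minor) != VA_STATUS_SUCCESS) return 1;

  VAConfigID config = VA_INVALID_ID;
  if (vaCreateConfig(display, VAProfileH264High, VAEntrypointVLD, nullptr, 0,
                     &config) != VA_STATUS_SUCCESS) return 1;

  // First call returns the attribute count, second call fills the attributes.
  unsigned int num_attribs = 0;
  vaQuerySurfaceAttributes(display, config, nullptr, &num_attribs);
  std::vector<VASurfaceAttrib> attribs(num_attribs);
  vaQuerySurfaceAttributes(display, config, attribs.data(), &num_attribs);

  for (const VASurfaceAttrib& attrib : attribs) {
    if (attrib.type == VASurfaceAttribMaxWidth)
      printf("max decode width:  %d\n", attrib.value.value.i);
    if (attrib.type == VASurfaceAttribMaxHeight)
      printf("max decode height: %d\n", attrib.value.value.i);
  }

  vaDestroyConfig(display, config);
  vaTerminate(display);
  close(drm_fd);
  return 0;
}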

Comment 10 by hiroh@chromium.org, Oct 10 2017

The frames per second (FPS) of the tested H264 4k data is 50.
The frame drop rate is computed from delivery_time. From the delivery_time it seems that squawks can decode 4k 30fps H264 video, but not a 50fps one.
In fact, the 4k H264 video on crosvideo.appspot.com, whose fps is 30, plays back smoothly on squawks.

Comment 11 by hiroh@chromium.org, Oct 10 2017

The method for checking the supported resolution is correct: intel-vaapi-driver returns 4096 as the supported width and height on squawks, according to the driver code.
https://github.com/01org/intel-vaapi-driver/blob/c8f2493fd870438c3b7b1c269ca95ef334ed094a/src/i965_device_info.c
If my calculation is correct, with rendering we can only support up to 22 fps (500/22.59) on squawks.

I am not sure what it actually means when intel-vaapi-driver reports that 4k is supported. We can decode a 4k video at 500/13.39 ≈ 37 fps.
