Support 8bit and 16bit video stream
Issue description
Recently, the GREY (i.e. Y8, 8-bit) and Z16 (16-bit depth) formats were added to UVC (USB Video Class). The latest Linux kernel (v4.4) and Windows already support these formats.
There are already camera products on the market producing video streams in these formats, e.g. Intel RealSense. However, Chromium doesn't support these formats, so video rendering in Chromium is broken. See the attachment.
# Introduce 8bit and 16bit video pixel format.
So I want to add 8-bit and 16-bit VideoPixelFormat values.
There are two options:
1. add Y8 and Y16
2. add Y8 and Z16
Which one do you prefer? If Y16 is added, then all kinds of 16-bit video streams (e.g. depth, infrared) will use Y16. Otherwise, if Z16 is added for depth only, we may still need to add Y16 in the future.
# Support Web API
The <video> element must be able to play 8-bit and 16-bit video streams.
navigator.mediaDevices.getUserMedia(constraints).then(function (stream_16bit) {
  videoElement.srcObject = stream_16bit;
  videoElement.play();
});
So we have to figure out how to support
1. visual representation of <video>
2. Internal value for WebGL texture
3. Internal value for Canvas2D ImageData
4. Internal value for ImageBitmap
My proposal is
1. visual representation of <video>: the 8-bit stream is shown in the red channel; the 16-bit stream is shown in the red (lower 8 bits) and green (upper 8 bits) channels.
Some may prefer a grey representation, but grey would not be consistent with the actual internal values that WebGL or Canvas2D will access.
2. The 8-bit stream can be uploaded to WebGL as follows:
texImage2D(context.TEXTURE_2D, 0, context.LUMINANCE, context.LUMINANCE, context.UNSIGNED_BYTE, video_8bit)
texImage2D(context.TEXTURE_2D, 0, context.ALPHA, context.ALPHA, context.UNSIGNED_BYTE, video_8bit)
In WebGL 2:
texImage2D(context.TEXTURE_2D, 0, context.R8, context.RED, context.UNSIGNED_BYTE, video_8bit)
texImage2D(context.TEXTURE_2D, 0, context.R8I, context.RED_INTEGER, context.UNSIGNED_BYTE, video_8bit)
texImage2D(context.TEXTURE_2D, 0, context.R8UI, context.RED_INTEGER, context.UNSIGNED_BYTE, video_8bit)
The 16-bit stream can be uploaded to WebGL as follows:
texImage2D(context.TEXTURE_2D, 0, context.LUMINANCE_ALPHA, context.LUMINANCE_ALPHA, context.UNSIGNED_SHORT, video_16bit)
texImage2D(context.TEXTURE_2D, 0, context.RGB, context.RGB, context.UNSIGNED_SHORT_5_6_5, video_16bit)
The latter case has to reconstruct the 16-bit integer from the tex.rgb value in GLSL (a sketch of that reconstruction is included after the WebGL 2 calls below).
In WebGL 2:
texImage2D(context.TEXTURE_2D, 0, context.R16F, context.RED, context.UNSIGNED_SHORT, video_16bit)
texImage2D(context.TEXTURE_2D, 0, context.R16I, context.RED_INTEGER, context.UNSIGNED_SHORT, video_16bit)
texImage2D(context.TEXTURE_2D, 0, context.R16UI, context.RED_INTEGER, context.UNSIGNED_SHORT, video_16bit)
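For the RGB / UNSIGNED_SHORT_5_6_5 case above, a minimal sketch of the GLSL reconstruction (illustrative only; u_texture is an assumed sampler name, and the math assumes the 16-bit value is reinterpreted as packed 5-6-5):
// Hypothetical WebGL1 fragment shader: rebuild the 16-bit value from a
// texture that was uploaded as RGB / UNSIGNED_SHORT_5_6_5.
var fragmentShaderSource = [
  'precision highp float;',
  'uniform sampler2D u_texture;',
  'varying vec2 v_texCoord;',
  'void main() {',
  '  vec3 rgb = texture2D(u_texture, v_texCoord).rgb;',
  '  // r holds bits 15-11, g holds bits 10-5, b holds bits 4-0.',
  '  float value16 = floor(rgb.r * 31.0 + 0.5) * 2048.0 +',
  '                  floor(rgb.g * 63.0 + 0.5) * 32.0 +',
  '                  floor(rgb.b * 31.0 + 0.5);',
  '  float depth = value16 / 65535.0;  // normalized to [0, 1]',
  '  gl_FragColor = vec4(depth, depth, depth, 1.0);',
  '}'
].join('\n');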
kbr@, what do you think? Do I need to change the WebGL spec?
3. ImageData.data always has 4 components per pixel; it expects RGBA. In the 8-bit case, only the first of the 4 components will be filled. In the 16-bit case, the first two components will be filled (see the sketch after #4 below).
https://html.spec.whatwg.org/multipage/scripting.html#imagedata
4. ImageBitmap is handled in the same manner as Canvas2D ImageData: the red component (8-bit) or the red and green components (16-bit) will be filled.
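To make #3 and #4 concrete, a minimal sketch of how a page could reassemble the 16-bit values from ImageData under this proposal (hypothetical helper; it assumes R holds the lower byte and G the upper byte, as described above):
// Hypothetical consumer of the proposed ImageData layout for 16-bit streams.
function read16BitFrame(ctx2d, width, height) {
  var rgba = ctx2d.getImageData(0, 0, width, height).data;  // Uint8ClampedArray
  var values = new Uint16Array(width * height);
  for (var i = 0, j = 0; i < values.length; i++, j += 4) {
    // Proposal #3: R = lower 8 bits, G = upper 8 bits, B and A unused.
    values[i] = rgba[j] | (rgba[j + 1] << 8);
  }
  return values;
}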
junov@, what do you think about #3 and #4?
# Spec work
In addition, I wonder whether we have to change any specs. I read the MediaSource and MediaCapture (getUserMedia) specs, and they never mention video pixel formats; the specs do define MIME types, though.
https://w3c.github.io/media-source/#byte-stream-formats
https://w3c.github.io/mediacapture-main/getusermedia.html#widl-MediaStreamTrack-getConstraints-MediaTrackConstraints
https://w3c.github.io/media-source/byte-stream-format-registry.html
As I understand it, browser implementations should handle the video format by themselves; the specs delegate this implementation detail to the browsers. What do you think?
,
Jun 29 2016
The attachment was captured using a RealSense R200 on Ubuntu 16.04 and https://webrtc.github.io/samples/src/content/devices/input-output/
,
Jun 29 2016
,
Jun 29 2016
I don't like the idea of uploading 16-bit data as GL_UNSIGNED_SHORT_5_6_5 or putting the high bits in one channel and the low bits in another channel. If the implementation doesn't support one of the GL_R16* formats, we should just discard the lower bits and upload it as GL_R8* instead. We already have support for using half-floats to show videos in PIXEL_FORMAT_YUV420P10; I don't think this should behave much differently.
,
Jun 29 2016
I'd prefer to focus on supporting these textures in WebGL 2.0 first, because it has all of the desired texture formats already (RED and RED_INTEGER textures in both 8- and 16-bit formats). The LUMINANCE and ALPHA formats are deprecated, and it's not desirable to continue to hack more web features on top of them. A normative note could be added to the WebGL 2.0 spec stating that videos of certain types must be uploadable into certain kinds of texture formats losslessly. Conformance tests should then be added. It would be worth prototyping the support for uploading these textures to WebGL 2.0 and seeing what can be done with it.
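A minimal sketch of what such a prototype / conformance-style check might look like, assuming a WebGL 2.0 context gl, a playing 16-bit video element, and that the spec ends up permitting lossless uploads from such videos into R16UI (the normative-note question above):
// Hypothetical lossless-upload check for a 16-bit video frame in WebGL 2.0.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);  // R16UI is not filterable
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R16UI, gl.RED_INTEGER, gl.UNSIGNED_SHORT, video);

// Read the texels back through a framebuffer and compare against reference data.
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
var pixels = new Uint16Array(video.videoWidth * video.videoHeight);
// Note: RED_INTEGER/UNSIGNED_SHORT readback is implementation-dependent; the
// always-available path for integer color buffers is RGBA_INTEGER/UNSIGNED_INT.
gl.readPixels(0, 0, video.videoWidth, video.videoHeight,
              gl.RED_INTEGER, gl.UNSIGNED_SHORT, pixels);
// A conformance test would now assert that pixels match the source frame bit-for-bit.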
,
Jun 29 2016
P.S. I think it's a great idea to support these video formats. It's been desired for a long time to be able to interact better with depth cameras on the web.
,
Jun 30 2016
kbr@, hubbe@, I agree that R8 and R16 are the desirable formats, and I also want to focus on R8 and R16 support. On the other hand, we cannot prevent users from calling texImage2D with undesirable formats such as RGB565 and LUMINANCE, e.g.
texImage2D(context.TEXTURE_2D, 0, context.RGB, context.RGB, context.UNSIGNED_SHORT_5_6_5, video_16bit)
texImage2D() can upload the source to any destination whose bits per pixel match. Do you want to allow only R8 and R16 and emit GL_INVALID_OPERATION in the other cases? If so, the implementation itself will be much simpler.
P.S. kbr@, thanks for the support. This will enable depth cameras on the web as well as infrared cameras. Someone wants to use Chromium to detect heat leakage. :)
,
Jun 30 2016
#7 - As kbr@ and hubbe@ suggested, I want to use only R8 and R16. That means we would support depth cameras only on devices with OpenGL ES 3.0 or above; is that OK? VideoFrame is essentially a wrapper around a texture in Chromium. If we want to support OpenGL ES 2.0 contexts, a GL_RGB texture would have to be used for Y8 and Y16 video frames, because the LUMINANCE and ALPHA formats are deprecated, and that would make the code base quite messy.
,
Jun 30 2016
I think it's OK if we require ES3. However, I think most ES2 devices support the half-float extension, so if we could convert the 16-bit integers into 16-bit floats and use that texture format instead, we could support most existing devices.
,
Jun 30 2016
Generating INVALID_OPERATION sounds fine in the case where the user attempts to upload a 16-bit video to e.g. RGB_565 -- or, for that matter, to a WebGL 1.0 context at all. Eventually supporting uploads to half-float formats in WebGL 1.0 + the OES_texture_half_float extension sounds fine, but I think WebGL 2.0 should be the initial focus. The 16-bit integer texture formats sound like the perfect fit for this use case, and they're initially available in ES 3.0 / WebGL 2.0.
,
Jun 30 2016
#10 - kbr@, thank you for the clear opinion. Initially this will be available in ES 3.0 or above, and VideoFrame will use an R8 or R16 texture internally. hubbe@, after the implementation matures, we will revisit support for WebGL 1.0 + the OES_texture_half_float extension.
,
Jul 1 2016
,
Jul 4 2016
Current status: two preparation CLs are ongoing:
https://codereview.chromium.org/2113243003/
https://codereview.chromium.org/2122573003/
and a prototype for the RealSense SR300, including the necessary parts of the preparations:
https://codereview.chromium.org/2121043002/
,
Jul 4 2016
Patch https://codereview.chromium.org/2121043002/ is fully functioning proof-of-concept code. I'm continuing work on it, but early feedback would be more than appreciated. Thanks.

The screenshot (how_to_see_it_working_windows.png) shows how a 16-bit stream from a RealSense SR300 depth camera looks in Chromium. The code uses the RG texture approach, and the fragment shader outputs the original RGB sample from the RG texture. 16-bit frames are uploaded to RG textures - lower byte to 'R' and higher byte to 'G'. This explains why the render looks the way it does in the screenshot: a color picker shows that B is 0, while R and G have the expected values. I discussed this with DS today - different renderings (e.g. blue for far-away items and red for the closest, or whatever the JS developer wants) would be implemented in WebGL by using textures created from this video.

Caveats: works only with the RealSense SR300 camera on Windows (verified on Windows 10) - a 16-bit depth video stream from the SR300. There are multiple TODO(astojilj) markers in the code; I'm going to address them in follow-up patches. The initial patch is meant to get early feedback.
,
Jul 5 2016
#14 - Although we thought R16 was the right format for 16-bit video, we now think RG8 is better: 16-bit video will be handled with an RG8 texture, no matter whether it is depth, infrared, x-ray or anything else. https://codereview.chromium.org/2122573003/ explains the rationale.

Rationale for choosing RG_88 instead of R_16:
1. GpuMemoryBuffer cannot support R_16 (e.g. Mesa supports R8 and GR88 dmabuf).
2. GL_EXT_texture_rg is more widely available than GL_EXT_color_buffer_half_float (e.g. Mesa v11.2).

WebGL can use the same code to upload 16-bit video to a texture; only the RG channels will be filled, though it consumes more memory than needed:
texImage2D(context.TEXTURE_2D, 0, context.RGB, context.RGB, context.UNSIGNED_BYTE, video_16bit)
texImage2D(context.TEXTURE_2D, 0, context.RGBA, context.RGBA, context.UNSIGNED_BYTE, video_16bit)

We will probably add this new feature in WebGL2:
texImage2D(context.TEXTURE_2D, 0, context.R8, context.RED, context.UNSIGNED_BYTE, video_8bit)
texImage2D(context.TEXTURE_2D, 0, context.RG8, context.RG, context.UNSIGNED_BYTE, video_16bit)
However, there is one caveat: the Web API never exposes the video's internal format, so the user cannot be sure this will work with a given video stream. We need more discussion about this topic.

In addition, R16 might not be supported, because there is no easy way to convert RG to R16: glCopyTexImage2D(RG -> R16) doesn't work, and an R16 texture cannot be bound to an FBO, which CopyTextureCHROMIUM requires.
texImage2D(context.TEXTURE_2D, 0, context.R16F, context.RED, context.UNSIGNED_SHORT, video_16bit)
texImage2D(context.TEXTURE_2D, 0, context.R16I, context.RED_INTEGER, context.UNSIGNED_SHORT, video_16bit)
texImage2D(context.TEXTURE_2D, 0, context.R16UI, context.RED_INTEGER, context.UNSIGNED_SHORT, video_16bit)

In conclusion, the RG format makes sense for 16-bit video, in the same sense that 32-bit video uses RGBA and 24-bit video uses RGB. The RG format fits the existing GPU and media pipeline well.
,
Jul 5 2016
Patch https://codereview.chromium.org/2121043002/ updated: 8-bit video and RealSense R200 support added.
Demo for 8-bit infrared (Y8) on R200 - infrared_8bit_r200.png
Demo for 16-bit depth (Z16) on R200 - depth_16bit_r200_windows.png
,
Jul 6 2016
Using RG88 to upload 16-bit data is an ugly hack in my opinion. What's worse is that people will need special shaders for it, and filtering becomes impossible. Also, as soon as people start using this hack, we will never be able to remove it.
,
Jul 6 2016
I agree with hubbe@'s assessment that splitting the depth data across two color channels is a highly undesirable direction. It's a decision that will never be able to be reversed.

The availability of GL_EXT_texture_rg over GL_EXT_color_buffer_half_float should not be an issue. In the discussion above, we were only considering adding support to ES 3.0 / WebGL 2.0 for these texture formats. ES 3.0 has support for R16UI textures in the core API. One concern is that these textures are not filterable. Still, if a user wants to do manual bilinear filtering, that is much easier if each component is in a single color channel.

If R16F (half-float) textures were used instead, would they provide enough precision? There are only 10 (or, perhaps implicitly, 11) mantissa bits. They are, however, filterable -- and also supported in the ES 3.0 API by default.

Does the RealSense camera provide the depth data into CPU-addressable memory, or does it digitize directly into an OpenGL texture? If into a texture, what is the internal format that is used?

Could you explain more why the limitation that GpuMemoryBuffers can not use R16 textures is a problem for this use case? It does not seem a compelling reason to me to split the depth data across the red and green channels.
,
Jul 7 2016
Thank you for your feedback. All the concerns make sense. All options (R16UI, R16F, RG88) have their own pros and cons. Let me compare them; pros (+) and cons (-):

R16F
+ intuitive
+ filterable
+ OpenGL ES2 support (GL_EXT_color_buffer_half_float)
- cannot copy, because it is non-renderable and glCopyTexImage2D doesn't support it
- int-to-float CPU conversion
- cannot support GpuMemoryBuffer

R16UI
+ intuitive
+ can copy
- not filterable
- cannot support GpuMemoryBuffer
- no OpenGL ES2 support

RG88
+ can copy
+ supports GpuMemoryBuffer
+ OpenGL ES2 support (GL_EXT_texture_rg)
- not filterable
- not intuitive

In my opinion, R16UI is better than R16F, mainly because R16F requires an int-to-float CPU conversion [1]; GL_R16F can handle only GL_HALF_FLOAT and GL_FLOAT.
[1] https://www.khronos.org/opengles/sdk/docs/man3/html/glTexImage2D.xhtml

We cannot copy an R16F video texture to a WebGL texture. That's because R16F isn't renderable [1] and glCopyTexImage2D doesn't support it [2], so glCopyTextureCHROMIUM cannot copy it either, because a non-renderable texture cannot be attached to an FBO.
[2] https://www.khronos.org/opengles/sdk/docs/man3/html/glCopyTexImage2D.xhtml

R16UI needs a manual bilinear filter, but that is worth it to avoid the CPU conversion.

R16UI vs RG88 is more controversial. RG88 feels non-intuitive and needs 2-3 lines of shader code like
float getFloatFromRG(vec4 rg) { return ((rg.g * 256.0) + rg.r) / 256.0; }
Both R16UI and RG88 require manual bilinear filtering shader code. The texture lookup code for RG8 will use this helper function:
float s1 = getFloatFromRG(texture2D(Texture0, coord1));

RG88 can support GpuMemoryBuffer on Android and ChromeOS easily (see below for the details about GpuMemoryBuffer). In my opinion, RG88 deserves the small additional code needed to support GpuMemoryBuffer. On the other hand, as I mentioned earlier, web developers cannot know the video format, so they cannot know whether the video is RGBA, RGB, NV12, 16-bit or 8-bit. They will probably want to upload the video to a WebGL texture using as-is code:
texImage2D(context.TEXTURE_2D, 0, context.RGBA, context.RGBA, context.UNSIGNED_BYTE, video_16bit)
Intuitively, R will hold the low 8 bits, G the high 8 bits, and B and A will be 0. That fits an RG8 texture well.

> Using RG88 to upload 16-bit data is an ugly hack in my opinion. What's worse is that people will need special shaders for it, and filtering becomes impossible. Also, as soon as people start using this hack, we will never be able to remove it.

R16UI also needs a special shader for the bilinear filter, and the 16-bit reconstruction is much easier than a manual bilinear shader.

> I agree with hubbe@'s assessment that splitting the depth data across two color channels is a highly undesirable direction. It's a decision that will never be able to be reversed.
>
> The availability of GL_EXT_texture_rg over GL_EXT_color_buffer_half_float should not be an issue. In the discussion above, we were only considering adding support to ES 3.0 / WebGL 2.0 for these texture formats. ES 3.0 has support for R16UI textures in the core API.
>
> One concern is that these textures are not filterable. Still, if a user wants to do manual bilinear filtering, that is much easier if each component is in a single color channel.

16-bit reconstruction is not very difficult.

> If R16F (half-float) textures were used instead, would they provide enough precision? There are only 10 (or, perhaps implicitly, 11) mantissa bits. They are, however, filterable -- and also supported in the ES 3.0 API by default.

R16F has 3 issues:
1. a WebGL texture cannot be created from it, because it cannot be copied
2. it requires an int-to-float CPU conversion
3. GpuMemoryBuffer cannot support R16F.

> Does the RealSense camera provide the depth data into CPU-addressable memory, or does it digitize directly into an OpenGL texture? If into a texture, what is the internal format that is used?

The RealSense camera provides CPU-addressable memory. The data type is unsigned short, which is Z16 [1] in the UVC (USB Video Class) spec. The RealSense camera is just another USB camera; v4l2 or the Windows media framework will capture the video frames in the same way as for a regular camera device.
[1] https://github.com/torvalds/linux/blob/master/include/uapi/linux/videodev2.h#L627

> Could you explain more why the limitation that GpuMemoryBuffers can not use R16 textures is a problem for this use case? It does not seem a compelling reason to me to split the depth data across the red and green channels.

GpuMemoryBuffer is Chromium's abstraction of GPU memory. Chromium's multi-process architecture makes it impossible to use glMapBufferRange() for zero-copy textures, so Chromium has GpuMemoryBuffer backed by platform-specific backends. Zero-copy video playback also uses GpuMemoryBuffer, which is why device capture needs GpuMemoryBuffer support to save one copy.

ChromeOS, Android: support only RG8 - https://github.com/torvalds/linux/blob/master/include/uapi/drm/drm_fourcc.h#L45
MacOSX: can support RG8, R16UI, R16F - https://developer.apple.com/reference/uikit/ciimage/1652877-pixel_formats

Currently the above 3 platforms support native GpuMemoryBuffers. ChromeOS and Android implement it using dma_buf, MacOSX uses IOSurface, and Windows and Linux use a software fallback.

dma_buf is GPU memory in the Linux kernel. When glTexImage2D creates a texture, one dma_buf is created in the kernel. The Linux kernel has the DRM_FORMAT_GR88 [1] format but doesn't have anything like a DRM_FORMAT_R16 format. Mesa might abuse DRM_FORMAT_GR88 to support R16UI and R16F textures.
[1] https://github.com/torvalds/linux/blob/master/include/uapi/drm/drm_fourcc.h#L45

A dma_buf is bound to an EGLImage and a GL texture via the EGL_EXT_image_dma_buf_import extension [2].
[2] https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt
This extension creates the texture based on the dma_buf format (i.e. DRM_FOURCC). DRM_FORMAT_GR88 is treated as an RG8 texture; there is no way to make an R16UI or R16F texture.

There would be 2 options to support R16UI or R16F native GpuMemoryBuffers:
1. Add R16 and R16F to DRM_FOURCC. Note: DRM_FOURCC has never had a float-type format. This means all drivers would have to implement it in the kernel and in the 3D library (e.g. Mesa).
2. Change the EGL_EXT_image_dma_buf_import extension to receive a texture format.
Both options would require a lot of time, and I'm not sure either would be acceptable to the kernel or OpenGL communities.
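For reference, a minimal fragment-shader sketch (illustrative only; Texture0 and texSize are assumed uniform names) of the RG88 reconstruction plus the manual bilinear filtering that both RG88 and R16UI would need:
var rg88FragmentSource = [
  'precision highp float;',
  'uniform sampler2D Texture0;  // RG88 texture holding the 16-bit frame',
  'uniform vec2 texSize;        // texture size in texels',
  'varying vec2 v_texCoord;',
  '',
  'float getFloatFromRG(vec4 rg) {',
  '  // low byte in R, high byte in G; dividing by 257 maps the full 16-bit',
  '  // range exactly to [0, 1].',
  '  return (rg.g * 256.0 + rg.r) / 257.0;',
  '}',
  '',
  '// Manual bilinear filter: sample the four nearest texels and blend.',
  'float bilinearDepth(vec2 uv) {',
  '  vec2 texel = 1.0 / texSize;',
  '  vec2 f = fract(uv * texSize - 0.5);',
  '  vec2 base = uv - f * texel;',
  '  float tl = getFloatFromRG(texture2D(Texture0, base));',
  '  float tr = getFloatFromRG(texture2D(Texture0, base + vec2(texel.x, 0.0)));',
  '  float bl = getFloatFromRG(texture2D(Texture0, base + vec2(0.0, texel.y)));',
  '  float br = getFloatFromRG(texture2D(Texture0, base + texel));',
  '  return mix(mix(tl, tr, f.x), mix(bl, br, f.x), f.y);',
  '}',
  '',
  'void main() {',
  '  float d = bilinearDepth(v_texCoord);',
  '  gl_FragColor = vec4(d, d, d, 1.0);  // visualize depth as grey',
  '}'
].join('\n');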
,
Jul 7 2016
I'd like to point out that for our internal video rendering (in cc/) I'm open to use whatever format works best, even if it's non-intuitive. We'd have to do benchmarks to actually know what is "best" though.

However, for webgl, we need to make sure that we expose data the right way. To me, R16UI or R16F sounds like the right way, but other people (like kbr@) knows webgl a lot better than I do. I don't think there is any real reason why the cc/ code and webgl has to use the same format.

As for GpuMemoryBuffers, as far as I know there is no real need to support any format that can't be promoted into overlays somehow, and at this time I don't think there are any 16-bit formats that can be promoted into overlays, so adding gpu memory buffer support for 16-bit stuff is probably premature.
,
Jul 7 2016
Thank you for the quick feedback.

> I'd like to point out that for our internal video rendering (in cc/) I'm open to use whatever format works best, even if it's non-intuitive. We'd have to do benchmarks to actually know what is "best" though.

R16F requires an additional memory allocation, an int-to-half-float conversion and a copy. R16UI and RG8 would be identical in terms of performance and power.

> However, for webgl, we need to make sure that we expose data the right way. To me, R16UI or R16F sounds like the right way, but other people (like kbr@) knows webgl a lot better than I do. I don't think there is any real reason why the cc/ code and webgl has to use the same format.

R16UI or R16F sounds like the right way, but the web developer cannot know that the video is a 16-bit video.

> As for GpuMemoryBuffers, as far as I know there is no real need to support any format that can't be promoted into overlays somehow, and at this time I don't think there are any 16-bit formats that can be promoted into overlays, so adding gpu memory buffer support for 16-bit stuff is probably premature.

The depth camera is one of the video_capture_device_client users, and video_capture_device_client already has GpuMemoryBuffer optimization code:
https://cs.chromium.org/chromium/src/content/browser/renderer_host/media/video_capture_device_client.cc?q=video_capture_device_c&sq=package:chromium&l=300
In this case, GpuMemoryBuffers are used to save one copy:
without GpuMemoryBuffers) camera frame -copy-> shared memory -copy by glTexImage2D-> texture
with GpuMemoryBuffers) camera frame -copy-> GpuMemoryBuffer == texture
,
Jul 7 2016
> R16UI and RG8 would be identical in terms of performance and power.

In theory, maybe. Benchmarks will tell us if the theory is correct or not.
,
Jul 7 2016
As DS mentioned, with GMBs you can save copies. In addition, you don't need a GL context to do the upload, and you can map and address the memory and then copy data in parallel. It'd also be nice to have only one upload path, not one for GMBs and one that uses texImage2D.
,
Jul 11 2016
I'd like to clarify what to benchmark (before we do it) and this sentence:

1.
> #20 by hubbe@chromium.org, Jul 7 (4 days ago)
> I'd like to point out that for our internal video rendering (in cc/) I'm open to use whatever format works best, even if it's non-intuitive. We'd have to do benchmarks to actually know what is "best" though.

Does it mean that GL_RG_EXT & GL_UNSIGNED_BYTE or GL_R16UI would be fine too if they are cc internals only? In that case, I would guess that the preferred content in:
- TexImage2D() would be a 16-bit unsigned short type, with a custom bilinear filter implementation if required.
- ImageData would be 8-bit luminance in RGBA, or just the 8-bit 'R' component filled.
If we internally use half_float, then TexImage2D would use half_float types too, and ImageData would still be the same unsigned byte. Is my understanding correct here?

2. Benchmarking would need to cover GL_RG_EXT/GL_R16UI vs GL_HALF_FLOAT vs GL_FLOAT:

a), b) GL_RG_EXT and GL_R16UI with custom bilinear filtering
vs.
c) half_float
Conversion of the normalized unsigned short to half float on the CPU side can be done instead of the memcpy here [1] (when copying data to the shared memory buffer).
[1] https://codereview.chromium.org/2121043002/diff/20001/content/browser/renderer_host/media/video_capture_device_client.cc#newcode172
For the conversion we cannot use approach [2]; we need to do a proper conversion of the normalized [0 - 0xFFFF] range to [0 - 1.0]. Approach [2] would cause precision loss because it uses only the upper 10 bits - I checked the values for some cameras (e.g. SR300), and most values are in the range [0x0 - 0x1FFF] although the camera provides a 16-bit stream.
[2] fast shift to [0.5 - 1.0]: https://cs.chromium.org/chromium/src/cc/resources/video_resource_updater.cc?rcl=0&l=494
vs.
d) GL_FLOAT
I think it is good to check the conversion cost here.

About GL_FLOAT and GL_DEPTH_COMPONENT:
If somebody would like to upload the texture to GL_DEPTH_COMPONENT, they would need to use GL_UNSIGNED_SHORT or GL_FLOAT (GL_DEPTH_COMPONENT32F), not half float.

Bilinear filtering:
I don't know how strong this requirement is for depth stream capture. If it is strong, GL_HALF_FLOAT & GL_FLOAT look preferable to unsigned integer values (given that it seems we cannot use GL_R16 with normalized shorts, only GL_R16UI).
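To make the two conversions in item 2 concrete, a small illustrative sketch (plain JavaScript; the constants describe the idea behind the fast-shift approach [2], not the exact Chromium code):
// Approach [2] (fast shift): produce a half-float bit pattern directly by
// keeping the top 10 bits and biasing into [0.5, 1.0), where all half-floats
// share one exponent. Cheap, but the low 6 bits are discarded - noticeable
// when most samples fall in [0x0, 0x1FFF].
function fastShiftToHalfBits(v16) {
  return 0x3800 | (v16 >> 6);   // 0x3800 encodes 0.5 as a half-float
}

// Proper normalization: map [0, 0xFFFF] to [0.0, 1.0]. The result would then
// be encoded as a half-float (or uploaded as GL_FLOAT); only the half-float's
// own 10/11-bit mantissa limits precision, not the conversion itself.
function normalize16(v16) {
  return v16 / 65535.0;
}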
,
Jul 11 2016
Re #24:

> Does it mean that GL_RG_EXT & GL_UNSIGNED_BYTE or GL_R16UI would be fine too if they are cc internals only?

Yes. For anything that isn't visible to web developers, the most efficient solution is almost always the right solution. You don't have to benchmark every solution; instead you should probably pick one solution and benchmark it on as many systems as you can. Of course, you also need to benchmark what is already there as a comparison. If your solution is generally better, then we probably want to use it. When we evaluate what is "best", we'd consider CPU time, GPU time, power usage, code complexity and how common the required GL extensions are.

If you can produce benchmarks on a few systems showing that your solution works and is equal to or better than the existing solution, the next step would be to check it in, but controlled by a flag. Then we can run it through our benchmarks and see if we can replicate your results on a wider range of systems.
,
Jul 11 2016
For WebGL it will be necessary to make at least one copy of the camera's input because of how the APIs work; in WebGL, one calls texImage2D or texSubImage2D, passing in the video element as the source, to get the current frame of the video. (Extensions have been proposed for WebGL to avoid this copy, but we haven't made time to push them to completion.) So while it would be ideal to have only one code path for uploading these depth cameras' output, it isn't a strict requirement, because during this copy it's possible to change the texture's internal format.

Each of these internal formats has pros and cons. Let's let the power and performance benchmarks discussed above help guide the decision about the desired upload format for WebGL. Intuitively I think R16UI or R16F would be easier for developers to deal with than RG88, but either way it's not a lot of math.

I'm also not sure of the importance of bilinear filtering for these depth textures -- does it make sense mathematically? Are the depth values linear or something else (logarithmic)?
,
Jul 11 2016
Thanks. Sounds good.

> or better than the existing solution,

I still feel that there is a misunderstanding about the patches linked to this bug:

1) http://crrev.com/2122573003/ - media: replace LUMINANCE_F16 by RG_88 for 9/10-bit h264 videos
This patch replaces existing code with the RG_88 approach, so as you pointed out, a benchmark is needed to verify that the replacement is better than the existing approach.

2) http://crrev.com/2121043002/ - 8 and 16 bpp video stream capture and render - Realsense R200 & SR300 support
This patch doesn't depend on patch 1). It is a new solution - there is no existing solution to capture and render depth and IR camera streams. Therefore, it seems good to compare this new solution with different formats (there is no existing one to compare against) and verify the comparison on the supported platform - just Windows at the moment; we plan to support more soon.

It seems that in the prototype we'll need to cover all of these cases before making a conclusion about the best format:
1. visual representation of <video>
2. internal value for WebGL texture
3. internal value for Canvas2D ImageData
4. internal value for ImageBitmap
,
Jul 12 2016
#26 Yes, let's get some numbers. Thanks. Depth values are linear. Linear interpolation might come in handy when mapping from depth camera space (e.g. 640x480) to RGB camera space (1080p), but there is some additional computation to cover there. Let's see.
,
Jul 15 2016
Some numbers are available below. Measurement was done using patch 3 here: https://crrev.com/2121043002/#ps40001

The patch adds:
- a custom bilinear filter for RG88: https://codereview.chromium.org/2121043002/diff/40001/cc/output/shader.cc#newcode2167
- use of GL_FLOAT and GL_HALF_FLOAT textures, too: https://codereview.chromium.org/2121043002/diff/40001/cc/resources/resource_provider.cc#newcode453
- conversion of uint16_t to half-float: https://codereview.chromium.org/2121043002/diff/40001/cc/resources/video_resource_updater.cc#newcode109

Measurements were done on Windows 10, Chromium 54.0.2794.0 64-bit Release_x64 developer build, Intel® Core™ i7-4770HQ CPU @ 2.20 GHz, Intel® Iris™ Pro Graphics 5200. For all formats bilinear filtering is used; only for RG88 does the code use the custom bilinear filter in the fragment shader mentioned above. The measured format is changed by uncommenting lines here: https://codereview.chromium.org/2121043002/diff/40001/cc/resources/resource_provider.cc#newcode453

Measured: 1) GPU power consumption, 2) CPU power consumption, 3) time elapsed for texture upload and conversion (conversion is needed for float and half-float textures only).

1) GPU power consumption
========================
Measured using GPU-Z 0.8.9. The window wasn't maximized for these numbers. When the window is maximized to the retina screen the results are higher but consistent, whatever the video size is.
GPU power:
RG88: 0.4-0.5 W
half-float: 0.5-0.6 W
float: 0.6-0.7 W
Also measured nearest sampling for all of the above - consistently -0.1 W for all.
GPU load:
RG88: 7-8%
half-float: 9%
float: 11%
Also measured nearest sampling for all of the above - consistently -1% for all.

2) CPU power consumption
========================
Measured using Intel Power Gadget 3.0, with several 5+ minute runs and averages taken from the logs.
Average processor power_0 (W): RG: 7.1, half-float: 7.9, float: 8
Average IA power_0 (W): RG: 0.8, half-float: 1, float: 1
Average GT power_0 (W): RG: 0.3, half-float: 0.4, float: 0.5

3) Time elapsed for texture upload and conversion
=================================================
This was done in a way that is not very convenient for continuous monitoring, but for me it was quick - it is a matter of practice. I guess I could spend some time later to study and use a microbenchmark approach. It is a Release_x64 build. I would start the page, attach the VS debugger to all Chrome processes, place a breakpoint at the line with printf("%zx\n", microseconds) below (to avoid the microseconds value being optimized out) and read the value. The patch below was applied to measure the time spent in VideoFrameExternalResources VideoResourceUpdater::CreateForSoftwarePlanes. The time includes the conversion (required for half-float and float only) and the upload to the texture.

Results (microseconds):
RG: 300 (range: 150-360)
half-float: 2800 (range: 1300-4000)
float: 1100 (range: 300-1500)

Patch:
diff --git a/cc/resources/video_resource_updater.cc b/cc/resources/video_resource_updater.cc
index 8eec178..9d5b271 100644
--- a/cc/resources/video_resource_updater.cc
+++ b/cc/resources/video_resource_updater.cc
@@ -24,6 +24,10 @@
 #include "third_party/skia/include/core/SkCanvas.h"
 #include "ui/gfx/geometry/size_conversions.h"
 
+// TODO(astojilj): Remove this - just for measurement.
+#include "base/time/time.h"
+#include "base/timer/elapsed_timer.h"
+
 namespace cc {
 namespace {
@@ -272,6 +276,8 @@ VideoFrameExternalResources VideoResourceUpdater::CreateForSoftwarePlanes(
     scoped_refptr<media::VideoFrame> video_frame) {
   TRACE_EVENT0("cc", "VideoResourceUpdater::CreateForSoftwarePlanes");
   const media::VideoPixelFormat input_frame_format = video_frame->format();
+  // TODO(astojilj) Temporary code in next line is used to measure elapsed time.
+  base::ElapsedTimer timer;
   // TODO(hubbe): Make this a video frame method.
   int bits_per_channel = 0;
   switch (input_frame_format) {
@@ -584,6 +590,14 @@ VideoFrameExternalResources VideoResourceUpdater::CreateForSoftwarePlanes(
           ? VideoFrameExternalResources::Y_RESOURCE
           : (isYuvPlanar ? VideoFrameExternalResources::YUV_RESOURCE
                          : VideoFrameExternalResources::RGB_RESOURCE);
+
+  // TODO(astojilj): Temporary code is used to measure elapsed time.
+  // printf below is used only to avoid getting microseconds optimized away in
+  // Release_x64 build - set a breakpoint at the line with printf to get the
+  // value.
+  uint64_t microseconds = timer.Elapsed().InMicroseconds();
+  printf("%zx\n", microseconds);
+
   return external_resources;
 }

Disclaimer: I didn't use R16UI as it doesn't appear to be available in the GLES 2.0 extensions on Windows. However, as it is not filterable, it would require a bilinear filter similar to the one done here for RG88.
,
Jul 15 2016
Very interesting, and thank you for getting these numbers. It looks like using RG88 is a noticeable improvement over half-floats, so let's get it checked in, but behind a flag. We'll run it through our testing battery and, if the results look good, we'll flip the flag.
,
Jul 15 2016
Great data aleksandar.stojiljkovic@. Thanks for gathering it. Agree with hubbe@ to proceed with RG88 as the texture format for these videos, including for WebGL 2.0.
,
Aug 15 2016
First patch after vacation; hopefully it will help to move this forward. Please check the approach in third_party/WebKit/Source/modules/webgl, skcanvas_video_renderer.cc and VideoMediaPlayer(MS). Demo code used for verification, screenshots and explanations are below. Thanks.

The patch https://crrev.com/2121043002/#ps60001 implements:
1) a 16-bit video frame used in gl.texImage2D, and
2) getting 16-bit depth frame data into JavaScript using gl.readPixels from an R16UI texture attached to a non-zero framebuffer.

1) Video frame used in WebGL texImage2D
=======================================
The following format/type combinations are supported:
- RGBA / UNSIGNED_BYTE: only the R component includes data from the depth stream - the upper byte of the 16-bit video. This follows your previous comments that splitting the 16 bits into R and G would be considered a hack.
- RED_INTEGER (R16UI) / UNSIGNED_SHORT: all 16 bits go to R. I would guess this looks logical and follows the advice given here.
- RG (RG8) / UNSIGNED_BYTE: upper byte to G, lower to R. I have also added this possibility as it seems logical to me to have a 16 bpp buffer uploaded to a 16 bpp RG8 texture. The main reason for it is that R16UI requires usampler2D, and a fragment shader with it fails to compile (Windows and --enable-unsafe-es3-apis).

Here are jsfiddle demos that can be used to see this working, and a screenshot below.
RG8 (note that both red and green are there): https://jsfiddle.net/astojilj/rsask25L/
RGBA texture (note that only the red component is visible; with 13 bits of useful information, the upper byte in R provides only 5 bits of info, and the red-only rendering doesn't keep the screenshot from looking dark): https://jsfiddle.net/astojilj/hze98upd/

2) Getting 16-bit depth frame data into JavaScript
==================================================
A 2D canvas + getImageData approach where the 16 bits would be split across two components would be a hack (following the argumentation above). It would also require handling endianness in JavaScript and packing 2 bytes into an unsigned short. IMHO a better way is to use WebGL and read pixels from an R16UI texture into a Uint16Array.

In short, this is the approach to get the full 16-bit data in JavaScript:
1. On start, create a framebuffer and attach (COLOR_ATTACHMENT0) an R16UI texture to it.
2. In a loop, when the video is ready or whenever the data is needed - but after texImage2D of the 16-bit video to the R16UI texture - bind the framebuffer and call readPixels from the R16UI texture bound to the non-zero framebuffer into a Uint16Array. This way the data is accessible in JavaScript.

Here is an example and a screenshot: https://jsfiddle.net/astojilj/wq1kwu0t/
The WebGL canvas is not visible; it is used only for texImage2D(video...) and then for readPixels. The 2D canvas is used to render the data received via gl.readPixels.

// upload the video frame to texture.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R16UI, gl.RED_INTEGER, gl.UNSIGNED_SHORT, video);
// bind the frame buffer that has the texture attached.
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
if (!implementationFormatType.firstChild) {
  implementationFormatType.innerHTML =
      "IMPLEMENTATION_COLOR_READ_FORMAT:0x" +
      gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_FORMAT).toString(16) +
      ", IMPLEMENTATION_COLOR_READ_TYPE:0x" +
      gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_TYPE).toString(16);
}
var arr = new Uint16Array(video.width * video.height);
// read from framebuffer (from the texture)
gl.readPixels(0, 0, video.width, video.height, gl.RED_INTEGER, gl.UNSIGNED_SHORT, arr);
// display in 2D canvas
var img = ctx2D.getImageData(0, 0, video.width, video.height);
var data = img.data;
var j = 0;
for (var i = 0; i < data.length; i += 4) {
  var ushort = arr[j++];
  data[i] = ushort >> 5;
  data[i + 1] = ushort >> 5;
  data[i + 2] = ushort >> 5;
  data[i + 3] = 255;
}
ctx2D.putImageData(img, 0, 0);
,
Aug 15 2016
And a screenshot of this demo:
"2) Getting 16-bit depth frame data from an R16UI texture color-attached to a non-zero framebuffer using readPixels: https://jsfiddle.net/astojilj/wq1kwu0t/ The WebGL canvas is not visible; it is used only for texImage2D(video...) and then for readPixels. The 2D canvas is used to render the data received via gl.readPixels."
,
Sep 29 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9

commit ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9
Author: dongseong.hwang <dongseong.hwang@intel.com>
Date: Thu Sep 29 12:38:03 2016

media: Introduce Y8 and Y16 video pixel format

Depth and infrared camera uses these format.
TODO: native support for depth camera.

BUG=624436
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_precise_blink_rel
Review-Url: https://codereview.chromium.org/2113243003
Cr-Commit-Position: refs/heads/master@{#421804}

[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/cc/resources/video_resource_updater.cc
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/base/video_frame.cc
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/base/video_types.cc
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/base/video_types.h
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/mojo/common/media_type_converters.cc
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/mojo/interfaces/media_types.mojom
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/renderers/skcanvas_video_renderer.cc
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/media/video/gpu_memory_buffer_video_frame_pool.cc
[modify] https://crrev.com/ea1ddc4e14b1b51a8dcbc29cabcec6513127e0c9/tools/metrics/histograms/histograms.xml
,
Oct 4 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/961b0e1cfb0c951c9bcd824044e2fa95b611312e

commit 961b0e1cfb0c951c9bcd824044e2fa95b611312e
Author: dongseong.hwang <dongseong.hwang@intel.com>
Date: Tue Oct 04 10:15:30 2016

gpu: support RG_88 GpuMemoryBuffer

9-16 bit video channel will use it.

BUG= 445071 , 624436
TEST=gl_tests --gtest_filter=GpuMemoryBufferTest*
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Review-Url: https://codereview.chromium.org/2376293003
Cr-Commit-Position: refs/heads/master@{#422741}

[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/components/exo/buffer.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/content/browser/gpu/browser_gpu_memory_buffer_manager.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/content/browser/gpu/gpu_internals_ui.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/gpu/command_buffer/client/gles2_implementation.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/gpu/command_buffer/common/gpu_memory_buffer_support.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/gpu/command_buffer/tests/gl_gpu_memory_buffer_unittest.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/gpu/ipc/client/gpu_memory_buffer_impl_shared_memory.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gfx/buffer_format_util.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gfx/buffer_types.h
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gfx/mac/io_surface.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gfx/mojo/buffer_types.mojom
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gfx/mojo/buffer_types_traits.h
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gl/gl_image_io_surface.mm
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gl/gl_image_memory.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/gl/test/gl_image_test_support.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/ozone/gl/gl_image_ozone_native_pixmap.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/ozone/platform/drm/client_native_pixmap_factory_gbm.cc
[modify] https://crrev.com/961b0e1cfb0c951c9bcd824044e2fa95b611312e/ui/ozone/platform/drm/common/drm_util.cc
,
Oct 7 2016
To make communication easier in https://codereview.chromium.org/2122573003/, I'm uploading pixel test images for each configuration.
,
Oct 21 2016
Screenshot of fake capture device with gradient control points in the corners.
,
Oct 27 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/38481bca66b6ca654bbe73cf6f0c418621d517cc

commit 38481bca66b6ca654bbe73cf6f0c418621d517cc
Author: aleksandar.stojiljkovic <aleksandar.stojiljkovic@intel.com>
Date: Thu Oct 27 12:25:11 2016

FakeVideoCaptureDevice: Y16 testing support.

Adds:
- support for Y16 buffer format.
- support for defining number of devices from command line.
- generated color control points in corners that could be used to verify the frame rendering with no need for the reference render bitmap.

BUG=624436
Review-Url: https://codereview.chromium.org/2447233002
Cr-Commit-Position: refs/heads/master@{#428001}

[modify] https://crrev.com/38481bca66b6ca654bbe73cf6f0c418621d517cc/media/capture/video/fake_video_capture_device.cc
[modify] https://crrev.com/38481bca66b6ca654bbe73cf6f0c418621d517cc/media/capture/video/fake_video_capture_device.h
[modify] https://crrev.com/38481bca66b6ca654bbe73cf6f0c418621d517cc/media/capture/video/fake_video_capture_device_factory.cc
[modify] https://crrev.com/38481bca66b6ca654bbe73cf6f0c418621d517cc/media/capture/video/fake_video_capture_device_factory.h
[modify] https://crrev.com/38481bca66b6ca654bbe73cf6f0c418621d517cc/media/capture/video/fake_video_capture_device_unittest.cc
,
Nov 2 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/198b050b9fe647bf3799ab75df07b888c7adcf50

commit 198b050b9fe647bf3799ab75df07b888c7adcf50
Author: aleksandar.stojiljkovic <aleksandar.stojiljkovic@intel.com>
Date: Wed Nov 02 16:52:20 2016

16 bpp video stream capture, render and createImageBitmap(video) using (CPU) shared memory buffers

This patch implements support for depth capture on Windows/Linux and ChromeOS using shared memory buffers. WebGL implementation with no precision loss and GPU memory buffer usage is split to separate patches from master patch: crrev.com/2121043002/
It Supports 16 bit depth video streams from R200 and SR300. Verified to work on Windows 10 and Ubuntu 16.04.
Rendering video element: In cc, it is converted to RGBA format before glTexImage2D and rendered as TexturedQuad.
2D canvas + getImageData returns 8 bit color components (higher bit value) in RGBA.

BUG=624436
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_precise_blink_rel
Review-Url: https://codereview.chromium.org/2428263004
Cr-Commit-Position: refs/heads/master@{#429307}

[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/cc/output/renderer_pixeltest.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/cc/resources/video_resource_updater.cc
[add] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/cc/test/data/intersecting_light_dark_squares_video.png
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/renderer_host/media/video_capture_buffer_pool_unittest.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/renderer_host/media/video_capture_controller.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/renderer_host/media/video_capture_controller_unittest.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/renderer_host/media/video_capture_device_client_unittest.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/renderer_host/media/video_capture_host.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/renderer_host/media/video_capture_manager.cc
[add] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/browser/webrtc/webrtc_depth_capture_browsertest.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/renderer/media/video_capture_impl.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/test/BUILD.gn
[add] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/test/data/media/depth_stream_test_utilities.js
[add] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/content/test/data/media/getusermedia-depth-capture.html
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/base/video_frame.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/linux/v4l2_capture_delegate.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/video_capture_device_client.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/video_capture_device_client.h
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/win/sink_filter_win.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/win/sink_filter_win.h
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/win/sink_input_pin_win.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/win/video_capture_device_factory_win.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/win/video_capture_device_mf_win.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/capture/video/win/video_capture_device_win.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/renderers/skcanvas_video_renderer.cc
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/renderers/skcanvas_video_renderer.h
[modify] https://crrev.com/198b050b9fe647bf3799ab75df07b888c7adcf50/media/renderers/skcanvas_video_renderer_unittest.cc
,
Nov 3 2016
The commit in comment #39 (https://codereview.chromium.org/2428263004) seems to have broken the build locally for me. Here is my setup:

GN args:
is_debug = false
target_os = "chromeos"
use_ozone = true
ozone_platform_x11 = true
ozone_platform_gbm = true
ozone_platform_wayland = true
enable_package_mash_services = true
use_sysroot = false
enable_nacl = false
remove_webcore_debug_symbols = true

Build command:
$ time ninja -C <out> chrome mash:all services/ui/demo mash/browser/

Error:
../../../media/capture/video/linux/v4l2_capture_delegate.cc:63:6: error: use of undeclared identifier 'V4L2_PIX_FMT_INVZ'
    {V4L2_PIX_FMT_INVZ, PIXEL_FORMAT_Y16, 1},
../../../media/capture/video/linux/v4l2_capture_delegate.cc:162:46: error: cannot use incomplete type 'const struct (anonymous struct at ../../../media/capture/video/linux/v4l2_capture_delegate.cc:55:8) []' as a range
    for (const auto& fourcc_and_pixel_format : kSupportedFormatsAndPlanarity) {
../../../media/capture/video/linux/v4l2_capture_delegate.cc:173:46: error: cannot use incomplete type 'const struct (anonymous struct at ../../../media/capture/video/linux/v4l2_capture_delegate.cc:55:8) []' as a range
    for (const auto& fourcc_and_pixel_format : kSupportedFormatsAndPlanarity) {
../../../media/capture/video/linux/v4l2_capture_delegate.cc:188:29: error: cannot use incomplete type 'const struct (anonymous struct at ../../../media/capture/video/linux/v4l2_capture_delegate.cc:55:8) []' as a range
    for (const auto& format : kSupportedFormatsAndPlanarity)
4 errors generated.
[14084/28179] CXX obj/media/capture/capture/video_capture_device_client.o
ninja: build stopped: subcommand failed.

Details:
* /usr/include/linux/videodev2.h does not have V4L2_PIX_FMT_INVZ defined for me.
* $ uname -a
  Linux localhost.localdomain 4.7.4-200.fc24.x86_64 #1 SMP Thu Sep 15 18:42:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
So the kernel is > 4.6, and the lines below from media/capture/video/linux/v4l2_capture_delegate.cc do not build:

#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 6, 0)
// 16 bit depth, Realsense F200.
#define V4L2_PIX_FMT_Z16 v4l2_fourcc('Z', '1', '6', ' ')
// 16 bit depth, Realsense SR300.
#define V4L2_PIX_FMT_INVZ v4l2_fourcc('I', 'N', 'V', 'Z')
#endif
,
Nov 3 2016
#40: tonikitoo@, I've split this out into separate Issue 661877; let's cover it there. Thanks. Temporarily, you could use this: https://codereview.chromium.org/2428263004/diff/20001/media/capture/video/linux/v4l2_capture_delegate.cc
I will make a patch now to fix this, and let's see if other people are affected.
1. What is the chromeos board you are building for?
2. Are you building only Chromium for chromeos (https://www.chromium.org/chromium-os/how-tos-and-troubleshooting/building-chromium-browser) or the full image?
3. What's the version of libv4l-dev?
,
Nov 6 2016
Short update on this:

1. Using extended precision (float or unsigned short)
The first implementation providing lossless access to the 16-bit video stream using a WebGL1 GL_FLOAT (RGBA) texture is in review: https://crrev.com/2476693002/
Follow-up patches should include WebGL2 R16UI and R32F support, already prototyped in https://crrev.com/2121043002/

2. Related to RG88: https://github.com/w3c/mediacapture-depth/issues/131
huningxin commented 13 days ago: "We need revisit this for video+canvas sink as TPAC meeting attendees are not positive to the depth value reconstruction from RG88 solution."
ccameron@ also expressed concern about this in https://crrev.com/2122573003/, and we need to get it clarified.
,
Nov 7 2016
,
Nov 10 2016
,
Nov 29 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/59c7989ec2c365f0bbaf09ecc02da7225061725b

commit 59c7989ec2c365f0bbaf09ecc02da7225061725b
Author: aleksandar.stojiljkovic <aleksandar.stojiljkovic@intel.com>
Date: Tue Nov 29 07:44:58 2016

Avoid doing and then abandoning semi-accelerated texImageHelperHTMLVideoElement path.

This resolves performance issue when uploading video to e.g. FLOAT or UNSIGNED_SHORT textures. If canUseCopyTextureCHROMIUM returns false, we shouldn't create AcceleratedImageBufferSurface, call paintCurrentFrame and texImage2DBase since they would not be used. This is because imageBuffer->copyToPlatformTexture is not supported when canUseCopyTextureCHROMIUM is false and it would early return false.

BUG=624436
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Review-Url: https://codereview.chromium.org/2527343002
Cr-Commit-Position: refs/heads/master@{#434926}

[modify] https://crrev.com/59c7989ec2c365f0bbaf09ecc02da7225061725b/third_party/WebKit/Source/modules/webgl/WebGLRenderingContextBase.cpp
,
Dec 5 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/d851a930e44e902d0d30c1fea879831bb6edeef7

commit d851a930e44e902d0d30c1fea879831bb6edeef7
Author: aleksandar.stojiljkovic <aleksandar.stojiljkovic@intel.com>
Date: Mon Dec 05 03:08:57 2016

Lossless access to 16-bit video stream using WebGL GL_FLOAT texture.

If using canvas.getImageData or GL textures of UNSIGNED_BYTE type, 16-bit depth stream data is available only through 8-bit* API, so there is a precision loss. Here, we add no-precision-loss** JavaScript access through WebGL float texture. RGBA32F usage here enables lossless access to 16-bit depth information via WebGL1. In related work, the same code path is used to upload 16-bit data to other WebGL2 supported formats; e.g. with GL_R16UI there is no conversion needed in SkCanvasVideoRenderer::TexImageImpl.

* 8-bit access refers to JS ImageData and WebGL UNSIGNED_BYTE where 16-bit depth data is now available as luminance (all 3 color channels contains upper 8 bits of 16bit value).
** Float is used for no-precision-loss access to 16-bit data normalized to [0-1.0] range using formula value_float = value_16bit/65535.0.

This patch also adds UNSIGNED_BYTE WebGL test which was earlier tested through testVideoToImageBitmap since UNSIGNED_BYTE and canvas rendering still share the same path through SkCanvasVideoRenderer::Paint.

BUG= 369849 , 624436
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Review-Url: https://codereview.chromium.org/2476693002
Cr-Commit-Position: refs/heads/master@{#436221}

[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/chrome/test/BUILD.gn
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/renderer/media/webmediaplayer_ms.cc
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/renderer/media/webmediaplayer_ms.h
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/test/data/media/depth_stream_test_utilities.js
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/test/data/media/getusermedia-depth-capture.html
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/test/gpu/generate_buildbot_json.py
[add] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/test/gpu/gpu_tests/depth_capture_expectations.py
[add] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/content/test/gpu/gpu_tests/depth_capture_integration_test.py
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/media/renderers/skcanvas_video_renderer.cc
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/media/renderers/skcanvas_video_renderer.h
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/media/renderers/skcanvas_video_renderer_unittest.cc
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/testing/buildbot/chromium.gpu.fyi.json
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/testing/buildbot/chromium.gpu.json
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/third_party/WebKit/Source/core/html/HTMLVideoElement.cpp
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/third_party/WebKit/Source/core/html/HTMLVideoElement.h
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/third_party/WebKit/Source/modules/webgl/WebGLRenderingContextBase.cpp
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/third_party/WebKit/Source/modules/webgl/WebGLRenderingContextBase.h
[modify] https://crrev.com/d851a930e44e902d0d30c1fea879831bb6edeef7/third_party/WebKit/public/platform/WebMediaPlayer.h
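To illustrate how a page would get the lossless 16-bit values back out of the float path described in this commit, a minimal sketch (assuming a WebGL context gl with float texture and float readback support, and a playing 16-bit video element; this is not the test code from the patch):
// Hypothetical round-trip for the float path (WebGL1 + float extensions).
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.FLOAT, video);

// Read the floats back via a framebuffer (requires renderable float textures,
// e.g. EXT_color_buffer_float / WEBGL_color_buffer_float).
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
var floats = new Float32Array(video.videoWidth * video.videoHeight * 4);
gl.readPixels(0, 0, video.videoWidth, video.videoHeight, gl.RGBA, gl.FLOAT, floats);

// Recover the original 16-bit value from any of the R/G/B channels
// (value_float = value_16bit / 65535.0, per the commit message above).
var depth0 = Math.round(floats[0] * 65535);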
,
Dec 5 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/943c81739e62a103098cf463b2a6aeb1f1636de2

commit 943c81739e62a103098cf463b2a6aeb1f1636de2
Author: kbr <kbr@chromium.org>
Date: Mon Dec 05 10:00:19 2016

Fix depth_capture_tests configuration.

Re-ran generate_buildbot_json.py. The JSON diffs were applied incorrectly by the CQ.

BUG= 369849 , 624436
TBR=zmo@chromium.org
NOTRY=true
Review-Url: https://codereview.chromium.org/2554453004
Cr-Commit-Position: refs/heads/master@{#436250}

[modify] https://crrev.com/943c81739e62a103098cf463b2a6aeb1f1636de2/testing/buildbot/chromium.gpu.fyi.json
,
Dec 5 2016
,
Dec 8 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/014802a3945cbf187a74bda8b2fdc21404d44104

commit 014802a3945cbf187a74bda8b2fdc21404d44104
Author: aleksandar.stojiljkovic <aleksandar.stojiljkovic@intel.com>
Date: Thu Dec 08 22:38:40 2016

WebGL2 & 16-bit depth capture: Upload video to GL_RED float texture.

This enables lossless access to 16-bit video stream using GL_RED float texture on WebGL2. The patch extends the WebGL1 approach using RGBA32F textures for 16-bit depth video [*] by using single component GL_RED float [**] texture on WebGL2.

* https://crrev.com/2476693002/
** Float is used for no-precision-loss access to 16-bit data normalized to [0-1.0] range using formula value_float = value_16bit/65535.0.

BUG= 369849 , 624436
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Review-Url: https://codereview.chromium.org/2556943002
Cr-Commit-Position: refs/heads/master@{#437371}

[modify] https://crrev.com/014802a3945cbf187a74bda8b2fdc21404d44104/content/test/data/media/depth_stream_test_utilities.js
[modify] https://crrev.com/014802a3945cbf187a74bda8b2fdc21404d44104/content/test/data/media/getusermedia-depth-capture.html
[modify] https://crrev.com/014802a3945cbf187a74bda8b2fdc21404d44104/content/test/gpu/gpu_tests/depth_capture_integration_test.py
[modify] https://crrev.com/014802a3945cbf187a74bda8b2fdc21404d44104/media/renderers/skcanvas_video_renderer.cc
[modify] https://crrev.com/014802a3945cbf187a74bda8b2fdc21404d44104/media/renderers/skcanvas_video_renderer_unittest.cc
,
Dec 9 2016
kbr@, hubbe@, ccameron@: now is a good time to try to get no-copy upload of 16-bit video frames to an RG8 texture implemented. I'm writing this after realizing that the approach taken for no-copy upload in #c33 (R16UI) is no longer possible [1]: WebGL2 doesn't allow the use of R16UI here. Chromium code was updated recently, so this part of the demo in #c33 stopped working:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R16UI, gl.RED_INTEGER, gl.UNSIGNED_SHORT, video);
I don't know whether an extension is planned for 16-bit integer formats.
[1] The table under the TexImageSource variant doesn't include 16-bit integer formats: https://www.khronos.org/registry/webgl/specs/latest/2.0/#3.7.6
The only remaining way to get full precision, performance and the lowest latency [2] is RG8. After the patches in #c47 (RGBA32F on WebGL1) and #c50 (R32F on WebGL2), I measured the time needed for the current (full precision, i.e. float) conversion of 16-bit data for a depth frame size of 640x480, which is standard now (RealSense, Kinect). Ubuntu 16.04, i7-5500U CPU @ 2.40GHz × 4, both under -j8 Chromium compilation stress and under normal usage. RGBA32F and R32F have the same amount of (float_ = ushort_/65535.f) conversion, since all RGB components in RGBA32F get the same value.
With moderate stress:
RGBA32F: ~2ms
R32F: ~0.7ms
Just copy (no conversion) to R32F: ~0.5ms
No stress:
RGBA32F: ~1.2ms
R32F: ~0.55ms
Just copy (no conversion) to R32F: ~0.4ms
With R16UI or RG88 this conversion would go away. It seems intuitive that gl.texImage2D(gl.TEXTURE_2D, 0, gl.RG8, gl.RG, gl.UNSIGNED_BYTE, video) for 16-bit video would upload the higher byte to G and the lower byte to R. I don't know how other formats should behave when uploaded to an RG8 texture, or how 8-bit video should behave.
[2] For the lowest latency when getting the video frame from JS, I was planning this optimization: for a selected format and parameters, when a texture bound to a framebuffer is not rendered to (after it is attached) but is mappable, avoid calling GL readPixels when the JS API calls WebGL readPixels (no PBO usage; still need to think about that). kbr@, zmo@, ^^^ would this hack/optimization be OK? Thanks.
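If the RG8 upload behaved that way (lower byte in R, upper byte in G), a fragment shader could losslessly reconstruct the normalized 16-bit value. A minimal WebGL1 sketch, assuming a sampler named u_depth and a varying v_texCoord (both hypothetical names):
precision mediump float;
uniform sampler2D u_depth;   // RG8 texture: R = lower byte, G = upper byte
varying vec2 v_texCoord;
void main() {
  vec4 texel = texture2D(u_depth, v_texCoord);
  // texel.r and texel.g hold byte / 255.0; recombine into the 16-bit value normalized to [0, 1].
  float depth16 = (texel.g * 255.0 * 256.0 + texel.r * 255.0) / 65535.0;
  gl_FragColor = vec4(vec3(depth16), 1.0);
}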
,
Dec 10 2016
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/8d87790e2232d24b7e6f889ad9aa895851fa5c85 commit 8d87790e2232d24b7e6f889ad9aa895851fa5c85 Author: qiankun.miao <qiankun.miao@intel.com> Date: Sat Dec 10 03:53:51 2016 Check EXT_color_buffer_float extension is available If the extension isn't available, we cannot read from fbo attaching float texture attachment. This CL adds error message when the extension is not available. BUG= 369849 , 624436 Review-Url: https://codereview.chromium.org/2565703002 Cr-Commit-Position: refs/heads/master@{#437744} [modify] https://crrev.com/8d87790e2232d24b7e6f889ad9aa895851fa5c85/content/test/data/media/getusermedia-depth-capture.html
,
Dec 13 2016
Alex: replying to #51 above:
[1] The 16-bit integer formats and signed 8-bit integer formats were removed from that table because the conversions weren't obvious. For the 16-bit formats, should the input be scaled up to the full 16-bit range? There are arguments in both directions. For the 8-bit signed integer formats, should values greater than 127 become negative numbers? We can certainly change the WebGL specification to handle depth videos differently, assuming that applications can query them and know what conversion path they're taking. Please suggest a direction you'd like to take.
[2] That sounds like too awful a hack. Can we instead watch for changes to the currentTime attribute to know whether a new frame's been produced?
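A rough sketch of that currentTime idea (an assumption about how a page could use it, not existing Chromium code; video and gl are the <video> element and a WebGL2 context from the earlier comments):
let lastTime = -1;
function maybeUpload() {
  // Only re-upload when the element reports a new frame.
  if (video.currentTime !== lastTime) {
    lastTime = video.currentTime;
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, gl.RED, gl.FLOAT, video);
  }
  requestAnimationFrame(maybeUpload);
}
requestAnimationFrame(maybeUpload);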
,
Dec 14 2016
>#53:
Thanks.
[1]
I'll implement RG88, R16UI and DEPTH_COMPONENT16 (the latter two are not currently usable with glTexImage2D) and do the same measurements as were done for float, below.
[2]
I didn't explain this properly. The part you're referring to is resolved: video.oncanplay = function() { /* called once a new frame is available */ };
I was referring to the time needed for gl.readPixels, but please ignore it for now, as the suggestion in #52/[2] is premature. Let's get the numbers with the integer formats, on several platforms, to conclude which way to go.
RGBA32F
=======
For reference, on Ubuntu 16.04, i7-5500U CPU @ 2.40GHz × 4, I put console.time/timeEnd around these calls:
1) gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, gl.RED, gl.FLOAT, video);
2) gl.readPixels(0, 0, video.width, video.height, gl.RGBA, gl.FLOAT, arr);
1) takes ~0.9ms; 2) takes ~5ms (with values in the range 1-15ms).
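The measurement might have looked roughly like this (a sketch, not the exact test code; it assumes gl is a WebGL2 context with EXT_color_buffer_float enabled, the R32F texture is already attached to the bound framebuffer, and arr is a Float32Array of video.width * video.height * 4 elements):
console.time('texImage2D');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, gl.RED, gl.FLOAT, video);
console.timeEnd('texImage2D');
console.time('readPixels');
gl.readPixels(0, 0, video.width, video.height, gl.RGBA, gl.FLOAT, arr);
console.timeEnd('readPixels');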
,
Feb 15 2017
#53 kbr@, I'm progressing with related tasks but need advice on this before working on specifying it. Thanks.
> We can certainly change the WebGL specification to handle depth videos differently, assuming that applications can query them and know what conversion path they're taking. Please suggest a direction you'd like to take.
It is now possible to query a MediaStreamTrack by videoKind (https://crrev.com/2664673002), but I'm not sure whether a query on video.src qualifies for this; I don't know enough about whether we can base the exceptional case on a media stream (video.src) query.
Q1: Can we handle it differently based on this, or is a field to query needed on the video element?
I have tried or worked on several related issues meanwhile. The preference boils down to this choice of internal formats, in order of preference: 1) GL_R16 2) GL_RG8.
Q2: There is no GL_R16 in GLES3. Could we add it to WebGL?
Q3: Was there any interest in exposing GL_ARB_texture_view to WebGL?
Note: float is good and it should stay. I'm not sure yet whether we should keep it normalized or scale it to contain real-world measures (in meters). So, in parallel, I will work on using RG88 as an intermediate and converting to the target [0...1.0] float on the GPU (within the texImage2D JS call). Android Tango provides the data as projected 3D float points (in meters), so I still need to figure out how to support that ( Issue 674440 ).
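For reference, the videoKind query from that CL can be used from JS roughly like this (a sketch following the Media Capture Depth Stream Extensions shape; the exact constraint and settings member names are an assumption):
navigator.mediaDevices.getUserMedia({ video: { videoKind: { exact: 'depth' } } })
  .then(function(stream) {
    const track = stream.getVideoTracks()[0];
    if (track.getSettings().videoKind === 'depth') {
      videoElement.srcObject = stream;  // 16-bit depth stream
    }
  });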
,
Feb 18 2017
Q1: I don't know the public web APIs here, but it sounds to me like the browser should communicate more information about the format of the color channels. I'm not sure how to do this, whether via more members in MediaTrackSettings or some other way. Maybe this isn't a requirement, and the user can always upload videos to high-bit-depth WebGL textures; they'll just get the full range of precision if the source video has it to offer.
Q2: No. The only formats we can expose are those supported by ES3, or those supported by OpenGL ES extensions that have ubiquitous coverage. We'll need to stick with, for example, GL_R16I, GL_R16UI, or GL_R16F.
Q3: It looks like the GL_EXT_texture_view extension does not have wide market penetration on mobile, and it's not in the ES 3.2 core. For this reason I don't think it's a good idea to expose it as a WebGL extension.
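As an aside, WebGL2 does accept 16-bit integer data from an ArrayBufferView (the restriction discussed in #51/#53 applies to the TexImageSource overloads, not to ArrayBufferView uploads), so uploads like this sketch are already possible, assuming data is a Uint16Array of width * height samples:
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 2);  // rows of 16-bit samples need not be 4-byte aligned
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R16UI, width, height, 0,
              gl.RED_INTEGER, gl.UNSIGNED_SHORT, data);
// Integer textures are not filterable; sample with NEAREST via a usampler2D in the shader.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);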
,
Mar 23 2017
> Q2: No. The only formats we can expose are those supported by ES3, or those supported by OpenGL ES extensions that have ubiquitous coverage. We'll need to stick with for example GL_R16I, GL_R16UI, or GL_R16F.
kbr@, what about the approach [1] of using TexImage2D to an intermediate GL_R16_EXT texture followed by CopyTextureCHROMIUM (GL_R16 -> FLOAT)? This uses GL_EXT_texture_norm16 (a GLES 3.1 extension available on Android and in ANGLE), and the format is also available on the desktop core profile.
The patch does not expose it externally; it is just a way to implement TexImage2D to gl.R32F. GL_R16 is used only internally in skcanvas_video_renderer.cc. It works on OSX and Windows (it required a patch in ANGLE to bypass the TexImage2D validation).
Measurement using Performance.now() around gl.texImage2D to gl.R32F shows a significant improvement:
Windows 10, Chromium 54.0.2794.0 64-bit Release_x64 developer build, Intel® Core™ i7-4770HQ CPU @ 2.20 GHz, Intel® Iris™ Pro Graphics 5200.
Current: ~1.1ms
With the patch: ~0.2ms
Caveat: hit an issue with the i915 driver on Linux (lost precision); need to report it.
[1] https://codereview.chromium.org/2767063002/diff/20001/media/renderers/skcanvas_video_renderer.cc
I'll also measure the performance of implementing TexImage2D to float using an intermediate GL_RG8 followed by a draw to a FLOAT texture.
,
Mar 23 2017
> What about the approach [1] using TexImage2D to intermediate GL_R16_EXT followed by CopyTextureCHROMIUM (GL_R16 -> FLOAT)?
> This is using GL_EXT_texture_norm16 (GLES3.1 extension available on Android and in ANGLE) or it is available on core desktop profile.
>
> The patch is not exposing it outside - this is just a way to implement TexImage2D to
> gl.R32F. GL_R16 is used only internally in skcanvas_video_renderer.cc.
Aleksandar: thanks for looking into this. Using EXT_texture_norm16 internally sounds fine. It also sounds fine to propose it as a WebGL extension directly, if it's widely supported on mobile devices.
,
Apr 28 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/10751e1866092a059e6725361dfea8553b449f43 commit 10751e1866092a059e6725361dfea8553b449f43 Author: aleksandar.stojiljkovic <aleksandar.stojiljkovic@intel.com> Date: Fri Apr 28 07:55:09 2017 16-bit video upload to float: intermediate R16_EXT and copy to float. R16_EXT is supported on desktop core profile (OSX and Linux), and via OpenGL ES 3.1 GL_EXT_texture_norm16 extension (including ANGLE on Windows). Update khronos headers to get GL_R16_EXT definition. It is not exposed through WebGL but only used internally. Brings significant performance improvement, cutting the time spent on WebGL TexImage2D from ~1.2ms to ~0.2ms - see https://crbug.com/624436. BUG=624436 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Review-Url: https://codereview.chromium.org/2767063002 Cr-Commit-Position: refs/heads/master@{#467924} [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/content/renderer/media/webmediaplayer_ms.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/content/renderer/media/webmediaplayer_ms.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/content/test/data/media/depth_stream_test_utilities.js [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/content/test/data/media/getusermedia-depth-capture.html [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/common/capabilities.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/common/gles2_cmd_utils.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/feature_info.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/feature_info.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/feature_info_unittest.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/gles2_cmd_copy_texture_chromium.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/gles2_cmd_decoder.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/gles2_cmd_decoder_passthrough.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/service/texture_manager.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/command_buffer/tests/gl_copy_texture_CHROMIUM_unittest.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/gpu/ipc/common/gpu_command_buffer_traits_multi.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/media/renderers/skcanvas_video_renderer.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/media/renderers/skcanvas_video_renderer.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/media/renderers/skcanvas_video_renderer_unittest.cc [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/WebKit/Source/core/html/HTMLVideoElement.cpp [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/WebKit/Source/core/html/HTMLVideoElement.h [modify] 
https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/WebKit/Source/modules/webgl/WebGLRenderingContextBase.cpp [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/WebKit/public/platform/WebMediaPlayer.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/EGL/egl.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/EGL/eglext.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/EGL/eglplatform.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/GLES2/gl2.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/GLES2/gl2ext.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/GLES2/gl2platform.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/GLES3/gl3.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/GLES3/gl31.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/GLES3/gl3platform.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/KHR/khrplatform.h [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/third_party/khronos/README.chromium [modify] https://crrev.com/10751e1866092a059e6725361dfea8553b449f43/ui/gl/gl_bindings.h
,
Jun 21 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/65f143da57bb9367c5004c659d368642bd9969f6 commit 65f143da57bb9367c5004c659d368642bd9969f6 Author: rijubrata.bhaumik <rijubrata.bhaumik@intel.com> Date: Wed Jun 21 11:50:42 2017 gpu: support R16 GPUMemoryBuffer 9-16 bit video channel will use it. TEST=gl_tests --gtest_filter=GpuMemoryBufferTest* BUG= 445071 , 624436 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Review-Url: https://codereview.chromium.org/2920793005 Cr-Commit-Position: refs/heads/master@{#481176} [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/components/exo/buffer.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/content/browser/gpu/gpu_internals_ui.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/gpu/command_buffer/client/gles2_implementation.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/gpu/command_buffer/common/gpu_memory_buffer_support.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/gpu/command_buffer/tests/gl_gpu_memory_buffer_unittest.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/gpu/ipc/client/gpu_memory_buffer_impl_shared_memory.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/gpu/ipc/host/gpu_memory_buffer_support.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gfx/buffer_format_util.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gfx/buffer_types.h [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gfx/mac/io_surface.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gfx/mojo/buffer_types.mojom [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gfx/mojo/buffer_types_struct_traits.h [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gl/gl_image_io_surface.mm [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gl/gl_image_memory.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gl/gl_image_native_pixmap.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/gl/test/gl_image_test_support.cc [modify] https://crrev.com/65f143da57bb9367c5004c659d368642bd9969f6/ui/ozone/platform/drm/common/drm_util.cc
,
Jun 27 2017
Related tutorial and demo code: "Typing in the air using depth camera, Chrome, JavaScript, and WebGL transform feedback" https://software.intel.com/en-us/blogs/2017/06/22/tutorial-typing-in-the-air-using-depth-camera-chrome-javascript-and-webgl-transform
,
Jun 27 2017
Very cool! Great article!
,
Sep 12 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/a32e4ca5e787c5e985ef07e81600d9a85d8a7297 commit a32e4ca5e787c5e985ef07e81600d9a85d8a7297 Author: Rijubrata Bhaumik <rijubrata.bhaumik@intel.com> Date: Tue Sep 12 10:59:58 2017 [media]: Replace LUMINANCE_F16 by R16_EXT for 9/10-bit h264 videos. LUMINANCE_F16 has following issues: 1. GL_LUMINANCE (as well as GL_ALPHA) is deprecated. 2. GpuMemoryBuffer cannot support LUMINANCE_F16. 3. LUMINANCE_F16 requires cpu int-to-float conversion. This CL introduces a media switch "kUseR16Texture" feature to use R16_EXT. If we get better power/performance numbers, we can enable R16_EXT by default. R16_EXT is : + intuitive + filterable + no int to half float CPU conversion + GpuMemoryBuffer can support R16_EXT Bug: 445071 , 624436 Cq-Include-Trybots: master.tryserver.blink:linux_trusty_blink_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Change-Id: I8e5d63a8ea11ef1bbc67e26f4d74e5c1dfc76eb5 Reviewed-on: https://chromium-review.googlesource.com/633663 Commit-Queue: Rijubrata Bhaumik <rijubrata.bhaumik@intel.com> Reviewed-by: Antoine Labour <piman@chromium.org> Reviewed-by: Fredrik Hubinette <hubbe@chromium.org> Cr-Commit-Position: refs/heads/master@{#501233} [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/output/renderer_pixeltest.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/raster/raster_buffer_provider.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/resources/resource_provider.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/resources/video_resource_updater.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/resources/video_resource_updater.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/resources/video_resource_updater_unittest.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/test/fake_resource_provider.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/test/test_context_provider.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/test/test_in_process_context_provider.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/test/test_in_process_context_provider.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/cc/test/test_web_graphics_context_3d.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/components/viz/common/resources/platform_color.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/components/viz/common/resources/platform_color_unittest.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/components/viz/common/resources/resource_format.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/components/viz/common/resources/resource_format_utils.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/components/viz/common/resources/resource_settings.h [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/content/renderer/gpu/render_widget_compositor.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/media/base/media_switches.cc [modify] https://crrev.com/a32e4ca5e787c5e985ef07e81600d9a85d8a7297/media/base/media_switches.h
,
Sep 12 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/8ee94236e350177ae66e4c4e94d37c9918e558f9 commit 8ee94236e350177ae66e4c4e94d37c9918e558f9 Author: Sebastien Seguin-Gagnon <sebsg@chromium.org> Date: Tue Sep 12 13:39:10 2017 Revert "[media]: Replace LUMINANCE_F16 by R16_EXT for 9/10-bit h264 videos." This reverts commit a32e4ca5e787c5e985ef07e81600d9a85d8a7297. Reason for revert: Makes the VideoResourceUpdaterTest.HighBitFrameNoF16 test fail on Linux MSan tests: https://uberchromegw.corp.google.com/i/chromium.memory/builders/Linux%20MSan%20Tests/builds/4137 https://uberchromegw.corp.google.com/i/chromium.memory/builders/Linux%20MSan%20Tests/builds/4138 Uninitialized value was stored to memory, Uninitialized value was created by a heap allocation Original change's description: > [media]: Replace LUMINANCE_F16 by R16_EXT for 9/10-bit h264 videos. > > LUMINANCE_F16 has following issues: > 1. GL_LUMINANCE (as well as GL_ALPHA) is deprecated. > 2. GpuMemoryBuffer cannot support LUMINANCE_F16. > 3. LUMINANCE_F16 requires cpu int-to-float conversion. > > This CL introduces a media switch "kUseR16Texture" feature to use R16_EXT. > > If we get better power/performance numbers, we can enable R16_EXT > by default. > > R16_EXT is : > + intuitive > + filterable > + no int to half float CPU conversion > + GpuMemoryBuffer can support R16_EXT > > Bug: 445071 , 624436 > Cq-Include-Trybots: master.tryserver.blink:linux_trusty_blink_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel > Change-Id: I8e5d63a8ea11ef1bbc67e26f4d74e5c1dfc76eb5 > Reviewed-on: https://chromium-review.googlesource.com/633663 > Commit-Queue: Rijubrata Bhaumik <rijubrata.bhaumik@intel.com> > Reviewed-by: Antoine Labour <piman@chromium.org> > Reviewed-by: Fredrik Hubinette <hubbe@chromium.org> > Cr-Commit-Position: refs/heads/master@{#501233} TBR=rijubrata.bhaumik@intel.com,hubbe@chromium.org,dcastagna@chromium.org,piman@chromium.org Change-Id: If9643b718cca8e44e495018ff5ab934f395b2d71 No-Presubmit: true No-Tree-Checks: true No-Try: true Bug: 445071 , 624436 Cq-Include-Trybots: master.tryserver.blink:linux_trusty_blink_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Reviewed-on: https://chromium-review.googlesource.com/663517 Reviewed-by: Sebastien Seguin-Gagnon <sebsg@chromium.org> Commit-Queue: Sebastien Seguin-Gagnon <sebsg@chromium.org> Cr-Commit-Position: refs/heads/master@{#501253} [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/output/renderer_pixeltest.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/raster/raster_buffer_provider.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/resources/resource_provider.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/resources/video_resource_updater.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/resources/video_resource_updater.h [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/resources/video_resource_updater_unittest.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/test/fake_resource_provider.h 
[modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/test/test_context_provider.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/test/test_in_process_context_provider.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/test/test_in_process_context_provider.h [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/cc/test/test_web_graphics_context_3d.h [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/components/viz/common/resources/platform_color.h [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/components/viz/common/resources/platform_color_unittest.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/components/viz/common/resources/resource_format.h [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/components/viz/common/resources/resource_format_utils.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/components/viz/common/resources/resource_settings.h [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/content/renderer/gpu/render_widget_compositor.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/media/base/media_switches.cc [modify] https://crrev.com/8ee94236e350177ae66e4c4e94d37c9918e558f9/media/base/media_switches.h
,
Sep 19 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/266bf70364718f6afd167fef514c97a15a749f83 commit 266bf70364718f6afd167fef514c97a15a749f83 Author: Rijubrata Bhaumik <rijubrata.bhaumik@intel.com> Date: Tue Sep 19 17:52:20 2017 [media]: Replace LUMINANCE_F16 by R16_EXT for 9/10-bit h264 videos. LUMINANCE_F16 has following issues: 1. GL_LUMINANCE (as well as GL_ALPHA) is deprecated. 2. GpuMemoryBuffer cannot support LUMINANCE_F16. 3. LUMINANCE_F16 requires cpu int-to-float conversion. This CL introduces a media switch "kUseR16Texture" feature to use R16_EXT. If we get better power/performance numbers, we can enable R16_EXT by default. R16_EXT is : + intuitive + filterable + no int to half float CPU conversion + GpuMemoryBuffer can support R16_EXT Bug: 445071 , 624436 Change-Id: I9390d15c1e3b9bb5e1f2825d7338a1b0dca1e8c5 Cq-Include-Trybots: master.tryserver.blink:linux_trusty_blink_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Reviewed-on: https://chromium-review.googlesource.com/663660 Commit-Queue: Rijubrata Bhaumik <rijubrata.bhaumik@intel.com> Reviewed-by: Antoine Labour <piman@chromium.org> Reviewed-by: Fredrik Hubinette <hubbe@chromium.org> Cr-Commit-Position: refs/heads/master@{#502888} [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/output/renderer_pixeltest.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/raster/raster_buffer_provider.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/resources/resource_provider.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/resources/video_resource_updater.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/resources/video_resource_updater.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/resources/video_resource_updater_unittest.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/test/fake_resource_provider.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/test/test_context_provider.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/test/test_in_process_context_provider.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/test/test_in_process_context_provider.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/cc/test/test_web_graphics_context_3d.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/components/viz/common/resources/platform_color.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/components/viz/common/resources/platform_color_unittest.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/components/viz/common/resources/resource_format.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/components/viz/common/resources/resource_format_utils.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/components/viz/common/resources/resource_settings.h [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/content/renderer/gpu/render_widget_compositor.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/media/base/media_switches.cc [modify] https://crrev.com/266bf70364718f6afd167fef514c97a15a749f83/media/base/media_switches.h
,
Apr 16 2018
I would love to be able to use R16_EXT, signed and unsigned, from WebGL for a medical application. Are there currently any efforts to propose that extension and make it available to JS applications through WebGL?
,
Apr 16 2018
jf.pambrun@ Could you elaborate, please? From [1], only half-float and float 16F types are supported for texImage2D():
R16F     RED     HALF_FLOAT, FLOAT
RG16F    RG      HALF_FLOAT, FLOAT
RGB16F   RGB     HALF_FLOAT, FLOAT
RGBA16F  RGBA    HALF_FLOAT, FLOAT
[1] https://www.khronos.org/registry/webgl/specs/latest/2.0/#3.7.6
,
Apr 16 2018
jf.pambrun@gmail.com: you are requesting a new WebGL extension. Please send your request to public_webgl@khronos.org instead.
,
Apr 17 2018
It would be great if I could use 16-bit filterable integer textures, just like in this HDR video issue, but from WebGL. I understand that this requires the EXT_texture_norm16 extension, that it's not currently available, and that this is outside the scope of the Chromium team. However, comments #55, #56 and #58 alluded to the idea of proposing this extension for adoption. With my question, I was just wondering whether any efforts had already been put in that direction. Regarding half floats, they are not precise enough; I would have to use float or signed/unsigned int16, but then I have to do the interpolation in the shader. I really appreciate that you took the time to answer my comment, JF
,
Apr 17 2018
jf.pambrun@gmail.com: when uploading 16-bit depth video to a GL_FLOAT texture, Chrome internally uses GL_R16_EXT as an intermediate texture.
> I would have to use float or signed/unsigned int16, but then I have to do the interpolation in shader.
Use OES_texture_float_linear [1] to make R32F, RG32F, RGB32F and RGBA32F filterable. I guess that your use case is not about uploading 16-bit video to a texture; rather, you need to upload 16-bit data to a GL_R16UI texture, handle the 16-bit-to-float conversion in the shader, and render to float (WebGL 2.0), using EXT_color_buffer_float [2] to make e.g. RGBA32F color-renderable.
[1] https://www.khronos.org/registry/webgl/extensions/OES_texture_float_linear/
[2] https://www.khronos.org/registry/webgl/extensions/EXT_color_buffer_float/
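A minimal sketch of that combination in WebGL2 (the variable names are assumptions; data would be a Float32Array holding the 16-bit samples already converted to floats in [0, 1]):
gl.getExtension('OES_texture_float_linear');   // makes 32-bit float textures filterable
gl.getExtension('EXT_color_buffer_float');     // makes RGBA32F color-renderable
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, width, height, 0, gl.RED, gl.FLOAT, data);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// Render target: a float texture attached to a framebuffer.
const target = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, target);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA32F, width, height);
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, target, 0);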
,
Apr 17 2018
I guess I should provide more information about my use case. I have medical imaging volumes (e.g. computed tomography, magnetic resonance) which are 16-bit signed or unsigned grayscale. The user can change a tone-mapping function in real time to present 8-bit grayscale on the screen; this is done in a shader. The volume needs to be sliced arbitrarily to present a reconstruction in any orientation, and this sampling needs to be tri-linearly interpolated. In addition, I would like to experiment with volume reconstruction, which requires many (on the order of hundreds of) 3D texture samples per rendered pixel, so performance quickly becomes an issue. It would be ideal for the 3D sampler to perform the tri-linear interpolation for each texture sample; otherwise, I need to sample 8 locations in the shader and compute the interpolation there for each sample. Unfortunately, no uint16 or int16 texture formats are filterable. That leaves float32, but that doubles the memory requirement, which can already be >1 GB for large volumes. I already experimented with OES_texture_float_linear and R32F and it works 'ok', but it is somewhat slow (although I can't say whether R16_EXT would be faster, as I can't try it) and requires a lot of memory.
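For the tone-mapping step described here, a fragment shader over a filterable normalized volume might look roughly like this (a sketch; the uniform and varying names are assumptions, and the volume is assumed to be an R32F or R16-normalized 3D texture with LINEAR filtering):
#version 300 es
precision highp float;
precision highp sampler3D;
uniform sampler3D u_volume;  // values normalized to [0, 1]
uniform float u_level;       // window center, in normalized units
uniform float u_window;      // window width, in normalized units
in vec3 v_texCoord;
out vec4 outColor;
void main() {
  float v = texture(u_volume, v_texCoord).r;  // hardware trilinear interpolation
  float grey = clamp((v - (u_level - 0.5 * u_window)) / u_window, 0.0, 1.0);
  outColor = vec4(vec3(grey), 1.0);
}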
,
Apr 18 2018
As I mentioned, Chrome can only expose WebGL extensions from https://www.khronos.org/registry/webgl/extensions/ . So if you want this feature, the best path is to propose it on the public_webgl mailing list and see how the community feels about it.