MediaSource API not able to play WebM video recorded from MediaRecorder
Reported by blahblah...@gmail.com, Apr 22 2016
Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36

Example URL:

Steps to reproduce the problem:
1. Copy the .html and test.webm files from http://html5-demos.appspot.com/static/media-source.html (probably want to remove Google Analytics from the html) to your local server.
2. Test it, and you should see the video playing.
3. Record a webm file at https://webrtc.github.io/samples/src/content/getusermedia/record/
4. Copy this recorded webm file to test.webm.
5. Reload the media-source.html page.

What is the expected behavior?
Should play the video and audio.

What went wrong?
Either nothing plays, or you get the first few seconds of video and then it stops.

Did this work before? N/A
Is it a problem with Flash or HTML5? N/A
Does this work in other browsers? Yes
Chrome version: 50.0.2661.86  Channel: stable
OS Version: OS X 10.11.4
Flash Version:

Works fine in Firefox. Also, the webm file plays fine if you just load it into Chrome.
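For context, a minimal sketch of what a test page like media-source.html does - fetch test.webm and append it to a SourceBuffer - looks roughly like the following (illustrative only; the element, file name and codec string are assumptions, not the demo's actual code):

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  // The codec string is an assumption; it has to match what is actually in the file.
  var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,vorbis"');
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'test.webm');
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    sourceBuffer.addEventListener('updateend', function () {
      if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
        mediaSource.endOfStream();
      }
    });
    sourceBuffer.appendBuffer(xhr.response);
  };
  xhr.send();
});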
,
Apr 26 2016
Thank you for providing more feedback. Adding requester "yiningc@chromium.org" for another review and adding "Needs-Review" label for tracking. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
,
Apr 26 2016
yiningc@ - can you take a look and tell us if you think this is a playback issue, or if it's a problem with how MediaRecorder created its webm?
,
Apr 26 2016
blahblah676@gmail.com: my MacBook camera is broken so I can't verify the video part, but with the WebRTC recorder link you provided, https://webrtc.github.io/samples/src/content/getusermedia/record/, I can record audio, and the audio (.webm) file can be loaded and played in the test page from the local server. Can you upgrade your Chrome build and try again? Thanks.
,
Apr 26 2016
I fixed my MacBook camera by rebooting. I have verified that the WebRTC-recorded video can be played in a <video> element and plays all the way through. blahblah676@gmail.com, please upgrade your Chrome build and try again.
,
Apr 27 2016
Thank you for providing more feedback. Adding requester "tnakamura@chromium.org" for another review and adding "Needs-Review" label for tracking. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
,
Apr 27 2016
blahblah676@ and yiningc, I see this in the console when I run your test:

mediasource.html:175 Uncaught InvalidStateError: Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer is still processing an 'appendBuffer', 'appendStream', or 'remove' operation.

Are there still bugs in the test code?
,
Apr 27 2016
Yeah, if I run your demo locally it works randomly for 1-5 chunks until I get the console error posted above. Also, you specify Vorbis as the audio codec instead of Opus, but that doesn't seem to be a problem here.
,
Apr 27 2016
Yes, you're right. It looks like the code isn't properly checking the updating property. I've now fixed it on my server (or at least, hacked it so that it just uses a single chunk). Now that JavaScript error is gone, but the video still isn't playing properly. (Changing back to the original .webm file - not recorded via MediaRecorder - plays perfectly.) It looks like the video has the correct length (8 seconds), but it's only ever displaying the first frame.
,
Apr 27 2016
OK, now it works perfectly with my file but not with yours. I think it's a dupe of https://bugs.chromium.org/p/chromium/issues/detail?id=597034. mcasas@, do you agree?
,
Apr 27 2016
#13: Symptoms look similar. blahblah676@, could you try with the current Canary? That should fix your issue. https://crbug.com/597034 is solved but is marked as "Started", waiting for https://crbug.com/601657, which should land a longer-term solution.
,
Apr 27 2016
Please note that you need to regenerate the recording with canary, not only play the file.
,
Apr 27 2016
yiningc@, can you help us figure out why this recording won't play properly?
,
Apr 29 2016
Matt, can you take a look at why WebRTC-recorded video is not supported by the MediaSource API? The repro steps are in comment #8.
,
May 3 2016
Confirmed repro of #8 (middle case: "MediaSource (doesn't work): https://www.groupboard.com/gum_record/mediasource.html"). Inspection of the buffered ranges via the devtools console demonstrates there's a discontinuity. Further inspection, in debug mode, of the API behavior during the parsing that resulted in this discontinuity would be this investigation's next step.

$('video').buffered
TimeRanges {length: 2}
$('video').buffered.start(0)
0
$('video').buffered.end(0)
1.634
$('video').buffered.start(1)
3.536
$('video').buffered.end(1)
4.898

Chris, can you take a look please? If not, pass it back to me.
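For anyone following along, the inspection above can be reproduced with a small helper in the devtools console (a hypothetical helper, not part of the test page):

function logBufferedRanges(video) {
  var b = video.buffered;
  for (var i = 0; i < b.length; i++) {
    console.log('range ' + i + ': [' + b.start(i) + ', ' + b.end(i) + ']');
    if (i > 0) {
      // Any positive difference here is a discontinuity in the buffered media.
      console.log('  gap before this range: ' + (b.start(i) - b.end(i - 1)) + ' s');
    }
  }
}
logBufferedRanges(document.querySelector('video'));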
,
Jun 8 2016
Hi! I have also observed this issue. I use the following page for testing: https://jsfiddle.net/stivyakovenko/fkt89cLu/6/ It works from time to time, or starts working for a few seconds and then stops. Sometimes opening another tab makes it work longer.
,
Jun 9 2016
Confirmed and root caused for https://www.groupboard.com/gum_record/mediasource.html

The issue is a discontinuity, as Matt said. The video frame at time 1.601 is marked as having duration .033. The next video frame is at time 1.68. 1.68 - 1.601 = 0.079, so either the time of 1.68 is wrong or the duration of .033 is wrong.

The time is read from the timecode field of this block. The block does not set an explicit duration, so we look for a default duration. In this case the video track has a default duration of .033. If the track did not have a default duration, we would have set the duration to be the difference in timestamps between the current and next buffer. I've verified that if the duration is set this way (commenting out reading of the default duration in Chrome), the video plays back fine. This is likely what Firefox is doing, but I don't think this is a correct interpretation of the WebM spec. Looking at the entry for BlockDuration here: http://www.webmproject.org/docs/container/

"... When not written and with no DefaultDuration, the value is assumed to be the difference between the timecode of this Block and the timecode of the next Block in "display" order (not coding order)..."

So I think we're right to use DefaultDuration before falling back to using timecode deltas.

To fix the discontinuity, you should override your default duration with an explicit BlockDuration whenever the DefaultDuration is not correct for a given block. Alternatively, don't use DefaultDuration at all, and we will derive the duration using timestamp deltas.

One gotcha with that last suggestion: we cannot perfectly derive the duration using timestamp deltas for the last block in a WebM Cluster. We will instead estimate using the max of what we've seen so far. This isn't perfect, so we recommend always using an explicit BlockDuration at the end of your Clusters if DefaultDuration would not otherwise be set/accurate. Not sure what you're using to mux, but libWebM will do this for you if you set the appropriate flag - https://github.com/webmproject/libwebm/blob/master/mkvmuxer/mkvmuxer.h#L1464

Back to mcasas to look into fixing MediaRecorder muxing.
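To make the numbers above concrete, the frame at 1.601 ends (per the track's DefaultDuration) at 1.634 - exactly where the first buffered range reported earlier stops - while the next frame only starts at 1.68, leaving a gap:

// Values taken from the analysis above.
var frameTime = 1.601;        // timecode of the block in question
var defaultDuration = 0.033;  // the track's DefaultDuration
var nextFrameTime = 1.68;     // timecode of the next block

var frameEnd = frameTime + defaultDuration;  // 1.634, matches buffered.end(0) reported earlier
var gap = nextFrameTime - frameEnd;          // ~0.046 s discontinuity reported by MSE
console.log(frameEnd.toFixed(3), gap.toFixed(3));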
,
Jun 9 2016
Also confirmed Comment 21 - same issue.
,
Jun 10 2016
Dear chcunningham, so is it a bug in MediaRecorder?
,
Jun 10 2016
#22: it's indeed perfectly possible that encoded frames are separated by an interval larger (or even smaller) than the expected frame interval, and that is due to a variety of reasons, e.g.:
- pausing video on the Media Stream Track (hence nothing to record)
- pausing the recording of the video track
- bursty MediaStream sources, e.g. Screen capture or Canvas capture

All of these cases would, oftentimes, end up in skipped encoded frames. I assumed that players, e.g. VLC, would simply not care about this situation and play back the associated decoded video frame at the instructed time. Is this reasoning plausible, chcunningham@?
,
Jun 10 2016
Yeah, I think that's plausible. MSE is more strict, partly because discontinuities with MSE can be deliberate on the part of the web app (e.g. the user seeks to the middle of the video and the app skips appending a large section between the start and the middle that isn't immediately required). MSE is obligated to report those gaps as best it can so the app can append the missing data later if the user wants to play back in those regions.
,
Jun 10 2016
Re: 24 - yep, at least that's my suspicion atm.
,
Aug 18 2016
Removing MSE label since this looks to be WAI from MSE perspective.
,
Jan 4 2017
Can someone please verify this bug? I'm still not able to play a video using MSE, feeding it webm chunks in sequence mode. The video freezes and only a few frames are shown, with poor quality.
,
Jan 6 2017
Just in case it was lost in the wall of text above, I think this should be a pretty easy fix - simply stop setting track DefaultDuration. DefaultDuration is meant to be used when all (or most) frame durations are the same (one-off deviations can be handled with BlockDuration). But if what you really want is for duration to be derived via timestamp deltas, then just don't set DefaultDuration.
,
Jan 6 2017
#31: just to make sure, do you mean DefaultDuration as in [1]?

| Supported | DefaultDuration | Number of nanoseconds (i.e. not scaled) per frame. |

[1] http://www.webmproject.org/docs/container/#track
,
Jan 6 2017
Yes
,
Jan 6 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/50252de90c71b7d5d12c30121a82d4e790f74252

commit 50252de90c71b7d5d12c30121a82d4e790f74252
Author: mcasas <mcasas@chromium.org>
Date: Fri Jan 06 22:46:38 2017

WebmMuxer: do not set DefaultDuration if timestamp delta is not near constant

WebmMuxer sets WebM's DefaultDuration to the expected delta between frames (derived from the frame rate), but that messes up (some) players due to frames arriving at a different rate. This is a legitimate case if, e.g., the encoded data stream is paused. This CL essentially avoids setting this parameter so that players can just look at each encoded frame's timestamp instead.

[1] http://www.webmproject.org/docs/container/#track

BUG=606000
Review-Url: https://codereview.chromium.org/2617143003
Cr-Commit-Position: refs/heads/master@{#442081}

[modify] https://crrev.com/50252de90c71b7d5d12c30121a82d4e790f74252/media/muxers/webm_muxer.cc
,
Jan 7 2017
stiv.yakovenko@gmail.com, blahblah676@gmail.com, can you still hit this issue after #34?
,
Jan 18 2017
Bulk move Blink>MediaStream>Recording ---> Blink>MediaRecording
,
Jan 23 2017
ping stiv.yakovenko@gmail.com, blahblah676@gmail.com #35
,
Jan 23 2017
I was just going to tell you guys that the previous patch does not seem to work properly. After heavy testing I can only say that "sometimes it works and sometimes it doesn't". Really, this is the most stupid and annoying bug ever. My project will be dead if I cannot properly record the screen and stream video chunks at the same time. Can someone please help me?
,
Jan 23 2017
#40: can you please describe how you are testing (if possible uploading e.g. a codepen or jsbin), on which OS, which version of Chrome (I'm assuming Canary), etc.? The easier it is to repro, the higher the chances of getting it resolved. Also verify that any recording can be correctly played back in external players, e.g. VLC.
,
Jan 23 2017
I have the same exact problem: http://pastebin.com/4byEAcqD This code runs perfectly fine in Firefox (50.1.0), but Chrome (55.0.2883.87 m (64-bit)) runs it only for a few seconds and freezes afterwards.
,
Jan 23 2017
Adding to #42 - forgot to tell you: I'm on Win 8.1 64-bit.
,
Jan 23 2017
Okay, it seems that the bug is fixed, but not in the current Electron stable build. Please see https://github.com/electron/libchromiumcontent/issues/253#issuecomment-274553229
,
Jan 23 2017
#42, #43: #34 landed in 57.0.2975.0, can you please try again with a Canary or Dev?
,
Jan 23 2017
I managed to repro the issue with a modified version of the jsbin in #21 (needs s/vorbis,vp9/opus,vp8/), see https://jsfiddle.net/miguelao/ay3pq0xt/.

One of the realities of MR is that video and audio arrive at different times and in bursts. The WebM/mkv format specifies that timestamps must be monotonically increasing; for this reason webm_muxer.cc has a step where the timestamp is clamped to the latest seen @ [1]. This creates a timestamp sequence that "jumps". I added some timestamp dumps to illustrate this situation: VIDEO/AUDIO refers to the incoming encoder timestamp, and AddFrame is what goes into the WebM stream:

[1:1:0123/151538.274790:VERBOSE1:webm_muxer.cc(164)] VIDEO @0 s
[1:1:0123/151538.275086:VERBOSE1:webm_muxer.cc(324)] AddFrame@0 s
[1:1:0123/151538.278864:VERBOSE1:webm_muxer.cc(164)] VIDEO @0.050088 s
[1:1:0123/151538.279089:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.050088 s
[1:1:0123/151538.286713:VERBOSE1:webm_muxer.cc(164)] VIDEO @0.100169 s
[1:1:0123/151538.287289:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.100169 s

.. the first encoded audio frame arrives with timestamp 0.059935s; since it's in the past, it's clamped to the latest seen, 0.100169s:

[1:1:0123/151538.288569:VERBOSE1:webm_muxer.cc(200)] AUDIO @0.059935 s
[1:1:0123/151538.288765:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.100169 s
[1:1:0123/151538.291128:VERBOSE1:webm_muxer.cc(200)] AUDIO @0.119789 s
[1:1:0123/151538.291357:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.119789 s
[1:1:0123/151538.291798:VERBOSE1:webm_muxer.cc(200)] AUDIO @0.179809 s
[1:1:0123/151538.291999:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.179809 s

... now video arrives with timestamp 0.151605s, also in the past, also truncated to the latest timestamp seen, 0.179809s:

[1:1:0123/151538.321515:VERBOSE1:webm_muxer.cc(164)] VIDEO @0.151605 s
[1:1:0123/151538.321740:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.179809 s
[1:1:0123/151538.330600:VERBOSE1:webm_muxer.cc(200)] AUDIO @0.241695 s
[1:1:0123/151538.330799:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.241695 s
[1:1:0123/151538.368820:VERBOSE1:webm_muxer.cc(164)] VIDEO @0.201305 s
[1:1:0123/151538.369115:VERBOSE1:webm_muxer.cc(324)] AddFrame@0.241695 s

etc.

This situation is not a problem per se, although it introduces a jitter that translates into an imperceptible jerkiness and lack of lipsync. External players such as VLC reproduce the multiplexed sequence correctly, but MediaSource doesn't like to see a video and an audio frame with the same timestamp, and it hits the check in [2]. A potential workaround is to force no two timestamps to be the same.

[1] https://cs.chromium.org/chromium/src/media/muxers/webm_muxer.cc?q=webm_muxer.cc&sq=package:chromium&dr&l=318
[2] [1:1:0123/152725.614138:FATAL:source_buffer_range.cc(88)] Check failed: timestamp_delta > base::TimeDelta().
#0 0x7f116f180b7e base::debug::StackTrace::StackTrace() #1 0x7f116f1ea0ff logging::LogMessage::~LogMessage() #2 0x7f1165dd3b01 media::SourceBufferRange::AdjustEstimatedDurationForNewAppend() #3 0x7f1165dd326e media::SourceBufferRange::AppendBuffersToEnd() #4 0x7f1165dfccac media::SourceBufferStream::Append() #5 0x7f1165d212cb media::ChunkDemuxerStream::Append() #6 0x7f1165d8a019 media::MseTrackBuffer::FlushProcessedFrames() #7 0x7f1165d8c966 media::FrameProcessor::FlushProcessedFrames() #8 0x7f1165d8ab78 media::FrameProcessor::ProcessFrames() #9 0x7f1165de628f media::SourceBufferState::OnNewBuffers() #10 0x7f1165df40ed _ZN4base8internal13FunctorTraitsIMN5media17SourceBufferStateEFbRKNSt7__debug3mapIiNS4_5dequeI13scoped_refptrINS2_18StreamParserBufferEESaIS9_EEESt4lessIiESaISt4pairIKiSB_EEEEEvE6InvokeIPS3_JSK_EEEbSM_OT_DpOT0_ #11 0x7f1165df4036 _ZN4base8internal12InvokeHelperILb0EbE8MakeItSoIRKMN5media17SourceBufferStateEFbRKNSt7__debug3mapIiNS6_5dequeI13scoped_refptrINS4_18StreamParserBufferEESaISB_EEESt4lessIiESaISt4pairIKiSD_EEEEEJPS5_SM_EEEbOT_DpOT0_ #12 0x7f1165df3fb7 _ZN4base8internal7InvokerINS0_9BindStateIMN5media17SourceBufferStateEFbRKNSt7__debug3mapIiNS5_5dequeI13scoped_refptrINS3_18StreamParserBufferEESaISA_EEESt4lessIiESaISt4pairIKiSC_EEEEEJNS0_17UnretainedWrapperIS4_EEEEEFbSL_EE7RunImplIRKSN_RKSt5tupleIJSP_EEJLm0EEEEbOT_OT0_NS_13IndexSequenceIJXspT1_EEEESL_ #13 0x7f1165df3eec _ZN4base8internal7InvokerINS0_9BindStateIMN5media17SourceBufferStateEFbRKNSt7__debug3mapIiNS5_5dequeI13scoped_refptrINS3_18StreamParserBufferEESaISA_EEESt4lessIiESaISt4pairIKiSC_EEEEEJNS0_17UnretainedWrapperIS4_EEEEEFbSL_EE3RunEPNS0_13BindStateBaseESL_ #14 0x7f1165e5b936 base::internal::RunMixin<>::Run() #15 0x7f1165e5b19b media::WebMStreamParser::ParseCluster() #16 0x7f1165e5a1c4 media::WebMStreamParser::Parse() #17 0x7f1165de71d0 media::SourceBufferState::Append() #18 0x7f1165d29b36 media::ChunkDemuxer::AppendData() #19 0x7f115a128d85 media::WebSourceBufferImpl::append() #20 0x7f1153f665cd blink::SourceBuffer::appendBufferAsyncPart()
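For readers without the Chromium source at hand, here is a rough sketch of the clamping behaviour described above (illustrative JavaScript only, not the actual webm_muxer.cc code):

// Illustration of the monotonicity clamp described above (not the real muxer code).
var lastTimestamp = 0;

function addFrame(kind, timestamp) {
  // WebM requires non-decreasing timestamps, so a frame arriving "in the past"
  // is clamped to the latest timestamp already written.
  var written = Math.max(timestamp, lastTimestamp);
  lastTimestamp = written;
  console.log(kind + ' @' + timestamp + ' s -> AddFrame@' + written + ' s');
}

addFrame('VIDEO', 0.100169);
addFrame('AUDIO', 0.059935);  // clamped to 0.100169, so two frames share a timestamp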
,
Jan 24 2017
Correction, when I wrote: "...but MediaSource doesn't like to see a video and an audio frame with the same timestamp" I meant: "...but MediaSource doesn't like to see TWO video OR audio frameS with the same timestamp"
,
Jan 24 2017
https://cs.chromium.org/chromium/src/media/filters/source_buffer_range.cc?rcl=0&l=88 That check is just a DCHECK. I don't expect things to go sideways if that DCHECK fails. Since adding that DCHECK I've learned of a number of valid scenarios where two frames may actually have the same timestamp, so this DCHECK should really be changed to >=.
,
Jan 24 2017
(Not suggesting >= will fix your issue, it will just unblock you to uncover whatever is really going wrong)
,
Jan 28 2017
We have been doing some further investigation and found that, at the very least,
there is a misconception about the behaviour of MSE. In particular, the
following construction
mediaRecorder.ondataavailable = function (e) {
  var reader = new FileReader();
  reader.addEventListener("loadend", function () {
    var arr = new Uint8Array(reader.result);
    sourceBuffer.appendBuffer(arr); // <----
  });
  reader.readAsArrayBuffer(e.data);
}
suffers from the problem that, if |sourceBuffer| is busy, appendBuffer()
will reject the contents (|e.data|) that were supposed to be appended.
This situation is made more evident by the fact that MediaRecorder
produces Blobs that are only guaranteed to make sense when
serialized, from the Spec [1]:
> "When multiple Blobs are returned (because of timeslice or requestData()),
> the individual Blobs need not be playable, but the combination of all the
> Blobs from a completed recording MUST be playable."
If a given MR Blob is not appended to |sourceBuffer| and this Blob does not
contain a full chunk (e.g. a multiplexed video or audio frame), the
next chunk-Blob will most likely derail the SourceBuffer's internal parser.
The solution is to monitor e.g. |sourceBuffer.updating| [2] or listen to
the |updateend| event, as in the example in the Spec (a corrected sketch
follows the references below):
https://www.w3.org/TR/media-source/#examples
[1] https://w3c.github.io/mediacapture-record/MediaRecorder.html#dom-mediarecorder-start
[2] https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer/updating
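A corrected sketch of the construction above (assuming the same |mediaRecorder| and |sourceBuffer| objects), converting each Blob up front, queueing the results, and appending only when the SourceBuffer is idle:

var pending = [];

function appendNext() {
  // Only append when the SourceBuffer is idle; 'updateend' will drain the queue.
  if (sourceBuffer.updating || pending.length === 0) return;
  sourceBuffer.appendBuffer(pending.shift());
}

sourceBuffer.addEventListener('updateend', appendNext);

mediaRecorder.ondataavailable = function (e) {
  var reader = new FileReader();
  reader.addEventListener('loadend', function () {
    pending.push(new Uint8Array(reader.result));
    appendNext();
  });
  reader.readAsArrayBuffer(e.data);
};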
,
Jan 28 2017
Regarding #45: since there was an update, I tried it on OS X and Win 8.1 with version 56.0.2924.76 (64-bit), with changes according to #50. Both failed to replay the video. I also tried Canary on both OSes and it worked perfectly fine!
,
Jan 31 2017
Guys, somehow I'm now having an issue similar to this. I have a 10-minute video divided into 3-second chunks. I'm appending the chunks in order to a MediaSource, but with this particular long video, playback runs for 3 seconds and then stops. Most likely the video plays for the chunk duration, but when a new chunk should be played, it stops... This does not happen for short videos recorded on the same machine. The video is recorded using MR on Electron (Chrome 54). The playback issue exists in the current stable Chrome version and in Canary too.
,
Jan 31 2017
demian85@, just to confirm, are you monitoring the sourceBuffer.updating attribute as suggested in comment #50? I notice that the JS from the zip you uploaded here does not appear to monitor the sourceBuffer.updating attribute in the nextChunk() method. This may cause you to silently lose appended chunks.
,
Jan 31 2017
the zip I mentioned was actually uploaded to a different bug: https://bugs.chromium.org/p/chromium/issues/detail?id=678269#c27 but I assume you're still using something similar?
,
Jan 31 2017
I'm appending chunks after every 'updateend' event, so I don't understand how I can be missing chunks. Also, it's not very clear how checking the 'updating' property would help; if it's updating, then what? Should I keep trying until it's not updating anymore? Wouldn't it make more sense to listen for the proper event?
,
Jan 31 2017
If sourceBuffer.updating is true, you'll want to listen for updateend and then append.

> I'm appending chunks after every 'updateend' event

In the zip I mentioned I see you listen to 'updateend' and then schedule a call to nextChunk(), but I also see you call nextChunk() in appendBlob() in response to 'dataavailable' from mediaRecorder - with no checking of sourceBuffer.updating in this path. Are you still using this code? If not, can you upload your latest and I'll dig in.
,
Feb 1 2017
What happened to my last comment?
,
Feb 1 2017
???? Can someone please explain why I cannot comment?
,
Feb 1 2017
Sorry I don't know what is happening to your comments. I did get emails with the full text of your comments though, so I'm hearing you. Please upload or link to a demo of the full end-to-end recording -> media source code that is giving you trouble.
,
Feb 1 2017
Okay, just a moment, because I'm debugging and found something out of place that I cannot understand. I'm logging the timestampOffset property of the source buffer being used to append media segments, and for the video that gets stuck, it prints a negative value for almost every chunk, from -0.125 to -2.576. The video that works fine prints 0 all the time... So, who is setting that value and why? Trying to set it to 0 raises an invalid state error. I cannot find a place to update its value correctly, and I don't even understand why I am supposed to do that. Pretty sure this is the issue, but I would like an explanation! Thanks.
,
Feb 1 2017
I can only speculate without seeing your code. Please upload a concise demo and I'll take a look.
,
Feb 1 2017
Well, here it comes.

Video that gets stuck in both Chrome and Firefox, but somehow plays inside the Electron app using Chrome 54 and the SAME player JS code you can see at the following URL: http://teamcapture.herokuapp.com/preview/1485959013537

Video that works everywhere: http://teamcapture.herokuapp.com/preview/1485867454742

Both were recorded on the same Mac OS X 10.11. Can you spot any difference between those two videos? As I told you before, timestampOffset is different, but I'm not sure what that means. Most likely the webm chunks are fucked up somehow. Thanks.
,
Feb 1 2017
Your code is setting sourceBuffer.mode = 'sequence'. This will cause timestampOffset to update automatically. See here for more info: https://www.w3.org/TR/media-source/#dom-appendmode-sequence Is sequence mode important for your app? What are you aiming to do by using it instead of segments mode?

Let me also double check that we are seeing the same behavior. Browser version: 55.0.2883.95

When I open http://teamcapture.herokuapp.com/preview/1485959013537 (first link), I see the video play all the way to the end, but I note that it freezes a few times in the middle for about 4 seconds (works itself out). Is that what you mean by stuck (the momentary freezing)?

When I open http://teamcapture.herokuapp.com/preview/1485867454742 ("works everywhere"), it's hard to know for certain whether there is any freezing - not much seems to be going on in the video other than a Slack toast around the 1 minute mark. I hear audio describing some actions, but it's almost as if they're taking place on another monitor? Is this working as you expect?

When I use Chrome Canary your first video stops playback due to gaps in your buffered range. Haven't looked into this yet. The second video plays the same as v55.

Can dig deeper, but let's first make sure we're on the same page about expectations / observed behavior.
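For reference, the mode is a single property on the SourceBuffer; a sketch of the choice (the comments reflect the spec behaviour linked above):

// 'segments' (the default for WebM byte streams): frames are placed according to
// their own timestamps, so gaps in the recording show up as gaps in
// video.buffered, and timestampOffset stays wherever the app sets it (0 by default).
// 'sequence': the UA places each appended media segment right after the previous
// one and updates timestampOffset automatically - which appears to be why the
// negative offsets above were observed when the media contained discontinuities.
sourceBuffer.mode = 'sequence';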
,
Feb 1 2017
It's like you say, but I'm using Chrome 56.0.2924.76, and the first video freezes after 3 seconds and no longer continues. Please note that I have already cached all video fragments; I mean the video gets stuck even with fragments already cached. Not sure why I'm using sequence mode - I think I had issues with timestamps in the past and that's what worked for me. I've also tried segments mode, and the first video plays a few seconds more until it gets stuck... So something changed between these two Chrome versions, and now the MSE API is even stricter (or dumber) than before.
,
Feb 2 2017
So, I kept digging and this is what I found. I took screenshots of the video playback log for the same working video I sent you before (http://teamcapture.herokuapp.com/preview/1485867454742):

1) https://s3-ap-southeast-1.amazonaws.com/teamcapture-website/video-failing-log
Log from Chrome 56. It shows something like 23 time ranges, so there are gaps in the middle.

2) https://s3-ap-southeast-1.amazonaws.com/teamcapture-website/video-working-log
Log from my Electron app based on Chrome 54. It shows only one normalized time range, as expected.

I tried using sequence and segments mode, and also changing appendWindowStart and timestampOffset; still the same issue. Keep in mind that the JS code used to play the video is the same in both cases. Something changed in the MSE API that is breaking video chunks generated by the MediaRecorder API!
,
Feb 3 2017
@#69 - I chatted with chcunningham@ and we have a hypothesis that he'll pursue verification of (the recorder used to encode an incorrect duration sometimes [details TBD], and Chrome more recently does not cross-fade overlapping audio frames but rather removes the overlapped frame). This leads to at least the first gap in the buffered ranges observed in a local repro I've shared with chcunningham@.

demian85@: can you please try using the MediaRecorder API on Chrome Canary and see if the gaps still repro? If using sequence appendMode, IIUC your player, the timestampOffset should also remain 0 if there were no discontinuities detected in your media. That the links so far showed offsets changing sometimes indicates to me that at least the media was likely problematic.
,
Feb 4 2017
Thanks for the fast response. The thing is, as I mentioned in previous issues, I'm working on an Electron desktop app, and its latest version is based on Chrome 54. I just sent the demo to my co-worker today so he can test ASAP in the latest Chrome. If my demo works fine, then I suppose there is a patch fixing this issue. I'd appreciate it if you pasted the link here so I can submit a PR to electron/libchromiumcontent. Thanks.
,
Feb 4 2017
Well, sorry. I thought you replied to my latest issue report: https://bugs.chromium.org/p/chromium/issues/detail?id=688490
,
Feb 6 2017
Issue 688490 has been merged into this issue.
,
Feb 10 2017
I'm still looking into this issue. It's early, but my debugging shows some anomalies with the timestamps coming out of the muxed video from MediaRecorder. Will post back when I know more.
,
Feb 14 2017
mcasas@chromium.org: sorry for the delay in replying. I just tested in 58.0.3012.0 canary (64-bit) on El Capitan, and the problem still occurs. I'm sending the entire video in one chunk, so I'm not hitting the issue you mention wrt the sourcebuffer being busy. Playback of a test .webm file using MediaSource works fine, but one recorded via MediaRecorder doesn't play at all. Playback works fine in Firefox.
,
Feb 14 2017
#75: when you say "Playback works fine in Firefox.", do you mean playback of the file produced with Chrome's MediaRecorder? If you haven't tried that, could you please do so? Also, if you could upload a live JS (or clarify which one you're using) and upload the file you are having trouble reproducing, that'd help a ton.
,
Feb 14 2017
Yes, I recorded the webm file on Chrome canary (using https://webrtc.github.io/samples/src/content/getusermedia/record/) and it plays back perfectly via MediaSource on Firefox.
,
Feb 15 2017
I just checked https://www.groupboard.com/gum_record/mediasource.html again (I assume you uploaded the new MediaRecorder video to that same URL) - this time it appears to be a different issue. chrome://media-internals shows this log:

"Audio stream codec opus doesn't match SourceBuffer codecs."

You're creating your SourceBuffer with

MediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');

but your video has opus and vp9. If you change this call to

MediaSource.addSourceBuffer('video/webm; codecs="opus,vp9"');

your video will play back in Chrome Canary (and perhaps stable, didn't check).
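One way to avoid this kind of mismatch is to derive the addSourceBuffer() type from the recorder instead of hard-coding it. A sketch, assuming mediaRecorder and mediaSource already exist and that the MediaRecorder was constructed with an explicit codecs parameter so its mimeType attribute carries the codecs:

// Use the recorder's actual mimeType (e.g. 'video/webm;codecs=vp9,opus') rather
// than a hard-coded 'video/webm; codecs="vorbis,vp8"'.
var type = mediaRecorder.mimeType || 'video/webm; codecs="opus,vp9"';
if (!MediaSource.isTypeSupported(type)) {
  console.error('Unsupported type: ' + type);
}
var sourceBuffer = mediaSource.addSourceBuffer(type);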
,
Feb 15 2017
Yes, you're right. It works perfectly when vp8 is specified in addSourceBuffer. For some reason Firefox doesn't seem to mind that the MIME type doesn't match the actual data.
,
Feb 15 2017
Great, but this has nothing to do with my reported bug. Please keep digging into the original issue with MediaRecorder on Mac.
,
Feb 17 2017
Just to be sure: I'm personally friendly towards maintaining the quality of Chromium downstream projects, e.g. Electron, Samsung Internet or Opera, but at the end of the day the current Chrome Stable (56.0.2924.87 as of today) is the platform we have to maintain here in this bug tracker.

Bug fixes and/or new features _cannot_ be merged back in the majority of cases; bugs are solved on the tip-of-the-tree and are verified in Canary. Once Canary is fixed, the bug is considered closed; there is no option of merging back to Stable nor of distributing the fix to the embedders. Rolling embedders need to evaluate for themselves whether merging the fix or fixes is worth doing, taking into account that when they roll to the latest upstream version, they'll have a merge conflict; if these embedders follow a fire-and-forget system, then they are even more on their own, since we cannot support the explosion of branches that this method causes.

Based on #81, Canary is fixed and we could mark this bug as fixed, regardless of any later investigations in the MR-MSE coupling.

Re. Electron: M54 is before M56, so there's nothing we can do here. I'd recommend creating a patch with the MediaRecorder related differences, i.e.:
- media/muxers/*
- content/renderer/media/recorder/*, previously content/renderer/media/*track_recorder* and content/renderer/media/media_recorder*
- third_party/WebKit/Source/modules/mediarecorder/
- third_party/libwebm
and any other files affected by MSE changes, if any.
,
Feb 18 2017
In case others encounter trouble with MediaRecorder -> MSE, I've uploaded a small demo here: https://storage.googleapis.com/chcunningham-chrome-shared/mr_to_mse.html This requires the chromium patches in comments above to work. I've verified it works on Mac Canary 58.0.3015.0.
,
Feb 20 2017
Thanks. As I stated in the other issue, the demo does not seem to work on Chrome 56, or at least it only works sometimes. Canary works fine, but still, there is no clue as to which patches I should apply to M56 to fix the issue. Unfortunately, I cannot just create a patch with a bunch of files and submit a PR without proper testing or evidence that this is the fix... and there is no way for me to test everything regarding MediaRecorder... too many things can break!
,
Feb 22 2017
Based on #85, #86 and my comments on #84, I think we can close this issue now, since it seems fixed in ToT.
,
Aug 10 2017
Hello. I have a very related question. It seems that recently Linux Chromium's MediaSource has lost support for vp9. To see what I mean, try this:
MediaSource.isTypeSupported('video/webm; codecs="vp9"')
It's "false" :( This used to be "true", because my MediaRecorder/MediaSource video streaming thing used to work with vp9 and Chromium. Chrome still works fine, but what about Chromium?
Thoughts? Thanks.
,
Aug 10 2017
Replying to your question in the other bug (Issue 726009).