data channel gets closed when transferring a large file
Reported by fi...@appear.in, Jan 25 2017
Issue description

What steps will reproduce the problem?
See https://github.com/andyet/SimpleWebRTC/issues/519. Notably, simplewebrtc does not use bufferedAmountLowThreshold (somehow I never merged a change).

What is the expected result?
The file is transferred. This used to work, and it still works on localhost connections.

What do you see instead?
As described by the user, I get an
Uncaught DOMException: Failed to execute 'send' on 'RTCDataChannel': RTCDataChannel.readyState is not 'open'(…)
I managed to reproduce and saw:
[1:14:0125/192622.336510:ERROR:stunport.cc(283)] Jingle:Port[0x11a972551000:data:1:0:local:Net[any:0.0.0.x/0:Unknown]]: UDP send of 1217 bytes failed with error 11
I can share chrome_debug.log, but it seems to contain parts of the transferred file, so it is about 250 megabytes.

What version of the product are you using? On what operating system?
Chrome 57 on Linux.

Please provide any additional information below.
Would it help if I tried turning IPv6 off to see if that causes it?
Jan 26 2017
Jan 30 2017
Feb 2 2017
I forgot I can actually move the bug. Doing that now.
Feb 2 2017
Feb 2 2017
Feb 3 2017
deadbeef@: Can you take a look or reassign if you're not the right owner for this?
Feb 3 2017
fippo: When the final send fails, what's the buffered amount? If the buffer's full, and the underlying socket is returning "EAGAIN", that would indicate that you're trying to send too fast, and this would be expected behavior. I don't like that this closes the data channel, but that's how the spec's written... The difference between Chrome 57 and 56 may just be that I did some refactoring that eliminated a thread hop, which means you can fill the buffer slightly faster.
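The flow control deadbeef describes can be sketched as follows. This is an illustrative pattern, not SimpleWebRTC's actual code: `sendWithBackpressure` and `HIGH_WATER_MARK` are hypothetical names, and the sketch assumes a channel-like object exposing `bufferedAmount`, `bufferedAmountLowThreshold`, and the spec's (lowercase) `bufferedamountlow` event.

```javascript
// Spec-style backpressure: stop calling send() once bufferedAmount
// exceeds a high-water mark, and resume on 'bufferedamountlow'.
// Otherwise the buffer can grow until the channel is force-closed.

const HIGH_WATER_MARK = 1024 * 1024; // pause above ~1 MiB buffered (illustrative)

async function sendWithBackpressure(channel, chunks) {
  channel.bufferedAmountLowThreshold = HIGH_WATER_MARK / 2;
  for (const chunk of chunks) {
    if (channel.bufferedAmount > HIGH_WATER_MARK) {
      // Wait until the implementation drains the buffer below the threshold.
      await new Promise((resolve) =>
        channel.addEventListener('bufferedamountlow', resolve, { once: true })
      );
    }
    channel.send(chunk);
  }
}
```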
Feb 4 2017
Anecdotally, bufferedAmount went up to ~512k and kept dropping back down to 0 for quite a while, until things got really bad; the last bufferedAmount before the exception was 16646144 (with a slightly higher peak). I'll give the bufferedAmountLow PR another try. This (sadly) sounds like you're conforming to the spec, so WONTFIX please. Thanks!
Feb 4 2017
hrm. There might still be bugs lurking here. I updated https://github.com/otalk/filetransfer/pull/7 If I remove the poll function in https://github.com/otalk/filetransfer/pull/7/files#diff-3b724eca668d32d165c4b81212c7457fR43 I get an error 11 while transferring and things simply hang. The error is not fatal and is fixed by polling at a low frequency. I've attached a chrome debug log with additional FILETRANSFER console.logs at the beginning of sliceFile, in onload, and in the else branch where I removed the poll function. The last call I can see is the WAITING one on line 2941. There are some subsequent writes, and there is an error 11 on line 3581. This looks like it is flushing the write buffer, running into an error, and not calling the bufferedAmountLow callback. The data channel's readyState remains open, which seems correct given that this is not fatal.
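The low-frequency polling workaround mentioned above might look something like this. It is a sketch under the assumption that the drain event can be missed after a transient send error; `waitForDrain` and `POLL_INTERVAL_MS` are hypothetical names, not the actual filetransfer code.

```javascript
// Fallback when the 'bufferedamountlow' event never fires: poll
// bufferedAmount at a low frequency and resolve once it has drained
// to (or below) the configured threshold.

const POLL_INTERVAL_MS = 250; // low frequency, as described in the comment

function waitForDrain(channel) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      if (channel.bufferedAmount <= channel.bufferedAmountLowThreshold) {
        clearInterval(timer);
        resolve();
      }
    }, POLL_INTERVAL_MS);
  });
}
```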
Feb 4 2017
Repro steps for #10:
1) go to https://fippo.github.io/SimpleWebRTC/filetransfer
2) create a room (or add ?some-room-name)
3) join the same room from another machine (I seem to have some packet loss between my machines)
4) send a fairly large file
At some point it hangs (at least for me).
Feb 4 2017
I think there's an off-by-one error. Should be "channel.bufferedAmount <= channel.bufferedAmountLowThreshold", not "<"; if the buffered amount reaches exactly the threshold (which is possible, since it's 8 * chunksize) you'd hit this. Anyway, closing this as WontFix as your earlier comment recommends.
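The comparison in question, reduced to a helper (`canResume` is an illustrative name, not the actual filetransfer code): since the sender's threshold is a multiple of the chunk size, the buffered amount can land exactly on the threshold, and a strict `<` would never allow sending to resume.

```javascript
// Resume sending only when the buffer has drained to (or below) the
// threshold. '<=' is the fix; with '<' an exact hit on the threshold
// would stall the transfer forever.

function canResume(channel) {
  return channel.bufferedAmount <= channel.bufferedAmountLowThreshold;
}
```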
Feb 4 2017
Even fixing the off-by-one error I get the same behaviour. Once the bufferedAmount is too high and there is a write error, the threshold callback isn't called.
Feb 4 2017
Hmm, reopening then. I'd be very surprised if this wasn't working though. Could it be that "bufferedAmountLow" has capital letters (are events case-sensitive?)?
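For context: DOM event types are case-sensitive, and the spec's event name is all-lowercase, `bufferedamountlow`. A listener registered under the camelCase spelling simply never fires, which is consistent with the symptom. This can be demonstrated with a plain `EventTarget` (available in browsers and Node 15+):

```javascript
// Event type strings are matched exactly, so 'bufferedAmountLow'
// and 'bufferedamountlow' are two different events.

const target = new EventTarget();
let wrongCaseFired = false;
let correctCaseFired = false;

target.addEventListener('bufferedAmountLow', () => { wrongCaseFired = true; });
target.addEventListener('bufferedamountlow', () => { correctCaseFired = true; });

// RTCDataChannel dispatches the lowercase event name:
target.dispatchEvent(new Event('bufferedamountlow'));

// wrongCaseFired stays false; correctCaseFired becomes true.
```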
Feb 4 2017
Sounds like that was it.