Status: Started
Owner:
Cc:
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 1
Type: Bug

Blocking:
issue 738024



update_engine: slow to respond to GetStatus calls over D-Bus
Project Member Reported by quiche@chromium.org, Jan 15 2016
I frequently see "cros flash" fail while trying to re-flash my whirlwind. In every case, the error message from "cros flash" is something like the messages below. Are there any recent changes to update_engine that would explain this?

[0115/211904:INFO:update_engine_client.cc(604)] Querying Update Engine status...
[0115/211929:ERROR:logging.h(779)] Failed to call method: org.chromium.UpdateEngineInterface.GetStatus: object_path= /org/chromium/UpdateEngine: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
[0115/211929:ERROR:dbus_method_invoker.h(111)] CallMethodAndBlockWithTimeout(...): Domain=dbus, Code=org.freedesktop.DBus.Error.NoReply, Message=Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
[0115/211929:FATAL:update_engine_client.cc(258)] Check failed: ret. GetStatus() failed
 
Comment 1 by de...@chromium.org, Jan 15 2016
Cc: de...@chromium.org
Owner: quiche@chromium.org
That's basically either a timeout on GetStatus or maybe an update_engine crash.

update_engine is single threaded and we don't use truly asynchronous I/O (as in aio_read / aio_write) when writing to disk... so it could be the case that the dbus call timed out while we were busy writing to disk. We do go back to the main loop after every chunk of data received from libcurl, and our main loop ensures round-robin handling of these disk events... but the specific hardware specs of whirlwind may cause a single operation to be too slow and time out the dbus call. This is hard to tell without logs.
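
To illustrate the pattern described above, here is a minimal sketch (hypothetical types and helpers, not update_engine's actual classes), assuming a GLib main loop: each chunk received from libcurl is written in its own main-loop iteration, so D-Bus requests can interleave between chunks, but nothing can run during a single slow write().

    // Hypothetical sketch of chunk-at-a-time disk writes on a GLib main loop.
    #include <glib.h>
    #include <unistd.h>
    #include <vector>

    struct Transfer {
      int fd;                     // destination partition
      std::vector<char> pending;  // one chunk received from libcurl
    };

    static gboolean WriteOneChunk(gpointer user_data) {
      auto* t = static_cast<Transfer*>(user_data);
      // If this single write stalls (slow eMMC, deep write queue), nothing
      // else runs until it returns, including D-Bus GetStatus handling.
      if (write(t->fd, t->pending.data(), t->pending.size()) < 0) {
        // Error handling elided in this sketch.
      }
      return G_SOURCE_REMOVE;  // re-queued when the next chunk arrives
    }

    // Called from the libcurl write callback: defer the disk write to the
    // main loop instead of blocking inside curl.
    void OnCurlData(Transfer* t) {
      g_idle_add(WriteOneChunk, t);
    }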

Next time you see this, please attach the /var/log/update_engine.log from the DUT when you see this failure. Thanks.
Comment 2 by quiche@chromium.org, Jan 21 2016
update_engine log files attached.

For context, client side logs (from cros flash) pasted inline:
13:15:15: WARNING: --- Start output from /mnt/stateful_partition/cros-flash/tmp.cIcUYr1QfR/dev_server.log ---[21/Jan/2016:21:12:52] DEVSERVER Using cache directory /mnt/stateful_partition/cros-flash/tmp.cIcUYr1QfR/static/cache
[21/Jan/2016:21:12:52] DEVSERVER Serving from /mnt/stateful_partition/cros-flash/tmp.cIcUYr1QfR/static
[21/Jan/2016:21:12:52] XBUDDY Using shadow config file stored at /mnt/stateful_partition/cros-flash/tmp.cIcUYr1QfR/src/shadow_xbuddy_config.ini
[21/Jan/2016:21:12:52] DEVSERVER Python module psutil is not installed. Function call <function _start_io_stat_thread at 0xb652ce70> is skipped.
[21/Jan/2016:21:12:52] ENGINE Listening for SIGHUP.
[21/Jan/2016:21:12:52] ENGINE Listening for SIGTERM.
[21/Jan/2016:21:12:52] ENGINE Listening for SIGUSR1.
[21/Jan/2016:21:12:52] ENGINE Bus STARTING
[21/Jan/2016:21:12:52] ENGINE Started monitor thread '_TimeoutMonitor'.
[21/Jan/2016:21:12:52] ENGINE PID 30297 written to '/tmp/devserver_wrapper.pid'.
[21/Jan/2016:21:12:52] ENGINE (wait_for_free_port) No cached port to wait for.
[21/Jan/2016:21:12:53] ENGINE Port 47293 written to '/mnt/stateful_partition/cros-flash/tmp.cIcUYr1QfR/dev_server.port'.
[21/Jan/2016:21:12:53] ENGINE (wait_for_occupied_port) Waiting for actual port 47293.
[21/Jan/2016:21:12:53] ENGINE Serving on :::0
[21/Jan/2016:21:12:53] ENGINE Bus STARTED
[21/Jan/2016:21:12:57] DEVSERVER Python module psutil is not installed. Function call <function _get_io_stats at 0xb6533330> is skipped.
::ffff:127.0.0.1 - - [21/Jan/2016:21:12:57] "GET /check_health HTTP/1.1" 200 43 "" "Wget/1.16 (linux-gnueabi)"
[21/Jan/2016:21:12:58] UPDATE Using static url base http://127.0.0.1:47293/static
[21/Jan/2016:21:12:58] UPDATE Handling update ping as http://127.0.0.1:47293
[21/Jan/2016:21:12:58] UPDATE Update Check Received. Client is using protocol version: 3.0
[21/Jan/2016:21:12:58] UPDATE Update label/file: pregenerated/chromiumos_test_image.bin
[21/Jan/2016:21:12:58] UPDATE client version 7825.0.2016_01_15_1704 latest version 9999.0.0
[21/Jan/2016:21:13:00] UPDATE Responding to client to use url http://127.0.0.1:47293/static/pregenerated/update.gz to get image
::ffff:127.0.0.1 - - [21/Jan/2016:21:13:00] "POST /update/pregenerated HTTP/1.1" 200 910 "" ""
[21/Jan/2016:21:13:00] UPDATE Using static url base http://127.0.0.1:47293/static
[21/Jan/2016:21:13:00] UPDATE Handling update ping as http://127.0.0.1:47293
[21/Jan/2016:21:13:00] UPDATE A non-update event notification received. Returning an ack.
::ffff:127.0.0.1 - - [21/Jan/2016:21:13:00] "POST /update/pregenerated HTTP/1.1" 200 253 "" ""
::ffff:127.0.0.1 - - [21/Jan/2016:21:13:00] "GET /static/pregenerated/update.gz HTTP/1.1" 200 59453998 "" ""
--- End output from /mnt/stateful_partition/cros-flash/tmp.cIcUYr1QfR/dev_server.log ---
13:15:23: ERROR: Device update failed.
13:15:23: ERROR: cros flash failed before completing.
13:15:23: ERROR: return code: 134; command: ssh -p 22 '-oConnectionAttempts=4' '-oUserKnownHostsFile=/dev/null' '-oProtocol=2' '-oConnectTimeout=30' '-oServerAliveCountMax=3' '-oStrictHostKeyChecking=no' '-oServerAliveInterval=10' '-oNumberOfPasswordPrompts=0' -i /tmp/ssh-tmpnJ9loj/testing_rsa root@chromeos1-dev-host6-router.cros -- update_engine_client --status
Warning: Permanently added 'chromeos1-dev-host6-router.cros,100.107.156.204' (RSA) to the list of known hosts.
Warning: Permanently added 'chromeos1-dev-host6-router.cros,100.107.156.204' (RSA) to the list of known hosts.
[0121/211447:INFO:update_engine_client.cc(604)] Querying Update Engine status...
[0121/211512:ERROR:logging.h(779)] Failed to call method: org.chromium.UpdateEngineInterface.GetStatus: object_path= /org/chromium/UpdateEngine: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
[0121/211512:ERROR:dbus_method_invoker.h(111)] CallMethodAndBlockWithTimeout(...): Domain=dbus, Code=org.freedesktop.DBus.Error.NoReply, Message=Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
[0121/211512:FATAL:update_engine_client.cc(258)] Check failed: ret. GetStatus() failed
update_engine.19700101-000010
109 KB Download
Comment 3 by quiche@chromium.org, Jan 21 2016
Cc: -de...@chromium.org quiche@chromium.org
Owner: de...@chromium.org
Tag, you're it. :-)
Comment 4 by quiche@chromium.org, Jan 21 2016
I could have sworn I attached two log files in #c2. Oh well, here's the second one.
update_engine.19700101-000006
538 KB Download
Comment 5 by de...@chromium.org, Jan 21 2016
I walk to your office to say hi and this is how you reply? waaaaaa :-(
Comment 6 by de...@chromium.org, Jan 21 2016
Interleaving the (second) log with the update_engine_client one gives these results:

daemon: [0121/211446:INFO:delta_performer.cc(162)] Completed 263/383 operations (68%), 54705968/59453998 bytes downloaded (92%), overall progress 80%
client: [0121/211447:INFO:update_engine_client.cc(604)] Querying Update Engine status...
daemon: [0121/211510:INFO:delta_performer.cc(162)] Completed 338/383 operations (88%), 55607088/59453998 bytes downloaded (93%), overall progress 90%
client: [0121/211512:ERROR:logging.h(779)] Failed to call method: org.chromium.UpdateEngineInterface.GetStatus: ....
daemon: [0121/211529:INFO:delta_performer.cc(100)] /dev/mmcblk0p4 is not an MTD nor a UBI device.

so... yeah, it looks like update_engine takes its time to reply. I guess I need to repro this bug by running update_engine with --v=1 on ww.
Comment 7 by dchan@chromium.org, Feb 13 2016
Status: Assigned
Comment 8 by de...@chromium.org, Feb 13 2016
Cc: de...@chromium.org
Owner: ----
Status: Available
I don't have a WW to repro this right now, nor time assigned to work on this.

Patches welcome!
Project Member Comment 9 by sheriffbot@chromium.org, Feb 13 2017
Labels: Hotlist-Recharge-Cold
Status: Untriaged
This issue has been available for more than 365 days, and should be re-evaluated. Please re-triage this issue.
The Hotlist-Recharge-Cold label is applied for tracking purposes, and should not be removed after re-triaging the issue.

For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
I have a WW at my desk and it seems like we need to resolve this issue before I can commit changes that reduce buffer cache usage while writing partitions.

It seems like any function that is writing "significant amounts of data" to disk needs to poll for D-Bus calls and handle them between writing chunks of the original write request. "Cooperative multi-tasking" was standard practice before pre-emptive multi-tasking was available.
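
A minimal sketch of that cooperative approach (hypothetical helper, not existing update_engine code), again assuming a GLib main loop: split one large write into chunks and dispatch any pending main-loop events, D-Bus requests included, between chunks.

    // Hypothetical chunked write that yields to the main loop between chunks.
    #include <glib.h>
    #include <unistd.h>
    #include <cstddef>

    bool ChunkedWrite(int fd, const char* buf, size_t len, size_t chunk) {
      while (len > 0) {
        size_t n = len < chunk ? len : chunk;
        ssize_t written = write(fd, buf, n);
        if (written < 0)
          return false;
        buf += written;
        len -= static_cast<size_t>(written);
        // The "poll" step: service anything already queued (incoming D-Bus
        // calls included) without blocking.
        while (g_main_context_iteration(nullptr, FALSE)) {}
      }
      return true;
    }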

Synchronous I/O throughput levels off as we write bigger chunks, so we can dynamically pick a chunk size based on how long the initial write takes. We can start with /sys/block/mmcblk0/queue/optimal_io_size (unless it is 0, in which case use 4MB-16MB) and increase/decrease until the measured time is just below some reasonable threshold for handling D-Bus messages.
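
A hedged sketch of that adaptive sizing (the sysfs path and the 4MB-16MB fallback are from the paragraph above; the 100 ms target is an arbitrary stand-in for "some reasonable threshold"):

    // Hypothetical chunk-size picker based on measured write latency.
    #include <chrono>
    #include <cstddef>
    #include <fstream>

    size_t InitialChunkSize() {
      size_t opt = 0;
      std::ifstream f("/sys/block/mmcblk0/queue/optimal_io_size");
      f >> opt;
      return opt != 0 ? opt : 8 * 1024 * 1024;  // fallback in the 4MB-16MB range
    }

    size_t AdjustChunkSize(size_t chunk, std::chrono::milliseconds elapsed) {
      const std::chrono::milliseconds target{100};  // assumed D-Bus headroom
      if (elapsed > target && chunk > 256 * 1024)
        return chunk / 2;  // writes too slow: shrink toward a 256KB floor
      if (elapsed < target / 2)
        return chunk * 2;  // plenty of headroom: grow for throughput
      return chunk;
    }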

eMMC storage is currently also still "single threaded", and there is only a very minor throughput benefit to queuing block writes in the buffer cache (assuming the block sizes are "big enough", say > 256KB or so).
Just O_DSYNC seems to work and sufficiently deals with the primary issue of outstanding dirty pages. I've updated the CL that I've been testing with:
    https://bugs.chromium.org/p/chromium/issues/detail?id=578270

This works on both whirlwind and gale. Just need to run the autotest as advertised in the commit message.
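
For reference, the O_DSYNC approach amounts to opening the target partition roughly like this (an illustrative sketch only; the actual change is in the CL): each write() then returns only after the data has reached the device, so dirty pages cannot pile up behind one huge flush.

    // Hypothetical open call showing the O_DSYNC flag.
    #include <fcntl.h>

    int OpenPartitionForUpdate(const char* path) {
      // O_DSYNC: each write() completes only when the data is on the device.
      return open(path, O_WRONLY | O_DSYNC);
    }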


deymo was exactly right (why am I surprised?!!): we cannot use O_DIRECT until the code "naturally aligns" transactions to the logical block size:
[0601/175827:INFO:delta_performer.cc(295)] GGG: OpenCurrentPartition 0 of 2
[0601/175827:INFO:delta_performer.cc(318)] GGG: OpenCurrentPartition opening /dev/mmcblk0p5
[0601/175827:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175827:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:3726
[0601/175828:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175828:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:757034
[0601/175829:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175829:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:913224
[0601/175830:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175830:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:838029
[0601/175832:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175832:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:304095
[0601/175833:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175833:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:598909
[0601/175833:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175833:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:592328
[0601/175835:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175835:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:1100406
[0601/175836:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175836:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:556268
[0601/175837:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175837:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:866391
[0601/175838:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175838:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:843616
[0601/175839:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175839:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:932583
[0601/175840:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175840:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:1095637
[0601/175842:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175842:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:773910
[0601/175843:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175843:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:933373
[0601/175844:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175844:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:878385
[0601/175845:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175845:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:687301
[0601/175846:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175846:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:858742
[0601/175847:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175847:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:130269
[0601/175848:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175848:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:917021
[0601/175849:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175849:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:362854
[0601/175850:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175850:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:48
[0601/175850:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175850:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:48
[0601/175850:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175850:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:48
[0601/175851:INFO:delta_performer.cc(881)] GGG: PerformReplaceOp /dev/mmcblk0p5 #dst_extents:1 bs:4096
[0601/175851:INFO:delta_performer.cc(883)] GGG: PerformReplaceOp /dev/mmcblk0p5 data_len:48
...
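
The log above shows why: O_DIRECT requires the user buffer, file offset, and transfer length to all be aligned to the logical block size, but the data_len values (3726, 757034, ...) are not multiples of bs:4096. A hypothetical pre-check for that constraint:

    // Hypothetical alignment check for O_DIRECT transfers.
    #include <cstddef>
    #include <cstdint>
    #include <sys/types.h>

    bool IsDirectIoCompatible(const void* buf, off_t offset, size_t len,
                              size_t logical_block_size) {
      return reinterpret_cast<uintptr_t>(buf) % logical_block_size == 0 &&
             static_cast<size_t>(offset) % logical_block_size == 0 &&
             len % logical_block_size == 0;
    }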
Cc: grundler@chromium.org
What is the status of this bug, please?
Blocking: 738024
Cc: ahassani@chromium.org
IIRC Deymo is now working from Zurich on the "Compression Team".


I'm not sure, since the bug is about a "symptom" and not a specific cause. I don't know if there are any other outstanding "root cause"(s). Adding Amin in case he knows more.
BTW, Amin,
Is there any log we can look at to determine what the update_engine_client (or update_engine) was doing that prevented a response to the devserver status query?
I just discussed this with grundler@ and we probably need a long-term solution for these kinds of symptoms. We probably need to add more logging in update engine to see which operations, and under what circumstances, take a long time to finish.
If this is urgent and is causing a lot of problems, we might be able to fix it temporarily by retrying the D-Bus request in update_engine_client (for like 2 or 3 times) before bailing out. This might solve a large number of failures, at least for now.
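
A minimal sketch of that stopgap (hypothetical helper, not the CL proposed below): wrap the blocking GetStatus call and retry a few times before giving up, on the theory that the daemon is merely busy rather than dead.

    // Hypothetical retry wrapper around a blocking D-Bus call.
    #include <functional>

    bool CallWithRetries(const std::function<bool()>& dbus_call,
                         int max_attempts = 3) {
      for (int attempt = 0; attempt < max_attempts; ++attempt) {
        if (dbus_call())
          return true;  // daemon answered within the D-Bus timeout
        // Each failed attempt already waited out the ~25s D-Bus timeout,
        // so no extra back-off is added here.
      }
      return false;
    }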
Owner: ahassani@chromium.org
Perhaps we should work on the long-term fix if it isn't too much work. 

ahassani@ can you please make a few notes as to what we should do here, then assign back to me and I'll try to find an owner.

Cc: sjg@chromium.org
The problem here is that update_engine is single threaded, and sometimes when an operation inside update engine (like writing to disk, performing bspatch, downloading the patch, etc.) takes a long time to finish, it fails to respond to D-Bus calls on time. In this case 'update_engine_client --status' times out waiting for the D-Bus response and bails out. We have a similar problem in bug 738024 and I have proposed a solution in https://chromium-review.googlesource.com/c/556356/
It is pending approval. 

In general, we have to figure out where in the update engine, and in what circumstances, this kind of excessive delay happens, and then find a solution to it. This may require more logging in the update engine if the problem is not reproducible.
Blockedon: 738024 738027 738545
Labels: -Pri-2 Pri-1
I'm going to claim that this is blocking progress on several other bugs. I may well be wrong, in which case please drop the link.
Owner: lannm@chromium.org
Lann, here's the promised bug
Status: Started
Lann, welcome to the party! :D

If you can address my last set of comments on this CL
    https://chromium-review.googlesource.com/c/556356/

it can at least be landed to work around the existing race condition which is causing failures.

My suggestion for "next steps" is to measure how long different update_engine operations are taking, as suggested by comment #16 and comment #19.
Blockedon: -738024 -738027 -738545
I think sjg got the dependencies backwards between "blocked on" and "blocking": #738024 can't be both. :)

738027: AFAIK, Parrot is the only Chrome OS device with rotational media (SATA HDD), and since my CL (O_DSYNC) landed, the AU time is marginally exceeding the 20 minute limit. I'll update that bug with my thoughts and possible solutions.

738545 appears to be about how autotest is logging stuff and not directly related to this failure.
Oops thanks Grant.
I have a patch that logs "slow" (>5s) InstallOperations, but I'm not sure how to reproduce this issue locally. Would it make sense to push it and see if we can get logs?
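
The instrumentation is presumably along these lines (a sketch of the idea, not the actual patch): time each InstallOperation and log the ones that exceed the threshold.

    // Hypothetical timing wrapper that logs operations slower than 5s.
    #include <chrono>
    #include <iostream>

    template <typename Fn>
    bool TimedInstallOperation(const char* name, Fn&& op) {
      const auto start = std::chrono::steady_clock::now();
      bool ok = op();
      const auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
          std::chrono::steady_clock::now() - start);
      if (elapsed.count() > 5)
        std::cerr << "Slow InstallOperation " << name << ": "
                  << elapsed.count() << "s\n";
      return ok;
    }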
I think we fixed this issue for now here: https://chromium-review.googlesource.com/c/556356/

I think logging sporadically may not help here; for example, what is the reasoning behind 5 seconds? I think something that can log operation performance and create a histogram of some sort to show where we are lagging, which operations are slow, what the operations' input data sizes are (payload, source, destination), etc. would be a better solution. It's a longer shot but gives better information. If you want to work on this, maybe a one-pager showing which data points are good to log and how to produce readable information would be a good start (but only if you have time for this stuff :) )


@grundler: Any input?
Given that this bug only appears to be affecting developers, I didn't want to sink a lot of time into it. The logs above suggest that the dbus Status call timed out after ~25s, so I just picked 5s as a reasonable fraction of that (but of course it could be any other arbitrary time).
Lann,
Printing a histogram at the end of the update to log how long all the different operations took would be better than just logging one specific operation. Amin and I have talked about this before, so I agree with everything he said. :)

I'd rather have the infrastructure teams parse the histogram to flag when specific operations take "too long".

Slow operations affect Chrome OS Test Lab provisioning and limit the "throughput" of the tests the lab can run. Since we don't really know which operations are taking the most time, it would be good to also collect the total time for each type of operation.

Some of those operations will depend on other resources (like network bandwidth or CPU or write speed of storage). We can log more info later for specific operations once we have an overview of what is going on in the Test Lab.
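
A sketch of the histogram idea (hypothetical class, not the CL that eventually landed): bucket each operation's duration by type, then dump the aggregates once at the end of the update so slow operation classes stand out.

    // Hypothetical per-operation-type duration aggregator.
    #include <chrono>
    #include <iostream>
    #include <map>
    #include <string>

    class OpDurationHistogram {
     public:
      void Record(const std::string& op_type, std::chrono::milliseconds d) {
        Stats& s = stats_[op_type];
        s.count++;
        s.total += d;
        if (d > s.max) s.max = d;
      }
      // Called once when the update finishes.
      void Log() const {
        for (const auto& entry : stats_)
          std::cerr << entry.first << ": n=" << entry.second.count
                    << " total=" << entry.second.total.count() << "ms"
                    << " max=" << entry.second.max.count() << "ms\n";
      }
     private:
      struct Stats {
        int count = 0;
        std::chrono::milliseconds total{0};
        std::chrono::milliseconds max{0};
      };
      std::map<std::string, Stats> stats_;
    };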
Actually, if we had that kind of information, we could always send it to UMA to see how users' updates are working!
I suspect the metrics for "forced updates" (typical of Test Lab) and "background updates" (typical of normal use) will be very different: network BW available and O_DSYNC will have substantial impact.

For now, I think we should focus on Chrome OS Test lab and then in the future decide which output could be sent back as UMA stats. This will be a bit more work due to privacy review.
I have instrumented DeltaPerformer here, which I believe does most of the writing during an update: https://chromium-review.googlesource.com/c/570997
Project Member Comment 32 by bugdroid1@chromium.org, Jul 17
The following revision refers to this bug:
  https://chromium.googlesource.com/aosp/platform/system/update_engine/+/39f571439821118c9ee5b36131e851206b819074

commit 39f571439821118c9ee5b36131e851206b819074
Author: Lann Martin <lannm@chromium.org>
Date: Mon Jul 17 00:58:18 2017

Log DeltaPerformer operation histogram after update

Record and log InstallOperation run durations in DeltaPerformer. Since
these operations block the single-threaded process they may be
responsible for dbus service timeouts.

BUG=chromium:578270
TEST=deploy to device, restart update-engine, perform update (e.g.
`cros flash ssh://...`), check logs for histogram.

Change-Id: Idb782c86a627b1738a29901edd5d3b45790f4bb9
Reviewed-on: https://chromium-review.googlesource.com/570997
Commit-Ready: Lann Martin <lannm@chromium.org>
Tested-by: Lann Martin <lannm@chromium.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Grant Grundler <grundler@chromium.org>

[modify] https://crrev.com/39f571439821118c9ee5b36131e851206b819074/payload_consumer/delta_performer.cc
[modify] https://crrev.com/39f571439821118c9ee5b36131e851206b819074/payload_consumer/download_action.cc

Is this showing any useful results yet?
Someone will have to post logs after reproducing this for it to be useful. Is anyone here actively seeing this issue?
This is good to have in case things go bad, so we can look at the logs for potential problems. There is no active result from this.
But on second thought, we could look into the logs from different machines and see if there is any meaningful problem to be observed. I have postponed that until after some modifications to the UE and after the Android container updater is done! :)