Teemo/Reef: Gedit app hangs while resizing window
Issue description

Chrome Version: 69.0.3486.0
Chrome OS Version: 10866.1.0
Chrome OS Platform: Teemo
Network info: Wifi

Steps To Reproduce:
(1) Install crostini.
(2) Observe the Terminal icon in the app menu.
(3) Install gedit using sudo apt-get install gedit.
(4) After successful installation, open the gedit app.
(5) Resize or maximize the app window.

Expected Result:
-- No UI issues and no impact on the Linux terminal.

Actual Result:
-- The app hangs while resizing.
-- After the hang, the Terminal can no longer be opened.

How frequently does this problem reproduce? (Always, sometimes, hard to reproduce?)
Almost every time. After a few minutes the app closed and the terminal worked again.

Feedback report: 85547720140
Jul 26
What is the latest status of this bug? It is listed as an RBB and needs to be prioritized. Thanks.
Jul 30
I tried to reproduce this on a kench (fizz, the same family as teemo) with 69.0.3493.0 on it. I didn't observe any of the behavior described in this bug, so perhaps it has been fixed. Closing for now until a working reproduction case is given.
Jul 31
Observed the issue on 10895.10.0, 69.0.3497.21 on teemo.

Repro steps:
1. Install and open gedit.
2. Resize or drag the window continuously for some time.
3. Observe the issue.

Feedback report: 85578169771
Observation video: https://storage.cloud.google.com/chromiumos-test-logs/bugfiles/cr/863490/Jul%2031%202018%202_47%20PM.webm?_ga=2.44724515.-66391737.1524518032
Jul 31
Could you post a video of the issue occurring?
Jul 31
This issue was also observed on a basking device (reef). Feedback report: 85578171593
Aug 3
Dylan and I noticed some weird stuff in the logs from your feedback report. The kernel seems to have been hung waiting for a btrfs transaction.

2018-07-31T14:28:08.290712-07:00 INFO vm_concierge[3771]: Received CreateDiskImage request
2018-07-31T14:28:08.295394-07:00 INFO vm_concierge[3771]: Received StartVm request
2018-07-31T14:28:08.295445-07:00 INFO vm_concierge[3771]: VM with requested name is already running
2018-07-31T14:28:08.297214-07:00 INFO vm_concierge[3771]: Received StartContainer request
2018-07-31T14:28:08.297268-07:00 INFO vm_concierge[3771]: Checking if container penguin is running
2018-07-31T14:28:08.299802-07:00 INFO vm_cicerone[3760]: Received IsContainerRunning request
2018-07-31T14:28:08.305245-07:00 INFO vm_concierge[3771]: Container penguin is already running.
2018-07-31T14:28:08.307129-07:00 INFO vm_cicerone[3760]: Received LaunchContainerApplication request
2018-07-31T14:28:31.916357-07:00 DEBUG kernel: [10033.207041] kvm: zapping shadow pages for mmio generation wraparound
2018-07-31T14:29:35.810942-07:00 INFO minijail0[6280]: libminijail[6280]: child process 6281 exited with status 2
2018-07-31T14:32:10.084916-07:00 INFO VM(4)[3780]: [ 9888.736733] INFO: task btrfs-transacti:119 blocked for more than 120 seconds.
2018-07-31T14:32:10.087541-07:00 INFO VM(4)[3780]: [ 9888.753512] Not tainted 4.14.56-05956-g5b9b9e14a8dc #1
2018-07-31T14:32:10.090139-07:00 INFO VM(4)[3780]: [ 9888.755509] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2018-07-31T14:32:10.092107-07:00 INFO VM(4)[3780]: [ 9888.758182] btrfs-transacti D 0 119 2 0x80000000
2018-07-31T14:32:10.093093-07:00 INFO VM(4)[3780]: [ 9888.760075] Call Trace:
2018-07-31T14:32:10.094674-07:00 INFO VM(4)[3780]: [ 9888.761264] ? __schedule+0x375/0x4ee
2018-07-31T14:32:10.095885-07:00 INFO VM(4)[3780]: [ 9888.762652] schedule+0x7d/0x8d
2018-07-31T14:32:10.097176-07:00 INFO VM(4)[3780]: [ 9888.763886] io_schedule+0xd/0x2e
2018-07-31T14:32:10.098922-07:00 INFO VM(4)[3780]: [ 9888.765238] wait_on_page_bit_common+0x109/0x17d
2018-07-31T14:32:10.100748-07:00 INFO VM(4)[3780]: [ 9888.766931] ? __add_to_page_cache_locked+0x10b/0x10b
2018-07-31T14:32:10.102378-07:00 INFO VM(4)[3780]: [ 9888.768696] __filemap_fdatawait_range+0xb7/0xfb
2018-07-31T14:32:10.103901-07:00 INFO VM(4)[3780]: [ 9888.770298] filemap_fdatawait_range+0x9/0x12
2018-07-31T14:32:10.105543-07:00 INFO VM(4)[3780]: [ 9888.771883] btrfs_wait_ordered_range+0x5d/0xf3
2018-07-31T14:32:10.107154-07:00 INFO VM(4)[3780]: [ 9888.773551] __btrfs_wait_cache_io+0x40/0x170
2018-07-31T14:32:10.109185-07:00 INFO VM(4)[3780]: [ 9888.775178] btrfs_start_dirty_block_groups+0x126/0x3c7
2018-07-31T14:32:10.110930-07:00 INFO VM(4)[3780]: [ 9888.777252] btrfs_commit_transaction+0xd8/0x7d9
2018-07-31T14:32:10.112495-07:00 INFO VM(4)[3780]: [ 9888.778913] ? start_transaction+0x2bc/0x388
2018-07-31T14:32:10.113994-07:00 INFO VM(4)[3780]: [ 9888.780424] transaction_kthread+0x10a/0x1c2
2018-07-31T14:32:10.115708-07:00 INFO VM(4)[3780]: [ 9888.781927] ? btrfs_cleanup_transaction+0x418/0x418
2018-07-31T14:32:10.116895-07:00 INFO VM(4)[3780]: [ 9888.783638] kthread+0x13d/0x145
2018-07-31T14:32:10.118408-07:00 INFO VM(4)[3780]: [ 9888.784829] ? __kthread_bind_mask+0x57/0x57
2018-07-31T14:32:10.119697-07:00 INFO VM(4)[3780]: [ 9888.786334] ret_from_fork+0x35/0x40
2018-07-31T14:32:10.121718-07:00 INFO VM(4)[3780]: [ 9888.787645] task PC stack pid father
2018-07-31T14:32:10.123652-07:00 INFO VM(4)[3780]: [ 9888.789694] btrfs-transacti D 0 119 2 0x80000000
2018-07-31T14:32:10.124616-07:00 INFO VM(4)[3780]: [ 9888.791590] Call Trace:
2018-07-31T14:32:10.125888-07:00 INFO VM(4)[3780]: [ 9888.792507] ? __schedule+0x375/0x4ee
2018-07-31T14:32:10.127018-07:00 INFO VM(4)[3780]: [ 9888.793788] schedule+0x7d/0x8d
2018-07-31T14:32:10.128193-07:00 INFO VM(4)[3780]: [ 9888.794914] io_schedule+0xd/0x2e
2018-07-31T14:32:10.129774-07:00 INFO VM(4)[3780]: [ 9888.796103] wait_on_page_bit_common+0x109/0x17d
2018-07-31T14:32:10.131493-07:00 INFO VM(4)[3780]: [ 9888.797689] ? __add_to_page_cache_locked+0x10b/0x10b
2018-07-31T14:32:10.133231-07:00 INFO VM(4)[3780]: [ 9888.799413] __filemap_fdatawait_range+0xb7/0xfb
2018-07-31T14:32:10.134806-07:00 INFO VM(4)[3780]: [ 9888.801212] filemap_fdatawait_range+0x9/0x12
2018-07-31T14:32:10.136382-07:00 INFO VM(4)[3780]: [ 9888.802728] btrfs_wait_ordered_range+0x5d/0xf3
2018-07-31T14:32:10.137891-07:00 INFO VM(4)[3780]: [ 9888.804299] __btrfs_wait_cache_io+0x40/0x170
2018-07-31T14:32:10.139666-07:00 INFO VM(4)[3780]: [ 9888.805803] btrfs_start_dirty_block_groups+0x126/0x3c7
2018-07-31T14:32:10.141257-07:00 INFO VM(4)[3780]: [ 9888.807588] btrfs_commit_transaction+0xd8/0x7d9
2018-07-31T14:32:10.142773-07:00 INFO VM(4)[3780]: [ 9888.809206] ? start_transaction+0x2bc/0x388
2018-07-31T14:32:10.144254-07:00 INFO VM(4)[3780]: [ 9888.810691] transaction_kthread+0x10a/0x1c2
2018-07-31T14:32:10.145999-07:00 INFO VM(4)[3780]: [ 9888.812176] ? btrfs_cleanup_transaction+0x418/0x418
2018-07-31T14:32:10.147189-07:00 INFO VM(4)[3780]: [ 9888.813929] kthread+0x13d/0x145
2018-07-31T14:32:10.148659-07:00 INFO VM(4)[3780]: [ 9888.815092] ? __kthread_bind_mask+0x57/0x57
2018-07-31T14:32:10.149924-07:00 INFO VM(4)[3780]: [ 9888.816567] ret_from_fork+0x35/0x40
2018-07-31T14:32:10.151835-07:00 INFO VM(4)[3780]: [ 9888.817853] ld-linux-x86-64 D 0 10406 785 0x00000100
2018-07-31T14:32:10.152799-07:00 INFO VM(4)[3780]: [ 9888.819770] Call Trace:
2018-07-31T14:32:10.154074-07:00 INFO VM(4)[3780]: [ 9888.820687] ? __schedule+0x375/0x4ee
2018-07-31T14:32:10.155779-07:00 INFO VM(4)[3780]: [ 9888.821973] ? console_conditional_schedule+0x1d/0x1d
2018-07-31T14:32:10.156930-07:00 INFO VM(4)[3780]: [ 9888.823695] schedule+0x7d/0x8d
2018-07-31T14:32:10.158258-07:00 INFO VM(4)[3780]: [ 9888.824820] schedule_timeout+0x2f/0xd2
2018-07-31T14:32:10.159781-07:00 INFO VM(4)[3780]: [ 9888.826256] ? virtqueue_add_sgs+0x71/0x80
2018-07-31T14:32:10.161309-07:00 INFO VM(4)[3780]: [ 9888.827705] ? do_wait_for_common+0x125/0x165
2018-07-31T14:32:10.162763-07:00 INFO VM(4)[3780]: [ 9888.829223] do_wait_for_common+0x125/0x165
2018-07-31T14:32:10.164064-07:00 INFO VM(4)[3780]: [ 9888.830669] ? do_task_dead+0x6b/0x6b
2018-07-31T14:32:10.165383-07:00 INFO VM(4)[3780]: [ 9888.831966] wait_for_common+0x44/0x5c
2018-07-31T14:32:10.166517-07:00 INFO VM(4)[3780]: [ 9888.833284] do_new+0x1d5/0x2a3
2018-07-31T14:32:10.167965-07:00 INFO VM(4)[3780]: [ 9888.834470] virtwl_ioctl_ptr+0x12b/0x1d5
2018-07-31T14:32:10.171932-07:00 INFO VM(4)[3780]: [ 9888.835927] vfs_ioctl+0x18/0x25
2018-07-31T14:32:10.173076-07:00 INFO VM(4)[3780]: [ 9888.839345] do_vfs_ioctl+0x4ad/0x4bc
2018-07-31T14:32:10.174252-07:00 INFO VM(4)[3780]: [ 9888.840994] SyS_ioctl+0x4d/0x6f
2018-07-31T14:32:10.175603-07:00 INFO VM(4)[3780]: [ 9888.842233] do_syscall_64+0x6a/0x7a
2018-07-31T14:32:10.177398-07:00 INFO VM(4)[3780]: [ 9888.843588] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
2018-07-31T14:32:10.178712-07:00 INFO VM(4)[3780]: [ 9888.845352] RIP: 0033:0x78d80565e967
2018-07-31T14:32:10.181160-07:00 INFO VM(4)[3780]: [ 9888.846610] RSP: 002b:00007ffe9baf9f88 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
2018-07-31T14:32:10.183555-07:00 INFO VM(4)[3780]: [ 9888.849133] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000078d80565e967
2018-07-31T14:32:10.186316-07:00 INFO VM(4)[3780]: [ 9888.851504] RDX: 00007ffe9baf9ff0 RSI: 00000000c0307700 RDI: 0000000000000004
2018-07-31T14:32:10.188792-07:00 INFO VM(4)[3780]: [ 9888.854364] RBP: 00007ffe9bafa050 R08: 000000000ccccccc R09: 000078d80672dc10
2018-07-31T14:32:10.191154-07:00 INFO VM(4)[3780]: [ 9888.856740] R10: 0000000000000060 R11: 0000000000000246 R12: 0000000000000000
2018-07-31T14:32:10.193513-07:00 INFO VM(4)[3780]: [ 9888.859102] R13: 000078d806742cf0 R14: 000078d806728770 R15: 0000000000000211
2018-07-31T14:32:10.194799-07:00 INFO VM(4)[3780]: [ 9888.861466] NMI backtrace for cpu 1
2018-07-31T14:32:10.197423-07:00 INFO VM(4)[3780]: [ 9888.862762] CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.14.56-05956-g5b9b9e14a8dc #1
2018-07-31T14:32:10.198412-07:00 INFO VM(4)[3780]: [ 9888.865383] Call Trace:
2018-07-31T14:32:10.199687-07:00 INFO VM(4)[3780]: [ 9888.866406] dump_stack+0x5a/0x6f
2018-07-31T14:32:10.201079-07:00 INFO VM(4)[3780]: [ 9888.867620] nmi_cpu_backtrace+0x96/0xa9
2018-07-31T14:32:10.202945-07:00 INFO VM(4)[3780]: [ 9888.869001] ? irq_force_complete_move+0xcd/0xcd
2018-07-31T14:32:10.204717-07:00 INFO VM(4)[3780]: [ 9888.870925] nmi_trigger_cpumask_backtrace+0x50/0xc8
2018-07-31T14:32:10.205986-07:00 INFO VM(4)[3780]: [ 9888.872704] watchdog+0x2bd/0x2e1
2018-07-31T14:32:10.207339-07:00 INFO VM(4)[3780]: [ 9888.873920] ? hung_task_panic+0xd/0xd
2018-07-31T14:32:10.208609-07:00 INFO VM(4)[3780]: [ 9888.875236] kthread+0x13d/0x145
2018-07-31T14:32:10.210059-07:00 INFO VM(4)[3780]: [ 9888.876501] ? __kthread_bind_mask+0x57/0x57
2018-07-31T14:32:10.211310-07:00 INFO VM(4)[3780]: [ 9888.877949] ret_from_fork+0x35/0x40
2018-07-31T14:32:10.212834-07:00 INFO VM(4)[3780]: [ 9888.879242] Sending NMI from CPU 1 to CPUs 0:
2018-07-31T14:32:10.215255-07:00 INFO VM(4)[3780]: [ 9888.880858] NMI backtrace for cpu 0 skipped: idling at pc 0xffffffff81629624
2018-07-31T14:32:10.217456-07:00 INFO VM(4)[3780]: [ 9888.883355] Kernel panic - not syncing: hung_task: blocked tasks
2018-07-31T14:32:10.220101-07:00 INFO VM(4)[3780]: [ 9888.885408] CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.14.56-05956-g5b9b9e14a8dc #1
2018-07-31T14:32:10.221077-07:00 INFO VM(4)[3780]: [ 9888.888066] Call Trace:
2018-07-31T14:32:10.222233-07:00 INFO VM(4)[3780]: [ 9888.888954] dump_stack+0x5a/0x6f
2018-07-31T14:32:10.223358-07:00 INFO VM(4)[3780]: [ 9888.890169] panic+0xd1/0x223
2018-07-31T14:32:10.224564-07:00 INFO VM(4)[3780]: [ 9888.891281] watchdog+0x2cb/0x2e1
2018-07-31T14:32:10.225861-07:00 INFO VM(4)[3780]: [ 9888.892456] ? hung_task_panic+0xd/0xd
2018-07-31T14:32:10.226995-07:00 INFO VM(4)[3780]: [ 9888.893747] kthread+0x13d/0x145
2018-07-31T14:32:10.228435-07:00 INFO VM(4)[3780]: [ 9888.894870] ? __kthread_bind_mask+0x57/0x57
2018-07-31T14:32:10.229677-07:00 INFO VM(4)[3780]: [ 9888.896325] ret_from_fork+0x35/0x40
2018-07-31T14:32:10.230986-07:00 INFO VM(4)[3780]: [ 9888.897667] Kernel Offset: disabled
2018-07-31T14:32:10.232221-07:00 INFO VM(4)[3780]: [ 9888.898883] Rebooting in 1 seconds..
2018-07-31T14:32:11.246405-07:00 INFO localhos[3779]: crosvm[4]: [src/linux.rs:822] vcpu requested shutdown
2018-07-31T14:32:11.248317-07:00 ERR localhos[3783]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.248844-07:00 ERR localhos[3786]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.249173-07:00 ERR localhos[3787]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.249496-07:00 ERR localhos[3794]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.267259-07:00 ERR localhos[3791]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.267604-07:00 ERR localhos[3789]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.268935-07:00 ERR localhos[3784]: crosvm[1]: [devices/src/proxy.rs:102] error: child device process failed send: Connection refused (os error 111)
2018-07-31T14:32:11.490309-07:00 INFO localhos[3779]: crosvm[4]: [src/main.rs:460] crosvm has exited normally
2018-07-31T14:32:11.591832-07:00 WARNING localhos[3779]: crosvm[4]: [src/main.rs:630] not all child processes have exited; sending SIGKILL
2018-07-31T14:32:11.610267-07:00 INFO vm_concierge[3771]: Process 4 killed by signal 9
2018-07-31T14:32:11.611822-07:00 ERR vm_concierge[3771]: Unable to send signal to 5: No such process
2018-07-31T14:32:12.030197-07:00 INFO vm_concierge[3771]: Process 7 exited with status 0
Aug 8
dgreid and I debugged this one and found the cause. The guest physical addresses used for allocated virtio-wayland buffers start at the end of guest memory and increase monotonically. Normally this works fine, but in this instance those addresses ran into the 32-bit gap region, the last 768MB of the first 4GB of address space. Eventually the APIC emulated by the host kernel gets papered over by memory slots and no longer receives VM exits when the guest kernel tries to program it, leaving the system hung. This can only happen on VMs with less than 4GB of RAM allocated, which means all systems with less than 5.33GB of total memory, given concierge's policy of allocating 75% of total RAM to the VM. The solution is to start address space allocation for virtio-wayland buffers after 4GB, no matter how little RAM was allocated to the VM.
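
A minimal sketch of the allocation policy described above, assuming a simple bump allocator; the names (BufferAllocator, GAP_START, GAP_END) are illustrative, not crosvm's actual API. Buffer addresses still grow upward, but the starting point is clamped past the 32-bit gap so an allocation can never shadow the emulated APIC:

    // Illustrative constants: the 32-bit gap is the last 768MB below 4GB,
    // where the APIC, PCI holes, and other platform devices live.
    const MIB: u64 = 1 << 20;
    const GIB: u64 = 1 << 30;
    const GAP_START: u64 = 4 * GIB - 768 * MIB; // 3.25GB
    const GAP_END: u64 = 4 * GIB;

    /// Hypothetical bump allocator for virtio-wayland buffer addresses.
    struct BufferAllocator {
        next: u64, // next free guest physical address
    }

    impl BufferAllocator {
        /// Start at the end of guest RAM, but never below the end of the
        /// 32-bit gap: on a VM with less than 4GB of RAM, starting at the
        /// end of RAM would eventually walk into the gap.
        fn new(guest_mem_end: u64) -> Self {
            BufferAllocator { next: guest_mem_end.max(GAP_END) }
        }

        fn allocate(&mut self, size: u64) -> u64 {
            let addr = self.next;
            self.next += size;
            // The clamp in new() guarantees no allocation overlaps the gap.
            debug_assert!(addr >= GAP_END || addr + size <= GAP_START);
            addr
        }
    }

    fn main() {
        // Example: a device with 4GB of RAM gets a 3GB VM under concierge's
        // 75% policy, so unclamped buffer addresses would start at 3GB and
        // reach the gap at 3.25GB; the clamp starts them at 4GB instead.
        let mut alloc = BufferAllocator::new(3 * GIB);
        assert_eq!(alloc.allocate(64 * MIB), 4 * GIB);
        assert_eq!(alloc.allocate(64 * MIB), 4 * GIB + 64 * MIB);
    }

The committed fix below takes the same approach in crosvm's x86_64 address allocation: device allocations simply skip the gap.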
Aug 9
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/crosvm/+/2dd78797d76245a45cc34a841150a170a0a22557

commit 2dd78797d76245a45cc34a841150a170a0a22557
Author: Dylan Reid <dgreid@chromium.org>
Date: Thu Aug 09 09:21:31 2018

x86_64: Don't allocate addrs in the 32 bit gap

Device allocations have to skip the gap so they don't collide with
things like the APIC.

BUG=863490
TEST=Resize a gedit window on APL for a minute and make sure there
isn't a crash.

Change-Id: Ia8185bcdbb6c18e13d02be317ae4d48c73288661
Signed-off-by: Dylan Reid <dgreid@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/1168400
Reviewed-by: Sonny Rao <sonnyrao@chromium.org>
Reviewed-by: Zach Reizner <zachr@chromium.org>

[modify] https://crrev.com/2dd78797d76245a45cc34a841150a170a0a22557/x86_64/src/lib.rs
Aug 9
This fixes a hang on all devices with less than 4 GB of memory.
Aug 9
This bug requires manual review: M69 has already been promoted to the beta branch.
Please contact the milestone owner if you have questions.
Owners: amineer@ (Android), kariahda@ (iOS), cindyb@ (ChromeOS), govind@ (Desktop)

For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Aug 9
Merge approved, M69.
Aug 9
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/crosvm/+/26e3a480d051cad28cfeab2d37f30c84b7e83bfd

commit 26e3a480d051cad28cfeab2d37f30c84b7e83bfd
Author: Dylan Reid <dgreid@chromium.org>
Date: Thu Aug 09 23:54:02 2018

x86_64: Don't allocate addrs in the 32 bit gap

Device allocations have to skip the gap so they don't collide with
things like the APIC.

BUG=863490
TEST=Resize a gedit window on APL for a minute and make sure there
isn't a crash.

Change-Id: Ia8185bcdbb6c18e13d02be317ae4d48c73288661
Signed-off-by: Dylan Reid <dgreid@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/1168400
Reviewed-by: Sonny Rao <sonnyrao@chromium.org>
Reviewed-by: Zach Reizner <zachr@chromium.org>
(cherry picked from commit 2dd78797d76245a45cc34a841150a170a0a22557)
Reviewed-on: https://chromium-review.googlesource.com/1169703

[modify] https://crrev.com/26e3a480d051cad28cfeab2d37f30c84b7e83bfd/x86_64/src/lib.rs
Oct 8
Verified on 69.0.3497.95 / 10895.56.0 and on 71.0.3567.0 / 11137.0.0 on teemo.
Comment 1 by dgreid@chromium.org, Jul 13
Status: Assigned (was: Untriaged)