dev mode transition time too long
Issue description

Chrome Version: ToT
OS: ChromeOS

What steps will reproduce the problem?
(1) Find a Chromebook with large internal storage, for example Eve i7 BEST (128G+)
(2) Switch from normal mode to developer mode

What is the expected result?
It finishes in a reasonable time that complies with the security requirement (5 min+).

What happens instead?
It takes 45+ minutes.

I think we had a similar problem on rotational media. In platform2/init/clobber-state:

  # On a non-fast wipe, rotational drives take too long. Override to run them
  # through "fast" mode, with a forced delay. Sensitive contents should already
  # be encrypted.

So maybe we need to do the same thing if the block device is too large (say > 32G). Or find some better way to wipe the blocks securely within the limited time.

Setting owner to Gwendal to see if there are better options.
+kees, who added the initial implementation of rotational disk wiping.
+mnissler, to make sure we're not exposing the new path to privacy/security danger.
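A minimal sketch of what such an override could look like in clobber-state, assuming a hypothetical ~32G threshold and reusing the existing fast-wipe path (the variable names below are illustrative, not the actual script's):

  # Hypothetical addition (names illustrative): if the stateful block device is
  # larger than ~32G, fall back to a fast wipe with a forced delay, mirroring
  # the existing rotational-drive override.
  STATE_DEV_SIZE=$(blockdev --getsize64 "${STATE_DEV}")
  LARGE_DEV_THRESHOLD=$((32 * 1024 * 1024 * 1024))
  if [ "${STATE_DEV_SIZE}" -gt "${LARGE_DEV_THRESHOLD}" ]; then
    FAST_WIPE="fast"
    FORCE_DELAY=1
  fi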
Jun 8 2018
I found there are programs like 'zerofree', which zero the free blocks on ext2/3/4. Is it possible to do something like a "zeroused" that always zeroes all the used blocks, and then waits until the 5-minute transition is done (if it hasn't finished yet)? This may help make the wiping experience more uniform.
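An illustrative sketch of that flow, assuming a hypothetical zero_used_blocks helper that walks the ext4 block bitmap (the inverse of what zerofree does); 300s is the 5-minute transition floor mentioned above:

  # Illustrative only; zero_used_blocks is a hypothetical helper.
  START=$(date +%s)
  zero_used_blocks "${STATE_DEV}"
  ELAPSED=$(( $(date +%s) - START ))
  if [ "${ELAPSED}" -lt 300 ]; then
    sleep $(( 300 - ELAPSED ))
  fi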
Jun 11 2018
Are there some blocks that would be better to wipe than others? For example, filesystem data, to make reconstructing files more difficult? Or would that implicitly be nuked by newfs? Should we be (or are we already) using something like TRIM to erase freed blocks, so that it is safe to zero only the currently-used ones?
Jun 11 2018
IIUC, part of the goal here is to make mode transitions take a little while, so this can't be done quickly by opportunistic attackers. Regarding data security, if everything else works as intended, wiping the disk shouldn't be strictly necessary, but it is still a nice-to-have as a safety net. 45+ minutes is excessive though. I'm surprised there's no faster way to wipe a large SSD. Are we positive that's the best wiping performance we can get?
Jun 11 2018
> Are there some blocks that would be better to wipe than others?
If we read the ext4 bitmap (like the zerofree idea I mentioned above), then yes, it may be possible to wipe some blocks in preference to others. I think there was some early discussion of not zeroing the whole disk, or of using TRIM, and the concern at the time was that we can't be sure the SSD firmware will really discard / clean the data - for example, it may simply change the internal mapping or put TRIM'ed blocks onto a free list, so a physical dump may still recover the written data. In theory, writing zeros to all blocks "should" overwrite all blocks, although it is still possible that SSD firmware may try to link zero blocks together... But since all confidential data should be stored in cryptohome today, and since we need to deal with 128G+ disks, I think a special wipe is really needed...
Jun 11 2018
Wow, 45 minutes for an SSD?! That's impressive. So, I think the same rationale exists for a large SSD as it does for a slow HDD. Taking the same path as the HDD should be okay, but it'd be nice to include SSD-specific best-effort steps like TRIM. Doing "mkfs.ext4 -E discard" on an SSD will TRIM the entire drive. (While this is not remotely safe forensically, it's a good best effort, and combined with the sensitive data's encryption key getting wiped in the TPM, I think this should be fine.)
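For reference, a hedged example of that, with an illustrative device path:

  # Creating the filesystem with the discard option TRIMs the whole partition
  # first. Device path is illustrative.
  mkfs.ext4 -E discard /dev/nvme0n1p1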
Jun 11 2018
Will it help if we run something like blkdiscard? I wonder if that's good enough to "wipe" the data.
Jun 11 2018
Yeah, that would be fine; it should be the same as "mkfs.ext4 -E discard" but without creating a new filesystem. :) I would follow the same HDD path, add blkdiscard at the end, and make sure the timer waits as normal.
Jun 11 2018
Or shred + blkdiscard -s. That really sounds more meaningful and efficient.
Jun 11 2018
-s may not be supported, so maybe: shred ...; blkdiscard -s $dev || blkdiscard $dev
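Expanded slightly, that fallback might look like the following sketch (KEY_FILES and STATE_DEV are placeholders, not names from clobber-state):

  # Illustrative sketch: shred the sensitive key files, then try a secure
  # discard of the whole device and fall back to a plain discard if the
  # hardware doesn't support BLKSECDISCARD.
  for f in ${KEY_FILES}; do
    shred -f -u -z "$f"
  done
  blkdiscard -s "${STATE_DEV}" || blkdiscard "${STATE_DEV}"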
Jun 12 2018
I just found that in clobber-state there has actually been a change since 2017-06: https://chromium-review.googlesource.com/535846 According to this, wiping on (at least) eMMC will do a secure_erase of the keys, then jump into the fast wipe - similar to rotational disks. So maybe we're seeing the problem because those disks were SATA / NVMe?
Jun 12 2018
It seems weird to me that wipe_rotational_drive deletes a different set of files compared to wipe_keysets. Maybe we should unify them - i.e., always remove the same key files on rotational / eMMC / SATA / NVMe disks, trying secure_erase_file first, then shred, and eventually blkdiscard if possible.
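A rough sketch of that unified order, assuming secure_erase_file can be invoked per file the way clobber-state does on eMMC today (KEY_FILES and STATE_DEV are placeholders):

  # Illustrative sketch of a unified keyset wipe, independent of media type.
  for f in ${KEY_FILES}; do
    # Prefer hardware-backed secure erase; fall back to shred if it fails.
    secure_erase_file "$f" || shred -f -u -z "$f"
  done
  # Best-effort: ask the device to discard everything it can.
  blkdiscard "${STATE_DEV}" || true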
Jun 12 2018
A proof-of-concept is here: https://chromium-review.googlesource.com/c/chromiumos/platform2/+/1096714
Jun 12 2018
the reason CL:535846 operates quickly on eMMC is that it has support for secure erase. that's what platform2/secure_erase_file/ is all about. if the hardware on eve had something similar, we could update secure_erase_file to support it, and the rest of the code would follow the "fast" code paths for free. i don't think this transition being slow is a new thing; we're just seeing it again on eve. this is why we have things like wipe_block_dev, which only tries to wipe existing used FS blocks. is the issue that you had a stateful partition that was filled up? i wonder what overlap we have between `blkdiscard -s` and our secure_erase_file project.
Jun 12 2018
Re #14:
1. That Eve is using NVMe. Does secure_erase work on NVMe?
2. blkdiscard works on the whole block device and is very fast - for example, on a 256G eMMC, I can "discard -s" a 224G stateful partition in 5s, and all blocks read back as 0 afterwards. blkdiscard can also tell the underlying SSD/eMMC firmware to free all blocks - whether allocated or free - which helps performance. I think we should always call blkdiscard if possible?
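For example, one hedged way to confirm that behaviour on a test device (device path is a placeholder):

  # Discard the partition, then confirm a sample of it reads back as zeros.
  blkdiscard -s /dev/mmcblk0p1
  cmp -n $((1 << 30)) /dev/mmcblk0p1 /dev/zero && echo "first 1 GiB reads back as zeros"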
Jun 12 2018
FYI, the NVMe on Eve supports BLKDISCARD but not BLKSECDISCARD, and that's probably why secure_erase_file failed. But 'blkdiscard' still works, so we may still benefit from that per keescook's comment in c#10.
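If it helps to double-check what the kernel reports, a hedged probe of the standard block-queue sysfs attributes (device name is a placeholder):

  # Non-zero discard_max_bytes means BLKDISCARD should be usable on the device.
  cat /sys/block/nvme0n1/queue/discard_max_bytes
  cat /sys/block/nvme0n1/queue/discard_granularity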
Jun 12 2018
As was said earlier, BLKSECDISCARD is a thing of the past (see https://chromium-review.googlesource.com/c/chromiumos/platform2/+/498647/13/secure_erase_file/secure_erase_file.cc#182). Writing 0's to a 512GB drive takes time; Justin already noticed that here: https://bugs.chromium.org/p/chromium/issues/detail?id=724169
Re #6: TRIM will just erase the translation table, not the underlying data. Most SATA and NVMe drives do support fast erase, but you need to erase the full drive. See src/platform/factory_installer/factory_verify.sh.
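For reference, hedged examples of those whole-drive erase paths (device names and the temporary password are illustrative, and both commands destroy the entire drive, not just stateful):

  # SATA: ATA Security Erase via hdparm (requires setting a temporary password).
  hdparm --user-master u --security-set-pass tmppass /dev/sda
  hdparm --user-master u --security-erase tmppass /dev/sda
  # NVMe: format with a user-data erase via nvme-cli.
  nvme format /dev/nvme0n1 --ses=1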
Jun 12 2018
i don't think trim/discarding blocks - whereby we basically delink the blocks but leave the data intact - is acceptable. shred has a number of underlying assumptions that i don't think we can rely upon: that updating the file in place will rewrite the filesystem blocks in place (vs. just writing to a new place and marking the old ones as free), and that the storage will rewrite in place (possibly doing a read/modify/write cycle on the block/page) rather than writing to a new place while leaving the old block to be garbage collected. i think that's why we're left with this large but slow hammer of trying to force everything to be nuked.
Jun 13 2018
Re c#18, I don't believe shred works on SSDs, for all of those reasons.
Aug 3
This bug has an owner, thus it's been triaged. Changing status to "assigned".