Issue 749736

Issue metadata

Status: Fixed
Owner:
Closed: Aug 2017
Cc:
EstimatedDays: ----
NextAction: ----
OS: Fuchsia
Pri: 3
Type: Bug

Blocking:
issue 706592



Port PartitionAlloc to Fuchsia

Project Member Reported by scottmg@chromium.org, Jul 27 2017

Issue description

PartitionAllocTest.* is currently disabled in base_unittests on Fuchsia, and of course PartitionAlloc is also needed for Blink, etc.

There's one particular mmap/mprotect/etc. feature that's not implemented on Fuchsia: mprotect() doesn't support PROT_NONE. The upstream bugs for that are MG-546 and MG-426, but there's a (not-unreasonable) disinclination to implement it, as it seems it's not strictly necessary.
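For context, the pattern that doesn't translate is the usual POSIX one, something like this (a minimal sketch; the constants are placeholders standing in for PartitionAlloc's page sizes):

#include <stddef.h>
#include <sys/mman.h>

// Sketch only: sizes are placeholders for PartitionAlloc's superpage /
// system-page constants.
constexpr size_t kSuperPageSize = 2 * 1024 * 1024;
constexpr size_t kSystemPageSize = 4096;

void* ReserveSuperPageWithGuard() {
  void* superpage = mmap(nullptr, kSuperPageSize, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  if (superpage == MAP_FAILED)
    return nullptr;
  // Turn the first system page into a guard page. This is the PROT_NONE
  // transition covered by MG-546/MG-426; Fuchsia doesn't support it today.
  mprotect(superpage, kSystemPageSize, PROT_NONE);
  return superpage;
}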

I suggested to Justin that perhaps we could just use PROT_READ, on the basis that disallowing writes would be sorta vaguely guard-page-y, but apparently that leaks information about ASLR (among other things), so it's a no-go. Dart apparently felt it was sufficient for the time being: https://chromium.googlesource.com/external/github.com/dart-lang/sdk/+/master/runtime/vm/virtual_memory_fuchsia.cc#251

There are some docs on Fuchsia's virtual memory here: https://fuchsia.googlesource.com/magenta/+/HEAD/docs/objects/vm_address_region.md

I think the general idea for a Fuchsia implementation would be to make a VMAR and a VMO for each PartitionAlloc superpage allocation, explicitly controlling where it lives, and then use vmar_unmap and vmar_map to guard/unguard particular bits of address space. The re-map works because the address is sufficient to recover which VMO/base VMAR should be used for that region.

I'm still unclear on how the sub-VMARs that would be created to modify the mapped-ness of subparts of those allocations would work (lifetime, overlapping, children, etc.), though.
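To make that concrete, here's a rough sketch of the guard/unguard round trip against the Magenta C syscalls (the exact signatures and flag names here are an approximation, not checked against the current headers):

#include <magenta/process.h>   // mx_vmar_root_self()
#include <magenta/syscalls.h>
#include <stddef.h>
#include <stdint.h>

constexpr size_t kSuperPageSize = 2 * 1024 * 1024;
constexpr size_t kSystemPageSize = 4096;

void SuperPageGuardRoundTrip() {
  // One child VMAR + one VMO per superpage, so we control where it lives.
  mx_handle_t vmar;
  uintptr_t vmar_base;
  mx_vmar_allocate(mx_vmar_root_self(), 0, kSuperPageSize,
                   MX_VM_FLAG_CAN_MAP_READ | MX_VM_FLAG_CAN_MAP_WRITE |
                       MX_VM_FLAG_CAN_MAP_SPECIFIC,
                   &vmar, &vmar_base);

  mx_handle_t vmo;
  mx_vmo_create(kSuperPageSize, 0, &vmo);

  // Map the whole superpage read/write at offset 0 of the child VMAR.
  uintptr_t base;
  mx_vmar_map(vmar, 0, vmo, 0, kSuperPageSize,
              MX_VM_FLAG_PERM_READ | MX_VM_FLAG_PERM_WRITE |
                  MX_VM_FLAG_SPECIFIC,
              &base);

  // "Guard" one system page: unmap it, so accesses fault.
  uintptr_t guard = base + kSystemPageSize;
  mx_vmar_unmap(vmar, guard, kSystemPageSize);

  // "Unguard": map the same slice of the same VMO back at the same address.
  // This only works because we kept (vmo, vmar, base) around; the address
  // alone tells us which record to use.
  uintptr_t remapped;
  mx_vmar_map(vmar, guard - vmar_base, vmo, guard - base, kSystemPageSize,
              MX_VM_FLAG_PERM_READ | MX_VM_FLAG_PERM_WRITE |
                  MX_VM_FLAG_SPECIFIC,
              &remapped);
}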

[+too many people may or may not care, feel free to remove yourself if you don't.]
 
(I tried decommitting the vmo with mx_vmo_op_range() hoping it would fault on access, but I guess since the vmar knows which vmo is mapped into it, it just recommits the vmo again, so that doesn't help.)


Here's my current plan for the 4 main functions of base/allocator/partition_allocator/page_allocator.h. It seems a bit icky implementation-wise, so please let me know if I'm horribly confused on the use of these APIs.


AllocPages(address, length, align, page_accessibility):
- vmo_create() and vmar_map() both of size length, mapping SPECIFIC at hinted |address|
- save the vmo into a side-data structure keyed by address and length
- if page_accessibility == PageInaccessible, SetSystemPagesInaccessible() on the whole range

FreePages(address, length):
- unmap the whole range (we know it corresponds to a previous AllocPages call)
- close the vmo handle we know is unused as a result
- remove vmo mapping entry from side-data

SetSystemPagesInaccessible(address, length):
- vmar_unmap() the whole range

SetSystemPagesAccessible(address, length):
- find the vmo where this range would fall into in the side-data
- vmar_map() that part of the vmo back into the vmar, using SPECIFIC to place it at the subchunk that was presumably previously unmapped by SetSystemPagesInaccessible()


The side-data being maintained is a list of <vmo handle, base address, length>, which is necessary to get from an address being marked accessible, decommitted, or committed back to the vmo object. This is obviously a bit awkward compared with POSIX or Windows, which let you use the address directly.
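In sketch form, the side-data might look something like this (names are made up for the sketch, not actual page_allocator code):

#include <cstddef>
#include <cstdint>
#include <map>
#include <magenta/syscalls.h>

struct SuperPageRecord {
  mx_handle_t vmar;  // Child VMAR covering [base, base + length).
  mx_handle_t vmo;   // Backing VMO of the same length.
  uintptr_t base;
  size_t length;
};

// Keyed by base address; lookup finds the record whose range contains |addr|.
std::map<uintptr_t, SuperPageRecord> g_records;

SuperPageRecord* FindRecord(uintptr_t addr) {
  auto it = g_records.upper_bound(addr);
  if (it == g_records.begin())
    return nullptr;
  --it;
  SuperPageRecord& r = it->second;
  return (addr >= r.base && addr < r.base + r.length) ? &r : nullptr;
}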


For the others:
- DecommitSystemPages() and RecommitSystemPages(): look up the vmo in the side-data and call vmo_op_range() on it
- DiscardSystemPages(): ignore
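Continuing the side-data sketch above, Decommit/Recommit would be roughly (the op constants and signature are again an approximation):

// Builds on SuperPageRecord/FindRecord from the side-data sketch above.
void DecommitSystemPages(void* address, size_t length) {
  uintptr_t addr = reinterpret_cast<uintptr_t>(address);
  SuperPageRecord* r = FindRecord(addr);
  // Translate the address range into an offset within the backing VMO.
  uint64_t vmo_offset = addr - r->base;
  mx_vmo_op_range(r->vmo, MX_VMO_OP_DECOMMIT, vmo_offset, length, nullptr, 0);
}

void RecommitSystemPages(void* address, size_t length) {
  uintptr_t addr = reinterpret_cast<uintptr_t>(address);
  SuperPageRecord* r = FindRecord(addr);
  uint64_t vmo_offset = addr - r->base;
  mx_vmo_op_range(r->vmo, MX_VMO_OP_COMMIT, vmo_offset, length, nullptr, 0);
}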


Super-hacky code doing this if that was all totally unclear: https://chromium-review.googlesource.com/590958 

This approach also only works because PartitionAlloc is (I think? maybe?) fairly nicely behaved, in that it doesn't try to mark accessible things that are already accessible, or mark vmo-spanning chunks accessible, and so on. If it did, I think I'd need a lot more bookkeeping on the side. This worries me for implementing V8's more general class, https://cs.chromium.org/chromium/src/v8/src/base/platform/platform-fuchsia.cc?type=cs&q=v8+fuchsia+virtualmemory&sq=package:chromium&l=38, as I'm less confident that it won't e.g. guard/unguard overlapping or intersecting blocks of memory.

Comment 2 by palmer@chromium.org, Jul 28 2017

I guess my instinct would be to implement PROT_NONE (which I guess Fuchsia would call MX_VM_FLAG_PERM_NONE?). I think guard pages are in large part what it's for. And Partition Alloc might not be the only caller that will want to have guard pages, going forward. But I don't understand everything yet, so maybe I'm wrong.

Comment 3 by teisenbe@google.com, Jul 28 2017

The VMAR/VMO distinction is attempting to separate address space allocation (VMARs) from memory allocation (VMOs).  So far, the typical way of implementing guard pages has been to create an over-sized VMAR to reserve address space, and then leave the parts of the VMAR you want to act as guards unmapped.
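Roughly, that pattern looks like this (sketch only; the exact Magenta signatures are an approximation):

#include <magenta/process.h>
#include <magenta/syscalls.h>
#include <stddef.h>
#include <stdint.h>

// Static guard pages via an over-sized VMAR: reserve extra address space,
// map only the interior, and leave one unmapped page at each end.
uintptr_t MapWithGuardPages(size_t size) {
  constexpr size_t kPage = 4096;

  mx_handle_t vmar;
  uintptr_t vmar_base;
  mx_vmar_allocate(mx_vmar_root_self(), 0, size + 2 * kPage,
                   MX_VM_FLAG_CAN_MAP_READ | MX_VM_FLAG_CAN_MAP_WRITE |
                       MX_VM_FLAG_CAN_MAP_SPECIFIC,
                   &vmar, &vmar_base);

  mx_handle_t vmo;
  mx_vmo_create(size, 0, &vmo);

  // Map the usable region one page into the reservation; the first and last
  // pages of the VMAR stay unmapped, so touching them faults.
  uintptr_t usable;
  mx_vmar_map(vmar, kPage, vmo, 0, size,
              MX_VM_FLAG_PERM_READ | MX_VM_FLAG_PERM_WRITE |
                  MX_VM_FLAG_SPECIFIC,
              &usable);
  return usable;
}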

The behavior that we didn't anticipate being desirable was being able to transition regions back and forth between mapped and reserved-but-unmapped. I think a PROT_NONE analog is a reasonable way to meet this use case, since it is desirable for us to 1) reduce the amount of error-prone bookkeeping that API consumers need to write, and 2) minimize the memory overhead of bookkeeping in the kernel.

Comment 4 by teisenbe@google.com, Jul 28 2017

MG-426 tracks the PROT_NONE feature on the Magenta side
Status: Fixed (was: Assigned)
