Starred by 150 users
Status: Fixed
Owner:
Closed: Sep 2011
HW: ----
OS: ----
Priority: 3
Type: FeatureRequest



1gb memory limit
Reported by bertbel...@gmail.com, Aug 25 2010
V8 is unable to use more than 1 GB of memory, even when there are gigabytes of free memory and a 64-bit build is used. Using the --max_old_space_size flag doesn't help either.

More information: http://groups.google.com/group/v8-users/browse_thread/thread/735fa1f78e419a7b
 
Labels: Type-FeatureRequest Priority-Low
There are two independent issues we need to solve to support large heaps:

1) Currently V8 uses int variables/fields to track various limits inside the memory management subsystem. These limits overflow when the heap size approaches 1 GB, which causes erratic behavior [e.g. an overflow of Heap::old_gen_allocation_limit_ forces V8 to always select the mark-sweep collector instead of scavenge].

The semantically correct fix would be to replace int with size_t in the appropriate places. Unfortunately this requires serious refactoring of the V8 source code, because it currently uses int consistently to store the sizes of various objects, and changing some of these declarations to size_t would cause numerous "signed compared to unsigned" and "signed stored to unsigned" errors.

2) MarkCompactCollector and MemoryAllocator use encodings based on the assumption that the number of chunks constituting the paged part of the heap does not exceed a certain value, and that each chunk consists of a limited number of pages of limited size [see the comments for Page::opaque_header and the MapWord class]. These parts of the GC/allocator need to be rewritten to raise the heap limits.

An independent problem is GC throughput: while the current V8 GC behaves well on small and medium heaps [and especially on applications that satisfy the generational hypothesis], its performance could degrade on large heaps. For example, on a 1.5 GB heap one should expect mark-sweep to take > 1 s, mark-compact to take > 3 s and scavenge to take > 50 ms. Such pauses might significantly degrade overall application performance and responsiveness.
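A minimal sketch of how such pauses can be observed in practice, assuming a V8 embedder like node or d8 started with the --trace-gc flag (the allocation pattern below is purely illustrative):

// Run as, e.g.:  node --trace-gc --max-old-space-size=1900 grow-heap.js
// Each --trace-gc line reports the collector used (Scavenge vs. Mark-sweep)
// and its pause time, so the degradation described above becomes visible
// as the heap grows.
var retained = [];
for (var i = 0; i < 5e6; i++) {
  // Long-lived objects survive scavenges and eventually fill old space.
  retained.push({ index: i, payload: new Array(8) });
}
console.log('retained ' + retained.length + ' objects');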


Even if this can be acceptable in a web browser, IMHO it is not when dealing with server-side processes. In my case, I've been forced to suddenly stop my work (an experiment with real-time web) with node.js because it cannot handle more than ~140K concurrent websocket connections (and my target is considerably higher).
I'm sorry to say that I was surprised to hit this V8 limit (hey, it's a *Google* product! ;) ) and I wonder why the type of this issue is not "bug" and why it has such a low priority: does this mean that the Google/V8 team has no strategic interest in making V8 a server-side tool capable of handling heaps bigger than 1 GB (easy to reach, you know)?
Comment 3 by bro...@gmail.com, Oct 19 2010
Any chance we can get the priority of this bumped up?   This is going to be a significant impediment for Node.js development since memory is one of the most important resources for scaling servers.
The limit for 64-bit V8 is around 1.9 GB now. Start V8 with the
--max-old-space-size=1900 flag.
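A quick sketch of how the flag is typically used with node, and how heap growth can be watched from inside the process (the script name is illustrative; process.memoryUsage() is node's standard API):

// Run as:  node --max-old-space-size=1900 grow.js
// and watch heapUsed climb past the old ~1 GB ceiling.
var chunks = [];
var timer = setInterval(function () {
  chunks.push(new Array(1 << 20).join('x')); // roughly one million chars per tick
  var mb = Math.round(process.memoryUsage().heapUsed / (1024 * 1024));
  console.log('heapUsed: ' + mb + ' MB');
  if (mb > 1500) clearInterval(timer);       // stop once we are well past 1 GB
}, 10);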
It is vital for my server-side work that V8 supports at least 32GB memory sizes.

I am sure that work on this would feed back into client V8.
It is a significant impediment for me too.
Comment 7 by bro...@gmail.com, Dec 3 2010
Slightly off topic, but rumor on the street is that Node.js's Buffer objects are allocated outside the V8 memory space. If that's the case - and it appears to be (https://github.com/ry/node/blob/master/src/node_buffer.cc#L192) - then this may not be as big an issue as I'd thought.

This likely holds true for other node modules that wrap native C APIs (e.g. redis/memcached)... so maybe this isn't that big a deal.

Thoughts, anyone?
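For what it's worth, a hedged sketch of why this matters: Buffer contents are stored outside the V8 heap, so external allocations like the ones below are bounded by process/OS memory rather than by --max-old-space-size (the sizes are arbitrary):

// Allocate ~2 GB of Buffer data.  The Buffer objects themselves are small
// V8 handles; their backing stores live outside the JS heap, so this does
// not count towards the old-space limit discussed in this issue.
var buffers = [];
for (var i = 0; i < 2048; i++) {
  buffers.push(new Buffer(1024 * 1024)); // 1 MB each (pre-Buffer.alloc API)
}
console.log('allocated ' + buffers.length + ' MB of external Buffer memory');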
Even with Buffer objects being allocated outside V8, 1.9 GB leaves little room for things like indexing using sets or tries, where the pointers themselves take up significant space. For instance, a project of mine that indexes 7-10 MB of test data using sets and skiplists takes 35 MB of memory (Redis uses about the same for the same data set). See this thread for more info: http://groups.google.com/group/nodejs-dev/browse_thread/thread/ea23816b70d26fa8
Comment 9 Deleted
It seems to me that point #1 above could reasonably be crowd-sourced to concerned parties (such as myself), if appropriate guidelines were provided by the development team.

"Death to V8 int declarations" patchfests could be organized, with modest support and guidance from the Google developers.


Point #1 has already been resolved (see r5559).
Good to know about #1.

Other than the allocation encodings issue in #2 above, and the counter overflow issue in http://code.google.com/p/v8/issues/detail?id=887, what remaining issues currently prevent large V8 heaps?


#2 is still unfixed.

There is an experimental GC (developed on the experimental/gc branch) which does not have any inherent limitations and is expected to have better throughput/lower pauses on large heaps, but it is still under active development.
Comment 14 by k...@tigertext.com, Sep 16 2011
How about boosting the priority on this? This limitation seriously impedes the potential uses for node.
Comment 15 by m...@ell.io, Sep 17 2011
I’m sorry to trigger e-mail notifications for 84 people, but this isn’t just a +1 on the issue: beyond needing votes-for-implementation, this *needs to have its priority raised*.

Sorry, Google, but your open-source project is important to much more of the world than just *your browser* now; and while this may not be an issue for the zones of impact *you* care about (said browser), it’s a *huge* issue across much of the modern (post-Node) world where V8 matters.
Comment 16 Deleted
The Google team is already actively working on it, so I don't really know what the fuss is all about. You can even follow development on the experimental GC branch.

If you are hitting the process memory limit in node, just scale to multiple processes. It's not magic, and it's not impossible. It's not like the whole world is going to explode if the garbage collection refactor takes 4 more months. And even if it gets released by the V8 guys, it will take a new stable node release before you can use it.

Results of a quick experiment with Buffer on a recent Node.js checkout:
Serenity:src williambarnhill$ node
> new Buffer(4000000000)
TypeError: Bad argument
    at new Buffer (buffer.js:235:21)
    at repl:1:2
    at REPLServer.eval (repl.js:72:28)
    at Interface.<anonymous> (repl.js:162:12)
    at Interface.emit (events.js:67:17)
    at Interface._onLine (readline.js:156:10)
    at Interface._line (readline.js:419:8)
    at Interface._ttyWrite (readline.js:596:14)
    at ReadStream.<anonymous> (readline.js:76:12)
    at ReadStream.emit (events.js:88:20)
> new Buffer(2000000000)
FATAL ERROR: v8::Object::SetIndexedPropertiesToExternalArrayData() length exceeds max acceptable value
Serenity:src williambarnhill$ node
> new Buffer(1500000000) 
FATAL ERROR: v8::Object::SetIndexedPropertiesToExternalArrayData() length exceeds max acceptable value
Serenity:src williambarnhill$ node
> new Buffer(1200000000)
FATAL ERROR: v8::Object::SetIndexedPropertiesToExternalArrayData() length exceeds max acceptable value
Serenity:src williambarnhill$ node
> new Buffer(1000000000)
<Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...>
> 


Status: Fixed
fixed with r9328
What's the new limit? No limit?
The default limit is still 700 MB on 32-bit and 1400 MB on 64-bit. You can raise the limit with --max-old-space-size=2000 (the unit is megabytes). On 32-bit systems you probably can't set it much higher due to the lack of virtual address space. The upper limit on 64-bit is not known.

Bug reports are very welcome.
Comment 22 by tshin...@gmail.com, Sep 19 2011
@18 Re:
  FATAL ERROR: v8::Object::SetIndexedPropertiesToExternalArrayData() 
    length exceeds max acceptable value

Please see node.js  issue #1710  where this _particular_ scenario was explored.
  "std::bad_alloc exception if a too big buffer is allocated"
  https://github.com/joyent/node/issues/1710

In this case it is a hard-coded 1 GB limit on the size of ExternalArray allocations (v8/src/objects.h, ExternalArray::kMaxLength).

Probably related to the larger issue, but it is hard-coded, not an (in)direct consequence of GC?
Yes, the issue described in comment #18 is not related to GC. It is related to the way an external array's length is represented in the object (a 31-bit signed integer).

Also, a clarification to comment #21: --max-old-space-size=2000 is just an example. You should be able to put any value there. Everything depends on how much memory the OS will allow V8 to take. V8 will not try to reserve the whole X MB of memory at once; it will request more pages from the OS as the JS heap grows.

Comment 24 by m...@ell.io, Sep 20 2011
Very promptly seen to. Sometimes, I love you guys, Google.
Comment 25 by rryk...@gmail.com, Jun 18 2013
I've tried --max-old-space-size=10000 on Linux, but still couldn't go over ~3GB. An error appears on the console when the limit is hit - see attached screenshot.

Version info:

Google Chrome	28.0.1500.45 (Official Build 205727) 
OS	Linux 
Blink	537.36 (@152173)
JavaScript	V8 3.18.5.8
Flash	11.7.700.203
User Agent	Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.45 Safari/537.36
Command Line	 /usr/bin/google-chrome --max-old-space-size=10000 --flag-switches-begin --enable-sync-favicons --sync-keystore-encryption --flag-switches-end
Executable Path	/opt/google/chrome/google-chrome
Attachment: screenshot.jpg (306 KB)
@rryk.ua: both strings and arrays have length limits that have nothing to do with allocation limits; V8 will not allow them to grow without bounds and will throw a fake out-of-memory error instead.

Try something more representative, e.g. creating a lot of deeply nested tree-like structures:

(function () {
  function tree (n, m) {
    if (n > 0) {
      var a = new Array(m);
      for (var i = 0; i < m; i++) a[i] = tree(n - 1, m);
      return a;
    }
  } 

  var trees = [];
  while (true) trees.push(tree(15, 2));
})();
Comment 27 by rryk...@gmail.com, Jun 19 2013
With this script I get the following error on the console:

V8 error: Allocation failed - process out of memory (CALL_AND_RETRY_LAST).  Current memory usage: 1393 MB

The tab crashes at roughly 1.5 GB of memory.
Comment 28 by ricow@chromium.org, Jun 19 2013
If you are running inside Chrome you should pass the flag to V8 using --js-flags="--max-old-space-size=10000"

Unless they added it recently, there is no --max-old-space-size flag in Chrome itself.
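A small sketch for checking whether the flag actually took effect, assuming Chrome was launched with something like --js-flags="--max-old-space-size=4096"; performance.memory is a non-standard, Chrome-only API:

// Paste into the DevTools console after launching Chrome with --js-flags.
var limit = window.performance.memory.jsHeapSizeLimit;
console.log('jsHeapSizeLimit: ' + Math.round(limit / (1024 * 1024)) + ' MB');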
Currently I'm developing a WebGL-based app which must load and render a huge amount of data (architectural 3D models). I've run into Chrome's 1.4 GB V8 limit as well. I've tried to remove this limit in Chrome Canary (under OS X) with the mentioned flag --js-flags="--max-old-space-size=10000", but it doesn't seem to work. Is there any possibility at all to remove this limit?

// Now we recommend our clients to run our app in FF. :-(
Owner: hpayer@chromium.org
Re #29: if you are using WebGL you may find there are big wins from moving to typed arrays.

a) Typed array storage does not count as part of the JS heap. So in a Win32 Chrome you can write

    var out = [];
    for (var i = 0; i < 1600; i += 1) {
        out.push(new Uint8Array(1024 * 1024));
    }

and it will not hit the limit.

Chrome tabs seem to stop (crash) at 2GB on Win32.

b) Changing from arrays to typed arrays, and especially flattening arrays of structures into a single typed array, can vastly reduce the memory footprint. Most of our vertex data is Float32s but can be uint16 or even bytes in some cases. We saw a 10x reduction on animation data that was arrays of structures after moving to flattened typed arrays. Of course this comes at the labour cost of doing it and at the cost of more obfuscated data structures (see the sketch after this list).

c) Using the above can vastly reduce the number of objects on the heap and so also reduce GC cost, though GC pauses are better than they were.
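A minimal sketch of point (b), flattening an array of vertex objects into a single Float32Array; the field names (x, y, z) are illustrative:

// Array-of-structures form: one JS object per vertex on the V8 heap.
var vertices = [{ x: 0, y: 1, z: 2 }, { x: 3, y: 4, z: 5 } /* ... */];

// Flattened form: 3 floats per vertex in one typed array whose backing
// store does not count against the JS object heap.
var flat = new Float32Array(vertices.length * 3);
for (var i = 0; i < vertices.length; i++) {
  flat[3 * i]     = vertices[i].x;
  flat[3 * i + 1] = vertices[i].y;
  flat[3 * i + 2] = vertices[i].z;
}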
@#31: hey, thank you for the hint regarding typed arrays -- I didn't know that they live outside the JS heap. After I moved all geometry to typed arrays, memory usage was reduced by a factor of 4 in our case. But the problem is that I receive a 100 MB JSON string from the server and must parse it. Chrome just crashes at about a 500 MB heap in this case, and at about 1.5 GB if the nodes are smaller, as I can see from the timeline. I tested it on a 64-bit MacBook Pro with 16 GB RAM. The same result under 64-bit Windows (we don't use 32-bit systems). But it runs perfectly in FF. FF crashes at about a 6.5 GB heap size, which is acceptable for huge models.

The question is the same: is there any possibility to extend the heap size in Chrome to at least 6 GB? :-)

I'd very much appreciate any help. Thanks!
Comment 33 by stu...@lttlrck.com, Apr 30 2014
@#32 Have you thought about stream parsing the JSON? It might save some trouble in the long run if/when models grow.

https://github.com/dscape/clarinet
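A rough sketch of the stream-parsing idea, assuming clarinet's sax-style streaming API (createStream with 'value'/'end' events; the exact event names should be checked against the library's docs, and model.json is just a placeholder):

// Pipe the large JSON file through a streaming parser instead of calling
// JSON.parse() on a single 100 MB string, so no giant intermediate string
// has to live on the JS heap.
var fs = require('fs');
var clarinet = require('clarinet'); // assumed sax-like API, see link above

var stream = clarinet.createStream();
var valueCount = 0;

stream.on('value', function (v) {
  valueCount++; // handle each primitive value as it arrives
});
stream.on('end', function () {
  console.log('parsed ' + valueCount + ' values');
});

fs.createReadStream('model.json').pipe(stream);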
The fix landed in V8 revision r21102; setting an arbitrary old-space limit should work now.
@#33: thanks! Very interesting. Will check it. But IMO it is off-topic here. ;)
@#34: That was fast! Thanks a lot! One stupid question: I suppose this is about the V8 JS engine. How can I use it in Chrome [Canary]? I've updated all Chromes (OS X) and it doesn't seem to work in either browser where I test the app.
@#36: The fix should be in Canary in the next few days, but this depends on several build/test bots being happy with v8 and Chrome, so I can't give a definite date. All I can say is that the fix is currently definitely *not* in Chrome.
@37: I supposed that it would first be integrated in Chrome Canary, tested, and then integrated in Chrome. Thank you for the rapid fix, and thank you all for your great work! I love Chrome! ;-)
#34: I've upgraded Chrome, it is now Version 36.0.1978.0 canary, but with the flag --js-flags="--max-old-space-size=6000" window.performance.memory.jsHeapSizeLimit shows 2069.00 MB. No matter which heap size I set, it remains at about 2 GB. Any idea?
With Version 37.0.2030.2 canary the heap size still remains at about 2 GB no matter what I set. There is also a huge performance degradation in my app since the last update compared with the previous canary or the current production build:

Version 37.0.2030.2 canary -- Node parsed: 966,128ms
Version 35.0.1916.114 -- Node parsed: 179,708ms
FF 29.0.1 -- Node parsed: 95.68ms

What is going on with you, guys?
I have tried to load a 5 MB JSON file into Chrome. Chrome crashed, utilizing about 54% of RAM. If we stop other processes, Chrome can use the rest of the memory. My question is: what is making it crash? Some say memory is the main hindrance. Is that true?
Have a look at the following JSON issue https://code.google.com/p/v8/issues/detail?id=3974
How much heap is the 64-bit V8 engine officially giving us nowadays?
What is the best solution for this?
We have been searching through a lot of articles and solutions on the internet.
Is io.js suitable for such tasks?
Comment 44 by avi...@gmail.com, Mar 24 2015
I struggled with V8 memory limits for a long time. While in some applications I was easily using over 8 GB without any issues, in others I couldn't use more than 2, so the answer is kind of complicated.
The two limits I think I was running into were a limit on the total size of all the object keys, and a limit on the size of the "new" space.

For example, I was creating an in-memory database, and wanted a way to index values so that retrieving all the rows where the property "age" is 20 and "height" is 6 could be done very fast. Essentially, I was trying to find the intersection of 2 lists of rows. To do that I created two objects, where each rowNumber just mapped to true. Then I would iterate over the smaller of the objects and see if each row number was a defined property on the larger object. But when the database grew, I got out-of-memory errors, and I think I pinned it on the number of combined keys in all those maps simply growing too large. I changed the design so that instead of objects for the rows I used sorted arrays, and finding the intersection was just a matter of a number of binary searches (instead of object key lookups). It was still very fast, and the memory limit disappeared. The exact out-of-memory error I was getting mentioned something about strings, which led me in the right direction.
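A hedged sketch of the sorted-array approach described above (all names are illustrative): keep each index as a sorted array of row numbers and intersect by binary-searching the larger array for each element of the smaller one.

// Returns the position of x in the sorted array a, or -1 if absent.
function binarySearch(a, x) {
  var lo = 0, hi = a.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (a[mid] === x) return mid;
    if (a[mid] < x) lo = mid + 1; else hi = mid - 1;
  }
  return -1;
}

// Intersect two sorted arrays of row numbers, e.g. the rows where age === 20
// and the rows where height === 6.
function intersect(a, b) {
  if (a.length > b.length) return intersect(b, a); // iterate the smaller one
  var result = [];
  for (var i = 0; i < a.length; i++) {
    if (binarySearch(b, a[i]) !== -1) result.push(a[i]);
  }
  return result;
}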

In another app, I was creating large arrays, a few hundred thousand elements, passing them around, de-referencing them, combining them into new arrays, etc., and my app would crash at different points, claiming it was out of memory. It seemed I was just creating and destroying arrays too quickly, and the "new space" was overfilling. So, when I had to make a new array out of two existing arrays, instead of dereferencing the old arrays and claiming memory to create a new one, I created a "combined array" class. The combined array class would have a list of arrays, and when you iterated over the combined array you iterated over all the arrays in its list. This way I wasn't constantly dereferencing memory and claiming new memory. As soon as I did that, my problem disappeared.
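A minimal sketch of the "combined array" idea (class and method names are made up for illustration): instead of copying two large arrays into a freshly allocated one, keep the existing arrays in a list and iterate across them.

function CombinedArray() {
  this.parts = []; // the underlying arrays, kept as-is
}
CombinedArray.prototype.add = function (arr) {
  this.parts.push(arr);
};
CombinedArray.prototype.forEach = function (fn) {
  // Visit every element of every part without ever copying them into a new
  // array (and so without churning the new space).
  for (var p = 0; p < this.parts.length; p++) {
    var part = this.parts[p];
    for (var i = 0; i < part.length; i++) fn(part[i]);
  }
};

// Usage: combine two existing arrays without allocating a third one.
var combined = new CombinedArray();
combined.add([1, 2, 3]);
combined.add([4, 5, 6]);
combined.forEach(function (x) { console.log(x); });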

I also had lots of large arrays in my app. There were, let's say, a million objects, and then there were all these classes that would each be interested in a few hundred thousand of them and would store them in their own array. I don't think this was what caused my issues, but back then I didn't know what did, so instead of having each class store an array of the objects themselves, I made them store numbers that were references to these objects (another class was in charge of resolving the references to the actual objects). And since I was just storing a list of numbers, I could put them into a typed array, whose storage lives outside the V8 heap.

Anyways, I hope that makes sense.  It took me weeks to track down these problems.  This was all in node 10.32 (Older versions weren't any better).  ;(.
Comment 45 by xep...@gmail.com, Mar 25 2015
Any updates? Tried on version 41.0.2272.76 (64-bit) with the --max-old-space-size flag set larger than 2 GB; it still doesn't work. The JS heap limit stops at about 2089 MB. Without this flag, the limit is about 1545 MB.

Please attach JS code that reproduces your issue.
I'm working with a lot of files.
For now I can choose to use a very old version of node.js, 0.10.22, which seems more flexible with regard to --max-old-space-size. But it still flattens out completely around 2-4 GB (probably 1.9 GB).
However, it doesn't crash; it is just very slow.

If I stream my file data to the database that I'm importing into, I get "too many open files" errors.

I can get around this issue in the following ways:

1. By refactoring my code to use synchronous file access, which is sad.
2. By using graceful-fs. But it also causes memory issues and slowdowns because of file queueing.
3. By grouping all my files together in folders of about 10,000-20,000 files, to match the node memory threshold, and then running multiple instances of node.js. But marshalling 250,000+ large files located in a nested folder structure isn't always easy.

If concerns about mobile Android performance are holding back V8 optimizations, maybe an enterprise extension to V8 (with enhanced data objects etc.) could be considered.


Comment 48 Deleted
Labels: Priority-3