Starred by 658 users

Issue metadata

Status: Duplicate
Merged: issue 4203
Owner:
Closed: Oct 2016
Cc:
HW: ----
NextAction: ----
OS: ----
Priority: 2
Type: FeatureRequest




Implement "use asm"

Project Member Reported by kbr@chromium.org, Mar 27 2013

Issue description

Mozilla and Epic Games have partnered to compile the Unreal Engine to JavaScript via asm.js:

https://blog.mozilla.org/blog/2013/03/27/mozilla-is-unlocking-the-power-of-the-web-as-a-platform-for-gaming/
http://techcrunch.com/2013/03/27/mozilla-and-epic-games-bring-unreal-engine-3-to-the-web-no-plugin-needed/

Optimizations should be added to V8 to generate good code for the asm.js subset of JavaScript. The implementation cost should be small compared to the potential upside -- the ability to run significant existing code bases with close to the speed of C inside the JavaScript engine.
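For context, asm.js modules are ordinary JavaScript functions whose "use asm" directive and syntactic coercions pin down static types. A minimal hypothetical sketch (the module and function names are illustrative, not taken from the Unreal port):

```javascript
// Hypothetical minimal asm.js module; the "x|0" coercion declares int32.
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;           // parameter typed as int
    y = y | 0;
    return (x + y) | 0;  // result coerced back to int
  }
  return { add: add };
}

// Because asm.js is a strict subset of JavaScript, this runs unchanged
// in any engine, whether or not it recognizes the directive:
var m = MiniModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(m.add(2, 3)); // 5
```

An optimizing engine that validates the module can compile it with fixed integer representations; any other engine simply runs it as plain JS.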

 
Cc: dslomov@chromium.org
It would be good to fix issues like Issue 2424 before implementing "use asm".

This way we will actually speed up some real world hand written code as well as emscripten generated code.

Comment 3 by 69.sau...@gmail.com, Mar 31 2013

It would be great if this could be implemented.
Cc: stefanoc@chromium.org
I think it is good to do this; it may be useful on Android on ARM if you want to improve performance.
Here's an interesting opinion against this proposal: http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html

Comment 7 by m...@manichord.com, Jul 15 2013

Do you realise that that blog post you linked to was written by the same (former) V8 engineer who posted comment #2? And given microbenchmark numbers for v8 on asm.js-style code, I think Vyacheslav may have a point about not needing AOT.

Comment 8 by m...@manichord.com, Jul 15 2013

For future reference this is the microbenchmark that Vyacheslav referred me to:

http://arewefastyet.com/#machine=11&view=breakdown&suite=asmjs-ubench

Comment 9 by alonza...@gmail.com, Jul 15 2013

That site also has a set of larger benchmarks, with comparisons of various run times (workload0 is just startup, workload 4 is many seconds).

http://arewefastyet.com/#machine=11&view=breakdown&suite=asmjs-apps

Comment 10 by m...@manichord.com, Jul 15 2013

Thanks for that link - those tell quite a different story to the microbenchmarks. I wonder what causes that?
Any official hint on asm.js support in Chrome? Is it going to be implemented or not?
Is this implemented already. At least now www.unrealengine.com/html5/ works pretty good on 30.0.1599.69 m.

> support@toremote.com > Is this implemented already. At least now www.unrealengine.com/html5/ works pretty good on 30.0.1599.69 m.

No, that's not the case :). On Chrome/30.0.1599.69 the "use asm" feature is not implemented yet.

www.unrealengine.com/html5/ works well simply because V8 has fast JavaScript (not because of asm.js).
Many optimizations have landed in V8 over the last few months that improve its performance on real-world code as well as the asm.js subset. The "use asm" pragma, however, is not necessary to opt into these optimizations in V8; they are available for all JavaScript.

Emscripten developer Alon Zakai recently shared benchmark results for Firefox and Chrome on asm.js code: http://kripken.github.io/mloc_emscripten_talk/sloop.html#/7 (image also attached). Alon also noted Chrome now hits 60fps in the Epic Citadel Unreal demo: http://www.unrealengine.com/html5/
[Attachment: asm-benchmarks.png, 57.7 KB]
It's clearly not the real world scenario. Check for example the Lua interpreter converted to JS: http://kripken.github.io/lua.vm.js/lua.vm.js.html

On my AMD FX-8350 machine the benchmark shows:

- Chrome Canary 32.0.1674.2:
 binarytrees 55.968 seconds
 scimark     1.14 MFLOPS
 VM startup  0.283 seconds

- Firefox Aurora 26.0a2 (2013-10-18):
 binarytrees 11.254 seconds
 scimark     6.32 MFLOPS
 VM startup  0.444 seconds

Firefox is about 6 times faster; only the VM startup is about 2x slower, which is related to asm.js compilation.
"It's clearly not the real world scenario."
"Firefox is about 6 times faster,..."

Pay more attention, the benchmark says: "times slower than native=1 (lower numbers are better)"
Chrome beats Firefox only on memops, so V8 is slower, as expected.
I did those tests myself. Please do it yourself and see the difference.
The Lua interpreter exposed two performance bugs in v8 (see https://code.google.com/p/v8/issues/detail?id=2873), but neither has anything to do with asm.js. Throwing any random asm.js code at v8 and seeing suboptimal performance is no proof at all that asm.js is needed. ;-)
Glad to hear that. I also noticed that the latest Canary build 32.0.1676.2 doesn't properly load the Citadel demo any more. It hangs after the data download, when the progress bar is shown. Not sure if you are aware of that; the issues I found mention only performance regressions.
Regarding the Citadel demo: I've just tested it with the latest and greatest sources (Chrome from ToT) on x64 Linux, and it works. IIRC there were some problems on the Chrome side in the past regarding HTTP redirects (or something similar, I don't remember), so if things don't work for you, it would be great if you could open an issue on the Chrome bug tracker. We'd really like to keep the demo working. :-)
I checked the Citadel demo again on the stable build and it works flawlessly, much smoother than on Firefox, and with almost-native ~60 FPS. :) The Canary build, on the other hand, stops working during load, and the console shows an error that is not present with the stable build:

Uncaught TypeError: Cannot call method 'getSupportedExtensions' of undefined

Not sure if it is a regression or just a bug in the UDK code. I can open an issue, but I'm not sure if it should be done for Canary builds. Should I?
> Throwing any random asm.js code at v8 and seeing suboptimal performance is no proof at all that asm.js is needed. ;-)

I certainly agree it is not proof that explicit asm.js detection is necessary. However, I think that the scenario just described is exactly the reason why such a thing can be useful.

With explicit detection of asm.js and use of the asm.js type system, the code could be sent directly to Crankshaft, avoiding any limitations or bugs relating to the interaction between Crankshaft and the baseline compiler. That means it would avoid any issues with deoptimizations, collection of type info, limits on what is sent to Crankshaft and when, OSR, etc.

As a consequence, using the asm.js type system can give a fairly good guarantee that if you throw a random new asm.js codebase at the JS engine, it will run very fast. Furthermore, with that guarantee in place, it could enable more focus on backend optimizations in Crankshaft.

But again, I am not saying that is a necessary approach. General JIT improvements can in principle lead to the same performance for asm.js code, I don't think anyone can doubt that. And I can see how general JIT improvements are more elegant in a way. However, it appears to be the longer path.

Whether asm.js optimization is "necessary" is less of an interesting question to me than whether V8 has a next gen plan in mind.

The flip-flopping implied by showing off asm.js improvements at Google I/O 2013, then questioning its potential and place in the industry, makes me wonder what's going on.

Comment 26 by jakub...@gmail.com, Oct 22 2013

@#25: The optimizations shown at Google I/O had nothing to do with "use asm". They optimized JavaScript features used by asm.js code, but they never implemented asm.js itself.

Comment 27 by m...@ell.io, Oct 22 2013

Not to say that any of the previous discussion has been at all off-topic or inappropriate, or anything … and, of course, to be a bit of a hypocrite myself …

… I'd just like to remind everyone that *six hundred people* (as of this reply) get an e-mail every time you comment on this. /=

Comment 28 by gmur...@gmail.com, Oct 22 2013

@26 that's a pretty fuzzy distinction. V8 could, in principle, ignore asm.js and only make optimizations that "just happen" to also increase asm.js performance, much as you might focus heavily on a particular benchmark while acting as though all the optimizations were generally needed. But what some are saying is that V8 may be much better served by explicitly taking asm.js into account, since a lot of the pre-optimization inference can be shorn away if you already know the precise types and semantics.

You are somewhat served by staying general, because there is at least some pressure for asm.js not to be idiomatically alien JavaScript, so that it runs acceptably on non-asm.js browsers. But you are missing an opportunity for the code to carry its exact intent through to the optimizer, rather than that intent needing to be tortuously inferred from (and possibly fooled by) the semantics.

I know it's a dirty word, but Crankshaft could be similarly well served by directly observing the type system in TypeScript, if those annotations were served to the engine rather than stripped.

V8 is very good at inferring type systems where there are none, but that doesn't mean that the performance couldn't get much better if it allowed the upstream to annotate intent.
@#28: why do you mention only types?

The "use asm" pragma disables garbage collection (disables it entirely) and disables other high-level features of the VM.
@29 - "use asm" doesn't disable garbage collection; for valid asm.js, there isn't anything to garbage collect. The JavaScript GC still continues to execute; for example, any JS functions that are called from within asm.js can still generate garbage that will be GC'd.
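To illustrate the point about there being nothing to collect: all memory a valid asm.js module touches lives in one preallocated typed-array view of a heap buffer, and only FFI calls can create collectable JS objects. A sketch (module, function, and `log` hook names are made up for illustration):

```javascript
// Sketch: the module body performs plain heap reads/writes and allocates
// nothing the GC must track; only the foreign (FFI) call can make garbage.
function HeapModule(stdlib, foreign, buffer) {
  "use asm";
  var HEAP32 = new stdlib.Int32Array(buffer); // the module's entire memory
  var log = foreign.log;                      // FFI hook; may allocate
  function store(p, v) {
    p = p | 0;            // byte offset into the heap
    v = v | 0;
    HEAP32[p >> 2] = v;   // heap write, no object allocation
    log(v | 0);           // crossing the FFI boundary can create garbage
  }
  function load(p) {
    p = p | 0;
    return HEAP32[p >> 2] | 0;
  }
  return { store: store, load: load };
}
```

Usage: `HeapModule(globalThis, { log: function () {} }, new ArrayBuffer(0x10000))` gives a module whose `store`/`load` round-trip values through the shared heap.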

Comment 31 Deleted

Here we see Firefox performing in-browser cryptography 2-8 times faster than Chrome, because of support for asm.js:
http://blog.opal.io/crypto-in-the-browser.html
What we actually see in that article is that old versions of Chrome don't do very well when running asm.js code, but that's no surprise. I just ran http://jsperf.com/js-nacl with FF 24.0 and Chrome 32.0.1687 (a fresh build from this morning), and the results are quite different from the ones in the article: in one of the benchmarks we literally crush FF, in others we are comparable, and in some we still suck. I haven't done a detailed analysis yet, but it is fair to say that the article is outdated. We know of a few general performance bugs in v8 (deoptimization loops, poor representation choices, etc.) which are exposed by some asm.js programs, and we will hopefully fix them in the near future.

But again, the article is no proof that asm.js is really necessary for good performance...
My intuition tells me that it's harder to beat FF when ignoring "use asm". It also tells me that someone using an asm.js compiler wants maximum theoretical performance more badly than anybody else doing their fancy web pages. My intuition does not tell me why Chrome is capable of NaCl but not asm.js, when the second is cross-browser and both goals are the same.
@34 NaCl and asm.js don't necessarily have the exact same goals, and NaCl is massively closer to native performance than any implementation of asm.js so far. Also, improving overall JS is a much better end goal, as everyone wins, not just some people running benchmarks. asm.js still has a lot to prove. It's a cool concept and all, and some devs are pretty serious about it, but that doesn't make it a real standard yet.

I'd say the Chromium team has the right mentality: use asm.js to find weaknesses in the current V8 implementation and improve it for all JS applications. This would eventually remove the need for a completely separate spec, and also has the benefit that the Chromium team doesn't need to maintain two separate engines like Firefox does.

TL;DR: I think the Chromium team is playing the long game, and FF is playing the short game.
Whether or not asm.js as a whole is a good idea, one part of it that is definitely helpful to developers is that the browser warns you when you have written a code block that won't optimize well. I wouldn't want warnings for all of my JavaScript code, but for certain code blocks that I specifically want to be well optimized, being able to ask the compiler to warn me if I'm doing things that can't be well optimized is a huge benefit.

Even if "use asm" just triggered those checks in Chrome, that would still be hugely useful to developers. Alternately, being able to right-click on a function in the code view and mark it for analysis and feedback would be another way that could be accomplished. Or I can mark a code block with "use asm" and run it in Firefox, and get warnings that approximate problems that might also exist in Chrome ;-)

E.g. "Warning: the type of object A changed during execution on this line" -> Aha! That's what is causing my performance issue. Glad I didn't spend weeks tracking that down.
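As a sketch of the kind of block that triggers such a warning (the module is made up, and the exact console message varies by Firefox version): a function whose return type is inconsistent fails asm.js validation, so the engine logs a warning and falls back to running it as plain JavaScript.

```javascript
// Hypothetical module that fails asm.js validation: f() returns an int
// on one path and a double on another, which the type checker rejects.
// As ordinary JavaScript it still runs correctly; a validating engine
// merely skips the asm.js fast path and reports a type error.
function Broken(stdlib) {
  "use asm";
  function f(x) {
    x = x | 0;
    if ((x | 0) > 0) return x | 0;  // int return
    return +(x | 0);                // double return: validation error
  }
  return { f: f };
}
```

Running `Broken(globalThis).f(3)` still yields 3 in any browser; only the optimization (and the warning) differs.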
> This would eventually remove the need for a completely separate spec, and also has the benefit that the chromium team doesn't need to maintain two separate engines like Firefox does.

There is certainly a point to be made about focusing on general optimizations, however, asm.js optimizations in Firefox are not a separate JS engine, and not even a separate JIT. All Firefox added for asm.js is just a type checker for the asm.js type system, plus some small additional optimizations to the existing JIT backend Firefox already had.

Writing new JS engines and/or JITs takes years by large teams, while the asm.js optimizations in Firefox took 1 developer just a few months, and that was while the asm.js spec was being developed and changing a lot. We are talking about a relatively small amount of code and effort here.

Of course any new code requires work and maintenance, can have bugs etc., so your point about just focusing on general optimizations is certainly debatable, and I see the arguments to be made for both sides. However, I would say that in practice many months have passed since Firefox launched its asm.js optimizations, and they have not been a burden so far.

Comment 38 by tjpal...@gmail.com, Oct 30 2013

Google would never build an alternative to plain JS with a few more constraints just to make optimization easier (https://www.dartlang.org/), right? Well, maybe not a fully compatible one, at least.

I guess I should remember there are multiple people involved, and I'm not even consistent with myself all the time. And I can't prove the best course of action here, either. I'm just hoping other browsers keep enough market share that Google (composite entity again) doesn't feel a need to stagnate in JS performance. Overall, they're still doing great so far.

Meanwhile, my apologies for adding to the spam.
The original bug report says "Optimizations should be added to V8 to generate good code for the asm.js subset of JavaScript."  Even if V8 doesn't have a separate compilation mode for asm.js the way SpiderMonkey does, it's clearly generating much better code than it used to.  Is it good enough to close this bug?
@39: Does current V8 trunk spit out a console warning or error if the "use asm" is present and the code isn't valid asm.js?

I'd think that, at the very least, that debugging aid should be present before closing this bug. (Treat it like a cousin to "use strict" even if it doesn't have its own compilation mode.)

Issues like this are never really finished.

Maybe we should identify a number of optimizations that could be performed with asm.js code, then we can close this and focus on specific points rather than this vague target of "fast".

Comment 42 by m...@manichord.com, Oct 30 2013

I agree with comment #40.

Without arguing the point of whether v8 should do AOT based on the "use asm" pragma, a key point seems to be indicating to JS developers when deopts are occurring while code tagged with "use asm" is run.

AFAIK this is not available in the devtools or console messages currently, and requires running the v8 shell from the command line, which is a much higher barrier to entry for web devs. Perhaps that request needs to go into a new bug?


True, comment #40 makes a good point.  Providing those warnings (I use the plural because Firefox warns on successful compilation as well, so there's never any doubt) will be hard without some kind of special asm.js mode, even if it doesn't do AOT compilation.

I suggest the following:
- Close this bug.
- File a new bug about implementing the warnings (which the V8 team may well ignore).
- (Possibly) File new bugs on specific test cases where V8 is found to do poorly (which the V8 team are likely to act on).

But I'm not a V8 dev, just someone with experience with vague bug reports :)
Just my 2 cents: #43 sounds like a solid course of action. If most people agree, we could create the more specific bugs and close this one.
One thing that hasn't been mentioned yet is that AOT provides predictable performance. Once the code is compiled and the page has finished loading you won't have any initial slowdowns and UI glitches/stuttering due to JIT warmups.

However I don't know how quickly v8 warms up. All benchmarks I've seen focus on longer-running computations where warmup time plays a minor role.
Re comment #45:  my suggestion from comment #43 holds: "File new bugs on specific test cases where V8 is found to do poorly".

This bug really needs to be closed.
I think this bug is perfectly valid; it's just been co-opted to talk about general JS performance. This bug would be 'fixed' if the directive were recognised and the code then compiled accordingly.

'Improve JS performance so that asm.js is unnecessary' is a separate issue, that if fixed would obsolete this. But rather than close this, I think that should be opened as a separate bug and related conversation moved over there.

Comment 48 by gmur...@gmail.com, Oct 31 2013

If V8 respected the asm pragma it also wouldn't be possible to "fool" the semantic inference and wind up deoptimized. asm is telling you which semantics are being used, so that they no longer need to be inferred and should NOT be inferred as something else. Not only should the pragma improve maximally achievable performance, but it should make that performance more reliable as you aren't relying on the vagaries of the analyzer to make sure it gets optimized the right way.

Imagine you wrote some assembly code by hand specifically to avoid it being reinterpreted detrimentally by the optimizing compiler, but then your machine decided it knew better and rearranged it to "make it better", ignoring the precise semantics you had nailed down.
Octane 2.0 benchmark was released today and includes an asm.js zlib benchmark. On Chrome 29 I get 24529 for the zlib score and on Firefox 25 I get 43787. Both on Linux. One interesting data point is that this number includes the time to parse and compile the code.
Isn't the point of asm.js to make the JIT's life easier by making it harder for code to be unoptimizable, and to give a standard path towards straightforward compilation? I could understand hesitation on implementing asm.js, as it's just JavaScript that could be implicitly checked for without the declaration. It's just that the declaration can speed things up in optimization time and AOT.
I think that the key to asm.js is that it enforces static types. Not only the JavaScript "Number", but low-level ints and floats with enforced sizes. This means that theoretically code should never have to be "de-optimized", as it should always use the same types.

Now, AFAIK this is already a benefit, as once compiled, that routine will be good for the lifetime of the program. However, by taking into account the knowledge that the code is asm.js, you can take more time optimizing, since you know it will be useful, and you can likely optimize earlier because you don't need to take time to track what types are used (you know the types before the code runs once).

I believe that asm.js also ensures that all functions called are static (except for the FFI) so that inlining is easier as you don't have to worry about the inlined function changing.

So one takeaway is that type guarantees are pretty much "free", in that code doesn't get deoptimized, but in "use asm" blocks you can make stricter assumptions about types more safely.

The function staticness is also automatically beneficial for the same reason.

So v8 already benefits quite a bit from asm.js and the final tweaks would be minor (they shouldn't change inner loops).  One of the largest benefits would be removing the deoptimization checks, but that is very dangerous because if it can't be guaranteed that the assumptions are always valid you could possibly get undefined behaviour.

So we get back to the general agreement that this bug is more about emitting the proper warnings, rather than treating asm.js as special by the jit (and whatever other components).
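The static types being discussed here are pinned syntactically rather than inferred from runtime feedback. A sketch of the coercions for the asm.js value types (the module is illustrative; float32 via `Math.fround` was added to the spec after the original int/double pair):

```javascript
// Hypothetical module showing how each asm.js type is declared by a
// coercion a validator can check before the code ever runs:
function Types(stdlib) {
  "use asm";
  var fround = stdlib.Math.fround;
  function mix(i, d, f) {
    i = i | 0;      // int32
    d = +d;         // double
    f = fround(f);  // float32
    // Every subexpression's type is statically known, so no runtime
    // type feedback or deoptimization checks are needed.
    return +(d + +(i | 0) + +f);
  }
  return { mix: mix };
}
```

For example, `Types(globalThis).mix(1, 2.5, 0.5)` computes 2.5 + 1 + 0.5 as doubles.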

Comment 52 by redu...@gmail.com, Jan 3 2014

Hi guys, just reported issue #3080; I'm not sure if it's related to this.

I understand that it's best to improve V8 as much as possible as a priority, and I'm not going to argue with that.

My experience with asm.js in Chrome/V8 is that (unlike Firefox) it stalls a lot, despite performance itself being fast. Unreal Citadel is pretty much a static rendering demo where the CPU does little to nothing, and benchmarks also use a finite and small number of code paths.

I've been trying to port a large game engine (linked above) that has 20mb of asm.js code, and that probably stresses several more codepaths than other tests, as it runs a custom physics engine, space grid hash broadphase, GLES2 renderer, interpreted script, track-based animations, particle systems, etc. 

So the question is, isn't AOT compilation better than JIT for these cases, to avoid stalls? From what I understand reading the asm.js specification, it's not just an issue of optimization: properly detected asm.js should also be much faster to parse and compile, allowing AOT for a smoother experience.


This has almost been discussed to death, but well...

The problem of the game engine demo and several other asm.js demos/apps out there is not really AOT vs JIT, it is that there are tons of deopts, leading to repeated recompilations. We know a few reasons for this, and we probably need to investigate a few more, but when these deopts are solved, our concurrent compiler should yield smooth performance. If we don't deopt at all for asm.js programs and *still* have stalls, we might need to reconsider this approach, but not earlier.

Comment 54 by redu...@gmail.com, Jan 4 2014

@53, I understand that most stalls are caused by deopts and that makes complete sense; I also understand that fixing them will make the demo I posted playable. Yet, is it really possible to have a situation where gameplay is smooth from the beginning, without any kind of stall?

The demo I posted runs absolutely smoothly from the first frame in Firefox. In Chrome, it spends several seconds at the beginning with the scene frozen, then gradually becomes more and more playable (and finally it works, but with small stalls).

So, besides the deopts, isn't the argument in favor of asm.js AOT 1) faster compilation, since it's a known subset, and 2) no stalls at all, even at the beginning?

I mean, leaving aside the technical reasons, it seems like a win/win situation, as I don't think something with a slow CPU such as a Chromebook could otherwise run asm.js binaries efficiently.
@54

> I mean, letting aside the technical reasons, it seems like a win/win situation

I agree! Chrome desktop/mobile (and Safari) should just add asm.js support. That way, cross-platform apps could more effectively compete with native apps.

Firefox OS has soft-launched. Their apps are all HTML5-based, with asm.js support. Mozilla is on board.

Hopefully Google is up for it! They've already improved their WebView a huge amount with Android 4.4, which is pretty exciting. Apple, on the other hand, still denies JIT to their WebViews. Pretty obvious what's going on there. :'(

Anyways, it's up to us developers to make killer apps which encourage asm.js adoption. Let's start making them!

Cheers,
Chris

Hi, I am new to the v8 project. What's the current state of this discussion? I would be happy if someone could help me get into v8 development.
Not to antagonize a hopefully dead thread, but if you'd like a more breadth-than-depth benchmark/test, there's the emscripten webkit.js port: http://trevorlinton.github.io/

V8 JIT on Chrome v35 takes 1.01/0.389/0.316/0.350/0.203/0.164/0.179 seconds
IM JIT on FireFox  v29 takes 0.09/0.037/0.033/0.035/0.036/0.036/0.037 seconds

The optimizations in Firefox appear to have an order-of-magnitude impact on marked asm.js code within the first 12-15 iterations. Albeit V8/Chrome (after 12-15 iterations) converges around 0.107 seconds, or about 3x slower than FF.

If you don't have lunch plans take a look.
Issue 2976 has been merged into this issue.
How fast would V8 be after this issue is resolved? Would it outperform the DartVM, for example?
Is this thread dead :c? Does this mean 'use asm' won't be in V8? ;-;
I think V8's new engine, TurboFan, will tackle asm.js code. I don't know whether the "use asm" keyword will become functional, but V8 will be faster than it is now on asm.js code.
Currently, asm.js is being used as a testing ground for V8's new, currently experimental compiler TurboFan. We are working on optimizing for it in the new engine, but after we are satisfied with asm.js, it is intended to eventually fully replace our current compiler Crankshaft. Long story short, we feel that performance optimizations of any kind shouldn't be just limited to a single subset of JS. It'll result in faster code across the board.

I will say this much about its status: since it is experimental and still highly unstable, don't expect it to land in stable Chrome in the immediate future. And when it does, it will initially be limited to asm.js.

#59: asm.js performance will be at least extremely close once it's stable. After TurboFan is applied to everything, you'll see a massive speed jump in both Chrome and Node.
Note that #62 is not a V8 team member, nor does he work on TurboFan. The "we" is very misplaced, since he is not speaking for anyone actually working on TurboFan.
#63 Then who the hell is he?! Is there any truth in his words?
Is #62 correct, though?
@all: I apologize for the poor word choice...And yangguo@ is entirely correct.

In #62, s/we/they/g, s/us/them/g, etc.

I will say that it's mostly from interpretation of commit discussion, mailing list discussion, etc. that I've observed.

Must've let my identity get ahead of itself or something... (I have been considering, for a while, contributing to this project in more than simple discussion ;-)
I've been keeping an eye on V8, V8 TurboFan, and the DartVM. Given the results at www.dartlang.org/performance, are these benchmarks http://arewefastyet.com/#machine=28&view=breakdown&suite=asmjs-ubench (asm.js benchmarks for V8 TurboFan) faster than what the DartVM achieves today?
@67 none of those benchmarks exist for Dart so the question you ask has no answer at the moment. I invite you to implement the Dart backend for Emscripten and check the performance out.

Hello,

Thanks for your interest and excitement about TurboFan. Yes, it already generates really good code for JavaScript that is "asm.js"-shaped, but that's certainly not its end goal in life. At the moment we are experimenting with the enabling heuristics for TurboFan; one signal is the "use asm" annotation, but it's not the only signal.

We'll have more details about it when we feel it's ready for launch.

Thanks,
-TF peoples of V8
I really do hope TurboFan claims the title of the fastest VM for web development (except native, of course, although TurboFan beating native would be amazing :D).
I think even current v8 can beat native in some(!) cases; it's just that it can't beat it everywhere (SIMD?), so TurboFan just needs to broaden the range of cases where it is consistently better.
In that case, TurboFan would have to beat the DartVM.

turbofan-asm is to be enabled at stage 1 in the next version of Chromium (or Chrome?), so that's good news. By the way, would TurboFan be an AOT compiler?
#73: AFAIK, it is still a JIT.
Is there some other thread we can talk about turbofan?
Chrome's selling point is that it is the fastest web browser. If Firefox is the fastest now, that's a problem. There is no reason that including asm.js would slow down the other developments, but it would fix Chrome now, and stop forcing actual apps to recommend using Firefox rather than Chrome.

Plus, I do believe that JIT cannot reach full speed unless you implement static typing as an option. It will always cost more to infer type than to just read it. Inferring will always require doing more operations than simply reading text.
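The inferring-versus-reading argument can be made concrete with a toy pair of functions (illustrative only; function names are made up):

```javascript
// Plain JS: the engine watches dbl's arguments at runtime, speculates on
// a type, and must guard and possibly deoptimize if a new type shows up.
function dbl(x) { return x + x; }

// asm.js style: the "+x" coercion states the double type up front, so an
// engine that reads the annotation needs no speculation and no deopt guard.
function dblTyped(x) { x = +x; return +(x + x); }

console.log(dbl(21), dblTyped(21)); // 42 42
```

Both behave identically as JavaScript; the difference is only in how much work the optimizer must do to discover the type.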
We're already beta testing TurboFan in Chrome 41, which significantly improves the performance of numeric code like asm.js. There are additional heuristics and optimizations that are coming, so we are hesitant to close this issue as "Fixed", but one could consider this issue "Mostly fixed". 

We are actively experimenting with the policy to activate TurboFan, and one signal is the "use asm" directive. In response to #73 w.r.t. AOT, currently V8 does not use TurboFan to compile an entire asm.js module at a time.

Thanks,
-TurboFan Team

Owner: titzer@chromium.org
Status: Assigned
Moving this bug from "New" to "Assigned", since TurboFan should solve this issue among many others, though not in the exact way originally proposed.
#78 what are these "others"?
Thanks for your work,

Are there plans to write something (a blog post, wiki article, ...) detailing the TurboFan heuristics (how to hit them) and the optimizations they enable?

I found it a bit hard to get up-to-date details about how most browser optimizations work (probably because they change so often), and I think it would be great to have an official statement about it, maybe with code examples.

Anyway, once again, thanks. It is always impressive to see JS performance improved even more.

I think that asm.js-specific warnings should be emitted, when the "use asm" directive is present but the code is not proper asm.js, before this issue is closed. From the developer's point of view, if it is not recognized by the console, it doesn't feel implemented.
Is there a pointer to how to write JS code that will not de-optimize the optimizers? A list of bad practices that will hurt optimization?
#80 
AFAIK it's only partially enabled in Canary, and is under a few flags. Also, I'm not sure how effective optimizing your code specifically for V8 would be, assuming your target is the Web.

Comment 87 by cnyz...@gmail.com, Apr 13 2016

I tried the emscripten tool and found different web browsers have quite different performance.

If native C++ takes 1x:
Firefox: 2x
Microsoft Edge: 4-5x
IE, Chrome, Safari: 30-40x

Why? Are there any explanations?

Comment 88 by nanne@mycel.nl, Apr 13 2016

Chrome did not implement asm.js; however, they are implementing WebAssembly, which is the successor of asm.js. WebAssembly is currently being tested and is available in experimental versions of Chrome. See:
https://webassembly.github.io/demo/
Mergedinto: 4203
Status: Duplicate (was: Assigned)
Folding this into the asm->wasm issue.

Labels: Priority-2
