Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
00:06 reportable6 left 00:08 reportable6 joined 01:09 Kaiepi left 02:48 sortiz left 05:57 AlexDaniel left 06:00 AlexDaniel joined 06:07 reportable6 left 06:09 reportable6 joined 06:53 Kaiepi joined 06:59 Kaipei joined 07:02 Kaiepi left
lizmat Would it make sense to expand nqp::index to accept two integer lists as haystack and needle, to facilitate searching in binary mode? 09:50
or am I missing some way to easily look for sequences of integers in a Buf otherwise? 09:58
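(For illustration only, a minimal sketch: nqp::index currently operates on strings, and the commented-out call below shows what the *hypothetical* integer-list form being proposed might look like; it does not exist today.)

    use nqp;

    # Existing behaviour: substring search, returns the index or -1
    my $pos = nqp::index("hello world", "world", 0);   # 6

    # Hypothetical extension under discussion (not implemented):
    # nqp::index(nqp::list_i(1, 2, 3, 2, 3), nqp::list_i(2, 3), 0)   # would return 1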
nine lizmat: sounds like high-level functionality that belongs in HLL code. The general guideline is: if something can only be done in the VM, then do it in the VM, otherwise do it in HLL code. 10:18
lizmat well, that could be argued for the current functionality of nqp::index 10:19
10:19 sena_kun joined
lizmat but was implemented at a lower level for efficiency, I'd say 10:19
I'd say, if something cannot be done efficiently in HLL code, it could be done in the VM 10:20
nine No, if something cannot be done efficiently in HLL code, we need to improve the VM so it can be done efficiently. 10:21
In cases like this, where there are two opposing directions I could go, I like the extreme test. Imagine going to the extreme in both directions and seeing which one still makes sense.
The one extreme is to push everything into the VM when this gives even the slightest performance edge. To do that you'll eventually have to include every possible program as highly optimized code in the VM. That's obviously not sustainable. 10:22
lizmat agree
nine The other extreme is to have even the VM implemented in HLL code. And that's....actually possible and is done e.g. in PyPy 10:23
And PyPy is actually faster than the standard VM in many cases. It's just not compatible with extensions to the VM (the XS problem)
lizmat ok, by that logic we should consider re-implementing the regex engine in Raku 10:24
fair enough... I'll look at implementing some binary mode search capability in Raku 10:25
and see where we wind up, ok?
nine Arguably it can only become better :D
lizmat hmmmm.... perhaps
nine As to the problem in question: I don't see anything in principle that would prevent us from having fast binary search in HLL code. I'd guess that the heavyweight native refs we currently have are a bit in the way, but who knows? 10:26
lizmat indeed
ok, thanks for the reality check :-)
nine And yes, by the current guiding principle, index should actually be HLL. And again, I don't see anything in principle that would cause this to be slower. Certainly some current implementation issues though :) 10:28
On a completely different topic: Anyone have any idea why we do this only in precompilation mode? github.com/rakudo/rakudo/blob/mast....nqp#L2633
It would seem to me that even if we're not precompiling we'd want to un-stub clones of a code object if the latter gets dynamically compiled. 10:29
And a probably related question: What is the bug that this comment is about? github.com/rakudo/rakudo/blob/mast....nqp#L2612
lizmat "Tag compile-time stubs so we don't try and serialize their outer (which is the create_code_object method in World)." 10:32
from 5fb6c0bfd3f8ec0de54c90ca01bd46b5dfaab3c3
could be Parrot related... it *is* from 2021 10:33
*2012
10:33 linkable6 left
lizmat it definitely predates my involvement in Rakudo :-) 10:34
10:35 linkable6 joined
nine RakuAST passes 78 test files and 551 spectest files 11:03
Note: this includes _all_ NativeCall tests!
lizmat whee! 11:05
12:07 reportable6 left 12:09 reportable6 joined 12:33 Techcable left 12:55 epony left 14:00 epony joined 14:06 Techcable joined 14:10 codesections joined 14:12 epony left 14:14 epony joined 14:21 codesections left 14:30 codesections joined 16:37 evalable6 left, linkable6 left 16:38 evalable6 joined, linkable6 joined 16:46 Kaipei left 16:57 sena_kun left 16:59 sena_kun joined 18:06 reportable6 left 18:09 reportable6 joined 18:21 Kaipei joined
japhb w00t 19:54
19:55 Kaipei left
japhb nine: I disagree with you on one point: When typical CPUs implement acceleration for something in a way that the VM is normally incapable of expressing, it's useful to have some way of accessing that native CPU functionality. "String" (which is actually binary) index search has been built into CPUs since the early '80s, because it's a foundational operation. And in the years since, it has been accelerated 19:57
by vector operations, which we *also* don't have any access to right now, because we don't have any way in our current compiler/VM stack to vectorize.
nine japhb: the VM is free to turn the bytecode that gets generated from HLL code into whatever it deems useful. We already JIT compile that bytecode into (horribly inefficient) machine code. Our JIT compiler is at the moment rather limited, so it doesn't take advantage of these advanced vector operations. But it could and it should. That doesn't require some higher functionality to be implemented in the VM 20:26
directly (though that's of course the easier way to do it)
20:46 Kaipei joined
japhb nine: Sure, and I'd agree to avoid adding VM ops *if* the VM were in use by a substantial number of users outside our community. In this case, the VM exists *for us*, so we are more free to add/remove functionality as desired. Given the total amount of "free time" and expertise we currently have in the community, 20:54
we can get more out of supporting such operations efficiently as VM ops now and *later* figuring out how to vectorize our JIT, which is Much Harder. 20:55
And when we *do* have a vectorized JIT, we can remove those ops without hurting our community, because we control both sides of the API boundary. 20:56
21:55 epony left 22:22 epony joined 22:32 linkable6 left 22:34 linkable6 joined 22:44 Kaipei left 22:56 sena_kun left