Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
00:02 reportable6 left 00:05 reportable6 joined
timo hum, we can't write to multiple registers from a single dispatch, right? callsites don't really have a concept of RW for registers 00:15
bootarray, create, const_i64_16, setelemspos, const_i64_16, setelemspos, followed by a long run of getcode, push_o pairs (several dozen of them) 00:37
could totally become one syscall 00:38
what's the max number of params to a dispatch_* i wonder 00:40
Geth MoarVM/new-special-return: f1edb2ba90 | (Jonathan Worthington)++ | 2 files
Eliminate remove_one_frame

We can move all of the logic into MVM_callstack_unwind_frame, which works out rather more efficiently. At least some compilers still need a hint that it's OK to inline exit_frame. Before this, the average call to remove_one_frame entailed 118 instructions (including those of the called MVM_callstack_unwind_frame). Now just 92 instructions are retired in MVM_callstack_unwind_frame.
01:00
jnthnwrthngtn timo: Limited by max callsite size 01:01
Which is a 16-bit number of args
timo perfect 01:05
how do you feel about making that relatively common series of ops a syscall?
well, it comes once per compunit i guess
oh, there's no "coderef" callsite flag 01:06
jnthnwrthngtn Could be quite beneficial 01:13
Saves a hell of a lot of loops around the interpreter at least
Sleep time for me, 'night
timo also, it's code that only runs a single time, so here the "write it in C to go fast" approach actually makes sense for once 01:17
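For illustration only, the kind of syscall timo is describing might look roughly like the following in C. Nothing below is from the MoarVM repository: the function, its parameters, and the cu->body.coderefs field access are assumptions based on how the getcode op behaves, with MVM_repr_push_o as the usual push helper.

    /* Hypothetical sketch (not MoarVM source): push a contiguous range of a
     * compunit's coderefs onto an array in one call, replacing the repeated
     * getcode + push_o pairs above. Assumes moar.h is available and that the
     * compunit's code object table is reachable as cu->body.coderefs. */
    static void collect_coderefs(MVMThreadContext *tc, MVMObject *target,
                                 MVMCompUnit *cu, MVMuint32 start, MVMuint32 count) {
        MVMuint32 i;
        for (i = 0; i < count; i++)
            MVM_repr_push_o(tc, target, cu->body.coderefs[start + i]);
    }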
02:58 Altai-man joined 02:59 sena_kun left 05:04 sourceable6 left, committable6 left, releasable6 left, benchable6 left, linkable6 left, unicodable6 left, notable6 left, squashable6 left, quotable6 left, nativecallable6 left, greppable6 left, bisectable6 left, statisfiable6 left, evalable6 left, tellable6 left, reportable6 left, bloatable6 left, coverable6 left, shareable6 left, linkable6 joined, nativecallable6 joined 05:05 statisfiable6 joined, greppable6 joined, tellable6 joined, squashable6 joined, committable6 joined, evalable6 joined 05:06 coverable6 joined, shareable6 joined, bloatable6 joined 05:07 sourceable6 joined 06:04 benchable6 joined 06:05 quotable6 joined, reportable6 joined 06:06 unicodable6 joined
Nicholas good *, #moarvm 06:56
nine: I think currently I'd view myself as "stuck".
Report is:
on master (strictly, the parent of the merge, so that it's the last master commit before new-disp-nativecall) 06:57
in MVM_nativecall_invoke, the memory at the pointer in the assignment `MVMNativeCallBody *body = MVM_nativecall_get_nc_body(tc, site);` 06:58
comes directly from MVM_nativecall_build
whereas in the first commit of new-disp-nativecall, b7d40359a1503dda41e8592297d305ea7b1f2ba5, that memory has been (incompletely) copied by copy_to in src/6model/reprs/NativeCall.c 07:00
and I don't spot a clean way to fix copy_to to work with both layouts. Or really, *what* needs copying, and how. (But I guess I figure that out by looking to see what gets freed when you deallocate a NativeCall thingy.) 07:01
aha, code with #ifdef HAVE_LIBFFI 07:02
(anyway, need to do something else first)
07:05 notable6 joined 07:06 bisectable6 joined, releasable6 joined 08:06 evalable6 left, linkable6 left 08:07 evalable6 joined 08:08 linkable6 joined 08:47 childlikempress joined, Kaipi joined, moon-child left 08:50 Kaiepi left 08:58 Kaipi left, Kaiepi joined 09:00 childlikempress is now known as moon-child 09:22 Kaiepi left, Kaiepi joined 09:33 linkable6 left
nine Oh what I would give to have 15-gh_1202.t finally stable... 09:37
Nicholas how many hours do you have to give? :-/ 09:42
nine Quite a few, looking at the time I've already spent fixing bugs. Trouble is that I don't know how to approach this, as of course I cannot reproduce the issues locally. 09:48
timo have we tried running rr on the CI? in theory we could record every test we run with not that big a penalty, and if a test fails we keep the recording, otherwise we delete it right away? 10:03
10:34 evalable6 left 10:36 linkable6 joined, evalable6 joined
lizmat hmmm... 3rd time this week that t/spec/S17-promise/nonblocking-await.t hangs for me :-( 10:37
nine Actually I can reproduce those segfaults by running the test in 16 shells in parallel! 10:47
#0 find_interned_callsite (tc=0x7f42500cc350, cs_ptr=0x7f425e892038, steal=0) at src/core/callsite.c:194
194 if (callsites_equal(tc, interns->by_arity[num_flags][i], cs, num_flags, num_nameds)) {
jnthnwrthngtn moarning o/ 10:49
Nicholas \o 10:50
lizmat o/.all 10:52
nine Of course I've yet to reproduce it in rr 11:01
But at least I've got several core dumps that all point at the above location. So a bit of source code reading may raise an issue
Geth MoarVM/new-special-return: 05b3468567 | (Jonathan Worthington)++ | 2 files
Eliminate remove_one_frame

We can move all of the logic into MVM_callstack_unwind_frame, which works out rather more efficiently. At least some compilers still need a hint that it's OK to inline exit_frame. Before this, the average call to remove_one_frame entailed 118 instructions (including those of the called MVM_callstack_unwind_frame). Now just 92 instructions are retired in MVM_callstack_unwind_frame.
11:09
jnthnwrthngtn nine: I think we need to read interns->num_by_arity[num_flags] and interns->by_arity[num_flags] once each before the loop 11:13
Since those can change during it. 11:14
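A minimal sketch of that read-once idea, for illustration; the actual change is the PR announced below, not this snippet. The field and function names come from the backtrace and discussion above, while the integer types and local-variable shape are assumptions.

    /* Sketch of the lookup loop with the count and array pointer read once
     * into locals, so a writer that grows the intern table (publishing the
     * new array and count behind a write barrier, per the PR below) cannot
     * change them mid-loop. Types are assumptions. */
    MVMuint32     num_callsites   = interns->num_by_arity[num_flags];
    MVMCallsite **arity_callsites = interns->by_arity[num_flags];
    MVMint32      found = -1;
    MVMuint32     i;
    for (i = 0; i < num_callsites; i++) {
        if (callsites_equal(tc, arity_callsites[i], cs, num_flags, num_nameds)) {
            found = (MVMint32)i;
            break;
        }
    }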
I've pushed all I'm going to to new-special-return now (well, except changes in response to feedback). 11:16
Figured I may as well take the changes to their logical conclusion, since "why didn't you just go the final step and..." would have been an obvious review comment :)
timo jnthnwrthngtn: do you have a suggestion for how to handle the fact that callsites don't have a "coderef" flag, for the proposed "collect-coderefs-into-array" or whatever syscall?
the validator wouldn't be able to verify unless it special-cases the op, or if there's a flag for syscalls "only" 11:17
it could use integers as indices into the codes table or something 11:20
jnthnwrthngtn Hm, it's not actually such a good saving really because arguments always have to go into registers 11:22
Even literal ones
timo ah, so we'd be blowing up the number of locals 11:24
and get a const_i64_16 for every coderef anyway
jnthnwrthngtn Yeah. There must be some better way to do this though...
timo .o( parallelize getcode )
wval an integer array that has the numbers in it is one way 11:25
Geth MoarVM/fix-intern-lookup: 08d41d5e39 | (Jonathan Worthington)++ | src/core/callsite.c
Avoid thread safety issues in intern lookups

The number of callsites and the pointer to the callsites memory may change. We carefully update these with a write barrier when making the change; however, the reading code also needs to take corresponding care.
11:29
MoarVM: jnthn++ created pull request #1591:
Avoid thread safety issues in intern lookups
jnthnwrthngtn timo: Yeah, that (wval approach) may be best
timo can't see what's what from a spesh dump or moar --dump any more 11:30
jnthnwrthngtn nine: ^^ may well help with the segv you posted above
nine jnthnwrthngtn: I hope so. I did basically the same but without the barrier and can still reproduce the issue 11:31
jnthnwrthngtn Ah, darn. 11:35
Maybe the barrier matters, though 11:36
nine Another segfault. Different location though:
#0 0x00007f29fa514ef2 in MVM_disp_program_run (tc=0x7f29e40fe180, dp=0x7f29e40cf390, record=0x7f29d8064b78, spesh_cid=0, bytecode_offset=4294967295, dp_index=5) at src/disp/program.c:2944
2944            if (STABLE(val.o) != (MVMSTable *)dp->gc_constants[op.arg_guard.checkee])
val.o is NULL 11:37
jnthnwrthngtn We should never end up with real NULLs being read out of registers (and thus arguments)
nine And another one in find_interned_callsite: (gdb) p arity_callsites[0]
Cannot access memory at address 0xac0afec0
jnthnwrthngtn Oh, hm. 11:39
timo someone snuck a cafe in there
jnthnwrthngtn OK, no idea 11:40
It's doing realloc_at_safepoint precisely to avoid any issues
nine I noticed
It gets weirder. With this particular segfault I don't even see any other threads dabbling in callsite code at the time of the segfault 11:42
Geth MoarVM: 14a8befd1c | (Stefan Seifert)++ | 3 files
Eliminate hllbool/boot-boolify-boxed-int pairs in spesh

No need to turn an int into an HLL bool just to turn it back to an int when using it as a condition for jumps. Eliminate those pairs same as we do with box/unbox.
11:45
MoarVM: 6ff8155769 | (Jonathan Worthington)++ (committed using GitHub Web editor) | 3 files
Merge pull request #1586 from MoarVM/optimize_hllbool_boolify_pairs

Eliminate hllbool/boot-boolify-boxed-int pairs in spesh
11:55 sena_kun joined
nine The pointer to the by_arity that's involved in the segfault does look different to the others: 11:59
(gdb) p (MVMCallsite**[20])*(interns->by_arity)
$33 = {0x15f958c, 0x15f95d4, 0x48ba864, 0x7f76ac0afec0, 0x35e05a0, 0x4127490, 0x3696fe0, 0x4a20c40, 0x15f99c4, 0x3db950c, 0x0, 0x3db9e0c, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
It's the 0x7f76ac0afec0
timo do you still have the list of memory maps? 12:00
nine Oh...arity_callsites is 0xac0afec0
Notice that they are not that different at all?
timo some come from our rodata 12:01
12:02 reportable6 left
lizmat nine: 14a8befd1c feels like a reason to bump MoarVM 12:02
agree?
nine lizmat: yes, could make bisection easier in case it causes troubles 12:03
jnthnwrthngtn I don't think there was a bump since 770e82e484, and it's also good to get that in and exercised separately from the special-return stuff 12:04
12:05 linkable6 left, reportable6 joined
nine "(gdb) maintenance info sections" shows the pointer is between: 12:05
[27] 0x7f76ac000000->0x7f76ac0b1000 at 0x036d3000: load7 ALLOC LOAD HAS_CONTENTS
[28] 0x7f76ac0b1000->0x7f76b0000000 at 0x03784000: load8 ALLOC READONLY
12:06 linkable6 joined
nine A different segfault shows the same weird pointer difference: 12:08
(gdb) p interns->by_arity[5]
$5 = (MVMCallsite **) 0x7fcb7c206210
(gdb) p arity_callsites
$6 = (MVMCallsite **) 0x7c206210
gfldex m: my $a = 42; say $a.not-there(); CATCH { when X::Method::NotFound { .resume } } 12:22
camelia Already set resume init args for this dispatcher
in block <unit> at <tmp> line 1
gfldex jnthnwrthngtn: ^^^ this seems to be a regression (of sorts)
jnthnwrthngtn I assume it previously indicated that the exception was not resumable? 12:42
lizmat yes
actually, no
committable6 2021.04 my $a = 42; say $a.not-there(); CATCH { when X::Method::NotFound { .resume } } 12:43
committable6: 2021.04 my $a = 42; say $a.not-there(); CATCH { when X::Method::NotFound { .resume } }
committable6 lizmat, ¦2021.04: «Method 'not-there' not found for invocant of class 'Int'␤ in block <unit> at /tmp/Jeekr0Y10o line 1␤␤ «exit code = 1»»
jnthnwrthngtn Heh, as if the resume was ignored :)
lizmat yup
committable6: 2021.04 my $a = 42; say $a.not-there(); CATCH { }
committable6 lizmat, ¦2021.04: «No such method 'not-there' for invocant of type 'Int'␤ in block <unit> at /tmp/Nk3H4nJWD0 line 1␤␤ «exit code = 1»»
jnthnwrthngtn Perhaps we need an X::NonResumable role with a method resume that dies, and to apply it to various things (probably including X::Comp). 12:45
In theory we may be able to actually make method not found resumable now, but I'm not sure we should. 12:46
lizmat why not? have it return Nil as if .?method ? 12:50
jnthnwrthngtn Feel free to try and implement it, if you think it's worth it. 12:52
I suspect the typical consequence of resuming will be that the code that did the method call won't handle the situation either. 12:54
lizmat hmmm.. well, if you have the CATCH in place to resume it, then you're sorta expecting it? 12:56
I mean, there is overhead involved with .?method, right
it would be a way to not have that overhead in the cases where it has the method?
jnthnwrthngtn The overhead of .?method is going to be vastly lower than going through the exception system.
lizmat but the overhead of .?method would be for every call, even the successful ones, no? 12:57
jnthnwrthngtn m: class C { method m() { } }; for ^10_000_000 { C.m }; say now - INIT now 12:58
camelia 0.068384721
jnthnwrthngtn m: class C { method m() { } }; for ^10_000_000 { C.?m }; say now - INIT now
camelia 0.060300922
nine will always use .? from now on instead of .
jnthnwrthngtn heh, too small :)
lol
Anyway, I think that in the case the method exists on the type then . and .? result in the same dispatch program :)
lizmat ok, in that case, colour me convinced :-) 12:59
jnthnwrthngtn m: class C { method m() { } }; for ^100_000_000 { C.m }; say now - INIT now
camelia 0.618730025
jnthnwrthngtn m: class C { method m() { } }; for ^100_000_000 { C.?m }; say now - INIT now
camelia 0.585454602
lizmat that's a trend!
lizmat also thinks about always using .?method now :-) 13:00
13:10 vrurg_ joined 13:12 vrurg left
Geth MoarVM/new-disp-nativecall-libffi: f6abc75ab2 | (Nicholas Clark)++ | src/6model/reprs/NativeCall.c
When NativeCall is libffi, don't leak ffi_arg_types during GC

body->ffi_arg_types is allocated with `MVM_malloc`, so needs a corresponding `MVM_free`.
13:49
MoarVM/new-disp-nativecall-libffi: af6f9b84de | (Nicholas Clark)++ | src/6model/reprs/NativeCall.c
When NativeCall is libffi, copy `ffi_ret_type` and `ffi_arg_types`

This bug wasn't exposed until now.
MoarVM/new-disp-nativecall-libffi: 26068a0440 | (Stefan Seifert)++ (committed by Nicholas Clark) | 8 files
Add boot-foreign-code dispatch terminal to replace native(call)invoke ops

With the new dispatch terminal, NativeCall will be able to benefit from the more efficient argument passing conventions. It will also benefit from dispatch programs tailored to the callsite, i.e. being able to replace more costly checks for containers with cheap guards.
Nicholas nine: effectively I'm sort-of-stuck. I have a question at the end of github.com/MoarVM/MoarVM/commit/26068a0440 14:02
nine Nicholas: oh, sorry. It's pretty safe to ignore that particular test. It requires fully working manage-explicitly support which didn't appear until some time later (and requires cooperation between rakudo and MoarVM). Indeed I discovered that this was broken even pre-new-disp. 14:07
14:07 TempIRCLogger left 14:08 TempIRCLogger joined
Nicholas OK. I might get a chance to get back to this in I guess about 4 hours 14:11
gfldex If .resume took an argument that is used as the failed expression's return value, one could actually handle this case (and likely many other cases). 14:18
lizmat gfldex: nqp::setpayload($!ex,value) in Exception.resume ? 14:23
jnthnwrthngtn fwiw, I think the next thing I'll take on is the LEAVE and friends situation (from the point of view of efficiency and also fixing the odd disappearing return value bug I uncovered a while back) 14:31
lizmat jnthnwrthngtn: bump MoarVM in the interest of testing and bisectability ? 14:32
jnthnwrthngtn lizmat: I think you asked earlier and I agreed? :)
lizmat oops, missed that :-) 14:33
going to do it now
jnthnwrthngtn Anyway, said next thing will probably be for next week. I've got a rough sketch of a plan for it
Basically, just emit code to run all of the exit phasers ahead of the (MoarVM) return op, so in a normal return they just run as normal code, and then identify the location of them within the block (handler style) for the unwind case. 14:34
nine So far all I've apparently managed is to make it less common. I've also tried moving the first find_interned_callsite call to after we've taken the lock and as expected that makes the problem go away. 14:35
jnthnwrthngtn Which will mean we also get inlining of them anyway.
dogbert17 nine: is this the place of the SEGV? 2944 if (STABLE(val.o) != (MVMSTable *)dp->gc_constants[op.arg_guard.checkee]) 14:54
nine dogbert17: yes
dogbert17: well for one of the two I see. The more frequent one is in find_interned_callsite 14:55
lizmat fwiw, I can not just restart applications after the bump :-( 14:58
Missing or wrong version of dependency 'gen/moar/stage2/NQPHLL.nqp' (from 'site#sources/45334C557865A97D1ECA0D3F3A3FAF2017FCE553 (OO::Monitors)') 14:59
always OO::Monitors :-(
installing / uninstalling OO::Monitors does not fix :-( 15:00
uninstalling Cro::HTTP 15:01
installing Cro::HTTP fails with: Missing or wrong version of dependency 'gen/moar/stage2/NQPHLL.nqp' (from 'site#sources/45334C557865A97D1ECA0D3F3A3FAF2017FCE553 (OO::Monitors)') 15:02
am I really the only one suffering from this ?
jnthnwrthngtn I've not seen it, fwiw 15:03
vrurg_ lizmat: not this time, but sometimes it happens. Never in time for me to be able to trace it down.
I usually just reinstall. Modules too.
Feels like under some conditions modules are not recompiled and the bytecode remains bound to the previous NQPHLL. But this is only a feeling, no proof. 15:05
lizmat it's bloody annoying :-(
looks like nuking all precomp dirs is a solution 15:17
nine We shouldn't even find precomp files that are bound to outdated NQP modules as we're managing precomp files by compiler-id. And the compiler-id is derived from a sha1 of rakudo's sources and NQP's id which in turn is a sha1 of its sources. 15:18
lizmat well, that's the theory
practice is different, for me at least
the odd thing is, is that it always seem to center around OO::Monitors 15:19
nine It's only gonna stop once someone digs down into it instead of deleting the evidence
lizmat true
is there a precompilation dir sanity tool ?
15:23 [Coke] left
nine Well there's at least one (minor) issue: _all_sources in tools/lib/NQP/Config/Rakudo.pm only looks for .nqp and .pm6 files which doesn't cover src/core.c/ParallelSequence.rakumod. _all_sources is used to find the things we need to sha1 15:29
lizmat ok, that should be easily fixed 15:30
nine lizmat: do you use any -j option when building rakudo? 15:31
lizmat no 15:32
perl Configure.pl --gen-moar --make-install
is what I use
vrurg_ Rakudo build doesn't parallelize anyway. 15:36
I mean, .NOTPARALLEL is in action. 15:37
15:37 vrurg_ is now known as vrurg
nine Boy this stuff changed a whole lot since I wrote it. One thing seems definitely odd to me: the calculation of the source digest (which includes gen/nqp-version) got moved to this NQP::Config::Rakudo module, which suggests that it's run as part of Configure.pl. 15:39
But if that is so, how could it pick up changes since Configure.pl ran?
lizmat aha... the plot thickens ? 15:40
otoh, I've only seen this problem *after* a new Configure.pl
nine If this source digest is calculated before Configure.pl updates NQP, then it won't pick up changes in NQP either. I think you are kinda special by relying on --gen-moar. I always update and install the 3 repos manually 15:41
lizmat well, that's one of the reasons I keep using that, as that is the "user" way of building Rakudo :-) 15:42
15:57 [Coke] joined
nine I think I know what's going on with find_interned_callsite. It's quite devious actually. If I'm correct, the segfaults I see now only appear with a --valgrind build of MoarVM. That's because with that option the FSA pre- and postfixes allocations with some red zone bytes. MVM_FSA_REDZONE_BYTES is defined to be 4. So all pointer arrays allocated with the FSA will be misaligned. 16:03
That's why we only read half the pointer. The other half may not be written yet as atomicity of pointer writes can only be expected for properly aligned pointers. 16:04
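A small standalone C demo of the layout nine describes, for illustration; REDZONE here stands in for MVM_FSA_REDZONE_BYTES and nothing else is taken from MoarVM.

    /* Illustration of the misalignment: compile and run with any C compiler. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REDZONE 4   /* stands in for MVM_FSA_REDZONE_BYTES */

    int main(void) {
        /* An FSA-style allocation: red zone bytes before (and after) the payload. */
        unsigned char *raw = malloc(REDZONE + 8 * sizeof(void *) + REDZONE);
        void **callsites = (void **)(raw + REDZONE);

        /* malloc returns suitably aligned memory, so with a 4-byte red zone the
         * pointer slots start at an address that is only 4-byte aligned. An
         * 8-byte store to such a slot is not guaranteed to be atomic, so a
         * concurrent reader can observe half old / half new bits, which matches
         * the 0x7fcb7c206210 vs 0x7c206210 pattern in the gdb output above. */
        printf("slot alignment mod 8: %d\n", (int)((uintptr_t)callsites % 8)); /* 4 */

        free(raw);
        return 0;
    }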
jnthnwrthngtn omg 16:07
That's both feasible and terrible 16:08
nine I've checked. The array is unaligned. I've now changed MVM_FSA_REDZONE_BYTES to 8 and running it again. So far no trouble
And yes, terrible indeed
jnthnwrthngtn So this was, after all, perhaps not the real problem? 16:09
(Although I think there was actually a risk that the PR I did solves)
nine Well you may have fixed the real problem already.
jnthnwrthngtn Ah
Well, the PR makes CI happy at least...but that may be luck given it failed rarely 16:10
nine I guess I should try to reproduce the problem without your fix but with my red zone adjustment. But for now my waterrower calls and then dinner
jnthnwrthngtn Enjoy :)
16:38 nebuchadnezzar left 16:55 CaCode joined 17:13 dogbert17 left, dogbert11 joined
timo could be you need to explicitly re-run Configure.pl so that the --version we report really changes? 17:23
for the problem liz has with modules
i see you already figured that part out 17:24
lizmat fwiw, I haven't seen the issue since github.com/rakudo/rakudo/commit/e98e17da7d 17:25
sanity check: if I have a *lot* of objects with an int64 attribute and I replace that by an int32 attribute, would I save 4 bytes of memory per object? 17:26
jnthnwrthngtn Maybe not, because of alignment rules 17:29
If you had two 64-bit attributes one after another and both became int32, yes, that's certainly a saving
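The alignment point is easy to see with sizeof on a few toy structs; this is illustrative only, not MoarVM's actual attribute layout.

    #include <stdint.h>
    #include <stdio.h>

    struct a { int64_t x; void *p; };            /* 16 bytes on x86-64 */
    struct b { int32_t x; void *p; };            /* still 16: padding re-aligns p */
    struct c { int64_t x; int64_t y; void *p; }; /* 24 bytes */
    struct d { int32_t x; int32_t y; void *p; }; /* 16: the two int32s share a slot */

    int main(void) {
        printf("%zu %zu %zu %zu\n", sizeof(struct a), sizeof(struct b),
               sizeof(struct c), sizeof(struct d));  /* 16 16 24 16 */
        return 0;
    }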
lizmat yeah I feared as much 17:30
so that's one optimization out of the window :-)
jnthnwrthngtn This is why things like the heap snapshot analyzer do the inside-out pattern 17:31
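And a sketch of the inside-out (structure-of-arrays) idea jnthn refers to: one tightly packed array per attribute instead of one padded record per object. The type and field names below are invented for illustration and are not the heap snapshot analyzer's real data structures.

    #include <stdint.h>
    #include <stdlib.h>

    /* One record per object: each record pays its own alignment padding. */
    typedef struct {
        void    *description;
        int32_t  size;           /* 4 bytes of padding follow on x86-64 */
    } EntryRecord;

    /* Inside-out: one packed array per attribute, all indexed by object id,
     * so an int32 attribute really does cost only 4 bytes per object. */
    typedef struct {
        void    **description;   /* description[i] belongs to object i */
        int32_t  *size;          /* size[i] belongs to object i */
        size_t    num_entries;
    } Entries;

    Entries entries_alloc(size_t n) {
        Entries e = { malloc(n * sizeof(void *)), malloc(n * sizeof(int32_t)), n };
        return e;
    }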
18:01 Kaiepi left 18:02 Kaiepi joined 18:15 sena_kun left
jnthnwrthngtn home time & 18:16
18:20 CaCode_ joined 18:23 CaCode left
Geth MoarVM/libffi-nativecall-fixes: a3c17d0f5d | (Nicholas Clark)++ | src/6model/reprs/NativeCall.c
When NativeCall is libffi, don't leak ffi_arg_types during GC

body->ffi_arg_types is allocated with `MVM_malloc`, so needs a corresponding `MVM_free`.
18:38
MoarVM/libffi-nativecall-fixes: a2a82f990a | (Nicholas Clark)++ | src/6model/reprs/NativeCall.c
When NativeCall is libffi, copy `ffi_ret_type` and `ffi_arg_types`

This bug was only exposed as a side effect of refactoring NativeCall to use new dispatch.
MoarVM: nwc10++ created pull request #1592:
libffi nativecall fixes
18:42 CaCode- joined 18:45 CaCode_ left
Geth MoarVM/new-disp-nativecall-libffi: f0dd7be6cd | (Stefan Seifert)++ (committed by Nicholas Clark) | 4 files
No longer allocate an argument array for generic native calls

Add a new variant MVM_nativecall_dispatch which understands the new dispatcher argument passing convention to avoid allocating and populating an argument array for every call.
19:07
MoarVM/new-disp-nativecall-libffi: fc705d3f5a | (Nicholas Clark)++ | src/core/nativecall_libffi.c
Implement MVM_nativecall_dispatch for libffi
Nicholas have one more commit.
Geth MoarVM/new-disp-nativecall-libffi: 68472b688e | (Stefan Seifert)++ (committed by Nicholas Clark) | src/core/nativecall_dyncall.c
Fix segfaults when primitive parameters are passed to native function

The dispatcher calling convention allows for unboxed values to get passed to a native function. Need to handle those in MVM_nativecall_dispatch instead of blindly assuming that we always get objects.
20:31
MoarVM/new-disp-nativecall-libffi: e88c027b74 | (Nicholas Clark)++ | src/core/nativecall_libffi.c
Also handle primitive parameters without segfaulting in the libffi code.

The dispatcher calling convention allows for unboxed values to get passed to a native function. Need to handle those in MVM_nativecall_dispatch instead of blindly assuming that we always get objects.
nine The MVM_disp_program_run segfault looks like a deopt materialization issue 20:47
Nicholas do I need to worry about this? 20:50
nine nah 20:51
jnthnwrthngtn Guess getting back into PEA should be my next stop after the LEAVE thing :) 21:00
21:01 evalable6 left, linkable6 left 21:02 evalable6 joined
nine 32657 rr recordings weighing 254 GB and 3 of them caught segfaults 21:13
Nicholas jnthnwrthngtn: I sort of think "taking care not to burn out" should be your next step. 21:16
Geth MoarVM/new-disp-nativecall-libffi: abc6aacbbd | (Stefan Seifert)++ (committed by Nicholas Clark) | 4 files
dispatcher-track-unbox-int
21:23
jnthnwrthngtn Well, kinda "working on" that by pretty much not working on MoarVM stuff at weekends, and not rushing back into RakuAST work. 21:24
nine Weekends off...a fascinating concept :)
21:24 leedo_ joined 21:25 rba_ joined 21:26 leedo left
lizmat meh, how I wish IterationBuffer also had a splice method 21:28
jnthnwrthngtn Weekends still seem to end up filled with things anyway. :)
21:28 rba left, rba_ is now known as rba
jnthnwrthngtn lizmat: For what purpose? 21:32
lizmat making Array::Sorted::Util work transparently on IterationBuffers also
Geth MoarVM/new-disp-nativecall-libffi: 4 commits pushed by (Stefan Seifert)++, (Nicholas Clark)++ 21:56
Nicholas I'm going to bed now.
jnthnwrthngtn Rest well o/ 21:58
22:04 linkable6 joined
jnthnwrthngtn Not far to go until a Rakudo build (`make`) is within a minute on my home box (it's ~62s now) :) 22:17
[Coke] O_o; 22:20
jnthnwrthngtn Ah, that's on the new-special-return branch, fwiw 22:22
[Coke] does that include moar & nqp builds?? 22:29
ah, you said "make", not "configure". :)
still, nice!
22:30 Kaiepi left 22:32 Kaiepi joined
jnthnwrthngtn Yeah, it's a 6s MoarVM and 18s NQP build for me. 22:36
Which ain't too bad either
[Coke] OO^ 22:40
22:49 MasterDuke left 23:33 CaCode- left 23:55 nebuchadnezzar joined