github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
timotimo hrm. i'm unboxing the enum values at spesh time now (if they're constant and known, of course), but it looks like that part of the optimization happens too late to propagate values forward towards being used as flags for readuint 01:58
timotimo doing it post_inline as well seems fine 02:01
timotimo how do i even implement coerce_ui and coerce_iu when all we have for known values is a .i value that's MVMint64? though bit-for-bit it's also the unsigned value, as long as we don't do certain sign-sensitive ops on it 02:12
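A minimal sketch of what timotimo describes: since signed and unsigned 64-bit integers share a bit pattern, a known-value fact can be propagated through coerce_iu/coerce_ui by copying the .i slot verbatim. The fact structure and flag name below follow the style of MoarVM's spesh facts, but the surrounding code is an assumption, not the actual implementation.

    /* Propagate a known constant through coerce_iu/coerce_ui: the MVMint64
     * payload is, bit-for-bit, also the unsigned value, so it can be copied
     * as-is; only sign-sensitive ops (division, comparison, widening) would
     * need to care about the distinction. Illustrative, not MoarVM source. */
    if (src_facts->flags & MVM_SPESH_FACT_KNOWN_VALUE) {
        tgt_facts->flags  |= MVM_SPESH_FACT_KNOWN_VALUE;
        tgt_facts->value.i = src_facts->value.i;  /* bit-for-bit copy */
    }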
nwc10 good *, #moarvm 06:10
jnthn morning o/ 09:32
tellable6 2020-07-01T08:05:07Z #raku <SmokeMachine> jnthn I'm still just wondering, sorry if interrupting, but would it make sense if there was something like RakuAST::Statement::Conditional that RakuAST::Statement::If and RakuAST::Statement::With would extend that?
2020-07-01T08:10:29Z #raku <SmokeMachine> jnthn the same for RakuAST::Statement::Elsif and RakuAST::Statement::Orwith
hey jnthn, you have a message: gist.github.com/b46293967dd7cd9af3...3647684c58
nwc10 \o 09:38
jnthn So...where was that ASAN barf that I was going to look at today... 10:08
Ah, paste.scsys.co.uk/591952 10:11
jnthn And...it doesn't blow up 10:14
Lots of leaks though :)
nwc10 do we "blame" timotimo for fixing something? 10:17
the harness reported it in the list of errors when I ran all the tests today in parallel
but I can't recreate any technicolor barfage when I run it individually today 10:18
or even failures
Geth MoarVM/new-disp: 26142fa6c2 | (Jonathan Worthington)++ | 3 files
Free memory associated with dispatch recordings
10:31
jnthn That's nearly all of the leak barf
Woulda been nice to provoke the other failure though 10:32
Unless it really is fixed, but that you saw it in the parallel run this morning makes me suspect that it just learned to play hide and seek better...
Hm, and it's in legacy args processing too 10:33
Though still the output makes no sense to me 10:34
Geth MoarVM/new-disp: 9ffe441e86 | (Jonathan Worthington)++ | src/core/callsite.c
Fix a leak when we drop all callsite args
10:41
jnthn And that nails the last new-disp related ASAN leak complaint I've seen
Geth MoarVM/new-disp: 3cb8874d6b | (Jonathan Worthington)++ | src/core/callstack.c
Free dispatch recording on exception too
10:50
jnthn Of course, right after saying it was the last one, I spot another :)
nwc10 jnthn: no, strictly and to be clear "harness reported test not OK" 11:01
reported the test as not 100% passing
er, actually Parse errors: No plan found in TAP output
but there were also all those whatever-it-was syntax error thingies that I saw and you could never reproduce 11:02
that made no sense
I no longer see those
jnthn That's something at least 11:10
ooh, lunch time
nwc10 jnthn: when running tests in parallel, the harness reports "Parse errors: No plan found in TAP output" for all of t/02-rakudo/03-corekeys-6d.t t/02-rakudo/03-cmp-ok.t t/04-nativecall/01-argless.t t/07-pod-to-text/02-input-output.t t/07-pod-to-text/01-whitespace.t 12:29
there is no extra output on STDERR
tests fail like this:
t/04-nativecall/01-argless.t .................................... Dubious, test
returned 1 (wstat 256, 0x100)
No subtests run
odd, I seem to be seeing different behaviour between my two terminal windows
so ... what differs in the ENV vars
jnthn Mmmm....that was a good bowl of red curry 12:29
nine loves red curry 12:32
jnthn I'm lucky to have a place reliably doing a good one about 5 minutes from the office. :) 12:49
moritz is more one for fried noodles or fried rice 12:54
jnthn nwc10: In spectest I get t/spec/S17-promise/start.t doing similar, but it runs fine alone. grmbl. 12:56
nwc10 my hunch is that it is something related to load average
and hence speeds
but does that mean that spesh can complete earlier or later, and that might then inject changes into some other running code at different points in its execution?
jnthn Potentially, yes
Spesh runs on a background thread, so unless the blocking env var is set, its behavior will vary, and in a threaded program it will still vary even with that, due to the timing differences caused by the threads themselves
nwc10 but I do have this set: MVM_SPESH_BLOCKING=1
so, hmmm
jnthn Will see if I can catch one later. Going to try and get the reliable failures sorted out first.
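For context, a rough sketch of the gate MVM_SPESH_BLOCKING implies: with the variable set, a thread that submits a frame for specialization waits for the spesh worker before continuing, removing one source of timing nondeterminism (thread interleaving remains, as jnthn notes). Everything except the env var name is a hypothetical stand-in, not MoarVM's actual internals.

    #include <stdlib.h>

    typedef struct ThreadContext ThreadContext;  /* opaque, illustrative */
    typedef struct Frame Frame;

    void send_to_spesh_worker(ThreadContext *tc, Frame *f);      /* hypothetical */
    void wait_for_spesh_completion(ThreadContext *tc, Frame *f); /* hypothetical */

    static int spesh_blocking_enabled(void) {
        const char *v = getenv("MVM_SPESH_BLOCKING");
        return v != NULL && v[0] != '\0' && v[0] != '0';
    }

    void enqueue_for_spesh(ThreadContext *tc, Frame *f) {
        send_to_spesh_worker(tc, f);           /* hand frame to the worker */
        if (spesh_blocking_enabled())
            wait_for_spesh_completion(tc, f);  /* block until specialized */
    }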
Geth MoarVM/new-disp: f8cef2ff69 | (Jonathan Worthington)++ | 3 files
Implement captureexistsnamed on MVMCapture
13:24
nwc10 oh my, there are a lot of code point names in Unicode 13:32
138008 13:34
oh wait, 138007 13:35
"SIGNWRITING MOVEMENT-DIAGONAL BETWEEN TOWARDS MEDIUM" 13:36
jnthn And it's only growing... :) 13:46
.oO( or that's what the medium predicted... )
13:47
Hm, I think I'm missing something in the new dispatcher 13:51
In some cases, we need to do some late-bound work to decide on a candidate, more than we can ever reasonably express as guards. 13:53
And it'd be nice to be able to decide on the candidate, and then have it invoked, without having an intermediate thingy on the call stack 13:55
This'd potentially be useful for megamorphic cases too 13:56
Just not quite sure exactly what it'd look like 13:57
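A sketch of the "decide, then delegate" shape being circled here: the dispatch runs a (possibly unguardable) resolver, then hands the resolved candidate the existing callsite instead of invoking it through a trampoline frame. All names below are hypothetical, a shape under the stated assumption rather than the new-disp design.

    typedef struct Capture Capture;   /* argument capture, illustrative */
    typedef struct Code Code;

    Code *resolve_candidate(Capture *args);            /* late-bound selection */
    void  invoke_in_place(Code *code, Capture *args);  /* reuse current record */

    void dispatch_step(Capture *args) {
        Code *candidate = resolve_candidate(args);
        /* No intermediate frame is pushed: the candidate takes over the
         * existing callsite, so the callstack sees one call, not two. */
        invoke_in_place(candidate, args);
    }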
timotimo rr has a "chaos mode" that does something to scheduling that's supposed to provoke trouble much more often 14:21
brrt \o 15:25
jnthn o/ 15:27
brrt we're finally having rain again 15:33
timotimo we had cold for two days, now it's heat again, but not as strong 15:38
jnthn 29C outside at the moment. 15:42
Correction: 30C 15:43
[Coke] only 27C here. (but I'm in the US, so the AC has been on for weeks :) 15:44
jnthn Office air conditioning has a hole through the wall to eject air nowadays. Not a perfect setup, but works way better than tossing the pipe out of an ajar door. It's keeping it to...well, the thermostat thinks 24C, which may be correct, but it feels a little better than that, maybe 'cus the humidity is lower
The current bit of design work is quite headache-inducing even in the cool air... :) 15:45
I've realized that a multi-dispatch that needs late-bound stuff is probably just the same as a megamorphic callsite in that both want to do a little bit of work to get over the megamorphic "hump" 15:46
My figuring is that many places will be megamorphic only along one dimension of the dispatch. 15:47
That is, you may well end up with a site that sees numerous types to dispatch a method on, but once we reach the point of having found one, it'll be a Method object that wants a code ref inside of it unwrapped 15:48
Ditto with later-bound multi-dispatches that depend on values, unpacking, etc. 15:49
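A sketch of that dimension split: the type-to-method step may see many types and so falls back to a shared lookup, while the tail step, unwrapping the code ref from whatever Method object was found, is the same operation every time. Types and helpers here are illustrative only.

    typedef struct Type Type;
    typedef struct Code Code;
    typedef struct Method { Code *do_code; } Method;     /* wraps a code ref */

    Method *lookup_method(Type *type, const char *name); /* shared table over
                                                            many types */

    Code *resolve(Type *type, const char *name) {
        Method *m = lookup_method(type, name);  /* megamorphic dimension */
        return m->do_code;                      /* monomorphic unwrap */
    }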
I'm still not too happy with the lack of answers for polyvariance, either 15:52
OO::Monitors is a classic example; the callsame after the lock acquisition is megamorphic at the location of the callsame, but keyed on the root of the dispatch that we're resuming, it's probably monomorphic 15:54
Maybe resumptions thus want their ICs hung off the IC of the root of the dispatch generally. 15:55
And that brings me back to wondering if resumption of dispatch and megamorphic dispatch are both looking for some common mechanism. 15:56
Which I've been going around in circles on for the last hour or so. :)
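One way to picture hanging resumption caches off the root dispatch, as in the OO::Monitors example: cached at its own location the callsame site is megamorphic, but keyed by the inline-cache entry of the root dispatch it is often monomorphic. The structures below are hypothetical sketches, not the new-disp design.

    typedef struct Code Code;

    typedef struct ResumptionCache {
        Code *resolved;                   /* candidate the resumption reached */
        struct ResumptionCache *next;
    } ResumptionCache;

    typedef struct ICEntry {
        Code *target;                     /* candidate for the initial dispatch */
        ResumptionCache *resumptions;     /* callsame et al., keyed here rather
                                             than at the callsame callsite */
    } ICEntry;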
nine That megamorphism at the callsite but monomorphism when looking at the larger picture is also true for lots of call chains 16:02
But that's probably out of scope if you try to stay sane during this dispatch rework :) 16:04