00:12 upupbb-user2 joined
Geth nqp: ec8f81ec61 | (Patrick Böker)++ | tools/templates/moar/Makefile.in
Fix `make clean`

It missed all the MoarVM-specific files.
00:53 patrickb61 joined 00:55 patrickb left
Geth rakudo: patrickbkr++ created pull request #3576:
Implement verbatim arg handling in Proc
01:05 patrickb61 left 03:08 upupbb-user2 left 06:04 reportable6 joined 07:19 upupbb-user2 joined 07:47 leont joined 08:28 AlexDaniel left
[Tux] Rakudo version 2020.02.1-254-g87d2ff953 - MoarVM version 2020.02.1-64-g21fa394a8
csv-ip5xs        0.689 - 0.705
csv-ip5xs-20     6.016 - 6.179
csv-parser      24.076 - 24.110
csv-test-xs-20   0.373 - 0.376
test             7.540 - 7.560
test-t           1.949 - 1.955
test-t --race    0.924 - 0.956
test-t-20       31.050 - 31.969
test-t-20 --race 9.744 - 9.784
10:04 sena_kun joined
sena_kun 10:04
tellable6 2020-03-27T19:38:20Z #raku-dev <lizmat> sena_kun: Set semantics means that the .WHICH of objects are used
sena_kun releasable6, status 10:05
releasable6 sena_kun, Next release in ≈8 hours. 3 blockers. 166 out of 254 commits logged (⚠ 3 warnings)
sena_kun, Details: gist.github.com/67e6575999bd88617e...df02bbda2a
sena_kun So we have a reliable segfault on Windows and a JIT regression. 10:08
AlexDaniel` sena_kun: these 3 warnings are probably true and need to be fixed 10:12
sena_kun AlexDaniel`, sure. I usually do the changelog in a single pass, so I don't plan to look at it until the main blockers are resolved. 10:13
AlexDaniel` sena_kun: ah, alright 10:16
I used to do it in like… um… 10 passes probably… 10:17
clearly I wasn't very good at it :)
sena_kun I think you were great enough to do all those things and more. :) 10:18
10:51 Altai-man_ joined 10:53 sena_kun left 11:02 pochi_ joined 11:03 pochi left 11:42 lizmat joined
nine There are 2 stress tests that fail with MVM_SPESH_BLOCKING and MVM_SPESH_NODELAY. Both use Proc::Async and send SIGINT to the child processes. 11:52
The thing is: with MVM_SPESH_BLOCKING and MVM_SPESH_NODELAY the behavior is actually correct. Both times Proc::Async throws a failure because the child process ended with exit code 0 but with signal 2. 11:53
Or maybe that's not correct after all, since the target programs handle SIGINT. 11:56
11:58 lizmat left
nine What's also weird is that it's clearly running the child process with MVM_SPESH_BLOCKING=1 and MVM_SPESH_NODELAY=1 that causes the failure, but when trying to run that command from the shell it simply succeeds with no signal reported 11:58
(exit status 0)
12:07 lizmat joined
Geth nqp: 6c5bd2a1e8 | (Elizabeth Mattijsen)++ | tools/templates/MOAR_REVISION
Bump NQP to get the latest MoarVM fixes
12:52 sena_kun joined 12:53 Altai-man_ left
nine Ah, of course. It's no bug in spesh. It's the usual "despite all the awesome async stuff, we just rely on timing and hope for the best" strategy. 12:55
MasterDuke in the tests? or in Proc::Async?
nine in the tests 12:56
MasterDuke ah, not so bad then 12:57
Geth rakudo: 9d8542815a | (Elizabeth Mattijsen)++ | tools/templates/NQP_REVISION
Bump NQP to get latest MoarVM and NQP improvements
tyil . 13:01
nine I just don't understand why people always reach for timers when we've had so many issues with that approach. Case in point: the test already uses this for the timeout: ($*VM.name eq 'jvm' ?? 20/3 !! 1) * (%*ENV<ROAST_TIMING_SCALE>//1)
lizmat nine: the alternative to using timers is hanging spectests on failures 13:05
jnthn Is hanging spectests actually worse than spurious timeouts? 13:08
lizmat not sure how CI handles that ? 13:09
nine I don't think that would be worse. If it hangs, you know there's an issue and can investigate. CI has its own timeouts anyway.
jnthn If we were doing CI on spectests that'd be great :P
nine But in this case it isn't even a timeout. It's a "let's wait an arbitrary time for the child process to be ready"
lizmat but for me, doing a spectest each night to see performance, a hanging test would completely destroy timing info
jnthn: ah, yeah, good point :-) 13:10
jnthn And finding out what's hanging is a case of "run `ps`, `grep` it, and look what's there"
lizmat jnthn: true, but that was not my point :-)
jnthn lizmat: tbh, if something has already hit a timeout, it's already eaten one of the parallel jobs up for longer than it should have and produced a distortion, so it's just a matter of degree. 13:11
lizmat well, that would show up as a spike in wallclock and/or CPU
but, yeah, if we all want to live with potentially hanging spectests, then by all means, let's rip out timeouts :-) 13:12
jnthn If we do really want to keep timeouts, I wonder if we could make a module in roast that disables timeouts if it sees any of the spesh stressing flags, or any other things liable to extend times. 13:13
And then use that when we need a timeout
nine I've already wasted a couple of hours today investigating this issue that really isn't one. I can live with a missing daily spectest timing every couple of months if we get stable tests instead :) 13:14
lizmat nine: then by all means, rip out the timeouts!
jnthn Another option is to have the harness optionally enforce a timeout across all tests. 13:16
So if you want to have one, then it doesn't need to be scattered everywhere through the tests.
Then it covers tests you can't imagine hanging too 13:17
lizmat that timeout would have to be pretty big, as some test-files can take 20-30 seconds on my machine, but yeah 13:18
perhaps we could add timeout info to the spectest.data file, to override a default
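The harness-level timeout jnthn describes, with the per-file override lizmat suggests for spectest.data, can be sketched roughly as follows. This is a hypothetical illustration in Python (not the actual roast harness); `run_with_timeout`, the default of 120 seconds, and the override idea are all assumptions, not real harness code.

```python
import subprocess
import sys

# Hypothetical sketch: the harness, not each test file, enforces a timeout,
# so timers don't need to be scattered through the tests. A per-file table
# could override the default, as suggested for spectest.data.
DEFAULT_TIMEOUT = 120  # seconds; generous, since some files take 20-30s

def run_with_timeout(cmd, timeout=DEFAULT_TIMEOUT):
    """Run one test command; report a timeout instead of hanging forever."""
    try:
        subprocess.run(cmd, timeout=timeout)
        return "ok"
    except subprocess.TimeoutExpired:
        return "timeout"

print(run_with_timeout([sys.executable, "-c", "pass"]))
print(run_with_timeout([sys.executable, "-c", "import time; time.sleep(10)"],
                       timeout=0.5))
```

The point of the design is the one jnthn makes next: a single enforcement point also covers tests nobody imagined could hang.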
nine How often do we actually have hangs in spec tests?
Geth roast: 6f05fb3fe4 | (Stefan Seifert)++ | MISC/bug-coverage-stress.t
Replace flappy timing in stress test with actual synchronization

We waited an arbitrary time for a child process to be ready, which caused spurious failures on a loaded system or when running with MVM_SPESH_BLOCKING=1 and MVM_SPESH_NODELAY=1. Use explicit synchronization instead.
nine Is it me or is the explicitly synchronized version of the test actually much prettier and more readable, too?
lizmat nine: isn't that on account of losing the JVM todo ? 13:22
m: dd (1,2,3, *+1, -1, -2 ... *).head(10) # shouldn't this fail? the parts after the *-1 will never be reached 13:24
camelia (1, 2, 3, 4, 5, 6, 7, 8, 9, 10).Seq
lizmat or do we think this is a case of DIHWIDT ?
nine lizmat: the test is already skipped on JVM 13:33
lizmat ah, ok, so the todo was superfluous
Geth roast: d298eb2326 | (Stefan Seifert)++ | S17-procasync/kill.t
Fix flappy timing based Proc::Async::kill tests

Instead of waiting an arbitrary amount of time in the hopes that the child process will be ready by then, wait for the child to actually signal readiness. This makes the test run reliably even on a loaded system.
nine Darn.... t/spec/MISC/bug-coverage-stress.t can actually still fail in the same way. 14:26
(by not handling the signal)
lizmat but that would be a bug? 14:27
nine Am I correct that with code written like this, the signal handler will be installed before we print 'started' to STDOUT? react { whenever signal(SIGTERM).merge(signal SIGINT) { say ‘pass’; exit 0 }; say ‘started’; $*OUT.flush} 14:28
lizmat that's my understanding of how whenever works, yes 14:29
it runs the condition to create a Supply 14:30
and stores the block for the tap
Geth nqp: 432799ad32 | (Elizabeth Mattijsen)++ | tools/templates/MOAR_REVISION
Bump MoarVM to get lego JIT fix
14:51 Altai-man_ joined 14:53 sena_kun left
Geth rakudo: 156356eae2 | (Elizabeth Mattijsen)++ | tools/templates/NQP_REVISION
Bump NQP to get lego JIT fixes
lizmat afk for some fresh air& 15:13
Altai-man_ o/
releasable6, status 15:25
releasable6 Altai-man_, Next release in ≈3 hours. 3 blockers. 166 out of 256 commits logged (⚠ 3 warnings)
Altai-man_, Details: gist.github.com/a32532447959253457...20664021b8
Altai-man_ we still have github.com/rakudo/rakudo/issues/3569 as well... 15:27
MasterDuke might have been fixed by the bump also 15:28
Altai-man_ yes, it needs to be checked before looking again
Altai-man_ builds fresh 15:29
MasterDuke oh, now it dies with `nextcallee is not in the dynamic scope of a dispatcher`
nine Still see Type check failed in binding to parameter '<anon>'; expected Callable but got Nil (Nil) 15:30
MasterDuke but (no surprise) still fine when disabling spesh 15:31
Altai-man_ still sees Type check failed in binding to parameter '<anon>'; expected Callable but got Nil (Nil) 15:34
MasterDuke with `MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1` it always dies at iteration 58, always with the not in dynamic scope error 15:40
nine MasterDuke: you're running the version with `for ^1024 { foo }` aren't you? 15:47
MasterDuke yeah
Altai-man_ ci.appveyor.com/project/rakudo/rak...cufo047ygv 15:48
nine That one's stable when run with MVM_SPESH_INLINE_DISABLE=1 15:50
MasterDuke nine: yep. did you see my discussion with jnthn about it (don't remember exactly when, past couple days) 15:51
btw, i'm seeing a bunch of spectest fails
nine probably saw it but can't remember
MasterDuke colabti.org/irclogger/irclogger_lo...-03-27#l65 15:52
`./rakudo-m -I lib/ t/spec/S02-types/WHICH.t` dies with `Failed test 'X::Routine::Unwrap.raku returns self' at t/spec/S02-types/WHICH.t line 441. expected: 'X::Routine::Unwrap', got: 'Failure.new(exception => X::NoSuchSymbol.new(symbol => "X::Routine::Unwrap"), backtrace => Backtrace.new)'` 15:54
nine "Just copied over from `takedispatcher`" doesn't sound very confident
MasterDuke the code sort of looks like it makes sense, but i don't know anything about dispatching. i tried changing that conditional in a couple ways but nothing made a difference 15:55
Altai-man_ starts to wonder whether it would be better to just revert these commits and delay them until next month, next release. 15:56
nine I wonder if it's possible to disable the nextdispatcher stuff without reverting everything? 15:58
Altai-man_ Cut it out from the `case` statement? /me has no idea how it works 15:59
jnthn If the goal is to just not JIT it, then commenting out the graph.c case statements and any added templates would do it 16:00
nine jnthn: it's not, or at least no longer, the JIT. It's inlining 16:02
MasterDuke it's a spesh problem though, right? i.e., MVM_SPESH_INLINE_DISABLE=1 makes things work
but is anyone else seeing all these spectest fails? 16:03
nine not
Altai-man_ MasterDuke, I think commenting out is related to windows segfault, not nextdispatcher issue
16:04 upupbb-user2 left
jnthn *sigh* You know, just guessing one's way through inline.c is NOT going to go well. 16:16
I don't know what the semantics of nextdispatcher are 16:18
dogbert17 jnthn: did you check github.com/MoarVM/MoarVM/commit/6b...870f5c3ae2
nine dogbert17: whether that commit makes sense depends on the semantics of nextdispatcher. The comment that talks about an assumption is copied verbatim from MVM_OP_takedispatcher, but we don't know if that assumption actually holds 16:20
jnthn Yes, but I don't know if it's correct. I guess since lizmat merged it, she understands it far better than I do.
dogbert17 as I wrote in MoarVM, that commit causes the DU errors found by nwc10 16:21
nine Feels more and more like we should take a large step back with this and give it another cycle to stabilize
jnthn Yes. It feels like folks are touching the JIT and spesh without really knowing what they're doing, and honestly, I'm kinda inclined to rescind commit bits for that. 16:22
So, I'd revert the whole lot of spesh/JIT stuff related to nextdispatcher. 16:24
nine I also wonder if the whole approach with the nextdispatcher makes sense from an architecture point of view. Looking at a spesh log, those takenextdispatcher ops are everywhere now, even where there's no takedispatcher. Looks very intrusive for something that we've lived well without for a very long time.
jnthn Perhaps that also; sadly I've not had time to look more deeply into the reasoning. I mean, I understand there were semantic problems with composing dispatchers. 16:25
nine To be clear: I'm very glad that vrurg++ is fixing complicated edge cases in some of our more advanced features. It's just something where our lack of resources for proper reviews hurts a lot. 16:26
jnthn Architecture wise, though, the dispatcher stuff is overall LTA. It's really hard to optimize as it currently stands, so `self.Foo::bar` - a poor way to do things - is likely to heavily out-perform `callsame` and friends.
Indeed, I don't mind the overall bunch of work, I just want folks to understand that so far as MoarVM goes, it's better to do nothing than to do something that's not well understood. 16:27
(Also, the overall dispatcher design pre-dates any of spesh and even MoarVM.) 16:28
nine So a rethinking may be in order rather than adding band aids?
jnthn Probably, yes. 16:29
I mean, I'm glad vrurg did take on finding *a* way to get more correct semantics (and presumably added a bunch of tests to cover them, so we won't get those things wrong in any new design). 16:30
But I suspect we need a different approach, that we stand a chance of being able to optimize decently 16:31
It's also one of the things that requires a dynop at the moment, and we're also keen to get rid of those. 16:32
nine It's not even that easy to find all the commits to revert :/ The commit messages seem to indicate that his previous solution (via $*NEXT-DISPATCHER) wasn't really correct either 16:38
jnthn nine: Hm, is reverting the spesh/jit bits not enough to get us a clean release? Or is the perf regression of the VM not understanding the next dispatcher thing something you'd prefer to avoid? 16:46
(iiuc the problem isn't in correctness of the impl itself, but rather that the efforts to optimize it have caused issues) 16:47
16:52 sena_kun joined 16:53 Altai-man_ left
nine jnthn: reverting that commit didn't make a difference. What does is marking takenextdispatcher :noinline 17:07
nine@sphinx:~/rakudo/nqp/MoarVM (master *=)> raku tools/update_ops.p6
Parsed 921 total ops from src/core/oplist
raku: src/jit/x64/tiles.dasc:675: MVM_jit_tile_sub_load_idx: Assertion `out != in1' failed.
evalable6 (exit code 1) ===SORRY!=== Er…
nine, Full output: gist.github.com/3f70b709197f2f38e9...ca7b96e59f
lizmat jnthn: which commit are you referring to ? 17:10
nine seems to be caused by the expr jit of takenextdispatcher 17:17
Actually not. It fails to compile the sub_i r6(1), r0(1), r1(1) 18:10
Since takenextdispatcher is everywhere, taking out the expr JIT template simply blocks expr jiting a whole lot of blocks. 18:12
lizmat yuck
nine Though it's still a little bit suspicious as that sub_i does follow a takenextdispatcher op. I just don't know if the expr jit can get confused in that way 18:17
lizmat sub_i has long been in the JIT afaik, so if it is borked and it is after a new op, it's the new op, I'd say 18:27
nine When I replace the takenextdispatcher expr jit template with just: (template: takenextdispatcher (^vmnull)) it still fails. OTOH the template for sub_i has been there since the start of the expr jit
MasterDuke have you tried the jit bisect tool? 18:30
nine yes
and it definitely found the right basic block that fails to compile 18:34
Btw this happens only in a debug build. A non-debug build seems to be fine, i.e. it doesn't explode despite the asserted condition getting hit 18:44
18:51 Altai-man_ joined 18:54 sena_kun left
vrurg nine: I wasn't around. Totally agree that the lack of review is a big issue. With regard to the nextdispatcher semantics, I could elaborate on it if necessary. Can't quickly find the discussion with lizmat, but the dispatchers implementation could be somewhat improved if there were a way to record call-specific data on a callee which could later be accessed by subsequent calls. 19:42
In theory, this would allow us to get rid of take(next)dispatcher and $*DISPATCHER. 19:43
lizmat vrurg: that would be great medium term, but right now we face the choice of ripping out the nextdispatcher stuff, or fixing the JIT 19:51
vrurg Ripping it out means throwing my current project into the trash bin, as it heavily relies on multi+inheritance.
lizmat vrurg: understood... 20:02
so I guess that's not an option... so we need to fix the JIT
vrurg I'd be happy about it. :) 20:03
nine It's not the JIT
samcv So. Do we know yet, for sure, any file extensions for Raku? I am going to add some filetypes to the Perl 6 Atom syntax highlighter, as well as rename it to 'Raku/Perl 6'. Eventually I will probably change the underlying package name as well, but I'm not ready yet 20:04
lizmat the PR of the problem-solving ticket has info on that 20:05
samcv ok. will do that :) 20:06
(look at it i mean)
lizmat m: dd 10 ^...^ 1 # somehow, I thought this would work
camelia ===SORRY!=== Error while compiling <tmp>
Malformed postfix call
at <tmp>:1
------> dd 10 ^...^⏏ 1 # somehow, I thought this would wor
nine vrurg: ripping it out would just mean postponing your project. I'd think we can get things up and running for the next release
But right now we neither know how to fix things nor how to get back to the previous state 20:08
Marking takenextdispatcher :noinline fixes that one test case, yes. But takenextdispatcher is all over the place, so this would block inlining on a large scale 20:09
vrurg nine: I found my comment with regard to per-call storage: github.com/rakudo/rakudo/commit/4e...t-37677053
Having this feature could obsolete $*DISPATCHER too, making it a whole lot easier to optimize. 20:11
And, BTW, nqp::setdispatcher is used nowhere. I'd say it's just wasting code space. 20:13
nine Marking takenextdispatcher :noinline slows down csv-ip5xs.pl by ~ 27 % 20:15
vrurg has to go. Will check the backlog in a couple of hours. 20:18
20:28 lucasb joined 20:52 sena_kun joined 20:54 Altai-man_ left
MasterDuke nine: what's the purpose of the `nqp::setelems($str, 0);` here github.com/Raku/nqp/commit/c3388a7...2333-R2334 ? 21:42
nine I don't think something like nqp::setcalldata('dispatcher', self) is the way to go. We will generally want to avoid using strings. We will also want something that can be optimized away completely in the normal case (which doesn't care about dispatchers) 21:43
MasterDuke: we're preallocating and then moving the write position back to the start 21:44
21:46 ricky007 joined
lizmat nine: no idea how hot that piece of code is, but you can stack nqp::setelems 21:47
MasterDuke ok, that's what i thought, wanted to confirm
lizmat m: use nqp; dd nqp::setelems(nqp::setelems(nqp::list,1000),0)
camelia ()
MasterDuke looks to be the same time either way 21:52
nqp: my $b; my int $c := 0; my num $s := nqp::time_n(); while $c++ < 10_000_000 { $b := MAST::Bytecode.new; nqp::setelems(nqp::setelems($b, 100), 0) }; say(nqp::sub_n(nqp::time_n(), $s)); say(nqp::elems($b))
camelia 1.0958857536315918
MasterDuke nqp: my $b; my int $c := 0; my num $s := nqp::time_n(); while $c++ < 10_000_000 { $b := MAST::Bytecode.new; nqp::setelems($b, 100); nqp::setelems($b, 0) }; say(nqp::sub_n(nqp::time_n(), $s)); say(nqp::elems($b))
camelia 1.079796314239502
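The reason lizmat can nest the two calls is that nqp::setelems returns the list it resizes. A tiny Python stand-in (`setelems` here is a hypothetical helper, not NQP) showing just that nesting semantics; note that in Python this does not preserve the underlying allocation, so it illustrates only the call shape of the preallocate-then-rewind idiom from the MAST::Bytecode commit, not its performance benefit.

```python
# Hypothetical Python analogue of nqp::setelems: resize in place and
# return the list, so calls can be stacked as in the IRC example.
def setelems(lst, n, fill=0):
    """Resize lst in place to n elements and return it."""
    if n < len(lst):
        del lst[n:]
    else:
        lst.extend([fill] * (n - len(lst)))
    return lst

buf = setelems(setelems([], 1000), 0)  # grow to 1000, then rewind to empty
print(len(buf))  # 0
```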
MasterDuke i was just looking at where allocations happen when compiling rakudo 21:53
21:56 upupbb-user3 joined 22:24 patrickb joined 22:51 Altai-man_ joined 22:54 sena_kun left 23:17 AlexDaniel joined, AlexDaniel left, AlexDaniel joined 23:22 Kaiepi left 23:23 Kaiepi joined 23:28 Altai-man_ left 23:49 leont left