| IRC logs at
Set by AlexDaniel on 12 June 2018.
00:00 MasterDuke joined 01:34 sena_kun joined 01:36 Altai-man_ left 03:33 Altai-man_ joined 03:35 sena_kun left 05:34 sena_kun joined 05:36 Altai-man_ left 07:33 Altai-man_ joined 07:35 sena_kun left 07:53 domidumont joined 07:55 brrt joined 08:47 zakharyas joined
nwc10 good *, #moarvm 08:56
brrt good * nwc10 09:07
tellable6 2020-02-19T17:55:04Z #moarvm <dogbert17> brrt compile moarvm with --no-optimize and then run: MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1 ./perl6-m -Ilib t/spec/S05-grammar/parse_and_parsefile-6e.t
brrt thank you dogbert17
09:34 sena_kun joined 09:35 Altai-man_ left 09:40 brrt left
jnthn i. 10:24
um, o/
nwc10 \o 10:30
11:33 Altai-man_ joined 11:36 sena_kun left 11:53 MasterDuke left 12:30 brrt joined
Geth MoarVM: 978480b288 | (Jonathan Worthington)++ | 2 files
Mark cmp_tc static to avoid linker errors

Hopefully helps with #1234.
linkable6 MOARVM#1234 [open]: [BLOCKER] Various issues when gcc-10 is used on macOS
jnthn That's a cute issue number :)
Altai-man_ jnthn, any chance you'll be able to help with the signature blocker? 12:51
jnthn Maybe, first I need to figure out how to repro it... 12:53
Altai-man_: About the other MoarVM one you marked blocker: not sure we'll find that long-standing issue in a hurry...
Altai-man_ jnthn, checkout to errata (either c or d) and then just make stresstest.
releasable6, status
releasable6 Altai-man_, Next release in ≈2 days and ≈6 hours. 4 blockers. 139 out of 242 commits logged (⚠ 2 warnings) 12:54
Altai-man_, Details:
Altai-man_ jnthn, I think if it is reliable on macos, we can somehow debug that, no? 12:55
if it is very heisen, not a blocker, but I saw lizmat confirming it's not. 12:56
lizmat sorry, which one are we talking about now?
lock-async-stress2.t panic. 12:57
lizmat yeah, between 30 and 50 runs, and it would fail 13:00
running it again now... so far no fail
no fail yet 13:02
still not broken 13:07
jnthn Yes, it's good at hiding
lizmat the times I tried it the other day, it would have produced it already 13:08
lizmat adds a displayable counter 13:09
bingo: MoarVM panic: Corrupt multi dispatch cache: cur_node != 0, re-check == 0x0
jnthn committable6: all say :($ is raw where True, $ is copy, Int $ is rw, $ is raw where True = 2).perl 13:11
lizmat this is the code I run: loop { say $++; run <raku t/spec/S17-promise/lock-async-stress2.t> }
jnthn bisectable6: say :($ is raw where True, $ is copy, Int $ is rw, $ is raw where True = 2).perl 13:12
committable6 jnthn,
bisectable6 jnthn, Bisecting by output (old=2015.12 new=9217b1c) because on both starting points the exit code is 0
jnthn, bisect log:
jnthn, (2016-03-04)
jnthn Something changed $ into \ in the output
In the latest release
13:12 brrt left
jnthn (this is the signature one) 13:13
ah, maybe more on topic in #raku-dev
13:14 zakharyas left 13:34 sena_kun joined 13:36 Altai-man_ left
Guest1277 jnthn: wrt the Corrupt multi dispatch cache bug. Can't you leave it running in gdb, in another terminal, while doing something else? 13:49
I seem to remember that you acquired a super PC last year (12 cpu's ?) 13:50
*12 cores
jnthn The problem is more once it's reproduced; I've got to that point before, and it was still really unclear what was wrong. 14:05
nine So you'll want to run it in rr 14:21
rr is a real life-saver in such cases
Guest1277 as long as you don't have a ... processor :) 14:23
but I suspect that jnthn has an Intel cpu 14:24
lizmat made a small change to the script, and now it's been running 340x without a hitch 14:25
when it reaches 1000 I will tell what the change is 14:26
ok, 403, so that doesn't make a difference :-( 14:30
14:38 lucasb joined 14:44 zakharyas joined 14:55 brrt joined
Guest1277 hi brrt 14:59
lizmat jnthn: I changed the cas in Lock::Async::lock to nqp::cas... which shouldn't make any difference
brrt hi Guest1277
lizmat now at 370 runs 15:00
Guest1277 brrt: have you managed to repro the assert failure? 15:01
lizmat 600 runs 15:06
this fix is probably just making it even more difficult to create the error 15:07
or could be an indication as to where the actual problem is?
sena_kun if it is in 300 times range, and not a new issue, this removes the blocker tag for me, so not a hill to die on 15:10
brrt Guest1277: I've been swamped by $dayjob 15:11
lizmat sena_kun: I'm gonna let it run to 1000
jnthn lizmat: Well, if the problem shows up in the multi dispatch cache, and we don't call the cas multi, then that fits 15:12
lizmat ah, duh.. :-)
jnthn: but that also means that the problem is in multi-dispatch, possibly in conjunction with cas 15:13
also: I did *not* change the cas() in the unlock method, just in the lock() method 15:14
so if there's corruption, it's only when acquiring a lock
but that would be strange, as acquiring the lock hasn't got anything to do with cas, right
1000+ :-) 15:17
15:20 moritz_ is now known as moritz 15:33 Altai-man_ joined 15:36 sena_kun left 16:38 domidumont left 16:53 brrt left
Geth MoarVM: a71eee4c25 | (Jonathan Worthington)++ | 3 files
Allow closing handle bound to async proc stdin

This provides a way for the thing spawning the process to request that the handle bound to its stdin is closed at the point that the process exits (or when spawning fails). This will help us to fix the handle leak reported in
17:34 sena_kun joined 17:36 Altai-man_ left 18:26 domidumont joined, domidumont left 18:28 domidumont joined 18:48 zakharyas left 19:33 Altai-man_ joined 19:35 brrt joined 19:36 sena_kun left 19:50 domidumont left 20:10 zakharyas joined 20:14 brrt left 20:55 zakharyas left 21:34 sena_kun joined 21:35 patrickb joined 21:36 Altai-man_ left 21:47 camelia left 22:19 MasterDuke joined 23:03 patrickb left 23:33 Altai-man_ joined 23:36 sena_kun left 23:38 lucasb left