00:33 bloatable6 joined
samcv | the .tar.gz file contains our user info. not that it matters to me. but interesting. jnthn's releases have jnthn as the username in the tar and mine has samantha | 00:36
Zoffix | ⚠️⚠️⚠️⚠️⚠️⚠️⚠️ ❗❗❗❗ ❕❕❕❕
timotimo | oh yeah, tar tends to do that
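[As a side note on samcv's observation: GNU tar records the creating user's name and group in each archive member by default. A minimal sketch of the behaviour and the usual way release scripts avoid leaking it — the `--owner`/`--group` flags are standard GNU tar options; the file names here are illustrative:]

```shell
# Default behaviour: tar stores the creating user/group in each member.
echo hello > demo.txt
tar -cf with-owner.tar demo.txt
tar -tvf with-owner.tar                  # listing shows your username/group

# Release tarballs commonly force a neutral owner instead:
tar --owner=0 --group=0 -cf anonymous.tar demo.txt
tar --numeric-owner -tvf anonymous.tar   # listing shows 0/0
```

[Reproducible-release setups often pin `--mtime` as well, so rebuilding the tarball gives identical bytes.]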
Zoffix | NOTICE: Main Development Branch Renamed from “nom” to “master”: rakudo.org/2017/10/27/main-developm...to-master/
⚠️⚠️⚠️⚠️⚠️⚠️⚠️ ❗❗❗❗ ❕❕❕❕
samcv | release added to moarvm.org repo now | 00:43
* timotimo does the tiniest dance
MasterDuke | `MVMP6bigintBody *body = get_bigint_body(tc, result);` complains `This representation (P6int) cannot unbox to other types (for type BOOTInt)` | 01:07
when result is created like this: `MVMObject * const result = MVM_repr_alloc_init(tc, tc->instance->boot_types.BOOTInt);` | 01:08
is BOOTInt not the right type to use there?
hmm, maybe I do want a result_type parameter | 01:11
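[MasterDuke's conclusion, sketched: BOOTInt is backed by the native-int P6int REPR, so `get_bigint_body` has no `MVMP6bigintBody` to fetch from it; allocating from a caller-supplied, P6bigint-backed result type sidesteps the mismatch. A hedged sketch only — the function shape is illustrative, not the actual patch from PR #737:]

```c
/* Sketch: take the result type from the caller instead of hard-coding
 * BOOTInt (whose REPR is P6int, not P6bigint). */
static MVMObject * coerce_to_bigint(MVMThreadContext *tc, MVMObject *result_type) {
    MVMObject * const result = MVM_repr_alloc_init(tc, result_type);
    /* Only valid when result_type's REPR is P6bigint: */
    MVMP6bigintBody *body = get_bigint_body(tc, result);
    /* ... fill in body ... */
    return result;
}
```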
01:33 greppable6 joined
01:57 ilbot3 joined
Geth | MoarVM: MasterDuke17++ created pull request #737: Create and JIT coerce_II op | 03:14
MoarVM: ugexe++ created pull request #738: Retry in certain cases of EINTR | 05:32
06:20 domidumont joined
06:27 domidumont joined
07:09 domidumont joined
07:13 lizmat joined
07:19 patrickz joined
08:10 zakharyas joined
08:15 zakharyas joined
08:21 zakharyas joined
08:32 zakharyas joined
08:35 Ven joined
09:13 Ven joined
09:26 zakharyas joined
09:46 Ven_ joined
10:09 Ven joined
dogbert2 | jnthn: does this gist give you any ideas? gist.github.com/dogbert17/6257f2d6...b260ef8a29 | 10:27
jnthn | Not really | 10:34
It'll need tracing back to the call arguments
10:37 Ven_ joined
dogbert2 | are there any parameters to MVM_spesh_arg_guard_run I should dump, or perhaps a MVM_dump_backtrace? | 11:34
jnthn, timotimo: I have stopped gdb at github.com/MoarVM/MoarVM/blob/mast...rd.c#L408, 'test' has suddenly become null. Are there any variables etc. I should check? | 12:02
(gdb) p *args | 12:09
$1 = {o = 0x80803d0, s = 0x80803d0, i8 = -48 '\320', u8 = 208 '\320', i16 = 976, u16 = 976, i32 = 134742992, u32 = 134742992, i64 = 134742992, u64 = 134742992, n32 = 4.09304929e-34, n64 = 6.6571883365061916e-316}
(gdb) p *ag->nodes | 12:18
$3 = {op = MVM_SPESH_GUARD_OP_CALLSITE, yes = 14, no = 12, {cs = 0x81dc1d8, arg_index = 49624, st = 0x81dc1d8, offset = 136167896, result = 136167896}}
12:50 zakharyas joined
12:55 synopsebot joined
12:57 synopsebot joined
13:18 synopsebot joined
13:41 zakharyas joined
13:51 zakharyas joined
timotimo | we'd have to figure out where the null came from, i.e. what piece of code put a null into the arguments array | 15:11
given how the "dump bytecode" function seems to put the -> at a semi-random place, that's not quite as easy, I'm afraid | 15:12
if we had an rr recording, we could step back to the corresponding "enter frame" (which could mean going over a whole lot of leave/enter pairs)
and then trace it step by step
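[The rr workflow timotimo sketches would look roughly like this. `rr record`/`rr replay` and gdb's `reverse-continue` and `watch -l` are real commands; the binary name and the watch expression are illustrative, not taken from this session:]

```text
$ rr record ./moar program.moarvm   # capture one failing run deterministically
$ rr replay                         # replays the exact same run under gdb
(rr) continue                       # run forward to the SIGSEGV
(rr) watch -l args[0].o             # location watchpoint on the argument slot
(rr) reverse-continue               # execute backwards to the write of the null
```

[The location watchpoint plus reverse execution finds the writer directly, without manually walking every leave/enter pair.]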
15:18 ggoebel joined
16:37 greppable6 joined
16:38 AlexDaniel joined
17:19 domidumont joined
dogbert17 | timotimo: can a spesh log give any clues? | 17:42
timotimo | not likely | 18:04
dogbert17 | sigh, a difficult bug then, grr | 18:15
timotimo | hm. if you know exactly which spesh candidate from the spesh log is on the stack at each point, and you know which exact invocation has the broken args buffer, you can probably trace back what the value goes through? | 18:18
18:18 bisectable6, benchable6, committable6, squashable6, unicodable6 joined
19:24 robertle joined
samcv | good ** | 20:36
Zoffix | \o | 20:40
Geth | MoarVM: 00c3211c5c | (Elizabeth Mattijsen)++ | 2 files
Remove dependency on autodie | 21:13
21:20 stmuk_ joined
21:38 hoelzro joined
lizmat | .tell jnthn 'time while perl6 -e 'my $a = 0; await start { for ^10000 -> $i { cas $a, -> $b { $b + $i } } } xx 4'; do :; done' segfaults for me within the second generally when run in conjunction with a "TEST_JOBS=8 make spectest" | 22:07
yoleaux | lizmat: I'll pass your message to jnthn.
jnthn | Arse, I thought I'd got those spectests clean :/ | 22:08
Under GC stressing
lizmat | jnthn: I don't think it's GC stressing (if I understand that term correctly) | 22:11
feels more like a race condition to me
as it doesn't crash if that code is just run by itself
jnthn | Maybe, though a race alone isn't generally enough to trigger a SEGV | 22:12
lizmat | ok, if I increase from xx 4 to xx 8 it *does* crash by itself
with xx 7 it crashed after 37 seconds | 22:13
jnthn | Thread 4 "moar" received signal SIGSEGV, Segmentation fault. | 22:14
[Switching to Thread 0x7fffef7fe700 (LWP 2183)]
0x00007ffff775a222 in MVM_serialization_demand_object () from //home/jnthn/dev/MoarVM/install/lib/libmoar.so
o.O
lizmat | does not ring a bell to me
jnthn | That's a weird place to explode
lizmat | well, if you run it again, maybe it'll explode somewhere else | 22:15
I mean, with the #1202 code I get a MoarVM panic about heap corruption
jnthn | Does it right there again | 22:16
(gdb) p sc->body
$2 = (MVMSerializationContextBody *) 0x0
c'est le hosed
lizmat | ok, so maybe we're on to something | 22:17
5m18.212s before it segfaulted | 22:19
Geth | MoarVM: 9ad1f5fc86 | (Jonathan Worthington)++ | src/6model/serialization.c
Add missing MVMROOT around managed mutex acquire
The mutex acquire may GC block and thus trigger GC, moving objects. | 22:20
lizmat | pretty sure it happened because I was doing something else on the machine that required one or two CPUs
whee!
jnthn | GC-related bug, but depended on mutex acquisition timing, which would be naturally impacted by load | 22:21
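[The shape of the fix in 9ad1f5fc86, as a hedged sketch: MVMROOT is MoarVM's macro for registering a C local with the GC so the pointer is updated if the moving collector relocates the object. The variable and mutex names here are illustrative, not the actual diff:]

```c
/* Without rooting: if the thread blocks on the lock and a GC run
 * happens meanwhile, the object 'sc' points at may be moved,
 * leaving the local pointer dangling. */
MVMROOT(tc, sc, {
    MVM_gc_mark_thread_blocked(tc);   /* allow GC to proceed while we wait */
    uv_mutex_lock(&mutex);            /* may block for a while            */
    MVM_gc_mark_thread_unblocked(tc); /* may itself trigger a collection  */
});
/* 'sc' is safe to use here: MVMROOT updated it if objects moved. */
```

[This matches jnthn's diagnosis: the bug only bites when the lock is contended, which is why load on the machine made it appear.]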
lizmat | jnthn: the cas code still segfaults for me, quite quickly | 22:33
the GH #1202 code so far has not
so I think Rakudo GH #1202 looks fixed by this fix | 22:34
jnthn | Hm, that fixed the cas code nicely for me. Odd | 22:35
lizmat | alas, the 1202 also crashed: MoarVM panic: Internal error: zeroed target thread ID in work pass, after 4m50
jnthn | Guess that'll be something different, then | 22:36
But not for me today
lizmat | the GH #1202 code feels more stable now
so it may have helped anyways | 22:37
jnthn | Well, surely helped :)
I got a crash after a few runs before, and it kept on going for ages after the fix