github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
02:15 MasterDuke left 03:20 leont left 04:20 linkable6 left, evalable6 left 04:21 evalable6 joined 04:22 linkable6 joined 05:37 japhb joined 07:46 MasterDuke joined 07:55 domidumont joined 08:53 leont joined 08:58 zakharyas joined 09:18 sena_kun left 09:27 sena_kun joined
dogbert11 nine: would you like to test something wrt Tux CSV module, time permitting? 10:22
it's simple, clone github.com/Tux/CSV.git, cd to the repo dir and run the included tests. I'm wondering if one of them hangs for you? 10:25
lizmat dogbert11: should this be on HEAD or on a MoarVM branch ?
dogbert11 HEAD 10:26
specifically this test: t/78_fragment.t
lizmat yup, that hangs for me 10:27
now, how did Blin miss that ?
dogbert11 good question 10:28
it seems to spend all its time in VMArray_gc_mark 10:30
10:33 MasterDuke left
lizmat yeah, and growing at about 10MB / sec on my machine 10:34
10:34 MasterDuke joined
lizmat presses ctrl-x at the 8GB mark 10:34
10:35 zakharyas left
lizmat $csv.colrange ([1, 4..6, 8..Inf]); # there seems to be an infinite range involved 10:36
dogbert11: could you make a rakudo issue for that
dogbert11: golfed to : dd (1..10).map(~*)[1,4..6,8..Inf] 10:38
nothing to do with CSV at all
dogbert: further golf: (1..10)[1,^Inf] 10:43
dogbert11 I can fix the issue 10:53
i.e. I can make a Rakudo issue :-) 10:55
lizmat yes, please 10:56
dogbert11 R#4216 10:58
10:59 linkable6 left
dogbert11 haha 10:59
11:00 linkable6 joined 11:01 linkable6 left 11:04 linkable6 joined
sena_kun ouch 11:05
The module is Text::CSV, right? 11:06
Right. Interesting. 11:09
lizmat yeah, but it is not a module issue 11:11
sena_kun Blin says it fails on both points, and as we have lots of broken modules in eco I cannot check all 300+ of them each time. Now I suspect it might be the culprit for the Blin slowdown I investigated. I'll release a point. 11:13
lizmat sena_kun: it has not been fixed yet, just identified
sena_kun lizmat, well, yes, "release a point after it's fixed and checked". :)
lizmat ok, *phew* no pressure :-) 11:14
jnthn Regression in Rakudo, or in Moar, or? 11:17
MasterDuke rakudo 11:18
sena_kun Regression in Rakudo.
jnthn Ah, OK. I saw the note about gc_mark in VMArray and was thinking, how on earth could that get broken... 11:19
Guess it's just a regression that leads to a lot of marking
lizmat yeah... it's just stupidly applying non-lazy semantics on a lazy iterator 11:20
which I broke :-(
but first I need to get this week's RWN out of my system :-) 11:21
11:22 zakharyas joined
sena_kun Also, another simple reason why we missed it: this tiny case is not in roast. 11:27
lizmat yeah :-(
MasterDuke anyone have a suggestion for a better way to debug github.com/MoarVM/MoarVM/issues/1434 ? the backtrace doesn't seem to make sense, i think because it's running a test, so there is forking and threading involved. i haven't gotten it to hit under valgrind, and the various gdb options to follow forks and such haven't worked 11:43
i think i tried asan and tsan, no help 11:44
12:02 zakharyas left
lizmat just an idea I had: would it be possible to obtain the amount of CPU used by the execution of a block? (I'm specifically thinking of a start block, but more general would be nice) 12:03
or from one point in time to another point in time *in that thread*
I know it's possible to get totals, but that would be useless to find out how much e.g. a batch in a hyper was taking to allow for automatic batch-size adjustments 12:04
MasterDuke it looks like `getrusage` has a `WHO` parameter which can be set to `RUSAGE_THREAD`, but `uv_getrusage` just always calls it with `RUSAGE_SELF` 12:33
lizmat ooohhh.... :-) 12:34
how about adding a :thread to nqp::getrusage ? :-) 12:35
MasterDuke the problem is that nqp::getrusage uses uv_getrusage, so we'd have to patch it to accept a who parameter 12:46
github.com/libuv/libuv/blob/c9406b...#L969-L972 12:49
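A minimal C sketch of the per-thread variant being discussed, assuming Linux, where getrusage accepts RUSAGE_THREAD; the helper name mvm_thread_rusage is hypothetical, not an existing MoarVM or libuv function:

    #define _GNU_SOURCE              /* RUSAGE_THREAD is Linux-specific */
    #include <stdint.h>
    #include <sys/resource.h>

    /* CPU time consumed so far by the calling thread, in microseconds. */
    static int mvm_thread_rusage(uint64_t *user_us, uint64_t *sys_us) {
        struct rusage ru;
        if (getrusage(RUSAGE_THREAD, &ru) != 0)
            return -1;
        *user_us = (uint64_t)ru.ru_utime.tv_sec * 1000000 + ru.ru_utime.tv_usec;
        *sys_us  = (uint64_t)ru.ru_stime.tv_sec * 1000000 + ru.ru_stime.tv_usec;
        return 0;
    }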
dogbert11 MasterDuke: is this better? gist.github.com/dogbert17/abb37bd9...20d9896acf 12:57
lizmat a different nqp op would also work for me :-) 13:06
MasterDuke dogbert11: huh. wonder if it's not safe to free the string passed to MVM_spesh_debug_printf without an MVM_spesh_debug_flush first? 13:13
dogbert11: how easily can you repro it? 13:24
lizmat: the unix implementation of uv_getrusage is pretty simple, it would be easy to copy it and add a parameter (or just make a hard-coded thread version). the windows version is a little bigger, and someone who actually knows windows would have to weigh in on whether the information is available per-thread (i would assume so, but have no idea). 13:28
however, the bigger question is whether the rakudo idea of a thread would match what `getrusage(RUSAGE_THREAD)` would return? don't rakudo threads move around between libuv threads? but maybe they don't move within a block, so you could do something at the start and end of a block and it would work out?
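The start-and-end measurement described here could then look roughly like this, assuming the sketch above and that the code really does stay on one OS thread in between:

    uint64_t u0, s0, u1, s1;
    mvm_thread_rusage(&u0, &s0);                 /* before the batch */
    /* ... run the batch ... */
    mvm_thread_rusage(&u1, &s1);                 /* after the batch */
    uint64_t cpu_us = (u1 - u0) + (s1 - s0);     /* CPU used by this thread */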
lizmat well, as long as the block does not do an (implicit) await, it is sure to stay on the same OS thread afaik 13:29
jnthn ^^ ??
nine A task won't get moved onto a different thread in the middle of the execution. It's certainly so now and I'd guess that it's safe to rely on that 13:32
It just wouldn't make sense to move an executing task
lizmat it might if it is doing an await!
nine true
13:48 cog__ left 13:56 zakharyas joined
dogbert11 MasterDuke: relatively easily 14:03
MasterDuke mind trying out a patch if i create one? 14:04
14:09 cog joined
dogbert11 sure, no problem 15:02
MasterDuke dogbert11: gist.github.com/MasterDuke17/1e160...c3c8703932 15:15
well, as long as i don't try to catch it in gdb/valgrind/etc it repros pretty easily, and no, the above patch does not fix it 15:24
lizmat and another Rakudo Weekly hits the Net: rakudoweekly.blog/2021/02/22/2021-08-first-21/ 15:37
afk for a few hours& 15:38
MasterDuke interesting, it's much more reproducible with --full-cleanup 16:11
jnthn, nine: does github.com/MoarVM/MoarVM/issues/14...-783487192 point out anything obvious to you? 16:18
16:37 MasterDuke left 17:45 domidumont left 17:51 domidumont joined 18:01 domidumont left 19:12 zakharyas left
Geth MoarVM: salortiz++ created pull request #1439:
VMArray: Use dest for select 'kind' in copy_elements
19:42
19:48 MasterDuke joined 20:08 linkable6 left 20:09 linkable6 joined
MasterDuke samcv: is github.com/apankrat/notes/blob/mas...md#unicode applicable for moarvm? 20:20
nine This gets infix:<eq> inlined into the loop body: MVM_SPESH_BLOCKING=1 ./rakudo-m -Ilib -e ' my $a = "foo"; my $b = "bar"; for ^10000 { my $c; $c := $a eq $b }' 21:08
This doesn't: MVM_SPESH_BLOCKING=1 ./rakudo-m -Ilib -e 'use Test; my $a = "foo"; my $b = "bar"; for ^10000 { my $c; $c := $a eq $b }'
In the latter case it looks like spesh cannot identify the target static frame of the call and thus bails out of inlining without even bothering to mention it in the inline log 21:09
MasterDuke is it just Test, or using any module? 21:11
nine any module 21:14
MasterDuke huh, seems a little less than ideal 21:15
jnthn That's a bit odd. 21:19
nine Because MVM_multi_cache_find_spesh can't find a candidate
jnthn That it doesn't know the target, I mean
nine Ordinarily I'd think that running more code will warm up those caches some more. But not that it drains them
jnthn If it doesn't know the target then inlining isn't even a question
No, the multi dispatch cache should get populated on first call 21:20
Does it have the types of A and B figured out?
uh, $a and $b
nine r3(4): usages=1, deopt=2,1, flags=9 KnTyp Concr (type: Scalar) 21:22
r4(4): usages=1, deopt=2, flags=9 KnTyp Concr (type: Scalar)
MVM_multi_cache_find_spesh bails out because arg 0 doesn't have MVM_SPESH_FACT_KNOWN_DECONT_TYPE 21:27
The last time we get any info about statistics on infix:<eq> is during module loading. Later on the log doesn't have anything on it anymore 22:00
MasterDuke is that a consequence of MVM_SPESH_BLOCKING? are the results (sometimes) different without? 22:09
cog Hi, what are a Spesh slot and an effective Spesh slot? 22:10
MasterDuke my understanding is that spesh slots are a place to store data that's useful for a spesh candidate (i.e., an optimized version of code). when a frame is invoked, if it has a spesh candidate, its spesh slots are copied into the effective spesh slots of the frame 22:15
cog It is a collectable, so that means it can be anything? 22:17
MasterDuke yeah 22:18
nine yes 22:19
It's a value that is constant for the spesh candidate.
cog So a given candidate knows what it is ? 22:20
MasterDuke yep github.com/MoarVM/MoarVM/blob/mast....h#L22-L23 22:23
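A rough C sketch of the relationship described above; the struct layouts are simplified stand-ins, not MoarVM's real definitions (see the linked headers for those):

    #include <stdint.h>

    /* Simplified stand-ins for MoarVM's real structs. */
    typedef struct MVMCollectable MVMCollectable;

    typedef struct {
        MVMCollectable **spesh_slots;      /* constants the candidate's code refers to by index */
        uint32_t         num_spesh_slots;
    } SpeshCandidateSketch;

    typedef struct {
        MVMCollectable **effective_spesh_slots;   /* what the running specialized code reads */
    } FrameSketch;

    /* On invocation, a frame with a chosen candidate takes that
     * candidate's slots as its effective spesh slots. */
    static void use_candidate(FrameSketch *frame, SpeshCandidateSketch *cand) {
        frame->effective_spesh_slots = cand->spesh_slots;
    }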