nine MasterDuke: is that with timotimo++'s gc_measurement_debughelper branch? 09:51
MasterDuke nine: yes, with github.com/MoarVM/MoarVM/pull/1270 rebased on top of it also 10:25
nine So....we're gonna need some more improvements :) 10:31
In the meantime, maybe you can simply add some swap space? Since the memory is used for collecting results, it should only be written once and not read all the time, so performance shouldn't be all that bad 10:32
MasterDuke trying again with a 256gb swap file 10:39
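A minimal sketch of setting up such a swap file on a Linux box (the size and path are illustrative):
```
# create and enable a 256G swap file; size and path are illustrative
sudo fallocate -l 256G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```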
nine And yet another mystery: testing our Cro application with `ab2 -n20000 -c1 localhost:10000/` now shows stable memory usage 11:02
However when I test with `while curl --silent -o /dev/null 'localhost:10000/' ; do true ; done` memory usage climbs and climbs
lizmat is one of those cases hitting the server harder than the other ? 11:03
nine The only differences in the sent requests I see are `User-Agent: ApacheBench/2.3` vs `User-Agent: curl/7.69.1` and `GET / HTTP/1.0` (ab2) vs `GET / HTTP/1.1` (curl)
lizmat I would think ab2 would be the faster one, but maybe not ?
nine lizmat: yeah, I'd bet on ab2 being faster than the shell loop 11:04
lizmat HTTP 1.1 defaults to keep-alive ?
could you force curl to use HTTP 1.0 ?
also, the shell loop is unended ? 11:05
nine lizmat: that....might be it! With `curl --http1.0` memory usage seems to stay in the same region as ab2 11:12
jnthn Hmmm....that's interesting. 11:13
lizmat most modern browsers default to HTTP/1.1 afaik
jnthn Yes, but they don't make a thousand requests on the connection usually, and I'm guessing it's some leak over many requests on one connection. 11:14
lizmat so the benchmark is flawed when using HTTP/1.1 is what you're saying ? 11:14
jnthn Not exactly; there's something to fix, but real-world workloads are less likely to show such an impact. 11:16
lunch, bbiab
lizmat nine: perhaps add a "Connection: close" header to the curl case and then use HTTP/1.1
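A sketch of the two test variants being discussed (URL from the log; the curl flags are standard):
```
# HTTP/1.1, but closing the connection after every request (lizmat's suggestion)
while curl --silent -o /dev/null -H 'Connection: close' 'localhost:10000/' ; do true ; done

# or force HTTP/1.0, which does not use keep-alive by default
while curl --silent --http1.0 -o /dev/null 'localhost:10000/' ; do true ; done
```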
nine lizmat: that does indeed help! 11:25
nine actually not 11:46
At 9K requests it's already at 2G RSS. When it's stable, it stays around 800M 11:47
I'll give --http1.0 a full run now to get conclusive numbers 11:48
Geth_ Blin: bba41de470 | (Aleks-Daniel Jakimenko-Aleksejev)++ | bin/blin.p6
Remove .chomp

We *do* need these newlines after all :)
11:56
AlexDaniel Altai-man_: ok, the newline issue is fixed ↑
Altai-man_ AlexDaniel++
AlexDaniel I did test it this time :)
Altai-man_ AlexDaniel, can you try to reproduce the November issue (or almost any other module's, really) locally? `--old=2020.02 --new=43c7e96f9a5f3ded6d7cbb7e8cc9ddc44b2fe8a9 November` is enough. 11:57
AlexDaniel yeah, already on it
MasterDuke it's writing profiler output...with an rss of 28.2gb. mem is completely used, 18gb of swap used also 12:06
well, it says it's writing, hasn't created the file yet
AlexDaniel MasterDuke: do you want me to run something on a machine with 64GB RAM? :) 12:07
MasterDuke `'/home/dan/Source/perl6/install/bin/moar' --libpath='/home/dan/Source/perl6/rakudo/blib' --libpath='/home/dan/Source/perl6/install/share/nqp/lib' rakudo.moarvm --nqp-lib='/home/dan/Source/perl6/rakudo/blib' --setting=NULL.c --ll-exception --profile-compile=core.c.sql --optimize=3 --target=mbc --stagestats --output=blib/CORE.c.setting.moarvm
'gen/moar/CORE.c.setting'`, with the right paths for your machine 12:08
AlexDaniel mhm give me a few minutes… :)
timotimo oh, did nine push a change to split the sql output into smaller chunks? 12:13
lizmat timotimo: yes he did afaik
timotimo good 12:14
MasterDuke heh. still hasn't started actually writing the file 12:23
timotimo hmm 12:24
MasterDuke timotimo: have any idea why nine's patch is required? think there's an easy fix?
timotimo it's required because sqlite doesn't like sql statements that are hundreds of megabytes long 12:25
in the parser
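For context, a sketch of how such a generated SQL profile is typically loaded (the .sql name follows the log's `--profile-compile=core.c.sql`; the database filename is an assumption):
```
# feed the profiler's SQL dump into SQLite; enormous single
# statements are what the chunking patch avoids
sqlite3 core.c.sqlite3 < core.c.sql
```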
AlexDaniel MasterDuke: using which rakudo version? 12:26
timotimo but also, writing out more frequently should reduce the peak memory usage
MasterDuke AlexDaniel: HEAD. might also need nqp at HEAD, not sure if there's been a bump since nine's commit 12:27
AlexDaniel MasterDuke: OK, it did start doing something 12:41
MasterDuke writing a profile already!? 12:42
AlexDaniel MasterDuke: I don't think so, but it's running 12:44
timotimo can alwaysā„¢ attach gdb, interrupt, print stack trace 12:49
probably a few thousand frames deep in recursion
in the "precompute" phase
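A sketch of that gdb recipe (the pgrep pattern is an assumption; adjust it to match the actual process):
```
# attach to the oldest matching moar process, print a backtrace, detach
gdb -p "$(pgrep -of moar)" -batch -ex 'bt' -ex 'detach'
```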
AlexDaniel looking at how it's using just one core and the memory just keeps creeping up slooowly I guess I'll just leave it running for a few hours? :) 12:50
if it doesn't actively use all that memory then I guess anyone can just swap it to an SSD and it'll be fine 12:52
timotimo hum. i wonder if there's any way at all to parallelize preprocessing of the tree and consuming the tree to generate the write-out ...
MasterDuke for sql, the file doesn't need to be in any sort of order (other than the couple of create table statements at the beginning). could even the file writing be parallelized? 12:53
timotimo i think i actually use an auto_increment field in many of the tables 12:54
jnthn I wonder if the painfully pragmatic solution is to just generate the SQL string in C from the graph... 12:55
AlexDaniel timotimo: for IDs? UUIDs can do the trick then? :) 12:56
timotimo oh lord
MasterDuke oh, right 12:57
lizmat afk for a few hours& 13:04
MasterDuke timotimo: wait, why does that matter? 13:06
timotimo saves a bit of memory per entry
if the id doesn't have to be written in the file 13:07
MasterDuke well yeah, but then you're just assuming their corresponding values somewhere else? 13:08
timotimo yeah i just increment a number in the code 13:10
since i can't receive an answer from the sql server what id got allocated
MasterDuke get_remapped_type_id() ? 13:13
timotimo is that the lookup or the generation? not sure 13:18
MasterDuke what is this doing? github.com/Raku/nqp/blob/master/sr...d.nqp#L482
timotimo empties out the array 13:19
MasterDuke why not just nqp::setelems($pieces, 0)? 13:20
timotimo you know, that's a very good question
AlexDaniel MasterDuke: and what if it finished without writing the file? 13:21
it wasn't killed as far as I can see
MasterDuke oh right. you need to apply gist.github.com/niner/8ebb8c6a1052...12759d85c4 first 13:22
huh, but it actually seems to be very slightly faster than nqp::setelems($pieces, 0) 13:31
AlexDaniel applied, rerunning 13:33
MasterDuke `62.2g 27.4g 0.9m R 95.0 87.2 171:35.75 5 moar`, be prepared to wait a while 13:40
AlexDaniel MasterDuke: question is, does it need more than 64G? 13:54
MasterDuke well, its rss is ~28gb. assuming all the swap usage is it too, that's another 18.5gb 13:55
so, maybe?
AlexDaniel okay we'll see what happens… I don't have any swap right now as I never thought this machine could possibly need more than 64G 13:56
but I can add a swap file I guess 13:57
nine timotimo: I think MasterDuke's question about my patch was actually about gist.github.com/niner/8ebb8c6a1052...12759d85c4 i.e. the END block confusion
MasterDuke i didn't have any before either, usually if something goes over my 32gb i'm ok with it dying. i just added the swap file for this run, i'll get rid of it after
AlexDaniel MasterDuke: hah, I came back to an unresponsive black screen 14:36
so… rerunning
don't know what happened there
now with a 60G swap file :) 14:38
MasterDuke whoops 14:39
`perf top` is showing MVM_coerce_i_s github.com/MoarVM/MoarVM/blob/mast...#L203-L227 up near the top (though of course MVM_profile_instrumented_mark_data is dominating everything). any thoughts on how to make that faster? 14:41
jnthn Is profiling causing a lot of coerce_i_s? 14:42
Or does it do it without profiling also?
MasterDuke pretty sure it's the profiling. writing lots of ints to strings for the sql output 14:44
jnthn Ah, writing the data, ok 14:45
Geth_ nqp/jvm-opcode-cleanup: a778b2962b | Coke++ | 2 files
remove closefh_i
14:46
MasterDuke there's gc stuff too of course. but also memcpy and malloc_consolidate 14:48
Geth_ nqp/jvm-opcode-cleanup: 15 commits pushed by Coke++
review: github.com/Raku/nqp/compare/a778b2...ab9be457c6
AlexDaniel sena_kun: yep, I can reproduce! 14:52
AlexDaniel MasterDuke: yep, it claims that it started writing the file, it's using all 64G and 5G of swap right now 15:16
MasterDuke ah ah, mine just actually started writing the file 15:17
only 4 or so hours after starting... 15:18
[Coke] ooh: # Internal Error (signature.cpp:109), pid=62292, tid=40451 15:57
# Error: ShouldNotReachHere()
#
(running make j-test on jvm with an nqp patch)
running j-test again, no issue. weird. 15:59
AlexDaniel MasterDuke: it died. Segmentation Fault 16:07
MasterDuke: should I try again?
MasterDuke guess so, mine's still going 16:13
up to 405mb written 16:14
AlexDaniel MasterDuke: how do you know? I don't see the file 16:18
MasterDuke it just takes a while for it to actually write anything after it says it is 16:19
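One way to watch for the file showing up and growing, as a sketch (filename from the log; the interval is arbitrary):
```
# poll the output file's size every 30 seconds
watch -n 30 'ls -lh core.c.sql 2>/dev/null || echo "not created yet"'
```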
Geth_ Blin: 5e65ba0d00 | (Aleks-Daniel Jakimenko-Aleksejev)++ | bin/blin.p6
Remove warnings about using Any in string context

  `.bisected` can be empty for some modules, so we can't use that for
sorting. Grep first instead of next-ing inside the loop.
16:39
Blin: 425719e5e0 | (Aleks-Daniel Jakimenko-Aleksejev)++ | bin/blin.p6
Remove double space
Geth_ nqp: coke++ created pull request #617:
Jvm opcode cleanup
16:55
AlexDaniel Altai-man_: it's interesting. So the first time it attempts to install November, zef finishes with exit code 1 and this output: gist.github.com/AlexDaniel/17df1df...add8ab29da 17:08
The following packages were stubbed but not defined: November
what does that mean?
Altai-man_ AlexDaniel, that's not Blin specific failure, just a breakage of long abandoned module. 17:09
AlexDaniel Altai-man_: yes, but not really
Altai-man_ hm?
It gives me the same error when installed locally.
AlexDaniel Altai-man_: either zef or rakudo are getting confused about something 17:10
Altai-man_: what if you try to install it again? Will it succeed then?
Altai-man_ On 2020.02 the same result. 17:11
AlexDaniel point is, I think it succeeds if you try to install it twice
Altai-man_ You can try some other module, e.g. Module::Toolkit. 17:12
Let me check again...
AlexDaniel in some sense Blin is perhaps wrong because it attempts to --install-to into the same dir? 17:13
I never thought about that
sena_kun AlexDaniel, the same result for me. 17:16
I agree this is either a rakudo or zef bug in processing of something, but don't have time right now to dig. :(
MasterDuke huh, why do i get no output (or file written) when i try to callgrind an nqp one-liner? i do when i run some random other thing 17:17
AlexDaniel sena_kun: what do you mean same result? 17:21
sena_kun AlexDaniel, the same output as in your gist each time.
AlexDaniel hmm
sena_kun Module::Toolkit is more interesting, by the way. It does clearly indicate something is very broken and then says "Ok, we installed that" and it is indeed listed in installed modules, can be uninstalled etc. 17:22
Maybe can start investigating from this one.
MasterDuke huh again. works when i use the nqp i installed, but not when i ./nqp-m in my nqp directory (though it works when not under valgrind)
timotimo how much memory usage do we have while profiling a core setting compilation? upwards of 20 gigs? 30? 17:24
MasterDuke top is showing my res at 27.4g and my virt at 65g. i had to add a swap file, it has 26g used 17:43
currently 1.1g of profile written, but it's still going 17:44
timotimo get yourself a copy of "smem" and run "sudo smem -kas swap" 17:47
Geth_ rakudo: 9b66980d25 | (Patrick Bƶker)++ | 3 files
Fix bug report URL in binary release README
MasterDuke timotimo: `247918 dan /home/dan/Source/perl6/install/bin/moar --libpath=/home/dan/Source/perl6/rakudo/blib --libpath=/home/dan/Source/perl6/install/share/nqp/lib rakudo.moarvm --nqp-lib=/home/dan/Source/perl6 38.2G 27.5G 27.5G 27.5G` 17:51
timotimo oh my
already almost 40 gigs swapped out :D
Geth_ nqp: 2853a1a5fa | (Jonathan Worthington)++ | 2 files
Support anon declarator on NQP subs

So that in places like the Rakudo bootstrap, we can give names to the subs we use for setting up the various built-in methods, but not have them get installed anywhere.
18:03
Geth_ nqp: patrickbkr++ created pull request #618:
Static nqp home hll var
18:26
nqp: e27b394231 | (Patrick Böker)++ | 7 files
Revert "Revert "Merge pull request #611 from patrickbkr/static-nqp-home-hll-var""

This reverts commit e89893ed7309636536786f5b974c418c2582568a.
nqp: fbae77eaac | (Patrick Böker)++ | tools/templates/jvm/Makefile.in
Add a missing dependency in Makefile rule

This broke building in some cases.
nqp: cd87855866 | (Patrick Böker)++ (committed using GitHub Web editor) | 7 files
Merge pull request #618 from patrickbkr/static-nqp-home-hll-var

Static nqp home hll var
rakudo/master: 5 commits pushed by (Patrick Böker)++, (Patrick Boeker)++ 18:30
AlexDaniel MasterDuke: it segfaulted again. Sooo… good luck! 18:37
Geth_ rakudo/rakuast: 36 commits pushed by (Jonathan Worthington)++
review: github.com/rakudo/rakudo/compare/d...105acf80b1
19:05
[Coke] there it is! 19:06
jnthn Only so much done so far, but it's a start. :-)
Needs NQP HEAD (the anon sub commit I pushed there earlier today) 19:07
[Coke] do we have a preference on PRs for merge vs. rebase? 19:10
AlexDaniel [Coke]: just merge, unless there's a reason to rebase 19:15
a reason could be for example that the PR is very old 19:16
[Coke] this one is very recent, but understood. 19:16
Geth_ nqp/master: 17 commits pushed by Coke++, (Will Coleda)++
review: github.com/Raku/nqp/compare/cd8785...2bbc0bdf7b
hankache Hello * 19:22
[Coke] ~~ 19:23
hankache what is the Raku native type equivalent of wchar_t?
[Coke] MasterDuke: hi 19:25
[Coke] MasterDuke: the change you made to nqp docs is causing a test failure. I assume it means the doc test is broken. 19:27
MasterDuke do you know which change? 19:39
lizmat nqp:: anon sub { "42" } 19:47
MasterDuke i'm getting a failure before my change
patrickb rba: I have uploaded a new revision of the windows 2020.02.1 binary release here: rooster.uber.space/patcloud/index....EJLQDZcjbf
lizmat jnthn: I think anon sub commit is already in rakudo HEAD 19:48
MasterDuke `Malformed UTF-8 near bytes 00 00 b2 at gen/moar/stage2/NQPCORE.setting:809 (/usr/share/nqp/lib/NQPCORE.setting.moarvm:consume-all-chars)` for t/docs/tests.t
patrickb rba: It's named rakudo-moar-2020.02.1-02-win-x86_64.zip/asc
rba: Can you upload?
rba patrickb: sure give me a sec. 20:11
jnthn lizmat: Ah yes, somebody did indeed do a dump since then :) 20:30
Hadn't noticed that
lizmat how did you know I just did that? :-)
jnthn oops, bump :D 20:31
[Coke] MasterDuke: it was removing the smrt_, I think 20:59
a5332ac14 ? 21:00
linkable6 (2020-04-15) github.com/Raku/nqp/commit/a5332ac14e smrt_(int|num|str)ify -> (int|num|str)ify
[Coke] ooh, nifty.
doc test complains: not ok 1542 - documented op 'strify' exists in moar
doc test cheats (and fails sometimes) to figure out which ops are defined in each backend. 21:01
also failing: not ok 728 - Opcode 'smrt_strify' (moar) is documented 21:12
nqp: nqp::say(nqp::smrt_strify(3))
camelia No registered operation handler for 'smrt_strify'
at gen/moar/stage2/QAST.nqp:1504 (/home/camelia/rakudo-m-inst-1/share/nqp/lib/QAST.moarvm:compile_op)
from gen/moar/stage2/QAST.nqp:6145 (/home/camelia/rakudo-m-inst-1/share/nqp/lib/QAST.moarvm:compile_…
[Coke] nqp: nqp::say(nqp::strify(3)) 21:13
camelia No registered operation handler for 'strify'
at gen/moar/stage2/QAST.nqp:1504 (/home/camelia/rakudo-m-inst-1/share/nqp/lib/QAST.moarvm:compile_op)
from gen/moar/stage2/QAST.nqp:6145 (/home/camelia/rakudo-m-inst-1/share/nqp/lib/QAST.moarvm:compile_node)…
[Coke] looks like there is a reference to smrt_strify in src/vm/moar/QAST/QASTOperationsMAST.nqp
rba patrickb: uploaded. 21:45
patrickb rba: Looking good. Thanks! 21:46
AlexDaniel Altai-man_: github.com/ugexe/zef/issues/342 22:10
tellable6 AlexDaniel, I'll pass your message to Altai-man_
AlexDaniel Altai-man_: I don't know if it's the only issue, but at least it's something I was able to reproduce on my own setup 22:12
tellable6 AlexDaniel, I'll pass your message to Altai-man_
MasterDuke woohoo, finally finished! don't know exactly how long that took, but i think over 10 hours. 3.2g file created 22:31
timotimo incredible 22:52
MasterDuke hm, but not sure how correct the values are... 22:54
it says github.com/Raku/nqp/blob/master/sr...#L586-L588 is the most expensive by exclusive time 22:55
timotimo yeah that seems odd 22:56
MasterDuke do you want the sqlite3 file? it's only 1.7g 22:57
timotimo i wonder how much zstd can do with that 23:06
MasterDuke zst of the original sql file is only 615mb 23:10
zstd of the sqlite file is 650mb 23:11
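The comparison above, as commands (a sketch; `-k` keeps the input file, and the filenames are illustrative):
```
zstd -k core.c.sql        # ~615mb compressed, per the log
zstd -k core.c.sqlite3    # ~650mb compressed
```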
timotimo ha, that's fun 23:17
i'll take the sqlite file anyway
not sure how, tho
MasterDuke what's that file transfer service we've used before? 23:20
from moarperf: "The profiled code ran for 1,852,391.13ms", "The Garbage Collector ran 1613 times. 1 GC runs inspected the entire heap", "GC runs have taken 86.45% of total run time", 23:23
but it thinks there weren't any allocations? 23:24
timotimo: send.firefox.com/download/e038cb2e...U-ijUAWoyw 23:31
moarperf is showing more expected things for routines. i guess my sql query was wrong 23:36
timotimo sharedrop.io or something? 23:54
23:55 MasterDuke left