Perl 6 language and compiler development | Logs at colabti.org/irclogger/irclogger_log/perl6-dev | For toolchain/installation stuff see #perl6-toolchain | For MoarVM see #moarvm
Set by Zoffix on 27 July 2018.
02:10 AlexDaniel joined 02:27 ggoebel joined 02:32 ggoebel left 03:53 Kaiepi left
AlexDaniel dogbert11: I think it's fine. Yeah Moar had a release and Rakudo didn't, functionally there's no problem 04:10
04:35 Kaiepi joined 04:52 MasterDuke left 05:21 skids left 06:21 [TuxCM] left 07:59 vrurg left
Kaiepi oh lizmat you mentioned you wanted to talk earlier? 08:03
lizmat you were going to ping me about something ?
08:08 [TuxCM] joined
Kaiepi yesterday i was talking about my issues with feed operators and you said to talk to you later lizmat 08:08
lizmat right... 08:09
lizmat looks at the code again
Kaiepi: so, Actions/make_feed_result is run at compile time, right ? 08:12
Kaiepi yes 08:13
lizmat and Rakudo::Internals.EVALUATE-FEED should be run at runtime, right ? 08:14
Kaiepi it should, but when i tried that it ran slower than the current implementation of feed operators 08:15
lizmat but when I look at the code, it looks like it is being run at compile time ? 08:16
Kaiepi check the previous commits
initially i wrote it so it'd run at runtime 08:17
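(A minimal sketch of the runtime approach being discussed, in plain Perl 6; the sub name and shape here are illustrative only, not the PR's actual EVALUATE-FEED:)
    sub evaluate-feed(\source, *@stages) {
        # thread the source through each stage, deciding everything at runtime
        my $result = source;
        $result = .($result) for @stages;
        $result
    }
    say evaluate-feed (1..10), *.grep(* %% 2), *.map(* ** 2);
    # OUTPUT: (4 16 36 64 100)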
lizmat the approach in github.com/rakudo/rakudo/pull/2903...3e1330d285 of make_feed_result looks correct to me 08:19
except maybe the p6store ? is that still a thing? 08:20
Kaiepi i forget, locally i made it do either p6store or p6assign depending on the variable's sigil but idk if i pushed those changes or not 08:21
lizmat hmm... I guess it is
lizmat seems to recall we wanted to get rid of that (probably wrongly) 08:22
timotimo maybe you're confusing it with extops; p6store is implemented as a desugar, so it's implemented in nqp
lizmat m: use nqp; my @a; nqp::p6store(@a,nqp::list(42,43)); dd @a 08:23
camelia Array @a = [42, 43]
timotimo extops are the things that are implemented in C and so have to have an extra compilation step when building rakudo, and have to be located and loaded during perl6 startup
lizmat timotimo: ah, yes...
and p6store was at one point an extop, right ?
anyways 08:24
timotimo it might have been, dunno. but all extops start with "p6" if i'm not mistaken 08:31
lizmat but p6store is no longer an extop ? 08:33
timotimo that's right
jnthn p6store is something like `nqp::iscont(target) ?? nqp::p6assign(target, value) !! target.STORE(value)` 08:55
|Tux| Rakudo version 2019.03.1-420-g12ed245c8 - MoarVM version 2019.05-11-g248e2980a
csv-ip5xs          0.706 -  0.710
csv-ip5xs-20       5.862 -  5.926
csv-parser        21.233 - 21.905
csv-test-xs-20     0.428 -  0.432
test               6.390 -  7.059
test-t             1.721 -  1.734
test-t --race      0.760 -  0.788
test-t-20         28.350 - 29.723
test-t-20 --race   8.980 -  8.995
jnthn It's what you use when you don't know what's on the LHS (e.g. no sigil)
And defers the choice of how the assignment works until runtime
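(Why that decision is deferred, illustrated with VAR; a hedged sketch, not rakudo's actual codegen:)
    my @a; my $s;
    say @a.VAR.WHAT;   # (Array)  - not a Scalar container, so `=` means .STORE
    say $s.VAR.WHAT;   # (Scalar) - a container, so `=` is a p6assign-style store
    # for a sigilless target the compiler can't know which case applies,
    # hence the runtime nqp::iscont check inside p6store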
lizmat jnthn: is that the right thing to do for the pointy end of a feed ? 08:58
Geth roast: df48391aa4 | (Elizabeth Mattijsen)++ | S29-context/eval.t
Add basic EVAL :check testing
08:59
jnthn Hm, we can case-analyze it, no? 09:03
==> my @a # quite clearly just wants to .STORE 09:04
Well, maybe it does...
I think traditionally we've .push'd
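(The two candidate semantics side by side; the comments show what each choice would give, not a claim about current rakudo behaviour:)
    my @a = 1, 2;
    (3, 4) ==> @a;
    # .push semantics  : @a becomes [1 2 3 4]   (the traditional behaviour)
    # .STORE semantics : @a would become [3 4]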
lizmat jnthn: well, that behaviour is up for some discussion :-) 09:05
github.com/perl6/problem-solving/issues/27
jnthn Right 09:07
Geth rakudo: cono++ created pull request #2911:
Fix slurp error during Configure.pl
10:31
rakudo: 6e14da541b | cono++ | Configure.pl
Fix slurp error during Configure.pl

Was getting this error: "Undefined subroutine &main::slurp called at Configure.pl line 48." when running: Configure.pl --gen-moar --gen-nqp --backends=moar --make-install
10:32
rakudo: 672fd6e2c1 | timo++ (committed using GitHub Web editor) | Configure.pl
Merge pull request #2911 from cono/slurp-fix

Fix slurp error during Configure.pl
11:51 ggoebel joined
lizmat .tell Kaiepi my take on a lazy feed handler: gist.github.com/lizmat/69faa8cb0f2...b21bace743 12:54
yoleaux lizmat: I'll pass your message to Kaiepi.
lizmat .tell Kaiepi while doing this, I realized that we will need some type of buffering / batching to make this work in multiple threads 12:55
yoleaux lizmat: I'll pass your message to Kaiepi.
12:55 [TuxCM] left
lizmat .tell Kaiepi I suggest we try something like this first, and continue the problem solving ticket about feed operators 12:55
yoleaux lizmat: I'll pass your message to Kaiepi.
13:00 pamplemousse joined 13:05 grayrider left
Kaiepi ahhh 13:11
yoleaux 12:54Z <lizmat> Kaiepi: my take on a lazy feed handler: gist.github.com/lizmat/69faa8cb0f2...b21bace743
12:55Z <lizmat> Kaiepi: while doing this, I realized that we will need some type of buffering / batching to make this work in multiple threads
12:55Z <lizmat> Kaiepi: I suggest we try something like this first, and continue the problem solving ticket about feed operators
Kaiepi i figured there was a way to reduce the list of ops but i couldn't work out how 13:12
13:27 vrurg joined 13:40 [TuxCM] joined 13:57 Kaypie joined 13:58 dogbert17 joined 13:59 [TuxCM] left 14:00 titsuki_ joined, nebuchad` joined 14:01 klapperl_ joined, commavir_ joined, scovit_ joined 14:05 [TuxCM] joined 14:06 Kaiepi left, SqrtNegI_ left, dogbert11 left, klapperl left, llfourn left, go|dfish left, llfourn joined, scovit left, nebuchadnezzar left, commavir left, titsuki left 14:09 nebuchad` is now known as nebuchadnezzar 14:11 commavir_ is now known as commavir 14:12 go|dfish joined 14:17 [TuxCM] left 15:49 discord6 left, discord6 joined 16:49 pamplemousse left, pamplemousse joined
lizmat m: use nqp; my num $then = nqp::time_n; say nqp::time_n - $then 17:10
camelia 1558458604.2014737
lizmat that feels... wrong ?
m: use nqp; say nqp::time_n - nqp::time_n # shouldn't that be close to zero? 17:11
camelia 1558458672.3035285
moritz m: use nqp; say nqp::time_n() - nqp::time_n() 17:12
camelia 0
lizmat heh
timotimo it's ... an unsigned num?!?! 17:21
oh, haha 17:22
yeah, it's the parentheses
17:31 robertle joined 17:32 Ven``_ joined 17:33 vrurg left
lizmat timotimo?? 17:35
timotimo lizmat: ? 17:36
lizmat so why do the parentheses fix the problem ?
timotimo because nqp::time_n - $then is nqp::time_n(-$then)
lizmat but, nqp::time_n is not documented as taking any values ? 17:37
timotimo right, but the parser doesn't know that
it just parses all nqp ops as if they were subs
lizmat aaah... ok 17:38
timotimo why it doesn't result in a compilation error because it's not supposed to take a value is anyone's guess ;)
lizmat must have missed that gotcha many, many times without realizing it
timotimo i might also have 17:39
should be possible to put a check in that complains if the number of args is too high
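(The gotcha in one snippet: nqp ops parse like list-prefix subs, so whitespace before the minus turns the rest of the line into an argument list:)
    use nqp;
    my num $then = nqp::time_n;
    say nqp::time_n - $then;    # parses as nqp::time_n(-$then): prints an epoch time
    say nqp::time_n() - $then;  # parens force a zero-arg call: prints a tiny delta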
bisectable6: use nqp; nqp::time_n(1234) 17:40
bisectable6 timotimo, Bisecting by output (old=2015.12 new=672fd6e) because on both starting points the exit code is 1
timotimo m: use nqp; nqp::time_n(1234)
camelia This type cannot unbox to a native number: P6opaque, Int
in block <unit> at <tmp> line 1
timotimo m: use nqp; nqp::time_n(1234e0)
camelia ( no output )
timotimo bisect: use nqp; nqp::time_n(1234e0)
bisectable6 timotimo, bisect log: gist.github.com/60925d656e5741f757...34b6b7b609
timotimo, (2016-05-12) github.com/rakudo/rakudo/commit/33...d4efe221de
timotimo, On both starting points (old=2015.12 new=672fd6e) the exit code is 0 and the output is identical as well
timotimo, Output on both points: «»
17:46 Kaypie is now known as Kaiepi
lizmat Kaiepi: tried various parallel feed ideas, but none of them beat the simple, single threaded approach, even for very large lists 18:12
ugexe not sure why the size of the list would really matter 18:13
lizmat if you're batching stuff, the size of the batch compared to the size of the list matters 18:14
ugexe oh feed is batching now?
lizmat well, no
but I have some proof of concepts
ugexe what would be the difference (other than syntax) for ==> vs hyper? 18:15
lizmat gist.github.com/lizmat/8652e4a403f...2b31cc6ad4
the serial approach: gist.github.com/lizmat/69faa8cb0f2...b21bace743
timotimo with workloads as small as these, you're paying loads and loads of overhead on inter-process communication 18:17
lizmat *inter-thread
timotimo yes
lizmat but yeah
timotimo sorry
remember, when one core wants to access data that another core recently wrote, the line first has to be flushed out of the writing core's cache
lizmat but even if I artificially increase the load, it doesn't really help
timotimo so without any batching of the data, you'll be forcing every access to be a full through-to-the-ram read 18:18
what did you try so far?
lizmat see above gists
timotimo also: hm, with lazy oh, sleeping 0.001
i don't think that's enough sleep, tbh
lizmat actually, I changed that to Nil for ^1000... later 18:19
the sleep isn't actually a load
timotimo it'll simulate more work per stage, that should help when comparing it with a serial feed implementation
in theory it could cut the total sleep amount in half, i.e. serial would take 2s of sleep, parallel 1s of sleep
lizmat gist.github.com/lizmat/787a6ddb31d...f3403de78c # fast as you can for non-lazy sources 18:20
timotimo i wonder how fast we get Nil for ^1000 today. perhaps it's blazing fast
ugexe but is there a type of workload that *would* benefit from shoving data off from one CPU to another?
timotimo sure 18:21
imagine doing a simulation on one stage of the feed and a renderer on another
ugexe so maybe ==> should be serial unless prefixed with hyper or some such as you mentioned
lizmat the main issue I see, is that the values get consumed as the buffer is being filled
timotimo i'm not sure i understand what you mean by that 18:22
you mean how the last stage of the feed has to be "pulled from"? 18:24
perhaps there's a lot to be gained by also implementing a push-all or so
for when the feed gets pushed into something non-lazy
you should be able to prototype "as if we had batching" by just having (^100) xx 100 as the input and mapping over the inner values each step of the feed 18:28
ideally at some point we'd time the individual stages on the first few items that have to go through it and decide whether to spawn extra threads like that
ideally spawn an extra thread when a timeout happens, rather than waiting for the first step to be done 18:29
hold on, that's BS, we can't do much with the second stage before the first stage has produced the first value
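(timotimo's "(^100) xx 100" prototyping idea, roughly: make every fed item a whole batch and map over the inner values at each stage; illustrative only:)
    my @input = (^100) xx 100;   # 100 batches of 100 values each
    @input ==> map { .map(* + 1).List } ==> map { .map(* * 2).List } ==> my @out;
    say @out.elems;              # still 100 (batches, not values)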
lizmat yup
ok, I now have a serial / parallel for PROCESS-FEED( ^10_000, ( { Nil for ^10000; $_ } xx 10)); of 18:30
6 / 1.8 wallclock and 6 / 11.8 CPU 18:31
timotimo what's the 6 for?
lizmat oops; I meant 6 / 1.8 wallclock and 1.9 / 11.8 CPU
argh
18:31 pamplemousse_ joined
timotimo you meant 1.9 / 1.8 and 6 / 11.8? 18:31
lizmat 6 / 6 wallclock / CPU for serial, 1.9 / 11.8 for parallel
18:31 pamplemousse_ left
timotimo oh 18:32
that's a lot faster than i would have expected for parallel
oh, you put 10 stages in 18:33
i thought it was just 2
18:33 pamplemousse_ joined
lizmat no, 10 stages :-) 18:33
18:34 pamplemousse left, pamplemousse_ is now known as pamplemousse
lizmat tries another approach 18:35
ugexe my &installed = -> :$curs = $*REPO.repo-chain { $curs.list ==> grep { $_ ~~ CompUnit::Repository::Installable } ==> map { $_ => $_.installed.grep(*.defined).map(*.meta.hash) } ==> hash; }; say installed() # this was what i was benchmarking with years ago 18:40
evalable6 {/home/bisectable/.perl6 => (), /tmp/whateverable/rakudo-moar/672fd6e2c11aa3e173e7bbe64cd7…
ugexe, Full output: gist.github.com/f2b38e5a1309b33e6e...814635e222
AlexDaniel hah 18:41
19:09 SqrtNegInf joined
Geth roast: 92eec268bd | (Elizabeth Mattijsen)++ | S29-context/eval.t
Must check for CHECK block being run also
19:39
19:56 robertle left
timotimo .tell vrurg when passing options it doesn't know about, Configure.pl just stays silent - we had --optimize=0 debug=3 in moarvm's Configure.pl invocation and had a hard time figuring out why breakpoints weren't working/showing up in gdb; anything you can do? 20:05
yoleaux timotimo: I'll pass your message to vrurg.
20:07 Kaiepi left 20:08 Kaiepi joined 20:09 ufobat__ joined 20:12 ufobat_ left 20:51 MasterDuke joined 20:52 MasterDuke left, MasterDuke joined 20:53 Ven``_ left
lizmat Kaiepi: still around ? 21:04
.tell Kaiepi my latest version of the feed processor: gist.github.com/lizmat/cecd2801129...c9a8fe0ee7 21:10
yoleaux lizmat: I'll pass your message to Kaiepi.
lizmat .tell Kaiepi I think that combines the best of both worlds, please let me know if you need more info to integrate 21:11
yoleaux lizmat: I'll pass your message to Kaiepi.
Geth ¦ problem-solving: rba self-assigned perl6-infra: rules and guidelines github.com/perl6/problem-solving/issues/28 21:16
¦ problem-solving: rba assigned to maettu Issue perl6-infra: rules and guidelines github.com/perl6/problem-solving/issues/28 21:17
¦ problem-solving: rba self-assigned perl6-infra: group of services: DNS hosting github.com/perl6/problem-solving/issues/29
¦ problem-solving: rba assigned to maettu Issue perl6-infra: group of services: DNS hosting github.com/perl6/problem-solving/issues/29
lizmat rba++ 21:18
Geth ¦ problem-solving: rba self-assigned perl6-infra: service: Password handling github.com/perl6/problem-solving/issues/30
¦ problem-solving: rba assigned to maettu Issue perl6-infra: service: Password handling github.com/perl6/problem-solving/issues/30
lizmat goes away for some other stuff 21:22
21:55 pamplemousse left
Geth rakudo/ugexe-patch-1: 630a68858e | (Nick Logan)++ (committed using GitHub Web editor) | appveyor.yml
[ignore] appveyor bisect moar aad6fdc
22:08
rakudo: ugexe++ created pull request #2913:
[ignore] appveyor bisect moar aad6fdc
22:09
rakudo/ugexe-patch-1: d97138459e | (Nick Logan)++ (committed using GitHub Web editor) | appveyor.yml
Use an older rakudo
22:18
rakudo/ugexe-patch-1: 93b87b4b09 | (Nick Logan)++ (committed using GitHub Web editor) | appveyor.yml
fixup syntax
22:24
rakudo/ugexe-patch-1: c3b3d86e15 | (Nick Logan)++ (committed using GitHub Web editor) | appveyor.yml
moar 2fe2f58
22:42
rakudo/ugexe-patch-1: 935479414d | (Nick Logan)++ (committed using GitHub Web editor) | appveyor.yml
final commit
22:45
23:04 tobs left 23:11 Kaiepi left 23:12 Kaiepi joined 23:18 tobs joined 23:30 ggoebel left 23:53 j3nnn1 left