Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
jdv wuts up with yall and coffee? 01:04
dr told me to cut back but i only do 1 a day so...no?
reaching 01:05
timo i don't drink coffee, i guess i'm one of relatively few computer-workers who don't? 01:13
leont doesn't drink coffee either 01:14
[Coke] I do 2 or 3 caff's a day. 01:30
Nicholas jdv: jnthnwrthngtn likes coffee. But sometimes his coffee machine doesn't like him. And sometimes I (also) have daft problems making coffee 06:28
Nicholas good Universal Beverage Time, #moarvm 07:02
nine sticks to the traditional coffee 07:03
Nicholas to be honest, right now I have regular coffee too 07:03
not some compromise-by-committee "brown liquid served with whipped cream and a straw" which was sort of what I was envisaging 07:04
(no, this is not me being infested with Viennese perverted ideas about what "cappuccino" should mean, but more that I started with "coke float" and then realised that ice cream wasn't likely to be universal enough, but whipped cream seemed to be closer)
(I can't spell. This is not news) 07:05
nine I did enjoy tea in the morning as well when I was at Liz and Wendy's but when my wife's first words in the morning are "I'd like coffee" there's just not much I can do :) 07:07
lizmat so the good news is: getting back in the morning, my local dev version of the log server *did* start up correctly, reading all the available logs and growing to about 5G in memory 07:55
no idea as to how long it took to get there, but my feeling is about an hour :-(
nine That sounds rather excessive 07:56
lizmat oddly enough, it appears to take *more* memory than pre-newdisp 07:58
yeah... I was used to ~1.5 mins
actually, it feels like an old bug that was fixed
so I'm thinking: maybe it got lost in the merge 07:59
lizmat anyways, it feels like it is blocking on start { } once the maximum number of threads has been reached 08:02
lizmat anyways, github.com/rakudo/rakudo/commit/39...6b16e211aa comes to mind 08:03
lizmat ah, also of note: control-c is affected too: once it is loading like that, control-c does *not* get served (or at least not while I was waiting for a few minutes) 08:08
lizmat running with RAKUDO_SCHEDULER_DEBUG=1 gives a *lot* of "[SCHEDULER 3248] Will not add extra worker; hit 64 thread limit [branch with 0 total completed]" messages 08:19
I guess one for each start block :)
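A minimal sketch of that shape (hypothetical file names, not the real App::Raku::Log loader): one start block per log file. Run it with RAKUDO_SCHEDULER_DEBUG=1 and, once the pool's 64-thread limit is hit, you may see one of those "Will not add extra worker" messages per still-queued block:

    my @files = (^100).map({ sprintf "channel-%04d.log", $_ });
    my @sizes = await @files.map(-> $file {
        start {
            sleep 0.05;       # stand-in for parsing one log file
            $file.chars
        }
    });
    say @sizes.elems;         # 100, once every start block has run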
lizmat ok, the plot thickens: looking at the start of the SCHEDULER debug output 08:48
it looks like it gets into "Heuristic low utilization deadlock situation detected", then adds a worker thread, repeat until no more worker threads possible 08:49
lizmat jnthnwrthngtn: not sure if you can backlog, so repeating: looking at the start of the SCHEDULER debug output 08:50
it looks like it gets into "Heuristic low utilization deadlock situation detected", then adds a worker thread, repeating until no more worker threads are possible 08:51
Nicholas this logger stayed on the same side of the split as lizmat: colabti.org/irclogger/irclogger_lo...2021-10-13 10:32
lizmat yeah... need to make a merger of the different log versions :-) 10:33
ok, I give up on this: I can see all threads being used and executing stuff... it looks like they're waiting for each other on a lower level than I can see 10:48
note that all of these threads *are* executing the same code, but on different log files 10:49
note that at this stage, I will not be able to run the log server in production on new-disp :-( 10:50
lizmat ok, one final datapoint: loading the log files of a channel with :1degree is not markedly slower than with :15degree 11:09
which leads me to believe that effectively there is only one thread executing the log loading logic at a time 11:10
lizmat one more final datapoint: 11:31
I've just timed the loading of the channel logs *without* starting the server
if I run with :1degree, I see 31 Threads for the process in MacOS's Process Monitor 11:32
with :15degree, I see 71 threads in the Process Monitor 11:33
Altai-man this sounds like a new blocker?
lizmat however, there is *no* difference in wallclock time, but the :15degree case does take 2 more CPU seconds
Altai-man: yes, I would say this is a blocker, as it would affect any server code, specifically Cro 11:34
Altai-man lizmat, can you please create a new ticket?
lizmat will do
should I do that in MoarVM or Rakudo ? 11:35
feels like a MoarVM issue, but then it could also be a dispatch issue
Altai-man lizmat, rakudo one 11:36
lizmat github.com/rakudo/rakudo/issues/4569 11:57
jnthnwrthngtn nine ^^
jnthnwrthngtn moarning o/ 12:08
Altai-man lizmat++ 12:09
jnthnwrthngtn, morning!
jnthnwrthngtn lizmat: Curiously, I did benchmark Cro prior to new-disp merge and saw an *improvement* relative to master 12:11
lizmat: Please add repro instructions to the issue 12:12
I guess I need 1) to know what to install, 2) which command to run, 3) enough data to run it with presumably 12:13
Sigh, I'm hungry.
lizmat: I don't suppose you've been able to repro some simpler .hyper/.race failing to parallelize? 12:14
$ time raku -e '^10 .map({ my $x = 0; for ^10_000_000 { $x++ } })' 12:23
real 0m3,931s
$ time raku -e '^10 .race(:1batch, :10degree).map({ my $x = 0; for ^10_000_000 { $x++ } })'
real 0m0,798s
Sadly, nothing this simple cuts it 12:24
lizmat jnthnwrthngtn: maybe IO needs to be involved? 12:54
anyways, releasing the module now, so should be able to just zef install it
and symlink to another dir for the logs
jnthnwrthngtn Thanks 13:00
[Coke] starts on his second cup of coffee 13:44
lizmat jnthnwrthngtn: github.com/rakudo/rakudo/issues/45...-942324062 13:45
[Coke] had a run() whose output I was then calling .decode('utf8-c8') on. Using Proc::Async, I want to use "whenever $proc.stdout.lines" - (from doc examples) - is there a similar need to manually decode? 14:05
ww
jnthnwrthngtn Thanks, will make a cup of tea and take a look :) 14:16
[Coke]: Pass env => 'utf8-c8' :) 14:17
uh
enc, not env :)
argh, 4 hours sleep was not enough 14:18
[Coke] ah, sweet.
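For the record, a sketch of the shape jnthnwrthngtn means (assuming the documented Proc::Async interface, where enc is a named argument to .new): the encoding is given at construction, so $proc.stdout.lines already yields utf8-c8-decoded strings and no manual .decode is needed:

    my $proc = Proc::Async.new('git', 'log', :enc<utf8-c8>);
    react {
        whenever $proc.stdout.lines { .say }   # already decoded
        whenever $proc.start        { done }   # start after taps are set
    }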
jnthnwrthngtn lizmat: First attempt: it said "Could not find App::Raku::Log:ver<0.0.1>:auth<zef:lizmat>". Turns out that the script in bin/ asks for 0.0.1, but the version in META6.json is 0.0.3. Tweaking the script gets me further but... 14:32
Failed to get the directory contents of '/home/jnthn/dev/IRC-logs/base/descriptions': Failed to open dir: No such file or directory
(I ran it standing in the clone of IRC-logs, and indeed don't see the base directory)
lizmat are you in the App-Raku-Log dir ? 14:33
from the clone?
you should be in the App-Raku-Log clone dir, not in the IRC-logs dir 14:34
that should be a subdir of the App-Raku-Log dir now 14:35
Altai-man "Invocant of method 'raku' must be an object instance of type 'Mu', not a type object of type 'Any'. Did you forget a '.new'?" <- is this me or it is very bizarre? I see the exception when printing a value on new-disp, but not on release. The fun thing is that it _does_ print the value properly, but after that throws an error. 14:36
jnthnwrthngtn lizmat: Ah, I see 14:40
Interesting, on my machine it runs for longer with --degree=8 14:43
And maxes out at 134% CPU, and more sys time also
lizmat well, if you have 8+8 cores... it should max out at about 1500% (used to for me before new-disp) 14:44
jnthnwrthngtn #2 0x00007ffff7a4b1cd in uv_mutex_lock () from //home/jnthn/dev/MoarVM/install/lib/libmoar.so 14:46
#3 0x00007ffff78d6a9d in MVM_callsite_intern ()
from //home/jnthn/dev/MoarVM/install/lib/libmoar.so
Loads of threads there
lizmat by default it runs to 71 CPU threads for me 14:47
(that would be with --degree=15 and --batch=16)
jnthnwrthngtn $ cat fpic | grep 'Full polymorphic flattening' | wc -l 14:51
138799
lizmat is that good or bad ? 14:55
jnthnwrthngtn Bad that it happens, good in that it's debugging progress 14:58
jnthnwrthngtn lizmat: Did this get slower even with --degree=1 ? 15:00
lizmat you mean, compared --degree=15 ? 15:01
jnthnwrthngtn No, I mean --degree=1 after new-disp vs before
(If you don't know, don't worry, if you do it's interesting) 15:02
lizmat I never played with that before new-disp.. I wanted it as fast as possible, so always used the default of --degree=15
m: say Kernel.cpu-cores 15:03
camelia 4
jnthnwrthngtn Fair enough
lizmat actually, that value
(which is 16 for me)
jnthnwrthngtn Well, we end up with 15,548 different callsites getting interned 15:04
m: say 138799 / 15548 15:05
camelia 8.927129
jnthnwrthngtn Yowser.
Are you constructing objects with huge numbers of args?
named args, to be precise
lizmat don't think so? lemme check 15:06
jnthnwrthngtn Ah, I think you probably are. The hierarchy under IRC::Log::Entry 15:07
lizmat with huge number you mean more 10 ?
TWEAK there takes 4 named
could it be that they are named native int parameters ? 15:08
jnthnwrthngtn haha
Let me check something to be sure...
lizmat oki
jnthnwrthngtn hah, yes 15:10
time raku-logs-server --channels=raku-dev --degree=1 --dontstart 15:11
Normally: real 0m17,572s
But if you disable hash randomization
real 0m2,459s
Now, can anybody guess what's going on? :)
lizmat hmmm... excessive hashing ? 15:12
jnthnwrthngtn Prior to new-disp, we could never specialize anything when you called it and flattened args. In new-disp we lift that limitation. So in theory, maybe we get to run faster. 15:13
Doing that means that we need to intern the flattened callsites.
It's possible to write a ridiculous program that causes pain, but this should not be one...
...except it is, because when we flatten in a hash, we iterate the hash. 15:14
lizmat yes... we do... :-)
jnthnwrthngtn And that means we iterate it in many orders
(thanks to randomization)
lizmat so maybe hashes are not the right thing for that ? 15:15
jnthnwrthngtn Meaning that we produce many possible permutations of callsite (since those have to commit to an order)
I think we "just" need to iterate them in a stable order
I mean, flattening hash arguments is something we do all over
lizmat yeah, but unless you refer to %_ explicitly, that's all under the hood, no? 15:16
jnthnwrthngtn Well, the case here is that BUILDALL slurps up the args, and then calls the appropriate BUILD/TWEAK with them flattened 15:18
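The raw ingredient of this is easy to see from Raku itself — a minimal demo (nothing MoarVM-specific in it): the unsorted key order typically differs from process to process, and each distinct order that gets flattened into a call becomes its own callsite to intern:

    my %named = nick => 'lizmat', hour => 8, minute => 2, text => 'hi';
    say %named.keys.join(' ');        # iteration order: varies per run
    say %named.keys.sort.join(' ');   # sorted: one canonical order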
Nicholas: If one wants a more consistent order when iterating a hash, what is the right thing to do? 15:19
Nicholas "Doctor doctor, it hurts when I do this." "Well, don't do that then". 15:20
Um, I don't see a good answer to this, other than sort the keys
jnthnwrthngtn Yeah, I feared that would be the answer
Nicholas however, sort them on address
not values
just sort the pointers
jnthnwrthngtn ?
Really?
Nicholas well, OK
jnthnwrthngtn I suspect we don't intern strings hard enough
Nicholas will different hashes arrive with the same keys? (but different copies of the values)
actually, if the answer is "different copies of the same value" then IIRC the 64 bit hash value is the same for the same string value (in a given interpreter run) 15:21
jnthnwrthngtn Different copies of the values, but in this case I suspect deserialized from JSON, so the strings will be all over the place
Nicholas so likely sort *those* first, and then tie break on actual string vlaue
jnthnwrthngtn Hash values of strings can be trusted?
Uh, that's a bad question 15:22
Nicholas they have to be deterministic (for a given interpreter run) else how do you find the key in the hash? :-)
lizmat jnthnwrthngtn: actually, no deserialization from JSON, but from colabti style log files
jnthnwrthngtn Hash values of strings are independent of any given hash?
lizmat *style
jnthnwrthngtn Nicholas: Duh, good point :)
lizmat but indeed.. with strings all over the place
jnthnwrthngtn Not the smartest bear today...
Nicholas hash value will be a function of 1) hash algorithm (ie siphash, until we change that) 2) the 64 bit per-process salt for all hashing 3) the codepoints of the string 15:23
lizmat basically a lot of substr on the log file
jnthnwrthngtn OK, and for a given execution of the VM, 1 and 2 are constant, so it's only 3
Nicholas so, possibly even just sort on 64 bit hash value and if two happen to collide, then, oh well, that's two variants of the same thing interned
or do the job properly...
jnthnwrthngtn Yeah, it'll be more expensive to take care of the collisions than it will be to have the dupe interns 15:24
Nicholas but IIRC when hacking on this stuff last year, my code was buggy and I *forgot* to check the actual strings
and we passed all spectests
jnthnwrthngtn 64 bits is enough for anybody
Nicholas ie our 64 bit hash values are good enough that we "rarely" generate collisions
jnthnwrthngtn Ah, and: I also see a speedup with --degree=8 over --degree=1 with hash randomization disabled 15:26
OK, another cup of tea and I'll try to implement this 15:28
lizmat waits in great anticipation :-) 15:29
Nicholas if these are MVMString objects found as keys in a hash, then their s->body.cached_hash_code will always be valid and computed already
hence the interface in MVM_string_compute_hash_code() might be a bit less than optimal 15:30
and "just cheating and breaking encapsulation inside the function passed to qsort()" might be the best plan
nine If the hashing algorithm is any good, I'd be surprised to see a collision of those 64 bits just about ever 15:35
jnthnwrthngtn Nicholas: Turns out if I used qsort from the C standard library I can't reach the tc anyway, so I just grabbed the cached hash value out :) 16:00
Ah, that's what you realized too and I missed it
The change helps, currently running spectest 16:01
lizmat jnthnwrthngtn: all of this doesn't make me understand why the performance of --degree=15 and --degree=1 is so similar 16:06
whereas before new-disp it wasn't? 16:07
jnthnwrthngtn lizmat: Hash randomization caused a huge blow-up in the number of possible callsite shapes. With new-disp we now try to intern those so we can use them as specialization keys. 16:11
The first step to seeing if we already have that callsite is a search of all those with the same arity to see if any match.
So the more you have, the longer that search gets 16:12
lizmat I get that... and I guess that search has a lock on it?
jnthnwrthngtn The variation has a second effect: it means that we don't detect the possibility to use an individual dispatch program.
And so we end up slow-pathing those (re-recording them every time) 16:13
Which forces an intern even if the one for flattening is an "if you can"
lizmat I understand all that... but wouldn't that just mean using *more* CPU
jnthnwrthngtn And...you guessed it, the intern cache has a lock on it
lizmat right, ok
that explains 16:14
jnthnwrthngtn That explains the huge increase in system time too
(All the lock acquisitions)
lizmat feels to me that using a lock for this wouldn't scale very well in the future? 16:15
jnthnwrthngtn What we could do on top of the fix I've already done is move the lookup in the intern cache out of the lock section
lizmat I mean if the lookup could be done lockless :-)
jnthnwrthngtn It can but then you have to repeat it after you acquire the lock.
Well, or some alternative mitigation of that race 16:16
lizmat some use of cas ?
jnthnwrthngtn Well, you could use an atomic integer that you read before looking, and then if it's the same after you acquire the lock you know the cache didn't change under you
That's probably the way to go 16:17
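A rough Raku rendering of that idea, as a sketch only: lookups take no lock and additions are serialized, with an immutable List snapshot standing in for MoarVM's atomic counter (its identity is the "did anything change" check), since a mutable hash would not be safe to read lock-free:

    my $lock  = Lock.new;
    my $known = ();   # immutable snapshot, replaced rather than mutated

    sub intern(Str $name) {
        my $snap = $known;                       # lock-free snapshot read
        .return with $snap.first(* eq $name);    # fast path: no lock
        $lock.protect: {
            unless $known === $snap {            # cache changed meanwhile?
                .return with $known.first(* eq $name);
            }
            $known = (|$known, $name);           # publish a new snapshot
            $name
        }
    }

    say await (^8).map({ start intern(<alpha beta gamma>.pick) });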
Geth MoarVM: e66404a859 | (Jonathan Worthington)++ | src/core/args.c
Ensure stable order of flattened named args

Hash randomization means that the iteration of hash keys varies from hash to hash. In the context of a flattened hash of arguments, this means that a given set of named arguments could appear in many orders upon iteration. Previously, we built up the callsite arg names list in whatever order we got from hash iteration. That means we're liable to ... (6 more lines)
16:22
lizmat tries 16:23
jnthnwrthngtn Yeah, over the execution of that program, even with the fix, it looks up 191,986 callsites, but only adds 2160 new ones 16:28
m: say 191986 - 2160 16:29
camelia 189826
jnthnwrthngtn That's quite a lot of saved lock acquisitions
lizmat yup...
preliminary tests: degree=1 goes from 28.15 to 3.43 16:30
preliminary tests: degree=15 goes from 30.39 to 1.785
lizmat is a lot happier camper 16:31
lizmat attempts a full load
yup, eating CPU like crazy now :-) 16:32
jnthnwrthngtn nom nom nom
lizmat less than a minute now to load all of the logs :-)
clocking in at about 5G memory :-) 16:33
lizmat perhaps time to do a MoarVM bump? 16:42
Altai-man we can 16:43
jnthnwrthngtn I've got another patch spectesting now for reducing locking on the intern cache 16:47
Geth MoarVM: f7d6bc6149 | (Jonathan Worthington)++ | 2 files
Avoid locking on intern cache lookup

We only really need to take the lock in order to serialize additions. Rather than always searching again to see if the callsite was added, use a total count of interns as a cheaper validation that nothing changed between our search and the lock acquisition.
16:53
Altai-man jnthnwrthngtn, want another one before taking a good rest?
jnthnwrthngtn Altai-man: Hm, maybe...what is it?
Altai-man jnthnwrthngtn, "Invocant of method 'raku' must be an object instance of type 'Mu', not a type object of type 'Any'" which started to happen on new-disp. Alas, there is no nice golf as the module does funky things with MOP. Want steps to reproduce? 16:55
jnthnwrthngtn Um...no, I'd prefer a golf, tbh. 16:58
Altai-man aha, I'll try to make one then 16:59
jnthnwrthngtn What kind of bug is it, though? Spesh-sensitive?
Or still there under MVM_SPESH_DISABLE=1?
Altai-man still there when spesh is disabled 17:00
any other flags I should try?
jnthnwrthngtn No, if it's still there when spesh is disabled then that's already ruled out all of spesh :) 17:01
That will probably make it much easier to golf
Altai-man well, the bug is in cro-ldap, tests explode in the parser and it relies on the meta generator, so all of this is a bit entangled. 17:04
jnthnwrthngtn This is a recent regression, given we didn't see it in blin before? 17:06
Altai-man the module is not in the ecosystem, so not something recent, alas 17:06
oh no, github.com/rakudo/rakudo/issues/4570
jnthnwrthngtn, I'll try to golf it down, anyway 17:07
jnthnwrthngtn Yeah, I'd appreciate others doing golfing. I've done so much of it over the last weeks/months.
I don't think that's sustainable for me, nor is the assumption that I'll do it good for others getting better at it. 17:08
jnthnwrthngtn dinner & 17:38
dogbert17 hmm, this is the kind of problem nine++ tends to eat for lunch :) 18:48
Altai-man and I have a golf 18:57
gist.github.com/Altai-man/f2e7816a...d5e72f283c 18:59
Altai-man m: class ASNType { has $.type is rw; }; my $new-type = Metamodel::ClassHOW.new_type(name => 'LDAPMessage'); my $attribute-type = ASNType.new(type => Int); my $attr = Attribute.new(name => '$!protocol-op', type => $attribute-type.type, package => $new-type, :has_accessor); $new-type.^add_attribute($attr); $new-type.^compose; say $new-type.new; 19:05
camelia Invocant of method 'raku' must be an object instance of type 'Mu', not
a type object of type 'Int'. Did you forget a '.new'?
in block <unit> at <tmp> line 1
jnthnwrthngtn I like short golfs and I cannot lie... 21:02
[Coke] ... did you already fix it?? 21:12
jnthnwrthngtn Yes :)
[Coke] www.youtube.com/watch?v=PSZy6lGgOcI 21:13
nine dogbert17: I might eat those for lunch, but jnthnwrthngtn++ eats those for a quick little snack in between 21:16
dogbert17 nine: you're up late 21:21
jnthnwrthngtn Wonder if GC debug options and/or a small nursery might narrow github.com/rakudo/rakudo/issues/4570 down...
dogbert17 jnthnwrthngtn: I ran with a 24k nursery 21:22
nine dogbert17: indeed. A friend came over and now I'm waiting till the bathroom's free
jnthnwrthngtn dogbert17: Ah, OK 21:24
Well, maybe I'll take a look tomorrow
dogbert17 I'll try a bit more and see if I can get better info 21:27
jnthnwrthngtn Nice, thanks 21:34
sena_kun jnthnwrthngtn++ # outstanding! 22:09
sena_kun also thanks for nudging me to golf it, it was kinda fun 22:10