github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
01:42 evalable6 left, linkable6 left, linkable6 joined 01:44 evalable6 joined 01:48 Altai-man_ joined 01:50 sena_kun left 03:49 sena_kun joined 03:51 Altai-man_ left 05:48 Altai-man_ joined 05:51 sena_kun left
Geth_ MoarVM: usev6++ created pull request #1268:
Avoid null pointer in backtrace and avoid segfault
06:17
MoarVM: a4689f90a8 | (Christian Bartolomäus)++ | src/core/exceptions.c
Avoid null pointer in backtrace and avoid segfault

Fixes github.com/rakudo/rakudo/issues/3605.
06:42
MoarVM: 838e6836e0 | niner++ (committed using GitHub Web editor) | src/core/exceptions.c
Merge pull request #1268 from usev6/Rakudo#3605

Avoid null pointer in backtrace and avoid segfault
linkable6 RAKUDO#3605 [closed]: github.com/rakudo/rakudo/issues/3605 [MoarVM][SEGV] Segfault when trying to print last element of nqp::backtrace()
07:49 sena_kun joined 07:50 Altai-man_ left 07:51 zakharyas joined
MasterDuke jnthn: any idea why every single bind_key call would go MVM_fixed_size_alloc->alloc_from_global->alloc_slow_path? it's only allocating sizeof(MVMHashEntry), which is just 48 07:52
nine MasterDuke: the size is irrelevant. If there's nothing in the free list it has to go through the slow path 08:16
MasterDuke nine: ah, true. does the free list ever get repopulated?
nine MasterDuke: sure, once we free something
But since compilation seems to be allocating a lot of hashes and hanging on to them, we're freeing nowhere near as many entries as we're allocating 08:17
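A minimal C sketch of the pattern nine describes above: allocation pops from a free list when possible, falls back to a slow path when the list is empty, and only frees repopulate the fast path. Names and structure are simplified for illustration; this is not MoarVM's actual MVM_fixed_size_alloc implementation.

    /* Minimal sketch of a fixed-size allocator with a free list. The names
     * echo the discussion (fixed_size_alloc, alloc_slow_path, 48-byte hash
     * entries) but this is a simplified illustration, not MoarVM's code. */
    #include <stdio.h>
    #include <stdlib.h>

    #define ENTRY_SIZE 48                    /* e.g. sizeof(MVMHashEntry) above */

    typedef struct FreeNode {
        struct FreeNode *next;               /* freed entries are chained through
                                                their own memory */
    } FreeNode;

    typedef struct {
        FixedSizeAllocTag:;                  /* (no tag needed) */
        FreeNode *free_list;                 /* fast path: pop from here */
        size_t    slow_allocs;               /* how often we fell back to malloc */
    } FixedSizeAlloc;

    /* Slow path: no freed entry available, so grab fresh memory. */
    static void *alloc_slow_path(FixedSizeAlloc *al) {
        al->slow_allocs++;
        return malloc(ENTRY_SIZE);
    }

    void *fixed_size_alloc(FixedSizeAlloc *al) {
        if (al->free_list) {                 /* fast path: reuse a freed entry */
            FreeNode *node = al->free_list;
            al->free_list  = node->next;
            return node;
        }
        return alloc_slow_path(al);          /* empty free list => slow path */
    }

    /* Freeing pushes the entry back, repopulating the fast path. */
    void fixed_size_free(FixedSizeAlloc *al, void *p) {
        FreeNode *node = p;
        node->next     = al->free_list;
        al->free_list  = node;
    }

    int main(void) {
        FixedSizeAlloc al = { NULL, 0 };
        void *a = fixed_size_alloc(&al);     /* slow: list starts empty */
        void *b = fixed_size_alloc(&al);     /* slow again: nothing freed yet */
        fixed_size_free(&al, a);
        void *c = fixed_size_alloc(&al);     /* fast: reuses a */
        printf("slow allocations: %zu\n", al.slow_allocs);   /* prints 2 */
        free(b); free(c);                    /* c is the block a came from */
        return 0;
    }

If a workload allocates far more entries than it frees (as compilation does with hash entries), the free list rarely has anything to hand out, so most requests end up on the slow path.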
MasterDuke yeah, there are 9m calls to bind_key
well, actually about twice that, those are just the ones that come directly from interp.c 08:20
fwiw, the majority of the bind_keys are called via QRegex.moarvm:MATCH, QASTNode.moarvm:symbol, QAST.moarvm:register_lexical, NQPCORE.setting.moarvm:prepare-hash 08:22
lizmat I was looking at some NQP code the other day, and was surprised to see how many times hashes were being used instead of lists with constant indices 08:53
%h<foo> = 42 instead of foo := 0, @l[foo] = 42
MasterDuke i'd certainly hope that in a Perl family language hashes would be fast, but they're pretty much guaranteed to have a little more overhead than lists. maybe in the guts it would make sense to convert some of those hashes to lists 08:56
nwc10 "those who do not remember pseudeohashes are doomed to re-implement them" ? :-) 08:58
and "those who do not remember ithreads ..." ?
lizmat nwc10: point taken...
nwc10 however, I think rakudo stands a fighting chance of making pseudohashes actually work the way they had been intended
lizmat but perhaps those hashes could become objects
nwc10: my understanding of P6Opaque is that they are really working pseudohashes 08:59
nwc10 the lesson from pseudohashes was that (for Perl 5) the runtime cost of checking 'is this really a hash?' outweighed the benefit of the internal array representation for pseudohashes
lizmat: I don't know anything of the details of P6Opaque 09:00
rakudo stands a chance of doing much better because it has a proper compiler that can analyse things
and then MoarVM's spesh gives it a second chance
two ways to win!
hence the smiley - I don't think that it really applies here 09:01
lizmat I was not proposing creating a layer
just that in the time when a lot was unsure and still being developed 09:02
using hashes was a quick way to get things going
now that things have solidified a bit more, one could look at cases where it would make sense to replace hashes with arrays and constant indices
09:48 Altai-man_ joined 09:50 sena_kun left 09:56 zakharyas1 joined 09:58 zakharyas left 10:38 zakharyas joined
jnthn Or even better, classes with attributes. 10:39
Which are faster to access than arrays.
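A rough C analogy for why attribute access can beat array indexing: a field of an object with a fixed layout is a load at a compile-time offset, while a dynamic array access typically pays an extra indirection plus a bounds check. This is an analogy only, not a description of P6opaque's actual internals; the names below are made up.

    /* Analogy: fixed-layout attribute access vs dynamic array indexing.
     * Not P6opaque itself, just the shape of the cost difference. */
    #include <stddef.h>
    #include <stdint.h>

    /* "Class with attributes": each field sits at a compile-time offset. */
    typedef struct {
        void   *match;
        void   *orig;
        int64_t from, to;
    } NodeAttrs;

    /* "Dynamic array": elements behind a pointer, with a size to check. */
    typedef struct {
        void  **elems;
        size_t  size;
    } DynArray;

    static inline int64_t attr_to(const NodeAttrs *n) {
        return n->to;                        /* one load at a fixed offset */
    }

    static inline void *array_at(const DynArray *a, size_t i) {
        if (i >= a->size)                    /* bounds check... */
            return NULL;
        return a->elems[i];                  /* ...then an extra indirection */
    }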
10:40 zakharyas1 left
MasterDuke that kind of blows my mind 10:41
jnthn But (spoiler from the GPW talk I had to skip and still need to record) in terms of Rakudo's actions/world, I'd not spend too much time optimizing there right now. I'm far enough into my poking around with macro preparations to know that actions/world as we know them are probably going to change almost unrecognizably. 10:42
And sweep away a lot of the hash access on the way
Altai-man_ is intrigued 10:43
MasterDuke in Rakudo or NQP?
jnthn Rakudo
I'll record it this week, all being well 10:44
MasterDuke i'd assumed lizmat meant nqp code in NQP, not nqp code in Rakudo. would work there still help?
jnthn Sure 10:46
lizmat MasterDuke: I was looking both at the implementation of regexes in NQP and at Rakudo's BOOTSTRAP
if I recall correctly :-)
jnthn But much of BOOTSTRAP is all in a BEGIN block and so runs once at compile time? 10:47
Though yeah, the multi-dispatcher is in there I guess and it could benefit from less hash usage
lizmat yeah, that's the part I was talking about :-)
jnthn Yeah, I expect there's wins to be had there 10:48
mst "set phasers to optimise" ? 10:49
MasterDuke jnthn: but wouldn't fewer calls to alloc_slow_path() help compilation?
jnthn What proportion of calls end up there? 10:54
Glancing at the source (while doing something else...) it seems that there may be a win in handing back a chunk of entries at once somehow. 11:06
MasterDuke jnthn: for bind_key calls while compiling rakudo, most 11:25
11:49 sena_kun joined 11:50 Altai-man_ left 12:20 MasterDuke left 12:26 MasterDuke joined
MasterDuke jnthn: just when compiling CORE.c: 112707922 calls to fixed_size_allocate. 5804710 just malloc, 89643803 hit the "per-thread free list", 10854360 take the fast path in alloc_from_global, 6405049 take the slow path in alloc_from_global 12:29
alloc_from_global fast path is the "global free list" 12:30
nine That doesn't sound all that bad? 12:47
MasterDuke no. but still seems a bit odd that all bind_key calls hit the slow path 12:52
i guess they just happen later in compilation and the fast resources are already used up? 12:53
jnthn MasterDuke: Well, the fast resources come from things being freed 12:54
m: say 6405049 / 112707922 12:55
camelia 0.0568287383
MasterDuke that seems like a harder thing to find out...what things aren't being freed that could/should be
jnthn Even in this program that's growing in memory size, it's still not a bad bet; only a bit over 5% miss taking from a free list 12:56
I don't think it's about things not being freed, it's just that if a program is building up a big data structure, then it's not going to satisfy its needs just by looking at already-freed memory. 12:57
MasterDuke m: say 6405049 / (112707922 - 5804710)
camelia 0.0599144673
jnthn We can do a bit better probably, by taking a page and chaining a free-list through it of some length, and giving it to the requesting thread so its next N requests don't need the global sync 12:58
nwc10 jnthn: (1) I don't know how "pro" malloc implementations do this sort of stuff, but 13:04
(2) just guessing - do we keep any sort of stats on recent slow path failures?
because it sort of sounds like (a) keep some sort of counter for slow path allocations (b) once it hits $threshold, then start doing the work of (and committing to) making a free list 13:05
(I assume that once it splits the page up with a free list, it never coalesces those bits of memory again, *and* 99% of them start out free (ie arguably wasted)) 13:06
but hey, this is garbage collection. > 75% of everything is wasted
timotimo i kind-of-sort-of wish we could make our qast tree datastructure smaller 13:39
i wonder how many qast nodes have only a single child all the time; a list could be saved for those, but it'll bloat the code a bunch 13:41
like, when we manipulate qast nodes' child lists we usually take the list and push into it 13:42
if there's only temporary lists being returned, so that everything can keep using the list API when going through child nodes, we'd lose those pushes 13:43
are hashes much bigger than lists? for qast nodes with not many annotations a linear probe of keys vs values through an object array could be fast enough, and if the size difference is drastic, that could be good 13:45
the increase in code may be far from worth it, though 13:46
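A sketch of the small-map idea timotimo floats above: keep annotation keys and values in two parallel arrays and probe linearly, which stays cheap while the count is small. This is a hypothetical alternative, not how QAST node annotations are currently implemented.

    /* Hypothetical small-map: parallel key/value arrays with a linear probe,
     * falling back to a real hash once it grows past a handful of entries. */
    #include <stddef.h>
    #include <string.h>

    #define MAX_SMALL 8

    typedef struct {
        const char *keys[MAX_SMALL];         /* annotation names */
        void       *values[MAX_SMALL];       /* annotation objects */
        int         count;
    } SmallMap;

    /* Linear probe through the keys; cheap while count stays small. */
    void *small_map_get(const SmallMap *m, const char *key) {
        for (int i = 0; i < m->count; i++)
            if (strcmp(m->keys[i], key) == 0)
                return m->values[i];
        return NULL;
    }

    /* Insert or overwrite; returns -1 when full, at which point a caller
     * would promote the node's annotations to an ordinary hash. */
    int small_map_set(SmallMap *m, const char *key, void *value) {
        for (int i = 0; i < m->count; i++)
            if (strcmp(m->keys[i], key) == 0) {
                m->values[i] = value;
                return 0;
            }
        if (m->count == MAX_SMALL)
            return -1;
        m->keys[m->count]   = key;
        m->values[m->count] = value;
        m->count++;
        return 0;
    }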
MasterDuke the heap profiler should be able to show if a small hash is smaller than the equiv list 13:47
right?
13:48 Altai-man_ joined 13:51 sena_kun left
timotimo you mean moar's heap analyzer? 14:16
MasterDuke yeah 14:20
but i guess you can just figure it out deterministically by looking at the various data structures in moar 14:21
timotimo for arrays, the growth heuristic will be interesting, for hashes how the keys tend to be distributed among the buckets, which will be randomized 15:11
15:49 sena_kun joined 15:51 Altai-man_ left
[Coke] jnthn++ # sweeping changes coming 17:03
17:15 zakharyas left 17:48 Altai-man_ joined 17:51 sena_kun left 18:07 SmokeMachine_ joined 18:08 SmokeMachine_ is now known as SmokeMachine 18:13 MasterDuke left 18:30 MasterDuke joined 18:51 brrt joined
brrt \o 18:51
speaking of sweeping changes...
(I'd think that during pandemic-time, sitting at home, I'd have time to work on the JIT, but alas...) 18:52
how do we feel about dumbing down our internal string representation
and leaving the clever structures to rakudoland 18:53
MasterDuke like the strands?
brrt yeah
so that moarvm can have an efficient, simple, non-polymorphic underlying implementation 18:54
and rakudo can do all the algorithmic cleverness
if our objects are cheap enough, which they should be, then I don't see why we shouldn't
lizmat I assume nqp, or really Rakudo ? 19:01
because if really Rakudo, I don't see how that could be an improvement ? 19:02
but would gladly be convinced otherwise :-)
brrt nqp is fine as well 19:06
but, not moarvm
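To make brrt's contrast concrete, here is a rough C sketch of a polymorphic, strand-capable string representation next to a single flat form. Both structs are illustrative simplifications, not MoarVM's actual MVMString layout.

    /* Two illustrative layouts, not MoarVM's MVMString. The first is
     * polymorphic: a string is either a flat buffer or a table of strands
     * referencing pieces of other strings, so every consumer branches on
     * which form it holds. The second keeps a single flat form only. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct PolyString PolyString;

    typedef struct {
        PolyString *source;                  /* piece of another string */
        size_t      start, length;
        uint32_t    repetitions;
    } Strand;

    struct PolyString {
        enum { FLAT_8, FLAT_32, STRANDED } kind;
        union {
            uint8_t  *flat8;
            uint32_t *flat32;
            Strand   *strands;
        } body;
        size_t length;
    };

    /* Every access pays a branch; the strand case (walking the strand table)
     * is omitted to keep the sketch short. */
    uint32_t poly_index(const PolyString *s, size_t i) {
        switch (s->kind) {
            case FLAT_8:  return s->body.flat8[i];
            case FLAT_32: return s->body.flat32[i];
            default:      return 0;          /* STRANDED: would walk body.strands */
        }
    }

    /* The flat-only form: trivially indexable and non-polymorphic, leaving
     * rope-like cleverness to code above the VM, as brrt proposes. */
    typedef struct {
        uint32_t *graphemes;
        size_t    length;
    } FlatString;

    static inline uint32_t flat_index(const FlatString *s, size_t i) {
        return s->graphemes[i];
    }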
lizmat And another Rakudo Weekly News hits the Net: rakudoweekly.blog/2020/04/06/2020-...spectives/ 19:12
nwc10 o/ 19:37
brrt lizmat++ 19:39
19:49 sena_kun joined 19:50 Altai-man_ left 20:13 zakharyas joined 20:19 brrt left 20:20 brrt joined 20:26 zakharyas left 20:29 zakharyas joined 20:55 zakharyas left 21:20 brrt left 21:48 Altai-man_ joined 21:50 sena_kun left 22:09 jjatria joined 22:18 jjatria left 22:20 jjatria joined 23:49 sena_kun joined 23:50 Altai-man_ left