IRC logs at
Set by AlexDaniel on 12 June 2018.
00:06 lucasb left 02:10 Kaiepi left 02:12 Kaiepi joined 02:42 squashable6 left 02:43 squashable6 joined 02:55 bonsaikitten left 06:53 domidumont joined 07:18 brrt joined 07:27 domidumont left 07:38 squashable6 left 07:40 squashable6 joined 07:48 patrickb joined
brrt \o 07:50
nwc10 o/ 07:54
07:59 xiaomiao joined 08:15 sena_kun joined
sena_kun 08:35
brrt ohai sena_kun 08:37
sena_kun o/ 08:50
09:00 zakharyas joined 09:25 sena_kun left 09:29 brrt left
timotimo o/ 10:11
10:45 zakharyas left 10:54 zakharyas joined 11:02 zakharyas left 11:07 zakharyas joined 12:13 zakharyas left 12:40 sena_kun joined 13:36 lizmat left 13:38 lizmat joined 13:49 dogbert11 joined 13:51 dogbert17 left 13:54 zakharyas joined 14:43 sena_kun left 14:44 lucasb joined 14:58 sena_kun joined 15:33 MasterDuke left 16:18 patrickb left 16:44 sena_kun left 16:52 zakharyas left 16:54 zakharyas joined 16:58 sena_kun joined 17:06 patrickb joined 17:09 domidumont joined 18:20 MasterDuke joined 18:37 zakharyas left 18:43 sena_kun left 18:57 sena_kun joined 20:18 zakharyas joined 20:27 domidumont left 20:33 lucasb left 20:44 sena_kun left 20:58 sena_kun joined 21:16 moritz left, zakharyas left 21:18 moritz joined 22:39 MasterDuke left 22:43 sena_kun left 22:46 MasterDuke joined
MasterDuke any idea why `strace -c perl6 -e 'say "Hello world"'` reports ~320 calls to brk (half of the total number of syscalls)? 22:47
japhb MasterDuke: Just guessing, but perhaps libc malloc wants more heap space? 22:50
We do so much allocation, that I have to assume we're going to push the underlying libc allocation to demand more from the OS pretty regularly. Mind you, we could probably arrange to request more from the OS all at once .... 22:51
MasterDuke is that something easy to optimize?
japhb MasterDuke: I dunno about easy, but I assume some well chosen mmap and mprotect and such calls would cut that down quite a bit. 22:52
MasterDuke know how to figure out where they would go?
hm, guess something like heaptrack could be useful here... 22:53
japhb I'm not going to do much better at that question than just reading the relevant code ...
22:59 sena_kun joined
japhb Successfully nerd sniped -- a little searching tells me that Linux malloc(3) uses mmap for allocations larger than MMAP_THRESHOLD (by default, 128KiB), and brk for smaller allocations. So perhaps this represents space taken by lots of small mallocs. 22:59
MasterDuke looks like the place with the single largest number of allocations is 23:01
67k of them. but only 3.3mb allocated 23:02
timotimo: you've just been playing with serialization/deserialization. any thoughts on how to malloc in larger chunks? 23:08
japhb The classical answer is "allocate a pool, and then suballocate from that" -- which is exactly what our GC does. But since GC'ables can move, and IIRC Gen2 things have some overhead to pay attention to inter-generational pointers, not all structures can use the GC directly. But things that are only allocated once and never freed could certainly use a simple bump-the-pointer pool allocation. 23:11
MasterDuke that's what i understand our fixed_size_allocator to be. wonder if there's a reason it isn't used for deserialization? 23:15
japhb Wonder if it has anything to do with the laziness of deserialization 23:21
Or perhaps that bit (allocation during deserialization) just never got an optimization pass.
Actually, do you know that all the brk calls happen in deserialization for sure, or just an educated guess?
MasterDuke just a guess based on heaptrack telling me that was where the most number of allocations happened 23:26
heh. some very rough hacking has ./src/6model/reprs/VMArray.c using the MVM_fixed_size_* functions in enough places that `perl6 -e 'say "Hello world"'` doesn't segv 23:35
didn't change the number of brk calls
samcv releasing 23:36
MasterDuke huh. heaptrack now reports a ton fewer calls to allocation functions... 23:37
hm, then what's a good way to tell where those brks happen? 23:39
jnthn Ultimately, though, the fixed size allocator will call malloc when it wants another page for the pool
So it may well be fewer calls to malloc, while still asking for the same-ish amount 23:40
And I think that's what drives the brk calls
same-ish amount of memory overall, I mean
23:41 patrickb left
jnthn If we want to reduce the amount of brk calls, we probably need to ask for a bigger chunk up front 23:43
MasterDuke hm. so for the fixed size allocator we could possibly create more buckets up front? anything we could do otherwise?
jnthn Probably, has to be careful not to end up over-allocating. I mean, if we can predict how much we'll need, we can probably do the brk call in advance 23:45
MasterDuke well, even `strace -c perl6 -e ''` calls brk ~300 times 23:46
jnthn I guess a really hacky way, if we know the amount Rakudo ends up asking for whatever you do, is to stick it into the code generated for the perl6 executable ;)
MasterDuke MVM_malloc is only called ~288 times by MVM_fixed_size_realloc, so it may not be the problem 23:47
samcv nine, nice automating more of the MoarVM release ++
well the one i noticed was not having to edit index.html, but i saw some other changes in the logs
MasterDuke jnthn: ha, that seems kind of hacky, but easy to do... 23:48
samcv though i have not tried the tools/release script
MasterDuke add an argument to MVM_vm_create_instance()? 23:50
jnthn MasterDuke: Probably neater as another function 23:51
MasterDuke: Also so as not to bust stuff for any existing embedders, who will surely call that
MasterDuke something like MVM_vm_set_size()? 23:52
takes an MVMInstance and a size? 23:53
samcv release is done
guess the bot is gone 23:54
bed time. ping me if there's any releasy things needed to be done, but everything should be good 23:58
i did not update the wikipedia page fyi 23:59