00:37 psch joined, [Coke] joined, moritz joined 01:48 ilbot3 joined 05:25 domidumont joined 05:31 domidumont joined 05:50 domidumont joined 05:57 domidumont joined
domidumont timotimo: the spawned process is defunct when moar hangs on the run command. 06:30
brrt[work] good * #moarvm 07:07
nebuchadnezzar hello 08:10
domidumont: I'll be off for two weeks, I'm going to 2016.opennebulaconf.com/ and take the opportunity to visit Barcelona and Valencia ;-) 08:11
domidumont nebuchadnezzar: ok. Enjoy your time off. :-) 08:39
09:09 geekosaur joined
[Coke] brrt[work]: o/ 13:18
brrt[work] hi [Coke] 13:50
brrt[work] afk 14:27
14:54 FROGGS joined 15:12 dalek joined
FROGGS o/ 15:20
domidumont o/ 15:43
16:22 domidumont joined
domidumont FROGGS: any idea why gcc is a zombie process when moar is hung? 17:26
18:04 LLamaRider joined
FROGGS domidumont: no, no idea 18:14
domidumont: I also don't know how to continue debugging...
18:43 harrow joined 18:46 BinGOs_ joined 18:48 BinGOs joined 18:49 FROGGS_ joined 18:51 eviltwin_b joined
domidumont FROGGS: me neither... 18:51
18:58 JimmyZ joined 19:30 dalek joined 19:56 vendethiel joined 20:29 stmuk joined 20:38 brrt joined
brrt it's in a zombie state, iirc, because moar spawned it, it has exited, but moar hasn't yet reaped it 20:38
reaping involves wait()-ing
also, if it is hung, start gdb, and attach to it using gdb attach 20:39
timotimo so how in the hell is libuv missing it?
brrt that will give you access to the live heap and threads
that, i don't know :-)
can't be a knowitall and actually know everything, now can i 20:40
:-P
timotimo of course
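(A minimal C sketch of the reaping brrt describes, illustrative only: an exited child stays a zombie/<defunct> until its parent wait()s for it. MoarVM's actual child handling goes through libuv, not hand-rolled waitpid calls like this.)

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Reap any children that have already exited; until the parent does
     * this, an exited child shows up as <defunct> (a zombie). */
    static void reap_children(void) {
        int status;
        pid_t pid;
        /* WNOHANG: return immediately if no child has exited yet */
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
            if (WIFEXITED(status))
                printf("child %d exited with %d\n", (int)pid, WEXITSTATUS(status));
        }
    }

(To inspect the hung parent itself, gdb -p <pid> from the shell, or attach <pid> at the gdb prompt, attaches to the live process and gives access to its heap and threads, as mentioned above.)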
brrt finds his personal laptop to be faster than his work macbook, and is surprised about it 20:41
timotimo huh 20:43
but maybe it has more battery life
or something
20:51 Dunearhp joined
brrt it does, yes 20:51
and it is more silent
but speed is also important
timotimo if only there was a way to tell a big beefy machine what to do and it could do whatever you want for you 21:02
brrt yes, over the internet or something like that
timotimo yeah, or maybe just over the phone or something
maybe tell it via a telegram
21:08 Dunearhp joined
brrt running classification algorithms in pl/pgsql; not the cleverest of ideas 21:18
timotimo i wonder when i should look out for more stuff being compiled than last time i looked (with the new jit) 21:33
japhb .ask brrt Classifying what? 22:00
SIGH
timotimo i believe brrt backlogs
at the very least this channel completely
jnthn would be fine with us having a message bot here, fwiw 22:05
timotimo right 22:08
japhb I'm not a chanop. Can someone /invite yoleaux ?
Or convince $Zoffix to bring one. :-)
geekosaur also not a chanop, sorry 22:15
jnthn grmbl, it won't let me op myself 22:24
timotimo jnthn: i've thought a few times about how we're going to do the "grab a private chunk from the fsa" thing for fast per-thread allocations 22:25
jnthn timotimo: Did you read the paper on the hoard allocator? 22:26
timotimo oh, no
jnthn It's good food for thought at a minimum :)
timotimo i was just wondering, are individual threads 100% owners of the lexical environments and locals that go with frames?
because frames can still escape and become GC-managed, right?
jnthn Sure
And locals too 22:27
Uh, locals can as well as lexicals I mean
Consider a continuation taken in one thread and resumed in another
timotimo oh, that is because all threads are stopped when gc happens
and items that belong to another thread get passed to that thread's inbox, yeah? 22:28
that's what that is for?
jnthn Yeah
At GC time, we take the set of threads that are alive and divide up the full set of threads among them
Hm, awake is better than alive I guess
timotimo i suggest "not blocked" 22:29
jnthn Yes, even better :)
So the inbox is processed by whichever thread is GCing the thread in question.
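(A rough sketch, not MoarVM's actual scheduler code, of the division jnthn describes: each non-blocked thread takes a share of all threads, blocked ones included, and collects them, which is when that thread's inbox of cross-thread objects gets processed. Thread and assign_for_collection are hypothetical stand-ins.)

    #include <stddef.h>

    typedef struct Thread Thread;  /* stand-in for MoarVM's per-thread context */

    /* hypothetical helper: collector GCs victim's nursery and drains its inbox */
    void assign_for_collection(Thread *collector, Thread *victim);

    /* Hand each non-blocked ("collector") thread a share of all threads. */
    static void divide_gc_work(Thread **all, size_t n_all,
                               Thread **collectors, size_t n_collectors) {
        size_t i;
        for (i = 0; i < n_all; i++)
            assign_for_collection(collectors[i % n_collectors], all[i]);
    }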
timotimo OK. we have freelists for these pages, right? i'd need to check if the whole page has become free by going through the freelist, or do we count how many things are still allocated in there? 22:30
jnthn No, you'd have to check it's entirely a freelist
timotimo OK
jnthn We could of course add metadata on that
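(A hypothetical helper, not in MoarVM, for the check being discussed: with no per-page counter, deciding whether a page is entirely free means walking the freelist and counting how many entries fall inside that page.)

    #include <stddef.h>

    typedef struct FreeListEntry { struct FreeListEntry *next; } FreeListEntry;

    /* Count freelist entries that lie within [page, page + page_size); the
     * page is entirely free once every one of its slots is on the freelist. */
    static int page_is_entirely_free(FreeListEntry *freelist, char *page,
                                     size_t page_size, size_t item_size) {
        size_t free_slots = 0;
        FreeListEntry *e;
        for (e = freelist; e != NULL; e = e->next) {
            char *p = (char *)e;
            if (p >= page && p < page + page_size)
                free_slots++;
        }
        return free_slots == page_size / item_size;
    }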
timotimo i'm just doing a broad survey of the problem to see if maybe i'd be able to pull it off easily
jnthn I'm not sure I'd call it "easy" 22:31
timotimo do you see difficulties for this that i might miss?
jnthn Well, I suggest at least skimming the hoard paper
It does iirc nicely make the point that if you're not careful, you can get into trouble with producer/consumer patterns 22:32
And discuss how they mitigate that
timotimo oh, because the consumer will get all these objects passed that now belong to another thread
and they'll come to die in that other thread 22:33
jnthn Yeah
You could end up with one thread having tons of empty pages
Or similar
timotimo because we won't compact, yeah
so we're already going to use these private pages for everything, not "only" lexical environments and locals 22:34
jnthn I remember reading the paper when pondering the problem and it convinced me that I needed to think harder about it :)
timotimo OK, that's a good warning :)
jnthn Or put better, it convinced me that it's easy to get some bad failure modes with naive solutions.
The current failure mode is "argh slow", which is desirable to fix, but not if the alternative is "argh some kinds of program eat infinite memory" :) 22:35
On thread locality though, I'd assume at the moment that everything can escape between threads
At some point in the future we'll have EA (escape analysis)
timotimo right 22:36
jnthn And be able to do better there
Also
We already do a basic kind of EA
timotimo right, for frames
jnthn In that we allocate frames on the callstack region
timotimo yeah
jnthn And we can extend that without too much trouble to include the lexicals, iirc
Did the analysis on that a while ago
timotimo if we end up with a bunch of pages that are mostly empty belonging privately to a thread, they could go back into the global pool
jnthn Yeah
timotimo but before i speculate more, i'll read that paper
jnthn Yeah, I found it useful 22:37
Though mostly in that it convinced me to put the problem off rather than hurt our userbase with a half-assed solution :)
timotimo fair enough 22:38
maybe i'll go through with the "allocate two things at once with the same lock" idea and do some measurements of how that feels
jnthn (And no, not trying to discourage you from working on it, just warning you it's not trivial. :))
timotimo i appreciate that you're not just letting me run into an open blade ;)
(not sure if that makes sense to say in english) 22:39
jnthn It's...not an idiom I've heard of, but I can parse it :) 22:40
timotimo "in's offene messer laufen lassen" in german
dalek MoarVM: f2369b4 | timotimo++ | src/io/dirops.c:
ensure errno is grabbed before MVM_free is called
22:53
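(The pattern behind that commit, sketched in plain C rather than the actual src/io/dirops.c code: free(), and hence MVM_free(), may clobber errno, so the error code has to be saved before any cleanup runs.)

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>

    int main(void) {
        char *path = strdup("/no/such/dir");
        struct stat st;
        if (stat(path, &st) < 0) {
            int saved_errno = errno;   /* grab errno first ...              */
            free(path);                /* ... since free() may overwrite it */
            fprintf(stderr, "stat failed: %s\n", strerror(saved_errno));
            return 1;
        }
        free(path);
        return 0;
    }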
23:21 geekosaur joined
timotimo neat, hoard will re-use completely freed pages for any other size that may exist 23:25
i'm not actually sure if our gen2 pages are each of the same size, though, or if they depend on the size of the objects that go in them
looks like to achieve the sorting into bins ordered by "freeness" we'll have to have an "amount of allocated objects" piece of metadata per page 23:27
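(One way that metadata could look: a hypothetical per-page counter, not a current MoarVM structure, so "how empty is this page" becomes an O(1) lookup instead of a freelist walk, which is what sorting bins by freeness would need.)

    #include <stddef.h>

    /* Hypothetical per-page bookkeeping for a fixed-size-allocator page. */
    typedef struct PageInfo {
        char   *start;      /* first byte of the page              */
        size_t  item_size;  /* size class served by this page      */
        size_t  capacity;   /* number of slots in the page         */
        size_t  allocated;  /* live objects currently on the page  */
    } PageInfo;

    static void page_note_alloc(PageInfo *p) { p->allocated++; }
    static void page_note_free(PageInfo *p)  { p->allocated--; }
    static int  page_is_empty(const PageInfo *p) { return p->allocated == 0; }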
23:35 avar joined