github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
lizmat jnthn: www.reddit.com/r/perl6/comments/cj...rnal_code/ # not sure what the answer is there atm 09:13
nwc10 jnthn/nine: 10:36
aargh
jnthn/nine: paste.scsys.co.uk/585521 10:37
can't replicate on first attempt
network here is vile
I'm probably 90% AFK (thanks, packet loss) so please don't assume any prompt replies before Riga
nqp t/concurrency/02-lock.t ASAN barfage 10:38
pamplemousse o/ 12:31
nwc10 \o 12:40
timotimo o/ 13:14
nwc10 \o 13:21
ugexe would have to signal the thread to be killed to clean up / call uv_stop (which cannot be called from a different thread) 15:48
lizmat I can't help but think that it shouldn't be too hard to have each thread set up a CONTROL block, which would check for a certain CX exception value, and then throw a special exception to have the thread killed, just as it would have been if something had gone wrong ? 15:51
timotimo the important part is how to deliver the exception, i think?
lizmat and then somehow deliver that control message to the thread ?
yeah...
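A hedged sketch of the shape being floated here, with a polled flag standing in for the unsolved delivery step; CX::Kill, %kill-requested and the driver code at the bottom are invented for illustration, not existing Rakudo API:
```raku
# Hypothetical only: CX::Kill and %kill-requested are made-up names.
class CX::Kill is Exception does X::Control {}

my %kill-requested;                     # thread id => Bool; real code needs synchronization

sub checkpoint(--> Nil) {
    CX::Kill.new.throw if %kill-requested{$*THREAD.id};
}

sub do-some-work(--> Nil) { sleep 0.1 } # stand-in for the thread's actual job

my $t = Thread.start: :app_lifetime, {
    CONTROL {
        # handling the control exception (without .resume) exits the thread body
        when CX::Kill { note "thread asked to stop" }
    }
    loop { checkpoint; do-some-work }
};
sleep 0.3;
%kill-requested{$t.id} = True;          # "kill" it from outside
$t.finish;
```
The delivery problem timotimo points at remains: the thread only notices at a checkpoint it chooses to call, which is exactly the cooperative limitation discussed below.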
but how does Comma do that then ? 15:52
timotimo how do you mean?
ugexe i would have thought it would use signals, not an exception
timotimo signals are caught by the async i/o thread, though 15:53
lizmat Comma can suspend threads, can it not ?
ugexe each thread has its own event loop... wouldn't each thread thus handle its own signals? 15:54
timotimo ugexe: that's not how it's implemented 15:57
lizmat: it attaches the debugger
ugexe also I imagine it won't be super easy, since we use basic locking in e.g. PrecompRepository, which means it's *expecting* not to be interrupted (else a lock will be stuck locked) 15:58
although maybe that is unavoidable 15:59
lizmat maybe having it be cooperative would already help a lot
cooperative in the sense that you would need to call a sub that would do the blocking once instructed from elsewhere 16:00
so it wouldn't work for just any code
but it would be easy enough to let it cooperate ?
could be as easy as having an array of Locks, indexed by thread IDs, and just locking that externally 16:01
while the sub would just attempt to lock its thread's lock inside the sub (and then immediately release it when it got it?) 16:02
ugexe I'm not quite sure what you are suggesting... a type of Lock singleton? But remember Lock::Async does not have to be locked/unlocked by the same thread. 16:11
although i suppose that likely saves it from the issue of a killed thread leaving it locked forever 16:12
or at least it's recoverable 16:13
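For reference, that property of Lock::Async in miniature:
```raku
# Lock::Async, unlike Lock, is not bound to the acquiring thread:
my $l = Lock::Async.new;
await $l.lock;              # acquire on the main thread
await start { $l.unlock };  # release from a thread-pool thread: allowed
say "released on another thread";
```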
lizmat no, just a simple array of Locks
and a sub that would lock @locks[$*THREAD.id] 16:14
ugexe `my $lock = ...; sub foo { $lock.lock; do-work; $lock.unlock };` when killing the thread inside &do-work how does $lock get unlocked?
if they really wanted automatic unlocking they would use $lock.protect, so it would feel odd to automatically unlock a $lock.lock but only when inside a thread that is killed 16:16
of course if we could get rid of $lock.lock / $lock.unlock in the core it really becomes a library design issue 16:17
get rid of their use, not their implementation 16:18
then again that doesn't help with internal moarvm lock usage 16:20
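The auto-unlock difference ugexe describes, in miniature:
```raku
my $lock = Lock.new;

# .protect releases on every exit path, including a thrown exception:
try $lock.protect: { die "boom" };
$lock.protect: { say "lock was released despite the die" };

# a bare .lock stays held if control never reaches the .unlock:
$lock.lock;
# a thread killed right here would leave $lock locked forever
$lock.unlock;
```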
lizmat my Lock @suspend = Lock.new xx $max-threads; sub checkpoint(--> Nil) { LEAVE @suspend[$*THREAD.id].unlock; @suspend[$*THREAD.id].lock }; sub suspend($id --> Nil) { @suspend[$id].lock }; sub release($id --> Nil) { @suspend[$id].unlock }
ugexe that is what $lock.protect is, no?
lizmat basically, yes... but now with external influencers ? 16:21
ugexe what if you don't want automatic unlocking?
lizmat sub protect(&code) { @suspend[$*THREAD.id].protect: &code } 16:22
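lizmat's two snippets merged into one runnable hedged sketch; MAX-THREADS and the driver at the bottom are assumptions added for illustration:
```raku
my constant MAX-THREADS = 64;               # assumed upper bound on thread ids
my Lock @suspend = Lock.new xx MAX-THREADS; # xx re-evaluates: 64 distinct Locks

# a worker parks here whenever something external holds its lock
sub checkpoint(--> Nil) {
    my $l = @suspend[$*THREAD.id];
    $l.lock; $l.unlock;
}
sub suspend(Int $id --> Nil) { @suspend[$id].lock   }
sub release(Int $id --> Nil) { @suspend[$id].unlock }

my $t = Thread.start: :app_lifetime, { loop { checkpoint; sleep 0.1 } };
suspend($t.id);   # worker blocks at its next checkpoint
sleep 1;
release($t.id);   # Lock is thread-bound: release() must run on the thread that called suspend()
```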
ugexe that would still automatically unlock 16:23
i'm asking about the case where i still want low-level lock control
lizmat why would one do that if one can run this inside a protect block, which takes care of all locking / unlocking? 16:27
ugexe because locking isn't always encapsulated into a single function 16:29
github.com/rakudo/rakudo/blob/32d4...y.pm6#L233 16:30
lizmat ugexe: that's too much of a diversion for me while working on the P6W, will look at that later... 16:31
ugexe it's a function that basically calls $lock.unlock without having locked $lock itself in the first place
it's not even locked in that *file* 16:32
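Reduced to its shape (begin-update/commit-update are invented names):
```raku
my $lock = Lock.new;

sub begin-update(--> Nil)  { $lock.lock }    # one routine acquires...
sub commit-update(--> Nil) { $lock.unlock }  # ...a different routine releases

begin-update;
# ... arbitrary work, possibly spread over several other subs ...
commit-update;
```
Since Lock must be unlocked on the locking thread, both calls still have to happen on the same thread; the point is only that no single .protect block can span them.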
ugexe then again if a lock must be locked/unlocked by the same thread then i'm not sure how it would deadlock (in perl6 code, not moarvm internals) in the first place 16:35
lizmat and another Perl 6 Weekly hits the Net: p6weekly.wordpress.com/2019/07/29/...sed-again/ 18:15
nine With MVM_GC_DEBUG active t/04-nativecall/23-incomplete-types.t consistently fails with "Illegal Gen2 -> Nursery in caller chain (not in inter-gen set)" 19:24
nine A MVM_gc_write_barrier seems to help, but the test still goes down in flames 19:55
nine "Illegal use of a nursery-allocating spesh op when gen2 allocation flag set" that's an...interesting error 19:58
nine Oh, exceptions leaving the allocate_in_gen2 flag set 20:07
nine LOL. So in 2016 jnthn++ made a commit "Fix CUnion layout computation GC issues" and CUnion was good. In January this year I applied the same fix to CStruct. Neither of us, though, thought to do the same with CPPStruct... 20:12
nine Oh... the allocate_in_gen2 flag that's left on accidentally may actually be the root cause of the "Illegal Gen2 -> Nursery in caller chain" error, as that's what causes the newly allocated heap frame to already be in gen2 while its caller is still in the nursery. 20:30
nine So now I wonder if I actually should commit that MVM_gc_write_barrier fix. That'd depend on whether there's at least a theoretical possibility that a situation like this can occur (a frame in gen2 with its caller in the nursery). 20:31
Geth MoarVM: 23fd04494c | (Stefan Seifert)++ | 3 files
Fix CStruct's compute_allocation_strategy leaving the allocate_in_gen2 flag on by accident

Before throwing any exception in compute_allocation_strategy or one of its callees we need to clear the allocate_in_gen2 flag set in compose. Otherwise we could have all sorts of explosions when objects are allocated in gen2 unexpectedly.
20:57
MoarVM: 39a4f05f0c | (Stefan Seifert)++ | src/6model/reprs/CPPStruct.c
Fix possible segfaults related to corrupted CPPStruct STables

The fix is basically a port of commit 6733e927ceb1aeff0daf23a58c43730cb2c38d77 from CUnion to CPPStruct. It fixes composed CPPStruct STables showing up with a NULL REPR_data pointer and garbage in sc_forward_u caused by an unfortunately timed GC run.
nine Well, it seems like this concludes my round of fixes to consistently reproducible GC-related issues. We survive a full rakudo build with a 4K nursery, nursery poisoning and MVM_GC_DEBUG 2. Tests pass except for some flappers (mostly doing async/concurrency stuff) 21:19
Or not! A nursery size of 512 bytes uncovered at least one more issue ;) 21:25
timotimo yikes
that's gotta run the GC like every ten allocs 21:26
Geth MoarVM: 4a90335f32 | (Stefan Seifert)++ | src/6model/reprs/CStruct.c
Fix possible access to fromspace in CStruct's get_attribute

root could be moved by a GC run triggered by string decoding. Since CStruct's body is inlined into MVMCStruct, the body would be moved along with it, so we must get a fresh pointer to that, too.
21:33
nine What? 10 allocations without a GC? Must lower the limit... 21:34
Alas, there are allocations of more than 256 bytes, so there's not much more room to up the pressure
timotimo well
we can split the limit into hard and soft 21:35
sena_kun daaaaaaamn 21:37
sorry, wrong windos. :S
timotimo ha
sena_kun *window
nine I could also just remove the condition to start a gc in MVM_gc_allocate_nursery 21:38
timotimo yea
i was wondering if we could have a trick where we mmap the nursery in two address spaces and instead of a full mark&sweep copy-over-everything pass we'd just flip pointers between one and the other every time we allocate something 21:39
that could be faster