github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm | Set by AlexDaniel on 12 June 2018.
00:13 pamplemousse left
00:18 sena_kun left
05:03 evalable6 left, notable6 left
05:04 evalable6 joined
05:06 notable6 joined
05:44 robertle left
06:40 domidumont joined
07:26 patrickb joined
07:28 robertle joined
08:02 zakharyas joined
09:13 <lizmat> jnthn: www.reddit.com/r/perl6/comments/cj...rnal_code/ # not sure what the answer is there atm
09:22 zakharyas left
09:33 AlexDaniel` joined
09:52 Geth joined
10:36 <nwc10> jnthn/nine:
<nwc10> aargh
10:37 <nwc10> jnthn/nine: paste.scsys.co.uk/585521
<nwc10> can't replicate on first attempt
<nwc10> network here is vile
<nwc10> I'm probably 90% AFK (thanks, packet loss) so please don't assume any prompt replies before Riga
10:38 <nwc10> nqp t/concurrency/02-lock.t ASAN barfage
10:45 sena_kun joined
10:49 nativecallable6 left, unicodable6 left, statisfiable6 left, releasable6 left, squashable6 left, undersightable6 left, coverable6 left, committable6 left, evalable6 left, quotable6 left, bloatable6 left, shareable6 left, bisectable6 left, benchable6 left, greppable6 left, reportable6 left, notable6 left
11:47 lucasb joined
11:55 bisectable6 joined
11:59 bisectable6 left
12:00 statisfiable6 joined, robertle left, notable6 joined
12:01 committable6 joined, bloatable6 joined, squashable6 joined, reportable6 joined, evalable6 joined, undersightable6 joined
12:02 bisectable6 joined, releasable6 joined, quotable6 joined, shareable6 joined, robertle joined
12:03 nativecallable6 joined
12:04 benchable6 joined, unicodable6 joined, greppable6 joined, coverable6 joined
12:29 pamplemousse joined
12:31 <pamplemousse> o/
12:40 <nwc10> \o
12:48 robertle left
12:54 robertle joined
12:57 sena_kun left
13:14 <timotimo> o/
13:21 <nwc10> \o
13:42 zakharyas joined
14:01 domidumont left
14:19 zakharyas1 joined
14:20 zakharyas left
14:39 pamplemousse left
15:16 pamplemousse joined
15:26 robertle left
15:35 zakharyas1 left, patrickb left
15:38 domidumont joined
15:48 <ugexe> would have to signal the thread to be killed to clean up / call uv_stop (which cannot be called from a different thread)
15:51 <lizmat> I can't help but think that it shouldn't be too hard to have each thread set up a CONTROL block, which would check for a certain CX exception value, and then throw a special exception to have the thread killed, just as it would have been if something had gone wrong?
<timotimo> the important part is how to deliver the exception, i think?
<lizmat> and then somehow deliver that control message to the thread?
<lizmat> yeah...
15:52 <lizmat> but how does Comma do that then?
<timotimo> how do you mean?
<ugexe> i would have thought it would use signals, not an exception
15:53 <timotimo> signals are caught by the async i/o thread, though
<lizmat> Comma can suspend threads, can it not?
15:54 <ugexe> each thread has its own event loop... wouldn't each thread thus handle its own signals?
15:57 <timotimo> ugexe: that's not how it's implemented
<timotimo> lizmat: it attaches the debugger
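A minimal Raku sketch of the cooperative variant of the idea above, assuming a hypothetical request-kill/checkpoint API (none of these names exist in Rakudo): the kill is delivered by a per-thread flag, and the worker throws in its own context at the next safe point, so CATCH and LEAVE blocks run just as for any other error.

```raku
# A minimal sketch, not a Rakudo API: request-kill and checkpoint are
# hypothetical names. A kill is requested by setting a per-thread flag;
# the worker notices it at its next checkpoint and throws locally.
my %kill-requested;          # thread id => True once a kill is requested
my $kill-lock = Lock.new;    # guards %kill-requested

sub request-kill(Int $thread-id --> Nil) {
    $kill-lock.protect: { %kill-requested{$thread-id} = True }
}

sub checkpoint(--> Nil) {
    my $doomed = $kill-lock.protect: { %kill-requested{$*THREAD.id} // False };
    die "thread {$*THREAD.id} killed cooperatively" if $doomed;
}

my $worker = Thread.start({
    CATCH { default { .message.say } }   # unwinding ends here; LEAVE blocks have run
    loop {
        checkpoint();                    # safe point: throws once a kill is requested
        sleep 0.1;                       # stands in for a unit of real work
    }
});
sleep 0.5;
request-kill($worker.id);
$worker.finish;
```

This only works for code that actually calls the checkpoint sub, which is exactly where the conversation picks up below.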
15:58 pamplemousse left
15:58 <ugexe> also imagine it won't be super easy since we use basic locking in e.g. PrecompRepository, which means it's *expecting* to not be interrupted (else a lock will be stuck locked)
15:59 <ugexe> although maybe that is unavoidable
<lizmat> maybe having it be cooperative would already help a lot
16:00 <lizmat> cooperative in the sense that you would need to call a sub that would do the blocking once instructed from elsewhere
<lizmat> so it wouldn't work for just any code
<lizmat> but it would be easy enough to let it cooperate?
16:01 <lizmat> could be as easy as having an array of Locks, indexed by thread IDs, and just locking that externally
16:02 <lizmat> while the sub would just attempt to lock its thread's lock inside the sub (and then immediately release it when it got it?)
16:11 <ugexe> I'm not quite sure what you are suggesting... a type of Lock singleton? But remember Lock::Async does not have to be locked/unlocked by the same thread.
16:12 <ugexe> although i suppose that likely saves it from the issue of killing the thread locking it forever
16:13 <ugexe> or at least it's recoverable
<lizmat> no, just a simple array of Locks
16:14 <lizmat> and a sub that would lock @locks[$*THREAD]
<ugexe> `my $lock = ...; sub foo { $lock.lock; &do-work; $lock.unlock };` when killing the thread inside &do-work how does $lock get unlocked?
16:16 <ugexe> if they really wanted automatic unlocking they would use $lock.protect, so it would feel odd to automatically unlock a $lock.lock but only when inside a thread that is killed
16:17 <ugexe> of course if we could get rid of $lock.lock / $lock.unlock in the core it really becomes a library design issue
16:18 <ugexe> get rid of their use, not their implementation
16:20 <ugexe> then again that doesn't help with internal moarvm lock usage
<lizmat> my @suspend[Lock] = Lock.new xx ^threads.max; sub checkpoint(--> Nil) { LEAVE @suspend[$*THREAD].unlock; @suspend[$*THREAD].lock }; sub suspend($id --> Nil) { @suspend[$id].lock }; sub release($id --> Nil) { @suspend[$id].unlock }
<ugexe> that is what $lock.protect is, no?
16:21 <lizmat> basically, yes... but now with external influencers?
<ugexe> what if you don't want automatic unlocking?
16:22 <lizmat> sub protect(&code) { @suspend[$*THREAD].protect: &code }
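A runnable reading of lizmat's @suspend one-liner, which is pseudocode as typed (MAX-THREADS, checkpoint, suspend and release are assumed names, not a Rakudo API): an external controller locks a thread's entry, and that thread then blocks the next time it reaches a checkpoint.

```raku
# Illustration of the @suspend idea, with assumed names; not an
# existing Rakudo API.
constant MAX-THREADS = 64;                 # assumed upper bound on thread ids
my Lock @suspend = Lock.new xx MAX-THREADS;

# Worker threads call this at safe points. If suspend() has been called
# for this thread, its lock is held and checkpoint() blocks until release().
sub checkpoint(--> Nil) {
    @suspend[$*THREAD.id].protect: { }     # acquire, then release immediately
}

sub suspend(Int $id --> Nil) { @suspend[$id].lock   }
sub release(Int $id --> Nil) { @suspend[$id].unlock }
```

One caveat that surfaces later in the conversation: a plain Lock must be unlocked by the thread that locked it, so suspend and release have to run on the same controller thread; Lock::Async, as ugexe notes, does not have that restriction.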
16:23 pamplemousse joined
16:23 <ugexe> that would still automatically unlock
<ugexe> i'm asking if i still want low-level lock control
16:27 <lizmat> why would one do that if one can run this inside a protect block, which takes care of all locking / unlocking?
16:29 <ugexe> because locking isn't always encapsulated in a single function
16:30 <ugexe> github.com/rakudo/rakudo/blob/32d4...y.pm6#L233
16:31 <lizmat> ugexe: that's too much of a diversion for me while working on the P6W, will look at that later...
<ugexe> it's a function that basically calls $lock.unlock from a function that did not lock $lock in the first place
16:32 <ugexe> it's not even locked in that *file*
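The shape of the code ugexe is pointing at, reduced to a sketch with hypothetical names (the real thing lives in Rakudo's precompilation code): the lock and unlock sites are different functions, possibly in different files, which is exactly what a scoped $lock.protect cannot express and what makes any "auto-unlock on kill" policy a guess.

```raku
# Sketch (hypothetical names) of lock/unlock split across functions:
# nothing at the unlock site proves the lock is even held, so a runtime
# that force-unlocks on thread kill would be guessing.
my $lock = Lock.new;

sub begin-update(--> Nil) {
    $lock.lock;                # acquired here...
}

sub finish-update(--> Nil) {
    $lock.unlock;              # ...released in a different function
}

begin-update();
# ... arbitrary work in between, possibly in other modules ...
finish-update();
```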
16:33 robertle joined
16:35 <ugexe> then again if a lock must be locked/unlocked by the same thread then i'm not sure how it would deadlock (in perl6 code, not moarvm internals) in the first place
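For context on that question: Raku's Lock is a reentrant mutex, so a single thread retaking its own lock does not deadlock; in pure Perl 6 a deadlock needs at least two threads taking two locks in opposite order, as in this deliberately hanging sketch (assumed variable names, shown only for shape):

```raku
# Classic lock-order inversion: each thread holds one lock and waits for
# the other. This example is expected to hang; it exists only to show the
# two-threads/two-locks shape a pure Perl 6 deadlock needs.
my $a = Lock.new;
my $b = Lock.new;

my $t1 = Thread.start({ $a.protect: { sleep 0.1; $b.protect: { } } });
my $t2 = Thread.start({ $b.protect: { sleep 0.1; $a.protect: { } } });

$t1.finish;    # never returns once both threads are blocked
$t2.finish;
```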
18:00 reportable6 left, reportable6 joined
18:15 <lizmat> and another Perl 6 Weekly hits the Net: p6weekly.wordpress.com/2019/07/29/...sed-again/
18:37 chloekek joined
18:49 domidumont left
19:06 pamplemousse left
19:24 <nine> With MVM_GC_DEBUG active, t/04-nativecall/23-incomplete-types.t consistently fails with "Illegal Gen2 -> Nursery in caller chain (not in inter-gen set)"
19:29 pamplemousse joined
19:33 pamplemousse left
19:35 evalable6 left
19:37 evalable6 joined
19:39 evalable6 left
19:44 evalable6 joined
19:45 evalable6 left
19:49 evalable6 joined
19:50 evalable6 left
19:53 evalable6 joined
19:55 <nine> An MVM_gc_write_barrier seems to help, but the test still goes down in flames
19:57 robertle left
nine | "Illegal use of a nursery-allocating spesh op when gen2 allocation flag set" that's an...interesting error | 19:58 | |
19:59 evalable6 left
20:04 evalable6 joined
20:07 <nine> Oh, exceptions leaving the allocate_in_gen2 flag set
20:12 sena_kun joined
20:12 <nine> LOL. So in 2016 jnthn++ made a commit "Fix CUnion layout computation GC issues" and CUnion was good. In January this year I applied the same fix to CStruct. Neither of us thought to do the same with CPPStruct, though...
20:13 evalable6 left
20:17 evalable6 joined
20:20 evalable6 left
20:24 evalable6 joined
20:30 <nine> Oh.... the allocate_in_gen2 flag that's left on accidentally may actually be the root cause of the "Illegal Gen2 -> Nursery in caller chain" error, as that's what causes the newly allocated heap frame to already be in gen2 while its caller is still in the nursery.
20:30 evalable6 left
20:31 <nine> So now I wonder if I actually should commit that MVM_gc_write_barrier fix. That'd depend on whether there's at least a theoretical possibility that a situation like this can occur (a frame in gen2 with its caller in the nursery).
20:33 evalable6 joined
20:37 pamplemousse joined, evalable6 left
20:40 evalable6 joined
20:43 evalable6 left
20:47 evalable6 joined
20:50 evalable6 left
20:51 evalable6 joined, evalable6 left
20:55 evalable6 joined
20:56 evalable6 left
20:57 <Geth> MoarVM: 23fd04494c | (Stefan Seifert)++ | 3 files
  Fix CStruct's compute_allocation_strategy leaving the allocate_in_gen2 flag on by accident
  Before throwing any exception in compute_allocation_strategy or one of its callees we need to clear the allocate_in_gen2 flag set in compose. Otherwise we could have all sorts of explosions when objects are allocated in gen2 unexpectedly.
<Geth> MoarVM: 39a4f05f0c | (Stefan Seifert)++ | src/6model/reprs/CPPStruct.c
  Fix possible segfaults related to corrupted CPPStruct STables
  The fix is basically a port of commit 6733e927ceb1aeff0daf23a58c43730cb2c38d77 from CUnion to CPPStruct. It fixes composed CPPStruct STables showing up with a NULL REPR_data pointer and garbage in sc_forward_u caused by an unfortunately timed GC run.
21:00 evalable6 joined
21:04 evalable6 left
21:09 evalable6 joined
21:19 <nine> Well it seems like this concludes my round of fixes to consistently reproducible GC-related issues. We survive a full rakudo build with a 4K nursery, nursery poisoning and MVM_GC_DEBUG 2. Tests pass except for some flappers (mostly doing async/concurrency stuff)
21:25 <nine> Or not! A nursery size of 512 bytes uncovered at least one more issue ;)
<timotimo> yikes
21:26 <timotimo> that's gotta run the GC like every ten allocs
21:33 <Geth> MoarVM: 4a90335f32 | (Stefan Seifert)++ | src/6model/reprs/CStruct.c
  Fix possible access to fromspace in CStruct's get_attribute
  root could be moved by a GC run triggered by string decoding. Since CStruct's body is inlined into MVMCStruct the body would be moved along, so we must get a fresh pointer to that, too.
21:34 <nine> What? 10 allocations without a GC? Must lower the limit...
<nine> Alas, there are allocations of more than 256 bytes, so there's not much more room to up the pressure
<timotimo> well
21:35 <timotimo> we can split the limit into hard and soft
21:37 <sena_kun> daaaaaaamn
<sena_kun> sorry, wrong windos. :S
<timotimo> ha
<sena_kun> *window
21:38 <nine> I could also just remove the condition to start a GC in MVM_gc_allocate_nursery
<timotimo> yea
21:39 <timotimo> i was wondering if we could have a trick where we mmap the nursery in two address spaces and instead of a full mark&sweep copy-over-everything pass we'd just flip pointers between one and the other every time we allocate something
<timotimo> that could be faster
21:59 chloekek left
23:06 lucasb left
23:24 pamplemousse left, pamplemousse joined