01:48
ilbot3 joined
01:52
geekosaur joined
03:44
geekosaur joined
05:00
sivoais joined
07:30
domidumont joined
07:35
domidumont joined
07:49
leont_ joined
08:39
lizmat joined
08:47
vendethiel joined
09:07
leont_ joined
09:40
lizmat joined,
leont_ joined
10:08
leont_ joined
10:33
leont_ joined
10:37
masak joined
masak | for people interested in GCs: jvns.ca/blog/2016/04/22/java-garbag...ally-slow/ | 10:38
jnthn | GCs are hard. :) | 10:53
Curiously, we have A Different Problem in Moar. | 10:55
We don't have a way to set an absolute memory limit.
Instead, the amount of overhead the GC carves out for itself is proportional to the overall heap size. | |||
timotimo | that sleep sort crash seems to be fixed in latest rakudo moar | 10:57
jnthn | Oh?
The author was running an older one? | 10:58
uh
reporter
timotimo | latest meaning bleeding edge
i.e. newer than nqp_revision and moar_revision
lizmat | jnthn: it crashed on me
jnthn | Yeah, we need to be a tad careful with "seems to be fixed" on threading bugs
lizmat | well, you could argue I worked around it
timotimo | i'm running it for the 4th time now :)
oh
with your latest commit, that makes it no longer crash? | 10:59
jnthn | Because I've got ones where I didn't reproduce them after 10,000 attempts on either Windows or in a Linux VM
timotimo | well, that'd be why, then
lizmat | indeed
jnthn | (Yes, I actually did run the thing for 10,000+ attempts)
And yet Zoffix and others saw it almost right away
lizmat | if you change Channel.list in the code to Channel.Supply.list
it will break again
jnthn | fwiw, the 13KB nursery shook out quite a few things in S17 | 11:00
lizmat | I merely re-implemented Channel.list to be an Iterator
rather than self.Supply.list
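In rough outline, an iterator-based Channel.list of the kind being described could look like the sketch below (hypothetical code and names, not the actual Rakudo patch): pull-one receives values until the channel is closed and drained.

    # Sketch: an Iterator that drains a Channel directly, instead of going
    # through self.Supply.list.
    my class ChannelDrainIterator does Iterator {
        has Channel $.channel;
        method pull-one() {
            # receive() blocks until a value arrives and throws
            # X::Channel::ReceiveOnClosed once the channel is closed and empty.
            CATCH {
                when X::Channel::ReceiveOnClosed { return IterationEnd }
            }
            $!channel.receive
        }
    }

    my $c = Channel.new;
    $c.send($_) for 1..5;
    $c.close;
    say Seq.new(ChannelDrainIterator.new(channel => $c)).list;  # (1 2 3 4 5)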
jnthn | I hope you were really careful... :)
timotimo | rebuilding rakudo
jnthn went via Supply in order to rule out a bunch of possible races :) | 11:01
lizmat | well, that's why I'm not sure my solution is correct
timotimo | yup, it totally crashes
lizmat | it's GC related, as adding an nqp::force_gc in the sub totally fixes it
timotimo | interesting. it's the "trying to destroy a mutex aborts" thing | 11:02
jnthn | s/totally fixes/totally hides/ ;-)
lizmat | jnthn: eh, yeah :-)
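For reference, forcing a collection from Rakudo-level code on the MoarVM backend looks roughly like this; it is a timing probe for debugging which, as noted above, hides the race rather than fixing it.

    use nqp;    # exposes low-level nqp:: ops to regular Rakudo code

    sub probe-gc-sensitivity() {
        # Forcing a GC run here changes the timing enough that the crash
        # stops reproducing -- evidence the bug is GC-related, not a fix.
        nqp::force_gc();
    }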
jnthn | Anyway, I should probably spend the weekend resting rather than debugging GC things :-)
timotimo | to be fair, when a mutex that's still held gets gc_free'd, we're in trouble already
jnthn | But I plan to dig into the various bugs, including S17 ones, that the 13KB nursery turned up next week. | 11:03
I'll probably then split those out into a separate branch from my frame refactors
So we get them into master sooner
Before returning to optimizing invocation :)
timotimo: Totally agree on that | 11:04
timotimo | how do i ask Lock.condition if the lock is locked at that moment or not?
i'd like to give Lock a DESTROY submethod for debugging purposes
jnthn | uh | 11:05
Lock.condition is for creating a condition variable :)
timotimo | oh, er
yeah, i meant to say:
how do i ask a lock if it's locked :)
jnthn | Don't know there's a way to do that at present
timotimo | the condvar was just the thing i was still looking at at that moment
polllock | 11:06
jnthn | Though
You can .wrap the methods maybe :)
(lock/unlock)
timotimo | hm, after locking, writing to a lock's attribute would be safe
jnthn | Yeah, or you can patch it that way
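A rough sketch of that .wrap idea (illustrative only: %held and the bookkeeping are made up, and the side table itself is not thread-safe):

    # Wrap Lock.lock/Lock.unlock so a debugging side table records which
    # Lock objects are currently held. Purely a diagnostic hack.
    my %held;

    Lock.^find_method('lock').wrap(sub ($self) {
        callsame;                      # actually acquire the underlying mutex
        %held{$self.WHICH} = True;     # then note that $self is held
    });
    Lock.^find_method('unlock').wrap(sub ($self) {
        %held{$self.WHICH}:delete;     # forget it before releasing
        callsame;
    });

    my $l = Lock.new;
    $l.lock;
    say %held.elems;    # 1 while the lock is held
    $l.unlock;
    say %held.elems;    # 0 after release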
lizmat: btw, did you keep Channel.list returning a List, or does it return a Seq? | 11:07
lizmat | Seq
jnthn | Ah...it needs to stay a List, I think.
But that's just a matter of s/Seq.new(...)/List.from-iterator(...)/
Won't need any changes to the iterator | 11:08
We should maybe add a .Seq coercion method also though
lizmat | yeah, but then we don't need an Iterator
we can just build the list and return that when we're done
jnthn | Or maybe just implement what you have now as .Seq and then method list() { self.Seq.list }
lizmat | but why go through the .Seq then ? | 11:09
timotimo | so you can get a Seq if you want one
without going through list :)
jnthn | List doesn't mean "fully reified", it just means "remembering"
Laziness is still valuable.
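To illustrate that distinction with a standalone toy (not Channel code): List.from-iterator still pulls lazily, but unlike a one-shot Seq it remembers what it has already produced.

    # A chatty iterator so you can see exactly when values get pulled.
    my class ChattyIterator does Iterator {
        has Int $!n = 0;
        method pull-one() {
            $!n == 5 ?? IterationEnd !! do { note "pulled { $!n + 1 }"; ++$!n }
        }
    }

    my @remembered := List.from-iterator(ChattyIterator.new);
    say @remembered[1];   # notes "pulled 1" and "pulled 2", then prints 2
    say @remembered[1];   # prints 2 again with no further pulls: the List remembers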
timotimo | jnthn: This representation (ReentrantMutex) does not support attribute storage
d'oh :)
jnthn | timotimo: heh, yes :)
timotimo | i could implement an op that asks a lock for its lockedness without locking, that'd go into moar, then | 11:11
for now i'll be AFK, though | 11:12
dalek | MoarVM: e3bf435 | (Pawel Murias)++ | src/6model/reprs/P6opaque.c:
Fix segfault when composing an uncomposed P6opaque repr. | 11:27
MoarVM: 629282f | jnthn++ | src/6model/reprs/P6opaque.c:
Merge pull request #362 from pmurias/fix-uncomposed-segfault
Fix segfault when composing an uncomposed P6opaque repr.
11:45
lizmat joined
12:10
lizmat_ joined
13:50
domidumont joined
14:41
Util joined
14:50
leont joined
15:25
leont joined
17:50
lizmat joined
17:51
lizmat joined
17:52
leont joined
leont | I seem to have hit a most interesting bug: every 200th process I run() fails to receive input… | 17:57
jnthn | Exactly 200? :)
timotimo | we can grep the moar source for "200" | 17:58
must be a magic number somewhere
jnthn | :P
dinner &
timotimo | #define DROP_INPUT_EVERY 200
leont | Yes, exactly 200, 400, 600 and now on the way towards 800 | 17:59
(while running the spectest in my harness, this seems to be the only remaining issue in the synchronous parser) | 18:00
800 reliably failed too | 18:09
masak | leont: do you have something for others to run to confirm this behavior? | 18:11
nine_ | leont: the 200th fails but the 201st works again? Are those run serially or in parallel? | 18:14
arnsholt | A first golf attempt would be running "echo 1" 1000 times or something, and seeing what happens | 18:16
I guess
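Such a golf could look roughly like this (an assumed shape, not leont's actual harness):

    # Hypothetical reproduction attempt: spawn "echo 1" a thousand times and
    # report any iteration whose captured output goes missing.
    for 1..1000 -> $i {
        my $proc = run 'echo', '1', :out;
        my $got  = $proc.out.slurp(:close);
        note "run $i produced no output" if $got.trim ne '1';
    }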
18:18
leont joined
lizmat | nine_: I think they're serial | 18:19
timotimo | if you're doing something in parallel, it's probably extremely hard to pinpoint a hit on every 200th :) | 18:20
18:24
leont joined
18:30
leont joined
jnthn | github.com/MoarVM/MoarVM/blob/mast...h/osr.h#L2 | 18:38
There's a 200
Maybe try with MVM_SPESH_OSR_DISABLE=1
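One way to test that (a sketch, with "reproducer.p6" standing in for whatever script shows the failure) is to re-run it with on-stack replacement disabled:

    # Re-run the reproducer with OSR turned off via the environment; if the
    # every-200th failure disappears, spesh/OSR is a likely suspect.
    %*ENV<MVM_SPESH_OSR_DISABLE> = '1';     # run() passes %*ENV to the child
    my $proc = run $*EXECUTABLE, 'reproducer.p6', :out;
    print $proc.out.slurp(:close);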
leont | Interestingly, it sigaborts reproducibly after 997 tests… | 18:42
18:50
zakharyas joined
20:06
Ven joined
20:56
Ven joined
21:29
zakharyas joined
21:35
leont joined
21:37
zakharyas joined
21:54
dalek joined
21:56
cognominal joined
22:46
lizmat joined