02:57
ilbot3 joined
06:27
geospeck joined
07:33
domidumont joined
07:40
domidumont joined
09:04
domidumont1 joined
09:42
domidumont joined
09:44
domidumont2 joined
10:33
domidumont joined
12:44
squashable6 joined, reportable6 joined, greppable6 joined, unicodable6 joined, benchable6 joined
12:56
domidumont joined
13:53
brrt joined
14:34
domidumont1 joined
14:48
geospeck joined
14:49
samcv joined
16:05
brrt joined
brrt | good * #moarvm | 16:06 | |
lizmat | brrt o/ | 16:08 | |
brrt | \o lizmat | 16:09 | |
AlexDaniel` | O√ | ||
nwc10 | good UGT, brrt | 16:40 | |
samcv | going to release MoarVM after checking things with NFG_CHECK on | 16:41 | |
brrt | \o/ | 16:42 | |
lizmat | samcv++ | 16:45 | |
AlexDaniel` | Sounds great! | 16:50 | |
lizmat notices that nqp::isprime_I is not being JITted | 16:54 | ||
timotimo | that'd be trivial to do, lizmat. where do you see it aborting frames? | 17:08 | |
lizmat | it doesn't turn green in --profile for something like: say (^Inf).grep(*.is-prime).skip(999).head | 17:09 | |
timotimo | the question is: is it inlined anywhere? | 17:10 | |
lizmat | in JIT log I see: BAIL: op <isprime_I> | 17:11 | |
dinner& | 17:12 | ||
timotimo | the jit log no longer outputs "entering inline", something must have broken that | 17:19 | |
it only shows "leaving inline" | |||
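Context for the exchange above: when MoarVM's JIT meets an op it has no handling for, it bails out of compiling the whole frame, which is why the frame never shows as JITted (green) in --profile output. A common cheap fix for an op like isprime_I is to have the JIT emit a call into the op's existing C implementation instead of bailing, which is presumably why timotimo calls it trivial. The sketch below only illustrates that general pattern; the names (jit_op, c_isprime, the opcodes) are invented for the example and are not MoarVM's JIT API.

    /* Toy illustration of "emit a C call instead of bailing"; not MoarVM code. */
    #include <stdio.h>

    typedef enum { OP_ADD_I, OP_ISPRIME_I, OP_MYSTERY } opcode;

    /* The interpreter's existing C implementation of the "hard" op. */
    static long c_isprime(long n) {
        if (n < 2) return 0;
        for (long d = 2; d * d <= n; d++)
            if (n % d == 0) return 0;
        return 1;
    }

    /* Pretend code generator: returns 1 if code was emitted, 0 on a bail. */
    static int jit_op(opcode op) {
        switch (op) {
            case OP_ADD_I:
                puts("emit: inline add template");
                return 1;
            case OP_ISPRIME_I:
                /* No machine-code template, but rather than giving up on the
                 * whole frame, emit a call into the C implementation. */
                puts("emit: call c_isprime()");
                return 1;
            default:
                puts("BAIL: no handler for op");  /* the situation in lizmat's JIT log */
                return 0;
        }
    }

    int main(void) {
        jit_op(OP_ADD_I);
        jit_op(OP_ISPRIME_I);
        jit_op(OP_MYSTERY);
        printf("c_isprime(999983) = %ld\n", c_isprime(999983));
        return 0;
    }

If memory serves, the JIT log lizmat quotes comes from pointing the MVM_JIT_LOG environment variable at a file before running the program.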
17:24
zakharyas joined
18:09
coverable6 joined, nativecallable6 joined
18:44
domidumont joined
Geth | MoarVM: 80976d759a | (Samantha McVey)++ | VERSION: Update VERSION file to 2017.12 | 18:47 |
MoarVM: a184d77019 | (Samantha McVey)++ | 2 files: Minor ChangeLog and release_guide.md fix. Don't repeat "New in 2017.12" twice and make it clearer what the link at the top of the changelog is for. In the release guide, the numbering changed but a step referring to a renumbered step had not yet been updated; it has been now. |||
timotimo | i think we should throw out the second change in the miscellaneous section | 19:23 | |
and the lengthy explanation for why the ucd2c changes are good seems out of place for the changelog | 19:27 | ||
and the jit-bisect SPESH_LIMIT change is in two sections in a row :) | 19:28 | ||
oh, the release already happened | |||
lizmat | yeah, water under the bridge :-) | 19:30 | |
samcv | timotimo: well it can be changed after the fact it's just not in the main release tar.gz | 19:31 | |
so feel free to change it | |||
timotimo | right | ||
20:22
geospeck joined
20:26
bart_ joined
bart_ | good * | 20:27 | |
timotimo | hey bart, what happened to your name? | 20:33 | |
bart_ | good question | ||
polari chat client isn't respecting my preference clearly | |||
brrt | a memory leak in syncfile? | 20:40 | |
20:40
AlexDaniel joined
brrt | that is weird | 20:42 | |
20:55
quotable6 joined, brrt joined
20:57
committable6 joined
20:58
bloatable6 joined, evalable6 joined, bisectable6 joined, releasable6 joined, statisfiable6 joined
brrt | we can't run with --full-cleanup at all since that aborts because we try to destroy a lock | 21:23 | |
because gc tries to clean up the spesh queue object | 21:24 | |
and it can't, because it is presumably still locked | 21:25 | |
jnthn | Can't in any Perl 6 program that ever triggers use of the thread pool, for the same reason: a blocking queue implies a lock is held while the condvar waits for the queue-not-empty signal. | 21:32 | |
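A few lines of plain pthreads (not MoarVM's actual lock types) make the problem concrete: a mutex that is still held, for instance around a pthread_cond_wait on the spesh queue, may not be destroyed. POSIX leaves that undefined, error-checking implementations report EBUSY, and libuv's uv_mutex_destroy wrapper aborts on any error, which is presumably the abort brrt is running into with --full-cleanup.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        /* Use an error-checking mutex so the misuse is reported rather than
         * being silently undefined behaviour. */
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);               /* e.g. held around a cond wait */

        int rc = pthread_mutex_destroy(&lock);   /* what the cleanup path does   */
        printf("destroying a held mutex: %s\n",
               rc ? strerror(rc) : "succeeded (implementation-defined)");

        if (rc != 0) {                           /* tidy up properly instead     */
            pthread_mutex_unlock(&lock);
            pthread_mutex_destroy(&lock);
        }
        pthread_mutexattr_destroy(&attr);
        return 0;
    }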
yoleaux | 16 Dec 2017 17:27Z <brrt> jnthn: what is our current policy with regards to process cleanup, because i think we leak memory if we shutdown moarvm before the spesh worker is done, and that angers ASAN | ||
04:52Z <Zoffix> jnthn: Made a change to 6.d &await that's kinda gross. If you wanted to review: github.com/rakudo/rakudo/commit/c51f1796e6 | |||
brrt | uhuh | 21:33 | |
welcome back jnthn :-) | |||
so, one thing that i've done, not in C but in python, iirc, is to put a NULL value on the queue when cleaning up, and use that as a signal | 21:34 | ||
or, alternatively, a constant MVM_QUEUE_CLOSE | |||
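A sketch of brrt's suggestion in plain pthreads, not MoarVM's actual queue type: the shutdown path pushes a reserved close value, the worker wakes up, sees it and exits, and only then are the lock and condvar destroyed. QUEUE_CLOSE here is a stand-in for the hypothetical MVM_QUEUE_CLOSE constant; error handling and full-queue checks are omitted to keep it short.

    #include <pthread.h>
    #include <stdio.h>

    #define QUEUE_CLOSE ((void *)-1)   /* reserved "shut down" sentinel */
    #define QUEUE_MAX   64

    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static void *queue[QUEUE_MAX];
    static unsigned head = 0, tail = 0;

    static void push(void *item) {
        pthread_mutex_lock(&lock);
        queue[tail++ % QUEUE_MAX] = item;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    static void *pop(void) {
        pthread_mutex_lock(&lock);
        while (head == tail)                   /* lock is held around the wait */
            pthread_cond_wait(&not_empty, &lock);
        void *item = queue[head++ % QUEUE_MAX];
        pthread_mutex_unlock(&lock);
        return item;
    }

    static void *worker(void *arg) {
        for (;;) {
            void *item = pop();
            if (item == QUEUE_CLOSE)           /* sentinel: time to exit */
                break;
            printf("worker: processed %p\n", item);
        }
        return arg;
    }

    int main(void) {
        pthread_t t;
        int job = 42;
        pthread_create(&t, NULL, worker, NULL);
        push(&job);
        push(QUEUE_CLOSE);                     /* the --full-cleanup style path */
        pthread_join(t, NULL);
        /* Only now is nothing using the lock, so destroying it is safe. */
        pthread_mutex_destroy(&lock);
        pthread_cond_destroy(&not_empty);
        return 0;
    }

As jnthn notes next, this only covers a queue with a single known consumer such as the spesh worker; it does not help with arbitrary threads blocked on other condvars.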
jnthn | .tell Zoffix I don't think await should be doing any flattening or descent beyond what *@awaitables does when slurping | ||
yoleaux | jnthn: I'll pass your message to Zoffix. | ||
brrt | would you think that could work here? | ||
jnthn | Well, many threads could be waiting on the cond var, though, and queues are just one place this could happen. | 21:35 | |
brrt | true, but that's not true for the spesh worker | 21:36 | |
so even if we can't in general... | 21:37 | ||
jnthn | If we're not looking for a general solution, then just not even trying to destroy a lock that's held would be an option... | ||
brrt | but then, the --full-cleanup doesn't work | 21:39 | |
i'm not sure what solution i'm looking for anyway | |||
jnthn | --full-cleanup has never fully worked | 21:41 | |
brrt | which is ironic | ||
jnthn | And it gets little love because we don't need it for real work | 21:42 | |
It's been useful to get a clearer picture of memory leaks | 21:43 | ||
But it's a hard thing to achieve | |||
brrt | the thing is, that attitude makes it impossible to clean up existing memory leaks using ASAN and similar tools | |||
jnthn | Well, if somebody has the weeks to invest on making it fully work, I'm not going to say no. :-) | 21:44 | |
I suspect some kind of workaround to make it not do the mutex destroy that blows up would probably suffice for now, though | 21:46 | ||
In that case, sure, that'll still show up as a leak | |||
But it's a lot less noise than without it | |||
If we want to borrow a more general solution, then the JVM has some good inspiration here. It provides a park/unpark mechanism implemented in terms of a single cond-var per thread | 21:47 | ||
It then implements all of its locks, conds, etc. *on top* of that park mechanism | |||
Thus my prediction of O(weeks): implementing our own locks/condvars would need *very* careful work. | 21:48 | ||
Anyway, in that scheme you just signal all threads at shutdown time and get them to exit | 21:49 | ||
That deals with *most* things | |||
(However, not things stuck in some foreign function call, or blocking I/O) | |||
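A rough picture of the park/unpark primitive jnthn is describing (the JVM exposes it as LockSupport.park/unpark and builds its locks and condition objects on top of it): each thread owns one mutex, one condvar and a permit flag, and every higher-level blocking operation ends up parking the current thread. The pthreads sketch below is illustrative only; nothing like it exists in MoarVM, and all of the names are invented.

    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  wake;
        int             permit;   /* 1 = an unpark is pending */
    } parker;

    static void parker_init(parker *p) {
        pthread_mutex_init(&p->lock, NULL);
        pthread_cond_init(&p->wake, NULL);
        p->permit = 0;
    }

    /* Block the calling thread until someone grants it a permit. */
    static void park(parker *p) {
        pthread_mutex_lock(&p->lock);
        while (!p->permit)
            pthread_cond_wait(&p->wake, &p->lock);
        p->permit = 0;            /* consume the permit */
        pthread_mutex_unlock(&p->lock);
    }

    /* Wake one specific thread, e.g. every parked thread at shutdown. */
    static void unpark(parker *p) {
        pthread_mutex_lock(&p->lock);
        p->permit = 1;
        pthread_cond_signal(&p->wake);
        pthread_mutex_unlock(&p->lock);
    }

    static parker worker_parker;

    static void *worker(void *arg) {
        puts("worker: parking");
        park(&worker_parker);     /* higher-level locks/queues would block here */
        puts("worker: unparked, exiting");
        return arg;
    }

    int main(void) {
        pthread_t t;
        parker_init(&worker_parker);
        pthread_create(&t, NULL, worker, NULL);
        unpark(&worker_parker);   /* shutdown: wake every thread and let it exit */
        pthread_join(t, NULL);
        return 0;
    }

Because every blocking operation bottoms out in the same per-thread primitive, the shutdown path only has to unpark each thread, which is what makes the scheme attractive; rewriting MoarVM's locks and condvars on top of it is the O(weeks) part.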
brrt | dirty hack would be somehow preventing that lock from being cleaned up | 21:51 | |
jnthn | That's what I suggested above | ||
Locks have a "is this lock held" flag | |||
brrt | uhuh | ||
jnthn | So I figure we can use that | ||
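The stopgap being discussed, sketched with a hypothetical struct (the field names are not MoarVM's actual mutex layout): let the cleanup path consult the "is this lock held" flag and simply skip the destroy when it is set, so --full-cleanup leaks one still-held mutex instead of aborting.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t mutex;
        int             held;     /* maintained by the lock/unlock paths */
    } demo_mutex;

    /* Called from a --full-cleanup style teardown. */
    static void demo_mutex_cleanup(demo_mutex *m) {
        if (m->held) {
            /* Still locked (e.g. the spesh worker is blocked in a cond wait):
             * leak it rather than abort; ASAN will still report the leak, but
             * with far less noise than skipping cleanup entirely. */
            return;
        }
        pthread_mutex_destroy(&m->mutex);
    }

    int main(void) {
        demo_mutex m = { PTHREAD_MUTEX_INITIALIZER, 0 };
        m.held = 1;               /* pretend a thread still holds it       */
        demo_mutex_cleanup(&m);   /* skipped: no abort, just a "leak"      */
        m.held = 0;
        demo_mutex_cleanup(&m);   /* nothing holds it now, safe to destroy */
        return 0;
    }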
brrt | i'm curious about how the JVMs lock/unlock stuff works | 21:52 | |
jnthn | Yeah, it's probably the right overall direction | 21:53 | |
But...in terms of things that are tricky to implement... :-) | |||
(Also, if I don't make as much sense as on a good day, it's 'cus I'm doing the hopefully-nearly-the-end bits of flu...) | 21:56 | ||
brrt | let's hope it is the end bits indeed :-) | 21:58 | |
but unfortunately, it makes some sense, yes | |||
hmm | |||
jnthn | Yeah, it's been a week since I first woke up and thought "urgh, this feels like it's going to be a heavy cold..." | 22:00 | |
samcv | i have a cold or flu right now ::( | 23:12 |