01:09 lizmat joined
01:16 nativecallable6 joined
01:17 unicodable6 joined, quotable6 joined
02:19 FROGGS joined
02:27 Kaiepi joined
03:04 ilbot3 joined
03:57 bloatable6 joined
06:41 domidumont joined
06:49 domidumont joined
07:01 brrt joined
07:16 ggoebel joined
07:31 domidumont joined
07:42 <brrt> good * #moarvm
07:51 domidumont joined
08:16 brrt joined
08:30 domidumont joined
08:46 zakharyas joined
09:04 zakharyas joined
09:05 zakharyas joined
10:12 <jnthn> o/ #moarvm
10:40 zakharyas joined
11:22 zakharyas joined
11:35 <Geth> MoarVM/jit-expr-optimizer: f6a2d57e0a | (Bart Wiegmans)++ | 2 files | JIT - use apply_template_adhoc everywhere | Much nicer than constructing an array and pushing it, and more importantly, I plan to start packing a good deal of information in the expression node itself, which would need cooperation of the template
<brrt> \o jnthn
11:36 <brrt> jnthn: any opinion on the use of bitfields in structs?
11:38 <jnthn> brrt: Hm, I think they're fairly widely supported, so should be fine to use?
11:42 <brrt> yeah, i think so too
11:43 <brrt> they're probably nicer to use than 'raw' bitpacking
<jnthn> *nod*
12:15 domidumont joined
12:56 brrt joined
14:08 domidumont joined
14:19 ggoebel joined, committable6 joined
15:05 zakharyas joined
15:34 <[Coke]> reads webkit.org/blog/8048/what-spectre-...or-webkit/
15:39 <jnthn> Interesting post
<brrt> hmm, maybe i should read that as well
15:40 <brrt> btw
16:05 <jnthn> Looks like we need to put something into our make release target that explodes if the work tree isn't clean: github.com/MoarVM/MoarVM/issues/778
<[Coke]> ooooops
16:08 <jnthn> Possibly also a 2017.12.1
16:11 brrt joined, squashable6 joined
16:21 zakharyas joined
16:44 brrt1 joined
17:02 <AlexDaniel`> can't see why 2017.12.1 would be needed, what am I missing?
<AlexDaniel`> the tar is wrong, just fix it?
17:05 <AlexDaniel`> 2017.12.1 point release would mean… a release on exactly the same commit?
<AlexDaniel`> and then people will be asking why is there a 2017.12.1 moar but no rakudo…
<[Coke]> a 2017.12.1 *tar* is needed to fix the tar, yes? nice to also have a corresponding git commit to track against.
17:06 <[Coke]> (since we don't want to have two tars with the same name)
17:07 <[Coke]> We have had, in the past, releases where there was a point release of one of the 3 components, but not all 3. I think that's explainable.
<AlexDaniel`> yeah, also the same release (but correct tar) is available on github
<AlexDaniel`> sooo…
<jnthn> But missing 3rdparty/
<AlexDaniel`> just fix the file
17:08 <AlexDaniel`> hmm
<jnthn> That's why we do our own tarballs
17:09 AlexDaniel joined
17:09 <jnthn> It's generally understood that release tarballs are immutable, however; I'd rather not violate that.
17:10 <[Coke]> jnthn++ for restating my intent clearly. :)
<jnthn> As for explaining stuff, 2018.01 is like, a week or so away, I guess? :)
<AlexDaniel> right
<AlexDaniel> releasable6: next
<releasable6> AlexDaniel, Next release in 11 days and ≈1 hour. No blockers. Unknown changelog format
17:11 <AlexDaniel> but still, uh… rakudo 2017.12 is supposed to work with 2017.12 MoarVM
<AlexDaniel> at least, that's what the VERSION file says
17:13 <AlexDaniel> so… why don't we need rakudo 2017.12.1 then?
<[Coke]> VERSION is used by the 'make' process that invokes git, yes? so that part is still fine
17:14 <[Coke]> (or is VERSION somehow used when building from tarballs?)
<AlexDaniel> no, but as a human I'd be using the same tarballed version that is claimed…
<jnthn> No, you'd fetch and build the 3 tarballs independently if doing that. If there was a 2017.12 Rakudo Star, *that* would be impacted, however, since it bundles Rakudo/NQP/MoarVM
17:15 <AlexDaniel> maybe no need to be pedantic about that though…
17:16 <jnthn> Given the next release is ~10 days away, I don't think so
<AlexDaniel> jnthn: So I was thinking… what about having all three releases (moar, nqp, rakudo) done in one go in an automated way?
17:17 <AlexDaniel> like, it would avoid things like this, possibly
<jnthn> AlexDaniel: I'm OK with that in principle.
<jnthn> Well, this would have been avoided by a sanity check in "make release" also
<jnthn> I don't have time to make it happen, but I'm all for release automation if somebody has the time/motivation to work on it.
17:18 <AlexDaniel> well, I have a sakefile that I've been using for nqp+rakudo
<AlexDaniel> extending it to moarvm should take very little time
17:19 <jnthn> And uploading the tarball is just a commit to the MoarVM website git repo
<AlexDaniel> at the time I was sure that we're doing moarvm releases separately for some good reason?
<jnthn> I don't think there was any good reason
17:20 <jnthn> It's just the way things were :)
<AlexDaniel> hmmm ok
<AlexDaniel> I'll see what I can do then
<jnthn> The MoarVM release was something that I Just Did for some years, and it was pretty easy so I didn't mind :)
17:21 <jnthn> But I don't in the slightest bit mind having fewer release chores :)
17:23 coverable6 joined, benchable6 joined
17:39 bisectable6 joined
17:43 zakharyas joined
18:19 zakharyas joined
19:39 greppable6 joined
20:48 brrt joined
20:53 Voldenet joined
20:59 evalable6 joined
21:20 Ven`` joined
22:23 brrt joined
22:29 TimToady joined
22:30 <timotimo> brrt: i regret that i don't have any good ideas for how to go on with the jit
22:31 <brrt> ah, but, then i can probably help
22:32 <brrt> but first, why the question?
22:32 releasable6 joined, statisfiable6 joined, reportable6 joined
22:35 <brrt> or the statement, really
<brrt> anything in particular you'd like to achieve?
22:38 <timotimo> no, just you asking about the lisp cond support
22:41 <timotimo> bbiab
22:51 <brrt> oh, the cond idea is just a simplification measure
<brrt> the idea being that we can express all if/when constructs as COND blocks
22:52 <brrt> which means that the optimizer can be more regular in the implementation
<brrt> or at least, something like that
<brrt> wasn't a terribly well-defined idea, in fact
22:53 <brrt> and the drawback is that COND in its normal form consists of pairs of ((condition?) (statement))
<brrt> well, that doesn't parse in the expr JIT compiler
22:54 <brrt> so it'd have to be (COND (WHEN (condition?) (STATEMENT)) (WHEN (...) (...)))
22:55 <brrt> which is fine and still regular, but then we have the second problem, which is that this would naively compile to a conditional jump -> block -> jump -> label ... sequence, and... i thought you would have fewer jumps by grouping all the conditions
22:56 <brrt> and the statements
<jnthn> Yeah, branches are costly
<brrt> i'm not exactly sure what i thought is true
<jnthn> So when we can spot opportunities to do calculation instead of flow control, it's preferable
22:57 <jnthn> A representation that makes such opportunities easier for an optimizer to spot would seem good
<brrt> hmmm
22:59 <brrt> one way to ensure the grouping of conditions in the generated code is to have the tiler use a specific iteration order for COND
<brrt> which, in terms of complexity, has a very water-bed-like quality :-)
| jnthn: i found a tidbit you might find interesting | 23:00 | ||
| it turns out the java implementation of HashMap uses a bucket design with a red-black tree to implement the per-bucket set | 23:04 | ||
| since recently, instead of a LinkedList | |||
| jnthn | Oh, interesting. | 23:06 | |
| brrt | and at first i thought that it made no sense, because whenever your hash table would be so overfilled as to make this make a difference, you should probably increase the number of buckets rather than switch data structures | ||
| jnthn | Indeed | ||
| But? :) | |||
| brrt | however i thought some more about it | ||
| this obviously gives you a O(n log n) worst-case retrieval and modification cost | 23:07 | ||
| and that, in turn, can protect you from some forms of DoS attacks (that would happen if i knew the hash algorithm, and/or hashes wouldn't be sufficiently randomized) | 23:09 | ||
| jnthn | Oh, right. Interesting. | ||
| brrt | now, nobody does that kind of attack on perl anymore because in perl hashes are randomized | ||
| and i would suspect java hashes are also randomized | |||
| but, what's different is that java programs tend to be relatively long living, and hash tables might be relatively long-living as well, in which case randomization is not sufficient protection anymore | 23:11 | ||
| also, the additional cost of using a binary tree over a linked list, again in java, is not so large (just one more pointer) | 23:12 | ||
| so actually, this kind of makes sense | 23:13 | ||
| jnthn | Hm, interesting. | 23:27 | |
23:37 <brrt> by the way, do we still do lexical autoviv of objects on first access?
<brrt> as in, internally to the VM, rather than with an explicit condition?
23:41 <timotimo> i only know we'll throw out attribute autoviv
23:44 <brrt> then, i am all for throwing out lex autoviv as well
23:49 <jnthn> Yeah, we still do it
23:50 <jnthn> I'd like not to, but $/ and $! exist in every routine, and not allocating a Scalar every time for them was a big win
<jnthn> We need to be sure we can eliminate that in other ways
23:53 <brrt> well, the obvious other way is to have codegen
23:54 <brrt> insert an explicit check
<brrt> and set
<brrt> but that is going to make frames much larger
23:56 <jnthn> I was more thinking we always allocate them
<jnthn> And then if spesh can prove they are never accessed, it just tosses the initialization
<brrt> i like that idea
23:59 <brrt> my alternative idea was to have a sp_getlex that would not autovivify, combined with an autogenerated conditional block with a bindlex
<brrt> hmm, doesn't even have to be a full block though
<brrt> could be an sp_getlex and an sp_lexautoviv