04:31
kjp left
04:33
kjp joined,
kjp left
04:34
kjp joined
08:43
sena_kun joined
10:29
sjn left
10:34
sjn joined
10:39
finanalyst joined
10:41
sjn left
10:46
sjn joined
ab5tract | to clarify my self-proclaimed evil genius status of yesterday, I should have acknowledged timo++ and jnthn++ for both mentioning re-use of the Raku bits and bobs | 11:15 | |
Geth | rakudo/main: 27565cc1f7 | (Elizabeth Mattijsen)++ | lib/RakuDoc/To/RakuDoc.rakumod RakuAST: render attributes as doc-attribute in safe RakuDoc generation |
11:16 | |
ab5tract | but it felt pretty slick anyway when I realized that Raku being essentially a superset means that re-use wasn't about wiring up existing objects and re-using them in an NQP parsing chain | 11:17 | |
lizmat | :-) | 11:18 | |
ab5tract | but rather only required that we create a filetype recognizer and a "language" definition class that just passes `RakuLanguage.INSTANCE` to its super() :D | ||
usercontent.irccloud-cdn.com/file/....26.29.png | 11:27 | ||
lizmat | :-) | ||
Geth | rakudo/main: 8162f3eb3b | (Elizabeth Mattijsen)++ | lib/RakuDoc/To/RakuDoc.rakumod RakuAST: properly render submethods in safe RakuDoc Check ::Submethod before ::Method, as Submethod is a subclass of ::Method |
11:31 | |
rakudo/main: 8eef4c797d | (Elizabeth Mattijsen)++ | lib/RakuDoc/To/RakuDoc.rakumod RakuAST: merge =leading / =trailing into single paragraph As suggested by patrickb++ Note this can be reverted easily. |
11:38 | ||
14:03
finanalyst left
releasable6 | Next release in ≈4 days and ≈3 hours. There are no known blockers. Please log your changes in the ChangeLog: github.com/rakudo/rakudo/wiki/ChangeLog-Draft | 15:00 | |
16:06
summerisle is now known as eof
timo | i wonder how much performance penalty we would have to pay to have the objects table in serialized blobs be variable width entries instead of fixed length entries | 16:10 | |
though of course the deserialized data is a few times bigger than the objects table | 16:16 | ||
m: say "the serialized data is $(0x4b9968 / 0x1bdc00) times bigger"; say "the serialized data is $(0x4b9968 * 100 / 0x1383778)% of the total 'data' section" | 16:19 | ||
camelia | the serialized data is 2.7136015 times bigger the serialized data is 24.21371095% of the total 'data' section |
ab5tract | intriguing | ||
timo | this is the core c setting | 16:20 | |
ab5tract | how do you reckon we could get reasonable stats on the performance shift (if any)? | 16:22 | |
coleman | Where is the serialization implemented right now | 16:23 | |
Geth | rakudo/main: fdeb87d04b | ab5tract++ | src/Perl6/Optimizer.nqp Fix small breakage in the Mu:U sm optimization While migrating from an earlier version of the fix, I forgot to change `$rhs.value` back to `$rhs.ast`. It also appears that my issues with `$sm_type_how` were either self-inflicted or entirely imagined, as it works fine. |
16:24 | |
timo | coleman: you have 6model/serialization.c but src/core/bytecode.c is also important, and (almost) every REPR under src/6model/reprs/ has their own serialize and deserialize functions for both instance data and repr_data | 16:29 | |
coleman | gotcha; thank you | ||
For saving space, capnproto objects have an OPTIONAL packing scheme capnproto.org/encoding.html#packing | 16:30 | ||
timo | we do want to have almost-random-access to individual elements of a long table | 16:33 | |
we do have variable width integers in our scheme, and the table in question is all integers | 16:34 | ||
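[editor's note: the tradeoff being weighed here can be sketched in a few lines of Python. This is illustrative only and is not MoarVM's actual serialization format: fixed-width entries give O(1) indexing (index times stride), while variable-width LEB128-style entries save space on small values but need a side table of offsets to keep the almost-random access mentioned above.]

```python
def encode_varint(n):
    """LEB128-style unsigned varint: 7 payload bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_table(values, width=None):
    """Fixed-width table (`width` bytes per entry) or, with width=None,
    a varint table plus the offset index that restores almost-random
    access at the cost of holding that index in memory."""
    if width is not None:
        return b"".join(v.to_bytes(width, "little") for v in values), None
    blob, offsets = bytearray(), []
    for v in values:
        offsets.append(len(blob))
        blob += encode_varint(v)
    return bytes(blob), offsets

values = [3, 17, 200, 70000, 5]            # mostly-small values, as in a typical table
fixed, _ = encode_table(values, width=4)   # 5 entries * 4 bytes = 20 bytes
packed, index = encode_table(values)       # 8 bytes, but needs `index` to seek
print(len(fixed), len(packed), index)      # -> 20 8 [0, 1, 2, 4, 7]
```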
coleman | do we have an integration test in the moarvm repo that deals with bytecode files directly | 16:40 | |
even if it's one like "run this and don't crash" | 16:41 | ||
timo | no, no tests at all inside the moarvm repo | 16:42 | |
nqp's test suite is used as tests for moarvm | 16:43 | ||
i'm about to push a branch to moarvm where you can get a detailed dump of a .moarvm file (work-in-progress though) | 16:52 | ||
i'm aiming for a "what if xxd, but a human-readable index at the start, and things in the middle split up into chunks", so you'd have one hexdump per string, one hexdump per STable data, etc | 16:53 | ||
for some segments right now it's just one dump for the whole thing | |||
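[editor's note: the dump layout described above, a human-readable index at the start and one hexdump per chunk, can be roughly sketched in Python. The chunk names and the (name, offset, bytes) shape are invented for illustration and do not reflect the actual branch.]

```python
import binascii

def hexdump(data, base=0, width=16):
    """Plain xxd-style hexdump of one chunk, offsets relative to `base`."""
    lines = []
    for i in range(0, len(data), width):
        row = data[i:i + width]
        hexpart = binascii.hexlify(row, " ").decode()   # sep arg: Python 3.8+
        asciipart = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
        lines.append(f"{base + i:08x}  {hexpart:<{width * 3 - 1}}  {asciipart}")
    return "\n".join(lines)

def dump_file(chunks):
    """Emit a human-readable index first, then one hexdump per chunk
    instead of a single dump of the whole file."""
    out = ["== index =="]
    out += [f"{name}: offset {off:#x}, {len(data)} bytes"
            for name, off, data in chunks]
    for name, off, data in chunks:
        out.append(f"== {name} ==")
        out.append(hexdump(data, base=off))
    return "\n".join(out)

chunks = [("string heap", 0x20, b"hello\0world\0"),
          ("frames data", 0x40, bytes(range(20)))]
output = dump_file(chunks)
print(output)
```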
coleman | I think that's a prerequisite to any optimization, yea | 16:58 | |
timo | well, you could alternatively look at the stuff in gdb after moar has loaded it, or measure stuff from a running program | 16:59 | |
this has been prompted by reproducible builds and diffoscope, so i'm trying to make the dump work as well as it can without reading additional files from disk | |||
but of course the things that something like Python has "as built-in as it gets", like str, list and dict, are all high-level classes you pull in from the setting, and partially they come in through the bootstrap i think | 17:01 | ||
and the serialization format isn't self-describing like a json would be, so you can't decipher it without having the moar code (or binary of course) | |||
the dumper should not run a single drop of custom code | 17:02 | ||
coleman | agreed | ||
timo | but every moar bytecode has a deserialization frame that is responsible for a bunch of important setup, so we can't trivially have the fruits of that labor be exposed helpfully in the dumper output | 17:03 | |
plus it should run fast | |||
hm. i assume xxd is a lot faster than my silly hand-crafted hxdump function, so for use in tools like diffoscope there should be a "carve" mode (like binwalk's carve) aka "extract" which makes a set of files with the bits in it | 17:04 | ||
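[editor's note: a binwalk-style "carve"/"extract" mode as suggested here might look like the following Python sketch; the (name, offset, bytes) chunk shape is hypothetical, and a real tool would first parse the .moarvm section table to obtain the chunks.]

```python
import pathlib
import tempfile

def carve(chunks, outdir):
    """Write each named section of a file to its own blob on disk, so
    generic tools (diffoscope, cmp, xxd) can compare two builds segment
    by segment. `chunks` is a list of (name, offset, bytes)."""
    outdir = pathlib.Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    written = []
    for name, offset, data in chunks:
        # the offset in the filename keeps pieces sorted and traceable
        path = outdir / f"{offset:08x}-{name.replace(' ', '_')}.bin"
        path.write_bytes(data)
        written.append(path)
    return written

with tempfile.TemporaryDirectory() as d:
    files = carve([("string heap", 0x20, b"hello\0"),
                   ("frames data", 0x40, b"\x01\x02")], d)
    names = [p.name for p in files]
print(names)
```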
coleman | easy-to-run > faster (at first) | 17:08 | |
timo | i will probably contribute moarvm support to diffoscope, it's not much. but i don't want to make them add moarvm to the dependencies or even suggested dependencies tbh | 17:10 | |
it's already a huge tree of dependencies you pull in when you install diffoscope | |||
ok, pull moarvm and check out the detailed_moarvm_file_dump branch | 17:12 | ||
it's not doing everything yet, but it serves as a starting point if you want to go explore | |||
i'll keep hacking at it | |||
coleman | ty | 17:14 | |
timo | if you want to poke at deserialization, you can `use nqp;` in raku code, then `nqp::getobjsc($existing_thing)` (may need to decont) can give you the SC an object belongs to, then `nqp::scgethandle` tells you what you have the SC of, `nqp::scgetdesc` gives you the "description" of the SC (probably path or filename), `nqp::scobjcount` gives you how many objects are in it, and `nqp::scgetobj` gets you an object out of the SC by its index | 17:19 | |
breakpoints in "work_loop" can be illuminating when it comes to trees of serialized objects being read | |||
functions with "sc_demand" in the name are also interesting. demand stable, demand code, demand object | 17:20 | ||
ab5tract | timo++ | 17:21 | |
timo | oh the frames data is also quite sizable | 17:28 | |
m: say "hexdump of 25210 frame data is $(0xbae8b0 / 16) lines"; say "$(0xbae8b0 / 25210) bytes per frame on average" | 17:34 | ||
camelia | hexdump of 25210 frame data is 765579 lines 485.889092 bytes per frame on average |
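[editor's note: for anyone following along, the same arithmetic as the one-liner above, re-done in Python with the numbers from the chat.]

```python
frames_bytes = 0xbae8b0        # size of the frames data segment, from above
frame_count = 25210            # number of frames in the core setting

hexdump_lines = frames_bytes // 16        # one xxd-style line per 16 bytes
avg_bytes = frames_bytes / frame_count    # average serialized size per frame

print(hexdump_lines)   # 765579
print(avg_bytes)       # ~485.889
```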
timo | good candidate for a table with the most important info, and if anything in the other parts differs, the byte level diff would show them as well | 17:35 | |
17:51
sena_kun left
coleman | I'm attempting to install windows on my other NVME drive right now; I have an old windows 10 license i think. I will either succeed and compile Rakudo on windows, succeed and play Rocket League, or fail. | 18:09 | |
back in a bit | 18:10 | ||
one of those 3 | |||
timo | i thought rocket league runs perfectly on linux :) | 18:13 | |
coleman | it stopped when i switched from PopOS to Ubuntu | 18:15 | |
or some other upgrade | |||
or maybe it was Epic Games' fault | 18:16 | ||
who knows. | |||
timo | gist.github.com/timo/4b668a64657c8...d91aad4e4b for other spectators, here's a snipped new moar dump output example, just over 10k lines :) | 18:44 | |
anyway, dumping the entire core.c setting with this code already just takes one second, unless you dump it to an unbuffered tty | 18:47 | ||
now hunting a spot where the build directory is going into the core setting's string heap | 19:15 | ||
19:26
sena_kun joined
Geth | rakudo/main: 6e182acdb1 | ab5tract++ | t/08-performance/03-optimizer-regressions.t Add test for "not-Mu:U" ~~ Mu:U As it is optimizer-related, I've stashed it in the Rakudo test suite under `08-performance`. |
19:49 | |
20:40
sena_kun left,
finanalyst joined
22:07
finanalyst left
timo | ok this is slightly odd: when we parse a $?FILE we generate the full file path as a constant ... so when we parse `multi sub trait_mod:<is>(Attribute:D $attr, |c) { X::Comp::Trait::Unknown.new(file => $?FILE, [...]` in the core setting, we just put the core setting's path in there? but when i actually run into it in code, something else must be updating it to point at where it actually happened | 22:53 | |
as the parser sees it | |||
yeah, World's handle-begin-time-exceptions is one place that does that kind of thing | 22:58 | ||
and directly below that the rethrow method | 22:59 | ||
in any case, i don't think putting $?LINE and $?FILE there is correct at all | 23:08 | ||
m: class A { has $.haha }; try trait_mod:<is>(A.^attributes[0], funny => "very"); say $!.raku | 23:16 | ||
camelia | X::Comp::Trait::Unknown.new(pos => Any, filename => Any, line => 220, directive-filename => Any, column => Any, modules => [], is-compile-time => Bool::False, pre => Any, post => Any, highexpect => ["rw", "readonly", "box_target", "leading_docs", "tra… | ||
timo | mhh ... hmm? | 23:17 | |
m: say $?FILE | |||
camelia | <tmp> | ||
timo | ... ... ... the code was setting "file" rather than "filename" | 23:19 | |
m: use MONKEY; augment class X::Comp::Trait::Unknown { method new(|c) { say "called with $(c.raku)" } }; try trait_mod:<is>(X::Comp::Trait::Unknown.^attributes[0], funny => "barely"); say $!.raku; | 23:21 | ||
camelia | called with \(:declaring("an attribute"), :file("/home/camelia/rakudo-m-1/gen/moar/CORE.c.setting"), :highexpect(("rw", "readonly", "box_target", "leading_docs", "trailing_docs")), :line(220), :subtype("funny"), :type("is")) X::Method::NotFound.new(… |
||
timo | there we go | ||
Geth | rakudo/no_file_variable_in_precomps: 85a07423e3 | (Timo Paulssen)++ | 2 files Remove $?FILE and $?LINE where they don't make much sense The filename and line will get set by World if the exception is thrown at BEGIN time, "file" is not a named argument the X::Comp constructor takes ("filename" would be), so this is not a regression. And the filename and line from the core setting is not very helpful, since you also get a stack trace in that case. |
23:35 | |
rakudo: timo++ created pull request #5647: Remove $?FILE and $?LINE where they don't make much sense |
23:37 |