MasterDuke_ | don't think i tried just removing that check altogether though... | 00:00 | |
samcv | do i need qt5-base | 00:01 | |
MasterDuke_ | i think? it was a while ago i was messing around with that | ||
samcv | ok. i checked it out of the svn | ||
just search for 1<<27? | 00:02 | ||
MasterDuke_ | it's line 639 of some file | 00:03 | |
samcv | that's very specific | ||
MasterDuke_ | going off of what timotimo pasted, i assume he grabbed the line number from vim | 00:04 | |
samcv | oh | ||
ok i found 1<<27 | |||
on exactly that line. heh | |||
timotimo | here comes the hang | 00:05 | |
cpu usage has drastically decreased, unsurprisingly | 00:07 | ||
samcv | nice | ||
timotimo | what's nice about that? :D | ||
samcv | the cpu usage decreased | ||
timotimo | well ... yeah ... because it's waiting for data to be shoveled in from swap and back again | ||
samcv | oh. lol. | 00:08 | |
! | |||
timotimo | d'oh, it's reaching the amount of swap i have soon | ||
6.7g of 8.25g | |||
samcv | uh oh. you need more swap | ||
timotimo | i do | ||
samcv | i used up 16GB maybe more | ||
no more than 18GB tho | |||
timotimo | writing profiler output!!! | ||
samcv | yep :) | 00:09 | |
MasterDuke_, gonna try removing all the checks that seem sane to remove | 00:21 | ||
cause some things like failure to realloc or malloc can trigger document-too-large errors, so i didn't touch those ones | |
MasterDuke_ | in qt? | ||
samcv | yes | ||
but commented out a few places that seemed arbitrary | |||
MasterDuke_ | interested to see what you find | 00:22 | |
timotimo | i want to go to bed soon ... but the thing is still running :| | ||
samcv | go to bed. it might be done in an hour | ||
timotimo | hah. | ||
in theory it'd be faster because i'm throwing out list items and hash keys/values while creating the json | 00:23 | ||
so consecutive gc runs will have slightly less work | |||
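A rough sketch of that consume-as-you-serialize idea in Perl 6 (the node list and the toy JSON output are made up, not the profiler's actual code): shifting each row off the work list as it is written means already-serialized nodes become garbage before the next GC run.

    my @nodes = (1..5).map({ %( id => $_, payload => 'x' x 10 ) });
    while @nodes {
        my %node = @nodes.shift;                       # drops the list's reference to this node
        $*OUT.print: '{"id":' ~ %node<id> ~ "}\n";
    }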
oh, huh, the json file is still at 0 bytes? | |||
MasterDuke_ | if it's better can do the same for to_sql | ||
samcv | argh still too large document | 00:24 | |
timotimo | samcv: one idea is to switch from a JSONDocument-like API to a stream-like API | ||
where the size of the document is practically irrelevant | |||
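A minimal sketch of that stream-style approach (the helper names and hand-rolled escaping are illustrative assumptions, not an existing API): each record is written to the handle as soon as it is produced, so the full document never needs to fit in memory.

    sub json-str(Str() $s) { '"' ~ $s.trans(['\\', '"'] => ['\\\\', '\\"']) ~ '"' }
    sub emit-records(IO::Handle $out, @records) {
        $out.print: '[';
        for @records.kv -> $i, %r {
            $out.print: ',' if $i;
            $out.print: '{' ~ %r.map({ json-str(.key) ~ ':' ~ .value }).join(',') ~ '}';
        }
        $out.print: "]\n";
    }
    emit-records($*OUT, [ { :id(1), :calls(7) }, { :id(2), :calls(3) } ]);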
samcv | yeah | ||
let's just do a memory dump | 00:25 | ||
like microsoft word does | |||
sorry that made only 1/4 of sense. ignore that | |||
timotimo | kch kch kch | ||
geekosaur | word hasn't done that in a while | 00:27 | |
I am pretty sure the xml-ish format they use now is not what they have in memory :) | |||
samcv | yeah they don't anymore | ||
timotimo | nah, they still dump the memory, but they put some random xml tags around pieces of it :P | ||
samcv | hahaha | 00:28 | |
geekosaur | also they've actually been addressing the "but it doesn't behave the same on windows vs. mac" | ||
(I still suspect it's because they switched to the mac version as the main one... because the old one was a horrid trainwreck that derailed development of windows 8 because some of their streamlining uncovered bugs that had been patched around in the OS during the "office is our cash cow, office devs can do anything they want" phase.) | 00:30 |
timotimo | i think swap usage is slowly dropping | 00:31 | |
samcv | just wait | ||
timotimo | it'll end up being at like 0.05% cpu usage | ||
samcv | i had that happen then after it went down for a while it went back up. has it actually started outputting the json? | ||
timotimo | let's see. | ||
nah, still 0 bytes | |||
doesn't help that i'm also creating a new swapfile on one of the disks where there's already an active swapfile | 00:32 | ||
oh wow | 00:34 | ||
why not update my laptop's packages on the side | |||
just a download of 1.3 gigs | |||
Disk Requirements: | 00:40 | ||
At least 169MB more space needed on the / filesystem. | |||
i've got so much swap now | 00:44 | ||
MasterDuke_ | .tell pmurias i think NQP_VERBOSE_EXCEPTIONS=1 is what gives better jvm errors | 00:58 | |
yoleaux | MasterDuke_: I'll pass your message to pmurias. | ||
MasterDuke_ | .tell [Coke] i think NQP_VERBOSE_EXCEPTIONS=1 is what gives better jvm errors | ||
yoleaux | MasterDuke_: I'll pass your message to [Coke]. | ||
MasterDuke_ | .tell TimToady i think NQP_VERBOSE_EXCEPTIONS=1 is what gives better jvm errors | 00:59 | |
yoleaux | MasterDuke_: I'll pass your message to TimToady. | ||
samcv | well i got it to 14% of my mem usage (qt) | 01:01 | |
and then i get too large document error | |||
MasterDuke_ | m: say 1: | 03:21 | |
camelia | HERE: - sym: : - O: 1 |
MasterDuke_ | bisectable6: say 1: | ||
bisectable6 | MasterDuke_, Bisecting by output (old=2015.12 new=241831e) because on both starting points the exit code is 0 | ||
MasterDuke_, bisect log: gist.github.com/32e2abe320c388f3c4...b23eafa2b7 | |||
MasterDuke_, There are 20 candidates for the first ‘new’ revision. See the log for more details | |
MasterDuke_ | bisectable6: old=2017.03 say 1: | 03:22 | |
bisectable6 | MasterDuke_, Bisecting by output (old=2017.03 new=241831e) because on both starting points the exit code is 0 | ||
MasterDuke_, bisect log: gist.github.com/8ec0603f10e4f5573e...dc4ca9c07e | |||
MasterDuke_, There are 20 candidates for the first ‘new’ revision. See the log for more details | |
MasterDuke_ | c: 2017.03,HEAD say 1: | 03:23 | |
committable6 | MasterDuke_, ¦2017.03: «1» ¦HEAD(241831e): «HERE:␤- sym: :␤- O: ␤␤1» | |
MasterDuke_ | this ^^^ was talked about over in #perl6, github.com/rakudo/rakudo/blob/nom/....nqp#L4504 introduced in github.com/rakudo/rakudo/commit/cd...a1c4018b7b | 03:25 | |
Geth | nqp: e843d339bc | MasterDuke17++ | docs/ops.markdown Alphabetize the ops and use consistent wording |
03:35 | |
rakudo/nom: c9ebfc2023 | TimToady++ | src/Perl6/Grammar.nqp remove inadvertent debugging line |
03:46 | ||
TimToady guesses that probably snuck in at some point due to hitting 'u' one too many times in vim, since it certainly wasn't part of the uncurse effort... | 03:53 | ||
yoleaux | 02:42Z <MasterDuke_> TimToady: if you backlog here irclog.perlgeek.de/perl6/2017-04-18#i_14443061, was this intended? | ||
TimToady | probably one of the earlier 'useless use' thingies that I backed into | 03:55 | |
course, I'm starting to get to the age where I can just blame brainrot... | 03:56 | ||
nine | YEAH, YEAH, YEAH, YEAH! I just managed for the first time to run Inline::Perl5's test suite using moarvm, nqp, rakudo _and_ Inline::Perl5 installed from RPM packages. | 05:37 | |
This is it! The culmination of 16 months of working on precomp stuff | |||
Yesterday evening I was already so close. I got the RPM to work with my manually compiled rakudo but not with the one installed from the RPM. Turns out I had a workaround in that package that deleted the precomp files for CompUnit::Repository::Staging itself. | 05:38 | ||
Looks like I don't need that workaround anymore :) | 05:39 | ||
Now the work on packaging Perl 6 modules and submitting to openSUSE can start in earnest. And now that deployment is sorted out we can start using Perl 6 and Inline::Perl5 in production at work. | 05:43 | ||
samcv | yay! | 05:44 | |
nine, yay | |||
bartolin | very nice, nine++ | 05:46 | |
TimToady | do we need a point release? prolly need one for my debugging flub anyway... | 05:59 | |
people are likely to have used 'method $obj: @args' in the ecosystem | 06:00 | ||
TimToady is a little surprised no test caught this, but I guess the harness just tends to ignore stderr | 06:02 | ||
samcv | ok so i got my appimage build attempting to install every module | 06:04 | |
into an appimage | |||
nine | TimToady: no, I'm lucky. The workaround was in my .spec file :) | ||
samcv | still going travis-ci.org/samcv/rakudo-appimag...9739#L2701 not sure how long it will take to finish or the best way to record which were uninstallable. | 06:05 |
TimToady | darn, I was hoping I wouldn't be the only reason for a point release :) | ||
samcv | so i can make some sort of uh. ecosystem warning system idk | ||
bartolin | .tell Zoffix unfortunately 9d8e391f3b (rakudo) does not work on JVM: Unknown encoding 'utf8-c8'. One option would be to add a workaround for JVM there ... | |
yoleaux | bartolin: I'll pass your message to Zoffix. | ||
samcv | it installs them not as an appimage but before it's made, btw. so that doesn't affect the results | 06:06 |
i guess i can make it a gh-pages thing like i did for the appimages and for moarvm coverage | |||
hopefully travis ci doesn't stop the build before everything is installed. if it does i'll probably have to split the build up or something | |
also nine you know about modules. is there a file that would be best to look at after i have tried to install all the modules, so i can get a list of what actually installed and compare it to the full list? | 06:14 |
TimToady | .tell Zoffix We'll probably need a point release before doing Star, 'cuz I screwed up the 'new Foo: ...' syntax by leaving a debugging line in (after worrying about all the heavy stuff, wouldn'tchya know it'd be something stupid) | 06:28 | |
yoleaux | TimToady: I'll pass your message to Zoffix. | ||
samcv | ruh roh | 06:29 | |
TimToady, what's a good percentage of eco modules acceptable to be failing | 06:31 | ||
out of the total | |||
i mean 0 would be nice but seems a bit unrealistic | |||
TimToady | um, 0 would be good :0 | ||
I suppose it depends on why they're failing | |||
samcv | well they shouldn't fail. i mean | 06:32 | |
not a good thing | |||
TimToady | if they're failing due to extraneous debugging info, well, I know where that came from :) | ||
samcv | hm | ||
TimToady | if they don't like the new I/O for some reason, that's something else | ||
samcv | so far IO::Prompter, Math::ContinuedFractions, List::Utils, Text::Diff, BioInfo:ver('0.4.3'):auth('Matt Oates'), Math::PascalTriangle:ver('0.1.0'), DateTime::Math, Flower, Hinges | |
TimToady | if they happen to think Cursor and Match are different types, well... | 06:33 | |
samcv | are failing. out of those travis has tested | ||
travis-ci.org/samcv/rakudo-appimag...9739#L3524 what is this. possibly recent change? | |||
t/math.t .. 1/17 Ambiguous call to 'infix:<->'; these signatures all match: | 06:34 |
:(DateTime:D \a, DateTime:D \b) | |||
:(DateTime:D $a, DateTime:D $b) | |||
that doesn't sound that great | |||
TimToady | dunno, but if DateTime is malfing, that could easily take down a number of other modules | 06:36 | |
samcv | yea | ||
[Tux] | This is Rakudo version 2017.04-2-gc9ebfc202 built on MoarVM version 2017.04 | ||
csv-ip5xs 3.182 | |||
test 12.651 | |||
test-t 5.111 - 5.118 | |||
csv-parser 13.127 | |||
RabidGravy | BTW, I tested all of mine on Saturday so that's about 7% passing :) | |
samcv | hehehe you have 7% of all modules? | ||
o.O | 06:37 | ||
TimToady | well, we could've broken them since Saturday :) | ||
samcv | some of them have no plan in tap output | 06:38 |
list::util has t/08-combinations.t .... 42/? Type check failed in binding; expected Positional but got Seq ((["a", "b", "c"], | |
RabidGravy | yeah 7.63% it appears ;-) | ||
samcv | heh my job has now been terminated X| | ||
will have to randomize which modules go each time. or maybe split them up between two jobs idk | 06:39 | ||
RabidGravy | it was over 10% at some point last year, but I slowed down | ||
TimToady | well, maybe we'd better see the downstream fallout over the next day or two; a lot of people hacked on a lot of things over the last month | 06:43 | |
RabidGravy: did yours all work, or did you have to tweak 'em? | |||
RabidGravy | a couple of tweaks, the symlink semantics bit in one place and there was something with IO::Path.append that needed more coercing | 06:46 | |
TimToady | Math::ContinuedFractions appears not to download from github | ||
RabidGravy | nothing like "lexical import" though | ||
TimToady | huh, can't get IO::Prompter either, maybe my zef is screwed up or too old | 06:47 | |
or maybe github is screwy at the moment | 06:49 | ||
samcv | well it installed or tried to install 95 modules | 06:56 | |
so that's not bad for one travis run. we have 700 right? | |||
just gonna need at least 7 travis builds XD | |||
86 pass and 11 fail. so | 06:59 | ||
that's not good statistics so far :O | |||
nine | samcv: I'm not sure I understood your question | 07:03 | |
samcv | uhm. there's a json file that keeps track of what's installed right. as far as modules goes? | ||
here's the test results gist.github.com/samcv/835b0640ef91...771e230b6e | |||
RabidGravy | 812 | 07:04 | |
samcv | so that's an 8.8% failure rate out of the modules it had time to install | ||
RabidGravy, modules total? | |||
RabidGravy | yeah | 07:06 | |
phew, none of mine in the FAIL list | |||
;-) | |||
samcv | time to uhm. i guess. split it into 10 builds | 07:07 | |
i'll sort the modules alphabetically and then choose 1/10 of a section to try and install | |||
nine | samcv: Well there's a dist's meta data that's stored in the repo. But I'm not sure what exactly you're after. What do you mean by "full list"? | ||
samcv | what is installed | ||
a list of everything installed i can process programmatically | 07:08 |
though i could do zef list --installed i guess. maybe that's the best way? but i do want to know the file it's stored in too. since that will help with other things | |
nine | m: say $*REPO.next-repo.installed>>.meta | 07:09 | |
camelia | ({auth => github:niner, author => github:niner, authors => [Stefan Seifert], depends => [LibraryMake], description => Use Perl 5 code in a Perl 6 program, files => {resources/libraries/p5helper => 2F6B236B77BC9D0E77C1B73DBAFA53E81D238E83.so}, license => … | |
nine | samcv: ^^^ | ||
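Following on from that, a sketch of how the installed-vs-full-list comparison could look (module-list.txt and the repo filtering are assumptions):

    my @installed = $*REPO.repo-chain
        .grep({ .can('installed') })
        .map({ |.installed })
        .map({ .meta<name> });
    my @wanted = 'module-list.txt'.IO.lines;            # hypothetical full ecosystem list
    say "not installed: ", (@wanted (-) @installed).keys.sort;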
samcv | thx | 07:12 | |
TimToady | IO::Prompter is using ancient syntax ('as') in a signature | ||
Math::ContinuedFractions appears to just be producing wrong results | 07:13 | ||
Text::Diff is failing with: No such method 'succ' for invocant of type 'List' | 07:14 | ||
so seems to be a variety of reasons | 07:15 | ||
samcv | yeah | 07:16 | |
Zoffix | . | 07:39 | |
yoleaux | 06:05Z <bartolin> Zoffix: unfortunately 9d8e391f3b (rakudo) does not work on JVM: Unknown encoding 'utf8-c8'. One option would be to add a workaround for JVM there ... | |
06:28Z <TimToady> Zoffix: We'll probably need a point release before doing Star, 'cuz I screwed up the 'new Foo: ...' syntax by leaving a debugging line in (after worrying about all the heavy stuff, wouldn'tchya know it'd be something stupid) | |||
Zoffix | Cool. I don't think NeuralAnomaly knows how to do point releases, so I get to keep my skills sharp by doing it... *dun-tun-tun*... by hand! | 07:40 | |
.tell AlexDaniel something's wrong with committable6. Doesn't respond. I tried running ./verify-and-unfuck but it seems to just sit there; I tried deleting the "deleteme" file; but nothing helped | 07:42 |
yoleaux | Zoffix: I'll pass your message to AlexDaniel. | ||
nine | Oh and once we've got all Perl 6 modules in the Open Build Service, we could use that for smoke testing the whole ecosystem. Because the build service will rebuild packages on changes to their dependencies. | 07:44 | |
Zoffix | Heh. Google doesn't want us to do a point release apparently: "Starting VM instance "perlbuild2" failed. Error: The zone 'projects/perl6-build/zones/us-east1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later." | ||
AlexDaniel: you have a robomessage | 07:47 | ||
AlexDaniel | . | ||
yoleaux | 07:43Z <Zoffix> AlexDaniel: something's wrong with committable6. Doesn't respond. I tried running ./verify-and-unfuck but it seems to just sit there; I tried deleting the "deleteme" file; but nothing helped | |
Zoffix | And it pinged out like 1 min ago | 07:48 | |
AlexDaniel | hmm these aren't the files you should use for such a situation, actually XD | |
Zoffix | AlexDaniel: also, is there a ./stop-all ? | ||
AlexDaniel | dammit I have to clean it up | ||
Zoffix | :} | ||
AlexDaniel | Zoffix: yes, it is 「rakudobrew build moar」 | 07:49 |
Zoffix | :o | 07:50 | |
AlexDaniel | hm, it failed it on HEAD | ||
which was 4 hours ago | |||
e: say 42 | 07:51 | ||
ah-ha | |||
samcv | i can't report issues here github.com/colomon/io-prompter | 07:53 | |
there's no issues tab X| | |||
Zoffix | hm... I see no option to switch zone for my VM... :S | 07:54 | |
AlexDaniel | c: HEAD^ say 42 | ||
committable6 | AlexDaniel, ¦HEAD^: «42» | |
AlexDaniel | c: HEAD say 42 | ||
committable6 | AlexDaniel, ¦HEAD(c9ebfc2): «No build for this commit» | |
Zoffix | samcv: just make a fake PR and describe issue there, along with saying Issues tab is disabled | ||
AlexDaniel | alright, this should fix itself ≈3 minutes | 07:55 |
Zoffix | c: 2017.04 class Foo {}; $ = new Foo: | ||
committable6 | Zoffix, ¦2017.04: «HERE:␤- sym: :␤- O: ␤» | |
AlexDaniel | oh noes | ||
Zoffix | ehehe. This is so funny :) | ||
AlexDaniel | this isā¦ horrible actually | ||
Zoffix | It's a new feature, to... um... discourage people from using that notation :P | 07:56 | |
AlexDaniel | are we going to have a 2017.04.01 to fix that? | 07:57 | |
Zoffix | Yup. As soon as manage to power on a VM | 07:58 | |
It's 4AM, I guess everyone's running nightly stuff, which is why the zone is too busy :| | 07:59 | ||
Zoffix tries creating a new VM in another zone, hoping to hook up the same drive to it | 08:00 | ||
Don't see an option. Stupid google | 08:03 | ||
samcv | omg this module is so bad | 08:07 | |
Type check failed in binding to parameter '$in'; expected IO but got <anon|94140617225856> (<anon|94140617225856>...) | |||
io::prompter. with `:d(:$default) as Str = "",` as a parameter | 08:08 | ||
at least fixing that parameter makes more than 0 tests pass? | 08:09 | ||
AlexDaniel | e: say 42 | 08:10 | |
evalable6 | 42 | ||
AlexDaniel | alright | ||
Zoffix: thanks! | |||
Zoffix | ugh, this is really annoying. | 08:13 | |
No easy way to move to new zone or even make the VM non-ephemeral. | 08:14 | ||
or detach a drive from one VM to use on another | 08:15 | ||
samcv | ok will have to fix some IO related stuff in this module | 08:17 | |
arghhh | 08:19 | ||
Zoffix | ... "Starting VM instance "perlbuild2-non-preemptive" failed. Error: A required resource is not available." | ||
And it deleted the VM I was trying to create.. wtf :( | |||
samcv | class StubIO is IO | ||
ok. i think i can safely change these to IO::Handle's too | 08:20 | ||
now all but one test file works | 08:23 | ||
Zoffix | YEY! I'm in :) Had to bump down the CPUs to 8 | 08:26 | |
samcv | checked another one off. gist.github.com/samcv/835b0640ef91...771e230b6e | 08:30 | |
of failing modules | 08:31 | ||
can we get an intern | |||
Zoffix | samcv: what's this stuff? | ||
samcv | failing modules :X | ||
just made a PR due to IO breakage from that one module | |||
Zoffix | samcv: but what's with the selection of modules? How were they selected? | 08:32 | |
samcv | uh. not purposefully. | 08:33 | |
they are just only the ones that completed/failed before travis ended the build | |||
i tried to install every single module | |||
Zoffix, can you or somebody write me some code to divide an array holding all 700 or so module names into sections? | |||
so i can set an ENV variable to a number (number of builds) and another to which build it is. | 08:34 | ||
Zoffix | m: my @a = 1..10; dd @a.rotor: @a/5, :partial | ||
camelia | ((1, 2), (3, 4), (5, 6), (7, 8), (9, 10)).Seq | ||
samcv | so NUM_BUILDS=10 then it splits the array into 10 sections | ||
Zoffix | m: my @a = 1..10; dd @a.rotor: @a/3, :partial | ||
camelia | ((1, 2, 3), (4, 5, 6), (7, 8, 9), (10,)).Seq | ||
samcv | and BUILD_NUM=2 then it needs the 2nd section of that array | ||
and only build that section. and also make sure that the 10th or last one will get ones that are in addition to the divisor | 08:35 | ||
err remainder type things | |||
hm | |||
that seems good | |||
thank you kindly. :partial is critical :) did not know that one. was trying to hack something and it ended up being error prone | |||
Zoffix | m: %*ENV<NUM_BUILDS BUILD_NUM> = 10, 2; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | 08:36 | |
camelia | (23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33) | ||
samcv | m: %*ENV<NUM_BUILDS BUILD_NUM> = 10, 10; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | 08:37 | |
camelia | Nil | ||
samcv | m: %*ENV<NUM_BUILDS BUILD_NUM> = 9, 10; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | ||
camelia | Nil | ||
samcv | m: %*ENV<NUM_BUILDS BUILD_NUM> = 10, 9; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | 08:38 | |
camelia | (100,) | ||
samcv | m: %*ENV<NUM_BUILDS BUILD_NUM> = 10, 8; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | ||
camelia | (89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99) | ||
samcv | m: %*ENV<NUM_BUILDS BUILD_NUM> = 10, 1; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | ||
camelia | (12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22) | ||
samcv | m: %*ENV<NUM_BUILDS BUILD_NUM> = 10, 0; my @a = 1..100; dd (@a.rotor: @a/(%*ENV<NUM_BUILDS>-1), :partial)[%*ENV<BUILD_NUM>] | ||
camelia | (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11) | ||
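For the record, a sketch of the chunking being worked out above (the input file and env-var defaults are assumptions); using a ceiling'd chunk size with :partial avoids the divisor/remainder juggling:

    my @modules = 'module-list.txt'.IO.lines;           # assumed input
    my $builds  = %*ENV<NUM_BUILDS> // 10;
    my $num     = %*ENV<BUILD_NUM>  // 0;               # 0-based chunk index
    my $size    = ceiling @modules / $builds;
    my @chunk   = @modules.rotor($size, :partial)[$num] // ();
    say "this build installs {+@chunk} modules";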
Geth | rakudo/nom: 88a6facc81 | (Zoffix Znet)++ | src/core/IO/Path.pm: Undo IO::Path.resolve fix on JVM. Seems it doesn't know about utf8-c8 yet, so swap JVM to use the pre-fix code. |
08:39 | |
roast: 6fc4cf833d | (Zoffix Znet)++ | integration/weird-errors.t Test there is no unwanted output with `new Foo:` |
09:00 | ||
rakudo/nom: c8cb6a61fa | (Zoffix Znet)++ | docs/ChangeLog Log 2017.04.1 changes |
rakudo/nom: 7eb37996a3 | (Zoffix Znet)++ | docs/announce/2017.04.1.md Add 2017.04.1 Release Announcement |
09:02 | ||
rakudo/nom: e49b3728a8 | (Zoffix Znet)++ | VERSION [release] bump VERSION to 2017.04.1 |
09:03 | ||
Zoffix | Does changelog in emailed release announcements always look this messed up? www.nntp.perl.org/group/perl.perl6....15215.html | 09:18 |
Seems multi-line entries have the 1..* lines flush to left margin instead of indented :S | |||
Looks fine in my email reader :/ | 09:19 | ||
awww... | 09:24 | ||
Looks like the rakudo.org/downloads/rakudo/ script doesn't know how to handle point releases either... | |||
Hope I can fix it in time... I need to wake up for work in 35 minutes | 09:26 | ||
I guess it's 'cause PHP's glob sorts it that way :| | 09:27 | ||
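In Perl 6 terms (the downloads page itself is PHP), the fix amounts to sorting by Version instead of by plain string; the filename pattern here is an assumption:

    my @tarballs = <rakudo-2017.04.1.tar.gz rakudo-2017.03.tar.gz rakudo-2017.04.tar.gz>;
    say @tarballs.sort({ Version.new(.match(/ \d+ ['.' \d+]+ /).Str) });
    # (rakudo-2017.03.tar.gz rakudo-2017.04.tar.gz rakudo-2017.04.1.tar.gz)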
Done. | 09:55 | ||
Geth | rakudo/nom: 7e826f434d | (Zoffix Znet)++ (committed using GitHub Web editor) | docs/announce/2017.04.1.md It's not the future yet. |
09:58 | |
Zoffix | "You had one job" :) | ||
And that file made it into the tarball. Brilliant. | |||
At least the filename is right | 09:59 | ||
Annndd cut | 10:01 | ||
Zoffix celebrates with appropriate amount of fun | |||
.tell bartolin Thanks. Fixed by undoing the bugfix for JVM: github.com/rakudo/rakudo/commit/88...01e97181f4 | 10:03 | ||
yoleaux | Zoffix: I'll pass your message to bartolin. | ||
Zoffix | .tell TimToady point release cut. We also discovered a bug on our downloads page that didn't sort point releases correctly. So this was all beneficial :) | ||
yoleaux | Zoffix: I'll pass your message to TimToady. | ||
jnthn | nine: On Rakudo latest I'm seeing: | 10:06 | |
An exception occurred while evaluating a constant | |||
at /home/travis/build/edumentab/rmtly/site#sources/F6A76DDBC4B3F739D1B0D02B0403CF4AEB0CBC07 (Digest::SHA1::Native):5 | |||
Exception details: No such method 'absolute' for invocant of type '<anon|863819600>' | |||
That line is github.com/bduggan/p6-digest-sha1-...ive.pm6#L5 | |||
I think you changed something in this area recently? | 10:07 | ||
samcv | jnthn, i'm getting all the eco modules tested | 10:08 | |
well trying. i split it into 10 pieces to have travis ci do | 10:09 | ||
so hopefully then we can get numbers for every single module | |||
thanks Zoffix for helping me with rotor, it's working beautifully so far | 10:10 | ||
jnthn | Sounds nice | 10:12 | |
samcv | very slow. but. will be a good starting point. and hopefully can get it so all builds complete | 10:13 | |
and then maybe upload the logs or something somewhere, or process them idk | |||
one step at a time | |||
though it seems impossible to install every module within the 50-minute time frame for travis :( though i could hack it and commit a tar.gz to a branch, then trigger another build by committing it | 10:15 |
until they're all actually installed. | |
Zoffix | "...On Rakudo latest I'm seeing..." | 10:22 | |
Sounds like it's a good thing I ensured the download page works right even when there's more than one point release :} | |||
Oh cool. The last 3 releases all happened on the same day of the month (if we assume today's point release is the release for this month) | 10:33 | ||
Geth | rakudo/nom: 41bb79c9ea | (Zoffix Znet)++ (committed using GitHub Web editor) | docs/release_guide.pod List 2017.04.1 in past releases list - Also, fix the date for 2017.04 release |
nine | jnthn: indeed, that's caused by github.com/rakudo/rakudo/commit/d4...9656c28232 :/ | 10:35 | |
jnthn | That very line was changed some days back from .abspath :) | 10:38 | |
nine | Wouldn't have mattered. Fix coming up | ||
Zoffix | \o/ | 10:39 | |
nine | Do we have something like Moose's "handles" for delegating a whole bunch of methods? | ||
Zoffix: that's definitely something for the point release | 10:40 | ||
Zoffix | no problem | ||
nine | TimToady will be happy that he now has company :) | ||
Zoffix | :) | ||
jnthn | nine: yes, it's called handles :) | 10:41 | |
nine | Or...maybe I should just mixin the Callable stuff. That way we stay very close to the original IO::Path object | ||
Ah, no that's not possible as the point of the exercise is to defer creating the path in the first place | 10:42 | ||
Haha, "handles" is excruciatingly well documented ;) docs.perl6.org/language/glossary#i...ry-handles | 10:43 | ||
Zoffix | nine: docs.perl6.org/language/typesystem...it-handles | 10:44 | |
nine | Ok, I will delegate these method calls: Str gist perl absolute is-absolute relative is-relative parts volume dirname basename extension open resolve slurp lines comb split words copy | 10:48 | |
I.e. everything that handles the path name itself or does read access on the file. I better not delegate anything related to permissions or modification of the file. | 10:49 | ||
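A toy version of that delegation using the handles trait discussed above (the class name and the shortened method list are made up):

    class Resource-ish {
        has IO::Path $!path handles <Str absolute relative dirname basename extension open slurp lines>;
        submethod BUILD(:$path) { $!path = $path.IO }
    }
    say Resource-ish.new(path => 'resources/libraries/p5helper').basename;   # p5helper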
Now how can I lazily initialize this attribute? | 10:53 | ||
Zoffix | = do {} ? | 10:54 | |
Ah no | |||
m: use nqp; class Foo { my $def; nqp::bindattr($def, Scalar, '$!whence', {say "inited"}); has $.foo := $def; }.new.foo | 10:57 | ||
camelia | ===SORRY!=== Error while compiling <tmp> Cannot use := to initialize an attribute at <tmp>:1 ------> nce', {say "inited"}); has $.foo := $def⏏; }.new.foo | |
Zoffix shrugs | |||
nine | I guess I have to use a Proxy? | 10:58 | |
Zoffix leaves for work | |||
ping me when it's time to cut the point release | 10:59 | ||
nine | m: class Foo { has $.foo handles <bar>; method BUILD() { $!foo = Proxy.new(FETCH => method () { note "initing foo"; class :: { method bar() { "bar" } }.new }, STORE => method () { die }) } }; my $foo = Foo.new; say $foo.bar; say $foo.foo; | 11:01 | |
camelia | initing foo bar <anon|60446032>.new |
Zoffix | m: use nqp; class Foo { has $.foo is rw; submethod TWEAK { nqp::bindattr($!foo, Scalar, '$!whence', {say "inited"}); } }.new.foo = 42 | ||
camelia | inited | ||
Zoffix | hm, assignment works, but not reading | ||
m: use nqp; class Foo { has $.foo; method foo { $!foo //= do {say "inited"; 42} } }; my $n = Foo.new; say "started"; dd $n.foo; dd $n.foo | 11:02 | ||
camelia | started inited 42 42 |
nine | But methods delegated by "handles" won't go through the accessor | 11:04 |
Zoffix | nine: BTW, I've seen some comments in IO source about not being able to use `handles` in core. So you might want to test it out first | ||
nine | on it | ||
AlexDaniel | hm, interesting | 11:16 | |
c: releases say 42 | |||
committable6 | AlexDaniel, ¦releases (18 commits): «42» | |
AlexDaniel | ā¦ that's not what I meant! | ||
c: releases say rand | |||
committable6 | AlexDaniel, gist.github.com/be6a6a39422634b94f...d94b2a8c8b | 11:17 | |
AlexDaniel | oh, what a good bot! | ||
picked up 2017.04.1 automatically | |||
AlexDaniel pets committable6 | |||
11 seconds… that's getting slow | 11:26 |
.oO( can we stop making releases so that I don't have to fix committable6? :P ) |
11:27 | ||
Zoffix | nine, looking at that error message; perhaps that class should have some name. And we should document its methods, so people would know what they can and can't call on the %?RESOURCES stuff | ||
samcv | ok we're at at 18 failed modules now gist.github.com/samcv/835b0640ef91...771e230b6e | 11:28 | |
i have links for all 18 of them to logs of the failures | |||
for convenience | |||
more often used ones i moved to the top like datetime::utils, datetime::math, list::Utils, then there's several math ones too | 11:29 |
i gotta go to bed. all the travis builds have not completed yet. and looks like i'm going to have to make more than 10 builds to build all modules or maybe calculate dependencies some way to split them up differently... | 11:30 | ||
anddd. why the hell not split it up into 20 builds! travis-ci.org/samcv/rakudo-appimag.../223119055 so will find out when i wake up what happened with that hahahaha | 11:36 | ||
20? why not 100! abuse travis for all it's got! | |||
AlexDaniel | Zoffix: by the way, can I do 2017.06 release? (that is, a release after the next one) I'm just thinking that the current situation with you doing every release doesn't affect the bus factor positively… what do you think? | 11:39 |
samcv | also is it ok to have an issue open at github.com/perl6/ecosystem with failed modules so others can also edit it with updates? | 11:40 | |
discussion etc etc | |||
AlexDaniel | samcv: why not? | ||
samcv | very yes | ||
will open issue | |||
AlexDaniel | I've noticed that if you explicitly say that people are encouraged to edit your post, people will actually do it | 11:41 | |
Zoffix | samcv: the amusing part is you were also the person complaining about travis aborting builds, presumably because it was under heavy load :) | ||
samcv | yeah well. if it does 50 minutes that's fine. it tried hard | ||
but i have seen it abort after like 27 minutes before | |||
but also ;) | 11:42 | ||
if they want they can throttle my builds or whatever :\ i mean | |||
i'm seeing the same modules in like all 10 builds failing. so things must depend on them | 11:44 |
well at least one thing from each part of the alphabet | |||
Zoffix | AlexDaniel: not a fan of the idea TBH. | ||
samcv | i mean it needs to go somewhere. | 11:45 | |
nine | Holy crap. Seems like I've even broken Inline::Perl5's test suite when it's _not_ installed from an RPM package | ||
samcv | :X | ||
AlexDaniel | Zoffix: sure, why? | ||
Zoffix | nine, pretty sure it installed fine with zef when I cut 2017.04.1 | ||
nine | Ok, only when I apply the change that should fix it for packaging. | 11:46 | |
samcv | ok well here we go. github.com/perl6/ecosystem/issues/318 i'm fine to remove it if there's somewhere that it can go instead that people might actually participate in it | 11:48 | |
maybe someday i can scrape git's email addresses and mass email the most recent committers :X | 11:49 |
might be annoying. haha. probably not gonna resort to that yet | 11:50 |
o/ night | 11:56 | ||
Zoffix | AlexDaniel: sort of an "ain't broke, don't fix it" thing. I got into a routine with releases; it works. Not a single release issue in 9 months. So why change it? There is no bus factor, as the entire process is documented step by step. So when I die, the exact same procedure would need to be followed as right now for you to learn to release. | 11:57 |
+ we've been using the same key to sign releases, now it'd have to be different? Since I can't give you the passphrase for mine; or was the plan to have me power on the VM and kick the bot off safemode and have you just tell it to cut? | 12:03 |
+ it'd have to be a bot doing it with my github account, 'cause you don't have rakudo's commit bit | 12:04 | ||
+ I've seen some poor attention to detail in releases of other projects; if you happen to be the same type of person, it'd annoy me to have poor attention to detail in rakudo's releases | 12:05 | ||
+ I don't like change :) | |||
+ I don't like inconsistency | 12:07 | ||
+ I need to add stuff to perl6.fail to make changelog generation more automatic | |||
Interesting... GCE still tells me not enough resources in zone to power on a 24-core box. First time it's like that during day hours :o | 12:21 | ||
And what's really annoying: fine, there aren't any resources, I'll try again later... But it *deletes* the VM I've just created because it couldn't power it on after creation. | 12:22 | ||
nine | Passing a Callable to the "is native" trait is a great way to defer the creation of the path to the shared lib till runtime. It just has one drawback: NativeCall's guess_library_name will use the result of calling this object as-is and will not apply the $*VM.platform-library-name transformation. | 12:26 | |
Zoffix | Guess a good time as any to try and find a zone that lets me poweron a 64-core VM... | 12:27 | |
nine | I.e. resources/libraries/p5helper will stay as-is and not be transformed into resources/libraries/libp5helper.so (on Linux) | ||
Zoffix | ZofBot: I want a less-than-minute time for stresstest! | ||
ZofBot | Zoffix, zip | ||
nine | Now I've got 3 options: 1. change guess_library_name to do the transformation - thereby robbing the user of this escape hatch, 2. have the Callable do that transformation itself, thereby tying %?RESOURCES closer to NativeCall's needs or 3. do the transformation in CompUnit::Repository::Filesystem::resource | 12:29 | |
2 has the disadvantage of having to guess if the transformation is necessary (it isn't for Installation repositories for example), same as guess_library_name does right now. | 12:30 | ||
3 may break other existing code as it really changes the result of %?RESOURCES<libraries/foo> | |||
jnthn | maybe 4) Introduce a named type Resource or some such that is returned from %?RESOURCES lookups, and add a candidate for that in NativeCall's native trait? | ||
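Roughly the shape of that suggestion, as a sketch rather than NativeCall's real code (the type and sub names here are invented; the actual trait is `is native`):

    class ResourceRef {
        has Str $.key is required;                # e.g. 'libraries/p5helper'
        method library-name {
            # assumed: $*VM.platform-library-name maps 'p5helper' to 'libp5helper.so' on Linux
            $*VM.platform-library-name($!key.IO.basename.IO).Str
        }
    }
    multi sub resolve-lib(Str $name)        { $name }               # plain strings pass through unchanged
    multi sub resolve-lib(ResourceRef $res) { $res.library-name }   # the dedicated type gets the transform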
nine | jnthn: aaah, that makes so much sense it almost hurts :) | 12:33 | |
I guess "Distribution::Resource" is the obvious name | 12:35 | ||
jnthn | Seems reasonable, yeah | 12:36 | |
So after I thought about it, yeah, I *think* it's easy... | 12:37 | ||
oops, wrong window | 12:38 | ||
nine | This also opens the door for moving the platform_library_name transformation into a repo-specific subclass of Distribution::Resource as it's only needed when the resource is loaded from a FileSystem repo. | 12:43 | |
Zoffix | . | 12:45 | |
yoleaux | 12:42Z <El_Che> Zoffix: Thx, I'll do that. I was just starting to create the pkgs, but indeed better to wait | |
Geth | roast: cd94122a9a | usev6++ | S05-modifier/ignorecase.t [JVM] Re-fudge some tests for ligatures matching |
12:46 | |
Zoffix | $ lscpu | grep 'CPU(s)' | 12:52 | |
CPU(s): 64 | |||
I did it :) | |||
Had to use "increase quota" form, which apparently is handled by robot, because I got the increase right away | 12:53 | ||
"Stage parse : 53.006" lowest yet | 12:55 | ||
A'right. Time to set some records \o/ | |||
Close, but no cigar: Files=1241, Tests=133745, 77 wallclock secs (21.38 usr 3.66 sys + 2072.16 cusr 186.73 csys = 2283.93 CPU) | 12:57 | ||
m: say 111/77 | |||
camelia | 1.441558 | ||
Zoffix | 1.5x faster than 24-core box | ||
That's with TEST_JOBS=70 | 12:58 | ||
Google promised 128-core boxes later this year... So there's still hope for a sub-minute stresstest :) | 12:59 |
nine | Unless there are test files which take longer than a minute by themselves? | 13:00 |
Zoffix | Oh.. right. And there probably are | |
harness6 result: Files=1241, Tests=133745, 137 wallclock secs | |||
Well, this was good 3 minutes of fun :) | 13:01 | ||
MasterDuke_ | i just did a parse on AlexDaniel's whateverable server in 51s | ||
i would have thought the GCE could do it even faster | |||
Zoffix | Mine is with 2.2GHz cpus | 13:02 | |
And parse isn't affected much, if at all, by number of cores. | |
MasterDuke_ | yeah, i just thought the individual cores would be a bit faster than that | ||
his server has 3.4GHz cores | 13:03 | ||
nine: you'll have to let us know the relevant numbers when you get your ryzen up and running | 13:04 | ||
nine | MasterDuke_: will do :) I've now ordered the cooler from another merchant. Today I finally got an email where the one I ordered at initially admitted to not knowing when it can be delivered. While at the same time still claiming a 2 day delivery time on geizhals.at, just like they did a week ago. | 13:07 | |
MasterDuke_ | you didn't want to use the stock amd one? i thought they got a serious upgrade a year or two ago? | 13:09 | |
nine | The 1800X doesn't come with a stock cooler. Also I just want the best (thus most silent) cooler I can get :) | 13:11 | |
MasterDuke_ | huh, didn't realize that | ||
and yes, silent is good. built a watercooling setup for the athlon 64 3400 i won from amd back in college and loved that i couldn't hear a thing | 13:13 | ||
timotimo | the profiling job i let run over night errored out with a dumb error from me not being careful enough about nqp ... | 13:14 | |
nine | Oh how I hate the waiting. Ever since I decided to go for a really silent system, I notice the noise of my current one much more :) | 13:15 | |
timotimo | but it helped me figure out that the majority of time is spent in the graph node preparation step | ||
maybe i can find out how to make it nom less memory | |||
MasterDuke_ | post_process_call_graph_node? | ||
timotimo | yup | 13:16 | |
i wonder how big the id remap hash gets | 13:17 | ||
MasterDuke_ | line 122 could be moved down to 133, right? | 13:18 | |
for a micro-optimization | |||
timotimo | ah, indeed, that's a good idea | 13:19 | |
do you know if $node<id> is an int? | |||
because if it is, we should perhaps use a native int array instead of a hash for the remap | |||
MasterDuke_ | pretty sure all the ids are ints | ||
timotimo | those are a thousand times more efficient when it comes to gc in memory-constrained environments | ||
hm, except we stringify the $new-id-counter into $node<id> | 13:20 | ||
MasterDuke_ | line 144. how good is the gc about throwing away newly created variables? | ||
timotimo | it's very good | 13:21 | |
Geth | rakudo/nom: f4f1c42048 | (Stefan Seifert)++ | src/core/Distribution.pm: Support all appropriate IO::Path methods on Distribution::Resources. %?RESOURCES<foo> used to return IO::Path objects. People started to depend on that behavior, which of course now fails as the object is no longer an IO::Path but something that defers the creation of the path as long as possible. Delegate calls to all of IO::Path's useful methods (those dealing with the path name itself and with read access) to the result of the deferred path assembly. |
rakudo/nom: 647abfea2d | (Stefan Seifert)++ | 2 files: Improve relations between %?RESOURCES and the Native trait. %?RESOURCES<foo> now returns a Distribution::Resource object which the Native trait will know how to deal with. This fixes the situation where one could either have automatic shared library name transformation by NativeCall _or_ deferred path setup of resources installed via the Staging repo. ... (5 more lines) |
MasterDuke_ | `my $shared_data := nqp::hash(...); $id_to_thing{$node<id>} := $shared_data;`. would it be any better as `$id_to_thing{$node<id>} := nqp::hash(...)`? | ||
nine | jnthn: many thanks for pointing out the issue and the help with the fix :) | 13:22 | |
timotimo | at most a small difference | 13:23 | |
jnthn | nine++ # yay, that should unbust my $dayjob app :) | ||
Zoffix is reminded | 13:24 | ||
nine | As that part of the commit message was cut, the recommended way to access functions in bundled native libraries is now: | ||
my constant $helper = %?RESOURCES<libraries/helper>; sub foo() is native($helper) { ... } | 13:25 | ||
Zoffix | m: dd IO::Spec::Win32.is-absolute: '/' | ||
camelia | Bool::True | ||
Zoffix | Do we keep that ^ as true? | ||
nine | Older suggestions like %?RESOURCES<libraries/helper>.Str or %?RESOURCES<libraries/helper>.absolute will still work (but not with the Staging repo) | ||
timotimo | MasterDuke_: wait ... are those ids high numbers? | 13:26 | |
i think they are ... | |||
MasterDuke_ | timotimo: line 132: how expensive is try? does that ever not succeed? | ||
timotimo: i don't think so. don't they just start at 1 and count up? | 13:27 | ||
timotimo | that's why i have a hash ... | ||
try is very cheap when it doesn't get hit | |||
no, those ids only start at 1 and count up because we have the id remap | |||
if i hadn't somehow broken the compile i'd just quickly toss a debug print in there ... | 13:28 | ||
i think moar internally uses memory addresses for the ids | 13:29 | ||
because those are, conveniently, unique | |||
MasterDuke_ | in a profile of a rakudo compile from feb the max id in any of the tables is 5487 | ||
ah. too big for ints? | |||
timotimo | too big for "int", but they fit into int64 | 13:30 | |
except on 32bit where int is int32 and pointer is also int32 | |||
just a quick debug output looks a whole lot like this: | 13:31 | ||
id was 28719360 | |||
id was 41029576 | |||
so at least i can div by 8 to get the sizes down, but it's still huge | |||
if i want to replace the hash with an array, it'll become the size of the ram you have, basically :P | 13:32 | ||
now if i put in a quick pass first that finds the lowest id, that'd help a whole lot | 13:33 | ||
robertle | you could rely on alignment a bit ;) | ||
timotimo | yeah, that's what i meant by divide by 8 | ||
Zoffix | nine: should I cut the release now or are there more commits coming? Any tests for the bug that was fixed? | |
timotimo | alternatively i'll build id_remap out of two lists that i keep in sync; one has all the befores, the other has all the afters, and i keep it sorted by inserting into the right positions | 13:34 | |
MasterDuke_ | yeah, was thinking about the same thing | 13:35 | |
timotimo | hm, but if the max id is 5487, that is unlikely to be the culprit at all :| | ||
so maybe what we should try instead is to make the post_process_call_graph_node function iterative instead of recursive | |||
MasterDuke_ | annoying, but if it's much faster/uses less ram... | ||
nine | Zoffix: the 1M$ question is if my commit broke the Staging repo stuff again. Now that I think of it, this could very well be. The fix would be trivial, but I have to go home to be able to test it. | 13:37 | |
Zoffix | nine: OK, well. I'm not in a rush. Test it when you can and let me know when to release. | ||
timotimo | should be relatively easy, no? | 13:40 | |
nine | Just had another look and I guess it's fine, as I do not overwrite $!IO in the Proxy's FETCH. So it should run the code for every access. But a test would be good anyway. | ||
Zoffix | [Coke]: recall in July, 2016 you said when making point releases, they should only include the bug fix and not whatever commits happened since previous release. | 13:49 | |
[Coke]: how to do that? Specifically, how would the tag work? | |
timotimo | create a branch for the point release | 13:58 | |
and when only the interesting commits have been cherry-picked into that branch | |
tag the tip of that branch as the point release | |||
Zoffix | And then what? | 13:59 | |
jnthn | Why does there need to be something else? :) | 14:00 | |
Well, delete the branch I guess :) | |||
Zoffix | So there'll always be a branch hanging out? | ||
jnthn | Since the tag records what was released | |
Zoffix | Um, but what will happen to the tag then? And there won't be a path from that tag in master | ||
jnthn | That's normal enough, though? | 14:01 | |
timotimo | i'd assume other projects do it exactly like that | ||
Zoffix | Is it? Last august that ruined NQP | |
timotimo | you're allowed to delete the branch because the tag keeps all the commits alive | ||
nine | I guess in the current case we want all commits anyway | 14:02 | |
timotimo | hm, what were the details? i don't remember | ||
[Coke] | The issue was that some things are incorrectly expecting a simple linear commit history for rakudo, which is borked. we should fix whatever broke when we did that. | 14:04 | |
... but knowing that something like that is out there, yes, be cautious. | |||
timotimo | good idea. though maybe not now for the release :) | ||
like, make a faux not-really-release in between months just to shake that out | 14:05 |
Zoffix | timotimo: details of august breakage? I tagged the dist, then git pull --rebased a recent commit on top of that and pushed. And the tag wasn't in the path for the commits in master | ||
[Coke] | right. release tags don't have to be on master/nom | 14:06 | |
what was expecting it to be? | |||
Zoffix | [Coke]: git describe | 14:07 | |
Zoffix just tested out the "branch tag delete" method | 14:09 | ||
The tag stays, but yeah, git describe doesn't see it | |||
The tag with the commits from released branch | |||
s/released/deleted/; | |||
Geth | rakudo/merge-post-2017.04.2-release: c6fd7361d2 | (Zoffix Znet)++ | src/core/IO/Spec/Win32.pm: [io grant] Make IO::Spec::Win32.is-absolute about 63x faster. - Use NQP ops instead of regexes. - Toss UNC path check; if we didn't match the leading slash test, we won't match a UNC path anyway. |
14:12 | |
timotimo | oh, i get it | ||
Zoffix | Well, if you commit non-release stuff before 2017.04.2, please try to use ^ that branch ( merge-post-2017.04.2-release ) instead of nom. I don't wanna go down the "this should work, probably" road with the tags unless we really have to. | 14:13 |
timotimo | you don't get 2017.01.1 in the versions because the point release is on a different branch entirely | ||
which is "correct", of course | |||
(the best kind of correct) | |||
Zoffix | ^ that commit should give some sort of gravity boost on Windows. I see .is-absolute used in a bunch of places, like .absolute, .dir, .chdir, .slurp, .open. In fact, I think any method that actually accesses the file + some others | 14:16 | |
jnthn | Zoffix: ooh, sounds nice :) | 14:17 | |
MasterDuke_ | bisectable6 has gotten confused before with weird history | 14:18 | |
but not sure how much we should take that into account for rakudo releases | 14:19 | ||
Zoffix | Ah, no, only .dir, .chdir, and .parent. But never fear: all the read methods use another method that should get a sizeable boost as well some time later today. For absolute paths it should be in the same 60x range. | 14:23 | |
MasterDuke_ | timotimo: any improvements so far? | 14:27 | |
timotimo | no, i ran a profile-compile again in the background and it got OOM'd even though i have lots of swap remaining | 14:28 |
not quite sure why that'd happen | |||
MasterDuke_ | maybe need to come up with some script that creates a large profile, but doesn't take *quite* as much time/ram as profiling a rakudo compile | 14:30 | |
timotimo | heh. | ||
well, i got one tiny piece of information, i think | 14:31 | ||
it did get its memory growth before the post-process step started | |||
so perhaps the problem is already there inside the moarvm code that creates the profile datastructures in the first place | |||
MasterDuke_ | huh. did the post-process make it any worse? | ||
Zoffix | Actually no, not today. I was planning to have an evening off today :} | 14:32 | |
Tomorrow! :) | |||
timotimo | oh. i think i actually threw out the debug output that would actually have told me that | 14:35 | |
no, i do have a print for whenever it puts a value into the id remap | |||
and that didn't get hit | |||
so it will have crashed before it reached the post processing step | |||
it looks like zef was using new-from-absolute-path | 14:39 | ||
oh | 14:40 | ||
no it wasn't | |||
Zoffix | It was until recently | ||
github.com/ugexe/zef/issues/149 | |||
Fixed 10 days ago | |||
timotimo | yeah but i just pulled it and it won't install | ||
Zoffix | :( | 14:41 | |
timotimo | it seems to call .IO on something that's already an IO::Path or something? | ||
hold on ... | |||
d'oh :) | |||
i was using the system-wide installed zef to try to install zef | |||
of course it was still using the old code | |||
Zoffix | Calling .IO on IO::Path is identity | 14:42 | |
m: dd [ ".".IO.WHAT.IO, ".".IO.IO ] | |||
camelia | [IO::Path, ".".IO(:SPEC(IO::Spec::Unix),:CWD("/home/camelia"))] | ||
timotimo | right | ||
the problem was i was looking at a newer source file than it was running | |||
Zoffix | Ah | 14:43 | |
woooow | |||
Spec::Win32.rel2abs is slow as hell. I had to drop down my bench to 500 iteration and it still took 14 seconds | 14:44 | ||
Oh, 2500 iterations, but still that's slow AF | |||
ZofBot: time to fix that! | |||
ZofBot | Zoffix, in(10); my $timecalc = Promise | ||
timotimo | <3 | 14:45 | |
TimToady | I had to reinstall zef last night to get it to work | 14:53 | |
yoleaux | 10:03Z <Zoffix> TimToady: point release cut. We also discovered a bug on our downloads page that didn't sort point releases correctly. So this was all beneficial :) | ||
TimToady | and no, I couldn't use the old zef to do it... | ||
nine | That somehow doesn't sound like the backwards compatibility story we told after the 6.c release :/ | 14:55 | |
Zoffix | It is though. I didn't have to modify any of 6.c tests to remove .abspath and .new-from-absolute-path | 14:56 | |
And .new-from-absolute-path wasn't even documented. | 14:57 | ||
Just like zef uses Rakudo::Internals.from-json; if we nix it, the same breakage will occur. | |||
TimToady | we did explicitly say "if it's not tested, it's not guaranteed" | 14:59 | |
Zoffix | ZofBot: but then users happened | 15:00 | |
ZofBot | Zoffix, 123 # 111111111111111111111111111111111111111111111 | ||
TimToady | ZofBot: Why can't you ever think of anything original to say? | 15:01 | |
ZofBot | TimToady, alternatively i'll build id_remap out of two lists that i keep in sync; one has all the befores, the other has all the afters, and i keep it sorted by inserting into the right positions | ||
Zoffix | Would be sweet if the "Back" button worked in the profiler when looking through CallGraph -> Callees | 15:03 | |
Zoffix looks at source | 15:04 | ||
Oh, some sort of angular stuff : | |||
Or an ".." link to go up a level, so you could actually nagivgate teh callee tree instead of restarting from scratch | 15:05 | ||
jnthn | Zoffix: When I originally did it there was a breadcrumb trail at the top | ||
That let you go back to recent frames in the chain | 15:06 | ||
Zoffix | Oh, awesome. yeah, it's still there. It's just at the top and I didn't see it :) | ||
jnthn | That UI is like, the first *and* last thing I wrote in Angular JS :) | 15:08 | |
timotimo | the breadcrumb trail exists but it acts really strange when you skip multiple levels | 15:15 | |
and i managed to somehow break labels in most of the boxes in the call graph >_< | 15:16 | ||
nine | Darn....back to getting the wrong path for the shared lib again. Will have to look into this later on :/ | 15:29 | |
Zoffix | Note to self: /Q'\'/ is not the same as /「\」/ | 15:30 |
samcv++ # updating highlighter to latest fixed my Q'\' breakage \o/ | 15:37 |
What's the right way to use that anyway? | 15:44 | ||
I mean.. how to use Texas Q// in regex | |
m: say 'foo' ~~ /{Q'fo'}/ | |||
camelia | ===SORRY!=== Error while compiling <tmp> Two terms in a row at <tmp>:1 ------> say 'foo' ~~ /{Q'fo⏏'}/ expecting any of: infix infix stopper statement end statement modifier sta… | |
Zoffix | TTIAR? | ||
m: say 'foo' ~~ /{Q/fo/}/ | 15:45 | ||
camelia | 「」 | |
Zoffix | dafuq | ||
m: say 'foo' ~~ /<{Q/fo/}>/ | |||
camelia | 「fo」 | |
Zoffix | m: say 'foo' ~~ /<{Q'fo'}>/ | ||
camelia | ===SORRY!=== Error while compiling <tmp> Two terms in a row at <tmp>:1 ------> say 'foo' ~~ /<{Q'fo⏏'}>/ expecting any of: infix infix stopper statement end statement modifier s… | |
Zoffix | bug | ||
m: say 'foo' ~~ /<{Q/.+/}>/ | 15:46 | ||
camelia | 「foo」 | |
Zoffix | m: say 'f.+o' ~~ /"{Q/f.+/}"/ | ||
camelia | 「f.+」 | |
Zoffix | aha \o/ | ||
m: say 'f.+o' ~~ /"{Q'f.+'}"/ | 15:47 | ||
camelia | ===SORRY!=== Error while compiling <tmp> Unable to parse expression in single quotes; couldn't find final "'" at <tmp>:1 ------> say 'f.+o' ~~ /"{Q'f.+'}"/⏏<EOL> expecting any of: dotty method or postfix si… | |
Zoffix | It's tripping on ' quotes there too tho | ||
timotimo | MasterDuke_: i'll put some telemetry pings into the call graph creation code inside moarvm to see what its behavior is like | 15:49 | |
TimToady | ZofBot: Q'fo is a valid ident | 16:12 | |
ZofBot | TimToady, , are aliases into the $/ object | ||
TimToady | m: say 'f.+o' ~~ /"{Q 'f.+'}"/ | 16:13 | |
camelia | 「f.+」 | |
TimToady | m: say 'f.+o' ~~ /"{Qāf.+ā}"/ | 16:14 | |
camelia | 「f.+」 | 16:15 |
timotimo | we can give a "did you mean" error here | 16:16 | |
i'm surprised we haven't stumbled over this a thousand times, though? | 16:17 | ||
m: say qq'blah' | |||
camelia | ===SORRY!=== Error while compiling <tmp> Two terms in a row at <tmp>:1 ------> say qq'blah⏏' expecting any of: infix infix stopper postfix statement end statement modifier … | |
||
timotimo | i suppose when you're already using Q and friends, you wouldn't use '? | ||
Zoffix | m: say Q'blah' | 16:18 | |
camelia | ===SORRY!=== Error while compiling <tmp> Two terms in a row at <tmp>:1 ------> say Q'blah⏏' expecting any of: infix infix stopper postfix statement end statement modifier … | |
Zoffix | Ok, definitely a regression, because that used to work | 16:19 | |
bisect: Q'blah' | |||
c: 2017.03 Q'blah' | |||
committable6 | Zoffix, gist.github.com/af15992e4d4e449ad7...e76a56c146 | ||
Zoffix | c: all say Q'blah' | 16:20 | |
... | |||
TimToady | it *shouldn't* work | ||
Zoffix | hurry up, robot! | ||
TimToady: really? | 16:21 | ||
star: say Q'blah' | |||
camelia | ===SORRY!=== Error while compiling <tmp> Two terms in a row at <tmp>:1 ------> say Q'blah⏏' expecting any of: infix infix stopper postfix statement end statement modifier … | |
Zoffix | OK | ||
Maybe I'm misremembering | |||
TimToady | m: my \Q'blah = 42; say Q'blah | ||
camelia | 42 | ||
Zoffix | Ahhh | ||
OOHHH | |||
TimToady | as timotimo points out, we can have a better message though | ||
Zoffix | star: say Q 'blah' | 16:22 | |
camelia | blah | ||
Zoffix | Right, THAT used to work :D | ||
TimToady | still duz | ||
Zoffix | yeah :) | ||
timotimo | now i have a segfault in gdb open where i can't figure out what did it ... | ||
aha! | 16:29 | ||
the call stack is too deep | |||
i mean ... that could be it | 16:30 | ||
the instruction it crashes on is a callq | |||
Zoffix | Hm.. 40 minutes of rewriting a routine in nqp = 8% speed gain... | 16:41 |
A few weeks of this effort and rakudo will read the files before you even know you wanted to read them :P | 16:42 | ||
Time to learn to make non-nqp fast. | 16:43 | ||
Where do I start? | |||
nine | m: class Foo { has $.IO; method BUILD() { $!IO = Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }) } }; my $foo = Foo.new; $foo.IO; $foo.IO; | 16:45 | |
camelia | FETCH | ||
nine | Why does this FETCH only once?! | ||
Zoffix | need to bind the proxy | 16:47 | |
m: class Foo { has $.IO; method BUILD() { $!IO := Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }) } }; my $foo = Foo.new; $foo.IO; $foo.IO; | |||
camelia | FETCH FETCH FETCH |
nine | oh, ok, thanks! | ||
Zoffix | Why does it fetch it 3 times? | ||
nine | m: class Foo { has $.IO; method BUILD() { $!IO := Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }) }; self }; my $foo = Foo.new; $foo.IO; $foo.IO; | 16:48 | |
camelia | ===SORRY!=== Error while compiling <tmp> 'self' used where no object is available at <tmp>:1 ------> o" }, STORE => method ($new) { die }) };⏏ self }; my $foo = Foo.new; $foo.IO; $fo expecting any of: term | |
nine | m: class Foo { has $.IO; method BUILD() { $!IO := Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }) }; 1 }; my $foo = Foo.new; $foo.IO; $foo.IO; | ||
camelia | WARNINGS for <tmp>: Useless use of constant integer 1 in sink context (line 1) FETCH FETCH FETCH |
nine | no idea then | ||
Zoffix | oh, doh. Yeah, that's it | 16:49 | |
m: class Foo { has $.IO; method BUILD() { $!IO := Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }); 42 } }; my $foo = Foo.new; | |||
camelia | ( no output ) | ||
Zoffix | m: class Foo { has $.IO; method BUILD() { $!IO := Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }); 42 } }; my $foo = Foo.new; Foo.IO | ||
camelia | Cannot look up attributes in a Foo type object in block <unit> at <tmp> line 1 |
||
Zoffix | m: class Foo { has $.IO; method BUILD() { $!IO := Proxy.new(FETCH => method () { note "FETCH"; "foo" }, STORE => method ($new) { die }); 42 } }; my $foo = Foo.new; $foo.IO | 16:50 | |
camelia | FETCH | ||
Zoffix | cool | ||
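(Aside, a minimal sketch of the assignment-vs-binding point made above; the class and attribute names here are invented. Assignment copies the FETCHed value into the attribute's Scalar once during BUILD; binding with := leaves the Proxy in place, so every later read goes through FETCH, possibly more than once per access.)

    class Lazy {
        has $.val;
        submethod BUILD() {
            # with plain assignment (=) FETCH would fire once here and never again
            $!val := Proxy.new(
                FETCH => method ()     { note 'FETCH'; 'payload' },
                STORE => method ($new) { die 'read-only' },
            );
        }
    }
    my $obj = Lazy.new;
    say $obj.val;    # notes FETCH at least once, prints "payload"
    say $obj.val;    # FETCHes again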
nine | Now with the proxy bound it ends with: Cannot invoke this object (REPR: Null; VMNull) | ||
--ll-exception does not actually give me a backtrace there | 16:52 | ||
Maybe I'll have to write 20 delegation methods manually after all | 16:57 | ||
I'd guess that it's the FETCH closure that doesn't survive serialization. | 16:58 | ||
TimToady | .oO(serial killer?) |
16:59 | |
Zoffix | :D | 17:00 | |
nine | but dinner first | 17:01 | |
MasterDuke_ | timotimo: get anything from the telemetry? | 17:02 | |
timotimo | MasterDuke_: it's not reached the point yet | 17:03 | |
now i'm no longer at my desktop, so i'll have to ssh in to see what's up | |||
MasterDuke_ | at one point i had some code that very quickly produced a too-big profile, but i don't remember what it was | 17:04 | |
timotimo | ackermann should do it. alternatively, i'd think recursive fibonacci should also do it | ||
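(Aside: a sketch of the kind of throwaway stress code being suggested; the argument sizes are guesses, and the exact profiler invocation varies by Rakudo version.)

    # Naive doubly-recursive Fibonacci branches on every call,
    # so the call graph, and hence the profile, grows quickly.
    sub fib(Int $n) { $n < 2 ?? $n !! fib($n - 1) + fib($n - 2) }
    say fib(28);

    # Ackermann grows far faster still; keep the arguments tiny.
    sub ack(Int $m, Int $n) {
        $m == 0 ?? $n + 1
                !! $n == 0 ?? ack($m - 1, 1)
                           !! ack($m - 1, ack($m, $n - 1))
    }
    say ack(2, 3);

    # run under the profiler, e.g.:  perl6 --profile -e '...'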
MasterDuke_ | i wonder if it was something with .combinations | 17:05 | |
Zoffix | What do you think of readability of rel2abs2 vs rel2abs3 here? gist.github.com/zoffixznet/1121ae5...2c91848f06 | ||
timotimo | didn't we have a recursion-free combinations routine? | ||
desktop will most likely freeze in a minute or two | |||
Zoffix | It only brings like 10% improvement... The real slowage is further down the pipeline. This method is called anytime a file on disc is accessed | ||
Wondering if the mild perf boost is worth the poorer readability | 17:06 | ||
MasterDuke_ | timotimo: we do now, this may have been before that | ||
Zoffix | ( I'll still need to rewrite ($path ~~ /^ <$UNCpath>/ in nqp, so there'd be another small nqp chunklet there) | ||
MasterDuke_ | Zoffix: do you know the slowest part of the non-nqp version? is there a hot part so you could re-write just a bit in nqp? | 17:08 | |
timotimo | gist.github.com/timo/1dab05a808111...4adff39e87 - MasterDuke_, something similar to the function down below would be what we do in post-processing | ||
well, that function is a bit wordy because of the whole C thing; in nqp-land we'd just be using a list | |||
MasterDuke_ | isn't malloc always on the heap? | 17:09 | |
Zoffix | MasterDuke_: good plan | 17:10 |
MasterDuke_ | Zoffix: but i wouldn't say the rel2abs3 is all that bad | ||
timotimo | malloc is on the heap, but the structure i put there is on the stack | 17:12 | |
MasterDuke_ | ah | 17:13 | |
recursive fibonacci of 30k only produced a 15mb profile, and up to 10k it runs pretty quickly, faster than i expected | 17:18 | ||
timotimo | interesting | 17:36 | |
MasterDuke_ | oops, i lied. recursive factorial is very fast. recursive fibonacci is not | ||
bit of a difference | 17:37 | ||
timotimo | ah, well :) | 17:38 | |
factorial doesn't branch | |||
DrForr | Ackermann FTW :) | 17:41 | |
(though there are many more fun hyperexponentials out there) | 17:42 | ||
MasterDuke_ | can do fib(35), but fib(40) was taking a long time so i killed it. but it only produced a 100k profile | 17:44 | |
fib(37) produced a 25k profile | |||
*250k | 17:46 | ||
Geth | rakudo/nom: 4a560aa746 | (Stefan Seifert)++ | src/core/Distribution.pm Fix "Cannot invoke this object (REPR: Null; VMNull)" A packaged Inline::Perl5 will throw the error about VMNull at runtime when trying to access the Proxy's FETCH method. Presumably the method closure did not survive serialization. Use a less elegant, but ultimately working approach instead. |
17:48 | |
nine | Is there a way to run a spec test using an installed perl6 instead of from the source directory? | 17:50 | |
MasterDuke_ | something with the --cmd option for fudgeandrun perhaps? | ||
*--impl-cmd | 17:53 | ||
nine | Ah, symlink the t directory in an otherwise empty directory and modify line 39 in t/harness5 ;) | 17:55 | |
Zoffix: I'm seeing t/spec/S16-filehandles/filestat.t ................................. Dubious, test returned 1 (wstat 256, 0x100) Failed 1/11 subtests | 17:58 | ||
Zoffix: otherwise we're go for point release from me :) | 17:59 | ||
Zoffix: the failing test in filestat.t is 'IO.accessed should be updated when file content changes' and is kinda wrong as it just won't work on a file system mounted with noatime | 18:00 | ||
timotimo | how do we figure that out? | 18:01 | |
nine | timotimo: probably checking if system tools detect the change and if they don't, skip the test | ||
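(Aside: a sketch of that skip idea, using IO.accessed itself as the probe rather than an external system tool; the probe file name and the 2-second sleep are assumptions, not from the discussion.)

    use Test;
    plan 1;
    my $probe = 'atime-probe'.IO;
    $probe.spurt: 'x';
    my $before = $probe.accessed;
    sleep 2;                    # FAT/VFAT atime granularity can be ~2 seconds
    $probe.slurp;
    if $probe.accessed > $before {
        ok True, 'IO.accessed updates when the file is read';
    }
    else {
        skip 'filesystem appears to be mounted noatime', 1;
    }
    $probe.unlink;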
Geth | star: 5398118aa1 | (Steve Mynott)++ | 2 files bump versions to 2017.04 and 04.1 for rakudo |
18:02 | |
Zoffix | stmuk: that's the wrong version. It'll be 2017.04.2 | 18:03 | |
nine: OK. I thought noatime would still update it on 1st access after write | |||
stmuk | ah | ||
timotimo | that's not a bad idea, except we have to have a system tool for every platform where that can happen :D | ||
geekosaur | Zoffix, there's both soft atime and hard noatime, and with the latter you won't ever see an updated time | 18:04 |
nine | timotimo: is there an atime on every platform? | ||
nine just removed the noatime mount option from his fstab | 18:05 | ||
timotimo | no clue :) | ||
geekosaur | atime is usually present although its resolution varies (e.g. its granularity is 2 seconds on FAT/VFAT) | ||
stmuk | relatime is best for linux | ||
nine | I guess since I upgraded to an Intel SSD 750 400GB NVMe device, atime just won't make any difference anymore :) | ||
Geth | roast: 715f602205 | (Zoffix Znet)++ (committed using GitHub Web editor) | S16-filehandles/filestat.t skip-fudge IO::Path.accessed test - Needs more through for noatime systems - Test was added 10 days ago and isn't part of 6.c-errata |
18:08 | |
Zoffix | Alright, I'll start cutting the 2017.04.2 | ||
s/through/thought/through | |||
bah | |||
.ask [Coke] I'm kinda way past the original grant due date, but I have a question: should I (a) complete the grant by end of April, making my final report on April 30th; or (b) throw in a bunch of performance improvements as part of the grant work and complete it mid-May. I'll make a short report this week and final and complete report mid-May | 18:14 | ||
yoleaux | Zoffix: I'll pass your message to [Coke]. | ||
MasterDuke_ | timotimo: 'use MONKEY-SEE-NO-EVAL; for ^10 { my &fs = EVAL q|sub f\qq[$_]($n) { if $n < 2 { return 1 } else { return $n * f\qq[$_]($n - 1) } }|; say &fs(30_000).chars };' produces a 150mb profile that the qt viewer can't open | 18:19 | |
Malformed input file, top level isn't an array: "too deeply nested document" | |||
timotimo | OK | 18:20 | |
... too deeply? | |||
that's distinct from "too large", right? | |||
MasterDuke_ | think so | 18:21 | |
geekosaur | yes, that's a stack (not necessarily machine stack) issue | ||
timotimo | it's machine stack if we have a json machine | ||
MasterDuke_ | but we don't necessarily need a profile that's actually too large right, just one that's pretty large and generates quickly? | 18:26 | |
[Coke] | . | 18:29 | |
yoleaux | 18:14Z <Zoffix> [Coke]: I'm kinda way past the original grant due date, but I have a question: should I (a) complete the grant by end of April, making my final report on April 30th; or (b) throw in a bunch of performance improvements as part of the grant work and complete it mid-May. I'll make a short report this week and final and complete report mid-May | ||
Zoffix | nine: is this output from Inline::Perl5 normal? something about leaked scalars in t/use.t gist.github.com/zoffixznet/e17ff7b...4261824f2a | ||
[Coke] | late completion is fine; missing reporting progress is not. | ||
nine | Zoffix: yes :/ But maybe now that the packaging stuff works I'll find the time to track that down. | 18:30 | |
[Coke] | Either way is fine; if you want to throw extra stuff in as part of the grant report, I am not going to stop you. :) | ||
Zoffix | [Coke]: OK. I'll have the report for April tomorrow \o/ | ||
:D | |||
[Coke] | I can take .md for republishing. | ||
Zoffix | OK | 18:31 | |
MasterDuke_ | timotimo: 'use MONKEY-SEE-NO-EVAL; for ^500 { my &fs = EVAL q|sub f\qq[$_]($n) { if $n < 2 { return 1 } else { return $n * f\qq[$_]($n - 1) } }|; &fs(500) };' produces a 127mb profile that's "too large document" | 18:38 | |
Geth | rakudo/nom: f28044b8f7 | (Zoffix Znet)++ | docs/ChangeLog Log 2017.04.2 changes |
18:41 | |
rakudo/nom: 85da85002d | (Zoffix Znet)++ | docs/announce/2017.04.2.md Add 2017.04.2 Release Announcement |
18:43 | ||
MasterDuke_ | on my machine, the whole thing took 73s. of that, post_process_call_graph_node() took 14.0655379295349 and to_json() took 43.3250727653503 | 18:53 | |
AlexDaniel | hhhhhmmmm | 18:55 | |
yoleaux | 18:39Z <Zoffix> AlexDaniel: re cutting releases: so, are we cool or is my reasoning unreasonable? | ||
AlexDaniel | where's committable | 18:56 | |
c: 647abfea2d9 say 42 | 18:57 | ||
:| | |||
c: 647abfea2d9 say 42 | 18:58 | ||
committable6 | AlexDaniel, ¦647abfe: «No build for this commit» | ||
AlexDaniel | ok that's better | ||
c: HEAD 42 | |||
:| | 18:59 | ||
c: HEAD 42 | |||
committable6 | AlexDaniel, ¦HEAD(85da850): «No build for this commit» | ||
AlexDaniel | c: HEAD^ 42 | ||
committable6 | AlexDaniel, ¦HEAD^: «WARNINGS for /tmp/donFsm9Lfe:␤Useless use of constant integer 42 in sink context (line 1)» | ||
AlexDaniel | Zoffix: we are cool, it's fine. I found your reasoning unreasonable, but at the same time I appreciate your work, so if it disturbs you too much then I think we can keep it this way. | 19:04 | |
c: HEAD 42 | 19:05 | ||
committable6 | AlexDaniel, ¦HEAD(85da850): «No build for this commit» | ||
AlexDaniel | c: 647abfea2d9 say 42 | ||
committable6 | AlexDaniel, ¦647abfe: «42» | ||
AlexDaniel | ok that's slightly better… | ||
c: HEAD 42 | 19:07 | ||
committable6 | AlexDaniel, ¦HEAD(85da850): «WARNINGS for /tmp/y84Bt4akEP:␤Useless use of constant integer 42 in sink context (line 1)» | ||
AlexDaniel | if only I could figure out why whateverable does this… | 19:08 |
c: releases say rand | |||
committable6 | AlexDaniel, gist.github.com/954b232a8967a2e8f1...fa6365c763 | ||
AlexDaniel | c: all say rand | 19:10 | |
committable6 | AlexDaniel, gist.github.com/349e4b94e73202adef...fdaf84225a | ||
AlexDaniel | c: all say Q'blah' | ||
committable6 | AlexDaniel, gist.github.com/395a79533bdb3e09ea...26bc6b825f | ||
timotimo | don't you mean Q'plah? | ||
AlexDaniel shrug | 19:11 | ||
Zoffix | AlexDaniel: well, OK. Once I'm done with the grant stuff, I plan on improving the bot and perl6.fail (they're both kinda in-beta-status). So the end-goal is for the bot to auto-start the VM and have an on-IRC safemode switch. Once that's done, you can cut a release. 'cause then you won't need any VMs, keys, or commit bits; the bot will handle all that | ||
AlexDaniel | Zoffix: and if the bot is hit by a bus we're doomed, mhm… | 19:12 |
MasterDuke_ | timotimo: for that script, to_json takes 42s, but to_sql takes 53s | 19:13 | |
Zoffix | Not at all. (a) You can always follow manual instructions - it's painful and error-prone; (b) The final-state bot will have proper setup instructions, so all you'd need to do is find a new box to run it on and done. | ||
I mean, I'm following manual instructions *as we speak* because the bot doesn't know how to cut point releases | 19:14 | ||
AlexDaniel | I'd much rather run it myself. But again, if this is so big of a trouble, we don't *have* to do it | ||
Zoffix | AlexDaniel: run what yourself? The bot? | ||
AlexDaniel | yea | ||
Zoffix | AlexDaniel: well, then submit a CLA to get commit bit for rakudo. | 19:15 | |
You need it to run the bot, 'cause I doubt anyone will give you their github credentials :P | |||
AlexDaniel | sure, I know. This is why I brought it up ahead of time :) | ||
Zoffix | And we also need to figure out the "group key" or whatever. So that multiple people can release and we'd still always use the same key | 19:16 | |
timotimo | MasterDuke_: that's kinda strange | 19:17 | |
MasterDuke_ | 2202736maxresident)k for both | 19:20 | |
Geth | rakudo/nom: 052dfcddce | (Zoffix Znet)++ | VERSION Bump version to 2017.04.2 |
19:21 | |
MasterDuke_ | timotimo: got anything from telemetry yet? | 19:23 | |
timotimo | only gc runs starting and such | 19:31 | |
MasterDuke_ | it's still going? | ||
timotimo | yeah | 19:32 | |
of course it is :) | |||
603b30 30636434259937 (- "start minor collection" (2620) | |||
^- the last output | |||
not every interval was a gc, but the vast majority | 19:33 | ||
MasterDuke_ | how long has it been running? | ||
timotimo | real time or cpu time? :) | ||
MasterDuke_ | real | 19:34 | |
timotimo | hm | ||
good question actually | |||
0 Epoch counter: 98516819482392 | |||
that's not a unix timestamp | |||
MasterDuke_ | and you're profiling a rakudo compile? | 19:35 | |
timotimo | yeah | 19:37 | |
i think with --target=ast | |||
it's spending most of its time doing nothing | 19:38 | ||
MasterDuke_ | swapping? | ||
timotimo | yeah | ||
MasterDuke_ | how much ram do you have? | 19:39 | |
timotimo | eating now | 19:40 | |
15.6G says htop | |||
10.9G is in my swap | |||
MasterDuke_ | ha. need to rent some 32 or 64gb aws or gce machine for an hour or so | 19:41 | |
Zoffix | good god we have a lot of branches | 19:49 | |
"2016.01-preparation" | |||
heh | |||
Geth | rakudo: zoffixznet++ created pull request #1062: [io grant] Make IO::Spec::Win32.is-absolute about 63x faster |
||
rakudo/nom: c6fd7361d2 | (Zoffix Znet)++ | src/core/IO/Spec/Win32.pm [io grant] Make IO::Spec::Win32.is-absolute about 63x faster - Use NQP ops instead of regexes - Toss UNC path check; if we didn't match the leading slash test, we won't match UNC path anyway |
19:50 | ||
rakudo/nom: e1c086b7a7 | (Zoffix Znet)++ (committed using GitHub Web editor) | src/core/IO/Spec/Win32.pm Merge pull request #1062 from rakudo/merge-post-2017.04.2-release [io grant] Make IO::Spec::Win32.is-absolute about 63x faster |
|||
star: a54b67f843 | (Zoffix Znet)++ (committed using GitHub Web editor) | tools/star/Makefile Use correct Rakudo point release |
19:51 | ||
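(Aside: a hedged sketch of the shape of the IO::Spec::Win32.is-absolute change announced above. This is not the actual patch; it only illustrates a leading-separator test done with NQP string ops instead of a regex.)

    use nqp;
    # absolute if it starts with a slash/backslash, or looks like C:\ or C:/
    sub win32-is-absolute(Str:D $path) {
        so nqp::eqat($path, '/',  0)
        || nqp::eqat($path, '\\', 0)
        || (nqp::chars($path) > 2
            && nqp::eqat($path, ':', 1)
            && (nqp::eqat($path, '/', 2) || nqp::eqat($path, '\\', 2)))
    }
    say win32-is-absolute('C:\Windows');   # True
    say win32-is-absolute('foo/bar');      # False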
Zoffix | Alright | ||
Release is done. | |||
2017.04.2 is the latest and greatest. | |||
Commit to nom freely :) | |||
Zoffix takes rest of the day off to relax | 19:52 | ||
[Coke] | (group key) We've signed releases as individuals in the past, I don't think that's necessarily a problem to do in the future, is it | 19:55 | |
? | |||
(lot of branches) if the branches are merged to nom, we can probably kill them. | |||
timotimo | MasterDuke_: it's not outputting anything into the telemetry log any more | 19:56 | |
it might be that gcc stopped it | |||
MasterDuke_ | timotimo: is your branch up to date? if i check it out, build moar, and run my script will i get something useful? | 19:58 | |
timotimo | oh | ||
it's spittin' | |||
samcv | .tell lizmat thank you :) got your package in the mail! so happy! | ||
yoleaux | samcv: I'll pass your message to lizmat. | ||
samcv | i now have perl 6 swag! | ||
timotimo | samcv: yeah!! | 20:00 | |
it seems like the process has ended | 20:14 | ||
aha. SIGKILL | 20:18 | ||
"the process no longer exists" | |||
robertle | out of curiosity, what does "nom" stand for? | 20:30 | |
geekosaur | new object model | 20:31 | |
iirc | |||
timotimo | yeah, from many, many, many years ago | 20:33 | |
before nom rakudo used to compile the whole core setting from source every time you started it | |||
because we didn't have serialization yet | |||
jnthn | The history is a little more involved than that :) | 20:34 | |
We did compile the setting into bytecode, but the lack of serialization meant that we couldn't serialize meta-objects | 20:35 | ||
So we built up objects every time at startup | |||
Like adding all the methods to classes, etc. | |||
timotimo | ah, ok | 20:37 | |
robertle | sounds slow | 20:38 | |
jnthn | It was, and CORE.setting was way smaller then | 20:39 | |
By now we are even doing stuff like lazily deserializing parts of CORE.setting on-demand | 20:41 | ||
Which gave us a further startup time reduction and base memory reduction | 20:43 | ||
At the price of quite a few headaches | |||
MasterDuke_ | timotimo: i have a 3601 line telemetry log. anything i can glean from it? | 20:47 | |
timotimo | you don't have my local patch for it, right? | 20:49 | |
oh | |||
MasterDuke_ | no, whatever was on github | ||
timotimo | the patch that i linked you to that changes the marking for call graph nodes also includes the interesting intervals | 20:50 | |
could you apply that patch? | |||
MasterDuke_ | doing that now | 20:51 | |
oh, now about 16k lines of log | 20:54 | ||
timotimo | mhm | 20:55 | |
MasterDuke_ | don't know what successors are, but there were 254238 of them | 20:56 | |
timotimo | that's how many children each call graph node has | ||
in theory we could graph the growth of the call graph with this | |||
MasterDuke_ | started with 103579 | 20:57 | |
Zoffix | [Coke]: (group key) yes we did do them in the past and now there are a bunch of different keys in use and some are still unverified: github.com/rakudo/rakudo/tags?after=2016.01.1 So, I don't know of a way to verify those releases. | 21:42 | |
[Coke]: and if we use a bunch of different keys, users need to import them (or figure out how to verify without importing) | 21:43 |
+ someone releases, some users import their key as trusted, that user goes apeshit, and now we have a bunch of users trusting a hostile key. | 21:44 |
ZofBot: APOKEYLIPSE! | |||
ZofBot | Zoffix, [Note: the name "FIRST" used to be associated with "state" declarations | ||
Zoffix | OMG! I got the best hacking ice-cream! A fruity icepop on the outside; pop rocks on the inside! | 22:06 | |
These ones: www.madewithnestle.ca/sites/defaul...k=cd_rn-0D | |||
Wonder how they make the popping things not pop inside wet-ish icecream, yet pop in wet mouth | |||
geekosaur | temperature? | 22:15 | |
Zoffix | Ah | 22:42 | |
samcv | and nucleation. you put it in your mouth and the ice cream dissolves off, exposing the pitted grooves of the poprocks, which then have all those physical sites to react at | 22:46 |
Zoffix | :o | 23:07 | |
Zoffix has another one | |||
ZofBot: FOR SCIENCE | |||
ZofBot | Zoffix, If it's being overly hungry | ||
geekosaur | oh, yes, could well be threshold water level, if the ice cream is decent then it's got lipids that would wash away | 23:10 | |
Zoffix | "To make Pop Rocks, the hot sugar mixture is allowed to mix with carbon dioxide gas at about 600 pounds per square inch (psi). The carbon dioxide gas forms tiny, 600-psi bubbles in the candy" | 23:24 | |
600-psi :o | |||
samcv | not high enough | 23:26 | |
we need more bubbles |