🦋 Welcome to Raku! raku.org/ | evalbot usage: 'p6: say 3;' or /msg camelia p6: ... | irclog: colabti.org/irclogger/irclogger_log/raku
Set by ChanServ on 14 October 2019.
00:02 melezhik left 00:10 rbt left 00:11 rbt joined 00:16 mowcat left
rypervenche Any idea why adding promises to these variables would cause a 20-30 second increase in the script's runtime? gist.github.com/rypervenche/5c9909.../revisions 00:21
00:25 sena_kun joined 00:27 Altai-man_ left
timotimo you could try timing individual pieces 00:28
does the output appear in the expected "rhythm" or are there pauses where you wouldn't expect any? 00:29
like, you would expect creating links and creating sentences grepped to appear immediately, and also creating @sentences would appear immediately 00:30
but then it'd await the sentence grepping part
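A minimal sketch of that piecewise timing using `now`; the stage bodies below are placeholders standing in for the gist's real link/sentence work:

    my $t0 = now;
    my @links = (1..1000).map(*.Str);            # stand-in for the "creating links" stage
    note "creating links took {now - $t0} seconds";

    $t0 = now;
    my @sentences = @links.grep(*.chars > 2);    # stand-in for the sentence-grepping stage
    note "grepping sentences took {now - $t0} seconds";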
00:30 leont left 00:50 oddp left 01:09 Altai-man_ joined 01:11 sena_kun left 01:12 molaf left
raku-bridge <DataKinds> no problem rypervenche! 01:18
timotimo hey @DataKinds, you made that blog post including the "create objects a bunch" benchmark, right? 01:19
raku-bridge <DataKinds> timotimo: yeah I did, what's up?
<DataKinds> you mean this one right datakinds.github.io/2020/06/17/raku-in-2020 01:20
timotimo we have an optimization that may one day let this run at much much higher speeds
because it won't be creating the objects at all
and in the past i've been interested in giving rakudo an optimization for when you give it an empty program :D 01:21
raku-bridge <DataKinds> hehe 01:22
<DataKinds> what's the code path look like for an empty file, anyway?
<DataKinds> i was hoping that neither ruby nor raku had an optimization like that because I really wanted to test like the full VM start-up time 01:23
<DataKinds> also, more importantly: what do you mean by "it won't be creating the objects at all" timotimo?
timotimo in the ideal case, everything that happens inside the for loop is inlined 01:24
and the optimizer can reason that nothing that was created escapes
so no object has to be created, as long as we keep the things around that were put into attributes and such
01:25 molaf joined
timotimo with a little extra work it can see that it's never read from 01:26
so the code inside the loop turns into { }
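A sketch of the loop shape being described, assuming the benchmark looks roughly like this (class and iteration count are illustrative): once the constructor is inlined and escape analysis proves nothing leaves the block, scalar replacement can skip allocating the objects entirely.

    class Point {
        has $.x;
        has $.y;
    }

    # every object is created and immediately dropped; nothing escapes the loop body
    for ^1_000_000 {
        my $p = Point.new(x => $_, y => $_ * 2);
    }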
raku-bridge <DataKinds> ahh that'd be very cool 01:30
<DataKinds> would that happen in an optimization pass after the execution begins? or before
timotimo after 01:31
raku-bridge <DataKinds> very cool
<DataKinds> additionally, what's the feasibility of having an optimization for large collections of small objects where instantiating a new one actually just acts as a thin delta layer on top of a pointer to another object? ala some of clojure's collections
<DataKinds> i know there's a LOT of speed to be had there
timotimo there's at least one talk by jonathan about this, it's "scalar replacement" and "escape analysis" (well, the latter enables the former in this case) 01:32
01:32 Cabanossi left
raku-bridge <DataKinds> does raku not currently do any sort of escape analysis? 01:32
timotimo ah, kind of like turning an array of structs into a struct of arrays?
raku-bridge <DataKinds> ya
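In plain Raku terms, the contrast being agreed on here looks roughly like this (a sketch of the two data layouts, not of what MoarVM does internally):

    # array of "structs": one hash per point
    my @points = (^3).map({ %( x => $_, y => $_ * 2 ) });

    # struct of arrays: the same data as two flat columns
    my %columns =
        x => @points.map({ $_<x> }).Array,
        y => @points.map({ $_<y> }).Array;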
timotimo it does, just not all of the escape analysis forever
raku-bridge <DataKinds> ah interesting 01:33
<DataKinds> sorry i'm picking your brain here i always found optimization really interesting
timotimo same
raku-bridge <DataKinds> does rakudo currently do any sort of small object optimization? 01:34
timotimo i'm not sure if this counts; moarvm has an allocator for pages of almost-same-sized objects 01:35
01:38 Cabanossi joined 02:05 Manifest0 left 02:06 Manifest0 joined 02:11 Cabanossi left 02:21 Cabanossi joined 02:31 stoned75 left 02:32 wtwt5237 joined
wtwt5237 Hi, I tried the new 2020.06 version of raku. Looks like it has become even slower than 2020.05. 2020.05 is slower than 2020.02. Does anyone have any insight into this? 02:33
github.com/wtwt5237/raku-for-bioinformatics
03:10 sena_kun joined 03:11 Altai-man_ left 03:17 rbt left 03:18 rbt joined 03:21 zacts joined
guifa wtwt5237: I know that there were a few optimizations made that would make things worse in some cases, but better more often than not. It might be that your code is hitting some of those trade-offs on the downside 03:29
03:33 vike left 04:26 MilkmanDan left
zacts hello #raku 04:37
04:38 brtastic joined 04:44 Maylay left 04:50 Maylay joined 04:54 rindolf joined 05:01 Maylay left 05:04 wtwt5237 left 05:07 stoned75 joined 05:09 Altai-man_ joined 05:11 sena_kun left 05:16 CAPTCHA_REQUIRED joined, OpenZen left
CAPTCHA_REQUIRED How would you go about downloading something in raku concurrently 05:16
with transactional error handling
05:17 rindolf left 05:28 rindolf joined
moritz CAPTCHA_REQUIRED: cro.services/docs/intro/http-client 05:30
05:31 brtastic left
moritz the .get method returns a promise, so you can do all the usual stuff with it instead of using `await` 05:32
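A minimal sketch along those lines, assuming Cro::HTTP::Client is installed; the URLs are placeholders and the error handling shown is just one option:

    use Cro::HTTP::Client;

    my @urls = <https://www.example.com/a https://www.example.com/b>;   # placeholder URLs

    # each .get returns a Promise, so the requests run concurrently
    my @requests = @urls.map({ Cro::HTTP::Client.get($_) });
    await Promise.allof(@requests);    # settles once every request has succeeded or failed

    for @urls Z @requests -> ($url, $request) {
        if $request.status ~~ Kept {
            say "$url -> HTTP {$request.result.status}";
        }
        else {
            note "$url failed: {$request.cause.message}";
        }
    }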
05:32 Maylay joined 05:34 vike joined 05:45 molaf left 06:07 brtastic joined 06:14 brtastic left
raku-bridge <DataKinds> isn't cro a lil overkill for that 06:37
<DataKinds> Why not just WWW 06:38
<DataKinds> Or HTTP::Tinyish
moritz well, Cro is asynchronous by design, which seems to fit the question 06:42
06:44 vike left
raku-bridge <DataKinds> the question asked for concurrent, not async; could just use .race 06:44
CAPTCHA_REQUIRED i mean async
raku-bridge <DataKinds> I stand corrected hehe
06:52 brtastic joined 06:54 bocaneri joined 06:55 zacts left 06:59 skids left 07:10 sena_kun joined 07:12 Altai-man_ left
MasterDuke wtwt5237: i get fluctuating numbers on repeated runs. as low as 5.9s and as high as 7.2s. you might just have gotten a lucky 2020.02 and an unlucky 2020.0(5|6) 07:14
tellable6 MasterDuke, I'll pass your message to wtwt5237
07:17 sarna joined 07:18 bocaneri left, leont joined 07:23 aborazmeh joined, aborazmeh left, aborazmeh joined 07:28 bocaneri joined 07:31 vike joined 07:36 dakkar joined 07:55 aborazmeh left 08:05 xinming_ left 08:06 xinming_ joined 08:07 jarh joined
jarh Hi everyone! 08:07
Where do I file a bug report for raku? 08:08
raku-bridge <DataKinds> 👋
jarh % perl6 --version: This is Rakudo version 2020.02.1 built on MoarVM version 2020.02.1, implementing Raku 6.d.
raku-bridge <DataKinds> github.com/rakudo/rakudo/wiki/rt-introduction
MasterDuke jarh: github.com/rakudo/rakudo/issues/new/
08:16 JJMerelo joined 08:24 kensanata joined 08:28 pecastro joined 08:38 AlexDaniel joined, jarh left, AlexDaniel left, AlexDaniel joined 08:43 Sgeo left
El_Che rbt: rakudo-pkg 2020.06 packages are up 08:47
08:57 wamba joined 09:06 Doc_Holliwould joined, lizmat_ joined 09:07 reppie joined 09:08 Doc_Holliwood left, lizmat left, notandinus left 09:09 Altai-man_ joined, gugod left, x[LGWs4x4i]uG2N0 left, gugod joined 09:12 sena_kun left 09:14 JJMerelo left 09:20 MasterDuke left 09:36 oddp joined 09:44 AmberSaber joined
AmberSaber hi 09:45
09:46 AmberSaber left 09:48 wamba left 09:51 lizmat_ is now known as lizmat
Geth doc: 187a10926d | (Patrick Böker)++ (committed using GitHub Web editor) | doc/Language/compilation.pod6
The "perl" repository has been renamed to "core"

The rename took place in [4d5b254e654d3](github.com/rakudo/rakudo/commit/4d...70b95402). Reflect that change here.
09:54
linkable6 Link: docs.raku.org/language/compilation
09:56 JJMerelo joined
Xliff nine: Could you take a look into R#3765, please? 10:02
linkable6 R#3765 [open]: github.com/rakudo/rakudo/issues/3765 [SEGV] Segmentation Fault occurring in 2020.06 after Converting Script to a Module
10:03 MasterDuke joined
nine Xliff: sorry, not any time soon. I'm in the middle of getting in-process precompilation going. If you could narrow down the failure to a somewhat more manageable size, I may be able to help. 10:05
Xliff nine: OK. Will attempt to work toward that. 10:06
nine: However, I expect the size is tied to the root cause. :/
MasterDuke nine: in-process precompilation will help a lot with large projects with lots of large dependencies, correct? (e.g., a lot of Xliff's projects) 10:07
tellable6 2020-06-23T09:55:01Z #moarvm <nine> MasterDuke: I think spesh plugins are going to be replaced by the new dispatch architecture. So probably not...
nine No. I don't think we've ever had a bug that depended on having huge code size
MasterDuke: yes
MasterDuke: I didn't think it would, but apparently it can save a very noticeable part of compilation time when code is spread out over lots of compilation units 10:08
Xliff nine: There's always the first time, yes?
MasterDuke nine: i'm not sure about that (re large code size). very large literals (e.g., a hash with 100k+ entries) have problems
nine MasterDuke: yes, but that's a much more extreme case than "my script is just pretty long" 10:09
MasterDuke nine: and good point re new-disp, i won't do anything until after that lands
Xliff nine: No, I think it's more the number of compunits involved, rather than code size. Just a thought.
nine I'd bet that it's not that either
Xliff Again, just a thought. 10:10
10:13 wamba joined
lizmat clickbaits rakudoweekly.blog/2020/06/22/2020-25-on-time/ 10:14
10:21 Doc_Holliwould left
Xliff falls for it. 10:21
MasterDuke anyone want to create a regex for git to recognize raku functions? github.com/git/git/blob/master/use...#L115-L144 10:26
10:36 libertas joined 10:38 reppie is now known as x[LGWs4x4i]uG2N0 10:52 kent\n joined
JJMerelo MasterDuke: maybe create an issue... somewhere? marketing repo? 10:54
MasterDuke user-experience, right 10:55
github.com/perl6/user-experience/issues/39 10:57
JJMerelo MasterDuke thanks!
rbt Thank-you El_Che. 11:04
Xliff MasterDuke: Ewww... that's using old regexes, not raku regexes! 11:05
11:10 sena_kun joined 11:11 cpan-raku left, Altai-man_ left 11:12 cpan-raku joined, cpan-raku left, cpan-raku joined 11:14 JJMerelo left 11:18 JJMerelo joined 11:50 Xliff left
Altreus I don't think I'm doing this right... this line is never run github.com/shuppet/p6-api-discord/...on.pm6#L93 11:50
the method is run from TWEAK 11:51
In turn from here github.com/shuppet/p6-api-discord/...d.pm6#L212
And ultimately from a script calling that method 11:52
The Connection object is definitely created because the websocket is created. I have debugged the value of this $json github.com/shuppet/p6-api-discord/...et.pm6#L28 11:53
SmokeMachine JJMerelo: Hi! Thank you very much! 11:55
is that git repo a book you are writing?
JJMerelo Yep
"Raku recipes" for Apress.
Deadline in 3 weeks, two chapters to go... 11:56
Red is part of the basic configuration, since the book uses an auxiliary library that, by default, also uses Red.
lizmat Altreus jnthn might know
SmokeMachine JJMerelo: great. 11:58
JJMerelo: I'm looking forward to reading that book 11:59
JJMerelo Well...
If you're interested in the previous one, "Perl 6 Quick Syntax Reference", also by APress, we're looking for reviewers... 12:00
This one will be published hopefully in October, if everything goes according to schedule.
SmokeMachine JJMerelo: I'd love to help, even if my help wouldn't be that good. I don't know if my English is good enough for that 12:02
JJMerelo SmokeMachine we'd send you an e-copy and, when you're done, you're expected to publish a review, preferably in Amazon. Will that be OK? 12:03
12:05 wamba left
SmokeMachine Ok! :) 12:08
12:11 JJMerelo left
SmokeMachine has anyone made a mi6 release GitHub action already? 12:38
lizmat SmokeMachine: not to my knowledge 12:45
SmokeMachine that would be useful... I'll try to do that when I have some time... 12:46
El_Che .tell jj "Raku, on the other hand, has fantastic documentation." (datakinds.github.io/2020/06/17/raku-in-2020) 12:49
tellable6 El_Che, I haven't seen jj around, did you mean cj?
El_Che .tell jjmerelo "Raku, on the other hand, has fantastic documentation." (datakinds.github.io/2020/06/17/raku-in-2020)
tellable6 El_Che, I'll pass your message to JJMerelo
Altreus lizmat: hopefully! 12:54
13:09 Altai-man_ joined 13:11 sena_kun left
cpan-raku New module released to CPAN! Gnome::N (0.17.10) by MARTIMM 13:16
New module released to CPAN! Gnome::Gtk3 (0.28.5.2) by MARTIMM
13:25 sarna left 13:31 tejr left, soursBot joined 13:33 tejr joined 13:59 [Coke] left 14:00 dataangel left 14:01 [Coke] joined, [Coke] left, [Coke] joined
[Coke] finally reinstalls tmux so he can get better unicode support for his irc session. 14:02
just noticed that camelia wasn't rendering in the toolbar in screen.
14:13 xinming_ left, xinming_ joined 14:28 poga joined 14:30 Kaeipi left 14:31 Kaeipi joined 15:10 sena_kun joined
cpan-raku New module released to CPAN! Red (0.1.9) by FCO 15:11
15:12 Altai-man_ left 15:25 soursBot left 15:28 soursBot joined 15:43 MasterDuke left 15:46 JJMerelo joined
JJMerelo Hi 15:47
tellable6 2020-06-23T12:49:53Z #raku <El_Che> jjmerelo "Raku, on the other hand, has fantastic documentation." (datakinds.github.io/2020/06/17/raku-in-2020)
JJMerelo .tell El_Che thanks for letting me know! 15:48
tellable6 JJMerelo, I'll pass your message to El_Che
JJMerelo I'm adopting Digest::HMAC by the terms of its free license. It's been erroring in installation for some time, so important modules such as Cro::HTTP can't be installed automatically 15:53
lizmat JJMerelo++
15:53 soursBot left
JJMerelo the PR has been sitting there for some time, no reaction. When they come back, they can simply merge the changes and I'll retire my copy (or create new stuff, if there's really something new there) 15:54
15:57 grumble is now known as rawr
Geth ecosystem/JJ-patch-14: 40e9927ac1 | (Juan Julián Merelo Guervós)++ (committed using GitHub Web editor) | META.list
Change repo for distribution

To fix installation error.
16:00
ecosystem: JJ++ created pull request #509:
Change repo for distribution
jdv79 are there any known mem leaks? 16:09
16:09 soursBot joined
jdv79 i may have a proc that eats more mem as it runs but there is no obvious explicit global state 16:10
Geth ecosystem: 40e9927ac1 | (Juan Julián Merelo Guervós)++ (committed using GitHub Web editor) | META.list
Change repo for distribution

To fix installation error.
16:14
ecosystem: d26ab11a6b | (Juan Julián Merelo Guervós)++ (committed using GitHub Web editor) | META.list
Merge pull request #509 from Raku/JJ-patch-14

Change repo for distribution
16:16 Doc_Holliwould joined 16:17 kensanata left
jdv79 almost certain i have an unbounded mem use case:( 16:22
timotimo: how do i get your heap thingee working? 16:28
16:47 MasterDuke joined 16:52 molaf joined 16:53 skids joined 16:57 natrys joined 17:09 Altai-man_ joined 17:11 sena_kun left 17:14 kybr joined 17:19 antononcube joined 17:21 stoned75 left 17:23 MilkmanDan joined 17:24 brtastic left 17:25 squashable6 left
antononcube Hi! I want some help with choosing a package / module name. I wrote Raku packages that translate natural language commands into executable code in different programming languages. Say, I have a QuantileRegressionWorkflows module and a LatentSemanticAnalysis module. What would be good module "path" names? Something like "Text::LatentSemanticAnalysis" or 17:25
"CodeGeneration::LatentSemanticAnalysis"?
17:25 dakkar left
timotimo jdv79: you need the timo fork's "nqp-ops-makes-things-faster" (or similar) branch 17:26
17:26 squashable6 joined
Altai-man_ antononcube, `Text::` seems fitting to me. 17:27
antononcube Here is an example call to the module LatentSemanticAnalysis: say to_LSAMon_R( "create from textHamlet; make document term matrix with stemming FALSE and automatic stop words; apply LSI functions global weight function IDF, local term weight function TermFrequency, normalizer function Cosine; extract 12 topics using method NNMF and max steps
12 and 20 min number of documents per term; show topics table with 12 terms; show thesaurus table for king, castle, denmark;")
That call generates this R code: LSAMonUnit(textHamlet) %>% LSAMonMakeDocumentTermMatrix( stemWordsQ = FALSE, stopWords = NULL) %>% LSAMonApplyTermWeightFunctions( globalWeightFunction = "IDF", localWeightFunction = "None", normalizerFunction = "Cosine") %>% LSAMonExtractTopics( numberOfTopics = 12, method = "NNMF", maxSteps = 12, 17:28
minNumberOfDocumentsPerTerm = 20) %>% LSAMonEchoTopicsTable(numberOfTerms = 12) %>% LSAMonEchoStatisticalThesaurus( words = c("king", "castle", "denmark"))
antononcube Altai-man_: Thanks! I am strongly considering `Text::`. Don't you find it too generic? 17:29
Altai-man_ antononcube, not really. It depends on the direction you want/can take it to. For example, if you plan to make lots of plugins you can put it under some top-level name, just not too long please. ;) For example, there are Sparrow::{Plugins...} or Cro::(Core, HTTP, etc) and it is not "Web::Cro::...". 17:31
antononcube, if you are going for a single module, then Text seems ok. It assumes working with some text, including parsing/generation as well as modifying, etc. 17:32
antononcube Altai-man_ Good point. I do want to have multiple modules for generating Machine Learning workflows and might reuse grammars from sibling modules. 17:33
Altai-man_ antononcube, you can also consider just `LSA` prefix, I see google understands this abbr pretty well. Just to save some typing. 17:34
antononcube Altai-man_ So I envision something like `CodeGeneration::LatentSemanticAnalysisWorkflows`, `CodeGeneration::QuantileRegressionWorkflows`, `CodeGeneration::RecommenderWorkflows`, etc. And, yes, I realize the names I am showing are long. 17:35
Altai-man_ antononcube, personally, I don't mind long names, auto-completion saves my day. Though `CodeGeneration` sounds a bit unusual. I wrote Java::Generate, maybe you can go with `LSA::Generate` or something. ;) 17:38
antononcube Altai-man_ I am targeting different programming languages with the same parser/interpreter module. 17:39
antononcube For example, here is a Python code generation result with the same sequence of natural commands above: `obj = LSAMonUnit(aText ); obj = LSAMonMakeDocumentTermMatrix( lsaObj = obj, stemWordsQ = NA); obj = LSAMonEchoDocumentTermMatrixStatistics( lsaObj = obj ); obj = LSAMonApplyTermWeightFunctions( lsaObj = obj, globalWeightFunction = "IDF", 17:41
localWeightFunction = "None", normalizerFunction = "Cosine"); obj = LSAMonExtractTopics( lsaObj = obj, numberOfTopics = 12, method = "NNMF", maxSteps = 12); obj = LSAMonEchoTopicsTable( lsaObj = obj, numberOfTableColumns = 12, numberOfTerms = 10); obj = LSAMonEchoStatisticalThesaurus( lsaObj = obj, words = c("sing", "left", "home"))`
Altai-man_ antononcube, not sure if it implies anything different from what I wrote. 17:42
antononcube Altai-man_ I mean I do not think this would work:`R::LatentSemanticAnalysisWorkflows`,`Python::LatentSemanticAnalysisWorkflows`, `Java::LatentSemanticAnalysisWorkflows` . 17:43
Altai-man_ Well, I suggested LSA::Generate, which can have LSA::Generate::R, LSA::Generate::Python, etc. 17:44
LSA::Core etc.
antononcube Altai-man_ I see. Yes, that is an interesting/good way of partitioning the functionality. 17:45
antononcube Altai-man_: But the problem with that approach is that if I want to have a common module with a grammar for generic words or phrases, I cannot share it between the modules for LSA, Regression, or Classification. 17:47
Altai-man_ Why? 17:48
antononcube Altai-man_ Because I do not think `LSA::Generate::CommonWords` belongs in an obvious way to `QR::Generate::` 17:49
Altai-man_ antononcube, use LSA::CommonWords? 17:50
17:52 brtastic joined
antononcube Altai-man_ Sure, I can use it explicitly in `QR::Generate::`, but I think with `CodeGeneration::CommonWords`, `CodeGeneration::QR::`, `CodeGeneration::LSA::` the dependency is going to be more obvious. 17:53
Altai-man_ antononcube, well, you can do as you please, not sure if anybody can stop you. 17:54
17:55 Noisytoot_ is now known as Noisytoot
antononcube Altai-man_ Ok, good to know. :) But the alternative breakdown you suggested is definitely something I am going to seriously consider. 17:56
antononcube Thank you for your suggestions! 17:57
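Whichever namespace wins, the sharing concern raised above has an idiomatic answer in Raku: keep the common tokens in a role and compose it into each workflow grammar. A tiny sketch with purely illustrative names:

    # common phrases/tokens, reusable from any workflow grammar
    role CommonWords {
        token integer { \d+ }
    }

    grammar LSAWorkflowCommand does CommonWords {
        token TOP { 'extract' \s+ <integer> \s+ 'topics' }
    }

    say so LSAWorkflowCommand.parse('extract 12 topics');   # True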
17:58 antononcube left 18:00 soursBot left 18:05 MasterDuke left
jdv79 timotimo: is that in timo/moarperf? 18:07
i don't see it
timotimo sorry, that's my p6-app-moarvm-heapanalyzer or something like that 18:11
jdv79 curious that gnu time reported a max rss of 14.6g for a given run and ps only ever said 3.6g... 18:14
both way too much, in any case
18:15 MasterDuke joined
JJMerelo I see supernovus is doing a bunch of transfer to community modules. 18:15
Are they doing also the change in the ecosystem entries? Or that's up to us?
I think I know the answer to that, but...
jdv79 hmm, raku segfaults at the end of the run with --profile=heap 18:18
timotimo oh? it shouldn't have anything much that it runs when the program is finished 18:20
oh!
--profile=/tmp/foo.mvmheap
the profile switch has taken over the job of --profile-filename
jdv79 i was just doing what the README said to
ok
timotimo oh, that needs updating! good catch 18:21
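So, following that correction, a heap-snapshot run looks something like this (the script name is illustrative; assuming the .mvmheap extension is what selects the heap snapshot profiler):

    raku --profile=/tmp/foo.mvmheap your-script.raku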
18:22 bocaneri left 18:29 MilkmanDan left
jdv79 for a simple case it gen'd a 3g file - geez 18:29
timotimo oh, when you "head foo.mvmheap | xxd" does it say version 2 or 3 near the very start?
18:32 Sgeo joined
jdv79 MoarHeapDumpv002 18:33
still trying to iron out the deps and get the profiler running 18:34
the Compress::Zstd tests are failing
is that known?
Corrupted block detected 18:35
MasterDuke i was able to `zef install Compress::Zstd` no trouble. do you have the zstd-dev (or whatever it is for your os) package installed? 18:36
jdv79 just installed it - ubuntu latest
20.something
timotimo ah yes v002 is vastly bigger 18:37
ah, yes, i've got an off-by-one in there that i've only been able to reproduce in tests - which is ... "ironic"? i think it means the decompress stuff is fine and the compress stuff has a bug
jdv79 libzstd-dev/focal,now 1.4.4+dfsg-3 amd64 [installed] 18:38
18:39 Nile joined
jdv79 nopaste.linux-dev.org/?1320537 18:41
is there something else i should do?
abraxxa any idea why copying a file to a gvfs mounted dir doesn't work? 18:44
I get a "Failed to copy file: operation not supported on socket" error
MasterDuke jdv79: you probably need to recompile moarvm so that it can see you now support zstd 18:46
jdv79 i'll try 18:49
MasterDuke i see `libs: -lm -lpthread -lrt -ldl -lzstd` in the output after i `perl Configure.pl` 18:51
jdv79 i just saw that 18:55
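Roughly the rebuild being suggested, assuming a MoarVM source checkout (the path and prefix are illustrative); after reconfiguring, the "libs:" line should now include -lzstd:

    cd MoarVM
    perl Configure.pl --prefix=$HOME/rakudo/install
    make install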
19:10 sena_kun joined 19:12 Altai-man_ left
timotimo if anybody wants to, they could turn libzstd into a run-time dependency rather than a compile-time dependency 19:12
jdv79 same error from the profiler after a recompile 19:13
timotimo if that's a thing, that you just put the .h or something derived from it, into your code
jdv79 the Compress:Zstd tests passed this time though
timotimo looks like it's still reading a v2 file 19:14
for v3 it'd be using HeapAnalyzer/Parser.pm6 19:15
jdv79 should/can i gen a v3 file? 19:16
timotimo you'll have to until i figure out why the heap snapshot parser isn't working right with v2 19:17
jdv79 how do i gen a v3 file? 19:18
timotimo when moar is built while libzstd is found, it'll gen v3
there's a flag you can pass to configure if you really want v2 instead 19:19
19:19 epony left 19:20 AlexDaniel left 19:22 xinming_ left 19:23 xinming_ joined
timotimo i guess i laser-focussed on making v3 great while not taking enough pains to make sure v2 still worked, though i'm not sure why that'd have broken :( 19:24
there's a pull-request from nwc to fix something related to the v2 format i think?
19:28 soursBot joined 19:43 JJMerelo left 19:46 TeamBlast left, sivoais left 19:49 TeamBlast joined
cpan-raku New module released to CPAN! Concurrent::PChannel (0.0.3) by VRURG 19:58
19:59 sivoais joined
jdv79 timotimo: even a v3 snap fails on a summary for me 20:01
timotimo oh? now things are getting more interesting
jdv79 nopaste.linux-dev.org/?1320538 20:02
20:02 molaf left
jdv79 at least the initial summary seems to be correct - for this simple run i see ~20M growth 20:03
timotimo that's a good start
jdv79 but i'm not sure how much more i can glean from that
timotimo right, you'll want at the very least the highscore list
jdv79 if i knew what that was - i believe you 20:04
timotimo the error message is potentially from inside moarvm, where it might be interpolating an empty debugname, in which case we may be doing something wrong in there, too
probably just a cosmetic fix there, though
jdv79 is there a way to measure how much mem a ref is using? 20:06
timotimo a ref is usually just a pointer inside of an object
jdv79 iirc in perl there's something like Scalar::Size
i mean recursively
"this hash is x size"
timotimo nothing that's really easy just yet 20:07
jdv79 i do collect a results array for final processing but it should be tiny unless raku datastructures are way larger than i would think
timotimo it's moderately interesting that the number of frames is growing
we can get a bit of interesting output from log::timeline 20:08
hold on
if you could set LOG_TIMELINE_JSON_LINES=/dev/stderr for example 20:09
jdv79 where do i set that? 20:10
20:10 kensanata joined, rbt left 20:11 rbt joined
jdv79 that blasts out a lot of gibberish it looks like 20:12
timotimo yeah, it's supposed to output a json blob per line
ok perhaps mixing it with stdout is a bad idea
20:13 epony joined
timotimo hm. can i feed comma's log::timeline viewer with jsonp files? 20:14
jdv79 nopaste.xyz/?d0349318bc0468d6#5nMw...FarD8ZhUeT 20:20
idk if that means anything
timotimo isn't it beautiful how it randomizes the keys of the json dicts there 20:22
ok, so the kind ("k" field) can be 0 for an event, 1 for "task start" and 2 for "task end", the latter two will also have an "i" which is a unique id 20:23
oh, when it exits with an exception, we'll also see the "task end" for it 20:24
so that's not how i could find out where it b0rks
jdv79 hmm 20:25
timotimo i could see it from it being too quick, but does that even make sense ... 20:27
jdv79: i think a little piece of the err file got killed because of 2>err vs 2>>err? 20:39
in that case i shouldn't recommend setting LOG_TIMELINE_JSON_LINES to stdout or stderr
20:40 rindolf left
jdv79 nopaste.xyz/?ebab99f68a536eaa#3MKs...NqdZJFVmZs 20:42
is that any better?
20:42 natrys left
timotimo ah, yes, now the entry with "i":1 is there 20:42
oh there's output at the top that i was missing 20:43
somehow an Uninstantiable made it in there?!?
20:43 brtastic left
timotimo i can only imagine it's about the values object that gets passed to !read-attribute-stream 20:48
20:49 molaf joined
timotimo jdv79: would you be okay with patching the moarvm-heapanalyzer code locally? 20:50
jdv79 i guess 20:54
i only have another 45mins today
can do more tomorrow maybe
20:55 natrys joined, natrys left 20:59 skids left 21:06 soursBot left 21:07 Benett left 21:08 Benett joined 21:09 Altai-man_ joined
jdv79 what is the patching? 21:11
21:11 sena_kun left
timotimo sorry i got pulled AFK for a bit there 21:18
really i'm pretty confused about what the state may be at the point of the exception
is your data sufficiently harmless that you can just send the heap snapshot file over to me?
maybe we'll put a dd of $values and $result in front of the if $entrysize == [...] in Parser.pm6 around line 218 21:20
and then dd the $dcvalues and $dcresult inside each of these ifs
i have to go prepare some food 21:21
jdv79 idk - what's in the snapshot? 21:24
as long as it doesn't have data sure
how do i send it to you? it's 309M
timotimo it has filenames, function/method names and their line numbers, it has names of attributes, if you have an array of Int objects, the int cache will probably rat out values between -1 and 16 or 32 or so 21:25
goo.gl/t9VoHG - this may work? snapdrop? it ought to send it p2p 21:26
feel free to symmetrically encrypt it with gpg with some passphrase you get to me via another channel or whatever 21:27
this is probably a dumb question; you're on an x86 system? i haven't made the heap snapshot profiler endianness-clean 21:33
GTG now, sorry 21:34
this tool requires me to click accept before it will transfer anything i believe 21:35
maybe try firefox send
jdv79 thanks 21:41
timotimo oh hey 21:42
i'm at the keyboard again for a moment
21:44 kensanata left
jdv79 if anybody has any other easy/expedient ideas for finding the mem growth, that would be nice 21:55
this is currently a prod issue for me so it's a pain...
random golfing was unsuccessful so far 21:56
MasterDuke have you tried heaptrack? it's at the moarvm level, but it might be helpful 22:02
jdv79 nope
unfortunately i have to bounce for today. be back tomorrow! thanks.
22:17 squashable6 left 22:19 squashable6 joined 22:21 squashable6 left 22:23 squashable6 joined 22:41 Altai-man_ left 22:42 sena_kun joined 22:59 pecastro left 23:09 Altai-man_ joined 23:12 m_athias left, sena_kun left 23:18 m_athias joined 23:45 squashable6 left 23:47 squashable6 joined