🦋 Welcome to the MAIN() IRC channel of the Raku Programming Language (raku.org). Log available at irclogs.raku.org/raku/live.html . If you're a beginner, you can also check out the #raku-beginner channel!
Set by lizmat on 6 September 2022.
andinus mail 07:30
oops wrong pane
patrickb I have a role that needs some init logic. I can't put it in a `submethod TWEAK` as that collides in the does-ing class. Currently I rely on a `method TWEAK_SpanHandling()` that needs to be called from the does-ing class's TWEAK. Is there a way to do this that does not require cooperation of the does-ing class? 09:36
ab5tract patrickb: docs.raku.org/language/phasers#COM...plemented) 09:41
lizmat m: role A { sub TWEAK($) { dd; nextsame }; ::?CLASS.^find_method("TWEAK").wrap(&TWEAK) }; class B does A { method TWEAK() { dd } }; B.new 09:42
camelia sub TWEAK($)
method TWEAK(B $:: *%_)
ab5tract ah, nice
patrickb ab5tract: But I need this at object creation time, not compile time.
lizmat patrickb: I think my solution is what you're looking for ? 09:43
patrickb lizmat: That works. Thanks!
But wow, this is hacky..
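(Editor's note: an expanded sketch of lizmat's one-liner above, with the role and method names from patrickb's question substituted in for illustration. It relies on the role's mainline running at composition time, when the consuming class's TWEAK is already visible to `::?CLASS.^find_method`; it assumes the consuming class declares a TWEAK of its own to wrap.)

```raku
role SpanHandling {
    # Wrapper that runs before the consuming class's own TWEAK.
    sub TWEAK($self) {
        say 'SpanHandling init logic';
        nextsame;   # fall through to the class's original TWEAK
    }
    # The role's mainline runs when the role is composed into the class,
    # so we can wrap the class's TWEAK here -- no cooperation needed.
    ::?CLASS.^find_method("TWEAK").wrap(&TWEAK);
}

class B does SpanHandling {
    method TWEAK() { say 'class TWEAK still runs' }
}

B.new;   # role init logic first, then the class's TWEAK (per camelia above)
```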
ab5tract I imagine COMPOSE will be used often to mix in a run time candidate as above, but you raise a good point 09:44
This comes up fairly often, it would be great to explore some solutions
lizmat FWIW, I don't think a COMPOSE phaser is that necessary, as the mainline of the role is basically the COMPOSE phaser 09:47
ab5tract Fair, I think it was proposed as a way to create hooks like the ones that have been asked for twice in 48 hours alone. But just a hook or two in themselves would likely be plenty 09:56
librasteve just playing with the role / TWEAK conundrum - you could do this: 10:56
m: role Thingy { submethod TWEAK { say 'yo' } } class A is Thingy { submethod TWEAK { say 'ho' } } my $a = A.new; 10:57
Raku eval Exit code: 1 ===SORRY!=== Error while compiling /home/glot/main.raku Strange text after block (missing semicolon or comma?) at /home/glot/main.raku:1 ------> Thingy { submethod TWEAK { say 'yo' } }⏏ class A is Thingy { submethod TWEAK { s expecting any of: infix infix stopper statement end statement modifier statement modifier loop
evalable6__ (exit code 1) ===SORRY!=== Er…
librasteve, Full output: gist.github.com/2850c869c3272537a4...6ca051efb3
librasteve m: role Thingy { submethod TWEAK { say 'yo' } }; class A is Thingy { submethod TWEAK { say 'ho' } }; my $a = A.new;
Raku eval yo ho
evalable6__ yo
ho
librasteve that is, to use `is` rather than `does` in the user class and force the role to be punned...? 10:58
tbrowder .ask patrickb have you tried installing Raku on a Chromebook? 13:38
tellable6 tbrowder, I'll pass your message to patrickb
tbrowder or anyone else... 13:39
patrickb no I haven't 14:15
tellable6 2024-08-19T13:38:46Z #raku <tbrowder> patrickb have you tried installing Raku on a Chromebook?
lizmat and yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2024/08/19/2024-...ing-ahead/ 14:51
jgaz When will Raku Land stop listing up non-Zef modules? 16:32
patrickb jgaz: Probably never, or very far in the future. Since p6c is effectively frozen, there is no harm coming from it anymore. 18:00
antononcube Just finished watching the presentation and Q&A session of "Large Language Models and The End of Programming - CS50 Tech Talk with Dr. Matt Welsh" (www.youtube.com/watch?v=JhCl-GeT4jw) My TL;DR review is: > Yes, yes, yes, and blah, blah, blah. 19:30
The presenter either lacks imagination or really likes portraying himself as such.
As for programming with LLMs -- I challenge him to show me how, with LLMs, he is going to produce working code for "multiscale Truchet patterns" -- say in Python, R, or JavaScript -- via a cursory, human understanding of this post: community.wolfram.com/groups/-/m/t/3247480
See these LLM-derived breakdowns and summaries of that presentation: github.com/antononcube/RakuForPred...t-Welsh.md
jaguart.v1 One thing I note on LLM-generated code, when prompts are for Perl -- the code is legacy Perl style, no modern idioms, e.g. Corinna. Might differ for other languages, I guess. 20:57
antononcube Understandable -- LLMs are statistical in nature, hence larger-footprint outcomes are more probable. That is why people make complicated prompts, etc. (And make different claims about those... ) 21:46
_grenzo So, you're saying, to protect our jobs we need to put more honey pots on github with code that suffers from the Halting problem, or other hard to spot bugs? 22:07
And use misleading comments...
Rule 1: We don't talk about AI fight club. 22:11
antononcube Or just invent new programming languages that immediately have huge mind-share. 22:46
_grenzo Have an LLM do that