🦋 Welcome to the MAIN() IRC channel of the Raku Programming Language (raku.org). Log available at irclogs.raku.org/raku/live.html . If you're a beginner, you can also check out the #raku-beginner channel! Set by lizmat on 6 September 2022.
00:05 mtj joined
00:15 sena_kun left
tbrowder__ | .tell _grenzo yes, i'm interested in yr mod to gen tests from a module distro | 00:42
tellable6 | tbrowder__, I'll pass your message to _grenzo
tbrowder__ | it could be very useful for sure | 00:43
00:48 jpn joined
00:53 jpn left
Xliff | Generate tests from a distro? Huh! How would that work? | 00:58
01:20 jpn joined
01:25 jpn left
antononcube | @Xliff Hmm... not that difficult to do. Just give your distro code to the LLM and ask it to write tests for it. | 01:43
Of course, the obtained tests would just be (somewhat) good first iterations. At least with current LLMs. | 01:44
I am generally more interested in the other direction -- generating "distro" documentation from tests. | 01:45
01:45 Manifest0 left
@lizmat I am mostly trying to say that LLMs can be used to generate code. I have seen a few examples of Raku functionalities written with NQP. I am saying it would be cool if, via LLMs, they could be rewritten into "standard" Raku and/or RakuAST. | 01:47
BTW, I do not think or feel strongly about any LLM application ideas. I am still trying out different workflows with them in order to see their limits. | 01:48
01:49 jpn joined
The current "large scale" LLM idea I am investigating is to automatically generate an LLM prompt for any package from the ecosystem. I will first start with well documented packages, and after some tests I will decide how to proceed. | 01:54
01:54 jpn left
I think at some point we should introduce a soft rule that anyone who submits a package to the ecosystem should provide an LLM prompt that helps its utilization. | 01:55
tbrowder__ | when i said "distro" i was using @ugexe's terminology. i meant it would be very useful for me as a "distro" author during development, not from something published. | 01:58
guifa | I would say the main point of RakuAST isn't so much improving code generation. It may be a side effect, but a lot of it came via the desire to implement macros and do some other stuff that requires actually manipulating the codegen | 02:00
antononcube | @guifa Macros make LLMs even more applicable. | 02:01
@tbrowder Yeah, I do not use "distro"; I prefer the generic term "package". Although, I am aware that in Raku, "distro", "package", and "module" have more specialized meanings. | 02:02
02:19 hulk joined
02:20 kylese left
02:22 jpn joined
02:26 jpn left
_grenzo | @tbrowder I started writing the module a while ago but ran into needing to parse the files to get the list of subs, classes, methods, and multi method signatures to generate the tests for. The correct answer seemed to be to have Raku parse them. But I haven't figured that out yet. So it's stalled. I would welcome pointers to documentation or examples. | 02:36
tellable6 | 2024-01-22T00:42:50Z #raku <tbrowder__> _grenzo yes, i'm interested in yr mod to gen tests from a module distro
antononcube | @grenzo Yes, see "UML::Translators" -- shameless plug! | 02:39
raku.land/zef:antononcube/UML::Translators
Basically, in order to generate UML diagrams, I have to traverse class hierarchies, roles, subs, etc.
_grenzo | I'll take a look | 02:40
antononcube | See also the comments here: stackoverflow.com/q/68622047 | 02:41
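[editor's note: the kind of introspection discussed above is fairly direct with Raku's meta-object protocol; a minimal sketch (the `Point` class is hypothetical, purely for illustration) listing a class's local methods and their signatures, e.g. as raw material for generated test stubs:]

```raku
# Hypothetical sketch: use Raku's MOP to enumerate the methods and
# signatures of a class, as a starting point for test generation.
class Point {
    has $.x;
    has $.y;
    method move($dx, $dy) { Point.new(x => $!x + $dx, y => $!y + $dy) }
}

# .^methods(:local) skips inherited methods (e.g. from Any/Mu);
# it also picks up the auto-generated accessors for $.x and $.y.
for Point.^methods(:local) -> $m {
    say $m.name ~ ' ' ~ $m.signature.raku;
}
```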
02:51 jpn joined
02:55 jpn left
03:15 hulk left, kylese joined
03:45 jpn joined
03:49 jpn left
03:51 jpn joined
03:56 jpn left
04:52 jpn joined
04:57 jpn left
05:36 jpn joined
05:41 jpn left
06:32 jpn joined
06:37 jpn left
07:05 Util left
07:07 Util joined
07:18 renormalist left
07:23 jpn joined
07:44 sena_kun joined
08:07 jpn left
08:25 abraxxa joined
08:29 abraxxa left
08:30 abraxxa joined
08:52 Manifest0 joined
08:59 dakkar joined
09:01 jpn joined
09:07 Sgeo left
09:27 abraxxa-home joined
09:48 abraxxa-home left
10:16 jpn left
10:25 jpn joined
lizmat | And yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2024/01/22/2024-04-marrow/ | 12:40
ab5tract | antononbcube: TIL about UML::Translators. Very cool, I wanted that recently and just assumed we didn't have anything like that in the Raku ecosystem. Glad to be proven wrong! | 12:41
lizmat | antononcube ^^ | 12:49
ab5tract | Marrow looks interesting. I don't have any Postgresql processes to try it on at the moment though. | 12:50
lizmat | I wonder what it would take to get SQLite support
ab5tract | that would be cool to see | 12:54
antononcube: I pinged you on Discord, in case you are still up for that LLM boot camp | 12:58
[Coke] | regarding the weekly, is there a way to get more context on the tweets without having an account on the site? direct links work, but clicking to get to the context requires a login | 13:18
my usual response is to ignore them, but occasionally I try to click through. | 13:19
lizmat | I'd be happy to be told of a way :-)
ab5tract | I think the third party options are very limited nowadays due to the quotas | 13:35 | |
13:39 jpn left
antononcube | @ab5tract Did you search the Zef ecosystem for the kind of introspection “UML::Translators” provides? With what tags? | 13:40
And, yes — thank you for your interest in it! | 13:41
ab5tract | I didn't search, no. I was also looking to learn Mermaid in "manual" mode, so I skipped past checking what existed | 13:43
But I did take a minute to imagine how useful a Raku <-> UML translator would be and figured I would probably have to implement one if I wanted to see it. | 13:44
So, yeah, again, glad to be proven wrong! | 13:45
Think we can teach it NQP?
antononcube | Yeah, the generation of Raku code from UML and/or Mermaid would be really nice.
ab5tract | Oh, I must have misread. I thought that it was currently bi-directional. | 13:46
antononcube | I am not sure about teaching it NQP — I think someone needs to know introspection of NQP code well.
ab5tract | Makes sense
antononcube | Agh, no — just introspection, Raku -> UML. I do say that UML -> Raku is on my TODO list, or would be nice to have. | 13:47
13:48 jpn joined
In some sense that translation can/should be done to a good degree with LLMs. | 13:48
ab5tract | True | 13:49
antononcube | The thing is that, introspection-wise, Raku -> UML usually has to translate a code base that exceeds the token limit of most LLM models. (Currently.) The translation of UML -> Raku would typically require a much smaller count of input and output tokens. (Hence, it would bring results most of the time.) | 13:52
13:53 jpn left
The question, of course, is how well LLMs know Raku. Most recent ones, trained, say, up to Sep 2023, seem to have relatively good Raku knowledge. | 13:54
ab5tract | But then we get back to the question of training a customized LLM that knows Raku specifically | 13:56
At least, in my mind.
antononcube | Right. OpenAI claims one can make personalized, specially trained GPTs. I plan to work with those this week. | 13:58
ab5tract | Ok things could really get interesting then
antononcube | The way I see it — if large enough token inputs are allowed by the models, a fairly large code base can be uploaded / trained with. | 13:59
ab5tract | so one day we will be able to feed it all of roast, but I get the sense that's some time off at this point | 14:00
antononcube | So, the current “largest” LLM model I am aware of takes 128K input tokens (≈256K characters), and gives results of up to 8K tokens. | 14:02
This is the main limitation for “feeding” that LLM model with code. | 14:03
ab5tract | Is there a way to curate this input set for maximal understanding? | ||
By which I mean, are there guidelines, processes, etc already well-defined? | |||
antononcube | Correction: 4K output tokens, not 8K. | 14:04
ab5tract | It's kind of a cool project: 128K tokens to work with.
Raku is also pretty golfable... we can get more mileage out of 128K tokens. | 14:05
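[editor's note: a back-of-envelope sketch of the limit being discussed; it assumes the rough 2-characters-per-token figure implied by "128K tokens ≈ 256K characters" above, and the code-base size is made up:]

```raku
# Hypothetical estimate: will a code base fit in one 128K-token prompt?
my $token-limit     = 128_000;
my $chars-per-token = 2;           # rough figure: 128K tokens ~ 256K chars
my $code-base-chars = 400_000;     # made-up code base size

my $tokens-needed = $code-base-chars div $chars-per-token;
say "about $tokens-needed tokens needed";
say $tokens-needed <= $token-limit ?? 'fits in one prompt' !! 'needs chunking';
```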
antononcube | Yes, that is the claim — some of the models say they are “supervised.” But this does not mean the supervision is accessible to end users.
ab5tract | but then you are creating more of a Raku golf generator than a "regular" Raku generator
antononcube | I do not like games in which a ball is hit with a stick. Hence, the Raku golf analogies and characterizations are lost on me. | 14:06
ab5tract | Still might be a lot of fun.
antononcube: code golfing is a sub-genre of programming where the goal is to express an algorithm in as few characters as possible | 14:07
there are now dedicated languages for this, but for a long time Perl was king. | 14:08
antononcube | Hm… interesting. LLMs are usually way too prolific in their responses.
ab5tract | Raku gains and loses in this compressibility versus Perl, though it remains powerful | 14:09
antononcube | Currently I do not expect great results from LLMs. Good and good enough, sure, sort of…
14:10 jpn joined
So, I would not expect great terse code to be produced. | 14:10
ab5tract | But could it still learn from the terse code? | 14:11
There's an equation in here, I can feel it. Something involving the "area gained by terseness" in relation to "depth gained by verbosity" | 14:14
14:15 jpn left
antononcube | Sure, to a point. For example, using few-shot training, i.e. with examples. | 14:15
Another way is using “assistants.” | 14:21
14:21 jpn joined
japhb | ab5tract: Sounds related to the waterbed theory of complexity ... | 14:42
ab5tract | That makes sense | 14:51
antononcube | @japhb Sounds like featherbedding by a USA company. | 14:54
16:16 jpn left
16:18 jpn joined
16:37 jpn left
ingy | andinus: thanks, I guess linux/arm binaries are not a thing yet. probably not a big demand | 16:46
yamlscript releases are similar: github.com/yaml/yamlscript/releases/tag/0.1.35 | 16:48
just don't have windows working quite yet
16:50 jpn joined
16:57 abraxxa left
17:31 wlhn left
17:59 dakkar left
18:26 slicer joined
18:28 abraxxa-home joined
19:10 slicer left
19:20 slicer joined
19:26 slicer left
19:31 slicer joined
19:46 jpn left
20:04 slicer left
20:10 jpn joined
20:18 TieUpYourCamel left
20:24 jpn left
20:26 slicer joined
20:30 slicer82 joined
20:31 slicer left, TieUpYourCamel joined
20:37 slicer82 left
tonyo | tbrowder__: github.com/tony-o/raku-fez/blob/ma...kumod#L663 <- this grabs the commands and tries to match anything in github.com/tony-o/raku-fez/tree/ma...rces/usage and prints that help out - you could also automate that part | 20:48
rather than using flat usage files | 20:49
i didn't in fez because i wanted to show both the short flag and long flag in one usage
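[editor's note: for the automated route tonyo mentions, Raku itself can derive a usage message from the signatures of `multi MAIN` candidates; a minimal sketch (the subcommand names are made up, not fez's actual commands):]

```raku
# Hypothetical sketch: Raku auto-generates a usage message from the
# signatures of the MAIN candidates when the arguments match none of them.
multi MAIN('upload', Str $path, Bool :$dry-run) {
    say "would upload $path" if $dry-run;
}
multi MAIN('login', Str :$user!) {
    say "logging in as $user";
}
# Run with no (or unmatched) arguments, the script prints a usage
# listing of both candidates, derived from their signatures.
```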
20:54 jpn joined
_grenzo | @lizmat Started looking at that over the weekend (to make writing tests for Marrow easier). SqlLite does not appear to support the INFORMATION_SCHEMA standard. Which means writing SqlLite-specific code to interrogate the database. Postgresql, Mysql and others have implemented it, so I started there. | 21:01
21:02 jpn left
21:04 jpn joined
(apologies to SQLite for the misspelling) | 21:07
tbrowder__ | tonyo: thnx | 21:55
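[editor's note: the portability gap _grenzo describes can be sketched with DBIish; the connection details and database names below are placeholders, and the SQLite branch queries `sqlite_master` since SQLite lacks `INFORMATION_SCHEMA`:]

```raku
# Hypothetical sketch of the two code paths: the standardized
# INFORMATION_SCHEMA on PostgreSQL vs. SQLite's own catalog table.
use DBIish;

# PostgreSQL: the portable, standardized route
my $pg = DBIish.connect('Pg', :database<mydb>);
my @pg-tables = $pg.execute(q:to/SQL/).allrows;
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = 'public'
    SQL

# SQLite: no INFORMATION_SCHEMA, so interrogate sqlite_master instead
my $lite = DBIish.connect('SQLite', :database<my.db>);
my @lite-tables = $lite.execute(
    q{SELECT name FROM sqlite_master WHERE type = 'table'}
).allrows;
```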
22:03 jpn left
22:04 jpn joined
22:43 abraxxa-home left
23:34 sena_kun left
23:39 Sgeo joined
23:59 jpn left