[Coke] Occasional reminder - always looking for folks to contribute to the docs - you don't have to be a technical writer, there are a lot of potential inroads to contributing, including: making sure whatever was in the release that was just cut is covered in the docs! 12:00
join us in #raku-doc if you're interested and say hi!
lizmat [Coke]++ 12:01
sjn <3
constxd have u tried using an LLM for docs 12:34
[Coke] You need to feed an LLM content - we're talking about generating that content. Someone could for sure feed the docs into an LLM and make it more conversational. 12:45
I would suggest anyone doing that collab with anton as he's done a lot of raku/llm combo work.
scullucs Oops, thanks, I had missed your answer. I'm also wondering if Some::Module can be set up (via META6.json for example) to render .../Some/Module.rakudoc (instead of ...rakumod) when invoking 'rakudoc Some::Module'. 13:56
antononcube @constxd I assume you are asking for content from docs.raku.org, but I would like to point to this blog post that discusses LLM derivation of code using comment-code rules: "TLDR LLM solutions for software manuals", rakuforprediction.wordpress.com/20...e-manuals/ 14:06
constxd [Coke]: no i mean like feeding in the source code for a module and having it synthesize the docs 14:08
antononcube Alternatively, we can use Retrieval Augmented Generation (RAG), which is a technique to get "trained LLM" look-&-feel from document collections. 14:09
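To make the RAG idea concrete, here is a minimal sketch of the retrieval half in plain Raku -- the docs/ directory, the paragraph chunking, and the word-overlap scoring are all illustrative assumptions, not antononcube's actual setup:

    # Sketch: naive retrieval for RAG over a hypothetical docs/ directory.
    my @chunks = 'docs'.IO.dir(test => *.ends-with('.rakudoc'))
                       .map(*.slurp)
                       .map(*.split(/\n\n+/))   # split each file into paragraphs
                       .flat;

    my $query       = 'How do I declare a role?';
    my $query-words = $query.lc.words.Set;

    # Score each chunk by naive word overlap with the query, keep the best 3.
    my @top = @chunks.map({ $_ => ($_.lc.words.Set ∩ $query-words).elems })
                     .sort(-*.value)
                     .head(3)
                     .map(*.key);

    # The retrieved chunks are then prepended to the LLM prompt.
    my $prompt = "Answer using only this context:\n"
               ~ @top.join("\n---\n")
               ~ "\n\nQuestion: $query";
    say $prompt;

A real setup would use embeddings rather than word overlap, but the shape is the same: retrieve, then prompt.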
As for generating documentation from code -- yes, it can be done. I have been asking people who are too lazy to write READMEs to do at least that. 14:10
I made like a dozen experiments one year ago. And my conclusion is that the technique/application is limited. (Meaning, the documentation is too literal.) 14:11
The new models might fare better -- my confidence about that is, say, ≈40%. 14:12
[Coke] My expectations there are very low, but I am happy to be surprised. 14:20
antononcube Here is an example documentation generated from Raku code: github.com/FCO/RakuCron/issues/1 . 14:22
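For the code-to-docs direction, that kind of experiment looks roughly like the following with antononcube's LLM::Functions package -- a sketch only, assuming an OPENAI_API_KEY in the environment; the module path is a placeholder:

    # Sketch, assuming the LLM::Functions API; not a definitive recipe.
    use LLM::Functions;

    # Placeholder path; point this at the module you want documented.
    my $code = 'lib/Some/Module.rakumod'.IO.slurp;

    my $doc = llm-synthesize([
        'Write user-facing documentation, with usage examples,',
        'for this Raku module; do not merely restate the code:',
        $code,
    ], e => llm-configuration('ChatGPT', model => 'gpt-4o'));

    say $doc;

The prompt wording is where the "too literal" failure mode gets fought; a bare "document this code" tends to produce a restatement of the signatures.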
👆 This is just an example, not a very serious attempt...
[Coke] the problem I anticipate with source-based generation of docs is that you might get things that correspond to types, routines... but you're not going to get "here's an overview on the OO meta and how you can use roles, when you should use private attributes..." 14:23
antononcube Yeah -- one should craft a complicated prompt to get overviews. 14:24
It depends on the programming language too. And/or on the problem domain. 14:25
[Coke] what is the prompt consuming to answer the question though? I find it very unlikely that the code of rakudo is the source of that.
I mean, we were specifically discussing Raku/Rakudo here. Might get better results for a different language (with different coding standards) 14:26
librasteve and with a much much larger base of training examples eg python/rust
antononcube Definitely, depends on the (i) model and (ii) the code properties. 14:27
For example, some of Gemini's models can take a million tokens -- whole textbooks can be put in.
If the Rakudo code is commented "well", certain higher level overviews or summaries can be attempted/derived with good results. 14:28
librasteve although i’ve been pleasantly surprised with ChatGPT raku “knowledge” so maybe raku is already above some minimal quantity training examples eg on GH/SO/rosetta
antononcube @librasteve Yeah, I think that is the case. Definitely, better since last year. 14:29
Meaning, often enough I get good code results with "gpt-4" and "gpt-4o".
And, again, the problem domain matters too. 14:30
If the algorithms have published pseudocode on Wikipedia, for example, it is much more likely to get working Raku code. 14:31
And this year LLM-generated Raku code looks much less like Perl.
librasteve lucs: I’m getting into App::Mi6 and you can tell it the location of your rakudoc in the dist.ini config … 14:32
scullucs @librasteve Hmm... I guess that means that once the module is uploaded to zef, the "right thing" gets done when a user installs the module. I wonder what that right thing is. 14:51
Maybe rakudoc looks at the dist.ini contents (I kind of hope not, but who knows). 14:52
[Coke] the rakudoc command line tool definitely needs some care. 14:53
librasteve yeah … it’s rendered to the gh README.md with mi6 build 15:26
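For reference, the relevant mi6 knobs look roughly like this -- section and option names as remembered from App::Mi6's README, and the doc/ path is just an illustrative example, not a required layout:

    ; dist.ini for App::Mi6 -- the filename value is an illustrative path
    name = Some-Module

    [ReadmeFromPod]
    ; source file that `mi6 build` renders to README.md
    filename = doc/Some/Module.rakudoc

Running `mi6 build` then regenerates README.md from that file, which is the rendering librasteve mentions above.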
ugexe If i ever get time to submit gist.github.com/ugexe/8caac54436b4...1d6fb4fb31 it would add the META6 field for mapping a module name to a different rakudoc file name 19:36
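To illustrate the shape such a mapping might take -- the actual field name would be whatever the (truncated) gist proposes; "pod" below is invented purely for illustration, alongside the standard "provides" section:

    {
        "name": "Some-Module",
        "provides": {
            "Some::Module": "lib/Some/Module.rakumod"
        },
        "pod": {
            "Some::Module": "doc/Some/Module.rakudoc"
        }
    }

With something like this in META6.json, `rakudoc Some::Module` could resolve to the .rakudoc file instead of falling back to the .rakumod source, which is what scullucs was asking about earlier.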
librasteve ++ the grant app 20:52