09:02 lizmat joined
rcmlz One way to solve missing zef is to have rakudo-star as a Debian package. Or use the "package A suggests package B" mechanism, so that when you install rakudo via apt it tells you that you most likely also want zef. 11:09
at the moment the Debian rakudo package does not suggest raku-zef. packages.debian.org/bookworm/rakudo 11:20
I do not understand the distinctions Debian makes between Recommends/Suggests/Enhances - maybe Enhances is even the right one 11:23
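For reference, a hypothetical sketch of what that relationship could look like in the rakudo package's debian/control binary stanza (the choice of field is exactly the open question above: apt installs Recommends by default but not Suggests, while Enhances would be declared in the opposite direction, by the raku-zef package):

    Package: rakudo
    # Hypothetical addition, not in the current Debian packaging:
    Suggests: raku-zef
    # or, if zef should be pulled in by default:
    # Recommends: raku-zef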
librasteve rcmlz: appreciate the insight - I think what is missing here is a commitment from a person to (re)build and maintain a build pipeline for all the major *nix PMs ... any volunteers? (PS. I have no idea if you need commit rights or membership of any of the various *nix PM repos) 12:24
@antononcube - o/ ... I am about to lose my raku LLM virginity - my goal is to take a list of strings and to cast them as first name / last name fields - and I reckon ChatGPT 3.5 + a raku LLM module may be a quick and easy way to do this (ideally I want my raku in a script file and to use my free ChatGPT 3.5 account) 12:27
can you give me a quick steer on how to start this journey, please?
antononcube @librasteve Sure! Here are three steps I envision: 1. Install "LLM::Functions" and "LLM::Prompts" 2. Use llm-synthesize to make an LLM request 3. Profit 12:56
librasteve ...
antononcube @librasteve Assuming your list of strings is @surrogates here is the llm-synthesize command: llm-synthesize([ 'Convert the following strings into a list of dictionaries with keys "first_name" and "last_name"', @surrogates, llm-prompt('NothingElse')('JSON') ], form => sub-parser('JSON'):drop) 12:59
librasteve Undeclared routine: sub-parser used at line 11 13:02
;-(
antononcube Hmmm... yeah, please include use Text::SubParsers; in your code. 13:03
So, you should have: use LLM::Functions; use LLM::Prompts; use Text::SubParsers; 13:04
(These are loaded by default in "Jupyter::Chatbook" sessions, so I forgot to mention those use statements above.) 13:05
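For reference, a minimal script-file sketch putting the pieces above together (assuming an OpenAI API key is available to LLM::Functions, e.g. via the OPENAI_API_KEY environment variable):

    # Minimal sketch of the llm-synthesize workflow described above.
    use LLM::Functions;
    use LLM::Prompts;
    use Text::SubParsers;   # provides sub-parser; only Chatbook sessions load it for you

    my @surrogates = 'John Smith', 'Kylie Minogue';

    # Ask for JSON only, then let the sub-parser turn the reply into Raku data.
    my $people = llm-synthesize([
        'Convert the following strings into a list of dictionaries with keys "first_name" and "last_name"',
        @surrogates,
        llm-prompt('NothingElse')('JSON')
    ], form => sub-parser('JSON'):drop);

    say $people;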
librasteve thanks for the quickstart! guess I better rtfm too 13:06
antononcube Did it work?
librasteve i'll let you know 13:14
antononcube It works -- just did it:
cdn.discordapp.com/attachments/768...6726d& 13:15
librasteve yeah - well I'm reading the docs
antononcube Any suggestions and feedback are welcome! 13:16
BTW, I would strongly suggest using the LLM functionalities via some sort of REPL. (The screenshot above is with VS Code.) 13:17
librasteve found the Jupyter plugin for Intellij ... just gotta upgrade 13:23
antononcube Hmm... yeah YMMV -- I cannot "reliably" open and start using Raku Jupyter notebooks with IntelliJ. 13:32
librasteve is there a Jupyter CLI option? 13:37
antononcube @librasteve I think you might have to look into jupytext. 13:38
jupytext.readthedocs.io/en/latest/
librasteve ok - thanks!
antononcube @librasteve But I do not use it that often -- it does not know about Raku. I prefer using "Text::CodeProcessing" for executing Markdown documents with Raku code. 13:39
@librasteve I was thinking -- a good start for workflows should be this post: rakuforprediction.wordpress.com/20...functions/ 13:40
BTW, that post has examples with unit objects creation and conversion. (With “Physics::Units”.) 13:41
librasteve input my @surrogates = 'John Smith', 'Kylie Minogue';
output [{"first_name": "John", "last_name": "Doe"} {"first_name": "Jane", "last_name": "Smith"}]
guess that's why I need the chatbook! 13:42
had to load some credits onto my OpenAI account
antononcube For this you can probably also use an example function. Let me make an example... 13:45
@librasteve my @surrogates = 'John Smith', 'Kylie Minogue'; my &fe = llm-example-function( to-json(@surrogates) => '[{"first_name": "John", "last_name": "Doe"} {"first_name": "Jane", "last_name": "Smith"}]'); &fe(to-json(['Keanu Reeves', 'Mark Wahlberg'])) 13:50
Again, run in VS Code: 13:51
cdn.discordapp.com/attachments/768...cc0ef&
librasteve [2] @0 14:00
├ 0 = {2} @1
│ ├ first_name => Keanu.Str
│ └ last_name => Reeves.Str
└ 1 = {2} @2
  ├ first_name => Mark.Str
  └ last_name => Wahlberg.Str
ddt &fe(to-json(['Keanu Reeves', 'Mark Wahlberg'])).&from-json;
awesome!!!
antononcube 💯 14:01
Specifying the "formatron" argument form => sub-parser('JSON') for &fe would apply the from-json parsing automatically. 14:06
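For reference, a sketch of that variant (assuming to-json comes from JSON::Fast, that llm-example-function accepts the form argument as described above, and with the example output adjusted so it actually matches the example input):

    use LLM::Functions;
    use Text::SubParsers;
    use JSON::Fast;          # assumed source of to-json

    my @surrogates = 'John Smith', 'Kylie Minogue';

    # One input => output pair shows the model the exact JSON shape wanted.
    my &fe = llm-example-function(
        to-json(@surrogates) =>
            '[{"first_name": "John", "last_name": "Smith"}, {"first_name": "Kylie", "last_name": "Minogue"}]',
        form => sub-parser('JSON')   # parse the JSON reply into Raku data automatically
    );

    # Returns Raku hashes directly, no manual .&from-json step needed.
    say &fe(to-json(['Keanu Reeves', 'Mark Wahlberg']));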
librasteve making some progress here ... the idea is to read from CSV, fix up the first name / last name, and write to a Google sheet 14:39
I'll post something in the coming day or two
antononcube 👍
librasteve really appreciate your quickstart help!
antononcube You are very welcome! 14:40
dr.shuppet I'm a contributor to a source-based distro called T2 SDE where I maintain the Rakudo toolchain (rakudo, nqp, moarvm, zef, and a few Raku packages) and never had to do anything besides updating the versions of rakudo + nqp + moarvm, ensuring they are in sync, and occasionally updating zef. Furthermore, the Rakudo toolchain updates can be fully automated, and indeed are mostly automated. I just do: $ for pkg 14:58
in moarvm nqp rakudo; do scripts/Update-Pkg $pkg <new-version>; scripts/Build-Pkg $pkg; scripts/Commit $pkg; done every once in a while, that is, if our automation fails to update some of the packages
So I'd say it's mostly having someone dedicate a few minutes per release 1. to test it and 2. to fight the distribution process, if applicable 14:59
lizmat dr.shuppet: what are the requirements to "fight the distribution process" 15:24
? 15:25
dr.shuppet lizmat: Depends on the distribution. For Fedora specifically, it's gotten easier since you have Packit: packit.dev, allowing you to automatically update packages when there is an upstream release on GitHub 15:29
Don't know much about other distributions 15:30
lizmat dr.shuppet thanks for the feedback! 15:33
16:13 elcaro_ left 19:28 habere-et-disper joined 20:53 lizmat_ joined 20:56 lizmat left 21:06 lizmat_ left, lizmat joined 23:12 habere-et-disper left