00:44
abraxxa-home left
01:32
jgaz left
01:59
Manifest0 left
02:39
guifa joined
02:50
hulk joined
02:51
kylese left
03:15
hulk left,
kylese joined
03:39
Aedil joined
SmokeMachine | I think ASTQuery is getting better and better... usercontent.irccloud-cdn.com/file/.../image.png | 05:45 | |
I think the next step is to have a good set of default groups... currently this is what we have (github.com/FCO/ASTQuery/blob/main/...L8-L60)... does anyone have any suggestions about what groups we should have? | 05:47 | ||
(that query (`.var-declaration[scope="has"]`) searches for all attributes) | 05:48 | ||
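As a minimal sketch of how such a selector might be run, assuming ASTQuery exposes an `ast-query($ast, $selector)` style entry point (the class in the example is made up; check the module's README for the actual interface):

    use experimental :rakuast;   # Str.AST may need this pragma depending on the Rakudo release
    use ASTQuery;

    # find every attribute, i.e. every 'has'-scoped variable declaration
    my $ast = 'class Point { has $.x; has $.y; my $z }'.AST;
    say ast-query($ast, '.var-declaration[scope="has"]');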
06:03
guifa left
06:27
teatwo joined
06:30
teatime left
06:56
Aedil left
07:14
Aedil joined
07:53
suman joined
librasteve | lizmat: happy to hear that :a:b:c in non-function-call settings should work … any idea where it is covered in the docs? anyway, I suggest any bug/issue fix should also update the docs, of course… | 07:57 | |
suman | q: # The input expression containing Raku code | 07:59 | |
my $input_code = '1+3; 2/3; say "Hello" '; | |||
tellable6 | 2023-11-28T22:08:40Z #raku <librasteve> suman: suggest you also try @array>>.num (ie. native Num) ... maybe also worth looking at the ingestion phase and coercing to Num|num when you do the csv parse or whatever | ||
suman | # execute and capture the output | ||
my $code = qqx{raku -e $input_code}; | |||
say $code | |||
camelia: # The input expression containing Raku code | |||
my $input_code = '1+3; 2/3; say "Hello" '; | |||
# execute and capture the output | |||
my $code = qqx{raku -e $input_code}; | |||
say $code | |||
camelia: my $input_code = '1+3; 2/3; say "Hello" '; my $code = qqx{raku -e $input_code}; say $code | 08:00 | ||
m: my $input_code = '1+3; 2/3; say "Hello" '; my $code = qqx{raku -e $input_code}; say $code | 08:02 | ||
camelia | WARNINGS for -e: Useless use of "+" in expression "1+3" in sink context (line 1) /bin/sh: 2/3: No such file or directory /bin/sh: say: command not found |
suman | m: my $code = '1+3; 2/3; say "Hello" '; my $output = shell("raku -e $code", :out).out.slurp; say $output | 08:05 | |
camelia | WARNINGS for -e: Useless use of "+" in expression "1+3" in sink context (line 1) /bin/sh: 2/3: No such file or directory /bin/sh: say: command not found |
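The failures above come from the shell, not from Raku: the unquoted interpolation in `qqx{raku -e $input_code}` lets /bin/sh split the code at the semicolons, so `2/3` and `say` get run as separate shell commands. A minimal sketch that avoids the shell entirely by passing the program as a separate argument to `run` (as librasteve notes later, the channel eval bot likely cannot spawn subprocesses at all, so this is for a local run):

    # run the compiler directly; each argument is passed as-is, no shell quoting needed
    my $input_code = '1+3; 2/3; say "Hello"';
    my $output = run('raku', '-e', $input_code, :out).out.slurp(:close);
    say $output;   # prints "Hello"; the other statements are sunk (their warnings go to stderr)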
08:20
suman left
09:33
Xliff left
09:37
sena_kun joined
10:17
Manifest0 joined
10:58
Aedil left
11:28
Sgeo left
12:13
guifa joined
librasteve | suman: your code works fine on my machine - with the same Useless use of warning ... I suspect that the raku eval here in the channel is configured so that you can't spawn a shell process via qqx | 13:37 | |
tellable6 | librasteve, I'll pass your message to suman | ||
antononcube | @librasteve Any "Inline::Python" related epiphanies while being in the pub? | 15:09 | |
15:46
Xliff joined
15:52
Aedil joined
librasteve | it now installs ok-ish … I have pushed a patch PR to GH that removes a couple of failing tests and adds new README advice for the config error I hit; in the meantime you should be able to zef install from the GH librasteve fork | 16:43 | |
guifa | antononcube: which module was it that would run local LLM modules? | 17:29 | |
antononcube | See “WWW::LLaMA”. The generic modules “LLM::Functions” and “LLM::Prompts” also apply. Chatbooks can also be used. | 17:31 | |
guifa | ah right | ||
Now I just need to figure out the simplest way to rev up the server. WWW::LLaMA doesn't handle the model loading and all itself, right? | 17:32 | ||
antononcube | See : www.youtube.com/watch?v=zVX-SqRfFPA | ||
You have to download the llamafiles yourself. | 17:33 | ||
guifa | alright -- thanks | 17:34 | |
oh wow I didn't realize the llama files were universal executables | 17:36 | ||
that's....really cool | |||
antononcube | Yeah! | ||
There is a request from @rcmlz to implement an Ollama front end. I will do it at some point, but it is of low priority for me. | 17:38 | ||
guifa | I'm using it for a project in a class, but the professor would prefer me to give him a single install script and/or very simple install instructions | ||
antononcube | I.e. I strongly suspect you have a module-client like this in mind: ollama.com | 17:39 | |
guifa | so it sounds like I could put the llama file in resources and go from there | ||
yeah -- right now the biggest thing is just to have...something :) I'll perfect it if I ever decide to release | 17:40 | ||
antononcube | I think a script that does this is fairly easy to do. | ||
guifa | yeah. Hoping I could get it down to "install rakudo" and "install this module" :) | 17:41 | |
antononcube | You can ask an LLM to generate that script, BTW. | 17:44 | |
17:44
zenmov joined
guifa | ha there are still some things I surprisingly take enjoyment writing :) | 17:45 | |
antononcube | Ok. But this also involves writing (in Raku). | 17:46 | |
guifa | Plus, my project is supposed to take me X number of hours | 17:47 | |
some of the other parts have gone smoother than expected | |||
haha | |||
antononcube | @guifa This is what I have in mind with "writing up an LLM script generation": | 17:52 | |
cdn.discordapp.com/attachments/633...d9d21& | |||
guifa | nice | 17:53 | |
17:54
zenmov left,
zenmov joined
antononcube | Even if it does not work, it's a good start for making the actual script. More instructions can be added to make the files executable, launch them, etc. | 17:54 | |
And that also means that you do code writing. 🙂 | |||
Here is the generation with the additional prompt element: > "Make the downloaded model executable. Have an option to start the downloaded model or not." | 17:57 | ||
cdn.discordapp.com/attachments/633...12c3e& | |||
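Roughly what a script generated from that prompt boils down to, as a hand-written sketch; the URL, file name, and sub name are placeholders, and curl on PATH is assumed:

    # download a llamafile, make it executable, and optionally launch it
    sub fetch-llamafile(Str $url, IO() $dest, Bool :$start) {
        run 'curl', '-L', '-o', ~$dest, $url;   # assumes curl is available
        $dest.chmod(0o755);                     # make the downloaded model executable
        run $dest.absolute if $start;           # optionally start the downloaded model
    }

    # placeholder URL; substitute a real llamafile release
    fetch-llamafile 'https://example.com/model.llamafile', 'model.llamafile', :start;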
guifa | yeah | ||
I think one of my frustrations with LLMs is they're always really good but absolutely require that just-in-case human intervention. | 17:58 | ||
was a bit saddening when I tried using it to adjust the difficulty of texts for reading -- generally quite good for English | |||
but the more inflected a language was, the more its performance dropped, and unfortunately for teaching reading, you need accuracy to be high. Great for helping teachers save a lot of time but not for 100% automated performance yet | |||
antononcube | No, it is just a tool to speed up work. | 18:00 | |
For programming with LLMs, very often one has to decide: > Should I debug this (possibly mediocre) code generated by an LLM, or program the thing from scratch? | 18:02 | ||
guifa | yes exactly | 18:04 | |
oh this is kinda cool. I figured out a simple way to let a role do a TWEAK without it being eaten up by its implementor | 18:11 | ||
method new(|args) { my $new = self.bless: args; LEAVE $new.ROLE-TWEAK: args; $new }; method ROLE-TWEAK { ... } | 18:14 | ||
it always gets called after the class TWEAK, which in my case is a benefit | 18:15 | ||
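Expanded into a self-contained sketch of that pattern (names are illustrative; bless's arguments are flattened with |args so named arguments reach the constructor, and the role supplies a default ROLE-TWEAK body instead of the stub so the example runs on its own):

    role Logged {
        method new(|args) {
            my $new = self.bless(|args);
            LEAVE $new.ROLE-TWEAK(|args);   # fires when new() exits, i.e. after the class's own TWEAK
            $new
        }
        method ROLE-TWEAK(|) { note "role-level tweak for {self.^name}" }
    }

    class Widget does Logged {
        has $.name;
        submethod TWEAK { note "class-level TWEAK" }
    }

    Widget.new(name => 'demo');   # prints "class-level TWEAK", then the role-level tweak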
18:32
silug4 joined,
silug left,
silug4 is now known as silug
18:56
zenmov left
19:00
Aedil left
ab5tract | guifa: slick! | 19:27 | |
I wonder if switching it to ENTER would always call ahead of the class TWEAK? | 19:28 | ||
in case one needed the opposite dynamic | |||
19:32
avuserow joined
[Coke] | "i don't know who needs to hear this right now", but make sure you don't have any cloud stuff running you forgot about. | 20:49 | |
(my blin VM was up for a month doing nothing, oops) | 20:50 | ||
20:52
Sgeo joined
librasteve | ;-( | 20:59 | |
fwiw I go raws-ec2 nuke ...raku.land/zef:librasteve/CLI::AWS::EC2-Simple | 21:01 | ||
21:58
sena_kun left
22:50
guifa_ joined
22:51
guifa left
22:55
guifa_ left