00:49 Guest13 joined 01:10 Guest13 left 01:37 kylese left 01:38 kylese joined 02:15 kylese left, kylese joined 02:30 kylese left 02:34 kylese joined 03:38 stanrifkin joined 03:41 stanrifkin_ left 03:42 floyza joined 03:55 Aedil joined 04:05 guifa left 04:52 floyza left 04:59 kjp left 05:00 kjp joined 05:01 kjp left, kjp joined 10:24 Sgeo_ left 10:58 sena_kun joined 11:52 abraxxa-home joined 11:53 abraxxa-home left 12:06 abraxxa-home joined 13:16 guifa joined 14:27 abraxxa-home left
guifa antononcube: for your LLM stuff, do any of them allow for local hosting of the models? Wanting to use a lighter weight model but not sure the best route to integrate into Raku 14:37
antononcube @guifa Yes. Please see "WWW::LLaMA". www.youtube.com/watch?v=zVX-SqRfFPA 14:44
guifa sweeeeeeet 14:45
antononcube I have used most of the models listed here: github.com/Mozilla-Ocho/llamafile .
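As a rough illustration of the local-hosting route: a minimal sketch using LLM::Functions against a llamafile server on its default port. The 'LLaMA' configuration name and the base-url adverb are assumptions based on the package's OpenAI-style interface, not verified against WWW::LLaMA's docs.

    use LLM::Functions;

    # llamafile serves an OpenAI-compatible API on http://127.0.0.1:8080 by default;
    # the configuration name 'LLaMA' and the base-url adverb are assumed here
    my $conf = llm-configuration('LLaMA', base-url => 'http://127.0.0.1:8080/v1');

    # A reusable LLM function bound to the local model; the argument is appended to the prompt
    my &summarize = llm-function('Summarize the following text in one sentence:', e => $conf);

    say summarize('Raku is a multi-paradigm language in the Perl family ...');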
guifa I figured you had already handled this stuff
Going to be using Gemma. Not the best model, but it works well enough for my proof of concept for a class project I'm doing 14:46
but it at least runs on my computer at a respectable speed ha
antononcube 😎 Yeah!! There is also a special "llama" magic cell, if you have the Raku chatbook installed.
I have not used Gemma... Only Gemini. 14:47
Well, there is a Raku package "WWW::Gemini". Again, Gemini "inherits" PaLM, so "palm" and "gemini" can be used in chatbooks. 14:49
tbrowder antononcube: could you please take a look at S32-num/log.t in github.com/Raku/roast, per roast issue #862 14:51
or any other sharp mathematician like librasteve or [Coke] or 14:52
vrurg
antononcube @tbrowder I am a very blunt mathematician, but I will take a look. 14:53
tbrowder well, yr bluntness is hidden from me! thanks!
did you fare well in Milton? 14:54
antononcube @tbrowder Yes, mostly... 14:55
tbrowder good, my son in Cocoa Beach had lots of debris and some yard flooding but not up to foundation 14:56
good thing no big trees around... 14:57
antononcube Yeah... lots of twigs and leaves here... 14:58
tbrowder note i'm working on adding log2 tests, but that note about adding complex-number tests has been there since at least 2019; just curious whether the existing imaginary-number tests are sufficient. i can add the necessary tests if you say the existing tests are NOT adequate; otherwise i will delete those notes 15:02
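For reference, a rough sketch of what principal-branch complex-log tests (plus a couple of log2 checks) could look like in roast style; this is illustrative only, not the actual S32-num/log.t content:

    use Test;
    plan 6;

    # Principal branch: log(i) = (pi/2)i and log(-1+0i) = (pi)i
    my $zi = log(1i);
    is-approx $zi.re, 0,    'log(1i): real part';
    is-approx $zi.im, pi/2, 'log(1i): imaginary part';

    my $zm = log(-1 + 0i);
    is-approx $zm.re, 0,  'log(-1+0i): real part';
    is-approx $zm.im, pi, 'log(-1+0i): imaginary part';

    # log2 on reals, alongside the planned log2 additions
    is-approx log2(8), 3, 'log2(8) is 3';
    is-approx log2(2 ** 10), 10, 'log2(2 ** 10) is 10';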
guifa antononcube: I'll be running these as part of a web server. 15:22
basically creating a cool modular system to provide reading-comprehension activities across languages
antononcube @guifa Yeah, that is a good use case for LLMs. 15:25
guifa some tasks will still go to other tools, like lemmatizing every word and looking it up in a word-frequency list to try to highlight less common words 15:26
an LLM could maybe do okay with that, but other libraries can do it much faster and more precisely
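A sketch of that non-LLM piece, assuming a hypothetical frequency-list file with one word per line, most frequent first; the file name and cutoff are made up for illustration:

    # word => rank (0-based), from a hypothetical 'frequency-list.txt'
    my %rank = 'frequency-list.txt'.IO.lines».lc.antipairs;

    sub highlight-uncommon(Str $text, UInt :$top = 5000) {
        $text.words.map(-> $w {
            my $key = $w.lc.subst(/<-alpha>+/, '', :g);        # strip punctuation
            (%rank{$key} // Inf) < $top ?? $w !! "*$w*"        # mark words outside the top N
        }).join(' ')
    }

    say highlight-uncommon('The recalcitrant porcupine sauntered off.', :top(2000));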
antononcube You might explore using different pre-conditioning prompts -- I would say the Raku LLM functionalities have the perfect architecture / DSL for this kind of experiment.
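For instance, a pre-conditioning prompt can be baked into a reusable llm-function; a minimal sketch, assuming LLM::Functions with whatever default configuration is already set up (pass e => llm-configuration(...) to pick a specific model):

    use LLM::Functions;

    my $pre = 'You are a reading-comprehension tutor. Respond only with three '
            ~ 'simple questions about the supplied passage.';

    # The block builds the full prompt from the passage passed in at call time
    my &questions = llm-function({ $pre ~ "\nPassage:\n" ~ $^passage });

    say questions('Dana planted three apple trees behind the old barn before the rain began.');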
guifa Yeah -- part of what I'm doing now is really trying to craft the prompts
but already getting good results 15:27
antononcube LLMs use "tokens", and tokenization is similar to "lemmatizing".
guifa Yeah, very similar but just enough different to be frustrating to use for a language activity
lol
but I'm merging even different LLMs
like we know having imagery is an important support for reading. So off to Diffusion or similar to generate pretty pictures, from a descriptive prompt generated by an LLM based on whatever the text is 15:28
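A sketch of that chain, assuming WWW::OpenAI's image-creation wrapper as the picture back end; the openai-create-image adverbs shown are assumptions, and any image API could be substituted:

    use LLM::Functions;
    use WWW::OpenAI;

    my $passage = 'The lighthouse keeper rowed out before dawn to check the nets.';

    # 1. The LLM turns the passage into a short scene description
    my $scene = llm-synthesize([
        'Describe, in one vivid sentence suitable for an image generator:',
        $passage,
    ]);

    # 2. Hand the description to the image model (adverb names assumed)
    my $url = openai-create-image($scene, size => '512x512', response-format => 'url');
    say $url;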
antononcube A year and a half ago I played with LLMs having a dialogue with each other. Those were "limited experiments", and the results were not that interesting. Things should have changed a lot since then, though. 16:13
16:16 lizmat_ joined 16:19 lizmat left 16:25 guifa left
librasteve tbrowder: I have updated the Issue with some tests for log of complex numbers 16:25
with a little help from my friend ChatGPT ;-)
16:41 guifa joined
tbrowder thnx! 16:47
16:50 guifa left 17:12 bdju left 17:14 bdju joined 17:30 Aedil left 17:36 bdju left, bdju joined 17:45 maylay left 17:57 maylay joined 18:13 Sgeo joined 19:13 dpk left 19:14 dpk joined 19:51 stanrifkin left 21:07 stanrifkin joined 21:49 Tirifto left 21:50 Tirifto joined 22:13 lizmat_ left, lizmat joined 22:54 sena_kun left 23:45 guifa joined
guifa antononcube: yeah, the conversation ones are a bit tricky. I tried some where I'd set a global prompt to correct any errors, but it wasn't entirely successful at locating or correcting them, and it had a number of smaller errors / awkward phrasings of its own 23:48
antononcube @guifa Have you used / do you use “LLM::Prompts” ? 23:58
That package has the prompt “CopyEdit”.
Well, I am assuming you are using LLMs via Raku… 23:59
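The CopyEdit prompt would slot into the usual llm-prompt / llm-synthesize pattern; a minimal sketch, assuming the list form of llm-synthesize:

    use LLM::Functions;
    use LLM::Prompts;

    my $draft = 'Their is three misteaks in this sentance.';

    # Prepend the packaged "CopyEdit" prompt to the text to be corrected
    say llm-synthesize([ llm-prompt('CopyEdit'), $draft ]);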