This channel is intended for people just starting with the Raku Programming Language (raku.org). Logs are available at irclogs.raku.org/raku-beginner/live.html. Set by lizmat on 8 June 2022.

00:05 ACfromTX joined
03:17 stanrifkin_ joined
03:20 stanrifkin left
06:03 soverysour joined, soverysour left, soverysour joined
06:58 disbot2: <ng0177> How to make gpt-4o work? Thanks a lot.
      <ng0177> cdn.discordapp.com/attachments/768...eb9cb&
07:08 <antononcube> @ng0177 Can you use the CLI scripts of “WWW::OpenAI”?
07:14 <antononcube> I strongly suspect you have not set the API key.
      <antononcube> See here: github.com/antononcube/Raku-Jupyte...a-api-keys
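A minimal sketch of the suggestion above, assuming WWW::OpenAI is installed and picks up the key from the OPENAI_API_KEY environment variable (verify the variable name and the CLI script names against the module's README; the prompt text is only illustrative):

    # In the shell, before starting Raku (the key value is a placeholder):
    #   export OPENAI_API_KEY='...'
    # The module also installs a CLI script with the same name as the routine:
    #   openai-playground 'How many people are of working age in Japan?'
    use WWW::OpenAI;
    say openai-playground('How many people are of working age in Japan?');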
07:31 soverysour left
07:44 disbot2: <ng0177> Yes, I created one (they are quite long) and assigned it to an environment variable. However, another issue seems to be this:
      <ng0177> cdn.discordapp.com/attachments/768...a0b6f&
07:46 <antononcube> Why do you attempt to use openai-chat?
07:47 <antononcube> See / use openai-playground:
      > openai-playground How many people are work age in Japan
      As of 2021, there are approximately 67 million people in Japan who are of working age, which is defined as people between the ages of 15 and 64.
      <ng0177> Maybe it is not needed.
      <ng0177> cdn.discordapp.com/attachments/768...9ad75&
07:49 <antononcube> That is a magic cell spec in a Jupyter notebook.
07:52 <ng0177> Now, all is understood 🙂 BUT I have "insufficient quota"...
      <antononcube> If the Jupyter notebook (environment) cannot find the API key, you can specify it in the magic cell with the argument "api-key".
07:55 <antononcube> The magic cells do not print much diagnostics, but issues can be diagnosed by using llm-synthesize or the actual LLM service functions (like openai-playground or gemini-prompt).
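A sketch of the llm-synthesize diagnostic route mentioned above, assuming the LLM::Functions package is installed; the llm-configuration call and the 'OpenAI' service name should be checked against that package's README, and the api-key argument mirrors the magic-cell argument described above:

    use LLM::Functions;
    # Pass the key explicitly, so a missing environment variable surfaces as a
    # clear error instead of a silent failure inside the notebook magic:
    my $conf = llm-configuration('OpenAI', api-key => %*ENV<OPENAI_API_KEY>);
    say llm-synthesize('How many people are of working age in Japan?', e => $conf);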
08:06 <ng0177> It looks like I need a paid account even though I have hardly used GPT?
      <ng0177> cdn.discordapp.com/attachments/768...55068&
08:07 <antononcube> Well, you can download the llamafile models and use "WWW::LLaMA".
08:09 soverysour joined, soverysour left, soverysour joined
08:09 disbot2: <antononcube> If you use the argument --format=json you should be able to see more details of openai-playground output.
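For reference, this is the shape of the CLI call being suggested, using the script and flag named above; the prompt text is only illustrative:

    openai-playground --format=json 'How many people are of working age in Japan?'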
08:50 soverysour left
09:18 disbot2: <ng0177> Should it be possible to use Gemini instead of GPT to avoid the token limit?
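A sketch of the Gemini route being asked about, assuming the WWW::Gemini package is installed, that gemini-prompt (the routine name mentioned at 07:55 above) is what it exports, and that the API key is picked up from whichever environment variable its README documents:

    use WWW::Gemini;
    # gemini-prompt is the name used earlier in this log; confirm it against the module:
    say gemini-prompt('How many people are of working age in Japan?');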
10:13 soverysour joined
11:02 soverysour left
11:26 soverysour joined
11:55 soverysour left, soverysour joined
12:08 stanrifkin_ left
12:23 disbot2: <antononcube> I have a “tester” Gemini tier account; it has its limitations too.
12:24 <antononcube> Running LLaMA models “locally” on your own computer is the best way to avoid usage limits.
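A minimal sketch of the local-model route, assuming a llamafile has been downloaded from github.com/Mozilla-Ocho/llamafile; the model file name below is illustrative, and llama-playground is an assumed routine name mirroring openai-playground above, so check the WWW::LLaMA README for the actual interface:

    # In the shell first: make the downloaded llamafile executable and start it.
    # It serves the model locally, so there is no API key and no usage quota.
    #   chmod +x mistral-7b-instruct.llamafile
    #   ./mistral-7b-instruct.llamafile
    use WWW::LLaMA;
    say llama-playground('How many people are of working age in Japan?');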
12:53 soverysour left
13:05 soverysour joined, soverysour left, soverysour joined
13:11 disbot2: <ng0177> I will dig into it. Thanks a lot!
14:02 <antononcube> @ng0177 I really advise taking a look at llamafile: www.youtube.com/watch?v=zVX-SqRfFPA
14:09 soverysour left
14:45 soverysour joined, soverysour left, soverysour joined
14:58 soverysour left
16:58 stanrifkin joined
17:00 stanrifkin left
17:36 soverysour joined, soverysour left, soverysour joined
19:46 soverysour left
20:04 guifa_ left
22:20 habere-et-disper joined
22:25 soverysour joined, soverysour left, soverysour joined
22:31 soverysour left