🦋 Welcome to the MAIN() IRC channel of the Raku Programming Language (raku.org). Log available at irclogs.raku.org/raku/live.html . If you're a beginner, you can also check out the #raku-beginner channel! Set by lizmat on 6 September 2022. |
antononcube | @.vushu In case you are curious -- right now I am using "Proc::ZMQed" to connect to Wolfram Engine (WE) and generate chess position images with WE and then get them in Raku REPL session. | 00:00 | |
.vushu | sounds really cool, I have no clue about AI | 00:02 | |
@antononcube I will some time try to see how I can generate a png, not what library to use | 00:03 | ||
antononcube | I tend to be very dogmatic about AI, so my rants about it should be taken with scepticism. | ||
ab5tract | vushu: I always add a dot to your name because it shows through the bridge as being part of your nick (or whatever discord calls it.. user name?) | 00:04 | |
wanted to make sure you got the ping :) | |||
antononcube | To be honest, I did not consider using chess as an example, but people seem to be impressed by those examples, so I decided to put in one or two. | 00:05 | |
ab5tract | the issue with the audio stream thing is related to NativeCall missing an abstraction, if I remember correctly? | ||
.vushu | ab5tract isn't it because MoarVM doesn't recognize the callback from another thread? | 00:07 | |
either I make an abstraction where I spawn the audio thread myself or I modify MoarVM | 00:08 | ||
ab5tract | I must have misremembered the error message | 00:09 | |
.vushu | Spawning the audio thread isn’t trivial since raylib is using an external library called miniaudio which in turn spawns the thread | ||
ab5tract | sounds like modifying moarvm is the answer ;) | 00:10 | |
.vushu | yeah I think that's the answer but I have no idea how to do that 😂 | 00:11 | |
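(A minimal NativeCall sketch of the problem being described here, assuming a made-up C library "fakeaudio" with a function start_audio; only the shape matters: Raku hands a callback to C, and the C side -- like miniaudio -- fires it from a thread it spawned itself, which MoarVM does not expect.)

    use NativeCall;

    # Hypothetical C signature: void start_audio(void (*cb)(int32_t frames));
    sub start_audio(&cb (int32 --> void)) is native('fakeaudio') { * }

    # MoarVM handles this callback fine on threads it knows about; when the
    # library invokes it from its own audio thread, the VM has no context
    # for it -- which is the failure being discussed above.
    start_audio(-> int32 $frames { say "got $frames frames" });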
antononcube | Hmm... would you be interested in a music retrieval demo and/or composition demo with AI and MIDI ? | ||
ab5tract | One of my goals in the new year is to start a Rakudo Delving Society where anyone who's interested can join in some collective code archaeology, with the eventual purpose of exploring, understanding, and documenting every nook and cranny of the Rakudo compiler and MoarVM runtime. | 00:15 | |
antononcube | @ab5tract Noble endeavor. (I am not interested, though.) | 00:17 | |
ab5tract | that's too bad. I was hoping it might entice you as an AI tooling opportunity | 00:18 | |
antononcube | And I was hoping people would be enticed to use the AI tools I talked about to investigate nooks and crannies of Raku. 🙂 | 00:19 | |
It should not be difficult to design certain prompts and workflows just for that, though. | |||
ab5tract | maybe we can meet in the middle somewhere :) I know you've tried to break it down a bunch of times but it still goes over my head. | 00:20 | |
maybe now that I have a non-trivial use case in mind, it will become less diffuse | 00:21 | ||
.vushu | ab5tract I'm definitely interested in learning more about the compiler 😁 | ||
antononcube | Compelling examples are hard to find. People of this particular community have interests that do not overlap. (Except on "common" language topics.) | 00:22 | |
For example -- when it comes to Raku's compiler -- I would be more interested to "try to see without looking." | 00:22 | ||
nemokosch | it's great to hear there is somebody trying to make compiler stuff more accessible... there is just no way around it | 00:23 | |
today as well, Patrick asked a question - not sure whom he asked. Nobody knows anything about the topic. | 00:23 | ||
antononcube | 🤣 | ||
ab5tract | that sounds sufficiently mystical to pique my interest | ||
"try to see without looking" | 00:24 | ||
antononcube | @ab5tract I am writing an explanation right now. | ||
That, of course, is how statisticians like to do things. (BTW, I distrust them.) | 00:25 | ||
ab5tract | nemokosch: yeah, it's a crappy feeling to want to fix problems but not have any way to get decent bearings on the problem | ||
antononcube | With "try to see withot lookin" I mean, making lots descriptive statistics over the code and making tests that are narrated or explained by LLMs and verify certain expected behavior. | 00:26 | |
nemokosch | I can tinker with less obscure parts of the core but that's about it | ||
also, if somebody knows how multiple dispatch resolution works... | |||
it's hard to say positive things under some circumstances, and it's not okay at all that multi dispatch resolution is black magic. That's literally basic user-facing behavior that should be taught to "beginners" | 00:27 | ||
but how to teach it when nobody knows it? | |||
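(The observable, beginner-facing part does fit in a few lines -- candidates that accept the arguments are ordered by type narrowness and the narrowest one wins; the genuinely murky part is how Rakudo resolves this internally, with constraints and tie-breaking on top. A small illustration, with made-up names:)

    multi describe(Int     $n) { "an integer: $n"  }
    multi describe(Numeric $n) { "some number: $n" }
    multi describe(Str     $s) { "a string: '$s'"  }
    multi describe(Any     $x) { "something else: { $x.raku }" }

    say describe(42);       # an integer: 42     (Int beats Numeric and Any)
    say describe(4.2);      # some number: 4.2   (a Rat is Numeric but not Int)
    say describe('hi');     # a string: 'hi'
    say describe([1, 2]);   # something else: [1, 2]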
antononcube | @nemokosch Try to use AIs !! 🙂 | 00:28 | |
ab5tract | antononcube: I'd like an LLM to digest all of roast.git and then I'd like to discuss Raku with that context. is that even possible? | 00:30 | |
antononcube | Yes, with the newest (preview) models -- they have 128k tokens as intake. Also, one can make custom "GPTs" based on certain collections of documents. | 00:31 | |
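(A hedged sketch of what "digesting" part of roast could look like in practice: put one spec file into the prompt and ask a large-context model about it over OpenAI's chat-completions HTTP API. The use of Cro::HTTP::Client and the local roast path are assumptions; the WWW::OpenAI module that comes up later in this log wraps the same endpoint more conveniently.)

    use Cro::HTTP::Client;

    # Illustrative path into a local checkout of github.com/Raku/roast
    my $spec-file = 'roast/S02-types/range.t'.IO;
    my $question  = 'Which behaviours of Range in boolean context does this spec file pin down?';

    my $response = await Cro::HTTP::Client.post(
        'https://api.openai.com/v1/chat/completions',
        headers      => [ Authorization => "Bearer %*ENV<OPENAI_API_KEY>" ],
        content-type => 'application/json',
        body         => {
            model    => 'gpt-4-1106-preview',   # the 128k-token preview model mentioned later
            messages => [
                { role => 'system', content => 'You answer questions about Raku spec tests.' },
                { role => 'user',   content => $question ~ "\n\n" ~ $spec-file.slurp },
            ],
        },
    );

    say (await $response.body)<choices>[0]<message><content>;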
ab5tract | ok, interesting. | 00:32 | |
ab5tract | nemokosch: your point is well taken. I'm hoping to address it the only way I see it working... getting a group of minds together on it. | 00:35 | |
and _maybe_ also rewriting (portions of) moar in Zig in order to drive comprehension | 00:36 | ||
antononcube | @ab5tract We are talking too abstractly about LLMs for Raku code / coding here. Ideally, you should try to use the Raku code writer prompt and see how you can use that kind of interaction. | 00:39 | |
ab5tract | antononcube: it's really late and like I said, the AI stuff doesn't just click with me, so I have no idea what your suggestion actually looks like or what it would be effective at solving | 00:43 | |
ab5tract | but I look forward to getting into some specifics with you about it | 00:43 | |
antononcube: regarding generative examples. have you considered GL shaders? it might be cool to see some of those created by an AI | 00:45 | ||
nemokosch | being able to write moar code would also be kind of a game changer for sure | ||
ab5tract | yeah, opening that up to a wider contributor base is imperative at this point | 00:46 | |
it's kind of interesting too, considering Zig's stdlib is considerably more batteries-included than C's. Could these essential data structures and other functionalities prove useful in the runtime implementation? | 00:51 | ||
okay, that's the final thought from me tonight. take it easy all. | 00:52 | ||
antononcube | I do not think I have heard of GL shaders before. | 00:56 | |
I assume this link is relevant: learnopengl.com/Getting-started/Shaders | 00:57 | ||
nemokosch | good night | 01:11 | |
antononcube | You too! | 01:15 | |
Geth | advent/main: 440b1a1203 | (Elizabeth Mattijsen)++ (committed using GitHub Web editor) | raku-advent-2023/authors.md: Claim the final slot | 10:13
librasteve | out of curiosity, why rewrite (parts of) MoarVM in Zig and not Rust? (also the MoarVM repo has 370k loc ... so I would suggest that there is a firm and agreed team objective) | 11:34 | |
nemokosch | I suppose because Zig is way simpler, has out-of-the-box cross compilation toolchain and overall much smoother integration with C | 11:56 | |
.vushu | Why not C++? I think it's a very viable choice, since you can incrementally port stuff to C++ and still compile the C code with a C++ compiler | 12:00 | |
lizmat | I think historically C++ was not chosen because C++ compilers were not available on many platforms | 12:04 | |
and a dislike of C++ by the founder of MoarVM :-) | |||
.vushu | Today C++ is widely supported. Ah yeah, many people dislike C++; I use it at work and I find it quite pleasant to work with 😄 | 12:05 | |
leont | C++ in practice is not a language but a family of dialects, much like Arabic | 12:08 | |
I like some dialects of C++, I hate others. | |||
nemokosch | Like Raku itself | 12:09 | |
Except it has a standard and experts | 12:10 | ||
leont | librasteve: honestly VMs are dirty. I'm pretty sure that if you write one in Rust you'll end up using so much unsafe code it won't feel like Rust anymore. | ||
nemokosch | In the meantime, Zig is the polar opposite | ||
Dull and just gets the job done | |||
leont | VMs might be the only application type where C is a much better choice than C++; what you actually want is more like a portable assembly than a high level language in many ways. | 12:11 | |
.vushu | If you should use C++, then please only use modern C++, not C-like C++ | ||
LLVM is written in C++ | |||
and every new language is using LLVM | 12:12 | ||
leont | LLVM is not a VM in the MoarVM sense, it's really more of an intermediate representation | ||
You don't run LLVM code | |||
nemokosch | I'm not sure what that's supposed to mean | 12:13 | |
But moarvm is in a pretty darn horrible state as of now | |||
That's a given | |||
leont | It's a VM, they're supposed to be horrible | ||
It's where you put all the horribleness so that the rest of your stack doesn't have to be | 12:14 | ||
nemokosch | no, I mean it's not good | 12:16 | |
it's still dog slow, it still has apparent memory leaks, it still crashes and leaks undecipherable error messages | |||
it's kind of the elephant in the room | 12:17 | ||
and we are talking about technology that 5 people on the whole planet have sufficient competence in, and like 2 of them do some housekeeping, the others not even that | 12:18 | ||
if there were anyone in the community with competence in designing a runtime, the rational choice would be to just ditch MoarVM at this point - but there isn't | 12:19 | ||
any attempt to "torture" its code somehow has to be appreciated a lot | 12:20 | ||
.vushu | It could be a fun project to make a new VM for Raku 😀 | 12:21 | |
nemokosch | this reminds me... do you remember this funny issue where if $some-range { ... } and if so $funny-range { ... } didn't do the same thing? it was a couple of weeks ago | 12:23 | |
(changing the variable name on the way was accidental 😅 ) | 12:24 | ||
.vushu | Doesn't sound good 😅 | 12:25 | |
lizmat | the problem was that Range.Bool was checking for concreteness of the Range object | ||
in 6.e, it will check if it produces at least one value | 12:26 | ||
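(A rough standalone illustration of the two behaviours lizmat describes -- not the actual Rakudo source, just a toy class standing in for Range.)

    class DemoRange {
        has $.min;
        has $.max;

        # 6.d-era behaviour: any concrete instance boolifies to True,
        # even one that would produce no values
        method Bool-old() { self.defined }

        # 6.e behaviour: True only if the range produces at least one value
        method Bool-new() { so $!min <= $!max }
    }

    say DemoRange.new(min => 5, max => 1).Bool-old;   # True
    say DemoRange.new(min => 5, max => 1).Bool-new;   # False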
nemokosch | more or less; the way I remember it is that the bool check checked .elems which could turn into a silent False value on failure | 12:27 | |
but anyway, it turned out that the former piece of code doesn't use a high-level coercion but some NQP-level magic that has little knowledge of core settings | 12:28 | ||
lizmat | ?? | ||
nemokosch | I couldn't actually find the backing code but it's very possible it was implemented straight in the VM some way | ||
lizmat | Ranges are a HLL thing, I don't think there's any specific VM code for that ? | 12:29 | |
nemokosch | implicit boolish coercions overall | 12:30 | |
the story is telling either way: either there was no easily applicable fix to invoke the actually valid core setting - which is probably an architectural problem - or actually there was, except there was nobody to actually trace it down and fix it | 12:31 | ||
lizmat | apparently there were plenty of people to talk about it :-( | ||
nemokosch | to be honest, this also seems like the kind of thing that could be fixed for good in RakuAST: if should just mandate a boolish coercion in the HLL, not escalate anything towards the VM | 12:34 | |
lizmat | but that was exactly the problem: | 12:38 | |
m: dd (1..5).Bool | |||
camelia | Bool::True | ||
lizmat | m: dd (5..1).Bool | ||
camelia | Bool::True | ||
lizmat | m: use v6.e.PREVIEW; dd (5..1).Bool | ||
camelia | Bool::False | ||
lizmat | m: use v6.e.PREVIEW; dd (1..5).Bool | ||
camelia | Bool::True | ||
nemokosch | that's not a problem | 12:39 | |
the problem is that nqp::istrue keeps reporting 1 for (5..1) | |||
even with use v6.e.PREVIEW | 12:40 | ||
and the even bigger problem is that this is what high-level if's care about, for some reason | |||
so that's two wrongs that add up into one wrong at the end of the day | 12:41 | ||
inb4 no, nqp::istrue is apparently not a defined-check; it can return 0 for empty string and stuff like that | 12:42 | ||
(if that wasn't the case, if '' { ... } or if 0 { ... } would have been faulty the whole time) | 12:43 | ||
lizmat | not true: | 12:45 | |
m: use v6.e.PREVIEW; use nqp; dd nqp::istrue(5..1) | |||
camelia | 0 | ||
nemokosch | then this has been fixed | ||
lizmat | it has been fixed because the Range.Bool was fixed | 12:46 | |
nemokosch | well, double-fixed | ||
because it was already right in the HLL | |||
and that in itself is rather alarming | |||
one can check the behavior in 2023.09 for example, that's what I have installed | 12:48 | ||
say so 5..1 is False, nqp::say(nqp::istrue(5..1)) is 1 | |||
so then, where did the two behaviors part? | |||
lizmat | I originally fixed it in 6.d as well, but that caused ecosystem breakage | 12:49 | |
so I moved it to 6.e | |||
nemokosch | bisectable: from=2023.09 use v6.e.PREVIEW; use nqp; nqp::say(nqp::istrue(5..1)); | 12:50 | |
bisectable6 | nemokosch, Will bisect the whole range automagically because no endpoints were provided, hang tight | ||
nemokosch | there were endpoints provided 🤡 | ||
bisectable6 | nemokosch, Output on all releases: gist.github.com/c324790012af6ca194...f1976ec0b6 | ||
nemokosch, Nothing to bisect! | |||
nemokosch | bisectable: help | 12:51 | |
bisectable6 | nemokosch, Like this: bisectable6: old=2015.12 new=HEAD exit 1 if (^∞).grep({ last })[5] // 0 == 4 # See wiki for more examples: github.com/Raku/whateverable/wiki/Bisectable | ||
nemokosch | lol | ||
great | |||
bisectable: old=2023.09 use v6.e.PREVIEW; use nqp; nqp::say(nqp::istrue(5..1)); | |||
bisectable6 | nemokosch, Bisecting by output (old=2023.09 new=f562e6b) because on both starting points the exit code is 0 | ||
nemokosch, bisect log: gist.github.com/1e65bba1f70713bba8...e2fbd21174 | |||
nemokosch, (2023-12-07) github.com/rakudo/rakudo/commit/4e...bbc85b5029 | |||
nemokosch | huh, why is that a fix | 12:53 | |
why did this code cause anything in NQP land to return False | 12:54 | ||
lizmat | perhaps because nqp::istrue calls .Bool on HLL objects? | 12:55 | |
nemokosch | > Not exactly sure why this fixes #5490 but this also apparently makes say nqp::istrue(1..0) output 0 in 6.e, which appeared to be the underlying issue. | 12:56 | |
well | |||
Geth | advent/main: c38e1b6868 | (Elizabeth Mattijsen)++ (committed using GitHub Web editor) | raku-advent-2023/authors.md: Swap 22/23
nemokosch | I don't know, I can't see anything in this code that implies any True value changing to False | ||
only the other way around | |||
only the is default could explain it, ie. this function didn't get called previously | 12:58 | ||
ab5tract | vushu: somehow missed your message about being into the compiler delving idea last night. I was hoping you'd say something like that! | 13:00 | |
nemokosch: that's how I interpreted the patch at the time. I do find the commit message evokes a sense of mystery around the whole thing though | 13:01 | ||
lizmat | well, yes, it had a sense of mystery for me as well :-) | 13:03 | |
nemokosch | yes... the "it worked... WHY" moment | ||
lizmat | did some spelunking: it goes back to the "boot-boolify" dispatch logic in nqp/MoarVM/src/disp/boot.c | 13:09 | |
.vushu | ab5tract: Yeah you can count me in 😄 | 13:11 | |
lizmat | which ultimately winds up in Perl6::Metamodel::BoolificationProtocol's publish_boolification_spec which specifies to call the .Bool method | 13:13 | |
for any object that doesn't have a specific boolification_mode | |||
nemokosch | how did you find it? | 13:17 | |
"the chase is better than the catch" | |||
lizmat | rak §istrue nqp/src | 13:19 | |
vi nqp/src/vm/moar/QAST/QASTOperationsMAST.nqp | 13:20 | ||
rak emit_object_boolify nqp/MoarVM | |||
rak emit_object_boolify nqp | |||
rak boot-boolify src | |||
rak boot-boolify nqp | 13:21 | ||
vi nqp/MoarVM/src/disp/boot.c | |||
rak boolification_spec src | |||
vi src/Perl6/Metamodel/BoolificationProtocol.nqp | 13:22 | ||
that's basically the chase | |||
m: dd Range.^get_boolification_mode | 13:25 | ||
camelia | (NQPMu without .raku or .perl method) | ||
lizmat | % nqp -e 'nqp::say(Mu == 0)' | 13:26 | |
1 | |||
ab5tract | nice dive! | 13:59 | |
vushu: awesome :D | 14:02 | ||
antononcube | One reason not to use C++ is that C++ changes too much, too often. | 14:10 | |
"<leont> I like some dialects of C++, I hate others." -- Self-confessed C++-connoisseur !!! | 14:12 | ||
.vushu | @antononcube C++ aims at being backward compatible so old code never breaks; you can compile old legacy C++ code with new compilers, so the only change that is done is adding features. | 14:16 | |
antononcube | I get it -- that was C++'s mission from the start, to introduce OOP into C and the C world. ("Reuse, reuse, and reuse" as a main principle.) | 14:20 | |
But coding in C++ currently is fairly different from, say, 12 or 20 years ago. | 14:21 | ||
.vushu | Very different, I think there is an entirely new language within the old language C++. But I reckon that the language is getting very big. | 14:23 | |
I think it's just natural that it gets this big given the time it has been used. | 14:25 | ||
antononcube | @ab5tract I will make a video on "Raku coding with AIs" just for you. (So, you won't have any excuses for not using LLMs after that...) | 14:27 | |
Yeah. That is why I liked @leont 's analogy with Arabic. | 14:28 | ||
.vushu | Indeed. | 14:30 | |
ab5tract | antononcube: don't get me wrong, I think it is exciting to see the AI write some code. But what I really want is an LLM that is trained on the codebase and the roast spec | ||
antononcube | Doable, but I am not doing it anytime soon. I am more interested in showing how it can be done than doing it myself. | 14:32 | |
ab5tract | as well as several dozen textbooks | ||
That's fair. Where would one start? huggingface? | |||
antononcube | Mostly, because I simply do not know what is a good or bad outcome using the corresponding LLM persona. | ||
ab5tract | good = helpful in the hunt :) | 14:33 | |
antononcube | Huggingface is just another LLM. I am not aware of support for it in Raku. (Although, I plan to do it at some point.) | 14:34 | |
ab5tract | but I would expect any LLM trained on roast to be able to create working code even while hallucinating | ||
antononcube | My statement above assumes that you want to use Raku for doing the LLM persona (let us name it "Rakust" or something...) | 14:35 | |
ab5tract | antononcube: okay, I'll stick to one you have already added support for | ||
antononcube | Not "Rakust", "Rakoast"... | ||
ab5tract | Let's codename this Project Firefly | 14:36 | |
fire because roast | |||
also, fireflies are badass | 14:37 | ||
antononcube | @ab5tract I hope I am not being obnoxious here, but I think you have to follow a plan that has a dual purpose: (i) to get you acquainted with LLM usage, (ii) to get something practically useful. | ||
ab5tract | yup, I was already thinking the same thing | 14:38 | |
antononcube | I also assume that you want (are anxious, eager, and willing) to use Raku for making that kind of LLM tools. | 14:39 | |
"Eat your own dog food", etc. | |||
ab5tract | Sure am | ||
antononcube | Currently, Raku has "WWW::OpenAI" and "WWW::PaLM". I plan to implement "WWW::MistralAI" soon. | 14:40 | |
ab5tract | I have a basic OpenAI account. Never used or heard of PaLM | 14:41 | |
Any reason to choose one or the other for our use case? | 14:42 | ||
antononcube | I would say using OpenAI is the most obvious choice, with the biggest impact on the dual purpose/goals above. Additionally, it is easier to communicate to others (non-AIs) using ChatGPT and OpenAI examples. | ||
PaLM is Google's equivalent / response to ChatGPT. It is good, but in a different way than ChatGPT. | 14:43 | ||
MistralAI is "closer" to using Huggingface. | |||
ab5tract | both will allow me to train an LLM with my own texts? | ||
antononcube | Yes. PaLM is very contemporary -- within days of committing something to GitHub I can see responses of PaLM that include code or documentation from those commits. | 14:44 | |
Look at "gpt-4-1106-preview" : it allows the use of 128,000 tokens in prompts. (This roughly means 256K characters.) | 14:46 | ||
ab5tract | Just to be clear, I'm asking about this: "Also, one can make custom "GPTs" based on certain collections of documents." | 14:48 | |
antononcube | I plan to implement the facilitation functions for using ChatGPT tools and making ChatGPT assistants soon. | ||
ab5tract | nice! | ||
antononcube | Well, yes. But that has several sides. Or multiple angles / approaches to achieve it. | 14:49 | |
I need to write down and document those offline. (I.e. not here and now. 🙂 ) | 14:50 | ||
ab5tract | okay :) | ||
then let's pause here for a minute as I need to run anyway | |||
great chat so far, thanks antononcube | 14:51 | ||
antononcube | @ab5tract Please install "WWW::OpenAI" and see if you can use it. Then I would advise installing "Jupyter::Chatbook". | 14:51 | |
Ok. Same here! | |||
ingy | yamlscript.org/posts/advent-2023/dec-20/ | 19:39 | |
tonyo tells me he's working out the raku bindings | |||
tonyo | i yam. | ||
ingy | :) | 19:40 | |
nemokosch | yaml was meant to be a data format, no? | 19:52 | |
ingy | It was meant to be a language agnostic serialization format. then it became a lowly config format. now it's decided it wants to become a velociraptor. | 19:56 | |
nemokosch | I have bad feelings about this ever since the very legitimate and reasonable demand to allow flattening of list references went neglected | 20:01 | |
it seems like a complete misunderstanding of what the project got entrusted with | |||
melezhik | .tell vushu: SparrowCI build fails for raylib binding - ci.sparrowhub.io/report/3851 | 20:07 | |
tellable6 | melezhik, I'll pass your message to .vushu | ||