🦋 Welcome to the MAIN() IRC channel of the Raku Programming Language (raku.org). This channel is logged for the purpose of keeping a history about its development | evalbot usage: 'm: say 3;' or /msg camelia m: ... | Log inspection is getting closer to beta. If you're a beginner, you can also check out the #raku-beginner channel! Set by lizmat on 25 August 2021. |
00:02
reportable6 left
00:40
frost joined
01:03
reportable6 joined
01:19
MasterDuke left
02:19
linkable6 left,
evalable6 left
02:20
evalable6 joined
02:38
xinming__ left
02:39
xinming__ joined
03:39
squashable6 left,
evalable6 left,
tellable6 left,
notable6 left,
bloatable6 left,
nativecallable6 left,
quotable6 left,
sourceable6 left,
coverable6 left,
statisfiable6 left,
greppable6 left,
unicodable6 left,
benchable6 left,
shareable6 left,
bisectable6 left,
reportable6 left,
committable6 left,
releasable6 left
03:40
squashable6 joined,
notable6 joined,
reportable6 joined
03:41
sourceable6 joined,
shareable6 joined,
committable6 joined,
bloatable6 joined,
unicodable6 joined,
nativecallable6 joined
03:42
greppable6 joined,
releasable6 joined,
evalable6 joined
04:20
linkable6 joined
04:41
benchable6 joined
04:42
coverable6 joined,
statisfiable6 joined
04:44
frost left,
dogbert17 left,
Scotteh left,
chronon left,
goblin left,
gfldex left,
a3r0 left
04:45
frost joined,
dogbert17 joined,
Scotteh joined,
chronon joined,
goblin joined,
gfldex joined,
a3r0 joined
05:34
tejr left,
tejr joined
05:37
casaca left
05:42
tellable6 joined,
quotable6 joined
05:54
japhb left
06:02
reportable6 left
06:05
reportable6 joined
06:17
tejr left
06:19
tejr joined
06:23
frost left
06:24
Skarsnik joined
06:49
japhb joined
07:17
seednode left
07:18
seednode joined
07:33
abraxxa joined
07:35
abraxxa left
07:36
abraxxa joined
07:40
abraxxa left
07:41
abraxxa joined
08:22
Sgeo left
08:57
MasterDuke joined
09:10
dakkar joined
09:50
hkdtam joined
09:56
casaca joined
Skarsnik | I crashed a vm trying to build rakudo in it, that's new x) | 10:29 | |
SmokeMachine | is there a way of exporting an EXPORT sub that will export something defined by the unit that imported that EXPORT (without using BEGIN)? | 10:41 | |
lizmat | If you find a way, let me know :-) | 10:42 | |
SmokeMachine | I was trying something like: `my %exports; sub red-config($schema) { %exports = $schema.models }; sub EXPORT(--> Map()) { '&EXPORT' => sub { %exports.Map } }` and on the importing one: `use Red::Config; use Red; red-config schema Bla, Ble` but it's exporting before populating %exports (and that makes sense...) | 10:45 | |
I've also tried: `macro export-red-config(|c) { quasi { sub EXPORT(--> Map()) { red-config({{{ |c }}}) } } }` with no luck (but I'm sure I'm doing the quasi wrong...) | 10:47 | ||
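For context, a minimal sketch of the standard EXPORT-sub mechanism (module name and body hypothetical, not Red's actual code); it also illustrates why the attempt above stalls: EXPORT runs while the importer's `use` statement is being compiled, before any runtime call such as red-config has populated %exports.

```raku
# lib/Red/Config.rakumod -- hypothetical module, minimal sketch only
my %exports;

my &red-config = sub ($schema) {
    # only runs at the caller's runtime, long after importing is done
    %exports = $schema.models;
};

sub EXPORT() {
    # called while the importing unit's `use Red::Config` is compiled;
    # %exports is still empty here, so there is nothing useful to export
    Map.new: '&red-config' => &red-config;
}
```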
10:56
linkable6 left,
evalable6 left
10:58
linkable6 joined
11:18
eseyman left
11:41
bisectable6 joined
12:02
reportable6 left
12:16
Skarsnik left
12:18
A26F64 joined
12:19
A26F64 left
13:13
thundergnat left
13:27
ufobat joined,
Sgeo joined
13:35
A26F64 joined
13:48
Geth left,
Geth joined
13:50
thundergnat joined
14:13
jmcgnh left
14:22
jmcgnh joined
14:56
[Coke] left
14:58
evalable6 joined
14:59
[Coke] joined
15:03
reportable6 joined
15:16
squashable6 left
15:24
ramiroencinas joined
ramiroencinas | Hi everyone. I have discovered an annoying size limitation in the .write method of the IO::Socket::Async class. This .write method doesn't write more than 65535 chars. However, the same .write method from the IO::Socket::Async::SSL class doesn't have this size limitation and allows writing more than 65535 chars. | 15:36 | |
I have tested it in Debian and Alpine Linux with the same results. | 15:38 | ||
I don't know if this size limitation is a bug or not. | 15:40 | ||
MasterDuke | it looks like IO::Socket::Async::SSL uses openssl, but IO::Socket::Async uses libuv | 15:51 | |
the docs don't mention a size limit for IO::Socket::Async.write, but it returns the number of bytes written, i guess you're supposed to handle retrying with the rest if not all is written | 15:53 | ||
ramiroencinas | Thanks MasterDuke, I will try handling the rest of the chars. | 15:57 | |
MasterDuke | feel free to create a rakudo issue if you think it should do something differently (maybe the docs should at least mention the limit for rakudo) | 16:00 | |
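For anyone hitting the same ceiling, a minimal sketch (untested) of the retry-with-the-rest approach described above, assuming the observed 65535-byte limit per write; $conn stands for an already-connected IO::Socket::Async client and write-chunked is a hypothetical helper, not part of the class.

```raku
# Split a large Blob into chunks below the observed per-write limit and
# await each write before sending the next one.
sub write-chunked($conn, Blob $data, Int :$chunk-size = 65535) {
    my $offset = 0;
    while $offset < $data.bytes {
        await $conn.write($data.subbuf($offset, $chunk-size));
        $offset += $chunk-size;
    }
}
```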
16:05
Sgeo_ joined,
Sgeo left
ramiroencinas | Thanks MasterDuke, I have created a Rakudo issue. | 16:16 | |
16:16
ramiroencinas left
MasterDuke | thanks | 16:16 | |
16:25
whatnext joined
16:26
[Coke]_ joined
16:28
[Coke] left,
[Coke]_ is now known as [Coke]
whatnext | hello all :) question about deleting using Red. I am basically doing the following `Table.^delete` and getting `DB::Pg::Error::FatalError.new(message => "syntax error at end of input" ...` I can't seem to get any visibility on what SQL statement it is trying to execute. Just thought I'd check if anyone had come across this before raising it as an issue? :) | 16:29 | |
SmokeMachine | whatnext: could you try `my $*RED-DEBUG = True` before the delete to show us the SQL, please? | 16:30 | |
whatnext: Table.^delete? did you mean `Table.^all.delete`? | 16:31 | ||
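A small sketch of that debugging step (Person stands in for whatever Red model is being deleted):

```raku
# With the $*RED-DEBUG dynamic variable set, Red prints the SQL it
# generates, so the failing statement becomes visible.
my $*RED-DEBUG = True;   # must be in dynamic scope when the query runs
Person.^delete;          # the failing call; its SQL is now logged
```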
whatnext | I was following this: fco.github.io/Red/tutorials/start.html which says `Person.^delete` at the bottom of the page? | 16:32 | |
SmokeMachine | whatnext: yes, sorry... I've forgotten about that | 16:33 | |
whatnext: but yes... I can reproduce it here... | 16:34 | ||
whatnext | should I be using `Table.^all.delete` then? | 16:35 | |
SmokeMachine | whatnext: no, both should work... | 16:37 | |
whatnext: that's an issue... it seems it's starting a transaction and never finishing it... :( | |||
whatnext: no, sorry, forget about it! it's not working inside a transaction... but it's failing for some reason... | 16:38 | ||
whatnext | I guess this must have broken recently? | 16:39 | |
SmokeMachine | whatnext: I'll try to fix that after work. Would you mind creating an issue for that, please? | ||
whatnext: yes, that's new... | |||
whatnext | Ok will do | ||
andinus | is there a way to locate memory leaks? say to narrow it down to a block? | 16:42 | |
MasterDuke | well, a --profile will give you a count of allocated objects and where they were allocated, which may help | 16:45 | |
whatnext | SmokeMachine: raised issue here github.com/FCO/Red/issues/525 | 16:46 | |
SmokeMachine | whatnext: thanks! | ||
MasterDuke | if you add --profile-kind=heap you'll get a heap snapshot, which can be more useful (but a bit trickier to understand) | ||
or you can try running under something like heaptrack (though you might want to use --full-cleanup with rakudo with that) | 16:48 | ||
that will give you moarvm-level information, but you might be able to figure out where that corresponds to in your script | 16:49 | ||
andinus | i see, i'll try with heap kind | 16:53 | |
MasterDuke | i think you'll need p6-app-moarvm-heapanalyzer to open them. maybe comma does too? | 16:54 | |
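Collecting the suggestions above into one place: a hedged sketch of the invocations discussed (flag and module names as mentioned in this conversation; check `raku --help` and the analyzer's README for the exact spelling on your setup).

```
raku --profile script.raku                       # instrumented profile; see the allocations tab
raku --profile --profile-kind=heap script.raku   # heap snapshot instead of the instrumented profile
zef install App::MoarVM::HeapAnalyzer            # assumed dist name for p6-app-moarvm-heapanalyzer, to open snapshots
raku --full-cleanup script.raku                  # tear everything down at exit, useful under heaptrack
```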
SmokeMachine | whatnext: It's going to be an easy fix... sorry for that... | 17:02 | |
andinus | instrumented profile says "inlining eliminated the need to create .." followed by a very large number | ||
SmokeMachine | whatnext: are you liking Red and its documentation? (there is a lot missing from the documentation...) | ||
andinus | js says NaN | ||
# MoarVM oops: MVM_str_hash_entry_size called with a stale hashtable pointer | 17:03 | ||
^ i get these crashes but i'm not able to reliably reproduce it, should i file an issue for it? | 17:04 | ||
whatnext | SmokeMachine: it's certainly the best ORM for raku right now, and there's a lot I do like about it | 17:05 | |
MasterDuke | that usually means you're writing to a hash from multiple threads, which isn't allowed | 17:06 | |
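As an illustration of that failure mode (a sketch, not taken from the fornax code): unsynchronized writes to one hash from several threads can trigger exactly this kind of oops, while serializing the writes (via a Lock, a Channel, or by confining them to one thread) avoids it.

```raku
my %seen;
my $lock = Lock.new;

await do for ^8 -> $t {
    start {
        for ^1000 -> $i {
            # unprotected: %seen{"$t-$i"} = True   # may corrupt or oops
            $lock.protect: { %seen{"$t-$i"} = True };
        }
    }
}
say %seen.elems;   # 8000
```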
whatnext | documentation is a bit sparse it's true, but to be expected given the early stage of development | ||
MasterDuke | you're going to want to look in the allocations tab | 17:07 | |
andinus | my @p; for @lines -> $iter { if @p.elems == $batch {await @p; @p = [];}} | 17:08 | |
lizmat clickbaits rakudoweekly.blog/2021/11/08/2021-...wo-commas/ | 17:09 | ||
andinus | ... push @p, start {} ==> later in the loop it does this | ||
whatnext | one thing that I would have preferred is a more DBIx::Class like interface, rather than the SQLAlchemy style. However, currently I use some fairly simple wrapper code which ends up making it pretty similar | ||
andinus | is this usage problematic? | 17:10 | |
whatnext | I guess my main issue is namespacing - because with the SQLAlchemy style you end up `use`ing a module for every table | ||
MasterDuke | what are you doing in the start block? or is it really empty? | ||
perryprog | lizmat what's that picture? | ||
lizmat | it's from a number of chocolates presented by the late Jeff Goff at the 2015 YAPC::NA in Salt Lake City | 17:11 | |
SmokeMachine | whatnext: I'm "fixing" that with this: github.com/FCO/Red/pull/524 | ||
andinus | MasterDuke: its this script github.com/andinus/fornax/blob/mai...LI.rakumod | ||
perryprog | oh wow, nice | ||
andinus | it's using Cairo to generate lots of pngs | ||
i suspect it's Cairo that's causing the memory leak | 17:12 | ||
SmokeMachine | whatnext: with that, you can create a module like this: github.com/FCO/Red/pull/524/files#...aedfeR1-R9 and where you use it, you'll import all your models, the schema and the connection... | 17:13 | |
MasterDuke | andinus: any particular reason you're doing manual batching and starts instead of using `race for @lines.skip.race.kv <...>`? | 17:17 | |
oh, you're also using run. i think that's known to (at least appear to) leak | 17:19 | ||
andinus | yes, race didn't do anything last i checked, i'll check it again (2 mins) | ||
run is used at the end | |||
andinus.unfla.me/writings/2021/for...rames.html | |||
MasterDuke | ah, right | ||
andinus | here i summarize the leak, it has screenshots as the script progressed | ||
you can see the memory usage rising as the iterations pass | |||
whatnext | SmokeMachine: can I ask when you will merge into master? :) | 17:20 | |
andinus | also seems like --full-cleanup always terminates with `SIGSEGV (Address boundary error)` on OpenBSD | ||
i dont have comma, i'll get the heapanalyzer | 17:21 | ||
MasterDuke | turns out comma doesn't analyze heap snapshots yet, but they're in the process of adding that | 17:22 | |
andinus: i can just try some of the .fornax files in the repo to test with? | 17:24 | ||
SmokeMachine | whatnext: I'm still trying to find a way for the user to not need to manually write the EXPORT sub... (I don't think there is a way... but when I'm sure, I'll probably merge that) | ||
andinus | MasterDuke: yes, 50 is around 1200 iterations; i run: raku -Ilib bin/fornax --skip-video resources/solutions/DFS-50.fornax | 17:27 | |
just tested with .race, doesn't help, with manual batching, i see noticeable speedup | 17:28 | ||
17:28
evalable6 left,
linkable6 left
SmokeMachine | whatnext: github.com/FCO/Red/commit/5dcfbc13...5cf818df74 | 17:29 | |
17:31
linkable6 joined
whatnext | @SmokeMachine: ok I see - that does look like a simple fix :) | 17:32 | |
thanks | |||
I guess currently this is only an issue if no filter terms are specified - would that be correct? | 17:33 | ||
SmokeMachine | whatnext: yes, exactly | ||
whatnext: and that wasn't being tested... :( | 17:34 | ||
whatnext: if you test it and that's working, would you mind closing the issue, please? | 17:35 | ||
17:37
dakkar left
lizmat | ok | 17:37 | |
whatnext | SmokeMachine: ok will do :) | 17:38 | |
SmokeMachine | whatnext: thanks! | 17:39 | |
whatnext: and if you'd like to help even more, I need all possible help to close all these issues (github.com/FCO/Red/projects/3) to finally launch our first stable version | 17:41 | ||
MasterDuke | andinus: turns out race doesn't support multi-arg blocks. but if you switch to .pairs instead of .kv and then pull out the $idx and $iter manually it works. a little bit simpler overall | 17:46 | |
still does have the occasional MVM_oops | 17:47 | ||
andinus | ah i see, i'll try with .pairs | ||
what could be causing the MVM_oops after this?, nothing is shared (written) between threads | 17:48 | ||
MasterDuke | so locally i have `race for @lines.skip.pairs.race(:degree($batch)) {` and in the block `my ($idx, $iter) = .key, .value;`, and i removed @p entirely | ||
andinus | i see, makes sense | ||
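For readers skimming later, a condensed sketch of the two variants being compared (untested; @lines, $batch and process-frame are placeholders standing in for the fornax code):

```raku
my @lines = 'solutions.fornax'.IO.lines;   # placeholder input
my $batch = 4;

sub process-frame($idx, $iter) {
    # hypothetical per-frame work (Cairo PNG rendering in the real script)
    say "frame $idx: {$iter.chars} chars";
}

# Variant 1, manual batching: one start per line, await every $batch promises.
my @p;
for @lines.skip.kv -> $idx, $iter {
    push @p, start { process-frame($idx, $iter) };
    if @p.elems == $batch { await @p; @p = [] }
}
await @p if @p;

# Variant 2, the race prefix: .pairs instead of .kv, because the block
# only receives one element at a time.
race for @lines.skip.pairs.race(:degree($batch)) {
    my ($idx, $iter) = .key, .value;
    process-frame($idx, $iter);
}
```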
whatnext | SmokeMachine: well I won't rule out pitching in on that completely - I am pretty stretched right now though, so it might be some time before I get to it | ||
possibly I could pick up some of the documentation if it's still not done when I have more free time | 17:49 | ||
17:49
abraxxa left
SmokeMachine | whatnext: that would help a lot! Thanks! | 17:50 | |
MasterDuke | the thread that oopsed was in Cairo's write_png | 17:51 | |
timo: ping | 17:52 | ||
andinus | is .race helping when you run locally? here it's not making much of a difference, manual batching seems to do better | 17:54 | |
MasterDuke | it was roughly the same, maybe 1s faster with .race | ||
andinus | ah i see | ||
yes, and with manual batching the speed is ~4x, as expected | 17:55 | ||
MasterDuke | e.g., 22s vs 21s with default batch size | ||
andinus | ah wait, that might be it, default batch is 64 iirc | 17:56 | |
MasterDuke | i mean .race (with a batch of 4) was the same speed as manual batching (with a batch of 4) | 17:57 | |
timo | sorry, i'm just heading out the door | ||
MasterDuke | --batch=8 is almost twice as fast | 17:59 | |
andinus | MasterDuke: ah, not able to reproduce that behaviour, (openbsd), manual batching (4) is faster than .race(:4batch) | ||
MasterDuke | ah, maybe you're confusing race's batch and degree | 18:00 | |
andinus | ah got it, :degree($batch), i should rename the var | ||
indeed, .race seems faster, testing on DFS-10 (30 iterations) seems like manual batching beats it, maybe it performs well with more iterations | 18:02 | ||
If there's an I/O operation inside the loop, there might be some contention so please avoid it. | |||
^ .race docs do say this | 18:03 | ||
18:03
reportable6 left
ugexe | a lot of it is going to depend on your cpu | 18:04 | |
MasterDuke | what if you do .race(:1batch, :degree($batch)) | ||
and the specifics of the workload, yeah | 18:05 | ||
gfldex | andinus: you could try to use a Channel to stream to-be-written data into a separate thread that does the IO. | ||
MasterDuke | i'm not saying you should definitely switch to .race, it just looked like the manual batching was pretty straightforward and a good candidate for converting | 18:06 | |
andinus | :1batch doesnt change much. | ||
without .race/batch: 24s, with .race (:degree($batch)): 12s, with manual batching: 6s | |||
:1batch :degree($batch) should reproduce manual batching behaviour though | 18:07 | ||
gfldex: i see, i'll read about Channels | 18:08 | ||
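A minimal sketch (untested) of the Channel idea: workers only send finished data, and a single dedicated thread drains the Channel and does all the file IO, so disk writes never contend with each other.

```raku
my Channel $out .= new;

# The writer thread: the only place that touches the filesystem.
my $writer = start {
    for $out.list -> ($path, $blob) {
        $path.IO.spurt: $blob;
    }
}

# Workers (the Cairo rendering loop in the real script) just send work.
await do for ^10 -> $n {
    start {
        my $fake-png = "image data for frame $n".encode;   # placeholder payload
        $out.send: ("frame-$n.png", $fake-png);
    }
}

$out.close;     # no more work coming
await $writer;  # let the writer finish draining the channel
```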
MasterDuke | on my system .race and manual is pretty much the same, even with :1batch or not and different values of :degrees | 18:09 | |
gfldex | You can `use Telemetry;` to get an idea of how well the worker threads are saturated. see: gfldex.wordpress.com/2017/11/05/racing-rakudo/ | ||
How long does one iteration take on a single core? | 18:10 | ||
lizmat | -Msnapper also nowadays :-) | ||
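A tiny sketch of those two suggestions (they do the same thing): either run the script as `raku -Msnapper script.raku`, or start the snapper from code and read the report printed at exit to see how busy the worker threads were.

```raku
use Telemetry;

snapper;   # take periodic snapshots; a utilization report is printed at exit
# ... the batching/racing workload under test goes here ...
```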
MasterDuke | i'm actually seeing the same scaling factor for both ways, but manual consistently being a couple seconds slower | 18:11 | |
andinus | i just tried it on a ubuntu machine, manual: 2.88, .race: 3.40, single core: 4.78 | 18:16 | |
maybe the iterations are too low (40) for .race to benefit | 18:17 | ||
MasterDuke: do you have .pairs.race or .race.pairs? | |||
MasterDuke | pairs.race | 18:18 | |
andinus | ah, both same times | ||
race.pairs would be better right? | |||
gfldex: thanks for the hint! | 18:19 | ||
18:19
A26F64 left
andinus | i'll document these and update the writing later tomorrow | 18:20 | |
MasterDuke | i think it's really the same, adding the .race is only needed/used to set the degree and batch. it's the race prefix that's doing the work | ||
andinus | ah, i was thinking .pairs is applied first so it would be single core and then .race multi threads the loop | 18:21 | |
MasterDuke | andinus: btw, you can return a value from a given block. so something like `my IterStatus $status = do given $iter.substr(0, 1) { when '|' { Completed } ... };` should work | 18:22 | |
andinus | ah,i do remember trying something similar | 18:23 | |
i was missing "do" | |||
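A slightly fuller sketch of that pattern (IterStatus and its values are hypothetical stand-ins for the enum in the fornax script):

```raku
enum IterStatus <Completed Intermediate Unknown>;

my $iter = '|..##..';
my IterStatus $status = do given $iter.substr(0, 1) {
    when '|' { Completed    }
    when '.' { Intermediate }
    default  { Unknown      }
};
say $status;   # Completed
```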
18:31
evalable6 joined
codesections | How do I tell zef that I want it to run tests that live somewhere other than ./t ? zef test takes a path, but pointing it to a different directory (or test file) doesn't seem to do the trick | 18:43 | |
ugexe | there is no way | 18:44 | |
codesections | Oh, interesting. Thanks | 18:45 | |
ugexe | a test has no way of communicating where the distribution its testing lives | 18:47 | |
and with `zef test .` the `.` is referring to a distribution | 18:48 | ||
if it was like `zef test a/b/c/d` it would be impossible to tell if that is supposed to be a distribution or a test directory | |||
codesections | yeah, that makes sense. I wasn't thinking that it would let me put the tests _outside_ of the distribution. But I was thinking I could specify a different directory so long as a META6.json was somewhere in its ancestors | 18:50 | |
18:50
ufobat left
ugexe | there is no "go up a level till you find a META6.json", which would mean you might not be able to have e.g. test files called META6.json | 18:51 | |
18:51
bdju left
ugexe | plus you can run tests against installed versions of a distribution instead of e.g. -Ilib | 18:51 | |
there is probably a sane way to expand that `zef test $dist` to account for specific test files but it might be a bit verbose like having --test-file=... --test-file=... | 18:52 | ||
codesections | Makes sense. I'm not complaining – I just figured I was missing something obvious :) | ||
Could it be a field in the META6? | 18:53 | ||
ugexe | Probably, although it is also weird that it would be referencing paths that are nowhere else in the META6.json | 18:55 | |
codesections | wouldn't it just be a path starting from the same relative root as the paths in "provides"? | 18:56 | |
ugexe | yeah but it doesnt refer to anything after the distribution is installed | 18:57 | |
18:58
bdju joined
ugexe | that doesn't discount the idea, but to me it is weird that an installed META6.json would refer to paths/files that are no longer part of it as a whole | 18:58 | |
we could of course install tests >:) | ||
which would let people do stuff like $dist = CURI.candidates("MyFoo"); CURI2.install($dist); run-tests-on-curi2($dist) | 19:00 | ||
MasterDuke | ugexe: i just did a `zef search Cairo` and get | ||
0 |Zef::Repository::LocalCache |Cairo:ver<0.2.4> | |||
8 |Zef::Repository::Ecosystems<p6c> |Cairo:ver<0.2.7> | |||
but `zef upgrade Cairo` says `All requested distributions are already at their latest versions` | |||
ugexe | what version is installed? | 19:01 | |
MasterDuke | 0.2.4 (which oddly enough was from a `zef install Cairo` just earlier today) | 19:02 | |
gfldex | codesections: I've got a script that likes to test things. If you run it with `raku-test-all test ./some-dir/` it does what you need. raw.githubusercontent.com/gfldex/b...u-test-all | 19:03 | |
codesections | Hmm, I think that's getting deeper into zef's implementation than where I can have an informed opinion. I'd kind of thought that zef *did* install the tests somewhere in the sources/ folder of hashed paths | ||
19:03
reportable6 joined
MasterDuke | ugexe: oh, that was happening when i was using zef from its checkout. if i use the zef in <prefix>/share/perl6/site/bin/ an upgrade works fine | 19:06 | |
ugexe | if those are both the same version they should work the same | 19:07 | |
might be worth doing --version to be sure the zef in checkout isnt loading some old zef from the e.g. home repo or some such | 19:08 | ||
MasterDuke | ah, checked out one is 0.13.0, installed one is 0.13.1 | ||
i need to remember to not use the checked out one | |||
ugexe | fwiw you should update zef again | 19:09 | |
to at least 0.13.3 | |||
MasterDuke | just upgraded to 0.13.4, thanks | 19:10 | |
19:11
melezhik joined
ugexe | codesections: it probably could, similar to what we do for bin/ files, although really we need a spec for that too instead of just having zef grep for all files in that directory (some of which might not even be raku and thus shouldn't have the bin shim applied to them by rakudo) | 19:12 | |
at a minimum users would need to list files in bin/ they want installed, and also a way to identify that e.g. bin/zef is a raku script and not like a bash script | 19:15 | ||
most of this seems like it could apply to tests as well | |||
although tests have the benefit of file extensions | 19:16 | ||
codesections | those all sound like good ideas (and things that it'd be good to get spec'd before the ecosystem gets too huge) | ||
19:16
whatnext left
ugexe | yeah the npm thing the other day got me thinking of how we can allow users to disallow installing bin scripts | 19:16 | |
codesections | Yeah. Though if we do have a supply chain vulnerability issue at some point, Raku's flexibility will really work against us. | 19:18 | |
ugexe | hopefully having a thoroughly strict ecosystem and disabling non strict ecosystems can mitigate a lot of issues | 19:20 | |
in the future non-strict ecosystems would ideally be limited to darkpans | 19:21 | ||
codesections | yeah. And having a culture that doesn't encourage piles of untrusted transitive dependencies | 19:22 | |
I was just looking at a report the other day that tracked average dependencies per project by language, and there's a huge swing | |||
from Swift (4) and Go (13) to JS (377) | 19:23 | ||
i.blackhat.com/USA-20/Wednesday/us...alysis.pdf | |||
19:33
eseyman joined
ugexe | yeah thats just dumb. a developer should have a good understanding of all of their dependencies and that isn't happening with 377 | 19:34 | |
offloading that understanding to the first transitive dependency's author is a bad solution | 19:35 | ||
but hey then you dont have to care | |||
19:38
melezhik left
19:43
Geth left,
TempIRCLogger left
19:45
RakuIRCLogger left,
lizmat left
19:47
lizmat joined
19:48
Geth joined
19:51
Kaipi joined
19:53
childlikempress joined,
bingos_ joined,
gordonfish- joined
19:54
simcop2387_ joined
19:55
broquain1 joined,
moritz_ joined,
bdju_ joined
19:56
avarab joined,
BinGOs left,
bingos_ is now known as BinGOs
19:57
BinGOs left,
BinGOs joined
19:58
gordonfish left,
gordonfish- is now known as gordonfish
MasterDuke | andinus: btw, i have a patch for rakudo that fixes the MVM_oops you're seeing with fornax | 20:00 | |
20:00
bdju left,
moritz left,
broquaint left,
Kaiepi left,
discord-raku-bot left,
avar left,
moon-child left,
simcop2387 left,
simcop2387_ is now known as simcop2387
20:02
Skarsnik joined
20:16
melezhik joined
20:23
melezhik left
20:26
childlikempress is now known as moon-child
20:31
melezhik joined
20:39
melezhik left
21:16
squashable6 joined
21:30
discord-raku-bot joined
22:13
daxim left
Nemokosch | but you know it's funny because reading on, it turns out that most JS libraries are stable in return | 22:24 | |
unlike PHP where I have no idea what they managed to mess up so much... | 22:27 | ||
22:34
Skarsnik left
22:47
A26F64 joined
tonyo | hasn't been my experience with JS libs..some of the ultra popular ones are very stable _now_ but still have dependency hell | 22:49 | |
Nemokosch | dependency is not "hell" per se. Javascript doesn't come with a huge runtime like Java or .NET so it's obvious the common things people are using will be more visible | 22:53 | |
22:57
melezhik joined
melezhik | . | 22:57 | |
I am not sure how many people have heard of the #raku-news channel, but here one can get some news - comments from mybfio related to Raku | 22:58 | ||
logs.liz.nl/raku-news/live.html | |||
for old entries | 22:59 | ||
logs.liz.nl/raku-news/index.html | |||
tonyo | 377 dependencies is significantly bigger than go's 13. it isn't like go has a huge runtime built into it | 23:00 | |
and building dependencies for something like raku makes 377 a bear | |||
though, eventually, i suppose the dependency builder will get merged in with zef and it can async build most of them. iirc 2018 PTS is when that was built but the async portion wasn't stable enough to precomp everything | 23:02 | ||
Nemokosch | I only know Go as the language that explicitly advocated copypasting tbh | ||
23:08
melezhik left
tonyo | it does advocate that for the people who don't want to screw around with generators. there isn't a huge runtime for it, similarly to java or .net. it's not like js doesn't also promote copy and paste, either | 23:12 | |
it'd be easy to generate those numbers for raku | |||
23:15
A26F64 left
23:16
melezhik joined
tonyo | not sure what method they're using for the percentiles. raku came out for 10% = 0 depends, mean = 1, 90% = 4.. this could be tweaked to get the right numbers if they're doing something different curl -s 360.zef.pm|jq '.[].depends|length' 2>&1|sort -n|perl -e 'my @x; while (<>) { chomp; push @x, $_; }; my @N = @x[map $_*@x, .1, .5, .9]; printf "10%: %d\n50%: %d\n90%: %d\n\n", @N;' | 23:39 | |
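For comparison, a hedged Raku rendition of that pipeline (untested; it assumes 360.zef.pm serves a JSON array of META hashes, as the curl|jq version does, and it uses the ecosystem module JSON::Fast):

```raku
use JSON::Fast;   # ecosystem module, not part of core Rakudo

my $json   = run(<curl -s https://360.zef.pm/>, :out).out.slurp(:close);
my @counts = from-json($json).map({ (.<depends> // []).elems }).sort;

for 10, 50, 90 -> $p {
    my $idx = ($p / 100 * @counts.elems).Int;
    say "$p%: @counts[$idx]";
}
```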
23:42
melezhik left
23:44
dogbert17 left
Nemokosch | Well JS nowadays is a lot less minimalistic than Go was designed to be, I don't think it's a bad thing that many things are offered as libraries on top of the language | 23:58 | |
I genuinely don't get why someone would use is-array for one but it would be a bit harsh to call this phenomenon "hell" lol |