Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes. Set by lizmat on 24 May 2021.
[00:56] gfldex left, gfldex joined
[01:12] MasterDuke joined
[01:14] <MasterDuke> this time i caught a `MoarVM panic: non-AsyncTask fetched from eventloop active work list` in t/spec/S32-io/IO-Socket-Async.t and did an `rr pack`, so hopefully it can be more easily diagnosed
[01:15] unicodable6__ left, unicodable6 joined
[01:40] <MasterDuke> it got an MVM_REPR_ID_MVMNull instead of an MVM_REPR_ID_MVMAsyncTask
[01:46] <MasterDuke> but i know essentially nothing about the io socket async stuff, so i'll try again to catch one of the other errors. but if anybody wants the rr replay i'll save it
[02:29] bisectable6 left, bisectable6 joined
[02:49] <MasterDuke> just caught a fail in t/spec/S02-types/mix.t
[02:49] <MasterDuke> not ok 126 - .roll(100) (2)
[02:49] <MasterDuke> # Failed test '.roll(100) (2)'
[02:49] <MasterDuke> # at t/spec/S02-types/mix.t line 283
[03:39] MasterDuke left
[04:48] kjp left, kjp joined
[05:47] jdv left, jdv joined
[06:02] leont left, leont joined
[06:18] <lizmat> MasterDuke: don't worry about that one, it will fail about 1% of the time
[06:18] <lizmat> it's hard to write a test involving randomness
[06:19] sena_kun joined
[07:11] sena_kun left
[07:39] <nine> github.com/Raku/roast/commit/48466a57d0 should help with that
[07:50] <lizmat> well... I originally chose 100 because the occasional *failure* of the test actually ensures the test is correct
[07:50] <lizmat> if we're going to make that chance really negligible, then we might as well remove that test
[07:56] <nine> I don't understand that argument. Nothing changed with regards to the statistical analysis. It still checks whether the element that should make up ~1/3 of the result makes up less than 3/4.
[08:03] <lizmat> sorry, I mistook the test, I thought we were talking about the next block of tests
[08:03] <lizmat> the my $m = {b => 1, a => 100000000000, c => -100000000000}.Mix; one
[08:07] <lizmat> because if that one for some reason just starts always producing "a"s, we will never know that we have a bug
[08:08] <lizmat> but that most definitely is not a hill I want to die on :-)
[08:32] Techcable left
[08:33] Techcable joined
[08:45] nine left, nine joined
[08:51] tbrowder left, tbrowder joined
[08:55] jjatria left, jjatria joined
[09:24] <timo> we might get some benefit from checking out all the extra stuff mimalloc has to offer
[09:25] <timo> for example, mimalloc offers "heaps", which can replace the spesh allocator, since it offers the "free all at once" feature that we made the spesh allocator for
[09:25] mst left
[09:25] <timo> and we should also use the "good alloc size" and "usable size" functions for our growing arrays
[09:25] mst joined
[09:52] vrurg_ joined
[09:54] vrurg left
[10:41] greppable6 left, greppable6 joined
[12:41] Geth left, Geth joined
[13:55] <[Coke]> I am all for taking advantage of other people's code if we can.
[13:57] <timo> though i'm not sure if mimalloc can actually beat the spesh allocator; we can save a tiny bit on code size perhaps?
[13:57] <timo> since the spesh alloc uses bump-the-pointer and doesn't support freeing individual things at all, which i think the mimalloc heap api still supports
[13:57] <timo> on the other hand, we still support building moar without mimalloc
[14:23] notable6 left, notable6 joined
[14:24] bloatable6 left, bloatable6 joined
[17:42] sena_kun joined
[18:32] <timo> we can combine both: allocate the pages for the spesh allocator with a heap id, and when destroying everything we don't have to walk the linked list and call free a bunch of times; instead we just call "free the heap" once
[18:37] <lizmat> what would be the benefits? less CPU usage, less memory?
[18:42] <timo> could be less cpu usage, but probably not much, and it's in the spesh thread anyway, so it wouldn't make programs finish much faster
[18:50] <lizmat> well, if we could get memory usage down, that would be good
[18:57] <nine> I'd be surprised if there's much to gain there, though
[18:57] notna joined
[19:15] notna left
[20:09] bisectable6 left
[20:10] notable6 left, greppable6 left
[20:11] bisectable6 joined, notable6 joined, unicodable6 left, sourceable6 left, evalable6 joined, bloatable6 left
[20:12] greppable6 joined, nativecallable6 left, bisectable6__ joined, linkable6 left, unicodable6 joined, quotable6 joined, nativecallable6 joined, notable6__ joined, evalable6__ joined, benchable6 joined
[20:13] tellable6 joined, committable6 joined, benchable6__ joined, linkable6 joined, notable6__ left, benchable6__ left, evalable6__ left, bisectable6 left, bisectable6__ left, evalable6 left, benchable6 left, notable6 left, quotable6 left
[20:14] sourceable6 joined, greppable6 left, tellable6 left, unicodable6 left, sourceable6 left, coverable6 joined, nativecallable6 left, linkable6 left, coverable6 left, committable6 left
[20:15] greppable6 joined
[20:16] bloatable6 joined, benchable6 joined, committable6 joined
[20:17] evalable6 joined, coverable6 joined, linkable6 joined, unicodable6 joined, nativecallable6 joined, shareable6 joined
[20:18] bisectable6 joined, quotable6 joined, sourceable6 joined, tellable6 joined
[20:19] notable6 joined, releasable6 joined
[20:34] sena_kun left