github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm | Set by AlexDaniel on 12 June 2018.
00:31 lucasb left
timotimo | SACRED EXCREMENT | 00:44
Data::MessagePack is now an extreme amount of times faster than it was before my changes
oh, it looks like it's now also wrong
now it is correct, and much slower than when it was wrong, but it's still a whole lot faster than before i changed anything | 00:52
japhb | timotimo: oscillating towards the win? | 01:10
timotimo | seems so
i could go for another "faster" though
so it's currently at 2 minutes for a 1.8M file | 01:23
39 entries in that file | 01:24
04:14 pamplemousse joined
06:10 pamplemousse left
06:21 Voldenet left
06:26 Voldenet joined, Voldenet left, Voldenet joined
MasterDuke | without knowing anything about Data::MessagePack that seems slow. how long do other langs take to process the same file? | 07:07
07:22 Voldenet left
07:27 Voldenet joined, Voldenet left, Voldenet joined
09:09 Ven`` joined
09:28 Ven`` left
10:48 lucasb joined
11:09 squashable6 left
11:14 squashable6 joined
12:51 evalable6 left
12:52 evalable6 joined
MasterDuke | man, a whole bunch of our 3rdparty repos have newer versions available upstream, e.g., dyncall, libatomicops, libtommath, libuv | 13:03
13:21 Ven`` joined
timotimo | updating those will feel like an explosion of freshness | 13:26
93 ( 1 pgs): 96256 - 95504 - 1504 == -752 ( -0.78125% ) | 13:27
well, this is working great, clearly!
MasterDuke | what do those numbers represent? | 13:28
timotimo | first is the bin index, second is the number of pages in the bin | 13:36
then comes the total size of the pages minus how much space is left at the end
actually, no, not the "space left at the end" bit | 13:37
... | 13:38
i made these names confusing
MasterDuke | do you want space left at the end? | 13:40
timotimo | it's not really preventable | 13:43
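timotimo's stats line reads like allocator debug output. A hypothetical reconstruction of how such a line could be produced, following his own description of the fields; every name below is an assumption, not MoarVM's actual code:

```c
#include <stdio.h>

/* Prints a line shaped like "93 ( 1 pgs): 96256 - 95504 - 1504 == -752 ( -0.78125% )":
 * bin index, page count in the bin, then total page bytes minus two usage
 * figures (the ones timotimo says he named confusingly), the resulting
 * delta, and that delta as a percentage of the total. */
static void dump_bin_stats(int bin_idx, int num_pages, long total_bytes,
                           long used_bytes, long tail_bytes) {
    long delta = total_bytes - used_bytes - tail_bytes;
    printf("%d (%2d pgs): %ld - %ld - %ld == %ld ( %g%% )\n",
           bin_idx, num_pages, total_bytes, used_bytes, tail_bytes,
           delta, 100.0 * delta / total_bytes);
}

int main(void) {
    dump_bin_stats(93, 1, 96256, 95504, 1504); /* reproduces the quoted line */
    return 0;
}
```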
Ven`` | if you add a new opcode to moarvm, how do you usually test it? is laying the groundwork in nqp/rakudo also necessary? | 13:58
timotimo | the tests for opcodes go into nqp | 14:00
moar relies on nqp for the vast majority of its testing needs
well, that and rakudo, but a little more indirectly over there
Ven`` | timotimo: yeah, the actual tests for sure, I meant more for experimentation | 14:06
Kaiepi | so because of some of the changes i'm making so that ipv6 addresses don't get used when they're not supposed to be, it's now possible to close async udp sockets in the middle of attempting a write, and just checking if the handle's closing before each attempt to write isn't good enough to prevent that from happening | 14:16
i tried using a semaphore to fix this, but i don't think i can just go and destroy it on close with threads still waiting on it; i need something else on top of that, or to go about this entirely differently. how do i make stuff like this thread-safe? | 14:17
14:20 chloekek joined
timotimo | Kaiepi: maybe a mechanism like "free at safepoint" can help with this issue | 14:22
Kaiepi | like, keeping whether or not the handle wants to be closed somewhere in state, and adding a callback after each write that checks whether any threads are still waiting on the semaphore before destroying it when it wants to close? | 14:27
or rather, one that handles the actual closing | 14:28
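The "free at safepoint" idea timotimo suggests: instead of destroying a resource the moment close is requested, queue the destruction and perform it only at a global safepoint, when no thread can still be mid-operation on it. A conceptual sketch, assuming libuv's mutex API for locking; the names are hypothetical, though MoarVM's fixed-size allocator has its own variant of this mechanism:

```c
#include <stdlib.h>
#include <uv.h>

typedef struct deferred { void *ptr; struct deferred *next; } deferred_t;

static uv_mutex_t deferred_lock;   /* assume uv_mutex_init ran at startup */
static deferred_t *deferred_head;

/* Any thread: defer the free rather than racing threads that may still
 * hold a pointer to the memory. */
static void free_at_safepoint(void *ptr) {
    deferred_t *d = malloc(sizeof *d);
    d->ptr = ptr;
    uv_mutex_lock(&deferred_lock);
    d->next = deferred_head;
    deferred_head = d;
    uv_mutex_unlock(&deferred_lock);
}

/* Run only at a safepoint (e.g. while all threads are paused for GC), so
 * nothing can be in the middle of using the deferred memory. */
static void drain_deferred_frees(void) {
    deferred_t *d = deferred_head;
    deferred_head = NULL;
    while (d) {
        deferred_t *next = d->next;
        free(d->ptr);
        free(d);
        d = next;
    }
}
```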
14:42 chloekek left
Kaiepi | however it should work, uv_once_t looks like it'd be useful here | 14:42
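uv_once_t is libuv's one-shot initialisation guard: uv_once() runs its callback exactly once even when several threads race to call it. A minimal usage sketch, with init_write_state standing in for whatever setup Kaiepi has in mind:

```c
#include <stdio.h>
#include <uv.h>

static uv_once_t write_state_guard = UV_ONCE_INIT;

static void init_write_state(void) {
    /* uv_once guarantees this body executes exactly once, no matter how
     * many threads reach the uv_once call concurrently. */
    puts("write state initialised");
}

int main(void) {
    uv_once(&write_state_guard, init_write_state);
    uv_once(&write_state_guard, init_write_state); /* no-op the second time */
    return 0;
}
```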
14:56 Ven`` left
14:59 Ven`` joined
15:06 Ven`` left
16:38 domidumont joined
18:29 domidumont left
19:45 chloekek joined
20:22 pamplemousse joined
20:52 Kaiepi left
20:55 Kaiepi joined
21:18 lucasb left
21:34 pamplemousse left
21:43 pamplemousse joined
22:11 pamplemousse left
Geth | MoarVM: MasterDuke17++ created pull request #1162: Jit some num ops | 22:19
22:23 chloekek left
22:49 pamplemousse joined
jnthn | Kaiepi: There's only one thread that does all the async I/O (all things to do are marshalled there, and all results are delivered into the queue of the thread pool scheduler). So you don't need any locking there, just to make sure all requests are properly sent to the I/O worker. | 23:08
tellable6 | gist.github.com/db92a797bfb3de8a02...6d2205d248
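What jnthn describes is the classic single-loop-thread pattern: handles are only ever touched on the I/O thread, and other threads hand work over through a queue, so write-vs-close races disappear by construction. A minimal sketch in plain libuv (illustrative names, not MoarVM's actual event-loop code):

```c
#include <stdlib.h>
#include <uv.h>

typedef struct work { void (*fn)(void *); void *arg; struct work *next; } work_t;

static uv_mutex_t queue_lock;
static work_t *queue_head;
static uv_async_t wakeup;  /* lives on the I/O thread's loop */

/* I/O thread: drain the queue and perform each piece of work. Since only
 * this thread touches handles, a close cannot interleave with a write. */
static void on_wakeup(uv_async_t *handle) {
    uv_mutex_lock(&queue_lock);
    work_t *w = queue_head;
    queue_head = NULL;
    uv_mutex_unlock(&queue_lock);
    while (w) {
        work_t *next = w->next;
        w->fn(w->arg); /* e.g. start a write, or close a handle */
        free(w);
        w = next;
    }
}

/* Any thread: enqueue work and wake the loop. uv_async_send is one of the
 * few libuv functions documented as safe to call from other threads. */
static void queue_work(void (*fn)(void *), void *arg) {
    work_t *w = malloc(sizeof *w);
    w->fn = fn;
    w->arg = arg;
    uv_mutex_lock(&queue_lock);
    w->next = queue_head; /* LIFO for brevity; real code preserves order */
    queue_head = w;
    uv_mutex_unlock(&queue_lock);
    uv_async_send(&wakeup);
}

/* I/O thread startup: */
static void run_io_thread(uv_loop_t *loop) {
    uv_mutex_init(&queue_lock);
    uv_async_init(loop, &wakeup, on_wakeup);
    uv_run(loop, UV_RUN_DEFAULT);
}
```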
Kaiepi | oh wow i'm fucking dumb, i was getting a double free during cleanup after writing, which made me think that, but it was actually because i was freeing the request and its data in the wrong order | 23:17
closing doesn't even free the same request in the first place
but knowing that i/o can only be done from one thread is really helpful, thanks jnthn | 23:24
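The ordering bug Kaiepi describes is a common cleanup hazard: when a struct owns a heap-allocated member, the member must be freed before the struct. A sketch with hypothetical names (not MoarVM's actual request layout):

```c
#include <stdlib.h>

typedef struct {
    char *data; /* payload owned by the request */
} write_request_t;

static void cleanup_request(write_request_t *req) {
    free(req->data); /* first: release the owned payload */
    free(req);       /* then: release the request itself */
    /* In the reverse order, req->data is read from freed memory; that is
     * undefined behaviour and can surface as a spurious double free. */
}
```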
jnthn | Kaiepi: This only applies to the async I/O stuff, but the non-async I/O also wraps a mutex acquisition around the operations, so you're already safe in the I/O vtable handlers too | 23:30
Kaiepi | yeah, i saw how some parts of sync i/o use mutexes and assumed it was possible with async i/o as well | 23:33
btw what's the logic behind which sync i/o calls require calls to MVM_gc_mark_thread_blocked/MVM_gc_mark_thread_unblocked and which don't? because there are a bunch that don't atm | 23:34
jnthn | If it can block, it should have the calls | 23:35
Where by block we really mean "is I/O bound" | 23:36
Rest time; 'night | 23:37
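jnthn's rule in code form: wrap any call that may sit in I/O with the GC blocked/unblocked markers, so a collection on another thread need not wait for this one. A sketch in the shape of MoarVM-internal code; MVM_gc_mark_thread_blocked/MVM_gc_mark_thread_unblocked are the real functions under discussion, but read_with_gc_blocking is a hypothetical helper:

```c
#include "moar.h"   /* MoarVM-internal header; sketch assumes an in-tree build */
#include <unistd.h>

static ssize_t read_with_gc_blocking(MVMThreadContext *tc, int fd,
                                     void *buf, size_t len) {
    ssize_t got;
    MVM_gc_mark_thread_blocked(tc);   /* GC may proceed while we wait */
    got = read(fd, buf, len);         /* potentially I/O bound */
    MVM_gc_mark_thread_unblocked(tc); /* may pause until an in-flight GC ends */
    /* Only touch GC-managed objects after unblocking. */
    return got;
}
```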
23:47 pamplemousse left