00:30  yoleaux joined, travis-ci joined
travis-ci | MoarVM build failed. Samantha McVey 'Merge pull request #781 from samcv/ucd-critic | 00:30
travis-ci.org/MoarVM/MoarVM/builds/330705279 github.com/MoarVM/MoarVM/compare/a...4869e37bcb
00:30  travis-ci left
01:37  Kaiepi joined
01:39  Kaiepi joined
02:57  ilbot3 joined
03:06  mtj_ joined
03:13  AlexDaniel joined
04:14  mtj_ joined
04:53  nativecallable6 joined
05:16  Kaiepi joined
05:50  statisfiable6 joined
07:52  domidumont joined
08:13  domidumont joined
08:27  brrt joined
08:34  zakharyas joined
08:56  zakharyas joined
09:01  zakharyas joined
09:20  jsimonet joined
09:27  zakharyas joined
11:17  zakharyas joined
11:34  yoleaux joined
11:44  psch joined
11:49  releasable6 joined
12:04  brrt1 joined
12:06  AlexDani` joined
12:14  yoleaux joined
12:36  benchable6 joined
12:37  coverable6 joined, reportable6 joined
13:27  zakharyas joined
13:59  domidumont joined
14:05  yoleaux joined
14:14  bisectable6 joined
14:18  Kaiepi joined
14:51  domidumont joined
15:17  zakharyas joined
15:28  yoleaux joined
15:40  nebuchadnezzar joined, timotimo joined
15:43  releasable6 joined
15:54  yoleaux joined
15:58  ggoebel joined
16:04  brrt joined
16:52  zakharyas joined
17:06  brrt joined
17:11  zakharyas joined
17:34  geospeck joined
17:43  evalable6 joined
17:56  Kaiepi joined, AlexDani` joined
18:34  Kaiepi joined
18:38  domidumont joined
18:43  MasterDuke joined
18:44  domidumont joined
18:46  domidumont1 joined
18:49  Kaiepi joined
18:51  domidumont joined
18:58  domidumont joined
19:08  domidumont joined
19:09  domidumont joined
19:14  domidumont joined
MasterDuke | is there any benefit to creating templates that just call c functions for the expr jit? or is the fall(back|through) to the old jit just as efficient? | 19:22
19:36  quotable6 joined
19:40  domidumont joined
timotimo | depends on what the next step in the bb would have been | 19:53
MasterDuke | would it ever be less efficient? | 19:55
timotimo | shouldn't be
jnthn | Possibly the expr JIT can avoid some dupe loads
timotimo | that's true
jnthn | So could come out better
timotimo | it might also be able to avoid stores :)
MasterDuke | sounds like it's worth it to go to the effort of creating templates for all the ops then? | 19:57
timotimo | just make sure to prioritize working on the ones that are more common | 19:58
MasterDuke | sure | 19:59
timotimo | does the "parse jitgraph" script help any? | 20:00
20:00  domidumont1 joined
MasterDuke | however, the most common, sp_findmeth, might be out of my paygrade right now. it's been ~10 years since i've really read assembly and ~15 years since i've written any | 20:01
haven't really looked at the output yet | 20:02
but i think it's saying all were skipped | 20:03
timotimo | i'd rather read the C implementation in interp.c than try to go from emit.dasc to a template jit thingie
MasterDuke | because of surprising lines
timotimo | right, that's unsurprising :)
MasterDuke | but wouldn't what's in emit.dasc be optimized?  | 20:04
timotimo | i'd expect it to be 1:1 translated from interp.c
or you're saying the expr jit should do a better job optimizing a conversion of the c implementation than a conversion of the asm? | 20:05
hm, the comments for sp_findmeth in both interp.c and emit.dasc look similar | 20:07
timotimo | the wins come from the template jit being able to look at multiple ops in a row; from C to emit.dasc it's mostly 1:1 translations because the lego jit doesn't have much additional information | 20:10
MasterDuke | ah | 20:12
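(A toy C sketch of the dupe-load/store point above — none of these names come from MoarVM; it only illustrates why letting the expr JIT see several ops in a row can beat emitting each op in isolation.)

    #include <stdio.h>

    /* Toy stand-in for a frame's register file; nothing here is real MoarVM code. */
    typedef struct { long work[8]; } Frame;

    /* Lego-JIT-like: each op is emitted in isolation, so op 2 reloads the
       value op 1 just stored to the register file. */
    static void two_ops_isolated(Frame *f) {
        f->work[4] = f->work[2] + 1;   /* op 1: store the intermediate */
        f->work[6] = f->work[4] * 2;   /* op 2: immediately load it back */
    }

    /* Expr-JIT-like: with both ops in one expression tree, the intermediate
       can stay in a machine register, so the extra store and load disappear. */
    static void two_ops_fused(Frame *f) {
        f->work[6] = (f->work[2] + 1) * 2;
    }

    int main(void) {
        Frame a = { .work = { 0, 0, 20 } }, b = a;
        two_ops_isolated(&a);
        two_ops_fused(&b);
        printf("%ld %ld\n", a.work[6], b.work[6]);   /* both print 42 */
        return 0;
    }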
20:36  Kaiepi joined
20:37  domidumont joined
timotimo | it can do better than interp.c if a type is known and a reprop is called, push_o for example. interp.c has to look up an object's STable, find its REPR, and grab the function pointer from in there | 20:42
but the jit can put the function pointer right into the compiled assembly code
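(A rough C sketch of that dispatch difference, using invented stand-ins for STable and REPR rather than MoarVM's real structures:)

    #include <stdio.h>

    typedef struct Obj Obj;
    typedef struct { void (*push_o)(Obj *obj, Obj *value); } ReprOps;  /* stand-in for a REPR's op table */
    typedef struct { const ReprOps *repr; } STable;                    /* stand-in for an STable */
    struct Obj { const STable *st; };

    static void array_push_o(Obj *obj, Obj *value) { (void)obj; (void)value; puts("pushed"); }
    static const ReprOps array_repr = { array_push_o };
    static const STable  array_st   = { &array_repr };

    /* Interpreter-style dispatch: chase STable -> REPR -> function pointer on every call. */
    static void push_interp(Obj *obj, Obj *value) {
        obj->st->repr->push_o(obj, value);
    }

    /* JIT-style: the type is known while compiling, so the call target is baked in. */
    static void push_jitted(Obj *obj, Obj *value) {
        array_push_o(obj, value);
    }

    int main(void) {
        Obj arr = { &array_st }, elem = { &array_st };
        push_interp(&arr, &elem);
        push_jitted(&arr, &elem);
        return 0;
    }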
22:22  squashable6 joined, unicodable6 joined, committable6 joined, bloatable6 joined
timotimo | jnthn: looking into beefing up udp receiving; is it okay to be ignoring UV_UDP_PARTIAL flags if they get set? | 22:40
* Indicates message was truncated because read buffer was too small. The remainder was discarded by the OS. Used in uv_udp_recv_cb. | 22:41
22:41  yoleaux joined
timotimo | though i don't think we control buffer size at all from our end? | 22:42
jnthn | If I remember correctly, this is about the size of the read buffer we pass to libuv
libuv drops bytes on the floor if that's too short
But I think (though please check it) that our buffer size is the maximum packet size anyway | 22:43
geekosaur | um, the OS UDP interface does have such a flag
timotimo | doesn't libuv decide what size it wants and call on_alloc from us?
geekosaur | if you tell it to recv() or recvfrom() into a buffer that is too small for the packet received, it sets the partial flag on the result
jnthn | It passes a suggested size, which we can ignore or respect
geekosaur: That's probably why libuv works that way, then :) | 22:44
timotimo | we have no way to see if an incoming packet is too big for us?
geekosaur | nope. UDP is kinda dumb that way
jnthn | geekosaur: Is there a maximum possible size it could be, though?
(Or perhaps a way to find that out?) | 22:45
geekosaur | there is, but it is system and to some extent network dependent.
timotimo | MTU is the upper limit, right?
geekosaur | and that last part basically means you can't find out
jnthn | If I'm reading en.wikipedia.org/wiki/User_Datagra..._structure right then it's 2 bytes for the size field in the packet | 22:46
geekosaur | it is, but note that (a) path MTU discovery is not really a thing on UDP, because it requires retransmits, (b) the max MTU is the lowest max MTU of the traversed network nodes
jnthn | Which would imply 64KB is the most one could expect
geekosaur | and (c) there is no guarantee that two packets from the same source to the same destination follow the same path | 22:47
timotimo | you can also send UDP over a unix domain socket, right?
does that give us practically infinite mtu?
geekosaur | you can, but the max MTU will depend on kernel buffers for unix domain sockets, which often are the same as pipe buffers | 22:48
timotimo | fair enough
int uv_recv_buffer_size(uv_handle_t* handle, int* value) | 22:50
maybe this is how we can change the buffer size | 22:51
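(A sketch of how the alloc callback and the UV_UDP_PARTIAL flag fit together in libuv — this is not MoarVM's actual io code; the 65535-byte buffer is just the assumption that a UDP length field cannot describe a larger payload, per the discussion above.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <uv.h>

    #define MAX_UDP_PACKET 65535   /* assumption: largest payload a UDP length field can describe */

    /* libuv only *suggests* a size; the alloc callback may hand back a bigger buffer. */
    static void on_alloc(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf) {
        (void)handle; (void)suggested_size;
        *buf = uv_buf_init(malloc(MAX_UDP_PACKET), MAX_UDP_PACKET);
    }

    static void on_recv(uv_udp_t *handle, ssize_t nread, const uv_buf_t *buf,
                        const struct sockaddr *addr, unsigned flags) {
        (void)handle; (void)addr;
        if (flags & UV_UDP_PARTIAL)
            fprintf(stderr, "datagram truncated: read buffer was too small\n");
        if (nread > 0)
            printf("received %zd bytes\n", nread);
        free(buf->base);
    }

    /* elsewhere, after uv_udp_init()/uv_udp_bind():
           uv_udp_recv_start(&udp_handle, on_alloc, on_recv);  */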
lizmat | from former work, I know that UDP packets that get too long may also be dropped completely by routers | 22:57
so it's usually in the interest of the sender to keep packets as small as possible
timotimo | true, there are no ICMP fragmentation reports for udp | 22:58
because if you're using UDP, you're fine with data getting lost
i wonder if holding on to the last sockaddr, so as not to rebuild string after string after string of the same ip address over and over again, would be worth it | 23:01
(and also an Int object for the port)
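(A small C sketch of that caching idea, with invented names and IPv4 only for brevity: keep the last peer address around and only rebuild the textual host and port when the sender changes.)

    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    static struct sockaddr_in last_addr;
    static char last_host[INET_ADDRSTRLEN];
    static int  last_port;
    static int  have_last;

    /* Return host/port for addr, reusing the cached values when the
       sender is the same as last time. */
    static void peer_strings(const struct sockaddr_in *addr, const char **host, int *port) {
        if (!have_last || memcmp(addr, &last_addr, sizeof last_addr) != 0) {
            inet_ntop(AF_INET, &addr->sin_addr, last_host, sizeof last_host);
            last_port = ntohs(addr->sin_port);
            last_addr = *addr;
            have_last = 1;
        }
        *host = last_host;
        *port = last_port;
    }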
23:09  greppable6 joined
lizmat | and another Perl 6 Weekly hits the Net: p6weekly.wordpress.com/2018/01/22/...-optimism/ | 23:14
timotimo | i'm slightly confused. the callback that (i believe) gets called when a udp message comes in already has parameters for 2x port and 2x hostname, but none of them got passed by the udp impl so far, so why doesn't that explode violently? %) | 23:19
(that was in fact the listener supply, not the message supply) | 23:25
23:27  SmokeMachine joined
timotimo | are optional parameters in our callbacks for asyncreadbytes and friends costly at all i wonder ... | 23:53
jnthn | Not terribly | 23:54
timotimo | i don't have a design for the API of receiving udp messages including the host and port, but it looks like the plumbing around everything will make returning proper objects a bit annoying | 23:57
i.e. the encoded supply just grabs the emitted things from the supply and calls .decode on it to get a string instead of the buf