03:09
vendethiel joined
dalek | MoarVM: db1f759 | hoelzro++ | src/core/callsite.h: | 06:38
MoarVM: Change MVM_callsite_num_nameds to take a const MVMCallsite
MoarVM:
MoarVM: MVM_callsite_num_nameds doesn't need to change the given callsite,
MoarVM: so we can have the compiler guarantee that so no mistakes are made
06:38
dalek joined
06:56
domidumont joined
07:01
domidumont joined
08:12
FROGGS joined
08:38
Ven joined
08:46
zakharyas joined
09:07
kjs_ joined
09:50
brrt joined
brrt | good * #moarvm | 09:50
nwc10 | good *, brrt | 09:51
brrt | good * nwc10
on the topic of jits, and especially pypy... | 09:52
i recently claimed that i got a 5x speedup on a pypy program compared to a python program
well, i implemented an improved algorithm that runs in O(n log n), rather than O(n^2)
nwc10 | and now your office is much colder? | 09:53
brrt | not only is cpython now faster than pypy original; cpython is faster than pypy, period
nwc10 | and quieter
brrt | yes....
algorithms always win over runtime speed, it seems
nwc10 | OK, odd. If pypy can win on the inefficient algorithm, why can't it also win on the efficient algorithm? Does it now never hit a JIT path? | 09:54
brrt | it just runs too fast
the original algorithm did a million comparisons, this one just a few thousand | 09:55
brrt | btw, wrt the illumos build failure, i have a simple and stupid suspicion | 09:59
my suspicion is that a) lua and minilua use scanf and friends for parsing formats; | 10:00
b) illumos / solaris scanf doesn't understand negative hexadecimals
10:09
dalek joined
arnsholt | I got a 10x speedup (or more, I suspect, for larger problems) using PyPy. But that problem is inherently O(n^2) or thereabouts, so no algorithm win to be had =) | 10:19
10:51
brrt joined
11:39
Ven joined
13:22
kjs_ joined
13:35
dalek joined
14:29
Ven joined
14:56
Hotkeys joined
16:51
domidumont joined
16:57
domidumont joined
16:58
domidumont joined
17:12
FROGGS joined
diakopter | omg | 17:55
17:58
vendethiel joined
timotimo | OYG? | 17:58
18:11
psch joined
18:30
Hotkeys left
18:35
dalek joined
18:39
kjs_ joined
diakopter | timotimo: it's just, core setting compilation spends 6.6% of time in the MVMHash repr ops | 18:49
timotimo | yeah, we use hashes in a big portion of things | 18:50
diakopter | -_- # lolz
timotimo | have you tried using --target=<different values> to find out what parts add what amount of stuff where?
that may be interesting to know
diakopter | no, that's a stellar idea; I'll try it | 18:51
timotimo | also, does leaving out the optimizer with --optimize=off remove a big chunk of hashes, or is it the same overall?
diakopter | timotimo: what's your theory there? | 18:53
timotimo | the optimizer digs around a whole bunch in symbol tables, and it also uses some hashes to store information about what things have what properties | 18:54
diakopter | actually most of the hash usage seems to occur during parsing
do we dedupe strings on compunit compilation? | 18:56
(I don't remember!)
timotimo | yes, we have a string heap that is backed by a hash to do just that
diakopter | oh yeah. | 18:57
and now I remember [re-?]implementing it. O_O
diakopter | yeesh, seems uthash has fallen behind in the latest benchmark comparisons | 19:04
timotimo | oh? | 19:05
i don't know how exactly we do our hashing with the new grapheme stuff, tbh
19:25
leont joined
japhb | diakopter: Link to said benchmarks? | 19:26
diakopter | www.tommyds.it/doc/benchmark is the one I looked at most recently | 19:27
I guess it's not that recent | 19:28
19:32
kjs_ joined
21:08
kjs_ joined
21:15
diakopter___ joined
22:58
FROGGS joined
23:12
FROGGS joined
23:32
FROGGS joined