01:14
Pixi left
01:15
Pixi joined
releasable6 | Next release in ≈1 day and ≈15 hours. There are no known blockers. Please log your changes in the ChangeLog: github.com/rakudo/rakudo/wiki/ChangeLog-Draft | 03:00 | |
04:59
rakkable left
05:00
rakkable joined
05:27
guifa left
06:45
vrurg joined
06:46
vrurg_ left
06:52
vrurg_ joined,
vrurg left
06:53
vrurg joined
06:56
vrurg_ left
07:35
Geth joined
08:29
melezhik_ joined
08:39
melezhik_ left
08:54
melezhik_ joined,
melezhik_ left
Geth | rakudo/main: 726c0e8b83 | (Elizabeth Mattijsen)++ | 2 files RakuAST: fix deparsing of rule/token/with leading doc Spotted by David Warring++ Fixes #5978 Note that this fix also needed a test fix because of some whitespace difference. |
10:02 | |
10:51
melezhik joined
melezhik | Latest rakudo build fails for alpine - sparky.sparrowhub.io/report/rakudo-head/25840 , HTH | 10:52 | |
timo2 | melezhik: do you have a way to get a core dump from the segfault there? | 12:29 | |
melezhik | Sure if you tell me how | ||
timo2 | for a core dump to be emitted, the relevant ulimit has to be nonzero, and ideally big enough to hold the entire contents, so usually `ulimit -c unlimited`. then /proc/sys/kernel/core_pattern decides what happens with a generated core dump: on systems with systemd-coredump, that will handle it; otherwise you can give a filename pattern there, with variables like the PID being substituted in | 12:31 | |
if systemd-coredump does it, the file will be available through coredumpctl, but i don't exactly know how it behaves when containers are involved | 12:32 | ||
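For reference, a minimal sketch of the setup timo2 describes, assuming a Linux host or container; `make test` below is a placeholder for whatever step actually segfaults, not the real Sparky build command, and the snippet is illustrative Python rather than anything from MoarVM or the CI:

```python
# Enable core dumps for this process and its children, then re-run the
# flaky step until it crashes. Linux-only; illustrative, not the CI setup.
import resource
import subprocess

# Equivalent of `ulimit -c unlimited`.
resource.setrlimit(resource.RLIMIT_CORE,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

# /proc/sys/kernel/core_pattern decides what happens to a generated dump:
# either a filename pattern (with %p, %e, ... substituted) or a pipe to a
# handler such as systemd-coredump, which exposes dumps via coredumpctl.
with open("/proc/sys/kernel/core_pattern") as f:
    print("core_pattern:", f.read().strip())

# Placeholder for the crashing step; loop until it fails and leaves a core.
while subprocess.run(["make", "test"]).returncode == 0:
    pass
```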
melezhik | I reran the build and it succeeds now. Maybe a false negative | 12:37 | |
lizmat | or a flapper :-( | ||
melezhik | Ouch 🤕 | ||
I can run rebuild again )) | 12:38 | ||
timo2 | segfault is bad news in any case | 12:49 | |
usually with a segfault there's also output in `dmesg`, but that's less convenient to read, and i don't think it symbolicates anything there |||
melezhik | Anyways, the third attempt succeeded again for the same commit | 12:59 | |
timo2 | what's the easiest way to get exactly the same thing running on a local machine, and then getting a shell in there after the failure happens in order to run the problematic step on repeat?? | 13:01 | |
13:48
librasteve_ joined
melezhik | I think once the second build gets run this is no longer the same environment. | 13:51 | |
timo2 | i put one question mark too many by accident | 13:58 | |
i feel like there could be value in going through the starting state of the NFA and all the states reachable from it via epsilons and copying all the edges into a single state | 13:59 | ||
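A toy sketch of that idea, assuming a plain adjacency-list NFA rather than MoarVM's actual representation, and ignoring fates/accepting states:

```python
# Fold the epsilon-closure of the start state into the start state itself,
# so matching the first character never has to chase epsilon edges.
EPSILON = None  # edge label meaning "follow without consuming input"

def epsilon_closure(nfa, state):
    """All states reachable from `state` via epsilon edges only."""
    seen, todo = {state}, [state]
    while todo:
        for label, target in nfa[todo.pop()]:
            if label is EPSILON and target not in seen:
                seen.add(target)
                todo.append(target)
    return seen

def flatten_start(nfa, start=0):
    """Copy every non-epsilon edge reachable via epsilons onto the start state."""
    merged = set()
    for s in epsilon_closure(nfa, start):
        merged.update((label, target) for label, target in nfa[s]
                      if label is not EPSILON)
    nfa[start] = sorted(merged)  # sorted edges also set up a later binary search
    return nfa

# nfa: state index -> list of (label, target) edges
nfa = {
    0: [(EPSILON, 1), (EPSILON, 2)],
    1: [("a", 3)],
    2: [("b", 3), (EPSILON, 1)],
    3: [],
}
print(flatten_start(nfa)[0])  # [('a', 3), ('b', 3)]
```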
15:01
ds7823 joined
ab5tract | intriguing.. so it sort of pre-loads all potential paths? | 15:12 | |
timo2 | yeah | 15:25 | |
gist.github.com/timo/b40793874d28b...22e9abb767 - here's an nfa trace before and after, you can tell that the first state has 226 outgoing edges by the second column | 15:34 | ||
refresh to also see a json dump of the first few states in each case | 15:44 | ||
i think it's pretty good that, out of these 226 edges, 193 go into one single binary search | 15:45 | ||
so the benefit to the nfa engine there is that, first of all, it doesn't have to chase down the roughly 36 epsilon-reachable states for the very first character | 15:46 | ||
so that's a drastic reduction in disjunct memory accesses right there | 15:47 | ||
15:47
ds7823 left
timo2 | and then the big binary search allows finding the right place among those 193 edges in just a couple of bisections, and then jumping right to the 194th to continue there | 15:47 | |
there are duplicates of fate edges in there, which are probably not that terrible from a performance standpoint | 15:50 | ||
a real, full optimizer might want to fully reconstruct an NFA instead of doing simple passes, but we also don't want to spend too much time in the optimizer, especially once a grammar gets really, really big. we did have a user hit that recently, where the nfa optimizer's runtime just completely outgrew a sensible timeframe | 15:52 | ||
though a fuller optimizer could also result in smaller NFAs at each step, so then it'd be less work to merge them together and then re-optimize | 15:53 | ||
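To make the binary-search point concrete: once the merged start state's character edges are sorted, a lookup can bisect to the right range instead of scanning everything. A hedged sketch with made-up edges and numbers, not the real NFA engine:

```python
# Bisect a sorted table of (low, high, target) codepoint-range edges,
# standing in for the edges that go into "one single binary search".
# Assumes non-overlapping ranges; everything here is illustrative.
import bisect

edges = [
    (ord("0"), ord("9"), 10),   # digits -> state 10
    (ord("A"), ord("Z"), 11),   # upper  -> state 11
    (ord("a"), ord("z"), 12),   # lower  -> state 12
]
lows = [low for low, _, _ in edges]

def target_for(ch):
    """Return the target state whose range contains `ch`, in O(log n) bisections."""
    cp = ord(ch)
    i = bisect.bisect_right(lows, cp) - 1
    if i >= 0 and edges[i][0] <= cp <= edges[i][1]:
        return edges[i][2]
    return None  # fall through to the remaining, non-bisectable edges

print(target_for("k"))  # 12
print(target_for("!"))  # None
```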
[Coke] | (segfault) yes, please retest, we have a release in the next 24h | 15:54 | |
16:11
melezhik left
timo2 | en.wikipedia.org/wiki/NFA_minimization does not give very much hope, huh | 17:11 | |
18:50
apogee_ntv left
18:52
apogee_ntv joined
[Coke] | OK - ready to do the release tomorrow, but will be getting a late start as I also volunteered for something that runs from 7:30-10 AM, and it'll take me a little while to get back and work on the release | 22:25 |
releasable6 | Next release in ≈19 hours. There are no known blockers. Please log your changes in the ChangeLog: github.com/rakudo/rakudo/wiki/ChangeLog-Draft | 23:00 |