01:37 Ven`` joined
01:55 ilbot3 joined
05:32 domidumont joined
05:55 domidumont joined
06:14 brrt joined
brrt | good * #moarvm | 06:18
06:41 brrt joined
07:17 zakharyas joined
TimToady | well, my spectest still hangs in S17-supply/syntax.t test 68 | 07:18
it's not chewing up syscalls, strace reports only futex(0x543d454, FUTEX_WAIT_PRIVATE, 1, NULL | 07:19
well, actually, it's hanging in test 69 | 07:24
still chews up about .004 gig per second
07:29 leont joined
TimToady | interestingly, if I insert signal(SIGINT).tap: { note "HERE"; exit } into that test, it traps the SIGINT, but never runs the block, seemingly | 07:38
so maybe it's the scheduler that's hosed somehow | 07:39
leont | That sort of thing is tricky to schedule in general | 07:50
08:26 leont joined
leont | In POSIX, signals are actually delivered to a thread, not a process (even if signal disposition is per process). This behavior can be useful, but it is also hard to translate meaningfully to a higher-level programming language. | 08:31
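(A minimal Perl 6 sketch of the high-level interface being discussed here, assuming nothing beyond core Rakudo: signal() exposes incoming signals as a Supply, so the handler block runs on a thread-pool thread rather than on whichever OS thread actually received the signal. The message text is illustrative only.)

    react {
        whenever signal(SIGINT) {
            # runs on a scheduler thread, not the thread the OS delivered SIGINT to
            note "caught SIGINT on thread {$*THREAD.id}";
            exit;
        }
    }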
TimToady | a completely fresh clone passed the spectest, but then hung again when that test was run individually, so it's apparently probabilistic, though likely on my machine | 08:32
if I put 'note $++;' in the inner loop, it generally never gets anywhere close to 500, usually stops by the first 20 or 30 | 08:38
well, dinner & | 08:39
brrt | anyway, i was going to suggest | 09:03
let's make a moarvm docs site | 09:04
built from the docs/ directory of MoarVM
nwc10 | using dogfood | 09:09
(or github pages)
brrt | or $whatever-solves-the-problem-using-minimum-brain-cycles | 09:11
:-)
anyway, i'm for merging nine++'s branch today
nwc10 | I'd argue against writing significant code in anything-other-than-Perl 6, even if it solves the immediate problem in fewer cycles than github pages | 09:13
as it's a local minimum
brrt | hmmm | 09:17
i think that's fair
i'm wondering if i should be converting my documentation from org-mode to pod6 | 09:18
09:26 robertle joined
dogbert17 | hmm, MoarVM panic: Internal error: zeroed target thread ID in work pass | 10:03
annoyingly, it disappears under gdb | 10:04
10:43 Ven`` joined
Geth | MoarVM: MasterDuke17++ created pull request #706: Fix comment | 11:54
MoarVM: 33c7521c1d | MasterDuke17++ (committed using GitHub Web editor) | src/6model/reprs/P6opaque.c: Fix comment | 11:56
MoarVM: d274e827af | lizmat++ (committed using GitHub Web editor) | src/6model/reprs/P6opaque.c: Merge pull request #706 from MasterDuke17/patch-1 (Fix comment)
[Coke] | (futex... NULL) I have just been seeing that in failure modes for docker. small world. | 12:06
jnthn | It's what pthreads mutexes boil down to on Linux, in the case they actually need to block | 12:17
And most multi-threaded code uses those :) | 12:19
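(A minimal sketch of how that surfaces from Perl 6 land, assuming a stock Rakudo on MoarVM on Linux: a Lock acquisition that actually has to block ends up in a futex wait much like the strace line TimToady pasted earlier.)

    my $lock = Lock.new;
    my $p;
    $lock.protect: {
        # this background job blocks trying to acquire the same lock;
        # under strace on Linux, that blocked acquire shows up as a
        # futex(..., FUTEX_WAIT_PRIVATE, ...) call
        $p = start { $lock.protect: { say "background job finally got the lock" } };
        sleep 2;      # hold the lock for a while
    }
    await $p;         # the job can now acquire the lock and finish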
TimToady | funny though that it succeeded under heavy load, but fails quickly when run standalone | 12:22
I never got more than 20 or 30 iterations into the ^500 when standalone
and a fresh clone did it too, so it's not something residual from previous builds | 12:23
my failing linux is running a 3.14.1 kernel, while my 3.16.0 kernel on my home server seems to be fine | 12:27
so maybe there's some difference there
also, the succeeding cpu is an AMD, while the failing one is an Intel | 12:30
lizmat | FWIW, my MBP (on which it doesn't fail) has an Intel | 12:31
so feels more like a kernel issue than a processor issue to me
TimToady | well, or a kernel that doesn't quite do hyperthreading right maybe
lizmat | well, fwiw, it does feel good that we're getting closer to the metal with rakudo :-) | 12:32
TimToady | a little funny that there's a different kernel release though, since they're both Linux Mint 17.3 | 12:44
you'd think they'd keep those synced up across architectures | 12:45
14:00 leont joined
geekosaur | they do. *but*, if you need specific hardware support, you may be pointed to installing a newer kernel | 14:00
14:23 buggable joined
dogbert17 | I tried the pragma 'use trace' on the troublesome test 69 in syntax.t. Running the code, it spits out quite a bit of information and then 'hangs' for quite a few seconds on the line | 14:24
whenever Supply.interval(0.001) { done }. It then spits out more data and usually 'hangs' again on the same line for a long time before continuing | 14:25
runtime numbers also differ wildly between runs; in three runs I got 26, 33 and 11 secs respectively on my 3-core VM | 14:29
'max memory usage' according to /usr/bin/time for the three runs was 471704k, 590412k and 230464k respectively | 14:31
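(A sketch of roughly the shape being debugged, pieced together from details in this log: the ^500 loop and the 'note $++;' counter TimToady mentions, plus the whenever line dogbert17 quotes. The real spectest's plan/ok plumbing is omitted and the exact structure is an assumption.)

    for ^500 {
        note $++;                       # TimToady's progress counter; rarely gets past 20 or 30
        react {
            whenever Supply.interval(0.001) {
                done;                   # should end this react block on the first tick
            }
        }
    }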
14:38 AlexDaniel joined
14:48 brrt joined
14:55 leont joined
15:15 brrt joined
15:22 geekosaur joined
16:05 domidumont joined
timotimo | so, syntax.t is happy when i give it even the lowest amount of startup delay time | 16:31
maybe there's a race somewhere where the interval fires before the tap is set up, but that doesn't explain why the "done" isn't run on the second tick
next step for me should probably be to put debug prints everywhere in the scheduler ... | 16:43
afk first, though | 16:44
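(For concreteness, a hedged sketch of what "startup delay" means here: Supply.interval takes an optional second argument that delays the first tick, so the suspected setup-versus-first-emit race can be probed like this; the 0.1 value is arbitrary.)

    react {
        # delaying the first tick (second argument) reportedly makes the test
        # pass, which points at a race between tap setup and the first emit
        whenever Supply.interval(0.001, 0.1) {
            done;
        }
    }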
16:47 dogbert2 joined
17:01 robertle joined
17:02 Ven joined
17:27 leont joined
17:48 Zoffix joined
17:57 Ven joined
samcv | good * moarvm | 18:04
Zoffix | \o\ | 18:06
18:09 Ven_ joined
18:26 leont joined
19:25 patrickz joined
patrickz | Hi everyone! | 19:27
Zoffix | \o
samcv | hi patrickz | 19:29
patrickz | I have submitted a PR fixing the build with older GCCs some days ago. Bumping it again before it gets out of sight: github.com/MoarVM/MoarVM/pull/696 | 19:30
lizmat | and another Perl 6 Weekly hits the Net: p6weekly.wordpress.com/2017/09/25/...-the-pool/ | 20:03
20:10 leont joined
timotimo | queued an unlock vow; queue is 1276 long | 20:26
hm, i wonder. | 20:27
should an interval supply be able to "take back" its "i want to have access to this lock" when the corresponding interval tap gets cancelled? | 20:28
so anyway, the Lock::Async used in the interval supply is accruing more and more contenders for acquisition | 20:42
in theory perhaps the done called by one of the interval ticks is deadlocking with something and keeping the lock held that way, and the program just keeps piling up new ticks that want to execute their code | 20:44
the only things competing for this lock are creating new taps on the interval supply and emitting values | 21:01
hm. is the lock .protect in interval's tap method there to make sure the first tick is only emitted once the whole thing has been set up? but will cueing make you vulnerable to that? | 21:04
well, removing it doesn't seem to fix anything | 21:08
22:09 TimToady joined
jnthn | timotimo: Yes, there's a rule that we must always call the tap(...) callback before any emit(...) or other messages, which is why it holds the lock around the setup | 22:11
Interesting about the queueing up...weird | 22:12
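(A hedged sketch of the rule jnthn describes, not the actual Rakudo internals: a tappable holds a Lock::Async around its setup so that no emitted value can be delivered before the tap(...) callback has run. The class name, the method shape, and the use of $*SCHEDULER.cue are illustrative assumptions.)

    class IntervalIsh {
        has $!lock = Lock::Async.new;

        method tap(&emit) {
            $!lock.protect: {
                # ticks are scheduled while the lock is still held, so no tick
                # can reach &emit before this setup block has finished
                my $cancellation = $*SCHEDULER.cue(
                    { $!lock.protect: { emit(now) } },
                    :every(0.001),
                );
                $cancellation;      # a real implementation would wrap this in a Tap
            }
        }
    }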
timotimo | of course any amount of debug output can mess up scheduling | 22:16
jnthn | Yeah, there's always that fun | 22:17
But still, the done should run
And switch stuff off
timotimo | it should
jnthn | So nothing more piles up
It's very odd
timotimo | i also put a print into the CATCH, that doesn't get called | 22:18
so it doesn't exit it "sideways" via an exception without me noticing
btw, cool, i can book "intentful testing through domain events" for free! but it only lasts 0 days :(
jnthn | There's always a catch with free things...
It seems that one of my supply guts changes has afflicted cro-websockets also :( | 22:19
timotimo | dang :(
jnthn | I stress-tested the darn thing against cro-http; didn't think to try that | 22:20
"It handled 200,000 requests, it can't be that broken" :)
timotimo | hah, on your 'site, "Perl 6" gets split into two lines twice in the bit about Cro
Zoffix | .oO( all the more reason to rename... ) | 22:24
timotimo | i really, really need a better way to visualize concurrent activities than printing to a console | 22:25
exposing telemeh to userspace and building a graphical frontend for that would be one way to do that ... | 22:26
jnthn | Yeah, some kind of concurrency visualizer would be neat
But it'd really need to be Promise/Supply-aware
timotimo | oh yes | 22:27
23:22 geekosaur joined
Geth | MoarVM: 454e9148a2 | (Samantha McVey)++ | docs/collation.asciidoc: Add future info to Collation docs (language/natural sort). Add information about how we would add language-specific sorting, as well as a few details about implementing natural sorting (numeric sorting). | 23:23