02:48
ilbot3 joined
| MasterDuke | .tell jnthn i applied nine's patch and then changed the conditional to `if (bits > 63 || (!sign && bits == 63)) {`, but that broke a bunch of tests. i looked at some of the breaking tests in S02-types/int-uint.t and they seemed a little suspect, but this is really pushing the boundaries of stuff i'm confident in | 04:24 | |
| yoleaux2 | MasterDuke: I'll pass your message to jnthn. | ||
| MasterDuke | .tell jnthn i then got distracted by the spesh bug i/Zoffix++ found with EVAL and div, so i'm not sure i'll come up with a solution anytime soon | 04:28 | |
| yoleaux2 | MasterDuke: I'll pass your message to jnthn. | ||
| MasterDuke | .tell jnthn fyi, spesh/eval/div discussion starts around here irclog.perlgeek.de/perl6-dev/2017-...i_14168641 | 04:31 | |
| yoleaux2 | MasterDuke: I'll pass your message to jnthn. | ||
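(For context: MoarVM's big integers are libtommath mp_ints, and conditionals like the one MasterDuke quotes decide whether a value still fits a native 64-bit slot. A hedged sketch of such a check follows; the function name is hypothetical and this is not nine's actual patch. Worth noting: non-negative 63-bit values such as 2^62 do fit in a signed 64-bit integer, which may be why the stricter `!sign && bits == 63` branch broke tests.)

    #include "tommath.h"

    /* Hypothetical sketch: does bigint i fit in a signed 64-bit integer?
     * mp_count_bits() returns the bit length of the magnitude:
     *   - any magnitude of <= 63 bits fits, since int64 spans
     *     -2^63 .. 2^63 - 1;
     *   - a 64-bit magnitude fits only for the single value -2^63. */
    static int fits_in_int64(mp_int *i) {
        int bits = mp_count_bits(i);
        if (bits <= 63)
            return 1;
        if (bits > 64 || SIGN(i) != MP_NEG)
            return 0;
        /* bits == 64 and negative: fits only if the magnitude is exactly
         * 2^63, i.e. its lowest set bit is bit 63. */
        return mp_cnt_lsb(i) == 63;
    }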
06:48
ggoebel joined
08:34
zakharyas joined
08:44
pyrimidi_ joined
08:53
brrt joined
| brrt | good *, #moarvm | 08:53 | |
| so, long story short: i'm doing cycle breaking wrong | 09:02 | |
| i think i can do it better with a stack |||
| (what cycles are you talking about, brrt? long story, good for a blog post :-)) | 09:03 | ||
09:08
domidumont joined
| jnthn | Bonus points if you include sample code formatted to look like a bicycle :P | 09:43 | |
| yoleaux2 | 04:24Z <MasterDuke> jnthn: i applied nine's patch and then changed the conditional to `if (bits > 63 || (!sign && bits == 63)) {`, but that broke a bunch of tests. i looked at some of the breaking tests in S02-types/int-uint.t and they seemed a little suspect, but this is really pushing the boundaries of stuff i'm confident in | ||
| 04:28Z <MasterDuke> jnthn: i then got distracted by the spesh bug i/Zoffix++ found with EVAL and div, so i'm not sure i'll come up with a solution anytime soon | |||
| 04:31Z <MasterDuke> jnthn: fyi, spesh/eval/div discussion starts around here irclog.perlgeek.de/perl6-dev/2017-...i_14168641 | |||
| brrt wonders how many characters that would require at minimum | 09:56 | ||
| i've got something | 09:58 | ||
| jnthn | probablydance.com/2017/02/26/i-wro...hashtable/ # long read, but maybe interesting :) | ||
| brrt | +_-_ | ||
| O -O | |||
| jnthn | :D | ||
| Next question: is it valid BF? :) | |||
| brrt | (bf?) | 09:59 | |
| jnthn | Brainf**k | ||
| brrt | ah, i see | ||
| no idea | |||
| jnthn | I don't think so, though, 'cus O surely isn't in there :) | ||
| brrt | that's unfortunate :-) | 10:00 | |
| dogbert11 | o/ jnthn how is your brane today, have you had any coffee? | ||
| jnthn | Currently drinking coffee | ||
| dogbert11 | :) | ||
| jnthn | And looking at what I've got left to do from Friday's hacking :) | 10:01 | |
| dogbert11 | ah, the async lib stuff | ||
| jnthn | aye | ||
| dogbert11 | I only have boring gists | ||
| but this is quite short, this always turns up when running a p6 program, is it anything to worry about? gist.github.com/dogbert17/0200459a...bf1ed2f6f1 | 10:02 | ||
| and if the above isn't enough I have another question this time regarding a roast test file, i.e. github.com/perl6/roast/blob/master...eserving.t | 10:22 | ||
| the last test fails from time to time. I'm simply wondering if there might be some kind of race condition wrt writing to '@closings' on line 16? | 10:24 | ||
| brrt | <scratchpad> | 10:44 | |
| so, what we can do is maintain a stack for the cycles | |||
| traverse the reverse map and push the registers we need to process in a cycle | 10:45 | ||
| then swap the start register value to a scratch value, and process the stack in lifo order (i.e. reverse the cycle), inserting swap registers | 10:46 | ||
| then place the cycle-starter into its destination afterwards | 10:47 | |
| e.g. suppose we have cycle a -> b -> c -> a, we loop while (reverse_map[cursor] != start_point) { cycle_stack[cycle_stack_top++] = cursor; cursor = reverse_map[cursor]; } | 10:48 | |
| which would build a, b, but not c | |||
| so we need a slightly different ending criterion |||
| jnthn back | 10:51 | ||
| Power outage. | |||
| dogbert11: ooh, yes, looks like | 10:54 | ||
| nwc10 | jnthn: interesting. Now, *I* am not volunteering (at least, not this calendar year) to transcribe that to C (well, the subset that "we" need) | 11:02 | |
| but I hope someone looks further into it | |||
| jnthn | :) | 11:05 | |
| Yes, I do drop stuff here now and then in hope of provoking somebody into writing code :-) | |||
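(That article's headline trick is Robin Hood open addressing with a hard cap on probe distance. For anyone provoked: a rough C sketch of the probe/steal loop, assuming a power-of-two table and pre-hashed keys; the names here are made up, and this is neither the article's exact design nor anything in MoarVM.)

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint64_t key;
        uint64_t val;
        int8_t   dist;   /* probe distance from the ideal bucket; -1 = empty */
    } rh_entry;

    /* Robin Hood insert: a displaced ("poor") incoming entry may steal the
     * slot of a resident sitting closer to its own ideal bucket ("rich"),
     * and the evicted resident continues probing from there. Assumes the
     * table is never full. */
    static void rh_insert(rh_entry *table, size_t mask,
                          uint64_t key, uint64_t val) {
        size_t idx  = (size_t)key & mask;
        int8_t dist = 0;
        for (;;) {
            rh_entry *e = &table[idx];
            if (e->dist < 0) {                 /* empty slot: take it */
                e->key = key; e->val = val; e->dist = dist;
                return;
            }
            if (e->dist < dist) {              /* resident is richer: swap */
                uint64_t tk = e->key, tv = e->val;
                int8_t   td = e->dist;
                e->key = key; e->val = val; e->dist = dist;
                key = tk; val = tv; dist = td;
            }
            idx = (idx + 1) & mask;
            dist++;   /* the article's design grows the table once dist
                         exceeds roughly log2(capacity) */
        }
    }

(Lookups then only ever scan that bounded probe window, which is what keeps them fast.)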
| brrt | heheeh | 11:18 | |
| anyway, i figured out my solution over lunch | 11:19 | |
| that is that in my example, 'a' needn't be on the stack at all, since it is handled specially | |||
| b and c need to be on the stack. | |||
| </scratchpad> | 11:20 | ||
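(A hedged reconstruction of that scratchpad in C. The convention assumed here is move_map[src] == dst, i.e. the value in register src must end up in dst, with a -> b -> c -> a forming a cycle; emit_move and all names are illustrative, not MoarVM's actual JIT code.)

    enum { MAX_CYCLE = 32 };

    static void emit_move(int dst, int src);   /* stub: emits "dst = src" */

    /* Break the move cycle containing `start`, using one scratch register.
     * As brrt concludes above, the starter itself never goes on the stack:
     * for a -> b -> c -> a with start == a, the stack ends up as [b, c]. */
    static void break_move_cycle(const int *move_map, int start, int scratch) {
        int stack[MAX_CYCLE];
        int top    = 0;
        int cursor = move_map[start];

        while (cursor != start) {          /* collect the rest of the cycle */
            stack[top++] = cursor;
            cursor = move_map[cursor];
        }

        emit_move(scratch, start);         /* park the starter's value */

        while (top > 0) {                  /* LIFO = walk the cycle in reverse, */
            int src = stack[--top];        /* emitting a = c, then c = b */
            emit_move(move_map[src], src);
        }

        emit_move(move_map[start], scratch);   /* finally b = old a */
    }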
11:55
domidumont joined
| dogbert11 | power outage, uh oh, usually means that the coffee machine is broken as well | 12:27 | |
| jnthn | Yeah, thankfully I'd just made one before it :) | 12:33 | |
12:53
domidumont joined
| brrt wonders what would happen if all syscalls except for one (wait()) were asynchronous and just stashed integers into a buffer | 13:03 | |
| moritz | brrt: error handling would be "fun" | 13:04 | |
| brrt | (also, you'd have to emulate all the blocking syscalls with a stash+wait(), which would convert your optimized, register-argument syscall into a random-memory-buffer-syscall, and they'd be slower) | ||
| yeah, that too | |||
| but you could write really nice microkernels that way |||
| (in other words, what if it's just UNIX that sucks?) | 13:07 | ||
| s/sucks/is not optimized for 2017/ | |||
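(Sketching the musing concretely, with everything here invented for illustration: submission is a plain write into a shared ring, completions carry an integer result, and wait() is the only real blocking syscall. As moritz notes, errors would have to travel as stashed integers too, e.g. negative errno values. Linux's much later io_uring ended up with roughly this shape.)

    #include <stdint.h>

    #define RING_SIZE 256

    typedef struct {
        uint32_t op;        /* which syscall to perform */
        uint64_t args[4];   /* its arguments */
        int64_t  result;    /* completion: >= 0 ok, negative errno on error */
    } sys_req;

    typedef struct {
        sys_req  slots[RING_SIZE];
        uint32_t head;      /* next slot user space fills */
        uint32_t tail;      /* last slot the kernel has completed */
    } sys_ring;

    /* Submitting costs no trap into the kernel at all, just a buffer write;
     * a blocking wait() (not shown) would reap completed slots. */
    static uint32_t submit(sys_ring *r, uint32_t op, const uint64_t args[4]) {
        uint32_t slot = r->head++ % RING_SIZE;
        r->slots[slot].op = op;
        for (int i = 0; i < 4; i++)
            r->slots[slot].args[i] = args[i];
        return slot;
    }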
| brrt is reading AnyEvent code and… all of it would just be so much nicer in perl6 | 13:10 | |
13:55
SmokeMachine joined
14:11
lucasb joined
14:29
pyrimidine joined
| dogbert11 | jnthn: FYI t/spec/S17-promise/nonblocking-await.t has a tendency to hang on the third test if the nursery is small | 14:36 | |
14:37
lucasb left
| dogbert11 | I suspect this test 'Hundreds of time-based Promises awaited completes correctly' of being the culprit | 14:50 | |
| gdb says something about '79 __atomic_thread_fence(__ATOMIC_SEQ_CST);' | 14:52 | ||
| #0 AO_nop_full () at 3rdparty/libatomic_ops/src/atomic_ops/sysdeps/gcc/generic.h:79 | |||
| #1 MVM_gc_enter_from_allocator (tc=tc@entry=0x86dc450) at src/gc/orchestrate.c:409 | |||
14:55
Ven joined
| timotimo | that sounds like it's trying to start a gc run | 14:55 | |
| but other threads aren't cooperating | |||
| i.e. it's waiting forever for other threads to say "yup, i'm ready" | 14:56 | ||
| dogbert11 | aha | ||
| timotimo | it'd be interesting to see what the other threads are doing at that moment | ||
| can you "thread apply all bt"? | |||
| dogbert11 | sec | ||
| gist.github.com/dogbert17/5d29fee3...c5cf6fb132 | 14:58 | ||
| timotimo | oh, heh. | ||
| can you re-run with JIT disabled? | |||
| dogbert11 | do I have JIT on 32 bit? | 14:59 | |
| brrt | no, you should not | ||
| timotimo | you normally do not | ||
| unless you've been hiding something from us :) | |||
| dogbert11 | :-) | ||
| brrt | (the 'fastest hash table' article is very nice, jnthn++) | ||
| timotimo | so why is the backtrace so shite for threads 2 through 6? | ||
| dogbert11 | moar is built with optimizations though | ||
| brrt | hmm, could you try and replicate without optimization | 15:00 | |
| timotimo | like ... frame pointer omission? | ||
| dogbert11 | can do, I also attached gdb after the hang | ||
| brrt | FPO seems like a likely candidate yes | 15:01 | |
| timotimo | attaching it afterwards should be okay | ||
| brrt | yeah, that's no problem usually | ||
| dogbert11 | rebuilding ... | ||
15:02
lizmat joined
| dogbert11 | done, refresh the gist | 15:03 | |
| brrt | okay, that looks interesting | 15:04 | |
| dogbert11 | thread 5 seems to be involved in some GC shenanigans as well | 15:13 | |
| timotimo | it doesn't help that the line number in eventloop_queue_work is just the start of the { } inside the MVMROOT | 15:25 | |
| >:( | |||
| and that the interesting frame is just "??" | 15:26 | ||
| i wonder if get_or_vivify_loop has trouble? | |||
| okay imagine this scenario: | |||
| dogbert11 | 'Backtrace stopped: previous frame inner to this frame (corrupt stack?)' | 15:27 | |
| that sounds a bit ominous | |||
| timotimo | two threads try to vivify the loop at the same time, so they block on the mutex_event_loop_start | ||
| no, i mean | |||
| one blocks, the other proceeds into the critical section | |||
| but inside the critical section it has to GC | |||
| i think we just need to mark the thread as blocked inside get_or_vivify_loop | |||
| i'll whip up a patch | |||
| dogbert11 | ++timotimo | 15:28 | |
| timotimo | (except i see no reason for the eventloop to not exist already by that point) | ||
| gist.github.com/timo/66551554fd1c5...d04fab155a - try this please | 15:29 | ||
| dogbert11 | will do, gimme a few minutes ... | 15:31 | |
15:36
brrt joined
| dogbert11 | timotimo: fail, refresh the gist again | 15:41 | |
| timotimo | huh | 15:42 | |
| dogbert11 | should I double check if I applied your patch correctly | 15:44 | |
| timotimo | i'm just surprised it's in serialization_demand_object now | ||
| dogbert11 | it worked the first three times and then hung | 15:45 | |
| timotimo | mhm | ||
| what if you do the same thing i did in the patch, except around serialization.c:2720? | |||
pyrimidine joined
| dogbert11 | will try | 15:47 | |
| MVMSerializationReader *sr = sc->body->sr; | 15:49 | ||
| MVM_gc_mark_thread_blocked(tc); | |||
| MVM_reentrantmutex_lock(tc, (MVMReentrantMutex *)sc->body->mutex); | |||
| MVM_gc_mark_thread_unblocked(tc); | |||
| timotimo: my change must be incorrect, got 'MoarVM panic: Invalid GC status observed while blocking thread; aborting' | 15:51 | ||
| jnthn | MVM_reentrantmutex_lock itself calls MVM_gc_mark_thread_blocked, I believe | 15:53 | |
| dogbert11 | have you backlogged what we're trying to do? | 15:55 | |
| timotimo | oh! | 15:56 | |
| dogbert11 removes the second fix in the meantime :) | |||
| so, there's still a problem then | 16:00 | ||
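(For the record, the pattern being attempted here: any operation that may block a thread for a long time, such as acquiring a contended mutex, must first tell MoarVM's GC that the thread is blocked, so a collection can proceed without waiting on it forever; that is exactly the condition the earlier backtrace showed MVM_gc_enter_from_allocator stuck on. A hedged sketch of the shape around a raw libuv mutex; per jnthn, MVM_reentrantmutex_lock already does this internally, which is why adding it again panicked with "Invalid GC status observed while blocking thread".)

    #include "moar.h"   /* MVMThreadContext, MVM_gc_mark_thread_* */
    #include <uv.h>     /* raw libuv mutex, as MoarVM uses */

    /* Illustrative only: mark ourselves blocked across a lock that may be
     * held while other threads run a GC, then unblock once we hold it. */
    static void locked_section(MVMThreadContext *tc, uv_mutex_t *mutex) {
        MVM_gc_mark_thread_blocked(tc);    /* "GC may run without me" */
        uv_mutex_lock(mutex);
        MVM_gc_mark_thread_unblocked(tc);  /* may join an in-flight GC here */
        /* ... critical section: allocation and GC are safe again ... */
        uv_mutex_unlock(mutex);
    }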
16:08
pyrimidine joined
16:27
lizmat_ joined
16:31
Ven joined
16:33
lizmat joined
17:02
Ven joined
17:57
lizmat joined
18:01
Ven joined
18:21
Ven joined
18:36
pyrimidine joined
18:41
Ven joined
18:44
Ven_ joined
19:18
pyrimidine joined
19:21
lizmat joined
| dogbert17 | timotimo: looks as if we got stuck with t/spec/S17-promise/nonblocking-await.t. Should I report it? | 19:22 | |
| timotimo | yeah, please do | 19:51 | |
| samcv | morning all | ||
20:01
Ven joined
| jnthn | o/, samcv | 20:05 | |
| dogbert17: Will have time to look into some of those issues tomorrow | |||
| dogbert17 | jnthn, timotimo: github.com/MoarVM/MoarVM/issues/543 | 20:11 | |
| jnthn | Hm, interesting one | 20:13 | |
| dogbert17 | haven't seen it fail when the nursery is at its default size | 20:14 | |
20:17
pyrimidine joined
| jnthn | Suspect it can happen, but just unlikely to | 20:18 | |
| Which is why we nursery stress :) | |||
| dogbert17 | indeed :) | 20:19 | |
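(For context, "nursery stressing" conventionally means rebuilding MoarVM with a drastically smaller fixed nursery so collections fire constantly, making rare GC-interaction bugs like the hang above far more reproducible. The size is a compile-time constant; the exact name and value below are illustrative and may differ between versions.)

    /* src/gc/collect.h -- shrink the nursery to force frequent GC runs
     * (the stock size is a few megabytes per thread). */
    #define MVM_NURSERY_SIZE (32 * 1024)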
20:32
puddu joined
| puddu | .tell brrt UNIX is not optimized for 2017? What if all syscalls were synchronous apart from one, clone, which is also synchronous but lets you implement any kind of multiprocessing you want? | 20:35 | |
| yoleaux2 | puddu: I'll pass your message to brrt. |||
21:38
agentzh joined
22:03
agentzh joined
22:14
pyrimidine joined
22:21
pyrimidine joined
22:59
pyrimidine joined