Geth | MoarVM: samcv++ created pull request #717: Disable tests on Appveyor until we can fix the build | 00:18 |
AlexDaniel | what a coincidence | 00:22 | |
oh, not so much of a coincidence, it's just that I didn't backlog here… | 00:24 | ||
.tell brrt fwiw a ticket for that: github.com/MoarVM/MoarVM/issues/718 | |||
yoleaux | AlexDaniel: I'll pass your message to brrt. | ||
samcv | AlexDaniel, we can disable the testing which has always had issues. so hopefully we can fix the build then someone can make sure it doesn't freeze during make test with nqp | 00:25 | |
and it seems like it's turning "/" into "\". but i can see the code that does it for the Makefile and one other file. but maybe i just can't find where we possibly do it for other files. though that emit.c is created partly by dynasm i think | 00:26 | ||
AlexDaniel | samcv: all I know is that we've just noticed a real failure because appveyor was doing its job | ||
so I'd much rather have it red than not have it at all | 00:27 | ||
samcv | you mean on the rakudo appveyor or the moarvm one | ||
AlexDaniel | on moar | ||
samcv | well, it's not helpful to have it red when the most common case is that the build breaks for windows while the testing itself is unchanged | ||
that's the most common thing that happens RE windows builds | 00:28 | ||
and the test results are not useful if the testing doesn't even complete because of some unknown thing | 00:29 | ||
AlexDaniel shrugs | 00:31 | ||
00:43 committable6 joined
samcv | because of that we didn't notice the build actually breaking with the recent JIT merge | 00:44 | |
00:53 bisectable6 joined
AlexDaniel | samcv: oooh, I understand your point now | 01:08 | |
samcv: yeah, alright. Sorry | 01:09 | ||
samcv | np AlexDaniel :) don't apologize. | 01:10 | |
AlexDaniel | well, I should've thought about it a little bit more :) | 01:11 | |
it's kinda not OK that we'll have tests running on travis but not appveyor, but yeah, that'll probably be more useful | |||
samcv: why not merge it? | 01:12 | ||
samcv | your view was perfectly understandable without more information. so it's fine. i hope soon we'll figure out the issue. i may eventually make a windows VM or something | ||
AlexDaniel, was trying to see what appveyor would come back with | |||
i'm less familiar with appveyor compared to Travis, and have had to do many more trials than i'd expect to get it right | 01:13 | ||
though i did that before i knew that appveyor was always failing. so i'll merge now :P | 01:14 | ||
AlexDaniel | OK I renamed github.com/MoarVM/MoarVM/issues/697 accordingly | ||
Geth | MoarVM: f6548703f0 | (Samantha McVey)++ | .appveyor.yml
Disable tests on Appveyor until we can fix the build
If it stalls during testing, we get no feedback on when the build itself has broken.
MoarVM: e264bfa884 | (Samantha McVey)++ (committed using GitHub Web editor) | .appveyor.yml
Merge pull request #717 from samcv/ap
Disable tests on Appveyor until we can fix the build
samcv | AlexDaniel++ | ||
AlexDaniel | samcv++ :) | ||
01:22 releasable6 joined
01:33 bisectable6 joined, squashable6 joined, statisfiable6 joined
01:55 ilbot3 joined
01:57 zakharyas joined
02:03 arnsholt joined
AlexDaniel | b: help | 02:24 | |
bisectable6 | AlexDaniel, Like this: bisectable6: old=2015.12 new=HEAD exit 1 if (^∞).grep({ last })[5] // 0 == 4 # See wiki for more examples: github.com/perl6/whateverable/wiki/Bisectable | ||
03:05 Ven`` joined
03:07 TimToady joined
03:21 Util joined
05:16 evalable6 joined
06:09 domidumont joined
06:17 domidumont joined
07:12 domidumont joined
07:15 hoffentlichja joined
07:29 patrickz joined
07:30 patrickz joined
08:36 domidumont joined
10:02 robertle joined
Geth | MoarVM: a9f4770110 | (Jonathan Worthington)++ | 6 files | 10:10 |
Make thread GC nurseries scale to need
Many threads (such as the specialization worker and the Rakudo GC supervisor) run and allocate relatively little. Thus they will never fill a nursery, and so there's no point them having a sizable one. This change starts all but the main thread off with a smaller nursery. ... (10 more lines)
nwc10 | good *, jnthn. | 10:13 | |
will there be bloggage about that commit, with some numbers for the win? | 10:14 | ||
jnthn | Dunno, given I'm so behind with blogposts I have to write for funded stuff... | 10:15 | |
nwc10 | you've now run out of funds? | 10:16 | |
jnthn | Yeah | ||
nwc10 | :-( | ||
that's a bother | 10:17 | ||
jnthn | True, but at least it means work ain't adding to my blogging backlog :P | ||
The ticket I was looking at is rt.perl.org/Ticket/Display.html?id=131915 fwiw | 10:18 | ||
Which complained about memory use after just doing one run(...) | |||
Which is implemented in terms of Proc::Async | |||
And so starts some background threads or so | |||
Most of the improvement came from the new scheduler | 10:19 | ||
Which starts a bunch less threads | |||
But the specializer worker is a "normal" thread too (I don't want "two kinds of threads") | |||
And the supervisor in the scheduler is another one | |||
So giving them smaller nurseries is a win | 10:21 | ||
Base memory according to the measurement technique used in the ticket is now 90% of what it was before the run and 83% of what it was after | |||
m: say 114948 / 76896 | 10:22 | ||
camelia | 1.494850 | ||
nwc10 | ah OK cool thanks | ||
I hadn't realised that this was driven by a particular ticket | ||
and had assumed that it was a more general "this optimisation is a big win" | |||
jnthn | m: say 139220 / 85104 | ||
camelia | 1.635881 | ||
jnthn | Phew :) | ||
I was worried for a moment that I'd made the base case sufficiently better that it would cancel out my actual win :P :P | 10:23 | ||
m: say 56568 R/ 215976 | |||
camelia | 3.817989 | ||
jnthn | OK. So I've improved that ratio from 3.8 to 1.5 | ||
10:25 evalable6 joined
jnthn | I'm curious what number that process is actually reporting, though, as /usr/bin/time shows a rather lower maxresident | 10:28 |
And massif shows lower still, but I bet that doesn't count the mmap'd bytecode files | |||
ilmari | virtual vs. resident size? | 10:29 | |
lizmat | jnthn: re attribute default initialization: what is the reason for the "will build" trait, why not just call set_build on the attribute in Actions ? | ||
jnthn | Presumably something like that | ||
lizmat: Probably because `will build` was in the design docs, and it was preferable to have all those things go through one codepath, so folks could override the trait if they had reason to | 10:31 | ||
lizmat | ok | ||
jnthn | Though it's been that way for so many years, I honestly don't remember exactly the thinking :) | ||
lizmat | also: I see a "WANTED" in the attr init code, whereas at least atm, the method is only called internally | 10:32 | |
not sure about the runtime ramifications of WANTED | |||
jnthn | Isn't WANTED just a compile-time tracking annotation? | 10:33 | |
lizmat | yeah, probably is... | ||
jnthn | So no impact | ||
lizmat | still, how would that work with a method ? | ||
jnthn | It's just saying the return value is wanted | ||
Which they always are | |||
At a guess, anyway | 10:34 | ||
lizmat | ok, so it's more a comment than anything else. :-) | ||
jnthn | Hmm, the spesh worker thread gets a spesh log allocated, which is a bit pointless 'cus it never runs any user code | 10:37 | |
lizmat | jnthn: looking at github.com/rakudo/rakudo/blob/nom/....nqp#L9165 | 10:38 | |
isn't line 9168 then superfluous ? | |||
as it's already marked scope<lexical> in 9165 ? | 10:39 | ||
also: why lexical, why not local ? | |||
jnthn | That is a bit odd | 10:40 | |
Because an attribute initializer may have nested blocks | |||
has $.foo = do { ... } | |||
Could try removing line 9168 | |||
lizmat | ok, will do | ||
10:46 evalable6 joined
Geth | MoarVM: 2553a0e5a6 | (Jonathan Worthington)++ | src/core/threads.c | 10:49 |
Don't give internal workers a spesh log
Saves a tiny bit of work, and avoids an allocation, although it will barely register as an actual saving thanks to OS cleverness around memory that is allocated but then never touched.
jnthn | oops | 10:50 | |
oh, phew | |||
Thought I'd gone and committed debug output. Just hadn't done a make install again after deleting it :) | |||
lizmat | jnthn: fwiw, I'm seeing flappers in make spectest that I haven't seen flapping before | 10:56 | |
so I suspect we still have some gremlins in the new spesh/JIT code | |||
jnthn | That or the scheduler rewrite, or the complete re-working of supply concurrency management :P | 10:57 | |
There's also the occasional SEGV that shows up in CORE.setting compile | |||
The paste with the backtrace on it expired, alas | |||
lizmat | example: dieing in test 4 of integration/advent2011-day10.t | ||
jnthn | Seems fine here | 10:59 | |
lizmat | yeah... not reproducible | ||
which was my point :-) | |||
nwc10 | oooh, I just got | 11:01 | |
moar: src/6model/sc.c:383: MVM_SC_WB_OBJ: Assertion `!(obj->header.flags & MVM_CF_FORWARDER_VALID)' failed. | |||
I think from target perl6-gdb-m | 11:02 | ||
sadly can't reproduce | |||
timotimo | jnthn: should there be any consideration for nursery size reductions from strings and such when the nursery could still grow in size? | 11:11 | |
jnthn | Think I'd prefer those two not to interact | 11:13 | |
11:17 domidumont joined
jnthn | GC stressed CORE.setting build set off while I go for lunch | 11:20 | |
bbiab | |||
dogbert17 | jnthn: btw, t/spec/S17-promise/lock-async.t takes 1m40s on my 4-core $work machine. Do you want me to RT yesterday's findings wrt worker threads? | 11:21 |
lizmat | jnthn: fwiw, I cannot find a reference to "will build" in the specs, only that the Attribute object has a "build" method | ||
abbiab | 11:22 | ||
11:59 domidumont1 joined
jnthn | Dammit, it did trip a GC fromspace access assert but I was apparently so hungry I forgot to set a breakpoint on MVM_panic | 12:03 | |
nwc10 | bother | 12:04 | |
do it again while ilmari goes to lunch... | |||
timotimo | get used to using rr, perhaps :) | ||
jnthn | ah, it hits pretty early on | ||
12:07 ZofBot joined
jnthn | haha | 12:07 | |
When I wrote this code... | |||
ParameterizeReturnData *prd = (ParameterizeReturnData *)sr_data; | |||
I hadn't yet learned that "prd" is Czech for "fart" :D | 12:08 | ||
Think I found the problem, anyways | 12:09 | ||
nwc10 | with a rolled R? seems quite | ||
alliterative | |||
no, that wasn't the word I meant | |||
I slack. I'll go back to "breaking things" | 12:10 | ||
jnthn | Yeah :) | ||
I think onomatopoeia is the one :) | |||
nwc10 | that was the one | ||
jnthn | Seems it's surviving longer, anyways | 12:11 | |
ah, it means I fix github.com/MoarVM/MoarVM/issues/707 also | |||
[Coke] | (re: broke the windows build) worked fine here (failed about as many spectests as windows has been failing) | 12:13 |
jnthn | Yeah, got through the build. Nice :) | ||
Wowser, make test ain't happy under GC stress | 12:14 | ||
12:35 domidumont joined
12:43 domidumont1 joined
jnthn | spectest neither for that matter | 12:49 | |
Geth | MoarVM: 99bc69a2f4 | (Jonathan Worthington)++ | src/6model/parametric.c | 13:04 |
Missing MVMROOTs in type parameterization handling
The callback data in `prd` is no longer being marked by this point, so need to grab it and root it before using it. Fixes #707, and together with it the occasional crash while building CORE.setting.
13:07 hoelzro joined
Geth | MoarVM: 716f6fcfaf | (Jonathan Worthington)++ | 2 files | 13:16 |
Missing rooting/barriering in C[PP]Struct
13:25 weabot joined
13:30 cog_ joined
Geth | MoarVM: 953c38423c | (Jonathan Worthington)++ | src/core/frame.c | 13:35 |
Set frame->static_info in allocate_frame
So it's there right from the start. If the spesh log entry fills up the log and causes it to be sent, then we might have a frame with no static_info on the temp roots stack. This can cause a NULL pointer dereference inside the GC. We could NULL-guard it there, but it's less work to just make sure that situation can never happen.
MoarVM: be8e7dfa0c | (Jonathan Worthington)++ | src/gc/worklist.h
Catch frame worklist adds with NULL static_info
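A self-contained sketch of the idea behind that second commit (the types and helper here are simplified stand-ins invented for the example, not MoarVM's real worklist API): fail at the point where the bad frame is handed to the GC, instead of dereferencing NULL deep inside marking.

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical, simplified stand-ins for the real MoarVM types. */
    typedef struct { const char *name; } StaticFrameInfo;
    typedef struct { StaticFrameInfo *static_info; } Frame;
    typedef struct { Frame *items[64]; size_t n; } GCWorklist;

    /* Fail loudly the moment a frame without static_info reaches the GC
     * worklist, rather than crashing later with a NULL pointer dereference
     * while marking its contents. */
    static void worklist_add_frame(GCWorklist *list, Frame *frame) {
        assert(frame->static_info != NULL &&
               "frame reached the GC worklist before static_info was set");
        list->items[list->n++] = frame;
    }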
dogbert17 sees cool bugfixes, jnthn++ | 13:41 | ||
13:45 zakharyas joined
jnthn | Hunting another one. Somehow, a frame ends up with an outdated ->outer into fromspace | 13:45 | |
dogbert17 | is it a MoarVM issue? | 13:46 | |
timotimo | well, probably | ||
jnthn | Yeah, surely | ||
timotimo | you can't really modify that stuff from nqp or rakudo | ||
jnthn | Oh, unless you mean "is it a MoarVM GitHub Issue" | ||
In which case, no | |||
dogbert17 | aha, interesting | 13:47 | |
yeah, GitHub issue is what I was after :) | |||
is it the MVM_gc_root_add_frame_roots_to_worklist panic you're hunting? gist.github.com/dogbert17/f8cb9974...da44ef40da | 14:00 | ||
jnthn | I've probably got a fix for that one locally :) | 14:01 | |
dogbert17 | impressive | 14:02 | |
Geth | MoarVM: 856a9a6f93 | (Jonathan Worthington)++ | src/core/frame.c | 14:05 |
Missing MVMROOT of outer/code_ref
MoarVM: 0374f7383a | (Jonathan Worthington)++ | src/core/frame.c
Set frame->caller up earlier
The GC could see the frame and the old value, which might have been junk, in some cases. We could NULL it so that didn't happen, but instead just set it earlier before GC can ever be triggered.
jnthn | Enough GC bug hunting for today, methinks | 14:26 | |
That should at least make things a little better, though :) | 14:27 | ||
dogbert17 | indeed it will, jnthn++ | ||
jnthn | Seems there's still some more hunting to do another day, though | 14:29 | |
dogbert17 | so you've seen more problems? | 14:33 | |
jnthn | Yeah | 14:37 | |
dogbert17 | gah :) | ||
timotimo | i wish there was some kind of static analysis we could teach how to find these problems | 14:38 | |
the missing root problems, that is | |||
it seems like the clang static analyzer is built to be extended | 14:44 | ||
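The class of bug being discussed, sketched out (illustrative only: box_with_extra and do_something_with are invented names; MVMROOT is assumed to take (tc, obj, { ... }) as in the MoarVM source of this period, and MVM_repr_alloc_init is the usual REPR allocation call):

    #include "moar.h"  /* MoarVM's umbrella header */

    /* Hypothetical consumer, only here so `extra` is used after the allocation. */
    static void do_something_with(MVMThreadContext *tc, MVMObject *box, MVMObject *extra);

    /* Any allocation can trigger GC, and MoarVM's GC moves objects, so a raw
     * C pointer held across an allocation may end up pointing into fromspace. */
    static MVMObject * box_with_extra(MVMThreadContext *tc, MVMObject *type,
                                      MVMObject *extra) {
        MVMObject *box;

        /* BUG: if this allocation triggers GC, `extra` may now be stale:
         *     box = MVM_repr_alloc_init(tc, type);
         */

        /* FIX: register `extra` as a temporary GC root around the allocation,
         * so the collector updates the pointer if the object moves. */
        MVMROOT(tc, extra, {
            box = MVM_repr_alloc_init(tc, type);
        });

        do_something_with(tc, box, extra);
        return box;
    }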
dogbert17 | jnthn: have you only made changes in MoarVM today? | 14:58 | |
timotimo | i see no others | 15:00 | |
jnthn | dogbert17: Yeah | 15:01 | |
dogbert17 | then I've probably messed up my rakudo install :) | 15:02 | |
15:09 brrt joined
brrt | ohai | 15:09 | |
yoleaux | 4 Oct 2017 23:46Z <samcv> brrt: i think your even_moar_jit branch merge broke the windows build | ||
00:24Z <AlexDaniel> brrt: fwiw a ticket for that: github.com/MoarVM/MoarVM/issues/718 | |||
samcv | oh hi brrt :-) | ||
brrt | i'll take a quick look | 15:10 | |
samcv | my guess is it's putting in a path with '\' | ||
brrt | i'm basically in the process of moving | ||
timotimo | moving house? | ||
samcv | so it's getting unknown escape sequences. but i'm not sure how that info arrives in that file | ||
but it only is an issue on windows and it tries to interpret it as an escape sequence. it was my belief that in C files '/' is used? | 15:11 |
jnthn | samcv: Are you sure that's what the error is about? | ||
I thought that was just a warning | 15:12 | ||
And the error was something else after it? | |||
Maybe I misread the output though | |||
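For context, a tiny illustration (the path is made up) of the warning being talked about: a backslash path dropped into a generated C string literal gets read as escape sequences.

    /* The compiler parses "\c" and "\m" as (unknown) escape sequences.   */
    const char *bad = "src\core\moar.c";    /* warning: unknown escapes   */
    const char *ok  = "src/core/moar.c";    /* forward slashes are fine   */
    const char *ok2 = "src\\core\\moar.c";  /* or escape the backslashes  */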
dogbert17 | there must have been changes in how MVM_GC_DEBUG works | 15:13 | |
if I turn it on I get fromspace panics immediately | 15:14 | ||
Geth | MoarVM: f97ce7a855 | (Bart Wiegmans)++ | 2 files | 15:15 |
Fix definition of MIN
Had a semicolon (';'), which it really shouldn't. Probably fixes appveyor issue
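A minimal standalone illustration of why that stray semicolon matters (this is not the MoarVM source, just the general macro pitfall):

    #include <stdio.h>

    /* Broken: the trailing ';' becomes part of every expansion. */
    #define MIN_BAD(a, b) (((a) < (b)) ? (a) : (b));

    /* Fixed: no semicolon, so the macro can sit inside larger expressions. */
    #define MIN(a, b) (((a) < (b)) ? (a) : (b))

    int main(void) {
        int x = 3, y = 5;
        /* With MIN_BAD the next line fails to compile, because the expansion
         * injects a ';' into the middle of the argument list:
         *     printf("%d\n", (((x) < (y)) ? (x) : (y)););
         */
        printf("%d\n", MIN(x, y));  /* prints 3 */
        return 0;
    }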
brrt | anyway, moving house, yes, so afk | ||
timotimo | viel erfolg! | ||
brrt | thx | ||
dogbert17 | bizarre | 15:16 | |
15:28 zakharyas joined
Geth | MoarVM: 4092ccebca | (Jonathan Worthington)++ | 3 files | 15:36 |
Avoid a thread yield loop in mark_thread_ubnlocked
Instead, use a condition variable, that is set when GC terminates. This saves a bunch of wasted scheduling work in various workloads. For example, `perf` used to show `sched_yield` accounting for a few percent in a web server benchmark; now it doesn't even appear in the report.
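A generic before/after sketch of that change, in plain pthreads (this is not the actual MoarVM code, which uses its own GC orchestration structures; the names here are invented):

    #include <pthread.h>
    #include <sched.h>
    #include <stdbool.h>

    static pthread_mutex_t gc_mutex    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  gc_done     = PTHREAD_COND_INITIALIZER;
    static bool            gc_finished = false;

    /* Before: spin, repeatedly handing the CPU back to the scheduler until
     * the flag flips. This is the pattern that showed up as sched_yield in
     * perf. */
    static void wait_for_gc_yield_loop(void) {
        while (!gc_finished)
            sched_yield();
    }

    /* After: sleep on a condition variable until GC termination signals it. */
    static void wait_for_gc_condvar(void) {
        pthread_mutex_lock(&gc_mutex);
        while (!gc_finished)
            pthread_cond_wait(&gc_done, &gc_mutex);
        pthread_mutex_unlock(&gc_mutex);
    }

    /* Run once GC terminates: set the flag and wake every waiting thread. */
    static void signal_gc_finished(void) {
        pthread_mutex_lock(&gc_mutex);
        gc_finished = true;
        pthread_cond_broadcast(&gc_done);
        pthread_mutex_unlock(&gc_mutex);
    }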
japhb imagines that 'ubnlock' is something orcs do | 15:43 | ||
Nice win, BTW | 15:44 | ||
Very happy that Cro is yielding so many benefits for Rakudo and MoarVM. | |||
jnthn | Yeah, it was very helpful to have a fairly large, heavily-supply-using codebase when refactoring supply internals recently | 15:45 | |
japhb | Time to bump nqp/rakudo to latest moarvm, or did you still need to chase down more bugs? | ||
I bet! | |||
jnthn: As I | 15:46 | ||
(FEK) | |||
As I'm starting to design the API for my widget lib, I'm thinking about every widget/subwidget having an input event supply, and bubbling of events happening through widgets emitting to other widgets' supplies. | 15:47 |
This means a lot of event objects whisking between various supplies every time any input happens anywhere. How likely is it that having O(100) supplies active at once will be a problem at this point? | 15:48 |
jnthn | Unlikely | 15:49 | |
japhb | The other possible design is something like IRC::Client using .? method dispatch and a master dispatcher, if the supply route would have scaling problems. | ||
Ah good. | |||
jnthn | I mean, I'm running Cro load tests with -c 100 in ab | ||
So 100 concurrent requests | |||
japhb | OK, that's encouraging | 15:50 | |
jnthn | And ab doesn't know how to do HTTP/1.1 (gah!) and Cro doesn't know how to do the old HTTP/1.0 keepalive | ||
So it's 100 connections, and every one has a Supply dangling off it | ||
japhb | How in the world does ab not do 1.1?! | ||
jnthn | :) | 15:51 | |
Yeah, that was my thought too :P | |||
japhb wonders how much code still exists in the wild that knows 1.0 keepalive and *doesn't* know 1.1 and/or 2.0 | 15:52 | ||
timotimo | i don't know 1.0 keepalive, but i'm not code | ||
japhb | .oO( Apache: We lots of leading edge projects ... but that httpd thing we started with? Not much love there anymore ....) | 15:53 |
*We fund | |||
timotimo | well, apache can't scale because it was made before async was discovered | 15:54 | |
jnthn | m: my $s = supply { whenever Supply.interval(0.1) { emit 1; done } }; my $i = 0; react { for ^500 { whenever $s { $i++ } } }; say $i | 15:55 | |
camelia | 500 | ||
jnthn | There's 500 active :) | ||
timotimo | MoarVM panic: Collectable 0x7fd494085750 in fromspace accessed - even in this supply example just now :( (but with a small nursery) | 15:56 | |
Zoffix | m: my $s = supply { whenever Supply.interval(0.1) { emit 1; done } }; my $i = 0; react { for ^50000 { whenever $s { $i++ } } }; say $i | 15:58 | |
camelia | (timeout) | ||
Zoffix | m: my $s = supply { whenever Supply.interval(0.1) { emit 1; done } }; my $i = 0; react { for ^5000 { whenever $s { $i++ } } }; say $i | ||
camelia | (timeout) | 15:59 | |
Geth | MoarVM: 64b62ab6cd | (Jonathan Worthington)++ | 4 files | 16:00 |
Basic JIT of lock/unlock
16:01 lizmat joined
16:12 AlexDaniel_ joined
16:18 robertle joined
jnthn | Got curious why that example above tanks once it gets much over 1000 | 16:25 | |
Turns out huge amounts of time go on GC. Odd | |||
But, time to head home now | |||
timotimo | hmm, deep recursive stacks? | ||
jnthn | timotimo: Only if continuations are accidentally creating frames they shouldn't | 16:26 | |
I think maybe it's that we have tons of frames alive in continuations | |||
timotimo | the heap snapshot tracker ought to be able to show this? | 16:27 | |
16:41 leont_ joined
16:43 squashable6 joined
16:45 timo joined
16:47 timo1 joined
16:51 squashable6 joined
16:57 lizmat joined
timotimo | huh, i grabbed the latest moar commits and get segfaults rather quickly now | 16:58 | |
wow, wat. tc->instance->VMString is a null pointer? how'd this happen? | 16:59 | ||
samcv | jnthn, maybe i misread it but i thought that was the error | 17:01 | |
Zoffix | oh heh :) | ||
Zoffix changes mind about doing bumps :) | |||
samcv | it's already been bumped though | 17:02 | |
i mean. the appveyor build of rakudo is failing as well | |||
timotimo | oh, i bet it's because i didn't recompile rakudo | ||
Zoffix | Don't see any bumps in the last day or so | ||
samcv | jnthn, oh it does say it is a warning. and then an error later on | ||
ci.appveyor.com/project/rakudo/rak...nm2r93c0at well it's failing for the same reason our builds are | 17:03 | ||
been failing for at least a day | |||
not sure how long. i only noticed it yesterday | |||
17:03 lizmat joined
timotimo | yup, my problem was from missing some changes for rakudo's extops | 17:05 | |
Zoffix | ah, cool | ||
17:12 domidumont joined
17:14 squashable6 joined, Ven`` joined
samcv | gonna go through some rt's | 17:15 | |
maybe fix a java one | |||
Zoffix | m: use Test:HahaYouMissedAColonBruh | 17:23 | |
camelia | ( no output ) | ||
samcv | hah | 17:39 | |
[Coke] | m: use 3 | 17:45 | |
camelia | ===SORRY!=== Error while compiling <tmp>
Undeclared routine:
    use used at line 1
[Coke] | O_o | ||
Zoffix | 3 is not a valid identifier or version literal :) | 17:49 | |
But it'll likely get improved as part of SQUASHathon :) | |||
timotimo | m: sub use(Int $foo) { say "lol" }; use 3 | 17:56 | |
camelia | lol | ||
18:03 zakharyas joined
[Coke] | m: need 3 | 18:22 | |
camelia | ===SORRY!=== Error while compiling <tmp>
Undeclared routine:
    need used at line 1
Zoffix | It's on this ticket: rt.perl.org/Ticket/Display.html?id...et-history | 18:23 | |
Wait, no on this: RT#126669 | 18:24 | ||
synopsebot | RT#126669 [open]: rt.perl.org/Ticket/Display.html?id=126669 [LHF][LTA] error with "need 6"/"use 6" (no "v") | ||
Zoffix | Both are marked LHF for new SQUASHathon participants | ||
[Coke] | Guess I should not be surprised I'm not finding new issues. :) | 18:28 | |
samcv | LHF? | 18:42 | |
timotimo | "low-hanging fruit" | 18:44 | |
Zoffix | as in easy stuff to do | 18:49 | |
18:52 zakharyas joined
Zoffix | .tell jnthn tried bumping nqp/MoarVM versions, but stresstest hangs. S32-io/lock.t and S32-num/power.t appear to hang entirely, and MVM_SPESH_DISABLE and MVM_JIT_DISABLE don't help. S17-channel/stress.t appears to take about 300s to run. The same issues don't exist without the bump and the entire stresstest run completes in ~150s. This is on a 24-core box | 19:50 |
yoleaux | Zoffix: I'll pass your message to jnthn. | ||
Zoffix relocates | 19:51 | ||
19:56 zakharyas joined
MasterDuke | seeing the same thing with S32-io/lock.t and S32-num/power.t | 20:08 | |
20:18 brrt joined
MasterDuke | .ask jnthn does my comment in github.com/MoarVM/MoarVM/pull/689 about the MVM_CF_USES_FSA flag make sense? | 20:35 | |
yoleaux | MasterDuke: I'll pass your message to jnthn. | ||
jnthn | . | 21:40 | |
yoleaux | 19:50Z <Zoffix> jnthn: tried bumping nqp/MoarVM versions, but stresstest hangs. S32-io/lock.t and S32-num/power.t appear to hang entirely, and MVM_SPESH_DISABLE and MVM_JIT_DISABLE don't help. S17-channel/stress.t appears to take about 300s to run. The same issues don't exist without the bump and the entire stresstest run completes in ~150s. This is on a 24-core box | ||
20:35Z <MasterDuke> jnthn: does my comment in github.com/MoarVM/MoarVM/pull/689 about the MVM_CF_USES_FSA flag make sense? | |||
jnthn | MasterDuke: | 21:41 | |
crap1 | |||
MasterDuke: #define it in MVMString.c also | |||
Ah, I'll note it on the ticket | |||
grr, too used to office keyboard :P | |||
MasterDuke | heh | 21:42 | |
Zoffix | That's why I use the same keyboards at home and office :) | 21:43 | |
timotimo | is it a good keyboard? | ||
jnthn | No :P | 21:45 | |
Zoffix: If you or anyone has a chance to work out which commit did it, that'd be handy. I've got family visiting, so am liable not to have time to dig into it until early next week. | 21:47 |
otoh, we don't have to bump :) | |||
Zoffix | Doesn't sound like a fun exercise :) Maybe AlexDaniel has the moarvm bisector ready | 21:48 |
jnthn | :) | 21:53 | |
I may find a moment tomorrow for it, we'll see :) | |||
Though it's potentially not an easy fix :S | |||
Given I can't think of anything that went in that could be to blame | 21:54 | ||
MasterDuke | github.com/MoarVM/MoarVM/commit/64b62ab6cd somehow? | ||
jnthn | But it was still there with JIT disabled? | 21:55 | |
MasterDuke | ah, right | ||
jnthn | Also S32-io/lock.t is file locking | ||
Not mutex locking | |||
But I don't see anything I/O-ish | 21:56 | ||
MasterDuke | the moar process just seems to be sitting in a futex call according to strace | 21:58 | |
futex(0x563f575746b0, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, 0xffffffff | |||
Zoffix | It does have tests that fire off a promise. Remember when we were adding the close-on-exit feature and some tests in lock.t were hanging because there was a lock waiting on a handle and another thread waiting to close the handle or something | ||
MasterDuke | futex(0x5613933163e0, FUTEX_WAIT_PRIVATE, 0, NULL | ||
the first is the t/spec/S32-io/lock.t process, the second is the getout-22215-992720.code process | 21:59 | ||
AlexDaniel_ | trisectable is not ready yet, and once it's ready… it'll take a week or two to prebuild stuff | 22:16 | |
timotimo | whoa | 22:20 | |
22:44 rba joined
23:20 lizmat joined
samcv | sounds awesome AlexDaniel_ | 23:25 | |
AlexDaniel_ | yeah, it sounds awesome! Just does not sound ready yet :) | ||
I'll work on it tonight | 23:26 |