01:47 ilbot3 joined 06:12 brrt joined
nwc10 good * 06:20
"brrt, tell brrt about ..." 06:21
brrt good * nwc10
nwc10 morepypy.blogspot.co.at/2016/08/py...a-for.html -- ... for Python 3.5 support
brrt seen it yes
nwc10 oh yes, back there
brrt i wonder how we're going to apply for a thing like that
nwc10 it's even on this screen :-) 06:22
brrt okay, interesting little bit
how come infrastructure code basically has to be sponsored out of charity throughout our industry
i mean, the number of people who'd benefit from an up-to-date pypy3 is rather large 06:23
nwc10 I don't have an answer to that, but a big messy example that confirms that it is a problem is OpenSSL
brrt only mozilla takes up the bill
yes, openssl is an excellent example 06:24
large bigcos tend to start their own in-house variants and open-source that
e.g. unladen swallow, and the other python jit guys 06:29
whatweretheycalled
from dropbox 06:30
nwc10 pyston
and Facebook with HipHop and now HHVM
which has stopped blogging - hhvm.com/blog/
brrt hmmm 06:31
does that mean they stopped doing things altogether?
maybe they're working on php7 support
nwc10 github.com/facebook/hhvm/commit/84...d9174f643e -- Aug 9, 2016 06:32
it's dead. they stopped working at least 12 hours ago!
IIRC when they *were* still blogging they said that they would incorporate the bits of PHP 7 that didn't (er, I forget exactly) conflict with PHP 5
or, I think, make some stuff switchable
brrt hmm 06:33
nwc10 but the impression I get is that their future is with Hack
brrt switches sound saner than mix-and-match
nwc10 I think that the answer about "everything is charity" is that most firms find it very hard to make an internal business case to pay for (more work) on something that is available to them for free (er, at no cost) and basically works well enough already 06:34
brrt hmmm
nwc10 and it's very hard to create your own consultancy to offer such services to regular firms, and actually have a sales pitch
brrt so open source is bound to remain in some low-energy state
nwc10 yes. ish.
although it's interesting that the US Federal Government seems to be waking up to it 06:35
brrt hmmm
interesting
nwc10 although you can fork gov.uk on github: github.com/alphagov
and I think I was told that New Zealand did fork some
brrt hopes that won't be abused someday 06:49
07:37 zakharyas joined 08:00 brrt joined
jnthn morning, #moarvm :) 09:20
Yay, with local fixes I just had the IO::Socket::Async tests run 100 times in a row without any failure 10:16
nwc10 jnthn: I can run t/spec/integration/advent2013-day14.t in a loop "forever" without failing 10:24
(it stops when I rebuild rakudo in the same dir from a different window) 10:25
jnthn :) 10:29
What flappers/failers do you still have? 10:30
The ASAN explosion in S17-lowlevel/lock.t is the only one still on my "tests I know I need to look at" list 10:31
nwc10 of async stuff, just that 10:35
t/spec/S32-str/encode.rakudo.moar is also ASAN barfage
and t/spec/S16-io/eof.t makes bad assumptions about what is in /proc (on Linux) 10:36
but my suggested fix in the RT ticket I opened is also daft :-)
jnthn Yeah, I'm just interested in concurrency things at the moment :)
dalek arVM: 40948f6 | jnthn++ | / (13 files):
Implement cancellation completion notification.

So that things that want to really know when something was cancelled are able to find out. (In the course of doing this, I noticed that we seem to be missing a number of cancellation handlers, which will also want looking in to.)
10:37
11:05 dalek joined
jnthn Turns out the lock.t issue, after all this time, was actually an incorrect test... 11:07
nwc10 when the race is won/lost/$whatever, what was the failure? 11:09
ASAN seems to have turned my terminal red, but failed to print out anything
jnthn The test just failed due to wrong values in the array 11:11
nine jnthn: does that mean that all flappers are fixed now?
jnthn So far as I know.
nine Yeah! jnthn++
Need to start hacking on rakudo again. Should be even more fun now :) 11:12
jnthn Hangs too :)
Another spectest run just completed happily :) 11:13
nwc10 the test just failed with a SEGV: paste.scsys.co.uk/530610
so there is something ugly in t/spec/S17-lowlevel/lock.t (as was) even if it's not testing correctly.
jnthn You pulled latest roast? 11:14
nwc10 no, this is the one from before dalek died
jnthn Ah
Yes, I know there's still something wrong at the VM level
nwc10 but I was meaning, that version can SEGV MoarVM, and it shouldn't be possible to seg..
aha OK.
jnthn But the test wasn't trying to tickle that. 11:15
The test was trying to do things right, but was wrong.
There's already an RT to cover the underlying SEGV
nwc10 do we have a place for tests that try to tickle, er OK
jnthn Which'll get a fix and a proper test. Maybe this afternoon :)
nwc10 you keep answering my questions before I finish them
jnthn Lunch now... :) Will set off another spectest run while I nom :)
11:28 JimmyZ joined
JimmyZ jnthn: re 40948f69208b1f9d460883ab0089f788423db4ba, am I missing something? I never see 'cancel_notify_queue' be read 11:29
does 'I noticed that we seem to be missing a number of cancellation handlers' explain it? :) 11:35
jnthn Apparently you're missing MVM_io_eventloop_send_cancellation_notification which reads it :) 11:57
Lunch spectest run also completed successfully.
nwc10 ASAN has not offered you an alternative lunch 11:59
JimmyZ haha, I just see MVM_repr_push_o(tc, notify_queue, notify_schedulee); nothing seems to process it. 12:02
or async is complicated and I can't understand it at all :( 12:03
but I still didn't see the code that processes cancel_notify_queue 12:04
or pop
I only see push :) 12:05
jnthn That's 'cus Moar's job is only to push :)
It's sending an async notification to be handled back in Perl 6 land 12:06
So it ends up scheduled on the thread pool
JimmyZ sigh, I still don't see the code that reads cancel_notify_queue .... 12:09
even in perl 6 land 12:10
jnthn It'll just be an nqp::shift in ThreadPoolScheduler.pm, iirc 12:14
JimmyZ ah, found it, thanks 12:32
jnthn This VMArray refactor is gonna be some work... :) 12:45
13:05 unmatched} joined 13:07 unmatched} joined
dalek arVM: 2f269d8 | (Jimmy Zhuo)++ | src/io/eventloop.c:
removed unused code
14:11
lizmat jnthn: seems like the last updates to Moar/NQP/Rakudo borked --profile 14:16
$ perl6 --profile -e 'my $a'
Writing profiler output to profile-1470838613.57616.html
===SORRY!===
SC not yet resolved; lookup failed
timotimo yeah, i've already given a bad and a good commit for that problem 15:47
hm, but not in here, it seems 15:48
jnthn Time for a break, but I figure I've got the first 80% of the VMArray re-work done. 16:23
ilmari now you only have the second 80% left? 16:27
17:10 Zoffix joined
jnthn Hopefully...don't want a third 80% :P 17:20
geekosaur thought it was 90%. but then, that'd mean 10% to reach 90% + the other 90% = 100% to go >.> 17:31
19:33 zakharyas joined 20:03 brrt joined
brrt good * #moarvm 20:06
timotimo heyo brrt
brrt hey timotimo
how's today?
timotimo i feel kind of down today 20:07
jnthn has a totally busted MoarVM today 20:08
brrt bust all the things \o/
timotimo well, completely re-working the storage for VMArray would kind of do that to you :)
jnthn Well, just VMArray :)
brrt what had to change on VMArray, fwiw
jnthn Its explosiveness if you mis-used it from multiple threads 20:09
timotimo that's ... a question?
brrt yes, a question
jnthn It's not meant to be a concurrent data structure but it shouldn't be a path to SEGV the VM :)
brrt i know knothing
timotimo "fwiw" isn't a thing i'd end a sentence in?
brrt it's not? 20:10
oh well
timotimo it's not, no
data loss is still likely when you concurrently resize the array
brrt don't... do that then
jnthn Yeah
brrt doctor, it hurts if i do...
jnthn Though I realized we can actually catch some cases of that happening and complain loudly
brrt can we do that cheaply?
jnthn brrt: Yeah, the pain shouldn't include memory corruption and a segfault though :) 20:11
Relatively
brrt what will it cost to the nonconcurrent case?
jnthn Hard to say, but I suspect "rather little"
brrt hmmm 20:12
jnthn Plus we should be able to JIT array accesses pretty nicely after this too, I think
brrt good :-)
brrt oh, that i'm interested in
jnthn Anyways, I got to the point where I was too tired to hack more on the VMArray bits today, so will continue another time :) 20:18
brrt oh well :-)
lizmat good night, jnthn 20:22
20:35 hoelzro joined 22:04 stmuk_ joined