pugscode.org/ | nopaste: sial.org/pbot/perl6 | pugs: [~] <m oo se> (or rakudo:, kp6:, smop: etc.) || We do Haskell, too | > reverse . show $ foldl1 (*) [1..4] | irclog: irc.pugscode.org/
Set by TimToady on 25 January 2008.
jeffreykegler jql: "I see perl6 as the language between the {} in the rules syntax" 00:24
That makes perl6 sound like a parser generator
TimToady not on the level of say yacc; it just makes it really easy to write top-down parsers that can call into bottom-up parsers, and really easy to integrate something like a simple operator precedence parser 00:31
what gets generated automatically is the lexer for all the alternative points in the grammar
jeffreykegler Yeah, when I saw the rules stuff in the Perl 6 docs 00:32
TimToady but very little in the standard grammar requires more than 1-token lookahead
jeffreykegler I was thinking just in terms of an extension of regular expressions
TimToady and there are precisely two spots that use backtracking
jeffreykegler A feature of the languages like formats in Perl 5
Not the framework of the language 00:33
TimToady it would be easy to substitute in a bottom-up matcher that did more lookahead, but I'm not personally interested in making the user look ahead that far
let alone the computer
TimToady the main thing that's going on is that you can derive new grammars and mix in new rules that have the same lexical and grammatical status as the original rules 00:34
jeffreykegler The track record for more than one token of look-ahead is not good
meppl good night 00:35
TimToady which means every time you mix in more grammar you have to potentially regenerate the lexer
jeffreykegler Is this going on in a fixed number of phases?
TimToady and *that* part of it is supposed to be deterministic in the DFA sense 00:36
jeffreykegler The generation of new grammar I mean
Is this like a two or three pass process? 00:37
TimToady it's however many passes are necessary; every new language must manage its set of lexers and generate them on demand, caching results for efficiency 00:39
jeffreykegler The top down parser is kind of LL?
TimToady the same subrule call in a base grammar may call out to a different set of subrules that were mixed in by the derived language
so it has to keep track of that 00:40
jeffreykegler Not a lot of parsing techniques are very friendly to having their grammars modified on the fly
TimToady yeah, pretty much LL, but mixing in a bottom-up parser generally gets rid of most left recursion problems
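To make the idea concrete, here is a minimal Python sketch (with an invented token set and operator table, nothing to do with STD's actual grammar) of what TimToady describes: a top-down, LL-style parser that hands expression parsing off to a bottom-up-flavoured precedence climber, so left-associative operators need no left-recursive rules.

```python
# Sketch: top-down parser delegating expressions to a precedence climber.
# Grammar, token shapes, and the operator table are illustrative assumptions.
import re

TOKEN_RE = re.compile(r'\s*(\d+|[-+*/()])')

def tokenize(src):
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

class Parser:
    def __init__(self, tokens):
        self.toks = tokens
        self.i = 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def next(self):
        tok = self.peek()
        self.i += 1
        return tok

    # top-down entry point: a real grammar would dispatch statements here
    def parse(self):
        val = self.expr(0)
        assert self.peek() is None, "trailing input"
        return val

    # bottom-up-flavoured precedence climbing: left-associative operators
    # are handled by the loop, so no left-recursive rule is ever needed
    def expr(self, min_prec):
        left = self.primary()
        while self.peek() in PREC and PREC[self.peek()] >= min_prec:
            op = self.next()
            right = self.expr(PREC[op] + 1)   # +1 => left associativity
            left = {'+': left + right, '-': left - right,
                    '*': left * right, '/': left // right}[op]
        return left

    # recursive descent handles the leaf level and parentheses
    def primary(self):
        tok = self.next()
        if tok == '(':
            val = self.expr(0)
            assert self.next() == ')', "expected ')'"
            return val
        return int(tok)

print(Parser(tokenize("2-3-4")).parse())   # (2-3)-4 = -5, left associative
print(Parser(tokenize("2+3*4")).parse())   # 2+(3*4) = 14, by precedence
```

The point of the hybrid is visible in `expr`: the loop consumes operators iteratively the way a shift-reduce parser would, while everything above and below it stays plain recursive descent.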
jeffreykegler LL tends to be only one
TimToady my feeling is that machines are getting fast enough that we can pay the overhead for the flexibility if we're clever about autogenerating efficient lexers 00:41
and LL tends to give much better error messages 00:42
jeffreykegler LL does have the advantage that it's intuitive for programmers -- it's just kind of like subroutine calls 00:42
TimToady I hope so; parrot's approximation seems to be well accepted by those writing grammars in it 00:44
but I really have to finish the autolexing before it can really be properly extensible
jeffreykegler Is the problem automatically determining where the lexing should end and the LL start? 00:45
TimToady not really, I can already do that, and it's pretty well defined what is "declarative" and what is "procedural" in S05 00:46
and I can already generate the set of lexemes at a particular set of alternatives
jeffreykegler So that's what "declarative" and "procedural" are all about!
TimToady I just have to up and write the NFA/DFA engine
for lack of better terms
sort of a pattern/action notion 00:47
but longest tokens are the part we can determine without risking side effects, basically
we assume the "procedural" bits have side effects
and it seems to be a natural place to break it for lexical purposes 00:48
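A toy Python illustration of that pattern/action split: the "declarative" phase finds the longest token match with no side effects, and only the winning alternative's "procedural" action runs afterwards. The token patterns and actions here are invented for the example; a real engine would compile the alternatives into one NFA/DFA rather than trying them one by one.

```python
# Toy pattern/action lexer: the declarative phase finds the longest match
# side-effect-free; only then does the winner's procedural action run.
# Patterns and actions are invented for illustration.
import re

def make_lexer(rules):
    # rules: list of (name, regex_source, action)
    compiled = [(name, re.compile(rx), action) for name, rx, action in rules]

    def lex(src):
        pos, out = 0, []
        while pos < len(src):
            # declarative part: try every alternative, keep the longest match
            best = None
            for name, rx, action in compiled:
                m = rx.match(src, pos)
                if m and (best is None or m.end() > best[0].end()):
                    best = (m, name, action)
            if best is None:
                raise SyntaxError(f"no token at {pos}")
            m, name, action = best
            # procedural part: side effects happen only for the winner
            out.append(action(m.group()))
            pos = m.end()
        return out

    return lex

lex = make_lexer([
    ('int',   r'\d+',    lambda s: ('int', int(s))),
    ('ident', r'[a-z]+', lambda s: ('ident', s)),
    ('arrow', r'=>',     lambda s: ('arrow', s)),
    ('eq',    r'=',      lambda s: ('eq', s)),
    ('ws',    r'\s+',    lambda s: ('ws', None)),
])

# '=>' beats '=' by longest-token matching, not by rule order
print(lex("x=>42"))
```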
tricky bit is letting the lexer tell the parser how much it has already "cheated" 00:49
jeffreykegler I've just done some NFA / DFA programming, so I've kind of got this stuff in mind
TimToady fortunately we have TDFAs that can at least do captures in the DFA region
so we don't have to reparse for the
jeffreykegler Where's the cheating? 00:50
TimToady if you're writing a recursive descent parser, your token may encompass some implicit recursions that you'll have to return from someday
we define longest-token matching transitively through subrules 00:51
jeffreykegler TDFA == tagged DFA?
TimToady yes
TimToady but I haven't integrated that notion yet. I was first gonna just write a non-backtracking NFA 00:52
jeffreykegler Right
"implicit recursions that you'll have to return from someday" 00:53
TimToady I already have the mechanism to do that
basically your tag tells you the route to call down
and the rules are smart enough to pay attention to "fate" 00:54
jeffreykegler Does this mean some of the Rec Descent stuff may be returning into your NFA/DFA
TimToady and just pick up where the token leaves off
essentially. doesn't seem to be a problem in practice 00:55
jeffreykegler If you say so :)
TimToady the NFA/DFA doesn't really do anything
all the "actions" are either above or below it
jeffreykegler That's why it's "declarative" ? 00:56
Procedures are elsewhere
TimToady yes, it's a strategy for mixing in as much "deterministic" matching as possible into the programmer's grammar view without the programmer having to worry about it much 00:57
jeffreykegler Sorry to stack questions here, but you're looking at some academic stuff on DFA's?
TimToady yeah, I've been looking at various papers 00:58
jeffreykegler Any online?
TimToady all of them :)
jeffreykegler I'd be curious 00:59
I'm 54 and my mind works in low gear. Studying papers helps me understand issues.
And also just having written an NFA to semi-deterministic finite automata translator I've got the math fresh in my mind. 01:01
TimToady laurikari.net/ville/spire2000-tnfa.pdf 01:02
that's kind of the direction I'm aiming
jeffreykegler Found it. 01:03
TimToady and I have to get as much done before I turn 54 this year me own self :) 01:04
jeffreykegler As this stuff goes, it doesn't look too deadly.
They say Newton gave up math by the time he turned 50.
I've been reading G. H. Hardy's A Mathematician's Apology
TimToady well, it's easy for my eyes to glaze over as soon as I see "System XYZZY is a seven tuple of greek alphabetic soup..." 01:05
jeffreykegler I recommend you read the Hardy *AFTER* you write the TDFA engine 01:06
So you won't know you're doing the impossible until you're finished
SamB TimToady: where did you see that?
TimToady that's the generic first definition of any paper on DFA magic :) 01:07
or pretty much any paper on parsing 01:08
dduncan I have an opinion question if anyone can help me with it ... 07:31
let's see if copy-paste works ... 07:33
I intend to make another release of my language spec tonight, just prior to when the proposals would start being looked at (as if it might make any difference), and I have a design question you (or anyone else) may have experience with or advice on ... it relates to the names of 2 main groups of language dialects
in a nutshell, Muldis D has multiple dialects, which are compatible at the very least due to Muldis D code always having to declare what dialect it is in, akin to Perl's "use 5.6" vs "use 5.8" etc
one dialect group is currently called "non-hosted" or "concrete", and is any dialect where code is written as text strings or files like a typical programming language ... examples: how you normally write Perl, or how you normally write SQL
the other dialect group is called "hosted" or "abstract", and it is always written in terms of data structures written in some other language like Perl ... an analogy is that SQL::Abstract would be hosted (in Perl), while SQL is non-hosted
my question is what might be better category names than the ones I chose ...
in particular, I like "hosted", but am wondering if its complement group might be something better than "non-hosted", something with one word that means the same thing ... maybe "independent" but better sounding or less misleading?
anyone have thoughts on this?
in SQL terms, I'm looking for the "foo" in "SQL is foo and the inputs of SQL::Abstract are (Perl) hosted"
yep, it did
dduncan with slightly longer names, the 2 groups could be named "plain text" and "hosted data", abbreviating to 'PT' and 'HD' respectively; used together with 'Muldis D', these could become 'PTMD' and 'HDMD' for abbreviation purposes 07:55
spinclad @tell mncharity i must have been blind... debian etch has ruby1.9. 12:16
lambdabot Consider it noted.
moritz_ spinclad: it has, but it's a prerelease
spinclad lenny has a later prerelease, sid has 1.9.0.0 12:17
pdy the version in etch is from June 2006? 12:25
spinclad i may try installing it, it looks like it only depends on sid/lenny libc6 (already installed on my etch system) and libncurses6. 12:32
pdy: yes
which i take it is pretty old as these things go 12:33
spinclad i think i'll be upgrading to lenny one of these days, as it's approaching release slush, and i'm already seeing ghc 6.6.1 (older lenny) disappearing in favor of 6.8.2 12:36
pdy spinclad: depends on your requirements. for a production system it could make sense to stay with the stable release - etch, and use only selected pieces of software from the backports repository 12:37
spinclad sure, this is just my desktop, tho 12:40
pdy ah ok, on my desktop i always go with "testing" :-)
spinclad i expect i will soon too
pdy you will get the joy of tags with it 12:41
debtags is awesome for searches
apt-get install ept-cache 12:42
moritz_ debtags is missing the data from popcon 12:44
if it had that, it would be even more useful
pdy it's not missing it. it's the admin's duty to combine them :-)
spinclad i see the tags in aptitude already, have apt-cache, ept-cache is different?
(looking)
pdy check the homepage of Enrico Zini for his talk how to combine popcon data with other information 12:45
moritz_ I saw his talk at LinuxTag 2006
back then it didn't include any popcon information
so it seems I'm not up to date
pdy moritz_: his talks from 2007 take it even further :-)
pdy builds a temple for Enrico 12:46
now if i were a master of Perl i could make use of all the wealth of information that is out there at my feet! 12:47
spinclad looks nice. look forward to playing with it.
pdy e.g. you like a piece of software and know a friend likes it too. what other software does the friend have that is not on your system yet, but could be of interest to you? 12:48
moritz_ social packaging ;) 12:50
pdy then restrict the results based on tags is::implemented::in:perl
yeah :-)
pugs_svn r19799 | putter++ | a possible talk and project. 14:04
r19800 | ruoso++ | [smop] more leaks solved. Now the first test has only 10 objects leaking... 14:05
ruoso I was testing smop with valgrind and I don't have any more errors, only some memory leaks that seem to be already identified by the SMOP_LOWLEVEL_MEM_TRACE code 17:43
[particle] \o/ 17:43
ruoso and it seems that I'm missing some SMOP_RELEASE on the continuation, as the objects not being destroyed are SMOP__SLIME__Frames and Nodes (the nodes are referenced by the frames, so it's probably the frame's fault) 17:45
brb & 17:46
TimToady \oJ # left arm hurts 17:49
gah, it's too early to be this late...
ludan hi 19:53
moritz_ good localtime() ;) 19:54
pmurias moritz_: good evening
moritz_ pmurias: how are kp6 things going? 20:01
pmurias moritz_: ruoso is making good progress on smop 20:10
i didn't have time to work on kp6 recently 20:11
the ruby backend passes some tests
moritz_ cool
mncharity++ 20:12
ruoso hi pmurias... I'm very near to finishing stage 0 of smop... I'm only chasing some memory leaks to finish it... do you have some time to help? 20:55
moritz_ what are you using to detect mem leaks? valgrind? 21:02
or your internal tracer? 21:03
ruoso bltn]
both
ruoso with my hand misplaced on the keyboard
right now, it seems that the interpreter is leaving a reference to the frame unreleased
moritz_ how does that work? at termination of program it checks all refcounts and reports if it's != 0? 21:04
pugs_svn r19801 | ruoso++ | [smop] some more fixes, but still with some memory leaks...
pmurias ruoso: hi
ruoso moritz_, yep...
moritz_, actually, it maintains a list with all allocced values 21:05
and every time you free a value, it removes it from the list
in the end, every value in the list is a memory leak
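The scheme ruoso describes (keep every allocated value on a list, drop it on free, whatever remains at the end is a leak) can be sketched in a few lines of Python. All the names here are invented for illustration; this is not SMOP's actual API.

```python
# Toy allocation tracker in the style ruoso describes: record every
# allocation, remove it on free, and whatever is left at the end leaked.
# All names here are invented; this is not SMOP's actual API.

class MemTracer:
    def __init__(self):
        self.live = {}          # uid -> description of the allocation
        self.next_id = 0

    def alloc(self, what):
        uid = self.next_id
        self.next_id += 1
        self.live[uid] = what   # uid makes creation/destruction traceable
        return uid

    def free(self, uid):
        # freeing something twice (or never allocated) is itself a bug
        if uid not in self.live:
            raise RuntimeError(f"double free or bad id: {uid}")
        del self.live[uid]

    def leaks(self):
        # at program end, every entry still on the list is a leak
        return sorted(self.live.values())

tracer = MemTracer()
frame = tracer.alloc("SLIME Frame")
node = tracer.alloc("SLIME Node")
tracer.free(node)
# the frame was never released: it shows up as a leak
print(tracer.leaks())
```

Printing a unique id on creation and destruction, as [particle] suggests below for parrot, falls out of the same structure: the `uid` is exactly that handle.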
pmurias i'm having an algorithmic competition Wed-Thu (leaving on Tue), and i'll be going to sleep soon 21:06
[particle] you can add uids, and print them upon creation and destruction to check what's going on
we did that with parrot
ruoso [particle], that's kind of what the tracer does.. 21:07
I already have a debug message that shows every operation 21:08
[particle] sweet
[particle] TimToady++ # context updates 21:10
TimToady I decided that simplifying = was a better idea than splitting it into two operators 21:11
it still has to glare at the left arg a bit, but the cases that can be considered item assignment are drastically cut down
and more importantly, easily recognizable by a human 21:12
without having to delve into subscripts and such
[particle] i'm happy = stays = 21:17
yes, it seems list context is the default now 21:18
and that's fine. users can be trained to expect that. 21:19
TimToady I wanted to save $a = 1, $b = 2 though
and this does that
[particle] catering to the c-lovers among us 21:20
pugs_svn r19802 | ruoso++ | [smop] small progress... only 6 objects leaking in test/01... 22:02
pugs_svn r19803 | ruoso++ | [smop] additional debug information. Every responder interface has a char* id that is used by the tracer to present a more useful message. 22:35
pugs_svn r19804 | ruoso++ | [smop] now a real improvement... only one object leaking in test/01 23:14
ruoso night & 23:21
TimToady good night
wolverian nice work, ruoso++
peepsalot i don't understand this syntax: @values.sort: { $^b cmp $^a }; 23:39
Limbic_Region you are calling the sort method on @values
peepsalot is it like creating an anonymous function and passing it to sort somehow. what is the colon doing?
and the curly braces 23:40
Limbic_Region it is synonymous with perl 5's sort {$b cmp $a} @values;
peepsalot ok, but I never learned perl 5 ;-)
TimToady the colon is replacing a pair of parens there, so it's equivalent to @values.sort({$^b cmp $^a}) 23:41
Limbic_Region oh, sorry
peepsalot TimToady, ok, that notation makes more sense to me. is there any reason doing it with a colon is better?
TimToady you don't have to figure out where to put the ending paren 23:42
Auzon Well, it's a piece of code that has $^a and $^b representing the two items you are comparing. You then return -1, 0, or 1 (or the P6 object equivalents) to tell it how to sort.
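A rough Python analogue of that three-way comparator, for readers coming from neither Perl: swapping the operands in the comparison reverses the order, just as `{ $^b cmp $^a }` does. The helper names below are invented for the example.

```python
# Rough Python analogue of @values.sort: { $^b cmp $^a }.
# The comparator returns negative/zero/positive; comparing b against a
# (rather than a against b) reverses the order, giving a descending sort.
from functools import cmp_to_key

def cmp(a, b):
    # three-way comparison, like Perl's cmp operator on strings
    return (a > b) - (a < b)

values = ["pugs", "rakudo", "kp6", "smop"]

descending = sorted(values, key=cmp_to_key(lambda a, b: cmp(b, a)))
print(descending)   # ['smop', 'rakudo', 'pugs', 'kp6']
```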
peepsalot so you can replace ANY function call parenthesis with a colon like that?
TimToady no, only methods
a normal function call doesn't need the colon in the first place 23:43
foo(1,2,3) is equivalent to foo 1,2,3
interestingly, you can also write @values.sort :{$^b cmp $^a}
peepsalot hmm, ok
TimToady but then it's parsed as an adverbial block rather than an argument list 23:44
Limbic_Region TimToady - did you see the /msg I sent you on PerlMonks?
TimToady not yet 23:45
peepsalot thanks for the tips. gotta go for the moment. afk
Limbic_Region ok, it was a pointer to a node I thought might interest you
TimToady ah, my "And 0 more" mutated to "And 1 more", which is easy not to see 23:46
yeah, was looking at that and wondering whether to respond
'course, delete is really an op on a container, which adds an additional wrinkle 23:47