Parrot 2.6.0 | parrot.org Log: irclog.perlgeek.de/parrot/today | Nopaste: nopaste.snit.ch:8001 | merge html_cleanup (talk to Coke), merge gc_* branches, fix/replace/optimize hashing Set by moderator on 10 August 2010. |
| whiteknight | so somewhere along the line, pir:set_hll_global__vSP(<$FileSystem>, $instance) isn't doing what you say it should be | 00:00 | |
| Austin | Or it's not getting run. Throw in a say-hello to Program::_initload | ||
| dalek | TT #1737 created by nwellnhof++: Timing of GC runs | 00:01 | |
| TT #1737: trac.parrot.org/parrot/ticket/1737 | |||
| whiteknight | ah, you're right. It's not getting run | 00:03 | |
| bug fuggedaboudit. Using FileSystem.instance makes my shit work now | |||
| Austin | Heh. Just don't commit that. | ||
| :) | |||
00:05 Psyche^ joined
| whiteknight | ...of course, once the test runs it segfaults, so that's not *much* of an improvement | 00:06 | |
| Austin | Hmm.. I don't *think* that's me... | 00:07 | |
| But then, I never do. | |||
| whiteknight | # Could not find sub assert_equal | 00:11 | |
| all my tests fail like this | |||
| Austin | UnitTest/Assertions | 00:12 | |
| Are you doing a use(), did you comment it out? | |||
| whiteknight | no, I'm using use() | 00:14 | |
| And you know what I hate particularly? The error message "Parent isn't a class" | 00:15 | ||
| Austin | Heh. | ||
| Yeah, not my favorite thing to see. | |||
| whiteknight | would help if it told me which class I was trying to add a parent to, or which parent I was trying to add | 00:17 | |
| kid51 | Uh-oh: Parrot build failure | 00:18 | |
| whiteknight | say it ain't so! | 00:19 | |
| cotto_work looks at someone else to blame | |||
| nopaste | "kid51" at 192.168.1.3 pasted "Parrot build failure on Darwin/PPC at r48421" (85 lines) at nopaste.snit.ch/22707 | ||
| cotto_work | It's purl's fault. | ||
| reconfig? | 00:20 | ||
| kid51 | Something happened between yesterday (r48377) and today (r48421). | ||
| cotto_work | looks like some stale bytecode | ||
| kid51 | cotto_work: I'll try that. | ||
| cotto_work | yes, a pbc compat break | ||
| plobsing must have forgotten it | 00:21 | ||
| nope. He remembered. | |||
| odd | |||
| kid51 | perhaps I (uncharacteristically) did not do 'make realclean' before building | ||
| Note: I saw the word 'packfile' in the error output, so after I got the failure the first time, I called 'sh tools/dev/mk_packfile_pbc'. | 00:22 | |
| ... which updated the same 3 files as always (see recent TT) ... | |||
| ... and rebuilt -- without success | 00:23 | ||
| Looks like there was a big branch merge today | |||
| dalek | parrot: r48422 | jkeenan++ | trunk/src/pmc/imageiosize.pmc: [codingstd] Insert POD 'item' so that documentor will know where to add function documentation. | 00:25 |
| kid51 | cotto_work: Your diagnosis was correct. 'make' succeeded; currently running 'make test' | 00:41 | |
| cotto_work | glad to hear it | ||
| whiteknight | Austin: I've reclaimed about 50% of my test suite by duplicating various subs from UnitTest::Assertions, since use() doesn't seem to be working | 00:47 | |
| Austin | Heh. | 00:48 | |
| Commit the pla stuff, I'll take a look. | |||
| Paul_the_Greek | Where are deprecation notices posted? | 00:51 | |
| dalek | parrot-linear-algebra: 8262a30 | Whiteknight++ | t/ (3 files): start fixing some tests, most of which appear 'broken' because of problems in from UnitTest::Assertions, which placates most common tests (though assert_throws still barfs, for some reason I can't figure out). Non-common tests generally fail because use(UnitTest::Assertions) does nothing | 00:52 |
| cotto_work | DEPRECATED.pod | 00:53 | |
| whiteknight | Austin: don't worry about it. You've got plenty of your own problems to fix. I'll work through my own mess | 00:54 | |
| Austin | ok | ||
| Austin goes back to sleep. | |||
| Paul_the_Greek | One link to deprecated.pod from Support Policy is dead, the other is blank. | 00:56 | |
| I'll look into that. | 00:57 | |
| Meanwhile, how can I tell if a deprecation proposal is okay or has been challenged? | 00:58 | ||
| cotto_work | check the discussion in the related ticket. Each item in DEP.pod needs to have one. | ||
| Paul_the_Greek | So if the version is 2.4 and there is no discussion, it's safe to delete the functions in question? They aren't called anywhere. | 00:59 | |
| cotto_work | yes, but an upgrade path needs to be documented before the thing can be removed. | 01:00 | |
| which functions? | |||
| purl | which functions is probably that in? | ||
| Paul_the_Greek | The ticket: trac.parrot.org/parrot/ticket/1660 | 01:02 | |
01:02 ash_ joined
| Paul_the_Greek | deprecated.pod is in the top-level directory. make html creates a deprecated.pod.html that is empty. That doesn't seem right. | 01:03 | |
| cotto_work | no, it doesn't | 01:08 | |
| that might be fixed in Coke's html branch | |||
| Paul_the_Greek | I'll check with him. | ||
| Does that ticket look like it can be dealt with now? | |||
01:10 davidfetter joined
| cotto_work | A version at which the functions are eligible for removal should be included in the file, but it's not. I don't think anyone would complain if you submitted a patch to get them knocked out. | 01:11 | |
| Paul_the_Greek | They do seem utterly unused. Okay, I'll do that. | 01:12 | |
| Take care, kids. | |||
01:19 plobsing joined
| cotto_work | trac.parrot.org/parrot/changeset/45698 looks like it | 01:20 | |
| d'oh | |||
| nm | |||
01:22 cognominal joined
01:28 rurban joined
| whiteknight | purl msg Austin: The vast majority of PLA tests now pass (of tests I care about, only 18/222 fail). I changed UnitTest::Assertions._initload to an INIT, and the exports for it work again. assert_throws gives me an exception about null in invoke, which I need to track down. github.com/Whiteknight/kakapo/commi...93ee0ca982 | 01:31 | |
| purl | Message for austin stored. | ||
| whiteknight | ...and on that note: bed! | ||
| dalek | parrot: r48423 | jkeenan++ | trunk/src/pmc/bigint.pmc: [codingstd] Insert POD 'item' so that documentor will know where to add function documentation. |
| parrot: r48424 | jkeenan++ | trunk/t/native_pbc (4 files): (Once again ...) Run tools/dev/mk_packfile_pbc to update t/native_pbc files |
| parrot: r48425 | plobsing++ | branches/dynop_mapping: branch has been merged |
01:37 preflex joined
01:39 rurban_ joined, hercynium joined
| plobsing | ping ash_ | 01:48 | |
| ash_ | pong | ||
| dalek | parrot: r48426 | jkeenan++ | trunk/src/pmc/complex.pmc: [codingstd] Insert POD 'item' so that documentor will know where to add function documentation. |
| plobsing | how goes it? | ||
| ash_ | been working on merging against the trunk, i am going to prepare some diff files and push my branch to the svn | 01:50 | |
| plobsing | good. hard pencils down is coming up really quick. we need your work in the core repo by then. | 01:51 | |
| any blockers? | |||
| ash_ | none come to mind, the pbc_to_native is probably going to be a project i'll have to continue after gsoc is over | ||
| plobsing | definitely. if you have any questions with that, don't hesitate to ask anyways. | 01:53 | |
| ash_ | i have been thinking about that, i think i might make an alternative ops2c and have them print a version of the ops that are a bit easier to call directly (ie, change the call sigs for the C functions to be interp, args, so something like say_p will be Parrot_say_p(INTERP, PMC*); instead of the way it works now, i'd have to manipulate the pcc_context | ||
| plobsing | at one point I think ops2c.pl generated both the current form and something similar to what you describe. | 01:54 | |
| ash_ | well, they got rid of ops2c.pl and now have a compilers/opsc/ nqp program to do ops2c now | 01:55 | |
| plobsing | that probably got jettisoned either with the JIT or the CGP core. | ||
| cotto_work | afair ops2c never generated anything more function-based than what it does now. | ||
| plobsing | cotto_work: I thought the inline keywords were a holdover from some functionality ops2c lost | 01:56 | |
| ash_ | does pirc do what i am trying to do? or is that supposed to be an alternative to imcc? | 01:57 | |
| plobsing | pirc is trying to be an alternate imcc. | ||
| ash_ | okay | ||
| cotto_work | they're quite old. I don't know what the original intent was but it wasn't of much significance to ops2c around the time bacek and I worked on opsc. | ||
| as is PIRATE | 01:58 | ||
| I'm rooting for PIRATE but it'll need to get faster before it's usable. | 01:59 | ||
| speaking of which, plobsing, what changes did your branch make? | |||
| (low-level) | |||
| plobsing | relevant to PIRATE: | 02:00 | |
| * OpLib now requires the name of the op_lib to look stuff up in. Core is called "core_ops" | |||
| cotto_work | sounds like that'll be easy to figure out by looking at the tests | 02:01 | |
| plobsing | * PackFile_Bytecode segments now have a list of dynoplibs and ops they reference | ||
| ash_ | the only catch to my approach is new_closure | ||
| since that looks at the current context | |||
| plobsing | * (AFAICT) PackfileRawSegment is no longer able to correctly generate appropriate bytecode segments. | 02:02 | |
| ash_: context is a pretty core concept to Parrot. I would be surprised if you were able to do away with it completely. | |||
| ash_ | does parrot support eval? | 02:03 | |
| plobsing | of course | ||
| ash_ | hmm | ||
| plobsing | but the feature labeled eval is not at the level you're likely thinking. | 02:04 | |
| eval is really just compile this to PBC | |||
| ash_ | ya, i know | ||
| but it means you have to keep the complete context up to date | |||
| plobsing | you're probably looking for "drop into another runloop" which is completely doable. | ||
02:04 khairul joined
| ash_ | because you don't know if a function might change it | 02:04 | |
| cotto_work | so PFRS needs some additional smarts to know what's a dynop? | ||
| like you said in your message to parrot-dev | 02:05 | ||
| plobsing | no. dynops aren't particularly special. all ops are explicitly mapped. | ||
| ash_ | see, i have been thinking, if a function knew which lexical values it needed to capture (which should be possible to figure out from syntactic analysis) you'd only have to capture a few things, thus your contexts would be a lot smaller and easier to manage | 02:06 | |
| cotto_work | so the problem is that PackFileRawSegment can't represent any kind of mapping | ||
| plobsing | but more specifically, the problem (near as I can tell), is that packfile segments have 2 lengths: the header and the data. PFRS generates empty headers | ||
| bytecode segments used to have empty headers | |||
| ash_ | but if you have anything like eval, you have to know the full real context, (by eval, i mean figuring out which code to run at runtime) | 02:07 | |
| cotto_work | ok. thanks. | ||
| plobsing | ash_: to reduce contexts like that, you'd likely need whole-program analysis. Doable, but difficult. | 02:08 | |
| and you'd still have to handle the same features and functionality in contexts. you'd just have to do it less often. | 02:09 | ||
02:09 preflex joined
| ash_ | ya, but if i did somehow know that, i could build the context right before the call to new_closure and then save it, that's the only op that uses the context, the rest are just callable as is | 02:10 | |
| kid51 | (let's see what smart aleck remark purl has in store for me tonight ...) | 02:11 | |
| kid51 must sleep | |||
| purl | Sleep is for the weak. | ||
| ash_ | anyway, it's just a thought, i might not be able to pull it off, because like you said, contexts are very much ingrained into parrot | ||
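ash_'s idea above — determining a closure's capture set syntactically so that only referenced lexicals are kept alive — is essentially what Python's compiler already does, which makes for a compact illustration (Python here is only an analogy for the proposed Parrot behavior, not Parrot code):

```python
def outer():
    a, b, c = 1, 2, 3          # three lexicals in the enclosing scope
    def inner():
        return a               # ...but only 'a' is actually referenced
    return inner

f = outer()
# syntactic analysis fixed the capture set at compile time:
# 'b' and 'c' are not captured and can be collected immediately
assert f.__code__.co_freevars == ('a',)
```

As plobsing notes below, this shrinks contexts rather than eliminating them: eval-style features still force a full context in the general case.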
02:12 jimk joined
| ash_ | so i might have to do it 'properly' but, with simplified call signatures it would make it easier | 02:13 | |
| khairul | cotto_work: could we meet tomorrow instead at the same time? it seems i misremembered my timetable. | ||
| cotto_work | sure | 02:14 | |
| wait, not | 02:15 | ||
| no | |||
| I have a thing that starts 24h from now | |||
| khairul | how about friday? | 02:16 | |
| purl | friday is Saturn rising. we better do it thursday | ||
| cotto_work | should work | ||
| khairul | alright then. | ||
| cotto_work | I'll let you know asap if it doesn't. I expect it'll be fine though. | 02:17 | |
| ash_ | how is lorito going to get away without a new_closure op? | ||
| plobsing | that functionality is going to be built on top of lorito, AFAIK | ||
| cotto_work | ash_: I haven't understood that but chromatic seems to have an idea about how it'll work | 02:18 | |
| cotto_work decommuter | |||
| s/r/s/ | |||
| plobsing | my understanding is that new_closure operates on parrot's call chain, which isn't implicit in lorito. | ||
| ash_ | to create a closure you'd have to know the call chain, will lorito abstract that away some how? | 02:20 | |
| that seems complicated... | 02:22 | ||
| Coke | 'make html' has issues. but just read the file IN THE REPO. it's right there. | 02:24 | |
| (in re: deprecated.pod) | 02:25 | ||
| cotto | ~~ | 02:33 | |
| Coke | cotto: +~ | 02:51 | |
| rakudo: 3, | 02:54 | ||
| p6eval | rakudo 6b318e: OUTPUTĀ«===SORRY!===ā¤Confused at line 22, near "3,"ā¤Ā» | ||
02:54 janus joined
| Coke | std: 3, | 02:54 | |
| p6eval | std 31912: OUTPUT«ok 00:01 113m␤» | ||
| ash_ | rakudo: say 1, | ||
| p6eval | rakudo 6b318e: OUTPUTĀ«===SORRY!===ā¤Confused at line 22, near "say 1,"ā¤Ā» | ||
| ash_ | std: say 1, | ||
| p6eval | std 31912: OUTPUT«ok 00:01 114m␤» | ||
| ash_ | rakudo: say 1, ; | ||
| p6eval | rakudo 6b318e: OUTPUT«1␤» | 02:55 | |
| ash_ | std: say 1, ; | ||
| p6eval | std 31912: OUTPUT«ok 00:01 114m␤» | ||
| ash_ | rakudo: 6; 5; | 02:57 | |
| p6eval | rakudo 6b318e: ( no output ) | ||
| Coke | ... whoops. wrong window. my bad. | 03:01 | |
| plobsing | #parrot has evalbot? | 03:10 | |
| sorear | it does now | 03:11 | |
| somebody decided having nqp: here would be useful | |||
| (they were right) | |||
| std: and rakudo: came along for the ride | 03:12 | |
| Coke | partcl-nqp: puts {always a bridesmaid...} | ||
| p6eval | partcl-nqp: OUTPUT«always a bridesmaid...␤» | ||
| sorear | oooooh shiny | ||
| pynie: print "moo" | |||
| Coke | puts \u2026 | 03:13 | |
| partcl-nqp: puts \u2026 | |||
| p6eval | partcl-nqp: OUTPUT«…␤» | ||
| plobsing | squaak: print("Hello World!") | 03:14 | |
| cotto | perl7: dtrt; | ||
| plobsing | cotto: did it work? | 03:16 | |
| cotto | I don't know. | ||
| after checking my bank account, I can conclusively say "no" | 03:19 | ||
| stupid made-up nyi language | |||
| sorear | you said "dtrt" not "dwim" | 03:22 | |
| sorear ducks | |||
| plobsing geese! | 03:23 | ||
| cotto | The right thing is what I meant. | 03:24 | |
| fortunately it wasn't np-hard | |||
| plobsing | lorito-ish vm-level stuff in an HLL: github.com/plobsing/parrot-deepclone | 03:26 | |
| at least lorito's end-goal | 03:27 | ||
03:28 petdance joined
03:42 LoganLK joined
| darbelo | bacek: ping. | 04:16 | |
| bacek_at_work | darbelo, pong (barely here) | 04:35 | |
| darbelo | I've found a few odd comments in src/gc/alloc_resources.c, figured you might know what they are about. | 04:38 | |
| One (I think by allison) near the end of free_old_mem_blocks says pool->total_allocated should probably be set to new_block->size instead of total_size. | 04:40 | |
| Which *I think* makes sense, but I don't know enough about this code to be sure. | |||
| And then inside free_buffer, in src/gc/mark_sweep.c, I found "XXX Jarkko reported that on irix pool->mem_pool was NULL" | 04:42 | ||
| bacek_at_work | I have no idea about second part. It's from ancient parrot history. | 04:44 | |
| First one was related to "shared buffers"/COW. You can try it now. | |||
| darbelo | I did the change the comment implies, and it didn't seem to hurt anything, but I'm unsure what the implications are. | 04:45 | |
| I don't really know how "pool->total_allocated" is used by the gc. | |||
| bacek_at_work | darbelo, it is not used now, afair. | 04:46 | |
| not in current GC at least. | |||
| darbelo | Hmm, after a quick ack, it looks like it's assigned to in a few places but never read. | 04:49 | |
| dalek | parrot: r48427 | darbelo++ | failed to fetch changeset: Sync with trunk. Again. | 04:51 |
| darbelo | Does it make sense to keep it? | 04:54 | |
04:59 luben joined
05:08 Andy_ joined
05:34 chromatic joined
06:02 wagle joined
06:12 wagle joined
06:14 uniejo joined, treed joined
| chromatic | msg cotto Lorito M0 gets away without knowing anything about CPS the same way C gets away without knowing anything about CPS. | 06:23 | |
| purl | Message for cotto stored. | ||
| cotto | That much is clear. | 06:31 | |
06:43 Casan joined
07:00 jan joined
07:31 bacek joined
07:36 baest joined, aloha joined
07:41 ash_ joined
07:56 ash__ joined
08:13 AndyA joined
| dalek | rakudo: 69561ef | moritz++ | src/Perl6/Actions.pm: :continue and :pos now default to ($/ ?? $/.to !! 0) | 08:28 |
| jnthn | morning, #perl6 :-) | 08:43 | |
08:43 fperrad joined
| jnthn | oh wait...channel fail | 08:43 | |
| Morning to parrotfolks too, though. :-) | 08:44 | ||
08:54 whiteknight joined
09:09 TiMBuS joined
| dalek | rakudo: 9d7428f | moritz++ | src/core/Cool-str.pm: fix interaction between :g and :p regex modifiers in Cool.match similar to m/\G$regex/g in p5 | 09:30 |
| whiteknight | purl msg Austin Exception types need to be renumbered because Parrot lost the PrederefLoadError type. github.com/Whiteknight/kakapo/commi...e40d0ac8d4 | ||
| purl | Message for austin stored. | ||
09:34 macroron joined
09:39 rurban_ joined
| dalek | parrot: r48428 | NotFound++ | trunk/src/string/api.c: drop unused local variables and clean code in Parrot_str_to_hashval | 09:47 |
| parrot: r48429 | NotFound++ | trunk/t/op/cmp-nonbranch.t: tests for cmp with null strings |
| parrot: r48430 | NotFound++ | trunk/src/pmc/fixedintegerarray.pmc: some cleaning of FIA: put METHOD after vtable functions and fix and improve doc, no functional changes | 10:03 |
| parrot: r48431 | NotFound++ | trunk/t/dynoplibs/deprecated.t: clean clear and exchange tests | 10:36 |
10:52 cosimo joined
11:00 whiteknight joined
| whiteknight | good morning, #parrot | 11:11 | |
| dalek | rakudo: 811c1c5 | moritz++ | src/core/Match.pm: remove warning from Match.perl. While Match.new() does not accept subcaptures so | 11:21 |
11:24 bkuhn joined
11:28 bacek joined, aloha joined
11:51 lucian joined
12:16 mj41_ joined
| dalek | winxed: r592 | NotFound++ | trunk/winxedst1.winxed: more constructor usage in stage 1 compiler | 12:17 |
12:26 Paul_the_Greek joined
| dalek | parrot: r48432 | jkeenan++ | trunk/t/codingstd/pmc_docs.t: Two more files now have completely documented PMC functions. notfound++. | 12:33 |
| parrot: r48433 | fperrad++ | trunk/runtime/parrot/library (2 files): TT#1663 was fixed by r48182 |
| Coke | I think it's reasonable to leave that Jarkko comment in the source until we work on irix. | 12:55 | |
| though I might pick a new tag, other than "XXX". perhaps something related to "PORTING" | |||
13:03 ruoso joined
| Paul_the_Greek | Is there any procedure to modify deprecated.pod other than to simply edit it and submit a patch? | 13:41 | |
| particle | is there a doc in docs or docs/project about it? | 13:49 | |
14:00 Andy joined
14:23 plobsing joined
| Paul_the_Greek | particle: There is a Cage Cleaner Guide, but it doesn't say anything about editing deprecated.pod | 14:36 | |
| Interestingly, I can't find that document at the site. | |||
| As far as I can tell, I simply edit it. So that's what I did. | 14:38 | ||
| particle | Paul_the_Greek++ for research and initiative | ||
| Paul_the_Greek | It was exhausting, but I got through it. :D | 14:39 | |
15:20 theory joined
| Coke | Paul_the_Greek: there are notes in the file itself. what question do you have? | 15:33 | |
| Paul_the_Greek | I was just wondering if I simply edit deprecated.pod after I remove a deprecated thing. Apparently that's all I have to do. | 15:35 | |
| Coke | Presumably the ticket that was associated with it needs to be closed, and you need to add a note about how to work around the deprecation to the right spot on the wiki. | 15:36 | |
| Paul_the_Greek | It was simply two C functions that are never used. Nothing user-visible to worry about. I checked in a patch. | 15:37 | |
| Coke | if they were not user visible, why were they in dep.pod, I wonder. | 15:38 | |
| were they static? | |||
| Paul_the_Greek | No, global. In namespace.c | 15:39 | |
| Oh crap, you're right. Why were they in dep.pod? | |||
| The ticket: trac.parrot.org/parrot/ticket/1660 | 15:40 | ||
| I presumed those were intended to be ops, but never were. | 15:41 | ||
| Coke | ugh. that entry didn't say "eligible in <foo>" anywhere, did it. | 15:44 | |
| was that notice in the 2.6.0 release? If so, it's safe to remove now. | 15:45 | ||
| particle | svn blame | ||
| purl | svn blame is, like, just like p4 annotate, only better or just like git blame, only different | ||
| Paul_the_Greek | No, but the ticket said 2.4.0. | ||
| Coke | doesn't matter what the ticket said. | 15:46 | |
| Paul_the_Greek | I tried to check the dep.pod at the site, but it isn't linked anywhere. | ||
| Okay, I won't trust the ticket. | |||
| Coke | Paul_the_Greek: trac.parrot.org - source - tags - 2.6.0 | ||
15:46 pyrimidine joined
| Coke | trac.parrot.org/parrot/browser/tags...ECATED.pod | 15:47 | |
15:47 dafrito joined
| Coke | yup, it was listed in that release, so it's safe to apply that patch. we should still point users to the other C functions for getting at globals. | 15:48 | |
| Paul_the_Greek | Yes, it's listed in that dep.pod | ||
| I did find dead links to dep.pod in the doc pages. | 15:49 | ||
| particle | is svn.parrot.org down? | 15:50 | |
| i'm trying to load svn.parrot.org/parrot/trunk/DEPRECATED.pod, but it hangs | |||
| Paul_the_Greek | Must run. Thanks for your help. | ||
| I can load it. | 15:51 | ||
| What I can't remember is where I found the dead links to it. | |||
| particle shakes his tiny fist | |||
| Paul_the_Greek | I'll hunt those up when I return. | ||
| Andy | It would be swell if vim users were to try installing and using the Perl .vim files at github.com/petdance/vim-perl so I can get some eyeballs on them before vim 7.3 comes out. Just check out the repo and do a "make install" into your ~/.vim dir. | 15:52 | |
| Coke | Andy: that's not a valid git url, yes? | 15:55 | |
| Andy | no, but there is one on the page. | 16:01 | |
| purl | okay, Andy. | ||
| nopaste | "coke" at 192.168.1.3 pasted "bug report for andy on "make test" for vim-perl" (19 lines) at nopaste.snit.ch/22720 | 16:05 | |
| Andy | I didn't say to run make test. | ||
| I know it doesn't work. | |||
| Coke | ... *sigh* | ||
| you're welcome. :P | |||
| Andy | Thanks anyway! :-) | ||
16:05 chromatic joined
| Andy | Coke: github.com/petdance/vim-perl/issues#issue/15 | 16:06 | |
| Coke | Andy: should it auto-recognize .p6 extension? | 16:08 | |
| Andy | I don't know, should it? | 16:09 | |
| Coke | does it automatically treat .pl as perl5 ? | ||
| Andy | I think so | ||
| I'm not sure what those rules are. | |||
| Coke | (my copy of vim does, but I have no idea if that's because of vim-perl or not). if so, then IWBNI .p6 -> perl6. | 16:10 | |
| Andy | is .p6 "official"? | ||
| Coke | I suspect that .nqp should probably also default to perl6 | ||
| .pl is preferred, but you're screwed there. | 16:11 | ||
| Andy | why? | ||
| Coke | lemme see which is preferred for 6 | ||
| Andy | the file detection doesn't just go off of extensions | ||
| I believe it sniffs for "use v6;" | 16:12 | ||
| Coke | feather.perl6.nl/syn/S01.html#Random_Thoughts // "Also, a file with a .p6 ... | ||
| Andy | but not far enough. | ||
| Coke | just to give a few more. | ||
| Andy | But that was 6 years ago, too. | 16:13 | |
| github.com/petdance/vim-perl/issues/issue/30 is open for your added comments. | 16:14 | ||
| thanks, Coke | 16:21 | ||
| Coke | sorry it was nothing insightful. | 16:23 | |
| I use it for editing nqp all the time. I'll try to keep an eye on formatting oddities the next few days. | 16:24 | ||
| Andy | ok | 16:25 | |
| Coke | Andy: some text in a file is showing up red. how can I figure out what that red is supposed to mean? (github.com/petdance/vim-perl/blob/m.../perl6.vim ?) | 16:26 | |
| Andy | uhhh, I dunno. | ||
| can you gist the file? | |||
| nopaste | "coke" at 192.168.1.3 pasted "*@args shows up in red." (7 lines) at nopaste.snit.ch/22722 | 16:27 | |
| Coke | er, sorry, just @args of *@args. | ||
| Andy | not for me. | 16:28 | |
16:29 payload joined
| Coke | urf. was not parsing as perl6, apparently. | 16:29 | |
| how to customize the colors? | 16:31 | ||
| this a bog-standard vim thing that I've just never done? =-) | |||
| Andy | what, customizing colors? Yeah | 16:32 | |
| Coke | k. will google. | ||
| Andy | :help colorscheme | 16:33 | |
| Coke | Andy: hurm. "# vim: filetype=perl6:" is correct, neh? | 16:44 | |
| (if I want to force p6?) | |||
| Andy | yes | 16:46 | |
| or ft=perl6 | |||
16:48 davidfetter joined
| Coke | odd. doesn't seem to work. if I load the file and type ":set ft=perl6", the coloring changes. (even with that # vim line.) | 16:49 | |
| nopaste | "coke" at 192.168.1.3 pasted "this is the current file." (9 lines) at nopaste.snit.ch/22723 | 16:50 | |
| Someone at 192.168.1.3 pasted "er, this one." (9 lines) at nopaste.snit.ch/22724 | 16:51 | ||
| Coke | :set filetype shows ft=perl | ||
| ah. I bet i'm not respecting the infile comments. | 16:54 | ||
| Coke fixes his vimrc. #Andy++ | 16:58 | ||
| particle | set syntax=on | 16:59 | |
17:00 theory joined
| Coke | I had "set modeline", but apparently needed "set modelines 1" | 17:00 | |
17:36 preflex joined
17:39 rurban_ joined
| darbelo | What's a good name for an opcode wrapping Parrot_str_indexed() ? | 17:40 | |
| Coke | how does it differ from "index" ? | 17:44 | |
17:44 ruoso joined
| darbelo | Index searches and returns the offset. I want to pass an offset and get back a char. | 17:45 | |
| s/char/length one string/ | 17:46 | ||
| inline op index(out INT, in STR, in STR) :base_core { $1 = ($2 && $3) ? Parrot_str_find_index(interp, $2, $3, 0) : -1; | |||
| } | |||
| I guess I can add a string-returning variant to ord. | 17:47 | ||
| PerlJam | Isn't that what substr is for? | 17:48 | |
| Coke | we have substr and ord already, yes? | ||
| darbelo | Yeah, but in the unshared_buffers branch calling substr with a constant '1' for length is more expensive than it used to be. | 17:51 | |
| chromatic | much more expensive, right? | ||
| darbelo | Yep. | 17:52 | |
| dukeleto is working on getting the parrot github mirror in better shape | |||
| davidfetter | sweet | ||
| Coke | so, let's fix the substr opcode instead of adding a new opcode? | 17:54 | |
| cotto_work | dukeleto: how so? | ||
17:54 whiteknight joined
| Coke | (if it's faster for one char to call this other thing, call that instead. yes?) | 17:54 | |
17:54 preflex joined
| darbelo | Coke: There is no other thing :) But yeah, I'll try to speed up substr first. | 17:56 | |
| Coke | I mean: make substr opcode call the c function you referenced in the case where length == 1 if that is faster. | ||
| kthakore | moritz: nice article! | 17:57 | |
| moritz: I didn't know about Try::Tiny and List::Util | |||
| darbelo | chromatic: I callgrinded a compile of rakudo's Actions.pm and Parrot_str_substr() is now one of the biggest runtime costs there. First non-gc function after inline op index(out INT, in STR, in STR) :base_core { $1 = ($2 && $3) ? Parrot_str_find_index(interp, $2, $3, 0) : -1; | 17:59 | |
| dukeleto | msg pmichaud is there any way we can get the parrot github account turned into an org so that I can have my new mirror script commit to that? I have a better method of mirroring that won't leave old branches around like my old mirror script | ||
| purl | Message for pmichaud stored. | ||
| darbelo | } | ||
| dukeleto | cotto_work: i learned of a better way to mirror that won't leave old branches around | ||
| darbelo | Eh, mispasted that. It's the first non-gc function in the top 10 time bottlenecks. | 18:01 | |
| It takes more time than Parrot_gc_sweep_pools, even. | 18:02 | ||
18:03 mikehh joined
| darbelo | s/takes more/accounts for more run/ | 18:03 | |
18:08 preflex joined
| darbelo | The problem is that now substr() has to allocate a buffer and copy the data over, which is expensive. Before it would just grab a header and set it to point into the other string's buffer. | 18:18 | |
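darbelo's point can be sketched with a toy model of the two substr strategies (Python standing in for Parrot's C string headers; the class and function names here are invented purely for illustration):

```python
class Str:
    """Toy string header: a (buffer, start, length) view."""
    def __init__(self, buf, start=0, length=None):
        self.buf = buf
        self.start = start
        self.length = len(buf) if length is None else length

    def value(self):
        return self.buf[self.start:self.start + self.length]

def substr_shared(s, start, length):
    # pre-branch behavior: grab a header pointing into the original
    # buffer; O(1), no allocation beyond the header itself
    return Str(s.buf, s.start + start, length)

def substr_copied(s, start, length):
    # unshared_buffers behavior: allocate a fresh buffer and copy
    # the bytes over; O(length), plus extra GC pressure
    return Str(s.buf[s.start + start:s.start + start + length])

s = Str("Hello, Parrot")
assert substr_shared(s, 7, 6).value() == "Parrot"
assert substr_copied(s, 7, 6).value() == "Parrot"
assert substr_shared(s, 7, 6).buf is s.buf       # shares the buffer
assert substr_copied(s, 7, 6).buf is not s.buf   # owns its own copy
```

The trade-off the branch makes is that unshared buffers simplify the GC (no COW bookkeeping) at the cost of copying on every substr.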
| chromatic | darbelo, is that in the unshared_buffers branch? | 18:19 | |
| darbelo | Yes. | 18:20 | |
| chromatic | I think the only hope for that branch is simplifications in compact_pool. | ||
| darbelo | Could be. I did some simple ones, but didn't really gain much. And I'm not familiar enough with gc to do anything more complicated. | 18:22 | |
| What do you think, in broad strokes, a revamped compact_pool should look like? | 18:24 | |
| chromatic | I must decommute; let me think about how to answer that well. | 18:25 | |
| moritz | kthakore: glad you liked it | ||
| darbelo | No problem. I wasn't really expecting a fast answer to a question like that. | ||
18:27 s1n joined
18:32 hercynium joined
| Paul_the_Greek | Do you have preallocated strings for the first 128 characters? | 18:32 | |
| mikehh | getting a failure in t/perl/Parrot_Test.t - Failed test: 70, anyone else getting this? | 18:39 | |
| darbelo | Paul_the_Greek: Nopes. But we do compacting gc for string buffers. | 18:47 | |
| Which, ideally, means that allocating a string buffer is just a matter of setting a pointer in the string header and adjusting a few values in the proper pool. | 18:49 | |
| In reality, I'm not entirely sure I understand what the heck we are doing. | 18:50 | |
| Paul_the_Greek | You might consider preallocating. A quick check of the character code and you've got your substring. | ||
| darbelo | character code? | ||
18:51 tcurtis joined
| Paul_the_Greek | If the encoding is ASCII or Unicode, then there would be a vector of 128 string headers for each. Check the character code for <= 127 and grab the header. | 18:51 | |
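Paul_the_Greek's suggestion amounts to interning the 128 ASCII one-character strings; a minimal Python sketch of the idea (in Parrot these would presumably be constant STRING headers allocated once at startup — the names below are made up):

```python
# preallocate one immortal string per ASCII codepoint
SINGLE_CHARS = tuple(chr(cp) for cp in range(128))

def substr_one(s, i):
    """substr(s, i, 1) without allocating: reuse the preallocated
    string when the codepoint is ASCII, else fall back to a copy."""
    cp = ord(s[i])
    if cp < 128:
        return SINGLE_CHARS[cp]   # table lookup, no allocation
    return s[i:i + 1]             # rare path for non-ASCII

assert substr_one("%foo", 0) == "%"
# repeated calls hand back the very same preallocated object:
assert substr_one("%foo", 0) is substr_one("x%y", 1)
```

The cost is 128 permanently live headers per supported encoding, which is the expense darbelo questions below.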
| Coke | that's assuming that our hot spot is 1-character strings, yes? | 18:52 | |
| darbelo | Oh, you mean the single-char case. | ||
| Paul_the_Greek | The trick is to avoid creating/finding the 1-character string in the string buffer. | ||
| I would think that substr(s, i, 1) is a serious percent of substrings. | |||
| You have to check for a length of 1 anyway, right, because you certainly don't want to do any sort of strcpy or whatever. | 18:53 | ||
| There are various low-level algorithms floating about for doing really fast memcpy's. | 18:54 | ||
| Coke | If you're doing a substring of length one, you should seriously consider doing an ord instead. | ||
| Paul_the_Greek | They may be a portability pain, however. | ||
| darbelo | They would have to be constant strings, but 128 (probably constant) string headers strikes me as pricey. | ||
| Paul_the_Greek | Perhaps ord(), but perhaps chr(ord()), in which case you have the same issue. | ||
| darbelo | Coke: ord returns an INTVAL. | ||
| Coke | (assuming it's a case where you're not doing an arbitrary length which happens to be one.) | ||
| darbelo: yes. | 18:55 | ||
| if you're doing the substr to see if $char eq '+', then an ord is faster. | |||
| darbelo | Does "$I0 == ')' " work? | ||
| Paul_the_Greek | It's faster, but do I want to program that way? It's an HLL, after all. | ||
| Coke | if $ordval eq 43 | ||
| Paul_the_Greek: no, it's PIR. | 18:56 | ||
| Paul_the_Greek | Ah, yes, if the assembler can detect these cases and optimize them, then ord() would certainly be better. | 18:57 | |
| Coke | darbelo: probably not. | ||
| darbelo | From NQP's source : | ||
| && pir::substr($past[$i].name, 0, 1) eq '%' { | |||
| Paul_the_Greek | But x := substr(s, i, 1) ends up doing chr(ord()), which has the same issue. | ||
| Coke | darbelo: there's an excellent case to switch to an ord. | ||
| (and compare against the codepoint instead of a single char string.) | 18:58 | ||
| Paul_the_Greek | For sure. | ||
| purl | like totally! | ||
| darbelo | And there's more of that all over our pir code too. | 18:59 | |
| Paul_the_Greek | 219 occurrences of: = .* substr .* , 1 | 19:00 | |
| 48 occurrences of: = .* ord | 19:02 | ||
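The substr-vs-ord rewrite being counted here can be illustrated with a tiny sketch (illustrative Python, not actual NQP/PIR; both function names are invented):

```python
# Coke's point: when substr(s, i, 1) is only used for an equality check,
# comparing integer codepoints avoids building a one-character string at all.

def starts_with_percent_substr(name):
    # substr-style: build a 1-char string, then compare strings
    return name[0:1] == '%'

def starts_with_percent_ord(name):
    # ord-style: compare codepoints; no temporary string is created
    return len(name) > 0 and ord(name[0]) == ord('%')
```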
| Coke | darbelo: you interested in patching nqp-rx and parrot for that? | ||
| I can peek at it this evening otherwise. | |||
|
19:03
chromatic joined
|
|||
| darbelo | Coke: I'll be chopping up the gc code today, feel free to dig into substr/ord. | 19:03 | |
| Coke | darbelo: hokay. | 19:04 | |
| Paul_the_Greek | Are there peephole optimizations now? | ||
| chromatic | Not really, no. | 19:05 | |
| darbelo | Paul_the_Greek: Not in the core. tcurtis is adding something like it to PCT for GSoC. | ||
| Paul_the_Greek | Because x = substr(,,1); if x == 'y' would need a peephole, no? | 19:06 | |
| nopaste | "chromatic" at 192.168.1.3 pasted "darbelo: simple compact_pool for unshared_buffers" (5 lines) at nopaste.snit.ch/22728 | 19:07 | |
| Coke | darbelo++ | 19:11 | |
| darbelo | chromatic: By 'free old buffer' you mean individually, or do you want to keep the current 'obliterate the whole block in one go' stuff? | 19:12 | |
| chromatic | Obliterate! | ||
| I hate to do compact_pool on every GC run though. | 19:13 | ||
|
19:13
dafrito joined,
icarroll joined
|
|||
| chromatic | That copies a lot of memory around, which seems silly for a precise GC. | 19:13 | |
| mikehh | t/perl/Parrot_Test.t - Failed test: 70 in make corevm/make coretest, test and perl_tests in fulltest | 19:14 | |
| all other tests PASS (pre/post-config, make corevm/make coretest, test, fulltest) at r48433 - Ubuntu 10.04 amd64 (g++ with --optimize) | |||
| chromatic | I almost want to have two types of STRINGs, one which has an extra 8 or 16 bytes at the end of the header for very short buffers and the other which has a longer buffer. | ||
| mikehh | the test is also failing on i386 for me | 19:15 | |
| purl | okay, mikehh. | ||
| mikehh | purl botsnack | ||
| purl | :) | ||
| darbelo | I think I'm seeing, now that I think of it, where the problem comes from. We have to walk the header pools to grab the live buffers (and inspect them to see what block they came from) and then, after doing all of the block bookkeeping, we walk the block list and free what we don't need anymore. | 19:16 | |
| chromatic | One might call that algorithmically questionable. | ||
| Paul_the_Greek | chromatic: That seems like an interesting idea. | ||
| chromatic | Paul_the_Greek, for that to work, we have to have a significant amount of short STRINGs. | 19:17 | |
| mikehh | t/perl/Parrot_Test.t - Failed test: 70 in make corevm/make coretest, test and perl_tests in fulltest | ||
| all other tests PASS (pre/post-config, make corevm/make coretest, test, fulltest) at r48433 - Ubuntu 10.04 i386 (g++) | |||
| Paul_the_Greek | You could actually make the string header some sort of union. If the string is immediate, you don't need some of the slots. Like about 4 of them. | 19:18 | |
|
19:18
theory joined
|
|||
| chromatic | Comme ci, comme ca... checking a flag on every STRING access to see what type of STRING it is seems more awkward than having a few extra struct members we're trying to move anyway. | 19:19 | |
| But maybe I misunderstand how to use it. | 19:20 | ||
| seen moritz | |||
| purl | moritz was last seen on #parrot 54 minutes and 36 seconds ago, saying: kthakore: glad you liked it | ||
| chromatic | moritz, can you get some Rakudo testing on various platforms with the second patch in TT #1737? thanks! | ||
| Paul_the_Greek | So then leave the header almost as it is, but with the string in the header. Then the accesses won't have to care. | ||
| moritz | chromatic: is it in parrot already? | 19:21 | |
| chromatic | Yeah, normal STRINGs and fat STRINGs have isomorphic headers, except for the additional bytes at the end. | ||
| moritz: not yet. | |||
| Paul_the_Greek | Are you getting rid of the strstart slot? | 19:22 | |
| moritz | chromatic: that would make it much easier to get testing from other platforms than mine | ||
| chromatic | It's a GC patch, so there's an inherent risk, but point well taken. | 19:23 | |
| moritz | well, that's what we have version control for | ||
| Paul_the_Greek | (I have a patch to pobj.h pending.) | ||
| darbelo | Paul_the_Greek: In the branch, it's already gone. | ||
| chromatic | Hm, a stripped libparrot.so is 1.83 MB. | 19:24 | |
| Seems like it's slimmed down quite a bit lately. | |||
| Paul_the_Greek | So you can keep the headers common and just have string space at the end of the fat string. | ||
| cotto_work | Paul_the_Greek: don't forget to run headerizer when submitting a patch that adds or removes functions. | ||
| chromatic | moritz, is a Parrot branch sufficiently easy for other people to test? | 19:25 | |
| Paul_the_Greek | Oh my. I assumed it was run as part of the build. | ||
| I'll do that now and resubmit the patch. | |||
| chromatic | I'm running headerizer now, preparing to apply the patch from TT #1660. | ||
| moritz | chromatic: I'd prefer trunk, but a branch would be OK too | ||
| chromatic | If a branch is okay, I'll never confuse you and Carl again. | 19:26 | |
| moritz | lol | ||
| tcurtis | chromatic: if there's no flag check on every access, how do you distinguish between the strings with the additional bytes and the ones without? | ||
| darbelo | Semi-unrelated thought: We know that after a compact_pool run we have a bunch of (at least) 80% full pools and the latest allocated one. Would it be worth it to sometimes skip compacting based on that data? | 19:27 | |
| Paul_the_Greek | tcurtis: If you're just accessing a string, why does it matter which kind it is? (asks the dumb newbie) | 19:29 | |
| chromatic | tcurtis, the strstart pointer can point to the data at the end of the struct. | 19:30 | |
| darbelo, I think we should. | |||
| tcurtis | chromatic: Ah, right. | ||
| Paul_the_Greek: when it comes to Parrot internals and C in general, I expect I'm much more of a dumb newbie than you are. | 19:31 | ||
| Paul_the_Greek | Hard to believe, my friend. :D | ||
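chromatic's two-kinds-of-STRING idea, where normal and fat headers are isomorphic and `strstart` simply points at the fat header's own trailing bytes, could be emulated like this (a Python sketch of a proposed C struct layout; the class and anything beyond the `strstart`/`bufused` field names mentioned in the chat is invented):

```python
# Sketch: both header kinds expose the same fields, so code reading a string
# never checks which kind it has. A "fat" header owns a small inline buffer
# and points strstart at it; a normal header points at a separate buffer.

class StringHeader:
    def __init__(self, data, inline_capacity=8):
        if len(data) <= inline_capacity:
            # fat header: buffer lives "at the end of the header" itself
            self._inline = bytearray(inline_capacity)
            self._inline[:len(data)] = data
            self.strstart = memoryview(self._inline)[:len(data)]
        else:
            # normal header: buffer allocated separately
            self.strstart = memoryview(bytearray(data))
        self.bufused = len(data)

    def bytes(self):
        # callers only ever touch strstart/bufused, never a kind flag
        return bytes(self.strstart[:self.bufused])
```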
| darbelo | Hmm, I can probably add a proof-of-concept for that easily. Put a 'Just compacted' flag somewhere and clear it whenever we free enough buffers to drop a pool under the 80% threshold. | 19:32 | |
| chromatic | Does that help explain what compact_pool should do? | 19:33 | |
| darbelo | I don't know what 'that' is. | 19:35 | |
| chromatic | see earlier hand waving about garbage collection | ||
| darbelo | Yes, it does. I think I have enough data to make it suck less now. | 19:37 | |
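darbelo's skip-compaction heuristic might look something like this (a sketch under assumptions; the 20% threshold and the `Pool`/`should_compact` names are invented, not Parrot's GC API):

```python
# Sketch of the heuristic: skip compact_pool when the pools are still mostly
# full, since compaction would copy a lot of memory to reclaim very little.

COMPACT_THRESHOLD = 0.20  # compact only if >= 20% of total space is reclaimable

class Pool:
    def __init__(self, size):
        self.size = size
        self.live = size  # bytes still referenced by live buffers

def should_compact(pools, just_compacted):
    if just_compacted:
        return False  # nothing has been freed since the last compaction
    reclaimable = sum(p.size - p.live for p in pools)
    total = sum(p.size for p in pools)
    return total > 0 and reclaimable / total >= COMPACT_THRESHOLD
```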
| Paul_the_Greek | Suck reduction is good. | ||
| Can I headerize just one file? | 19:40 | ||
| chromatic | perl tools/build/headerizer.pl src/file.o | ||
| cotto_work | Is there any chance that a user is relying on those functions? | 19:42 | |
| chromatic | Which functions? | ||
| purl | i heard Which functions was that in? | ||
| Paul_the_Greek | No complaints about the deprecation notice. | ||
| Hang on ... | |||
| cotto_work | (Parrot_store_global_s and Parrot_find_global_s) | 19:43 | |
| chromatic | They were in DEPRECATED.pod, so it's their fault now. | ||
| Paul_the_Greek | My ticket is gone. | 19:44 | |
| cotto_work | luastring appears to | ||
|
19:45
bubaflub joined
|
|||
| darbelo | .oO( Is that a chainsaw I'm hearing? ) |
19:45 | |
| cotto_work | could be | 19:46 | |
| depends on if a migration path can be found | |||
| dalek | parrot: r48434 | chromatic++ | trunk/src/call/args.c: [PCC] Presized sigs in parse_signature_string(). |
||
| parrot: r48435 | chromatic++ | trunk (11 files): [src] Removed deprecations and headerized. Paul C. Anagnostopoulos, TT #1660. |
|||
| parrot: r48436 | chromatic++ | branches/gc_threshold_tuning: Tune GC collection threshold based on memory usage |
|||
| parrot: r48437 | khairul++ | branches/gsoc_instrument/src/dynpmc/instrumentruncore.pmc: Sync the singleton pmcs between the two interpreters |
|||
| parrot: r48438 | khairul++ | branches/gsoc_instrument (9 files): Please codetest. |
|||
| Paul_the_Greek | How does one check the dependencies in HLLs? | 19:47 | |
| chromatic | moritz, the appropriate branch is now gc_threshold_tuning. | ||
| moritz | chromatic: will test | ||
| cotto_work | Paul_the_Greek: if you're looking for function dependencies, ack | ||
| Paul_the_Greek | As in acknowledge or as in holy crap? | 19:48 | |
| cotto_work | ack? | ||
| purl | ack is betterthangrep.com or a grep-like tool for code. or at www.betterthangrep.com/ | ||
| dalek | TT #1660 closed by chromatic++: Deprecate Parrot_find_global_s and Parrot_store_global_s | ||
| TT #1660: trac.parrot.org/parrot/ticket/1660 | |||
| Paul_the_Greek | Do I infer from dalek's lines up there that he ran the headerizer for me and I shouldn't check in another patch? | 19:49 | |
| chromatic | Yes. | ||
| Paul_the_Greek | Thank you, dalek. | 19:50 | |
| cotto_work | dalek is a bot | ||
| chromatic is much more likely to have run headerizer | |||
| chromatic | I'm also much more likely to exterminate. | ||
| Paul_the_Greek | Eventually I'll know the bots from the actual humans. | 19:51 | |
| Andy | The headerizer is what makes chromatic a love god. | ||
| Paul_the_Greek | When I make headerizer: can't find HEADERIZER HFILE directive in "src/ops/core_ops.c" at tools/build/headerizer.pl line 350. | ||
| Andy | It can make you a love god, too. | ||
| particle | the bots have mode +v | ||
| the humans have mode +o | |||
| Andy | you have to do a full make before you can do a make headerizer, Paul_the_Greek | ||
| mikehh | chromatic: ping | ||
| particle | opbots: trust Paul_the_Greek | ||
| slavorg | Ok | ||
| Paul_the_Greek | I did reconfig, full make, headerizer. | ||
| slavorgn | Ok | ||
| Paul_the_Greek | Let me try again. | 19:52 | |
| chromatic | pong, mikehh | ||
| cotto_work | who knows what Parrot_find_global_s did well enough to suggest what its replacement should be? | 19:53 | |
| on the off chance that we don't want to break Lua again without suggesting a fix | 19:54 | ||
| mikehh | chromatic: I upgraded Test::Simple yesterday and it causes t/perl/Parrot_Test.t to fail test 70 , any ideas, - I reverted and the test passes | ||
| Coke | cotto_work: does lua use it? | ||
| cotto_work | yes | ||
| dynext/pmc/luastring.pmc +24 | |||
| Paul_the_Greek | In the case of storing, ns_store_global is the same except when the namespace is null. | ||
| Coke | mikehh: presumably our test is being too specific. | ||
| chromatic | mikehh, can you nopaste the diagnostics? | 19:55 | |
| Paul_the_Greek | In the case of finding, ns_find_global_from_op is the same except that it throws when the namespace is null. | 19:56 | |
| In both cases the difference is when the namespace is null. | 19:57 | ||
| darbelo | Hah! We already check the (AFAICT, always 0) guaranteed_reclaimable and possibly_reclaimable values to avoid compact_pool inside mem_allocate. | 19:58 | |
| cotto_work | Let's see how it fares. | 19:59 | |
|
19:59
pyrimidine left,
pyrimidine joined
|
|||
| chromatic | darbelo, I'm not sure it works. | 19:59 | |
| cotto_work | this does not look promising | 20:00 | |
| nopaste | "mikehh" at 192.168.1.3 pasted "failure in t/perl/Parrot_Test.t after upgrading Test::Simple to 0.96" (37 lines) at nopaste.snit.ch/22729 | 20:01 | |
|
20:01
luben joined
|
|||
| darbelo | It depends on how you look at it. compact_pool is never called from there :) | 20:01 | |
| dalek | parrot: r48439 | chromatic++ | branches/gc_threshold_tuning/src/gc (4 files): [GC] Revised Nick Wellnhofer's patch in TT #1737. |
20:03 | |
| chromatic | I'm certain our tracking of used and reclaimable memory is wrong. | ||
| luben | good localtime() | ||
| darbelo | In the branch, it looks to me that we aren't tracking reclaimable memory anymore. | 20:04 | |
| luben | I have made some statistics on hash usage | ||
| mikehh | hi luben | ||
| luben | that could guide our decisions | ||
| chromatic | luben++ | 20:05 | |
| luben | here are my observations: | ||
| all are numbers on rakudo startup | |||
| Paul_the_Greek | reconfig, make, headerizer: can't find HEADERIZER HFILE directive in "src/ops/core_ops.c" at tools/build/headerizer.pl line 350. | ||
| Why is the headerizer looking at that file? | 20:06 | ||
| luben | we create around 80k hashes | ||
| moritz | chromatic: on the branch, rakudo build peaked at ~950M virtual memory... not sure if that's a win compared to before | ||
| running tests now | |||
| luben | of which around 25k are never used | ||
| moritz | wow. | ||
| luben | we make around 32k hash expansions | ||
| Paul_the_Greek | Them's a lot of hashes. | ||
| Yow. | |||
| jnthn | Perl 6. Doing some serious hash. | 20:07 | |
| Coke | O_o | ||
| luben | the biggest hash is below 1024 items | ||
| jnthn is very curious where they're all coming from | |||
| chromatic | Captures. | 20:08 | |
| purl | captures is a arrayref of all CaptureArgs for the chain | ||
| jnthn | Aha. | ||
| chromatic: As in, for named args? | |||
| Paul_the_Greek | jnthn! | ||
| jnthn | o/ Paul_the_Greek :-) | ||
| chromatic: I mean, at startup I can believe we make a lot of calls. | 20:09 | ||
| Paul_the_Greek | jnthn: Do you have Windows binaries of GMP version 4.1.4 or later? | ||
| jnthn | chromatic: But I'd be surprised if we made a lot of PAST nodes. | ||
| luben | So, from these numbers, I think that eliminating hash->bucket_indices and hash->free_list is not a wise idea | ||
| jnthn | Paul_the_Greek: No, 'fraid not...I've been building Parrot without GMP. | ||
| Paul_the_Greek | Ah well. | ||
| luben | because hash put/delete/expand will cost more | 20:10 | |
| Paul_the_Greek | We seem to have this dependency that I cannot meet. I can't find it anywhere. | ||
| chromatic | Oh, startup. Good point. | ||
| I'm sure Callgrind will reveal what creates hashes. | |||
| jnthn | Paul_the_Greek: cs.nyu.edu/exact/core/gmp/ seems to have 4.1 but dunno if that's good enough. | 20:11 | |
| And not binaries, just build instructions. | |||
| chromatic | luben, do you have insight into what's so expensive for hashes? | ||
| luben | chromatic, what is expensive in the hashes ? or what creates a lot of hashes? | 20:12 | |
| chromatic | The former. | ||
| purl | the former is a Key and the latter is a STRING | ||
| luben | hash expand is expensive, but not a lot. | 20:13 | |
| Paul_the_Greek | jnthn: We rely on 4.1.4 or later. There was some horrible bug in 4.1.3. | ||
| luben | the most expensive is hash_mark | ||
| jnthn | Ah, OK | 20:14 | |
| luben | but this is not a problem of hash datastructure but elements (keys,values) marks | ||
| chromatic | Right. | ||
| luben | I have an idea that I could try here | 20:15 | |
| to make paged allocations | |||
| Paul_the_Greek | luben, what does hash_mark do? | ||
| luben | and never realocate buckets, just realocate inices | ||
| it traverses hash keys/values and calls mark_object_alive on them | 20:16 | ||
| they in turn call mark_object_alive on their elements etc. | |||
| it is a part of the GC system | 20:17 | ||
| Paul_the_Greek | During a GC. | ||
| luben | yes | ||
| Paul_the_Greek | Does a GC occur during creation of all those hashes? | ||
| luben | I don't know | ||
| It could be at any time | |||
| Paul_the_Greek | We're hoping for no GC during initialization. | 20:18 | |
| luben | We should not make assumptions about GC runs | ||
| chromatic | We can't guarantee no GC during Rakudo initialization. | ||
| Paul_the_Greek | Right, just a hope. | ||
| If we're timing initialization and there's no GC, then hash_mark doesn't matter. | 20:19 | ||
| luben | If we make a parallel colector, It could be at any point | ||
| Paul_the_Greek | Are some hash tables marked "don't bother to GC"? | ||
| Right. | 20:20 | ||
| I presume all those initial entries will never be GCed. | |||
| luben | yes | 20:21 | |
| their memory management is done separately | |||
| and explicit | |||
| purl | explicit is good :) | ||
| luben | because sometimes we use hashes before we have GC | 20:22 | |
| in fact we have GC, but cstrings for keys/values are not allocated from GC | |||
| Paul_the_Greek | So no need to mark them? | ||
| chromatic | We don't in that case. | 20:23 | |
| luben | yes | ||
| Paul_the_Greek | Okay, so the issue with these keys is setup and bulk. | ||
| luben | the problem is with our GC system, not with the hashes | 20:26 | |
| on explicit hash deallocation we do not have problems | 20:27 | ||
| Paul_the_Greek | Sorry, don't understand. | ||
| The setup/bulk would be helped with fat strings. | |||
| luben | Paul_the_Greek, the most expensive action on hashes is recursive mark stage from our GC | 20:28 | |
| fperrad | Paul_the_Greek have you seen developer.berlios.de/projects/win32gmp/ ? | ||
| Paul_the_Greek | And also with hash tables whose initial size could be given. | ||
| luben | Paul_the_Greek, what is setup/bulk? | ||
|
20:28
ruoso joined
|
|||
| chromatic | ... because we use hashes to store things like object attributes and anything stored in a namespace. | 20:28 | |
| luben | Paul_the_Greek, I am working on direction to pre-size hashes | 20:29 | |
| Paul_the_Greek | fperrad: I saw that, but there was some problem. Let me check again. | ||
| Initial allocation of all the strings during initialization. | |||
| luben | Paul_the_Greek, I am thinking to allocate all/enough buckets on hash_init if we know how many they would be | 20:30 | |
| Paul_the_Greek | Right, or at least a close approximation, so only one resize might be necessary. | 20:31 | |
| Also, depending on how the strings are stored in the executable, the hashes could be precomputed. | |||
| chromatic | That would be nice. | 20:32 | |
| Paul_the_Greek | What if the strings were stored in frozen format? | ||
| chromatic | Welcome to Lorito and the world of images! | ||
| luben | I do not think that hash_init has to guess... the code that uses hashes should pass that info to hash_init | ||
| Paul_the_Greek | I worked on images for Common Lisp. It was hell. | ||
| Right. There is no reason the initialization can't know exactly how many strings there are. | 20:33 | ||
| So preallocate hash table and store strings as fat strings (many are short, I bet). | 20:34 | ||
| fperrad: private chat | |||
| ping: jnthn | 20:35 | ||
| luben | My idea now for hash optimizations is to make paged allocations, so we do not have to reallocate hash->buckets list | 20:36 | |
| chromatic | I like that idea. | 20:37 | |
| luben | After that we could make a different hash_create_n function that takes an arg that is the number of buckets to preallocate | ||
| this will make our hash_expand cheaper | |||
| Paul_the_Greek | So the bucket vector would be split into sections? | ||
| luben | also I will try to make default hash size to be 0, to be even more cheep for these 25k empty hashes | 20:38 | |
| Paul_the_Greek, yes | |||
| Paul_the_Greek | But then how do you rehash when you enlarge the table? | 20:39 | |
| luben | now we have a list of free buckets (hash->free_list). we allocate a page with the size of the current allocated space and add the new buckets to the free_list | 20:40 | |
| on expand we also have to reallocate hash->bucket_indices, but that is a lot less space | 20:41 | ||
| and the algorithm for bucket_indices remains the same | |||
| Paul_the_Greek | Ah, I thought you were going to get rid of the indices and use a coalesced scheme. | ||
| luben | If we eleminate indices the hash expansion will cost a lot | 20:42 | |
| Paul_the_Greek | So you allocate more bucket space, add it to the free list, enlarge the index, and then reindex into the enlarged index. | ||
| luben | also hash->delete and hash->put | ||
| Paul_the_Greek, yes | |||
| Paul_the_Greek | How are the buckets chained? | 20:43 | |
| Chandon | Is all this hash cleverness actually faster / more cache effective than the braindead array of pairs power-of-two and mask technique? | 20:44 | |
| Paul_the_Greek | Benchmarks! | ||
| purl | benchmarks are like college lab reports....often fudged to achieve the desired results | ||
| luben | hash_indices[hashval & mask] is a pointer to a hash bucket. every bucket has a "next" pointer for hash chains | 20:45 | |
| Paul_the_Greek | Ah, so you unlink the buckets from the old chains as you rehash them into the new index. | ||
| Chandon, of course, has a point. Tough to know how much each change helps. | 20:46 | ||
| luben | yes, but it will move only half of the indexes and their destination is guaranteed to be free | 20:47 | |
| Paul_the_Greek | If the size is a power of 2. | ||
| luben | Paul_the_Greek, yes | 20:48 | |
| Chandon, what is this technique? | |||
| Paul_the_Greek, the experiment I made yesterday shows that the 'key % size' operator is a lot more expensive than 'key & mask' | 20:49 | ||
| Paul_the_Greek | A lot more? Hmm. | ||
| luben | so, for now I think to stick with power of 2 pages | ||
| chromatic | Is that because of the cost of the asm instructions or incidental costs of non-powers-of-two pages? | ||
| Paul_the_Greek | If you go to a vector of sizes, you can use & or % as needed. | 20:50 | |
| You still have to check, of course. | |||
| luben | chromatic, I do not know, I have made a new field in hash->size and have replaced all bitmasking with modulus | 20:51 | |
| parrot start went from 680ms to 760ms | 20:52 | ||
| Paul_the_Greek | You can also check whether you are resizing from a power-of-2 to the next higher power-of-2 and use two different resizing algorithms. | ||
| The sizes were still powers of 2, just the operator changed? | |||
| luben | I have made a vector with prime number sizes, and it does not help significantly | ||
| Paul_the_Greek, the new expand algorithm was not so expensive | 20:53 | ||
| Paul_the_Greek | So power-of-2 sizes are fine. | 20:54 | |
| luben | so yesterday numbers: trunk(660ms) + expand not power of 2 (680ms) + modulus indexing (760ms) | ||
| all these numbers are for rakudo startup time | 20:58 | ||
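luben's mask-vs-modulus comparison rests on the identity that, for a power-of-two table size, the two pick the same slot; the cost difference is purely in the instructions (an AND versus an integer division). A minimal check (illustrative Python; function names invented):

```python
# For a power-of-two size n, 'h % n' and 'h & (n - 1)' select the same slot.
# The AND is a single cheap instruction; a general modulus needs a divide,
# which is what luben's 680ms-vs-760ms startup numbers reflect.

def slot_mod(h, n):
    return h % n

def slot_mask(h, n):
    assert n & (n - 1) == 0, "mask indexing requires a power-of-two size"
    return h & (n - 1)
```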
| chromatic | My guess is that indexing changed the number of bucket collisions. | 21:00 | |
| luben | it is the same indexing, expressed in a different manner | 21:01 | |
| ok, let me do a quick test | |||
| bubaflub | do we have a tools/build/pmc2c.pl expert around? i'm trying to get out of directory building to work and need some help with dumping pmc's | 21:04 | |
| Paul_the_Greek | We should do a distribution test on the hashes. If they are distributed well, I don't think non-power-of-2 sizes matter much. | ||
| cotto_work | bubaflub: what's the problem? | ||
| purl | hmmm... the problem is nobody ever writes test cases after breaking it | ||
| bubaflub | cotto_work: if i'm building out of directory, pmc2c.pl -- dump will dump everything into the source directory instead of the build directory | 21:05 | |
| the --vtable method always dumps to cwd | |||
| cotto_work | sounds like a bug | 21:06 | |
| purl | bzzzzzzzzzzzzzz... | ||
| luben | Paul_the_Greek, chromatic, here is the quick test I have made: pastebin.com/MrT202jc | 21:07 | |
| Paul_the_Greek, in fact I have made a quick test, only around 25% of the keys are in collisions | 21:08 | ||
| But there are some pathological cases | 21:09 | ||
| Coke | bubaflub: are you working with a modified pmc2c.pl ? | ||
| or just the stock one? (it's not at all surprising that the one on trunk would assume it's in the build dir.) | |||
| bubaflub | Coke: stock, and i'm making the modifications myself | 21:10 | |
| darbelo | bubaflub: pmc2c doesn't handle dirs very well. You'll be better off just dumping to CWD and then mv-ing it away. | ||
| bubaflub | darbelo: ah, that would be much easier | ||
| Paul_the_Greek | That's 1.14 ns for the divide over the and. | ||
| I guess the compiler didn't optimize the loop away. :D | 21:11 | ||
| luben | yes | ||
| Chandon | For all you hashing people, here's some interesting discussion on design tradeoffs: www.youtube.com/watch?v=WYXgtXWejRM#t=7m | ||
| luben | with -O2 you get time 0 | ||
| chromatic | I wonder if there's a way to keep power of two buckets for distributing key/value pairs but not require the same number of buckets when resizing. | ||
| Paul_the_Greek | Thanks, Chandon. | ||
| Clever compiler. | 21:12 | ||
| chromatic? | |||
| chromatic | n slots for buckets (where n is 2^x) and m buckets (where m > n) | 21:13 | |
| Paul_the_Greek | I wonder if the 1 ns wouldn't really matter in the long run, if there are other advantages. | ||
| Still not getting it. | |||
| luben | chromatic, we can | 21:14 | |
| darbelo | bubaflub: Also, watch out for dynpmc groups, pmc2c doesn't handle them well either path-wise. | ||
| Paul_the_Greek | Where do the m > n buckets go if there are no slots for them? | ||
| darbelo | Eh, groups. | ||
| chromatic | We expect some collisions. | ||
| Paul_the_Greek | Right, but where do the buckets go if there are no slots for them? | 21:15 | |
| luben | we now have #define N_BUCKETS(n) ((n)) | ||
|
21:15
fperrad joined
|
|||
| chromatic | There are always slots for them. | 21:15 | |
| bubaflub | darbelo: yeah, i'll get there eventually... right now an out of directory configuration works fine, and the build goes about 4 or 5 lines then explodes on pmc2c.pl --dump | ||
| luben | and we are allocating buckets by this N | ||
| Paul_the_Greek | You just said there are m > n buckets but only n slots. | ||
| chromatic | If there are two slots, finding the right slot is key % 2 | ||
| No matter how many buckets, it's still key % 2. | |||
| Paul_the_Greek | Oh, you mean n entries in the index and m > n buckets. Sorry, me slow. | 21:16 | |
| luben | look. we have bucket index, this is array of pointers, we find the bucket with buck_ind[key & mask] | ||
| chromatic | Exactly. | ||
| luben | all buckets are allocated linearly | ||
| Paul_the_Greek | When you said "bucket slots", I didn't think of the index. | 21:17 | |
| luben | from hash->buckets area | ||
| Paul_the_Greek | Right, no connection between index size and bucket count. | ||
| luben | we could, but I doubt that this will buy us something | ||
| chromatic | Yeah, it's just a thought. | 21:18 | |
| luben | up until 3-4 days ago we were allocating n = (m - m/4) buckets | ||
| chromatic | Right. | 21:19 | |
| Paul_the_Greek | It would trade slower insert/lookup for fewer allocations. | ||
| bubaflub | darbelo: i'm a bit worried because pmc2c.pl looks for a default.dump every time a PMC is dumped, but mine will be in a different directory than the source. i'll either have to modify the find_file() that looks for default.dump or supply the full path to find_file() | ||
| luben | my idea is that with paged allocation we keep the current speed for put/get/delete, we keep the number of allocations but they are not so big and we do not make bucket reallocations | 21:22 | |
| chromatic, (n<m) does not buy us anything, because we have free_list and do not have to rehash/scan on collisions | 21:25 | ||
| Paul_the_Greek | So first bucket allocation size is specified on creation. Additional bucket allocations are some factor of the initial size? | ||
| Or some factor of the current bucket count? | 21:26 | ||
| chromatic | Looks like hash resizing is some 3% of runtime. | 21:27 | |
| Paul_the_Greek | Are the new buckets added to the free list lazily? | ||
| luben | Paul_the_Greek, the first allocation is n = the next power of 2 from the requested size, every subsequent allocation is the hash size (and we add the new page to the size) | ||
| chromatic, we could not shave a lot there - max 3% :) | 21:28 | ||
| chromatic | That's at least 3% of every program's runtime. | ||
| Paul_the_Greek | What do you mean by "hash size"? The size of the index or the number of buckets? | ||
| luben | chromatic, is it inclusive time (with gc) or not inclusive? | 21:29 | |
| chromatic | not inclusive | ||
| Paul_the_Greek | If you would add the new buckets to the free list lazily, then you wouldn't have to touch the new bucket pages at all. | ||
| luben | and inclusive? what's the number if you know it? | ||
| chromatic | 3.9% | 21:30 | |
| purl | 0.039 | ||
| bubaflub | ping dukeleto | ||
| chromatic | Hm, what if the number of bucket indices were a lot greater than the number of free buckets? | 21:31 | |
| luben | Paul_the_Greek, example: request for hash of size=5, we allocate size=8, next allocation is 8 (the size is 16), next allocation is 16 (the size is 32) etc.) | ||
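luben's allocation schedule can be sketched directly (illustrative Python; `allocation_schedule` is an invented name, and the numbers just replay luben's size=5 example):

```python
# Sketch of the paged bucket allocation: round the requested size up to a
# power of two, then on each expansion allocate one new page equal to the
# current total, doubling capacity without moving any existing buckets.

def next_pow2(n):
    p = 1
    while p < n:
        p <<= 1
    return p

def allocation_schedule(requested, expansions):
    pages = [next_pow2(max(requested, 1))]
    for _ in range(expansions):
        pages.append(sum(pages))  # new page size == current total size
    return pages
```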
| Paul_the_Greek | Then there would be fewer collisions. | ||
| Chandon | msg whiteknight With gsoc_threads @ r48440 I'm getting no test failures. Do you still see stuff like NCI failing when you test? | ||
| purl | Message for whiteknight stored. | ||
| chromatic | and we wouldn't have to rehash *every* time we add buckets. | 21:32 | |
| Paul_the_Greek | That's because now the index size = bucket count, right? | ||
| chromatic | Right. | ||
| luben | we could test it. it's easy, just change the #define | 21:33 | |
| Paul_the_Greek | If the index size and bucket count are decoupled, more options. | ||
| No, you could enlarge the index only when you think the number of collisions is too high. | |||
| Much flexibility ensues. | 21:34 | ||
| In fact, if the initial entry count is given on allocation, you could allocate the right number of buckets, ... | |||
| but the index size could be some function of the initial size, based on performance tests. | 21:35 | ||
| For a table with lots of lookups, make the index size large. For a rarely-used table, make it smaller. | 21:36 | ||
| You could even have a way to specify the index:bucket size ratio for each table. | 21:37 | ||
| luben | test done: it is more expensive to have n = m/2 | ||
| Paul_the_Greek | Which is n and which is m? | ||
| Chandon | I bet you a dollar that anything that complicated will tend to be slower than a simple closed hash with linear probing. | 21:38 | |
|
21:38
dafrito joined
|
|||
| luben | m - index size, n - buckets allocated | 21:38 | |
| Paul_the_Greek | Chandon, do you mean a coalesced table? | ||
| Having a larger index slowed things down? | 21:39 | ||
| luben | yes | ||
| Paul_the_Greek | Is it still a power of 2 in size? | ||
| luben | yes | ||
| chromatic | Did you avoid rehashing when expanding the number of allocated buckets? | ||
| Chandon | Paul_the_Greek: en.wikipedia.org/wiki/Hash_table#Open_addressing | 21:40 | |
| Paul_the_Greek | Chandon: Yes, that is the sort of hash I implemented for my system. I have no clear feelings on its advantages. | 21:41 | |
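Chandon's "braindead" alternative, a flat open-addressed table with linear probing, power-of-two size, and mask indexing, looks roughly like this (a minimal sketch without deletion or resizing, which any real table needs):

```python
# Open addressing with linear probing: no separate index or bucket chains,
# just a flat power-of-two array of (key, value) pairs addressed by mask.

class LinearProbeHash:
    def __init__(self, size=16):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.mask = size - 1
        self.slots = [None] * size  # each slot holds a (key, value) pair

    def put(self, key, value):
        i = hash(key) & self.mask
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) & self.mask  # probe the next slot, wrapping around
        self.slots[i] = (key, value)

    def get(self, key, default=None):
        i = hash(key) & self.mask
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) & self.mask
        return default
```

The appeal is cache locality: a probe walks adjacent memory instead of chasing chain pointers.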
| luben | chromatic, no, you have to recalc bucket_index, but rehashing is faster than adding a 'hashval' field in bucket struct | ||
| Paul_the_Greek | luben, what is n and how many keys were added? | ||
| chromatic | Why do you have to recalculate a bucket index? | 21:42 | |
| luben | 1st test n=4, m=8, 2nd test n=8, m=16 (same result) | ||
| Paul_the_Greek | How many keys? | ||
| luben | chromatic, on hash_expand | ||
| dalek | parrot: r48440 | Chandon++ | branches/gsoc_threads (10 files): [gsoc_threads] Now with passing tests. |
21:43 | |
| chromatic | I'm suggesting separating index storage from bucket storage. | ||
| Paul_the_Greek | Right. | ||
| Chandon is wondering whether a separate index even matters. | 21:44 | ||
| chromatic | I haven't managed to wrap my brain around Chandon's suggestion yet, nor have I wrapped bacon around it. | ||
| luben | chromatic: imagine hashval(a) = 1001, hashval(b) = 0001, initial table is 8 in size, so they go in slot(1) of hash_index. When you resize, you have to move hash_ind[1001] to point to the bucket of 'a' and update hash_ind[0001] to point to the bucket of 'b' | 21:45 | |
| Paul_the_Greek | Oops, I'm blurring open addressing and coalescing. | 21:46 | |
| chromatic | ... because the buckets have moved, right. | ||
| Paul_the_Greek | luben: how many keys did you hash into your tables in the test? | ||
| chromatic | Seems like there could be cheaper ways to do that: walk the indices and add an offset to every pointer. | ||
| luben | no, the bucket may be in the same position, but (hashval & 7) may not be equal to (hashval & 15) | 21:47 | |
| chromatic | I'm thinking it'd stay hashval & 7 | ||
| Paul_the_Greek | Half the entries in the index have to move up to the upper half. | ||
| I think. | 21:48 | ||
| chromatic | I don't see why, but maybe I haven't explained myself. | ||
| luben | no, then for each lookup we have to walk chains for (hashval & 15) and (hashval & 7) | ||
| Paul_the_Greek, yes | 21:49 | ||
| chromatic | Suppose we allocate a new hash with 8 buckets and 8 index slots. | ||
| Paul_the_Greek | A key that used to hash to 01001 and end up at index 001 now ends up at index 1001. | ||
| chromatic | Storing a bucket means hashkey & 3. | ||
| luben | chromatic, as we do now | ||
| chromatic | Suppose we need to allocate 8 more buckets. | ||
| luben | yes | ||
| chromatic | ... but we don't allocate any more index slots. | ||
| Paul_the_Greek | Oh, no more index slots. Yes, then no reindexing. | 21:50 | |
| chromatic | Storing a new bucket still means hashkey & 3. | ||
| luben | the collisions will be higher | ||
| Paul_the_Greek | Right. | ||
| chromatic | Now 8/8 is pretty silly, because of higher collisions, but.... | ||
| ... suppose we allocate a hash with 2b and 8i. | |||
| We can double the number of allocated buckets twice without rehashing. | |||
| Paul_the_Greek | Probably no collision. | ||
| chromatic | Probably no collision until we get 7 buckets full. | 21:51 | |
|
21:51
integral joined
|
|||
| Paul_the_Greek | Right. | 21:51 | |
| luben | yes | 21:52 | |
| bubaflub | darbelo or anyone else who cares about out of directory building: i have an alternative idea: rather than modifying the entire configure and build process, can i just build in directory and then have a script that moves all the built files into a different build dir? | ||
| Paul_the_Greek | Imagine a user is going to add a bunch of entries to a table, but doesn't know how many. | ||
| luben | we have to make a separate bucket store allocation, but we have to make it anyway if we want to make paged allocations | 21:53 | |
| Paul_the_Greek | So first set the index to a medium size and the buckets small. | ||
| Add all the entries, asking for no resizing of the index. | |||
| Then say "done" and allow the index to be set to a good size and everything reindexed. | 21:54 | ||
| Only one reindex. | |||
| luben | yes | ||
| I could try that, it converges with paged allocation | |||
| mikehh | t/perl/Parrot_Test.t - Failed test: 70 in make corevm/make coretest, test and perl_tests in fulltest | ||
| chromatic | We can also get clever about how we describe the Hash struct so as to avoid excess allocations. | ||
| mikehh | all other tests PASS (pre/post-config, make corevm/make coretest, test, fulltest) at r48440 - Ubuntu 10.04 i386 (gcc with --optimize) | ||
| chromatic | mikehh, did you get a chance to nopaste the diagnostic output of that test? | 21:55 | |
| Paul_the_Greek | And allow the user to say "create table, no reindexing" ... add entries ... "done now" | ||
| luben | chromatic, we are clever now, on hash_create we make just 1 allocation for all datastructures | ||
| chromatic | We could definitely use that noreindex strategy for storing constant strings at startup. | ||
| Paul_the_Greek | luben: are the buckets added to the free list lazily? | 21:56 | |
| luben | chromatic, yes | ||
| Paul_the_Greek, on hash_create and hash_expand | |||
| Paul_the_Greek | chromatic: but if we can set the hash sizes at creation, then we wouldn't reindex anyway. | ||
| mikehh | the failing test depends on versions of Test::Builder and is pretty horrible, it also requires trailing spaces in the test file | ||
| chromatic | True, but think about when we load bytecode. | ||
| Paul_the_Greek | luben: The buckets are added to the free list lazily? | ||
| That is, they aren't chained onto the free list when first created? | 21:57 | ||
| luben | they are chained when hash is created/expanded | ||
| mikehh | chromatic: nopaste.snit.ch/22729 | ||
| Paul_the_Greek | If we could stop doing that, then we wouldn't have to touch the bucket memory at all. No paging. | ||
| chromatic | That's an odd error, mikehh. No wonder you're confused. | 21:58 | |
| luben | Paul_the_Greek, please explain | ||
| chromatic | Looks like the test expects the diagnostic output on STDERR, not STDOUT. | ||
| Paul_the_Greek | Instead of chaining the new buckets onto the free list ... | 21:59 | |
| chromatic | * Test::Builder::Tester now sets $tb->todo_output to the output handle and | ||
| not the error handle (to be in accordance with the default behaviour of | |||
| Test::Builder and allow for testing TODO test behaviour). | |||
|
21:59
pyrimidine left
|
|||
| Paul_the_Greek | keep a separate free pointer that just advances along the new memory, taking each bucket as needed. | 21:59 | |
| chromatic | Heh. | ||
| Paul_the_Greek | Check out Parrot_add_to_free_list in mark_sweep.c | ||
| chromatic | You don't want to know how much work that was to fix. | 22:00 | |
| Paul_the_Greek | So first you look on the free list, then you advance the pointer. | ||
| mikehh | chromatic: I extracted the test and was working on it - it works with Test::Simple/Builder 0.94 but fails with 0.96 | ||
| luben | Paul_the_Greek, we could do that but I think it will cost more on hash_put | ||
| Paul_the_Greek | Well, you have to check the free list anyway. | ||
| Now, if empty, you would add more bucket space. | 22:01 | ||
| luben | yes | ||
| Paul_the_Greek | Instead, you look at the free pointer pointing into the last-allocated bucket space. | ||
| If there are more unused buckets, you grab one. | |||
| If not, then you allocate more space. | |||
| So you're right, it's a bit more time to add an entry. | 22:02 | ||
| But less time when adding more bucket space, because no adding to free list. | |||
| So the computing is about the same, but you save the paging. | |||
| Must go feed cats. Be back in awhile. | 22:03 | ||
| luben | imagine you have a hash of size 16 full with 15 entries, so bucket 15 is free. if you delete a bucket at pos 1, you point free_space at pos 1, on next put, you allocate that bucket and you have to linearly scan the whole bucket store to find the next free bucket | 22:04 | |
| mikehh | chromatic: I know I have had problems with that test file before, I usually use Kate as my default editor, which auto removes trailing whitespace, and that file requires it | 22:05 | |
| chromatic | Ah, right. | ||
| mikehh | I have got Kate set up to use spaces instead of tabs and remove trailing whitespace (and a few other things) so when working with makefiles or that file I use vim | 22:07 | |
| chromatic | I fear we'll have to check the version of T::B and do something conditionally there. | 22:08 | |
| mikehh | Yeah, that is what I thought | 22:10 | |
| have to think about re-writing that test at some stage | |||
|
22:15
jsut_ joined
|
|||
| whiteknight | purl msg Chandon (as of r48440) I see 5 failing coretests still: t/pmc/complex.t:11, t/pmc/filehandle.t:9, t/pmc/nci.t:19-20, t/pmc/stringhandle.t:11 | 22:57 | |
| purl | Message for chandon stored. | ||
| Paul_the_Greek | luben: No, you don't point the free space at bucket 1. You put bucket 1 on the free list. The free pointer stays pointing at 15. | ||
| Chandon | whiteknight: Weird. What platform? | ||
| whiteknight | x86_64 | 22:58 | |
| Chandon | That's what I'm on. Why would we be getting different results? | ||
| whiteknight | no idea. Let me try a fresh checkout and see what happens | ||
| luben | Paul_the_Greek, so in that case you put an entry, it goes at pos 15, and you have to linearly scan to find that pos 1 is empty | 22:59 | |
| whiteknight | Chandon: actually, which compiler are you using? | ||
| Paul_the_Greek | Whenever you free a bucket, it goes on the free list. | ||
| luben | the free_list initialization now is cheap, it is a fast linear scan, done one time | 23:00 | |
| Chandon | whiteknight: gcc | ||
| Paul_the_Greek | Whenever you need a bucket, you look at the free list first. | ||
| You only look at the free pointer when there are no buckets on the free list. | |||
| whiteknight | Chandon: okay, I use clang by default. I'll try now with gc | ||
| gcc | |||
| luben | Paul_the_Greek, aha, I got it now | ||
| Paul_the_Greek | I love this stuff. | 23:01 | |
| luben | I am working now on separate allocation for buckets; it will give us some flexibility to make things like that | ||
| whiteknight | Chandon: aha! that did it. No test failures with GCC (though I did see a "tests out of sequence" issue in threads_io.t) | 23:02 | |
| Chandon | It's still giving you tests out of sequence? Argh! | ||
| whiteknight | Chandon: I think it only happens sometimes | ||
| Chandon | Only happens sometimes = double Argh. | 23:03 | |
| whiteknight | let me try it again | 23:04 | |
| Chandon | Also, why would clang be giving significantly different behavior than gcc? | 23:05 | |
| nopaste | "Whiteknight" at 192.168.1.3 pasted "t/src/thread_io.t results for Chandon+++" (72 lines) at nopaste.snit.ch/22733 | 23:09 | |
| whiteknight | there are a few runs of it | ||
| looks like the sequence problem happens about 25% of the time | |||
| Chandon | whiteknight: I'm getting the same result. Now I'm going to have to try to figure out if that's actually a bug. If it always succeeded I'd be happy and if it always failed I'd be sure it was wrong. As it is, it might just be scheduling variation. | 23:10 | |
| whiteknight | if you increase the length of the sleep, can you eliminate that variation? | 23:12 | |
| cotto_work | Chandon: is it possible that this failure is triggered by hash ordering? | ||
| whiteknight | Chandon: increasing that timeout to 1.0 makes it pass for me every time | 23:13 | |
| Chandon | cotto_work: Don't think so. It's thread timing, so it's probably terrible all by itself. | 23:14 | |
| cotto_work | probably so | 23:15 | |
| Chandon | whiteknight: Which one? | ||
| whiteknight | line 33 | ||
|
23:16
jsut joined
|
|||
| Chandon | whiteknight: Really? Now I'm extra confused. My errors were swapping tests 2 and 3. | 23:17 | |
| whiteknight | mine too | 23:18 | |
| looking at the test again, that is pretty weird | |||
| "Task" here is a green thread, right? | |||
| (I love the word "grumblecake") | |||
| Chandon | Yes, Task is a green thread. | 23:19 | |
| whiteknight | I'm just making sure I'm keeping the terminology straight | 23:20 | |
| cotto_work appreciated that too | |||
| whiteknight | Chandon: I'm looking over your proposal again now, is there anything big that you haven't gotten done yet? | 23:23 | |
| I don't see anything obvious | 23:24 | ||
| Chandon | Actual parallel execution of code? | ||
| cotto_work | bubaflub: what about making --vtable=/path/to/vtable.tbl work in pmc2c.pl? | ||
| From a quick look, I suspect it'd be very easy to get that to work. | |||
| (and preserve the current behaviour for --vtable) | 23:25 | ||
| whiteknight | Chandon: ah, well, yes. I was sort of overlooking that | ||
| Chandon | whiteknight: Yea, if we carefully overlook that then I've mostly done what I proposed, give or take blocking operations other than file input. | 23:26 | |
| whiteknight | this was a very ambitious project though. It's impressive that you got this far | 23:27 | |
|
23:28
estrabd joined
|
|||
| whiteknight | I am looking forward to getting all this merged into trunk | 23:30 | |
| Chandon | Still needs some work to make it make a bit more sense before that should happen. | 23:31 | |
| whiteknight: In threads_io.t, if you add $run->flush; after the $run->print, can you still get it to fail? | 23:32 | ||
| bubaflub | cotto_work: maybe... lemme take a look | ||
| whiteknight | Chandon: no | 23:33 | |
| Chandon | whiteknight: Then I'll commit that and declare victory. | 23:35 | |
| whiteknight | Chandon++ | ||
| Chandon | So why isn't it working right with clang? | 23:36 | |
| whiteknight | no idea. Let me dig into that right now | 23:37 | |
|
23:38
shockwave joined
|
|||
| shockwave | Greetings and salutations. | 23:38 | |
| cotto_work | I am shocked, but I still wave. | ||
| shockwave | Is throwing allowed within destructors in parrot? | ||
| whiteknight | urg. I'm reading over the code in src/thread.c, and I will be very happy to see all this shit go away | ||
| hello shockwave | |||
| dalek | rrot: r48441 | whiteknight++ | branches/gsoc_threads/src/threads.c: cleanup trailing whitespace. My editor does this automatically |
23:39 | |
| rrot: r48442 | Chandon++ | branches/gsoc_threads/t/src/threads_io.t: [gsoc_threads] Now with less sporadic failure. |
|||
| Chandon | whiteknight: I tried to kill some of it, but it's got its tendrils *everywhere*. | ||
| whiteknight | yeah, I know | ||
| horrible | |||
| purl | horrible is GateKeeper from wingate (will eat over 200mb of ram in 4 hours) or The Mummy or ouzo or PacMan joke book or being out of ice cream or STUCK AT WORK or a great "tv" show. | ||
| whiteknight | purl forget horrible | ||
| purl | whiteknight: I forgot horrible | ||
| cotto_work | Coke: ping | 23:40 | |
| shockwave | I'm going to re-ask my question, just in case someone who knows the answer didn't see it, since it scrolled up pretty fast. | 23:43 | |
| Is PIR code allowed to throw exceptions within destructors in Parrot? | |||
| Chandon | shockwave: What happens when you try it? | 23:44 | |
| whiteknight | shockwave: My guess is "no" | ||
| Paul_the_Greek | So here's a question. Note documentation: docs.parrot.org/parrot/latest/html/...e.ops.html | ||
| whiteknight | shockwave: destructors are executed out of normal control flow, inside the GC. If you throw an exception at that point it's either going to be ignored (unlikely) or will crap up the GC (likely) | ||
| Paul_the_Greek | It says that .PARROT_ERRORS_OVERFLOW_FLAG controls promotion to BigInt. | 23:45 | |
| whiteknight | shockwave: it's worth a test, but I don't expect the answer to be good | ||
| shockwave | Chandon: I haven't tried it. I don't want to just try it, even if it semi-works, because the behavior may be undefined (as in, mostly works, but shouldn't). | ||
| whiteknight: That's my guess too. | |||
| Paul_the_Greek | It is not consulted in any core operation. | 23:46 | |
| whiteknight | shockwave: Ah, I see what you are saying. for the "official" answer, I say this: A pox on you if you do it | ||
| Paul_the_Greek: that's probably a legacy thing that got forgotten | |||
| shockwave | whiteknight: Googling 'pox'... | ||
| Paul_the_Greek | It's consulted by the Integer PMC. | 23:47 | |
| cotto_work | like chicken | ||
| Paul_the_Greek | Should we change the documentation of it? | ||
| shockwave | ah chicken-pox. | ||
| whiteknight | shockwave: yeah, it's not a modern saying | ||
| Chandon | Did someone merge a patch to IMCC that makes it eat babies more? | 23:48 | |
| whiteknight | Chandon: why, is somebody missing a baby? | ||
| shockwave | Well, as of now, I have disallowed the *explicit* throwing of exceptions within destructors. That is, actual 'throw' clauses seen by the compiler within the destructor are not allowed. | ||
| But that doesn't stop the usage of code that throws exceptions. | 23:49 | ||
| I made that decision without even thinking about consulting you guys since it seems like the right thing to do, based on what other languages out there seem to do. | 23:50 | ||
| But then I saw this dude talking on Youtube: www.youtube.com/user/GoogleTechTalk...lVpPstLPEc | |||
| Chandon | whiteknight: Nope, just getting segfaults on the task tests after a trunk merge and gdb is blaming imcc code. | ||
| dukeleto | Chandon: plobsing merged the dynops_mapping branch recently, which fiddled with imcc | ||
| shockwave | He says that in the D programming language, one is allowed to throw exceptions from within destructors, so that got me thinking. | ||
| whiteknight | urg | 23:51 | |
| Chandon | dukeleto: That sounds like a good explanation. | ||
| dukeleto | Chandon: it got merged in r48412 | 23:53 | |
| Chandon: try the rev before that to see if it is the culprit | |||
| shockwave | Well, thanks guys. I guess the answer is: "That behavior is undefined". That's what I expected :) | 23:55 | |
| cotto_work | msg Coke Do the OSU OSL folks know that Smolder has memory leak issues and may need some special treatment to keep it from messing with the rest of our VM? | 23:56 | |
| purl | Message for coke stored. | ||
| shockwave | bye | 23:57 | |
|
23:57
shockwave left
|
|||