Geth | rakudo/windows_latest_azure_pipeline: 49deff8780 | (Timo Paulssen)++ | 2 files let's try if gdb exists on this azure runner ... | 00:03 | |
00:09 vrurg joined
00:11 vrurg__ left, vrurg_ joined
00:13 vrurg left
Geth | rakudo/windows_latest_azure_pipeline: 50bfc41170 | (Timo Paulssen)++ | azure-pipelines.yml oops the "not cache restored" went on the wrong step | 00:13 | |
rakudo/windows_latest_azure_pipeline: a750649761 | (Timo Paulssen)++ | azure-pipelines.yml try to get azure to give us some information about what the heck is going on | 00:20 | ||
rakudo/windows_latest_azure_pipeline: efaf597d42 | (Timo Paulssen)++ | azure-pipelines.yml try to get azure to give us some information about what the heck is going on | 00:25 | ||
rakudo/windows_latest_azure_pipeline: 514c39a56c | (Timo Paulssen)++ | azure-pipelines.yml of *course* i got the path wrong … well, that's all the time we have for today. | 00:44 | ||
02:20 MasterDuke joined
MasterDuke | here's my current smallest golf gist.github.com/MasterDuke17/03b0d...550ac07424 | 02:22 | |
removing any of the lines that do anything, or removing either MVM_SPESH_NODELAY=1 or MVM_SPESH_BLOCKING=1, makes the test file pass | |||
but `MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1 make t/spec/S09-typed-arrays/native-shape1-num.t` results in `Failed test: 7` (`not ok 7 - Accessing non-existing on num64 array dies`) | 02:24 | ||
06:22 [Coke] left
08:19 [Coke] joined
08:36 MasterDuke left
09:11 finanalyst left
09:29 dawids joined
10:05 dawids left
timo | love me a good short golf like that | 10:37 | |
i can get it to die without nodelay by running the test more times in a row, so that's also goodbad | 10:46 | ||
patrickb | lizmat: I've got an idea that might put us in a lot better position wrt our ability to process RakuDoc. Let's define a subset that does not allow intermixing code and RakuDoc. Then have a tool to convert intermixed code and doc to that safe subset (replace referenced code with C<> or similar). | 11:35 | |
lizmat: In the new RakuAST Grammar, would it be possible to hit the RakuDoc entrypoint directly and prohibit escaping to the Raku grammar? I. e. use the RakuAST grammar to parse the safe RakuDoc subset guaranteeing that no BEGIN time stuff is happening? | 11:37 | ||
Such a safe RakuDoc parser should be unproblematic to use on e.g. GitHub. | 11:38 | ||
timo | there's an opportunity to create a candidate for infix:<,> with just a single argument, which would be used by Any.List (via Any.list), for example in multi method set-shape from native_array.rakumod | 11:41 | |
patrickb: i imagine you could replace the Raku language in the braid with one that does almost nothing | |||
just a random little thing i stumbled upon, probably not worth much | 11:45 | ||
lizmat | patrickb: the "problem" with rakudoc is that as far as the grammar is concerned, it's whitespace | 11:49 | |
so there is no direct entry point | 11:50 | ||
fwiw I've not been able to completely grok that mechanism myself yet | 11:51 | ||
patrickb | but there are parts of the grammar that process rakudoc (i.e. X<> markup, tables, paragraphs, ...), no? | 11:52 |
timo | SHAPED-ARRAY-STORAGE seems to be able to handle a single dimension being passed in without a list around it, but at least in this native array example i'm looking at it's turned into a List earlier when there's just a single dimension | 11:53 | |
lizmat | patrickb: yes there are... except for X<> markup | ||
that's done in Raku / NQP outside of the grammar | 11:54 | ||
patrickb | What's the input for that logic? RakuDoc strings? | ||
It might be simpler than I anticipated... | |||
lizmat | Str | 11:55 | |
patrickb | So Raku already contains a RakuDoc parser that has essentially nothing to do with the Raku grammar?! Then we are almost done already, right? | 11:56 | |
lizmat | RakuAST::Doc::Paragraph.from-string(Str) | ||
patrickb | Oh wow. | ||
lizmat | yeah, *that* problem is solved :-) | ||
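A minimal sketch of calling that entry point, assuming only the method name lizmat gives above; the sample text and the .atoms call are illustrative assumptions about the node's API:

    # Minimal sketch, assuming the entry point lizmat names above; the sample
    # text is made up, and .atoms / the experimental pragma may vary by release.
    use experimental :rakuast;

    my $para = RakuAST::Doc::Paragraph.from-string:
        'Plain text with B<bold> and C<code> markup.';
    say $para.atoms;   # the parsed text and markup atoms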
patrickb | So the only missing piece is a tool to write out Rakudoc from a Raku source file, pretty much a `raku --rakudoc=RakuDoc::To::RakuDoc` | 11:58 | |
lizmat | that's also there already, but it depends on the code being valid | 11:59 | |
patrickb | lizmat: Wow, wow! Depending on the code compiling is not an issue at all for all the usecases I have in mind. | 12:00 | |
Am I missing something obvious? Why did no one think of this earlier? | |||
lizmat | wasn't your issue with the Deparse approach that you could have code execute ? | 12:03 | |
maybe I'm missing your point? | |||
patrickb | That was about syntax highlighting. E.g. in the debugger I'm working on I want to highlight code I get passed from remote. I don't have all the source code available, only one file. | 12:05 |
For a source code editor it's also necessary for the highlighter to be able to deal with broken code. | |||
lizmat | right, and the Deparse approach doesn't allow that | 12:06 | |
patrickb | But the RakuDoc usecase is different. On GitHub the typical usecase is parsing a toplevel README.rakudoc (which will usually be safe rakudoc anyways). That could happen via such a SafeRakuDoc parser. | 12:07 | |
lizmat | in a lot of cases, the RakuDoc is embedded in the source code though | ||
especially with declarator docs | 12:08 | ||
patrickb | On raku.land, there is no way around parsing all of the code. We want to have the docs that are embedded in code. But: My current plan is to have the processing pipeline in two stages. The first stage is run inside a highly constrained sandbox. It extracts all RakuDoc from a module and generates SafeRakuDoc. Then outside of the sandbox we process that SafeRakuDoc and generate HTML. | 12:09 | |
lizmat | that could work, now that I understand what you're saying... | 12:10 | |
if you would exclude declarator docs, making such a SafeRakuDoc could be simple | |||
with declarator docs, probably less so... as the grammar *requires* them to be attachable to code, at least atm and afair | 12:11 | ||
patrickb | I do want declarator docs. I hoped we could just turn the referenced lines of code into `C<>` doc blocks or so. | ||
Oh, in the SafeRakuDoc, yes, those will exclude Declarator blocks! | 12:12 | ||
lizmat | the thing is that declarator docs need the info about what they are attached to for a sensible render | ||
patrickb | I think I misunderstood the statement. I agree, SafeRakuDoc can't and won't have declarator docs. | ||
That's why the converter to generate SafeRakuDoc from a module will need to turn declarator docs into `C<>` or similar. | 12:13 | ||
lizmat | again: declarator docs need the info about what they are attached to for a sensible render | 12:14 | |
and then it becomes tricky | |||
patrickb | But can't we turn the code they are attached to into a literal string? | 12:15 | |
We could even add a new syntax element to SafeRakuDoc =begin raku-code\n some literal string =end raku-code | |||
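As an illustration of the conversion being proposed (the =begin raku-code block is patrickb's hypothetical syntax above, not an existing RakuDoc directive), a declarator doc such as

    #| Adds two numbers.
    sub add($a, $b) { $a + $b }

might be written out as SafeRakuDoc along the lines of

    =begin raku-code
    sub add($a, $b) { $a + $b }
    =end raku-code

    Adds two numbers.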
lizmat | yes, a renderer can... but to get the thing it is attached to, we need to compile the code (at least at the moment) | ||
#| is especially tricky in that respect | 12:16 | ||
patrickb | But compiling the code would be the job of the first tool, the one that generates SafeRakuDoc. And that is totally fine. | ||
lizmat | but compiling is inherently unsafe wrt BEGIN | 12:17 | |
so how would you integrate that into a raku.land pipeline ? | 12:18 | ||
patrickb | I put that first bit that generates SafeRakuDoc in a highly constrained Sandbox. | 12:19 | |
12:19 [Coke] left
patrickb | Embrace the inevitable I guess. | 12:20 | |
lizmat | ok, but then why not completely render the HTML in there as well?? | ||
why the extra step? | |||
compilation is the real hard part, rendering a lot less | |||
patrickb | Because HTML can be malicious. If I assume evil code in the module that code could take over and generate HTML with embedded evil JS. | 12:21 | |
If I have SafeRakuDoc as the intermediate nothing bad can happen anymore. | |||
Separating the two parts will also give us a safe tool to process SafeRakuDoc. And that tool can be used on e.g. GitHub. | 12:23 | ||
lizmat | ok, so: Sandbox compiles the code, then renders the AST back to RakuDoc | ||
patrickb | Yes. | ||
lizmat | and the RakuDoc is exported out of the sandbox, and then parsed again, and then rendered to HTML ? | ||
patrickb | Yes. | ||
Specifying SafeRakuDoc and having a secure pipeline to process it is just a really good idea I think. | 12:24 | ||
lizmat | sorry, but what if the evil code in the Sandbox did not render pure RakuDoc, but actually rendered evil code as well? | ||
the out of the sandbox re-parse would then execute the evil code nonetheless | 12:25 | ||
patrickb | It can't as SafeRakuDoc inherently can't contain malicious code. | ||
lizmat | that's what you're saying :-) but is that true ? | ||
patrickb | The out of the sandbox re-parse will use the SafeRakuDoc parser. That parser will error out with a syntax error if the input files contain stuff that is not SafeRakuDoc. | 12:27 | |
lizmat | ok, so you depend on the SandBox renderer to change declarator docs into SafeRakuDoc somehow | 12:30 | |
hmmm | |||
patrickb | Exactly. | 12:31 | |
lizmat | ok, that could actually work | ||
patrickb | And on GitHub we can use that same SafeRakuDoc parser to parse README.rakudoc files. | 12:32 | |
lizmat | $source.AST.rakudoc should give you all of the RakuAST::Doc objects | 12:33 | |
patrickb | Having SafeRakuDoc also opens the door for third-party RakuDoc parsers as they don't need to implement a full Raku parser anymore. | ||
lizmat | with the .WHEREFORE method giving you the object a declarator doc is attached to | ||
so you would offer the SafeRakuDoc parser as a service ? | 12:34 | ||
how would you otherwise sandbox it? | |||
patrickb | You mean the parser that *produces* SafeRakuDoc? | ||
lizmat | yes | 12:35 | |
patrickb | I have some ideas. I think using a layer-able container tool might be a good idea, as I can then reuse the containers to not have to reinstall all the dependencies every time. | 12:36 | |
Then prohibit all network access including DNS except for the fez ecosystem. | |||
Give it a generous timeout and fire. | 12:37 | ||
lizmat | I'd say block all DNS and network after fetching the tar files | 12:39 |
patrickb | That's even better. | ||
timo | patrickb: please be aware that containers like docker barely give you security | ||
12:39 [Coke] joined
patrickb | But I need to install the deps. | 12:39 | |
timo | however, if you're on something like fedora, there's all the selinux stuff that can help | 12:40 | |
patrickb | If there is a way to tell zef to deeply fetch a modules dependencies in a separate step I can do that, then give up all network, then install. | 12:41 | |
lizmat | ugexe?? | 12:42 | |
^^ | |||
timo | --deps-only maybe? | 12:43 | |
patrickb | timo: Do you have some info I can read up on? I've read statements in both directions many times, but found it difficult to get well informed info. | ||
--deps-only installs all of the deps excluding the module itself. It does not separate out the `fetch` part. | |||
lizmat | perhaps --deps-only --fetch ? | 12:44 | |
patrickb | --fetch apparently applies to the current module only | 12:49 | |
There is `zef fetch` but that does not resolve dependencies. | |||
But in general that's a solvable problem I guess. | 12:50 | ||
timo | www.redhat.com/en/blog/latest-cont...ed-selinux here's an older example of a possible container escape that was mentioned as an example of selinux being good for your container-related security | ||
.o( of course it's already better to be running podman than docker since that runs containers without root privileges on the host by default ) | 12:54 | ||
patrickb | Ah, so it's not inherently unsafe, just a lot more likely to be exploited since the attack surface is bigger? | 12:55 |
timo | it's a little, little bit safer than just running the executable regularly | ||
there's also a bunch of mitigations that you can easily turn on in a .service file if you invoke your thingie with systemd, either "enable"/"start" or you can use many of these options with "systemd-run", but these features do not depend on systemd, systemd just makes them easy and convenient to access | 12:56 | ||
stuff like "get your own /dev and /tmp" and "mount the entire system read-only for the program" | |||
patrickb | I guess in general there is no good reason not to use qemu instead. Layering FSes is possible as well. So maybe we should just go down that lane. | 12:57 | |
timo | systemd can even run your program from a filesystem image, so rather than giving the program your entire root filesystem and saying "but only under these subdirectories!" you can give it a complete FS that couldn't give access to your system's filesystem if it tried | 12:58 | |
and those can also employ layering and whatnot | 12:59 | ||
also, a custom seccomp filter would drastically cut down on possible attacks, and systemd has a few named preset seccomp filters you can pick from | 13:00 | ||
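An untested sketch of the systemd-run style confinement timo is describing; the option set, image path, and wrapped command are assumptions, not a vetted hardening profile:

    # Untested sketch of the confinement timo describes: private /dev and /tmp,
    # read-only system, a preset seccomp filter, and running from an FS image.
    # Option set, image path, and the wrapped command are illustrative only.
    systemd-run --wait --collect --pipe \
        -p PrivateDevices=yes \
        -p PrivateTmp=yes \
        -p ProtectSystem=strict \
        -p ProtectHome=yes \
        -p SystemCallFilter=@system-service \
        -p RootImage=/srv/sandbox/rakudoc-extract.img \
        raku extract-safe-rakudoc.raku My::Module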
there's also landlock which i know nothing about | 13:02 | ||
oh that looks interesting. i wonder if they are per-thread, and we could/should landlock for example the spesh thread | 13:05 | ||
ok landlock seems to be mainly about TCP ports and parts of the filesystem, so you can prevent one thread and anything exec'd from it from writing to a specific path or so | 13:11 | ||
patrickb | timo: But what I don't get, from a security POV, why would a systemd confinement be a better sandbox than e.g. a podman container? Both have roughly the same attack surface, don't they? | 13:13 | |
timo | there are additional easily accessible options in the systemd confinement, but i think you can get everything from both? | 13:15 | |
patrickb | A very quick googling told me that both podman and docker also support seccomp profiles. | 13:16 | |
But anyways, there is no real reason not to just use a full qemu VM, right? | 13:17 | ||
timo | if it's a small OS inside the qemu VM that's probably fine? then restoring from a snapshot after every run of the sandboxed thing can be a good way to ensure nothing gets persistence (+/- exploitable bugs of course) | 13:19 | |
patrickb | Yup, something along those lines. | 13:23 | |
It's been some time, but I think in qemu you can have layered disk images. If I use those I don't even have to restore a snapshot, just layer on some base image and remove the layer afterwards. | 13:24 | ||
timo | full vm snapshot restore has the benefit that there are nooks and crannies outside of the FS where an attacker can store persistent state if needed | 13:27 | |
patrickb | But then it's a disadvantage, not an advantage, right? | 13:30 | |
timo | how do you mean? | 13:32 | |
patrickb | If full vm snapshot restores give attackers places outside of the FS where they can store persistent state, then that's a disadvantage. Or did I read your last message wrong? | 13:33 |
timo | no, what i mean is that if you only restore the FS between executions, an attacker can get persistent state stuck in all manner of other places | 13:37 | |
if you do a full restore of the VM that includes its entire RAM contents and virtual device state, there's no place for any changes to remain | |||
patrickb | Ah. Would it even be possible to swap out the FS without cold restarting the VM? | 13:43 | |
My idea was to restart the VM every time. | |||
If I restore a snapshot instead I basically save the startup time, right? | 13:44 | ||
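A rough sketch of the two approaches being weighed; image names and VM options are made up:

    # 1) Throwaway qcow2 overlay on a read-only base image: discard the overlay
    #    after the run and every filesystem change is gone (cold boot each time).
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 run-overlay.qcow2
    qemu-system-x86_64 -m 2G -drive file=run-overlay.qcow2,format=qcow2
    rm run-overlay.qcow2

    # 2) Full VM snapshot (disk + RAM + device state) taken once on a booted,
    #    idle guest, then restored between runs, skipping the boot:
    #    (qemu) savevm clean      # in the QEMU monitor
    #    (qemu) loadvm clean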
Geth | rakudo/windows_latest_azure_pipeline: 62f855c767 | (Timo Paulssen)++ | azure-pipelines.yml can't believe we were building without debug symbols |
timo | i'd have to try it out in practice to see if it saves much time, especially when you're running some kind of highly optimized-for-qemu OS that boots ultra fast? | 13:45 |
well, it's probably not me who is going to try it out tbh | 14:15 | ||
Geth | rakudo/windows_latest_azure_pipeline: 553da923ed | (Timo Paulssen)++ | azure-pipelines.yml can't believe we were building without debug symbols | 14:17 |
timo | what on earth is this Enter-VsDevShell doing that takes ten frickin minutes?? | 14:29 | |
Geth | rakudo/main: 850a5a71c3 | (Elizabeth Mattijsen)++ | 2 files RakuAST: fix deparsing of { } in argument lists If there is only one statement in a block that is part of an argument list, then render that without newlines. Adapt failing tests accordingly | 14:30 |
rakudo/windows_latest_azure_pipeline: 34ab643d7d | (Timo Paulssen)++ | azure-pipelines.yml help debug slowness of startup of vsdevshell? | 14:41 | ||
rakudo/windows_latest_azure_pipeline: db09f60f0f | (Timo Paulssen)++ | azure-pipelines.yml also install .pdb files | 14:46 | ||
coleman | patrickb - I would recommend starting with VM snapshots, and then LAYERING IN container-based state after that. My perspective here is about practical gettin' it done concerns rather than security. In my experience with automating wild builds, you almost always need the VM as an intermediate container/orchestrator | 15:19 |
Orchestrating snapshots and disk mounts is worth it. It's hard but it can make stuff REALLY fast. | 15:20 | ||
Geth | rakudo/windows_latest_azure_pipeline: 672c1176fc | (Timo Paulssen)++ | fulltrace.gdb try to get gdb to load .pdb files, debug output please | 15:22 |
coleman | If you (or anyone else) has a rough-looking bash script that builds an environment, even a half broken one, feel free to paste it to an issue on Raku/infra; it could be cleaned up and turned into a Packer build | 15:23 | |
I've used these for the docs site's CI github.com/Raku/infra/tree/main/images/scripts | 15:24 | ||
timo | what's packer? | 15:33 | |
dev.azure.com/Rakudo/cc6e25e1-327d...15/logs/14 this is really super annoying, the setup for doing something with powershell on an azure pipeline windows instance takes multiple minutes? that just can't be right | 15:37 |
it looks like the super slow command is $devShell = &"C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -latest -find **\Microsoft.VisualStudio.DevShell.dll | 15:38 | ||
surely that isn't just looking for a file on disk for seven minutes? is it installing a fresh visual studio or something? | |||
patrickb | timo: My bet is on that thing looking for vs on disk for 7 minutes. | 15:46 | |
timo | no, a line earlier it's getting the -property installpath and it's done in .7 seconds | 15:47 | |
patrickb | I vaguely remember starting up Developer Shells on Windows also being slow (as in taking 8 seconds or so). | ||
timo | 12 minutes in this particular run | 15:48 | |
fulltrace.gdb:4: Error in sourced command file: | |||
`D:\a\1\install\bin\moar.exe.pdb': can't read symbols: file format not recognized. | |||
HHHHHHHHUUUUUUUUUUUUHHHHHHHHHHHHHH | |||
i want to burn this thing to the ground, what on earth is wrong with all of this | |||
Geth | rakudo/windows_latest_azure_pipeline: 18f5f91fab | (Timo Paulssen)++ | fulltrace.gdb neraruenglongdruioeanrdtdntuosdnug | 15:54 |
lizmat | that's quite a mouthful! | 15:57 | |
timo | i can't believe this absolute effing nonsense | 15:59 | |
ugexe | patrickb: you can sort of do that assuming it is ok to have the distribution extracted and that it is located in zefs cache. `zef install FooBar --deps-only --/build --/test --dry-install` to fetch all the dependencies to e.g. `$HOME/.zef/store`. Then you could disable the network based ecosystems such that the cache is the only source left ala `zef install FooBar --/fez --/rea --deps-only` | 16:00 | |
timo | i will disengage from this task and the next windows machine that i encounter in the wild will have my foot through its floppy disk drive slot | ||
ugexe | although now that we use a staging repo a --dry-install would still end up precompiling :/ | ||
oh right, maybe just add a --/precompile flag to avoid that | 16:04 | ||
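Pieced together from ugexe's suggestion, the two-phase flow might look roughly like this; --/precompile is the hypothetical flag he mentions, not something zef has today:

    # Phase 1, network allowed: fetch the distribution and its deps into zef's cache.
    zef install FooBar --deps-only --/build --/test --dry-install
    # ... drop all network / DNS access here ...
    # Phase 2, offline: install using only the local cache as a source.
    zef install FooBar --deps-only --/fez --/rea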
16:04 sena_kun joined
patrickb | Alternatively, I would guess allowing GET requests exclusively to the ecosystem domains with hard-coded DNS all the time isn't that big of a security issue, is it? | 16:05 |
ugexe | sounds fine, but i haven't had time to read all the back log to know what you're trying to solve | 16:10 | |
timo | github.com/rakudo/rakudo/issues/5566 if anybody wants to pick up where i left off, or pick through the results for anything maybe useful | ||
ugexe | I also have no idea what is going on, but I wonder why the latest run mentions mingw | 16:22 | |
Looking for separate debug info (build-id) for C:\Windows\System32\msvcrt.dll | |||
Trying d:\c\mingw-builds\ucrt64-seh-posix\x86_64-1220-posix-seh-ucrt-rt_v10-rev2\mingw64\lib\debug/.build-id/b5/a3b6a41f9c3a9f300af0d42574ce9b.debug... no, unable to compute real path | |||
(No debugging symbols found in C:\Windows\System32\msvcrt.dll) | |||
16:23 sena_kun left
timo | probably just in the search paths by default | 16:25 | |
16:26 sena_kun joined
timo | or that's where gdb was built | 16:28 | |
Geth | rakudo/main: 5032aad27c | (Elizabeth Mattijsen)++ | src/core.c/RakuAST/Deparse.rakumod RakuAST: fix deparsing issue with package declarator doc Which also fixes #5640 |
16:34 sena_kun left
Geth | rakudo/windows_latest_azure_pipeline: 431e566684 | (Timo Paulssen)++ | azure-pipelines.yml debug output installPath and devShell | 16:36 |
timo | github.com/microsoft/vswhere/issues/318 | 16:38 | |
Geth | rakudo/ugexe-azure-pipelines-test: e7dfac111a | (Nick Logan)++ (committed using GitHub Web editor) | azure-pipelines.yml Test windows-latest with older rakudo | 16:48 |
rakudo: ugexe++ created pull request #5641: Test windows-latest with older rakudo |
ugexe | i have a windows 11 VM with VS2022 that has a working 2023.04 so i'm curious if that'll still work on azure. unfortunately being on an arm host means it isn't exactly the same | 16:49 | |
Geth | rakudo/windows_latest_azure_pipeline: c4adbabff1 | (Timo Paulssen)++ | azure-pipelines.yml try with lldb instead of gdb | 17:00 |
lizmat | patrickb: I just realized that we probably don't want to create a SafeRakuDoc representation | 17:02 | |
Geth | rakudo/windows_latest_azure_pipeline: 61a6eeff21 | (Timo Paulssen)++ | azure-pipelines.yml use static path for DevShell.dll | 17:03 |
timo | TF400813: The user 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' is not authorized to access this resource. | ||
lizmat | patrickb: $path.spurt: $source.AST(:compunit).rakudoc.gist would give you a file with the source representation of the AST | 17:04 | |
you could then safely .EVAL that to get the in-memory version of the AST without parsing the original source code (without any BEGIN actions) | 17:05 | ||
that you can then render | |||
hmmmm | 17:16 | ||
another complicating factor are code examples, aka =begin code :raku :-( | 17:17 | ||
which sorta brings me back to skipping the whole SafeRakuDoc step, and go straight to rendering in the sandbox | 17:18 | ||
afk& | |||
Geth | rakudo/windows_latest_azure_pipeline: 3a28a4ec81 | (Timo Paulssen)++ | azure-pipelines.yml no silent build please |
timo | ugexe: so in theory we may be able to get some kind of debugging if we get lldb installed on the azure runner. don't ask me how though. | 17:30 | |
ugexe | learn.microsoft.com/en-us/azure/de...bol-server | ||
i wonder if that would be useful? | |||
timo | no | 17:31 | |
the symbol files are right there | |||
gdb has no support for the pdb format though | |||
lldb does, but isn't installed | |||
ugexe | right, but if the pdb artifacts were published couldnt we run a local lldb? | ||
timo | don't forget to compile moar with --debug and we might be missing the .pdb files in "make install" | ||
if we had a core file close to where the process exits. which i guess we can do with gdb? | 17:32 | ||
Geth | rakudo/windows_latest_azure_pipeline: 64a8b352ff | (Timo Paulssen)++ | azure-pipelines.yml different moar build for debugspew | 17:36 |
rakudo/windows_latest_azure_pipeline: 09a46e860f | (Timo Paulssen)++ | azure-pipelines.yml bla bla debugspam bla bla |
ugexe | my first guess would be that `choco install llvm` would install lldb | 17:37 | |
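If that guess pans out, a pipeline step along these lines could produce a backtrace; whether the choco llvm package actually ships lldb on the hosted image is exactly the open question, and the moar invocation is only a placeholder:

    # Untested sketch: install lldb via chocolatey and grab a backtrace from a
    # crashing run. The moar arguments below are a placeholder, not the real ones.
    choco install llvm -y
    lldb --batch -o run -o "bt all" -- install\bin\moar.exe <usual moar arguments>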
Geth | rakudo/windows_latest_azure_pipeline: ae29a28850 | (Timo Paulssen)++ | azure-pipelines.yml i think this is how you are supposed to use CHECKOUT_TYPE | 17:42 |
rakudo/windows_latest_azure_pipeline: 7621a9ceb6 | (Timo Paulssen)++ | azure-pipelines.yml telemeh spam | 17:45 | ||
rakudo/windows_latest_azure_pipeline: 548f2ce63d | (Timo Paulssen)++ | azure-pipelines.yml telemeh spam |
Blin: ce4b1162c8 | (Will Coleda)++ | bin/blin.p6 Skip SOD, it breaks 'zef update' right now | 18:10 |
patrickb | lizmat: A pure RakuDoc parser would not need to parse =begin code blocks. It can safely treat them as plain text. | 18:25 |
Also having a SafeRakuDoc opens the door for RakuDoc support on GitHub / GitLab / .. | 18:26 | ||
The issue with rendering in the sandbox is that outputting HTML from within the sandbox is unsafe. You could have evil JS in the HTML and no good way to know. Once you put that HTML on the web bad things start happening. | 18:29 |
[Coke] | =begin code isn't executing code... | 18:31 | |
I am probably missing some context here. | |||
19:02 finanalyst joined
20:06 [Coke] left
20:23 [Coke] joined
Geth | rakudo/windows_latest_azure_pipeline: 779f630649 | (Timo Paulssen)++ | azure-pipelines.yml get telemeh windows build fix | 20:54 |
21:19 finanalyst left
21:22 sena_kun joined
Geth | rakudo/windows_latest_azure_pipeline: 9aa22becb0 | (Timo Paulssen)++ | azure-pipelines.yml set telemetry log var correctly | 21:24 |
rakudo/windows_latest_azure_pipeline: d2b235ee83 | (Timo Paulssen)++ | azure-pipelines.yml fix syntax error? | 21:40 | ||
21:44 finanalyst joined
Geth | rakudo/windows_latest_azure_pipeline: ec4862a2dd | (Timo Paulssen)++ | azure-pipelines.yml telemetry log file has a pid at the end so glob it | 21:52 |
21:53 finanalyst left
22:04 sena_kun left
Geth | rakudo/windows_latest_azure_pipeline: 413bc024db | (Timo Paulssen)++ | azure-pipelines.yml pull in moar debug change | 22:28 |