01:41:07 9 MB core (:
01:41:40 Nice!
01:52:07 ~3x as much time for startup.
01:55:34 That trade-off will be quite worth it to some.
01:56:32 I'm not sure how much of that is due to just compressing zeros away.
01:57:08 I'll have to try a dummy RLE too... and transposing the bytes followed by RLE/zlib.
02:05:59 This one is probably going in Saturday-ish.
02:19:37 cool
02:20:46 we could have a tree shaker... or we could just throw more brute force at the problem ;)
02:24:19 <|3b|> well, lower ram use would be nice too :)
02:24:42 |3b|: it gets flushed into swap ;)
02:24:43 *|3b|* is switching VPS servers soon though, so will care less about that
02:35:44 well, transposing doesn't seem to help (good thing too :)
12:45:20 pkhuong: have you considered something LZO-based? it might be a lot faster and still give notable compression
13:02:23 pkhuong: cool beans!
13:45:44 christoph_debian: sure. now that the hooks are clear, it's just a simple matter of coding.
13:46:22 the problem with less common formats is that it's more likely we'd have to bundle the library with our source.
13:58:05 GPLv2. no go.
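The compressed-core experiment above inflates the core in memory at startup, which would sit on zlib's streaming API. A minimal sketch of that kind of block decompression in C; the function name and fixed-size buffers are illustrative assumptions, not SBCL's actual loader code:

    /* In-memory inflation of a compressed core block with zlib's
     * streaming API -- a sketch, not SBCL's loader.  We assume the
     * caller knows the decompressed size in advance. */
    #include <string.h>
    #include <zlib.h>

    /* Inflate in[0..in_len) into out[0..out_cap); return the number
     * of bytes produced, or -1 on corrupt input. */
    static long inflate_block(const unsigned char *in, size_t in_len,
                              unsigned char *out, size_t out_cap)
    {
        z_stream zs;
        memset(&zs, 0, sizeof zs);
        if (inflateInit(&zs) != Z_OK)
            return -1;
        zs.next_in   = (unsigned char *) in;
        zs.avail_in  = (uInt) in_len;
        zs.next_out  = out;
        zs.avail_out = (uInt) out_cap;
        int ret = inflate(&zs, Z_FINISH);   /* whole block in one call */
        inflateEnd(&zs);
        return ret == Z_STREAM_END ? (long) (out_cap - zs.avail_out) : -1;
    }

A dummy RLE pass, as proposed at 01:57, would separate how much of the 9 MB win is generic entropy coding and how much is just the runs of zeros in the heap.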
14:46:09 <_8david> do you guys also see a reliable failure in signals.impure.lisp / SLEEP-MANY-INTERRUPTS on Linux/amd64, or should I try to bisect?
14:47:31 I just started seeing it on openbsd/amd64 a day or two ago
14:48:00 didn't nikodemus change nanosleep a couple of days ago?
14:48:18 I mean, I don't like to blame anyone, but since nikodemus is the only one doing any commits round here... ;-)
14:49:28 the last successful test run I did was apparently 6d3e70a
14:51:21 i added sleep-many-interrupts after them
14:51:36 the nanosleep changes, that is
14:51:53 ha
14:52:56 i haven't built many vanilla trees over the past couple of days, though. i /know/ i tested the commit that added that test on darwin, not 100% sure i tested it on linux
14:54:10 i'll see what's up and deal with it
14:55:02 see, if you just added the test, that's not entirely fair!
14:55:20 there was me thinking that you broke everything horribly, but you just revealed that everything was already horribly broken
14:55:41 now I suspect you of breaking it a long time ago instead
15:10:24 _8david: what's the failure mode?
15:11:23 ah.
15:11:32 deadline vs. timeout ;)
15:14:07 liblzma is not GPLv2, it's "public domain".
15:14:24 I was looking at LZO ;)
15:14:41 oh hey, you even said LZO. Sorry, I can't read. :)
15:15:30 but anything that has a more interesting compression:decompression speed ratio is welcome
15:15:52 foom: what's the default scheduling quantum on linux these days?
15:16:03 lzma is about the same speed as gzip at decompression, and can get substantially higher compression.
15:16:41 depends on your distro...
15:16:48 My debian 2.6.32 is at 250 Hz
15:16:57 that's the upstream default
15:17:38 I think the redhat default is 1000 Hz
15:17:54 it can vary from 100 Hz on "server" kernels to 1000 Hz on low-latency workstation kernels
15:18:04 looks like a very consistent 0.1 s delay from each signal
15:18:28 which makes sense.
15:18:47 how is that test supposed to work?
15:23:05 ok, interesting. different semantics between linux and darwin.
15:23:45 darwin accounts for the time spent handling signals. linux (and other sane platforms, I assume) accounts for the time spent waiting in the queue.
15:30:12 actually, if we could have a simple block decompressor, more mprotect tricks could probably get us back to previous startup times.
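One plausible reading of the "mprotect tricks" remark, sketched below: map the core's pages PROT_NONE and inflate each page on first touch from a fault handler, so startup defers the decompression work. decompress_page() is a hypothetical helper, and real code would need async-signal-safe decompression and thread handling:

    /* Lazy decompression via mprotect + SIGSEGV -- a sketch of one
     * reading of the remark above, not SBCL code. */
    #include <signal.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    extern void decompress_page(void *page);   /* hypothetical helper */
    static long page_size;

    static void on_fault(int sig, siginfo_t *si, void *ctx)
    {
        (void) sig; (void) ctx;
        void *page = (void *) ((uintptr_t) si->si_addr
                               & ~(uintptr_t) (page_size - 1));
        /* Open the page up, fill in the real bytes, leave it RW.
         * Code pages would also need PROT_EXEC. */
        mprotect(page, page_size, PROT_READ | PROT_WRITE);
        decompress_page(page);
    }

    static void install_lazy_inflation(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        page_size = sysconf(_SC_PAGESIZE);
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_fault;
        sigaction(SIGSEGV, &sa, NULL);
    }

Each page then costs one fault plus one page's worth of decompression on first access, instead of inflating the whole core up front.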
15:31:18 is a gzip-core different in its behaviour from a gzexe runtime+core?
15:33:32 Krystof: it inflates in-memory rather than to /tmp
15:35:31 (and the gzexe shell script doesn't work on darwin?!)
15:35:52 bah, darwin
15:40:36 Of course, a zipped core is bad for overall memory usage if you run multiple processes
15:40:41 yup.
15:40:51 unless you're running linux with the duplicate page detection code. :)
15:40:53 but that was true of the gzexe'd core already
15:41:57 argh. the lzma sdk is a tarbomb
15:45:41 I remember 10 years ago when I had software that always extracted archives into a single directory: either into the one toplevel dir in the archive, or into a dir named after the archive itself.
16:44:20 nikodemus, 1. What is the compiler lock? 2. Will 16k in some sense guarantee it will be gone?
16:47:15 Qworkescence: you can't run the compiler in more than one thread
16:48:00 Qworkescence: it's a global lock that ensures calls to COMPILE are serialized. IIRC, that also affects things like CLOS.
16:51:37 antifuchs, is it like Python's GIL?
16:52:49 doesn't the GIL mean that no python code can run in parallel?
16:53:11 the big compiler lock only affects the compiler, not the code that was generated (unless that code calls the compiler) (:
17:01:53 antifuchs, That is true, the GIL is much worse.
17:03:07 but the BCL is still annoying, because it affects things you might not expect
17:03:27 also, poiu could use threads on sbcl too (not just on clozure) (:
17:04:15 (if that doesn't make ITA want to pitch in a few k, I don't know what would) (-:
17:24:28 <_8david> pkhuong: so, how about always using MADV_MERGEABLE when starting from a compressed core?
17:24:45 <_8david> http://paste.lisp.org/display/124289
17:25:01 myeah... dunno. It's a potential slowdown.
17:25:54 <_8david> is it a slowdown if /sys/kernel/mm/ksm/run isn't enabled?
17:26:03 right.
17:26:55 <_8david> Nicer would be a way for users to enable/disable it explicitly; and enable it by default only on compressed cores, perhaps.
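The MADV_MERGEABLE hook under discussion reduces to a single madvise call over the mapped core. A sketch, with the build-time guard and error handling as assumptions (the linked paste is the authoritative version):

    /* Mark the mapped core's pages as candidates for kernel samepage
     * merging (KSM) -- the idea from the discussion above. */
    #include <stddef.h>
    #include <sys/mman.h>

    static void mark_core_mergeable(void *core_start, size_t core_len)
    {
    #ifdef MADV_MERGEABLE
        /* EINVAL from kernels built without KSM is harmless here. */
        madvise(core_start, core_len, MADV_MERGEABLE);
    #else
        (void) core_start; (void) core_len;  /* non-Linux: nothing to do */
    #endif
    }

madvise only registers the range with KSM; no scanning or merging happens unless /sys/kernel/mm/ksm/run is 1, which is why the answer to "is it a slowdown if ksm/run isn't enabled?" above is, in effect, no.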
17:31:27 _8david: did you have relocatable dynamic space working in a branch?
17:40:48 <_8david> erm, I might have a working relocatable dynamic space for 1.0.12.42 in a branch.
17:41:32 I want to see how it hooks into the saving/loading logic, and see if I can make the two work cleanly together.
17:43:02 <_8david> if it works at all, this is it: http://repo.or.cz/w/sbcl/lichteblau.git/shortlog/refs/heads/just-relocation
17:48:17 _8david: funny. I would have precomputed the relocation data.
17:50:54 <_8david> what data do you mean?
17:51:56 <_8david> explicit, precomputed relocation information would be nice if you want to make read-only or static space relocatable. To avoid scanning the entire dynamic space for references, we would fix up only the places that need it.
17:52:42 well, static/ro space is a bit more work, iirc.
17:52:43 <_8david> But my branch only relocates dynamic space itself, so vast numbers of pointers in there need adjustment.
17:53:14 but I was thinking of dynamic space, and just storing a bitmap of pointer-to-dynamic-space-ness
17:53:25 <_8david> yes, for static/ro we actually need relocation information that we currently don't have. (Or in part, information that genesis has, but discards to save space.)
17:55:30 <_8david> pkhuong: do it!
17:57:14 <_8david> bench it / mark it / opti-mize it (as Daft Punk might express it)
17:57:40 techno logic.
18:19:42 <_8david> pkhuong: if I'm counting it right, 30% of dynamic space words need to be touched
18:20:07 at one bit per word
18:20:11 it's still pretty reasonable ;)
18:21:26 <_8david> sure, the bitmap size will be affordable. I'm more thinking "how much time / how many memory accesses would we save by doing so".
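A sketch of the bitmap scheme pkhuong describes: one bit per dynamic-space word marking "holds a pointer into dynamic space", so the loader adds the relocation delta only where a bit is set. All names and the flat layout are assumptions, not the just-relocation branch's code:

    /* One-bit-per-word relocation: bit i is set iff dynamic-space
     * word i holds a pointer into dynamic space. */
    #include <stddef.h>
    #include <stdint.h>

    typedef uintptr_t lispobj;   /* one machine word, as in the SBCL runtime */

    static void relocate_dynamic_space(lispobj *space, size_t n_words,
                                       const uint8_t *bitmap, intptr_t delta)
    {
        for (size_t i = 0; i < n_words; i++)
            if (bitmap[i >> 3] & (1u << (i & 7)))   /* word i is a pointer */
                space[i] += (lispobj) delta;
    }

At the quoted ~30% pointer density, the bitmap mostly saves the tag dispatch of a full heap walk rather than raw memory traffic, which is exactly the question raised at 18:21.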