11:21:50 _8david: i don't know if i need admin access to the github mirror, but... please add me to the org anyways :)
12:26:36 <_8david> nikodemus_: done, for both github and gitorious
12:57:45 thanks
14:32:43 fe[nl]ix pasted "deadlock cycle in swank" at http://paste.lisp.org/display/122629
14:32:57 any idea how to debug that?
14:35:02 fe[nl]ix: i actually know the issue, but i'd completely forgotten about it
14:35:48 the easiest workaround is to add a (with-deadline ...) around the FINISH-OUTPUT in the auto-flush thread
14:37:05 i'll try to commit something to slime today that deals with it
14:37:38 ok, thanks :)
14:37:38 but as a local hack, (with-deadline (:seconds 0.1) (finish-output ...)) in the stream-finish-output method should do it
14:37:59 sb-ext:with-deadline, that is
14:38:04 um, sb-sys
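A minimal sketch of the local hack nikodemus_ describes, not SLIME's committed fix; the class name SLIME-OUTPUT-STREAM and the :around method are illustrative. The point is bounding FINISH-OUTPUT with a deadline so the auto-flush thread cannot block forever while holding the stream lock.

  ;; Sketch of the workaround above; SLIME-OUTPUT-STREAM is a stand-in for
  ;; whatever gray-stream class SLIME actually uses. On a timeout we give up
  ;; quietly -- the next auto-flush tick will retry.
  (defmethod sb-gray:stream-finish-output :around ((stream slime-output-stream))
    (handler-case
        (sb-sys:with-deadline (:seconds 0.1)
          (call-next-method))
      (sb-sys:deadline-timeout ()
        nil)))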
14:38:18 now, lunch, before i become even more incoherent
15:18:30 fe[nl]ix: i used to have a test-case that triggered that pretty reliably, but i can't remember what it was
15:18:34 do you have one?
15:19:13 let me try again
15:19:59 yes
15:20:23 it's recompiling iolib from scratch :D
15:21:02 ok.
15:28:09 nikodemus_: re sse stuff, I just had a flash that might fix some logic bug in the type system
15:28:36 if that fixes the other conceptual issues I had with the design, most of angavrilov's work should be good to go
15:33:00 also, we'll have an api to bikeshed over afterward
15:34:53 excellent
15:35:05 fe[nl]ix: workaround committed to slime
15:35:33 but first, lunch.
15:39:01 nikodemus_: thanks, I'll try it
16:12:13 how painful would it be for sb-ext:bytes-consed-between-gcs et al to be 64 bits wide on x86_64?
16:12:40 our tuning for a heavily functional program seems to want more on a larger box
16:12:56 as stop-the-world GC running in only a single thread is very painful on lots of cores
16:13:04 so deferring it helps tremendously
16:15:50 (having the GC stop the world and use all cores to do collection would be better, but I know that's a far more involved request)
16:17:56 nikodemus_: thanks, it works :)
16:18:33 Phoodus: huh, interesting. i would have imagined that it was already, but you're right
16:19:49 huh, it _is_ unsigned long in the runtime
16:20:03 you mean from the C interface?
16:20:12 I just saw the lisp side of things define it as unsigned 32
16:22:21 yeah. you should be able to change that to sb!alien:unsigned-long without any trouble
16:22:44 that's very good to hear
16:22:52 can you change that in the upstream?
16:23:45 not today
16:24:08 but if you can do that locally, and report that it works, i should be able to see to it tomorrow
16:24:13 oh, not today is fine -- we can change ours
16:24:20 sure, will do
16:24:23 thanks
16:24:56 send in a patch for git format-patch and you'll get the Author credit in history :)
16:25:09 from, even -- not for
16:25:47 *nikodemus_* has a love-affair with git commit --author="..."
16:26:14 artificially inflating the contributor count since 2011-06!
16:28:07 heh
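For reference, a sketch of the local change being discussed, as one might apply it from the REPL. The supported knob is (setf (sb-ext:bytes-consed-between-gcs) n); poking the runtime's C variable via sb-alien, which is the route Phoodus later reports using, sidesteps the 32-bit lisp-side declaration that is the subject of the bug. The name bytes_consed_between_gcs matches the runtime source of the era; treat the direct poke as a hack, not an API.

  ;; Hack sketch: raise the GC threshold to 12 GB by writing the runtime's
  ;; C variable directly. The supported interface is
  ;; (setf (sb-ext:bytes-consed-between-gcs) n).
  (setf (sb-alien:extern-alien "bytes_consed_between_gcs" sb-alien:unsigned-long)
        (* 12 1024 1024 1024))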
16:32:16 has there been any thought/discussion on a GC that multithreads while the world is stopped?
16:33:27 luis has some code, but iirc it didn't really win much
16:33:48 on how many cores?
16:33:56 ask luis
16:34:08 luis: ? ;)
16:34:36 making the gc not stop the world -- or do so only briefly -- is more interesting, and will probably happen at some point
16:34:47 right
16:35:09 though we're on 24 cores; any time spent in a single-core environment is 24x more expensive
16:35:27 but /when/ is unknown. depends on when someone has 2 out of 3 of urge/time/funding
16:35:48 yeah, a fully async GC brings up many interesting weirdnesses
16:35:52 Phoodus: have you tried stack allocating as much as possible?
16:36:01 that can make a huge difference
16:36:20 another important consideration is avoiding dirty boxed objects in old generations
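A small illustration of the stack-allocation suggestion: with a DYNAMIC-EXTENT declaration, SBCL stack-allocates the results of known constructors such as LIST and MAKE-ARRAY, so these temporaries never reach the heap or the GC. The function names here are illustrative.

  ;; LIST results declared DYNAMIC-EXTENT are stack-allocated by SBCL,
  ;; so the temporary never contributes to GC pressure.
  (defun weighted-sum (a b c)
    (let ((coeffs (list a b c)))
      (declare (dynamic-extent coeffs))
      (reduce #'+ coeffs)))

  ;; The same applies to MAKE-ARRAY: the 4 KB buffer lives on the stack.
  (defun checksum (stream)
    (let ((buffer (make-array 4096 :element-type '(unsigned-byte 8))))
      (declare (dynamic-extent buffer))
      (loop for n = (read-sequence buffer stream)
            until (zerop n)
            sum (reduce #'+ buffer :end n))))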
16:36:31 we're using an interruptible, heavily continuation-passing-style model
16:36:47 so we're mostly heap
16:37:18 I do want to focus on creating less garbage (we're generating about 1GB/sec on the smaller dev boxes), but having more GC options is always a good thing
16:37:33 especially for short-term issues, before large structural code changes can be done
16:38:02 since we're mostly functional, we're not affecting very many old generation objects
16:38:10 ok, that's good
16:38:13 most of our functional structures are cons-based, too
16:38:52 conses can require a lot of scavenging, though
16:39:32 yeah, I know. But since we have a ton of shared-tail lists, it's the most applicable structure
16:39:38 if you can replace a million conses with two specialized vectors of ten million elements, it's probably a win
16:40:04 can't argue with tail sharing, really :)
16:40:26 yep, I'm looking at the potential for a sort of copy-on-read destructive array system
16:40:39 but I think there are more algorithmic gains we can get first
16:40:55 Phoodus: where/what are you working on, if i may ask?
16:41:06 knowledge base stuff
16:41:23 at?
16:41:24 symbolic reasoning
16:41:33 www.grindwork.com
16:42:52 we're moving faster than we can keep our site updated, though :-P
16:43:12 hah
16:43:55 i'll add you to my list of "interested parties" for SMP-friendlier GC
16:44:10 yep
16:44:31 anything else that's a hot ticket for you?
16:44:59 probably nothing in SBCL itself
16:45:17 though I'm getting killed on (setf gethash) of #'equal hashtables
16:45:26 most of which are only getting a single element
16:45:46 but I think I'm going to hack around that by storing a list until it's >N elements, and only then making a hashtable
16:45:49 yeah. we should add a tweak to keep tiny tables in a list
16:47:07 one of our cases creates 50k hashtables, puts a single element in them, and GCs :)
16:47:25 so avoiding the table creation altogether is probably more beneficial for our specific case
16:47:56 ouch
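A sketch of the workaround Phoodus describes: keep an alist while the table is tiny and only build a real EQUAL hash table past a threshold. All names here (TINY-TABLE, TT-PUT, TT-GET, +PROMOTE-AT+) are illustrative, not anything in SBCL.

  (defconstant +promote-at+ 8)

  (defstruct tiny-table
    (alist '())   ; used while the table is small
    (hash nil))   ; real EQUAL hash table once promoted

  (defun tt-put (table key value)
    (let ((hash (tiny-table-hash table)))
      (cond (hash
             (setf (gethash key hash) value))
            ((>= (length (tiny-table-alist table)) +promote-at+)
             ;; Promote: copy the alist into a fresh EQUAL hash table.
             (let ((new (make-hash-table :test #'equal)))
               (loop for (k . v) in (tiny-table-alist table)
                     do (setf (gethash k new) v))
               (setf (gethash key new) value
                     (tiny-table-hash table) new
                     (tiny-table-alist table) nil)))
            (t
             (let ((cell (assoc key (tiny-table-alist table) :test #'equal)))
               (if cell
                   (setf (cdr cell) value)
                   (push (cons key value) (tiny-table-alist table))))))
      value))

  (defun tt-get (table key &optional default)
    (let ((hash (tiny-table-hash table)))
      (if hash
          (gethash key hash default)
          (let ((cell (assoc key (tiny-table-alist table) :test #'equal)))
            (if cell
                (values (cdr cell) t)
                (values default nil))))))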
16:48:04 re bytes-consed-between-gcs, I've pushed that to 12 GB without issue via sb-alien
16:48:27 nice, still waiting on a remote build here
16:48:34 no need for a rebuild.
16:48:46 and re gc, if I didn't have a diploma to get, I'd try a hack that stops the world only to fork.
16:49:37 that'd get you concurrent (mark/sweep) GC for almost free.
16:49:38 I've read up on a lot of GC tech, but I'm not familiar enough with working at the MMU level to effectively work in that realm
16:50:22 MMU tricks are surprisingly expensive.
16:50:35 the latency of GC actually isn't a problem; stopping all cores but 1 kills our computational throughput
16:51:01 there's also the option of working with processes
16:51:01 I don't mind if there are big heavy GC steps mixed in, but they should use all the cores ;)
16:51:34 parallelising a GC is non-trivial; depending on the shape of the heap, it might all be very fine-grained parallelism or almost completely serial.
16:53:12 what's your workload?
16:53:31 knowledge base, symbolic reasoning
16:53:48 in terms of gc pressure and parallelism?
16:54:20 it collects about 1GB/sec on our 4-core workstations
16:54:44 for instance, if the shared data is mostly read-only and built ahead of time, you can do stuff like cons the shared data up, make sure it's tenured, tell the GC to assume the oldest generation is always alive, and fork.
16:54:49 pkhuong: to fork?
16:55:12 you mean run GC in a separate process and communicate the results back?
16:55:13 it's mostly read-only, runtime created, short-lived
16:55:16 nikodemus_: yeah.
16:55:26 Phoodus: and sharing between threads?
16:56:12 it freely passes around the intermediate data between threads
16:56:37 and we are converging on a model where we're going to have multiple threads thinking over the same data instead of partitioning
16:56:39 how about extent? could you assign objects to an allocation pool at allocation, and know when it's safe to drop each allocation pool from the heap?
16:56:53 Phoodus: but the shared data is mostly read-only?
16:56:57 yes
16:57:22 we stuck closely to a functional programming model for easier threading
16:57:40 there might be usable workarounds
16:58:26 nikodemus_: right. Stop the world, perform some small stuff (e.g. moving GC for the nursery), and then fork.
16:58:50 the forked process gets a consistent view of the heap thanks to the OS's native write barrier.
16:59:28 we don't assume a small nursery ;)
16:59:29 (much, much faster than going through a segv handler)
16:59:51 but if GC handles threading better, then our nursery can return to more sane sizes
17:05:09 nikodemus_: also, if we use some OS-specific stuff, we can track writes without the software write barrier.
17:05:10 pkhuong: nursery collections can get pretty expensive too when there is tenured dirt
17:05:44 pkhuong: you mean something other than page protection?
17:05:50 yes.
17:06:07 what's this magic? :)
17:06:09 solaris gives us atomic dirty bits from the page table, which is perfect for GCs.
17:06:14 ooh
17:06:23 does linux have anything?
17:06:31 linux gives us access to the page table mapping from address to some sort of unique HW page id.
17:06:51 (though a software barrier has the advantage of smaller granularity)
17:07:08 oh, real software instead of mmu.
17:07:25 so, to detect writes, we'd have to fork *before* the writes and compare page table mappings
17:07:53 *blink* i see you've been thinking about this :)
17:07:57 yes.
17:08:11 fbsd has something like linux, iirc.
17:08:20 OS X has something close to it, but the granularity is coarser.
17:08:31 I think the mach interface gives more info, though.
17:08:35 that is pretty neat
17:08:53 boehm does the solaris thing, but none of the rest
17:09:09 though i haven't yet found a test case where the mprotect/sigsegv showed up to any notable degree
17:09:16 well, not found. seen.
17:09:25 it does, but indirectly.
17:09:46 e.g. our insane page map that takes seconds to fork.
17:10:03 but i've seen many that make us pay through the nose for the large granularity
17:10:42 even cl-bench has those -- the array tests jumped right up when the page size was increased, because GC suddenly needed to scavenge a lot more
17:11:42 if we only had to parallelise nursery GCs, it wouldn't be too bad... the rest could be concurrent ;)
17:13:09 there's a paper i keep meaning to read, which extends boehm's mostly-parallel mark/sweep collector with a copying nursery
17:14:50 hm, don't have it on this laptop
17:16:09 this one, i think: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8915
17:22:20 Phoodus: my concurrent GC didn't yield any speedup because it's naively coded :)
17:26:23 smarter folks got the same algorithm to perform
17:26:53 in sbcl?
17:27:15 not in SBCL, no
17:29:13 I don't know how much the GC would change in order to do its world-stopped work multicore
17:29:42 for us, I think it would be a nice option in there, even if it were a stopgap to a (mostly) purely async GC in the future
17:33:25 luis: did you do a separate GC algorithm, or just try to parallelize what already existed?
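A toy demonstration of the copy-on-write snapshot property that pkhuong's fork trick relies on -- not a GC, just evidence that a forked child's view of the heap stays frozen while the parent mutates. POSIX-only, and it assumes a single-threaded image, since fork and threads mix badly; a real design would trace the heap in the child and report reachability back to the parent.

  (require :sb-posix)

  (defvar *cell* (list 0))

  (let ((pid (sb-posix:fork)))
    (if (zerop pid)
        ;; Child: even after sleeping, it sees the pre-fork value, because
        ;; the OS handed it a copy-on-write snapshot of the parent's pages.
        (progn
          (sleep 1)
          (format t "child sees ~a~%" (car *cell*))
          (finish-output)
          (sb-ext:exit :abort t))
        ;; Parent: mutates immediately; the child's snapshot is unaffected.
        (progn
          (setf (car *cell*) 42)
          (sb-posix:waitpid pid 0))))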
21:12:53 finally. Looks like this third conset implementation builds
21:18:23 is it a good idea to call sockint::close on socket fds?
21:18:55 because just calling free leaves file descriptors behind
23:19:10 except that random changes to the source make the build fail.
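The sockint::close question goes unanswered in the log. For reference, sockint is sb-bsd-sockets' internal alien package; the exported interface is SB-BSD-SOCKETS:SOCKET-CLOSE, which closes the underlying fd rather than merely releasing the lisp-side object. A minimal sketch (the address and port are arbitrary):

  (require :sb-bsd-sockets)

  (let ((socket (make-instance 'sb-bsd-sockets:inet-socket
                               :type :stream :protocol :tcp)))
    (unwind-protect
         (sb-bsd-sockets:socket-connect socket #(127 0 0 1) 80)
      ;; Close the descriptor explicitly; dropping the object on the
      ;; floor leaks the fd until finalization, if that ever runs.
      (sb-bsd-sockets:socket-close socket)))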