01:06:52 not to sound like an I-told-you-so, but github has a pretty nice file release API ((:
01:07:37 anything is better than sourceforge
01:08:34 there's that too
01:20:12 time to move from sourceforge to github or something?
05:04:14 Is there a mechanism to tell if overflow trapping is set to on or off in SBCL?
05:10:29 there is, IIRC; I don't remember offhand what it is, but several of the float tests use it
05:10:47 look in tests/*float*.lisp in the sbcl source
05:12:57 kk. Thank you
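(For reference, a minimal sketch of one way to check this from the REPL, assuming the internal sb-int floating-point-modes interface that those float tests use; being an internal, its exact keywords can vary by platform and version.)

    ;; Inspect the current floating-point modes; the :TRAPS entry lists enabled traps.
    (getf (sb-int:get-floating-point-modes) :traps)
    ;; => (:OVERFLOW :INVALID :DIVIDE-BY-ZERO) on a typical x86-64/Linux build

    ;; A hypothetical helper that overflows at run time (no declarations, so the
    ;; compiler cannot constant-fold it away):
    (defun unsafe-square (x) (* x x))

    (unsafe-square most-positive-double-float)
    ;; => signals FLOATING-POINT-OVERFLOW while the :OVERFLOW trap is enabled

    (sb-int:with-float-traps-masked (:overflow)
      (unsafe-square most-positive-double-float))
    ;; => double-float positive infinity while the trap is masked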
16:24:14 is there an lp bug # for conservative GC?
17:37:33 what, the fact that our gc is conservative at all?
17:37:43 there probably ought to be
17:41:24 is it conservative beyond being stack-conservative?
17:52:56 Fare: nope, unlike some systems language I won't name ;)
17:58:13 is the recent bug with large flonum arrays an example of stack conservativeness gone wrong?
17:58:40 (locally (declare (optimize safety)) (defclass ...)) will still make SBCL type-check values passed as initargs to make-instance? (Hint hint: https://bugs.launchpad.net/sbcl/+bug/485718)
17:58:54 I also remember weird conservativeness effects happening with threads -- what happens to dead thread stacks?
17:59:18 "recent", but yeah, I'm fairly confident it is, without having reproduced it yet.
18:00:03 dead threads are cleaned up and ignored. There might be some delay on a few platforms; I never looked at the interaction.
18:00:18 FWIW, I routinely spawn threads to avoid stack conservativeness issues.
18:04:10 a top-level (declaim (optimize (safety 3))) seems to make SBCL type-check slots' types during initialization, but not at (setf slot-value) time
18:14:41 tcr: and accessors?
19:32:13 if it's stack-conservativeness, it's fairly sucky to be able to allocate in exactly the right place to bogusly retain the vectors, even once you've returned to the repl
19:51:35 Krystof: right. I thought about scrubbing the stack at each iteration of the REPL... that's an ugly band-aid, though.
20:09:48 REPL and GC always remind me of * ** *** / // ///
20:30:51 pkhuong: huh, do we not?
20:31:08 should I test if doing that makes the problem disappear?
20:32:41 is the hypothesis that there's something in the stack frames above (/below) the frame for eval-in-repl (or whatever) that doesn't get overwritten when making new stack frames but is nevertheless considered live?
20:35:53 running (sb-sys:scrub-control-stack) directly at the repl doesn't make the 2 enormovectors gcable by subsequent (gc :full t) calls
20:39:17 and yeah, something like that.
20:44:27 hm. looks like I picked the wrong year to unlearn x86 assembly
20:46:55 Krystof: well, what happens if you run the same thing in a thread, let the thread die and be GCed, and then look at ROOM?
20:48:26 hold on, I'm in assembly and I don't understand what I'm seeing
20:51:48 ok, I'm confused. Can I have a sanity check, please?
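(A throwaway test case for the (safety 3) slot type-checking observation at 18:04:10 and the accessor question at 18:14:41. The names (point, slot-checked-p) are hypothetical, and which of the three forms actually signals depends on the SBCL version and on bug 485718; this only spells out what is being asked.)

    (declaim (optimize (safety 3)))

    (defclass point ()
      ((x :initarg :x :accessor point-x :type double-float)))

    ;; Hypothetical helper: did evaluating THUNK signal anything?
    (defun slot-checked-p (thunk)
      (handler-case (progn (funcall thunk) :not-checked)
        (error () :checked)))

    (list
     ;; initarg -> slot initialization (the case bug 485718 is about):
     (slot-checked-p (lambda () (make-instance 'point :x "oops")))
     ;; (setf slot-value), reportedly not checked:
     (slot-checked-p (lambda ()
                       (setf (slot-value (make-instance 'point :x 1d0) 'x) "oops")))
     ;; and the writer accessor, the open question at 18:14:41:
     (slot-checked-p (lambda ()
                       (setf (point-x (make-instance 'point :x 1d0)) "oops"))))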
20:51:55 We hit arch_scrub_control_stack
20:52:11 (gdb) print/x $esp-4
20:52:11 $1 = 0xb7994a8c
20:52:25 this is the first address that we should be scrubbing, right? And then we scrub downwards from there, right?
20:52:31 (until we hit guard pages)
20:53:05 there was a lot of strange "logic", but I think nyef yanked it away.
20:53:07 let me see.
20:54:49 oh, wait, I'm unconfused. (Misread 0xb779... as 0xb799...)
20:59:38 doing the thread test needs to wait for me to rebuild this x86 sbcl with threads :-/
21:00:04 uh? aren't they enabled by default on linux?
21:00:08 yes
21:00:15 but this is a 5-year-old laptop
21:00:44 it has done lots of sbcl development in the past, some of which was when threads were less desirable
21:00:49 ah.
21:00:52 some moron left a customize-target-features.lisp in place
21:02:24 so, a-s-c-s bails out if it hits a zeroed-out 4K page, but I don't think that's an issue (:
21:17:45 with a throwaway thread, the 2 enormovectors are still there
21:19:01 I did (sb-thread:make-thread (lambda () (main 10000))), waited for the heap exhaustion, (sb-thread:release-foreground), the abort-thread restart, three lots of (gc :full t), then (room) [in a completely clean sbcl, obviously]
21:20:27 Happy New Year.
21:20:30 Did we just release an SBCL that effectively comes without sprof on Darwin? :-( "mea culpa"
21:21:10 yeah, probably. Mea culpa too
21:21:17 let's wait and see how many complaints we get
21:25:28 (I mean, let's fix it too! :)
21:32:23 Krystof: really. hmm, this is more complicated than expected :\
21:32:52 ah wait, that's once heap exhaustion is involved.
21:33:32 how about (sb-thread:join-thread (sb-thread:make-thread [just run the routine a couple of times]))?
21:34:28 what's the best way to submit a trivial patch to sbcl?
21:34:46 Fare: email a git patch to the list?
21:36:06 pkhuong: how do you mean? (I should say I haven't even read the code that's running; it might be formatting my hard drive for all I know)
21:37:10 IIUC, it's calling the cons-2-vectors/perform-FP-arithmetic function very many times. Things only fail sometimes.
21:37:29 sadly the argument is related both to the sizes of the arrays and to the number of iterations
21:37:42 It's not like *all* the vectors are leaking or something.
21:38:40 I wouldn't be surprised to find that it was in fact heap-exhaustion-related
21:38:40 hm, having now read it: it allocates twice per call to nik, then fills the arrays
21:39:59 if I interrupt a call to (main 1000) after a while of scrolling, there are no leaks
21:40:39 so, I'm thinking that it's conservativeness + GCs at unlucky moments that lead to heap exhaustion
21:40:43 and then, who knows?
21:41:10 I can believe that; for me the main problem is that after that there's what amounts to a leak
21:41:51 I'd like to be more confident in that hypothesis before diving in there.
21:42:22 that's fair enough
21:42:46 I wonder what map-referencing-objects would tell me if I could get it right
21:43:23 probably.
21:43:44 Map allocated objects to find the two zombie vectors, and then map-r-o?
21:44:29 another possibility is that it's /room/ that is lying
21:44:35 very much so.
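(Spelling out the join-thread/make-thread experiment proposed at 21:33:32, as a sketch only; MAIN stands for the reproducer from the bug report, which is not shown in this log.)

    ;; Run the allocation-heavy reproducer on a short-lived thread, so any
    ;; conservative references to the big vectors live on that thread's stack
    ;; rather than on the REPL thread's stack.
    (sb-thread:join-thread
     (sb-thread:make-thread (lambda () (main 10000)) :name "throwaway"))

    ;; After the thread has exited, do the usual dance and see whether the
    ;; enormovectors are still retained:
    (dotimes (i 3) (sb-ext:gc :full t))
    (room)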
21:45:36 *sigh* list-allocated-objects dumps me into ldb
21:45:42 so, re conservativeness: replacing the call to nik with (sb-thread:join-thread (sb-thread:make-thread #'nik :arguments (list n 1000))) might be a useful diff.
21:48:08 ok, using print_generation_stats suggests that room is not lying to me
21:49:23 where n is something that doesn't trigger heap exhaustion?
21:49:55 but I don't see a leak when I don't trigger heap exhaustion anyway
21:50:15 where N is something that does
21:50:31 there shouldn't be any exhaustion with the thread.
21:50:56 oh, because it's not looping?
21:50:58 sorry
21:51:45 no, leave the loop as is. I'm thinking the exhaustion is caused by conservativeness. Consing and frobbing the humongovectors in a temporary thread would help.
21:52:04 with you
21:52:19 ROOM is another suspect.
21:52:46 I'm less suspicious of room now that I've seen print_generation_stats telling me about the same amount of memory
21:53:39 but just executing ROOM itself could spritz the stack with strange pointer-like objects.
21:54:19 oh, during the run? True
21:54:27 right.
21:55:17 today's exciting news is that I get heap exhaustion with the join-thread/make-thread variant
21:55:44 good!
21:56:49 when I terminate the thread which is stuck in the debugger, and do the same (gc :full t) dance, I still retain two enormovectors
21:57:22 5: 0 0 0 0 18 2 0 19532 19539 80072984 12008 85441693 10 7 0.0000
21:57:56 [oh, all this mental energy on a gc problem, when I could be thinking about the horrific modular arithmetic problem :-(]
21:59:36 taking out the calls to room
22:00:59 still heap exhaustion
22:02:21 how much free space is there, before calling main?
22:02:38 http://paste.lisp.org/display/134361
22:04:03 and before?
22:04:17 sec
22:05:07 Total bytes allocated = 31608056
22:05:08 Dynamic-space-size bytes = 536870912
22:05:55 ah, only 512MB. that makes more sense.
22:09:08 ok. So, just bad timing? There's a bunch of vectors left around (no GC yet), it tries to allocate one 80MB vector, fails.
22:10:25 I can't remember -- don't we try a gc if there's no room for a big allocation?
22:11:06 but that is plausible, yes
22:11:31 sticking explicit gc calls into the loop seems to keep allocation stable
22:11:33 that's what I thought, but gc_find_freeish_pages doesn't.
22:14:15 stupid-gc-mode: trigger a GC after each allocation that leaves at least half the heap allocated.
22:17:09 plus, we actually trigger a GC after the allocation sequence, I think (sure, we're in p-a, but would any VOP fail if we GCed immediately?)
22:20:39 the version with explicit calls to gc runs to completion but still leaks an 80MB enormovector
22:21:26 so, current hypothesis: there's something on the stack under (over) the repl frame which happens to have an address that looks like the address of that enormovector
22:21:32 and there's nothing much we can do about it
22:22:32 I wonder if I can find the address of that enormovector and also print off the stuff on the stack
22:22:38 see if we can confirm this hypothesis
22:23:08 easiest would be to printf in the stack-scanning routine.
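(On the "map allocated objects to find the two zombie vectors" idea at 21:43:44: a sketch using the unexported SB-VM walker that ROOM is built on. These are internals, so the callback protocol assumed here -- object, widetag, size in bytes -- and the tendency to land in ldb, as at 21:45:36, may differ between versions; treat it as a starting point rather than a recipe.)

    ;; Walk dynamic space and report any double-float vector larger than ~10MB,
    ;; printing its address so it can later be matched against words seen on
    ;; the control stack.
    (sb-vm::map-allocated-objects
     (lambda (obj widetag size)
       (declare (ignore widetag))
       (when (and (typep obj '(simple-array double-float (*)))
                  (> size (* 10 1024 1024)))
         (format t "~&~D bytes at #x~X~%"
                 size (sb-kernel:get-lisp-obj-address obj))))
     :dynamic)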
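(And on the 512MB figure at 22:05:08 and the explicit-GC workaround at 22:11:31/22:20:39, a hypothetical sketch: NIK and MAIN stand in for the unshown reproducer from the bug report, and sb-ext:dynamic-space-size is assumed to be available in this build.)

    ;; How much dynamic space the running image was started with, in bytes:
    (sb-ext:dynamic-space-size)   ; => 536870912 in the session above, i.e. 512MB
    ;; A fresh image can be given more headroom from the command line, e.g.:
    ;;   sbcl --dynamic-space-size 2048     (megabytes)

    ;; Stand-in for the reproducer's NIK: cons two big double-float vectors,
    ;; fill them, do some arithmetic over them, and drop them.
    (defun nik (n m)
      (declare (ignore m))
      (let ((a (make-array n :element-type 'double-float :initial-element 1d0))
            (b (make-array n :element-type 'double-float :initial-element 2d0)))
        (loop for i below n sum (* (aref a i) (aref b i)))))

    ;; The "explicit gc calls in the loop" variant: collect after each iteration
    ;; so conservatively-retained garbage cannot pile up into heap exhaustion.
    (defun main (iterations n)
      (dotimes (i iterations)
        (nik n 1000)
        (sb-ext:gc :full t)))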
22:27:47 I will have to delay that pleasure
22:27:51 it is definitely sleepy time
22:28:01 and I'm crashing