01:02:55 homie`` [~levgue@xdsl-78-35-162-215.netcologne.de] has joined #sbcl 01:02:55 -!- homie` [~levgue@xdsl-78-35-159-74.netcologne.de] has quit [Read error: Operation timed out] 01:36:06 -!- kanru`` [~user@61-228-145-2.dynamic.hinet.net] has quit [Ping timeout: 252 seconds] 02:00:00 echo-area [~user@182.92.247.2] has joined #sbcl 02:46:09 -!- specbot [~specbot@pppoe.178-66-84-191.dynamic.avangarddsl.ru] has quit [Disconnected by services] 02:46:13 specbot [~specbot@pppoe.178-66-47-87.dynamic.avangarddsl.ru] has joined #sbcl 02:49:44 -!- stassats` [~stassats@wikipedia/stassats] has quit [Ping timeout: 245 seconds] 03:03:17 -!- CampinSam [~Sam@24-176-98-217.dhcp.jcsn.tn.charter.com] has quit [Quit: leaving] 05:07:23 -!- echo-area [~user@182.92.247.2] has quit [Read error: Connection reset by peer] 05:10:05 echo-area [~user@182.92.247.2] has joined #sbcl 05:22:31 angavrilov [~angavrilo@217.71.227.190] has joined #sbcl 06:10:23 just my numerical simulation 2c on the random issue: afaik, we just generate a random in [1, 2) (with the obvious bit twiddling hack) and subtract. 06:11:11 it does mean that some (denorms) will never be generated, but that's not necessarily a bad thing, given how slow denorm arithmetic can be. 06:11:21 plus, what happens when someone is in flush to zero mode? 06:14:55 argh, no minion. memo for stassats: calling C code that might use xmm registers from the GC is a Bad Idea, until we fix our trampoline code to save/restore them (which I don't think we have, yet). fast_bzero saves the one sse register it uses. 06:32:08 stassats [~stassats@wikipedia/stassats] has joined #sbcl 06:35:15 borkman [~user@S0106001111de1fc8.cg.shawcable.net] has joined #sbcl 06:47:20 oh well. Nobody's going to use our prng to drive serious numerical simulations. 06:57:26 did anyone report strange bugs when write_generation_stats is used? 
07:00:54 -!- cmm [~cmm@bzq-79-182-209-72.red.bezeqint.net] has quit [Ping timeout: 265 seconds] 07:27:08 -!- Kryztof [~user@78.3.117.144] has quit [Ping timeout: 245 seconds] 07:53:46 -!- huangjs [~huangjs@190.8.100.83] has quit [Ping timeout: 252 seconds] 08:09:08 -!- ASau` [~user@95-25-227-191.broadband.corbina.ru] has quit [Remote host closed the connection] 08:11:11 ASau` [~user@95-25-227-191.broadband.corbina.ru] has joined #sbcl 08:24:34 -!- antoszka_ is now known as antoszka 08:26:41 http://www.cs.purdue.edu/homes/grr/snapshot-gc.pdf <- another way to ensure short pauses: no generation or such heuristics, just fork ;) 09:23:16 nikodemus [~nikodemus@cs78186070.pp.htv.fi] has joined #sbcl 09:23:16 -!- ChanServ has set mode +o nikodemus 09:24:15 -!- echo-area [~user@182.92.247.2] has quit [Remote host closed the connection] 09:24:16 afternoon 09:30:26 so, I've looked at prxq's random float patch. 09:38:11 I have mixed feelings about the iterative use of random numbers. It's not what we're told to do for simulations, but I don't think anyone would use the system prng for that anyway. 09:39:37 for the rest, we can optimise the division out, which may help recover the lost ~10% in generation speed. 09:39:59 where can I read up on how people running such simulations do it instead? 09:44:18 http://www.iro.umontreal.ca/~lecuyer/papers.html <- this prof tells us that we should try and use one random value when generating non-U(0,1) variates, in order to exploit variance reduction techniques when comparing scenarios. 09:46:08 and the strong prng he advocates when speed isn't too much of an issue basically generates an integer in [0, 2^53) and scales it back to [0, 1) with a multiplication. 09:52:43 pkhuong: well, in that case you could simply use the integer as mantissa and used a fixed exponent, no? 09:52:56 flip214: not quite, but that's the idea. 
09:53:42 well, IIRC the x87 has a normalize instruction, so that might simply work 09:54:19 that's neither a bottleneck nor complicated. 09:55:40 I just don't know that it's an issue that not all the floats in a given range could be generated. 10:01:37 lichtblau: lecuyer's course notes are only available in french, and it seems like a lot of its content is folk knowledge. 10:03:10 -!- antoszka [~antoszka@cl-113.waw-01.pl.sixxs.net] has quit [Changing host] 10:03:11 antoszka [~antoszka@unaffiliated/antoszka] has joined #sbcl 10:39:16 it may be /wrong/ but, i suspect running simulations using system prng isn't all that rare -- maybe people doing "serious" simulations know better, but not everyone who does simulation-like stuff does 10:40:11 (not saying we should necessarily cater for them, just saying that i would expect them to be out there) 10:44:16 pkhuong: so, you're going to (or already are in) .fr these days? 10:44:26 I'm already in Lille (north of .fr) 10:44:34 hence the funny hours. 10:45:05 pkhuong: what's funny about them? 10:45:21 pretty normal for a few million people, I'd guess ... 10:45:35 flip214: funny for EST. 10:46:06 well, if you've been an early bird, you could just migrate to long-sleeping ... no timezone change required ;) 10:47:01 cool 10:47:04 Will pkhuong.eu have more or less time for SBCL than before? 10:47:23 probably less until mid-may. 10:48:31 My advisor thinks I can pretty much finish my thesis in the next 6 months, and he's suggested restructuring it to more easily submit it to competitions, so there's a teensy bit of pressure. 10:52:17 how long are you staying in france/europe? 10:52:27 6 months total, until ~mid october. 10:53:04 I should arrange a couple meetings... at least with the cool teclo people in zurich ;) 10:53:30 btw, what do you think of not doing any generationality, and just going for a straight non-moving mark/sweep, with fork(2) trickery to ensure short pauses? In many ways, we'd have a much simpler runtime. 
10:57:28 if forking becomes a core part of the runtime, how do you deal with windows? 10:59:20 and, first, forking should become cheaper 10:59:48 with a large heap the fork() itself takes more than a full GC 11:00:14 fe[nl]ix: how much of that is due to our funky memory map? 11:00:40 most, I think 11:01:19 it was a quick benchmark for iolib.os:create-process 11:01:43 on ccl it was two orders of magnitude faster 11:09:31 jsnell: worst case, it can be emulated with memory protection :\ 11:10:18 well, for me sbcl seems to use hugepages ... that would help with forking, I believe. don't know about GC, though. 11:11:34 pkhuong: i would be interested in investigating the metronome and azul's pauseless algorithm 11:11:51 the latter needs a custom kernel or a fair amount of runtime support, though 11:12:00 hugepages are not going to help when the memory map is incredibly fragmented due to the protection flags 11:12:22 now, if we also had card marking instead of vm tricks for the write barrier... 11:14:11 jsnell: that's why I wanted to try fork and no generation. 11:14:23 no need for a write barrier when posix provides it. 11:19:44 pkhuong: wait, how does that work? fork and have the child GC and report back to parent? 11:21:17 nikodemus: I'd guess the child has to report to the parent, so that the PID stays constant ... 11:21:56 and everything that was unreachable at point N must be unreachable at N+1, as no new references to "old", unreachable data can be taken 11:22:48 nikodemus: yup. Fork gives us a consistent view of the heap without any software write barrier. So run a regular non-moving mark/sweep, update the parent's free list, and we're done. 11:23:21 pkhuong: is it possible that a reference might exist only in some thread's registers? 11:23:55 flip214: you have to scan for roots in the parent, as usual. 11:24:45 I'm thinking about the CPU registers ... 
they'd be invisible in the forked child 11:24:52 or even non-existent 11:25:00 which is why you scan for roots in the parent process. 11:26:44 I don't seem to understand what you tell me ... 11:27:05 is it possible that a (non-root) reference is _only_ in a CPU register in some thread when the fork() gets called? 11:27:27 flip214: a reference in a register is a root. 11:27:38 cmm [~cmm@bzq-79-182-209-72.red.bezeqint.net] has joined #sbcl 11:28:28 ah, so all registers are dumped into some memory location, and are used for traversing in the child. understood, thanks. 11:28:52 but that means a more or less "atomic" put-all-registers-and-fork ... 11:34:47 pkhuong: have you read about azul's algorithm? 11:35:22 nikodemus: only skimmed some stuff a couple years ago 11:38:12 this is quicker to read than the paper, but less detailed also: http://www.artima.com/lejava/articles/azul_pauseless_gc.html 11:39:23 re windows, I'm probably wrong, but CreateFileMapping/MapViewOfFile looks like it's just enough for us. 11:50:46 well, "Windows NT/2000 Native Api Reference" includes a re-implementation of fork on p. 161. Most of it is book-keeping that we wouldn't need, so it doesn't seem too bad. 11:52:16 nikodemus: *read* barrier? 11:52:21 yes 11:52:36 ouch. 11:52:45 but the gains look pretty impressive 11:53:22 and the overall idea is a lot easier to understand than metronome, which gave me a headache 11:53:26 sure, if you have dozens of threads. 11:53:54 well, with "modern" CPUs there'll be more and more threads... 11:54:10 that requires modern programs as well. 11:54:36 i'm slowly becoming convinced that not only is there a slide into more and more threads, but more and more interesting applications are at least somewhat realtimish 11:54:39 only parts that can be easily parallelized by the compiler/interpreter ... 11:56:01 flip214: none. 11:56:35 pkhuong: not necessarily true. look at R... average(), sum(), etc. can be easily done on N cpus. 
11:56:43 http://paste.lisp.org/display/129256 11:57:08 flip214: I'm pretty sure I'll never agree to any kind of transparent parallelism in SBCL. 11:57:19 flip214: there is very little in CL that can be given a treatment like that 11:58:02 nikodemus: compilation in asdf, in quicklisp, .... as long as the dependencies are known it's a walk in the park 11:58:20 flip214: really? I never realised. 11:58:50 don't try sarcasm ;) 11:59:13 "as long as dependencies are known" ; i like the way you didn't say "just" 11:59:14 Well, I don't know about asdf, but I believe that quicklisp has good dependency tracking 11:59:19 it's not sarcasm. 11:59:42 it ain't? sorry. 11:59:44 Fare could tell you a lot about attempting multi-*process* compilation in poiu. 12:00:37 I grant you that there might be a few real-world "chances" to observe ;) 12:02:05 nikodemus: the downside is that it requires a recent Intel CPU (Nehalem+) and kernel support 12:02:58 and a kernel patch (: 12:03:02 exactly 12:03:23 a fairly invasive patch to the VM layer 12:03:25 aim at the future! 12:03:26 iirc, the patch isn't anywhere close to being pulled in the mainline, and it doesn't seem easy to modularize. 12:04:44 the problem is that Linux devs are dogmatically opposed to any support for GCs and dynamic languages 12:04:54 so there's little chance to see it merged 12:05:16 at least special-purpose support. 12:05:49 yes but certain features would find little use elsewhere 12:06:00 perhaps Oracle 12:06:35 right. 12:07:57 maybe if the Oracle people push for those features, using them in the JVM and the DB 12:08:13 what's the performance implication if that is run as user-mode-linux? 12:08:25 eek, putting hope into Oracle makes me feel queasy 12:09:01 flip214: ... the goal is to speed up VM management. Going through twice the context switch isn't going to achieve that. 12:09:21 also, not an option for SBCL. 12:09:49 doesn't dalvik use gc, too? perhaps google is better to hope for ... 
12:10:12 good point 12:10:37 azul's concurrent pauseless GC on android? 12:10:57 I don't think smartphones run 128-thread programs just yet. 12:10:58 yes, why not? 12:12:26 pauseless is the thing. many have complained that dalvik is unusable for games, for example 12:17:50 and even the ARM processors (that soon will be low-end) are multi-core now ... 12:21:07 iirc dalvik's game problems have a lot to do with not having a low-latency sound API available 12:21:39 that too, but the main problem is with rendering 12:21:58 there seem to be quite a few "main problems" ;) 12:24:26 well, for uid 0 it might be possible to remap the page tables into the process' virtual memory space ... then you don't need any syscalls to change mapping ;) 12:24:41 once they're mapped you could do setuid(), too 12:25:07 flip214: how do you think remapping happens? 12:25:32 don't know about azul, but normally you'd have to do munmap() and mmap() ... 12:27:13 fe[nl]ix: interestingly, I think linux supports some of the stuff azul needs... for file-backed mappings. 12:27:40 It has a call to re-arrange (and duplicate/eliminate) pages in bulk. 12:28:03 hmmm 12:28:22 map the entire heap on the disk ?? 12:28:32 interesting 12:28:36 i've thought about it (: 12:29:31 it would have interesting security and performance implications though 12:29:40 dirty pages sooner or later get written to disk 12:29:50 open O_EXCL in a tmpfs-backed FS. 12:30:02 OMG 12:30:05 hahaha 12:30:30 that's quite a hack 12:31:56 I don't remember what it was for... I think it was to have locations that only the GC could write to. 13:09:51 take 2: http://paste.lisp.org/display/129256#1 13:10:44 oh yeah, re sb-ext:quit: would it be possible to make it work in situations it's not broken, and ERROR out otherwise? 13:11:19 pkhuong: i fear that some people are using it to abort threads 13:11:41 so error in that case? 13:12:16 how would that help? 
13:12:42 it still breaks someone's code 13:12:45 you have the deprecation, and it's mildly backward compatible 13:17:56 hlavaty [~user@91-65-218-223-dynip.superkabel.de] has joined #sbcl 13:19:18 saschakb [~saschakb@p4FEA03A8.dip0.t-ipconnect.de] has joined #sbcl 13:22:11 -!- nikodemus [~nikodemus@cs78186070.pp.htv.fi] has quit [Ping timeout: 260 seconds] 13:23:08 -!- saschakb [~saschakb@p4FEA03A8.dip0.t-ipconnect.de] has quit [Remote host closed the connection] 13:28:32 nikodemus [~nikodemus@188-67-13-181.bb.dnainternet.fi] has joined #sbcl 13:28:32 -!- ChanServ has set mode +o nikodemus 13:38:51 Oh, wow. I missed a discussion on GC, parallel compilation, and SB-EXT:QUIT ? 13:39:25 nyef: and fork on windows ;) 13:39:35 Right, but I don't use windows anymore. 13:40:10 I do use the GC, I do have explicit dependency information for my entire source base at a per-file level, and I use SB-EXT:QUIT. 13:40:48 ok. f it. I'm going to go give sw write barriers a fourth try. 13:41:59 (Though I must admit, I go to some trouble to only use SB-EXT:QUIT from the initial thread, and am working towards making it be the ONLY thread at that time, as part of SIGTERM handling for the application I'm working on.) 13:51:11 -!- dsp__ is now known as dsp_ 13:52:34 Hrm. Read barriers on commodity hardware? That could be... interesting. 13:53:49 First two things that come to mind, though, are "what about pinned objects" and "what about when we have conservative roots, like on x86 or x86-64". 14:00:46 that GC would need to be precise 14:01:25 That's what I thought. Doesn't address pinned objects, though. 14:01:28 leuler [~user@p54903AD6.dip.t-dialin.net] has joined #sbcl 14:01:50 what is the problem with those ? 14:02:27 homie``` [~levgue@xdsl-78-35-170-195.netcologne.de] has joined #sbcl 14:02:37 Mmm... I guess it's more a complication than an actual problem. Nevermind. 14:03:04 So, who's volunteering to make the x86-64 backend support precise GC? 
14:05:38 -!- homie`` [~levgue@xdsl-78-35-162-215.netcologne.de] has quit [Ping timeout: 260 seconds] 14:11:54 we missed a nifty optimisation for 63 bit fixnums: word=>fixnum can be a simple addition instead of a shift (: 14:12:25 Ahh. 14:13:01 Only works when n-fixnum-tag-bits is 1, though, right? 14:13:53 yup 14:15:09 I wonder why we thought that inlining logcount was a good idea. 14:15:28 (I'm going through all the VOP definitions, trawling for MOV that should be annotated) 14:16:38 annotated for what, exactly? 14:17:28 software write barrier stuff. 14:17:31 Ahh. 14:20:23 -!- nikodemus [~nikodemus@188-67-13-181.bb.dnainternet.fi] has quit [Quit: This computer has gone to sleep] 14:21:59 pkhuong: what happened with the 3rd try? 14:22:37 antgreen [~user@out-on-235.wireless.telus.com] has joined #sbcl 14:25:09 foom: there were some strange self-build errors. 14:25:30 I'm going for a debuggability-oriented approach now (: 14:30:30 huangjs [~huangjs@190.8.100.83] has joined #sbcl 14:30:39 -!- edgar-rft [~user@HSI-KBW-078-043-123-191.hsi4.kabel-badenwuerttemberg.de] has quit [Quit: ERC Version 5.3 (IRC client for Emacs)] 15:03:42 leuler: I figure you might have an idea on this question: do you think we should implement sub-byte array accessors as natural-sized accesses + shift/mask, or byte-sized + shift/mask (x86[-64])? 15:12:23 pkhuong: read and/or write? 15:12:34 read. 15:12:56 writes if we change reads, I guess. (no narrow to wide hazard) 15:13:12 attila_lendvai [~attila_le@92-249-130-108.digikabel.hu] has joined #sbcl 15:13:12 -!- attila_lendvai [~attila_le@92-249-130-108.digikabel.hu] has quit [Changing host] 15:13:13 attila_lendvai [~attila_le@unaffiliated/attila-lendvai/x-3126965] has joined #sbcl 15:14:54 Kryztof [~user@78-3-117-144.adsl.net.t-com.hr] has joined #sbcl 15:14:54 -!- ChanServ has set mode +o Kryztof 15:22:43 just read a bit in AMD's optimization guide. It seems byte accesses would be very problematic. 
Operating on one byte is OK, but if the next access is to another byte in the same word it hurts badly. 15:23:26 quote: "Avoid store-to-load forwarding pitfalls, such as ... loading data from anywhere in the same doubleword of memory other than the identical start addresses of the stores when using word or byte stores" 15:25:18 so it prefers aligned accesses to qwords or dwords. the latter is obviously necessary on x86. x86-64 could use both (on AMD), but should match load and store sizes, so decide on one once and for all. 15:27:24 so, we could just generate byte-sized accesses for both reads and writes... 15:28:24 but not if you want to access several array elements shortly after another, such as in a loop. 15:29:00 situated in different bytes near each other, that is. 15:29:06 ok, in which case we're better off with dwords 15:29:52 yes. and to avoid partial register writes this allows using mov instead of movzx which costs slightly less. 15:33:06 all right. Looks like I've annotated all the nasty-looking MOVes. Would it be cleaner to emit the write barrier in the mov emitter, or to annotate VOPs? 15:33:48 What I've got is the default MOV instruction checks for potentially nasty destinations (qword-sized memory operand with a base register that's not bp/sp/thread-base) 15:34:27 and two pseudo-instructions movu[nboxed] and movr[eference]... 15:35:06 those skip the checks, and now I'm wondering if I should emit the barrier as part of movr, or in VOPs that emit movr? 15:35:48 What about xchg, rep mov, which are both used afaik? 15:36:17 and push? 15:36:42 I believe xchg is used only reg/reg-wise. 15:36:47 leuler: those I'll arrange later. 15:37:05 there's exactly one pop to memory, fwiw (popw). 15:37:42 How would that look, to annotate a VOP? There may be several writes in one VOP. 15:38:23 each write is annotated. But I don't think I found any VOP that had more than one write to the dynamic heap, last time I went through this. 
15:39:57 leuler: an XCHG that involves memory would be strange in regular code, since it has an implicit LOCK 15:40:26 so when you say "annotate VOP" you want to annotate the INST expression? 15:40:47 no, probably just insert a (emit-write-barrier ...) form. 15:41:39 but it's in the generator, not declarative in the argument specifications? 15:42:45 nope 15:43:02 all hand-rolled. 15:43:50 I like the explicit method better (to annotate): Disadvantage: accidentally missed a write somewhere - should be avoidable. Advantage: explicit. 15:44:36 right. OTOH, I intend to leave the checks in the MOV instruction, so it's going to be hard to miss a write by accident. 15:48:04 Wait a second, I just reread what you wrote when introducing movr above. 15:48:51 movr is a "pseudo-instruction", surely? Throw the write barrier in there. 15:49:32 If all VOPs are modified to either use MOV, MOVR or MOVU, and MOVR is the only one with the barrier, then OAOO says to put the barrier emitter into MOVR. 15:49:33 nyef: that's my question. Shove that automatically in movr or insert them by hand? 15:49:39 leuler: good (: 15:49:45 oops, nyef beat me. 15:51:07 OAOO OAOO OAOO 15:51:10 Do VOPs containing calls to movr need to be modified additionally? A new temporary, more restrictions on operand types, lifetimes, targets, whatever? 15:51:31 good afternoon. Does everyone wish they were in Zadar? I had an excellent outing today to a neighbouring island 15:51:52 leuler: one temporary, which should usually default to temp-reg-tn 15:52:40 Kryztof: I for certain :( 15:52:56 Kryztof: ... 15:52:59 Why do you need a temporary for the write barrier? 15:53:01 Kryztof: the sun is shining here, too, but the sea is farther away 15:54:33 nyef: to find out which card we have to mark. 15:54:56 does movr modify flags? 15:55:02 leuler: yes. 15:55:27 Hrm. Okay, with all that, maybe it should be an explicit operation. 
15:56:31 that is, until one of you figures out how to make the sequence tighter (: 15:56:42 maybe just make the name stand out somewhat more. As a VOP programmer, one needs to memorize (or look up) the effects of all instructions and pseudo instructions anyway. 15:58:21 it's up for a s// any time, so I'll leave it at that for now. 15:58:37 fine, too. 15:59:27 so all the talk about different tradeoffs for different random number uses makes me want to resurrect the idea of a random state protocol 15:59:42 Kryztof: which obviously means that we need subclassable structs ;) 15:59:51 I know! 15:59:59 it's like it was only 3 years ago that I wrote that 16:02:25 but it also needs some thought about what the protocol would look like and whether it would slow stuff down by lots 16:03:27 -!- attila_lendvai [~attila_le@unaffiliated/attila-lendvai/x-3126965] has quit [Quit: Leaving.] 16:04:49 The lisp community might be at least as well served by a portable PRNG library with selectable generators and (in the light of recent random-float talk) maybe different algorithms to provide floats and whatever instead of only some number of random bits in an integer. 16:06:16 true. It'd be a fun hack as well. 16:06:21 I think you get most of the same issues there, but this way we can Extend and Extinguish the competition! ("Lisp community", hah) 16:06:48 SBCL go! 16:09:11 -!- antgreen [~user@out-on-235.wireless.telus.com] has quit [Read error: Connection reset by peer] 16:17:19 -!- Posterdati [~tapioca@host107-237-dynamic.6-87-r.retail.telecomitalia.it] has quit [Ping timeout: 272 seconds] 16:19:51 pkhuong: regarding prxq's random-floats: You wrote earlier "I just don't know that it's an issue that not all the floats in a given range could be generated.". I am with you here. 16:22:07 I just read both papers referenced in prxq's mail. The first one (moler) doesn't contain a motivation. 
The second one (Morgenstern) uses a Monte-Carlo simulation that yields wrong results with the current way to generate random floats but succeeds with the modified generator. 16:26:22 But the Monte-Carlo simulation there seems to be done in a numerically quite questionable way. I mean: sample from some sine function over billions of periods? The result would be mostly dependent on the sine range reduction algorithm I'd think. 16:29:21 should we worry about generating floats in a range of [-m, n] ? 16:29:59 for m distinct from n? 16:30:05 Posterdati [~tapioca@host49-216-dynamic.16-87-r.retail.telecomitalia.it] has joined #sbcl 16:30:37 if we have working [0,n] then [-m,n] is trivial if someone needs it, modulo the frequency of 0 16:31:28 pkhuong: What's your question? An interval closed on both ends? An interval containing 0 as an interior point? 16:31:30 Kryztof: not if we want to generate all the floats in that range... 16:34:16 The spec only allows [0, x[, so to get to [-m, n[ one needs a subtraction, inevitably incurring loss of signifance, having generated all the nice random floats near zero in vain. 16:34:46 significance, I mean. 16:35:22 same with adding, except that it's worse, because now there are tons of tiny values in the range that won't be generated anyway. 16:39:41 -!- Kryztof [~user@78-3-117-144.adsl.net.t-com.hr] has quit [Ping timeout: 252 seconds] 16:44:07 oh right. Krystof's latest blog post on how codewalked extensions don't compose really reminds me of the work on 3lisp and other reflexive lisps. 16:45:17 they had some ways to express the layering, and to make some things skip the underlying levels of metacircular interpretation. 
16:48:17 -!- leuler [~user@p54903AD6.dip.t-dialin.net] has quit [Quit: Au revoir] 16:48:20 ASau`` [~user@176.14.176.32] has joined #sbcl 16:49:07 Kryztof [~user@78-3-43-191.adsl.net.t-com.hr] has joined #sbcl 16:49:08 -!- ChanServ has set mode +o Kryztof 16:49:09 nikodemus [~nikodemus@cs27100107.pp.htv.fi] has joined #sbcl 16:49:09 -!- ChanServ has set mode +o nikodemus 16:50:25 -!- ASau` [~user@95-25-227-191.broadband.corbina.ru] has quit [Ping timeout: 260 seconds] 16:50:54 I disappeared and didn't notice 16:51:03 pkhuong: I was thinking that with [+0,n] we could choose from [+0,n] 16:51:03 and -[+0,m] randomly with probability n:m 16:51:14 then we get all the near-zero floats 16:51:28 Kryztof: yeah, it's getting complicated (: 16:52:08 the issue with [0,n) is of course a problem 16:52:16 stupid world 16:52:54 I don't think it is. 16:53:02 oh? Good :-) 16:53:05 It's just an artefact of the floating point world. 16:53:17 do we not end up with (-m,n) rather than [-m,n)? 16:53:24 maybe that doesn't really matter 16:53:33 no, we can still generate true zeros. 16:54:09 oh, I see what you mean... Yeah, it's pretty hard to get it right (: 16:54:26 bah. I thought you had a clever solution ;-) 16:55:26 well, I was thinking (- [rand] m), but that fails if we want to generate near zeros ;) 16:55:51 yeah 16:55:58 sucks 16:56:14 and yes that's a pretty essential transformation (using [rand] or (- 1 [rand])) for numerical simulations... 
16:56:22 *yet 16:56:38 by the way, Pascal Costanza's talk at ELS was partly about 3lisp 16:57:54 I have heard most of it several times already, but maybe you haven't :-) 17:00:50 I'll have to hunt that stuff down then 17:08:54 ok, time to disappear for real 17:10:35 I look forward to random protocols and subclassable structs magically appearing by the time I land ;-) 17:15:31 -!- Kryztof [~user@78-3-43-191.adsl.net.t-com.hr] has quit [Ping timeout: 260 seconds] 18:07:56 -!- homie``` [~levgue@xdsl-78-35-170-195.netcologne.de] has quit [Quit: ERC Version 5.3 (IRC client for Emacs)] 18:10:02 homie [~levgue@xdsl-78-35-170-195.netcologne.de] has joined #sbcl 18:17:55 -!- nikodemus [~nikodemus@cs27100107.pp.htv.fi] has quit [Ping timeout: 260 seconds] 18:18:42 -!- Quadrescence [~quad@unaffiliated/quadrescence] has quit [Quit: Leaving] 18:28:13 Quadrescence [~quad@unaffiliated/quadrescence] has joined #sbcl 18:44:47 nikodemus [~nikodemus@188-67-13-181.bb.dnainternet.fi] has joined #sbcl 18:44:47 -!- ChanServ has set mode +o nikodemus 19:23:24 any ideas why (disassemble) would give an error "invalid feature expression"? http://paste.lisp.org/display/129261 19:25:31 It's... reading the source file? 19:26:53 Is it? Why? 19:28:16 I have no idea, but it's choking on a #+#. combination. 19:29:09 it's the breathtaking source location + disassembly annotation at work 19:30:08 Ah, and I haven't seen it before because I never compile #+#. with enough debug to trigger it? 19:30:08 i rarely see disassembly annotations produce anything useful 19:30:09 sdemarre [~serge@91.176.154.85] has joined #sbcl 19:30:56 when it decides to produce something 19:43:07 CampinSam [~Sam@24-176-98-217.dhcp.jcsn.tn.charter.com] has joined #sbcl 19:52:52 -!- nikodemus [~nikodemus@188-67-13-181.bb.dnainternet.fi] has quit [Quit: Leaving] 19:53:32 What's the default heap size on x86-64? 19:55:24 Heh. 
SYS:SRC;COMPILER;X86-64;PARAMS.LISP starts off with a comment to the effect that the term ``word'' means a 16-bit or 32-bit quantity, depending on context, followed immediately by defining n-word-bits to be 64. 19:55:52 :D 19:59:54 -!- angavrilov [~angavrilo@217.71.227.190] has quit [Remote host closed the connection] 20:08:14 ... And I think I'm seeing an 8-gig default heap? 20:09:49 nyef: not anymore 20:10:41 On the one hand, I'm running 1.0.50.1-9d2548c in production. On the other hand, what's the default these days? 20:11:21 depends on memory size 20:11:35 Ah. 20:11:48 Hrm. That... could be problematic. 20:11:55 in some strange way, 1044M on 4G, and 1148M on 8G 20:12:29 How utterly bizarre. 20:13:08 I suppose it might be helpful for the general case, but when about the only thing you're going to be running is SBCL, you might want to throw more of your RAM to the heap. 20:13:57 yeah, i set it to 4G and 8G respectively, albeit sbcl is not the only thing i run there 20:22:13 i'm lost in the maze trying to figure out how it calculates it 20:23:05 Mmm. Not really critical for me at this point, I was more curious than anything else. 20:23:32 oh, it's fixed actually 20:23:39 "512Mb for 32-bit platforms, 1Gb for 64-bit ones." 20:26:43 it's bytes-consed-between-gcs which is calculated based on dynamic-space-size 20:28:51 it's weird seeing commit dates being "december" while it was april actually 20:30:57 git log --pretty=fuller somewhat mitigates it 20:31:06 There's a difference between when the commit was originally written and when it was last rebased for commit upstream. 20:31:17 Sortof screws up planet sbcl, too, IIRC. 20:31:57 rebase -i and commit --amend keeps original date, too, even when you merge other stuff in. That kind of sucks IMO. 20:32:51 You start a commit and then add a bunch more changes to it...but it always keeps that original author date. (unless you explicitly override) 20:33:15 I think that was a good feature of the original CVS system. 
The commit date was the commit date, none of this "five months or so ago" noise. 20:33:33 nyef: well, you can specify git log to report whatever you want 20:33:36 "five months or so ago"? 20:33:59 including this 20:34:44 git stores both commit and author dates, it's just that by default it tends to report author dates, and author dates don't get set in a way that makes much sense to me if using the patch editing commands. 20:35:28 with --date=relative 20:35:46 Author: Nikodemus Siivola Date: 5 months ago 20:35:48 oh, why would anyone want *that*? 20:40:21 foom: if you time-travel often, it becomes hard to calculate stuff off-hand 20:40:38 s/stuff/dates/ 20:47:32 -!- sdemarre [~serge@91.176.154.85] has quit [Ping timeout: 246 seconds] 20:53:23 prxq [~mommer@mnhm-5f75f980.pool.mediaWays.net] has joined #sbcl 21:00:34 -!- prxq [~mommer@mnhm-5f75f980.pool.mediaWays.net] has quit [Quit: Leaving] 21:03:53 stassats` [~stassats@wikipedia/stassats] has joined #sbcl 21:08:10 prxq [~mommer@mnhm-5f75f980.pool.mediaWays.net] has joined #sbcl 21:10:42 -!- huangjs [~huangjs@190.8.100.83] has quit [Remote host closed the connection] 21:24:36 -!- stassats` [~stassats@wikipedia/stassats] has quit [Read error: Operation timed out] 21:49:37 -!- prxq [~mommer@mnhm-5f75f980.pool.mediaWays.net] has quit [Quit: Leaving] 21:50:14 saschakb [~saschakb@p4FEA03A8.dip0.t-ipconnect.de] has joined #sbcl 21:50:53 -!- ASau`` is now known as ASau 21:53:51 LiamH [~healy@pool-74-96-18-66.washdc.east.verizon.net] has joined #sbcl 22:33:38 edgar-rft [~user@HSI-KBW-078-043-123-191.hsi4.kabel-badenwuerttemberg.de] has joined #sbcl 22:35:02 -!- saschakb [~saschakb@p4FEA03A8.dip0.t-ipconnect.de] has quit [Remote host closed the connection] 23:03:33 kwmiebach__ [~kwmiebach@164-177-155-66.static.cloud-ips.co.uk] has joined #sbcl 23:04:06 brown` [user@nat/google/x-trsxsswrbjkzxtcd] has joined #sbcl 23:06:18 -!- kwmiebach_ [~kwmiebach@164-177-155-66.static.cloud-ips.co.uk] has quit [Ping timeout: 245 seconds] 
23:08:35 -!- LiamH [~healy@pool-74-96-18-66.washdc.east.verizon.net] has quit [Ping timeout: 246 seconds] 23:22:29 -!- ASau [~user@176.14.176.32] has quit [Remote host closed the connection] 23:28:41 ASau [~user@176.14.176.32] has joined #sbcl