00:00:16 Racket and R6RS strip off part of the Scheme minimalism. That's good. I can always use "#lang r5rs", and for daily use it is good to have some more codebase. 00:00:24 Hah... R6.. minimalist?? 00:00:38 that's why I'm implementing R5RS :) 00:00:52 Yeah.. R6 is like The Matrix: Reloaded. It Didn't Happen. 00:01:01 R5 is followed by R7. There is one Matrix film 00:01:03 tabemann: And update to R7RS "small" once out...? ;^) 00:01:27 *tabemann* does kind of like, though, that R6RS made defines immutable, as the idea of mutable defines is, well, a pain to implement efficiently 00:01:34 *tabemann* has never looked at R7RS 00:01:49 Me neither; but hopefully R7 continues that as an idea 00:02:17 R7RS has 2 dialects; it is strange. 00:02:22 probably the most complex part of the inner loop of my VM is dereferencing and setting defines 00:02:38 pumpkin360: Makes sense though as a compromise. 00:02:49 (okay, well, when I get around to actually *implementing* dynamic-wind that'll be different...) 00:03:19 youlysses: maybe. I still like the R5RS more. And racket as an extension. 00:03:35 dynamic-wind is only hard to implement when you get to call/cc. So you can make it a feature of first-class continuations 00:03:54 well yeah 00:04:07 it's actually the continuation application that's hard, not call/cc or dynamic-wind themselves 00:04:33 pumpkin360: Well, I think and hope that R7RS "Large" will become the most prominent, but I too understand the need/want from a significant portion of the community that wants to "stick to our roots". :^P 00:04:54 I somehow suspect the "stick to our roots" portion will be favoring R5RS 00:05:04 Sure... 
call/cc just has to save a snapshot of the dynamic-wind scopes that are currently active, then activating the continuation has to re-invoke bits of them 00:05:38 Unfortunately R5 misses too many things, like a decent module system, to stick to that alone 00:05:49 tis true 00:05:51 tabemann: That's pretty much why "Small" in R7RS is a thing though, to try and sway them over and say we can have it both ways. 00:06:53 I'm hoping that R7-small is basically R5 modernised a bit, without too much added 00:07:20 LeoNerd: call/cc and "primitive" continuation application can be implemented extremely fast and simply provided one puts one's state in the right format on the heap; the problem is that call/cc is not forbidden within the before and after handlers 00:08:11 Indeedy 00:08:20 It's the jumping out of and back into dynamic-wind that makes it fun 00:08:43 Oh I see what you mean... call/cc in the before thunk :) 00:08:46 Yes... that's Even More Fun 00:09:01 both that and continuation application in the before or after thunk 00:09:11 Mmmmhm 00:10:43 so I have to save all my intermediate state for handling after thunks and before thunks on the stack, so it can be properly replayed in the *next* continuation application if a continuation is applied in an after or before thunk, or replayed again if call/cc is applied inside an after or before thunk and that continuation is later applied 00:11:07 I think I've got down an algorithm that is correct... 
but it's still not simple 00:14:15 too bad you can't really get rid of continuations, because they are so nice for doing certain sorts of things in Scheme 00:14:31 like implementing cooperative multithreading (which I've done before) and coroutines 00:15:02 I prefer shift/reset style ones 00:15:09 They don't make my head explode quite as much 00:15:15 (well, I know there are Scheme implementations that require a command line option to activate continuations, such as Bigloo) 00:15:45 how to allow assignment of built-in functions in GUI racket R5RS? 00:15:46 That's possibly reasonable 00:15:57 (non-gui is not working :() 00:16:10 pumpkin360, make your own bindings; you can assign those. 00:16:21 (define cons cons) (set! cons +) 00:16:58 Riastradh: I make "(define (min a)..." and I get an error that I can't change a constant 00:18:02 question - is an error supposed to happen in R5RS if you define something twice? 00:18:21 guess not. 00:18:40 It does not usually, at least. 00:18:59 OK, you may have to rename them on import in Racket. 00:19:08 Rename min to scheme:min, say. 00:20:35 *tabemann* checked R5RS, it said that it will overwrite existing defines 00:20:59 LeoNerd: have you completed SICP? if you don't mind me asking.. :-) 00:21:20 "completed"..? I read it sure 00:21:36 well, some things will never be completed.. but yeah.. 00:22:15 so how would you recommend approaching SICP for someone who wants to learn perl also? 00:22:39 Hrm..? I don't get the question 00:22:57 But I'd suggest not learning more than one language at once. :) It gets all mixed up 00:22:59 *tabemann* doesn't see someone would want to learn perl 00:23:15 s/see/see why 00:23:57 LeoNerd: ok. I just find many resources on perl just teach syntax and not design and how to design abstractions. 
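The snapshot-and-replay idea described above (call/cc saving the active dynamic-wind scopes, and activating a continuation re-invoking bits of them) can be modeled in a few lines. This is a Python sketch with invented names (`capture`, `reroot`, frames as before/after pairs), not anyone's actual VM code: afters of abandoned frames run inside-out, then befores of restored frames run outside-in, down to the deepest shared frame.

```python
# Toy model of dynamic-wind winders and continuation "rerooting".
# A frame is a (before, after) pair; 'winders' grows toward the innermost frame.

log = []

def make_frame(name):
    return (lambda: log.append("before-" + name),
            lambda: log.append("after-" + name))

def capture(winders):
    """call/cc: snapshot the currently active dynamic-wind frames."""
    return list(winders)

def reroot(current, target):
    """Activate a continuation: unwind to the deepest shared frame, then rewind.
    After thunks run inside-out; before thunks run outside-in."""
    common = 0
    while (common < len(current) and common < len(target)
           and current[common] is target[common]):
        common += 1
    for before, after in reversed(current[common:]):  # unwind: inside-out
        after()
    for before, after in target[common:]:             # rewind: outside-in
        before()
    return list(target)

a, b, c = make_frame("a"), make_frame("b"), make_frame("c")
snapshot = capture([a, b])           # continuation captured while inside a, b
winders = [a, c]                     # later, control is inside a, c
winders = reroot(winders, snapshot)  # jump back into the captured context
print(log)  # -> ['after-c', 'before-b']
```

The shared frame `a` is neither exited nor re-entered, which is the property the save-and-replay bookkeeping in the messages above is trying to preserve.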
00:24:26 HoP seems cool though 00:24:28 Ahyes... Yeah 00:24:33 That's what Perl is about! 00:24:34 Ohyes, HoP is interesting. 00:24:39 isn't pearl a highly imperative language? 00:24:49 Perl is a highly write-only language, for sure. 00:24:53 Imperative. Functional. OO.. Whatever you want.. 00:24:54 so, I've turned to scheme to learn design, but it's not working out in the perl aspect of my learning. 00:25:39 I've found I can't seem to directly apply SICP to perl 1 to 1, it seems. like you said, mixing languages can get all mixed up and stuff. 00:26:02 so I'm trying to figure out what order to do things.. I guess. 00:26:15 Flush Perl from your head; it's toxic... 00:26:30 Mostly because there's a layer missing. SICP talks briefly about breaking things into layers of abstraction, then quite quickly goes next into how to apply that idea to Scheme specifically... 00:26:36 Or more generally Lisps.. 00:26:45 Scheme is quite functional and pearl is not. It's more about mixing paradigms and not languages, but that's just my thought 00:26:46 I want to do perl / scheme / haskell / C eventually. 00:27:10 The trick with any language really is to work out how to break the problem into pieces of the right shape, for the given language, or subset thereof... 00:27:48 Some day, maybe I will finish ridding my computing environment of Perl. 00:28:47 Most of it doesn't serve any useful purpose; except for autoconf and automake, it just sits around because various parts of the software nominally depend on it and never invoke it (e.g., urxvt, and most of irssi). 00:29:30 E.g. a lot of the things I do are quite IO-bound, and vary between vaguely and entirely async; so lately I've found that Futures are a good way to shape such programs.. 
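The "shape programs around Futures" idea above can be sketched with Python's `concurrent.futures` standing in for the Perl Future style being described (the fetch functions and their results are hypothetical): async work is expressed as future values you kick off early, while the top-level logic still reads like straight-line synchronous code.

```python
# Illustrative only: futures let IO-bound work run concurrently while the
# calling code keeps the straightline-reading call/return style.
from concurrent.futures import ThreadPoolExecutor

def fetch_user(uid):           # hypothetical IO-bound request
    return {"id": uid, "name": "user%d" % uid}

def fetch_quota(uid):          # hypothetical IO-bound request
    return 100 - uid

with ThreadPoolExecutor() as pool:
    user_f = pool.submit(fetch_user, 7)    # start both "requests"...
    quota_f = pool.submit(fetch_quota, 7)  # ...concurrently
    # straight-line reading of asynchronous results:
    report = (user_f.result()["name"], quota_f.result())

print(report)  # -> ('user7', 93)
```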
00:29:44 ok, cool 00:30:07 Turn everything into a Future and lots of neat things can come out of it, where you end up getting most of the benefits of async/callback style code, but can still do fairly straightline-reading logic of call/return synchronous code.. 00:31:37 for those talking about r7rs-small as if it wasn't done... it's done :) http://trac.sacrideo.us/wg/raw-attachment/wiki/WikiStart/r7rs.pdf 00:32:01 r7rs-large is in progress now 00:32:06 Ooh :) 00:32:28 I haven't been this excited since C11 was finalised.. :) 00:32:42 I just wish perl had more resources similar to SICP / HoP / program design. 00:32:42 r7rs-small is indeed a smallish addition to r5rs, most notably libraries 00:33:03 zacts: One day I plan to write a book on async/futures/etc... 00:33:11 cool 00:33:43 zacts: Thing is, my pragmatic sense of er... pragmatism... points out that such a book isn't really perl-specific at all. It would be 95% "here's how to design around futures", and about 5% perl stuff around the edges. 00:33:53 would you say it's ok to read something like HTDP (How to Design Programs) while learning more perl? 00:33:55 You could just as easily write that book about any other language which supported the same features 00:34:05 there is also an interpreter that implements R7RS-small completely: https://code.google.com/p/chibi-scheme/ 00:34:18 should you want to give it a try 00:34:38 I suspect most authors feel similarly - there's basically no point writing a book about -general- program design, in one specific language. Design principles largely carry across into any other language having at least equivalent power on the same feature sets. 00:34:40 zacts: HtDP is about designing programs in functional style. 00:35:03 pumpkin360: but htdp 1st edition also covers imperative programming 00:35:17 Perl happens to be fairly good at functional style, also... 00:35:28 zacts: Ok, didn't know that. It will be good then probably. 00:35:55 yeah, as HoP points out. 
I've only read the first ch of HoP and liked it. 00:36:00 Supports mutable lexical closures, just like Scheme does... anonymous functions just like Scheme.. :) 00:36:06 LeoNerd: but isn't it mainly not-functional? 00:36:21 What's not-functional mean? 00:37:10 mutable lexical closures suck because they mean you can't just copy your free variables into the closure, but have to keep the stack frame the variables are in alive for as long as the closure exists 00:37:12 Not sure, but from what I've seen the code written in it looks more like C/C++ code than scheme code, doesn't it? 00:37:28 which means that you have to support putting stack frames on the heap, even if it is not by default 00:37:40 syntax doesn't necessarily constitute what a language is capable of? 00:37:53 tabemann: Perl refcounts them... So until you -actually- form a closure they can just live on the stack 00:38:19 pumpkin360: Ya; don't confuse surface syntax with underlying abilities... 00:38:30 there's the other strategy of forcing the user to *explicitly* make variables mutable and allocate them on the heap, which is done by OCaml and SML 00:38:55 tabemann, the compiler can perform that transformation automatically too. 00:38:56 LeoNerd: I just don't know where to turn next? perl is my first lang, I like it, and want to learn more. so reading modules + perl, then HTDP/SICP?, then more perl again. I don't know where to turn next with self study.. 00:39:07 LeoNerd: ok then. I admit I don't know enough to speak about pearl, sorry. If it is functional friendly, how does it compare to JS then? 00:39:15 Hah.. 00:39:32 No need to hang onto variables that the closure isn't going to refer to; in fact, frankly I think it should be a bug to do so, even though it is common to do so. 00:39:35 JavaScript is more functional-friendly than Perl... 00:39:43 It's exactly equivalent 00:40:37 Heard only that it is nice. 
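The mutable-lexical-closure issue discussed above (you can't just copy free variables into the closure; a live cell must be shared) shows up the same way in Python, which makes for a compact illustration of why such variables end up heap-allocated:

```python
# Mutable lexical closure: 'count' cannot simply be copied into each
# closure -- both closures must share one live cell, which is why an
# implementation has to keep the frame (or a heap cell) alive as long
# as either closure exists.
def make_counter():
    count = 0
    def increment():
        nonlocal count      # mutate the shared captured variable
        count += 1
        return count
    def peek():
        return count        # observes increment's mutations: shared cell
    return increment, peek

inc, peek = make_counter()
inc(); inc()
print(peek())  # -> 2
```

OCaml and SML, as mentioned above, make the cell explicit (`ref`); Scheme, Perl, JS, and Python leave the boxing to the implementation.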
But it's not homoiconic - don't like it :P 00:40:43 Scheme spells it 'lambda', JS spells it 'function', Perl spells it 'sub'. In each case, the keyword can create named or anonymous functions or closures. Including mutable ones. 00:41:01 mutable state considered harmful 00:41:14 #perl seems daunting to me. I feel like I'm expected to know everything already, or they link me to perl books that teach syntax and not design. 00:41:16 mutable state considered to mimic the real world, and be useful for modeling and understanding it 00:41:46 #perl-help is more friendly though, I've been getting lots of help there. 00:41:58 if you need mutable state, compartmentalize it explicitly; that's what, say, Haskell and Clojure (when it doesn't have to be compatible with Java) do 00:42:31 (compartmentalizing mutable state for one thing makes it much easier to implement STM well...) 00:43:50 STM == software transactional memory 00:44:03 Yah; I guess I just don't see the point of taking the one thing that makes a computer a computer (namely, the stored state and the ability to act in future differently depending on that state), and intentionally removing it 00:44:07 Yes I'm aware what STM is 00:44:40 LeoNerd: you could say the same thing about, say, explicit memory management versus automatic memory management 00:45:17 LeoNerd: So do you think that racket is a good general-purpose language, or should I look at pearl (and by general-purpose I mean scripting, web, algorithmics and AI)? 00:45:33 Not really, because memory management is a means to an end; the end being to have a place to store that state 00:45:55 A function without state is exactly that; a function in the mathematical sense. A soulless mapping from input values to output values and nothing more 00:46:21 LeoNerd: hm.. so any ideas or suggestions as to what to study next? 
00:46:27 but functions without state make it infinitely easier to reason about what they do, whereas functions with state can launch nuclear missiles for all you know 00:46:35 pumpkin360: Racket is an implementation, not a language. In any case, not one I know. And Perl is spelled Perl, without an 'a'. 00:46:50 just like automatic memory management makes it infinitely easier to reason about memory allocation 00:46:55 Hah. 00:47:09 I worked as a reliability engineer at one of the world's largest communication networks 00:47:23 Automatic memory management does not, in any way, make it easier to reason about large-scale systems :) 00:47:35 LeoNerd: ok, sorry for the spelling mistake. Concerning racket - well, it is a language, with exactly one implementation. 00:47:49 I spent most of my time poking Java with sticks to wonder why suddenly 200 processes spun up the CMS collector for 30 minutes at a time, for no obvious reason 00:48:25 well that presumably wouldn't have been a problem with the azul garbage collector 00:48:48 that's stop-the-world considered harmful there 00:48:55 CMS != STW 00:49:05 *tabemann* wishes Haskell, for one, had an incremental garbage collector 00:49:33 with incremental garbage collectors you can actually control their behavior, e.g. how much time they will spend garbage collecting 00:49:40 Yes... CMS does that 00:49:46 CMS has tonnes of tuning knobs. Loads of them 00:49:51 Far too many in my opinion 00:50:07 I spent a lot of time just fiddling with numbers on a commandline to see if they made the problem better or worse 00:50:32 Considered writing a genetic algorithm type thing... run a small 5% experiment on the cluster, change one param a bit, see if overall it has a statistically-significant improvement... then just keep going 00:51:25 Overall, I don't think these systems -actually- save anyone much effort at large scales. 
Individual developers have to worry slightly less about "did I remember to free() that malloc()?", but systems engineers then have to spend much more effort in tuning all the GC'ing subsystem anyway 00:51:37 garbage collection does have its limitations, but its limitations are outweighed by manual memory management's 00:52:24 I'd say the only real benefit of it is that it manages to distribute the effort of maintaining a large system across more people 00:52:32 So it allows larger systems to grow than could be supported without it 00:52:41 i feel like a truly large system is just way less likely to get finished using manual memory management 00:52:55 the big problem with manual memory management is that it forces you to keep a chain of custody for all objects allocated on the heap, and keeps you from treating them as *values* 00:53:21 also, manual memory management typically has *slower* allocation than good generational garbage collector implementations 00:53:48 and deallocation in the first generation with those scales in time not with how many objects you have allocated but with how many you want to preserve 00:54:12 turbofail, perhaps you don't consider them `finished', but there are one or two large systems deployed out there written without garbage collection. Your operating system, for example, or everything in the back end over at Google. 00:54:14 That's true. I'll agree 100% there. A single allocation operation is likely to be much faster. 00:54:37 Riastradh: *ahem* Yes, I did mention "one of the world's largest communications systems" for a reason ;) 00:54:54 However: consider a more useful metric in large systems: allocation time -per KiB- of process memory. 
00:55:05 sure, i said "less likely" not impossible 00:55:17 GC'ed systems only perform efficiently if they have 3, 4, maybe sometimes 10 times the amount of physical memory that their working set requires 00:55:41 When they start to get memory constrained, they get -very- very slow, wasting exponentially more time in performing back-to-back GC and not making much progress 00:55:55 When you are one developer on your own 8GiB laptop, you don't care. 00:56:04 When you are trying to host all the world's email, you start to care more :) 00:56:55 yeah, GC time proportional to the amount of space you're keeping starts to look pretty bad once you're keeping 40GB of heap 00:56:57 on desktop and server systems, memory is cheap; the main limiting factor is how much RAM you can put on your motherboard (e.g. 32 GB for my system); it is more a factor in mobile systems where you are basically constrained in memory 00:57:03 (though personally I wasn't on gmail; I was on gtalk. Which has additionally lots of realtime constraints ;) ) 00:57:47 "I'm sorry, you want to pause for 200msec now to regain some memory? But I have a frame of video to deliver, damnit. Not good enough!" 00:58:36 which is why GC isn't good for realtime... 00:59:02 Or, IMHO, massive shared transactional systems 00:59:37 Hierarchical memory pooling is where my money is, to be honest. If you're dealing with a constant stream of individually-independent transactions of some kind, pooling memory can work very well 00:59:53 but this is like, say, manual locking versus STM - in theory you can get better performance out of manual locking, but in practice you will get far less error-prone and far more maintainable code out of STM... 00:59:55 new request => new pool. allocate into that. Once the request is done, blow the lot away in one go 01:00:19 I know they've done work on region inference with SML, but apparently that didn't get much better performance than GC 01:02:14 Anyway,.. 
much as I would love to continue chatting large-systems design, I should go to bed :) Night all 01:03:04 g'night 01:03:14 good night. 01:03:54 it looks like some of these large systems may make use of reference counting to do their memory management 01:04:35 I've heard that reference counting, for being deterministic, often has problems in practice other than just cycles, in that single dereferences can lead to unbounded chains of deallocation 01:04:48 sure, reference counting isn't real time either 01:05:14 at least not the easy implementation 01:05:39 whereas with a good incremental GC implementation, you can put absolute limits on how much time you spend at a time in GC 01:05:39 it can be made to do its deallocation more incrementally 01:06:36 on the other hand, depending on your data structure, you may be able to impose a hard limit on the depth of your reference chains anyway 01:07:52 with generational GC you can rely heavily on ensuring infant mortality for the vast majority of objects, as the first generation typically scales with how many are promoted to the second generation 01:08:12 s/as the first generation/as GCing the first generation 01:08:37 that's how, say, Haskell can get away with allocation rates on the order of *1 GB per second* 01:10:40 that may be true for systems that generate a lot of garbage but don't need to accumulate a whole lot of data 01:13:01 that said, the azul GC apparently handles huge live heaps quite nicely 01:13:28 now if only i could afford to use it 01:13:56 one common optimization in GC is to treat larger objects differently from smaller objects, e.g. 
GHC uses a copying GC for the second generation for smaller object, but IIRC mark-and-trace GC for larger objects, which are automatically promoted to the second generation 01:14:12 s/smaller object/smaller objects 01:15:30 sure, but what if you have a large number of smallish objects that you want to keep? 01:15:40 like, say, a 40GB red-black tree 01:16:14 apparently for small objects the cost of copying them is actually negligible versus the cost of mark-and-trace, and you get other benefits such as heap compaction 01:17:03 copying has the advantage that it makes the act of freeing much cheaper 01:17:34 because you just move the data then forget about it 01:18:03 whereas freeing in a mark-and-sweep is little different and little cheaper than freeing in manual memory management (which can be expensive) 01:19:57 and the act of moving is simply copying the data and advancing a pointer 01:21:39 it gets cheaper still when you do what, say, GHC does and have precompiled routines for moving each data type, so copying is faster than if you actually used memcpy to do it 01:55:06 ping 01:55:12 implementation question 01:55:22 ka-*PWINNNGGGG*!! 
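The copying-collector properties mentioned above (bump-pointer allocation, collection cost scaling with survivors rather than with garbage, free space reclaimed wholesale) can be illustrated with a minimal two-space sketch. This is an invented toy model, not GHC's collector; objects are dicts, references are heap indices:

```python
# Minimal two-space copying-collector sketch. Allocation is a bump
# pointer (append); collection copies only *live* objects reachable
# from the roots, so its cost scales with survivors, and everything
# left behind in the old space is freed wholesale.

def allocate(heap, value, refs=()):
    heap.append({"value": value, "refs": list(refs)})
    return len(heap) - 1                 # bump-pointer allocation: next index

def collect(heap, roots):
    to_space, forward = [], {}           # forward: old index -> new index
    worklist = list(roots)
    for old in worklist:                 # copy the transitive closure of roots
        if old not in forward:
            forward[old] = allocate(to_space, heap[old]["value"])
            worklist.extend(heap[old]["refs"])
    for old, new in forward.items():     # fix up references into to-space
        to_space[new]["refs"] = [forward[r] for r in heap[old]["refs"]]
    return to_space, [forward[r] for r in roots]

heap = []
a = allocate(heap, "a")
b = allocate(heap, "b", refs=[a])
allocate(heap, "garbage")                # unreachable: never copied
heap, roots = collect(heap, [b])
print(len(heap))  # -> 2
```

Only the two live objects get touched; the garbage costs the collector nothing, which is the "deallocation scales with what you preserve" point made earlier.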
01:55:23 pong 01:56:12 when unwinding the dynamic-wind stack when applying a continuation, should I remove frames *before* applying an after thunk or *after* applying an after thunk 01:56:33 (note that I'm planning on adding frames only after before thunks, once I'm done removing all the frames that I need to remove) 01:56:38 the effect of this is 01:56:49 if there is an error in an after thunk 01:57:09 and it is caught below the continuation application 01:57:14 If control has entered the before thunk, the after thunk should have a chance to run. 01:57:46 If control has entered the after thunk, it shouldn't re-enter the after thunk. 01:57:52 what I mean is, should after thunks be able to be entered twice, thanks to catching exceptions below the continuation application combined with reentering the continuation application thanks to call/cc 01:58:06 wait 01:58:34 if an exception occurs inside a continuation application, *all* the after thunks should be called, in inside-out order, down to the point where the exception handler is 01:58:54 Don't mix up throws with handling exceptions. 01:59:12 Signalling an exception need not entail throwing. 
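The point that signalling need not entail throwing (the Lisp-style condition model) can be sketched in a toy handler system, with all names invented for illustration: the handler runs in the context of the `signal` call, without unwinding the stack, and if it returns a value the signalling code simply continues with it.

```python
# Toy "condition system": the innermost handler is called at the point
# of the signal, the stack is NOT unwound, and if the handler returns
# a value the signalling code carries on with that value.
handlers = []

def with_handler(handler, thunk):
    handlers.append(handler)
    try:
        return thunk()
    finally:
        handlers.pop()

def signal(condition):
    # Run the handler *here*, in the signalling context.
    return handlers[-1](condition)

def parse_number(text):
    if not text.isdigit():
        # signal, then keep going with whatever the handler decides
        return signal(("bad-number", text))
    return int(text)

result = with_handler(lambda cond: 0,    # a "use zero instead" handler
                      lambda: parse_number("12x") + 5)
print(result)  # -> 5
```

A throw is then just one thing a handler may *choose* to do (e.g. by invoking an escape continuation), rather than something implied by signalling.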
02:00:14 actually it doesn't matter 02:01:06 because the dynamic-wind frame will be saved if the call/cc is *before* the particular frame where the exception occurred, and if the call/cc was inside the same frame, the point in execution inside the after thunk will be saved 02:03:16 the fact that the execution state inside the thunk will be saved means that it doesn't matter whether the dynamic-wind frame is removed/added before or after the thunk's execution in case call/cc is called inside the thunk, and the continuation is later applied 02:03:26 waait 02:03:26 So, when you transition from outer to inner, set the current state before calling the before thunk, and when transitioning from inner to outer, set the current state before calling the after thunk. 02:03:49 it *does* matter, but for different reasons 02:03:59 for after thunks 02:04:51 if the frame is removed *before* the thunk is called, the next time the thunk is entered via continuation, the before thunk will not be called 02:05:05 but we're going to be inside the after thunk, and the before thunk will need to be called first 02:05:30 so the dynamic-wind frame for the after thunk must be removed *after* the after thunk is called 02:05:49 tabemann: You need to get firmly in mind that exceptions run in the context of the procedure that raises them in the Lisp world, or you will always be at cross-purposes with any Lisper you talk to about this. 02:06:03 s/exceptions/exception handlers 02:06:39 I'm not just considering exceptions, but all cases of escaping and reentering an after thunk 02:06:51 but I'll consider that with exceptions 02:06:58 ever *implemented* lambda calculus? 
I got an implementation debugged and working but the Y combinator always gives "max recursion depth exceeded" when I try to use it... I've tried both versions... how do I fix this? 02:06:59 (oh yeah, you've got restartable exceptions) 02:07:38 dammit 02:07:43 I just realized a major problem 02:07:52 consider this 02:08:12 you call/cc in an after thunk 02:08:18 save the continuation somewhere 02:08:35 then you apply a *different* continuation to escape from the after thunk 02:08:43 okay 02:08:45 now 02:08:58 you later apply the first continuation to reenter the after thunk 02:09:01 now here's the problem 02:09:19 if you pop the dynamic-wind frame before entering the after thunk 02:09:29 the rest of the after thunk will be run without a matching before thunk call 02:09:30 but 02:09:43 if you pop the dynamic-wind frame after leaving the after thunk 02:09:59 there will potentially be code run in the before thunk when reentering the after thunk 02:10:15 that will not be matched by code in the after thunk, because you'll be entering the middle of the after thunk, and not the start 02:10:36 neither is optimal, but which is more correct? 
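On the "max recursion depth exceeded" Y combinator question above: that is the classic symptom of using the normal-order Y under a strict (applicative-order) evaluator, where the self-application `(x x)` is evaluated before the function can decide whether it needs the recursive call. The usual fix is the eta-expanded, applicative-order variant (the Z combinator). A Python demonstration, since Python is also strict:

```python
# Z combinator: eta-expanding the self-application (x x) into
# (lambda v: x(x)(v)) delays it behind a lambda, so a strict
# evaluator only unfolds the recursion on demand.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # -> 120

# For contrast, the plain (normal-order) Y diverges under strict
# evaluation: x(x) is evaluated eagerly, forever.
Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))
try:
    Y(lambda rec: lambda n: 1)
except RecursionError:
    print("plain Y diverges under strict evaluation")
```

The same distinction applies in a strict lambda-calculus interpreter: use the applicative-order fixed-point combinator, or implement lazy evaluation.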
02:11:39 from thinking about it 02:11:49 the first might be better 02:12:12 because there'll initially be code in the before thunk which won't have been matched by code executed in the after thunk, because it will have been exited prematurely 02:12:39 but when the after thunk is reentered, it will have a chance to be matched by the rest of the code in the after thunk 02:12:39 but 02:12:53 that only applies if you immediately call/cc, and then apply the second continuation, in the after thunk 02:13:24 (actually, applying the second continuation would have to be conditional, or else you could form an infinite loop anyways) 02:14:15 if you call/cc earlier in the after thunk, do some stuff, then apply the second continuation, then it is likely stuff will be done twice 02:14:26 which behavior is more correct? 02:14:33 hmm 02:14:55 maybe I should find an impl. of Scheme that implements R5RS and test this... but how am I to know whether *their* implementation is correct? 02:15:37 any comments? 02:17:30 (and yes, this is a realistic scenario - this would happen if, say, you implemented cooperative multithreading with continuations, and yielded inside an after thunk) 02:17:33 I don't quite follow you (it's late here and I'm tired), but you should look at the discussion in R7RS, even if you are only targeting R5RS, because it represents common practice. 02:18:04 And of course you should really target R7RS-small, not R5RS! :-) 02:18:38 dynamic-wind is described on p. 53 02:18:49 hmm... it's early enough that I *could* retarget my VM to R7RS-small 02:19:16 I recommend it. There are not that many more things, and there is IMO quite a bit more clarity. 
02:19:37 Most of the growth was from libraries and specific datatypes. 02:20:02 do you know where to find the standards doc? 02:20:21 because I searched for R7RS small, and it didn't point me at anything that obviously linked to the final doc 02:20:47 It's epsilon short of final right now (the Scheme Steering Committee has to meet and bless it, but they aren't a technical committee) 02:21:16 http://trac.sacrideo.us/wg/raw-attachment/wiki/WikiStart/r7rs.pdf 02:21:16 I presume R7RS-small uses syntax-case? 02:21:32 No, only syntax-rules. Syntax-case gets into phasing issues that we wanted to avoid in the small language. 02:23:43 You're familiar with identifier phases? 02:24:59 okay, R7RS-small does support setting defines, so I won't have to change my VM code there 02:25:12 Yes, your own defines. Defines imported from a module are immutable. 02:25:34 *tabemann* isn't sure if he's familiar with identifier phases 02:25:37 The same is true in all versions of the standard: things like define-constant exist only as implementation extensions. 02:26:03 agh, that *will* make my implementation complex, as my code for globals will have to remember whether an identifier is in the same module 02:26:15 It's not required, only permitted, to forbid changes. 02:27:02 In general, r7rs like r5rs is based on "it is an error to X", which means the user can't rely on specific behavior. 
02:27:06 okay, I was thinking that I would have to make every function know what module it was executing in, and keep a marker associated with each top-level definition indicating which module it was defined in 02:27:53 Nah, I would just treat two defines of the same thing in the same module as an error; that should suffice, unless you are doing an optimizing compiler. 02:28:02 s/thing/identifier 02:28:55 okay, so redefining things *is* forbidden in R7RS? 02:29:38 No, it's an error to do so. The user can't count on it working, but the implementation may do what it likes: signal an error, allow the redefinition, ignore the redefinition, or make demons fly out of your nose. 02:29:45 shite 02:30:00 I'm supposed to be able to support R5RS too?! 02:30:04 Eh? 02:30:10 or is that an error in the doc? 02:30:20 page 55 02:30:38 for scheme-report-environment and null-environment 02:31:08 No, that just means they only contain the R5RS identifiers, it doesn't mean the procedures the identifiers are bound to have R5RS semantics. 02:31:31 That is also required only if you implement the R5RS compatibility library, which is not required. 02:33:49 okay, the doc seems clearer with regard to the semantics of dynamic-wind 02:34:13 i.e. pop a dynamic-wind frame before calling an after thunk and push a dynamic-wind frame after calling a before thunk 02:35:54 I just realized your description of what you'd have to do to support immutable imported identifiers is based on a misunderstanding. 02:36:16 I've decided I'm for sure going to work through HTDP. 02:36:22 All you have to check is whether the identifier in a set! expression is defined in this library (okay) or an imported library (error). 02:36:35 No run-time check is needed, it's purely syntactic. 
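The frame ordering just described (push the winders frame only after the before thunk completes; pop it before the after thunk runs) can be written out as a toy `dynamic_wind`. Python sketch with invented names, not the actual VM code; a continuation captured inside a before or after thunk therefore never sees a half-entered frame:

```python
# Toy dynamic_wind obeying the ordering above: push the frame *after*
# the before thunk returns, pop it *before* the after thunk runs.
winders = []
trace = []

def dynamic_wind(before, thunk, after):
    before()                          # frame not yet pushed here
    winders.append((before, after))   # push after 'before' completes
    try:
        return thunk()
    finally:
        winders.pop()                 # pop before 'after' runs
        after()                       # frame no longer on the stack here

result = dynamic_wind(
    lambda: trace.append(("before", len(winders))),
    lambda: trace.append(("body",   len(winders))) or "done",
    lambda: trace.append(("after",  len(winders))))
print(trace)  # -> [('before', 0), ('body', 1), ('after', 0)]
```

The depth recorded in the trace shows the frame is live only around the body, which matches the R7RS description quoted above.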
02:37:24 but how do you do this when code is defined in the order that it is specified, and something could be defined or imported *after* where the set! is specified? 02:37:33 No. 02:37:46 The set of bindings in the environment is statically fixed. 02:39:05 okay, I'd gotten the impression that defines were defined in order, and top-level references in them were late-bound... or was that R5RS, and things have changed between then and R7RS-small? 02:39:34 The right-hand expressions in definitions are evaluated in order. 02:40:08 but they can see things defined after them 02:40:26 Yes. 02:40:38 hence they are late-bound 02:40:43 No. 02:41:16 Except as a feature of a development environment. 02:42:08 so you need to go through the input file, expand all macros, then create "boxes" for each thing defined in the top-level environment, and then evaluate the defines themselves in order, binding the names in them to those "boxes"? 02:42:36 Filling the boxes with the values, not binding the names. Binding names happened when doing macro expansion. 02:43:19 okay, that will require some changes to my implementation of globals... but the resulting implementation should be faster 02:43:39 Macro expansion is when the meaning of each name is determined. 02:45:03 you mean when the names are created, before values are actually added to them 02:45:08 Just so. 02:45:45 Values happen at run-time. Macro expansion and binding names to meanings happen at compile-time. 
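The two-pass "boxes" strategy just discussed can be sketched directly (the `Box` class and the `defs` program are illustrative inventions): pass 1 creates one box per top-level name, which is the compile-time binding step, and pass 2 evaluates right-hand sides in order, filling the boxes. A reference compiles straight to its box, so a definition can use a name defined after it without any late-bound by-name lookup at run-time.

```python
# Two-pass top-level environment: bind every name to a box first, then
# fill the boxes by evaluating right-hand sides in order.
class Box:
    def __init__(self):
        self.value = None

# a toy program: 'double' refers to 'square', which is defined later
defs = [
    ("double", lambda env: lambda n: 2 * env["square"].value(n)),
    ("square", lambda env: lambda n: n * n),
]

env = {name: Box() for name, _ in defs}   # pass 1: names -> boxes ("expansion")
for name, rhs in defs:                    # pass 2: fill boxes in order
    env[name].value = rhs(env)

print(env["double"].value(3))  # -> 18
```

By the time `double` is *called*, the `square` box has been filled, which is why the forward reference works even though nothing is late-bound by name.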
02:46:47 *poof* 02:47:23 okay, then I can probably unify my handling of locals and globals 02:47:38 just make the globals the outermost environment on the scope stack 02:49:15 which is good, because my globals implementation was slow and my locals implementation was fast 02:51:52 Riastradh has been describing the behavior of identifiers bound to variables. Identifiers bound to syntax can't be used before they are defined, at least not in R7RS. 02:52:58 that simplifies things 02:53:15 Yes. 02:53:23 Nor can variables be referred to before they are imported. 02:53:38 (Again, these are things the user can't count on; the implementation is free to provide them as extensions.) 02:55:47 it is nice that R7RS explicitly specifies exception-handling mechanisms, or I would have either had to dig up some SRFI on the subject or just make my own and hope it doesn't suck too much 02:56:16 Yes, they come from R6RS 02:57:05 You might want to look at pp. 77-79 where the (few) incompatibilities and (many) extensions of R7RS over R5RS are listed. 02:59:03 *tabemann* is reading that now 02:59:17 it definitely feels "bigger" than R5RS 03:01:19 It is, but most of the changes are small changes. 03:02:41 Libraries are the big-deal change. 03:02:49 yeah, I see them 03:02:59 Everything else is the stuff you often find in R5RS implementations anyway. 03:03:16 And for that matter, most R5RS systems have some sort of modules anyway. 03:04:48 it is nice to at least have standardized that 03:05:01 *jcowan* nods.
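The "globals as the outermost environment" idea is just a scope chain whose last frame is the global table, so one lookup routine serves locals and globals alike. A rough sketch (Python; the Frame class is a hypothetical stand-in for the linked-list scope stack mentioned later in the log):

```python
# Sketch: a scope chain as a linked list of frames, with the global frame as
# the outermost link, so local and global lookup share one code path.

class Frame:
    def __init__(self, bindings, parent=None):
        self.bindings, self.parent = bindings, parent
    def lookup(self, name):
        frame = self
        while frame is not None:           # walk inward-to-outward
            if name in frame.bindings:
                return frame.bindings[name]
            frame = frame.parent
        raise NameError(name)

globals_frame = Frame({"pi": 3.14159})
locals_frame = Frame({"x": 42}, parent=globals_frame)

assert locals_frame.lookup("x") == 42        # found in the local frame
assert locals_frame.lookup("pi") == 3.14159  # falls through to globals
```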
03:06:27 this *is* going to be a good amount more work for me implementation-wise 03:06:54 but it would be nice to have an R7RS-small Scheme that I can embed in Haskell, instead of the many crappy mini-Schemes that people like to write in Haskell 03:08:13 random question 03:08:39 can let-values take arbitrary numbers of values, or must it specifically match the number of values returned by the expression in question? 03:11:02 It is an error to return the wrong number of values. Some implementations bark, some drop the excess values or supply some value 03:11:08 if there are too many variables. 03:11:24 okay, I've implemented it to be an error 03:11:41 Note however that let-values can be written with an improper list, permitting it to capture "n or more values" 03:13:13 okay, that wasn't clear from the doc 03:13:45 that is something I'll have to modify my VM to handle 03:16:27 See the definition in 7.1 03:16:39 it's the same for lambda and let-values (but not for let) 03:16:51 Have you found the formal semantics helpful at all? 03:17:04 *tabemann* hasn't read the formal semantics in the back much 03:18:24 (Not many do, I believe.)
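The binding rule for a let-values (or lambda) formals list with a rest variable can be sketched as VM-side logic, roughly like this (Python; bind_values is a hypothetical helper, not anyone's actual VM code): fixed formals must each get a value, and a rest formal collects whatever remains.

```python
# Sketch of binding logic for let-values with a rest formal, mirroring lambda:
# (let-values (((a b . rest) (values 1 2 3 4))) ...) binds a=1, b=2, rest=(3 4).

def bind_values(fixed, rest_name, values):
    if rest_name is None and len(values) != len(fixed):
        raise RuntimeError("wrong number of values")  # "it is an error"
    if rest_name is not None and len(values) < len(fixed):
        raise RuntimeError("too few values")
    env = dict(zip(fixed, values))                    # bind fixed formals
    if rest_name is not None:
        env[rest_name] = list(values[len(fixed):])    # "n or more values"
    return env

env = bind_values(["a", "b"], "rest", (1, 2, 3, 4))
print(env)  # {'a': 1, 'b': 2, 'rest': [3, 4]}
```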
03:23:20 (I'm wondering because I'm writing exception code for function application, and during function application, if I do a non-tail call a return frame will be on the return stack indicating who did the calling, but if I do a tail call, the return frame on the top of the return stack will be the last non-tail call) 03:23:52 no 03:24:11 because I need to resume the call itself, tail or not 03:26:08 Just so. 03:26:16 The calling frame should be the caller's frame. 03:26:33 er, that sounds stupid 03:26:55 The continuation of an exception handler is the (middle of the) last procedure invoked before the exception was raised. 03:27:11 So yes, you are right. 03:28:48 Note that not all exceptions are in fact continuable, and typically errors are not. 03:29:10 The point of tail calling is that when foo tail-calls bar, foo's frame is eagerly reclaimed. 03:29:28 In some implementations, it's only asymptotically eagerly reclaimed. 03:30:06 For example, in Chicken, tail calls are not implemented directly; rather, when the stack fills up, it's reclaimed in bulk. 03:30:54 there's no reason not to implement tail calls directly in my implementation, since my scopes stack is just a linked list 03:31:08 and I'm implementing everything on the Haskell heap 03:31:48 *jcowan* nods. 03:31:50 Sure. 03:32:03 Chicken is one of the highest-performance implementations 03:32:51 okay, so for errors there is no reason for me to have them be continuable 03:33:29 Indeed. But that doesn't mean you can toss away the context instantly, because guard clauses run (mostly) in the raiser's context. 03:34:24 See the discussion of guard on p. 20 03:34:41 It just means there is no way back.
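The point above that foo's frame is eagerly reclaimed when foo tail-calls bar can be illustrated, in a host language without native tail calls, by trampolining: every tail call returns a thunk, and a driver loop bounces it, so the stack never grows. A sketch in Python (an assumed structure for illustration, not the implementation under discussion):

```python
# Sketch: when the host language lacks proper tail calls, the effect of
# eagerly reclaiming the caller's frame can be had with a trampoline.

def trampoline(f, *args):
    result = f(*args)
    while callable(result):
        result = result()        # each bounce runs in constant stack space
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)   # tail call: caller's frame is gone

print(trampoline(countdown, 100000))  # no RecursionError
```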
03:35:06 So the static context can be discarded, but the dynamic context cannot be unwound. 03:35:18 Is this going to be a classical tree-walking interpreter, or a bytecode interpreter? 03:36:11 sort of in between 03:36:55 it's not compiling to binary code that could be saved to a file, but it is compiling to virtual instructions that operate on a (many) stack machine 03:37:30 but within these instructions there can be things like, say, arbitrary values (e.g. a whole list could be pushed onto the argument stack with a single push-constant instruction) 03:40:11 *jcowan* nods. 03:40:12 Sure. 03:40:40 Sounds like it should be quite efficient. 03:42:15 I'm trying to implement as much code that is going to be used repeatedly in Haskell 03:42:21 as the Haskell is getting compiled to native code 03:42:32 Okay. 03:42:53 so it's not a VM designed to be used in a general fashion 03:42:55 Are you piggybacking on Haskell numbers? 03:43:19 I'm using Haskell Integers and Rationals for exact numbers and Doubles for inexact numbers 03:43:27 I'm not really planning on implementing complex numbers 03:43:45 (Integers are internally represented as word-size numbers when small and as bignums when large) 03:44:32 I thought Haskell already provided complex numbers 03:45:57 if I provide complex numbers, they'd probably have to be exact-only, as I don't think there is support for many inexact number routines for complex numbers (e.g. sin takes and provides floating-point values) 03:47:34 (I don't feel like implementing various numerics routines for complex numbers myself) 03:53:54 Look at https://code.google.com/p/chibi-scheme/source/browse/bignum.c starting at "complex numbers" 03:54:05 It should be no problem to make them from that. 03:54:43 It's very clear even if you don't really know C.
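The "sort of in between" design described above, compiled virtual instructions over a stack machine where an instruction can carry an arbitrary constant, might look roughly like this (Python; the opcode names are made up for illustration):

```python
# Sketch of the "in between" design: the compiler emits virtual instructions
# for a stack machine, and an instruction may carry an arbitrary constant,
# e.g. a whole list pushed with a single push-constant instruction.

PUSH_CONST, ADD = "push-const", "add"

def run(program):
    stack = []
    for op, *args in program:
        if op == PUSH_CONST:
            stack.append(args[0])      # the operand can be any value at all
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

# Push the list (1 2 3) with one instruction, then compute 2 + 3.
print(run([(PUSH_CONST, [1, 2, 3]),
           (PUSH_CONST, 2),
           (PUSH_CONST, 3),
           (ADD,)]))  # [[1, 2, 3], 5]
```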
03:56:17 there is support in Haskell for complex numbers - I just am not aware of there necessarily being support for various numeric operations like sin or log for them 03:57:47 wait, I think there is 03:58:12 okay, maybe I will implement them after all 03:58:26 :) 03:59:06 We are all still waiting with bait on our breath for a Scheme that implements native quaternions 03:59:44 from the looks of it, I might be able to implement both exact and inexact complex numbers 04:00:08 no, I can't 04:01:10 because while I can parameterize Complex against things other than Float or Double, the type in question has to be an instance of RealFloat for Complex a to be an instance of Num 04:01:13 Right. 04:01:25 But lots of Schemes have no exact complex number support. 04:01:57 See http://trac.sacrideo.us/wg/wiki/NumericTower for what is out there. 04:02:52 Also http://trac.sacrideo.us/wg/wiki/ComplexRepresentations 04:05:21 okay, so they're quite heterogeneous, but there are a good few that support only inexact complex numbers 04:05:52 and many which lack complex numbers altogether 04:06:02 Indeed 04:06:21 One of the major limits of Scheme is that there is no standard or even semi-standard way to extend the numeric tower 04:06:35 And since it is about the only exception to Scheme's relentless monomorphism, that's rather a pity. 04:06:52 in Haskell you can do that because there are a number of different numerics type classes that anyone can implement 04:06:57 *jcowan* nods. 04:07:09 OTOH, most people don't need to go outside the existing scope. 04:07:49 is that page (ComplexRepresentations) right about c/c++? 04:08:05 wouldn't _Complex int be considered exact? 04:08:16 or not, because its precision is limited 04:09:27 Yeah, it would be. My bad.
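The exact-only alternative mentioned above, complex numbers over exact rationals (which Haskell's Data.Complex rules out via its RealFloat constraint), is straightforward to hand-roll for the arithmetic operations at least. A sketch (Python, using Fraction for exact rationals; ExactComplex is a hypothetical type, not from any implementation discussed):

```python
# Sketch: exact complex numbers as pairs of exact rationals, the representation
# that Haskell's Data.Complex rules out (its Num instance requires RealFloat)
# but which is straightforward to implement directly for +, *, etc.
from fractions import Fraction

class ExactComplex:
    def __init__(self, re, im=0):
        self.re, self.im = Fraction(re), Fraction(im)
    def __add__(self, other):
        return ExactComplex(self.re + other.re, self.im + other.im)
    def __mul__(self, other):
        # (a+bi)(c+di) = (ac - bd) + (ad + bc)i, all exact
        return ExactComplex(self.re * other.re - self.im * other.im,
                            self.re * other.im + self.im * other.re)
    def __eq__(self, other):
        return self.re == other.re and self.im == other.im
    def __repr__(self):
        return f"{self.re}+{self.im}i"

i = ExactComplex(0, 1)
assert i * i == ExactComplex(-1)   # i squared is exactly -1
print(ExactComplex(Fraction(1, 3)) + ExactComplex(Fraction(1, 6), 1))
```

The transcendental routines (sin, log) are the part that genuinely needs inexact arithmetic, which is why exact-only support stops at the field operations.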
04:09:46 oh, okay 04:10:00 In C, anyway, complex int is not a legal type 04:11:14 that's what I was thinking 04:11:37 I was thinking "what extension to C *is* that, as last time I checked C didn't have parametric polymorphism..." 04:11:43 in c99 `_Complex int` is. and c++ has std::complex. 04:11:58 *tabemann* is used to programming in ANSI C, not C99 04:13:00 *tcsc* hasn't written ansi c in a long time 04:13:27 nobody uses _Complex in c99 though. it's just too weird looking. 04:13:34 well not *exactly* ANSI C, as most of the C implementations I've used support at least //, and many also support putting variables places other than tops of blocks 04:13:37 Yes, but there seems to be no guarantee that std::complex actually works; the only guaranteed supported types are float, double, and long double. 04:13:53 By "ANSI C" you mean ISO C89? 04:13:59 yeah 04:14:00 wow, that's so typical of c++ 04:14:17 At least that's true of C++98, I'm not sure about later versions 04:14:52 "The effect of instantiating the template complex for any other type is unspecified." 04:16:05 Does anyone know if _Complex int actually *works*? 04:16:05 well, c++11 brought c99's complex.h in, so `_Complex int` might work. 04:16:08 in c99? 04:16:57 for some reason the support for nesting functions in C++11 just does not seem right, since it doesn't *actually* solve the funarg problems... 04:17:45 tabemann: it sort of does. 04:18:04 jcowan: no, i stand corrected 04:18:13 _Complex int seems to be a gcc extension. 04:18:31 does C11 support complex numbers? 04:18:40 zacts: they became optional 04:18:50 ok 04:19:53 tcsc: how does C++11 support returning functions from their containing functions, and having the returned functions (properly) access the scope in which they were defined?
04:20:12 tabemann: they have a capture list 04:20:27 you list what's copied, what's captured by reference, etc. 04:20:42 wait 04:20:47 so you can do this?: 04:21:06 you have two different functions, and a shared variable in their defining context 04:21:18 you put the two functions in a struct and return that struct from the function in which they were defined 04:21:34 and then the two functions can both modify that shared variable and see each other's changes? 04:21:39 yes 04:21:41 well 04:21:44 if it's on the heap 04:22:01 otherwise what happens will be total chaos 04:22:14 exactly - it doesn't solve at least one of the funarg problems 04:22:29 In C11 there are still only the three complex number types 04:22:29 as if you solved both funarg problems you should be able to do that 04:23:22 This is why everyone hates Lispers. 04:23:33 They are always saying, "Yes, yes, we solved that problem before you or your parents were born." 04:23:44 Well, I exaggerate about the parents. 04:24:10 mind you lisps for a long time didn't bother with solving the funarg problems and just used dynamic scope instead 04:24:19 lexical scope only came along with scheme 04:24:32 Well, sort of. In the interpreter you had dynamic scope, but lexical scope in the compiler goes right back to Lisp 1.5 04:24:40 Scheme was the first to make things consistent. 04:26:06 It was also the first Lisp to abolish dynamic scope altogether, though it has now returned in the first-class form of parameters. 04:27:38 when you say lisps before scheme supported lexical scope, did they support shared mutable state in functions returned from their defining scope? 04:28:01 (i.e. did they solve the upwards funarg problem?) 04:28:27 tabemann: http://ideone.com/5aurCA 04:29:49 ah, they use make_shared - that's kinda cheating 04:29:59 tabemann: how so?
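The scenario being asked about, two escaping functions sharing one mutable variable from their defining context, is exactly what a language that solves the upward funarg problem supports directly. For instance, in Python (standing in for such a language; this is not the C++ example linked above):

```python
# Sketch of the upward-funarg case under discussion: two closures returned
# from their defining function, sharing one mutable variable and seeing each
# other's writes.  Languages that solve the upward funarg problem put the
# shared cell on the heap automatically (here, Python's closure cells).

def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    def current():
        return count
    return increment, current   # both escape their defining scope

increment, current = make_counter()
increment()
increment()
print(current())  # 2: current sees increment's writes to the shared cell
```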
04:30:12 (and i just wrote that) 04:30:41 it's not actually solving the funarg problem, because you're explicitly putting your value in the heap 04:31:48 the value occupies memory which needs to be shared between the two closures. 04:33:32 you could argue that it is equivalent to, say, a ref cell in an ML... 04:33:34 I don't think the distinction between lexical and dynamic scope was really clearly understood until 1975 04:33:58 tabemann: Indeed, that is a common implementation of variables subject to set!. 04:34:02 tabemann: do they not solve the funarg problem? 04:36:06 tcsc: I'd say it's solved by them by copying all local name bindings used in functions declared locally into the closures themselves, i.e. there is no locally declared mutable state in the first place 04:36:19 See http://en.wikipedia.org/wiki/Man_or_boy_test for a frightening look at what can happen with only downward funargs. 04:36:47 whereas in C++ you *do* have locally declared mutable state, and that isn't handled properly with upward funargs 04:37:20 jcowan: I could never wrap my brain around that example myself 04:37:51 Nor I, but the Common Lisp version is very clear 04:37:53 http://rosettacode.org/wiki/Man_or_boy_test#Common_Lisp 04:41:11 of course with the likes of algol 60, implementing downwards funargs is far more trivial than implementing upwards funargs 04:41:11 tabemann: i guess, but c++ programmers are already used to not holding on to pointers on the stack for too long, and so the cost of solving that (global gc, right?)
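The Man or boy test linked above, transcribed into Python for concreteness. This follows the usual published translations of Knuth's Algol 60 original; the documented expected value for k=10 is -67, and the test exercises exactly the downward funargs, call-by-name-style thunks, and shared mutable state under discussion.

```python
# Knuth's man-or-boy test (from the page linked above) in Python.  A compiler
# with correct closures and shared mutable captures yields -67 for k=10.
import sys
sys.setrecursionlimit(10000)   # the recursion is deep for its size

def a(k, x1, x2, x3, x4, x5):
    def b():
        nonlocal k             # b mutates a's k, and is passed downward
        k -= 1
        return a(k, b, x1, x2, x3, x4)
    return b() if k > 0 else x4() + x5()

print(a(10, lambda: 1, lambda: -1, lambda: -1, lambda: 1, lambda: 0))  # -67
```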
wouldn't be worth it 04:41:23 as all you have to do is package a function pointer with a pointer to your stack frame 04:41:33 the hard part is that Algol 60 had call-by-name 04:42:02 (for the record downwards funargs work too: http://ideone.com/WtlgpR ) 04:42:13 GC wouldn't help with holding onto stack pointers. If you do that, any (reasonable) C implementation will hose you. 04:42:13 that is downwards, right? or is that upwards 04:42:28 Downwards is passing functions as arguments, upwards is returning them. 04:42:51 C has both downward and upward funargs at the expense of not having nested procedures. 04:43:32 except as a gcc extension (which only has upwards i think) 04:43:38 traditionally languages with nested functions often supported downwards funargs but not upwards funargs, e.g. Pascal 04:43:41 you mean downwards 04:43:49 yes. 04:43:54 i do. 04:47:08 i mean, if anything c++11 just proves that all languages asymptotically approach lisp as they age 04:47:28 common lisp. half of it. and a poor approximation at that. 04:47:49 If the First through Ninth Rules are ever discovered, programming will be revolutionized. 04:48:03 it does seem that over time closures or at least anonymous functions have become more popular, e.g. they're adding them to Java 8 04:48:09 hah 04:48:22 They were already present, a bit disguised, in Java 5 04:48:32 anonymous classes, yeah 04:48:41 delimited continuations are all the rage these days too 04:49:11 Java gets around having to really try hard to implement the downwards funarg problem by forcing you to make references to variables in your scope that are final 04:49:16 jcowan, what rules are these?
04:49:18 We will have a delimited continuations package in R7RS-large 04:49:27 and it shouldn't be *that* hard, considering that Java does automatic heap allocation *already* 04:49:57 kvda: tabemann quoted Greenspun's Tenth Rule (about any application program winding up with a poor implementation of half of CL inside) 04:50:06 Greenspun's other Rules are not known. 04:50:07 tabemann: well, the implementation as is is really easy 04:50:21 jcowan, haha 04:50:23 just copy the closed-over pointers 04:51:00 yeah 04:51:03 the final Object x[] = { closed }; thing just explicitly heap allocates an array holding closed 04:51:34 "Sorry, Han-Wen, but there aren't 9 preceding laws. I was just trying to give the rule a memorable name." --Philip Greenspun 04:53:52 tcsc: what I meant is it shouldn't have been hard for the Java people to have properly implemented shared mutable variables, considering that the many people implementing languages such as Scheme, Common Lisp, and even the likes of Lua and JavaScript have done so successfully 04:55:30 i don't disagree, i was only saying that the way they did it makes a lot of sense 04:55:42 not for the user 04:56:47 not really sure why the spec doesn't force implementations to do it though.
04:58:35 I mean, implementation is pretty simple 04:59:05 you automatically detect which shared cells are being modified by any of the sharing parties, and put them in ref cells in the heap, transparently 04:59:42 you can determine this at runtime, or you can determine this statically and encode this in the JVM bytecode 05:01:15 it's not *as* simple as the "let's put all the variables used in nested functions in the closures themselves", and probably slower, but that approach naturally works better for languages that don't *have* mutable local variables 05:01:28 whereas it's pretty incongruous for a language which does have mutable local variables 05:03:06 i think i would just have the compiler emit a `final Object[]` or whatever with the captures and avoid new bytecodes. i think that would work 05:03:47 also would be better for locality and less likely to interfere with the memory model 05:05:04 actually, you probably meant that by encode this in the JVM bytecode 05:05:29 well the way you mention is a way that it could be implemented *without* modifying the JVM bytecode *or* the runtime 05:05:41 and is definitely a way I see it being implemented in JVM languages 05:06:54 okay, so you did mean modify the bytecode. yeah. i think the jvm spec would just say that this has to work and leave it up to implementations to do right. 05:07:16 sort of like the way they talk about gc.
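The `final Object[]` idiom being discussed, rewriting a mutated capture into a one-element heap cell that all the closures share, can be mimicked directly (a Python list standing in for the emitted array; this is a hypothetical illustration, not generated compiler output):

```python
# Sketch of the transformation under discussion: a compiler (or, in Java, the
# programmer) rewrites a mutated captured variable into a one-element heap
# cell, so closures share the cell rather than a stack slot.  This mirrors
# the `final Object[] x = { closed };` idiom.

def make_counter_compiled():
    count = [0]                 # the emitted "cell": a final one-element array
    def increment():
        count[0] += 1           # every mutation goes through the cell
        return count[0]
    def current():
        return count[0]
    return increment, current

inc, cur = make_counter_compiled()
inc(); inc(); inc()
print(cur())  # 3
```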
05:08:17 I don't see why the people implementing the Java language couldn't've done things the way you mentioned 05:09:04 I mean they could have created, transparently, an inner class for each function containing nested functions or (pre-Java 8) anonymous classes sharing mutable state with their scope 05:09:26 and put all the shared mutable state in there, and new-ed it transparently as a final 05:10:15 maybe because it would make it harder to reason about memory usage and the lifetimes of variables 05:11:29 that's a poor excuse though 05:11:50 that hasn't been a problem with many languages that *do* properly solve the upwards funarg problem with shared mutable state 05:12:24 it sounds like one that would be given by a vm engineer and not a language designer 05:13:21 on the other hand java tries/tried to compete with c++ so who knows 05:14:44 at least the designers of Java have a little more concern for /safety/ than the designers of C++ 05:15:16 not to mention language size 05:16:40 I mean, if they're not going to properly share mutable state between parent functions and child functions in C++, they probably shouldn't allow the parent or the child functions to share variables that one of them mutates... 05:17:33 at least c++11 lambdas work as the designers intended 05:18:31 unlike other c++11 features 05:19:25 noexcept: invented to avoid the runtime check that a c++98 throw() declaration has. requires a runtime check. 05:19:26 *tabemann* hasn't looked enough at C++11 to really say, but he isn't surprised 05:21:40 both C++ and Java just seem so...
ad hoc in how they do things... 05:21:42 that's the worst offender off the top but c++98 was terrible there. templates, throw(), auto_ptr (sort of)... 05:22:02 CL is pretty ad hoc too. 05:22:03 java was at least designed. c++ just happened 05:22:31 jcowan: hence why I don't like CL nearly as much as Scheme or Haskell 05:22:41 i've never written much common lisp but it always seemed pretty... enormous and chaotic 05:25:14 okay, well, I should get to bed right now 05:25:34 didn't do as much implementation on scmhs today as planned, but figured out a good bit on what to implement 05:30:43 maybe next time i try to implement a scheme it should compile to c++11 05:31:14 that sounds so much easier than it probably would be 05:33:06 no, i don't think i'd get anything for free by doing that.
12:15:46 tcsc, you would get an enormous requirement for a bloated toolchain for free by doing that!
23:37:50 Hi. I wrote code generating all permutations in Scheme (I didn't like the HtDP one). If anybody has an idea how the ugly (if (null? seq) ...) can be avoided, I would be grateful. 23:37:55 http://bpaste.net/show/117111/ 23:39:05 pumpkin360: was something wrong with the HtDP one? 23:39:06 all other thoughts about the code are also welcome.
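[Editor's note: the paste link above is dead; as a minimal sketch of the shape being discussed (not pumpkin360's actual code), a direct recursive `permutations` might look like the following. The `(null? seq)` check can't be removed outright, but returning `'(())` — one permutation of the empty list, the empty permutation — makes the base case fold cleanly into the recursion. `remove-first` is a helper defined here, not a standard procedure.]

```scheme
;; All permutations of a list. The permutations of '() are '(()),
;; so the base case seeds the map/append recursion naturally.
(define (permutations seq)
  (if (null? seq)
      '(())
      (apply append
             (map (lambda (x)
                    (map (lambda (p) (cons x p))
                         (permutations (remove-first x seq))))
                  seq))))

;; Remove the first occurrence of x from seq (helper, assumed name).
(define (remove-first x seq)
  (cond ((null? seq) '())
        ((equal? x (car seq)) (cdr seq))
        (else (cons (car seq) (remove-first x (cdr seq))))))

;; (permutations '(1 2 3)) => ((1 2 3) (1 3 2) (2 1 3) (2 3 1) (3 1 2) (3 2 1))
```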
23:39:50 (note that I haven't looked at whatever part of HtDP you're specifically referring to in some time) 23:40:20 carleastlund: It was extremely long. I know that abstraction is a good thing, but isn't a function generating all permutations a sufficient level of abstraction? 23:40:36 carleastlund: the very beginning. 23:41:36 pumpkin360: I'm certain there's nothing about permutations at "the very beginning"; the book doesn't even cover lists until a ways in. And building smaller abstractions can be very useful in systematically solving problems. Each layer of "abstraction" is a problem that can be deferred and solved separately. 23:41:41 but (map ...) etc. wasn't introduced yet, so maybe that's why they did it so awkwardly 23:44:10 Let me just say that, perhaps because you designed your solution as a single working piece, I can't read it at all. All the nested maps, appends, etc., just make my eyes go crossed. And I'm used to being able to grade student solutions of problems like this at a glance. So honestly, I think you'd gain a lot from learning to do things the HtDP way -- so that some day you'll hit a happy medium, not so that 23:44:11 you'll do things quite that laboriously forever. 23:45:20 carleastlund: So we should keep as many layers of abstraction as we can? Those layers make the code much longer, and a function like (permutations ...) seems not so hard to test. Also, nobody will ever try to create any other function from it. 23:45:55 he didn't say "as many layers as you can" 23:46:05 carleastlund: readability is the "only" downside.
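[Editor's note: the layered decomposition carleastlund is advocating might look like this sketch — one small, independently testable helper per idea, rather than nested maps and appends in one expression. The helper names are illustrative, not from HtDP or the paste.]

```scheme
;; All ways to insert x into lst: at the front, or somewhere in the rest.
(define (insert-everywhere x lst)
  (if (null? lst)
      (list (list x))
      (cons (cons x lst)
            (map (lambda (p) (cons (car lst) p))
                 (insert-everywhere x (cdr lst))))))

;; Permutations of lst: insert (car lst) everywhere into each
;; permutation of (cdr lst).
(define (permutations lst)
  (if (null? lst)
      '(())
      (apply append
             (map (lambda (p) (insert-everywhere (car lst) p))
                  (permutations (cdr lst))))))
```

Each helper can be checked at the REPL on its own — e.g. `(insert-everywhere 1 '(2 3))` gives `((1 2 3) (2 1 3) (2 3 1))` — which is the point being made about deferring and solving one problem at a time.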
23:46:15 pumpkin360: You should keep enough structure in the code that it is easy to debug, maintain, read, modify, and adapt. Right now, your function is a black box that I can use as long as it works, but if I ever want another function like it, or to fix a problem with it, I'm hosed. 23:46:41 pumpkin360: readability is _everything_. I wouldn't hire a programmer who couldn't write readable code. There's no "only" about readability. 23:47:07 Code is for reading first. Sometimes computers can run it too, which is nice. 23:47:33 I'm not saying this to be abstract and ivory-tower-y. Write code for reading first, and the running part will be better for it. 23:47:42 what if you're sacrificing performance for readability? 23:48:28 xilo: I've never found that I had to make code unreadable to make it perform. Any hand-optimization can be made readable. 23:48:56 also... how do i figure out which scheme implementation to use? :X there's so many... 23:49:08 xilo: Racket. (shameless plug of my favorite) 23:49:25 just use whichever one provides the libraries that you need to get useful things done, which in many cases will end up being Racket 23:49:28 carleastlund: why? compared to something like Guile or Chicken 23:50:03 xilo: It all depends on your goals for the programming you're doing. Racket is a very powerful general-purpose language. On the other hand, if you're doing some specific thing another is good at -- like embedded code -- then use that one instead. 23:50:18 i mean for general use 23:50:41 xilo: Then Racket. It's definitely designed with general use in mind, which largely means it comes with a lot more libraries than just the Scheme standard and SRFIs. 23:50:52 ic 23:51:00 xilo: I had this problem for a long time. And, as Carl said, Racket. You can pick R5RS and then it is compatible with the specification like no other implementation I came across. The community is awesome.
Some nice GUI if you don't like the terminal. Lots of nice libs. 23:51:16 mmk thanks 23:51:50 i've been trying to learn scheme and i'm like 23:52:00 uh, there's 3 general-purpose and several other implementations lol 23:52:40 there are DOZENS of implementations. Many people make them for educational purposes 23:52:50 yeah i've noticed 23:52:50 and only some are decent 23:52:55 there's a lot of educational ones 23:53:01 then there's some for embedded 23:53:15 but the big 3 for general real-world stuff seem to be Guile/Chicken/Racket 23:53:32 does Racket come with a good library manager? 23:53:41 there are more. 23:55:26 pumpkin360: why do you need to check for (null? seq)? 23:56:00 estevocastro: If I don't, I don't know where to begin the list. 23:56:22 xilo: yes, Racket has a package system for automatic library installation; we're in the process of breaking the core implementation into packages now, so the system is getting a lot of attention and improvements, too 23:56:51 estevocastro: and it does not start the map iteration on the lowest level of recursion. 23:57:16 carleastlund: cool deal 23:57:38 carleastlund: when can the update be expected? 23:58:19 pumpkin360: it won't be in the release we're pushing out now, so probably 3 more months before we release the split-up version, but development versions are available in the meantime for anyone who wants them
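[Editor's note: the package system carleastlund describes is driven from the command line via `raco pkg`; a few representative invocations are sketched below. The package name shown is only an example.]

```shell
# Racket's package tool (the "split-up version" discussed above
# later shipped with packages managed this way)
raco pkg install srfi-lib     # install a package and its dependencies
raco pkg show                 # list installed packages
raco pkg update --all         # update everything installed
```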