00:01:53 ASau` [~user@p54AFEC23.dip0.t-ipconnect.de] has joined #scheme 00:02:20 -!- ASau [~user@p5083D401.dip0.t-ipconnect.de] has quit [Ping timeout: 246 seconds] 00:03:07 -!- amgarching [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has quit [Quit: Konversation terminated!] 00:03:14 amgarching [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has joined #scheme 00:06:54 -!- arubincloud [uid489@gateway/web/irccloud.com/x-pdiosoevtubvyybz] has quit [] 00:07:48 tiksa [~tiksa@gateway/tor-sasl/tiksa] has joined #scheme 00:18:51 -!- duncanm [~duncan@a-chinaman.com] has quit [Ping timeout: 264 seconds] 00:24:32 duncanm [~duncan@a-chinaman.com] has joined #scheme 00:24:32 la la la 00:33:58 zRecursive [~czsq888@183.13.193.150] has joined #scheme 01:01:42 -!- amgarching [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has quit [Ping timeout: 244 seconds] 01:01:43 -!- mrowe is now known as mrowe_away 01:07:23 -!- ASau` is now known as ASau 01:17:20 fridim__ [~fridim@bas2-montreal07-2925317577.dsl.bell.ca] has joined #scheme 01:19:37 -!- Tuplanolla [~Put-on-la@dsl-jklbrasgw2-54f8aa-52.dhcp.inet.fi] has quit [Ping timeout: 240 seconds] 01:22:25 -!- kwmiebach__ [sid16855@gateway/web/irccloud.com/x-uzzcteaktdyjuige] has quit [Ping timeout: 240 seconds] 01:22:38 rudybot: seen duncanm 01:22:38 *offby1: duncanm was seen quitting one hour ago, saying "Ping timeout: 264 seconds", and then duncanm was seen joining in #scheme fifty-eight minutes ago 01:24:59 kwmiebach__ [sid16855@gateway/web/irccloud.com/x-vcpdrysmpiupshud] has joined #scheme 01:25:37 arubin [~arubin@99-114-192-172.lightspeed.cicril.sbcglobal.net] has joined #scheme 01:27:56 -!- dpk [uid15387@gateway/web/irccloud.com/x-hkiqfiwakytvvnjj] has quit [Quit: Connection closed for inactivity] 01:28:01 -!- mrowe_away is now known as mrowe 01:37:19 -!- jlongster [~user@pool-173-53-114-190.rcmdva.fios.verizon.net] has quit [Ping timeout: 272 seconds] 01:39:27 jeapostrophe [~jay@216-21-162-70.slc.googlefiber.net] has joined #scheme 01:39:27 -!- 
jeapostrophe [~jay@216-21-162-70.slc.googlefiber.net] has quit [Changing host] 01:39:27 jeapostrophe [~jay@racket/jeapostrophe] has joined #scheme 01:42:30 amgarching [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has joined #scheme 01:44:14 Justor [~nevzets@unaffiliated/nevzets] has joined #scheme 01:44:54 -!- tenqu is now known as tenq 01:45:01 Is there a portable way to fork processes in Scheme? 01:45:03 I couldn't find any SRFI that defines simply running unix commands 01:52:35 -!- pjdelport_ [uid25146@gateway/web/irccloud.com/x-xhgtyibvkucktxgf] has quit [Quit: Connection closed for inactivity] 01:55:49 b4283 [~b4283@60-249-196-111.HINET-IP.hinet.net] has joined #scheme 01:55:58 *offby1* laughs cruelly 01:56:05 Justor: there isn't one. 01:56:11 Justor: every implementation does it differently. 01:58:20 offby1, my old friend, I like what you've done with your beard 01:59:07 I'm surprised there isn't an SRFI that standardizes this. 01:59:49 -!- amgarching [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has quit [Quit: Konversation terminated!] 01:59:49 alexei [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has joined #scheme 01:59:54 system string string ... -> string number 01:59:54 I'd take it. 02:08:36 klltkr [~klltkr@unaffiliated/klltkr] has joined #scheme 02:12:34 -!- _snits_ [~snits@inet-hqmc07-o.oracle.com] has quit [Remote host closed the connection] 02:17:13 -!- pnkfelix [~pnkfelix@bas75-2-88-170-201-21.fbx.proxad.net] has quit [Ping timeout: 272 seconds] 02:18:08 snits [~snits@inet-hqmc07-o.oracle.com] has joined #scheme 02:24:31 Rodya_ [~trav@2601:b:c400:856:ad0d:174f:7c1c:b722] has joined #scheme 02:24:57 -!- davexunit [~user@fsf/member/davexunit] has quit [Remote host closed the connection] 02:25:12 manamonghippos [~manamongh@lan.cis.uab.edu] has joined #scheme 02:28:44 akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has joined #scheme 02:29:18 hey guys, i am trying to set up for going through SICP. what scheme interperter should i be using?
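An aside on the process-spawning question above: since no SRFI covers it, portable code typically dispatches on the implementation, e.g. via R7RS `cond-expand`. A minimal sketch; the feature identifiers and `system` procedure names are assumptions to verify against each implementation's manual:

```scheme
;; Minimal sketch: no SRFI standardizes running unix commands, so we
;; dispatch per implementation with cond-expand (R7RS).  Feature ids
;; and procedure names below are assumptions; check each manual.
(cond-expand
  (guile
   (define (run-command cmd) (system cmd)))   ; Guile's core `system'
  (chicken
   (import (chicken process))                 ; CHICKEN 5 module
   (define (run-command cmd) (system cmd)))
  (else
   (define (run-command cmd)
     (error "run-command: no known process support" cmd))))

(run-command "echo hello")
```

This matches the `system string -> number` shape wished for in the chat, minus any portable way to capture output.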
02:29:29 *interpreter 02:30:22 frkout_ [~frkout@101.110.31.120] has joined #scheme 02:31:06 akp: it is VERY hard for us to pick an appropriate Scheme implementation :( 02:31:32 ? is there a recommended one for using with the book? 02:31:50 i tried to find an mit-scheme-x86_64 package but couldn't 02:32:24 Guile, chicken (only R5RS)? 02:32:48 -!- frkout [~frkout@101.110.31.250] has quit [Ping timeout: 240 seconds] 02:33:43 is there a difference between using 1.8 and 2.0? 02:35:09 akp, sicp dates from a time when scheme was somewhat different, I guess; I don't think sicp calls it scheme as well 02:35:09 It just calls it "our lisp" or something 02:35:56 I'd also say that in today's ecosystem it teaches somewhat improper programming practices, such as not using strong typing properly and making data structures in conses rather than named records. 02:35:57 no, they call it scheme. 02:36:21 what would you recommend then? 02:37:46 akp: You could use Racket with the SICP support package. 02:38:04 oh, where can i get the SICP support package? 02:38:34 akp: http://www.neilvandyke.org/racket-sicp/ 02:38:48 thank you. 02:39:15 akp: Seems like it basically involves clicking a checkbox and writing "#lang planet neil/sicp" at the top of your file. 02:39:46 yea, i am going through the directions now =) 02:39:55 akp, hmm, they do? 02:40:38 I remember when going through it that I was a bit amused that they always called it "our lisp" or something like that, as if it was so old that Scheme hadn't yet been given a name. 02:40:54 Justor: yeah. they call it Scheme, the dialect of Lisp that we use - see http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-7.html near the bottom 02:41:17 maybe you read the first edition? 02:42:12 But I don't know, SICP is a bit like TAOCP: it teaches algorithms and program structure, but it fails to make clear the wondrous things of module systems and strong typing that ensure your code remains maintainable; it doesn't actually teach you to make 'software'.
02:42:12 I read the online version about 6 years ago I think. 02:42:13 But yeah, they do call it scheme there 02:42:35 i am hoping to fix my basics with this i guess. 02:42:50 -!- alexei [~amgarchin@p4FD61658.dip0.t-ipconnect.de] has quit [Ping timeout: 264 seconds] 02:42:52 i can half-assed program. i can't really do anything difficult it seems 02:43:02 Justor: I've never seen an introductory anything teach you how to be good at anything. 02:43:46 manamonghippos, hmm, Real World OCaml and Learn You a Haskell are really good in my opinion in explaining the module system and how a piece of software should be structured. 02:43:49 An actual application, rather than a program that is a single computable function that finds a solution to a problem 02:44:33 I haven't read any other teaching courses on Scheme though so maybe they do that too. 02:44:44 I feel C is really worthy of learning 02:44:57 Justor: Hmm maybe so. I'm not sure those are introductory programming books though. 02:46:07 I guess my problem with sicp is that it neither teaches you modern scheme nor proper programming facilities applicable to the modern era. It mostly explains a very theoretical level of computation theory to you, but not how to fit that into an actual piece of software. 02:46:07 manamonghippos, I suppose they don't teach you to program as much as they teach a language, yeah. 02:46:50 I'm not sure how many MIT folks care about what we think of as proper software. 02:47:59 but I get what you're saying, I agree 02:48:28 manamonghippos, well, would you think it is ever a good idea to define a rational number as simply a cons of two integers instead of a define-record-type? 02:48:32 is HtDP a better book then? 02:48:32 Scheme lists themselves are a bit of an archaism from the past that's not really that sane, meh. 02:49:15 list? should really run in constant time, and cons for lists should require its second argument to be a list.
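The cons-versus-record point being argued above can be made concrete. A small sketch using R7RS/SRFI 9 `define-record-type`; the names `make-rat`, `rat-num`, etc. are hypothetical:

```scheme
;; A rational as a bare cons: indistinguishable from any other pair.
(define (make-rat-cons n d) (cons n d))
(pair? (make-rat-cons 1 2))        ; => #t, as for any pair

;; A rational as a record (SRFI 9 / R7RS define-record-type):
;; the type is disjoint, so rational? answers #t only for rationals.
(define-record-type rational
  (make-rat n d)     ; constructor
  rational?          ; predicate
  (n rat-num)        ; accessors
  (d rat-den))

(rational? (make-rat 1 2))         ; => #t
(pair? (make-rat 1 2))             ; => #f, a distinct type
```

Disjointness is the point: with the cons representation, any two-element pair "is" a rational, so bugs pass silently through type predicates.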
akp, I don't know, never read it, but the fact that it's later makes it more likely. 02:49:55 hopefully i don't get yelled at for this, but what kind of software do some of you guys use scheme to build? 02:50:02 From what I understand it attempts to address certain flaws in SICP. 02:50:20 akp: I'll be honest, I've never shipped anything in a lisp 02:50:49 akp, you've been yelled at before eh? 02:51:10 yes, for many things. sometimes for reasons i still don't understand. 02:51:33 akp: I've written some small graphical programs, scripts, some small web services and some client/server stuff. 02:51:36 And I mostly use scheme for shell scripts really. It's my contention that dynamic typing doesn't really work for large programs, but is very good for scripts that need to be done quickly and flexibly and still have reasonable guarantees of correctness. 02:51:37 Which I might also be yelled at for. 02:52:23 akp, that's because the scheme community is filled with bitter people who are very angry that scheme is as obscure as it is and vent this at everyone who reminds them of this. 02:52:23 Well, not 'filled', but they definitely exist. 02:52:24 Hardly your fault. 02:52:49 i've been yelled at by the C people when i was much younger 02:52:55 and told to figure it out myself. 02:53:11 The X people always suck 02:54:14 does planet.racket-lang.org only let the latest clients connect or something? 02:54:37 akp: Are you getting a tcp-connect error? 02:54:42 yeah 02:55:07 akp: Ok, what you're looking for might actually be on pkgs.racket-lang.org rather than planet 02:55:08 the IDE is showing forbidden client access to planet.racket-lang.org 02:55:14 oh ok 02:55:20 They are phasing out planet iirc 02:55:42 try raco pkg install ________ 02:55:52 manamonghippos: would i put pkgs.racket-lang.org then? 02:56:18 They are, why? 02:56:18 Never used it but I thought it was a cool idea.
02:56:31 It's getting replaced with a more command-liney solution 02:56:44 I might be completely wrong here 02:57:02 Be aware that I only know a bit of what I'm talking about 02:57:11 lol 02:58:49 manamonghippos: what would i pass for install? i've tried neil/sicp and sicp 02:59:42 and it seems the IDE just gets denied access to the network for some reason. 03:05:47 oh well. seems i got it installed. thanks guys 03:06:07 akp: Sorry, how did you get it working? 03:06:16 voodoo it seems 03:06:22 i had to run the IDE as root. 03:06:29 then select run 03:06:35 exit the ide 03:06:39 and restart as my user 03:08:27 akp: I was about to say, mine worked just fine. 03:14:47 przl__ [~przlrkt@p5DCA33F0.dip0.t-ipconnect.de] has joined #scheme 03:17:48 -!- The-Mad-Pirate [The-Mad-Pi@181.165.233.90] has quit [Ping timeout: 252 seconds] 03:18:06 -!- przl [~przlrkt@p5DCA32D6.dip0.t-ipconnect.de] has quit [Ping timeout: 244 seconds] 03:27:46 yacks [~py@103.6.159.103] has joined #scheme 03:30:33 -!- ozzloy_ is now known as ozzloy 03:31:21 -!- ozzloy [~ozzloy@ozzloy.lifeafterking.org] has quit [Changing host] 03:31:21 ozzloy [~ozzloy@unaffiliated/ozzloy] has joined #scheme 03:31:57 gaz_ [~gaz@host31-53-237-202.range31-53.btcentralplus.com] has joined #scheme 03:34:26 -!- gaz__ [~gaz@host81-151-246-232.range81-151.btcentralplus.com] has quit [Ping timeout: 264 seconds] 03:34:42 -!- MichaelRaskin [~MichaelRa@195.91.224.161] has quit [Quit: MichaelRaskin] 03:36:02 -!- akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has quit [Remote host closed the connection] 03:37:14 akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has joined #scheme 03:39:28 -!- annodomini [~lambda@wikipedia/lambda] has quit [Quit: annodomini] 03:49:13 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Read error: Operation timed out] 03:53:24 eliyak [~eliyak@c-71-194-134-120.hsd1.il.comcast.net] has joined #scheme 03:53:24 -!- eliyak [~eliyak@c-71-194-134-120.hsd1.il.comcast.net] has quit [Changing host] 
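To summarise the SICP-in-Racket setup routes discussed above: the PLaneT route from the chat, plus the newer package-system route. The package name `sicp` on pkgs.racket-lang.org is an assumption, not something confirmed in the chat; verify it exists before relying on it.

```scheme
;; Route 1: PLaneT, as used in the chat (fetches over the network
;; from planet.racket-lang.org on first run, which is what failed
;; for akp above until run with sufficient permissions):
#lang planet neil/sicp

;; Route 2 (assumed): the raco-based package system replacing
;; PLaneT.  From a shell:
;;   raco pkg install sicp
;; then start your files with:
;;   #lang sicp
```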
03:53:24 eliyak [~eliyak@wikisource/Eliyak] has joined #scheme 04:01:30 -!- Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has quit [Ping timeout: 244 seconds] 04:06:01 -!- Justor [~nevzets@unaffiliated/nevzets] has left #scheme 04:09:05 -!- przl__ [~przlrkt@p5DCA33F0.dip0.t-ipconnect.de] has quit [Ping timeout: 246 seconds] 04:12:28 -!- arubin [~arubin@99-114-192-172.lightspeed.cicril.sbcglobal.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz] 04:15:44 -!- joneshf-laptop [~joneshf@128.120.118.199] has quit [Ping timeout: 246 seconds] 04:23:04 -!- Rodya_ [~trav@2601:b:c400:856:ad0d:174f:7c1c:b722] has quit [Quit: Ex-Chat] 04:30:26 -!- jao [~jao@pdpc/supporter/professional/jao] has quit [Ping timeout: 244 seconds] 04:31:21 -!- copumpkin is now known as daddy 04:31:30 -!- daddy is now known as copumpkin 04:40:35 phipes [~phipes@unaffiliated/phipes] has joined #scheme 04:41:26 -!- Kneferilis [~Kneferili@nb1-210.static.cytanet.com.cy] has quit [Ping timeout: 252 seconds] 04:48:24 -!- bjz [~bjz@125.253.99.68] has quit [Ping timeout: 240 seconds] 04:48:29 Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has joined #scheme 04:55:48 bjz [~bjz@125.253.99.68] has joined #scheme 04:58:26 annodomini [~lambda@c-76-23-156-75.hsd1.ma.comcast.net] has joined #scheme 04:58:26 -!- annodomini [~lambda@c-76-23-156-75.hsd1.ma.comcast.net] has quit [Changing host] 04:58:26 annodomini [~lambda@wikipedia/lambda] has joined #scheme 05:00:17 -!- frkout_ [~frkout@101.110.31.120] has quit [Remote host closed the connection] 05:00:24 -!- jeapostrophe [~jay@racket/jeapostrophe] has quit [Ping timeout: 244 seconds] 05:00:52 frkout [~frkout@101.110.31.250] has joined #scheme 05:13:39 arubin [~arubin@99-114-192-172.lightspeed.cicril.sbcglobal.net] has joined #scheme 05:16:33 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 05:21:14 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 264 seconds] 05:21:42 -!- tcsc 
[~tcsc@c-76-118-148-98.hsd1.ma.comcast.net] has quit [Quit: computer sleeping] 05:26:35 jerrychow [~jerrychow@58.245.253.218] has joined #scheme 05:26:58 -!- jerrychow [~jerrychow@58.245.253.218] has quit [Client Quit] 05:31:20 -!- mango_mds [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has quit [Ping timeout: 246 seconds] 05:32:59 *copumpkin* waves at emma 05:33:03 so how about that scheme 05:33:13 err, racket is exciting these days! 05:33:59 -!- araujo [~araujo@gentoo/developer/araujo] has quit [Read error: Connection reset by peer] 05:34:22 araujo [~araujo@gentoo/developer/araujo] has joined #scheme 05:36:06 -!- zjxv [~jxv@71-84-192-255.dhcp.wsco.ca.charter.com] has quit [Quit: QUIT] 05:51:18 -!- arubin [~arubin@99-114-192-172.lightspeed.cicril.sbcglobal.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz] 05:58:26 -!- fridim__ [~fridim@bas2-montreal07-2925317577.dsl.bell.ca] has quit [Ping timeout: 264 seconds] 06:00:07 -!- manamonghippos [~manamongh@lan.cis.uab.edu] has quit [Quit: Leaving] 06:00:38 -!- oleo [~oleo@xdsl-78-35-129-173.netcologne.de] has quit [Quit: Leaving] 06:05:30 jewel [~jewel@105-236-25-225.access.mtnbusiness.co.za] has joined #scheme 06:11:21 racket is good for teaching 06:11:58 -!- annodomini [~lambda@wikipedia/lambda] has quit [Quit: annodomini] 06:14:50 bacon is good for eating 06:14:54 here here! 06:15:28 zRecursive: correct...DrRacket is good for teaching 06:17:10 hiyosi [~skip_it@125.30.73.126] has joined #scheme 06:17:38 offby1: Any real application(s) developed using Racket except rudybot ? :) 06:18:28 IIRC, there is RWin which seems not active now 06:19:39 gravicappa [~gravicapp@ppp91-77-175-222.pppoe.mtu-net.ru] has joined #scheme 06:19:51 zRecursive: what do you mean "real"? 
you can always dig up some web framework and some such somewhere 06:21:01 theseb: "real" simply means what we can use to do something useful 06:22:19 -!- hiyosi [~skip_it@125.30.73.126] has quit [Ping timeout: 272 seconds] 06:23:25 zRecursive: yes, Racket is used as a scripting language in video games like The Last of Us. 06:23:56 great ! 06:24:00 (and of course there are other industrial users too) 06:25:28 I ever tested RWin before, and i am scared of its big RAM usage ! 06:33:13 rudybot: you're not a "real" application 06:33:13 ijp: i have only had to write a windows application once 06:35:02 -!- bjz [~bjz@125.253.99.68] has quit [Ping timeout: 264 seconds] 06:35:03 -!- Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has quit [Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/] 06:36:15 -!- vyv [~vyv@bas1-northbay04-1177802783.dsl.bell.ca] has quit [Ping timeout: 272 seconds] 06:36:39 rudybot: You are a "real" application 06:36:39 zRecursive: He told me in private mail roughly what his proposal involved, and it requires both changing the semantics of procedure application (to check if the operand is an fexpr) and somehow capturing and preserving the compile-time environment for use at runtime. 06:36:56 Isp-sec [~palach@89-178-42-213.broadband.corbina.ru] has joined #scheme 06:38:15 -!- theseb [~cs@74.194.237.26] has quit [Quit: Leaving] 06:43:18 vyv [~vyv@bas1-northbay04-1177802783.dsl.bell.ca] has joined #scheme 06:45:56 bjz [~bjz@125.253.99.68] has joined #scheme 06:48:32 -!- peterhil [~peterhil@dsl-hkibrasgw3-58c156-108.dhcp.inet.fi] has quit [Read error: Connection reset by peer] 06:50:03 peterhil [~peterhil@dsl-hkibrasgw3-58c156-108.dhcp.inet.fi] has joined #scheme 06:54:17 -!- gaz_ [~gaz@host31-53-237-202.range31-53.btcentralplus.com] has quit [Quit: Lost terminal] 06:54:22 rudybot, I think you know too much 06:54:23 jewel: I don't like dynamic-wind much as a cleanup routine. 
It forces you to write in CPS form pretty much, especially when used for a one-time cleanup. 06:56:36 rudybot: You never answer the question :) 06:56:39 zRecursive: One does-not need to be aware of the existence of the color of bees in the Congo, to have experiences before the proposal of such a statement that will either provide on in having interest or lack thereof. Like if taylanub likes bees, they are likely to show more interest in learning such a thing, than one who experiences something negative in-relation to them. Though this is a *very* simplified example and most 06:57:55 rudybot: You can sleep now 06:57:56 zRecursive: I optimized away (sleep 28800) last night :( 06:58:42 I wish I could optimise away that much sleep 06:58:45 rudybot: Can you sleep microseconds ? 06:58:46 zRecursive: "On the outskirts of this new language, lurking on its crimsoned frontier, will lie words that will themselves have been cast into exile - foul offgassings within a lexicon that itself stands as a towering monument to the boundlessly obscene, words that will curve backward and devour themselves, each one an afflicted universe in the process of total collapse, words that exist for microseconds before streaking, 06:59:31 (assuming the units of sleep are seconds) 06:59:44 rudybot: (sleep 0.001) ? 06:59:44 zRecursive: ok, time to sleep 07:00:19 ijp: It seems CL cannot sleep microseconds ?
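For context on the `dynamic-wind` complaint above, a minimal sketch of the one-time-cleanup idiom it refers to; `with-output-file*` is a hypothetical helper name:

```scheme
;; dynamic-wind guarantees the `after' thunk runs however the body is
;; left (normal return, raised error, or continuation escape), which
;; is why it gets pressed into service for one-time cleanups.
(define (with-output-file* path proc)
  (let ((port #f))
    (dynamic-wind
      (lambda () (set! port (open-output-file path)))     ; before
      (lambda () (proc port))                             ; body
      (lambda () (when port (close-output-port port)))))) ; after

;; Usage: the port is closed even if the body raises.
;; (with-output-file* "out.txt" (lambda (p) (write '(1 2 3) p)))
```

Note the cost being complained about: everything that must happen "inside" the protected extent has to be passed in as a thunk, which is what pushes code toward CPS-like shape; also, the `before` thunk re-runs if a captured continuation re-enters the body, which a truly one-time cleanup has to guard against.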
07:01:37 rudybot: doc sleep 07:01:38 zRecursive: your sandbox is ready 07:01:39 zRecursive: http://docs.racket-lang.org/reference/threads.html#(def._((quote._~23~25kernel)._sleep)) 07:03:06 28800s = 480min = 8h, means rudybot probably sleeps better than the average person on this channel 07:03:43 hehe 07:09:03 -!- tenq is now known as hatFolk|out 07:17:03 -!- mrowe is now known as mrowe_away 07:18:02 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 07:23:02 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 264 seconds] 07:55:53 -!- yacks [~py@103.6.159.103] has quit [Quit: Leaving] 07:58:43 frkout_ [~frkout@101.110.31.120] has joined #scheme 08:01:45 -!- vyv [~vyv@bas1-northbay04-1177802783.dsl.bell.ca] has quit [Ping timeout: 244 seconds] 08:02:17 -!- frkout [~frkout@101.110.31.250] has quit [Ping timeout: 244 seconds] 08:03:22 vyv [~vyv@bas1-northbay04-1177802783.dsl.bell.ca] has joined #scheme 08:06:03 da4c30ff [~da4c30ff@c18.adsl.tnnet.fi] has joined #scheme 08:07:30 wingo [~wingo@cha74-2-88-160-190-192.fbx.proxad.net] has joined #scheme 08:13:17 Okasu [~1@unaffiliated/okasu] has joined #scheme 08:15:22 -!- omefire [~omefire@c-50-159-45-177.hsd1.wa.comcast.net] has quit [Read error: Connection reset by peer] 08:17:16 omefire [~omefire@50.159.45.177] has joined #scheme 08:17:18 -!- omefire [~omefire@50.159.45.177] has quit [Max SendQ exceeded] 08:17:49 omefire [~omefire@c-50-159-45-177.hsd1.wa.comcast.net] has joined #scheme 08:18:47 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 08:20:05 -!- zRecursive [~czsq888@183.13.193.150] has left #scheme 08:23:38 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 264 seconds] 08:24:30 -!- klltkr [~klltkr@unaffiliated/klltkr] has quit [Quit: My MacBook has gone to sleep. 
ZZZzzz] 08:24:31 jerrychow [~jerrychow@58.245.253.218] has joined #scheme 08:37:50 yacks [~py@122.179.97.23] has joined #scheme 08:38:53 stepnem [~stepnem@77.78.117.8] has joined #scheme 08:40:11 -!- jerrychow [~jerrychow@58.245.253.218] has quit [Quit: Leaving] 08:46:38 -!- jewel [~jewel@105-236-25-225.access.mtnbusiness.co.za] has quit [Ping timeout: 246 seconds] 08:58:59 -!- eliyak [~eliyak@wikisource/Eliyak] has quit [Quit: That's it, I quit!] 09:06:12 -!- frkout_ [~frkout@101.110.31.120] has quit [Remote host closed the connection] 09:06:35 przl [~przlrkt@p4FF5BCD3.dip0.t-ipconnect.de] has joined #scheme 09:06:48 frkout [~frkout@101.110.31.250] has joined #scheme 09:17:30 asc [~Charkov@c5.edrana.lt] has joined #scheme 09:19:37 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 09:20:30 MichaelRaskin [~MichaelRa@195.208.66.22] has joined #scheme 09:20:42 dpk [uid15387@gateway/web/irccloud.com/x-oebmpkinrysbjgvg] has joined #scheme 09:24:25 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 244 seconds] 09:29:19 add^_ [~user@m176-70-197-83.cust.tele2.se] has joined #scheme 09:43:53 Tuplanolla [~Put-on-la@dsl-jklbrasgw2-54f8aa-52.dhcp.inet.fi] has joined #scheme 10:01:12 pjdelport_ [uid25146@gateway/web/irccloud.com/x-cycsfghmqyuhlmsq] has joined #scheme 10:05:53 -!- Isp-sec [~palach@89-178-42-213.broadband.corbina.ru] has quit [Ping timeout: 272 seconds] 10:20:26 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 10:24:59 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 246 seconds] 10:41:03 -!- phipes [~phipes@unaffiliated/phipes] has quit [Quit: My MacBook has gone to sleep. 
ZZZzzz] 11:01:03 pnkfelix [~pnkfelix@89.202.203.51] has joined #scheme 11:15:54 cleatoma [~cleatoma@host31-52-140-136.range31-52.btcentralplus.com] has joined #scheme 11:28:06 alexei [~amgarchin@p4FD62903.dip0.t-ipconnect.de] has joined #scheme 11:30:50 -!- ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has quit [Ping timeout: 264 seconds] 11:36:39 shivani_ [uid11848@gateway/web/irccloud.com/x-aaohilxvzadrdmdb] has joined #scheme 11:37:55 -!- peterhil [~peterhil@dsl-hkibrasgw3-58c156-108.dhcp.inet.fi] has quit [Read error: Connection reset by peer] 11:41:59 -!- wingo [~wingo@cha74-2-88-160-190-192.fbx.proxad.net] has quit [Ping timeout: 246 seconds] 11:42:13 peterhil [~peterhil@dsl-hkibrasgw3-58c156-108.dhcp.inet.fi] has joined #scheme 11:42:50 -!- przl [~przlrkt@p4FF5BCD3.dip0.t-ipconnect.de] has quit [Ping timeout: 264 seconds] 11:44:36 joneshf-laptop [~joneshf@98.255.30.38] has joined #scheme 11:52:55 -!- Tuplanolla [~Put-on-la@dsl-jklbrasgw2-54f8aa-52.dhcp.inet.fi] has quit [Ping timeout: 272 seconds] 11:54:20 wingo [~wingo@cha74-2-88-160-190-192.fbx.proxad.net] has joined #scheme 11:59:15 mango_mds [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has joined #scheme 12:00:09 ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has joined #scheme 12:01:04 jewel [~jewel@105-237-9-187.access.mtnbusiness.co.za] has joined #scheme 12:01:09 -!- yacks [~py@122.179.97.23] has quit [Ping timeout: 272 seconds] 12:09:26 przl [~przlrkt@p4FF5BCD3.dip0.t-ipconnect.de] has joined #scheme 12:19:34 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 12:25:59 -!- aoh [~aki@adsl-99-115.netplaza.fi] has quit [Changing host] 12:25:59 aoh [~aki@unaffiliated/aoh] has joined #scheme 12:32:20 mango_mds2 [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has joined #scheme 12:34:05 -!- mango_mds [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has quit [Ping timeout: 272 seconds] 12:45:50 -!- gravicappa [~gravicapp@ppp91-77-175-222.pppoe.mtu-net.ru] has quit [Ping timeout: 264 seconds] 
12:49:37 Kneferilis [~Kneferili@nb1-210.static.cytanet.com.cy] has joined #scheme 12:54:40 yacks [~py@103.6.159.103] has joined #scheme 13:03:35 -!- haroldwu [~haroldwu@unaffiliated/haroldwu] has quit [Ping timeout: 252 seconds] 13:06:04 gravicappa [~gravicapp@ppp91-77-179-89.pppoe.mtu-net.ru] has joined #scheme 13:07:55 haroldwu [~haroldwu@219.85.185.194] has joined #scheme 13:19:59 annodomini [~lambda@c-76-23-156-75.hsd1.ma.comcast.net] has joined #scheme 13:20:00 -!- annodomini [~lambda@c-76-23-156-75.hsd1.ma.comcast.net] has quit [Changing host] 13:20:00 annodomini [~lambda@wikipedia/lambda] has joined #scheme 13:24:04 davexunit [~user@fsf/member/davexunit] has joined #scheme 13:27:26 juta [c19df00c@gateway/web/freenode/ip.193.157.240.12] has joined #scheme 13:28:54 -!- juta [c19df00c@gateway/web/freenode/ip.193.157.240.12] has quit [Client Quit] 13:30:58 -!- annodomini [~lambda@wikipedia/lambda] has quit [Quit: annodomini] 13:31:01 -!- Sgeo [~quassel@ool-44c2df0c.dyn.optonline.net] has quit [Read error: Connection reset by peer] 13:32:21 jeapostrophe [~jay@racket/jeapostrophe] has joined #scheme 13:33:56 tupi [~user@189.60.14.19] has joined #scheme 13:38:00 tcsc [~tcsc@c-76-118-148-98.hsd1.ma.comcast.net] has joined #scheme 13:42:43 -!- da4c30ff [~da4c30ff@c18.adsl.tnnet.fi] has quit [Quit: Computer has gone to sleep.] 
13:43:16 da4c30ff [~da4c30ff@c18.adsl.tnnet.fi] has joined #scheme 13:44:38 Tuplanolla [~Put-on-la@dsl-jklbrasgw2-54f8aa-52.dhcp.inet.fi] has joined #scheme 13:46:19 davexuni` [~user@38.104.7.18] has joined #scheme 13:46:54 -!- davexunit [~user@fsf/member/davexunit] has quit [Write error: Broken pipe] 13:48:58 -!- cleatoma [~cleatoma@host31-52-140-136.range31-52.btcentralplus.com] has quit [Quit: Leaving] 13:49:02 -!- da4c30ff [~da4c30ff@c18.adsl.tnnet.fi] has quit [Ping timeout: 249 seconds] 13:49:20 annodomini [~lambda@wikipedia/lambda] has joined #scheme 13:49:56 -!- nisstyre [yourstruly@oftn/member/Nisstyre] has quit [Ping timeout: 245 seconds] 13:54:45 -!- tcsc [~tcsc@c-76-118-148-98.hsd1.ma.comcast.net] has quit [Quit: computer sleeping] 14:03:34 langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has joined #scheme 14:12:56 -!- asc [~Charkov@c5.edrana.lt] has quit [Read error: Connection reset by peer] 14:18:52 nisstyre [yourstruly@oftn/member/Nisstyre] has joined #scheme 14:23:32 oleo [~oleo@xdsl-84-44-179-171.netcologne.de] has joined #scheme 14:26:49 -!- moea [~moe@host217-42-246-181.range217-42.btcentralplus.com] has quit [Read error: Connection reset by peer] 14:27:11 moea [~moe@host217-42-246-181.range217-42.btcentralplus.com] has joined #scheme 14:27:27 -!- annodomini [~lambda@wikipedia/lambda] has quit [Quit: annodomini] 14:28:35 -!- przl [~przlrkt@p4FF5BCD3.dip0.t-ipconnect.de] has quit [Ping timeout: 245 seconds] 14:31:12 przl [~przlrkt@p549FD851.dip0.t-ipconnect.de] has joined #scheme 14:33:34 noobboob [uid5587@gateway/web/irccloud.com/x-sehknqffhvubexbk] has joined #scheme 14:35:49 kobain [~sambio@unaffiliated/kobain] has joined #scheme 14:36:15 -!- araujo [~araujo@gentoo/developer/araujo] has quit [Read error: Connection reset by peer] 14:36:45 araujo [~araujo@gentoo/developer/araujo] has joined #scheme 14:40:54 rudybot: eval (sleep 5) 14:41:00 *offby1: Done. 
14:52:19 -!- LeoNerd [leo@2a01:7e00::f03c:91ff:fe96:20e8] has quit [Write error: Broken pipe] 14:53:19 -!- araujo [~araujo@gentoo/developer/araujo] has quit [Quit: Leaving] 14:53:48 LeoNerd [leo@2a01:7e00::f03c:91ff:fe96:20e8] has joined #scheme 14:59:17 -!- b4283 [~b4283@60-249-196-111.HINET-IP.hinet.net] has quit [Quit: Konversation terminated!] 15:22:24 mgodshall [~mgodshall@8.20.30.249] has joined #scheme 15:29:11 -!- ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has quit [Ping timeout: 244 seconds] 15:31:47 -!- MichaelRaskin [~MichaelRa@195.208.66.22] has quit [Quit: MichaelRaskin] 15:36:25 ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has joined #scheme 15:40:26 -!- akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has quit [Ping timeout: 264 seconds] 15:40:41 -!- ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has quit [Ping timeout: 245 seconds] 15:43:39 -!- joneshf-laptop [~joneshf@98.255.30.38] has quit [Ping timeout: 244 seconds] 15:44:07 -!- davexuni` is now known as davexunit 15:44:15 -!- davexunit [~user@38.104.7.18] has quit [Changing host] 15:44:15 davexunit [~user@fsf/member/davexunit] has joined #scheme 15:45:31 -!- woodboy4_ [~woodboy45@host26-53-dynamic.11-79-r.retail.telecomitalia.it] has quit [Remote host closed the connection] 15:46:13 woodboy__ [~woodboy45@host26-53-dynamic.11-79-r.retail.telecomitalia.it] has joined #scheme 15:47:16 araujo [~araujo@gentoo/developer/araujo] has joined #scheme 15:50:29 -!- woodboy__ [~woodboy45@host26-53-dynamic.11-79-r.retail.telecomitalia.it] has quit [Ping timeout: 246 seconds] 15:51:54 ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has joined #scheme 15:52:51 annodomini [~lambda@wikipedia/lambda] has joined #scheme 15:53:12 nugnuts [~nugnuts@pool-74-105-21-221.nwrknj.fios.verizon.net] has joined #scheme 15:59:07 -!- LeoNerd [leo@2a01:7e00::f03c:91ff:fe96:20e8] has quit [Write error: Broken pipe] 15:59:27 -!- annodomini [~lambda@wikipedia/lambda] has quit [Quit: annodomini] 16:00:19 LeoNerd 
[leo@2a01:7e00::f03c:91ff:fe96:20e8] has joined #scheme 16:02:53 jxv [~jxv@71-84-192-255.dhcp.wsco.ca.charter.com] has joined #scheme 16:07:34 -!- shivani_ [uid11848@gateway/web/irccloud.com/x-aaohilxvzadrdmdb] has quit [Quit: Connection closed for inactivity] 16:10:05 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 246 seconds] 16:10:37 b4283 [~b4283@218-164-122-169.dynamic.hinet.net] has joined #scheme 16:12:12 -!- davexunit [~user@fsf/member/davexunit] has quit [Remote host closed the connection] 16:14:29 theseb [~cs@74.194.237.26] has joined #scheme 16:14:54 phipes [~phipes@unaffiliated/phipes] has joined #scheme 16:27:28 -!- moea [~moe@host217-42-246-181.range217-42.btcentralplus.com] has quit [Read error: Connection reset by peer] 16:27:50 moea [~moe@host217-42-246-181.range217-42.btcentralplus.com] has joined #scheme 16:32:11 annodomini [~lambda@173-14-129-9-NewEngland.hfc.comcastbusiness.net] has joined #scheme 16:32:12 -!- annodomini [~lambda@173-14-129-9-NewEngland.hfc.comcastbusiness.net] has quit [Changing host] 16:32:12 annodomini [~lambda@wikipedia/lambda] has joined #scheme 16:34:54 -!- araujo [~araujo@gentoo/developer/araujo] has quit [Read error: Connection reset by peer] 16:35:19 araujo [~araujo@190.73.46.113] has joined #scheme 16:35:19 -!- araujo [~araujo@190.73.46.113] has quit [Changing host] 16:35:19 araujo [~araujo@gentoo/developer/araujo] has joined #scheme 16:38:30 -!- b4283 [~b4283@218-164-122-169.dynamic.hinet.net] has quit [Quit: ] 16:43:09 davexunit [~user@fsf/member/davexunit] has joined #scheme 16:44:42 b4283 [~b4283@118.150.135.102] has joined #scheme 16:46:18 MichaelRaskin [~MichaelRa@195.91.224.161] has joined #scheme 17:03:22 -!- noobboob [uid5587@gateway/web/irccloud.com/x-sehknqffhvubexbk] has quit [Quit: Connection closed for inactivity] 17:04:11 -!- przl [~przlrkt@p549FD851.dip0.t-ipconnect.de] has quit [Quit: leaving] 17:05:21 arubincloud [uid489@gateway/web/irccloud.com/x-yehhjhrfjqhzvoun] has 
joined #scheme 17:06:43 mango_mds [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has joined #scheme 17:08:02 -!- mango_mds2 [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has quit [Ping timeout: 264 seconds] 17:09:05 -!- langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has quit [Quit: sleep] 17:27:47 -!- Riastradh [~riastradh@jupiter.mumble.net] has quit [Ping timeout: 265 seconds] 17:29:17 jao [~jao@10.125.14.37.dynamic.jazztel.es] has joined #scheme 17:29:20 -!- jao [~jao@10.125.14.37.dynamic.jazztel.es] has quit [Changing host] 17:29:20 jao [~jao@pdpc/supporter/professional/jao] has joined #scheme 17:32:04 -!- phipes [~phipes@unaffiliated/phipes] has quit [Quit: My MacBook has gone to sleep. ZZZzzz] 17:34:36 woodboy__ [~woodboy45@host26-53-dynamic.11-79-r.retail.telecomitalia.it] has joined #scheme 17:37:19 -!- b4283 [~b4283@118.150.135.102] has quit [Remote host closed the connection] 17:37:39 hiyosi [~skip_it@125.30.73.126] has joined #scheme 17:43:09 -!- hiyosi [~skip_it@125.30.73.126] has quit [Ping timeout: 272 seconds] 17:47:00 noobboob [uid5587@gateway/web/irccloud.com/x-bjgdonxqgftjxswj] has joined #scheme 17:47:20 hiroakip [~hiroaki@77-20-51-63-dynip.superkabel.de] has joined #scheme 17:50:11 -!- pnkfelix [~pnkfelix@89.202.203.51] has quit [Ping timeout: 246 seconds] 18:06:42 tupi` [~user@189.60.14.19] has joined #scheme 18:07:45 mango_mds2 [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has joined #scheme 18:07:50 -!- snits [~snits@inet-hqmc07-o.oracle.com] has quit [Remote host closed the connection] 18:08:02 snits [~snits@inet-hqmc07-o.oracle.com] has joined #scheme 18:08:28 SrPx [b19dcc46@gateway/web/freenode/ip.177.157.204.70] has joined #scheme 18:08:50 -!- mango_mds [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has quit [Ping timeout: 244 seconds] 18:09:18 Hello! Quick question: is there a version of scheme with no side effects at all? Also, is there any scheme that is compiled using supercombinators & graph reduction? 
If not, why? 18:10:46 vyv_ [~vyv@bas1-northbay04-1177802783.dsl.bell.ca] has joined #scheme 18:10:50 bars0__ [~Name@d132-201.icpnet.pl] has joined #scheme 18:15:45 jeapostr1phe [~jay@216-21-162-70.slc.googlefiber.net] has joined #scheme 18:20:22 -!- vyv [~vyv@bas1-northbay04-1177802783.dsl.bell.ca] has quit [Write error: Broken pipe] 18:20:23 -!- bars0_ [~Name@d132-201.icpnet.pl] has quit [Write error: Broken pipe] 18:20:23 -!- jeapostrophe [~jay@racket/jeapostrophe] has quit [Write error: Broken pipe] 18:20:25 -!- LeoNerd [leo@2a01:7e00::f03c:91ff:fe96:20e8] has quit [Write error: Broken pipe] 18:20:31 -!- tupi [~user@189.60.14.19] has quit [Write error: Broken pipe] 18:20:47 -!- arubincloud [uid489@gateway/web/irccloud.com/x-yehhjhrfjqhzvoun] has quit [Ping timeout: 279 seconds] 18:21:55 SrPx: define Scheme. probably not. who knows. 18:22:01 -!- Blkt [~Blkt@2a01:4f8:150:80a1::aaaa] has quit [Excess Flood] 18:22:18 illerucis [~illerucis@cpe-72-225-194-43.nyc.res.rr.com] has joined #scheme 18:22:19 jewel_ [~jewel@105-237-9-187.access.mtnbusiness.co.za] has joined #scheme 18:22:26 -!- jewel [~jewel@105-237-9-187.access.mtnbusiness.co.za] has quit [Excess Flood] 18:22:28 anything similar :< I am almost finishing my language and still looking for something I can use instead 18:22:54 -!- nugnuts [~nugnuts@pool-74-105-21-221.nwrknj.fios.verizon.net] has quit [Remote host closed the connection] 18:23:15 LeoNerd [leo@2a01:7e00::f03c:91ff:fe96:20e8] has joined #scheme 18:23:36 I'm not sure graph reduction is a good strategy for eager languages 18:23:43 arubincloud [uid489@gateway/web/irccloud.com/x-tsstplspjzghqwxt] has joined #scheme 18:24:03 Blkt [~Blkt@2a01:4f8:150:80a1::aaaa] has joined #scheme 18:24:04 without laziness you don't need the sharing 18:27:11 as for scheme without side effects, it would not be scheme as per any scheme standard, but it's basically the same trivial functional language every one uses when doing papers on compilation etc. 
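The point above about laziness and sharing ("without laziness you don't need the sharing") can be sketched in a few lines of JS. `makeThunk` and the counter are illustrative names only, not from any compiler discussed here:

```javascript
// Sketch of why graph reduction wants sharing: a call-by-need "thunk"
// caches its result, so a suspended expression is evaluated at most once
// no matter how many references point at it (node "update" in a graph
// reducer). In an eager language the value exists before it is shared,
// so the machinery buys nothing.
let evalCount = 0;

function makeThunk(compute) {
  let forced = false;
  let value;
  return function force() {
    if (!forced) {       // first force: run the suspended computation
      value = compute();
      forced = true;     // cache the result, as updating the graph node does
    }
    return value;        // later forces reuse the shared result
  };
}

const expensive = makeThunk(() => { evalCount += 1; return 6 * 7; });
const shared = [expensive, expensive];   // two references, one node

const results = [shared[0](), shared[1]()];
// both reads see 42, but the body ran only once
```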
18:29:45 Sounds like another lost soul seeking dynamically typed Haskell. 18:30:14 -!- jeapostr1phe [~jay@216-21-162-70.slc.googlefiber.net] has quit [Ping timeout: 264 seconds] 18:30:31 Tuplanolla: even ghc comes with a shut-up-and-compile option these days 18:31:12 That's not necessarily a good idea though. 18:32:44 surely my choice of wording gave my opinion away 18:36:39 ijp: it doesn't really give you untyped Haskell 18:38:34 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 18:41:18 One day, when I have enough free time, I want to experiment with combining Scheme, Haskell and Coq. 18:42:18 I'm curious to see if a dynamically typed language can be both mathematically rigorous and pleasant to use. 18:42:50 define mathematically rigorous 18:43:26 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 264 seconds] 18:43:31 I'll leave that for later, but my hypothesis is "no". 18:50:06 -!- ebzzry [~ebzzry@112.204.28.168] has quit [] 18:50:23 ebzzry [~ebzzry@112.204.28.168] has joined #scheme 18:57:23 klltkr [~klltkr@unaffiliated/klltkr] has joined #scheme 18:59:41 -!- klltkr [~klltkr@unaffiliated/klltkr] has quit [Client Quit] 19:02:26 -!- kwmiebach__ [sid16855@gateway/web/irccloud.com/x-vcpdrysmpiupshud] has quit [Ping timeout: 245 seconds] 19:04:14 joneshf-laptop [~joneshf@128.120.118.42] has joined #scheme 19:07:19 kwmiebach__ [sid16855@gateway/web/irccloud.com/x-zkonhoyfemnfwvjr] has joined #scheme 19:08:28 klltkr [~klltkr@unaffiliated/klltkr] has joined #scheme 19:09:59 -!- arubincloud [uid489@gateway/web/irccloud.com/x-tsstplspjzghqwxt] has quit [Ping timeout: 246 seconds] 19:12:23 arubincloud [uid489@gateway/web/irccloud.com/x-udfeznchfdmdfhii] has joined #scheme 19:14:49 Tuplanolla: Mathematically speaking, dynamic typing is just a subset of static typing with a very large default sum type. 19:15:57 Making it rigorous necessarily involves refining the types, to become more static and constrained.
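The "very large default sum type" claim above can be made concrete in a few lines of JS. All names here (`Num`, `Str`, `plus`) are illustrative, not from anyone's actual system:

```javascript
// Dynamic typing as one big sum type: every value carries a tag plus a
// payload, every "untyped" operation is a case split over the tags, and
// "refining the types" means statically narrowing which tags can reach
// which operation.
const Num = n => ({ tag: "num", val: n });
const Str = s => ({ tag: "str", val: s });

function plus(a, b) {
  if (a.tag === "num" && b.tag === "num") return Num(a.val + b.val);
  if (a.tag === "str" && b.tag === "str") return Str(a.val + b.val);
  throw new TypeError("plus: tag mismatch");  // the runtime "type error"
}

const three = plus(Num(1), Num(2));    // tag "num", payload 3
const ab = plus(Str("a"), Str("b"));   // tag "str", payload "ab"
```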
19:16:22 So it's probably a bit of a contradiction in terms. 19:16:50 (Of course, rigor can mean many things: Scheme is operationally rigorous in a way that e.g. Haskell isn't.) 19:17:53 -!- gravicappa [~gravicapp@ppp91-77-179-89.pppoe.mtu-net.ru] has quit [Remote host closed the connection] 19:27:54 wingo_ [~wingo@cha74-2-88-160-190-192.fbx.proxad.net] has joined #scheme 19:30:49 -!- wingo [~wingo@cha74-2-88-160-190-192.fbx.proxad.net] has quit [Ping timeout: 272 seconds] 19:32:50 langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has joined #scheme 19:35:07 -!- jxv [~jxv@71-84-192-255.dhcp.wsco.ca.charter.com] has quit [Ping timeout: 244 seconds] 19:35:33 jxv [~jxv@71-84-192-255.dhcp.wsco.ca.charter.com] has joined #scheme 19:36:03 boycottg00gle [~user@stgt-4d02c078.pool.mediaways.net] has joined #scheme 19:39:21 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 19:44:45 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 272 seconds] 19:54:22 -!- wingo_ is now known as wingo 20:01:49 -!- langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has quit [Quit: sleep] 20:03:32 langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has joined #scheme 20:05:10 Riastradh [~riastradh@jupiter.mumble.net] has joined #scheme 20:07:57 Tuplanolla: you got it :( but why lost soul? It is less about dynamic typing and more about the design of my editor, it simply won't work with a typed language... 20:08:41 The other problem is compiling to javascript, I believe an actually simple FP language would be trivial, but haskell, ocaml, ml and all others are not simple at all. They're behemoths with huge runtimes and stuff 20:09:20 I mean, I'm actually mad. Why is the world like that. So much talk about the simpleness of the lambda calculus. 
Guess what, it does not work when you add a ton of complexity on top of it 20:09:34 -!- tali713 [~tali713@2001:0:53aa:64c:32:562a:b3ee:137e] has quit [Ping timeout: 256 seconds] 20:09:40 "does not work" 20:09:55 calling something simple only works when you keep it simple, right? 20:10:15 sure, but I assumed you weren't just being tautological but had some sort of point 20:10:29 I don't know what tautological means 20:10:36 trivially true 20:10:45 i.e. red apples are red 20:11:31 I don't see where you're going, but I'm still mad. Here I am developing my language. Implementing stream fusion, inlining, and whatnot. And I didn't even want to do it. Why do I have to? 20:12:46 if it's on your own time, you don't have to do anything 20:13:03 turn off the computer, take up knitting 20:14:37 I wish knitting would give me a simple functional language with a simple optimizing compiler 20:15:34 tali713 [~tali713@c-76-17-236-129.hsd1.mn.comcast.net] has joined #scheme 20:17:46 look at forth, you don't *need* inlining 20:18:44 I mean, a language that could be defined entirely in a single page, and that building a compiler for it to performant code in other languages took not much more than that 20:18:48 that is it, i guess 20:18:56 the world doesn't have that, so shame on the world 20:19:29 if you come up with a nicer world, we can come up with nicer languages 20:20:20 nisstyre_ [yourstruly@oftn/member/Nisstyre] has joined #scheme 20:20:26 how do I do that 20:20:30 until then, we're stuck with complicated runtimes, and hardware that doesn't match the way nicer languages work 20:21:19 seriously why are things like that... 20:21:29 SrPx: use a hardware Scheme processor :) 20:21:41 well, for a start, simplicity is hard 20:22:03 SrPx: aren't those ~30 years old now? 
20:22:09 er, ecraven 20:22:21 the famous saying is that "simplicity does not precede complexity, but follows it" 20:22:49 .oO(I've been quoting perlis a lot recently) 20:23:09 oh wait symbolics and LMI never used Scheme did they? 20:23:14 ijp: so have I 20:23:19 nisstyre_: they are, and no 20:23:31 however, it wouldn't be too hard to design an FPGA Scheme toy processor 20:23:53 designing something to compete against ARM/Intel in terms of power (in both senses), probably *would* be hard :) 20:24:13 ecraven: that's what I thought 20:24:27 but FPGAs are usually better for simple-ish algorithms right? 20:24:30 like cracking DES 20:24:48 probably, but they are much easier to re-design and develop for than ASICs 20:24:52 (also much cheaper :) 20:24:56 yeah 20:25:01 there was a plan to fabricate the design in "lambda the ultimate opcode", but not to make it competitive 20:25:24 fpgas are good for parallelism 20:25:53 no that is not the point, I want a simple language. if we had a simple enough language we could translate it to javascript without needing 1mb runtimes and all my problems would end. 20:25:58 maybe an external usb-connected multi-Scheme-processor might be useful? :) 20:25:59 I mean, look at what I got ATM: http://lpaste.net/100387 20:26:21 SrPx: I'm working on it 20:26:22 I am not a language designer, I am completely noob with compilers and stuff. Yet I doubt anyone will argue this is not MUCH better than any FP->JS solution around 20:26:45 I'd argue it 20:26:51 Why would you, ijp ? 20:26:53 SrPx: I'm designing a language that may potentially have m:n concurrency in the browser as well 20:27:00 you're not even the 100th person to write a fp->js compiler 20:27:13 I haven't decided exactly on some things, but it will have implicit types ala H-M 20:27:17 SrPx: That just looks like an s-expression syntax for JS ? 20:27:22 with maybe some impredicative stuff 20:27:24 The 10000th. Yet none produces a 1->1 straightforward result like that.
20:27:30 Well not "just" but almost just. 20:27:39 SrPx: because 1->1 doesn't produce good code 20:27:54 SrPx: your best bet is either asm.js or LLVM 20:27:59 taylanub: not exactly, it is just lambda calculus with numbers and lists. It is simple enough so I can translate directly to JavaScript with not much effort. 20:28:01 or design a VM and implement it in asm.js 20:28:13 it produces simple code, it's easy to do, but not performant code 20:28:25 use an intermediate language that is good for translating to low level machines 20:28:29 Eh I'm blind, didn't even notice you turn a map into a for loop. 20:28:41 ccorn [~ccorn@83.218.128.161] has joined #scheme 20:29:13 ijp: No? I got the same performance as a page long code I optimized for a game a few years ago with a one liner on that language. It is a code that generate 3d meshes, so it uses quaternions, vectors and lots of math-heavy stuff 20:29:33 It is much faster than any other fp->js solution I tried, regardless of being much simpler 20:29:39 So how not? 20:30:00 SrPx: this is what mine looks like btw https://github.com/oftn/JLambda/blob/master/test.jl 20:30:29 (using haskelly operators in that code for testing) 20:30:37 I have to remark that lambda is a bad name for an anonymous function, much like pi is a bad name for a number relating to the properties of a circle. Mathematicians are just lazy when it comes to giving things descriptive names. 20:30:44 SrPx: What even do you mean with FP ? JS itself supports everything your language does, no ? So you're just inventing an alternative syntax to JS, not ? 20:30:52 Tuplanolla: do you know why lambda has that name? 20:30:59 Tuplanolla: blame Church for just using yet another greek letter 20:30:59 I know lambda calculus. 20:31:08 That's where the bad names come from. 20:31:09 right, but why did church choose lambda? 20:31:13 and not even an obscure greek letter 20:31:16 taylanub: where is the compiler 20:31:21 SrPx: ? 20:31:30 I don't remember that. 
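The "map into a for loop" output being discussed can only be guessed at (the lpaste link is dead), but based on ijp's later Scheme translation it plausibly looked like the following. All names are reconstructions, not SrPx's actual compiler output:

```javascript
// Source-level semantics: curried map, allocating, pure.
const map = f => xs => xs.map(f);

// Plausible compiled form: the two maps fused into one for loop that
// updates its argument in place, with no intermediate allocation.
function compiledF(a) {
  for (let i = 0; i < a.length; ++i) {
    a[i] = (a[i] + 5) * 2;   // fused (map (* 2)) ∘ (map (+ 5))
  }
  return a;
}

const pure = map(x => x * 2)(map(x => x + 5)([1, 2, 3]));   // [12, 14, 16]
const fused = compiledF([1, 2, 3]);                          // [12, 14, 16]
// Same answer here, but only because the argument is fresh each time;
// the mutation becomes observable as soon as the input array is shared.
```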
20:31:43 taylanub: pardon, meant to nisstyre_ 20:31:50 ok 20:31:52 ijp: I thought it was mostly arbitrary? Was there some earlier combinatory calculus that used it? 20:32:19 SrPx: the compiler is still being designed 20:32:20 russell used a ^ above a letter for application, church changed it to a prefix, it then morphed into lambda 20:32:25 -!- noobboob [uid5587@gateway/web/irccloud.com/x-bjgdonxqgftjxswj] has quit [Quit: Connection closed for inactivity] 20:32:25 I've done most of the parser 20:32:30 nisstyre_: oh it is not functional yet? 20:32:41 SrPx: I'm going to use a CPS translation 20:32:44 By the way let me nit-pick on the term "anonymous function" for implying that functions have names by default and not being informative at all in what the meant functionality really is. 20:32:56 and no not really, you could generate Haskell code though, which is what I did to test stuff 20:33:26 the language looks interesting, by the way, but I **really** think we don't need anything more than pure lambda calculus. I augmented it with lists and numbers but honestly if I was smart enough I could probably write a compiler that identified numbers on the code and represented them as machine-ints automatically 20:33:28 Even if you use a lambda as the symbol doesn't mean you have to call it that though. 20:33:29 s/application/abstraction/ 20:33:45 SrPx: BTW your example is messed up, isn't it ? 20:33:51 Tuplanolla: right, and I could call my dog a cat 20:33:52 taylanub: no? 20:33:54 why? 20:33:57 SrPx: I may add in macros to keep the core language small (it already desugars as much as possible btw) 20:34:02 let must be a special form though 20:34:02 SrPx: What's (map procedure) mean ? 20:34:08 because of the type system 20:34:16 SrPx: Oh, curried ? 20:34:16 Rather "dee oh gee", ijp. 20:34:23 nisstyre_: macros are editor extensions in my language, though. even those I thought did not belong to the language spec 20:34:44 taylanub: yes... 
20:34:45 SrPx: okay, but I would want them to be part of the actual language 20:34:51 SrPx: surely that compilation is wrong anyway 20:34:59 nisstyre_: that is fine (: 20:35:00 SrPx: also no implicit begin in mine 20:35:05 ijp: no it is not. test it lol? 20:35:07 the language may be side effect free, but what if you give the same list to two different functions 20:35:10 in fact no begin at all as of yet 20:35:21 so mutating in the js is not sound 20:36:27 ijp: one way of keeping mutation sound would be uniqueness types (Rust and Clean both do something like that) 20:36:48 or you could say screw it like OCaml 20:37:11 (And Scheme) 20:37:48 nisstyre_: my language generates functions that receive data and mutate it for performance reasons. the language itself is completely immutable, there are no side effects nor anything, obviously. just the final function generated destroys the data. 20:37:50 was going to ask that next .. you seem to mutate implicitly or something ? 20:37:50 SrPx: (append (f a) (f a)) 20:37:51 rudybot: init r5rs 20:37:51 ijp: your r5rs sandbox is ready 20:37:58 so if you want to keep things immutable just wrap it around a function that clones arguments and tada. 20:38:02 this is mostly good with no drawbacks 20:38:12 SrPx: what needs mutating? 20:38:12 environments? 20:38:43 -!- yacks [~py@103.6.159.103] has quit [Quit: Leaving] 20:38:58 rudybot: (define (f a) (let loop ((a a)) (if (null? a) 'done (begin (set-car! a (*2 (+ 5 (car a)))) (loop (cdr a))))) a) 20:39:00 ijp: Done. 20:39:10 rudybot: (let ((a (list 1 2 3))) (f a) a) 20:39:14 -!- ccorn [~ccorn@83.218.128.161] has quit [Quit: ccorn] 20:39:20 ijp: error: *2: undefined; cannot reference an identifier before its definition in module: 'program 20:39:37 ooh set-car! :) 20:40:09 gah 20:40:10 rudybot: (begin (define (f a) (let loop ((a a)) (if (null? a) 'done (begin (set-car!
a (* 2 (+ 5 (car a)))) (loop (cdr a))))) a) (let ((a (list 1 2 3))) (f a) a)) 20:40:37 *ijp* pokes rudybot 20:40:53 anyway, try it in a repl, then try (let ((a (list 1 2 3))) (append (f a) (f a))) 20:41:01 ijp: what about it? 20:41:08 instead of (12 14 16 12 14 16) you get (34 38 42 34 38 42) 20:41:11 -!- andares [~andares@unaffiliated/jacco] has quit [Ping timeout: 252 seconds] 20:41:17 QED 20:41:20 set-car! should usually be avoided anyway 20:41:30 and set-cdr! obviously 20:41:36 ijp: ; Value: (mcons 12 (mcons 14 (mcons 16))) 20:41:39 nisstyre_: it was to mimic the code output by SrPx 's compiler 20:41:47 oh okay, yeah 20:41:59 rudybot: (let ((a (list 1 2 3))) (append (f a) (f a))) 20:41:59 ijp: ; Value: (mcons 34 (mcons 38 (mcons 42 (mcons 34 (mcons 38 (mcons 42)))))) 20:42:02 andares [~andares@unaffiliated/jacco] has joined #scheme 20:42:05 spooky action at a distance 20:42:20 hence, the output code is wrong 20:42:28 ijp: that's how I describe dynamic scope 20:42:31 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 20:42:38 (spooky action at a distance) 20:42:39 (modulo a compiler proving that the list is not shared) 20:42:44 ijp: sure, a second 20:42:56 or at least it doesn't fit with the semantics of what most people would think `map' should have 20:44:01 ijp: what is f? 
20:44:26 SrPx: f is a scheme translation of the js you output in http://lpaste.net/100387 20:44:34 he defined f beforehand 20:44:38 only using lists rather than vectors 20:44:52 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 244 seconds] 20:45:12 rudybot: (+ 2 3) (displayln "test") 20:45:14 nisstyre_: right now this works: > (displayln '(\| a b)) (| a b) 20:45:22 lol 20:45:36 nisstyre_: you need a begin if you have more than one on a line 20:45:38 I just wanted to see if rudybot has an implicit begin 20:45:47 yeah, got it 20:45:52 but most of the time, people just do one per line 20:46:29 SrPx: anyway the reason why haskell and other compilers are so damn complex is that making optimizations like the one you're trying to do work in non-toy examples is really hard 20:46:58 well it's actually not that hard if you're willing to have exponential compile times 20:47:21 turbofail: if you get help from the type system it can be easier 20:47:41 I'm not an expert though 20:47:53 the type system still doesn't help you do flow analysis all that much 20:47:55 yes, everything is easier if you can make the user provide a checkable proof of soundness 20:48:20 turbofail: flow analysis meaning things like branch prediction? 20:48:53 nisstyre_: more just like finding out what functions get called with what arguments 20:50:09 it's what drives most of stalin's optimizations 20:50:23 Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has joined #scheme 20:50:33 it's also why stalin is ridiculously slow 20:50:57 it takes time to ship all the bad code to the gulag 20:51:05 indeed 20:51:08 turbofail: so it looks through each source file or compilation unit and finds where every function gets applied and with what arguments? 20:51:18 yep 20:51:32 do you know what specific optimizations that lets you do? 20:51:55 avoid any overhead associated with looking up names in the environment? 20:52:00 ijp: eh no... 
you are wrong 20:52:21 go on 20:52:33 ijp: http://lpaste.net/100389 20:52:38 nisstyre_: see ftp://ftp.ecn.purdue.edu/qobi/fdlcc.pdf 20:52:47 bleh, stupid non-breaking space 20:52:49 ftp://ftp.ecn.purdue.edu/qobi/fdlcc.pdf 20:52:52 I took some time because my editor is not functional, I was changing the way it deals with macros so I had to rollback to code that >< 20:53:02 -!- mango_mds2 [~mango_mds@c-71-207-240-42.hsd1.al.comcast.net] has quit [Ping timeout: 264 seconds] 20:53:05 turbofail: okay yes, closure conversion stuff 20:53:08 that's what I figured 20:53:18 Compiling with Continuations goes into this a bit 20:53:19 This result could be better, though. But it is correct nevertheless. I will investigate 20:53:25 SrPx: I see, so you are fully inlining everything? 20:53:29 also does inlining, escape analysis, etc. 20:53:53 -!- boycottg00gle [~user@stgt-4d02c078.pool.mediaways.net] has quit [Remote host closed the connection] 20:53:54 ijp: except when it is bigger than it would be advantageous... the compiler does test in time 20:53:55 in that case, I was wrong in my prediction, but the above statement is accurate so far as it goes 20:53:55 turbofail: I will have to read this 20:54:12 unfortunately the paper only talks about the closure conversion 20:54:38 ijp: which statement? 20:54:43 but basically if you have good information about the data flow in your program you can do a lot of things.
it would let you justify the use of mutation, for example 20:54:44 as I understand it, the object is basically to "eliminate" all free variables and put them into some kind of struct or whatever associated with a function 20:55:04 -!- langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has quit [Quit: sleep] 20:55:11 SrPx: the statement that if you applied that function f to the same array a twice, you would mutate a twice 20:55:20 or justify stack allocation of data structures 20:55:21 -!- taylanub [tub@p4FD92BD6.dip0.t-ipconnect.de] has quit [Disconnected by services] 20:55:31 yeah, that would also be nice 20:55:38 the reason it does not hold, is because you are not doing that 20:55:47 taylanub [tub@p4FD93D84.dip0.t-ipconnect.de] has joined #scheme 20:55:57 ijp: ? yes, I don't get what is wrong, though 20:56:10 (in fact, it would have been twice as bad with the obvious mutating version of append, since you would get a circular list of #1=(34 38 42 . #1#) 20:56:32 SrPx: the point is that a, from the point of view of the original language (scheme) was not mutated 20:56:51 it was evaluated on a pure function f (twice), and those results appended 20:56:54 yes, that is what I mean. so it is correct. so what is wrong? 20:57:02 *ijp* groans 20:57:08 :/ sorry I dont get it 20:57:39 nisstyre_: an example of a compiler that takes it to an even further extreme is https://github.com/axch/dysvunctional-language 20:58:11 rudybot: (define f-original (lambda (a) (map (lambda (x) (* x 2)) (map (lambda (a) (+ a 5)))))) 20:58:11 ijp: Done. 20:58:16 it's based off of some of the ideas of stalin, but it's more restricted 20:58:20 SrPx: ^ is your original function f, correct? 20:58:31 turbofail: yea, I tried it and even mailed the creator... 
great work 20:58:43 turbofail: excellent, thanks for the link 20:58:44 yes, it is 20:58:46 rudybot: (let ((a (list 1 2 3))) (append (f a) (f a))) 20:58:46 ijp: ; Value: (mcons 34 (mcons 38 (mcons 42 (mcons 34 (mcons 38 (mcons 42)))))) 20:58:47 I knew about Stalin but not that 20:58:59 whoops 20:59:05 rudybot: (let ((a (list 1 2 3))) (append (f-original a) (f-original a))) 20:59:06 ijp: error: mmap: arity mismatch; the expected number of arguments does not match the given number given: 1 arguments...: # 20:59:16 *ijp* bangs head off table 20:59:21 calm down 20:59:34 rudybot: (define f-original (lambda (a) (map (lambda (x) (* x 2)) (map (lambda (a) (+ a 5)) a)))) 20:59:34 ijp: Done. 20:59:35 you forgot an "a" 20:59:39 now we have the original f 20:59:41 rudybot: (let ((a (list 1 2 3))) (append (f-original a) (f-original a))) 20:59:42 ijp: ; Value: (mcons 12 (mcons 14 (mcons 16 (mcons 12 (mcons 14 (mcons 16)))))) 20:59:45 yea 20:59:58 with my original example we get the list (12 14 16 12 14 16) 21:00:04 nugnuts [~nugnuts@pool-74-105-21-221.nwrknj.fios.verizon.net] has joined #scheme 21:00:04 yes... that is the result I got 21:00:30 now look at (define (f a) (let loop ((a a)) (if (null? a) 'done (begin (set-car! a (* 2 (+ 5 (car a)))) (loop (cdr a))))) a) 21:00:41 that is your f translated into scheme 21:00:50 -!- bjz [~bjz@125.253.99.68] has quit [Quit: Textual IRC Client: www.textualapp.com] 21:00:51 the js version, not the scheme version 21:01:00 it updates a inplace 21:01:14 rudybot: (let ((a (list 1 2 3))) (append (f a) (f a))) 21:01:17 ijp: ; Value: (mcons 34 (mcons 38 (mcons 42 (mcons 34 (mcons 38 (mcons 42)))))) 21:01:29 ijp: oh, sure! But that is my point. The compiled JS function is not to be considered as a function. Consider it as bytecode that operates on memory. Except it does not operate on memory, but JS objects. 
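ijp's rudybot demonstration above, redone in JS for reference. `fMut` mirrors the mutating compiled output, `clone` stands in for SrPx's proposed `function(a){return f(clone(a))}` wrapper (a shallow copy suffices for this flat list; a real compiler would need a deep clone):

```javascript
// The compiled, mutating f: only observationally pure while its argument
// is never shared. Concatenating two calls on the SAME array exposes it.
function fMut(a) {
  for (let i = 0; i < a.length; ++i) a[i] = (a[i] + 5) * 2;
  return a;
}

const fPure = a => a.map(x => x + 5).map(x => x * 2);  // source semantics

const a1 = [1, 2, 3];
const good = fPure(a1).concat(fPure(a1));  // [12,14,16,12,14,16]

const a2 = [1, 2, 3];
const bad = fMut(a2).concat(fMut(a2));     // [34,38,42,34,38,42]
// the first call rewrote a2 to [12,14,16]; the second call saw the
// mutated list, and both results alias the same storage

// the clone wrapper restores the source-level semantics
const clone = a => a.slice();
const fSafe = a => fMut(clone(a));
const a3 = [1, 2, 3];
const fixed = fSafe(a3).concat(fSafe(a3)); // [12,14,16,12,14,16]
```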
21:01:30 now my example gives the result (34 38 42 34 38 42) 21:02:02 SrPx: right, but that was not evident when you posted originally 21:02:06 I mean, just ignore the compiled JS function. If you use the "original" function (inside the language) it is not the same as the compiled js function 21:02:08 ijp: oh OK 21:02:19 the example remains correct, the assumption on which it is based, is wrong 21:02:55 by the way, as I said, there is an option to make semantically equivalent function: function(a){return f(clone(a))} <- just compile it like that and then it is identical to the original function, inside the language 21:02:59 -!- Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has quit [Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/] 21:03:13 it is too slow to clone objects in general... 21:03:22 but anyway 21:03:23 Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has joined #scheme 21:03:26 back to the original discussion 21:03:42 that is what I wanted: a simple functional language that I could translate to performant javascript just like that 21:03:59 small, self-contained functions 21:04:06 SrPx: what's your definition of functional? 21:04:17 lexical scope, first class functions? 21:04:24 not the 500kb+ files that most other languages generate 21:04:27 discouraged mutation? 21:05:08 SrPx: well I dislike CoffeeScript and that 21:05:10 it could be reasonable to have a compiler that you can direct to do a super-intense flow analysis on a few key routines, while doing a much simpler compilation elsewhere 21:05:15 mostly because it's just syntactic sugar for JS 21:05:22 nisstyre_: well, the lambda calculus. 
just the lambda calculus, because I truly believe it is good enough 21:05:31 SrPx: well, one disadvantage of your approach, is that functions in the object code do not correspond to functions in the source code 21:05:36 but that will only work if those few key routines don't call out to any of the non-key routines 21:05:36 SrPx: uh, okay, but at least put macros in the language 21:05:41 so you can't just trivially mix it with normal js 21:05:43 you still want let, ffs 21:06:08 and "just lambda calculus" isn't really that great tbh 21:06:19 and if you develop a module system, then a change in a module you depend on will require a recompile 21:06:22 I mean, I just created one of my most complicated functions in my game, which is a full 3d mesh transformer that depends on lots of algebra. look at the JS result: http://jsperf.com/sdklfjlkasjfljafdafljfaf it is only not faster than my hand optimized code because I used the wrong algorithm (see there are 4 fors, not 2) 21:06:26 typed lambda calculus is better 21:06:28 ...all the usual problems of an inlining system 21:06:32 and all I used was lambda calculus 21:06:38 and it was short, readable and easy to write 21:06:43 much easier than in JS 21:06:52 so why do people really think they need more than lambda calculus? 21:07:00 IO 21:07:04 It's easier on the hands too. 21:07:08 ijp: yeah presumably for interactive development you would just not use the inlining and other fancy shit 21:07:26 SrPx: because they want to do things like create graphs and heaps, and do side effects (safely if possible) 21:07:47 oh fuck IO. you can use assembly for IO if you want. it is just plumbing. just do your actual stuff in a pure FP language and then bridge it to the machine with something else.
good enough 21:07:56 in other words, you are unhappy with the tradeoffs made in other compilers 21:07:58 that's fine 21:07:59 and they don't want to have to implement anonymous recursion for everything either 21:08:29 also, the lambda calculus cannot model genuine nondeterminism 21:08:34 nisstyre_: that is a point, but the need for heaps and stuff is merely a performance consideration. you can have the same semantical program and let a compiler create those structures when it identifies they are required. 21:08:37 all without leaving lambda calculus 21:08:37 btw, https://github.com/ghcjs/ghcjs 21:08:50 I mean, there is no semantics that make you **need** heaps 21:08:55 SrPx: how? 21:09:00 nisstyre_: I've tried it too! 16816mb files 21:09:15 how would a compiler identify when a certain data structure is optimal? 21:09:18 I've tried EVERYTHING I swear. Even stalin you guys proposed. That is how desperate I was 21:09:29 anyway, unhappy with the tradeoffs != their tradeoffs are wrong 21:09:54 nisstyre_: how not? If your language is universal then it does know exactly what your program does. Just find the right patterns. The same way I convert a recursive foldl (which is what I use to implement maps) into a simple for loop 21:09:55 I think a few of yours are, but that's life 21:10:29 SrPx: how would it know when to use say, a hash table vs. a tree based map? 21:10:56 nisstyre_: just a word about needing lets. this is why I took macros out of my language. macros transform code in a way that the editor can't predict.
that is bad for many reasons 21:11:12 nisstyre_: after a lot of thinking I realised the problem is that macros are functions without the inverse function 21:11:24 nisstyre_: so my macro system is modified to include inverse functions, and it is part of the editor 21:11:33 langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has joined #scheme 21:11:48 SrPx: macros are like functions that take syntax and return syntax 21:11:54 nisstyre_: the cool part is that some things are substituted by macros automatically. for example: (((a)(* a a)) 5) becomes (let ((a 5)) (* a a)) 21:12:04 this happens automatically when you write the first 21:12:05 but the problem with macros is more that they only operate at a syntactic level 21:12:15 so you can turn certain macros on and off depending on your preferences 21:12:20 so no type safety or anything, so debugging could be hard 21:12:45 and the compiler DOES know what is what. it does know what the "a" inside the let macro is, and thus can highlight and create clickable pointers 21:12:57 this is impossible if you use scheme-like macros because those are Turing complete and confuse the editor 21:13:14 SrPx: DrRacket does it pretty nicely 21:13:19 SrPx: if you try DrRacket, you'll see that macros don't confuse it 21:13:28 it does exactly that ? 21:14:06 you need to include a way to convert back and forth in your macros... including a way to translate source positions inside the macro to source positions inside the resulting transformation 21:14:15 the background macro expansion can be *really* annoying when you have large expensive macros 21:14:27 fortunately you can turn it off 21:14:47 I've never had a problem, though I stopped using DrRacket mostly for other reasons 21:15:08 ijp: i don't find it to be an issue, even though i write a lot of typed racket programs, which is quite an expensive macro 21:15:11 I am pretty sure it doesn't do automatic macro conversions? 
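The editor-macro example above — (((a)(* a a)) 5) becoming (let ((a 5)) (* a a)) — rests on a classic identity: `let` is just an immediately applied lambda, which is why the two spellings are interconvertible at all. The same identity in JS terms:

```javascript
// (((a) (* a a)) 5)  —  the raw lambda-calculus spelling:
// build a one-argument function and apply it immediately
const viaLambda = (a => a * a)(5);

// (let ((a 5)) (* a a))  —  the sugared spelling:
// bind a in a local scope, then evaluate the body
const viaLet = (() => { const a = 5; return a * a; })();

// both compute 25; a "reversible macro" just picks which spelling to show
```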
21:15:13 the arrow stuff for identifiers is nice though 21:15:26 SrPx: it runs macro expansion in the background continuously 21:15:42 the same way that Eclipse, say, runs the compiler in the background 21:15:59 samth: what I mean is, you write (((a)(* a a)) 5) and it becomes (let ((a 5)) (* a a)) ? 21:16:12 no 21:16:30 so that is the point 21:16:34 what it does is highlight and create clickable pointers despite not having reversible macros 21:16:42 Oh ok 21:16:45 but that is just part of the thing 21:16:59 it will also tell you if you have any syntax errors 21:17:02 it could just hardcode that for the let macro... 21:17:24 nisstyre_: there are no syntax errors if you use reversible macros 21:20:23 -!- hiroakip [~hiroaki@77-20-51-63-dynip.superkabel.de] has quit [Ping timeout: 272 seconds] 21:21:18 jeapostrophe [~jay@216-21-162-70.slc.googlefiber.net] has joined #scheme 21:21:24 -!- jeapostrophe [~jay@216-21-162-70.slc.googlefiber.net] has quit [Changing host] 21:21:24 jeapostrophe [~jay@racket/jeapostrophe] has joined #scheme 21:22:35 -!- langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has quit [Quit: sleep] 21:23:25 SrPx: btw, https://en.wikipedia.org/wiki/Uniqueness_type 21:23:30 you may be interested in that 21:23:43 and check out the page on linear logic, especially to do with resources 21:23:44 reversible macros? 21:24:00 doable, but sounds prohibitive 21:24:21 especially when you can just store the original code somewhere. perhaps in a... 
syntax object of some sort 21:24:22 sounds undecidable 21:24:53 nisstyre_: well, "prohibitive" comes from the imagined constraints necessary to make it work 21:25:16 but yeah, probably undecidable 21:25:47 (define-syntax-rule (foo x) (list x x)) (define-syntax-rule (bar x) (list x x)) 21:26:15 hiroakip [~hiroaki@77-20-51-63-dynip.superkabel.de] has joined #scheme 21:26:32 *ijp* goes to wikipedia the current state of reversible computing 21:36:33 langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has joined #scheme 21:37:04 nisstyre_: I... just... I am seriously not interested in anything that involves writing compilers and programming languages. :'( It is my punishment to have to do it. But yes that will be probably useful to me, thank you mate :C 21:38:13 by the way what I mean with reversible is just that: the editor knows that ((fn (a) a) 5) and (let ((a 5)) a) are equivalent and allows the user to choose the representation he likes more 21:38:43 oh btw I have read that article. I am too stupid to understand it, though. ha 21:39:15 I mean I know what it is for and the problem it solves... but... 21:39:39 akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has joined #scheme 21:40:24 it is like light. I know it is a wave and a particle and at the same time I have no idea what that means 21:40:51 SrPx: it's a particle 21:41:00 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 21:41:06 dependent typing itself is something I can't comprehend at all. type theory is hard 21:41:23 nisstyre_:... ? 21:41:31 SrPx: it's actually not that complicated, you're probably just reading blog posts and stuff where people throw around jargon 21:42:45 SrPx: for starters, dependently typed languages allow types to have types 21:42:55 the best visualisation I ever had of light is: electrons go up and down, that creates a perturbation on the electromagnetic field. and that sounds like a wave to me. 
:/ 21:43:08 nisstyre_: HAHA that is not confusing at all 21:43:52 SrPx: but necessary if you're going to do any computation of types in a sound manner 21:44:14 SrPx: I recommend QED by Richard Feynman for light 21:44:27 he explains it all, and without any need for advanced math 21:44:37 saving that :D 21:44:53 I sure hope that qed doesn't stand for quantum electrodynamics. 21:44:58 in fact I don't think he uses any math beyond grade school arithmetic 21:45:04 Tuplanolla: of course it does 21:45:19 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 244 seconds] 21:46:24 Oh. I read it as if it were about giving a "light" about dependent typing. It made sense because "QED", you know. Now the name is less cool for me 21:46:28 Man I need a rest 21:46:36 SrPx: two unrelated things :P 21:47:38 but you guys still didn't answer the mystery on why nobody uses the lambda calculus for practical purposes 21:47:48 I mean you tried to, maybe I'm too stupid to understand 21:47:51 I have a hard time not relating it to a bunch of mathematics. 21:47:57 SrPx: because you need more stuff in the lambda calculus to get things done 21:48:04 samth: like? 21:48:07 like data structures and numbers and lists 21:48:07 SrPx: because it ends up looking like this: http://homepages.cwi.nl/~tromp/cl/lazy-k.html 21:48:14 sorry, http://homepages.cwi.nl/~tromp/cl/lazy-k.html 21:48:31 and input and output, and files, and conditionals, and ... 21:48:44 and definitions, and more binding forms 21:49:27 Symmetry groups here, tensors there... 21:49:32 SrPx: use unlambda, cry 21:49:57 samth: well, he did already say no to IO 21:50:05 nisstyre_: except it does not! That is the whole point of lambda calculus, abstraction. See: (cure_disease cancer). This could be very well a functional, GBs-worth complex lambda calculus program written in 2225. 
Yet I can understand it and what it does 21:50:27 that is what I don't get 21:50:41 everything you see in other languages, every single feature can be implemented as a function 21:50:48 right, but that's problematic 21:51:00 ijp: why, ijp? 21:51:02 what does (1 (cons 3 4)) mean? 21:51:06 can be implemented *badly* as a function 21:51:12 turbofail: example!? 21:51:28 ijp: nothing! What does this have to do? 21:51:50 SrPx: well, if you implement numbers as functions and conses as functions, then you should be able to apply a number to a cons 21:52:04 ijp: so just don't!! 21:52:15 and now you've invented types 21:52:38 so we're up to lc+types, good 21:52:38 Types work very well as an editor plugin, though. You don't need it in the language. 21:52:47 And I don't have anything against types, really. Just that they're not solid 21:53:04 -!- jewel_ [~jewel@105-237-9-187.access.mtnbusiness.co.za] has quit [Ping timeout: 244 seconds] 21:53:15 now, we don't want a simple calculation like (expt 10 100000) to take forever do we? 21:53:19 In the sense that, lambda calculus won't change, and it is a solid model of computation. But there are many type systems and we don't know one that will survive centuries 21:53:25 so unary coding of arithmetic a la church is out 21:53:53 ijp: that doesn't make sense... that is implementation-dependent. Just find the church numbers and implement them as machine ints 21:53:57 That is what Idris does 21:54:08 so now you are up to lambda calculus + types + machine numbers 21:54:19 No! Just lambda calculus 21:54:21 good, now how about saving time with definitions 21:54:24 "find the church numbers"? 21:54:26 haha 21:54:38 turbofail: yes. What is the problem? 21:54:45 you don't want to write everything out as one giant fixed point combinator do you? 21:55:02 What do you mean? 21:55:04 "out" 21:55:05 ? 21:55:19 you do write functions don't you? 21:55:27 Yes? 21:56:01 so what's the problem? 
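The "find the church numbers and implement them as machine ints" exchange above is concrete enough to sketch. Below is a minimal JavaScript illustration (JavaScript being the compilation target discussed later in the log); the names `zero`, `succ`, `add`, and `toInt` are mine, not anything from the channel. A compiler could in principle recognize this encoding and substitute machine integers, which is the optimization SrPx gestures at.

```javascript
// Church numerals: a number n is "apply f n times".
const zero = f => x => x;
const succ = n => f => x => f(n(f)(x));
const add  = m => n => f => x => m(f)(n(f)(x));

// "Finding" the Church numeral just means counting applications of f:
const toInt = n => n(k => k + 1)(0);

const three = succ(succ(succ(zero)));
console.log(toInt(three));              // 3
console.log(toInt(add(three)(three)));  // 6
```

Note that ijp's objection also shows up here: nothing stops you from applying `three` to a church-encoded cons, and the result is meaningless — that is the "now you've invented types" step.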
21:56:10 -!- wingo [~wingo@cha74-2-88-160-190-192.fbx.proxad.net] has quit [Ping timeout: 244 seconds] 21:56:12 I just don't understand the grammar of that phrase 21:56:22 But yes I like functions and I want to write everything as functions 21:56:39 Dude you use Scheme. You ALMOST do it... scheme is like Lambda Calculus with a few things added. 21:56:41 don't ask me about english grammar, I only use the language, I don't understand it 21:57:03 -!- mrowe_away is now known as mrowe 21:57:03 You more than anyone in this world should understand my cry :c 21:57:14 SrPx: exactly, and those things are convenient, and adding them doesn't take away from the fact that we "could" write it directly in the lambda calculus 21:57:30 so we pretend they are, even though they aren't and get on with our lives 21:57:56 just like no set theorist actually uses the definition of (a,b) as {{a},{a,b}} 21:58:04 SrPx: no one uses the pure lambda-calculus because it's inconvenient and pointless to omit numbers, data structures, conditionals, etc 21:58:25 but by pretending they are you lock yourself into them. So, for example, you have a program in Scheme, but you want to port it to JavaScript. Or any other architecture that is going to be released in the future. Say the current computation model/processors become obsolete 21:58:31 plonk 21:58:45 you can't have it both ways 21:58:47 So, now you can't because your program has lots of stuff that is made for scheme, that are made thinking in the current architecture, thinking in scheme compilers 21:59:07 if you just wrote it in lambda calculus you would be safe. Another architecture? Just make a compiler. It is so simple you can do it 21:59:37 Compile to JavaScript? Yes that is easy, my compiler has 150 lines of code and transforms high level code into code that is as fast as my hand-optimized code in JS. 21:59:39 seriously??? 21:59:40 why not?? 
21:59:56 I don't want to sound hard headed, it is just that I'm not convinced 22:00:07 SrPx: i highly doubt that your numeric computation in lambda-calculus compiles to fast arithmetic in JS 22:00:27 samth: it can't without "locking himself into them" 22:01:03 samth: http://jsperf.com/sdklfjlkasjfljafdafljfaf (it is the only example I have because everything is new. It is slower because it uses the wrong algorithm. the right version is the SAME speed, but I didnt save the link and I dont remember it) 22:01:11 but yet you can see it is not much slower anyway 22:01:26 -!- akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has quit [Ping timeout: 264 seconds] 22:01:49 I use quaternions, 3d vectors, rotation, map and foldl. all in short high level functions 22:02:01 SrPx: i have no idea what that is, but it sure has plenty of numbers in it 22:02:01 when I used those in javascript I had a performance of 50 ops / s ! 22:02:21 so I had to inline everything by hand and I finally managed to get the performance up to hundreds of ops per second 22:02:38 that is old code. now I just reimplemented in my language and boom. no effort, it is as fast as my hand optimized code 22:02:42 if that is not a proof of concept what is? 22:03:41 -!- antoszka [~antoszka@unaffiliated/antoszka] has quit [Ping timeout: 245 seconds] 22:04:00 and I don't know shit about compilers. someone competent would certainly be able to compile lambda calculus much better than I am 22:04:11 so it boils down to: people don't think lambda calculus is good to program in 22:04:27 bjz [~bjz@125.253.99.68] has joined #scheme 22:04:48 but why, god, if after a few definitions it is THE SAME as scheme? literally, you can make it look THE SAME as scheme. 
the only difference being that you just removed a few things from the specs and implemented in-language 22:04:51 -!- bjz [~bjz@125.253.99.68] has quit [Client Quit] 22:04:55 SrPx: most modern functional languages are built on LC plus lots of extensions 22:05:05 those extensions are necessary to do any real programming 22:05:25 bjz [~bjz@125.253.99.68] has joined #scheme 22:05:25 -!- davexunit [~user@fsf/member/davexunit] has quit [Quit: Later] 22:05:33 nisstyre_: and necessary to make them unable to compile anything other than the architecture they aimed initially!! 22:05:42 see how haskell fails miserably to compile to javascript 22:05:44 that is my point 22:05:52 I haven't followed this discussion thoroughly, but how does random access data work with lambda calculus? 22:05:55 and I don't think those extensions are necessary, as you can see 22:05:56 SrPx: well ghcjs is an experiment 22:06:03 akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has joined #scheme 22:06:07 ghc wasn't designed with JS in mind obviously 22:06:15 Tuplanolla: the compiler just recognizes it and translates to vectors!!!! 22:06:17 hell, GHC itself is a research project 22:06:28 nisstyre_: but that is the point!!!!! 22:06:36 it was designed with an architecture in mind 22:06:37 it should not!!! 22:06:42 that's not lambda calculus anymore if it's doing that 22:06:51 Tuplanolla: well, depends 22:06:56 a language should be designed to survive time. computers, architectures change!! programs dont!! what is written is written!!! 22:07:14 turbofail: it IS! it is just compiled to something that is not, to run. to be real 22:07:33 the lambda calculus has no concept of anything vector-like 22:07:36 SrPx: I don't think SML depends much on any architecture 22:07:37 my point is: the code base should be in lambda calculus because it is pure. then we make compilers to target whatever is the architecture we want to put our programs in! 22:07:45 I do like your optimism. 
22:07:46 or Haskell98 really 22:08:07 you can make a compiler target whatever architecture you want without having the input language be the "pure lambda calculus" 22:08:18 SrPx: so you're proposing flipping the normal compiler architecture 22:08:25 Tuplanolla: if you think about the usual church encoding of pairs (define (cons a b) (m a b)) (define (car p) (p (lambda (x y) x))) (define (cdr p) (p (lambda (x y) y))) 22:08:28 instead of desugaring to a core language you want to so add sugar 22:08:35 *to add sugar 22:08:46 you can certainly extend that to any size for a random access "struct" 22:09:08 yes, but when your language is complex enough doing so is hard. I'm just using lambda calculus because it is the best simplicity/power ratio I'm aware of. Could be SKI if you like so, or anything really. But not Haskell. Not Ocaml. Not even Scheme. Those have HUGE specs. That is the problem. nisstyre_ 22:09:13 the problem is that which sugar is the best one is probably going to be extremely hard to decide 22:09:19 forgot a (lambda (m) in the definition of cons 22:09:46 SrPx: come back when you can write a TCP/IP stack in lambdas 22:09:54 nisstyre_: I don't get what you mean btw, "instead of desugaring to a core language you want to add sugar" that is actually conflicting? 22:09:58 There's the "extend". 22:10:28 SrPx: you're proposing taking a small kernel language and deducing which constructs make it more efficient 22:10:46 Tuplanolla: a general vector definition would be tricky, but I'm not yet going to say impossible 22:10:49 normally it's the other way around, taking a big language and mechanically translating it into a small kernel language 22:10:52 nisstyre_: that is the problem!!!!!!!!!!!!!! TCP/IP stacks are NOT computation!! They are things about our world and machines!!! 22:10:53 that's how Scheme works 22:11:08 SrPx: so you think they should be done purely in hardware? 22:11:14 nisstyre_: but you CAN represent a program that uses TCP/IP with pure languages. 
Just use a convention. Then plug it into anything that understands that conventions! 22:11:20 those* 22:11:22 how are you supposed to fix security bugs? 22:11:24 *unless* you make the argument that access is really linear because functions are curried 22:11:36 which is probably fair enough 22:11:46 nisstyre_: which bugs? There are no security bugs in pure computation. Security doesn't even have to do with computation 22:11:46 Hence the query. 22:11:54 SrPx: false 22:12:07 SrPx: not all security bugs are "bugs" in the conventional sense 22:12:08 SrPx, your stance on removing from the language everything that is specific to an architecture is internally consistent with your point that programs should not be written with an architecture in mind. However, for the purposes of this discussion, I will contend that programs _should_ be written with an architecture in mind. 22:12:21 most of the major ones are flaws in the design 22:12:35 gnomon++ 22:12:47 gnomon: but then you are not addressing my argument, that doesn't make sense 22:13:10 are you saying programs shouldn't be designed to process information from people and connect physical systems? 22:13:13 I have one unrelated question. Does a curried Scheme variant exist? 22:13:23 i'm sure racket has one 22:13:26 Tuplanolla: doubtless 22:13:45 Tuplanolla: (define ((f a) b) (+ a b)) is valid Racket I think 22:13:51 I'd be infinitely more surprised if there weren't one 22:13:59 nisstyre_: not quite the same thing 22:14:02 Whenever I look for one, I only find ways to implement it with Scheme. 22:14:04 nisstyre_: that's worked in other schemes in the past, i think 22:14:13 -!- tupi` [~user@189.60.14.19] has quit [Ping timeout: 272 seconds] 22:14:14 nisstyre_: I'm saying we should have 2 separate things. A computation language that express algorithms. And languages that deal with hardware. Code written in that computation language lasts forever. My definition of a "quaternion" will live as long as universe lasts. 
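Backing up to ijp's church encoding of pairs (with the `(lambda (m) …)` wrapper he noted was missing restored), the "random access struct" extension he mentions is mechanical. A small JavaScript sketch, since that is the target language under discussion; all names here are illustrative:

```javascript
// Church pairs: a pair holds a and b by waiting for a selector m.
const cons = a => b => m => m(a)(b);
const car  = p => p(x => y => x);   // selector that keeps the first
const cdr  = p => p(x => y => y);   // selector that keeps the second

// Extended to a fixed-size "struct": a triple with three selectors.
const triple = a => b => c => m => m(a)(b)(c);
const fst = t => t(x => y => z => x);
const snd = t => t(x => y => z => y);
const trd = t => t(x => y => z => z);

console.log(car(cons(3)(4)));       // 3
console.log(snd(triple(1)(2)(3))); // 2
```

Each field access is constant-time once the struct has a fixed arity, which is what makes the "extend that to any size" remark work; a truly general vector (arbitrary length, indexed access) is the trickier case Tuplanolla and ijp go on to debate.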
22:14:37 nisstyre_: then I want to use that quaternion in a real program? Sure, I compile my language to C and use it inside C programs. Done! 22:14:41 many schemes have curried definition, but not curried application 22:14:43 SrPx: okay, so you're saying we should separate pure and impure code more? 22:14:47 to use a horrible turn of phrase 22:14:49 nisstyre_: I want to use that quaternion in a Python application? I compile it to Python and use it there! 22:14:59 SrPx, to support my point, I will offer up the idea that system architectures are what elevate raw material, silicon or otherwise, until it can support the mathematical abstractions you're after. 22:15:01 nisstyre_: yes! Totally! Those are 2 different things anyway 22:15:10 nisstyre_: but don't forget, it's lc, so there is no impure code :) 22:15:14 ijp: curried application meaning ((f a) b) instead of (f a b)? 22:15:19 Tuplanolla: eli barzilay wrote one for his class: http://pl.barzilay.org/resources.html see the discussion of schlac 22:15:33 SrPx, so architectures are just embeddings into physical space of the mathematical abstractions you're describing. 22:15:35 nisstyre_: meaning the two are not equivalent 22:16:00 gnomon: yes! That is, too, what I think. How does that go against what I am saying? 22:16:13 I was looking for something with f x (g y) z instead of (((f x) (g y)) z). 22:16:25 SrPx, it sounds like you're frustrated that this embedding exists, and you consider that hewing to the point-in-time particularities of a particular embedding is wasted effort. Yes? 22:16:52 ijp: looks like the curried definition won't let you do (f a b) in Racket, but if you do (define g (curry f)) then you can do both with g 22:16:53 -!- mgodshall [~mgodshall@8.20.30.249] has quit [Quit: mgodshall] 22:16:56 nothing about scheme requires any knowledge of the underlying architecture 22:17:11 gnomon: that is complicated wording but I guess that seems like it is correct. 
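ijp's distinction between curried definition and curried application — `((f a) b)` versus `(f a b)` — maps directly onto JavaScript arrow functions, which may make it easier to see why "the two are not equivalent". A small sketch (the names `f` and `uncurry2` are hypothetical):

```javascript
// Curried definition: fixes the call shape, analogous to
// Racket's (define ((f a) b) (+ a b)).
const f = a => b => a + b;
console.log(f(1)(2)); // 3 — curried application is required

// f(1, 2) does NOT give 3: the extra argument is ignored and
// you get back the inner function b => 1 + b.

// An uncurry helper recovers the (f a b) call shape,
// much like Racket's curry lets g accept both shapes:
const uncurry2 = g => (a, b) => g(a)(b);
console.log(uncurry2(f)(1, 2)); // 3
```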
I'm frustrated that programming languages are complex, mostly because they are designed aiming at an architecture. 22:17:21 gnomon: incidentally, that position doesn't allow you to ever actually program :) 22:17:40 gnomon: but architectures are born and die, while code written lasts. Code written in Python today will be useless in 100 years (unless we make translations). 22:17:44 SrPx, I contend that aiming a program to take advantage of an architecture is a complexity _win_, not a loss. 22:17:50 ijp, I'm getting to that. 22:18:11 SrPx: most code will be useless because of the sequential nature of the majority of code written 22:18:20 except for things that are pure and inherently sequential 22:18:32 -!- ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has quit [Remote host closed the connection] 22:18:50 SrPx, au contraire, I disagree: code, like any language - spoken and written, I mean, not just constructed - is interpreted differently over time. 22:18:55 gnomon: that IS a win, indeed. But why do we need to glue that INTO THE LANGUAGE DEFINITION? We can make architecture optimizations in the COMPILER part. This way, if architecture changes we just change the COMPILER, not EVERY PROGRAM WE EVER WROTE in a hardware-guided language! 22:19:16 nisstyre_: yes!! That is too my point 22:19:17 as I saw it when I ponked him, SrPx is in a superposition of states whereby he wants to argue 1) representation doesn't matter so just program in the lambda calculus 2) representation does matter so you can have efficient numbers 3) everything is a function, but not every function can be applied to an arbitrary value 4) but you don't need types 5) programming to any architecture is bad since you will only need to rewrite it later 22:19:19 SrPx, because hiding the architecture behind an impenetrable curtain is a _loss_, not a win! 22:19:24 did I leave any out? 
22:19:59 also, he's insisting that no language that exists today has ever been compiled to something other than its original target 22:20:02 ijp, no, we have the same conclusion; I am running the ol' reductio ad absurdum, we'll see how it pans out. 22:20:15 turns out i've been using a PDP this whole time! 22:20:22 ijp: haha no that is not at all what I think, while at the same time it is, indeed. I think (1), half of (3) belongs to a language design. (2) (4) (5) belong to things outside the language: editors, compilers, etc 22:20:37 SrPx: but that usually has nothing (except how easy it is maybe) to do with the language 22:20:44 just the design of the program 22:21:05 you can write parallel code in C, most people don't except for stuff the compiler does for you 22:21:19 (SIMD stuff etc) 22:21:21 SrPx, the reason why exposing the architecture to the language spec is a win is that it allows programmers - those who wrote the software originally _and_ those who port it to new architectures in the future - to make meaningful decisions about program designs which differentiate between usefully fast and uselessly slow. 22:21:21 Curious, samth, but is there an implementation of such a thing? 22:21:38 Tuplanolla: yeah, you can download the implementation from that page 22:21:41 I saw an awful lot of uppercase when I just looked at my #scheme window. 22:21:42 What's up? 22:21:55 turbofail: we're not in Tech Square any more, Toto 22:22:03 Riastradh: just trolling 22:22:07 I was thinking more than a tiny example. 22:22:27 -!- Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has quit [Ping timeout: 272 seconds] 22:22:58 gnomon: stop trying to win the argument and think about what you are saying for a second!! It is not about arguing. Look, you are saying it is a win because it helps those porting software to new architecture, while my argument was that porting wouldn't be even necessary! Do you see how that is absurd? 
22:23:33 plonk 22:24:25 You sound like a category theorist, SrPx. 22:24:41 You're not actually listening to what I'm saying, so I'll wait until we can discuss the point later. I'm not trying to correct a way in which I think you are incorrect, I am expressing an opposing viewpoint so we can explore the problem space together. 22:24:51 -!- _5kg_ [~zifeitong@60.191.2.238] has quit [Ping timeout: 264 seconds] 22:24:52 Now, now, Tuplanolla, #scheme is not the place for playground insults. 22:25:08 gnomon: I will try to read everything you said again, hold on then. But I think we are just not understanding each other 22:25:09 It's a wonderful subject! 22:25:21 ...some of my best friends are category theorists... 22:25:33 ijp, other than that, you're a pretty neat person! 22:25:36 *gnomon* ducks and runs 22:26:06 *SrPx* sighs 22:26:35 SrPx, shall we continue? 22:27:08 I kind of understand what you are doing, but this is not being effective. I still believe in what I believe, and the only thing that makes me doubt it is authority: smart people don't believe in that, so I shouldn't believe it either. But that is never a good way to take things. So that is what I have. 22:27:15 gnomon: but yes 22:27:26 Tuplanolla: here's an example: http://pasterack.org/pastes/1209 22:28:32 gnomon: but just so we can be effective, can you describe what I am proposing? I am bad at expressing myself so you could be actually addressing something else...! 22:28:44 Even more so in english that is not my mother tongue 22:29:01 Tuplanolla: btw I didn't even know that was an insult :{ 22:29:01 SrPx, OK, to continue: my assertion is that exposing the underlying hardware architecture at the language level is a win, because it allows programmers (and porters) to make reasonable assumptions about performance. Does that strike a dissonant chord? I think it may. 22:29:18 It's not exactly that, SrPx. 22:29:28 What's the point of expressing a computational idea that there's no computer to compute? 
22:29:30 I see ((f 1) 2) instead of f 1 2 there, samth. 22:30:07 Tuplanolla: try changing it to (f 1 2) 22:30:08 gnomon: proceed 22:30:11 Riastradh: ask Peter Shor 22:30:29 Oh. 22:30:32 There's a theoretical computer to compute it, at least, whose properties are formally understood. 22:30:49 SrPx, it sounds like you are proposing a programming language embedded purely in the mathematical space, where (eg.) computing in Church numerals could easily be translated by a SufficientlySmartCompiler into fast, machine-native code, and the result would be that all programs could then become polished models of perfection, and compilers could hide all the dirty details. Yes? 22:30:51 Tuplanolla: samth very cool to know about that racket capacity, by the way. 22:31:01 But you try writing down Shor's algorithm in lambda calculus! 22:31:20 andre von tonder made a quantum lambda calculus, I think 22:31:42 Riastradh: yeah, and I just looked it up and it has been run on the number 21 22:31:59 gnomon: yes...! I do indeed think that and am willing to defend the so disregarded "sufficientlysmartcompiler" concept. (I do actually believe even a "notsosmart" compiler can do a good enough of a job) 22:31:59 I dimly remember that, ijp. Any idea whether it has any actual relation to quantum computers? 22:32:05 no idea 22:32:33 http://www.mathstat.dal.ca/~selinger/quipper/ 22:32:35 it's in the uncomfortable middle ground where it seems interesting, but not interesting enough that I need to read about it right now 22:32:43 SrPx, excellent, excellent! (I do not mock or deride the concept of a SufficientlySmartCompiler, I think it is a wonderful and important idea. But bear with me.) 
22:32:46 gnomon: to illustrate my point, my 150 lines compiler is able to go from almost-lambda-calculus to ridiculously-fast-javascript that is better than what I took one day to optimize by hand 22:32:54 gnomon: but yea proceed 22:32:58 which is why I have had the paper sitting on my hard drive for about 4 years... 22:33:30 `ridiculously fast javascript', heh. 22:33:35 davexunit [~user@fsf/member/davexunit] has joined #scheme 22:33:50 -!- oleo [~oleo@xdsl-84-44-179-171.netcologne.de] has quit [Ping timeout: 264 seconds] 22:33:52 Riastradh: ridiculously fast turtle: a turtle that is faster than other turtles. nothing wrong with that, right? 22:33:57 SrPx, the problem is that writing a SufficientlySmartCompiler takes a lot of work. A huge amount of work. A truly staggering, shockingly huge amount of work. In practice. 22:33:59 but if there was hope for "bananas, lenses and barbed wire" there is hope for it 22:34:11 oleo [~oleo@xdsl-78-35-152-142.netcologne.de] has joined #scheme 22:34:35 SrPx, and the issue is that all that work goes into some - many! - layers of a compiler that may target an architecture with an effective life span of 5-10 years. 22:34:49 gnomon: if you like that kind of stuff... citation needed! I don't think that, at all. Again, my dumb compiler is the best proof of concept I have. What do you say about it? 22:35:04 show us your compiler 22:35:13 Riastradh: it's not so heh these days 22:35:15 I think if it's `fast' then you're measuring the compiler in the JavaScript engine you're using... 22:35:38 SrPx, how long did it take to write GCC? How many person-years did it take to write Intel's optimizing compiler? How many hours of work went into Racket, or Mozilla's SpiderMonkey, or Stalin, or LuaJIT? 22:35:43 Riastradh: pretty much 22:36:20 SrPx, I believe that you're conflating "writing a compiler is easy" with two other similar ideas: 22:36:21 turbofail: I can show results of the compiler... 
I don't want to open the code yet as it is a mess, I will be doing so in the next months. It is not by any means anything with value, by the way. Just a 3-step transformer 22:36:30 sigh, http://arewefastyet.com/ still doesn't label its axes 22:36:37 well at least describe how it works 22:36:51 1- a compiler used only by the programmer who wrote it can generate extremely fast code with very little complexity; 22:36:54 turbofail: so if you want, send me some lambda calculus program. notice it is augmented with numbers and lists, but nothing else 22:37:09 2- writing a compiler is easy, so writing a fast, general, correct compiler must be at least moderately easy. 22:37:27 SrPx, do you agree that you hold the beliefs I just listed? 22:37:30 SrPx: so a lexer/parser, some kind of AST transformer, and then something that spits out javascript? 22:38:29 Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has joined #scheme 22:38:34 SrPx, I am not trying to entrap you in a rhetorical cul de sac, by the way. This is not an adversarial conversation. 22:38:36 ijp: it's ms for sunspider and kraken, and "score" for octane 22:38:40 gnomon: you have a point, *maybe* writing a compiler is not that simple, under those terms it is. But... 22:39:04 I like how ARE WE FAST YET? gets stuck at "Loading..." if JavaScript is disabled. 22:39:05 SrPx, there is another knock-on effect. 22:39:08 SrPx: what primitives do i get? 22:39:11 samth: I can infer that "smaller is better" from the relative positions of v8 and jsc :) 22:39:19 er, lower, not smaller 22:39:30 do i get to use define? 22:39:52 gnomon: 1. it is a *maybe*. 2. we can't really know the answer for that. 3. but..... EVEN considering that writing compilers is hard, my argument holds. because it would still be MUCH easier to write a compiler than porting the entire codebase of a language to another new language that is better for a new architecture. 
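SrPx never shows his compiler, but turbofail's guess ("a lexer/parser, some kind of AST transformer, and then something that spits out javascript") is enough to sketch the shape of such a 3-step transformer. This is a toy reconstruction under those assumptions, not SrPx's code; it handles only `fn`, application, numbers, and variables:

```javascript
// Step 1: read an s-expression like "((fn (x) x) 42)" into nested arrays.
function parse(src) {
  const toks = src.replace(/\(/g, " ( ").replace(/\)/g, " ) ").trim().split(/\s+/);
  let i = 0;
  function read() {
    const t = toks[i++];
    if (t === "(") {
      const list = [];
      while (toks[i] !== ")") list.push(read());
      i++; // consume ")"
      return list;
    }
    return /^\d+$/.test(t) ? Number(t) : t;
  }
  return read();
}

// Steps 2/3: walk the tree and emit JavaScript. Multi-parameter
// fns curry, and application is curried to match: (f a b) => f(a)(b).
function emit(ast) {
  if (typeof ast === "number") return String(ast);
  if (typeof ast === "string") return ast;
  if (ast[0] === "fn") {
    const [, params, body] = ast;
    return params.map(p => `(${p} =>`).join(" ") + ` ${emit(body)}` + ")".repeat(params.length);
  }
  return ast.slice(1).reduce((acc, arg) => `${acc}(${emit(arg)})`, emit(ast[0]));
}

const js = emit(parse("((fn (x) x) 42)"));
console.log(js);        // (x => x)(42)
console.log(eval(js));  // 42
```

Real performance work (inlining, recognizing church numerals and mapping them to machine ints, and so on) would live as extra passes between `parse` and `emit`; that is exactly where gnomon's "staggering amount of work" objection applies.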
22:40:03 because i'm sure as hell not writing it using a fixed-point combinator 22:40:24 SrPx, what if porting to a new architecture did not require completely rewriting the codebase? 22:40:25 gnomon: so for that I ask: do you actually believe rewriting every program written in a language is by any chance harder than writing a single program (a compiler), in the case a new architecture appears? 22:40:26 ijp: basically all people who care already know how those benchmarks are measured :) 22:40:43 You have two paths and both are bad, SrPx: rewriting the programs or rewriting the compiler. 22:40:48 SrPx, what if porting to a new architecture required merely revisiting a few well-documented system-specific points in the codebase? 22:40:58 meh, never mind, i'm bored 22:41:03 Tuplanolla: unless the language in question is Forth 22:41:04 samth: indeed 22:41:09 and you're Chuck Moore 22:41:17 I'm not even Chuck. 22:41:38 gnomon: the codebase of EVERY single program ever made in a language? Without writing a compiler from the old language to the new language? 22:41:43 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 22:42:12 SrPx, why without writing a recompiler? 22:42:25 That is a perfectly valid tactic. 22:42:26 Tuplanolla: but the compiler is a single program. The codebase is a world of programs that may not even be within your reach (what happens with a program I wrote and did not publish? Do *I* have to change them because the prog lang I used decided to change to fit a new architecture?) 22:42:37 The compiler absolutely is not a single program. 22:42:56 I see the appeal, but that's not exactly true. 22:43:07 But that's a small quibble, let's assume that actually is true for now so we can address the greater point. 
22:43:11 -!- akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has quit [Ping timeout: 244 seconds] 22:44:24 SrPx, I suggest that there is a midpoint between (A) a world of mathematically pure programs buttressed by perfectly engineered compilers, one for each architecture, and (B) having to rewrite every program in existence from scratch for every new architecture. 22:44:56 gnomon: I see your point, you are pretty good. At this point we are considering an eventual architecture change, and you are arguing that the work required to compile every program written in a language is the same as to the work required to write a new compiler 22:45:08 buttressed doesn't get used nearly enough these days 22:45:33 gnomon: but let me turn this debate a little bit, can we be practical here and talk about a real world problem? 22:45:36 SrPx, I am contending that there is a three-way tradeoff to be made between language "purity" (for some value of that term), compiler complexity, and burden on the programmer(s). 22:45:56 ASau [~user@p54AFEC23.dip0.t-ipconnect.de] has joined #scheme 22:46:02 SrPx, absolutely, let's! 22:46:13 Why does the porridge bird lay its egg in the air? 22:46:17 gnomon: Haskell. Case 1: Haskell was just Lambda Calculus - everything else was a plugin. At this point, we would have a very fast Lambda Calculus compiler and nothing would change, users wouldn't even notice. 22:46:26 gnomon: Case 1: Haskell is what it is. 22:46:31 case 2* 22:46:57 Riastradh, I'm afraid that reference is lost on me... 22:46:58 because scrambled eggs are preffered by porridge? 22:47:02 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 264 seconds] 22:47:07 preferred* 22:47:10 SrPx: what makes LC so special? 22:47:13 some pun to that effect 22:47:40 So, JavaScript problem appears. A new architecture, indeed. As you can see, Haskell fails miserably to compile to JavaScript. 
Now, if it was lambda calculus, as you can see, even I without any knowledge about compilers was able to produce a satisfactory compiler from LC->JavaScript. So, if haskell was just LC, every haskell program would be running in the browser today with no problems. 22:47:47 gnomon: what about that? 22:47:55 nisstyre_, assume that it could be any mathematically "pure" model of computation - a Turing machine, lambda calculus, Iverson notation, whatever. Lambda calculus is not inherent to SrPx's argument. 22:48:16 akp [~akp@c-98-229-24-3.hsd1.ma.comcast.net] has joined #scheme 22:48:21 SrPx: but what about the DOM? 22:48:21 -!- jao [~jao@pdpc/supporter/professional/jao] has quit [Ping timeout: 244 seconds] 22:48:26 thue 22:48:27 nisstyre_: as I said, just that it has a very high complexity(language_definition)/power(language) ratio. Other languages have that too, I'm just picking one 22:48:28 gnomon: it seems to have been pretty central from the beginning 22:48:37 we'll do it all in thue 22:48:53 ijp, indeed, but by the principle of charity I am assuming the strongest possible form of SrPx's argument. 22:48:55 SrPx: would DOM manipulation just be done via some predefined functions? 22:49:12 nisstyre_: I just represent the DOM as trees in my language and I run an outsider DOM-Renderer in javascript to actually give life to my computation. performance is excellent 22:50:28 so, for example, I just adopt a convention. Say: [["div","hello",["pos","20px","20px"],["child",["span"...]]] ... that is all inside my language. Then I have a function render(tree){} that transforms that into the actual page inside the JS engine. Nothing special, really 22:50:37 SrPx, I will answer your question with a question: does Haskell fail miserably to compile to JavaScript because the Haskell-to-JS compiler(s) is(are) poor? Because of an inherent semantic mismatch between the way that Haskell and JS accidentally expose performance-critical mechanisms of their own implementations? 
Or because programmers have taken been writing Haskell programs according to... 22:50:38 -!- mrowe is now known as mrowe_away 22:50:58 and from now I could port any of my sites to anywhere else by just implementing such a renderer, nisstyre_ 22:51:13 ...the performance that a particular Haskell implementation has (or a series of them have) offered, leading again to a cloaked dependence on a particular architecture? 22:51:46 -2s/taken been/been/ 22:52:28 gnomon: that is a good point. I believe that all of those are correct. And my point is: if Haskell was MUCH SIMPLER, then writing a good compiler would be MUCH EASIER. 22:52:43 *SrPx* does not know how/where to use "was" and "were". English is hard 22:52:44 SrPx, but there is a flip side to your argument. 22:52:47 SrPx: Haskell is actually not that complicated 22:52:57 SrPx: what you're thinking of are GHC's extensions 22:53:13 even type classes are easy to implement 22:53:20 pretty easy 22:53:22 SrPx, if Haskell was MUCH SIMPLER, then writing a good compiler would be MUCH EASIER, but the result would expose (and thus make programs dependent upon) MUCH MORE of the compilation target! 22:53:25 (assuming you have impredicative types) 22:53:31 -!- akp [~akp@c-98-229-24-3.hsd1.ma.comcast.net] has quit [Ping timeout: 244 seconds] 22:53:38 SrPx, the three-way tradeoff between language purity, compiler perfection, and programmer burden is real. 22:53:54 rudybot: I told you adding type functions was a bad idea 22:53:54 ijp: no, just the macro, so it can be a stand alone project and people can contribute adding stuff like the growl thing or whatever notification backend is preferred 22:54:21 nisstyre_: I honestly don't know how complicated it is, all I know is I can't compile a processor-hungry function from haskell to javascript and expect good performance as of today. I tried it a lot. 
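SrPx's DOM-as-trees convention above (nested lists plus a separate `render(tree)` that produces the host representation) can be sketched in a few lines. This is an illustrative Python simplification, not SrPx's actual code: attributes like `["pos", ...]` are ignored, and the "host" is just an HTML string.

```python
# Sketch of the convention described above: the UI is a plain tree
# (nested lists), and a separate renderer turns it into the host
# representation. Here the "host" is an HTML string; real attribute
# handling (e.g. ["pos", "20px", "20px"]) is deliberately omitted.
def render(tree):
    tag, *rest = tree
    text = "".join(part for part in rest if isinstance(part, str))
    children = "".join(render(part) for part in rest if isinstance(part, list))
    return f"<{tag}>{text}{children}</{tag}>"

page = ["div", "hello", ["span", "world"]]
print(render(page))  # → <div>hello<span>world</span></div>
```

The point of the design is that the tree lives entirely inside the "pure" language, while only the renderer knows anything about the DOM, so porting means rewriting the renderer alone.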
22:54:56 gnomon: I have no idea which time zone you are in, but you should note that (collectively) we've been at this discussion for over 90 minutes 22:55:00 gnomon: now this is one argument I don't get. "if Haskell was MUCH SIMPLER, then writing a good compiler would be MUCH EASIER, but the result would expose (and thus make programs dependent upon) MUCH MORE of the compilation target!" 22:55:05 ijp, I read the scrollback. 22:55:41 I'm merely suggesting you have a time in mind at which to end it 22:55:42 ijp: /ignore is your friend 22:56:39 samth: right and it's on for a few people, but I'm always reticent to /ignore the transitive closure of people replying 22:56:56 SrPx, imagine a small, simple language; a small, simple compiler; and a complicated program which must run fast, because it does a very important job of some kind. A programmer who wants the program to run faster will experiment with different implementation techniques to achieve greater speed. That experimentation will yield results! 22:57:01 ijp: the world needs threaded irc 22:57:03 I guess it needn't be transitive 22:57:27 -!- langmartin [~langmarti@host-68-169-175-226.WISOLT2.epbfi.com] has quit [Quit: sleep] 22:57:45 SrPx, those results will show that some techniques run faster. The performance tests the programmer is running will reveal details about everything that supports the computation: the algorithm, the compiler, the architecture. 22:57:59 gnomon: i believe that, in practice, if the language is so simple the compiler will find the optimal implementation and the attempts of the programmer to gain speed boosts will be useless, anyway. for example, 22:58:03 samth, the world had threaded IRC, and then Google killed Wave. 22:58:13 SrPx, BINGO. 
22:58:15 wave is alive, just ignored 22:58:26 like your grandmother in that home 22:58:31 it's merely resting 22:58:45 gnomon: i don't think that wave would be an effective replacement for #scheme 22:58:47 pining for the fjords 22:58:59 -!- mrowe_away is now known as mrowe 22:59:04 SrPx, do you believe that there is an optimal representation of every given algorithm, and that a SufficientlySmartCompiler should find it every time? 22:59:17 gnomon: ((a)(* a a)) and (((mul)((a)(mul a a)) ((a)(* a a))). A programmer might attempt to go from the second program to the first expecting performance gains. He won't. The resulting code is the same, in my language 22:59:25 it would be nice if we could generalise to say that the apache project is where projects go to die, but I'm not sure that's true 22:59:26 gnomon: no, I don't. 22:59:58 some of those projects live on obnoxiously, distracting people from making a better alternative 23:00:05 like hadoop 23:00:12 -!- nisstyre_ [yourstruly@oftn/member/Nisstyre] has quit [Quit: WeeChat 0.4.3] 23:00:35 SrPx, OK. Then do you believe that a programmer must make tradeoffs based on empirical evidence to achieve performance goals, while keeping in mind that some observed behaviour may be intentional on the part of the language designers and compiler authors, and some may be the result of how the architecture works? 23:00:36 gnomon: I do believe that for most programs the compiler can easily find an almost optimal solution and that it is very unlikely that the programmer will find a better solution 23:01:16 turbofail: isn't the answer to hadoop to realize that you probably don't have big data, and use a hash table instead? 23:01:29 gnomon: as in: the compiler can't find the best program, but it can do a much better job than the programmer and it would take a human programmer weeks to do what a compiler can do in a second, in the matter of optimizations. 
23:01:33 rudybot: do you reject satan and all his works and empty promises 23:01:34 ijp: ,g satan santa computer intrusion 23:01:43 *ijp* googles 23:01:44 gnomon: so, no, the programmer shouldn't make any performance considerations at all. just code the function the way he likes 23:02:08 samth: well, yes, that's the right answer sometimes 23:02:08 SrPx, that is an interesting belief. On what evidence do you base it? 23:02:12 but not always 23:02:25 turbofail: the other times you use a list? 23:02:42 SrPx, sorry, let me ask a different question. 23:02:50 gnomon: my own experience in the last few days and nothing else. As I said, my guloid->mesh one liner in my language produced the same performance as something I spent countless hours optimising by hand in javascript. 23:03:41 gnomon: actually, not just my own experience but the fact that source code for most programs is small. genetic programming, for example. it can find solutions from 0. now, the problem here is much easier: we are giving the compiler a solution and it just has to tweak a little bit 23:03:50 SrPx, my different question is this: when a program fails to achieve a performance goal, is it the responsibility of the programmer to fix it? Or must she simply hand it off to the compiler authors and plead "make this run faster"? 23:04:19 samth: well, maybe several lists 23:04:21 so, I mean - if genetic programming can find solutions to complex problems in a HUGE space faster than humans, a compiler can probably find optimizations in the small space that is that of finding a better representation of a program 23:04:23 -!- add^_ [~user@m176-70-197-83.cust.tele2.se] has quit [Quit: ERC Version 5.3 (IRC client for Emacs)] 23:04:35 gnomon: I know which answer I would like it to be 23:05:01 gnomon: he hands it off to the compiler authors! then the compiler authors identify the patterns that they were unable to identify initially and optimize them accordingly. 
bonus: everyone else benefits 23:05:02 -!- Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has quit [Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/] 23:05:25 indeed, I have done that a few times in my own experience with that language. 23:05:26 And then everyone was a compiler author. 23:05:42 SrPx, I humbly suggest that genetic programming has had an enormous impact on the real world of software development, as we can easily observe by looking at how it has impacted web browsers, email clients, operating systems, embedded software, cell phones, financial trading systems, health care, data compression... 23:05:50 -!- davexunit [~user@fsf/member/davexunit] has quit [Read error: Connection reset by peer] 23:06:00 SrPx, so are the compiler authors not also programmers? 23:06:18 how about "code poet" 23:06:19 SrPx, to whom do the compiler authors hand off their programs when they must run faster? 23:06:21 most of my optimizations on the compiler come from implementing a function naturally and noticing it resulted in slow code. for example, (map double (map double x)) resulted in HORRIBLE performance. then I found out about stream fusion and now that I implemented things correctly, completely unrelated code all around the base got MUCH faster. things like summing a filter of a mapped list. it was a miracle 23:06:39 if I just optimized my (map double (map double x)) to (map ((a)(double (double a)) x) that would never happen. 23:07:10 gnomon: to themselves. like I did 23:07:32 SrPx, so _someone_ is responsible for meeting performance goals. 23:07:41 gnomon: compiler authors are. 23:08:15 _5kg_ [~zifeitong@60.191.2.238] has joined #scheme 23:08:20 SrPx, and the job of a compiler author is to translate from the platonic ideal of a mathematical representation of computation into a program that takes into account the mechanics of how it actually runs..? 
23:08:52 what I like most about this conversation is that you are really smart and probably much more experienced than me - and you are touching points that I have been struggling with for the last years. so I am actually able to answer. 23:09:12 years is an exaggeration* 23:09:24 gnomon: yes! 23:09:32 SrPx, I'm delighted to hear that. Thank you for the kind words. That is precisely what I'm trying to do, since I remember having this very conversation from your perspective a few years ago. 23:09:48 SrPx, so, aren't we then all compiler authors? 23:10:54 the bad news is that you are still not able to convince me though... and that alone reinforces my beliefs :/ 23:11:05 gnomon: hmm I don't get that last point. we are all compiler authors? 23:12:12 SrPx, I don't believe that I will change your mind, but I am not trying to, and I don't think I should. Breadth and depth of experience will do that job. I'm just trying to plant the seeds of ideas in your memory so that when you run into something that _does_ change your mind, you can remember to help those who haven't yet discovered what you have. 23:12:44 wow. such sufficient. very compile. 23:12:46 (I am happy that you pointed that out, because that was an active effort on my part. "Oh, that function is slow. No, I will resist the urge and won't change it a little bit. I will find why the compiler is not able to make it by itself." 23:12:49 ) 23:12:58 *ijp* smacks turbofail upside the haid 23:13:05 gnomon: that sounds awesome 23:13:16 SrPx, since every programmer is trying to map her conceptual model of a problem solution into a computation, thinking about a problem is also part of the process of eventually producing a running program. 23:14:01 gnomon: oh but I don't think that. I think the programmer should just make something that works, regardless of how he does achieve that. Then the compiler is the one responsible to figure out what he tried to mean, and translating it into a proper representation on the machine. 
23:14:09 SrPx, the point I'm trying to make is that drawing a line between "program author" and "compiler author" is not particularly meaningful, since compilers are also programs. 23:14:38 But compiler authors are wizards who make the magic happen and program authors are just drudgeons who use it! 23:14:41 gnomon: it is, in that case. The compiler author is the only one that has to think about the machine because it is part of his problem. Everyone else shouldn't. 23:14:42 SrPx, but when the machine fails to perfectly translate it, the programmer must not give up! 23:15:01 gnomon: he should!! 23:15:22 SrPx, the programmer must continue to explore, continue to learn, so as to discover if the issue was with her understanding of the problem, or her expression of the program, or the way the compiler behaved, or the way the compiled program made the system behave! 23:15:42 quick rule of thumb: 99% of the time it's your fault 23:16:24 SrPx, and the goal of a useful programming system, including documentation, teachers, universities, code comments and the like, is to minimize the effort necessary to gain this understanding of why a solution is not performing the way it should! 23:16:59 gnomon: that is how it is today. Do you believe that will hold for 200 years? That in 200 years someone will write a factorial using non-memoized recursion, and the uber-futuristic-compiler will leave it as it is? 23:17:01 (0.0099% of the time it's the library you are using 0.000099% of the time the compiler, 0.00000099 the OS, and 0.00000001 the hardware) 23:17:43 figures courtesy of Lying Cretan Statistics Ltd. 23:17:53 We can stop worrying about all this when the first compiler that's smarter than a programmer is written. 23:17:56 SrPx, I believe that people will still be writing such programs, yes; and that they will learn that doing so is foolish because better techniques exist; and they will learn about these better techniques and become better programmers. 23:18:31 (from Crete?) 
23:18:33 Tuplanolla: The stupidest programmer? already there. 23:18:38 SrPx, I believe that truly excellent programmers will recognize good ideas and try to package them up so that other beginners may understand them more swiftly and completely, so that good ideas spread far and last a long time. 23:18:49 turbofail: cf. Epimenides paradox 23:18:54 SrPx, I dunno, what's the value in teaching a compiler to memoize factorial? Wouldn't it be easier to write factorial to be memoized? Is there a broader strategy that implies automemoizing factorial and is in general a good strategy? 23:19:00 gnomon: I don't think that at all, I'd bet money on that. I believe that programming will become a matter of "explaining a problem to the computer". A stupid factorial definition explains to the computer what I am trying to do, so it is done there. 23:19:05 SrPx, I also believe that good, working solutions will be forgotten, and rediscovered, and forgotten again. 23:19:11 ah i see 23:19:17 I forgot my quantifiers. 23:19:38 gnomon: so, good programmers will be those that are able to implement the "correct" program faster - not those who are able to optimize it. 23:19:51 SrPx, and what of youngsters? 23:20:18 There's actually a curious paper about a case study of mechanically translating a bad algorithm into a good one, SrPx. 23:20:27 You should read it. 23:20:40 "why scheme is faster than C" 23:20:54 or anything in the field of constructive algorithmics 23:20:57 Also good reading: http://c2.com/cgi/wiki?SufficientlySmartCompiler 23:21:02 gnomon: difficulty adjusts. We will just teach harder problems to them. Instead of a month explaining why that factorial definition causes a stack overflow (that won't even exist anymore), we will explain why that neural network can't solve certain problem. 23:21:21 The machine will be forgot... 
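The thread's running example of a "stupid" non-memoized factorial can be fixed mechanically, without rewriting the algorithm, which is Riastradh's point about writing it memoized in the first place. A minimal Python sketch using the standard library's cache decorator:

```python
from functools import lru_cache

# The thread's running example: a naive recursive factorial, with
# memoization bolted on mechanically rather than by rewriting the algorithm.
@lru_cache(maxsize=None)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(10))  # → 3628800
```

Repeated calls now reuse cached subresults, so `factorial(11)` after `factorial(10)` performs a single multiplication. Whether a compiler should discover this transformation itself, or the programmer should request it in one line as above, is exactly the disagreement in the discussion.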
gnomon I have read that by the way (: 23:21:21 mihai_ [~mihai@81.170.72.2] has joined #scheme 23:21:33 It's called Calculating Functional Programs: http://www.cse.iitb.ac.in/~as/fpcourse/calculating.ps.gz 23:21:36 Tuplanolla: there are many, I guess. Do you have the link? 23:21:43 Oh, thanks. 23:21:49 Tuplanolla: that falls into the latter of my category 23:22:01 categories* 23:22:38 SrPx, out of curiosity, have you ever programmer a neural network, or looked at the database of weightings produced by a network written by someone else? NN's are good at producing results, not so good at producing solutions that expand the aggregate knowledge of the human race. 23:22:46 -1s/programmer/programmed/ 23:22:59 (Are they even good at producing results...?) 23:23:12 (not really, but I'm assuming they are for SrPx's benefit) 23:23:15 gnomon: no, I completely noob any kind of AI. I just used that as an example of something complex 23:23:26 from what i understand, they're good enough 23:23:53 for certain things, like object recognition 23:23:53 In theory, NN's aren't particularly complex. 23:24:00 when machines can learn, they are going to be annoyed by what we call machine learning 23:24:04 I am really interested in that, though. I might give it a try after my hopeless "I have a cool idea for an editor but have no programming language to use" journey 23:24:46 SrPx, you'll have fun. It is educational. I suggest you look into superoptimizers while you're at it. 23:25:31 I've read something about that, but stopped for lack of time and prereqs. I don't think it's so fun, though. I feel I'm wasting my time because I'm not experienced enough to implement a programming language. 23:25:59 It's a tool. Build something with it. You don't have to have the perfectest tool to build a fun or useful thing. 23:26:07 I seriously just wish some day I'll open hacker news and find someone solved that problem and happily incorporate it in my editor and proceed to work in game dev ... 
23:26:21 Everyone always hates every programming language they have to use because the tool is never perfect. 23:26:22 SrPx, implementing a programming language is an exercise in perseverance, not a matter of experience. Look up JonesForth, or read through SICP! 23:26:49 but... I just wanted an editor. Honestly :/ 23:26:51 Designing a better programming language is really using an existing one to build a thing that is a tool that you like better. 23:26:59 Gotta run. 23:27:00 *poof* 23:27:07 ? see you? 23:27:11 *gnomon* steals Riastradh's seat 23:27:44 I have yet to go through the phase of writing my own programming language and text editor, because everything else is bad. 23:27:55 *ijp* looks in disgust at the stained carpet where gnomon was a moment ago 23:27:59 SrPx, if you keep waiting for a perfect programming language, you'll never have any fun. 23:28:06 I hear it's quite common. 23:28:14 gnomon: but honestly, do you believe in everything that you said, or do you believe that in the future it will be more like I say? 23:28:46 gnomon: I have a perfect programming language already, to be fair. I just wait for the compiler (: 23:29:11 customizing emacs fulfills a similar function 23:29:14 SrPx, I believe that in the future we will have smarter compilers that are able to operate at a higher level of abstraction than they currently do. I believe that in that sense you are correct. 23:29:41 seriously, you guys would understand. Any code that I write in that language is so easy to port it makes me happy. Next semester if a professor asks for a program in python, I will just write an LC->Python compiler using what I have and it will be there already. 
23:29:54 maybe the idea is bad, but for ME it is a wonder and it solves MANY of MY problems 23:30:19 SrPx, I do not believe that better compilers will free language designers to offer fewer, purer features to programmers, or that purer languages will overtake ones which offer both mathematical purity _and_ the ability to take advantage of knowledge about the underlying architecture. 23:31:00 well, it *would* be nice if a compiler could for example automatically choose a list or a vector implementation for me, depending on what is more efficient for that concrete run... 23:31:20 ecraven, J and some APL's do that. 23:31:28 gnomon: I really don't understand that belief. Why don't they just put that knowledge in the compiler? I mean, we know vectors are fast. So we compile lists to vectors. It is the same use of the knowledge, except instead of having a "list" and a "vector" primitive (like Scheme) we have only one. 23:31:53 gnomon: now you say, lists are faster for some algorithms. Well, let the compiler analyse. If we are using a lot of cons and almost no random access, just use lists then! 23:31:57 (for-each (lambda (x) (give x (create-pony))) (channel-users "#scheme")) 23:32:12 SrPx, so you're talking about profile-guided optimization, then? 23:32:28 ijp: what is a "pony" and why are you giving it to me :( 23:32:59 gnomon: I don't know if this particular technique has a name... I'm just... saying it is the compiler's job 23:33:25 SrPx: the choice of vector vs. list might change between different input data, for example 23:33:35 if you have a lot of insertions, vectors might be way slower than lists 23:34:01 SrPx, a pony is a small horse. In some cultures little girls often want to own a pony, but they are very expensive; so when someone says "I want a pony", it means that they want something that everyone wants but which most people don't have, for practical reasons. 
23:34:26 SrPx, I'm saying that the thing you are describing _does_ have a name, and it is called "profile-guided optimization". 23:35:39 SrPx, typically the compiler will generate an "instrumented" binary, and that will be run through some tests - sometimes automated, sometimes in actual production environments, whatever. The generated profile is then fed back into the compiler so that it may weight optimization tradeoffs in ways that benefit the trained test cases. 23:36:01 gnomon: of course, when we can programmatically generate ponies, the price will plummet 23:36:27 ecraven: that is a very good argument, to which I propose that you should give information to the compiler about the expected input, without touching the code itself. It could be a separate file like this: (optimize MyLib.myFun (third-input use-vectors)). This way you have a piece of code that solves your real-world problem without polluting the semantics of the code itself. 23:36:49 SrPx: ideally, the compiler should re-compile code on the fly :) 23:36:53 like Self 23:37:00 SrPx, in a smaller way, a similar technique - instrumenting the actual performance of a running program to generate optimization points - is used by tracing VMs. JavaScript implementations get a lot of press for doing that now, but go read up about HP's Dynamo system. 23:37:34 ecraven: I would say that, but then we can NEVER be as fast as hand-optimized code, by logic. So some would not be happy enough with such a language and we'd fall back to gnomon's architecture-aware languages 23:37:43 Nizumzen [~Nizumzen@cpc1-reig5-2-0-cust251.6-3.cable.virginm.net] has joined #scheme 23:38:09 Even if the performance was 99% in most cases... by not being as fast people would not be content 23:38:09 SrPx, that un-pure, performance-specific program annotation mechanism _does_ exist: http://www.gnu.org/s/emacs/manual/html_node/cl/Declarations.html 23:38:11 SrPx: why not? 
hand-optimized code would face the same problem, and probably can't implement dynamic re-compilation 23:38:23 http://www.lispworks.com/documentation/HyperSpec/Body/m_declai.htm 23:39:52 In a more limited sense, you could say the same thing about C macros, or the "-O ..." progression of GCC optimization levels, or so on. 23:40:10 See, that is a pretty clever use of the compiler without an optimization. I guess this is a proof of concept of what I just said, clearing up that last argument about "vectors/lists could be better depending on input"? 23:40:18 (Declarations) 23:40:38 Or Forth's peephole optimizers and opcode fusers, or APL's phrase recognition and deforestation techniques, and so on. 23:40:57 ecraven: dynamic recompilation? Because it would always waste some resources in recompiling / identifying / profiling etc. at runtime... 23:41:21 ecraven: resources that could be given to the actual program, if they were known previously... there is no way around that 23:41:30 SrPx, so it becomes a tradeoff between resources wasted on dynarec vs. those wasted on missed optimization opportunities, doesn't it? 23:42:00 gnomon: hm no? We just agreed that no optimisation opportunity would be lost with something akin to Declarations 23:42:02 SrPx, you're touching on some important ideas that have been very, very well studied. 23:42:20 SrPx: not at all, if it depended on the *runtime* input, you can't well declaim/declare whether to use vectors or lists 23:42:30 hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has joined #scheme 23:42:56 -!- Okasu [~1@unaffiliated/okasu] has quit [Quit: Lost terminal] 23:43:04 ecraven: yes, but then machine-aware languages are not in a better place anyway. 23:43:27 so it can't be used as an argument in favor of those... 23:43:32 indeed, you'd maybe even be *faster* with dynamic recompilation, amortizing the additional time spent compiling :) 23:43:49 SrPx, why not? 
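gnomon's description of profile-guided optimization (run an instrumented build, feed the profile back, let the compiler pick tradeoffs) can be caricatured in a few lines. This is a toy Python sketch of the idea only; the operation names, threshold, and "compiler" are all my own inventions:

```python
from collections import Counter, deque

# Toy sketch of profile-guided optimization: run an instrumented version,
# record which operations dominate, and let the "compiler" (here just a
# factory choice) pick a data representation accordingly.
profile = Counter()

def instrumented_run(ops):
    """Record each operation performed by the workload."""
    for op, _ in ops:
        profile[op] += 1

def choose_sequence(profile):
    """Many front-insertions favour a deque; mostly indexing favours a list."""
    return deque if profile["push_front"] > profile["index"] else list

# A workload dominated by front-insertions, as in ecraven's example.
workload = [("push_front", 0)] * 100 + [("index", 0)] * 3
instrumented_run(workload)
seq_type = choose_sequence(profile)
print(seq_type.__name__)  # → deque
```

This also shows ecraven's objection concretely: the right choice depends on the workload actually run, which a static declaration made before seeing the input cannot know.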
When performance matters, why do you assert that a system which knows less about how a program behaves in reality can outperform one that knows more? 23:44:30 ecraven: oh, that is maybe true about dynamic recompilation but that again does not have any effect on the machine-guided-language vs math-pure-language argument 23:44:42 noobboob [uid5587@gateway/web/irccloud.com/x-lycntghllckygfog] has joined #scheme 23:44:51 gnomon: I don't... why are you saying that? The amount of information is the same in both cases. 23:45:02 no, but it is a point against declarations :) 23:45:26 if your program processes any sort of input, you probably have more information at runtime than at compile-time 23:45:44 ecraven: uh huh, that could be true indeed! 23:45:54 (and if your program processes no input then it can, and perhaps should, be replaced with a lookup table) 23:46:57 My declaration proposal was aiming at the following scenario: you know your program should be faster with vectors. How to solve it, without polluting the language spec with vectors? 23:47:11 -!- hiyosi [~skip_it@126.73.30.125.dy.iij4u.or.jp] has quit [Ping timeout: 246 seconds] 23:47:50 So, indeed it does not address the "but we don't know which is faster, vectors or lists, until runtime." In which case, dynamic compilation could work better. But this does not affect the initial argument, in the end, as dynamic compilation can be used for both 23:47:58 That is it. 23:48:01 SrPx, so you propose moving the vector-specific bits of the program out of the "core" language into the ghetto of mere implementation details? What should this ghetto language be called? 23:48:38 SrPx, at what point does the ghetto language become more useful than the pure core? 23:48:53 akp [~akp@c-50-133-254-143.hsd1.ma.comcast.net] has joined #scheme 23:48:59 SrPx, at what point does the benefit of the separation get lost by the constant need to mix two different languages? 23:49:01 gnomon: yes. I don't know... 
whatever the compiler creator wants to call it? "The LC->CUDA compiler hints language." 23:49:32 Sgeo [~quassel@ool-44c2df0c.dyn.optonline.net] has joined #scheme 23:49:58 gnomon: notice that my idea is this: you search for a program in a huge wikipedia of code written in that pure language. Then you find the compiler for the target you want. Then you give the compiler hints, if you may. 23:50:04 SrPx, my point is that the separate language you're proposing _is also a programming language_. Isn't it better to make it part of your core language, to diminish the mental burden on programmers and on compiler authors? 23:50:04 those ARE separate things, conceptually. 23:50:36 SrPx, do you know about Gödel? 23:51:26 gnomon: not really, because those are separate things. Look at this code: int (std::vector x){return x.len();} . Now suppose someone invents a lisp machine and makes it popular. That code is now garbage, it performs terribly in the new architecture. If the programmer JUST separated the information... then the program would still be fast enough! 23:52:02 std::vector * or whatever is the syntax of this . 23:52:10 gnomon: no 23:52:37 Also, it does not need to be a programming language. And the fact it is *optional* makes it even more desirable to be a separate thing. 23:53:04 It can be just some compiler options, for example. 23:53:50 The fact is: that information IS hardware-specific but is NOT algorithm-specific. Those ARE separate things. Algorithm, hardware. Naturally. 23:53:54 SrPx, you have written programs, so you know that expressing them well can be complicated. Do you truly believe that putting all that expressive power into a set of compiler options would yield any kind of complexity benefit? 23:54:03 So I am not trying to separate them into an ugly mess. It is how things are... 23:54:26 SrPx, I think you're mistaking fundamental truths for beliefs that you hold. 23:54:43 You believe hardware AND algorithms are not separated by nature? 
23:54:44 SrPx, I don't need to correct you on those points, you'll learn the difference as you go along. 23:55:07 SrPx, how do you think the hardware was designed? 23:55:19 gnomon: ...? 23:55:24 -!- theseb [~cs@74.194.237.26] has quit [Quit: Leaving] 23:55:43 SrPx, I can assure you that some algorithms informed the mechanism of the accumulator logic gates in your CPU. It was not discovered by some ever-probing neural network. 23:56:43 pardon, I don't understand that phrase grammatically, I guess 23:56:57 algorithms informed the accumulator logic? could you rephrase please? 23:57:44 SrPx: what do you mean by "information is hardware-specific but not algorithm-specific"? 23:58:11 SrPx, the hardware of your CPU implements lower-level algorithms. Trace it down the complexity path and you end up at circuit designs which work based on the observed rules of the behaviour of electricity. Those rules are also described by mathematical models. 23:58:41 ASau` [~user@p5083D20D.dip0.t-ipconnect.de] has joined #scheme 23:58:56 gnomon: I'm reminded of the perennial belief in languages so expressive/intuitive that you "won't need programmers" 23:59:10 ecraven: I mean that the "vector" inside the c++ code there is only there because it holds beliefs about how the hardware behaves. On the other hand, ((list)(foldl (add 1) 0 list)) is the equivalent lambda-calculus function. It does not have any hardware-dependent information. It is just an algorithm. 23:59:13 SrPx, I am asserting that trying to draw some artificial line between a mathematically pure programming language and the messy world of implementation details is not going to bear fruit: that line does not exist in any meaningful way, or rather you can draw it anywhere you like, so it is not useful. 23:59:27 ijp, indeed. And yet: lawyers. 23:59:33 ecraven: (((add b 1)) instead of (add 1), my bad 23:59:34 My COBOL sense, it tingles.
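SrPx's lambda-calculus example `((list)(foldl (add 1) 0 list))` computes a list's length purely as a left fold, with no assumptions about how the sequence is laid out in memory. A direct Python transcription (the name `length` is mine):

```python
from functools import reduce

# SrPx's example transcribed: length as a left fold.
# foldl (\acc _ -> acc + 1) 0 xs assumes nothing about how the
# sequence is represented in hardware; it only consumes elements.
def length(xs):
    return reduce(lambda acc, _: acc + 1, xs, 0)

print(length(["a", "b", "c"]))  # → 3
```

Because it only folds, the same definition works unchanged over lists, tuples, or generators, which is the hardware-independence SrPx is contrasting with the `std::vector`-specific C++ version.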