comp.lang.ada
 help / color / mirror / Atom feed
* Haskell, anyone?
@ 2015-11-15 20:42 mockturtle
  2015-11-15 20:51 ` Paul Rubin
                   ` (4 more replies)
  0 siblings, 5 replies; 54+ messages in thread
From: mockturtle @ 2015-11-15 20:42 UTC (permalink / raw)


Dear all,
I recently read, in the Italian edition of Linux Format, an interview with Katie Miller.  I was intrigued by the fact that when she talks about Haskell, she says the things Ada enthusiasts usually say about Ada ("when it compiles, it runs...", "the number of bugs is reduced...").  I remembered that some time ago I met someone who was enthusiastic about both Ada and Haskell (when I said that I program in Ada, he said something like "Finally!  Now I just need to find someone who likes Haskell.").

The question should be obvious: it seems that Haskell and Ada would attract the same kind of person; does anyone here know and like Haskell as well?

Riccardo

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 20:42 Haskell, anyone? mockturtle
@ 2015-11-15 20:51 ` Paul Rubin
  2015-11-15 20:53 ` Nasser M. Abbasi
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 54+ messages in thread
From: Paul Rubin @ 2015-11-15 20:51 UTC (permalink / raw)


mockturtle <framefritti@gmail.com> writes:
> The question should be obvious: it seems that Haskell and Ada would
> attract the same kind of person; does someone here know and like
> Haskell as well?

I use Haskell though I'd say its attraction is rather different from
Ada's.  The online book http://learnyouahaskell.com is a good way to get
started.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 20:42 Haskell, anyone? mockturtle
  2015-11-15 20:51 ` Paul Rubin
@ 2015-11-15 20:53 ` Nasser M. Abbasi
  2015-11-15 21:50 ` Mark Carroll
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 54+ messages in thread
From: Nasser M. Abbasi @ 2015-11-15 20:53 UTC (permalink / raw)


On 11/15/2015 2:42 PM, mockturtle wrote:
>
> The question should be obvious: it seems that Haskell and Ada would attract
>the same kind of person; does someone here know and like Haskell as well?
>

I do not know about Haskell, never used it (too many programming languages, too
little time, and they all seem to be re-inventing the same wheel again and again).

But there is another programming language called "D" that is supposed to
be close to Ada in terms of being a "good typed" language. See, for example,

http://forum.dlang.org/post/xnulsgpdhkgjpllwkgxq@forum.dlang.org

There is a Haskell vs. Ada report here:

http://haskell.cs.yale.edu/?post_type=publication&p=366

"We describe the results of an experiment in which several
conventional programming languages, together with the functional
language Haskell, were used to prototype a Naval Surface
Warfare Center (NSWC) requirement for a Geometric Region Server...

The results indicate that the Haskell prototype took significantly
less time to develop and was considerably more concise and easier
to understand than the corresponding prototypes written in several
different imperative languages, including Ada and C++."

This was in 1994!

  
> Riccardo
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 20:42 Haskell, anyone? mockturtle
  2015-11-15 20:51 ` Paul Rubin
  2015-11-15 20:53 ` Nasser M. Abbasi
@ 2015-11-15 21:50 ` Mark Carroll
  2015-11-15 22:11 ` mockturtle
  2015-11-16 10:25 ` Hadrien Grasland
  4 siblings, 0 replies; 54+ messages in thread
From: Mark Carroll @ 2015-11-15 21:50 UTC (permalink / raw)


On 15 Nov 2015, mockturtle wrote:

> I recently read in the Italian version of Linux Format an interview to Katie
> Miller. It got me intrigued the fact that when she talks about Haskell, she
> says things that usually Ada enthusiasts say about Ada ("when it compiles, it
> runs...", "the number of bugs is reduced...").

Katie is correct. Among other things, especially with the help of some
of the extensions, Haskell offers a strong static type system that is
very expressive and the compiler is good at checking those claims.
Getting my Haskell code to compile at all is most of the debugging work.
It is nice, for instance, to be able to declare things as numerical that
compile to simply being a basic number but in the source are different
types that the compiler's analysis won't let me mix up accidentally --
i.e. the type information can sometimes be used then discarded -- or to
be able to declare that two types have some relationship such that, in
the right context, if I know what one of the types is, I can then be
sure what the other is.
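A minimal sketch of the zero-cost distinct numeric types Mark describes, using `newtype` (the Metres/Feet names are invented for illustration; the derived Num instance needs the GeneralizedNewtypeDeriving extension he alludes to):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Two types that are distinct at compile time but are each just a
-- Double at run time: the type information is used, then discarded.
newtype Metres = Metres Double deriving (Show, Eq, Num)
newtype Feet   = Feet   Double deriving (Show, Eq, Num)

runway :: Metres
runway = Metres 3000 + Metres 500    -- fine: both operands are Metres

-- bad = Metres 3000 + Feet 500     -- rejected at compile time

main :: IO ()
main = print runway
```

The commented-out line fails type checking even though both values compile down to plain Doubles, which is exactly the "can't mix them up accidentally" property.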

> I remembered that some time ago
> I met someone that was enthusiast both about Ada and Haskell (when I said that
> I program in Ada, he said something "Finally! Now I need to find someone who
> likes Haskell."
>
> The question should be obvious: it seems that Haskell and Ada would attract
> the same kind of person; does someone here know and like Haskell as well?

"As well" is strong, as I don't use Ada. (-: Partly because of Haskell's
run-time system and lazy-by-default evaluation, while it /can/ be fast
in the hands of a skilled practitioner, and modern compilers can do
impressive optimization (like stream fusion), and Haskell does offer
powerful concurrency such as software transactional memory, it would be
a challenge to use it for real-time systems where one wants to be
sure of performance: I wouldn't want to use it to program an
anti-ballistic missile. Basically, Haskell /is/ really great for safety
and correctness but, unless fans of Haskell can correct me, I'd expect
that Ada has the edge when time and space really matter. Languages like
Idris might shine a light on the way forward, but right now they're
research projects, not for production code.

-- Mark

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 20:42 Haskell, anyone? mockturtle
                   ` (2 preceding siblings ...)
  2015-11-15 21:50 ` Mark Carroll
@ 2015-11-15 22:11 ` mockturtle
  2015-11-15 22:48   ` Nasser M. Abbasi
                     ` (2 more replies)
  2015-11-16 10:25 ` Hadrien Grasland
  4 siblings, 3 replies; 54+ messages in thread
From: mockturtle @ 2015-11-15 22:11 UTC (permalink / raw)


I just installed GHC & co. and I am giving the tutorials a try.  The syntax is a bit... intimidating, though :-).  I mean, I find something like

   function add(X, Y: Integer) return Integer;

rather than

    add :: Integer -> Integer -> Integer

(Yes, I understand that this means that add is a function that maps an integer to a function that maps integers to integers... [confused? :-)] and I guess that this is taken from the common mathematical notation for functions.)  I suppose one gets used to it; nevertheless, it is not the most intuitive syntax in the world... :-)

I see, however, that you can do some very powerful/funny stuff like infinite lists and mapping functions to functions.
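The infinite lists mentioned above really are as simple as they sound; a tiny sketch (the names `naturals` and `doubles` are made up here):

```haskell
-- Thanks to lazy evaluation, an infinite list is an ordinary value;
-- nothing is computed until some of it is demanded.
naturals :: [Integer]
naturals = [1 ..]

-- Mapping a function over the infinite list is also fine.
doubles :: [Integer]
doubles = map (* 2) naturals

main :: IO ()
main = print (take 5 doubles)   -- prints [2,4,6,8,10]
```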

Oh, well, back to the tutorial...

Riccardo


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 22:11 ` mockturtle
@ 2015-11-15 22:48   ` Nasser M. Abbasi
  2015-11-15 23:05     ` Mark Carroll
  2015-11-15 23:19     ` Jeffrey R. Carter
  2015-11-16  3:59   ` Paul Rubin
  2015-11-16  8:33   ` Dmitry A. Kazakov
  2 siblings, 2 replies; 54+ messages in thread
From: Nasser M. Abbasi @ 2015-11-15 22:48 UTC (permalink / raw)


On 11/15/2015 4:11 PM, mockturtle wrote:
> I just installed ghc & C. and I am giving a try to the tutorials.
>The syntax is a bit... intimidating, though :-).  I mean, I find more intuitive something like
>
>     function add(X, Y: Integer) return Integer;
>
> rather than
>
>      add :: Integer -> Integer -> Integer
>

If this is how Haskell looks, then thanks but no thanks, I'll skip ;)

One of Ada's best features is the ability to define subtypes very
easily. I have not seen this feature in any other language. (In other
languages one has to write a class and define all the operations each time.)

This, for me, is what makes Ada Ada. One can define a type
that matches the range of the physical quantity being modeled.
This helps catch many errors.

Can one do this in Haskell?  Say, define a new integer subtype that
can only take values in 1 .. 20, and have the compiler
and run time check this?

>
> Riccardo
>

--Nasser


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 22:48   ` Nasser M. Abbasi
@ 2015-11-15 23:05     ` Mark Carroll
  2015-11-16  4:11       ` Paul Rubin
  2015-11-15 23:19     ` Jeffrey R. Carter
  1 sibling, 1 reply; 54+ messages in thread
From: Mark Carroll @ 2015-11-15 23:05 UTC (permalink / raw)


On 15 Nov 2015, Nasser M. Abbasi wrote:
(snip)
> On Ada, One of the best features is the ability to define subtypes very
> easily. I have not seen any other language with this feature. (one has
> to make Class in other languages, and define all the operations each time).
>
> This is for me what makes Ada for me. One can define a type
> that matche the range of the physical quantity being modeled.
> This helps catch many errors.
>
> Can one do this in Haskell?  Say define a new integer subtype that
> can only take values from only 1...20.  And have the compiler
> and run time check for this?

One could do subrange types back in Modula-3 too! (Possibly also in
earlier Pascal-like languages.) I /can't/ think off-hand how to do this
easily in Haskell (at least without unusual extensions), but I /think/
that Idris, which I'd mentioned as illuminating a possible way forward,
would probably make you work a bit harder for subtypes, in the theorem
proving for dependent types, but with the payback of allowing a rather
wider range of subtype constraints. Unfortunately the leading
edge of compiler-checked correctness still makes one sweat for that
final thumbs-up.

-- Mark


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 22:48   ` Nasser M. Abbasi
  2015-11-15 23:05     ` Mark Carroll
@ 2015-11-15 23:19     ` Jeffrey R. Carter
  2015-11-16  9:36       ` J-P. Rosen
  1 sibling, 1 reply; 54+ messages in thread
From: Jeffrey R. Carter @ 2015-11-15 23:19 UTC (permalink / raw)


On 11/15/2015 03:48 PM, Nasser M. Abbasi wrote:
> 
> This is for me what makes Ada for me. One can define a type
> that matche the range of the physical quantity being modeled.
> This helps catch many errors.

One could do this with Pascal in 1970.

-- 
Jeff Carter
"Apart from the sanitation, the medicine, education, wine,
public order, irrigation, roads, the fresh water system,
and public health, what have the Romans ever done for us?"
Monty Python's Life of Brian

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 22:11 ` mockturtle
  2015-11-15 22:48   ` Nasser M. Abbasi
@ 2015-11-16  3:59   ` Paul Rubin
  2015-11-16  8:33   ` Dmitry A. Kazakov
  2 siblings, 0 replies; 54+ messages in thread
From: Paul Rubin @ 2015-11-16  3:59 UTC (permalink / raw)


mockturtle <framefritti@gmail.com> writes:
> Oh, well, back to the tutorial...

Haskell is one of the most interesting things I've done as a programmer,
and anyone into programming languages as an area of interest rather than
just as tools should try it out.  It's very different from Ada in the
sense of being a typed Lisp descendant, with a fairly complex runtime
system that makes heavy use of garbage collection.  So as Mark says, it
doesn't give you precise control of cpu or memory utilization the way
Ada does.  In fact it's easy to write Haskell code that consumes large
amounts of memory by accident (so-called space leaks).  It takes a while
to understand Haskell well enough to avoid these.

If it helps, I'd say the archetypal Ada application is something like a
jet engine controller, which absolutely has to satisfy realtime and
memory constraints, must never throw exceptions, etc., but whose
underlying function is reasonably simple (squirt fuel into the engine
at just the right time).

The archetypal Haskell application, OTOH, is e.g. a compiler: it does
intricate transformations on complicated data and must not produce a
wrong answer (i.e. generate wrong code), but other types of
failure (like taking too long or running out of memory) are merely
inconveniences (you fix the bug or find a workaround if that happens).


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 23:05     ` Mark Carroll
@ 2015-11-16  4:11       ` Paul Rubin
  2015-11-16  5:17         ` Nasser M. Abbasi
  0 siblings, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-16  4:11 UTC (permalink / raw)


Mark Carroll <mtbc@bcs.org> writes:
>> Say define a new integer subtype that can only take values from only
>> 1...20.  And have the compiler and run time check for this?

> I /can't/ think off-hand how to do this easily with Haskell (at least
> without unusual extensions) but I /think/ that Idris, ...

Haskell doesn't have subtypes per se.  You could define a new numeric
type that does a runtime check that a value created in that type is in
range, and throws an error otherwise.  This is called a "smart
constructor" in Haskell.  I think Ada's range types are similar: if X,
Y, and Z are in that 1..20 range type and X=15 and Y=15, then Z:=X+Y is
a legal Ada statement that doesn't raise any error until runtime.
Haskell's concept of types is entirely static (compile time), so Haskell
wouldn't consider "range 1..20" to be a type unless it could verify at
compile time that Z:=X+Y was in range, by analyzing the code that
produced X and Y.
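A sketch of the smart constructor idiom Paul describes, with the 1 .. 20 range from the earlier question (the names `Score` and `mkScore` are invented for illustration; in a real module one would export the type but not its data constructor, so `mkScore` is the only way to build a value):

```haskell
newtype Score = Score Int deriving (Show, Eq)

-- Runtime-checked construction: Nothing plays the role of Ada's
-- Constraint_Error for out-of-range values.
mkScore :: Int -> Maybe Score
mkScore n
  | n >= 1 && n <= 20 = Just (Score n)
  | otherwise         = Nothing

main :: IO ()
main = do
  print (mkScore 15)   -- Just (Score 15)
  print (mkScore 30)   -- Nothing
```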

There's a new experimental Haskell feature called "refinement types"
which does things like that, and Idris (and Agda and Coq) are built to
do such things, but that's all way beyond what traditional Ada tries to
do.  It's more like SPARK on steroids, where there are a bunch of proof
obligations ("verification conditions" in SPARK terminology) that have
to be satisfied for almost every statement.

So I'd say don't worry about this til you've gotten more used to the
basics of Haskell.  If you really want to know more, there's another
online book "Software Foundations" that goes into it using Coq:

  http://www.cis.upenn.edu/~bcpierce/sf/current/index.html

Meanwhile there's a Haskell newsgroup comp.lang.haskell which is usually
quiet, but some knowledgeable Haskellers hang out there, so you can get
questions answered there.  The Freenode #haskell IRC channel is also big
and friendly, plus there's the mailing lists, etc.  See www.haskell.org
for more info.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  4:11       ` Paul Rubin
@ 2015-11-16  5:17         ` Nasser M. Abbasi
  2015-11-16  5:48           ` Paul Rubin
  2015-11-16  8:45           ` Simon Wright
  0 siblings, 2 replies; 54+ messages in thread
From: Nasser M. Abbasi @ 2015-11-16  5:17 UTC (permalink / raw)


On 11/15/2015 10:11 PM, Paul Rubin wrote:

> range, or throws an error otherwise.  This is called a "smart
> constructor" in Haskell.  I think Ada's range types are similar: if X,
> Y, and Z are in that 1..20 range type and X=15 and Y=15, then Z:=X+Y is
> a legal Ada statement that doesn't raise any error until runtime.

It is detected at compile time: the compiler sees the problem
and gives a warning.

>cat foo.adb

with ada.text_io; use ada.text_io;
  procedure foo is
    type I is range 0 .. 20;
    x,y : I :=15;
    z : I :=0;
    begin
      z:= x + y ;
  end foo;


>gnatmake foo.adb
gcc-4.8 -c foo.adb
foo.adb:8:12: warning: value not in range of type "I" defined at line 4
foo.adb:8:12: warning: "Constraint_Error" will be raised at run time

--Nasser


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  5:17         ` Nasser M. Abbasi
@ 2015-11-16  5:48           ` Paul Rubin
  2015-11-16  5:59             ` Nasser M. Abbasi
  2015-11-16  8:45           ` Simon Wright
  1 sibling, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-16  5:48 UTC (permalink / raw)


"Nasser M. Abbasi" <nma@12000.org> writes:
> It is detected at compile time. It gives warning. So compiler
> can see at compile time the problem.

That's different, a special case that the compiler happens to be able to
figure out.  A true compile time check will refuse to compile the code
unless it can prove x+y<20, possibly with manual assistance.  Also the
lack of proof would result in an error (you can't compile the code), not
a warning.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  5:48           ` Paul Rubin
@ 2015-11-16  5:59             ` Nasser M. Abbasi
  2015-11-16  6:47               ` Paul Rubin
  0 siblings, 1 reply; 54+ messages in thread
From: Nasser M. Abbasi @ 2015-11-16  5:59 UTC (permalink / raw)


On 11/15/2015 11:48 PM, Paul Rubin wrote:
> "Nasser M. Abbasi" <nma@12000.org> writes:
>> It is detected at compile time. It gives warning. So compiler
>> can see at compile time the problem.
>
> That's different, a special case that the compiler happens to be able to
> figure out.  A true compile time check will refuse to compile the code
> unless it can prove x+y<20, possibly with manual assistance.

But the source code is legal, so it has to be compiled. It does
give a warning, though. In Ada, the practice is to keep working
on the code until one gets a clean compile; by clean, I mean
no warnings.

I think we are arguing semantics here. For me, the compiler
did detect this at compile time and issued a "message" to the user.
That was the point.

> Also the
> lack of proof would result in an error (you can't compile the code), not
> a warning.
>

Again, I am not a programming language lawyer. But legal code
should compile. An error should be generated only if the code does
not conform to the standard. So I do not think this should be an
"error", only a "warning".

--Nasser


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  5:59             ` Nasser M. Abbasi
@ 2015-11-16  6:47               ` Paul Rubin
  0 siblings, 0 replies; 54+ messages in thread
From: Paul Rubin @ 2015-11-16  6:47 UTC (permalink / raw)


"Nasser M. Abbasi" <nma@12000.org> writes:
> Again, I am not a programming language lawyer. But legal code
> should compile.

In Haskell, code with type errors is not legal.

> Error should be generated only if the code is not confirming to the
> standard. So I do not think this should be an "error" but only a
> "warning".

But I don't see any way to usefully get rid of the warning in Ada, since
Ada can't tell whether the condition is satisfied except in the simplest
cases.  SPARK can deal with more complex cases but it's not part of Ada
per se.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 22:11 ` mockturtle
  2015-11-15 22:48   ` Nasser M. Abbasi
  2015-11-16  3:59   ` Paul Rubin
@ 2015-11-16  8:33   ` Dmitry A. Kazakov
  2015-11-16  9:33     ` mockturtle
  2 siblings, 1 reply; 54+ messages in thread
From: Dmitry A. Kazakov @ 2015-11-16  8:33 UTC (permalink / raw)


On Sun, 15 Nov 2015 14:11:07 -0800 (PST), mockturtle wrote:

>    function add(X, Y: Integer) return Integer;
> 
> rather than
> 
>     add :: Integer -> Integer -> Integer
> 
> (Yes, I understand that this means that add is a function that maps an
> integer into a function that maps integers to integers... [confused? :-)]

Hmm, but that is not the equivalent of "add." In mathematical notation
"add" is:

   add : Integer x Integer -> Integer

a mapping of pairs (Integer, Integer) to Integer.

[ However, in Ada's terms nothing requires it to be a mapping. E.g. "add"
may have side effects, so that "add" with the same arguments would return
different values, which is then no mapping. There are still no pure
functions in Ada 2012, right? ]

> and I guess that this is taken from the common mathematical notation of
> functions) I guess one gets used to it, nevertheless, it is not the most
> intuitive syntax in the world... :-)

Seems so, except for that doubled colon, which normally has the meaning
"deduced" rather than "equal."

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  5:17         ` Nasser M. Abbasi
  2015-11-16  5:48           ` Paul Rubin
@ 2015-11-16  8:45           ` Simon Wright
  2015-11-16 14:38             ` Brian Drummond
  1 sibling, 1 reply; 54+ messages in thread
From: Simon Wright @ 2015-11-16  8:45 UTC (permalink / raw)


"Nasser M. Abbasi" <nma@12000.org> writes:

> with ada.text_io; use ada.text_io;
>  procedure foo is
>    type I is range 0 .. 20;
>    x,y : I :=15;
>    z : I :=0;
>    begin
>      z:= x + y ;
>  end foo;
>
>
>>gnatmake foo.adb
> gcc-4.8 -c foo.adb
> foo.adb:8:12: warning: value not in range of type "I" defined at line 4
> foo.adb:8:12: warning: "Constraint_Error" will be raised at run time

The compiler can only do this because the values and the assignment are
close enough (in the same scope?). How often does this sort of mistake
happen IRL?

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  8:33   ` Dmitry A. Kazakov
@ 2015-11-16  9:33     ` mockturtle
  2015-11-16  9:45       ` Paul Rubin
  0 siblings, 1 reply; 54+ messages in thread
From: mockturtle @ 2015-11-16  9:33 UTC (permalink / raw)


On Monday, November 16, 2015 at 9:33:57 AM UTC+1, Dmitry A. Kazakov wrote:
> On Sun, 15 Nov 2015 14:11:07 -0800 (PST), mockturtle wrote:
> 
> >    function add(X, Y: Integer) return Integer;
> > 
> > rather than
> > 
> >     add :: Integer -> Integer -> Integer
> > 
> > (Yes, I understand that this means that add is a function that maps an
> > integer into a function that maps integers to integers... [confused? :-)]
> 
> Hmm, but that is not an equivalent of "add." In mathematical notation "add"
> is:
> 
>    add : Integer x Integer -> Integer
> 
> a mapping of pairs (Integer, Integer) to Integer.
> 

<disclaimer>
I'm not an expert, I am still at the tutorial level.  What follows is what I understood so far... :-)
</disclaimer>

I agree, but in Haskell tutorials I found the point of view that I described above.  The reason is that in Haskell functions can take only one argument, so if you want more than one argument you need this kind of "iteration."  To be honest, you can also define "add" as a function that maps a pair (x, y) to an Integer: you still have a one-argument function that operates on a single object, that is, a pair.  The iterated-function approach allows you to do something like

   inc = add 1

In this case, inc is the Integer -> Integer function returned by "add" when called with argument 1.

Source: https://www.haskell.org/tutorial/functions.html
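The partial application described above can be run directly (a small self-contained sketch of the tutorial's `add`/`inc` example):

```haskell
-- The curried "add": applying it to one argument yields a new
-- function of type Integer -> Integer.
add :: Integer -> Integer -> Integer
add x y = x + y

inc :: Integer -> Integer
inc = add 1

main :: IO ()
main = print (inc 41)   -- prints 42
```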

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 23:19     ` Jeffrey R. Carter
@ 2015-11-16  9:36       ` J-P. Rosen
  2015-11-16 18:14         ` Jeffrey R. Carter
  0 siblings, 1 reply; 54+ messages in thread
From: J-P. Rosen @ 2015-11-16  9:36 UTC (permalink / raw)


Le 16/11/2015 00:19, Jeffrey R. Carter a écrit :
> On 11/15/2015 03:48 PM, Nasser M. Abbasi wrote:
>>
>> This is for me what makes Ada for me. One can define a type
>> that matche the range of the physical quantity being modeled.
>> This helps catch many errors.
> 
> One could do this with Pascal in 1970.
> 
Not really. In Pascal, all integer types are compatible. In Ada terms, a
Pascal integer /type/ is really a subtype of Pascal Integer.

-- 
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00
http://www.adalog.fr

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  9:33     ` mockturtle
@ 2015-11-16  9:45       ` Paul Rubin
  0 siblings, 0 replies; 54+ messages in thread
From: Paul Rubin @ 2015-11-16  9:45 UTC (permalink / raw)


mockturtle <framefritti@gmail.com> writes:
> Source: https://www.haskell.org/tutorial/functions.html

That tutorial is a reasonable way to get started, but note that it is
very old.  You might look at the Haskell Wikibook

  https://en.wikibooks.org/wiki/Haskell

which incorporates the tutorial, and also the "Learn you a Haskell" book
that I linked earlier, and also "Real World Haskell"
( http://book.realworldhaskell.org ) which unfortunately is now also
somewhat out of date.  But they're all ok for starting.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-15 20:42 Haskell, anyone? mockturtle
                   ` (3 preceding siblings ...)
  2015-11-15 22:11 ` mockturtle
@ 2015-11-16 10:25 ` Hadrien Grasland
  2015-11-16 11:19   ` Simon Wright
                     ` (4 more replies)
  4 siblings, 5 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-16 10:25 UTC (permalink / raw)


Le dimanche 15 novembre 2015 21:42:27 UTC+1, mockturtle a écrit :
> Dear all,
> I recently read in the Italian version of Linux Format an interview to Katie Miller.  It got me intrigued the fact that when she talks about Haskell, she says things that usually Ada enthusiasts say about Ada ("when it compiles, it runs...", "the number of bugs is reduced...").  I remembered that some time ago I met someone that was enthusiast both about Ada and Haskell (when I said that I program in Ada, he said something "Finally!  Now I need to find someone who likes Haskell."
> 
> The question should be obvious: it seems that Haskell and Ada would attract the same kind of person; does someone here know and like Haskell as well?
> 
> Riccardo

*** Rant warning, all FP fans please close your eyes for a second ***

I've been learning OCaml, a close cousin of Haskell, for about a month now, in an attempt to better understand functional programming's principles, benefits and drawbacks. Here are my thoughts so far:

Functional programming is based on the core tenet that mutable state is evil and should be obliterated. In doing so, functional programming proponents make some good points: mutable state makes programs less predictable, less testable, and harder to understand. And by eliminating mutable state, functional programming also puts an end to an endless quarrel between programmers and mathematicians about the meaning of the "=" sign.

Now, that's the theory. In practice, eliminating state also has strong negative consequences. For one thing, all useful programs have state. If a program does not display anything on the screen, does not write to any file, and in general does not command any external peripheral to *change state*, it is effectively a useless program that wastes CPU cycles for no good purpose. This means that in practice, all functional programming languages make some compromises in their war on mutable state. Typical compromises include putting an imperative-programming backdoor somewhere in the language, as in OCaml, or adding some abstract, opaque "state" type that functions take as a parameter and return as a result.

======

A second problem with programming without mutable state is that if you cannot change your state, you have to duplicate it, sometimes a lot. For example, consider pseudocode that computes a running average:

"For all i such that 2 <= i <= N - 1, B[i] = (A[i - 1] + A[i] + A[i + 1]) / 3"

Unless your programming language provides syntactic sugar for this specific scenario, you will have to fill in B's elements one by one. But because functional programs cannot change B once they have assigned a value to it, you will need to create a whole lot of copies of B, each with one element changed with respect to the previous one:

- A copy of B with its first element set
- A copy of B with its first and second element set
- A copy of B with its first, second and third element set
- and so on...

If B is big, this means a whole lot of memory traffic, and thus pretty awful performance. Unless, that is, the compiler is clever enough to optimize the copies out and effectively produce an imperative program. Which leads us to the second core tenet of functional programming: "The compiler and the user are both clever enough". This one is very strong, and is reminiscent of the usual rhetoric of C programmers: "If the user shoots himself in the foot with pointers, he's not good enough to handle C".

======

Functional programming language designers' belief that users are geniuses becomes most obvious when one tries to perform simple tasks in these languages. For example, let us assume that we have some list of data that we want to display to the user. In any modern imperative language, this takes one to three very simple lines of code:

"For all elements E of A, display E"

In a functional program, loops are considered evil, because inside they hide some kind of mutable counter, which goes against the idea of having no mutable state. So instead, you will often be forced to write something awful like this:

"let f : List a -> List =
.....if a is empty then
.........return empty list
.....else
.........display first element of a
.........return f(rest of a)"

Notice that this example is longer and much more difficult to understand, and makes heavy use of recursion (and thus assumes that the compiler will be clever enough to optimize it out). And for some fun, imagine what the functional programmer has to go through if he has to write the "display" function itself at some point.
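For the curious, the recursive pseudocode above corresponds fairly directly to the following Haskell (the name `display` is kept from the pseudocode; in practice the standard library's `mapM_ print` collapses this to one line):

```haskell
-- Direct transcription of the recursive pseudocode: print the head,
-- then recurse on the tail; an empty list means nothing left to do.
display :: Show a => [a] -> IO ()
display []       = return ()
display (x : xs) = do
  print x
  display xs

main :: IO ()
main = display [1, 2, 3 :: Int]
```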

Also, I have cheated by using return statements. Functional programming does not have statements, because statements have some underlying mutable state to them (the program counter) and thus originate from the realm of Satan. To write any complex program in a functional programming language, you are expected to do everything using expressions. So if you have ever suffered the horror of debugging three nested conditional expressions in C, imagine how a typical functional programmer feels when he has to debug this:

http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=ghc&id=4

======

In short, functional programming produces write-only code that pleases mathematicians and puts a lot of pressure on compiler writers. This makes it a programming style very much squarely aimed at people from academia. What about the claim that functional programs are always correct?

In a further attempt to please code aesthetes, functional programming languages have very elaborate type inference systems, which are heavily (ab)used by most practitioners. The idea is that you should not have to repeat yourself by writing types over and over again when you already know them. Also, a type inference fan will add, your programs get genericity for free: a program which adds floats will probably also work with ints, and so on.

But there is a price to pay for type inference. And to understand which price, we need look no further than the most successful implementation of type inference worldwide: C++ templates.

Whenever someone passes the wrong parameter to a C++ template library, that person is inundated with a wall of compiler diagnostics, sometimes thousands of lines long and full of library implementation details. The reason is that because type inference is used throughout the whole code hierarchy, compilers can only detect errors at the very bottom of it. The error then needs to bubble up the code and module hierarchy, accumulating an enormous amount of diagnostic noise along the way.

Decades of experience with full type inference suggest that it will always break software abstractions and result in incomprehensible compiler diagnostics, no matter how it is implemented. Which is why even the C++ people are now shifting away from it, adding an extra layer of type checking to templates in a future version of the standard with "concepts".

It sounds like functional programmers still have that lesson to learn, however, as even though many FP languages support type annotations, I have yet to encounter real-world functional programs which take the time to use them properly.

======

Even when you have managed to get through unreadable compiler diagnostics and broken software abstractions, even when you do get a functional program with type inference to compile, you are still far from having perfect assurance that your program is incorrect. In fact, because your program is barely readable, as outlined above, you cannot even check it in a basic way by reading through it manually.

The great thing about FP, though, is that the resulting program will be much easier to test, precisely because it has little to no mutable state. But I'm not sure if all the pain you have to go through in order to reach this point is worth it.

======

Now, of course, as I mentioned at the top, this post is rantish and I have made a couple of important simplifications. For example, I have ignored a very important piece of syntactic sugar featured by most functional programming languages, the list comprehension.

A typical list comprehension which filters integer elements according to some criterion, then returns their successors, would look like this:

"[ FOR ALL elem IN source_list WITH filter(elem) DO elem + 1 ]"

Like any data-parallel abstraction, this programming language feature is very nice to use, and will most of the time result in simpler code and the compiler doing the right thing. However, this only works for simple code that fits within the list comprehension's programming model, which is usually fairly limited. This is why I classify this feature as syntactic sugar: it is usually not designed for maximum power, only for occasional convenience.
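In Haskell's concrete syntax, that pseudocode comprehension might look like the following sketch (`keep` is a placeholder predicate of my own, standing in for whatever filter you have in mind):

```haskell
-- Sketch of the pseudocode comprehension above, in Haskell syntax.
-- "keep" stands in for an arbitrary filtering predicate.
keep :: Int -> Bool
keep x = even x

successors :: [Int] -> [Int]
successors source_list = [ x + 1 | x <- source_list, keep x ]

main :: IO ()
main = print (successors [1, 2, 3, 4])  -- [3,5]
```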

For example, my running average example couldn't be implemented with the list comprehension syntax above, because it is a gather operation, and not a map operation (it accesses multiple source indices). Few functional programming languages in the wild feature list comprehensions that support gather operations.

So, don't read only my somewhat negative opinion, or that of the Haskell zealots who will try to handwave away all the problems of the functional programming model. Read many arguments and counter-arguments, try it yourself, see how you like it, which problems it fits best, and so on. This is programming we're talking about, after all :)

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 10:25 ` Hadrien Grasland
@ 2015-11-16 11:19   ` Simon Wright
  2015-11-16 11:25     ` Hadrien Grasland
  2015-11-16 13:59   ` G.B.
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 54+ messages in thread
From: Simon Wright @ 2015-11-16 11:19 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:

> you still are far from having perfect assurance that your program is
> incorrect.

All programs are incorrect
This is a program
Therefore it is incorrect
QED


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 11:19   ` Simon Wright
@ 2015-11-16 11:25     ` Hadrien Grasland
  0 siblings, 0 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-16 11:25 UTC (permalink / raw)


Le lundi 16 novembre 2015 12:19:01 UTC+1, Simon Wright a écrit :
> > you still are far from having perfect assurance that your program is
> > incorrect.
> 
> All programs are incorrect
> This is a program
> Therefore it is incorrect
> QED

Hah, good catch :) Caught a couple of similar typos here and there while re-reading. I've really become WAY too used to having "Edit" buttons on all the communication channels I use.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 10:25 ` Hadrien Grasland
  2015-11-16 11:19   ` Simon Wright
@ 2015-11-16 13:59   ` G.B.
  2015-11-16 20:24   ` Jeffrey R. Carter
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 54+ messages in thread
From: G.B. @ 2015-11-16 13:59 UTC (permalink / raw)


On 16.11.15 11:25, Hadrien Grasland wrote:
> Whenever someone passes the wrong parameter to a C++ template library, that person is inundated by a wall of compiler diagnostics, sometimes thousands of lines long and full of library implementation details. This happens because type inference is used throughout the whole code hierarchy, so compilers can only detect errors at the very bottom of it. The error then needs to bubble up the code and module hierarchy, accumulating an enormous amount of diagnostic noise along the way.

Even accessibility check failures of Ada programs using anonymous
pointer types cannot come close to that! ;-)

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  8:45           ` Simon Wright
@ 2015-11-16 14:38             ` Brian Drummond
  0 siblings, 0 replies; 54+ messages in thread
From: Brian Drummond @ 2015-11-16 14:38 UTC (permalink / raw)


On Mon, 16 Nov 2015 08:45:32 +0000, Simon Wright wrote:

> "Nasser M. Abbasi" <nma@12000.org> writes:

>>>gnatmake foo.adb
>> gcc-4.8 -c foo.adb
>> foo.adb:8:12: warning: value not in range of type "I" defined at line 4
>> foo.adb:8:12: warning: "Constraint_Error" will be raised at run time
> 
> The compiler can only do this because the values and the assignment are
> close enough (in the same scope?). How often does this sort of mistake
> happen IRL?

Often enough to make the warning worthwhile: I've seen it at least once
IRL.

But you're correct: even passing one of the expression variables in as a
parameter is enough to defeat the warning, even when the actual parameter
is a constant, or even a literal. That case too could potentially be
warned about, but then you're well into the realm of diminishing returns.

-- Brian

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16  9:36       ` J-P. Rosen
@ 2015-11-16 18:14         ` Jeffrey R. Carter
  0 siblings, 0 replies; 54+ messages in thread
From: Jeffrey R. Carter @ 2015-11-16 18:14 UTC (permalink / raw)


On 11/16/2015 02:36 AM, J-P. Rosen wrote:
>>
> Not really. In Pascal, all integer types are compatible. In Ada terms, a
> Pascal integer /type/ is really a subtype of Pascal Integer.

True. Earlier in the post it discussed a subtype rather than a type, IIRC, so I
guess I should have quoted that part of the post rather than the part I did.

-- 
Jeff Carter
"Apple juice and Darvon is fantastic together."
Play It Again, Sam
127

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 10:25 ` Hadrien Grasland
  2015-11-16 11:19   ` Simon Wright
  2015-11-16 13:59   ` G.B.
@ 2015-11-16 20:24   ` Jeffrey R. Carter
  2015-11-16 23:23   ` Paul Rubin
  2015-12-06 12:59   ` David Thompson
  4 siblings, 0 replies; 54+ messages in thread
From: Jeffrey R. Carter @ 2015-11-16 20:24 UTC (permalink / raw)


On 11/16/2015 03:25 AM, Hadrien Grasland wrote:
> a lot of stuff

I can't say I disagree with you much, though I would have some caveats. The
functional language I'm most familiar with is Erlang. It was developed and used
by Ericsson for the real-world purpose of developing telephony systems; as such,
it may be less academic and more practical than Haskell. I can't imagine how one
could write a GUI program with it, but apparently Hammer & Chisel are using it
for parts of their Discord VOIP application.

-- 
Jeff Carter
"Apple juice and Darvon is fantastic together."
Play It Again, Sam
127


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 10:25 ` Hadrien Grasland
                     ` (2 preceding siblings ...)
  2015-11-16 20:24   ` Jeffrey R. Carter
@ 2015-11-16 23:23   ` Paul Rubin
  2015-11-17  8:26     ` Dmitry A. Kazakov
  2015-11-17 10:49     ` Hadrien Grasland
  2015-12-06 12:59   ` David Thompson
  4 siblings, 2 replies; 54+ messages in thread
From: Paul Rubin @ 2015-11-16 23:23 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:
> *** Rant warning, all FP fans please close your eyes for a second ***

I peeked ;).  The post is full of errors and misconceptions that might
clear up with more experience.

> Functional programming is based on the core tenet that mutable state
> is evil and should be obliterated.

Ermph, that's an overstatement, I'd say that the tenet is to separate
effects from denotational values.  State mutation (like i/o) is
considered an effect so FP tries to use it sparingly.  But Ocaml has
reference cells whose contents you can write to in ordinary expressions,
and Haskell has cells you can access with i/o-like actions, plus ways
to thread state through purely functional code.  
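As a minimal sketch of the Haskell side of that claim, `Data.IORef` provides exactly such cells, written to through i/o-like actions:

```haskell
import Data.IORef

-- A mutable cell in Haskell: created, modified and read through IO
-- actions, so the mutation is visible in the type rather than hidden.
main :: IO ()
main = do
  cell <- newIORef (0 :: Int)
  modifyIORef cell (+ 1)
  value <- readIORef cell
  print value  -- 1
```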

> in practice, all functional programmings always make some compromises
> in their war against mutable state.

We would say in Haskell that the Haskell program computes a pure value
called an i/o action (e.g. an action that prints something) and the
runtime then executes the action.

> "For all i such that 2 < i < N - 1, B[i] = A[i - 1] + A[i] + A[i + 1]"
> - A copy of B with its first element set
> - A copy of B with its first and second element set...

No of course you wouldn't do anything like that.  An idiomatic Haskell
two-liner if A is a list would be

    b = zipWith3 add3 a (tail a) (tail (tail a)) where
      add3 x y z = x+y+z

or you could write something similar using recursion.  For immutable
arrays, you'd use toList and fromList in combination with something like
that.  There are also mutable array types which would let you write an
imperative-looking loop.
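To make that concrete, here is a self-contained version of the two-liner, using `zipWith3` (the three-list zip from the Prelude) on a small example list of my own choosing:

```haskell
-- Running sum of each element with its two neighbours, on lists.
-- No element of B is "set" incrementally; the whole list is one expression.
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

b :: [Int] -> [Int]
b a = zipWith3 add3 a (tail a) (tail (tail a))

main :: IO ()
main = print (b [1, 2, 3, 4, 5])  -- [6,9,12]
```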

> "The compiler and the user are both clever enough". 

It's true that there are FP idioms you have to get used to if you're
coming from imperative programming.

> "For all elements E of A, display E"
> In a functional programming program, loops are considered evil,

Typically you'd write something like "mapM_ display a" which is
Haskell for running the display action over all the elements of a.
You don't even need that intermediate variable E cluttering your code.

> "let f : List a -> List =
> .....if a is empty then
> .........return empty list
> .....else
> .........display first element of a
> .........return f(rest of a)"
> Notice that this example is long, much more difficult to understand,

That doesn't look like Ocaml to me, but I don't know Ocaml that well.
The Haskell implementation if you didn't want to use mapM_ would be:

   f :: [a] -> IO ()
   f [] = return ()
   f (x:xs) = display x >> f xs

which is probably shorter than comparable imperative code.  Note that
the type annotation (first line) is not required, since the compiler can
figure it out if you omit it.
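A runnable version of the sketch, taking `display` to be `print` (an assumption on my part; the original leaves `display` abstract):

```haskell
-- Recursive traversal with an IO action per element; the IO type in the
-- signature makes the effect explicit.
f :: Show a => [a] -> IO ()
f []     = return ()
f (x:xs) = print x >> f xs

main :: IO ()
main = f [1, 2, 3 :: Int]
```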

> imagine how a typical functional programmer feels when he has to debug
> this :
> http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=ghc&id=4

I'd say that code has been golfed a bit (since one of the benchmark
metrics is how short the code is) and also has a few speed optimizations
that clutter things a little, but it is fairly straightforward Haskell,
and the main obstacle to debugging is probably understanding the
algorithm.  I'd guess that the sub-expressions were written and tested
separately before being combined to that big function.  That's not so
safe in imperative coding since the side effects of the different
sub-expressions could interfere when you combine them.  But it's less of
a problem in FP code that has no side effects.

The Ada equivalent is here:

  http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=gnat&id=2

It's 5x-10x as much code, depending on how you count.  It's a pretty
safe bet that the Haskell version took less time to code and debug.

> What about the claim that functional programs are always correct ?

That's silly, there's no such claim.  There's a justified claim that FP
makes it easy to avoid certain classes of bugs that happen all the time
in imperative programs, but that's much weaker.  You can also use FP
techniques in imperative programming to avoid some of those bugs, and I
do that all the time these days.  That's why I think it's worth learning
some FP even if your real work uses imperative languages.

> But there is a price to pay for type inference. And to understand
> which price, we have to look no further than the most successful
> implementation of type inference worldwide, C++ templates.
> Whenever someone passes the wrong parameter to a C++ template library,
> that person is inundated by a wall of compiler diagnostics,

Oh that's way overblown, those awful diagnostics are because C++
templates are basically glorified macros and C++ does a lot of automatic
type coercions (like adding int to float gives float), bloating the
number of combinations of type assignments in a big expression.  GHC's
error messages can be messy and hard to understand, but it's nothing
like C++ template errors.  GHC messages are often "error at line 145,
blah blah blah blah [long-winded traceback]" where the blah blah blah
isn't that informative, but the line number is usually accurate, so you
just look at line 145 and figure out what is wrong (that takes some
experience, I admit).  Haskell has no automatic conversions (int plus
float is a type error), which simplifies things too.
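For instance (a small sketch of mine, not from the post above): mixing `Int` and `Double` requires an explicit `fromIntegral`, so the conversion is always visible in the source:

```haskell
-- Adding an Int to a Double directly would be a type error;
-- fromIntegral makes the conversion explicit.
total :: Double
total = fromIntegral (3 :: Int) + 0.5

main :: IO ()
main = print total  -- 3.5
```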

> The reason why this is the case is that because type
> inference is used throughout the whole code hierarchy, compilers can
> only detect errors at the very bottom of it. 

It's not exactly like that, it's a unification algorithm so it's like
discovering that a crossword puzzle has no solution.  I've been told
that ML/Ocaml programs are still ok relying on inference since ML's type
system was historically simpler than Haskell's.  Haskell programmers
generally will write annotations (polymorphic ones when appropriate) on
top-level functions, while usually letting the compiler infer types for
the stuff inside the functions.
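That style might look like this sketch: the top-level function carries a written-out signature, while the `where`-bound locals are left to inference:

```haskell
-- Top-level signature written out; the types of "total" and "count"
-- are inferred by the compiler.
mean :: [Double] -> Double
mean xs = total / count
  where
    total = sum xs
    count = fromIntegral (length xs)

main :: IO ()
main = print (mean [1, 2, 3])  -- 2.0
```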

> C++ people are now shifting away from it, adding some extra layers of
> type check to templates in a future version of the standard with
> "concepts".

That idea (bounded polymorphism aka type classes) has been in Haskell
since forever, and the idea is to make the types more precise, not
particularly to clean up error messages.
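A minimal sketch of what that looks like in Haskell: the `Num a` constraint bounds which types `a` may be, and the constraint is checked at every call site:

```haskell
-- "double" works for any type with a Num instance, and for nothing else;
-- passing, say, a String would be rejected at the call site.
double :: Num a => a -> a
double x = x + x

main :: IO ()
main = do
  print (double (21 :: Int))      -- 42
  print (double (1.5 :: Double))  -- 3.0
```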

> even though many FP languages support type annotations, I have yet to
> encounter real-world functional programs which take the time to use
> them properly.

Just what real-world functional programs have you looked at?  Again I
know that Ocaml programs tend to use fewer of them than Haskell programs,
but that's because they're apparently of less benefit in Ocaml.

> The great thing about FP, though, is that the resulting program will
> be much easier to test, precisely because it has little to no mutable
> state. But I'm not sure if all the pain you have to go through in
> order to reach this point is worth it.

I've generally found it easier to code stuff in Haskell than in
low-level imperative languages, because of the higher abstraction,
shorter code, and larger amount of runtime services integrated with the
language.  The main cost is that the Haskell code runs slower and uses
more memory, but it still suffices for lots of things.

> For example, my running average example couldn't be implemented with
> the list comprehension syntax above, because it is a gather operation,

   averages = [x+y+z | x:y:z:_ <- tails a]

works for me.  Maybe that's nicer than the zipWith3 version I gave
earlier.
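For completeness, a self-contained version of that comprehension (`tails` lives in `Data.List`; the `x:y:z:_` pattern silently drops the too-short suffixes at the end of the list):

```haskell
import Data.List (tails)

-- Gather expressed as a comprehension over all suffixes of the list.
averages :: [Int] -> [Int]
averages a = [x + y + z | x:y:z:_ <- tails a]

main :: IO ()
main = print (averages [1, 2, 3, 4, 5])  -- [6,9,12]
```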

> So, don't only read my somewhat negative opinion, nor that of the
> Haskell zealots who will try to handwave away all the problems of the
> functional programming model. 

The problem isn't that your opinion is negative, it's that it's
uninformed.  There are some well-informed critiques of Haskell around
including Bob Harper's (he's a designer of SML and rails against Haskell
all the time) and Ben Lippmeier's (he designed a language called
Disciple that tries to address what he sees as Haskell shortcomings,
that is quite interesting).  But I think you've only taken a superficial
look at the topic and come away with some strong opinions that aren't
validly grounded in facts.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 23:23   ` Paul Rubin
@ 2015-11-17  8:26     ` Dmitry A. Kazakov
  2015-11-17  9:10       ` Mark Carroll
  2015-11-17 10:49     ` Hadrien Grasland
  1 sibling, 1 reply; 54+ messages in thread
From: Dmitry A. Kazakov @ 2015-11-17  8:26 UTC (permalink / raw)


On Mon, 16 Nov 2015 15:23:16 -0800, Paul Rubin wrote:

> Hadrien Grasland <hadrien.grasland@gmail.com> writes:

>> "The compiler and the user are both clever enough". 
> 
> It's true that there are FP idioms you have to get used to if you're
> coming from imperative programming.

Because declarative programming is non-algorithmic and no programming at
all, if taken seriously. It is like the idea of stateless programming, just
a fiction.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17  8:26     ` Dmitry A. Kazakov
@ 2015-11-17  9:10       ` Mark Carroll
  2015-11-17 20:09         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 54+ messages in thread
From: Mark Carroll @ 2015-11-17  9:10 UTC (permalink / raw)


On 17 Nov 2015, Dmitry A. Kazakov wrote:

> On Mon, 16 Nov 2015 15:23:16 -0800, Paul Rubin wrote:
>
>> Hadrien Grasland <hadrien.grasland@gmail.com> writes:
>
>>> "The compiler and the user are both clever enough". 
>> 
>> It's true that there are FP idioms you have to get used to if you're
>> coming from imperative programming.
>
> Because declarative programming is non-algorithmic and no programming at
> all, if taken seriously. It is like the idea of stateless programming, just
> a fiction.

Chris Okasaki's "Purely Functional Data Structures" is a great
introduction to the basics of what one can do with algorithms when one
takes declarative programming seriously, regardless of whether one deems it
"non-algorithmic and no programming at all". I do generally find it
possible to translate algorithms from books like CLR(S) -- longest
common subsequence or directed graph maximum flow or whatever -- to
being purely declarative, it just means that I have to step back and
really understand the idea and approach of the algorithm, rather than
just mechanically translating their for loops and arrays and whatever
into the appropriate syntax. For instance, with Haskell the elements of
an array of an imperative algorithm may become return values from some
recursive lazily evaluated function. (It is still possible to use an
imperative approach when it fits best, for instance I often did when
translating my simple sysadmin Perl 5 scripts to Haskell.) But I fear
we're drifting off-topic so I'll again return to lurking here and look
out for any queries in comp.lang.haskell.

-- Mark

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 23:23   ` Paul Rubin
  2015-11-17  8:26     ` Dmitry A. Kazakov
@ 2015-11-17 10:49     ` Hadrien Grasland
  2015-11-17 12:01       ` G.B.
                         ` (2 more replies)
  1 sibling, 3 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-17 10:49 UTC (permalink / raw)


Le mardi 17 novembre 2015 00:23:24 UTC+1, Paul Rubin a écrit :
> > *** Rant warning, all FP fans please close your eyes for a second ***
> 
> I peeked ;).  The post is full of errors and misconceptions that might
> clear up with more experience.

Most certainly :) As I said before, even if I've been regularly trying to understand the intricacies of FP for about a year now, this is the longest-lasting continuous attempt so far, and it's only been a month. I have already grasped that getting it right is going to take a lot more time.


> Ermph, that's an overstatement, I'd say that the tenet is to separate
> effects from denotational values.  State mutation (like i/o) is
> considered an effect so FP tries to use it sparingly.  But Ocaml has
> reference cells whose contents you can write to in ordinary expressions,
> and Haskell has cells you can access with i/o-like actions, plus ways
> to thread state through purely functional code.  

If the goal were only to use side-effects *sparingly*, I believe that all skilled imperative programmers already do that. Kids learning imperative programming these days are repeatedly told that...
- Global variables are to be used with extreme caution
- Variable reuse is a terrible idea, their scope should be reduced instead
- If something can be const, it should really be
- Complex algorithms should be decomposed into simpler procedures and functions

The difference, as far as I can see it, is that the philosophy of functional programming is to hunt mutable variables and effects much further, sometimes I would say far beyond the point of diminishing returns.

For example, speaking of Haskell, when your mechanism for hiding side-effects is conceptually so complicated that many learning resources feel a desperate need to hide it under a rug or introduce it as late as possible, and when a significant part of your user base seems more obsessed with the religious-sounding concept of purity than with writing cool programs, I feel like there might be a problem.

This is one of the reasons why I picked OCaml this time around, actually. I've read that it has a somewhat more pragmatic approach, encouraging the use of functional constructs when they work, but leaving imperative features around for those situations where they are the most appropriate answer to the problem at hand. This sounds like an attitude I am more likely to get behind: know many tools, and pick the right tool for the right job.


> > "For all i such that 2 < i < N - 1, B[i] = A[i - 1] + A[i] + A[i + 1]"
> > - A copy of B with its first element set
> > - A copy of B with its first and second element set...
> 
> No of course you wouldn't do anything like that.  An idiomatic Haskell
> two-liner if A is a list would be
> 
>     b = zipWith3 add3 a (tail a) (tail (tail a)) where
>       add3 x y z = x+y+z
> 
> or you could write something similar using recursion.

You know, mentally counting the number of programming concepts accumulated in this short code snippet, and imagining myself trying to explain it to my C students of last year, I wonder if this is why Structure and Interpretation of Computer Programs was considered one of the hardest programming classes ever taught at MIT.


> For immutable
> arrays, you'd use toList and fromList in combination with something like
> that.  There are also mutable array types which would let you write an
> imperative-looking loop.

Converting to and from lists sounds awful from the point of view of performance, which is often the core point of using arrays. I guess that as usual, we have to hope that the compiler is going to be clever enough.
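For what it's worth, a sketch with `Data.Array` suggests the conversions can sometimes be avoided: the stencil can be built by direct O(1) indexing into the immutable array (this is my own illustration, not something from the thread's benchmarks):

```haskell
import Data.Array

-- B[i] = A[i-1] + A[i] + A[i+1], built by indexing into the immutable
-- array A rather than converting it to a list first.
stencil :: Array Int Int -> Array Int Int
stencil a = listArray (lo + 1, hi - 1)
    [ a ! (i - 1) + a ! i + a ! (i + 1) | i <- [lo + 1 .. hi - 1] ]
  where
    (lo, hi) = bounds a

main :: IO ()
main = print (elems (stencil (listArray (0, 4) [1, 2, 3, 4, 5])))  -- [6,9,12]
```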


> > "For all elements E of A, display E"
> > In a functional programming program, loops are considered evil,
> 
> Typcally you'd write something like "mapM_ display a" which is
> Haskell for running the display action over all the elements of a.
> You don't even need that intermediate variable E cluttering your code.

Ah, yes, indeed. I'll give you that point :)


> 
> > "let f : List a -> List =
> > .....if a is empty then
> > .........return empty list
> > .....else
> > .........display first element of a
> > .........return f(rest of a)"
> > Notice that this example is long, much more difficult to understand,
> 
> That doesn't look like Ocaml to me, but I don't know Ocaml that well.
> The Haskell implementation if you didn't want to use mapM_ would be:
> 
>    f :: [a] -> IO ()
>    f [] = return ()
>    f (x:xs) = display x >> f xs
> 
> which is probably shorter than comparable imperative code.  Note that
> the type annotation (first line) is not required, since the compiler can
> figure it out if you omit it.

There is also pattern matching in OCaml, but in this situation, I did not feel like its conceptual complexity was worth the code savings. Adding the actual function call, it would look something like this, but I'm not sure about the exact syntax because I haven't been diving into OCaml's imperative features yet:

let rec f list = match list with
| [] -> ()
| x :: xs -> display x; f xs

f a

That's 5 lines. Comparable imperative code would be, as I said, one or three lines depending on the loop syntax: one line if you are allowed to write loop one-liners (as in C derivatives), and three if you need a block statement for every loop, as in Pascal derivatives. The program length and complexity are simply not comparable.

However, again, I will give you that I had forgotten about using the good old map, which will save us from the horrors of recursion this time around.


> > imagine how a typical functional programmer feels when he has to debug
> > this :
> > http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=ghc&id=4
> 
> I'd say that code has been golfed a bit (since one of the benchmark
> metrics is how short the code is)

I believe that the Benchmark Game has finally dropped that metric, thankfully. But you are right that it might still have been in effect by the time this code was written.


> and also has a few speed optimizations
> that clutter things a little, but it is fairly straightforward Haskell,
> and the main obstacle to debugging is probably understanding the
> algorithm.

Hmm... it's a bit more than that, I'd say. Since imperative programs have straightforward control flow, debuggers can provide programmers with the very nifty abstraction of stepping through code instructions, iteratively, checking that it does what is expected. In functional programs, especially when lazy evaluation is used, I am not convinced yet that the debugging methodology can always be so straightforward, though I might not have met the right tool yet.

Personally, I tend to dislike long expressions and break them down into the shortest sub-expressions that are meaningful to the problem at hand, because these are not only easier to understand, but also easier to debug, as intermediate sub-results can easily be examined (given a good enough debugger) or manually printed. But note that this is not specific to imperative programs: in OCaml, I also heavily use let-bindings for this purpose. The main thing which I dislike so far about the OCaml syntax is that let-bindings tend to require a lot of nesting, which gives idiomatically formatted code a very strange diagonal shape:

let a = 42 in
...let b = 42 * a in
......let c = 24 * b in


> I'd guess that the sub-expressions were written and tested
> separately before being combined to that big function.  That's not so
> safe in imperative coding since the side effects of the different
> sub-expressions could interfere when you combine them.  But it's less of
> a problem in FP code that has no side effects.

As I said in the beginning, it really depends how you write imperative code. Well-written imperative code really doesn't need that many side-effects to get the job done. 

In this situation, a good imperative programmer would not only implement portions of the algorithm as separate sub-routines, but also leave them around in the final program for documentation. Compilers have become so good at inlining these days that there is usually little point in obscuring code with manual inlining.


> The Ada equivalent is here:
> 
>   http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=gnat&id=2
> 
> It's 5x-10x as much code, depending on how you count.  It's a pretty
> safe bet that the Haskell version took less time to code and debug.

The main reason why Ada is so verbose is that it asks you to be quite specific about everything you do (which is good for debugging, because it means the compiler and runtime can catch more errors), and it puts readability above ease of writing. I personally think that these two characteristics make it a good choice for long-lasting codebases, but I will certainly agree that they also make the language a poor fit for prototyping and quick hacking scenarios, where Python, C++, or domain-specific languages will be better tools for the job.


> That's silly, there's no such claim.  There's a justified claim that FP
> makes it easy to avoid certain classes of bugs that happen all the time
> in imperative programs, but that's much weaker.  You can also use FP
> techniques in imperative programming to avoid some of those bugs, and I
> do that all the time these days.  That's why I think it's worth learning
> some FP even if you real work uses imperative languages.

...and part of the reason why I'm still learning FP, actually, in spite of all the learning pain :)

Just the other day, I encountered a C++ problem where passing a closure as a parameter to a function was the right thing to do, and any other alternative would have been needlessly clunky. I'm not sure I would have thought about that if it weren't for all my past attempts at learning FP.

 
> > The reason why this is the case is that because type
> > inference is used througout the whole code hierarchy, compilers can
> > only detect errors at the very bottom of it. 
> 
> It's not exactly like that, it's a unification algorithm so it's like
> discovering that a crossword puzzle has no solution.  I've been told
> that ML/Ocaml programs are still ok relying on inference since ML's type
> system was historically simpler than Haskell's.  Haskell programmers
> generally will write annotations (polymorphic ones when appropriate) on
> top-level functions, while usually letting the compiler infer types for
> the stuff inside the functions.

That's pretty much also the way I use C++11 type inference: no "auto" shall touch my function signatures, but I'm perfectly comfortable using them inside of short code snippets where the type of variables is obvious.

 
> > C++ people are now shifting away from it, adding some extra layers of
> > type check to templates in a future version of the standard with
> > "concepts".
> 
> That idea (bounded polymorphism aka type classes) has been in Haskell
> since forever, and the idea is to make the types more precise, not
> particularly to clean up error messages.

Hmm... I have yet to start learning Haskell again, but the last time I tried, I got the impression that Haskell type classes were pretty similar to the interface concept in object-oriented programming: stating that a type must have some methods defined in order to be used in some context. Ad-hoc interfaces which do not require explicit type support exist in Go, for example.

From what I understood, what the C++ people are trying to do with concepts is quite a bit stronger: they want to build a language infrastructure which allows one to assert any static predicate about a type before allowing it to be passed as a parameter to a template. That is to say, they want to be able to assert anything that can be verified at compile time about a type, not just whether it has some methods or not.

I find the idea quite interesting, but I'm curious about what the implementation would look like. C++ has a long history of coming up with pretty cool ideas, then producing an implementation which is insanely difficult to use correctly. Move semantics in C++11 are a great example of this.


> > even though many FP languages support type annotations, I have yet to
> > encounter real-world functional programs which take the time to use
> > them properly.
> 
> Just what real-world functional programs have you looked at?  Again I
> know that Ocaml programs tend to use less of them than Haskell programs
> but that's because they're of apparently less benefit in Ocaml.

Not thinking about anything in particular, just the strange snippets I have come across on the web while browsing for functional programs over time. It's true that the only very large and successful functional program I know of is Emacs, and its Lisp internals most certainly do not exhibit the latest and greatest that functional programming languages can do.


> > The great thing about FP, though, is that the resulting program will
> > be much easier to test, precisely because it has little to no mutable
> > state. But I'm not sure if all the pain you have to go through in
> > order to reach this point is worth it.
> 
> I've generally found it easier to code stuff in Haskell than in
> low-level imperative languages, because of the higher abstraction,
> shorter code, and larger amount of runtime services integrated with the
> language.  The main cost is that the Haskell code runs slower and uses
> more memory, but it still suffices for lots of things.

How is this different from e.g. Python?


> > For example, my running average example couldn't be implemented with
> > the list comprehension syntax above, because it is a gather operation,
> 
>    averages = [x+y+z | x:y:z:_ <- tails a]
> 
> works for me.  Maybe that's nicer than the zipWith2 version I gave
> earlier.

Most certainly a lot nicer; now that's something I can imagine showing to first-year students without scaring them away from functional programming for their entire life :)

Would it be correct of me to assume that if I asked you to handle end points by reusing edge input values, your idiomatic Haskell answer would be to append duplicates of the first and last input value on each end of the input list?

(After all, if we're going to take the performance hit of reading from a list, we'd better take advantage of its fast insertion capabilities.)
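One way to make that question concrete (the function names are mine, and whether this counts as idiomatic is an assumption): duplicate the edge values before windowing, so the output has the same length as the input.

```haskell
import Data.List (tails)

-- Window sums of width 3, as in the comprehension quoted above;
-- windows that run off the end of the list are skipped by the pattern.
windowSums :: [Int] -> [Int]
windowSums xs = [x + y + z | x:y:z:_ <- tails xs]

-- Hypothetical end-point handling: duplicate the first and last
-- elements before windowing, so every input element gets a window.
windowSumsPadded :: [Int] -> [Int]
windowSumsPadded [] = []
windowSumsPadded xs = windowSums (head xs : xs ++ [last xs])
```

For example, `windowSums [1,2,3,4,5]` gives `[6,9,12]`, while the padded version gives `[4,6,9,12,14]`, one sum per input element.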


> The problem isn't that your opinion is negative, it's that it's
> uninformed.  There are some well-informed critiques of Haskell around
> including Bob Harper's (he's a designer of SML and rails against Haskell
> all the time) and Ben Lippmeier's (he designed a language called
> Disciple that tries to address what he sees as Haskell shortcomings,
> that is quite interesting).  But I think you've only taken a superficial
> look at the topic and come away with some strong opinions that aren't
> validly grounded in facts.

So far, I have the impression that what I dislike about functional programming is the obsession with absolute functional purity beyond the limit of practicality, which is not specific to Haskell but rather a general tendency of FP languages. Nevertheless, point taken; we would both benefit more from this argument after I spend a couple more months recursing in the pain of learning FP! :)

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 10:49     ` Hadrien Grasland
@ 2015-11-17 12:01       ` G.B.
  2015-11-17 16:43         ` Hadrien Grasland
  2015-11-17 13:01       ` Thomas Løcke
  2015-11-18  0:11       ` Paul Rubin
  2 siblings, 1 reply; 54+ messages in thread
From: G.B. @ 2015-11-17 12:01 UTC (permalink / raw)


On 17.11.15 11:49, Hadrien Grasland wrote:
> the horrors of recursion

Recursion, I speculate, is one of the two things that
a programmer new to FP style needs to learn, the other
being combinatorial control of the evaluator's mechanics.
It is like learning how to write assembly programs,
with one difference: the "paper" is turned by 90°
and FP is a little less explicit about instructing the
machine.(*)

(Is it a coincidence that (declarative) logics and the
λ-calculi have led to variations that have "combinatory"
as part of their names?)


__
(*) IIRC, using the core of Mozart/Oz, one builds functions
from primitives, each of them being syntactically structured
like an instruction to a three-address machine.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 10:49     ` Hadrien Grasland
  2015-11-17 12:01       ` G.B.
@ 2015-11-17 13:01       ` Thomas Løcke
  2015-11-17 16:45         ` Hadrien Grasland
  2015-11-18  0:11       ` Paul Rubin
  2 siblings, 1 reply; 54+ messages in thread
From: Thomas Løcke @ 2015-11-17 13:01 UTC (permalink / raw)


On 11/17/2015 11:49 AM, Hadrien Grasland wrote:
> .. obsessed with the religious-sounding concept of purity than with that of writing cool programs...

One example of a cool program is xmonad. It's a tiling window manager.

I use it and it is indeed a very cool piece of software.

This guy on youtube talks a bit about Haskell and he also "deconstructs"
xmonad:

https://www.youtube.com/user/jekor/videos
https://www.youtube.com/watch?v=63MpfyZUcrU

I've no idea if he's good at what he's doing, but it might be worth
checking out.

:o)

-- 
Thomas Løcke


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 12:01       ` G.B.
@ 2015-11-17 16:43         ` Hadrien Grasland
  2015-11-17 18:04           ` Paul Rubin
  0 siblings, 1 reply; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-17 16:43 UTC (permalink / raw)


On Tuesday, November 17, 2015 at 13:01:24 UTC+1, G.B. wrote:
> > the horrors of recursion
> 
> Recursion, I speculate, is one of the two things that
> a programmer new to FP style needs to learn, the other
> being combinatorial control of the evaluator's mechanics.
> It is like learning how to write assembly programming,
> with one difference: the "paper" is turned by 90°
> and FP is a little less explicit about instructing the
> machine.(*)

I would say that recursion is the goto statement of functional programming languages: necessary in order to achieve Turing-completeness, and occasionally useful, but it makes control flow messy and programs very difficult to reason about, and should thus only be used in situations where no higher-level abstraction will fit.

The recursive hoops that one has to jump through in order to "modify" a complex data structure are especially gruesome. Just implemented a radix tree in OCaml today, as an exercise, and the insertion function gave me a glimpse of what FP hell looks like. Sometimes, immutability can really make things much worse.

I enthusiastically approve of the creative uses that FP languages find for discriminated types, though. It gives me plenty of ideas to experiment with in my future Ada programs.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 13:01       ` Thomas Løcke
@ 2015-11-17 16:45         ` Hadrien Grasland
  0 siblings, 0 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-17 16:45 UTC (permalink / raw)


On Tuesday, November 17, 2015 at 14:16:51 UTC+1, Thomas Løcke wrote:
> On 11/17/2015 11:49 AM, Hadrien Grasland wrote:
> > .. obsessed with the religious-sounding concept of purity than with that of writing cool programs...
> 
> One example of a cool program is xmonad. It's a tiling window manager.
> 
> I use it and it is indeed a very cool piece of software.
> 
> This guy on youtube talks a bit about Haskell and he also "deconstructs"
> xmonad:
> 
> https://www.youtube.com/user/jekor/videos
> https://www.youtube.com/watch?v=63MpfyZUcrU
> 
> I've no idea if he's good at what he's doing, but it might be worth
> checking out.

Nice video concept! I love how he spends time explaining the data structures rather than simply jumping through heaps of code logic. It does look like XMonad's code comment documentation is pretty awesome.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 16:43         ` Hadrien Grasland
@ 2015-11-17 18:04           ` Paul Rubin
  2015-11-17 21:42             ` Hadrien Grasland
  0 siblings, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-17 18:04 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:
> I would say that recursion is the goto statement of functional
> programming languages : necessary in order to achieve
> Turing-completeness, and occasionally useful, but makes control flow
> messy and programs very difficult to reason about,

No that's continuations ;-).  Recursion works fine and is simpler
than loops to reason about and simple to use.

> The recursive hoops that one has to jump through in order to "modify"
> a complex data structure are especially gruesome.  Just implemented a
> radix tree in OCaml today

The FP preference would be to use an immutable data structure like a
red-black tree:

http://goodmath.scientopia.org/2009/11/30/advanced-haskell-data-structures-red-black-trees/
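For a feel of what that preference looks like in practice (this sketch uses the standard containers library, whose Data.Map is implemented as an immutable balanced tree): every "update" returns a new map, and the old version remains valid and unchanged.

```haskell
import qualified Data.Map.Strict as Map

-- insert returns a new map; the original is untouched, and the two
-- versions share most of their structure internally.
m1, m2 :: Map.Map Int String
m1 = Map.fromList [(1, "one"), (2, "two")]
m2 = Map.insert 3 "three" m1
```

After building `m2`, `Map.toList m1` is still `[(1,"one"),(2,"two")]`; there is no mutation to reason about, which is the point Paul is making.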


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17  9:10       ` Mark Carroll
@ 2015-11-17 20:09         ` Dmitry A. Kazakov
  0 siblings, 0 replies; 54+ messages in thread
From: Dmitry A. Kazakov @ 2015-11-17 20:09 UTC (permalink / raw)


On Tue, 17 Nov 2015 09:10:14 +0000, Mark Carroll wrote:

> On 17 Nov 2015, Dmitry A. Kazakov wrote:
> 
>> On Mon, 16 Nov 2015 15:23:16 -0800, Paul Rubin wrote:
>>
>> Because declarative programming is non-algorithmic and no programming at
>> all, if taken seriously. It is like the idea of stateless programming, just
>> a fiction.
> 
> I do generally find it
> possible to translate algorithms from books like CLR(S) -- longest
> common subsequence or directed graph maximum flow or whatever -- to
> being purely declarative, it just means that I have to step back and
> really understand the idea and approach of the algorithm, rather than
> just mechanically translating their for loops and arrays and whatever
> into the appropriate syntax.

I don't think that is declarative. Imperative means that you have some
computational model and express the solution in terms of the basic operations
of that model. Whether those are loops or recursive functions does not much
matter, except that the former are much easier to understand, test, validate,
prove correct, estimate the complexity of, optimize, etc.

A declarative approach is inherently non-constructive. You don't try to
understand anything; you just state something and let the inference system
deduce the solution, without understanding how it works. Without even
knowing what must be stated. Reverse problems are much harder than direct
ones, as we know from mathematics. That is why FP is so much trial and error
rather than focused design. This is a consequence of being declarative. The
same problems have other declarative approaches, e.g. relational algebra,
modeling languages, etc.

Once you try to see what is going on behind the scenes, it is no longer
declarative.

"Let there be light" is declarative. Physics of electromagnetic fields is
imperative.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 18:04           ` Paul Rubin
@ 2015-11-17 21:42             ` Hadrien Grasland
  2015-11-18  4:36               ` Paul Rubin
  0 siblings, 1 reply; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-17 21:42 UTC (permalink / raw)


On Tuesday, November 17, 2015 at 19:04:53 UTC+1, Paul Rubin wrote:
> > I would say that recursion is the goto statement of functional
> > programming languages : necessary in order to achieve
> > Turing-completeness, and occasionally useful, but makes control flow
> > messy and programs very difficult to reason about,
> 
> No that's continuations ;-).  Recursion works fine and is simpler
> than loops to reason about and simple to use.

Consider these two C code snippets:

int iterate_recursion(int n) {
    if (n < 2) {
        return 1;
    } else {
        return iterate_recursion(n - 1);
    }
}

int iterate_goto(int n) {
beginning:
    if (n < 2) {
        return 1;
    } else {
        n = n - 1;
        goto beginning;
    }
}

You have to admit that in the end, the basic approach is almost exactly the same, and people will need to reason about it in exactly the same way ("okay, so at the start of the iteration, the input parameter is indeed n, but after that, we'll decrement it and go back to the beginning of the function, this time with the input parameter equal to n-1, and then...").

Hence my claim that recursion is, indeed, the ultimate goto, including when it comes to ease of understanding. Guy Steele will be most pleased.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 10:49     ` Hadrien Grasland
  2015-11-17 12:01       ` G.B.
  2015-11-17 13:01       ` Thomas Løcke
@ 2015-11-18  0:11       ` Paul Rubin
  2015-11-18  9:44         ` Hadrien Grasland
  2 siblings, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-18  0:11 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:
> As I said before, even if I've been regularly trying
> to understand the intricacies of FP for about a year now, it's the
> longest-lasting continuous attempt so far, and it's only been a
> month.

You should probably look at Hughes' paper "Why Functional Programming
Matters" if you haven't:

http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html

> I have already grasped that getting it right is going to take a lot
> more time.

I don't know, it's not really harder than other types of programming,
just different.

> If the goal were to only use side-effects *sparingly*, I believe that
> all skilled imperative programmers already do that really. ...
> - Global variables are to be used with extreme caution

Except that the external world (that i/o interacts with) is in essence a
giant global variable.  One thing I learned from Haskell is the value of
putting all the i/o in the outer layer of a program rather than
scattering it through everything.  Writing functions to have no side
effects (like IO, or old-fashioned stateful OOP) when possible is
another angle on avoiding global variables.

> - Variable reuse is a terrible idea, their scope should be reduced
> instead

Sure, like using recursion instead of loops with mutating indexes.

> - If something can be const, it should really be

Example: instead of using mutable maps like hash tables, FP prefers
mutation-free balanced trees, etc.  The Chris Okasaki book that Mark
Carroll mentioned is all about this topic.
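A toy sketch of that style (my own code, not taken from Okasaki's book): inserting into an immutable search tree rebuilds only the path from the root to the insertion point, and everything else is shared between the old and new versions.

```haskell
-- A plain, unbalanced binary search tree, kept simple for illustration.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Pure insertion: new Nodes are built along the search path only;
-- all untouched subtrees are shared with the original tree.
insertT :: Ord a => a -> Tree a -> Tree a
insertT x Leaf = Node Leaf x Leaf
insertT x t@(Node l v r)
  | x < v     = Node (insertT x l) v r
  | x > v     = Node l v (insertT x r)
  | otherwise = t                      -- already present: reuse whole tree

-- In-order traversal, for inspection.
toListT :: Tree a -> [a]
toListT Leaf         = []
toListT (Node l v r) = toListT l ++ [v] ++ toListT r
```

Building a tree from a list and flattening it gives the elements in sorted order, and no version of the tree is ever modified in place; Okasaki's red-black trees add balancing on top of exactly this pattern.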

> The difference, as far as I can see it, is that the philosophy of
> functional programming is to hunt mutable variables and effects much
> further, sometimes I would say far beyond the point of diminishing
> returns.

I think with the right tools, the FP style leads to quicker and safer
development, if you don't mind the performance hit.  If you look at some
low level Haskell libraries, they do use mutation and other ugly tricks
to maximize speed.  That lets higher-level application code get the
performance gains while leaving the libraries to deal with the dirt.
The libraries *could* (mostly) be written in pure FP style and they'd
work fine, they'd just be slower.

> when your mechanism to hide side-effects is conceptually so
> complicated that many learning resources feel a desperate need to
> hide it under a rug

Oh, that was the notorious "monad tutorial" era.  We're over that now
;-).  See:

https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/

If you take a math class you have to learn some stuff.  People shouldn't
expect skillful programming to be different.

> OCaml...  has a somewhat more pragmatic approach, encouraging the use
> of functional constructs when they work, but leaving imperative
> features around for those situations where they are the most
> appropriate answer to the problem at hand.

This is basically true from what I can tell.

> This sounds like an attitude I am more likely to get behind: know many
> tools, and pick the right tool for the right job.

Actually it's a reason to pick Haskell for your learning experience,
because you get to understand FP a lot better, instead of something
that's half-FP half-imperative.  At that point Ocaml becomes easy.  I
haven't used Ocaml but have looked at the docs and it has much less
learning curve than Haskell.  It's basically Lisp with a type system.

Looking at it another way, say you're coming from a highly abstract
background like Haskell and you want to try some low level programming
to find out how computers would really work.  Does that mean study Ada?
IMHO it means study assembler and move to Ada from there.

> Structure and Interpretation of Computer Programs was considered one
> of the hardest programming classes ever taught at the MIT.

The SICP course covered a lot of material in a short time, but FP wasn't
that big or difficult an aspect of it.  The textbook is really great,
it's online, and you should read it if you haven't
(mitpress.mit.edu/sicp).

>> For immutable arrays, you'd use toList and fromList 
> Converting to and from lists sounds awful from the point of view of
> performance, which is often the core point of using arrays.

The point of arrays is constant-time random access.  Without that,
touching all the elements can take quadratic time, depending on the order.
The toList and fromList there each make a single sequential scan, so it's
linear time, just like the averaging calculation.  Of course, if you do it
often enough, the constant factor can become a problem, and that's when to
think about other approaches.

> There is also pattern matching in OCaml, but in this situation, I did
> not feel like its conceptual complexity was worth the code savings.

Pattern matching is very basic to Haskell and Ocaml, so use it freely.

> good old map, which will save us from the horrors of recursion this
> time around.

Haskell tends (more than Ocaml does) to use higher order functions like
map or fold instead of recursion.  That's partly because lazy evaluation
(one of Haskell's more controversial features) lets you map over a huge
list without creating an intermediate list, if the thing consuming the
map output doesn't keep it around.
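A tiny illustration of that point: mapping over an infinite list is fine as long as the consumer only demands a finite prefix, because no intermediate list is ever fully built.

```haskell
-- An infinite list of doubled naturals; laziness means nothing is
-- computed until a consumer demands it.
doubled :: [Int]
doubled = map (* 2) [1 ..]

-- Only the first five elements are ever evaluated.
firstFive :: [Int]
firstFive = take 5 doubled
```

Here `firstFive` is `[2,4,6,8,10]`; the rest of `doubled` is never touched.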

> In functional programs, especially when lazy evaluation is used, I am
> not convinced yet that the debugging methodology can always be so
> straightforward, though I might not have met the right tool yet.

It does take some getting used to.  GHCi has an interactive debugger
now, if that helps.

> let a = 42 in
> ...let b = 42 * a in
> ......let c = 24 * b in

Haskell lets you put the bindings at the same indent level or in braces,
but it's a superficial difference.
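The flat layout Paul refers to looks like this in Haskell (values taken from the quoted snippet):

```haskell
-- Successive bindings at the same indentation level, instead of the
-- nested lets in the quoted snippet; each binding may use earlier ones.
chained :: Int
chained =
  let a = 42
      b = 42 * a
      c = 24 * b
  in c
```

The nesting is still there semantically, but the layout rule removes the visual staircase.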

> The main reasons why Ada is so verbose is that asks you to be quite
> specific about everything you do (which is good for debugging, because
> it means the compiler and runtime can catch more errors)

Meh, it mostly means there are more places for your code to go wrong.  If
the compiler is already debugged and can figure stuff out so you don't
have to, you're better off.  Think of the Ada compiler doing things like
register allocation for you.  Haskell just takes that to another level.

> Just the other day, I encountered a C++ problem where passing a
> closure as a parameter to a function was the right thing to do... I'm
> not sure I would have thought about that if it weren't for all my past
> attempts at learning FP.

Yes, C++11 got closures and std::shared_ptr is a poor man's garbage
collection.  It's getting more like Haskell every day ;-).

> I got the impression that Haskell type classes were pretty similar to
> the interface concept in object-oriented programming : stating that a
> type must have some methods defined in order to be used in some
> context.

Yes, something like that, but also restricting what types are allowed in
some polymorphic terms.  

> From what I understood, what the C++ people are trying to do with
> concepts is quite a bit stronger: they want to build a language
> infrastructure which allows one to assert any static predicate about a
> type before allowing it to be passed as a parameter to a
> template. 

I think that's called "Axioms" which weren't part of the C++11 concepts
proposal iirc.  But I didn't follow the matter closely.

> I find the idea quite interesting, but I'm curious about what the
> implementation would look like. 

http://www.generic-programming.org/software/ConceptGCC/

> the only very large and successful functional program which I know of
> is Emacs, and its Lisp internals most certainly do not exhibit the
> latest and greatest that functional programming languages can do.

Lisp is untyped so of course you won't see type annotations ;-).  Emacs
Lisp is quite old fashioned, descended from Maclisp which was written in
the 1970s, before FP was a thing.  Xmonad (xmonad.org) as someone
mentioned is a good example of fairly advanced Haskell.

>> I've generally found it easier to code stuff in Haskell than in
>> low-level imperative languages, ... The main cost is that the Haskell
>> code runs slower and uses more memory
> How is this different from e.g. Python ?

Python has a more ad hoc feel than Haskell.  Its types are nowhere near
as precise as Haskell types, and they are checked only at runtime.  It
uses stateful generators instead of lazy evaluation for making lazy
sequences (that can lead to various annoying bugs), etc.  But yes,
Python is great for throwing quick scripts together, and I use it a lot.

>>    averages = [x+y+z | x:y:z:_ <- tails a]

> Would it be correct of me to assume that if I asked you to handle end
> points by reusing edge input values, your idiomatic Haskell answer
> would be to append duplicates of the first and last input value on
> each end of the input list ?

I'd probably paste the extra values onto the output rather than the
input.  Actually, I'm not sure the zipWith2 example was so idiomatic at
this point; I'm liking the listcomp version better too.

> So far, I have the impression that what I dislike about functional
> programming is the obsession of absolute functional purity beyond the
> limit of practicality, 

I think if you're trying to learn a new subject, it's better to do it
with a feeling of open exploration and discovery, rather than pursuit of
a specific end result.  So don't worry too much about practicality until
you know the landscape.  There are amazing things you might never have
thought of, that you could end up bypassing by staying on too narrow a
track.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-17 21:42             ` Hadrien Grasland
@ 2015-11-18  4:36               ` Paul Rubin
  2015-11-18  8:48                 ` Hadrien Grasland
  0 siblings, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-18  4:36 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:
> int iterate_recursion(int n) ...
> int iterate_goto(int n) ...
>
> You have to admit that in the end, the basic approach is almost
> exactly the same, ..."okay, so at the start of the iteration, the input
> parameter is indeed n, but after that, we'll decrement it..."

I don't think so.  In the recursive version, n never changes in the
context of the function being run.  The function is called again with a
new value.  Anyway it's not a big deal, there may be some times when an
iterative loop is slightly more natural, but writing it recursively is
always easy anyway.  There are much bigger issues in FP that take a
really different approach to deal with.  For example, the entire topic
of functional data structures.  Another is the programs-as-proofs
concept in type theory (Haskell doesn't quite reach there seriously, but
Agda does).  Recursion vs iteration is a tiny step along the way.

> Hence my claim that recursion is, indeed, the ultimate goto, including
> when it comes to ease of understanding. Guy Steele will be most
> pleased.

IIRC, he called lambda the ultimate goto because tail calls can be
compiled into jumps.  No biggie.  If you read about continuations in
Scheme, they're a heck of a lot more confusing than recursion. ;-)
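To make the tail-call point concrete (a toy transcription of the C snippets upthread, with a name of my choosing): the recursive call below is in tail position, so a compiler like GHC can turn it into a jump, just as the goto version does, without growing the call stack.

```haskell
-- Tail-recursive counterpart of iterate_recursion/iterate_goto:
-- the self-call is the last thing the function does, so it compiles
-- to a jump rather than a stack of pending frames.
iterateDown :: Int -> Int
iterateDown n
  | n < 2     = 1
  | otherwise = iterateDown (n - 1)
```

Note that within any one invocation `n` never changes, which is the distinction Paul draws between this and the goto version.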


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18  4:36               ` Paul Rubin
@ 2015-11-18  8:48                 ` Hadrien Grasland
  2015-11-18  9:23                   ` Paul Rubin
  0 siblings, 1 reply; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-18  8:48 UTC (permalink / raw)


On Wednesday, November 18, 2015 at 05:36:32 UTC+1, Paul Rubin wrote:
> > int iterate_recursion(int n) ...
> > int iterate_goto(int n) ...
> >
> > You have to admit that in the end, the basic approach is almost
> > exactly the same, ..."okay, so at the start of the iteration, the input
> > parameter is indeed n, but after that, we'll decrement it..."
> 
> I don't think so.  In the recursive version, n never changes in the
> context of the function being run.  The function is called again with a
> new value.  Anyway it's not a big deal, there may be some times when an
> iterative loop is slightly more natural, but writing it recursively is
> always easy anyway.  There are much bigger issues in FP that take a
> really different approach to deal with.  For example, the entire topic
> of functional data structures.  Another is the programs-as-proofs
> concept in type theory (Haskell doesn't quite reach there seriously, but
> Agda does).  Recursion vs iteration is a tiny step along the way.

Note that here, our focus is different. My message was about the fact that a program with recursion is about as hard to read and understand as an old-fashioned program featuring gotos. Actually harder, I may add, because for any nontrivial use of recursion, you also need to fit an implicit stack somewhere in your mental model.

Your counterpoint was that recursive code is easy to *write*, and this I won't dispute. It also takes me only a couple of minutes to turn a loop which doesn't modify its argument into a recursion.

However, readability and ease of writing are different goals, which are often at odds with each other. I believe they are in the case of loops vs recursion. And I believe there is also such a usability compromise at work when you need to give up on data structures which everyone knows about and use more sophisticated, obscure ones, because your programming language does not support the conventional abstractions very well.

So far, my impression is that in the philosophy of functional programming, it is okay to sacrifice readability and make the life of newcomers harder by using highly sophisticated abstractions even when they are not strictly necessary, if in exchange we can benefit in some other important code design area such as referential transparency, code reuse, or ease of writing complex algorithms. Much like, for example, garbage collection and lazy evaluation are based on the philosophical viewpoint that it is okay to sacrifice performance and run-time predictability if in exchange, we can express some tricky concepts like infinite lists or shared variables more easily.

Honestly, I find this fascinating. There's no single good answer to all these complicated programming tradeoffs, and for me exploring this diversity of viewpoints is both one of the greatest joys and the greatest pains of learning new programming languages.

Personally, I am quite obsessed with the idea that my code might be read by someone else someday, including the person I'll be in 20 years, so I tend to put readability fairly high on my priority list when I program something. "Try to use the simplest abstraction that will work well" is an important principle for me. But I can certainly agree that there are some benefits to be had in compromising on readability sometimes too, depending on the problem you are dealing with. I'm curious about what else I'll discover in my continued programming language exploration.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18  8:48                 ` Hadrien Grasland
@ 2015-11-18  9:23                   ` Paul Rubin
  2015-11-18 10:44                     ` Hadrien Grasland
  0 siblings, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-18  9:23 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:
> My message was about the fact that a program with recursion is about
> as hard to read and understand as an old-fashioned program featuring
> gotos. Actually harder...  Your counterpoint was that recursive code
> is easy to *write*,

I don't find the recursive version harder to read.  In the goto version,
the variable is changing while the thing runs, so at one place it's 5
and at another place it's 4, and I have to keep track of that, and it's
potentially exponentially worse when there's more than one variable
changing.  In the recursive version, the values don't change.  Yes you
have to be aware of possible stack consumption but that becomes second
nature, and there are also tools that can monitor your code for space
consumption to catch any problems.

> readability and ease of writing are different goals, which are often
> at odds with each other. I believe they are in the case of loops vs
> recursion.

It's probably mostly a matter of what you're used to.  If I look at a
Swedish web page and can't understand much of it, that's because I don't
know the language, not because of the style or ideas in it.

> And I believe there is also such a usability compromise at work when
> you need to give up on data structures which everyone knows about, and
> use more sophisticated and obscure ones, because your programming
> language does not support very well the conventional abstractions.

Again it's a matter of what you're used to and what you consider
conventional.  There are some intro classes being taught with Haskell
and ML these days, and SICP back in the day used Scheme in
mostly-functional style.  If you're writing concurrent programs you
should know about those immutable structures anyway.

> So far, my impression is that in the philosophy of functional
> programming, it is okay to sacrifice readability and make the life of
> newcomers harder by using highly sophisticated abstractions even when
> they are not strictly necessary,

I'd say Haskell is at the high end of the sophistication scale for
general-purpose languages, so that critique applies more to it than to
(say) Scheme or ML.  But IMO that sophistication also makes it the most
enlightening for programmers trying to expand their technical range.
Ocaml may be more practical and I want to try using it sometime.

> Personally, I am quite obsessed with the idea that my code might be
> read by someone else someday, including the person I'll be in 20
> years, so I tend to put readability fairly high on my priority list
> when I program something. 

Haskell is pretty readable if you know the language, especially if you
write good comments.  It's not COBOL but you really probably don't want
it to be.  One issue though is that the language keeps changing.  SML is
basically frozen and I guess that has attractions.

>"Try to use the simplest abstraction that will work well" is an
>important principle for me.

Sure, that makes sense.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18  0:11       ` Paul Rubin
@ 2015-11-18  9:44         ` Hadrien Grasland
  0 siblings, 0 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-18  9:44 UTC (permalink / raw)


On Wednesday, November 18, 2015 at 01:11:38 UTC+1, Paul Rubin wrote:
> > As I said before, even if I've been regularly trying
> > to understand the intricacies of FP for about a year now, it's the
> > longest-lasting continuous attempt so far, and it's only been a
> > month.
> 
> You should probably look at Hughes' paper "Why FP matters" if you
> haven't:
> 
> http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html

See my reply to your other post. My conclusion so far (which has no pretense of being final, of course) is that there are benefits to the functional approach, but that said benefits won't come for free.

You get more code reuse from polymorphic higher-order functions, you pay by increasing your abstractions' sophistication and thus making the life of newcomers harder (and in general making your code harder to read even by experienced people).

You use more powerful data structures (lists), you pay by losing constant-time random access, efficient memory usage, and cache locality.

You get the problems of mutable variables out of the way, you pay with extra complexity in the problems that mutable state addresses well.

You get conceptually infinite data structures (lazy evaluation), you pay with largely unpredictable run-time behavior (in terms of memory consumption, CPU usage, and error locality).

You free yourself from the pain of manual memory management, you get bitten back by it when your program ends up consuming massive amounts of RAM, thrashing all caches on the system, and when a train fails to brake because it was busy reclaiming memory instead of answering the driver's commands.
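The lazy-evaluation tradeoff can be made concrete with a small Haskell sketch (an illustration of mine, not from the thread; assuming GHC): an infinite list is perfectly usable as long as only a finite prefix is ever demanded, but when and where each element gets computed is decided by the runtime, not by the source order.

```haskell
-- Conceptually infinite structures under lazy evaluation: nothing in
-- 'naturals' or 'evens' is computed until 'take' demands a finite prefix.
naturals :: [Integer]
naturals = [0 ..]            -- an infinite list

evens :: [Integer]
evens = filter even naturals -- still infinite, still unevaluated

main :: IO ()
main = print (take 5 evens)  -- prints [0,2,4,6,8]
```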

Programming is about tradeoffs, which is why no single programming language can be universally classified as better than the others. Myself, I'm curious about learning new sets of tradeoffs, so as to expand the set of problems which I can solve efficiently as a programmer.

But here too there's a tradeoff: with much programmer curiosity comes much learning pain :)


> > I have already grasped that getting it right is going to take a lot
> > more time.
> 
> I don't know, it's not really harder than other types of programming,
> just different.

From a usability point of view, different is hard. I know, that's unfair. Everyone who tries to do something new hates this statement. But that doesn't make it less true. If you're proposing something new, it had better have enormous benefits to compensate for the learning pain.


> > If the goal were to only use side-effects *sparingly*, I believe that
> > all skilled imperative programmers already do that really. ...
> > - Global variables are to be used with extreme caution
> 
> Except that the external world (that i/o interacts with) is in essence a
> giant global variable.  One thing I learned from Haskell is the value of
> putting all the i/o in the outer layer of a program rather than
> scattering it through everything.  Writing functions to have no side
> effects (like IO, or old-fashioned stateful OOP) when possible is
> another angle on avoiding global variables.
> 
> > - Variable reuse is a terrible idea, their scope should be reduced
> > instead
> 
> Sure, like using recursion instead of loops with mutating indexes.
> 
> > - If something can be const, it should really be
> 
> Example: instead of using mutable maps like hash tables, FP prefers
> mutation-free balanced trees, etc.  The Chris Okasaki book that Mark
> Carroll mentioned is all about this topic.

Again, I agree that functional programming takes these imperative programming best practices further. What I am saying is that imperative programmers already know about them, but stop at a certain sweet spot of optimization beyond which the price to pay for immutability becomes heavier :)
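The mutation-free balanced trees mentioned above can be seen in GHC's containers library (a sketch of mine, not part of the post): inserting into a Data.Map returns a new map and leaves the old one intact, with most of the tree shared between the two.

```haskell
import qualified Data.Map.Strict as Map

-- Data.Map is a persistent balanced tree: 'insert' builds a new map
-- and leaves the original untouched, sharing most of the structure.
main :: IO ()
main = do
  let m0 = Map.fromList [(1, "one"), (2, "two")]
      m1 = Map.insert 3 "three" m0
  print (Map.size m0)  -- prints 2: the original map is unchanged
  print (Map.size m1)  -- prints 3
```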


> > The difference, as far as I can see it, is that the philosophy of
> > functional programming is to hunt mutable variables and effects much
> > further, sometimes I would say far beyond the point of diminishing
> > returns.
> 
> I think with the right tools, the FP style leads to quicker and safer
> development, if you don't mind the performance hit.  If you look at some
> low level Haskell libraries, they do use mutation and other ugly tricks
> to maximize speed.  That lets higher-level application code get the
> performance gains while leaving the libraries to deal with the dirt.
> The libraries *could* (mostly) be written in pure FP style and they'd
> work fine, they'd just be slower.

Quantify the importance of "quicker development", though. Many programs spend a lot more time being maintained than being written for the first time. At this point, I am not yet convinced that functional programming languages have fully taken this fact into account.

That is not a concern for all programs, though, and I agree that for example it could make FP languages more fun for write-and-forget codebases like prototypes and hackathons. Tradeoffs and "it depends on your project", as usual, I guess.


> Oh, that was the notorious "monad tutorial" era.  We're over that now
> ;-).  See:
> 
> https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/
> 
> If you take a math class you have to learn some stuff.  People shouldn't
> expect skillful programming to be different.

That is true; however, there is also a benefit in staying away from complex and powerful mathematical abstractions unless you truly need them. It frees your mental resources to focus better on the problem you're solving.

For example, I know that the rational numbers and the polynomials both form rings. This powerful mathematical abstraction was recorded in my head at some point during my studies. But thankfully, I don't need to remind myself of this fact every time I use a rational fraction or a polynomial in my computations.

I also know all there is to know about Hilbert spaces, the Gram-Schmidt orthonormalization algorithm, and the difference between Gaussian elimination and LU decomposition. But I don't need that knowledge to do linear algebra every day, since people more skilled than me have built powerful computation tools and libraries, which pick the right algorithm for me most of the time, and otherwise explain the trade-offs between approaches in plain words in their documentation.

I believe that programming languages should be the same: sure, it's best if they are built on a very solid mathematical foundation that many clever people have spent hundreds of years elaborating. But I shouldn't need to care about all of that in order to *use* the stuff. When low-level details (the mathematics) leak into a high-level abstraction (the programming language), that is somewhat problematic in my opinion.


> > This sounds like an attitude I am more likely to get behind: know many
> > tools, and pick the right tool for the right job.
> 
> Actually it's a reason to pick Haskell for your learning experience,
> because you get to understand FP a lot better, instead of something
> that's half-FP half-imperative.  At that point Ocaml becomes easy.  I
> haven't used Ocaml but have looked at the docs and it has much less
> learning curve than Haskell.  It's basically Lisp with a type system.
> 
> Looking at it another way, say you're coming from a highly abstract
> background like Haskell and you want to try some low level programming
> to find out how computers would really work.  Does that mean study Ada?
> IMHO it means study assembler and move to Ada from there.

Conceptually, I could agree with you, but practically speaking, impedance mismatch is also an issue. If you try to learn too many new concepts in too little time, your brain blacks out and you hit a wall. That pretty much sums up my previous experiences with FP.

This is not to say that I won't give Haskell another try later on. After all, I have all but made it my goal to give every reasonably popular programming language a try at least once in my life. But for now, I'm afraid I need something simpler to understand :)


> > The main reasons why Ada is so verbose is that asks you to be quite
> > specific about everything you do (which is good for debugging, because
> > it means the compiler and runtime can catch more errors)
> 
> Meh, it mostly means there's more places for your code to go wrong.  If
> the compiler is already debugged and can figure stuff out so you don't
> have to, you're better off.  Think of the Ada compiler doing things like
> register allocation for you.  Haskell just takes that to another level.

I disagree, and in fact I think even functional programmers can see why extra verbosity sometimes helps with static code checking: I just saw an example of it in my OCaml classes recently.

Consider this:

"type euro = float ;;
type dollar = float ;;

let five_euros : euro = 5. ;;
let five_dollars : dollar = 5. ;;
let silly_idea = five_euros +. five_dollars ;;"

And compare it to this:

"type money =
| Euro of float
| Dollar of float ;;

let five_euros = Euro 5. ;;
let five_dollars = Dollar 5. ;;
let silly_idea = five_euros +. five_dollars ;;"

Given one simple change to the type definitions and a bit of extra verbosity in every let-binding, code which silently did something absolutely rubbish has been turned into a static compile-time error.

This is the Ada philosophy in a nutshell: given precise enough type and function definitions, error detection can often be pushed from testing to run time, and sometimes from run time to compile time as well. I believe this is a philosophical standpoint which Ada and most modern functional programming languages have in common, even though the precise implementation details differ.
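For comparison, the same guard can be written in Haskell with newtype wrappers (a sketch of mine, not from the thread): the wrappers cost nothing at run time, and mixing currencies becomes a compile-time error.

```haskell
-- Zero-cost wrappers: mixing Euro and Dollar no longer type-checks.
newtype Euro   = Euro   Double deriving Show
newtype Dollar = Dollar Double deriving Show

addEuros :: Euro -> Euro -> Euro
addEuros (Euro a) (Euro b) = Euro (a + b)

main :: IO ()
main = print (addEuros (Euro 5) (Euro 2))  -- prints Euro 7.0
-- addEuros (Euro 5) (Dollar 2) would be rejected by the type checker.
```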


> > Just the other day, I encountered a C++ problem where passing a
> > closure as a parameter to a function was the right thing to do... I'm
> > not sure I would have thought about that if it weren't for all my past
> > attempts at learning FP.
> 
> Yes, C++11 got closures and std::shared_ptr is a poor man's garbage
> collection.  It's getting more like Haskell every day ;-).

Just a nitpick which I'm sure you are already aware of, but shared_ptr is reference counting, not garbage collection. This means that it comes with a somewhat different set of tradeoffs.

Advantages of reference counting:
- Garbage collection can only manage dynamically allocated memory, whereas reference counting can easily handle any kind of resource
- Reference counted programs are a lot more deterministic, and thus easier to reason about and debug
- In general, garbage collection tends to be a lot more wasteful in terms of memory usage

Advantages of garbage collection:
- Because collection work is batched and amortized, garbage collection usually comes with a significantly lower CPU overhead
- It is easier to deal with some tricky scenarios, such as cyclic references, when using garbage collection


> > So far, I have the impression that what I dislike about functional
> > programming is the obsession of absolute functional purity beyond the
> > limit of practicality, 
> 
> I think if you're trying to learn a new subject, it's better to do it
> with a feeling of open exploration and discovery, rather than pursuit of
> a specific end result.  So don't worry too much about practicality until
> you know the landscape.  There are amazing things you might never have
> thought of, that you could end up bypassing by staying on too narrow a
> track.

Heh, again, learning comes with its pains, but I wouldn't do it if it weren't also quite rewarding in the end :)


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18  9:23                   ` Paul Rubin
@ 2015-11-18 10:44                     ` Hadrien Grasland
  2015-11-18 11:02                       ` Dmitry A. Kazakov
                                         ` (2 more replies)
  0 siblings, 3 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-18 10:44 UTC (permalink / raw)


On Wednesday, November 18, 2015 at 10:23:15 UTC+1, Paul Rubin wrote:
> > My message was about the fact that a program with recursion is about
> > as hard to read and understand as an old-fashioned program featuring
> > gotos. Actually harder...  Your counterpoint was that recursive code
> > is easy to *write*,
> 
> I don't find the recursive version harder to read.  In the goto version,
> the variable is changing while the thing runs, so at one place it's 5
> and at another place it's 4, and I have to keep track of that, and it's
> potentially exponentially worse when there's more than one variable
> changing.  In the recursive version, the values don't change.  Yes you
> have to be aware of possible stack consumption but that becomes second
> nature, and there are also tools that can monitor your code for space
> consumption to catch any problems.

What I am saying is that even if values are indeed immutable in functional languages, recursion is used as a way to cheat the system and make them change on a conceptual level. When you recurse into a list, what you really mean is "okay, now let's start over from the beginning of this function, this time considering the tail of the list instead of the full list".

For example, if you wanted to explain the factorial to a student who has never been exposed to the concept, chances are that your first approach wouldn't be to write on a blackboard

"fact n = n * fact (n-1)
fact 1 = 1"

Instead, you would start with a human-readable definition that looks like this :

"fact n = n * (n-1) * (n - 2) * ... * 2 * 1"

Then you would explain how, concretely, such a quantity is computed using some iterative algorithm, which itself may either be recursive and use an implicit accumulator in the form of the program's stack, or use a loop and an explicit accumulator.

In any case, if you are using a machine which can only perform one binary operation at a time, you will always need an accumulator to compute the final result. In functional programming, what you've done is essentially to forbid yourself from explicitly manipulating the accumulator. You leave it to the implementation instead. That's an interesting choice... but someone trying to understand a recursive program will still need to "play the computer" and figure out what the machine is doing with its mutable accumulators at some point.
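The accumulator argument can be made explicit in Haskell itself (my sketch, not from the post): the naive recursion hides the accumulator in the call stack, while a tail-recursive variant names it, much like the loop counter of an imperative version.

```haskell
-- Naive recursion: the pending multiplications pile up on the stack.
factNaive :: Integer -> Integer
factNaive 1 = 1
factNaive n = n * factNaive (n - 1)

-- Tail recursion: the accumulator is explicit, as in an imperative loop.
factAcc :: Integer -> Integer
factAcc n = go n 1
  where
    go 1 acc = acc
    go k acc = go (k - 1) (k * acc)

main :: IO ()
main = print (factNaive 5, factAcc 5)  -- prints (120,120)
```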


> > readability and ease of writing are different goals, which are often
> > at odds with each other. I believe they are in the case of loops vs
> > recursion.
> 
> It's probably mostly a matter of what you're used to.  If I look at a
> Swedish web page and can't understand much of it, that's because I don't
> know the language, not because of the style or ideas in it.

I would gladly challenge this claim by taking two large groups of 10-year-olds who have never been exposed to imperative programming, teaching one group to compute something with loops and accumulators and the other with recursion, then comparing how much time it takes and what the average success rates are. But I'm lacking the resources ;)

The thing is, our world is imperative. We don't exist, then we are born, then we are alive, then we die, then we stay dead. The horrible events that happened in Paris this weekend cannot be erased by moving up the call stack. The state of things around us keeps changing, and we learn mathematics by moving our fingers, drawing schematics on mutable paper, and writing down equations. In that sense, I believe that imperative constructs better match the world we are used to living in, and should thus be easier to learn and understand.

This does not mean that there isn't a benefit to the functional ways. But I don't think that we can handwave away their additional complexity with a simple "it depends on what you're used to". Functional constructs are, indeed, less intuitive, and according to Occam's principle such extra learning complexity deserves a good motivation (which I believe most functional programming fans will be happy giving anyway).


> > And I believe there is also such a usability compromise at work when
> > you need to give up on data structures which everyone knows about, and
> > use more sophisticated and obscure ones, because your programming
> > language does not support very well the conventional abstractions.
> 
> Again it's a matter of what you're used to and what you consider
> conventional.  There are some intro classes being taught with Haskell
> and ML these days, and SICP back in the day used Scheme in
> mostly-functional style.  If you're writing concurrent programs you
> should know about those immutable structures anyway.

I'm writing programs for massively parallel GPU processors, and last time I checked Blelloch's scan was still one of the most efficient algorithms ever devised for performing cumulative sums on these architectures. But what that algorithm does to data would most certainly make any fan of immutability shiver...
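For reference, the sequential version of a cumulative sum is a one-liner in Haskell (my sketch; Blelloch's work-efficient scan reorganizes the same computation into tree-shaped up-sweep and down-sweep phases, mutating an array in place on the GPU):

```haskell
-- Sequential inclusive scan (cumulative sum). A parallel scan such as
-- Blelloch's produces the same result in O(log n) parallel steps.
cumSum :: [Int] -> [Int]
cumSum = scanl1 (+)

main :: IO ()
main = print (cumSum [1, 2, 3, 4])  -- prints [1,3,6,10]
```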


> > So far, my impression is that in the philosophy of functional
> > programming, it is okay to sacrifice readability and make the life of
> > newcomers harder by using highly sophisticated abstractions even when
> > they are not strictly necessary,
> 
> I'd say Haskell is at the high end of the sophistication scale for
> general-purpose languages, so that critique applies more to it than to
> (say) Scheme or ML.  But IMO that sophistication also makes it the most
> enlightening for programmers trying to expand their technical range.
> Ocaml may be more practical and I want to try using it sometime.

Well, I'm learning OCaml using a pretty nice MOOC which has been set up by my former university. If you're interested the address is

https://www.france-universite-numerique-mooc.fr/courses/parisdiderot/56002/session01/about

I guess it's now a bit late to join this session, but given its immense success, I'm pretty sure they will schedule another one next year if you're still interested :)

Oh, and if you end up on a French page by mistake, don't worry: there should be a way to change the language somewhere, and although the FUN website itself is bilingual, this class is taught in English.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18 10:44                     ` Hadrien Grasland
@ 2015-11-18 11:02                       ` Dmitry A. Kazakov
  2015-11-18 12:41                         ` G.B.
  2015-11-18 23:06                       ` Randy Brukardt
  2015-11-19  7:22                       ` Paul Rubin
  2 siblings, 1 reply; 54+ messages in thread
From: Dmitry A. Kazakov @ 2015-11-18 11:02 UTC (permalink / raw)


On Wed, 18 Nov 2015 02:44:02 -0800 (PST), Hadrien Grasland wrote:

> For example, if you wanted to explain the factorial to a student who has
> never been exposed to the concept, chances are that your first approach
> wouldn't be to write on a blackboard
> 
> "fact n = n * fact (n-1)
> fact 1 = 1"
> 
> Instead, you would start with a human-readable definition that looks like this :
> 
> "fact n = n * (n-1) * (n - 2) * ... * 2 * 1"

Not really. Inductive definitions are all OK, BUT they are not given
upside-down; e.g. rather this:

   1! = 1            -- Induction basis
   (n+1)! = (n+1) * n!   -- Inductive step

or, for that matter,

  n! = 1 * 2 * 3 * ... * n
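Dmitry's ordering point also matters operationally in Haskell (a sketch of mine, not from the post): equations are tried top to bottom, so the base case has to come first, otherwise the general pattern matches 1 as well and the recursion never terminates.

```haskell
-- Equations are matched top to bottom: the base case must precede the
-- general case, or 'fact n' would also match 1 and recurse forever.
fact :: Integer -> Integer
fact 1 = 1
fact n = n * fact (n - 1)

main :: IO ()
main = print (fact 5)  -- prints 120
```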

> In any case, if you are using a machine which can only perform one binary
> operation at a time, you will always need an accumulator to compute the
> final result.

Actually any machine is an FSA. Stateless computation is an oxymoron. The
sole reason for running any program is the side effects it produces.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18 11:02                       ` Dmitry A. Kazakov
@ 2015-11-18 12:41                         ` G.B.
  0 siblings, 0 replies; 54+ messages in thread
From: G.B. @ 2015-11-18 12:41 UTC (permalink / raw)


On 18.11.15 12:02, Dmitry A. Kazakov wrote:
> Actually any machine is a FSA. Stateless computation is an oxymoron. The
> sole reason of having any program running is for the side effects it
> produces.


One doesn't show one's state in mathematical circles, does one?
That would be like showing notes and computations, but not results.
Procedures are talked about in public only when they are the subject
of declarative sentences, such as sentences that explain properties
of procedures. Which then, sadly, might necessitate referring to
examples of procedural computation. ;-)

Dirt (procedures, states) is for the hands of engineers and (some)
educators, mostly for people whose work (programs) has effects on
non-people.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18 10:44                     ` Hadrien Grasland
  2015-11-18 11:02                       ` Dmitry A. Kazakov
@ 2015-11-18 23:06                       ` Randy Brukardt
  2015-11-19  8:56                         ` Hadrien Grasland
  2015-11-19  7:22                       ` Paul Rubin
  2 siblings, 1 reply; 54+ messages in thread
From: Randy Brukardt @ 2015-11-18 23:06 UTC (permalink / raw)


"Hadrien Grasland" <hadrien.grasland@gmail.com> wrote in message 
news:937b88b1-66aa-4205-a412-9588d83a3f26@googlegroups.com...
...
>This does not mean that there isn't a benefit to the functional ways. But I 
>don't
> think that we can handwave away their additional complexity with a simple 
> "it
> depends on what you're used to". Functional constructs are, indeed, less 
> intuitive,
>and according to Occam's principle such extra learning complexity deserves 
>a
>good motivation (which I believe most functional programming fans will be
>happy giving anyway).

I agree. In addition, this is an Ada forum, and it would be nice to get back 
to conversation about Ada.

So, let me say that it is certainly possible to program in a functional style 
using Ada.

It's always been possible to construct pure functions (those with no 
side-effects) in Ada, and it's likely that Ada 2018 will have a way to 
declare and enforce that.

One can use the advanced data structures in Ada.Containers to program using 
lists, maps, trees, and so on without messing with the ugly details 
(including storage management). It would be easy to use these structures as 
a basis for even higher level operations if one really wanted to hide 
iterators and the like. (You can already use the call-back form of iteration 
to avoid any visible imperative code, if you wanted to do that.)

As Hadrien said, it's useful to have FP techniques available when they work 
well, but one really doesn't want to be fenced into any single programming 
paradigm. Basic imperative programming alone, or ADTs alone, or OOP alone, 
or FP alone, or whatever alone, is almost never the best way to get things 
done. Some problems map naturally into one style or another, and it's crazy 
to be fenced into some other style for such problems.

                                    Randy.



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18 10:44                     ` Hadrien Grasland
  2015-11-18 11:02                       ` Dmitry A. Kazakov
  2015-11-18 23:06                       ` Randy Brukardt
@ 2015-11-19  7:22                       ` Paul Rubin
  2015-11-19  9:39                         ` Hadrien Grasland
  2 siblings, 1 reply; 54+ messages in thread
From: Paul Rubin @ 2015-11-19  7:22 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> writes:
> What I am saying is that even if values are indeed immutable in
> functional languages

Hadrien, per the observations of Randy and others that this has drifted
away from Ada, I'll try to post a reply on comp.lang.functional instead
of here pretty soon.  Feel free to join me over there.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-18 23:06                       ` Randy Brukardt
@ 2015-11-19  8:56                         ` Hadrien Grasland
  2015-11-19  9:19                           ` Hadrien Grasland
                                             ` (2 more replies)
  0 siblings, 3 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-19  8:56 UTC (permalink / raw)


On Thursday, November 19, 2015 at 00:06:20 UTC+1, Randy Brukardt wrote:
> ...
> >This does not mean that there isn't a benefit to the functional ways. But I 
> >don't
> > think that we can handwave away their additional complexity with a simple 
> > "it
> > depends on what you're used to". Functional constructs are, indeed, less 
> > intuitive,
> >and according to Occam's principle such extra learning complexity deserves 
> >a
> >good motivation (which I believe most functional programming fans will be
> >happy giving anyway).
> 
> I agree. In addition, this is an Ada forum, and it would be nice to get back 
> to conversation about Ada.
> 
> So, let me say that it is certainly possible to program in a functional style 
> using Ada.
> 
> It's always been possible to construct pure functions (those with no 
> side-effects) in Ada, and it's likely that Ada 2018 will have a way to 
> declare and enforce that.
> 
> One can use the advanced data structures in Ada.Containers to program using 
> lists, maps, trees, and so on without messing with the ugly details 
> (including storage management). It would be easy to use these structures as 
> a basis for even higher level operations if one really wanted to hide 
> iterators and the like. (You can already use the call-back form of iteration 
> to avoid any visible imperative code, if you wanted to do that.)
> 
> As Hadrien said, it's useful to have FP techniques available when they work 
> well, but one really doesn't want to be fenced into any single programming 
> paradigm. Basic imperative programming alone, or ADTs alone, or OOP alone, 
> or FP alone, or whatever alone, is almost never the best way to get things 
> done. Some problems map naturally into one style or another, and it's crazy 
> to be fenced into some other style for such problems.
> 
>                                     Randy.

Actually, speaking of functional programming in Ada, I'm curious: would there be an elegant way to produce a higher-order function equivalent in Ada?

Ada sure has the very powerful ability to declare functions within functions, and it also has function pointers. But due to the way accessibility rules work, my understanding is that this would not be enough to implement things like partial application or containers of functions, because once we leave the scope of a function, it instantly ceases to exist.

This would leave us using ad-hoc functor objects declared at global scope, whose state is filled in as needed, as in the C++98 era, which is certainly not what I would call an elegant solution.

Or have I missed some important language feature which could be used to this end, like a way to serialize a function and its state to a stream?


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-19  8:56                         ` Hadrien Grasland
@ 2015-11-19  9:19                           ` Hadrien Grasland
  2015-11-19 21:27                           ` Randy Brukardt
  2015-11-24 12:03                           ` Jacob Sparre Andersen
  2 siblings, 0 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-19  9:19 UTC (permalink / raw)


-"higher-order function" +"first-class function", obviously. Sorry, yesterday has been a long day...


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-19  7:22                       ` Paul Rubin
@ 2015-11-19  9:39                         ` Hadrien Grasland
  0 siblings, 0 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-11-19  9:39 UTC (permalink / raw)


On Thursday, November 19, 2015 at 08:22:58 UTC+1, Paul Rubin wrote:
> > What I am saying is that even if values are indeed immutable in
> > functional languages
> 
> Hadrien, per the observations of Randy and others that this has drifted
> away from Ada, I'll try to post a reply on comp.lang.functional instead
> of here pretty soon.  Feel free to join me over there.

I'll be there, ready when you are :)

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-19  8:56                         ` Hadrien Grasland
  2015-11-19  9:19                           ` Hadrien Grasland
@ 2015-11-19 21:27                           ` Randy Brukardt
  2015-11-24 12:03                           ` Jacob Sparre Andersen
  2 siblings, 0 replies; 54+ messages in thread
From: Randy Brukardt @ 2015-11-19 21:27 UTC (permalink / raw)


"Hadrien Grasland" <hadrien.grasland@gmail.com> wrote in message 
news:36a48f94-41ee-491a-8312-d6e7732bf482@googlegroups.com...
...
>Actually, speaking of functional programming Ada, I'm curious : would there
>be an elegant way to produce a higher-order function equivalent in Ada ?
>
>Ada sure has the very powerful ability to declare functions within 
>functions,
>and it also has function pointers. But due to the way accessibility rules 
>work,
>my understanding is that this would not be enough to implement things like
>partial application or containers of functions, because once we leave the
>scope of a function, it instantly ceases to exist.
>
...
>Or have I missed some important language feature which could be used to
>this end, like a way to serialize a function and its state to a stream ?

Not really. Some of what you're thinking of would work using anonymous 
access to subprogram parameters (which do not require any accessibility 
check). That allows passing any function to another one. But as those are 
limited to parameters, it's hard to keep them around for long.

That means they're good for iteration and mapping constructs. Not so much 
for lazy evaluation or partial application.

On a couple of occasions, we've talked about named anonymous 
access-to-subprogram types (obviously a bad name that would get replaced if 
something on this line was adopted). Those would have a representation that 
would have enough information to call a nested subprogram; 'Unchecked_Access 
would be used to create values (so you'd be on your own to prevent calls to 
dangling subprograms). That would allow longer term storage of such 
subprograms (useful for subprograms declared in generic instances or 
long-lived subprograms like the main subprogram). These haven't gone 
anywhere. (The usual context is GNAT's 'Unrestricted_Access, a mechanism 
that cannot be reasonably adopted in other compilers; the GCC memory model 
makes it possible. Many of us would like something in the language so that 
'Unrestricted_Access wouldn't be anywhere near as necessary.)

But I don't know of anything on the drawing boards for partial application 
or similar things. So far as I know, no one has asked (formally or 
informally).

                                          Randy.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-19  8:56                         ` Hadrien Grasland
  2015-11-19  9:19                           ` Hadrien Grasland
  2015-11-19 21:27                           ` Randy Brukardt
@ 2015-11-24 12:03                           ` Jacob Sparre Andersen
  2 siblings, 0 replies; 54+ messages in thread
From: Jacob Sparre Andersen @ 2015-11-24 12:03 UTC (permalink / raw)


Hadrien Grasland <hadrien.grasland@gmail.com> wrote:

> Actually, speaking of functional programming Ada, I'm curious: would
> there be an elegant way to produce a higher-order function equivalent
> in Ada ?

One way to (sort of) do it:

   http://rosettacode.org/wiki/First-class_functions#Ada

You can, more or less, write something similar with tagged types.

Greetings,

Jacob
-- 
Statistics is like a bikini - what it shows is very suggestive,
but the most important stuff is still hidden.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-11-16 10:25 ` Hadrien Grasland
                     ` (3 preceding siblings ...)
  2015-11-16 23:23   ` Paul Rubin
@ 2015-12-06 12:59   ` David Thompson
  2015-12-07  7:25     ` Hadrien Grasland
  4 siblings, 1 reply; 54+ messages in thread
From: David Thompson @ 2015-12-06 12:59 UTC (permalink / raw)


On Mon, 16 Nov 2015 02:25:14 -0800 (PST), Hadrien Grasland
<hadrien.grasland@gmail.com> wrote:
(linebreaks added to make quote readable)
> [snip] you are expected to do everything using expressions. 
So if you have ever suffered the horror of debugging three nested
trigraphs in C, <snip>

Trigraphs don't nest; each encodes a single character. You likely
mean the conditional operator, often called _ternary_ because it (if
it is considered a single thing) has three operands:

    predicate ? trueexpr : falseexpr 

which can be abused for things like

    cond1? cond2? val12: cond3? val103: val100: cond4? val04: val00
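
Since ?: is right-associative, that chain parses as
cond1 ? (cond2 ? val12 : (cond3 ? val103 : val100)) : (cond4 ? val04 : val00).
For comparison, a sketch of the same decision chain using Ada 2012
conditional expressions, where the nesting is explicit (the names and
values are placeholders taken from the C example):

```ada
--  The abused C chain above, rewritten with Ada 2012 conditional
--  expressions; names and values are illustrative placeholders.
procedure Ternary_Demo is
   Cond1 : constant Boolean := True;
   Cond2 : constant Boolean := False;
   Cond3 : constant Boolean := True;
   Cond4 : constant Boolean := False;
   Result : Integer;
begin
   Result := (if Cond1 then
                 (if Cond2 then 12
                  elsif Cond3 then 103
                  else 100)
              elsif Cond4 then 4
              else 0);
   --  Here Cond1 and Cond3 hold, so Result = 103.
   pragma Assert (Result = 103);
end Ternary_Demo;
```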

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: Haskell, anyone?
  2015-12-06 12:59   ` David Thompson
@ 2015-12-07  7:25     ` Hadrien Grasland
  0 siblings, 0 replies; 54+ messages in thread
From: Hadrien Grasland @ 2015-12-07  7:25 UTC (permalink / raw)


On Sunday, 6 December 2015 at 13:59:04 UTC+1, David Thompson wrote:
> (linebreaks added to make quote readable)
> > [snip] you are expected to do everything using expressions. 
> So if you have ever suffered the horror of debugging three nested
> trigraphs in C, <snip>
> 
> Trigraphs don't nest; each encodes a single character. You likely
> mean the conditional operator, often called _ternary_ because it (if
> it is considered a single thing) has three operands:
> 
>     predicate ? trueexpr : falseexpr 
> 
> which can be abused for things like
> 
>     cond1? cond2? val12: cond3? val103: val100: cond4? val04: val00

That's what I meant indeed, thanks for the fix :)


^ permalink raw reply	[flat|nested] 54+ messages in thread

end of thread, other threads:[~2015-12-07  7:25 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-15 20:42 Haskell, anyone? mockturtle
2015-11-15 20:51 ` Paul Rubin
2015-11-15 20:53 ` Nasser M. Abbasi
2015-11-15 21:50 ` Mark Carroll
2015-11-15 22:11 ` mockturtle
2015-11-15 22:48   ` Nasser M. Abbasi
2015-11-15 23:05     ` Mark Carroll
2015-11-16  4:11       ` Paul Rubin
2015-11-16  5:17         ` Nasser M. Abbasi
2015-11-16  5:48           ` Paul Rubin
2015-11-16  5:59             ` Nasser M. Abbasi
2015-11-16  6:47               ` Paul Rubin
2015-11-16  8:45           ` Simon Wright
2015-11-16 14:38             ` Brian Drummond
2015-11-15 23:19     ` Jeffrey R. Carter
2015-11-16  9:36       ` J-P. Rosen
2015-11-16 18:14         ` Jeffrey R. Carter
2015-11-16  3:59   ` Paul Rubin
2015-11-16  8:33   ` Dmitry A. Kazakov
2015-11-16  9:33     ` mockturtle
2015-11-16  9:45       ` Paul Rubin
2015-11-16 10:25 ` Hadrien Grasland
2015-11-16 11:19   ` Simon Wright
2015-11-16 11:25     ` Hadrien Grasland
2015-11-16 13:59   ` G.B.
2015-11-16 20:24   ` Jeffrey R. Carter
2015-11-16 23:23   ` Paul Rubin
2015-11-17  8:26     ` Dmitry A. Kazakov
2015-11-17  9:10       ` Mark Carroll
2015-11-17 20:09         ` Dmitry A. Kazakov
2015-11-17 10:49     ` Hadrien Grasland
2015-11-17 12:01       ` G.B.
2015-11-17 16:43         ` Hadrien Grasland
2015-11-17 18:04           ` Paul Rubin
2015-11-17 21:42             ` Hadrien Grasland
2015-11-18  4:36               ` Paul Rubin
2015-11-18  8:48                 ` Hadrien Grasland
2015-11-18  9:23                   ` Paul Rubin
2015-11-18 10:44                     ` Hadrien Grasland
2015-11-18 11:02                       ` Dmitry A. Kazakov
2015-11-18 12:41                         ` G.B.
2015-11-18 23:06                       ` Randy Brukardt
2015-11-19  8:56                         ` Hadrien Grasland
2015-11-19  9:19                           ` Hadrien Grasland
2015-11-19 21:27                           ` Randy Brukardt
2015-11-24 12:03                           ` Jacob Sparre Andersen
2015-11-19  7:22                       ` Paul Rubin
2015-11-19  9:39                         ` Hadrien Grasland
2015-11-17 13:01       ` Thomas Løcke
2015-11-17 16:45         ` Hadrien Grasland
2015-11-18  0:11       ` Paul Rubin
2015-11-18  9:44         ` Hadrien Grasland
2015-12-06 12:59   ` David Thompson
2015-12-07  7:25     ` Hadrien Grasland

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox