The aesthetics of it are a bit hard to convey, but some languages are simply more beautiful than others. Haskell, at its core, is incredibly beautiful. I honestly don't know what else to say about elegance from this point of view; I suggest looking through some of the answers in this related question: What makes a mathematical proof "elegant"?

But there's a practical aspect to it too. This is where splitting up "elegance" into "simplicity" and "expressiveness" helps because we can then concretely look at how Haskell is simple and elegant and how this is useful.

Simplicity

Haskell has a reputation—perhaps fair, perhaps not—of being difficult. I won't address this. Some people mistake difficulty for complexity, but Haskell is not complex.

To be fair, it's not as simple as it could be either. Haskell is an old testbed for research into functional programming; between extensive experimentation and backwards compatibility it's accumulated a fair amount of cruft and superficial complexity. (Forgive the language its scars: it's older than I am.)

But Haskell's complexity really is superficial! Everything, the whole language, boils down to an incredibly simple core, a variant of a typed lambda calculus. I'm not talking abstractly either; as part of compilation, your program is actually simplified directly to a much smaller language creatively called Core. Haskell has grown lots of features over the years, but these rarely involved extending Core; they could be understood purely in terms of the existing language. We can reasonably think of everything else as syntax sugar—elaborate syntax sugar at times, but still just sugar.

Haskell will never quite be Scheme but, at heart, it is remarkably less complex than you'd expect.

The practical upshot is that understanding and reasoning about Haskell is less work than you'd expect. Generally, to make conclusions about the language you really just have to make conclusions about the parts that are in Core. Core is merely a tame elaboration of System F, well-studied and well-understood in CS.

I mean, I'm overstating the case a bit and people don't usually think about Core specifically unless they're dealing with the compiler, but the point stands: you can understand Haskell as you would understand a typed, polymorphic lambda calculus which may sound difficult, may even be difficult, but is absolutely not complex. There's just not that much there, not many rules to how the system works.
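To make the "sugar over a small core" point concrete, here is a sketch of how a multi-equation definition reduces to a lambda and a single case expression (illustrative only, not GHC's actual Core output; the function name is made up for the example):

```haskell
-- Surface Haskell: two equations with pattern matching.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Roughly what it desugars to: one lambda plus one case expression,
-- the only pattern-matching construct the core language needs.
safeHead' :: [a] -> Maybe a
safeHead' = \xs -> case xs of
  []      -> Nothing
  (x : _) -> Just x
```

Both definitions behave identically; the second is just closer to the small language the compiler actually works with.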

Expressiveness

If all we cared about was simplicity, we could just use the simplest language of all—the language that had no constructs and couldn't do anything. But that is, obviously, completely useless. What we really want is a system that's simple but still lets us write the code we want.

This isn't a matter of capability per se. Haskell, like most languages you'd encounter, is Turing-complete. In a sense, it can express exactly the same functionality as any other Turing-complete language. But this really doesn't capture how we program in practice. (Or else, why would we need more than a single Turing-complete language like the untyped lambda calculus or FRACTRAN or whatever?)

The answer is that we care about our program itself, not just what it does. We care about it in a few ways:

* what can we say about the program statically?

* what sorts of abstractions can we easily define and use?

* how flexible is the language—can we write code that looks exactly how we want?

* how verbose is it—do we have a lot of extra boilerplate, or can we write concise programs?
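As a tiny illustration of the first point, a sketch of how cheap type definitions let the compiler say things about a program statically, with almost no boilerplate (the types and function here are invented for the example):

```haskell
-- One line each to define distinct types the compiler keeps apart.
-- At runtime a newtype is just the underlying Double.
newtype Seconds = Seconds Double deriving (Show, Eq)
newtype Meters  = Meters  Double deriving (Show, Eq)

addSeconds :: Seconds -> Seconds -> Seconds
addSeconds (Seconds a) (Seconds b) = Seconds (a + b)

-- addSeconds (Seconds 1) (Meters 2)  -- rejected at compile time
```

A mix-up that would slip through with raw Doubles becomes a type error, at no runtime cost and about two lines of ceremony.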

You know what else all this adds up to: how much fun the language is to use. Haskell is fun. That's a big part of its appeal. It's flexible enough to let you write programs the way you want and, in turn, Haskell programs can be incredibly elegant. The top answer by Ray Li about what makes a mathematical proof elegant breaks it down to three points:

1. An elegant proof is unexpectedly simple.

2. An elegant proof hits at the heart of the problem.

3. An elegant proof tells you more than just its result.

Answer by Tikhon Jelvis:

Haskell is surprisingly elegant. This is something that's incredibly important to some people—especially ones involved in math and CS—but hard to convey to others. In programming languages, I think of elegance as a combination of simplicity and expressiveness, making it a property that is at once aesthetic in nature and practical.

Haskell is a language that, by the same criteria, admits elegant programs. Here's a nice example:

In Haskell,

`compare` (as used in sorting functions and similar) returns a value of type `Ordering`, which is either `EQ`, `GT`, or `LT`. This is already better than many other languages, which return an int, because it makes the intentions explicit (is -1 less than or greater than?) and keeps you from mixing ints-as-orderings with any other sort of int.
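A quick sketch of what this looks like in practice:

```haskell
import Data.List (sortBy)

-- compare returns an Ordering, not a bare int.
result :: Ordering
result = compare 3 5          -- LT

-- sortBy consumes exactly that kind of comparison function.
sorted :: [Int]
sorted = sortBy compare [3, 1, 2]   -- [1, 2, 3]
```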

Interestingly, the `Ordering` type is also an instance of the `Monoid` class. The name comes from algebra and sounds exotic, but it mostly just means we have a way to combine two things, represented by the `<>` operator. The exact way that they can be combined depends on the type; `<>` just has to obey some rules that make it behave intuitively. In the case of orderings, `<>` captures lexical order—that is, if you made two comparisons, it uses the first one unless it was equal, in which case it looks at the second. The definition could look like this (itself pretty elegant):

```haskell
LT <> _ = LT
GT <> _ = GT
EQ <> x = x
```

Already, this is a bit useful. We have an operator for lexical comparisons and, since it's just an instance of a broadly applicable abstraction, we didn't even have to come up with a new name for it. We could use it to write custom sorting functions pretty easily. For example, in a music app, we might want to sort by author and then by album:

```haskell
sortBy (\ a b -> byAuthor a b <> byAlbum a b) tracks
```

Once you're keyed into the idiom, this is way more readable than the more explicit version, which would probably involve an `if` or `case` expression, no doubt taking multiple lines and obscuring the fact that you're sorting `tracks`.
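For comparison, a self-contained sketch of that more explicit version (the `Track` type and the `byAuthor`/`byAlbum` comparisons are invented for the example):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- A stand-in track type: (author, album).
type Track = (String, String)

byAuthor, byAlbum :: Track -> Track -> Ordering
byAuthor = comparing fst
byAlbum  = comparing snd

-- The explicit version: spell out the lexical combination by hand.
sortTracks :: [Track] -> [Track]
sortTracks = sortBy (\ a b -> case byAuthor a b of
                       EQ    -> byAlbum a b
                       other -> other)
```

The `case` expression does exactly what `<>` on `Ordering` does, only with more noise.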

But the fun doesn't stop there! The `Monoid` class has a generic instance for functions that return values of a monoid. Combining two functions like this means producing a single function that takes an argument, passes it into both input functions and combines the results with `<>`. It makes a lot more sense in code (remember that `f` and `g` are functions that produce something that can be combined with `<>`):

```haskell
f <> g = \ x -> f x <> g x
```

So if `a` is a monoid, so is `b -> a`, for any `b`. The trippy thing is that, in Haskell, functions of multiple arguments are just functions of a single argument that return functions themselves. (This is called "Currying".) So `c -> b -> a` is `c -> (b -> a)`. If `a` is a monoid then so is `b -> a`, and if `b -> a` is a monoid so is `c -> (b -> a)`, so this relationship extends—for free!—to functions of any number of arguments.

Putting all this together, it means we can use `<>` to combine comparison functions! We can write our earlier line in a way that has almost no extraneous information or noise:

```haskell
sortBy (byAuthor <> byAlbum) tracks
```

Not only is this readable—the intentions of whoever wrote this line are clear—but we also got this functionality for free, just because `Ordering` happened to be in the `Monoid` class.
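The whole idiom as a self-contained sketch (the `Track` type and its field names are invented for illustration; with a modern Prelude, the `Semigroup` instances for `Ordering` and for functions do all the combining):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

data Track = Track { author :: String, album :: String }
  deriving (Show, Eq)

-- Each comparison is a two-argument function returning an Ordering,
-- so <> combines them pointwise via the function instance, and then
-- lexically via the Ordering instance.
byAuthor, byAlbum :: Track -> Track -> Ordering
byAuthor = comparing author
byAlbum  = comparing album

sortTracks :: [Track] -> [Track]
sortTracks = sortBy (byAuthor <> byAlbum)
```

Note that `byAuthor <> byAlbum` needed no special support: it falls out of instances that already exist.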

This is definitely an elegant idiom, and it's fun.

Many different Haskell features come together to make programs elegant and fun, as outlined in many of the other answers here (which are more than worth reading!). But it's important to look at this at a high level: most people like Haskell not because of any one specific feature or property but because, in combination, they produce an ethereally elegant language that's a blast to use.

What more can you ask?

Why do computer scientists like Haskell so much?