All the kids today with their dependent types and their provable correctness… I’m more of a hack hack, explore, evolve, iterative incremental improvement, big ball of mud, foot gun, intuitive programmer. Leaning into emotions and heuristics and hammock time and smells and refactoring. Hacking on the right side of the brain?
In the Pie language from The Little Typer:
(claim Odd
  (-> Nat U))
(define Odd
  (lambda (n)
    (Sigma ((half Nat))
      (= Nat n (double half)))))
We can formally prove that 42 is odd.
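The same sleight of hand works in any proof assistant, because the checker verifies what you wrote, not what you named it. Here's a hypothetical Lean 4 sketch of the same trick (the names are mine, not from The Little Typer):

```lean
-- A deliberately mislabeled predicate, mirroring the Pie example:
-- despite its name, this is the standard definition of *even*.
def Odd' (n : Nat) : Prop := ∃ half, n = 2 * half

-- A machine-checked "proof" that 42 is Odd'. The witness is 21,
-- and the kernel happily verifies 42 = 2 * 21.
theorem fortytwo_is_odd : Odd' 42 := ⟨21, rfl⟩
```

The proof goes through; the mismatch between the name and the meaning is invisible to the checker.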
I don’t wanna parade-rain on this stuff. It’s cool that people are trying new things. I’m… just hanging back to see where things are going.
left_adjoint wrote a really good follow-up post:
A lot of dependently typed programming languages that have text editor integration will fill in the skeleton of what the program must be by the induction principle of the type, giving you a bunch of “holes” to fill in for your code. You can fill in those holes piecemeal and even be working on defining your theorems and properties about your code simultaneously.
About a year ago I saw a demo of someone using Elm (the web programming language, not the mail app) in a similar way. For me, that still seems like it'd be a major leap in how I work with code. It's great that it does the boilerplate for me, and I really, really appreciate that it does, but… what about when it's not done yet?
How easy is it to refactor stuff, mold stuff around? Lisp to me is more like a, uh, a moldable clay, while Elm came across as programming with a super enhanced bionic exoskeleton on. Each keystroke inferring mountains. As you can see, my ignorance is showing here. I don’t have that much experience with a good typed language (outside of Haskell, which inspired brev, though I based my polymorphic multimethods on arbitrary (and arbitrarily composable) preds rather than on a type hierarchy) and am still reeling from Java and C++ back in the day. That’s why I’m being careful to try to not parade-rain.
It’s a process that I find very symbiotic, really, and less antagonistic than the typical “write code => does it work => aww fuck” loop.
This symbiotic, exploratory process sounds very appealing. I love REPL-driven programming, making something, refactoring it, extracting, inlining, renaming stuff until it’s awesome.
One thing that does not sound great is the name, “Agda”, in a Swedish-speaking context. I know, I know, I’m getting tedious with my constant wish for a new name for GIMP and Apache but… “Agda” (because of the song it intentionally references) is my least favorite of all those.
I feel like there are three reasons for types: performance, convenience, and correctness.
Performance? Yes. Very appealing. Run-time chex are slooow.
Convenience… isn’t that the compiler auto-doing things that you ideally wouldn’t even have to do? Like, take the classic & well-known awk program '!x[$0]++', which removes duplicate (including non-consecutive) lines. It doesn’t need to generate code for all record types, it just… works on all types right away. And the entire program is just eight chars♥︎
Similarly I made as-list so that list operators could work on many other types, not just lists.
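To make the “works on all types right away” point concrete, here’s a toy sketch in Python (my own illustration, nothing to do with brev’s actual as-list): the same dedup logic as the awk one-liner, written once, running unchanged on any iterable of hashable items, with no per-type code generation.

```python
def dedup(xs):
    """Yield each item of xs the first time it appears,
    skipping later duplicates -- the awk '!x[$0]++' trick."""
    seen = set()
    for x in xs:
        if x not in seen:
            seen.add(x)
            yield x

# One definition, many types:
print(list(dedup(["a", "b", "a"])))   # → ['a', 'b']
print(list(dedup([1, 2, 1, 3, 2])))   # → [1, 2, 3]
print(list(dedup("mississippi")))     # → ['m', 'i', 's', 'p']
```

The function never mentions the element type; it only assumes the items can live in a set.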
Correctness is the part that I think is sus. As the “42 is odd” example above implies. To sell me on dependent type stuff, lean into the performance and the super-powered coding, just like left_adjoint did♥︎
As an analogy, overly relying on spell checkers can make some people miss things that are still legit spellings but are the wrong words in that particular sentence, like effect/affect.
But, it’s worse than that since (and I’m complaining about type annotation, not type inference) you need to enter the type info anyway. It’s “bugfinding through redundancy”. Sort of the same philosophy as “write a test for every line of code” but more rigid and limited. Of course redundantly writing out what you want the function to accept and return is going to catch some bugs.
If you like this sort of type checking, you’re not alone. A lot of people love them, and ultimately there’s no need to argue. Time will tell if that style of programming does lead to overall fewer bugs, or at least does so for programmers of a certain taste, and if so, that’s fine by me. I’m not gonna take your ML away.
But as my “42 is odd” example shows, I’m not too happy with the whole “statically typed programs are Provably Correct” hype leading into the same hopeless dead end that Russell and Whitehead hit a hundred years earlier.
Coming from C and Pascal, when I first discovered languages that didn’t have type annotations in the nineties (like awk, perl, and scheme) I felt as if I had found the holy grail of programming. No longer would I have to write out boring and obvious boilerplate. It was a breath of fresh air. Other people obviously feel differently and that’s fine.
For me, it seems that a lot (not all, but some) of the things a good type annotation system helps you with are things you don’t even need to do with dynamically typed languages. It also feels like with a type annotated language, there’s a catch-22 problem leading you to have to basically write the function before you write it (maybe with pseudocode or flowcharts) just so you can know what type signatures to use.
I felt that wow, a cons pair of car and cdr can express data in so many ways; I can just immediately write the actual logic of my code. Whereas when I worked as a Java dev (don’t worry, I’ve looked at modern typed languages too, like Haskell) we had to slog through writing types (classes and instances), UML diagrams, walls full of post-its, architecture, ConnectionKeepingManagerFrobnicator.new() etc. With Scheme, all that red tape just fell away. No need for pseudocode since I could just send whatever I was thinking into the REPL.
The type community loves the expression “to reason about the code”. Well, to me it’s a heck of a lot easier to reason about the code when it’s a fifth the size. (Sexps help too since it’s easier for me to grok a code tree than a linear sequence of unparsed tokens of code data.)
Obviously, type fans have had similar epiphanies but in the other direction, falling in love with static just like I fell in love with dynamic. And that’s cool. Let the deserts bloom. Humanity can’t be betting all of its eggs on my approach. I see the type craze as an experiment. One that might even be right. So please, go ahead.
I’m just really, really grateful that it’s not me who has to slog through it. I can sit under the cork tree sniffing dynamically typed flowers.
13:04 <sandra> I haven’t been convinced that programs can be proven correct but I’m not well-read on the area so I moderate how much I complain about it so I don’t Dunning-Kruger myself too much. A little bit is OK.
13:06 <Corbin> “correct” is the tricky word there, not “proven”. A proof is simply something which convinces somebody of something; the somebody is called a “verifier” and the something is called a “proposition” or sometimes “property”.
13:08 <Corbin> A correctness proof for a machine is usually made relative to some other machine. Thanks to Turing, Rice, and Gödel, we usually can’t make absolute statements about correctness.
13:09 <Corbin> What we can do is verify that, given some machines A and B as well as a compiler from A to B, the compiler has the property of emulating actions in A faithfully as actions in B. So, if A is a reasonable machine that we trust, then so is B, via the compiler.
13:10 <sandra> That, I can finally get behind
13:11 <sandra> I’ve been waving Gödel in dep type peeps’ faces for the past two years
13:11 <sandra> And now I feel super bad about that.