We now make stylesheets so complicated that we need to write them with a framework that transpiles it down to CSS before shipping? The purpose of stylesheets is to apply consistent styling across semantically distinct content.
I’m not on board with that perspective. CSS is the appropriate place for presentation complexity.
He goes on:
JavaScript is not bad. It has its purpose. But is that purpose, for example, stylesheets? No!
JavaScript mangles semantics in a way that CSS does not.
Dynamic content (such as animated elements) and even some amount of user interaction are well handled by CSS, to the extent that it better affords designers making pages where that stuff can be turned off, ignored, or overridden. That’s much more difficult with JavaScript and the virtual DOM. A simple

wget -qO- |sed 's;";\n;g' |grep mp3$

is not as easily done with a JavaScript-laden page. CSS doesn’t (as often) get in the way of accessing the page’s content in a broader, more accessible way.
Now, I do think approaches like “object-oriented CSS” are completely misguided, adding an additional layer into something that was already meant to be the glue between layers. But since the cure for that is to allow the CSS layer to be messy and complex, I have sympathy for people making or using things like SASS or Emacs macros. I think that’s good, actually. Tools are our friend here. This webpage has just a few lines of CSS, but even then I’ve generated those lines using Liquid conditionals so that, for example, pages that don’t have a blockquote don’t include styling for blockquotes.
Tailwind CSS, a sort of opposite approach, I’m not fond of either, since it adds a redundant level of classes. But it doesn’t harm anyone else; it’s just a cockamamie way to work.
Really the only “style” I wanna add to the older, pre-style web pages is max-width on the text but that shouldn’t have to be server-side. There should just be a good default, narrower max-width for the body text on older, unstyled web pages. Server-side styling (the font-size tags and friends) was a mistake.
In practice, I’ve been enjoying epub novels, email, text messages, IRC, and Atom/RSS feeds more than web pages for the past several years. 🤷🏻‍♀️
The web is getting ruined by mandatory darkmode, tracking and popups. Popups have been a scourge on the web since time immemorial but for a while we were able to successfully block them. With CSS that’s way harder.
So yes, CSS is bad if plain text is on the menu, which I’d rather choose any day of the week.
Styling and typography are good things! But it should be client side.
One lesson I’ve learned in the past few years is that idempotent is good.
That means a switch you push and it stays pushed even if you push more.
For example, piping things through kramdown is idempotent:
echo "Eating only spiders and leaves"|kramdown|kramdown|kramdown
A li’l bit of wasted electricity but text doesn’t get borked.
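kramdown might not be installed everywhere, but the same property is easy to see with something as mundane as sort. A minimal sketch:

```shell
# sort is idempotent: sorting twice gives exactly the same output as sorting once
once=$(printf 'b\na\nc\n' | sort)
twice=$(printf 'b\na\nc\n' | sort | sort)
[ "$once" = "$twice" ] && echo "idempotent"
```

Same idea as the kramdown pipe: extra runs burn a little CPU but can’t make the output drift.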
Used to be I thought toggles were really practical and nifty, and steppers that looped around like a Pac-Man stage.
But on the Mudita Pure phone, those looping, non-idempotent menus were a big problem since the screen didn’t work in the dark. I couldn’t orient myself by stepping all the way up.
A while ago, I got the suggestion to set up

alias config='/usr/bin/git --git-dir=$HOME/.cfg/ --work-tree=$HOME'

so I could use config to save dotfiles and such without every app believing that my entire home directory is a git repository. I can clone the dotfiles from elsewhere with git clone ~/.cfg, but in the home directory I instead call config, like config add, config commit etc.
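If you’re setting this up from scratch, the whole thing is just a bare repo with $HOME as the work tree. Here’s a sketch that runs in a throwaway HOME, using a shell function instead of the alias (so it also works inside scripts); the .profile content and the identity settings are just example values:

```shell
# demo in a throwaway HOME so we don't touch the real one
export HOME=$(mktemp -d)
git init --bare "$HOME/.cfg"
config() { git --git-dir="$HOME/.cfg/" --work-tree="$HOME" "$@"; }
config config --local status.showUntrackedFiles no   # optional: keep `config status` quiet
config config user.email "you@example.com"           # needed for the commit below
config config user.name "You"
echo 'export EDITOR=emacs' > "$HOME/.profile"
config add "$HOME/.profile"
config commit -m "Add .profile"
config log --oneline
```

The status.showUntrackedFiles tweak stops config status from listing every untracked file in your home directory.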
But “every app” not believing that my entire home directory is a git repository includes Magit, and I sometimes wanna use Magit to restore files or partially stage hunks or whatever. Things that are too time-consuming to do by hand.
So here’s a script that temporarily actually does make the home directory into that git repository, waits for you to be done with Magit, and then restores things.
This does clobber the ~/.git file, and it even breaks if that pathname is a directory or such, so don’t have that.
#!/bin/sh
# Point a ~/.git file at the config repo so the home directory
# temporarily looks like its work tree:
echo "gitdir: $(realpath "$1")" > ~/.git
cd "$(realpath "$1")" || exit 1
git config --unset core.bare
git config core.worktree ../
echo "Do your magit stuff! Then hit RET here when you're done."
read nothing
# Put the repo back the way it was:
git config --unset core.worktree
git config core.bare true
rm ~/.git
cd -
Call it with your conf repo dir as an argument, in my case I’d call:
cfgmagit ~/.cfg
For a repo,
git clone https://idiomdrottning.org/cfgmagit
Creating zshbrev kinda ground my releasing of apps to a halt because now instead of making a whole app I’ll just add a few lines to .zshbrev/functions.scm, usually hardcoding paths and variables too just because that’s simpler.
Like right now, I wanted a simple CLI app to turn a shell glob into a .pls
playlist so I just:
(define (lspls . files)
  (print "[playlist]
NumberOfEntries=" (length files))
  ((over (print "File" (add1 i) "=" x))
   (map
    (strse "/var/example/path1"
           "https://example.url/prefix1"
           "/var/example/path2"
           "https://example.url/prefix2")
    (map realpath files)))
  (void))
(Using my real file paths and URL prefixes instead of those example ones.)
Zshbrev too good! The Emacs-inspired whole “big ball of spaghetti” approach where any function can call any other function—like “realpath” in this example, originally defined for something else—turned out to be quite the ticket for code reuse.
Update: The upstream documentation was updated shortly after I wrote this, clarifying some of the things I was unsure about.
On Gitea instances and Forgejo instances (like Codeberg) you can create git PRs from the command line without using the web and, unlike GitHub, you don’t have to install any new apps outside of the git you cloned with.
You do need an account on that particular instance that the repo is hosted on. It’s not truly decentralized the way git send-email or git request-pull are. Those two methods are way more awesome and I’m always glad to see more support being built for them instead of for this local-account-only PR workflow. Needing a thousand accounts all over the place is such a pain with the Gitea/Forgejo model.
Clone the repo, make and commit your changes, figure out what branch you want to push it to (this example will use the branch name “main”).
Then run
git push origin HEAD:refs/for/main -o topic=main
That prompts you for your username and password on that instance and now the awesome part is that it creates the fork and the PR and everything else for you, you don’t have to fiddle with an annoying web UI. It’s all from the comforts of your own cozily scriptable CLI.
If you wanna use a feature branch, let’s say you’re working on a feature called “hacks” that you wanna push into main, that’d be:
git push origin HEAD:refs/for/main -o topic=hacks
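The refspec part of this is plain git and can be poked at locally. The fork-and-PR creation is server-side magic on Gitea/Forgejo, but the push itself just writes a ref under refs/for/ and sends push options along. A sketch against a throwaway local “server” (names and the empty commit are just example values):

```shell
# throwaway local "server" that accepts push options, like Gitea/Forgejo does
tmp=$(mktemp -d) && cd "$tmp"
git init --bare server.git
git -C server.git config receive.advertisePushOptions true
git clone server.git work
cd work
git config user.email "you@example.com" && git config user.name "You"
git commit --allow-empty -m "my change"
# the same push shape as against a real instance:
git push origin HEAD:refs/for/main -o topic=hacks
# the ref now exists on the "server":
git -C ../server.git for-each-ref refs/for
```

On a real instance, the receive hook is what turns that refs/for/main push into an actual PR; here you just see the raw mechanics.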
You can also replace the word for (which means a normal PR) with drafts or for-review, and change the title or description with -o title="My important PR" -o description="This fixes the dosh-distimmer so we don't have to rely on that darn Gostak" or whatever.
Now, pushing to origin with that weird custom refspec again will open new PRs.
If you instead wanna push more changes to your already-submitted-waiting-for-merge PR, I’m not sure how to do that yet; maybe that’s what the -o topic= is for.
Unlike GitHub, where every project you send PRs to will litter your own account page with a useless dangling fork, here the forks seem more ephemeral; they don’t show up on the webpage and you can’t even clone them down again, they only exist for the PR. So when I make drive-by commits on GH, I often just work from /tmp, because if I ever need my own version of that repo, I can clone it from the web page. Forgejo/Gitea works differently in that regard.
I got the impression that this is on by default for repos created within the last few years but older repos need to turn on agit support.
An Emacs function that reads a single char, then runs a query replace backwards from where you are, letting you optionally replace occurrences of that char with spaces. After the query replace, it sends you back to where you were.
It’s a vanilla query replace which has lots of convenient ways to get out of it or tweak the text at point and so on. One that I learned today is period for “yes, but this is the last one”, which can be pretty convenient.
You can also use E to change the replacement string to something other than a space.
(defun fix-spaces ()
  (interactive)
  (query-replace (char-to-string (read-char)) " " nil nil nil t)
  (goto-char (mark)))
Probably it’s only me doing this but when I type on the tablet’s touchscreen keyboard, my hands are so tiny that I often miss the space bar and instead hit b, n, or m or even v sometimes.
Trying it out for a bit I’m wondering if maybe “fix-anything” is a better version, which reads two characters instead of assuming the second is a space.
(defun fix-anything ()
  (interactive)
  (query-replace (char-to-string (read-char)) (char-to-string (read-char)) nil nil nil t)
  (goto-char (mark)))
Here is another fun variant that just replaces one match of any character with any other:
(defun replace-one-character-backwards ()
  (interactive)
  (save-excursion
    ;; regexp-quote so typing a regex special char like . or * matches literally
    (when (re-search-backward (regexp-quote (char-to-string (read-char))) nil t)
      (replace-match (char-to-string (read-char))))))
I wanted a filter in brev (I mean, a procedure) that if a gemtext snippet matched this format:
a line that ends with a quote:
> the second line is a blank line, or it's a quote line
=> an://url/with a mandatory preceding blank line
it should rewrite that to
=> an://url/with a line that ends with a quote:
> the second line is a blank line, or it's a quote line
and if it didn’t match that format, just pass it through unchanged.
Thanks to the fanciest define of all time and its require feature, that was pretty easy. Require takes two arguments; the first is either a bool or a pred, the second is a value. If the bool is true, or the pred applied to the value is true, then pass through the value. Otherwise backtrack out of the procedure entirely! Through magic time travel♥︎ probably better known as call-with-current-continuation.
First do nothing by default, just pass through all input as it is:
(define (link-linify-source x) x)
Then handle the special case of a string that’s splittable into the pattern we’re looking for.
(define (link-linify-source (= (fn (string-split x "\n" #t))
                               ((? (strse? ":$") hd)
                                (? (disjoin (like? "") (strse? "^>")) lean)
                                . rest)))
  (conc
   (string-intersperse
    (cons*
     (conc
      (string-intersperse
       (take (string-split (require (strse? "^=> ") (last (butlast rest)))) 2))
      " " hd)
     lean
     (drop-right (require (< 2 (length rest)) rest) 3))
    "\n")
   "\n"))
If, in the middle of this mess, we notice that the second-to-last line isn’t a link line, we just nope out as if the filter never even matched, thanks to require. Pretty neat♥︎
Used to be that whenever I saw a Python code repository of an app I wanted to hack on and install I’d get a pit of fear in my stomach.😰 It’s so difficult to install with all the pips and venvs and what not.
But fpm to the rescue! At least if there’s a setup.py in the repo.
In a temp directory,
fpm -s python --python-package-name-prefix python3 \
--python-bin python3 --python-internal-pip -t deb /path/to/repo
Problem one is that sometimes there might be dependencies. Go install them first (if they’re in apt, use that) and then add --python-disable-dependency for each one. For example, I’ve been compiling a package that depends on click, which I already have installed from Debian’s own repos, so I would:
fpm -s python --python-package-name-prefix python3 \
--python-bin python3 --python-internal-pip \
--python-disable-dependency click -t deb /path/to/repo
There can be multiple instances of the --python-disable-dependency flag.
Problem two is that this turns into a kinda janky deb because all the paths are relative, so I need to be in / when I install it. But I just cd to / and then install ’em with gdebi, giving it the path to the temp directory where the deb is.
The awesomest part is that they don’t go in their own venvs or dockers or chroots or their own li’l boring & redundant worlds. Instead it’s all according to traditional Debian philosophy, all part of the same file system with no redundancy. Straight into /usr/lib/python3/dist-packages so other packages you install can use them.
You can also install things from python’s package repos without having the repo locally; just put the package name instead of the path to the repository. Look at those debs carefully before installing ‘em so you don’t get malware and stuff. It’s not something I do a lot since what I want is usually something that’s in apt already (in which case no need to use fpm at all) or it’s something I wanna hack on (in which case I did need the repo, so I could make changes & patches).
This one is what I call a “double-disclaimer”; a post so iffy that I need to stick not one but two disclaimers in front of it.
First of all, I’m glad people are making FOSS version control for the non-technical audience. Thank you for that. And if that, through your usability studies and such, means that for that crowd the “staging area” a.k.a. the “index” is just not worth its weight in confusion, that’s fine. Get rid of it.
Staging is a client-side UI affordance, not something that really needs to be part of the wire format or the remote repo format. If I’m collaborating with Alice and Bob on the other side of the world and one of them is not using staging at all and the other is carefully staging everything, I wouldn’t be able to tell the difference. It’s one of those things that doesn’t hurt anyone else. Even though I sometimes think vim users should be required to wear bells to warn others of their presence, when all is said and done, they can receive & read text files and they can write & send text files, so they’re only hurting themselves. Or, if these UI studies are right, it’s us who are using staging who are only hurting ourselves.
Second disclaimer: I know that some of the non-staging tools like jujutsu and gitless do have ways to do some of the cool things that staging can do, like the jj split command in jujutsu, and got even has an optional index; if you run got stage you can stage things, otherwise commits commit the entire thing.
So awesome is Magit and its staging area that I sometimes put a text file under git that I normally would never use VC for, just so I can revert hunks from commits and revert hunks from the staging area, too.
It sometimes feels like this wonderful time machine; I still use the weird default Emacs “undo chain” but that hardly matters with all the glorious powers of Magit available!
Jujutsu describes git’s staging area this way:
The index is very similar to an intermediate commit between HEAD and the working copy
That’s right, and that’s why jujutsu doesn’t need it, and that’s awesome. For them. But Magit leverages this feature of being almost-a-commit wonderfully because then I can revert hunks without ever having to have committed them! I can put all kinds of junk in my files and only commit the good things.
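That “only commit the good things” trick is the heart of it. Stripped of Magit, the raw git version of the move looks like this sketch (file names and contents are just examples):

```shell
# stage the good line, then add junk to the working copy;
# only the staged version makes it into the commit
tmp=$(mktemp -d) && cd "$tmp"
git init demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
echo "good line" > file.txt
git add file.txt                 # index now holds "good line"
echo "junk line" >> file.txt     # working copy has junk, index doesn't
git commit -m "only the good stuff"
git show HEAD:file.txt           # prints just "good line"
```

The commit snapshots the index, not the working tree, which is exactly the almost-a-commit behavior Magit leverages per hunk.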
Now, in the grim darkness of the far future someone might come up with an Emacs interface for Pijul or for Jujutsu that is as awesome as Magit is, and when that happens, this essay is gonna look pretty wack. But it hasn’t happened yet.
Brev doesn’t have any boilerplate. I just open a project-name.brev file and start hacking. Now that I am using git more, I’ve started making a folder for it first, project-name/project-name.brev. And then I hack away.
I have a shell script that, when compiling, adds the (import brev mdg) to the top automatically. So I don’t even need to put that in the file. Hello World is just
(print "Hello world")
And then when I am nearing the release stage, I run brev2scm on the brev file.
(import brev mdg) is shorthand for “bring in all kinds of batteries ever invented”, but on a more serious, more done project you don’t need all those dependencies linked in. Brev2scm asks some questions and then creates a project-name.scm file that handles the module declaration and the few specific imports that particular file needs. (If my file is a stand-alone application rather than a module/library, then I edit the generated .scm file to only have the imports.)
apt-get install chicken
chicken-install brev
On zsh, I can just
csc -prologue <(echo "(import brev mdg)") project-name.brev
when I’m doing the prototype, and then later on once I’m pretty much done and I’ve run brev2scm, I can switch to
csc project-name.scm
I’m sure bash has its own way of doing that kinda here doc, but otherwise make a brev-prol.scm file with (import brev mdg) in it first so you can:
csc -prologue brev-prol.scm linen.brev
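For what it’s worth, bash does support the same process-substitution syntax as zsh, so the <(…) trick should carry over directly. The mechanism is just “a command’s output pretending to be a file”, which you can see without csc installed:

```shell
# process substitution: the echo's output appears to cat as a readable file
cat <(echo "(import brev mdg)")
```

If that works in your shell, the csc -prologue <(echo …) one-liner should too.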
To just start evaluating things in a REPL, that’s a Chicken app called csi, and when I use that, I do call the (import brev mdg) manually.
Here is what’s in my Emacs conf file:
(setq scheme-program-name "csi")
(add-to-list 'auto-mode-alist '("\\.brev$" . scheme-mode))
(add-to-list 'auto-mode-alist '("\\.egg$" . scheme-mode))
As I’m hacking, I use cmuscheme in Emacs, and all those dinky li’l “let me just try this function” things (like say you’ve written the function frobnicate, and you write a (frobnicate 1 2 2) to see if you get what you’re expecting) go in a file. It doesn’t matter what it’s called, because make-tests can autogenerate the unit tests from it.
So just evaluate (frobnicate 1 2 2) and stuff in Emacs with scheme-mode and cmuscheme, and let’s say it returns 23 and I’m happy with that. I don’t need to record the 23, just look at it, because make-tests will turn it into a test that looks like:
(unless (equal? (frobnicate 1 1 2) 23)
  (error ...))
This isn’t something I need to think about right away, other than: all those dumb li’l “test evals”, save them! Don’t delete them. Put them in a toybox.scm or junk.brev or whatever file. And later they can turn into beautiful spun hay-gold by make-tests.
I’m not the most tinfoil-hat–wearing hacker in the world but here’s a minimum baseline I want all package managers, especially programming-language–specific package managers, to have:
I want to be sure that the binary I’m getting is compiled from a particular commit in the source VC. I’m not talking about “pinning to an old version for the sake of reproducible builds” here, that’s a feature with pros and cons that I’m neutral on, I’m talking about verification. To be able to know that “OK, this binary came from this source”. It’s not the be-all-end-all to malware since obfuscation exists but that’s why I call it a baseline.
That’s it. A one item wishlist.
I’ve always been pretty comfy with the official Debian-provided debs. Even a mirror is subjected to checksums, and there are e2e signatures. Third-party PPAs are another story, and I’m not too fond of them (third-party Debian source packages would be a great fix there), but growing up in the safe embrace of vanilla Debian and then coming to the world of waittaminute you want me to run what command on what now!? has been a wild ride.
I know that in the bigger dependencies debate, there are arguments about few big dependencies vs many small, NIH vs code reuse etc. That’s gonna take a while to sort out. But the bare minimum for a binary package is some sorta paper trail on how it got made.
Goreleaser, I’m especially calling you out on this. You make high-quality debs but the binaries in those debs might as well be blobs from Planet X for all I know. I can trace the build to a GitHub “action” being run on such and such tag, but then I need to trust GitHub and the creators of that particular action, i.e. you.
Now it’s possible that some of the package systems I’m slagging here, whether or not I named them by name in this post, do have a solution for this. That’s great! That’s a win for everyone if that’s the case. Please make that solution a li’l easier to find.
Right, no-one owes me or anyone this. I know. These are just a pair of cents tossed into the crumpled paper cup on the street called “design discourse” and probably worth as little, too. Thanks for reading anyway.♥︎
You know how in the Unix shell, you can have programs and they can talk to each other through pipelines, env vars, and command line args, right?
Emacs is a similar environment, although its functions can compose in even more ways. Not only that, the ways they can compose are themselves extensible, so over the years people have invented “hooks” and “advice” and “macros” and more. It’s a mess is what it is, but it’s a very flexible mess.
It has a ton of stuff already, including being able to run Unix shell functions, so it can do a superset of what a normal shell can do.
When using the define from match-generics you need to take some care about what order you define the calls in.
If the things are clearly different, like different numbers of arguments or conflicting predicates or different static data, like
(define (foo bar 4) (+ bar 4))
(define (foo bar 5) (* bar 5))
then that’s fine, those can go in any order, but if you have something that’s a superset of something else, that’s when you need to put the supersets first and the special cases last. Like,
(define (foo bar anything) (- bar anything))
(define (foo bar 4) (+ bar 4))
(define (foo bar 5) (* bar 5))
is right. If you would’ve put the “anything” one last, it’d shadow the special 4 and 5 ones.
Getting this wrong is a huge pain, so it even ships with a way to reset things at the REPL if you’ve borked things up:
(define: reset foo)
I even have an Emacs shortcut to re-define the definition at point, that’s how annoying this is when you’ve made mistakes.
It’s a lot better in version 2 of match-generics since it is a lot smarter at recognizing when you’re overwriting something that it already knew, and that can fix some typos, but things like misspelled predicates are still grounds for a reset.
Some other generics systems use type classes to automatically be able to order things supersets-first; they basically have a database of every possible type of arg and whether or not that arg can be a subset of something else more specific.
With brev, you can do one thing to numbers and another to integers by putting the special case last:
(define (foo (? number? bar)) ...)
(define (foo (? integer? bar)) ...)
In those other typeclass-based systems, those could’ve gone in any order and it’d just do the right thing, but with match-generics, you need to mind the order.
What you gain from this is a lot more flexibility. You can do any predicate, even custom ones, and custom destructuring too. Since you supply the order, it doesn’t have to be able to figure out the order.
It happens plenty often as I write documentation and I write about some special hoop or requirement the programmer needs to go through, or some case the library doesn’t handle automatically, and I’m like so ashamed and I think “wow, I can’t ship this” and I go back to the drawing board and implement the missing thing. But this time, the trade-off (given Scheme’s limitations) is just super worth it.
That started out as just how I implemented my first prototype using consing, but I immediately liked it because it lets you add special cases later on that you maybe didn’t think of at first.
Like “Oh, I need to be able to handle numbers, too”, OK, that’s no problem.
I use call-tables all the time in my code. Sometimes a good solution is to have two call-tables with the same data, one going from A to B and the other going from B to A. That way I can find anything from anything.
Here’s where they came from!
I was a li’l disappointed in Software Design for Flexibility since I think SICP is the most important philosophical text of the 20th century (sorry Wittgenstein and Sartre, but Eva Lu Ator is where it’s at).
SDfF is actually a good book and I can recommend it, just, it’s a normal good book whereas SICP is like the heavens parted. I guess part of the problem is me.
Georg Lichtenberg once said “A book is a mirror. When a monkey looks in, no apostle looks out”. And when I read SICP I had a lot to learn, but when I read SDfF twenty years later I had stuff pretty much figured out, and what was in it didn’t feel like super new ideas to me. Some of them were stuff that felt completely unworkable since they’d require going over all imported functions and recording their arity and things like that, only to get slightly shorter, slightly less readable code.
But as I implemented that arity recorder, I realized that I could extract the body out to a more general create-arity such that:

(define arity (create-arity))
(arity cons 2) ; => 2
(arity cons) ; => 2
I thought arity might not be the only thing I’d wanna create with that thing. Me and jcowan had made great hay out of using hash-tables to record nodes in tree.scm. Having a way to bind anything to anything with any equivalence function is awesome! So I renamed it call-table and shipped it!
Any time you have nodes or vertices of any kind, spray on some call-tables and boom, now you have edges. It’s a more flexible design than what was in Software Design for Flexibility.♥︎
The ability to open the hood and access the underlying hash-table was there from day one. Abstractions with escape hatches for the win. Just like how Lisp has lists that you can do list processing on, but you can easily access the underlying cons cells.
Chicken already had callable-data-structures, which I vaguely remembered the day after making call-tables. I feel doubly guilty to Mario: not only had I ended up biting his schtick with these call-tables, I also later on ended up calling a module mdg (for “match-define generics”), which was pure coincidence. I’m sorry, Mario!
When I looked at that again, I saw that they have exactly the same API for accessing the data in the hash-table and for accessing the underlying hash-table, but what’s so clutch with the way SDfF’s arity and other call-tables work is that you can also easily and side-effectively change what’s in the table. It’s not just callable on the reading end, it’s also callable for setting.
Callable-data-structures did have one awesome feature that I hadn’t implemented then: support for using generalized set!. I added that after, but I haven’t actually used it myself yet even though I use call-tables all the time.
For a while I was doing a lot of Clojure work, and unlike some other lisps, they use tables all the time, although the paradigm there is usually not to use them as an arity-like store but instead to use them in a purely functional way, shadowing them rather than changing them, which is cheap in Clojure because of how they’re implemented (so, in Clojure, the shadow can safely share parts of the original table).
That’s not something I have implemented for mine yet. It’d require swapping away the underlying datastructure from SRFI-69 hash-tables to something more custom, that shares data until nodes are changed in which case it splits those nodes specifically and shares the rest.
One thing I did find useful in Clojure though was being able to write out a table and its contents right in the code, so I implemented the call-table literals.
Like, I could (define arity (ct cons 2 not 1 reduce 3)) and it’d then be able to know that (arity cons) is 2, and I can still add new stuff to arity like any other call-table.
Here’s a li’l three-line shell script that tries to help you guess and set format.subjectPrefix and sendemail.to in a git repo.
Normally it uses completing-read and all its dependencies, but you can edit the script to instead use gum choose or whatever you want.
You can get the script by
git clone https://idiomdrottning.org/git-setup-email
Here’s a li’l mini function for Emacs that when called sets the window width to one space wider than the current cursor position:
(defun cozify-window ()
  (interactive)
  (shrink-window-horizontally
   (- (window-width) (current-column) 1)))
By window, I mean the panels inside the frames.
I mean the things that split up your Emacs into left and right when you’ve done C-x 3, even in the TUI mode of Emacs.
On the web, there used to be well-designed pages and badly designed pages.
There used to be some really poorly designed pages that were just one big pixmap or Flash file.
And some that were nice and gentle and straight-forward; your good old “body, h1 hi, p hello how are you” type sites. Nothing fancy, just set up the way the web was meant to work.
Now, let’s say someone comes up with a new and exciting tag like <blink>.
Ideally, here’s what you want: new tags should degrade gracefully, so that

<blink frobnication-level="34">hello jed</blink>

would just turn into hello jed in browsers that don’t know the tag. And of course, ideally you don’t wanna come up with too much of this kinda stuff.
Unfortunately, some browser vendors or spec creators think a little differently.
Let’s use blinking text as the example again. A browser vendor is like “OK, wow, we can get the text to blink! To appear, disappear, appear, disappear, appear, disappear in a very amusing way! Let’s do that for all sites now, and provide the <non-blinking> tag for those sites that want to just look normal.”
Whenever that happens, every site on the entire web needs to ftp in one last time to that old University account or grandma’s old high-school poetry site that she posted to the WRAITH-L mailing list when she was a goth synth emo kid on dialup in 1994, and wrap their thing in <non-blinking>.
I think this is a bad thing for the web and should never happen.
Again.
Because it already has happened, for example with the

<meta name="viewport" content="width=device-width, initial-scale=1" />

debacle. Everyone who had a well-behaved site and wanted their site to just keep looking normal had to go in and add this.
The slippery slope tug of war of font sizes is another example.
Somewhen down the line, my tab-width accidentally got set to 2.
That meant that indent-region in Lisp code inserted a ton of extra tabs.
And when I pasted that code into other formats, like Markdown, it’d look weird so I’d run untabify on it.
I didn’t know what was going on but when I finally got around to finding the issue and fixing it, I was really surprised.
I set the tab-width back to the default (which is 8) and reverted the lisp buffers. They suddenly looked massively borked. Even though I had run indent-region on them earlier!
And then when I re-ran indent-region, it started looking normal again.
In other words: the tab-width has zero bearing on how lisp code looks in Emacs.
I’ve been programming since 1993 and never cared about the tabs vs spaces debate. I like a two-column indentation offset, or even one is fine, and eight is a li’l too wide for my taste, but that’s technically a preference not directly tied to whether or not a file should be encoded with tabs or with spaces. If it’s with spaces, everyone sees the same; if it’s with tabs, everyone sees according to their setting. Both of those have merit. I’m OK either way and use dtrt-indent so that code I check in will match its surroundings.
People who are fans of using tabs use the argument that tabs make it so that everyone can have the code looking the way they want to.
But that’s not what scheme mode was doing. It indented to the same fixed columns no matter what the tab size was, and only used tabs as some sorta cheap file compression engine. “Oooh I can replace 8 spaces with one byte!”
Conclusion: scheme-mode doesn’t deserve to use tabs. It lost that privilege.
(add-hook 'scheme-mode-hook
          (lambda ()
            (setq-local indent-tabs-mode nil)))
I do agree with Björn here, but when I last wrote about it, someone commented something good that I hadn’t thought of: distros can take up the curating mantle, selecting and packaging the most accessible, internationalized software while leaving the more half-baked un-thought-through whim-releases in their “compile if you dare” repos.
Re-reading what Eloquence wrote that time, it’s not even “curating” or banning things from distros and storefronts; just tagging them up so that people can have structured semantic info on accessibility, language issues, bakedness, maintenance etc. Yes, that’s a perfect solution to this. Completely agree.
I believe that devs don’t owe you anything and every piece of FOSS is a precious, unearned gift, but devs aren’t owed a spot in distros either. Not that that’s anything we ask for; we just hack something up that we ourselves need and figure “might as well release it”. (And by “distros” here, I know that there are hacky, half-baked, experimental distros too. That’s kind of beside the core point that commenter was making.)
I lucked into realizing something early on about moldy half-baked tarballs (or repos, these days) that can maybe sorta compile if you throw sed and grep and chewing gum and wire hangers at them: they’re optional. They’re an opportunity we may use, not something we’re forced to use. If this had been the world of Windows 95 that I was coming from, we wouldn’t even see these half-finished work-in-progress packages; they’d be totally hidden from us, kept from us in some inaccessible campanile cupboard. This hacky stuff is our chance to get in on the ground floor and test and patch and report and help. If we want to.
Now, devs, that’s not a license to treat bug reports—and UX issues and a11y issues and i18n issues and doc confusion are bugs too—with meanness or contempt. You do not have to fix the bug reports, feel free to mark everything “wontfix” or say “I have zero time right now, send patches if you wanna” or just not even reply at all, but don’t be rude or mean, and especially not if the users aren’t rude or acting entitled. Corporate dev shops pay big money for the kind of feedback these “whining” users are sending you for free.
Here’s a li’l completing-read that asks you to select one line from all the lines piped into it, and pipes that one line out. So you can use it in the middle of pipes, it doesn’t have to be at the end.
The smart thing is that if you’re in Emacs, like in shell-mode for example, it uses Emacs’ completing-read (which you hopefully have souped up with vertico or the like), and if you’re outside of Emacs in another terminal, it just passes it on to gum choose.
It has a ton of dependencies as per usual with these small anti-NIH hacks where I don’t wanna reinvent a ton of wheels:
Note that you can call it from any shell, not just zsh. That’s how zshbrev works: it can run zsh commands, aliases, and sourced shell functions in brev, which is why it needs zsh installed, but then the commands defined in zshbrev work in any shell. Even bash.
Here is the zshbrev function:
(zsh-import (stdout gum) (sexp emacsclient))

(define (completing-read)
  (cond ((get-environment-variable "INSIDE_EMACS")
         (emacsclient
          '-e
          (with-output-to-string
            (fn (pp `(completing-read "Please select: " ',(read-lines)))))))
        ((string=? "dumb" (get-environment-variable "TERM"))
         ;; not sure how to handle dumb terms here,
         ;; maybe I should just shuf -n 1? or head -1? or print all the lines?
         ;; for now just error out
         (niy))
        (else (gum 'choose) (void))))
Put that in .zshbrev/functions.scm, then run zshbrev, and there you have it.