Darn AI and its filthy mind! What will it think of next?
One of my guilty pleasures is rewriting trivial functions to be statement-free.
Since I’d be too self-conscious to put these in a PR, I mostly keep them to myself.
For example, here’s an XPath wrapper:
const $$$ = (q,d=document,x=d.evaluate(q,d),a=[],n=x.iterateNext()) => n ? (a.push(n), $$$(q,d,x,a)) : a;
Which you can use as $$$("//*[contains(@class, 'post-')]//*[text()[contains(.,'fedilink')]]/../../..")
to get an array of matching nodes.
If I were paid to write this, it’d probably look like this instead:
function queryAllXPath(query, doc = document) {
  const array = [];
  const result = doc.evaluate(query, doc);
  let node = result.iterateNext();
  while (node) {
    array.push(node);
    node = result.iterateNext();
  }
  return array;
}
Seriously boring stuff.
Anyway, since var/let/const are statements, I have no choice but to use optional parameters instead, and since loops are statements as well, recursion saves the day.
Would my quality of life improve if the lambda body could be written as => if n then a.push(n), $$$(q,d,x,a) else a? Obviously, yes.
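The trick generalizes beyond XPath. Here’s a hypothetical sum function in the same statement-free style, with default parameters standing in for let and recursion standing in for the loop:

```javascript
// Hypothetical example: `let i`/`let acc` become default parameters,
// and the while loop becomes self-recursion.
const sum = (xs, i = 0, acc = 0) =>
  i < xs.length ? sum(xs, i + 1, acc + xs[i]) : acc;

console.log(sum([1, 2, 3, 4])); // 10
```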
The only clue we have is that the desk reflections look really plausible.
But yeah, it’s real: https://www.newyorker.com/news/our-columnists/the-president-is-shilling-beans
You can list every man page installed on your system with
man -k .
or just
apropos .
But that’s a lot of random junk. If you only want “executable programs or shell commands”, only grab man pages in section 1 with
apropos -s 1 .
You can get the path of a man page by using
whereis -m pwd
(replace pwd with your page name.) You can convert a man page to html with
man2html
(may require apt install man2html or whatever equivalent applies to your distro.) That tool adds a couple of useless lines at the beginning of each file, so we’ll want to pipe its output into a
| tail -n +3
to get rid of them. Combine all of these together in a questionable incantation, and you might end up with something like this:
List every command in section 1, extract the id only. For each one, get a file path. For each id and file path (ignore the rest), convert to html and save it as a file named
$id.html
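One possible shape for that incantation, sketched under a couple of assumptions: that apropos prints lines in the usual “name (1) - description” format, and that your man2html build accepts a file path directly (some versions want the page uncompressed first).

```shell
# Sketch only: list section-1 pages, keep the name, resolve each to a
# man-page path, and render it to $id.html in ./manhtml.
mkdir -p manhtml
if command -v man2html >/dev/null 2>&1; then   # skip gracefully if not installed
  apropos -s 1 . | cut -d ' ' -f 1 | sort -u | while read -r id; do
    path=$(whereis -m "$id" | awk '{ print $2 }')  # first path, ignore the rest
    # some man2html builds want uncompressed input; adjust if yours does
    [ -n "$path" ] && man2html "$path" | tail -n +3 > "manhtml/$id.html"
  done
fi
```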
It might take a little while to run, but then you could run
firefox .
or whatever and browse the resulting mess. Or keep tweaking all of this until it’s just right for you.