There, got a PR open for federating emojis in display names and bios :)
Luckily a lot of the groundwork was done already (we wrote a lot of cool util functions way back when which are paying off now), so it was just a matter of adding a wee bit more logic, and doing some teensy weensy database fiddling.
This PR also has lots and lots of tests, because who doesn’t like tests, right?
The history of almost every non-trivial field is full of dead ends and bad ideas that were impossible to predict ahead of time. Genuinely reasoning about the field from first principles will lead you into one of these dead ends.
This is why it isn't uncommon for people who genuinely engage in reasoning from first principles to end up as cranks or conspiracy theorists. You need a justification for why your intellectual dead end isn't actually a dead end, to assuage your cognitive dissonance.
I mean, the above fix is literally: "Have you tried turning it off and on again?"
Hard disk controllers are computers, too. And they are just as disappointing as all the other computers.
One of the arguments in favour of surveillance capitalism is the great usefulness of cloud-based ML predictions.
After all, who can deny the usefulness of photo apps that automatically recognize faces, detect your speech, or help you make sense of the deluge of information in a social feed?
The argument usually goes like this: these features require large neural networks, which in turn require a lot of computational power to train the models, and a lot of memory and disk storage in order to load and save those models.
You can't do such things on small devices that run on batteries. Therefore your phone *HAS* to send your data to #BigTech servers if you want to get those features. Otherwise, you just won't get those features.
Except that... What if this whole argument is bollocks?
#POET (Private Optimal Energy Training) proves that you can run both the training and the predictions locally, without compromising on either precision or performance.
After all, the really expensive part of training is back-propagation. POET tackles the back-propagation cost in two ways: by quantizing the layers (so large real-valued tensor multiplications get reduced to smaller multiplications of integer tensors, without sacrificing too much precision), and by cleverly caching the layers that are most likely to be needed again, so they don't have to be recomputed - without caching everything, though (which would be prohibitive in terms of storage).
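To make the quantization half of that concrete, here's a minimal sketch of the general idea - plain numpy, with a naive per-tensor symmetric scale that I picked purely for illustration; this is not POET's actual scheme - showing how a float matmul gets reduced to an integer one:

```python
import numpy as np

def quantize(x, bits=8):
    # Toy symmetric linear quantization: map floats to signed ints plus one
    # per-tensor scale factor (real schemes use finer-grained scales).
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8 bits
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

# Two random float "layer" tensors.
a = np.random.randn(64, 128).astype(np.float32)
b = np.random.randn(128, 32).astype(np.float32)

qa, sa = quantize(a)
qb, sb = quantize(b)

# The multiplication itself happens on small integers (accumulated in int32
# to avoid overflow); the two scales recover an approximate float result.
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * (sa * sb)
exact = a @ b

print("max relative error:", np.abs(approx - exact).max() / np.abs(exact).max())
```

The integer multiply is the part a phone CPU (or DSP) can do cheaply; the two scale factors recover an approximation of the float result afterwards.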
The arguments in the paper sound very convincing to me. The code is publicly available on GitHub. I haven't had time to test it myself yet, but I will quite soon - and try to finally build an alternative voice assistant that runs entirely on my phone.
Open question, folks:
Digital Ocean is getting too expensive for hosting my servers. I'm pricing out options for #vps providers to move my stuff to.
I'm considering moving everything to OVH, after speccing out servers and computing monthly prices. However, before I start planning out the migration, I wanted to ask whether anybody out there has used their services. What do you think of them? Do they suck? If so, how much?
If you use their hosted database service, do you get a hosted database server (where you can stand up as many databases, i.e. groups of tables, and as many users as you want), or only a single hosted /database/ (one group of tables)?
Twitter quote on CS:
there is NO field that more grievously overestimates the layman's understanding of what they do than computer science. if you do too much programming you ascend/descend to an entirely different plane of existence where you think that regular human beings know what linux is
People want to run their own servers, and they do. But they won't, if the price of that is dealing with unix.
@meena you can use clone-indirect-buffer-other-window (C-x 4 c) to create an indirect buffer visiting the same file, and then run magit-blame in one of them
i don't think the reality has sunk in yet