I read weird things online, almost every day.
Today, I read an article about using a commercial relational database to process large (very large) linear algebra problems. These types of linear algebra problems, often with matrices of thousands of rows and columns, are typically found in running neural-net simulations such as are used in contemporary machine-learning algorithms (the type of tools behind the magic of e.g. Google Translate).
The article can be found here. I suppose the reason I read it at all is because I used to work with relational databases, and I have a vague but slightly comprehensible memory of the principles of linear algebra, it being one of the few advanced math topics I actually mastered before my college math-major career crashed and burned in 1984. I don’t claim any deep understanding, but I liked the idea of hacking a relational database to do this other type of work – it definitely feels like a kind of “hack” – but a useful one that could end up making large neural-net algorithms more manageable, which opens the way for new, more complex machine-learning applications. Useful hacks often become state-of-the-art for the following generation of programmers, and get grandfathered into important processes, languages and applications. The whole thing just sort of hovers there on the edge of understanding, which seems to be where I generally situate my technical reading, these days.
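The article doesn’t hand me its schema, but the underlying trick, as I understand it, is old and elegant: store each matrix as (row, column, value) triples, and matrix multiplication becomes a join plus a GROUP BY. Here’s a minimal sketch using Python’s built-in sqlite3 module – the table and column names are my own invention, not anything from the article.

```python
import sqlite3

# Sketch: matrices stored sparsely as (i, j, v) triples.
# The product C = A x B is then a join on A's columns against
# B's rows, with SUM doing the dot products.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE A (i INTEGER, j INTEGER, v REAL)")
cur.execute("CREATE TABLE B (i INTEGER, j INTEGER, v REAL)")

# A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]], one row per entry
cur.executemany("INSERT INTO A VALUES (?, ?, ?)",
                [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)])
cur.executemany("INSERT INTO B VALUES (?, ?, ?)",
                [(0, 0, 5), (0, 1, 6), (1, 0, 7), (1, 1, 8)])

# C[i][k] = sum over j of A[i][j] * B[j][k]
rows = cur.execute("""
    SELECT A.i, B.j, SUM(A.v * B.v)
    FROM A JOIN B ON A.j = B.i
    GROUP BY A.i, B.j
    ORDER BY A.i, B.j
""").fetchall()
print(rows)  # [(0, 0, 19.0), (0, 1, 22.0), (1, 0, 43.0), (1, 1, 50.0)]
```

The appeal for huge problems is that the database’s query planner, indexes, and disk management handle matrices too big for memory – you get sparse storage and out-of-core computation essentially for free.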
Meanwhile, I saw no notable tree today.