What does it sound like when a machine learns? There are many ways you could imagine exploring this synesthetic experiment, each resulting in a different sonification of the learning process.
While thinking about it, it struck me that learning is quite similar to tuning a guitar. When you pluck a string, you compare the pitch it produces against the pitch you want it to make (whether with a tuner, a piano, or your ear). If the string's pitch is too high, you lower it; if it's too low, you raise it. You continue this way until it matches the target pitch, and then repeat for the other strings.
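That feedback loop can be sketched in a few lines. This is just an illustration of the idea above, not code from the demo; `targetHz`, `step`, and the function name are all made up here:

```javascript
// Minimal sketch of the tuning loop: nudge the pitch toward the
// target, one small step at a time, based only on the sign of the error.
// All names and values here are illustrative.
function tune(pitchHz, targetHz, step = 1, maxIters = 10000) {
  for (let i = 0; i < maxIters && Math.abs(pitchHz - targetHz) > step; i++) {
    // Too high? Lower it. Too low? Raise it.
    pitchHz += pitchHz > targetHz ? -step : step;
  }
  return pitchHz;
}
```

Replace "pitch" with "model parameter" and "your ear" with "a loss function", and you have the skeleton of an optimization loop.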
This is essentially curve fitting (which is what many machine learning algorithms are doing anyway, just in high dimensions). So it seemed like a nice analogy to explore The Sound Of Learning.
I shamelessly copied the curve-plotting (and CSS settings) of @notwaldorf's really nice intro page to TensorFlow.js (if you're new to Machine Learning you should definitely give it a read) and added a little bit of code to explore this idea.
Here's what's happening: at each iteration, as the model tries to fit the green curve onto the blue one, it gets a loss (essentially a number indicating how far the fit is from the true curve). At each iteration I pick a string to tune at random and detune it by an amount given by the loss.
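The detune step might look something like this. The open-string frequencies are standard guitar tuning (E2 through E4); the `scale` constant and function name are my own illustrative choices, not taken from the demo's code:

```javascript
// Standard-tuning open-string frequencies in Hz: E2 A2 D3 G3 B3 E4.
const strings = [82.41, 110.0, 146.83, 196.0, 246.94, 329.63];

// Sketch of one detune step: pick one string at random and shift its
// frequency by an amount proportional to the current loss.
// `scale` is an illustrative constant; `rand` is injectable for testing.
function detuneStep(freqs, loss, scale = 10, rand = Math.random) {
  const i = Math.floor(rand() * freqs.length); // which string to detune
  const out = freqs.slice();
  out[i] += loss * scale; // large loss -> badly out of tune
  return out;             // as loss -> 0, the strings stay in tune
}
```

With a shrinking loss, each detune nudge gets smaller, so the dissonance fades along with the gap between the curves.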
Once the loss is close to zero, the guitar strings will be in tune, and you should hear a nice E chord! It's quite satisfying to see both the curves match, and dissonance give way to consonance. 😅
Enjoy!