I think there is some misunderstanding about how neural networks (NNs) work.
There is currently no consumer-level NN AI that 'learns' on its own that I'm aware of, so the answer to this question is no.
One of the focuses of artificial general intelligence (AGI) research is having the NN 'reinforce' itself on the fly (i.e. learn). To understand what that means, you need to understand that there are two relevant parts to an NN AI: the 'brain' itself, and the training. That distinction is also relevant to the question of the specialised 'silicon' available for AI.
An NN brain consists of a number of objects modelled on the human neurone. Each one has a number of inputs that are combined mathematically, each with a 'weight' factor. If the resulting number is above a certain value, the neurone 'fires' its output. This is roughly like the electrical inputs to a biological neurone being resisted by the fatty layer across the nerve.
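As a rough illustration of that description, here's a minimal sketch of a single artificial neurone in Python. The inputs, weights and threshold are made-up numbers purely for the example, not taken from any real model.

```python
# Minimal sketch of one artificial neurone: weighted sum of inputs,
# then 'fire' (output 1) only if the total clears a threshold.
# All numbers here are arbitrary, purely for illustration.

def neurone(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Example: three inputs, each with its own weight
print(neurone([0.5, 0.2, 0.9], [0.8, -0.4, 0.6], threshold=0.7))  # -> 1
```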
Training consists of initially assigning random values to the weights across all such neurones in an AI brain, then 'training' a large number of these randomly created brains, picking the most successful combinations of weights, 'breeding' those successful brains by modifying their weights in relation to each other, and then repeating the training. Essentially, evolving the brain.
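To make that concrete, here is a very rough sketch of the 'evolve the weights' loop described above. The fitness function and every number in it are invented for the example; real training setups are vastly larger and more sophisticated.

```python
import random

def fitness(weights):
    # Stand-in for 'training and testing' a brain: here we just reward
    # weights that end up close to an arbitrary target pattern.
    target = [0.3, -0.7, 0.5]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

# 1. Start with many brains whose weights are random
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]

for generation in range(100):
    # 2. Test every brain and keep the most successful ones
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # 3. 'Breed' new brains by mixing and slightly mutating survivors' weights
    children = []
    while len(children) < 40:
        a, b = random.sample(survivors, 2)
        children.append([(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b)])
    population = survivors + children

print(fitness(population[0]))  # best brain's score, should approach 0
```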
In life, a brain alters itself by modifying the amount of fat on each nerve in response to its own success. The beauty of NNs in technology is that they are extremely lightweight relative to the complexity of what they do; I have a friend who routinely messes around with a brain of around 48K cells on his M2 MacBook. But training involves creating many, MANY of these brains, simulating interactions, and testing the success of each one. It's very intensive, but once you have it, the NN is very valuable.
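As a back-of-envelope illustration of why running one brain is cheap while training is expensive: the connection count per cell and the population/episode figures below are pure guesses for the sake of the arithmetic, not details of anyone's actual setup.

```python
# Rough, made-up arithmetic: memory for one brain vs work to train many.
cells = 48_000               # brain size mentioned above
inputs_per_cell = 100        # guess at average connections per cell
bytes_per_weight = 4         # 32-bit float

weights = cells * inputs_per_cell
memory_mb = weights * bytes_per_weight / 1e6
print(f"One brain: {weights:,} weights, ~{memory_mb:.0f} MB")  # ~19 MB

# Training: evaluate a whole population across many simulated runs
population = 1_000
episodes = 10_000
print(f"Training work: ~{population * episodes:,} full brain evaluations")
```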
Current research is looking at ways to let NNs efficiently evaluate their own weights based on stimuli and 'reinforce' themselves (like behavioural reinforcement), but doing so makes the brain much less efficient.
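One simple way to picture 'reinforcing on the fly' is nudging every weight after each stimulus in proportion to a reward signal. This is only a sketch of the general idea (all names and numbers are invented), not a description of any particular research method, and the extra bookkeeping on every single input is part of where the efficiency cost comes from.

```python
import random

def online_reinforce(weights, stimulus, reward, learning_rate=0.01):
    # After each stimulus, nudge every weight up or down depending on whether
    # the outcome was rewarded -- extra work done on every single input.
    return [w + learning_rate * reward * s for w, s in zip(weights, stimulus)]

weights = [random.uniform(-1, 1) for _ in range(3)]
weights = online_reinforce(weights, stimulus=[0.2, 0.9, -0.4], reward=1.0)
print(weights)
```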
The use of dedicated hardware for AI is not a prerequisite to using an AI, so the comment about 'Windows' meaning that the AI has nothing to do with Apple Silicon is not true. Any AI built using current code libraries that are designed to work with dedicated AI hardware will see a performance improvement from it. This is much the same way that 3D graphics can be done with just a CPU, but dedicated 3D hardware provides huge performance boosts by implementing graphics-specific functions at the hardware level.
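For example, with a common library like PyTorch (just one possibility, not necessarily what any particular app uses), the same model code runs on a plain CPU or, when Apple's Metal-backed 'mps' device is available, on the dedicated hardware, and speeds up accordingly:

```python
import torch

# Use Apple's Metal-backed GPU device if this machine and build support it,
# otherwise fall back to the plain CPU -- the model works either way,
# it's just faster on the dedicated hardware.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

model = torch.nn.Linear(128, 64).to(device)      # a tiny stand-in model
x = torch.randn(1, 128, device=device)
print(device, model(x).shape)
```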
I hope this helps people understand what's going on. I have opinions about how Algoriddim could continue to improve with this technology, but I've written enough, and I'm sure the Devs are talking about it all day every day.