The fact that we call machine learning technologies intelligence, no matter how artificial, is misleading at best. It is true that ML algorithms can perform tasks that have traditionally been very hard to implement on a computer. However, essentially all of these algorithms are deterministic and very linear in nature: the same input will get you the same result, with at most minor noise if some randomness is used. The most popular algorithms have no memory and contain no loops that would allow complex, iterative data processing. It is debatable whether memory and loops are strict requirements for intelligence, but they do seem to be part of the instances that have formed naturally. The simple and straightforward nature of training also does not lend itself to the variety needed to comprehend ideas outside of the one mathematical transform being performed.
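To make this concrete, here is a minimal sketch, assuming scikit-learn; the model and data are invented for illustration. All the randomness lives in training, and once training has finished the model is a fixed function, so evaluating the same input twice gives exactly the same output.

```python
# A trained model is just a fixed function: same input, same output.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy labels

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)                            # any randomness ends here

x = X[:1]
assert np.array_equal(model.predict_proba(x), model.predict_proba(x))
```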
Understanding implies the ability to adaptively use given information to solve similar but new challenges. ML systems do not learn new tricks, and you cannot ask them about the nature of their work. In fact, working out what the algorithms actually do has become almost a field of science in its own right. This is because most algorithms are trained on messy data, and we cannot be sure of the true nature of the function they develop internally, since that function is by its nature an abstraction of an unknown transformation. We can say that 99% of the time it seems to do what was intended, but as work on adversarial examples shows, we can make an algorithm confuse inputs that are still trivial for us, as naturally intelligent beings, to decipher. Even if you wanted to argue that these functions could be self-aware, we do not train them with enough data for them to have any point of view on the kinds of answers we wish to get from them.
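As a hedged illustration of that fragility, here is a minimal numpy sketch of a fast-gradient-sign style perturbation against a toy logistic model; the weights and input are invented for illustration, not taken from any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0, 1.0])   # weights of a "trained" logistic model
x = np.array([0.2, 0.1, 0.3])    # original input
print(sigmoid(w @ x))            # ~0.60 -> classified as class 1

# For this model the gradient of the class-1 score w.r.t. x points along w,
# so nudging each feature by a small eps against sign(w) lowers the score.
eps = 0.2
x_adv = x - eps * np.sign(w)
print(sigmoid(w @ x_adv))        # ~0.31 -> class 0, from a barely-changed input
```

The perturbation is obvious here only because the input is three numbers; with image models the same trick changes pixels by amounts a human barely notices, while the prediction flips entirely.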
Learning
It is often said that algorithms learn something. What usually happens, though, is that they deterministically conform their internal structure to perform a given transform between two mathematical spaces. "Learning" is simply easier to say, but none of the implied understanding is involved. Learning implies understanding, and as stated above, these systems have no real capability for it. Another point is that the algorithms are essentially static: they do not develop on their own or retain new information. The training sequence is entirely separate from the daily use of the algorithms. Normally one goes through the following steps to create an ML algorithm.
- Initialise
- Train
- Test
- Use
In the initialisation we choose the framework, loss function and topology for our algorithm, predetermining many of its possible attributes.
During training we run data through the algorithm and tweak its properties to make it better at transforming the input into the correct output. For most systems, clear examples of both input and output are needed in some form.
During testing we use unseen examples to check whether the algorithm actually works on real data, and not just on the training set it knows so well. After this point the algorithm retains its exact properties.
Then, if the accuracy is good enough for the job, it is set and ready for use. Use does not modify the algorithm further, because without retesting we could not guarantee its accuracy. The whole sequence can be sketched as follows.
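A minimal sketch of the initialise/train/test/use sequence, assuming scikit-learn; the dataset and model choice are illustrative only.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Initialise: pick the model family and its hyperparameters up front.
model = LogisticRegression(max_iter=1000)

# Train: fit the internal parameters to the labelled examples.
model.fit(X_train, y_train)

# Test: check accuracy on data the model has never seen.
print("test accuracy:", model.score(X_test, y_test))

# Use: the model is now frozen; making predictions does not change it.
print(model.predict(X_test[:3]))
```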
But I've heard of systems that learn on their own...
Yes you have, but have you ever implemented one? Continuously learning systems do not integrate learning into use; instead they record the data given to them, creating an ever-growing library of training data. A separate training function is then run periodically on the new data to increase the accuracy of the algorithm. In most cases, the more data you have, the better. Sometimes the purpose of the algorithm also changes along the way, which requires retraining in the same manner. In some cases this can easily be automated to create what externally looks like a learning system, but is in fact a string of unintelligent deterministic functions repeating their given functions ad nauseam. A rough sketch of the pattern follows.
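This is a hedged sketch only, again assuming scikit-learn; the names `record` and `retrain` are invented for illustration. The point is the separation: use only appends to a log, and a distinct offline step does the "learning".

```python
from sklearn.linear_model import LogisticRegression

data_log = []    # the ever-growing library of (input, label) examples
model = None

def record(x, y):
    # During use: new observations are only stored, never learned from.
    data_log.append((x, y))

def retrain():
    # Run periodically (e.g. nightly): fit a fresh model on everything so far.
    global model
    X = [x for x, _ in data_log]
    labels = [y for _, y in data_log]
    model = LogisticRegression(max_iter=1000).fit(X, labels)

record([0.0, 1.0], 0)
record([1.0, 0.0], 1)
retrain()                           # the "learning" happens here, offline
print(model.predict([[0.9, 0.1]]))
```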
There is an increasing number of tricks to make machine learning easier and to remove manual steps from the development of new algorithms. Sadly, as long as the base algorithms are not drastically developed further, the systems remain but a patchwork of clever tricks with no control over what they process or how.
Sources
- https://www.wired.com/story/algorithm-predicts-criminality-based-face-sparks-furor/
- https://www.youtube.com/watch?v=7qPDwsCLbZc
- https://designyoutrust.com/2020/06/face-depixelizer-neural-network-brings-back-the-sharpness-of-photos-in-low-resolution-and-the-results-are-very-unexpected/
- https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28
- https://www.theverge.com/2020/6/24/21301759/facial-recognition-detroit-police-wrongful-arrest-robert-williams-artificial-intelligence
- https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16
- https://spectrum.ieee.org/tech-talk/computing/software/making-blurry-faces-photorealistic-goes-only-so-far
- https://twitter.com/moyix/status/1367575109305794563