Why Artificial Intelligence is Still Not That Intelligent

new.blicio.us · Apr 16, 2020 · 4 mins read

The optimistic futurists of the world would love for AI to replace human intelligence so we can live life on autopilot. This is why you may see headlines like “Are Computers Already Smarter Than Humans?” when you visit your favorite tech news site. In reality, computers are still as dumb as they were decades ago, and artificial intelligence programming libraries are still in their infancy.

Artificial intelligence has long been built into email spam filters, CAPTCHA codes, malware detection, and even censorship systems. If you check most forums, comment sections, or your own email inbox, you can see that quite a bit of spam still fools these AI mechanisms. By learning patterns, AI keeps improving, but it is still decades away from replacing the human brain.

Not Even Apple’s Face Recognition Can Get It Together

Embarrassingly, Apple made headlines on multiple occasions when its Face ID mechanism couldn’t tell two Asian women apart. There have been many similar reports, mostly involving Asian users, and these failures rendered the experimental phone-unlocking technology useless for those affected.

What is Behind Artificial Intelligence?

Adversarial machine learning is a phrase mostly heard inside big tech companies that are developing AI solutions. It is a field of computer security that benchmarks machine learning algorithms by exploiting their vulnerabilities. This can be done through various creative means in order to bypass AI networks that were designed to classify objects in otherwise stable environments.

For example, obfuscating words can bypass spam filters, and spoofing network packets can hide malware code from smart firewalls. The spam case is illustrated in the toy sketch below.
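As a toy illustration (the filter, word list, and messages here are hypothetical, not any production system), the sketch below shows how trivial character substitutions defeat a naive keyword-based spam filter:

```python
# Toy keyword-based spam filter (hypothetical; real filters are far
# more sophisticated, but the evasion principle is the same).
BANNED_WORDS = {"viagra", "lottery", "winner"}

def is_spam(message: str) -> bool:
    """Flag a message if any banned keyword appears verbatim."""
    return any(word in BANNED_WORDS for word in message.lower().split())

print(is_spam("claim your lottery prize"))    # True  -- caught
print(is_spam("claim your l0ttery pr1ze"))    # False -- obfuscation wins
```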

Here are some common methods and concepts in machine learning:

• Reinforcement Learning - A generalized, reward-based type of learning. Behaviors that produce the expected results are rewarded in some way (like points) to encourage repeated “good behavior.”
• Neural Network - A network of interconnected nodes that loosely emulates the human brain. The influential 2014 paper Intriguing Properties of Neural Networks drew widespread attention to how easily such networks can be fooled.
• Stochastic Gradient Descent via Back-propagation - Errors are calculated and distributed backward through the network layers to make learning more efficient, which is why it is also known as the backward propagation of errors.
• Fast Gradient Sign Method - A common algorithm that applies subtle perturbations to images in order to test image-recognition networks (see the sketch after this list).
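To make the Fast Gradient Sign Method concrete: it perturbs an input x into x + eps * sign(∇x J(θ, x, y)), where J is the training loss. Below is a minimal sketch in TensorFlow, assuming a trained Keras classifier `model` that outputs class probabilities and a batched image tensor scaled to [0, 1]; the epsilon value is purely illustrative.

```python
import tensorflow as tf

def fgsm_perturb(model, images, labels, eps=0.01):
    """Fast Gradient Sign Method: nudge every pixel by +/- eps in the
    direction that most increases the classifier's loss."""
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        predictions = model(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(labels, predictions)
    gradients = tape.gradient(loss, images)
    adversarial = images + eps * tf.sign(gradients)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # stay in valid pixel range
```

To a human eye, the perturbed image looks identical to the original, yet the classifier’s prediction can flip entirely.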

Adversarial Examples Gum Up the Works

A common attack is the “black box” attack, in which a locally trained simulation of the target machine learning system is used to fool the real network. Ideally, this is done against networks built on open-source machine learning frameworks, such as Apache Mahout, Google’s TensorFlow, or Amazon Machine Learning (AML).

Using advanced adversarial example libraries to craft such attacks can fool many of the top machine learning platforms more than 80% of the time. Even platforms like MetaMind, Amazon Web Services, and Google Cloud Platform are affected.
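The gist of the substitute-model idea is sketched below. Everything here is hypothetical: `query_victim` stands in for the remote platform’s prediction API, and `build_local_model` and its `input_gradients` helper stand in for whatever local training setup the attacker uses.

```python
import numpy as np

def black_box_attack(query_victim, build_local_model, probe_inputs, eps=0.05):
    """Sketch of a black-box transfer attack against a remote classifier."""
    # 1. Label a probe set using only the victim's observable outputs --
    #    the sole access a black-box attacker has.
    stolen_labels = query_victim(probe_inputs)
    # 2. Train a local substitute that mimics the victim's decisions.
    substitute = build_local_model(probe_inputs, stolen_labels)
    # 3. Craft adversarial examples against the substitute (FGSM-style);
    #    such examples tend to transfer back to the real victim model.
    gradients = substitute.input_gradients(probe_inputs, stolen_labels)
    return probe_inputs + eps * np.sign(gradients)
```

The attack works because adversarial examples transfer surprisingly well between models trained on similar data, even when their architectures differ.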

Here are some libraries for adversarial example generation:

• Cleverhans - A Python library that uses TensorFlow to accelerate its graph computations.
• DeepFool - A Python library that finds minimal adversarial perturbations for deep learning networks.
• Deep-Pwning - A steadily expanding framework for evaluating the robustness of deep learning networks; it has been described as Metasploit for machine learning.
• FoolBox - A library that builds on NumPy and SciPy to fool neural networks.
• Evolving AI Lab - Code from this research lab generates images that fool the pattern recognition of deep neural networks.
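For instance, generating FGSM examples with Cleverhans looks roughly like the sketch below. This assumes Cleverhans’s TensorFlow 2 interface and a trained Keras classifier `model`; exact import paths and signatures vary between releases, so treat this as illustrative rather than definitive.

```python
import numpy as np
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

# `model` (a trained tf.keras classifier returning logits) and `images`
# (a float32 batch scaled to [0, 1]) are assumed to exist already.
adv_images = fast_gradient_method(
    model,        # callable mapping inputs to logits
    images,
    eps=0.03,     # illustrative perturbation budget
    norm=np.inf,  # bound the perturbation in the L-infinity norm
    clip_min=0.0,
    clip_max=1.0,
)
```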

The same methodology can be applied to image-detection software (like facial recognition) using adversarial steganography. Such libraries have already been developed and tested, such as the “adversarial-steganography” library from GitHub user Dvolkhonskiy, which is designed to fool steganographic image-analysis applications.

This can force image recognition software to accept a photo even if the desired object (like a face) isn’t present at all. This is how hackers can easily bypass face-unlock technology or image spam filters.

Top Tech Companies Have Faith, and They Are Hoarding Data

Artificial intelligence requires a constant stream of data to learn and recognize patterns. Modifications to the code are, of course, regularly required from the programming team, but variety in the data is the most important ingredient for learning.

This is why top tech companies are holding onto user data for future AI implementations, which will likely be used for advertising technology. For example, Facebook has over 1.3 billion users in its pool to use as guinea pigs for AI learning. Google also dominates most of the world’s search traffic and owns YouTube and over 1 billion Gmail accounts. If breakthroughs in artificial intelligence happen, they will likely come from these top companies.

AI is not ready to run civilization on autopilot. As the examples above show, there are still effective attacks against the best AI algorithms on the market, and they have barely scratched the surface of potential exploits. Of course, as adversarial example libraries continue to probe the deep learning networks of large companies, they also push development to improve rapidly.

Written by new.blicio.us