
Machines Are Closer To Copying The Way Humans Learn

The gold standard of artificial intelligence is a computer that can learn the way humans do. For example, if you see just one toothbrush and know its use, it's easy to identify other toothbrushes: if an object is long and thin, with small bristles and a handle, it's probably a toothbrush. And since we know a toothbrush has to fit in a mouth, we can imagine which objects would make good tools for the job and which would not, further limiting what a toothbrush can be.
Getting machines to learn this way has been a struggle, because complex objects, like toothbrushes, have to be described in mathematical formulas before a computer can work with them. Much of the work in machine learning, the field through which we tackle artificial intelligence, centers on how best to represent objects and ideas so that computers can understand them.
Computers can already do something like this with deep learning, a discipline within artificial intelligence that uses networks of mathematical equations to find ideas within data. But whereas deep learning techniques can require the machine to analyze dozens to millions of examples, the new method claims to work from a single example of an idea.
This means that one day we could have true facial recognition at any angle from just one good image of a person.
The results claimed for this method are impressive. To test how well the algorithm learned, the researchers pitted it against humans: both the humans and the machine were given a new character and had to reproduce it.

New research published in Science claims to have come closer to the human way of learning. The researchers' idea: build a tiny computer program for each learned concept. Each little program describes a concept the system has already seen and can generate different ways of arriving at the same end product.
The researchers showed their algorithm examples of handwritten letters from several ancient alphabets, along with how they were written, and the algorithm stored those writing processes as computer programs describing how each letter is constructed.
The researchers call this Bayesian Program Learning. By showing the algorithm how a character is constructed, they teach it the different parts of each letter; later, it can recombine those parts in different ways to classify or create new characters, much as humans do.
The best way to explain this is through an example. Right now the method only works for very simple symbols, such as handwritten letters of an alphabet.
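To make the "concept as a tiny program" idea concrete, here is a toy sketch in Python. This is not the authors' actual Bayesian Program Learning system (which is probabilistic and far richer); it is a minimal illustration under invented assumptions: a "character" is a small program listing stroke primitives, generating an example replays the program with jitter, and classifying a new drawing means finding the stored program that best explains it. The stroke names, the two-letter alphabet, and all function names here are hypothetical.

```python
import random

# Stroke primitives: each is a short list of (x, y) points.
STROKES = {
    "down":  [(0, 0), (0, 1), (0, 2)],
    "right": [(0, 0), (1, 0), (2, 0)],
}

# Hypothetical two-character "alphabet": each character is a tiny
# program built by composing stroke primitives in order.
PROGRAMS = {
    "L": ["down", "right"],
    "T": ["right", "down"],
}

def render(program, jitter=0.0, rng=None):
    """Replay a program's strokes, optionally perturbing each point
    to imitate the variability of handwriting."""
    rng = rng or random.Random(0)
    points = []
    for stroke in program:
        for (x, y) in STROKES[stroke]:
            points.append((x + rng.uniform(-jitter, jitter),
                           y + rng.uniform(-jitter, jitter)))
    return points

def classify(observed):
    """Return the known character whose clean rendering is closest
    (in summed squared distance) to the observed points."""
    def distance(a, b):
        return sum((ax - bx) ** 2 + (ay - by) ** 2
                   for (ax, ay), (bx, by) in zip(a, b))
    return min(PROGRAMS, key=lambda c: distance(render(PROGRAMS[c]), observed))

# Generate a noisy "handwritten" L from its one stored program,
# then recognize it by asking which program explains it best.
example = render(PROGRAMS["L"], jitter=0.1, rng=random.Random(42))
print(classify(example))  # L
```

The point of the sketch is the one-example economy described above: once a single program for "L" is stored, the system can both produce new variants of "L" (by replaying with jitter) and recognize fresh examples (by asking which program explains them), without needing thousands of training images.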
