How to Simulate Human Intuition


A few months ago, I started to study deep learning, a branch of computer science, heavily inspired by biology, that allows programs to learn arbitrary concepts without explicit programming.

How good is deep learning? Well, good enough to play Go with super-human performance, a significant milestone in artificial intelligence research, since the techniques used 20 years earlier to beat the Chess world champion Garry Kasparov were not good enough for Go, despite both being turn-based strategy games with complete information. (For reference, there are fewer than 5 × 10^46 legal Chess board positions, while there are more than 2 × 10^170 legal Go board positions.)

To my surprise, I found that the basic concepts of deep learning were relatively simple. Even more surprisingly, I felt that this young science seemed to contain some major flaws that could be improved upon. Considering this, I decided to fully understand these concepts, and to try to reinvent this science in my own way to see if my approaches would be better.

For my first post on this topic, I will explain what deep learning is, how it works and why it is successful in developing powerful forms of artificial intelligence for specific tasks that weren't possible until recently.

Oh, and don't expect a popularized treatment of the concept with some pretty metaphors. We're about to go into the nitty-gritty details here, with all the juicy mathematics that comes with it. Expect some linear algebra, calculus and a little bit of statistics.

The general problem

Let's say we have a function that takes as input a vector of real values of a predefined length, and returns as output a vector of real values also with a predefined length.

$$y = f(x)$$

However, the definition of this function is unknown. The only information that we have about it is observed data in the form of a list of examples of inputs and their corresponding outputs, with the possible additional restriction that we may not be able to remember the whole list due to limited memory.

The objective is to come up with a new function that attempts to simulate the unknown function as closely as possible, and possibly to improve its definition as new examples become known.

$$s(x) \approx f(x)$$

The main issues with tackling this problem are finding a function model flexible enough to stand in for the unknown function, measuring and minimizing the simulated function's errors against the observed examples, and doing all of this within the memory restriction. The following sections address these in turn.

Simulated function model

Instead of using a complicated function model as the base for our simulated function, deep learning uses a simple function model, then combines many instances of it to achieve a highly complex result.

Let's start by having our simulated function as a series of simpler, embedded vector functions of real numbers.

$$s(x) = (d_l \circ d_{l-1} \circ \cdots \circ d_2 \circ d_1)(x)$$

Note that the number of embedded functions, and the number of real variables in each vector connecting embedded functions, are completely arbitrary. Depending on the type of problem, we can simply add or remove them to change the efficiency and/or accuracy of the simulated function. For example, we may want to simulate an identity function and use an intermediate vector of a smaller length than the input/output to force data compression and find correlations in the input.
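To make the composition concrete, here is a minimal Python sketch (the `compose` helper is my own naming, not part of the model itself) that chains embedded functions so that d_1 is applied first, matching the formula above:

```python
def compose(*functions):
    """Return s(x) = (d_l ∘ ... ∘ d_2 ∘ d_1)(x) given d_1, ..., d_l in order."""
    def s(x):
        # Apply d_1 first, then d_2, and so on up to d_l.
        for d in functions:
            x = d(x)
        return x
    return s

# Example with three arbitrary embedded functions chained together.
s = compose(lambda x: x * 2, lambda x: x + 1, lambda x: x ** 2)
print(s(3))  # ((3 * 2) + 1) ** 2 = 49
```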

The issue now is to define a function model for our embedded functions that is configurable and powerful enough to scale in this way.

$$d_i(x) = ?$$

In terms of configuration, an easy solution would be to multiply the input vector by a matrix of configurable real numbers, called weights, with the dimensions of this matrix matching the desired input and output lengths.

$$\mathbf{W}_i = \begin{bmatrix} w_{i11} & w_{i12} & \cdots \\ w_{i21} & w_{i22} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \qquad d_i(x) = \mathbf{W}_i \, x^{\mathsf{T}}$$

To ensure that the product does not necessarily equal a zero vector when the input is a zero vector, we can also append an extra element to the input vector that is always equal to 1, called the bias, and expand the weights matrix accordingly.

$$d_i(x) = \mathbf{W}_i \begin{bmatrix} x^{\mathsf{T}} \\ 1 \end{bmatrix}$$

However, this is still not sufficient. The problem is that we cannot do better than a linear combination, as the whole stack of embedded functions could then be reduced to a single one, which is not powerful enough to simulate arbitrary functions. Fortunately, a way to fix this is to apply an additional activation function immediately before returning the output, introducing non-linearity.

$$d_i(x) = A\!\left( \mathbf{W}_i \begin{bmatrix} x^{\mathsf{T}} \\ 1 \end{bmatrix} \right) \qquad A(x) = \begin{bmatrix} a(x_1) \\ a(x_2) \\ a(x_3) \\ \vdots \end{bmatrix}$$

This is where the theory gets fuzzy. The Wikipedia page on activation functions alone lists more than 20 possibilities previously considered in scientific research.

$$a(x) = ?$$

Interestingly, as of this writing, one of the most popular activation functions appears to be the rectified linear unit (ReLU), which returns its input if it is positive and 0 otherwise. Despite forcing its outputs to be non-negative, and despite weight improvements stalling whenever it operates in its constant range (where its gradient is 0), this simple activation function breaks linearity, is quick to compute and achieves very good experimental results.

$$a(x) = \operatorname{ReLU}(x) = \max(0, x)$$
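Putting the pieces together, a single embedded function can be sketched in a few lines of NumPy. This is only an illustration of the model described above, with an arbitrary input/output size and random initial weights:

```python
import numpy as np

def relu(x):
    # a(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

def embedded_function(W, x):
    """d_i(x) = A(W_i [x; 1]): an affine map through an augmented weights
    matrix, followed by the activation function."""
    x_with_bias = np.append(x, 1.0)  # force an extra element equal to 1 (the bias)
    return relu(W @ x_with_bias)

# Example: map a 3-element input to a 2-element output.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 4))  # 4 columns = 3 inputs + 1 bias
print(embedded_function(W, np.array([0.5, -1.0, 2.0])))
```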

Minimizing errors

In order to optimize our simulated function, we need a way to measure its errors against the results of the original function so that we can attempt to minimize them. To do so, we will define an error function that takes as inputs the outputs of the original mystery function and of the corresponding simulated function.

$$e(f, s) = ?$$

Defining such an error function is surprisingly non-trivial. For now, I would like to at least mention that error functions are based on the distance between the real and simulated output vectors.

A common error function is the mean squared error; the variant used here takes the sum of the squares of those distances divided by 2. The division by 2 is optional, but simplifies calculations later on.

$$e(f, s) = \sum_{i=1}^{n} \frac{\left| s(x_i) - f(x_i) \right|^2}{2}$$
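As a sketch, this error function could be computed as follows, where `examples` is assumed to be a list of (input, observed output) pairs and `s` is the simulated function:

```python
import numpy as np

def error(s, examples):
    """Sum of the squared distances between simulated and observed outputs,
    each divided by 2, as in the formula above."""
    return sum(
        np.sum((s(x) - y) ** 2) / 2.0
        for x, y in examples
    )
```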

In any case, after settling on an error function, we can start thinking about how to optimize it. Ideally it would be equal to 0, but this is not possible in the general case, so what we are going to attempt instead is to get as close to 0 as possible. Assuming no structural changes are made to our final function model, we can attempt this by changing its weights.

Fortunately, calculus gives us a way to simplify the work. The local minima of the error function can only be found where its gradient with respect to the configurable variables is undefined or equal to 0. Moreover, the gradient points in the direction of steepest ascent, so by following the function's surface in the opposite direction, one can eventually find a local minimum.

This allows for a technique called gradient descent, where the value of the error function is minimized by updating the weights in the direction opposite to the gradient, scaled by a predefined learning factor, which may be a constant or a function of the number of observations. A higher learning factor means fewer examples are required, but the resulting simulated function has a higher risk of being unstable and less accurate. As for the initialization of the weights prior to the first gradient descent step, it is traditionally random, although close to 0.

$$\nabla e(f, s) = \begin{bmatrix} \frac{\partial e(f, s)}{\partial w_1} \\ \frac{\partial e(f, s)}{\partial w_2} \\ \vdots \end{bmatrix} \qquad w' = w - l \, \nabla e(f, s)$$
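To illustrate the update rule on something trivially small, here is a one-dimensional sketch minimizing e(w) = (w - 3)^2, whose gradient is 2(w - 3); the learning factor of 0.1 is an arbitrary choice:

```python
# Minimal illustration of the update rule w' = w - l * ∇e(w).
w = 0.0               # initialization close to 0
learning_factor = 0.1
for _ in range(100):
    gradient = 2.0 * (w - 3.0)
    w = w - learning_factor * gradient
print(w)  # converges toward the minimum at w = 3
```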

One point to consider is that gradient descent does not require remembering all previously-observed data. Indeed, we can estimate the error function by considering only a subset of pairs of real and simulated outputs, which is good enough to apply this technique. I won't go into the details, but one common such approximation is called stochastic gradient descent, which picks random data points instead of using the entire set. This solves the limited-memory problem.
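A sketch of one such step might look like this; `gradient_fn` is a hypothetical helper standing in for whatever computes the gradient of the error over a batch, and the batch size of 32 is an arbitrary choice:

```python
import random

def stochastic_gradient_step(weights, examples, gradient_fn, learning_factor, batch_size=32):
    """One step of stochastic gradient descent: the gradient is estimated on a
    small random subset of the examples instead of the whole data set."""
    batch = random.sample(examples, min(batch_size, len(examples)))
    gradient = gradient_fn(weights, batch)       # hypothetical helper: gradient over the batch
    return weights - learning_factor * gradient  # assumes NumPy arrays
```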

Another point to consider is that when using gradient descent with the simulated function model described above, we do not need to optimize its error function directly. Instead, we can optimize the error function for each embedded function separately, which simplifies calculations. The idea is to first perform forward propagation, i.e. calculate the simulated output normally for a given example input, then perform backward propagation, which is done in the following manner (a code sketch follows the list below):

  1. Perform gradient descent for the topmost embedded function
  2. Calculate the input that should have been passed to the embedded function to achieve the expected output
  3. Perform gradient descent for the embedded function immediately underneath by using the result of the previous step as the expected output for it
  4. If the last tweaked embedded function is not the lowermost one, go to step 2
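The steps above describe propagating the desired inputs downward; the more common formulation propagates the gradient of the error instead, which realizes the same layer-by-layer idea. Here is a minimal NumPy sketch of both passes for the model described earlier, using ReLU activations and the squared-error function; all helper names are my own, not from any particular library:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(weights, x):
    """Forward propagation: compute the simulated output, keeping the
    per-layer augmented inputs and pre-activation values for the backward pass."""
    inputs, pre_activations = [], []
    for W in weights:
        x_b = np.append(x, 1.0)   # augmented input with the bias element
        z = W @ x_b               # affine map through the weights matrix
        inputs.append(x_b)
        pre_activations.append(z)
        x = relu(z)               # activation function
    return x, inputs, pre_activations

def backward(weights, inputs, pre_activations, output, target, learning_factor):
    """Backward propagation: from the topmost embedded function to the
    lowermost one, perform a gradient descent step on each weights matrix."""
    grad = output - target                            # gradient of |output - target|^2 / 2
    for i in reversed(range(len(weights))):
        grad = grad * (pre_activations[i] > 0)        # back through the ReLU
        weight_grad = np.outer(grad, inputs[i])       # gradient for this layer's weights
        grad = weights[i][:, :-1].T @ grad            # gradient passed to the layer below (bias column dropped)
        weights[i] -= learning_factor * weight_grad   # gradient descent step

# One training update on a tiny 3-input, 2-output network with one hidden layer.
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(2, 4)),  # 3 inputs + bias -> 2 outputs
           rng.normal(scale=0.1, size=(2, 3))]  # 2 inputs + bias -> 2 outputs
x, y = np.array([0.5, -1.0, 2.0]), np.array([1.0, 0.0])
output, inputs, pre_activations = forward(weights, x)
backward(weights, inputs, pre_activations, output, y, learning_factor=0.01)
```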

Note that this technique may not work properly if the weights are all equal, as they would remain equal after each update. One simple workaround for this is to initialize the weights with random values to break symmetry.

In addition, there is the possibility that the gradient suddenly becomes so large in a certain direction that backward propagation would cause the weights to diverge instead of converging. One way to mitigate this issue is to make sure the size of the gradient is capped before updating the weights, although that can also decrease learning efficiency if the cap is set too low.
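Capping the gradient's size, commonly known as gradient clipping, might look like the following sketch; the cap of 1.0 is an arbitrary value for illustration:

```python
import numpy as np

def clip_gradient(gradient, cap=1.0):
    """Rescale the gradient so its Euclidean norm never exceeds the cap."""
    norm = np.linalg.norm(gradient)
    if norm > cap:
        gradient = gradient * (cap / norm)
    return gradient
```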

Unfortunately, despite all of this, using gradient descent and backward propagation in this manner still has non-negligible drawbacks: for instance, the descent may settle into a poor local minimum instead of the global one, and convergence is quite sensitive to the choice of learning factor.

Where's the biology in all that?

I find that getting inspiration from biology for deep learning was a good idea, but that one should not be stuck following its model, as doing so prevents creative improvements from emerging, which is why I'm only talking about it at the very end.

From my understanding (I'm not a biology expert, so don't quote me on this), biological neural networks behave similarly to what I described above. Neurons are stacked in layers, with all neurons of a given layer connected to the neurons of adjacent layers. During forward propagation, each neuron fires electric pulses at a frequency ranging from nothing to some physical maximum, based on the electric pulses it receives as input. During backward propagation, the strengths of these connections are corrected to better match the real output.

This background may help you understand some of the terminology that you may encounter when researching deep learning. Here is a summary of such common terms versus what I used in this post:

- neural network (or model): the simulated function
- layer: an embedded function
- neuron (or unit): an element of a vector passed between embedded functions
- learning rate: the learning factor
- loss (or cost) function: the error function
- backpropagation: backward propagation

What next?

I implemented the above model in Python with PyTorch, which you can read about in more detail in my follow-up post Deep Learning in Python With PyTorch - Tutorial and Demo!

