
Random Forests Vs. Neural Networks: Which Is Better and When?

Random Forests and Neural Networks are two widely used machine-learning algorithms. What is the difference between the two approaches? When should one use a Neural Network rather than a Random Forest?


Which is better, Random Forest or Neural Network? This is a common question, with a very easy answer: it depends. I will try to show you when it is good to use a Random Forest and when to use a Neural Network.

First of all, Random Forest (RF) and Neural Network (NN) are different types of algorithms. An RF is an ensemble of decision trees. Each decision tree in the ensemble processes the sample and predicts the output label (in the case of classification). The decision trees in the ensemble are independent: each one can predict the final response on its own. An NN is a network of connected neurons. The neurons cannot operate in isolation; they are connected to one another. Usually they are grouped in layers; each layer processes the data and passes it forward to the next layer, and the last layer of neurons makes the decision.
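
To make the contrast concrete, here is a minimal sketch of the two algorithms side by side, assuming scikit-learn and a synthetic dataset (the model sizes and hyperparameters below are illustrative choices, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular data, just for illustration
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# RF: an ensemble of independent decision trees
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# NN: neurons grouped in layers, each layer feeding the next
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=42)
nn.fit(X_train, y_train)

print("RF accuracy:", rf.score(X_test, y_test))
print("NN accuracy:", nn.score(X_test, y_test))
```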

The RF can only work with tabular data. (What is tabular data? It is data in a table format.) On the other hand, an NN can work with many different data types:

  • Tabular data
  • Images [NNs became very popular after beating image-classification benchmarks; for more details, please read about Convolutional Neural Networks (CNNs)]
  • Audio data (also handled with CNNs)
  • Text data, which can be handled by an NN after preprocessing, for example with bag-of-words. In theory, an RF can work with such data as well, but in real-life applications the data becomes sparse after such preprocessing and the RF gets stuck (a bag-of-words sketch follows this list)
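
To see why text becomes sparse, here is a minimal bag-of-words sketch using scikit-learn's CountVectorizer (the toy corpus is made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Tiny toy corpus; real corpora have tens of thousands of unique words
corpus = [
    "random forests are ensembles of trees",
    "neural networks are layers of neurons",
    "trees and neurons learn from data",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)  # returns a sparse matrix

print(type(X))      # scipy sparse matrix
print(X.shape)      # (3 documents, vocabulary size)
print(X.toarray())  # mostly zeros -- sparsity grows with the vocabulary
```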

OK, so now you have some intuition that, when you deal with images, audio, or text data, you should select an NN.

What About Tabular Data?

In the case of tabular data, you should check both algorithms and select the better one. Simple. However, I would prefer the RF over the NN because it is easier to use. I'll show you why.

RF Vs. NN—Data Preprocessing

In theory, the RF should work with missing and categorical data. However, the Sklearn implementation doesn't handle this. To prepare data for an RF (in Python with the Sklearn package), you need to make sure that (a minimal sketch follows the list):

  • Your data contains no missing values
  • Categorical data has been converted into numerical
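
A minimal sketch of that preparation, assuming a pandas DataFrame with one numeric and one categorical column (the column names and toy values are invented for illustration):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy data with a missing value and a categorical column
df = pd.DataFrame({
    "age": [25, None, 40, 33],
    "city": ["Warsaw", "Berlin", "Warsaw", "Paris"],
})
y = [0, 1, 0, 1]

preprocess = ColumnTransformer([
    # Fill missing numeric values with the median
    ("num", SimpleImputer(strategy="median"), ["age"]),
    # Convert categorical data into numerical columns
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

model = Pipeline([("prep", preprocess), ("rf", RandomForestClassifier())])
model.fit(df, y)
```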

Data preprocessing for an NN likewise requires filling in missing values and converting categorical data into numerical. What is more, there is a need for feature scaling. If the features have very different ranges, there will be problems with model training: if you don't scale the features into the same range, features with larger values will be treated as more important during training, which is not desired. Moreover, the gradient values can explode and the neurons can saturate, which will make it impossible to train the NN. To conclude, for NN training you need to do the following preprocessing (a sketch follows the list):

  • Fill missing values
  • Convert categorical data into numerical
  • Scale features into the same (or at least similar) range
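
Extending the previous sketch with scaling (again, the column names, toy data, and hyperparameters are illustrative):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Same toy data as in the RF sketch above
df = pd.DataFrame({
    "age": [25, None, 40, 33],
    "city": ["Warsaw", "Berlin", "Warsaw", "Paris"],
})
y = [0, 1, 0, 1]

preprocess = ColumnTransformer([
    # Numeric columns: fill missing values, then scale to zero mean / unit variance
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), ["age"]),
    # Categorical columns: one-hot encoding already yields values in the 0/1 range
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

nn_model = Pipeline([("prep", preprocess), ("nn", MLPClassifier(max_iter=500))])
nn_model.fit(df, y)
```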

Keep in mind that all preprocessing used to prepare the training data must also be applied in production. For an NN you have more preprocessing steps, so there are more steps to implement in the production system as well.
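
Wrapping everything in a Pipeline, as in the sketches above, makes this easier: the fitted preprocessing and the model travel together as one object. A minimal sketch of one possible approach, using joblib (the file name is arbitrary, and `nn_model` is the fitted pipeline from the previous sketch):

```python
import joblib

# Persist the fitted pipeline: imputation values, scaling statistics,
# encoder categories, and model weights are all stored together.
joblib.dump(nn_model, "nn_pipeline.joblib")

# In production: load it and predict with exactly the same preprocessing.
loaded = joblib.load("nn_pipeline.joblib")
predictions = loaded.predict(df)
```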

RF Vs. NN—Model Training

The data is ready, so we can train the models.

For the RF, you set the number of trees in the ensemble (which is quite easy, because in general the more trees in an RF the better) and you can use the default hyperparameters; it should just work.
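
A sketch of how little tuning this takes, using cross-validation on the same synthetic data as in the first example (the tree counts are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Default hyperparameters, varying only the number of trees
for n_trees in (10, 100, 500):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=42)
    score = cross_val_score(rf, X, y, cv=5).mean()
    print(f"{n_trees} trees: {score:.3f}")
```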

You need some magic skills to train an NN well.

You need to define the NN architecture. How many layers to use? Usually, 2 or 3 hidden layers should be enough. How many neurons to use in each layer? What activation functions to use? What weight initialization to use?

With the architecture ready, you then need to choose a training algorithm. You can start with simple Stochastic Gradient Descent (SGD), but there are many others. Let's go with simple SGD: you need to set the learning rate, momentum, and decay. Not enough hyperparameters? You need to set a batch size as well (the batch size is the number of samples shown to the network for each weights update).
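
Here is what all of those decisions look like in code, sketched with scikit-learn's MLPClassifier (note that sklearn does not expose weight initialization as a parameter, and every value below is an arbitrary starting point, not a recommendation):

```python
from sklearn.neural_network import MLPClassifier

nn = MLPClassifier(
    hidden_layer_sizes=(64, 32),   # architecture: 2 hidden layers, 64 and 32 neurons
    activation="relu",             # activation function for the hidden layers
    solver="sgd",                  # plain Stochastic Gradient Descent
    learning_rate_init=0.01,       # learning rate
    momentum=0.9,                  # momentum
    learning_rate="invscaling",    # decay: sklearn's built-in learning-rate schedule
    batch_size=32,                 # samples per weights update
    max_iter=500,
)
```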

You know what is funny? Each of the NN hyperparameters mentioned above can be critical. For example, set the learning rate too high, or use too few neurons in the second hidden layer, and your NN training will get stuck in a local minimum.
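
A quick way to see this for yourself, on the same synthetic data as before (the two learning rates are deliberately extreme for the demonstration):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# A sensible learning rate converges; an extreme one diverges or gets stuck
for lr in (0.001, 10.0):
    nn = MLPClassifier(solver="sgd", learning_rate_init=lr,
                       max_iter=200, random_state=42)
    nn.fit(X, y)
    print(f"learning rate {lr}: final loss {nn.loss_:.4f}")
```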

Read the full story here.