32 Advantages and Disadvantages of Deep Learning

Alice Kinth
3 min read · Mar 2, 2020

Advantages of Deep Learning

  • it is robust enough to generalize to novel data. Data scientists can still steer the learning — through the architecture, the loss function and the training data — without having to hand-craft the statistical model itself.
  • it allows us to train for a specific task by example rather than programming the system explicitly. We can train a model on many varied examples, or start from a very simple training set and let it learn from there.
  • it can become almost any kind of system. It can be built for one task, such as face recognition, or for another, such as image reconstruction. It can have a very large number of weights or a very small one, and it can model linear or nonlinear relationships.
  • it generalizes flexibly beyond its training data — although a trade-off of that flexibility is that it becomes much harder to determine where the flaws are, or where it is creating false positives.
  • it scales well with computation power. Given modern parallel hardware such as GPUs, it can extract insights quickly and tackle problems that are traditionally tricky to solve.
  • it handles high-dimensional data well. We can build richer models simply by adding more layers to the neural network.
  • it allows us to learn structure from the world without supervision. Unsupervised training can discover varied features on its own, much as biological neurons take on such varied functions and shapes.
  • it can produce new images of its own — generative models can synthesize novel samples from what they have learned, rather than only retrieving what they were shown.
  • it adapts automatically to new data, which makes it a nice alternative to traditional machine learning that relies on human expertise.
  • it handles everything at a much higher level of abstraction than a standard shallow network, so the feature-engineering side of the pipeline is, at its core, much less complex.
  • it retains a lot of information, even about a very tiny or poorly known object — and we are still in the process of learning how vision systems achieve this kind of efficiency.
  • it can take in more than one input at a time and learn from richer information.
  • it gets better results over time. It learns gradually from accumulated experience rather than all at once.
  • it can learn over time from billions of example images and, crucially, recognize patterns in them.
  • it can be used on datasets that are too large, complex and repetitive for traditional computer systems.
  • it can handle large amounts of data, and for smaller networks the training cost stays comparatively low.
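Several of the points above — learning a task from examples, modeling nonlinear relationships, and stacking layers — can be made concrete with a tiny network. The sketch below trains a minimal 2-2-1 multilayer perceptron on XOR, a function no single linear layer can represent; all weights, sizes and hyperparameters are illustrative choices, not a production setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: a mapping that no single linear layer can represent.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 1.0, 1.0, 0.0]

# Fixed, asymmetric starting weights so the run is reproducible.
w1 = [[0.5, -0.4], [0.3, 0.8]]  # input -> hidden (2 units)
b1 = [0.1, -0.2]
w2 = [0.6, -0.7]                # hidden -> output
b2 = 0.05

def forward(x):
    """One pass through the two-layer network."""
    h = [sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + b1[j]) for j in range(2)]
    o = sigmoid(h[0] * w2[0] + h[1] * w2[1] + b2)
    return h, o

def mse():
    """Mean squared error over the four training examples."""
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

lr = 0.5
initial_loss = mse()
for _ in range(5000):               # plain per-example gradient descent
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = 2 * (o - y) * o * (1 - o)                 # output-unit delta
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * d_o * h[j]
            w1[0][j] -= lr * d_h[j] * x[0]
            w1[1][j] -= lr * d_h[j] * x[1]
            b1[j] -= lr * d_h[j]
        b2 -= lr * d_o

final_loss = mse()
print(f"MSE: {initial_loss:.3f} -> {final_loss:.3f}")
```

The same four examples, run through a single linear unit, could never drive the loss toward zero — it is the hidden layer (the "depth") that makes the nonlinear mapping learnable.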

Disadvantages of Deep Learning

  • it is much harder to compare what it achieves to hand-crafted methods, since the features it learns by gradient descent are never designed or inspected by hand.
  • it is very difficult to assess its performance in real-world deployments; requirements vary greatly from application to application, and techniques for analysis, validation and scaling vary just as widely.
  • it is not 100% reliable, and some problems will remain difficult for it.
  • it must be trained on very large amounts of data (think thousands of images or videos).
  • it doesn’t give exact answers. What you’re getting are approximate, statistical predictions.
  • it tends to learn on its own, as a black box, and it is hard to see how the system evolves over time.
  • it requires huge datasets in order to train — especially when you consider that the model only sees the image, not its context.
  • it doesn’t tend to train as fast as other methods, nor is it as memory-efficient as more traditional approaches.
  • it is very hard to understand. The model learns something about the world and draws generalizations from that knowledge, but it is difficult to explain how it arrives at any particular answer.
  • it is computationally very expensive, requiring a large amount of memory and compute, and the result is not easy to transfer to other problems.
  • it requires training the model to learn deep structure, a process that can consume enormous amounts of computation on highly parallel hardware.
  • it is hard to describe, and is not completely understood.
  • it is a little bit complicated. I do believe that simpler, earlier-generation methods can give a better result on some problems.
  • it tends to be more costly.
  • it requires much larger datasets with many more features. As a result, the algorithm takes longer to train and needs more memory to work with the data.
  • it requires very advanced optimization techniques, which must be incorporated to obtain good results.
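The cost-related points above — memory, compute, and training time — can be made tangible with simple arithmetic. The sketch below counts the weights and biases of a fully connected network; the layer widths are made-up illustrations, not recommendations.

```python
# Back-of-the-envelope parameter counting for a dense (fully connected)
# network: each layer contributes (inputs x outputs) weights plus one
# bias per output unit.

def param_count(layer_sizes):
    """Total weights + biases for a dense network with these layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

shallow = [784, 32, 10]             # one small hidden layer
deep = [784, 1024, 1024, 1024, 10]  # three wide hidden layers

for name, sizes in [("shallow", shallow), ("deep", deep)]:
    n = param_count(sizes)
    # 4 bytes per float32 parameter; gradients roughly double the footprint.
    print(f"{name}: {n:,} params, ~{n * 4 / 1e6:.1f} MB of float32 weights")
    # shallow: 25,450 params, ~0.1 MB / deep: 2,913,290 params, ~11.7 MB
```

Going from one narrow hidden layer to three wide ones multiplies the parameter count by over a hundred — which is why memory, training time, and dataset size all grow together as networks get deeper.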
