Bigger Is Not Better: Why A Complex Deep Learning Network Is Often Worse than a Simple One for Business Problems

Courtlin Holt-Nguyen
Accelerated Analyst
6 min read · Mar 19, 2023

Artificial intelligence (AI) is rapidly advancing in the business world, with an increasing number of companies employing deep learning networks to improve their operations. However, it may come as a surprise that more complex deep learning models are not necessarily better suited to solving business problems. In fact, in many cases, deploying a simpler network yields more effective results. In this blog post, we’ll explore why complex deep learning networks can be inefficient and even detrimental when applied to business scenarios.

1. Deep learning networks require a significant amount of training data, making it difficult to obtain accurate results for business problems.

In my experience, one of the biggest challenges with deep learning networks is obtaining enough training data to achieve accurate results. This is especially true for businesses that lack the resources to gather large datasets. The bigger the network, the more data it requires, which can become quite costly. Outliers in the data and inadequate data processing can further undermine reliability, as we’ll see below. Businesses should carefully weigh the tradeoff between complexity and accuracy when choosing a deep learning network; in some cases, a simpler network is actually more effective. Ultimately, a business needs to weigh the costs and benefits of implementing a deep learning network and determine whether it’s the best solution for its specific problem.

2. Larger networks need even more data, which can be challenging and costly for businesses to obtain.

I’ve seen firsthand how larger networks require even more data, which can be challenging and costly for businesses to obtain. While deep learning algorithms have shown outstanding performance on complex tasks, their accuracy often depends on the amount of data available for training. As businesses take on more ambitious problems, the appetite for larger and more complex networks, and for the data to feed them, inevitably grows. Obtaining the required amount of training data can be a daunting task, especially with limited resources. While we can’t ignore the power of deep learning, the costs and challenges of feeding larger networks must be weighed carefully.

3. Outliers in the data can significantly impact the accuracy of deep learning models, making them less reliable for business solutions.

As I mentioned earlier, deep learning networks require a significant amount of training data to provide accurate results, which is already a challenge for businesses. To make matters worse, outliers in the data can significantly degrade the accuracy of these models, rendering them less reliable for business solutions. Outliers are data points that don’t follow the expected pattern or trend of the rest of the data, and they can skew a model’s output significantly. For example, in a fraud detection model, an outlier may be a legitimate transaction that looks suspicious because it deviates from the norm. If the model is too sensitive to outliers, it can produce false positives or false negatives, which is a costly mistake for a business. Therefore, it’s essential to identify and handle outliers carefully before feeding the data into the model to ensure the model’s reliability.
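One common way to flag outliers before training is interquartile-range (IQR) fencing. Here is a minimal pure-Python sketch; the transaction amounts are made-up illustrative data, not from any real model.

```python
# Minimal sketch of IQR-based outlier filtering (pure Python).
# The transaction amounts below are made-up illustrative data.

def iqr_bounds(values, k=1.5):
    """Return (lower, upper) fences using the interquartile range."""
    s = sorted(values)
    n = len(s)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return s[lo] * (1 - frac) + s[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

amounts = [42, 55, 48, 51, 60, 47, 53, 9800]  # 9800 is an obvious outlier
lower, upper = iqr_bounds(amounts)
cleaned = [a for a in amounts if lower <= a <= upper]
print(cleaned)  # the 9800 transaction is filtered out
```

Whether to drop, cap, or investigate the flagged points is a business decision; in fraud detection, for instance, the outliers may be exactly the cases worth a closer look.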

4. Neural networks require significant computing power to train and optimize, making them impractical for some businesses.

As someone who has worked with deep learning networks, I can attest that one significant drawback is the amount of computing power they require. In some cases, the computing power needed for a neural network can be impractical for a business. This can be especially true for smaller companies that may not have the resources to invest in the necessary hardware. Although cloud computing GPUs have certainly helped to reduce the cost and complexity of training models, costs can quickly escalate for complex models that require extensive hyperparameter tuning. While deep learning networks have incredible potential, it’s important for businesses to weigh the benefits against the costs when considering implementing neural networks as a solution.

5. Over-reliance on neural networks without adequate data processing can lead to unreliable results.

In my experience, relying solely on deep learning networks without adequate data processing can be a recipe for unreliable results. It’s essential to remember that deep learning networks are only as good as the data they’re trained on. If the data is incomplete, inconsistent, or biased, the network will produce inaccurate or even harmful results. It’s crucial to have robust data governance and quality control processes in place to ensure that the data is trustworthy before training a neural network. Unfortunately, data governance and data quality control are usually deemed less important by senior management until data quality problems become undeniable. Once decision makers begin to question the quality of the data used for the analysis, it becomes exponentially more difficult to convince them to take action based on one’s analysis. Garbage-in, garbage-out can have fatal consequences for one’s credibility and career.
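Even lightweight quality checks run before training can surface these problems early. The sketch below is purely illustrative; the field names (customer_id, amount, date) are hypothetical, and a real pipeline would check far more.

```python
# Sketch of basic pre-training data-quality checks (field names are hypothetical).

def quality_report(rows, required=("customer_id", "amount", "date")):
    """Count rows with missing or malformed required fields."""
    issues = {"missing_field": 0, "bad_amount": 0}
    for row in rows:
        if any(row.get(f) in (None, "") for f in required):
            issues["missing_field"] += 1
        amt = row.get("amount")
        if amt is not None and not isinstance(amt, (int, float)):
            issues["bad_amount"] += 1
    return issues

sample = [
    {"customer_id": 1, "amount": 19.99, "date": "2023-01-05"},
    {"customer_id": 2, "amount": "N/A", "date": "2023-01-06"},  # malformed amount
    {"customer_id": 3, "amount": None, "date": ""},             # missing fields
]
report = quality_report(sample)
print(report)
```

Reporting these counts before every training run makes data-quality regressions visible to stakeholders long before they surface as a model failure.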

Additionally, relying excessively on neural networks without considering other methods, such as rule-based systems or decision trees, can limit the scope of the analysis and fail to take into consideration crucial contextual information. Simpler tools like regression analysis and decision trees are often easier to interpret and explain to people without an analytics background who are nonetheless being asked to take action based on the results of a model.
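To see why simpler models are easier to explain, consider a one-rule "decision stump", the simplest possible decision tree. This is a pure-Python sketch with made-up data; the "review if amount exceeds a threshold" framing is hypothetical.

```python
# A tiny decision stump (one-rule "decision tree"); the fitted model
# is a single human-readable rule.

def fit_stump(points):
    """Find the threshold on a single feature that minimizes errors."""
    best = None
    for t in sorted({x for x, _ in points}):
        errors = sum((x >= t) != y for x, y in points)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

# e.g. (transaction amount, "needs manual review" flag) -- illustrative data
data = [(10, False), (25, False), (40, False), (120, True), (300, True)]
threshold = fit_stump(data)
print(f"Rule: review if amount >= {threshold}")
```

The entire model can be stated in one sentence to a non-technical stakeholder, something no deep network can offer.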

6. Analytic tools for deep learning networks have yet to catch up with the technology, making it difficult to accurately analyze results.

I’ve found that one of the biggest challenges in using deep learning networks for business problems is the lack of appropriate analytic tools to accurately analyze and explain the results. The technology is evolving rapidly, but the tools are lagging behind. This can make it difficult to understand why a model isn’t performing as expected and to identify ways to improve it. As a result, businesses may struggle to achieve the accuracy they need for their solution. It’s important to keep in mind that while deep learning networks can be incredibly powerful, they’re not a silver bullet.

7. Advantages of a single-layer vs. a multi-layer neural network

One advantage of a single-layer neural network is its simplicity. With only one layer, the network can be trained faster, making it more practical for businesses with limited computing power or data resources. Additionally, single-layer networks are more transparent, allowing for easier interpretation of results. However, multi-layer neural networks have their own advantages. By learning features at multiple levels of abstraction, deep neural networks can achieve higher accuracy and are better suited to complex problems. While deeper networks require more data and computing power, they can generate more accurate and reliable results. Ultimately, the choice between a single-layer and a multi-layer network depends on the specific business problem and the available resources.
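The capacity difference can be made concrete with the classic XOR example: no single-layer (no-hidden-layer) linear model can represent XOR, but one hidden layer suffices. In the sketch below the weights are hand-picked for illustration rather than learned, so it shows representational capacity only, not training.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    """Hand-picked weights for a 2-2-1 network that computes XOR.

    h1 approximates OR(x1, x2), h2 approximates AND(x1, x2),
    and the output approximates h1 AND NOT h2.
    """
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)  # fires if either input is 1
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)  # fires only if both are 1
    return sigmoid(20 * h1 - 20 * h2 - 10)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))
```

If the business problem is linearly separable, the hidden layer buys nothing and only adds training cost and opacity, which is the tradeoff this section is about.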

Key Takeaways

1. Deep learning networks require a significant amount of training data, making it difficult for businesses to obtain accurate results, especially with limited resources.

2. Outliers in data can significantly impact the accuracy of deep learning models, making them less reliable for business solutions, and therefore, need to be handled carefully.

3. Neural networks require significant computing power, which can be impractical for some businesses, especially smaller ones with limited resources.

4. Over-reliance on neural networks without adequate data processing can lead to unreliable results, and businesses should consider deep learning as one tool in a broader toolkit for solving problems.

5. The K.I.S.S. principle should be applied to neural network design, with businesses carefully considering the tradeoffs between complexity and accuracy to choose the best deep learning network for their specific problem.



Former Head of Enterprise Analytics. I share practical data science tutorials with working code. Data scientist | data strategist | consultant.