Overview
Last week we covered the basics of machine learning, delving into its fundamental concepts and principles. We explored the different types of learning, including supervised, unsupervised, and semi-supervised learning, as well as batch and online learning. We also discussed how machine learning is used to make predictions and the significance of features and labels. Finally, we touched upon the challenges faced in machine learning, such as a lack of data, the curse of dimensionality, and overfitting.
In this lesson, we’ll be covering common machine learning problems and troubleshooting techniques. We will revisit the typical task types, regression and classification, and explore best practices for tackling the problems that arise along the way. We will also discuss practical ways to overcome challenges like overfitting, underfitting, and managing high-dimensional data. By the end of this lesson, you’ll be equipped with essential skills to address real-world machine learning challenges and continue your journey towards proficiency in this ever-evolving domain.

Sources of Problems in ML
Data
Data quality plays a vital role in the success of machine learning projects. Noisy data, which can include grainy images or misspelled words, can lead to inaccurate or less reliable models. Additionally, missing, insufficient, or overly complex data can create challenges in designing effective algorithms. As data complexity increases, so does the need for more data points and higher-dimensional models to accurately capture the underlying relationships.
Model
Developing a robust model is essential for handling complex data and ensuring that the algorithm performs well on new, unseen data points. A robust model should be flexible enough to adapt to various types of data and have the ability to generalize well, preventing issues such as overfitting or underfitting. Striking the right balance between model complexity and generalization is crucial for achieving optimal performance in machine learning tasks.
Domain Knowledge
Domain knowledge is a key factor in the success of machine learning projects, as it allows practitioners to better understand the context and nuances of the task at hand. By having a solid grasp of the big picture, machine learning professionals can make more informed decisions when selecting features, designing models, and interpreting results. This understanding ultimately leads to the development of more accurate and reliable machine learning solutions tailored to the specific needs of a given domain.
The Importance of Good Data
The importance of good and plentiful data in machine learning is something we really can’t emphasize enough. You know what they say, “garbage in, garbage out.”
Data comes in all shapes and sizes, from spreadsheets with loads of columns to images with varying resolutions and channels, or even phrases with different word counts and embeddings. To make the most of our machine learning projects, we need to keep our data clean and well-prepared.
Now, when we talk about clean data, we mean making sure we have the right data types, handling any null values by replacing or imputing them, and keeping noise in the original data to a minimum. Sometimes, training with extra noise-corrupted inputs can actually help our model become more robust. And let’s not forget about imbalanced classes – they can be a real headache! If we don’t address them, we might end up with biased models that just don’t perform well on underrepresented classes. So, in a nutshell, taking care of our data is super important if we want to create accurate and reliable machine learning solutions.
Key Takeaways:
- Data should be proper and plentiful, but it is often not!
- Correct data types
- Replace null values (imputation)
- Minimize noise in original data (can train with extra noise-corrupted inputs)
- Imbalanced classes are a big problem!
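To make a couple of these steps concrete, here is a minimal sketch of imputation and class weighting, assuming a tabular dataset handled with pandas and scikit-learn; the file name and column names are made up for the example.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.utils.class_weight import compute_class_weight

df = pd.read_csv("data.csv")  # hypothetical file with made-up columns

# Correct data types: coerce a numeric column that may contain stray strings
df["age"] = pd.to_numeric(df["age"], errors="coerce")

# Replace null values (imputation) with the column median
imputer = SimpleImputer(strategy="median")
df[["age", "income"]] = imputer.fit_transform(df[["age", "income"]])

# Imbalanced classes: compute per-class weights to pass to the loss later
classes = np.unique(df["label"])
weights = compute_class_weight(class_weight="balanced", classes=classes, y=df["label"])
print(dict(zip(classes, weights)))
```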
Training, Validation, and Testing
Now let’s move onto the steps involved in setting up a machine learning project.
Firstly, we need to determine the task, which could be regression or classification for supervised learning, or clustering for unsupervised learning. After deciding on the task, we must collect the data and establish the train-validation-test split, typically following an 80%-10%-10% distribution.
The next step involves pre-processing the data and, if necessary, augmenting it to improve its quality. We then choose a model, such as a neural network, and select an optimizer, which is the algorithm responsible for minimizing the loss.
At this point, we must decide on a few parameters and begin training our model while monitoring the validation loss. Factors to consider include the number of epochs (how many times the model goes through the training data), batch size (the size of the training batches), and learning rate (the size of the steps the optimizer takes when updating the model’s parameters). By following these steps, we can efficiently set up and execute a machine learning project.
Key Takeaways:
- Decide on a task (regression/classification if supervised, clustering if unsupervised)
- Collect data, decide train-validation-test split (normally 80%-10%-10%)
- Choose a model (e.g., a neural network) and optimizer (the algorithm that minimizes the loss)
- Now, decide on a few parameters and train while tracking validation loss:
- Epochs (how many times model goes through the training data)
- Batch size (how large your training batches should be)
- Learning rate (how slow/fast the optimizer updates the model’s parameters)
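To make these knobs concrete, here is a minimal sketch of the setup above using PyTorch (an assumption; the toy data, layer sizes, and hyperparameter values are made up for illustration).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Toy data standing in for a real dataset
X, y = torch.randn(1000, 20), torch.randn(1000, 1)
dataset = TensorDataset(X, y)
train_set, val_set, test_set = random_split(dataset, [800, 100, 100])  # 80%-10%-10%

epochs, batch_size, learning_rate = 20, 32, 1e-3  # the three knobs from the list above
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_set, batch_size=batch_size)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(epochs):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

    # Track validation loss after each epoch
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader) / len(val_loader)
    print(f"epoch {epoch + 1}: val_loss={val_loss:.4f}")
```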
Loss During Training
During the training process, the loss function helps measure the discrepancy between the actual values and the model’s predictions. A lower loss signifies a better fit. However, tracking only the training loss is not sufficient, as it can lead to overfitting. To avoid this, it is crucial to monitor both training and validation set performance. A model that predicts well on the validation set is likely to perform well on the test set.
The key difference between the validation and test sets lies in their usage. The validation set is used for experimenting with and fine-tuning the model, such as adding or removing layers in a neural network. Although it is theoretically possible to use the test set for this purpose, it is essential to reserve the test set for the final evaluation and avoid using it during the model development process. The model can access the validation data but should not be exposed to the test data until it is ready for the final assessment.
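As a rough illustration, here is a minimal sketch of tracking both losses and stopping once the validation loss stops improving; early stopping is one common response assumed here rather than taken from the lesson, and `run_one_epoch` and `evaluate` are hypothetical helpers standing in for the loop shown earlier.

```python
# `run_one_epoch` and `evaluate` are hypothetical helpers; `patience` is an
# assumed hyperparameter controlling how long we tolerate no improvement.
best_val, patience, bad_epochs = float("inf"), 3, 0

for epoch in range(epochs):
    train_loss = run_one_epoch(model, train_loader, optimizer, loss_fn)
    val_loss = evaluate(model, val_loader, loss_fn)
    gap = val_loss - train_loss  # a growing gap is a warning sign of overfitting
    print(f"epoch {epoch + 1}: train={train_loss:.4f} val={val_loss:.4f} gap={gap:.4f}")

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # no improvement for `patience` epochs in a row
            print("Stopping early: the model is likely starting to overfit.")
            break
```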
Key Takeaways:
- The loss function measures the discrepancy between truth and prediction
- The lower the loss, the better the fit… but we cannot just track training loss, or we will overfit
- Overfitting means your model has memorized the data, rather than learning the underlying relationship
- We need to keep an eye on both training and validation set performance
Overfitting & Underfitting
Overfitting and underfitting are two common challenges in machine learning that relate to a model’s capacity, or its ability to model increasingly complex data. Striking a balance between data complexity and model capacity is essential; too much capacity can lead to overfitting, while insufficient capacity results in underfitting.
A common remedy, particularly for overfitting, is regularization. Regularization can be applied even to high-capacity models, making them more suitable for less-complex data. The regularizer works by keeping the model’s parameters low, effectively limiting its capacity. By adjusting and selecting the optimal hyperparameters, we can achieve the ideal capacity, mitigating the risks of both overfitting and underfitting in our machine learning models.
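As a small example, here is a minimal sketch of L2 regularization (weight decay), continuing the PyTorch assumption from the earlier sketch; the `weight_decay` value is an assumption to be tuned on the validation set.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

# weight_decay adds an L2 penalty on the parameters, nudging them toward zero
# and thereby limiting the effective capacity of the model.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Equivalently, the penalty can be written out as lambda times the sum of squared
# weights and added to the data loss during training.
l2_lambda = 1e-4
l2_penalty = l2_lambda * sum((p ** 2).sum() for p in model.parameters())
```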

Key Takeaways:
- Must balance data complexity and model capacity
- Too much capacity, risk of overfitting
- Not enough, underfitting
Model Searching
Model searching is an important step in the machine learning process, where we explore different models to find the one that performs best for our specific task. Instead of selecting a random model and relying solely on regularization to address potential issues, we evaluate various competing models to determine their performance.
Additionally, we can experiment with a chosen model by adjusting its capacity, observing the effects of these changes, and fine-tuning it to achieve optimal results. This systematic approach to model searching helps ensure that we select the most suitable model for our data and objectives, ultimately enhancing the overall performance of our machine learning project.
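As one way of doing this, here is a minimal sketch comparing a few competing models with cross-validation; scikit-learn, the synthetic data, and these particular candidates are assumptions for illustration rather than part of the lesson.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for a real dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

# Evaluate each candidate on the same folds and keep whichever performs best
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```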

Key Takeaways:
- Even a model with high capacity can be regularized to make it more suitable for less-complex data
- The regularizer keeps the model’s parameters low enough, so its capacity is limited
- We choose from a few competing models, and see which performs best
The Training Process
The training process is a critical stage in machine learning, where the selected model and its hyperparameters are put to work to learn from the data. During this phase, it is essential to monitor the model’s loss to evaluate its performance and progress.
By observing the gap between the training and validation loss, we can gain insights into the model’s behavior and make adjustments as needed. This careful tracking of the loss helps ensure that the model is learning effectively, allowing it to make accurate predictions and contribute to the success of the project.
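A quick way to make that gap visible is to plot the two curves side by side; here is a minimal sketch with matplotlib, where the recorded loss values are made up purely for illustration.

```python
import matplotlib.pyplot as plt

train_history = [0.90, 0.60, 0.42, 0.31, 0.25, 0.21, 0.18, 0.16]  # illustrative numbers
val_history = [0.95, 0.68, 0.50, 0.41, 0.38, 0.39, 0.42, 0.46]    # divergence hints at overfitting

plt.plot(train_history, label="training loss")
plt.plot(val_history, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```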

Final Thoughts
Understanding the common challenges and best practices in machine learning is crucial for developing accurate and reliable models. By focusing on the quality of data, selecting the most suitable model, and effectively balancing capacity, we can address issues like overfitting and underfitting, leading to improved performance in our projects. By mastering these principles and techniques, you’ll be well-equipped to tackle real-world machine learning problems and continue your journey towards proficiency in this ever-evolving domain.
Next week, we will dive into the practical aspects of machine learning, exploring the topic of machine learning implementations. We will discuss when and why to use machine learning, and perhaps more importantly, when not to use it. This knowledge will help you make informed decisions when incorporating machine learning into your projects and ensure that you achieve the best possible results. Stay tuned!