In Week 4, we focused on understanding machine learning in real-world scenarios and the concept of ML pipelines. We discussed the practical applications of machine learning across various industries and the challenges practitioners face. Furthermore, we delved into the inner workings of neural networks, exploring their capabilities, limitations, and ways to tackle more complex problems. We also examined the significance of model capacity and design in achieving accurate results in machine learning projects.

This week, we will embark on a practical guide to a machine learning project focused on image generation. This hands-on approach will allow you to apply the concepts and techniques learned in previous lessons, offering a deeper understanding of how machine learning can be utilized to solve real-world problems and challenges. Get ready for an exciting journey into the practical side of machine learning as we dive into the world of image generation!

Fake Image Generation

In this lesson, we explore the fascinating world of fake image generation, just one example of data generation. While our primary focus is on image generation, it is worth noting that there are many other possibilities for data generation, including text for chatbots, video for deepfakes, and simulated data.

Key Takeaways:

  • This is just one example of data generation, which is a very popular and useful field in ML. Many possibilities:
    • Text (chatbots)
    • Video (deepfakes)
    • Simulated data (for video games, simulation engines, etc.)

GANs – Generative Adversarial Networks

Generative Adversarial Networks (GANs) have emerged as one of the most popular models for generating images and other types of data. GANs employ two neural networks that compete with each other in order to produce fake data, hence the term ‘adversarial.’ These two networks are the generative model (‘G’) and the discriminative model (‘D’).

Key Takeaways:

  • GANs are used to generate new (i.e. fake) data (hence ‘generative’).
  • GANs use two neural networks that ‘compete’ with one another in order to produce this fake data (hence ‘adversarial’, and ‘networks’).
    • The generative model (‘G’) and the discriminative model (‘D’)
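This competition can be written as a single min-max objective (the formulation from the original GAN paper, which D tries to maximize and G tries to minimize):

V(D, G) = E_x~p_data[log D(x)] + E_z~p_z[log(1 − D(G(z)))]

Intuitively, the first term rewards D for outputting 1 on real data, and the second rewards D for outputting 0 on generated data; G pushes in the opposite direction on the second term by trying to make D(G(z)) close to 1.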

Statistical Distributions

To understand how GANs work, it’s essential to grasp the concept of statistical distributions. Distributions are a means of summarizing the behavior of random variables, such as height, SAT scores, or income. One of the most famous distributions is the normal distribution. In the context of GANs, we can think of images as random samples from an underlying distribution, and our primary goal is to learn this distribution.

Key Takeaways:

  • Before understanding how these models work, we need to understand statistical distributions.
    • Distributions are just a way of summarizing the behavior of a random variable (i.e. height, SAT score, income, etc.)
    • Perhaps the most famous is the normal distribution.
    • Consider images as random samples from an underlying distribution, and our goal is to learn this distribution.
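To make this concrete, here is a small sketch (using NumPy, with made-up parameters) that treats height as a random variable drawn from a normal distribution; with enough samples, sample statistics recover the parameters of the underlying distribution:

```python
import numpy as np

# Assume heights follow a normal distribution with mean 170 cm and
# standard deviation 10 cm (illustrative numbers, not real data).
rng = np.random.default_rng(seed=0)
heights = rng.normal(loc=170.0, scale=10.0, size=100_000)

# The sample mean and standard deviation closely match the
# parameters of the distribution the samples came from.
print(round(heights.mean(), 1), round(heights.std(), 1))
```

A GAN plays the same game in reverse: given only samples (images), it tries to learn the underlying distribution they came from.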

How GANs Work to Generate Data

The ‘networks’ part of GANs refers to the fact that both G and D are neural networks, which are well suited to complex, high-dimensional data such as images. The two networks compete with each other: G learns the data’s latent distribution (an unsupervised task, since there are no labels) and passes generated samples to D (a supervised task, since real data is labeled 1 and generated data 0). G’s objective is to fool D, while D aims to maximize the probability of correctly distinguishing real data from G’s output. During training, the weights of G and D are updated in alternating steps, each network minimizing its own loss.

Key Takeaways:

  • They compete with one another:
    • The generator learns the data’s latent distribution (an unsupervised task, as there are no labels; ‘latent’ means the distribution is hidden, or unknown) and passes generated data into the discriminator (a supervised task, as we label real data with 1 and generated data with 0).
    • The job of G is to fool D; the job of D is to maximize the probability of telling real data apart from G’s data.
  • Once training is done, you have two models:
    • G, which learned the distribution of the data.
    • D, which learned to tell fake and real data apart.
  • So, to generate new data, you just use G; you know G is up to the job, because it was able to fool D, which was also trained to do its job!
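The loop above can be sketched end to end on a toy 1-D problem. The sketch below (pure NumPy; all parameter values are illustrative, not a recipe) uses the simplest possible ‘networks’: G is an affine map of noise, and D is a logistic model, trained in alternation with the 1/0 labeling scheme described above:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

# Real data: samples from a hidden 1-D normal distribution, N(4, 0.5).
# The GAN never sees these parameters, only samples drawn from them.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator G: affine map of noise z ~ N(0, 1), G(z) = a*z + b.
# Discriminator D: logistic model, D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0      # G initially generates N(0, 1)
w, c = 1.0, 0.0
lr_d, lr_g, n = 0.1, 0.02, 64

for step in range(3000):
    # --- Update D: real samples labeled 1, generated samples labeled 0 ---
    xr = real_batch(n)
    xf = a * rng.normal(size=n) + b
    pr, pf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # Gradients of the binary cross-entropy loss wrt D's parameters
    w -= lr_d * (-(1.0 - pr) * xr + pf * xf).mean()
    c -= lr_d * (-(1.0 - pr) + pf).mean()

    # --- Update G: try to make D output 1 on generated samples ---
    z = rng.normal(size=n)
    pf = sigmoid(w * (a * z + b) + c)
    a -= lr_g * (-(1.0 - pf) * w * z).mean()
    b -= lr_g * (-(1.0 - pf) * w).mean()

# After training, discard D and use G alone to generate new data.
samples = a * rng.normal(size=10_000) + b
print(round(samples.mean(), 2))  # moves toward the real mean of 4
```

Real GANs replace these one-parameter models with deep networks and images, but the structure is the same: two models, two opposing objectives, alternating updates on the same labeling scheme.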

Applications of Image Data Generation

Upon completing the training, you will have two models: G, which has learned the data distribution, and D, which has learned to differentiate between real and fake data. To generate new data, you simply use G, as it has proven its capability by fooling D. In some scenarios, generated images can be more useful than real ones alone, for example when real data is scarce, expensive to collect, or privacy-sensitive. You can find an implementation of GANs for handwritten digits on GitHub.


This week’s lesson delved into the fascinating realm of fake image generation using Generative Adversarial Networks (GANs). We explored the concept of statistical distributions and how GANs employ two competing neural networks, G and D, to generate realistic images. We also discussed the various applications of image data generation and the potential benefits of using generated images in certain scenarios.

As we move forward to next week’s lesson, we will explore different machine learning models and their applications across a wide range of industries and problem-solving contexts. Get ready to broaden your understanding of the diverse landscape of machine learning models and their real-world implications!