Generative AI for Everyone: Unveiling Key Learnings from the Course Adventure

Ali Issa
3 min read · Feb 1, 2024
Image generated by DALL·E 3

I’m happy to share that I’ve obtained a new certification: Generative AI for Everyone from DeepLearning.AI!

This is a non-technical, three-week course taught by Andrew Ng. If you haven't had time to watch it, here's a summary of the key takeaways:

📅 Week 1

The first week introduced generative AI and its ability to generate high-quality text, audio, and images. We then delved deeper into large language models (LLMs) and the types of tasks they can be used for: writing (asking the model to generate new content from a query, such as translations, emails, or announcements), reading (analyzing and summarizing content), and chatting (building customized chatbots, such as trip planners or recipe assistants).

We also explored what an LLM can and cannot do. Andrew gave a useful rule of thumb: if a fresh college graduate could follow the instructions in the prompt to complete the task, then an LLM should be able to do it. For instance, can a fresh graduate determine whether the sentence “I love my new llama T-shirt! The fabric is so soft.” is positive? If yes, then an LLM should have no problem identifying the sentiment behind it. The course also discussed some limitations of LLMs, such as the knowledge cut-off and the limited context length (the combined size of the input and output).

Moreover, Andrew gave some advice on prompting: be clear and specific, and if the result isn't what you wanted, think about why, refine your prompt, and repeat.

📅 Week 2

This week started by showing the huge advantage LLMs bring. Previously, building a sentiment-analysis system meant spending months creating a dataset, writing code (building the architecture) to train the model, and then deploying it, a process that could take more than five months. Now, with a simple API call and a string as a prompt, we can do the same in hours. The life cycle of a generative AI project was then introduced: identify the scope of the project -> build the system (possibly just a prototype at first) -> internal evaluation -> deployment.
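To make the "API call plus a prompt string" idea concrete, here is a minimal sketch of sentiment analysis done purely through prompting. The `call_llm` parameter is a hypothetical stand-in for whichever chat-completion client you actually use; only the prompt construction is shown, since the course doesn't prescribe a specific provider.

```python
def build_sentiment_prompt(text: str) -> str:
    """Wrap a piece of text in a sentiment-classification instruction."""
    return (
        "Classify the sentiment of the following text as "
        "positive, negative, or neutral. Reply with one word.\n\n"
        f"Text: {text}"
    )

def classify_sentiment(text: str, call_llm) -> str:
    """call_llm is a placeholder (hypothetical) for any function that
    sends a prompt string to an LLM API and returns its text reply."""
    return call_llm(build_sentiment_prompt(text)).strip().lower()

# The prompt for the llama T-shirt example from Week 1:
prompt = build_sentiment_prompt(
    "I love my new llama T-shirt! The fabric is so soft."
)
```

Compared with the old pipeline (dataset, training, deployment), the entire "model" here is one string, which is exactly the speed-up the course highlights.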

We can go back and forth between these stages to improve the system. Concerning the cost of API calls, we need to estimate how many tokens the LLM will consume per call; as a rule of thumb, one token corresponds to roughly 0.75 words, so a text contains about a third more tokens than words. The week closed with a quick, high-level introduction to Retrieval-Augmented Generation (RAG), fine-tuning, Reinforcement Learning from Human Feedback (RLHF), agents, and tools.
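The 0.75-words-per-token rule of thumb makes back-of-the-envelope cost estimates easy. A small sketch, where the per-1k-token price is a made-up placeholder (check your provider's actual pricing):

```python
import math

WORDS_PER_TOKEN = 0.75  # rule of thumb from the course

def estimate_tokens(text: str) -> int:
    """Approximate token count: tokens ≈ words / 0.75."""
    return math.ceil(len(text.split()) / WORDS_PER_TOKEN)

def estimate_cost(text: str, price_per_1k_tokens: float) -> float:
    """price_per_1k_tokens is a placeholder value, not a real price."""
    return estimate_tokens(text) / 1000 * price_per_1k_tokens

nine_words = "the quick brown fox jumps over the lazy dog"
# 9 words -> about 12 tokens under this rule of thumb
```

This is only an estimate; for exact counts you would use the tokenizer that matches your model.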

📅 Week 3

In the last week, we discussed how LLMs can be used in different jobs, such as programming and recruiting. The concept of automation was then introduced, with the observation that individual tasks, rather than entire jobs, will be automated. Andrew distinguished augmentation, where we use an LLM to help us with tasks, from automation, where tasks are completed without human involvement.

He discussed how to assess whether a task is technically feasible for AI and how much value automating it would add. He also talked about analyzing the tasks within jobs for automation or augmentation potential, as well as societal concerns and responsible AI.

Resources: Generative AI for Everyone | Coursera

If you like what you see, hit the follow button! You can also find me on LinkedIn, and we can follow each other there too.