The top two layers, Infrastructure and Application, offer the best opportunities for new entrants, as they have a lower learning curve and a lower barrier to entry. Those with cloud experience can leverage it to integrate Gen AI, focusing on accessible areas like MLOps and managed LLM services.
The excitement around Generative Artificial Intelligence (Gen AI) has reached a fever pitch, but it is also accompanied by considerable confusion. Questions about what to study, how to approach it, and where the most job opportunities lie are at the forefront of our minds. Personally, I'm a proponent of first-principles thinking, which involves deconstructing a problem into its smallest logical components, and that is the approach I apply to understanding Gen AI.
Gen AI can be broken down into four distinct layers:
1. Hardware: the foundational layer, comprising the silicon chips needed to train models, produced by companies like AMD and NVIDIA.
2. Models: the Large Language Models (LLMs) trained and run on that hardware, such as those developed by OpenAI and Anthropic.
3. Infrastructure: providers offering convenient ways to consume, host, train, and deploy models. AWS is a prime example, with managed services like Amazon Bedrock for hosting pre-trained models and Amazon EC2 for provisioning VMs for model training.
4. Application: where the trained models are put to use. Examples include Adobe Firefly, LLM-powered chatbots, and LLM-driven travel agents.

Now, here's the crucial aspect: as we ascend from the bottom to the top of these layers, the learning curve becomes less steep and the barrier to entry for new market players decreases. Developing new hardware, such as chips, demands substantial investment, which makes it hard for newcomers to break in. Consequently, the most promising opportunities lie within the top two layers.
For those already familiar with cloud computing, combining that knowledge with Gen AI can significantly enhance their value. For instance, DevOps engineers can delve into MLOps, while those proficient in Kubernetes or serverless architectures can explore running Gen AI workloads on those platforms. Similarly, application developers can leverage managed LLM services to add Gen AI features to their products, as the sketch below illustrates. Personally, I'm dedicating most of my efforts to mastering this uppermost layer.
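To make that concrete, here is a minimal sketch of what consuming a managed LLM service can look like from the application layer, using Python and boto3 to call a model hosted on Amazon Bedrock. The model ID, region, and request/response shapes below are assumptions tied to Anthropic's models on Bedrock; adapt them to whatever models your account has access to.

```python
import json

import boto3

# Assumption: AWS credentials are configured and access to the chosen model
# has been granted in the account. The model ID and region are illustrative.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")


def ask(prompt: str, max_tokens: int = 256) -> str:
    """Send a single user message to a Bedrock-hosted model and return its reply."""
    body = {
        # Request schema used by Anthropic models on Bedrock.
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = client.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]


if __name__ == "__main__":
    print(ask("In one sentence, what does Amazon Bedrock do?"))
```

The point of the sketch is how little is involved: no GPUs to provision and no model weights to manage, just an authenticated API call. That is exactly why the application layer is the most accessible entry point into the stack.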