Back to previous chapter: 3 core principles

Building with AI means incorporating non-deterministic technology from a third-party provider into your product. This approach differs significantly from traditional product development, which is mostly linear—from discovery to delivery—with fairly well-defined roles and responsibilities. AI-driven development is inherently iterative, a fact that should be considered very early in the building process.

In parallel with typical user discovery, teams should initiate "AI discovery" early. This process helps quickly develop a strong intuition about the potential value of model outputs, how the architecture should be designed, and what data points will be required.

This is what we call the **life of an AI feature**, with its main steps and the key gestures that need to be done right.

Remember: this entire process is empirical by nature of the AI technology, and every step requires seamless, fast iteration.

💡 Ideate:

At the ideation stage, the team explores potential use cases for AI within the product. This involves identifying where AI could add the most value and formulating a solid business case for its integration. The focus here is to align AI's capabilities with the product's objectives and user needs.

🧱 Prototype:

In the prototype phase, the team iterates on the architecture of the AI feature and the sequence of AI calls that will be triggered. This phase includes fast empirical tests to identify the optimal prompt structure for each AI call and the relevant data points to pass as variables in the prompts, so that each instruction carries clear context and goals and the output stays consistent and high quality.
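
To illustrate, a prompt can be treated as a template whose variables carry the context and the goal. Below is a minimal sketch in Python; the `call_llm` helper, the template, and the variable names are illustrative assumptions, not any specific provider's API.

```python
from string import Template

# Static instructions plus variables for the data points identified
# during prototyping (all names here are illustrative).
SUMMARY_PROMPT = Template(
    "You are a support assistant for $product_name.\n"
    "Goal: summarize the conversation below in at most 3 bullet points.\n"
    "Customer tier: $customer_tier\n"
    "Conversation:\n$conversation"
)

def build_prompt(product_name: str, customer_tier: str, conversation: str) -> str:
    """Inject the selected data points into the prompt template."""
    return SUMMARY_PROMPT.substitute(
        product_name=product_name,
        customer_tier=customer_tier,
        conversation=conversation,
    )

# call_llm(prompt) stands in for whichever model provider you use:
# output = call_llm(build_prompt("Acme CRM", "enterprise", transcript))
```

Keeping the template separate from the injected data points makes it cheap to iterate on either one independently during these fast empirical tests.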

💯 Validate:

With first convictions about prompt structure and input data, teams can now gather test cases into test collections to test and evaluate at a broader scale. Test collections include a wide diversity of production-like inputs and, optionally, expected outputs to evaluate against. The collections are used to run batch tests, ensuring the AI performs well under different conditions. The goal is to define evaluation criteria that quantify the success rate of each prompt and to establish confidence quickly before going into production.
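
As a concrete sketch, a batch run over a test collection can be as simple as the loop below. It reuses the hypothetical `build_prompt` and `call_llm` helpers from the prototype sketch, and the keyword-match criterion is deliberately naive; real evaluations often use similarity metrics or model-graded scoring.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    inputs: dict                    # variables injected into the prompt
    expected: Optional[str] = None  # optional expected output to evaluate against

def evaluate(output: str, expected: Optional[str]) -> bool:
    """Toy criterion: pass if the expected snippet appears in the output."""
    return expected is None or expected.lower() in output.lower()

def run_batch(collection: list[TestCase]) -> float:
    """Run every test case and return the success rate of this prompt version."""
    passed = 0
    for case in collection:
        output = call_llm(build_prompt(**case.inputs))  # helpers from above
        passed += evaluate(output, case.expected)
    return passed / len(collection)

# success_rate = run_batch(collection)
# e.g. require success_rate >= 0.90 before going to production
```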

🔎 Monitor:

Once the AI feature is live, monitoring becomes crucial, as quality is never guaranteed. It is key to ensure that AI inputs and outputs (I/Os) are properly logged and accessible, allowing teams to track the AI’s performance in real time. Automated systems can flag issues as they arise, while human evaluations can assess the quality and effectiveness of the AI's output, ensuring it continues to deliver value.
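
One minimal way to set this up is sketched below, using nothing but the standard library and a JSON-lines file; a real deployment would typically ship these records to an observability or LLM-monitoring platform, and the flag thresholds here are illustrative.

```python
import json
import time
import uuid

LOG_PATH = "ai_io_log.jsonl"  # illustrative destination

def log_ai_call(prompt: str, output: str, latency_s: float) -> None:
    """Persist every AI input/output pair so scripts and humans can review it."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "prompt": prompt,
        "output": output,
        "latency_s": latency_s,
        # Automated flags: cheap heuristics that surface suspect calls
        # for human evaluation.
        "flags": {
            "empty_output": not output.strip(),
            "too_slow": latency_s > 10.0,
        },
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
```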

📈 Improve:

Continuous improvement is vital for maintaining the AI feature’s relevance and performance. Teams should be able to easily replay production scenarios in an effective playground environment, validate fixes, and check for regressions at scale, ensuring that updates do not negatively impact the feature's functionality before they reach production.
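
For example, the I/O log from the monitoring phase can double as a replay set for regression checks. The sketch below assumes the JSON-lines log and the `evaluate` helper from the earlier sketches; comparing new outputs against previously accepted ones is a deliberately simple regression criterion.

```python
import json

def replay_from_log(log_path: str, run_candidate) -> float:
    """Re-run logged production inputs through a candidate version of the
    feature (new prompt, model, or parameters) and report how often the
    new output still passes evaluation against the accepted output."""
    passed = total = 0
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            new_output = run_candidate(record["prompt"])
            passed += evaluate(new_output, record["output"])
            total += 1
    return passed / total if total else 0.0

# Ship the change only if the replayed success rate does not regress:
# replay_from_log("ai_io_log.jsonl", new_version) >= baseline_rate
```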

Continue to the next chapter: II - Design the right architecture for your feature