This looks like a powerful step toward making prompt engineering more scalable and production-ready. The version control approach, along with staging environments and real-time analytics, seems particularly useful for teams handling high-volume AI workloads.
One question: How do you handle prompt drift over time? As models evolve, prompt effectiveness can degrade—do you provide any automated testing or monitoring to detect when a deployed prompt needs adjustment?
Looking forward to exploring Portkey’s capabilities.
Thank you! We don't have a strong Evals module within our Prompt Studio at the moment, so there's no straightforward way to do that. However, we do have one of the better modules for applying guardrails on live AI traffic and setting up routing based on guardrail verdicts: https://portkey.ai/features/guardrails
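If it helps, attaching guardrails to live traffic is a gateway-config change rather than a code change. A rough sketch with our Python SDK - the guardrail IDs are placeholders, and the exact config keys should be double-checked against the guardrails docs:

```python
# Rough sketch: attaching guardrails to live AI traffic via a gateway config.
# The guardrail IDs below are placeholders you'd get after creating checks in
# the Portkey dashboard; the "input_guardrails"/"output_guardrails" keys are
# an assumption based on the guardrails docs and may differ in the current API.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    config={
        "input_guardrails": ["guardrails-id-xxx"],   # checks run on the request
        "output_guardrails": ["guardrails-id-yyy"],  # checks run on the response
    },
)
```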
> How do you version control thousands of prompts?
Kill me now.
> How do you collaborate across hundreds of engineers
What do you mean by that? This only targets a few big companies.
> A tech firm that cut deployment times from 3 days to near-instant
That's a process and maybe a CI issue. I don't see how AI would improve any of that, but I'll gladly be proven wrong.
> You can try it yourself at prompt.new
All I see is a login page from another company. Don't you have a website for all that serious prompting you do?
Lol, I get you. On the third point you shared - it's not exactly a CI issue; CI is already as fast as it can be. It's just that a prompt is a critical part of your AI app, and any change to it needs to go through a few hoops before it's available to all users.
On Portkey, since we decouple prompt templates from your code, you can keep iterating on the prompts in Portkey and just reference the prompt ID in code. Any change you make to a Portkey prompt automatically gets reflected in the app, because the prompt ID keeps pointing to the latest published version.
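For illustration, the app-side call looks roughly like this with our Python SDK (a minimal sketch - the prompt ID and variables are made-up placeholders):

```python
# Minimal sketch using Portkey's Python SDK (pip install portkey-ai).
# The prompt ID and variables below are made-up placeholders; in practice
# you copy the ID from the prompt template created in Prompt Studio.
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")

# The app references the prompt only by ID. The template itself lives in
# Portkey, so publishing a new version there changes what this call renders
# with no code change or redeploy.
completion = client.prompts.completions.create(
    prompt_id="pp-support-bot-123",  # placeholder prompt ID
    variables={"user_query": "How do I reset my password?"},
)
print(completion)
```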
Does that make sense?