When it comes to AI, do not compromise.
I know you are in a hurry and tempted to cut corners.
A few of the challenges you are probably dealing with:
* Multiple LLMs, multiple use cases: Fragmented development, with developers writing use-case-specific code for each LLM.
* API Changes: Frequent updates to LLM APIs require constant code refactoring.
* Security Risks: Ad-hoc API key handling raises security concerns.
* Reliability: Lack of fallback mechanisms makes production risky.
* Data Accessibility: Scattered, uncentralized logging makes your usage data hard to access and analyze. And the list goes on.
Do not compromise on this. It will come back to bite you, to say the least.
How are smart teams solving this?
Think proxy/gateway! An intermediary, an abstraction layer between your applications and the LLM providers.
* Centralized Access: A single API endpoint for any LLM streamlines development (see the sketch after this list).
* API Adaptations: Code changes happen in one place, minimizing disruption.
* Enhanced Security: Centralized key management (e.g., AWS Secrets Manager or similar) and mechanisms such as Workload Identity Federation reduce the risk of leaked credentials.
* Improved Reliability: Automatic fallbacks ensure service continuity (a simple application-side version is sketched further down).
* Centralized Data: Accessible logging aids research and decision-making.
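What does that look like in practice? Here is a minimal sketch, assuming the gateway exposes an OpenAI-compatible endpoint (many do). The base URL and model names are placeholders for whatever routes you configure in your own gateway.

```python
# Minimal sketch: one client, one endpoint, any model behind the gateway.
# The base URL and model names are hypothetical gateway routes, not real endpoints.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal.example.com/v1",  # your gateway, not a provider
    api_key="not-a-real-key",  # real provider keys stay in the gateway / secrets manager
)

for model in ("gpt-4o-mini", "claude-3-haiku", "llama-3-8b"):
    # Same request shape regardless of which provider sits behind the route.
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize our incident report in one line."}],
    )
    print(model, "->", reply.choices[0].message.content)
```

Same request shape, any provider. That is the whole point.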
These are just the fundamentals. There is so much more value to unlock here!
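One example of that extra value: reliability. If your gateway does not already handle fallbacks for you, the same idea can be approximated in application code. A rough sketch, again with placeholder model names, not a production pattern:

```python
# Rough sketch of an application-side fallback chain through the same gateway.
# Model names are placeholders; in real code, catch specific errors (timeouts, rate limits).
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal.example.com/v1",  # hypothetical gateway URL
    api_key="not-a-real-key",
)

def complete_with_fallback(prompt: str, models=("gpt-4o-mini", "claude-3-haiku")) -> str:
    """Try each model behind the gateway in order; return the first successful answer."""
    last_error = None
    for model in models:
        try:
            reply = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return reply.choices[0].message.content
        except Exception as exc:  # narrow this to transient errors in production
            last_error = exc
    raise RuntimeError("All fallback models failed") from last_error
```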
Kong Inc., for example, has recently launched an offering in this space: https://techcrunch.com/2024/02/15/kongs-new-open-source-ai-gateway-makes-building-multi-llm-apps-easier/
https://konghq.com/products/kong-ai-gateway
Picture: Me talking about something interesting. Like, my name, or AI.