Challenge

A financial institution approached us with a unique set of requirements for a conversational agent. Their primary concern was security and privacy, necessitating a solution that could operate entirely locally, without any connection to the public internet. This requirement was driven by the sensitive nature of the financial data they handle and the need to comply with stringent regulations regarding data protection and privacy.

The institution sought to utilize local, open-source models for the conversational agent, emphasizing the importance of transparency and the ability to audit and modify the system as needed. This approach would allow them to tailor the conversational agent closely to their specific needs while maintaining control over the technology stack and data processed.

Additionally, the customer expressed the need for a Document Question Answering (QA) instance. This component was crucial for testing Retrieval Augmented Generation (RAG) capabilities on various data sets, enabling the conversational agent to provide accurate and relevant responses by drawing information from an extensive internal knowledge base. The challenge lay in creating a robust, secure, and private chat solution that could leverage advanced AI functionalities while operating within the customer's strict security and privacy parameters.

Solution

To meet the challenge presented by the financial institution, our solution encompassed a multi-faceted approach that prioritized security, privacy, and functionality. We designed a system that operates within a Virtual Private Cloud (VPC) on AWS, ensuring that the conversational agent and all associated data remain isolated from the public internet. This environment provided a secure foundation for the entire solution, leveraging AWS's robust infrastructure to maintain data integrity and confidentiality.

A VPN connection was established between the VPC and dedicated client computers. This setup ensured a secure, encrypted channel for all communication between the financial institution's users and the conversational agent, further strengthening the privacy and security of the system.

At the core of the solution is the local deployment of a conversational agent built on open-source models. By opting for open-source models, we gave the financial institution the flexibility to inspect, modify, and enhance the agent as their needs evolve. This approach also facilitated compliance with industry standards and regulations, since the institution could verify that every component of the system met its stringent requirements.

In addition to the conversational agent, we deployed an open-source Document QA system locally. This system provided the Retrieval Augmented Generation (RAG) functionality, allowing the agent to accurately retrieve information from the institution's internal documents and data repositories. This capability was crucial for delivering reliable, contextually relevant answers to user queries, drawing on a vast internal knowledge base.
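To make the retrieval step concrete, the sketch below shows the core of a RAG pipeline: embed the query and the documents, then rank documents by cosine similarity. For a self-contained example it uses a toy hashed bag-of-words embedding in place of the actual embedding model; in the deployed system this call would go to the locally hosted model instead.

```python
import math
from collections import Counter


def embed(text: str, dim: int = 256) -> list[float]:
    """Toy stand-in for a real embedding model: a normalized,
    hashed bag-of-words vector. In production, replace this with a
    call to the locally deployed embedding model."""
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding
    (vectors are unit-normalized, so the dot product is the cosine)."""
    q = embed(query)
    scored = sorted(
        documents,
        key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
        reverse=True,
    )
    return scored[:top_k]


docs = [
    "Our savings accounts offer a 2.1 percent annual interest rate.",
    "The cafeteria is open from 8 am to 4 pm on weekdays.",
    "Mortgage applications require two years of income statements.",
]
print(retrieve("What interest rate do savings accounts pay?", docs, top_k=1))
```

The retrieved passages are then prepended to the user's question as context for the LLM, which is what lets the agent answer from the internal knowledge base rather than from its training data alone.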

Furthermore, we implemented a local Flowise deployment for creating custom AI applications. Flowise allowed for the seamless integration of various AI functionalities into the conversational agent, enabling the financial institution to tailor the system's capabilities to their specific operational needs.

Overall, this solution provided a secure, private, and highly functional conversational agent that met the financial institution's requirements for privacy, security, and AI-driven efficiency, all while maintaining a local and disconnected operational environment.

AI Tech Stack

  • AWS VPC Setup
  • HuggingChat instance
  • Text Generation Inference deployment
  • Mixtral 8x7B as the LLM for the conversational agent
  • multilingual-e5-large embedding model
  • Terraform for efficiently building and tearing down infrastructure configurations during the prototyping phase
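As an illustration of how an application inside the VPC talks to the Text Generation Inference deployment, a minimal client might look like the sketch below. The endpoint address is hypothetical; the request body follows TGI's `/generate` API (`inputs` plus a `parameters` object).

```python
import json
import urllib.request

# Hypothetical private-subnet address of the TGI instance; traffic
# never leaves the VPC.
TGI_URL = "http://10.0.1.5:8080/generate"


def build_payload(prompt: str, max_new_tokens: int = 256) -> dict:
    """Assemble a request body for TGI's /generate endpoint."""
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "temperature": 0.2},
    }


def query_tgi(prompt: str) -> str:
    """POST the prompt to the local TGI instance and return the completion."""
    req = urllib.request.Request(
        TGI_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["generated_text"]
```

In the actual deployment, HuggingChat and the Document QA system play the role of this client, so end users never interact with the inference endpoint directly.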