Healthcare, GenAI, Editorial/Content Creation
Amazon SageMaker, Anthropic's Claude, Llama 2, LangChain, Lambda, S3 Buckets
AI Engineering, DevOps
Project Manager
0-PoC in six weeks
HIPAA Compliance
More than one in five Americans suffers from mental illness every year. Dr. Katz amplifies wellness therapy to meet the scale of need.
Dr. Katz offers a personalized patient engagement platform that uses video to transform care, helping providers focus on diagnosis, communication and improvement. The startup partnered with Loka on its GenAI Workshop, hoping to uncover scaling strategies and unlock efficiencies in app design and deployment, all grounded in generative AI.
Dr. Katz enlisted Loka to build generative AI services that facilitate clinician-patient collaboration and clinical professional development, simplify video publishing and automate routine clinical tasks. Additionally, all system architecture had to comply with HIPAA privacy regulations.
The first use case the team aimed for was a streamlined video-publishing workflow that would reduce administrative burden, save valuable time and improve quality control.
Loka integrated a large language model (LLM) on Amazon SageMaker, with the primary goal of identifying potential vulnerabilities early. Loka’s AWS-certified team of expert ML engineers then developed a refined pipeline using a limited number of content samples.
“From our very first conversations with the Loka team,” said Nathaniel Hundt, Dr. Katz founder and CEO, “they were extremely interested in the work and the mission of the company and would roll up their sleeves alongside us to figure out the best possible solutions.”
Ultimately Hundt chose to build on Amazon Bedrock with Anthropic's Claude as the LLM, due to its performance, cost efficiency and HIPAA compliance. This approach leveraged LangChain for efficient prompt creation and LLM handling. His goal is to use the best available model for the company's use cases, now and in the future.
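A Bedrock call to Claude of the kind described above can be sketched with boto3's bedrock-runtime client. The model ID, prompt wording and token limit below are illustrative assumptions, not Dr. Katz's production configuration; in practice, LangChain's Bedrock integration wraps this same request format.

```python
import json

# Illustrative model ID; Bedrock exposes several Claude versions.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def claude_request_body(prompt: str, max_tokens: int = 512) -> str:
    """Build the Anthropic Messages-API request body that Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def suggest_metadata(prompt: str) -> str:
    """Invoke Claude on Bedrock (requires AWS credentials and model access)."""
    import boto3  # assumed available where this runs
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(body=claude_request_body(prompt),
                                   modelId=MODEL_ID)
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Keeping the request-body construction separate from the network call makes it easy to swap model IDs later, matching Hundt's goal of always using the best available model.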
Ever mindful of budget, Loka configured AWS cost controls such as budget alerts and infrastructure pausing during inactivity.
“We find that this is something that we’re able to deploy because we can afford it, and that matters to us,” Hundt said. “We want to roll generative AI into our application in a manner that’s not cost prohibitive.”
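Budget alerts like the ones mentioned above can be registered through the AWS Budgets API. This is a minimal sketch, assuming a hypothetical budget name, dollar limit and alert address; none of these values come from Dr. Katz's actual setup.

```python
# Hypothetical monthly cost budget; name and amount are illustrative.
MONTHLY_BUDGET = {
    "BudgetName": "genai-inference-monthly",
    "BudgetLimit": {"Amount": "500", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

# Email an alert once actual spend crosses 80% of the limit.
ALERT_AT_80_PERCENT = {
    "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,  # percent of the budget limit
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [
        {"SubscriptionType": "EMAIL", "Address": "ops@example.com"}
    ],
}

def create_budget(account_id: str) -> None:
    """Register the budget and alert with AWS Budgets (needs credentials)."""
    import boto3  # assumed available in the deployment environment
    client = boto3.client("budgets")
    client.create_budget(
        AccountId=account_id,
        Budget=MONTHLY_BUDGET,
        NotificationsWithSubscribers=[ALERT_AT_80_PERCENT],
    )
```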
To create a scalable system, Loka built a workflow that fetches transcriptions from an S3 bucket and provides LLM-based suggestions. The workflow is handled by a Lambda function, triggered when a transcription file is uploaded to the bucket. The function then sends requests to the SageMaker endpoint hosting the LLM, which in turn provides suggestions for titles, descriptions and tags.
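A minimal sketch of that Lambda handler might look like the following. The endpoint name, prompt wording and payload shape are assumptions for illustration, not the production code.

```python
import json
import urllib.parse

SAGEMAKER_ENDPOINT = "video-metadata-llm"  # hypothetical endpoint name

def build_prompt(transcription: str) -> str:
    """Ask the LLM for a title, description and tags for one video."""
    return (
        "Given this video transcription, suggest a title, a short "
        "description, and content tags.\n\nTranscription:\n" + transcription
    )

def parse_s3_event(event: dict) -> tuple:
    """Extract (bucket, key) from the S3 put event that triggers the Lambda."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key

def handler(event, context):
    """Fetch the uploaded transcription, then ask the LLM for metadata."""
    import boto3  # provided by the Lambda runtime
    bucket, key = parse_s3_event(event)
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=SAGEMAKER_ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"inputs": build_prompt(body)}),
    )
    return json.loads(response["Body"].read())
```

Because the trigger is the S3 upload itself, the pipeline scales with publishing volume and requires no polling or manual steps.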
- Deployed LLMs on Amazon SageMaker
- Efficient prompt creation and LLM workflow handling via LangChain with Llama 2
- AWS budget alerts and infrastructure pausing curb cost
- Scalable workflow fetches transcriptions from S3 bucket
- Lambda function integration triggers enhancements via Amazon SageMaker
Significant boost in video-publishing speed and the ability to deliver personalized information to patients and clinicians.
Ensured the generative AI implementation met HIPAA regulations and privacy requirements.
Improved quality and consistency through automated process for generating suggested titles, descriptions and content tags.
Best-practice infrastructure management brought major savings, with more ahead via Amazon Bedrock.
“We've enjoyed every interaction with the Loka team. We found that they have skilled leaders who operate on the product side and understand the direction of the market through their relationships with other leading customers.”