
How does it work?

We extract students' interests from academic profiles (learning records, assessment data, and teacher notes) with a single low-cost LLM call, using the prompt: Given this information about a student, return a JSON object {"interests": "..."} containing comma-separated keywords related to their academic interests, learning styles, and educational challenges.
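A minimal sketch of that extraction step, assuming the Anthropic Node SDK and Claude 3 Haiku (any inexpensive model would do); `profileText` is a placeholder for the assembled profile:

```typescript
// Sketch: extract interest keywords from a student profile with a small LLM.
// Assumes the Anthropic Node SDK and Claude 3 Haiku; any inexpensive model works.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

async function extractInterests(profileText: string): Promise<string> {
  const message = await anthropic.messages.create({
    model: "claude-3-haiku-20240307",
    max_tokens: 512,
    messages: [
      {
        role: "user",
        content:
          "Given this information about a student, return a JSON object " +
          '{"interests": "..."} containing comma-separated keywords related to ' +
          "their academic interests, learning styles, and educational challenges.\n\n" +
          profileText,
      },
    ],
  });
  const block = message.content[0];
  // Assumes the model returns bare JSON, e.g. {"interests": "algebra, visual learner, ..."}
  return block.type === "text" ? JSON.parse(block.text).interests : "";
}
```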

We transcribe educational videos using the open-source Whisper model, which produces clean, accurate transcripts of lecture-style audio.
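A sketch of the transcription call, assuming the hosted Whisper endpoint in OpenAI's Node SDK; the file path is illustrative:

```typescript
// Sketch: transcribe a lecture recording with Whisper via OpenAI's audio API.
// Assumes the OpenAI Node SDK; the file path is a placeholder.
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI();

async function transcribe(path: string) {
  const transcription = await openai.audio.transcriptions.create({
    model: "whisper-1",
    file: fs.createReadStream(path),
    response_format: "verbose_json", // includes segment-level timestamps
  });
  return transcription; // segments carry start/end times used later for highlights
}
```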

We don't diarize the transcript to add speaker names, but Anthropic's Claude 3 Haiku, a fast, inexpensive model, can do this with a simple prompt: Label the speakers in this educational content as instructors or participants based on context.
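A sketch of that labelling pass, assuming the Anthropic Node SDK and the Whisper output joined into a single transcript string:

```typescript
// Sketch: have Claude 3 Haiku label speakers instead of running a diarization model.
// Assumes the Anthropic Node SDK; `transcript` is the Whisper output as plain text.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

async function labelSpeakers(transcript: string): Promise<string> {
  const message = await anthropic.messages.create({
    model: "claude-3-haiku-20240307",
    max_tokens: 4096,
    messages: [
      {
        role: "user",
        content: `Label the speakers in this educational content as instructors or participants based on context.\n\n${transcript}`,
      },
    ],
  });
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}
```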

We create personalized highlights using the prompt: Given this video transcript: Title: ${title}. Transcript: ${transcript_with_timing}. Write a few paragraphs for: ${profile}. Explain how this content relates to the student's specific challenges, learning style, and interests. Focus on practical strategies and connections to their academic goals. Respond as JSON paragraphs with timing.
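A sketch of the per-student step, again assuming the Anthropic Node SDK; `title`, `transcriptWithTiming`, and the student `profiles` come from the earlier steps, and the calls run in parallel so adding students barely affects the wall-clock time:

```typescript
// Sketch: generate personalized highlights for each student in one pass.
// Assumes the Anthropic Node SDK; inputs come from the earlier steps.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

async function personalize(title: string, transcriptWithTiming: string, profiles: string[]) {
  return Promise.all(
    profiles.map(async (profile) => {
      const message = await anthropic.messages.create({
        model: "claude-3-haiku-20240307",
        max_tokens: 2048,
        messages: [
          {
            role: "user",
            content: `Given this video transcript: Title: ${title}. Transcript: ${transcriptWithTiming}. Write a few paragraphs for: ${profile}. Explain how this content relates to the student's specific challenges, learning style, and interests. Focus on practical strategies and connections to their academic goals. Respond as JSON paragraphs with timing.`,
          },
        ],
      });
      const block = message.content[0];
      // Assumes the model returns bare JSON as asked; real code would validate this.
      return block.type === "text" ? JSON.parse(block.text) : null;
    }),
  );
}
```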

How long does it take?

The entire analysis for a 20-minute educational video takes under half a minute.

How much does it cost?

Using OpenAI's hosted Whisper with Claude 3 Haiku, a 20-minute video personalized for 6 students typically costs under 30 cents, and each additional student adds about 3 cents.
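One plausible way those numbers break down, assuming OpenAI's listed Whisper price of about $0.006 per audio minute and roughly 3 cents of Haiku tokens per student; treat this as an estimate, not a quote:

```typescript
// Rough cost model (an assumption about the breakdown, not a price quote).
const WHISPER_PER_MINUTE = 0.006; // OpenAI's listed Whisper API price, USD per audio minute
const HAIKU_PER_STUDENT = 0.03;   // approximate Haiku input + output tokens per student, USD

function estimateCost(videoMinutes: number, students: number): number {
  return videoMinutes * WHISPER_PER_MINUTE + students * HAIKU_PER_STUDENT;
}

console.log(estimateCost(20, 6).toFixed(2)); // "0.30" -> about 30 cents for 6 students
```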

Can this run privately?

Yes. With open-weight models like Whisper and Llama 3, you can run the entire pipeline in your school's data center or on district servers, keeping student data private.
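A sketch of what the private setup could look like, assuming an OpenAI-compatible server such as Ollama hosting Llama 3 on a hypothetical internal host; the same client code then never leaves the district network:

```typescript
// Sketch: point the same OpenAI-style client at an in-district server instead of a cloud API.
// Assumes Ollama (or any OpenAI-compatible server) hosting Llama 3; the host name is hypothetical.
import OpenAI from "openai";

const local = new OpenAI({
  baseURL: "http://llm.district.internal:11434/v1", // hypothetical on-prem host
  apiKey: "ollama", // placeholder; local servers generally ignore the key
});

async function labelSpeakersLocally(transcript: string) {
  const response = await local.chat.completions.create({
    model: "llama3",
    messages: [
      {
        role: "user",
        content: `Label the speakers in this educational content as instructors or participants based on context.\n\n${transcript}`,
      },
    ],
  });
  return response.choices[0].message.content;
}
```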

How is this priced?

This is not a product. It's a demo of personalized learning technology. Email s.anand@gramener.com for details.