Early access

Your screen.
Their teacher.

Kyu watches a user's screen, tells them what to click, points at it, waits for them to do it, then moves to the next step. The user learns by doing, not by reading a manual or watching a video.

The problem
People don't read documentation. They don't watch training videos. They click around until they give up, then they raise a support ticket.

The cost
Every new student, employee, or customer who can't figure out your software costs you time, support hours, and adoption.

The fix
An AI that sits on their screen and walks them through it. Step by step. In real time. Like having a patient tutor looking over their shoulder.

A user who's never opened Figma designs a button from scratch.
Guided entirely by Kyu.

A three-step loop that runs itself.

01

Kyu speaks the instruction

A voice tells the user exactly what to do next. An arrow appears on screen pointing at the element they need to click, type into, or select.

02

The user acts at their own pace

Kyu waits. The user takes as long as they need. When they finish the action and stop moving, Kyu notices.

03

Kyu verifies and advances

Kyu takes a screenshot, checks the result against what it expected, and either confirms ("nice.") and moves to the next step, or re-explains what went wrong.

Anyone who onboards users to software
has this problem.

Accessible onboarding

Not everyone learns from written docs. Kyu speaks instructions aloud and adapts to the user's pace. It works for vision-impaired users, neurodivergent learners, ESL speakers, and anyone who learns better by doing than by reading.

Enterprise tool training

Every company has that one internal system nobody knows how to use. CRM, HR portal, ticketing system, expense tool. Instead of recording training videos that go out of date, define the steps and let Kyu teach each employee directly.

Support ticket reduction

"How do I export a report?" "Where do I change my password?" Tickets like these cost $15–25 each to resolve. Kyu turns them into guided walkthroughs that happen inside the product. The user learns, and the ticket never gets created.

Not a chatbot. Not a tooltip tour.
Something different.

It sees the screen

Kyu captures what the user is looking at and understands it. Not pixel matching. Actual comprehension of UI elements, text, and layout.

It talks and listens

Instructions are spoken aloud, not displayed as text walls. Users can talk back using push-to-talk to ask questions or get clarification mid-tutorial.

It reads the UI state

Beyond screenshots, Kyu reads accessibility data: what's selected, what's focused, what text is highlighted. It knows what the user did, not just what's visible.

It checks the work

After every step, Kyu verifies the outcome. If the user did it right, it moves on. If not, it explains what happened and helps them try again. No one gets stuck.

Want to try it?

Kyu is in early access. We're testing it with a small group of universities and organisations. If you're dealing with onboarding, training, or software adoption, book a short call and we'll show you what it can do with your tools.

Book 15 minutes