How Inpromptyou works

Two sides of the same coin: you create an assessment, candidates take it in a sandboxed AI environment, and both of you get useful data out the other end.

For test creators

Set up an assessment in under five minutes

1. Define the task

Write what candidates should accomplish. This could be drafting marketing copy, debugging code, summarizing a legal document, or anything else you'd use an LLM for at work.

  • Choose from GPT-4o, Claude, or Gemini
  • Write a clear brief with specific deliverables
  • Define what a good output looks like
2. Set the constraints

Token budget, time limit, max prompt attempts. These constraints are what make the assessment meaningful: they separate people who can prompt efficiently from people who brute-force their way through.

  • Token budget caps total usage across all prompts
  • Time limits keep things practical
  • Fewer attempts needed = higher score
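The constraints above could be captured in a small assessment definition. This is a purely illustrative sketch: the field names, values, and the `tokens_remaining` helper are assumptions, not Inpromptyou's actual schema.

```python
# Hypothetical assessment definition. Field names are illustrative only,
# not Inpromptyou's real configuration format.
assessment = {
    "task": "Summarize the attached legal document in under 200 words.",
    "model": "gpt-4o",           # or a Claude / Gemini model
    "token_budget": 4000,        # caps total tokens across all prompts
    "time_limit_minutes": 20,    # keeps things practical
    "max_attempts": 5,           # fewer attempts used means a higher score
}

def tokens_remaining(used: int, budget: int) -> int:
    """The budget is shared across all prompts; it never goes negative."""
    return max(budget - used, 0)
```

For example, a candidate who has spent 1,500 tokens against the 4,000-token budget above would have 2,500 left for their remaining attempts.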
3. Share the link, watch results come in

Each test gets a unique URL. Send it to candidates, embed it in your ATS, or post it on a job listing. Results arrive in real time on your dashboard.

  • Candidates get their score immediately
  • You get detailed analytics on each approach
  • Compare candidates side by side

For test takers

Prove your skills, don't just describe them

1. Review the brief

You'll see the task description, which AI model you'll be using, and the constraints: how much time, how many tokens, how many attempts. Plan before you type.

2. Work with the AI

The sandbox gives you a real chat interface with the specified model. Write prompts, review outputs, iterate. The platform tracks everything: token usage, attempt count, time spent. Efficiency matters as much as getting the right answer.

3. Get your Prompt Score

Submit when you're satisfied. You'll immediately see your score (0-100) with a breakdown across efficiency, speed, accuracy, and attempts. See where you rank. Put it on your LinkedIn if you're proud of it.

The Prompt Score

A composite metric from 0 to 100 that measures how effectively someone uses AI to accomplish a task. Four weighted components:

Accuracy (35%)

Does the output match what was asked for? Evaluated against the success criteria defined by the test creator.

Efficiency (30%)

How many tokens did they burn relative to the budget? Getting the same result with fewer tokens scores higher.

Speed (20%)

How fast did they finish? Faster is better, as long as quality holds.

Attempts (15%)

How many prompts did they need? Fewer, more precise prompts indicate stronger prompting ability.
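The weighting above can be sketched as a simple function. The weights come straight from the breakdown; everything else (component names, 0-1 normalization, one-decimal rounding) is an assumption, not Inpromptyou's actual scoring implementation.

```python
def prompt_score(accuracy: float, efficiency: float,
                 speed: float, attempts: float) -> float:
    """Composite Prompt Score on a 0-100 scale.

    Each component is assumed to be pre-normalized to [0, 1], where 1 is
    best (e.g. fewer tokens used -> higher efficiency). The 35/30/20/15
    weights match the breakdown above.
    """
    weights = {
        "accuracy": 0.35,
        "efficiency": 0.30,
        "speed": 0.20,
        "attempts": 0.15,
    }
    components = {
        "accuracy": accuracy,
        "efficiency": efficiency,
        "speed": speed,
        "attempts": attempts,
    }
    return round(100 * sum(weights[k] * components[k] for k in weights), 1)
```

Under this sketch, a candidate with strong accuracy (0.9) and efficiency (0.8) but middling speed (0.5) and attempts (0.6) would land at 74.5, since accuracy and efficiency together carry 65% of the weight.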

Ready to try it?

Create your first assessment in under five minutes, or take a sample test to see the candidate experience.