Applied on the website
Round 1:
AI interview
- Asked me about myself
- Asked to design a URL shortener, which I explained
- Asked how I would handle deletions in it (a small sketch follows this list)
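As a minimal sketch of how deletions could be handled (illustrative only, not what was said in the interview): soft-delete with tombstones, so a deleted short code is never reused and resolves to 410 Gone rather than 404. All names here are hypothetical.

```python
import secrets

class ShortenerStore:
    """Toy in-memory URL shortener illustrating tombstone-based deletion."""

    def __init__(self):
        self.urls = {}           # code -> long URL
        self.tombstones = set()  # deleted codes, kept so they are never reused

    def shorten(self, long_url: str) -> str:
        code = secrets.token_urlsafe(5)
        # Skip both live and tombstoned codes so old links stay unambiguous.
        while code in self.urls or code in self.tombstones:
            code = secrets.token_urlsafe(5)
        self.urls[code] = long_url
        return code

    def resolve(self, code: str) -> str | None:
        if code in self.tombstones:
            return None  # a real service would return HTTP 410 Gone here
        return self.urls.get(code)  # None -> 404

    def delete(self, code: str) -> None:
        if self.urls.pop(code, None) is not None:
            self.tombstones.add(code)  # also evict from any cache layer here
```

The subtle part in a real deployment is cache invalidation: the delete also has to evict the code from any Redis or CDN layer in front of the store, otherwise deleted links keep resolving until the cached entry expires.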
Round 2:
Take-home assignment: https://github.com/MdSadiqMd/Mercor-Assignment
Round 3:
This round was conducted for all the candidates at once; we were told to optimize the code for throughput: https://github.com/MdSadiqMd/mercor-eng-takehome-main. At the end, they asked how the interview process had been. Everyone else praised it; I yelled at them that it wasn't nice.
Round 4:
Given an object structure whose data lives in S3, we need to ingest it into a database and optimize search over it.
Calculation:
- 12 kB per candidate
- 1024-dimensional embedding at 4 bytes per dimension = 4 kB per candidate (4 bytes * 1024)
- 500 million candidates
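A quick back-of-the-envelope check of those numbers, assuming float32 embeddings (which is what 4 bytes per dimension implies) and treating the 12 kB and the 4 kB as separate per-candidate costs (an assumption; the notes don't say whether the 12 kB includes the embedding):

```python
# Storage math for 500M candidates: 12 kB record data + 4 kB embedding each.
CANDIDATES = 500_000_000
RECORD_BYTES = 12 * 1024           # 12 kB per candidate
EMBEDDING_BYTES = 1024 * 4         # 1024 float32 dims * 4 bytes = 4 kB

print(f"records:    {CANDIDATES * RECORD_BYTES / 1e12:.1f} TB")     # ~6.1 TB
print(f"embeddings: {CANDIDATES * EMBEDDING_BYTES / 1e12:.1f} TB")  # ~2.0 TB
```

So the embeddings alone are about 2 TB, which is what makes the RAM-versus-query question below interesting: 2 TB fits in memory across a handful of machines.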
Some questions:
- Which is more costly: storing in RAM or querying? If RAM is cheap, how can we use it effectively?
- How can we get distributed search from Turbopuffer?
- How do we shard the database? (Sharding by itself won't help with search.)
- Retrieving from S3 is not that simple: network crashes, worker crashes, all kinds of failures happen. How do we handle them? (See the ingestion sketch after these questions.)
- How can I get data for a whole continent like Asia if I had sharded only by country? (See the scatter-gather sketch below.)
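On the S3 failure question, a minimal sketch of resilient ingestion, assuming boto3 (the bucket, prefix, and write_to_db are hypothetical): boto3's adaptive retry mode absorbs transient network errors, and a durable checkpoint set makes a crashed worker resumable.

```python
import boto3
from botocore.config import Config

# Retry transient S3/network failures automatically (up to 10 attempts).
s3 = boto3.client("s3", config=Config(retries={"max_attempts": 10, "mode": "adaptive"}))

def write_to_db(key: str, body: bytes) -> None:
    # Hypothetical downstream write; must be idempotent so retries are safe.
    pass

def ingest(bucket: str, prefix: str, done: set[str]) -> None:
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key in done:  # checkpoint: skip keys a crashed worker already finished
                continue
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            write_to_db(key, body)
            done.add(key)    # persist this checkpoint durably in a real system
```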
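And on the sharding and Asia questions: sharding by country still works if the query layer does scatter-gather, fanning the query out to every relevant country shard and merging the local top-k results; the global top-k is always contained in the union of the per-shard top-k lists. The shard client here is a hypothetical interface, not Turbopuffer's actual API, though one Turbopuffer namespace per country could play the same role. Each shard holding its slice of the ~2 TB of embeddings in RAM is also one answer to the RAM question.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

ASIA = ["IN", "CN", "JP", "KR", "SG"]  # illustrative subset of country shards

def search_continent(shards: dict, query_vec, countries=ASIA, top_k: int = 10):
    """Fan a vector query out to each country shard and merge the local top-k hits."""
    def query_shard(country):
        # Hypothetical shard client: returns this shard's local top_k results
        # as (score, candidate_id) pairs, higher score = better match.
        return shards[country].query(query_vec, top_k=top_k)

    with ThreadPoolExecutor() as pool:
        per_shard = pool.map(query_shard, countries)

    # The global top_k is guaranteed to lie in the union of the local top_k lists.
    return heapq.nlargest(top_k, (hit for hits in per_shard for hit in hits))
```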
Verdict: No response yet; I'm taking it as a rejection.