pastescreenshot 56 minutes ago [-]
The interesting question to me is not whether the system can generate a plausible PR-time test, but whether the useful ones survive after the PR is gone. If Canary catches a real regression, how often can that check be promoted into a stable long-lived regression test without turning into a flaky, environment-coupled browser script? That conversion rate feels closer to the real moat than the generation demo.
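Concretely, the gap I mean might look like this (a Playwright sketch; the URLs and selectors are invented):

    // Hypothetical sketch: the same check at PR time vs. promoted to a stable regression test.
    import { test, expect } from '@playwright/test';

    // PR-time form: coupled to a preview URL and wall-clock waits -- flaky once the PR env is gone.
    test.skip('settings save (PR-time, environment-coupled)', async ({ page }) => {
      await page.goto('https://pr-1234.preview.example.com/settings'); // dies with the PR
      await page.waitForTimeout(3000);                                 // races the app
      await page.click('text=Save');
      expect(await page.locator('.toast').count()).toBe(1);
    });

    // Promoted form: baseURL from config, condition-based waits, assertions on state rather than timing.
    test('settings save (promoted regression test)', async ({ page }) => {
      await page.goto('/settings');
      await page.getByRole('button', { name: 'Save' }).click();
      await expect(page.getByRole('status')).toHaveText(/saved/i);
    });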
vivzkestrel 2 hours ago [-]
- There are at least 10 dozen code review startups at this point, and I see a new one on YC every week.
- What is your differentiator?
blintz 13 hours ago [-]
I really want automated QA to work better! It's a great thing to work on.
Some feedback:
- I definitely don't want three long new messages on every PR. Max 1, ideally none? Codex does a great job just using emoji.
- The replay is cool. I don't make a website, so maybe I'm not the target market, but I'd like QA for our backend.
- Honestly, I'd rather just run a massive QA run every day and have any failures bisected, rather than per-PR (see the sketch after this list).
- I am worried that there's not a lot of value beyond the intelligence of the foundation models here.
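For the bisect point above, one way to wire that up (a sketch assuming a Node project; the script name, the tsx runner, and the Playwright test command are all assumptions):

    // qa-bisect.ts -- hypothetical sketch: run the nightly QA suite as a `git bisect run` target.
    // Usage: git bisect start <bad> <good> && git bisect run npx tsx qa-bisect.ts
    import { spawnSync } from 'node:child_process';

    // Run the QA suite against the currently checked-out commit.
    const result = spawnSync('npx', ['playwright', 'test', '--reporter=line'], {
      stdio: 'inherit',
    });

    // git bisect run treats exit 0 as good, 1-124 as bad, and 125 as "skip this commit".
    process.exit(result.status === 0 ? 0 : 1);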
monkpit 34 minutes ago [-]
Isn’t the last point the case with every AI startup? Nobody has a moat and it’s tough to build one because the playing field is so level.
Bnjoroge 12 hours ago [-]
Agree on your last point, and it's going to be a very bitter lesson. In any case, you probably want to shift a lot of the code verification as far left as possible, so doing review at PR time isn't the right strategy imo. And Claude/Codex are well positioned to do the local review.
Visweshyc 13 hours ago [-]
Thanks for the feedback!
- Agreed that the form factor can be condensed, with a link out to the detailed information
- Given our codebase understanding, the backend is where we're looking to expand and provide value next
- The intelligence of the models does lay the foundation, but combining their strengths unlocks a system of specialized agents that each reason about the codebase differently to catch the unknown unknowns
recsv-heredoc 10 hours ago [-]
The market timing on this is perfect - it fills a major current gap I've seen emerging.
I've heard a few stories of QA departments nearing burnout because of the rate developers are shipping at these days. We're even looking for any available QA resources we can pull in ourselves.
No harm meant with the question - but what's the advantage over Claude Code + the GitHub integrations?
Visweshyc 9 hours ago [-]
We evaluated test generation using Claude Code against our purpose-built harness and measured the quality of the tests at catching the unknown unknowns. We noticed Claude Code misses the second-order effects that actually break applications. You also need infrastructure to execute the tests: browser fleets, ephemeral environments, and data seeding all have to be handled.
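As a rough illustration of the data-seeding piece (a sketch only; the endpoints and fixture shapes are invented for illustration, not our actual API):

    // global-setup.ts -- hypothetical sketch of seeding an ephemeral test environment before a run.
    import { request, type FullConfig } from '@playwright/test';

    export default async function globalSetup(config: FullConfig) {
      const api = await request.newContext({ baseURL: process.env.SEED_API_URL });

      // Seed a known user and a known dashboard so every run starts from the same state.
      await api.post('/api/test-users', { data: { email: 'qa@example.com', role: 'admin' } });
      await api.post('/api/dashboards', { data: { title: 'seeded-dashboard', cards: 1 } });

      await api.dispose();
    }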
warmcat 13 hours ago [-]
Good work. But what makes this different from just another feature in Gemini Code Assist or GitHub Copilot?
Visweshyc 12 hours ago [-]
Thanks! To execute these tests reliably you would need custom browser fleets, ephemeral environments, data seeding, and device farms.
mikestorrent 42 minutes ago [-]
If that's what you guys are bringing, you should put that more up front; focus on making it clear you're providing ingredients that Claude et al will not be providing on their own without Real Actual Software to do it.
solfox 13 hours ago [-]
Not a direct competitor but another YC company I use and enjoy for PR reviews is cubic.dev. I like your focus on automated tests.
Visweshyc 12 hours ago [-]
Thanks! We believe executing the scenarios and showing what actually broke is what closes the loop.
Bnjoroge 12 hours ago [-]
what kinds of tests does it generate and how's this different from the tens of code review startups out there?
Visweshyc 11 hours ago [-]
The system focuses on going beyond the happy path, generating edge-case tests that try to break the application. For example, a Grafana PR added visual drag feedback to query cards. The system came up with an edge case like: does drag feedback still work when there's only one card in the list, with nothing to reorder against?
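In Playwright terms, that edge case might look roughly like this (the selectors are invented for illustration; this is not the generated test itself):

    // Hypothetical sketch of the single-card drag edge case.
    import { test, expect } from '@playwright/test';

    test('drag feedback renders with a single query card', async ({ page }) => {
      await page.goto('/d/dashboard-with-one-query');

      const cards = page.locator('[data-testid="query-card"]');
      await expect(cards).toHaveCount(1); // the edge case: nothing to reorder against

      const box = await cards.first().boundingBox();
      if (!box) throw new Error('card not rendered');

      // Start a drag and check that the visual feedback still appears with no drop target.
      await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2);
      await page.mouse.down();
      await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2 + 40, { steps: 5 });
      await expect(page.locator('[data-testid="drag-ghost"]')).toBeVisible();
      await page.mouse.up();
    });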
solfox 13 hours ago [-]
Looks interesting! Looks like perhaps no support for Flutter apps yet?
Visweshyc 12 hours ago [-]
Yes, we currently support web apps, but we plan to extend the foundation to test mobile applications on device emulators.