Your Claude History Could Get You Hired
The end of painful job searches
I got an email today!
Nobody is encouraged to send me emails like this. I do not appreciate the E.E. Cummings schtick, nor the effortful string of emojis. Especially coming from Google, now the second biggest company on earth. Gemini did not cook on ad copy.
That said, it’s pretty obvious that it could cook up a decent CV. I’ve seen it do so for a number of friends and there aren’t really drawbacks to having it make things look good and help drill down on what’s relevant to include. The hours-long process of tailoring things to every role is now much quicker and a good deal of guff has been removed, painlessly.
Those who are doing this well will have a temporary advantage as everyone adjusts to the new equilibrium. They’ll get more interviews at the margin. Life will be a little easier. And then the jig will be up, everyone will be doing it.
So interviews will become the new hiring mechanism. They’ll be easier to conduct en masse because employers can have Claude Mythos run the Zoom calls, take notes, and make decisions based on a conversation with their interview bot. But then the savvy will train up AI versions of themselves to attend the interviews and give the best possible answers. Eventually everyone will be doing that too. Another jig’s up.
This is a cycle of signal death. Everyone wants to communicate to prospective employers how good they are and so there’s an ever-increasing supply of resumes as their price heads to zero, which increasingly looks like the equilibrium outcome. Everyone will be able to make a thousand tailored CVs and send them to every interesting opening and there won’t be much of a way to tell them apart. Signal becomes noise.
How do you get back the signal?
One option is to do a digital version of blue book testing, which is many people’s preferred solution to the AI-in-education ‘crisis’. You’d figure out a way to do it like the Remote LSAT or something, where Mythos is carefully proctoring every applicant, unblinkingly, as they peck nervously away at their keyboard. Instead of ever sending in an application, you’d just enlist to take the hiring test on xx/xx/xxxx and any job search would just be a series of similar tests.
Perhaps that will work. Perhaps people will find ways to send their AI selves into those testing spaces too. If they don’t, what they’re left with is a new kind of system stress, not like the old stress, which will reward a different set of people. Out with LinkedIn-maxxing and in with cool, clutch heads.
That’s (generally) a good trait to have, but this setup measures it independently of tool use. If Mythos (or whatever AI you think attains these capabilities) can conduct interviews and proctor tests, surely it’s going to be used on the job. So what’s the use of measuring applicant efficacy without tool assistance? Surely some very capable people will be badly suited to tools and thus less desirable than less capable people who are naturals. Employers aren’t hiring for cosmic merit.
Before I speculate on another option for getting back to signal, I want to float an idea. What seems clear to me is that any output internal to the hiring process is either going to be highly noisy because of AI (CVs, online interviews, open-book tests) or emitting the wrong signal because it avoids AI (in-person interviews, closed-book tests). The solution will require going to outputs external to the hiring process.
One classic option on the table that does this is looking at the applicant’s past portfolio of work. If I were a film director putting together an original soundtrack, I’d know exactly what I was getting quality-wise if I hired Jonny Greenwood or Trent Reznor. Regardless of what tools they used, they made what they made and can probably do something like it again.
What would it mean for someone in payroll to have a portfolio though? Maybe more professions become portfolio-based. That could work. It’s also true that most aren’t today, so most people don’t have portfolios.
Or do they?
These are my recent chats in ChatGPT and Claude. They are what they are. At this point there’s a trail of these on both services going back three(?) years. Taken in screenshots like this, they are not very informative at all. But as a time-series body of data on my interests, attitudes, habits, and ability to work with these tools, it’s probably the most informative such thing in existence.
That body of data is my portfolio. Everyone who uses the AIs has a portfolio of thought.
Now, I’m not saying we just turn over everyone’s chats with their Claude therapist to every company they might be interested in working for. Privacy is totally dying, but this needn’t speed it along. Instead, OpenAI or Anthropic could let you export a certified cognitive profile in the same way they made those fun Spotify Wrapped-style moodboards based on your chats for the year. This would strip out the HIPAA stuff you don’t owe your employer.
Because there will be so much information to parse through, the full cognitive portfolio would be examined by an employer’s Mythos, with the executive summary passed on to the dude making hiring decisions. They can then ask some follow-up questions to Mythos and get basically a full picture of everything they’d want to know.
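To make the idea concrete, here is a toy sketch of what a certified-export pipeline might do at its simplest: drop the sensitive chats, then condense what remains into a time-series summary an employer’s bot could read. Every name here is hypothetical — no such export API exists from OpenAI or Anthropic, and the topic tags and filters are stand-ins for whatever a real certification scheme would apply.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical stand-in for the categories a certified export would redact.
SENSITIVE_TAGS = {"health", "therapy", "relationships"}

@dataclass
class Chat:
    year: int
    topic: str   # e.g. "coding", "philosophy", "health"
    prompt: str

def build_portfolio(chats: list[Chat]) -> dict:
    """Strip sensitive chats, then count topics per year to form
    a crude time-series 'portfolio of thought'."""
    public = [c for c in chats if c.topic not in SENSITIVE_TAGS]
    summary: dict[int, Counter] = {}
    for c in public:
        summary.setdefault(c.year, Counter())[c.topic] += 1
    return {year: dict(counts) for year, counts in summary.items()}

history = [
    Chat(2023, "coding", "debug this regex"),
    Chat(2023, "health", "private"),
    Chat(2024, "philosophy", "is Parfit right about identity?"),
]
print(build_portfolio(history))  # the "health" chat never leaves the building
```

The real version would obviously summarize content rather than count topic labels, but the shape is the same: redaction first, aggregation second, and only the aggregate ever reaches the hiring bot.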
Hiring is saved.
Derek Parfit has come under fire for spending too much time in the weeds addressing objections.1 I can’t end up like that. For this reason, I’ll just address a couple.
There’s good old Goodhart—we make a measure (your chat history) a target and people turn it into a bad measure by optimizing for it. In the more straightforward case, where you just begin asking more intelligent questions about more interesting subject matter, I think we should admit that this is good for people; akin to how it’s good that people end up taking a survey English course, now 90% target and 10% measure, to get their degree. But this isn’t quite the college case, since there people try to optimize for a degree and end up taking a great deal of trivia alongside their Shakespeare. People trying to optimize their chat history will have to do so by… sharpening their thinking. How well they do at this is itself useful signal. Goodhart’s Law is on our side.
There is also a less straightforward and wholesome version of this objection.
So we’ve spent this article imagining AI good enough to perfect CVs, stage immaculate realistic interview performances, and perform economically important tasks with human supervision. If that’s what we’re running with, why not expect the AI to be able to spin up sock puppet Claude accounts filled with fake histories aimed solely at convincing that Mythos hiring bot to praise their human to the head hiring honcho?
I suspect more and more AI services will collect IDs in the future, so that’s one response. Maybe only ID’d accounts get to commission a certified portfolio. There’s also the sheer computational effort of running servers for years on end to produce a convincingly time-stamped fake that looks errant and human throughout. The CVs and interview performances are one-shot tasks, basically. The capabilities required to complete the many-, many-shot task of essentially faking a life should be much more robust. Once hired, there’s also the matter of continuing to carry off the fraud. It’s a very uphill battle.
Bringing things back to earth for the next one: what about all the people for whom their ChatGPT history is actually not a very good signal at all? I mean, I have one friend who until two weeks ago didn’t even have a Claude subscription. I have other friends who still primarily talk to other humans. Yet more who have probably never queried the AI about anything but their college papers.
To this I say two things.
First, that is still a signal. You don’t use the latest technology. Maybe there’s other evidence about you that can be used to assess whether you’d be good at adapting to the tools if asked to on the job; maybe not. You could imagine a really hipster employer smiling upon a submitted cognitive portfolio that indicates very little engagement with AI.
And second, we’re clearly still in the transitional stage of AI uptake. Sure, lots of us aren’t using it every single day. We’re still at the point where its mundane day-to-day utility isn’t obvious to everyone. But Mythos is coming. Then its successor. Then its. So on and so on. Everyone will be using it, more and more, and you’re looking at a gradual phasing into the new hiring order as the share of AI-natives advances across time.2
Okay, that should cover the worthy objections. Time to start optimizing for the future of hiring: ask more questions, frame them better and more precisely, throw out more weird hypotheticals, and keep an open mind.3
Conversations with Tyler ‘Elijah Millgram on the Philosophical Life (Ep. 125)’ [Transcript]
COWEN: As you have tried to apply biography of a sort to John Stuart Mill, to Nietzsche — if you did the same to Derek Parfit, what kind of understanding do you come away with?
MILLGRAM: Don’t spend your whole life living at All Souls [College].
COWEN: And what’s wrong with All Souls?
MILLGRAM: All Souls is wonderful, but don’t spend too much time.
COWEN: That’s the margin, right? We’re economists here, in addition to philosophers.
MILLGRAM: Well, Parfit wrote a wonderful first book; I still teach this first book, the Reasons and Persons book. His last book was horrifically awful. I don’t know if I want to blame it on the institution . . .
COWEN: Two volumes. It’s trying to reply to every possible criticism, right?
MILLGRAM: It’s so bad. It’s thin in a way that the first one isn’t. I was actually visiting at All Souls when he was finishing it up, and I tried to have conversations with him about it and about the draft. We had frequent conversations, after lunch, and they would — within seconds — turn into Parfit saying, “But look, it’s obviously right; you just know that torturing babies for fun is a bad thing.” Whatever you think about the merits of torturing babies, not that I think it’s a good thing, there’s a thinness to that.
You can see that the environment had somehow whittled him down or thinned him down. I don’t know enough to say for sure how it happened, but that’s my impression. It’s too cloistered an environment for you to want to spend that much of your life in it.
I have a friend, let’s call him Jack, who is 95% less tapped into frontier tech, futurism, rationalism, etc. than either of us, but who took up Claude Code recently and set to designing a beautiful golf gambling dashboard with extensive backend modeling. He’s become a sicko. The people you’d least expect will find the damnedest things to do with these tools.
If you paste this post into an AI where you have memory across chats enabled, you will receive obscene glaze if you ask it to generate one of these cognitive portfolios for you. Don’t include the footnote, obviously.
Thumbnail image is Hiroshi Sugimoto, Trylon, New York, 1977 from his Theaters series. He opens his camera shutter for the whole run of a film. It’s quite striking.



