Remote work has completely changed the traditional approach to hiring, opening the door to a much broader talent pool than anyone would have imagined a couple of decades ago. Here at MartianCraft, we’ve been fully remote since 2004, long before the idea of virtual teams gained mainstream appeal. Back then, remote positions were rare, and we had to piece together our own strategies for interviewing people around the world. Over the years, we’ve continuously refined our approach to hiring, adjusting to new tools, new social norms — and new methods that bad actors sometimes use to misrepresent themselves. We still believe wholeheartedly in the potential of remote collaboration, but it’s become clear that we need a robust system of checks and balances to ensure the people we bring on board are who they claim to be.
One of the biggest shifts we’ve noticed in the last five years or so is a sharp increase in fraudulent activity targeting remote job postings. When companies first began embracing distributed teams en masse, it didn’t take long for scammers to realize that they could hide behind a screen and cobble together a convincing resume. Some simply embellished their qualifications, but others went further: We’ve seen candidates who tried to juggle multiple full-time jobs at once, outsource their tasks to cheap freelancers, or even send a substitute to appear in their place for video interviews. The more the tech world embraced remote work, the more sophisticated these methods became. We also witnessed the emergence of AI-generated resumes and cover letters, produced by tools that can rewrite an entire job history with a click and tailor it to a specific listing.
Over time, we’ve learned to spot certain patterns early in the process. One of the most effective measures we’ve taken is to keep a comprehensive record of every individual who has ever applied to MartianCraft. This goes beyond simply storing a name and a PDF resume. We archive notes from each conversation, keep the resumes they submit, and watch for changes if they decide to apply again. By revisiting someone’s file, we can see if the positions on their resume suddenly shift from one application to the next, or if they now claim to have attended a college that they never mentioned before. Inconsistencies aren’t always definitive proof of fraud — sometimes people do legitimate skill-building or decide to include short-term gigs that they previously left off — but they can raise questions that prompt us to dig deeper.
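To make the idea a little more concrete, here is a minimal sketch, in Python, of the kind of applicant archive we’re describing: every resume submission is kept as its own snapshot, and claims that appear or disappear between applications are surfaced for follow-up. The field names and the claim_changes helper are invented purely for this example; this isn’t a description of our actual tooling.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ResumeSnapshot:
    """One resume as submitted with a single application."""
    submitted: date
    positions: set[str]   # e.g. {"iOS Engineer, Acme (2019-2022)"}
    schools: set[str]

@dataclass
class ApplicantRecord:
    """Everything retained about one applicant across every application."""
    name: str
    email: str
    snapshots: list[ResumeSnapshot] = field(default_factory=list)

    def claim_changes(self) -> list[str]:
        """List claims that appear or disappear between applications.
        Changes are prompts for follow-up questions, not proof of fraud."""
        flags = []
        for earlier, later in zip(self.snapshots, self.snapshots[1:]):
            for label, old, new in (("position", earlier.positions, later.positions),
                                    ("school", earlier.schools, later.schools)):
                flags += [f"new {label} claimed on {later.submitted}: {item}" for item in new - old]
                flags += [f"{label} dropped on {later.submitted}: {item}" for item in old - new]
        return flags

# Example: a second application that suddenly adds a new employer and a new school.
record = ApplicantRecord("Jane Doe", "jane@example.com", snapshots=[
    ResumeSnapshot(date(2023, 4, 1), {"iOS Engineer, Acme"}, {"State University"}),
    ResumeSnapshot(date(2025, 2, 1), {"iOS Engineer, Acme", "Lead Engineer, Initech"},
                   {"State University", "Ivy College"}),
])
print(record.claim_changes())
```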
Another part of our strategy involves verifying critical information against public platforms like LinkedIn. It’s astonishing how often we find large gaps between a resume and a candidate’s public profile. Titles might have changed drastically, entire companies might appear or disappear, or dates might not line up. We don’t expect perfect one-to-one matches across every platform, since people occasionally omit short contracts or forget to update their job titles, but when we see major inconsistencies or brand-new “roles” that have no digital footprint, it puts us on alert.
We’ve also seen how AI-driven resume generation can produce bullet points that align a little too neatly with a job description. A candidate might list every skill word for word from our posting or replicate phrasing that suspiciously echoes something we wrote on our careers page. Sometimes the claims are plausible in isolation, but read like a collage of different people’s experiences. A quick glance might not catch it, yet when we compare those claims with what the candidate actually says in interviews, it becomes evident they don’t genuinely know the details they say they do.
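To illustrate the pattern rather than describe our screening pipeline, here is a rough sketch of the word-for-word overlap that tends to give these resumes away. The function names and the n-gram size are invented for the example; in practice this is something a careful reader notices, not a script we run.

```python
import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Lowercased word n-grams, ignoring punctuation."""
    words = re.findall(r"[a-z0-9+#.]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def posting_overlap(resume: str, posting: str, n: int = 5) -> float:
    """Fraction of the resume's n-grams that appear verbatim in the job posting.
    Experience described in a candidate's own words rarely scores high;
    bullet points pasted from the listing push the ratio toward 1.0."""
    resume_grams = ngrams(resume, n)
    if not resume_grams:
        return 0.0
    return len(resume_grams & ngrams(posting, n)) / len(resume_grams)

# Example: a resume bullet lifted straight from the posting scores 1.0.
posting = "Experience building SwiftUI interfaces and integrating CloudKit sync"
bullet = "experience building SwiftUI interfaces and integrating CloudKit sync"
print(posting_overlap(bullet, posting))  # 1.0
```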
I know what you’re thinking: Isn’t using AI to tailor resumes to job postings exactly what we’re supposed to be doing now? The problem isn’t customization; it’s fabrication. We’re seeing candidates list technologies they’ve never touched just because the job description mentions them. There’s a massive difference between trimming irrelevant details and inventing experience. AI makes it all too easy for bad actors to waste everyone’s time by interviewing for roles they’re not qualified for. The moment we dig into the specifics of the tech stack, it becomes clear they’ve embellished or outright lied. What could have been a solid interview quickly turns into a trust issue: what else in the resume is fake?
Our commitment to using video in interviews has been crucial. Ever since we went remote in 2004, we’ve gravitated toward face-to-face conversations whenever possible to get a sense of who someone is and to let them see who we are. In an era when many companies relied on phone screenings, we found that video gave us a far richer understanding of a candidate’s communication style and personality. These days, with the proliferation of deepfakes and AI-based visuals, it also helps us confirm a candidate’s identity. We require the camera to stay on during every interview session, and we capture a still image early on as a reference. If a later interview shows someone with notably different facial features or suspiciously choppy, out-of-sync mouth movements, we probe deeper. We’ve had a few cases where the lighting and expression never changed during an entire conversation, suggesting the use of deepfake technology. By insisting on continual video interaction, we reduce the risk of letting an imposter or an automated avatar slip through. We have also found that asking a candidate to raise a hand in front of their face, or to make a similar gesture, can break a generated mask if one is in use.
We also pay close attention to a candidate’s job history during interviews. We ask about specific roles, the culture of the companies they list, and the types of projects they worked on. It’s surprisingly common for someone to list a college or a previous employer they know nothing about. Maybe they claim to have attended a certain university, but they can’t answer the most basic questions about campus life. Or they say they specialized in a technology stack, yet draw a blank when we ask about real-world scenarios in that same domain. When these red flags stack up, we quickly realize something’s amiss.
Part of the reason we catch so many of these anomalies is our unique interview structure. As we mentioned in a previous post, we pay people for the time they spend interviewing with us. That might sound surprising, but it ensures a more thoughtful process for both sides. Candidates are motivated to invest genuine effort; in turn, we respect and value their time. A small additional perk is that we can verify their physical mailing address when sending a check.
Traditional interviews sometimes involve broad, open-ended questions that can be gamed with ChatGPT or another AI model. We’ve altered our approach, focusing on real-world tasks, hands-on coding exercises, and spontaneous problem-solving discussions. AI struggles to keep up in a scenario where each question builds on the last, referencing a snippet of code we shared just seconds ago, or a design challenge that demands immediate creative input. Anyone trying to outsource their thought process to a third party or an AI system will usually stumble when the conversation takes an unexpected turn.
We maintain these precautions because we truly believe in the value of remote work. It’s a powerful tool that allows great minds to collaborate from different continents, and it offers professionals an opportunity to find a better work-life balance without uprooting their families. But as remote collaboration becomes more mainstream, we’ve also seen a rise in the “overemployment” trend, where people take on two or three full-time roles simultaneously and either farm out tasks or skimp on real engagement. We can’t support that kind of deception, not just because it harms productivity and morale but also because it betrays the trust that’s so crucial for making remote teams succeed. When you join MartianCraft, we want you to thrive as your authentic self — bringing your expertise, ideas, and passion to the table, not splitting your focus between multiple employers or relying on an algorithm to do the heavy lifting.
That’s why we devote so much energy to safeguarding our hiring process. Yes, it takes more time on our end, and it may seem strict to certain candidates, but the payoff is a community of like-minded professionals who can rely on one another. Remote collaboration demands trust: When you can’t tap someone on the shoulder to double-check their progress, you have to be confident they can follow through independently. Everything we do — from archiving resumes and cross-referencing LinkedIn pages to requiring video calls and designing AI-resistant interviews — aims to preserve that trust. We don’t want to exclude people who have unconventional backgrounds or who might not present themselves in a traditional way, but we do insist on honesty, accountability, and genuine skill.
Technology will continue evolving, and who knows what new challenges lie ahead. We’ve already adapted to deepfakes, AI resumes, and advanced impersonation tactics. We fully expect more surprises in the years to come. But our longstanding commitment to remote collaboration means we’re ready to meet those challenges as they emerge. We’ll keep refining our process, always looking for that balance between being thorough enough to deter fraud and welcoming enough to encourage the real talent out there. The end goal remains the same as it was back in 2004: to bring together the best possible team, no matter where they happen to live, united by a shared dedication to doing meaningful work.