A collage of game developers and AI coding tools, symbolizing the integration of artificial intelligence in game programming.

How AI Is Reshaping Game Programming: Industry Leaders Weigh In on the New Hiring Standards

[GamePea Exclusive, Reproduction Prohibited!] GamePea Reports — From the nationwide craze of 'farming lobsters' to the surge of AI topics at GDC, AI applications have become the hottest topic across the gaming industry. Compared to fields like art generation, motion capture, and AI NPCs, programming has long been considered an ideal domain for AI.

Research indicates that top-tier programmers typically produce 100–150 lines of high-quality code per day; according to industry surveys and internal Tencent data, programmers focused primarily on coding average 100–200 lines of effective new code daily. AI coding tools are quietly breaking through this ceiling. Tools like Claude Code abroad and Tencent's open-source CodeBuddy in China are reshaping programming at an unprecedented pace, efficiently and reliably completing tasks that once demanded significant time and effort from junior and mid-level programmers.

Of course, AI's role in programming is about empowerment, not replacement. The current value of AI coding lies in taking over the repetitive, basic tasks once handled by junior or inexperienced programmers, which in turn raises the value of top-tier developers. At last year's Unity conference, Yu Yu, Lead Client Programmer at Tencent's TiMi Studio Group, remarked on the state of Vibe Coding: 'The rise of AI requires many experts to empower it, giving us engineers over 35 a second spring in our careers.' In the AI era, programmers need higher-order skills to collaborate with AI and harness it to solve complex problems, making experienced senior programmers even more critical.

Recently, executives from CCP Games (developer of EVE Online) and Garry Newman, founder of Facepunch Studios (developer of Rust), told foreign media that AI is accelerating game programming efficiency but still relies on the judgment of seasoned programmers. Meanwhile, Augment Code, a star startup focused on enterprise-level AI coding assistants and agents, released its own job postings, emphasizing that when AI agents can handle 99% of implementation work, the logic of programmer hiring is undergoing a systemic overhaul.

AI as an 'Assistant': Developer Judgment Still Dominates

Kristinn Þór Sigurbergsson, Engineering Director at CCP Games, said the company is 'extensively using AI tools for code-related work.' However, he added, 'The usefulness of AI heavily depends on the task at hand.' For the Icelandic studio, one appeal of AI models is their ability to help developers quickly grasp 'vast and mature codebases,' such as their 23-year-old MMO's codebase.

Sigurbergsson explained: 'Using tools like Cursor or Claude Code to open a project lets you get up to speed quickly. They are especially powerful for codebase navigation, summarization, and cross-file logic tracking.' He noted that AI can be used for debugging, but directly asking for solutions yields 'limited results.' He added, 'AI suggestions often involve suppressing a log entry. Occasionally that's correct, but most of the time it's not, and it still requires experienced judgment.'

The CCP executive said the biggest difference after adopting AI is that developers spend more time on planning and review, reducing time spent writing code. 'An interesting phenomenon is that teams are often bolder during the planning phase,' he said. 'The cost of making mistakes is lower because iteration is faster. This shifts focus to design thinking rather than typing code.' CCP developers also use AI in EVE Online to test features or behaviors, though Sigurbergsson noted these are 'rarely production-ready code, nor intended to be—but it's very effective as a communication and exploration tool.'

However, the biggest change is in non-production code, where AI has had a 'transformative' impact. 'We often need to write small scripts to generate data, investigate issues, or automate one-time tasks,' Sigurbergsson said. 'In such cases, we care more about the output than code elegance. The value proposition of scripting versus manual work has shifted dramatically: what used to take half a day now takes minutes.'
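The kind of throwaway scripting Sigurbergsson describes might look like the following minimal sketch (a hypothetical illustration, not actual CCP code): a one-off script that tallies error types from log lines during an investigation, where getting an answer quickly matters more than code elegance.

```python
# Hypothetical one-off investigation script, in the spirit of the
# "output over elegance" scripting described above -- not CCP Games code.
# It scans log lines and tallies which error types occur most often.
import re
from collections import Counter

# Stand-in sample data; a real script would read an actual log file.
log_lines = [
    "2024-01-02 ERROR TimeoutError while fetching market data",
    "2024-01-02 WARN slow frame",
    "2024-01-03 ERROR TimeoutError while fetching market data",
    "2024-01-03 ERROR KeyError in fitting window",
]

counts = Counter()
for line in log_lines:
    match = re.search(r"ERROR (\w+)", line)
    if match:
        counts[match.group(1)] += 1

# Print error types from most to least frequent.
for err, n in counts.most_common():
    print(err, n)
```

Scripts like this are disposable by design: they answer one question and are then deleted, which is exactly why AI generation shifts their cost-benefit calculation so sharply.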

Cliff Harris, veteran indie developer at Positech Games, described using Anthropic's Claude for coding as 'life-changing.' 'In the past year, I've learned more about obscure C++ algorithms and optimization techniques than in the previous 15 years combined,' he told foreign media. 'I started programming at age 11 in 1981, so I have 45 years of experience. I find it incredibly useful just to bounce ideas off Claude or find bugs. Anyone not using the latest top-tier LLM for programming is tying their own hands.'

Garry Newman, founder of Facepunch Studios (Rust, Garry's Mod), said AI makes his work easier. He noted that using ChatGPT or similar tools to explain things instead of Google is 'an evolution of programming.' 'If I want to refactor code, I don't need to spend 10 minutes copy-pasting the same code into 30 different files; I just spend 5 minutes arguing with Claude to do it for me.' Newman added that he isn't worried about AI replacing programmers like him; instead, he sees it as enhancing his capabilities. 'Some worry AI will make anyone capable of doing my job, but I don't think so. It makes me a better, more efficient programmer. I learn from it. I'm not worried; I'm excited.'

Similarly, Paul Kilduff-Taylor of Mode 7 sees AI as playing an 'assistant role' in programming. 'I've seen many experienced programmers use AI to quickly find information or get hints. Current 'reasoning' models have low hallucination rates and are effective in this regard. Positioning AI as an assistant—offering optimization or debugging suggestions, quick documentation queries, or as a consulting tool—is becoming increasingly common.'

Hallucinations Spark Widespread Concern: AI Coding Still Can't Replace Programmers

While many developers are impressed with AI's coding assistance, others express concerns or see serious limitations. Kilduff-Taylor acknowledged that AI makes developing a game 'easier than ever,' but the technology has output limitations. The main reason is that humans create differently from AI. 'Dealing with code you don't understand in unfamiliar, uncontrollable structures scales poorly, and current commercial AI systems can't handle context windows as large as an entire Unity project,' he said. 'That's why many 'wow, AI made a game' examples use very lightweight frameworks or end up as simple prototypes.'

Chet Faliszek of Stray Bombay, a vocal critic of AI hype, also worries about fully understanding final outputs. 'Can it help you write small, independent systems? Sure. Code is code; you usually don't need to reinvent the wheel,' said the Valve veteran. 'But for example, I'm learning Godot while relearning C#—I want more than just a final result I can't read. I want to understand and implement it myself to grasp its strengths. In the process, you learn tidbits that make you think, 'Wait, if I cause damage this way, it means I can improve it or offer this upgrade option.''

Bram Ridder, former Rebellion employee and Technical Director at Kythera AI, echoed similar concerns. While he has used generative AI for basic 'boilerplate' code, he generally avoids it 'because it robs you of understanding and learning opportunities. It's a useful tool, but no one should rely on it.'

Developers widely worry about the accuracy of AI outputs. At least for now, generative AI models are prone to 'hallucinations,' confidently presenting incorrect information. 'I use AI more for brainstorming specific questions outside my knowledge,' said Adam Grimley, Senior Programmer at Huey Games. 'Even then, I usually take their answers with a grain of salt and cross-check against papers, human-written blogs, and tutorials. It's a very slow process, typically used only when I've exhausted other methods.'

Alex Darby, former Lead Programmer at Bithell Games and Roll7, added: 'The last time I used AI, I found it useless and frustrating. The perceived speed boost is just 'it can type infinitely fast, but at least 10-15% of the time it generates nonsense.' Once I realized it's unreliable and I can't trust its code, I spent a lot of time reading, verifying, and correcting code, which was slower than writing it myself from scratch.'

Hannah Rose, Senior Programmer at Failbetter Games (Fallen London), expressed similar concerns, questioning the value of models like Copilot that extract code from Stack Overflow or YouTube tutorials. 'Copying large chunks of code from public repositories into your project saves typing time, but even in the best case, you still spend time reviewing it. Most of the time, you end up modifying or deleting it entirely,' she said. 'It's a trade-off between saving typing time and spending thinking time. I rarely feel typing speed is my main productivity bottleneck.'

Matthew Davis of Subset Games described AI as a 'completely unreliable programming tool' that, beyond autocomplete, 'can't be relied upon to produce reliable, usable things.' 'If I ask it to generate longer solutions, I inevitably spend more time debugging than if I had written the code myself,' he continued. 'Moreover, creating a large codebase you don't fully understand exponentially increases long-term technical debt. For now, AI is at best an inefficient, costly tool.'

Beyond accuracy concerns, other developers worry about the nature of generated code and how AI forces changes in work habits. Jem Frisby, Backend Web Developer at Failbetter, described most AI-generated code as 'garbage.' In her view, the problem isn't the technology itself but how its operation is prioritized. 'Its architecture is terrible, fragile, and completely disregards performance,' she explained. 'Worse, it forces you to adapt to it; you have to accept what it offers and then figure out how to make it fit your existing content. Software development is collaborative, and no one likes working with someone who says 'my way or the highway.''

John Ogden, CTO of Huey Games, said that while AI is 'somewhat useful' at the functional level, it fails at the architectural level and 'cannot fully replace programmers.' He sees AI as especially weak in console game development because of the platforms' closed nature: 'AI training in this area is very limited.' The worst-case scenario, he believes, is developers using AI to create large amounts of code that then require manual debugging. 'Any programmer who has worked with a system for a while develops a mental model of it, especially those who helped write it,' Ogden said. 'But a pile of AI code destroys that. AI won't wake up in the middle of the night realizing something is wrong with the system, or think about better ways to implement it. You've essentially removed all relevant parts of general intelligence from the development process.'

Among those concerned about AI in programming, some believe the technology might eventually work as proponents claim—but only after overcoming major hurdles. Alex Darby thinks the only way to make AI-generated code work company-wide is to build the entire workflow around it. 'I think tech companies are better positioned to use this approach because they mostly follow 'large-scale automated testing and continuous delivery' processes,' he said. 'This requires a different software architecture approach, where any piece of code tends to be more modular and independent for easier testing. Thus, less context is needed to write code.'

Meanwhile, Mode 7's Kilduff-Taylor believes the main barrier to widespread AI use in game programming is context. 'AI can be incredibly stupid or surprisingly powerful: there's a huge gap between 'stochastic parrot' style dumb autocomplete and major new discoveries in physics,' he said. 'Context, frameworks, and auxiliary systems are key.' He concluded: 'In gaming, we don't yet have the right frameworks to solve this. Some think it's insurmountable. Personally, I lack the insight to make an effective judgment, especially long-term.'

New Hiring Standards for AI-Native Programmers: Judgment Over Coding Ability

As programmers begin working alongside increasingly powerful AI agents, the nature of their jobs is shifting. What skills should programmers have in the AI era? Augment has pioneered new standards in its hiring process.

Augment believes that programmers now spend less time writing code and more time deciding what to build, designing systems that run stably in production, coordinating agents, and aligning teams around clear goals. Coding ability still matters, but increasingly, AI can help. What matters more now is judgment: the ability to choose the right problems, make sound architectural decisions, and guide humans and agents toward meaningful outcomes.

Augment argues that in AI-native engineering, the human role is shifting from author to architect and editor. You define intent, make design and trade-off decisions, set rules, focus on user experience, and serve as the last line of quality assurance. Therefore, they believe engineers in the AI era need six key competencies:

  1. Product and Outcome Judgment: Are we building the right thing? As code production costs drop, the most expensive mistake is building the wrong product. Programmers increasingly need to dive into user problems, eliminate ambiguity, and define clear outcomes before implementation begins. The most valuable programmers aren't those who write the most code, but those who ensure the team solves the right problems.
  2. System and Architecture Judgment: Will this hold up in production? Agents can generate runnable code, but they are far worse at judging whether surrounding systems are reliable. Architectural judgment still requires understanding long-term trade-offs, real-world operational conditions, and potential risks during scaling. 'It works' is easy; 'it works consistently in production' is much harder.
  3. Agent Leverage: Can you turn AI into actual engineering throughput? AI-proficient programmers don't just use agents as assistants. They craft prompts, guide them to execute efficiently, steer them when they go off course, and verify their outputs. Think of it as delegation—except your subordinates are extremely fast and occasionally confidently wrong.
  4. Communication and Collaboration: Can you clearly articulate intent and collaborate across perspectives? As project execution speeds up, more work shifts to clarifying issues, weighing trade-offs, and integrating input from different team members. Programmers increasingly need to communicate clearly, listen carefully, and build consensus quickly. The fastest teams aren't those that write code fastest, but those that think fastest.
  5. Ownership and Leadership: Do you focus on outcomes, not just tasks? Good programmers take responsibility for the final result, not just their part of the code. When something blocks progress—whether slow builds, unclear workflows, or system flaws—they step in and solve it, even if it's outside their scope. Ownership means removing any obstacle to team success.
  6. Learning Speed and Experimental Mindset: Can you keep up with the pace of tool evolution? The tools we use today won't be the ones we use three months from now. In this industry, good programmers constantly experiment, quickly change workflows, and abandon old methods when better ones emerge. Experimentation isn't a hobby; it's part of the job.

Augment believes a framework is only meaningful if it changes how you hire. The next step is turning these dimensions into observable signals—behaviors that can be assessed during interviews. For example, can a candidate quickly clarify an ambiguous question? Can they identify architectural risks before they appear in production? Can they effectively guide and validate AI-generated work?

Based on this, Augment has identified four primary hiring directions in its recent recruitment.

Each job description weights the six dimensions differently, and each role now has an interview process built around the most important signals for that position. Augment believes a side benefit of rethinking hiring is that it forces explicit articulation of engineering values. These six dimensions not only influence hiring but also Augment's views on performance, growth, and career development. If judgment, impact, and learning speed matter most, then these competencies should be everywhere—not just in interviews.

Tags: AI