One emerging scenario for software development assumes that teams will rely on LLM-based agents to generate most of their code, with humans providing only light supervision. Some developers already experiment with "vibe coding," but a more likely model is fleets of automated agents working continuously with minimal human review. The open question is whether we can trust those systems, especially in a hostile international environment.