The future of visual scripting in an era of LLMs

General / 07 August 2025

I often find myself working in game engines, usually Unity or Unreal. When tasked with a new issue to solve or idea to implement, the mental headspace is, unsurprisingly, different depending on which engine you're using.


The current paradigm: 

Let's assume we've got an idea to test or a thorny issue to solve. Unreal and Unity ask you to adopt similar, yet distinct, mindsets.

Unity encourages you to think in a combination of prefabs, scene objects and the logic (scripts) on those prefabs/objects.

Unreal encourages you to think primarily in terms of actors and Blueprints, with the outliner/level setup ranking lower in the consideration hierarchy.

Both are similar enough that you can muddle through either with a modicum of experience. Unreal is more opinionated about the route you take to get there, though that's a matter of preference.

If this is a new feature or idea:

Unity: You'll create a few new scene objects, maybe a prefab or three, and then some scripts that orchestrate connectivity and logic flow. You'll interact with the components Unity provides, such as mesh renderers, materials and physics primitives. For the scripting, you can quickly describe what you need to an LLM and get a good starting point within minutes.
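As a concrete illustration of that workflow, here's the kind of small component script an LLM can draft in seconds. This is a hypothetical sketch, not code from the post: the class name, fields and behavior are all invented for the example.

```csharp
using UnityEngine;

// Hypothetical example: a small component an LLM could draft as a
// starting point. Attach it to a door object with a trigger collider;
// the door swings open when the player enters and closes when they leave.
public class DoorOpener : MonoBehaviour
{
    [SerializeField] private float openAngle = 90f; // degrees to swing
    [SerializeField] private float openSpeed = 2f;  // interpolation speed

    private Quaternion closedRotation;
    private Quaternion openRotation;
    private bool isOpen;

    private void Start()
    {
        closedRotation = transform.rotation;
        openRotation = closedRotation * Quaternion.Euler(0f, openAngle, 0f);
    }

    private void Update()
    {
        // Ease toward the target rotation every frame.
        var target = isOpen ? openRotation : closedRotation;
        transform.rotation = Quaternion.Slerp(
            transform.rotation, target, openSpeed * Time.deltaTime);
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) isOpen = true;
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) isOpen = false;
    }
}
```

From here, you'd tune the serialized fields in the inspector and wire the object into a prefab; the point is that the first draft arrives as plain, reviewable text.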

Unreal: You'll start by creating a few Blueprints and an actor or two in the world, then spend time configuring them. You may need to modify the level or game mode Blueprints. You'll figure out the visual nodes to string together to achieve your vision. If it's particularly challenging, you might have to implement a custom node in C++, which means recompiling at least your project module (or the engine itself, if you're touching engine source).

If this is fixing an existing issue or wrangling an existing feature:

Unity: You'll probably start by examining the scene setup: the hierarchy, components and any related prefabs. The goal is to quickly build a mental map of the areas the code might influence. Then you jump into the C#, sometimes stepping through with a debugger, often using print statements, and generally playtesting when able. When the bug is on the C# side, which it is 90% of the time, an LLM can usually help you identify it pretty quickly.
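The print-statement pass might look like the sketch below. The component and messages are illustrative, not from any real project; the useful property is that the traced logic and its log output are both plain text you can hand straight to an LLM.

```csharp
using UnityEngine;

// Illustrative debugging pass: temporary Debug.Log calls to trace why a
// pickup never fires. Because it's ordinary C#, the whole method plus its
// console output can be pasted into an LLM for a second opinion.
public class PickupDebug : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        Debug.Log($"[Pickup] trigger entered by {other.name}, tag={other.tag}");

        if (!other.CompareTag("Player"))
        {
            Debug.LogWarning("[Pickup] ignored: collider is not tagged 'Player'");
            return;
        }

        Debug.Log("[Pickup] collected, destroying object");
        Destroy(gameObject);
    }
}
```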

Unreal: You'll probably start by finding the associated actors, then the specific Blueprints. Next you'll figure out how they're used within the level, the game mode or level Blueprints, and how their lifecycle is started. You might check out the objects in the test levels where they're isolated, to run them in a controlled, configurable environment. Either way, you'll need to manually skim every node graph.

These differences come from each engine's origins.

Unity started as a tool for smaller teams and indie developers. The component-based architecture reflects this: you build up functionality piece by piece. Scripts are just another component you attach, keeping things simple. Everything is meant to be immediately obvious and to leverage the C# ecosystem. Examples and documentation are easy to create and share in text/code form.

Unreal came from Epic's internal engine needs for AAA production. The blueprint system emerged because visual scripting could bridge the gap between designers and programmers. When your team has dedicated technical artists and designers who need to iterate without touching code, blueprints solve that problem.

The programming ↔ art gap is closing.

It might have been faster to throw together an Unreal Blueprint five years ago, but today? You can generate a few C# scripts in the same time frame, without needing to know Blueprint's whole lexicon of arbitrary node names and usages. You'll still need to tune and clean up the C#, but at least it's well documented and standardized. LLMs know C#; they know text.

LLMs are the new search engines, and they don't know much about Unreal Blueprints. There's simply far less material for them to learn from: arbitrary node-graph formats are harder to share, and less viral, than text.

Who will Blueprints be for in 10 years? The programming landscape is changing, and less and less code will be written by humans. From purist to novice, engineers are feeling the winds of change in how much time they spend crafting code.

As this revolution marches forward, the tooling that adopts the universally accepted API that is "text" will increasingly outpace tools that don't.

What comes next? Do we need another intermediary representation of code?