You ever see something that only exists because some suit thought it would net them a promotion?

The stink of "internal pitch released to the public" is one that this game will never manage to wash off of itself, because that is what this so obviously is. This was designed, top to bottom, for the sole purpose of being used in a business proposal to trick some old guys into investing. AI is hot right now, peaking in its usual fad cycles (gamer president memes aren't going to be around for much longer, but they're everywhere right now), and the Square Enix business department has taken the Web3 bait. NFTs, crypto, the blockchain, and now, with a re-imagining of The Portopia Serial Murder Case, we're getting into the GPT-esque AI text parser sector. What's unfortunate for the Square Enix Web3 diehards is that their ideas fucking blow and their execution is somehow even worse than their concepts.

The idea of augmenting your traditional text parser with AI may sound interesting. It isn't. Square Enix claims that the point of this move is to limit the classic guess-the-verb problems that arise in primitive text adventure games by allowing the computer to take broader guesses at what the user is trying to say; in effect, putting the challenge of "what am I supposed to do" on the program rather than the player. The reason this doesn't work at all is that it's ironically harder to grok what the game is willing to accept as an input when you don't have a predefined list of which verbs work and which ones don't. LOOK and USE and TAKE are primitive, but they're also intuitive. Having a conversation with your AI partner to facilitate going to a location while they hem and haw and chide you for wasting time is frustrating, not convenient.
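
For reference, here's roughly the shape of the thing as I understand it. This is my own back-of-the-napkin sketch (the intent list and function names are made up), not anything Square Enix has published: a classic parser checks your verb against a hard list, while the "AI" version scores your whole sentence against a pile of canned phrasings and picks whichever is closest.

# Rough sketch, my own guess at the general idea -- not Square Enix's actual code.
from difflib import SequenceMatcher

# Classic parser: a hard list of verbs. Anything else gets "huh?".
KNOWN_VERBS = {"look", "use", "take", "go"}

def classic_parse(command):
    verb = command.lower().split()[0]
    return verb if verb in KNOWN_VERBS else "huh?"

# The "AI" version: score the whole input against canned phrasings, take the closest.
INTENTS = {
    "go to kobe port": "MOVE_PORT",
    "go to nagisa apartments": "MOVE_NAGISA",
    "look at the phone": "EXAMINE_PHONE",
}

def fuzzy_parse(command):
    best = max(INTENTS, key=lambda phrase: SequenceMatcher(None, command.lower(), phrase).ratio())
    return INTENTS[best]

print(classic_parse("saunter over to the port"))  # "huh?" -- the verb isn't on the list
print(fuzzy_parse("let's go to the port"))        # MOVE_PORT, the closest canned phrase

The whole pitch is that the second version shouldn't need you to type the exact canned phrase. As you're about to see, in practice it behaves like the first one with extra steps.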

The game told me very early on that the victim's nephew had a motive and lived down by the port, at Nagisa Apartments. The most rudimentary of text parsers should be able to link "Kobe Port" and "Nagisa Apartments" as being interchangeable should the user wish to go there; with AI, this ought to be trivial. I wanted to go check the place out to see if there was any evidence in the area. Here's what happened:

>Go to Kobe port.
"Maybe we should focus on the task at hand?"
>Let's go to Kobe Port.
"Hmm..."
>Go to the port.
"Maybe we should focus on the task at hand?"
>Go Kobe Port.
"Maybe we should focus on the task at hand?"
>Let's go to Nagisa Apartments.
"I always forget exactly where that place is. It's somewhere near the port, though. Let's head there first and get our bearings."

Note that last response. So the writers understand that the port and Nagisa Apartments are linked, but the game logic fails to make the connection. Awesome. Really impressive showcase of your new technology.

Also, the LOOK command has been rendered completely useless. You're now expected to hold the right control button, which hides all of the UI elements, and scour the background CGs for details yourself. If this sounds like a terrible change, it is. Trying to LOOK around Toshi's apartment just made my partner say that the building was quiet. Inspecting the background CG revealed a phone, which I then examined through the text parser. Also in the CG was a piece of paper tucked beneath the phone. I tried to look at it, but the game was confused. It didn't seem to know if there was a piece of paper, or a note, or a letter, or a notepad, or anything of the sort beneath the phone. It just kept "Hmm..."-ing me. I don't know if this was an inconsequential background element that an artist painted in without it ever being hooked up as interactive in the game logic, or if it was a critical piece of evidence that I wasn't allowed to pick up because I wasn't using the correct terminology. If I could have LOOKed around the room for a written description of what was there, the game might have been willing to tell me which word corresponded to that piece of paper. But it didn't, so I didn't get to examine it. (EDIT: After some asking around, the piece of paper was actually a core piece of evidence. The game specifically wanted the term "memo".)

I don't know what about this is meant to be "AI". My partner acts like his brain is seeping out of his ears unless I prompt him with the exact line the game is expecting me to say. It's artificial, sure, but this is far from intelligent. And the game is ten fucking gigabytes! They must have packed the entire model into this thing, and it barely functions! Honestly, this feels like a shoddy Flash-based text adventure more than it does a modern AI tech demo. Something this badly put together wouldn't have flown back when Zork was new; in 2023, this is unacceptable.

One more Square Enix failure for the pile. How many more does the company have left in them before they're forced to fold?

Reviewed on Apr 24, 2023


4 Comments


1 year ago

dismal, really. on some level i understand the desire to take an innovative game and rework it into something that is still innovative today, but if you've sat on the concept this long you might as well sit on it a little longer and make sure it even kinda works

1 year ago

I mentioned this on another review, but I think it's impressive they've managed to take all the frustration of Bot Colony's speech-to-text input and translate that to a purely textual experience. Every time I see someone post a disastrous dialog tree, I just think of wrestling with Dragon speech software, trying to get a robot to go over to the Paris luggage rack to disarm a bomb only for it to constantly spit back facts about Paris because it doesn't understand simple commands.

Between selling their IPs for pennies on the dollar to invest in crypto (and the market bottoming out about a week later), their insistence upon NFTs well beyond that fad's lifecycle, and now going hard into AI, it just seems like Square has a terminal case of Web3 disease. I'd question how long they have before they fold, but I also suspect Final Fantasy XIV generates enough revenue to keep them afloat long enough to throw their weight into at least a dozen more schemes.

1 year ago

I was thinking I'd have to look into the game myself to find out why it's bombing this badly, but you gave a pretty concise answer to that, thanks)
The main thing that bugs me about this game is how exactly they fit "AI" into it. I sincerely doubt one can compile a whole neural network into a retail game's filesize, sooo... Do they send every query to an API somewhere? Does it have to always be online? Or did they just build a shitty chatbot into the game and call it a day?

1 year ago

@Weatherby honestly if it wasn't for XIV being the absolute cash cow that it is in spite of squeenix upper management, the company would have been gone half a decade ago

@unaderon They definitely wouldn't have been able to fit something like the full GPT-3 model into this, but most of the public natural language processing (NLP) models that are popular at the moment only clock in at around 500 MB. NLP is the only thing the game uses to help it understand prompts, and the implementation is bad. NLP is technically AI, but not in the sense most people use the term right now; what they're picturing is natural language generation (NLG). NLG is the most obvious part of something like OpenAI Chat, because it's what lets the computer "talk" back to you instead of just cycling through a list of pre-defined responses.

The game was going to launch with NLG, but Square Enix got nervous about the game being able to generate "unethical content", so they stripped it out before launch. What's left is just the NLP. A shitty chatbot would at least have some NLG. This is significantly worse than something like Cleverbot ever was, and all that did was recycle old user entries.
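
To give a rough picture of what "just the NLP" means in practice (this is my own sketch of the general technique, not anything dug out of the game's files; the model name and intent list are placeholders): a small sentence-embedding model maps whatever the player types onto the closest entry in a fixed set of intents, and every reply is still a pre-written line. Nothing in it generates text.

# My own sketch of an NLP-only setup, not the game's actual implementation.
from sentence_transformers import SentenceTransformer, util

# A small public embedding model; the popular ones range from ~80 MB to a few hundred MB.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Fixed intents with canned replies -- the pre-defined list never actually goes away.
intents = {
    "go to nagisa apartments": "Let's head over to Nagisa Apartments.",
    "look at the memo under the phone": "There's a memo tucked beneath the phone.",
}
intent_embeddings = model.encode(list(intents), convert_to_tensor=True)

def respond(player_input, threshold=0.6):
    query = model.encode(player_input, convert_to_tensor=True)
    scores = util.cos_sim(query, intent_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return "Hmm..."  # anything under the similarity cutoff just gets stonewalled
    return list(intents.values())[best]

print(respond("let's check out the apartments near the port"))

If the shipped implementation is anything like that, the "Hmm..." wall is just the fallback for a similarity score that lands under the threshold, which would also explain why "memo" works where "note" and "letter" don't.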