What's happened in AI?
As I pointed out some months ago, drawing on the insight I gleaned from the apparently exceedingly abstruse method of... reading OpenAI's own official statements, GPT-5 proved not to be an exponential leap in quality along the lines of the jumps between previous iterations.
A growing subset of the tech-y zeitgeist has since acknowledged this slowdown in the wake of GPT-5's release and OpenAI deciding their true calling was building AI TikTok and ad features for ChatGPT.
From what I have seen, the main improvement in the news is AI video, which I write about more in the next section.
The pro- vs. anti-AI spats are getting steadily more heated online. Interestingly, the argument is becoming noticeably more mainstream. In all honesty I expected limited interest from this broader population, though it's still far from reaching true saturation among the less "online" crowd. Anyhow, my working theory is that generative AI has become something of a proxy for pro-/anti-corporate arguments, building off of existing tensions.
I believe Google is expected to release Gemini 3 sometime before the end of 2025, though I'm not exactly holding my breath in anticipation.
Growth in AI Video
AI video saw this year the sort of rapid growth that other generative media went through before it, and I've seen plenty of posts discussing and demoing it within certain spaces. Its consistency has improved quite a bit, and the number of "AI-y" artifacts has come down. This will presumably follow the same trajectory as the other mediums, though--at its best, a technically competent but creatively lacking result. And given the complete lack of proper integration into real art tools, it won't be possible to fix the problems. I really wonder why Adobe hasn't put out something decent for this yet. It's certainly not because of any moral qualms, at the least.
The Future of AI Art Revisited
I believe I speculated on this before, but we have more evidence to work with now. There are basically three ways this could go down in the near future:
1. Gen AI escapes its current magnitude of capability and creates art better than pretty much any human is capable of
2. Gen AI stays within its current magnitude but becomes a constant in the art pipeline
3. Gen AI is unable to be integrated into pipelines effectively and is marginalized to a mild curiosity
Given that it seems pretty clear that the development of "understanding" as a whole for AI is on pause, the likelihood of 1) doesn't seem very high. Similarly for 3): I don't see why existing AI tools, with some polishing and feature extension, wouldn't already be there in terms of capability. I suppose that already shows I consider 2) the most likely outcome. It's interesting how it seems like it will "work out" that way... like a picturesque ending to a story where technology empowers people instead of replacing them. Maybe there is a cosmic scriptwriter after all...
Finally, though generative AI is very strongly stigmatized within certain spaces at the moment, there are indications that this stance could relax over time if it becomes more evident that AI tooling cannot fully supplant human creatives. Naturally, the exact delineation of how much and into which processes automation should intrude will be a mainstay of debate for the foreseeable future. This tweet gained a lot of traction and demonstrated some shade of nuance:

Legitimate AI (LLM) use cases
LLMs are far from useless--they just also don't magically solve everything. Things usually tend to end up like that, I suppose, because we can't help but get excited and a little ahead of ourselves with predictions.
Anyhow, the most promising applications for LLMs seem to be:
- actually good voice assistants
  - This seems like by far the most obvious application to me, as well as being actually useful for people, but it's clearly not much of a priority. There is the significant hurdle that it's in every private company's interest to keep you locked into their app instead of providing an API for accomplishing tasks outside of it via agents/assistants (I sketch what that plumbing could look like after this list). Apple definitely has the clout to strongarm companies into doing so if it really cared to, but I guess they're more interested in... what? What is Apple even doing these days? Changing all their app modals to transparent panels?
- finding "needles in a haystack"
  - I saw news that a research model was able to bring certain papers to a mathematician's attention, which enabled some discoveries. I believe similar events have occurred in biotech, and apparently there is a specialized model developed for law firms that assists with finding relevant documents. (A toy version of the retrieval idea is also sketched after this list.)
- naturalistic search queries
  - This is pretty similar to the "needle in a haystack" thing, but half the trouble of finding a solution online is sometimes just figuring out the proper terms to get relevant results. LLMs can act as a mediating layer here (also sketched below).
- blackboxing programming boilerplate
  - Very well known already. Though LLMs aren't to be trusted with an entire codebase, they are plenty sufficient for blackboxing certain self-contained functionality.
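For the voice assistant point, here is roughly the "apps expose an API, an assistant dispatches to it" idea in code. To be clear, this is a minimal sketch of my own and not any real product's API: the LLM call is a stub that returns a canned tool call, and set_timer/send_message are hypothetical app endpoints.

```python
import json

# Hypothetical endpoints an app might expose to an assistant.
def set_timer(minutes: int) -> str:
    return f"Timer set for {minutes} minutes."

def send_message(to: str, body: str) -> str:
    return f"Sent to {to}: {body!r}"

TOOLS = {"set_timer": set_timer, "send_message": send_message}

def llm_pick_tool(utterance: str) -> str:
    # Stub standing in for a real model that, prompted with the tool
    # registry, maps a spoken request to a JSON tool call.
    return json.dumps({"tool": "set_timer", "args": {"minutes": 10}})

def assistant(utterance: str) -> str:
    call = json.loads(llm_pick_tool(utterance))
    return TOOLS[call["tool"]](**call["args"])

print(assistant("hey, set a timer for ten minutes"))
```

The dispatch itself is trivial--the hard part is getting companies to publish the endpoints at all, which is the lock-in problem above.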
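The "needle in a haystack" use case, reduced to a toy: real systems score documents with learned embeddings from a model, but a bag-of-words vector stands in here so the example runs with no dependencies. The papers and query are made up.

```python
from collections import Counter
import math

PAPERS = {
    "paper_a": "bounds on sphere packing via linear programming",
    "paper_b": "a survey of convolutional networks for image tasks",
    "paper_c": "modular forms and optimal sphere packings in dimension 8",
}

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: just word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_matches(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(PAPERS, key=lambda p: cosine(q, embed(PAPERS[p])),
                  reverse=True)[:k]

print(top_matches("sphere packing results"))
```

Note that the toy matcher can't tell "packings" relates to "packing"--which is exactly the gap a real model closes, and why this beats grepping a million papers.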
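And the naturalistic search layer has the same shape: a model sits between your vague description and a conventional keyword engine. Both functions below are stubs of my own invention, just to show where the LLM slots in.

```python
def rewrite_query(description: str) -> str:
    # Stub for an LLM prompted to turn a plain-language description
    # into the jargon a keyword engine expects.
    return "css flexbox vertically center items"

def keyword_search(terms: str) -> list[str]:
    # Placeholder for any conventional search backend.
    return [f"result for: {terms}"]

description = "how do I make the stuff inside a box line up in the middle vertically"
print(keyword_search(rewrite_query(description)))
```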
ASI Arms Race?
Just recently an "open letter" was released with some 700 names calling for a pause on ASI development until human alignment could be better verified. It included Geoffrey Hinton and other AI researchers, Steve Wozniak, and a smattering of other figures like the Duke and Duchess of Sussex (I mean, sure, why not).
Given the vast economic/political (really the same, I suppose) leverage that control over an ASI would grant a country over the rest of the world, I doubt any of the major players (which I think are basically just the US and China) feel particularly incentivised to step down and potentially give their competitor the freedom to build a lead. Although I've seen statements from China, OpenAI, and Anthropic calling for a pause or dramatic slowdown of AI research, I would hardly consider that a guarantee of anything. Even in the case where the majority of actors genuinely believe in it, there is still the factor of systemic forces (game theory-ish sort of stuff) that will compel them to act anyhow.
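To make the game-theory-ish point concrete, here's a toy payoff table (the numbers are purely illustrative, not estimates of anything). With payoffs shaped like this, racing strictly dominates pausing no matter what the rival does, which is exactly the structure that makes a voluntary pause unstable even if both sides would prefer mutual restraint.

```python
# (my move, rival's move) -> my payoff
PAYOFF = {
    ("pause", "pause"): 3,  # both slow down: the safest shared outcome
    ("pause", "race"):  0,  # I pause, rival builds a lead: worst for me
    ("race",  "pause"): 4,  # I build the lead
    ("race",  "race"):  1,  # mutual race: risky for everyone
}

for rival in ("pause", "race"):
    best = max(("pause", "race"), key=lambda me: PAYOFF[(me, rival)])
    print(f"if rival plays {rival!r}, best reply is {best!r}")
# Prints 'race' both times: the dilemma in miniature.
```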
Personally I am quite opposed to any further efforts to bootstrap AI into a higher magnitude of ability. The risks of worst-case scenarios are far too high to create agents with true intelligence that significantly exceeds any human's. LLMs and generative AI are a much different question, however. I have my doubts that improving an image generation model is going to develop sentience as a byproduct, and it has already been demonstrated quite clearly that LLMs do not comprehend generically the way humans can.
However, in light of recent developments there is limited reason to believe this is a problem we will have to deal with in the near future. LLMs are likely not the path to ASI, and it's not like there's some other immediately promising technology just waiting to be reaped. In a sense, we're back to where we started. Well, not entirely, since LLMs do have the power to perform the sort of "needle searching" that isn't feasible for humans. I suppose this could still be a means of acceleration, though I think it will most likely be just a flat time saver.