The way it totally disregards the many explicit instructions given in the "four panel" comic strip.
topato 12 hours ago [-]
Right? Came to the comments specifically for this, but am confused by people's responses. With prompt adherence this bad, is it worth the 2 cents you spent on it? I don't see how it's even useful for deciding if you want to use the Ultra version, or for anything else really... Maybe if you want to redo it in Photoshop? But at that point, breaking out the old Wacom tablet and making a composite image would probably be just as time-intensive, but with much higher image quality (and none of the telltale signs of AI gen)
ben_w 10 hours ago [-]
Even if you only earn $12/hour, 2 cents is worth it to save just 6 seconds.
An image has to be much worse than that to fail to save you 6 seconds.
That said, this is their own chosen example of what it can do, so I'd have to assume it is much worse than that on average.
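The break-even arithmetic above works out exactly; a quick sketch (using the $12/hour wage and $0.02 per image from the comment):

```python
hourly_wage = 12.00   # dollars per hour, from the comment above
image_cost = 0.02     # dollars per generated image

# Seconds of your time one image must save to pay for itself.
break_even_seconds = image_cost / (hourly_wage / 3600)
print(break_even_seconds)  # 6.0
```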
thanhhaimai 13 hours ago [-]
> Imagen 4 Ultra: When your creative vision demands the highest level of detail and strict adherence to your prompts, Imagen 4 Ultra delivers highly-aligned results.
It seems that you may need the "Ultra" version if you want strict prompt adherence.
It's an interesting strategy. Personally, I notice that most of the time I actually don't need strict prompt adherence for image generation. If it looks nice, I'll accept it. If it doesn't, I'll click generate again. For creative tasks, following the prompt too strictly might not be the outcome the users want.
mikepurvis 13 hours ago [-]
I've found this is an interesting balance with Copilot specifically. Like, on the one hand I'm glad it aims for the bare minimum and doesn't try to refactor my whole codebase on every shot... at the same time, there's certain obvious things where I wish it was able to think a bit bigger picture, or even engage me interactively, like "hey, I can do a self-contained implementation here, but it's a bit gross; it looks like adding dependency X to the project keeps this a one liner— which way should it go?"
chatmasta 12 hours ago [-]
I’ve had good experience with iterative prompting when generating images with Gemini (idk which model — it’s whatever we get with our enterprise subscription at work, presumably the latest.) It’s noticeably better than ChatGPT at incorporating its previous image attempt into my instructions to generate the next iteration.
weego 11 hours ago [-]
Hopefully it's better than midjourney at least. Ignoring key parts of the prompt seems to be a feature.
vunderba 3 hours ago [-]
Midjourney scores the absolute lowest in terms of prompt adherence against any of the other SOTA models (Kontext, Imagen, gpt-image-1, etc). At this point, its biggest feature is probably as an "exploratory tool" for visualizations by cranking up the chaos and weirdness parameters.
userbinator 13 hours ago [-]
In the little experimentation I did with AI image generation, it seems more a game of trying multiple times until you get something that actually looks right, so I wonder how many attempts they did.
cubefox 13 hours ago [-]
Though that was only Imagen 4 Fast, not Imagen 4 or Imagen 4 Ultra.
ajd555 14 hours ago [-]
Same for the poster. Asks for the ship to be going towards the right, and it's clearly doing the opposite
smokel 13 hours ago [-]
As seen from the AI's perspective.
math_dandy 13 hours ago [-]
To the left of the "detailed spaceship" I think I see a distortion pattern reminiscent of a cloaked Klingon bird of prey moving to the right. Or I'm just hallucinating patterns in nebular noise.
Jare 12 hours ago [-]
The ship is reminiscent of Galactica's oldschool vipers. Different, but very similar overall structure.
typpilol 12 hours ago [-]
I asked Copilot basically the same thing and got a much better result lol
Makes one wonder if there’s a hidden pre/system prompt for Imagen that’s interfering with optimal results.
arjie 12 hours ago [-]
Interesting how Imagen doesn't suffer this yellow tint effect.
typpilol 11 hours ago [-]
I assume that's from the retro word in the prompt
HocusLocus 10 hours ago [-]
I have found Imagein to be a good general purpose editor and we use it to clean up bitmaps, and adjust black points and white points and curves on greyscale, so it is good for preparing B&W greyscale photographs for print to compensate for dot gain in halftone screens on laser printers. Its 'color separation' capability is rudimentary/first draft though and is ridiculously close to inverse RGB rather than CMYK. For good color seps we use Photoshop so I can control undercolor removal.
neom 7 hours ago [-]
Are you talking about this google product, or another tool altogether?
anonymousiam 7 hours ago [-]
They're probably talking about the original Imagen printing product line from the 1980s. I thought I might be the only one in this thread to remember them, so I did a search for "printer" and found the GP comment.
Clicking on "Read the documentation" leads to a page that documents nothing about the latest Imagen models and only provides examples using Gemini 2.0 Flash.
typpilol 12 hours ago [-]
Classic Google
vunderba 7 hours ago [-]
I've updated my GenAI Comparison site to include Imagen4 Ultra, so now we have four Google related generative models (Gemini Flash, Imagen3, Imagen4, and Imagen4 Ultra).
Despite claims that Ultra supports improved strict prompt adherence, we saw no evidence that it scored any better than Imagen 4, and in some cases it seemed to ignore the prompt altogether (see the "Not the Bees" comic). In many cases, it also seemed much less steerable than Imagen 3, requiring many of the prompts to be rewritten.
There's some speculation it's Gemini 3's multi-modal output, and other speculation that it's an OpenAI model. Hard to say definitively, since these models tend to hallucinate when interrogated.
vunderba 3 hours ago [-]
Other than LMArena and a website I can't verify is authentic, it's hard for me to run tests on this new model, but I have serious doubts that it'll pass my more difficult prompts, such as drawing a valid 2D maze with a clearly marked exit and entrance.
gpt-image-1 is in a class of its own with regard to prompt adherence in the "text to image" category.
Once it hits GA I'll put it through its paces and add it to the site!
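For what it's worth, "valid maze" is mechanically checkable once the generated image has been reduced to a cell grid (which is the hard part in practice); a BFS sketch, with the grid and entrance/exit coordinates made up for illustration:

```python
from collections import deque

def maze_is_valid(grid, entrance, exit_):
    """Check that a path of open cells ('.') connects entrance to exit.

    grid is a list of equal-length strings; '#' is a wall.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {entrance}
    queue = deque([entrance])
    while queue:
        r, c = queue.popleft()
        if (r, c) == exit_:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

maze = ["..##",
        "#..#",
        "##..",
        "###."]
print(maze_is_valid(maze, (0, 0), (3, 3)))  # True
```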
cubefox 21 minutes ago [-]
I tested it with generating a man holding a Penrose triangle made of wood. While gpt-image-1 succeeded, nano-banana failed. The aesthetics of nano-banana did look much better though. I would guess that it is a diffusion model, based on the fact that it adds irrelevant but pretty background details, which gpt-image-1 tends to avoid.
mattxxx 14 hours ago [-]
I guess it's kinda nicely genuine that the "four panel comic strip" has some errors in it (misunderstanding caption + cat high-fiving itself in the bonus fifth panel)
jug 14 hours ago [-]
I was just thinking that. It has many, many errors.
1. Not seen browsing ”ai.dev”.
2. The text ”Imagen 4 is now generally available!” is spoken, not a comic caption.
3. Invalid second panel.
4. Hallucinates ”Meet Imagen 4 fast!”
5. Hallucinates ”It offers low..” etc. (this is the second part of a single sentence said by the cat)
6. Hallucinates ”You can export images in 2K!” (this sentence is not asked for)
7. Doesn’t have the cat and the dog in the fourth panel.
—
Here’s the gpt-image-1 counterpart with the issues I could find:
1. The text ”Imagen 4 is now generally available!” is still spoken, not a caption.
2. ”low latency” -> ”low-laten”
(3. Has that ugly gpt-image-1 trademark yellow filter requiring work in post to avoid.)
I didn’t bring up the ”retro comic look” thing. I certainly think it’s an issue with Imagen 4’s version. It doesn’t look very old school at all. But I can’t judge the OpenAI one either on that, I’m no comic book expert, so I just skipped that one.
The pervasive yellow tinge indicates that it is almost assuredly `gpt-image-1` - OpenAI's flagship model and (aesthetics aside) the highest-scoring model in terms of strict prompt adherence that I've seen.
With images and video, it's less clear exactly what they're doing, but it's watermarking at the pixel level. From one of their blog posts:
Videos are composed of individual frames or still images. So we developed a watermarking technique inspired by our SynthID for image tool. This technique embeds a watermark directly into the pixels of every video frame, making it imperceptible to the human eye, but detectable for identification.
ElevenLabs' audio watermarking is trivial to shake off with compression, but Google claims that SynthID is resilient to such manipulation.
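SynthID's actual embedding scheme isn't public; as a toy illustration of what "a watermark directly into the pixels, imperceptible to the human eye" can mean, here is a naive least-significant-bit embed (which, unlike SynthID is claimed to be, would not survive compression or resizing):

```python
import numpy as np

def embed_lsb(frame, bits):
    """Hide a bit string in the least significant bit of the first pixels.

    Toy illustration only: not SynthID's real (unpublished) scheme.
    """
    flat = frame.flatten()                       # flatten() returns a copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_lsb(frame, n):
    return frame.flatten()[:n] & 1

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
marked = embed_lsb(frame, payload)

assert (extract_lsb(marked, 8) == payload).all()
# Imperceptible: no pixel value changed by more than 1.
assert np.abs(marked.astype(int) - frame.astype(int)).max() <= 1
```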
edaemon 12 hours ago [-]
The cat also has more fingers on one hand than the other. It's a small, inconsequential thing but it always draws my eye in generated images.
latexr 12 hours ago [-]
> I didn’t bring up the ”retro comic look” thing. (…) I’m no comic book expert, so I just skipped that one.
I’m no Scott McCloud, but the OpenAI version definitely does a better job with the retro style. The yellow filter you criticised actually helps to sell the illusion. The Imagen version utterly fails in the retro area, that style is very much modern.
But there are other important flaws in the OpenAI version. The fourth panel has a different cat (the head shape and stripes are wrong) and it bleeds into the previous panel. Technically that could be a stylistic choice, except that the floor/table is inconsistent, making it clear it was a mistake.
math_dandy 13 hours ago [-]
I was going to nitpick the missing apostrophe in the movie poster caption ("STARFALLS REVENGE"), but it's missing from the prompt, too.
I tried the following prompt and other than producing a four panel comic that was black and white it completely ignored every other instruction. This was with 4 ultra. Maybe someone else will have better luck but the failure seemed stable.
'''
A four panel comic strip. Simple black on white. Stick figures for characters. In the first panel there is a stick figure man and a stick figure bird eating bird seed at his feet. He is slightly hunched over to show he is looking at the bird. In the second panel. He is more hunched over looking more closely at the bird. In the third panel he is even more hunched over practically with his head to the bird, he is crouched down, knees bent, hands on thighs. In the upper left of the third panel the tip of an enormous beak can be seen, but it's only a few lines so could be anything. In the final panel the beak has gobbled up the man and his arms and legs are flailing outside of the beak while the small bird continues to eat birdseed on the ground.
'''
qoez 14 hours ago [-]
Looks so much better than the yellow tinted chatgpt output in my eyes
tripplyons 13 hours ago [-]
After manually white balancing to remove the tint, I find GPT-Image-1 (the model used in ChatGPT) to be better.
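One simple way to do that white balancing is the classic gray-world method: scale each channel so its mean matches the overall mean. A minimal sketch (the uniformly tinted test image is synthetic):

```python
import numpy as np

def gray_world_balance(img):
    """Remove a global color cast under the gray-world assumption.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)

# A yellow-tinted image (strong R and G, weak B)...
tinted = np.full((4, 4, 3), [0.8, 0.8, 0.4])
balanced = gray_world_balance(tinted)
# ...comes out neutral: all three channel means are now equal.
```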
a1371 9 hours ago [-]
In the couple of prompts I gave it, it's better than the last version, but I feel that Google is sacrificing quality for the sake of speed. While it's a lot faster, the output is not as good as OpenAI's.
Meanwhile Veo 3 is far better than OpenAI's equivalent. I assume speed is not a priority there; both take their time.
lacoolj 11 hours ago [-]
> the generally availability
One of the biggest corporations in the world and they can't re-read before posting a typo in the title.
Heads be shakin
jimmy76615 10 hours ago [-]
I'm glad they can't. The reason large corporations tend to suck is that some bored management guy cares about typos and invents a process for getting your headlines approved by some other dude who is just as bored and useless.
It's a typo, it doesn't matter.
nkzd 14 hours ago [-]
I am currently building an AI product which relies on Imagen 3 to generate a lot of photorealistic, cinematic or HDR images. I tried Imagen 4 during preview, but results were too "cartoonish". Did anyone else have the same experience?
joegibbs 10 hours ago [-]
Yeah me too, I think 3 does a much better job for photos or even just images that look like realistic renders. I use 3 for generating grids of age-progressed portraits for a game and it does a better job at sticking to the prompt. 4 also seems to spit out ones that have that really smooth look that makes it really obvious it’s AI.
LeoPanthera 13 hours ago [-]
Yes, it seems very reluctant to generate anything that could be mistaken for a photo.
djha-skin 4 hours ago [-]
Maybe the fact that they're working on imagen explains why Gemini is just so bad.
nh43215rgb 9 hours ago [-]
This is different from nano banana that others are talking about as the new google model?
coldcode 12 hours ago [-]
>Image generation may not always trigger:
>The model may output text only. Try asking for image outputs explicitly (e.g. "generate an image", "provide images as you go along", "update the image").
>The model may stop generating partway through. Try again or try a different prompt.
Seriously?
typpilol 12 hours ago [-]
Does it still charge 2 cents for that? Lol
Revisional_Sin 14 hours ago [-]
Wasn't Imagen 4 released months ago?
nevir 14 hours ago [-]
Yes, but usage was very limited / restricted. Now it's widely available
cubefox 13 hours ago [-]
I hate that they always announce their image models months before they make them available. They should just announce them later. OpenAI does this much better, with a few days delay at most.
SweetSoftPillow 11 hours ago [-]
They were available, just rate limited.
smokel 13 hours ago [-]
The comments here are priceless. In less than five years' time we have gone from "That's impossible" to "Meh, it doesn't solve P=NP when prompted."
For those commenting in the latter category, it might be worthwhile to read a bit about the underlying technology and share your insights on why it does not deliver.
oinfoalgo 10 hours ago [-]
Deep Dream was 2016.
The problem with 2025 is I have seen thousands of better examples than that landscape. The reflections in the lake are complete trash.
Then I think of Veo 3 that is just incredible. So no, it is not impressive if a still from the video model is vastly better than the static image generator from the same company.
I find it especially annoying because I can't think of another company this would happen at. It is just so Google.
quantumHazer 13 hours ago [-]
This is false, and the two things are not correlated.
If you followed the news during the GAN cycle, you could extrapolate that deep NNs could do this type of thing. It is really cool that these things happened so fast, but we are talking about companies that have the money to deploy thousands of cars around the globe to collect data, so they absolutely know how to gather data.
amelius 11 hours ago [-]
You are ignoring all the hyping here.
ivape 11 hours ago [-]
Anyone know if this can be prompted with image to image?
dsrtslnd23 11 hours ago [-]
they explicitly do not support that yet.
gawa 14 hours ago [-]
The webcomic is awful. It feels off: the characters look very fake, unsettling in the way they communicate. The prompt is shown below the image, but to me the result looks closer to a prompt like "Create lifeless characters reciting marketing slop. They must fake an over-exaggerated excitement, but it should be clear they don't believe in what they're saying and have no souls".
Also, the prompt specifically asks that "Panel 4 should show the cat and dog high-fiving", but the cat is high-fiving ... the cat. Personally I find this hallucinated plot twist good, it makes the ending a bit better. Although technically this demonstrates a failure of the tool to follow the instructions in the prompt. Interesting choice of example for an official announcement.
typpilol 12 hours ago [-]
It's weird because I just asked the basic copilot app the same and got a much better result.
It's definitely just a matter of personal preference. To me, your image looks much worse and has the very distinctive look of the GPT-image-1 model.
cobbzilla 11 hours ago [-]
It’s more than visual preferences — his image actually adheres to the specified requirements. it hasn’t been shown that Imagen can do that, which might be a showstopper for many people, regardless of aesthetics.
typpilol 2 hours ago [-]
And this is literally just the free tier copilot app from the android store lol. Something I would never use in professional life unlike Claude
CrzyLngPwd 12 hours ago [-]
As others have said, with so many errors, it's just more AI slop.
Does the world need yet another AI slop generator?
https://i.imgur.com/kSuqCYg.jpeg
https://tug.org/TUGboat/tb02-2/tb03imagen.pdf
https://genai-showdown.specr.net?models=IMAGEN_3,IMAGEN_4,IM...
https://chatgpt.com/share/689f7e4b-01e4-8011-8997-0f37edf8c2...
https://genai-showdown.specr.net
Repo: https://github.com/google-deepmind/synthid-text
Paper: https://www.nature.com/articles/s41586-024-08025-4
https://deepmind.google/discover/blog/watermarking-ai-genera...
Muphry's Law strikes again.
Indeed.