Got a lot of questions about how I made the featured image for the previous newsletter. Here is a breakdown of how to create a simple image like that.
I use Midjourney three times a week for my newsletter artwork. It's a fun hobby.
Someone should make the bonus version in Sweden 🇸🇪
I find many iterations of the same prompt are usually necessary to arrive at the desired results. I guess this is called "dynamic prompting" - never knew this. Thanks. Also, it should be noted that I have found MJ favors the original prompt creator, and that if you enter the same prompt, it definitely will not be as nice or as close as the original one.
There are seeds. Many, many seeds. Each time you prompt, a random seed is generated, so it can be that the seed you are getting is not creating the output you want. You should also make sure to check the version number and settings you are running in your prompt.
Many times people re-roll the same prompt 20 times until they get something they like. Or they use a fixed seed like --seed 87654321 in their prompt.
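If you want repeatable results, you can pin both the seed and the model version right in the prompt. The subject below is just a made-up example, but --seed, --v and --ar are the actual Midjourney parameters:

/imagine prompt: a red fox sitting in a misty forest, soft morning light --v 5.2 --ar 3:2 --seed 87654321

Re-running the same prompt with the same seed, version and settings should give you the same (or a very similar) grid, which makes it much easier to change just one detail at a time.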
Thanks. I also find that prompting too many times on the same subject tends to deteriorate the quality, LOL. By the way, I used your tutorial on creating consistent characters and I did indeed create similar characters. Thanks so much - it's exciting when you learn a new technique like that.
To add to the question from Todd, how do the newest tools compare with Midjourney, if you've tried them? For example, Adobe Firefly or Microsoft Designer. I think DALL-E is okay, although possibly my prompting is not terrific, or I'm not seeing alternative ways to establish the type of output I'm intending. Adobe looks potentially fabulous, but I've barely spent any time playing with it; it does have a range of easy-to-use drop-down menus to refine the visual output, reducing reliance on plain-language instructions.
Yes, the UI/UX for Firefly is way better than Midjourney's, however Midjourney is building its own web tool. And from what I have seen, it is very good, taking the same approach to settings as Firefly does (it's all text-based prompts behind the scenes).
In the end it will be us consumers who are the big winners, since the tools and their price points will just keep improving. We are currently at the Tetris stage of AI, so it's very hard to tell where things are going.
In terms of images produced, which do you think is currently the best? The aesthetic seems a bit different for each, to my eye.
Good post. Many thanks for making this post publicly available. Your knowledge is really helpful.
Looks like I'm not the only one fumbling around with these tools. Part of me is wondering how much of my difficulty is me/my prompt skills vs. limitations of the model. It's all so new it's hard to tell.
I have photography experience & very particular ideas about the final image, and I'm always disappointed in the results I get. Often the image will be 80% there; I only want the "person" in the image to look up and not down, but when I modify the prompt I get a different face, which is not really what I wanted. Or I need the "person" to have a brown coat instead of blue, to be consistent with the other images I generated a week ago.
My experience so far has been to work on prompts for a few hours until I basically give up and accept what I get.
Have you read my tutorial on consistent characters in Midjourney? It might help you.
https://linusekenstam.substack.com/p/tutorial-how-to-create-consistent
The first paragraph explains SO MUCH. Thank you. I'll be reading this immediately after I clear out my day job tasks. By the way, I signed up for a paid subscription.
Very helpful. In your experience, is one model better than another for certain tasks? I've mainly been using DALL-E and haven't gotten into Midjourney yet.
So Midjourney is light years ahead of DALL-E at the moment. I'd say Stable Diffusion XL and Firefly are getting better, but I'm just very keen on using Midjourney.
I'll have to try out Midjourney. Thanks for sharing your tips on prompts.