I've been using openjourney (and MJ/SD) quite a bit, and it does generate "better" with "less" compared to standard v1.5, but it's nowhere close to Midjourney v4.
Midjourney is so far ahead in generating "good" images across a wide space of styles and subjects with very little prompting, while SD requires careful negative prompts and elaborate prompting to get something decent.
Very interested in being wrong about this, there's so much happening with SD that it's hard to keep up with what's working best.
But I still feel 2.x is somehow a degradation of 1.x; it's hard to get something decent out of it. The custom training/tuning and all is nice (and certainly the top reason to use SD over MJ, since there are many use cases MJ just can't do), but it should not be used as a band-aid for apparently inherent shortcomings in the new CLIP layer (I'm assuming this is where the largest difference comes from, since the U-Net is trained on largely the same dataset as 1.x).
If you add qualifiers such as soft colors, impressionistic, western animation, stencil, etc., you can steer Midjourney towards much more personalized styles.
Midjourney's images upscaled to their current max offering look fantastic, that's for sure. My wife generates some really great stuff just for fun.
But yeah, for now all my custom models are 1.5, so I've yet to fully upgrade; most of the community seems to be doing the same at the moment.
Anyone can get a camera phone, take a picture, and use some free software (e.g. GIMP) to get great results in post-processing.
Most non-expert users though want to click on a few pre-defined filters, find one they like & run with it, rather than having more control yet poorer results (precisely because they _aren't_ experts).
If Midjourney applies this to all their artwork, then maybe it alleviates some of the ethical concerns (Midjourney then has a "style" independent of the training data).
On the same topic, is there some sort of 'awesome list' of fine-tuned SD models? (something better than just browsing https://huggingface.co/models?other=stable-diffusion)
Apologies for the bad taste, but I simply love that song, an absolute classic
Anyhow, regarding Civitai, you can filter out the NSFW models quite easily.
It ought to be noted that even though Protogen 5.3 is not an explicit porn model, it was trained with explicit models... so it can be... raunchy as well.
BTW I love your app! At my desk I use Automatic1111 (because I have a decent GPU), but it's so nice to have a lean back experience on my iPad. Also, even my 6yo son can use it, as he doesn't need to manipulate a mouse.
So unless you want cliche-as-fuck fantasy and samey waifu material, classic SD seems to do a much better job.
If there's one style I dislike more than the bland Midjourney style, it's the super-smooth "realistic" child faces on adult bodies that protogen (and its own many descendants) spit out.
This offering is like going back to August 2022.
MJ v4 doesn't even use Stable Diffusion as a base; a fine-tune of the latter will never come close to achieving what they do.
 - https://discord.com/channels/729741769192767510/730095596861...
I thought everything besides DALL-E was SD under the hood.
Is Midjourney heavily funded? Because if they can compete with SD, why aren't we seeing lots of people doing the same, even in the open source space?
A1111 / InvokeAI -- Stable Diffusion UI tools
Protogen series -- popular Stable Diffusion checkpoints you can download to generate content in various styles
Which is sub-optimal -> bad. You don't want to train on output from an AI, because you'll end up with a worse version of whatever that AI is already bad at (hands, feet, and countless other things). This is the AI feedback loop that people have been talking about.
So instead of figuring out what Midjourney has done to get such good results, people just blatantly copied those results and fed them directly into the AI, living up to the art-thief stereotype.
I suppose PyTorch is / was Facebook, but it feels more arm's length. I don't have to install and run a Facebook CLI to use it (nobody get any ideas).
You don't need an HF CLI; you just need git LFS (which I believe ships with most git installs these days) to pull the files off of HF (unfortunately still requiring an account with them). It would be nice to see truly open mirrors for this stuff that don't have to involve any company.
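For what it's worth, a minimal sketch of pulling a checkpoint repo that way (assuming git and git-lfs are installed; the repo URL is just the one linked elsewhere in this thread):

```python
# Minimal sketch: clone a model repo off the Hub with plain git + LFS.
# Assumes git and git-lfs are installed on the machine.
import subprocess

subprocess.run(["git", "lfs", "install"], check=True)
subprocess.run(
    ["git", "clone", "https://huggingface.co/openjourney/openjourney"],
    check=True,
)
```

Once `git lfs install` has been run, a plain clone will fetch the large weight files along with the rest of the repo.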
give it 10 years and this will change
cool stuff, thanks
See this FAQ here: https://www.licenses.ai/faq-2
Q: "Are OpenRAILs considered open source licenses according to the Open Source Definition? NO."
A: "THESE ARE NOT OPEN SOURCE LICENSES, based on the definition used by Open Source Initiative, because it has some restrictions on the use of the licensed AI artifact.
That said, we consider OpenRAIL licenses to be “open”. OpenRAIL enables reuse, distribution, commercialization, and adaptation as long as the artifact is not being applied for use-cases that have been restricted.
Our main aim is not to evangelize what is open and what is not but rather to focus on the intersection between open and responsible licensing."
FWIW, there's a lot of active discussion in this space, and it could be the case that e.g. communities settle on releasing code under OSI-approved licenses and models/artifacts under lowercase "open" but use-restricted licenses.
So I do not understand how the resulting model weights are a subject of copyright at all, given that the US has firmly rejected the concept of "sweat of the brow" as a copyrightability standard. Maybe in the EU you could claim database rights over the training set you collected. But the US refuses to enforce those either.
 I'm not talking about "is AI art copyrightable" - my personal argument would be that the user feeding it prompts or specifying inpainting masks is enough human involvement to make it copyrightable.
The Copyright Office's refusal to register AI-generated works has been, so far, purely limited to people trying to claim Midjourney as a coauthor. They are not looking over your work with a fine-toothed comb and rejecting any submissions that have badly-painted hands.
I personally think AI training is fair use, but a court will need to decide that. Furthermore, fair use for training would not include fair use for selling access to the AI or its output.
 The few bits of training code I can find are all licensed under OSI/FSF approved licenses or using libraries under such licenses.
Not a lawyer, but as I understand it, the most likely way this question will be answered (for practical purposes in the US) is via the ongoing lawsuits against GitHub Copilot, Stable Diffusion, and Midjourney.
I personally agree that the creativity is in the source images and the training code. But unless it is decided that, for legal purposes, "AI artifacts" (the files containing model weights, embeddings, etc.) are just transformations of training data, and therefore content subject to the same legal standards as content, I see a lot of value in letting people license training data, code, and models separately. And if models are just transformations of content, I expect we can adjust the norms around licensing to achieve similar outcomes (i.e., balancing open sharing with some degree of creator-defined use restriction).
This is a different issue where the OP is arguing that the weights file is not eligible for copyright in the US. That's an interesting and separate point which I haven't really seen addressed before.
I should've been more specific: I was thinking mainly of the artists v. stable diffusion lawsuit which makes the specific technical claim that the stable diffusion software (which includes a bunch of "weights files") includes compressed copies of the training data. (Line 17, "By training Stable Diffusion on the Training Images, Stability caused those images to be stored at and incorporated into Stable Diffusion as compressed copies", https://stablediffusionlitigation.com/pdf/00201/1-1-stable-d...).
I expect that if the decision hinges on this claim, it could have far-reaching implications re: model licensing. I think this is along the lines of what you've laid out here!
Likewise, if I drew my own art and used it as sample data for a completely trained-from-scratch art generator, I would own the result. The key problem is that, because AI companies are not licensing their data, there isn't any creativity that they own for them to assert copyright over. Even if AI training itself is fair use, they still own nothing.
(Whether other artists can claim copyright over some recognisable sample is another question.)
I don't think thin copyright would apply to AI model weights, since those are trained entirely by an automated process. Hyperparameters are selected primarily for functionality and not creative merit. And the actual model architectures themselves would be the subject of patents, not copyright; since they're ideas, not expressions of an idea.
Related note: have we seen someone try to patent-troll AI yet?
The Verve's Richard Ashcroft lost partial copyright and all royalties for "Bitter Sweet Symphony" because a sample from the Rolling Stones wasn't properly cleared: https://en.m.wikipedia.org/wiki/Bitter_Sweet_Symphony
Men at Work lost a copyright suit over their famous "Down Under" because it used a tune from "Kookaburra Sits in the Old Gum Tree" as an important part of the song.
Agreed. By that logic, William S Burroughs wouldn't own his best novels:
This is why you can't copyright maps, and why scans of public domain artwork are automatically public domain. Because there's no creativity in them.
The courts do not oppose the use of algorithms or mechanical tools in art. If I draw something in Photoshop, I still own it. Using, say, a blur or contrast filter does not reduce the creativity of the underlying art, because there's still an artist deciding what filters to use, how to control them, et cetera.
That doesn't apply for AI training. The controls that we do have for AI are hyperparameters and training set data. Hyperparameters are not themselves creative inputs; they are selected by trial and error to get the best result. And training set data can be creative, but the specific AI we are talking about was trained purely on scraped images from the Internet, which the creator does not own. So you have a machine that is being fed no creativity, and thus will produce no creativity, so the courts will reject claims to ownership over it.
 Trap streets ARE copyrightable, though. This is why you'll find fake streets that don't exist on your maps sometimes.
 Several museums continue to argue the opposite - i.e. that scanning a public domain work creates a new copyright on the scan. They even tried to harass the Wikimedia Foundation over it: https://en.wikipedia.org/wiki/National_Portrait_Gallery_and_...
The closest analogue I can think of would be copyrighting a Magic: The Gathering deck. Robert Hovden did that, and somehow convinced the Copyright Office to go along with it. As far as I can tell this never actually got court-tested, though. You can get a thin copyright on arrangements of other works you don't own, but a critical wrinkle in that is that an MTG deck is not merely "an arrangement of aesthetically pleasing card art". The cards are picked because of their gameplay value, specifically to min-max a particular win condition. They are not arrangements, but strategies.
Here's the thing: there is no copyright in game rules. Those are ideas, which you have to patent. And to the extent that an idea and an expression of that idea are inseparable, the idea part makes the whole uncopyrightable. This is known as the merger doctrine. So you can't copyright an MtG deck that would give you de-facto ownership over a particular game strategy.
So, applying that logic back to the training set, you'd only have ownership inasmuch as your training set was selected for a particular artistic result, and not just "reducing the loss function" or "scoring higher on a double-blind image preference test".
As far as I'm aware, there are companies that do creatively select training set inputs, e.g. NovelAI. However, most of the "generalist" AI art generators, such as Stable Diffusion, Craiyon, or DALL-E, were trained on crawled data without much or any tweaking of the inputs. A lot of them have overfit text prompts, because the people training them didn't even filter for duplicate images. You can also specifically fine-tune an existing model to achieve a particular result, which would be a creative process if you could demonstrate that you picked all the images yourself.
But all of that only applies to the training set list itself; the actual training is still noncreative. The creativity has to flow through to the trained model. There's one problem with that, though: if it turns out that AI training for art generators is not fair use, then your copyright over the model dissolves like cotton candy in water. This is because without a fair use argument, the model is just a derivative work of the training set images, and you do not own unlicensed derivative works.
 Which is also why Cory Doctorow thinks the D&D OGL (either version) is a water sandwich that just takes away your fair use rights.
 WotC actually did patent specific parts of MTG, like turning cards to indicate that they've been used up that turn.
 I may have posted another comment in this thread claiming that training sets are kept hidden. I had a brain fart, they all pull from LAION and Common Crawl.
This is also why people sell T-shirts with stolen fanart on them. The artists who drew the stolen art own nothing and cannot sue. The original creator of that art can sue, but more often than not they don't.
Interesting. Why is this happening?
But, I'm familiar with poking around in source code repos!
I found this https://huggingface.co/openjourney/openjourney/blob/main/tex... . It's a giant binary file. A big binary blob.
(The format of the blob is python's "pickle" format: a binary serialization of an in-memory object, used to store an in-memory object and later load it, perhaps on a different machine.)
But, I did not find any source code for generating that file. Am I missing something?
Shouldn't there at least be a list of input images, etc and some script that uses them to train the model?
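For what it's worth, you can at least poke at the blob itself; here's a minimal sketch with PyTorch (assuming it's a standard SD-style .ckpt, and keeping in mind that pickle files can execute arbitrary code on load, so only do this with files you trust):

```python
# Minimal sketch: load the pickled checkpoint and list a few weight tensors.
# The filename is whatever you saved the downloaded blob as.
import torch

ckpt = torch.load("openjourney.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights

for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```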
Yeah, no. Nobody in the AI community actually provides training code. If you want to train from scratch you'll need to understand what their model architecture is, collect your own dataset, and write your own training loop.
The closest I've come across is code for training an unconditional U-Net; those just take an image and denoise/draw it. CLIP also has its own training code - though everyone just seems to use OpenAI CLIP. You'll need to figure out how to write a Diffusers pipeline that lets you combine CLIP and a U-Net together, and then alter the U-Net training code to feed CLIP vectors into the model, etc. Stable Diffusion also uses a Variational Autoencoder in front of the U-Net to get higher resolution and training performance, which I've yet to figure out how to train.
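To make the "feed CLIP vectors into the U-Net" part concrete, here's a rough sketch of a single text-conditioned training step using diffusers/transformers; the U-Net config is illustrative, and the random latents stand in for what the VAE encoder would normally produce from a real dataset:

```python
# Rough sketch of one text-conditioned denoising training step.
# The U-Net sizes and the random "latents" are placeholders, not SD's real config.
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
unet = UNet2DConditionModel(
    sample_size=64, in_channels=4, out_channels=4, cross_attention_dim=768
)
scheduler = DDPMScheduler(num_train_timesteps=1000)

latents = torch.randn(1, 4, 64, 64)  # would come from the VAE encoder
tokens = tokenizer(["a painting of a fox"], padding="max_length",
                   truncation=True, return_tensors="pt")
text_embeds = text_encoder(tokens.input_ids).last_hidden_state

noise = torch.randn_like(latents)
timesteps = torch.randint(0, 1000, (1,))
noisy_latents = scheduler.add_noise(latents, noise, timesteps)

# The CLIP embeddings condition the U-Net via cross-attention.
pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeds).sample
loss = F.mse_loss(pred, noise)  # then backprop and step the optimizer
```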
The blob you are looking at is the actual model weights. For you see, AI is proprietary software's final form. Software so proprietary that not even the creators are allowed to see the source code. Because there is no source code. Just piles and piles of linear algebra, nonlinear activation functions, and calculus.
For the record, I am trying to train-from-scratch an image generator using public domain data sources. It is not going well: after adding more images it seems to have gotten significantly dumber, with or without a from-scratch trained CLIP.
I think Google Imagen is actually using a frozen T5 text encoder rather than CLIP.
 Specifically, the PD-Art-old-100 category on Wikimedia Commons.
The SD training set is available and the exact settings are described in reasonable detail:
> The model is trained from scratch 550k steps at resolution 256x256 on a subset of LAION-5B filtered for explicit pornographic material, using the LAION-NSFW classifier with punsafe=0.1 and an aesthetic score >= 4.5. Then it is further trained for 850k steps at resolution 512x512 on the same dataset on images with resolution >= 512x512.
LAION-5B is available as a list of URLs.
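E.g., a minimal sketch of streaming that URL/caption metadata from the Hub rather than downloading all of it (the dataset name and field names are assumptions; check the LAION pages for the current layout):

```python
# Minimal sketch: stream the LAION metadata (image URLs + captions).
# "laion/laion2B-en" and the field names are assumptions.
from datasets import load_dataset

laion = load_dataset("laion/laion2B-en", split="train", streaming=True)
first = next(iter(laion))
print(first)  # expect fields like URL, TEXT, width/height, NSFW score, etc.
```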
To my eye, kmeisthax's comment appears to be entirely accurate.
Well, that is to say, assuming the facts they listed are accurate, I agree with the conclusion: it's not "open source" at all. (And certainly not Libre.)
The things you said do not describe an open source project.
The point here is that the title of this thing is incorrect. If the ML community doesn't agree, it's because they are (apparently) walking around with incorrect definitions of "open source" and "Free Software" and "Libre Software".
At first I thought this might be a joke site, the poorly written copy reads like a parody.
Also, as others have pointed out, this is basically just yet another Stable Diffusion checkpoint.
How reproducible would the pictures be?
Check out this video from prompt muse as an example: https://youtu.be/XjObqq6we4U
Anyone else have similar issues? I loaded it both from a locally downloaded version of the model and by entering the Hugging Face path with my token, which needed write (?!?) permissions.
Anyone run into similar issues? Suggestions?
My guess is that internally they do a slightly more careful and less porn/anime-oriented version of what the 4chan/Protogen people do: make lots of fine-tuned checkpoints, merge them, fine-tune on a selection of outputs from that, merge more, throw away most of it, try again, etc. Maybe there are other models in the mix, but I wouldn't bet on it.
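For the merging part, a plain weighted average of two checkpoints' state_dicts is roughly what the popular merge scripts do; a minimal sketch (filenames and the 0.5 weight are made up):

```python
# Minimal sketch: weighted average ("merge") of two SD-style checkpoints.
# Assumes both files store their weights under a "state_dict" key.
import torch

a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]
alpha = 0.5  # interpolation weight between the two models

merged = {k: alpha * a[k] + (1 - alpha) * b[k] for k in a if k in b}
torch.save({"state_dict": merged}, "merged.ckpt")
```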
(you still need a Nvidia GPU)
Extract the zip file and run the batch file. Find the ckpt (checkpoint) file for a model you want. You can find openjourney here: https://huggingface.co/openjourney/openjourney/tree/main. Add it to the model directory.
Then you just need to go to a web browser and you can use the AUTOMATIC1111 webui. More information here: https://github.com/AUTOMATIC1111/stable-diffusion-webui
I just use https://softology.pro/tutorials/tensorflow/tensorflow.htm
- A few manual steps, but mainly a well-tested installer that does it all for you.
Our Repo: https://github.com/invoke-ai/InvokeAI
You will need one of the following:
An NVIDIA-based graphics card with 4 GB or more of VRAM.
An Apple computer with an M1 chip.
Download the model from Huggingface, add it through our Model Mgmt UI, and then start prompting.
Also, a quick plug: we're actively looking for people who want to contribute to our project! Hope you enjoy using the tool.
We're mainly waiting on others in the space (and/or increased investment by Intel/AMD) to offer support more broadly.
At this rate, I'd give Apple a decent shot at having better support than them, with the Neural Engine & CoreML work they've been releasing.
Also, if you want to try the SaaS for free, feel free to submit a request using our contact-us form.
The Web interface for SD is based on InvokeAI