• Lemminary@lemmy.world · 24 days ago

          Large Language Models generate human-like text. They operate on words broken up into tokens and predict the next one in a sequence. Image diffusion models start from an image of pure noise and iteratively denoise it into a coherent image.
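          Loosely, the two generation loops look like this toy sketch (no real models involved; every function here is a made-up stand-in just to show the shape of each loop):

          ```python
          import random

          def next_token(context):
              # A real LLM scores its whole vocabulary given the context and
              # samples one token; this stand-in just picks a random word.
              return random.choice(["the", "cat", "sat", "on", "a", "mat", "."])

          def generate_text(prompt, max_tokens=10):
              tokens = prompt.split()
              for _ in range(max_tokens):
                  tokens.append(next_token(tokens))  # predict one token at a time
              return " ".join(tokens)

          def denoise_step(image):
              # A real diffusion model predicts the noise and subtracts it;
              # this stand-in just nudges every pixel toward zero.
              return [0.8 * px for px in image]

          def generate_image(pixels=16, steps=50):
              image = [random.gauss(0, 1) for _ in range(pixels)]  # pure noise
              for _ in range(steps):
                  image = denoise_step(image)  # refine the whole image each step
              return image

          print(generate_text("a cat"))
          print(generate_image()[:4])
          ```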

          The confusion comes from services like OpenAI that take your prompt, dress it up all fancy, and then feed it to a diffusion model.
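          As a hedged sketch of that flow (rewrite_prompt_with_llm and run_diffusion_model are hypothetical stand-ins, not OpenAI's actual API):

          ```python
          def rewrite_prompt_with_llm(user_prompt: str) -> str:
              # Hypothetical: imagine this asks an LLM to expand the prompt,
              # i.e. the "dress it up all fancy" step.
              return f"A detailed, photorealistic rendering of {user_prompt}, soft light, 35mm"

          def run_diffusion_model(prompt: str) -> bytes:
              # Hypothetical: imagine this returns image bytes from a diffusion model.
              return b"<image bytes>"

          def generate(user_prompt: str) -> bytes:
              detailed = rewrite_prompt_with_llm(user_prompt)  # the LLM only touches text
              return run_diffusion_model(detailed)             # the diffusion model makes the pixels

          image_bytes = generate("a cat on a mat")
          ```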

      • Cethin@lemmy.zip · 24 days ago

        Nope. LLMs are still what’s used for image generation. They aren’t AI though, so no.

            • LwL@lemmy.world · 24 days ago (edited)

              Holy confidently incorrect

              LLMs aren’t generating the images. When you’re “using an LLM for image generation”, what’s actually happening is that the LLM talks to an image generation model and then hands you the resulting image.
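              A hedged sketch of that hand-off; in practice it’s usually a tool/function call, and llm_decide and image_model here are made-up stand-ins, not any vendor’s real interface:

              ```python
              import json

              def llm_decide(user_message: str) -> str:
                  # Hypothetical: the LLM answers with a tool call instead of prose.
                  return json.dumps({"tool": "generate_image",
                                     "prompt": f"detailed illustration of {user_message}"})

              def image_model(prompt: str) -> bytes:
                  # Hypothetical: a separate image model actually renders the pixels.
                  return b"<image bytes>"

              def chat(user_message: str) -> bytes:
                  call = json.loads(llm_decide(user_message))
                  if call["tool"] == "generate_image":
                      return image_model(call["prompt"])  # the LLM delegated; the image model drew
                  raise ValueError("unexpected tool")

              picture = chat("a cat wearing a hat")
              ```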

              Ironically, there’s a hint of truth in it, though: for text-to-image generation the model does need to map words into a vector space to understand the prompt, which is also what LLMs do. (And I don’t know enough to say whether the image generation offered through LLMs has the LLM provide the vectors directly to the image-gen model rather than passing a prompt as text.)
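              For the “map words into a vector space” part, this is roughly how it looks with an open pipeline like Hugging Face diffusers; a hedged sketch, and the model id and the prompt_embeds hand-off are illustrative assumptions about Stable Diffusion, not how any hosted service works internally:

              ```python
              import torch
              from diffusers import StableDiffusionPipeline

              # Illustrative checkpoint; any Stable Diffusion model id would do.
              pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

              prompt = "a cat sitting on a mat"

              # Tokenize and encode the prompt into one vector per token -- the
              # same kind of embedding step an LLM starts from.
              tokens = pipe.tokenizer(prompt, padding="max_length",
                                      max_length=pipe.tokenizer.model_max_length,
                                      truncation=True, return_tensors="pt")
              with torch.no_grad():
                  prompt_embeds = pipe.text_encoder(tokens.input_ids)[0]

              # The denoiser never sees words, only these vectors, which steer
              # each denoising step toward an image matching the prompt.
              image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=25).images[0]
              ```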

              You could also consider the whole thing as one entity, in which case it’s just a more generalized generative-AI system that contains both an LLM and an image-gen model.