DALL-E

DALL-E (stylized DALL·E) is an artificial intelligence program that creates images from textual descriptions. It uses a 12-billion parameter[1] version of the GPT-3 Transformer model to interpret natural language inputs (such as "a green leather purse shaped like a pentagon" or "an isometric view of a sad capybara") and generate corresponding images.[2] It can create images of realistic objects ("a stained glass window with an image of a blue strawberry") as well as objects that do not exist in reality ("a cube with the texture of a porcupine").[3][4][5] Its name is a portmanteau of WALL-E and Salvador Dalí.[2][1] Many neural nets from the 2000s onward have been able to generate realistic images.[2] DALL-E, however, is able to generate them from natural language prompts, which it "understands [...] and rarely fails in any serious way".[2] OpenAI has not released source code for either model, although a "controlled demo" of DALL-E is available on OpenAI's website, where output from a limited selection of sample prompts can be viewed.[1] Open-source alternatives, trained on smaller amounts of data, like DALL-E Mini, have been released by others.[6] According to MIT Technology Review, one of OpenAI's objectives was to "give language models a better grasp of the everyday concepts that humans use to make sense of things".[7] https://en.wikipedia.org/wiki/DALL-E
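
Rough idea of the pipeline described above, as a toy Python sketch. Every component here is a stand-in, not OpenAI's model or code: the prompt is tokenized, an autoregressive Transformer samples a grid of discrete image tokens conditioned on the text, and a learned decoder (a discrete VAE in the DALL-E paper) turns those tokens into pixels. The 8192-code vocabulary and 32x32 token grid are the figures from the paper; every name and function body below is made up for illustration.

# Conceptual sketch of a DALL-E-style text-to-image pipeline.
# All components are toy stand-ins; only the overall structure
# (text tokens -> autoregressive image tokens -> pixel decoder) is real.
import numpy as np

VOCAB_TEXT = 1000      # hypothetical text-token vocabulary size
VOCAB_IMAGE = 8192     # the DALL-E paper uses 8192 discrete image codes
GRID = 32              # image tokens form a 32x32 grid in the paper

def tokenize(prompt: str) -> np.ndarray:
    """Toy tokenizer: hash each word into a fixed text vocabulary."""
    return np.array([hash(w) % VOCAB_TEXT for w in prompt.lower().split()])

def sample_image_tokens(text_tokens: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for the Transformer: sample image tokens one at a time.

    A real model would condition each step on the text tokens and on all
    previously sampled image tokens; here we just draw random codes.
    """
    return np.array([rng.integers(VOCAB_IMAGE) for _ in range(GRID * GRID)])

def decode_to_pixels(image_tokens: np.ndarray) -> np.ndarray:
    """Stand-in for the dVAE decoder: map each code to an 8x8 gray patch."""
    patches = (image_tokens.reshape(GRID, GRID) / VOCAB_IMAGE * 255).astype(np.uint8)
    return np.kron(patches, np.ones((8, 8), dtype=np.uint8))  # 256x256 "image"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    text = tokenize("a green leather purse shaped like a pentagon")
    codes = sample_image_tokens(text, rng)
    img = decode_to_pixels(codes)
    print(img.shape)  # (256, 256)

DALL-E Mini (linked below) follows a similar recipe, pairing a BART-style seq2seq model that emits image tokens with a VQGAN decoder that renders them as pixels.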

DALL-E Mini https://github.com/borisdayma/dalle-mini

cf Generative Art

Repl.it generating 2D computer game assets https://twitter.com/amasad/status/1514322225184223234

