DALL·E 2: how the artificial intelligence that creates works of art from users' text works



The research company OpenAI has presented DALL·E 2 to the public, an artificial intelligence (AI) capable of creating works of art from what its users write. The program understands what people write and then transforms that information into images.

Its creators claim that DALL·E 2 is capable of combining concepts, attributes, and styles, something that had not been seen in an artificial intelligence until now. This AI represents an important advance over its previous version, offering generated images with up to 4 times higher resolution, as well as better accuracy and quality.

The developers of this artificial intelligence point out that compositions are created from pre-existing images, which the system edits based on the text written by the user.

“You can add and remove elements taking shadows, reflections, and textures into account,” they explain.
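
To make this kind of text-guided editing more concrete, here is a minimal sketch assuming access through OpenAI's Python client (a pre-1.0 version of the `openai` package); the API key placeholder, file names, and prompt are illustrative assumptions, and access to DALL·E 2 itself remains limited.

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder for your API key

# Edit a pre-existing image: the mask marks the region to change,
# and the prompt describes what should appear there.
response = openai.Image.create_edit(
    image=open("original.png", "rb"),   # illustrative file name
    mask=open("mask.png", "rb"),        # transparent area = region to edit
    prompt="a small dog sitting on the park bench",  # example prompt, not from the article
    n=1,
    size="1024x1024",
)

# The service returns a URL for the generated image.
print(response["data"][0]["url"])
```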

Besides all this, DALL·E 2 is also capable of creating new versions of existing works, such as the painting Girl with a Pearl Earring by the Dutch painter Johannes Vermeer.

In this case, the artificial intelligence was able to create more than 10 different versions of the work, each varying from the original painting. Among the differences are the color, as well as the position and resolution of the image.
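
Generating such variations can be sketched with the same assumed Python client; again, the file name and parameters are illustrative, not taken from the article.

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder for your API key

# Request several variations of an existing picture; each keeps the overall
# composition but differs in details such as color and positioning.
response = openai.Image.create_variation(
    image=open("girl_with_a_pearl_earring.png", "rb"),  # illustrative file name
    n=10,
    size="1024x1024",
)

for item in response["data"]:
    print(item["url"])
```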

“DALL·E 2 has learned the relationship between images and the text that describes them. It uses a process called ‘diffusion’, which starts with a pattern of random dots and gradually alters that pattern towards an image as it recognizes specific aspects of it,” they explain.
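
The quoted description can be illustrated with a toy numerical sketch. This is not DALL·E 2's actual model; the `denoise` function below is a hypothetical stand-in for the trained neural network that, in the real system, is conditioned on the text prompt. It only shows the idea of starting from random noise and nudging it, step by step, toward a target pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the image the model is aiming for": a simple square pattern.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

def denoise(x, step, total_steps):
    """Hypothetical denoiser: move a small fraction of the way toward the target."""
    return x + (target - x) / (total_steps - step)

x = rng.normal(size=(8, 8))   # begin with a pattern of random dots
total_steps = 50
for step in range(total_steps):
    x = denoise(x, step, total_steps)

# After many small steps, the noise has been reshaped into the pattern.
print(np.round(x, 2))
```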

Limitations

At the moment, DALL·E 2 is only available to a small number of users. To gain access, people must sign up on a waiting list and wait to be selected.

“We have been working with external experts and we are testing DALL·E 2 with a limited number of trusted users who will help us understand the capabilities and limitations of this technology,” they note.

Because it is a research project intended to learn more about this type of artificial intelligence, the system has a series of safety measures to ensure that the generated content is not objectionable.

This includes the inability to create images that contain adult, violent, or hateful content.

“Our content policy does not allow users to generate violent, adult, or political content, among other categories. We will not generate images if our filters identify text prompts and image uploads that may violate our policies.”


Source: laopinion.com