How to Write a Good Prompt: Advanced Styles & Parameters
Exploring Art and Craft styles, visual references, and seed control.
This is the final chapter of the DEUTLI prompt guide. If you just landed here, you can catch up on the foundational concepts: learn how to structure your scene in Part 1: The Semantic Core, or master photorealism and virtual cameras in Part 2: Photography & Optics.
MEDIA STYLE: ART
In the Photo section we turned the neural network into a professional camera, forcing it to think in terms of optics, focal length, and film chemistry. Select the base style Art, and the DEUTLI algorithms switch instead to the mode of an easel and a graphics tablet.
The cascading logic of our control panel works flawlessly here too: as soon as you choose Art, the system understands that photographic artifacts (such as chromatic aberration or bokeh) would be out of place. The neural network begins to simulate the texture of brushstrokes, the physics of mixing paints, or the algorithms of digital rendering, while the negative-prompt filters automatically cut off unwanted photorealism.
Within the Art style we highlight three of the most in-demand and visually striking directions:
- Digital (Concept Art)
- Anime (Manga Style)
- Oil (Classical Paint)
- Vector (coming in V1)
Digital Concept Art
This is the undisputed industry standard of modern entertainment (film, AAA video games). The neural network generates an image that imitates the work of a professional artist on a graphics display. The style is distinguished by an incredible degree of polish and a perfect balance between painterliness and razor-sharp detail. Digital brushes, complex gradients, deep aerial perspective, and carefully tuned "epic" lighting all combine into the so-called "ArtStation look." This is an ideal tool for worldbuilding, environment design, architectural fantasies, and futuristic concepts.
Anime
A specific but incredibly powerful visual language of Japanese animation. When you choose this preset, the neural network switches to cel shading, a technique of flat color fills with sharp, graphic boundaries between light and shadow. Object contours are outlined with a clean line (lineart), and the geometry of the space often takes on the dynamic, distorted perspective we discussed earlier. At the same time, backgrounds can be rendered in the style of rich watercolor painting (in the spirit of Makoto Shinkai or Studio Ghibli). An excellent choice for storyboards, stylized illustration, and emotional storytelling.
Oil Paint
A return to classical, heavy, noble art. This style emulates the physical, tangible nature of paint. The neural network renders the micro-relief of the canvas, adds thick, volumetric strokes (impasto), and imitates the tracks left by stiff bristles or a palette knife. Colors mix right on the virtual canvas, forming complex, organic transitions, and the lighting often takes on a dramatic, museum-like character (chiaroscuro). If you need to give the subject weight, monumentality, or historicity, or to express an emotion through the aggressive expressiveness of a stroke, this is an uncompromising choice.
Since MEDIA STYLE: ART works according to the laws of fine art, you can still control the lighting of the scene and the angle of view. However, the photographic concept of depth of field (optical blur) gives way here to a different mechanism of attention control, one that artists and graphic designers have honed for centuries: contrast, density of detail, and stroke technique.
ART FOCUS mechanics
Instead of imitating a camera lens, the parameters of the FOCUS group in this style control the brush of the digital creator:
- Isolated: This preset forces the neural network to simplify the background as much as possible. Instead of optical bokeh, the environment turns into abstract color patches, large loose strokes, or a flat fill with minimal detail, while the main character or object is rendered with maximum clarity, detail, and contrast. The viewer is not distracted by the surrounding context and immediately concentrates on the subject. This is a classic trick for character concept art, lookbooks, and expressive portraits.
- Deep: The complete opposite of the previous preset. This parameter forces the neural network to painstakingly work through detail across the entire canvas. The foreground, the subject, and the farthest background all receive the same level of attention from the "artist": no visual shortcuts, no lazy strokes in the background. This is the style of large-scale historical canvases, richly illustrated books, and complex backdrops that invite the viewer to let their gaze wander across the picture, studying the architecture and layout of the world.
- Soft: Painting has no technical out-of-focus, but it does have impressionism and the sfumato technique perfected by Leonardo da Vinci. This preset intentionally softens all object boundaries, freeing the picture of hard contours, sharp shadows, and excessive micro-contrast. Paints flow smoothly into one another, and detail dissolves in favor of the overall painterly mass. The image becomes airy, dreamlike, and slightly ephemeral: an ideal tool for conveying a subtle, melancholic, or mystical mood, when the overall emotion matters more than strict form.
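The three presets above can be thought of as lookup tables of painterly descriptors. The sketch below illustrates that idea; the phrase mappings and the function name are purely hypothetical, not DEUTLI's actual internals.

```python
# Hypothetical mapping of ART FOCUS presets to prompt fragments.
# The phrases below are illustrative assumptions, not DEUTLI's real vocabulary.
ART_FOCUS = {
    "isolated": "highly detailed subject, abstract simplified background, loose background strokes",
    "deep": "uniform detail across foreground and background, intricate environment",
    "soft": "sfumato, soft edges, low micro-contrast, dreamlike painterly blending",
}

def apply_focus(base_prompt: str, focus: str) -> str:
    """Append the descriptors of the chosen FOCUS preset to a base prompt."""
    if focus not in ART_FOCUS:
        raise ValueError(f"Unknown FOCUS preset: {focus!r}")
    return f"{base_prompt}, {ART_FOCUS[focus]}"

print(apply_focus("oil painting of a lighthouse", "soft"))
```

The point of the table is the same as in the text: attention is steered by words about detail density and edge quality, not by camera parameters.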
MEDIA STYLE: CRAFT
If the Photo and Art styles work in the paradigm of a lens and a brush, then with the base style Craft the DEUTLI algorithms turn into a virtual sculptor and model maker. In this mode, the neural network stops thinking in flat patches of color and starts calculating the physics of materials: how light reflects off glossy plastic, how it scatters inside a piece of polymer clay, or what hard shadows a cut-out piece of cardboard casts.
This is the category for creating tangible, tactile images, where depth and volume are of paramount importance. Within it we highlight three presets:
- 3D (Render)
- Clay (Tactile)
- Paper (PaperCut)
- Knitted (coming in V1)
3D Render
The industry standard of modern commercial 3D graphics. When you choose this preset, the neural network emulates the work of powerful render engines (such as Octane, Redshift, or Unreal Engine). Its distinctive features: perfect, mathematically precise geometry, complex lighting calculation (global illumination), and flawless reflections on demanding materials such as glass, chrome, or glossy plastic. The preset produces a maximally clean, sterile, modern picture, ideally suited to interface elements, product visualization, abstract geometric compositions, and futuristic design.
Clay
A style that returns human warmth and tactility to computer graphics. The neural network imitates sculpting in a physical material: plasticine, polymer clay, or matte soft-touch plastic. The key optical feature here (one that neural networks have learned to reproduce remarkably accurately) is subsurface scattering. Light penetrates slightly into the material, making shadows incredibly soft, dense, and "warm." Objects shed their sharp corners, acquiring pleasantly rounded bevels and surface micro-imperfections. This is an absolute hit for friendly UI/UX design, mascots, icons, and modern web illustration.
PaperCut
A unique hybrid of flat graphic design and physical volume. The image is built from virtual sheets of thick colored paper or cardboard, cut out and stacked in appliqué layers. The sense of depth here is created not through perspective but exclusively through the physical shadows (drop shadows) that each layer of paper casts hard onto the one beneath it. The algorithm also brilliantly emulates the microtexture of the material itself: the porosity and fuzz of craft paper. An incredibly expressive tool for editorial illustration, deep abstract patterns, and conceptual landscapes.
REFERENCE
Generating from nothing (text-to-image) is powerful, but sometimes you need to start from a specific shape, composition, or color palette. The REFERENCE field lets you pass the neural network a visual anchor: a link to an existing image that the machine will use as a source of basic geometry or visual style.
When working with this parameter, it is important to consider two technical aspects:
Mechanics of the target engine: Not all neural networks can parse links directly from a text prompt. Be sure to study the architecture of the generator you have chosen. In some systems a link in the text works flawlessly; in others the mechanism is not supported, and you will need to upload the source image manually through their own interface (image-to-image or style-reference functions). DEUTLI forms the correct structure, but you must apply it with the capabilities of the final tool in mind.
Absolute accessibility of the link (critical): If you pass a URL in the prompt, the link must be completely open: a direct path to a graphic file (as a rule, one ending in .jpg or .png) that a server anywhere in the world can reach without a login, captcha, or password. Links to closed galleries, cloud drives such as Google Drive or iCloud, and social-network pages requiring authorization will not work. The neural network's external crawler will simply hit a wall and ignore the reference.
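The rules above can be pre-checked locally before you submit the prompt. This sketch is only a sanity filter built on assumptions (the blocked-host list and extension set are illustrative); a URL that passes it can still fail server-side if the file is protected.

```python
from urllib.parse import urlparse

# Illustrative pre-flight check for a REFERENCE link. The host list and
# extension set below are assumptions; passing this check does not
# guarantee the generator's crawler can actually fetch the file.
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png")
AUTH_WALLED_HOSTS = {"drive.google.com", "www.icloud.com", "instagram.com"}

def looks_like_direct_image_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    if parsed.hostname in AUTH_WALLED_HOSTS:
        return False  # cloud drives and social networks require a login
    return parsed.path.lower().endswith(IMAGE_EXTENSIONS)

print(looks_like_direct_image_url("https://example.com/image.jpg"))        # True
print(looks_like_direct_image_url("https://drive.google.com/file/d/abc"))  # False
```

A quick filter like this catches the two most common mistakes: a link that is a web page rather than a file, and a link behind an authorization wall.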
SEED
The generation of any image in modern diffusion neural networks starts with chaos: a field of random digital noise. Step by step, the neural network "subtracts" this noise, forming a meaningful picture based on your text query.
SEED is a numerical value that determines the initial structure of this starting noise. If you use the same prompt, the same settings, and the same SEED, the generator will create an exactly identical starting noise pattern, and therefore an identical final image. This is your tool for ensuring a repeatable result. Note, however, that not all neural networks respond to a seed inside a text prompt.
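This determinism is easy to see in miniature with any seeded random generator. The sketch below is a toy model, not a diffusion sampler: it only demonstrates that a fixed seed always yields the same "starting noise", while a different seed yields a different field.

```python
import random

# Toy demonstration of seed determinism. A real diffusion model seeds a
# high-dimensional noise tensor the same way this seeds a list of floats.
def starting_noise(seed: int, size: int = 5) -> list[float]:
    rng = random.Random(seed)  # independent generator with a fixed seed
    return [rng.random() for _ in range(size)]

a = starting_noise(8371922)
b = starting_noise(8371922)
c = starting_noise(8371923)

print(a == b)  # True: identical seed, identical noise
print(a == c)  # False: different seed, different noise
```

The same seed reproduces the same starting point bit for bit; everything that happens afterward depends on how the prompt tells the network to interpret that noise.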
Main misconceptions around SEED
The most frequent mistake is to treat the seed as a "save file" for a character, location, or visual style. Beginners find a successful seed and think: "Excellent, now with this seed I can change the character's pose and they will stay the same."
It does not work like that. The seed is tied to a specific set of words. As soon as you change even one word in the prompt (for example, standing to sitting) or adjust an optics parameter, the mathematical formula collapses. The starting noise stays the same, but the neural network begins to interpret it completely differently: the composition scatters, and you get an entirely different frame. A seed is not a saved geometry; it is a fixed mathematical starting point.
What is the difference from Style Reference?
- Seed: Fixes mathematical chaos. It is fragile and breaks at the slightest change to the prompt. It works at the level of pixel structure.
- Style Reference: Analyzes visual characteristics (color rendering, brushwork, mood) and transfers that atmosphere to new generations. It is flexible and lets you apply the same style to completely different prompts with different seeds.
Control of SEED in the DEUTLI interface
In our tool, the seed field is deliberately simple and offers three working scenarios:
- Leave empty: The default scenario. If you enter nothing, the target neural network (Midjourney, Stable Diffusion, etc.) assigns a random seed for every new generation. This is what you want while searching for a concept.
- Generate random: A button for quickly creating a random numerical value right in DEUTLI. Useful if you want to lock in a specific random number before sending the query, so that it is available later in your logs.
- Fix (enter manually): You type in a specific number (for example, one copied from a previous successful generation). Use this for micro-iterations: when the composition fully satisfies you and you only want to nudge the weight sliders or make minimal adjustments without changing the overall geometry of the frame.
GENERATE PROMPT button and data export
When all parameters are set and you press the Generate Prompt button, our proprietary algorithm goes to work. Your visual intent, assembled through the interface, is passed through a neurolinguistic pipeline. The system takes the selected vectors of objects, lighting, optics, and media style, seamlessly stitches them together, and crystallizes them into a clean, carefully verified linguistic formula, free of verbal clutter and conflicting terms.
- Copy Midjourney
- Copy Natural
- Copy Raw Data
- Save .deut file
What you get at the output:
- 3 dialects: The system instantly translates your query into three specialized dialects whose syntax is optimized for the logic of the leading generative engines.
- Free copying: The synthesis operation itself (running the prompt through the improvement algorithm) deducts 1 credit from your balance. After a successful generation you get full access to the result: you can copy the finished text blocks to the clipboard as many times as you like, with no restrictions or additional charges, right up until you reset the interface to prepare a new frame.
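In spirit, the assembly step is a serialization of your interface selections into an engine dialect. The sketch below is a heavily simplified, hypothetical model: the field names, ordering, and joining rules are assumptions, and only the `--seed` suffix reflects Midjourney's actual parameter syntax.

```python
# Hypothetical sketch of assembling interface selections into a
# Midjourney-dialect string. Field names and ordering are assumptions;
# the real DEUTLI pipeline also resolves conflicts between terms.
def to_midjourney(params: dict) -> str:
    parts = [params["subject"]]
    for key in ("media_style", "lighting", "focus"):
        if params.get(key):
            parts.append(params[key])
    prompt = ", ".join(parts)
    if params.get("seed") is not None:
        prompt += f" --seed {params['seed']}"  # Midjourney's seed flag
    return prompt

print(to_midjourney({
    "subject": "lighthouse on a cliff",
    "media_style": "oil painting, impasto",
    "lighting": "chiaroscuro",
    "focus": "soft edges, sfumato",
    "seed": 8371922,
}))
```

The "Natural" and "Raw Data" dialects would be alternative serializers over the same parameter dictionary: one into flowing prose, one into structured data.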
Open .deut file format
Besides the finished text, the system builds a project file with the .deut extension and offers to save it to your local device: smartphone, tablet, or computer. The format corresponds to the documented open standard application/vnd.deut+json, registered with IANA. It is completely open, and all the data inside the file belongs only to you. Nothing is locked in a proprietary cloud: you can right-click the file and open it in any basic text editor. Thanks to its strict architectural markup, the file structure is easily readable by a human, letting you study the parameters and use them even outside our control panel.
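Because the format is declared as JSON (application/vnd.deut+json), a .deut file can be inspected or edited with nothing but a standard library. The payload fields in this sketch are illustrative assumptions, not the real .deut schema.

```python
import json

# A .deut project file is open JSON, so it round-trips through the standard
# json module. The field names below are assumed for illustration only.
sample = {
    "media_style": "art",
    "preset": "oil",
    "focus": "soft",
    "seed": 8371922,
}

with open("scene.deut", "w", encoding="utf-8") as f:
    json.dump(sample, f, indent=2)

with open("scene.deut", encoding="utf-8") as f:
    project = json.load(f)

print(project["preset"])  # oil
```

This is exactly what "not locked in a proprietary cloud" means in practice: any tool that speaks JSON can read, diff, version, and regenerate your project files.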
A look into the future: DEUTLI V2
Already today, the transparency of the .deut format gives you security and control over your own work. In the upcoming second version (V2) we will present a full-fledged built-in editor, with version history, user-created presets, and, for true pros, manual editing of every field directly, without leaving the application interface.
Stop typing. Start snapping.
You know the theory. Now put it into practice. Build your first structured visual formula in seconds and export your .deut file.