Using AI for Asset Creation

This doc goes over some of the considerations to keep in mind when using AI-created 3D assets in Mona Spaces.

It is possible to create 3D assets with AI tools such as Stability AI or Luma AI. At the moment those assets may not be well optimised for a WebGL space, so this doc will go over some considerations when using AI-generated 3D assets.

Stability AI has the advantage of utilising images to create assets. So with good reference images, either created yourself or generated with a 2D image tool, you can get better results.

If you want a tutorial on how to set up Stability.ai 3D assets, you can go here for more information. Do note that you will install it on your own computer, and a graphics card with 24 GB of VRAM is recommended (but not required).

AI Assets

As you can see in the examples below, the assets look great from a distance, but once you get up close the quality starts to break down. The following examples use Luma AI, but similar considerations apply to any AI generation tool.

The texture is also poorly optimised, with wasted space in the UV layout that inflates the filesize.

Polycount and edge loops are not really considered during generation, so the meshes can be significantly optimised to reduce the filesize and improve how they perform in a space.

That said, these assets can be fantastic reference, and depending on the quality of the original asset there are a number of approaches you could use to create an optimised version.

Reference

The first approach, which will usually produce better results over a longer time frame, is to use the AI-generated 3D asset purely as reference. You can use it to guide the creation of a better high-polygon asset, or a refined version of a low-polygon asset. Basically, this speeds up the creative process: you spend less time figuring out what you want to do and get to the doing part sooner.

A benefit of this approach is that you can take the time to improve on the design of the asset as you go, spending less effort on the broad strokes and really refining the details.

This also means that the end result is more yours rather than just the AI's (or, more to the point, the original artists' in the training data).

Optimising the mesh

If the AI-generated asset is exactly what you want but still needs optimising, the approach is to create an optimised mesh and project the original asset's data onto it. The techniques for this are covered in the next tutorial category: Decimation, Remeshing, Retopology, and Baking.
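As a rough illustration of what a decimation step does (this is not Mona's or any specific tool's algorithm — the function and grid-snapping approach are assumptions for illustration), here is a minimal vertex-clustering sketch in Python: vertices that fall into the same grid cell are merged, and triangles that collapse are discarded.

```python
from collections import defaultdict

def cluster_decimate(vertices, triangles, cell_size):
    """Simplify a triangle mesh by vertex clustering: snap each vertex to a
    uniform grid, merge vertices in the same cell (averaging positions), and
    drop triangles left with fewer than three distinct vertices."""
    # Assign each vertex to a grid cell.
    cell_of = []
    cells = defaultdict(list)  # cell -> original vertex indices in that cell
    for x, y, z in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cell_of.append(cell)
        cells[cell].append(len(cell_of) - 1)

    # One representative (averaged) vertex per occupied cell.
    new_index, new_vertices = {}, []
    for cell, members in cells.items():
        new_index[cell] = len(new_vertices)
        n = len(members)
        new_vertices.append(tuple(
            sum(vertices[m][k] for m in members) / n for k in range(3)))

    # Remap triangles, discarding those that became degenerate.
    new_triangles = []
    for a, b, c in triangles:
        ia, ib, ic = (new_index[cell_of[a]],
                      new_index[cell_of[b]],
                      new_index[cell_of[c]])
        if len({ia, ib, ic}) == 3:
            new_triangles.append((ia, ib, ic))
    return new_vertices, new_triangles
```

Real tools use much smarter criteria (quadric error, edge loops, UV seams), but the trade-off is the same: a larger cell size means fewer vertices and triangles, at the cost of surface detail.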

This example asset used retopology, plus baking of the diffuse and normal maps, to bring the polygon count from 9,089 down to 52 (in the lowest example). The lowest-polygon asset would look fine from a distance, but you could also use the higher-polygon versions with Level of Detail tools in Unity. If you name the meshes like the following:

  • Temple1_LOD0

  • Temple1_LOD1

  • Temple1_LOD2

When you export all three meshes in a single FBX and import it into Unity, it will automatically create a Level of Detail (LOD) Group and switch between the meshes at different distances from the camera. So at the cost of filesize for the extra meshes, the asset will be significantly more performant at distance.
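The switching logic can be sketched in a few lines of Python (illustrative only — the function name and raw distance thresholds are assumptions; Unity's LOD Group actually switches on screen-relative height, which you configure per LOD level in the Inspector):

```python
def select_lod(distance, switch_distances):
    """Pick which LOD mesh to render for a given camera distance.

    switch_distances is an ascending list of thresholds; crossing each
    one moves to the next, lower-polygon mesh.
    """
    for lod, threshold in enumerate(switch_distances):
        if distance < threshold:
            return lod
    # Beyond the last threshold: lowest-detail mesh (or culled entirely).
    return len(switch_distances)

# For example, with thresholds [10, 30, 80]:
# a camera 5 units away gets LOD 0 (full detail),
# 50 units away gets LOD 2, and 200 units away gets LOD 3.
```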
