– How to optimize your UV maps –

Okay, so it's time for another tutorial (one that is long overdue).
This time I'm going to teach you how to do your UV maps right.

A lot of creators on SL do not understand the importance of optimizing their UV maps. They arrange their UVs in the laziest ways possible, or don't even bother doing that, just leaving everything the way it came out of the unwrap.


Understanding the importance of good UV layout.

A good UV layout saves texture memory: you can fit more detail into your texture without having to increase its resolution.
From a user's perspective, the Second Life client is limited to roughly 500MB of video memory (over 1GB for 64-bit viewers). A 1024×1024 texture takes 4MB, while a 512×512 image takes just 1MB. Smaller textures mean faster downloads, more textures fitting into video memory, and less lag.
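The arithmetic behind those numbers is easy to check yourself: an uncompressed 32-bit RGBA texture costs width × height × 4 bytes. A quick sketch (the helper name is mine):

```python
def texture_mb(width, height, bytes_per_pixel=4):
    """Uncompressed size of an RGBA texture in megabytes."""
    return width * height * bytes_per_pixel / (1024 * 1024)

print(texture_mb(1024, 1024))  # 4.0
print(texture_mb(512, 512))    # 1.0
```

Halving each side of a texture cuts its memory cost to a quarter, which is why dropping from 1024 to 512 is such a big win.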


From a creator's perspective, better use of the available UV space means more texture detail and fewer blurry or stretched-out textures. You can pack a lot more things onto a single texture image, without having to split it up into several images or resort to ridiculous texture resolutions.

Here's an example of what I mean. This is an unoptimized 1024×1024 texture.

Red shows unused UV space that has gone to waste.
Purple shows things the player will never see, resulting in even more wasted UV space.
Green shows the only surface area the player will ever see; all of this green stuff could easily have fit on a 512×512 image or even smaller!

With bad UVs, nobody wins: users have to wait longer for textures to download, their texture memory gets bloated and their rendering weight goes up, and creators still can't put all that much detail into their things even if they max out the resolutions.

So how do you make your UVs all neat and tidy?

Well, first of all, sit your booty down and take a moment to get familiar with the UV tools.
Look up some UV tutorials on Google or YouTube, memorize the tools (if you have a bad memory, take notes!) and practice a little. If you can't do something as simple as that, you have no business creating content for SL.

The example we will work with.
Now that you know your tools of the trade, take your unwrapped UV and rotate, scale and arrange the pieces into a corner of the page until your UV forms a square or a rectangle, whichever the circumstances allow. Pieces the camera doesn't see, or pieces that have no detail, can be scaled down to save UV space.

Once done, scale the UV to take up the whole page.

Now, if you ended up with a rectangular shape instead of a square, like I did in this example, do this (the following example is for Blender, but a similar process works in any other program):

Just scale the UV to take up the remaining UV space.
Then click the New button under the UV window to create a new texture, and set it to 256×128, 512×256 or 1024×512 if your UV is a horizontal rectangle like mine, or 128×256, 256×512 or 512×1024 if your UV is a vertical rectangle.
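If you want to sanity-check which rectangle matches your UV's aspect ratio, the idea above can be sketched like this (the function and its 256-pixel cut-off are my own illustration, not an SL rule):

```python
def rect_resolution(aspect_w, aspect_h, max_side=1024):
    """Walk down from the largest texture size, suggesting power-of-two
    rectangles that match a W:H aspect ratio (e.g. 2:1 -> 1024x512)."""
    sizes = []
    side = max_side
    while side >= 256:
        if aspect_w >= aspect_h:           # horizontal rectangle
            w, h = side, side * aspect_h // aspect_w
        else:                              # vertical rectangle
            w, h = side * aspect_w // aspect_h, side
        sizes.append((w, h))
        side //= 2                         # next power-of-two step down
    return sizes

print(rect_resolution(2, 1))  # [(1024, 512), (512, 256), (256, 128)]
```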

The UV page will take the shape of the texture you load or create.

What resolution you should go with depends on how big the object is relative to the average camera distance, and on how much detail it needs.

If it's something small, like a shoelace loophole, it can be a tiny image; if it's something as big as an avatar body, you can go all out on resolution.
If it's a smooth surface with little to no detail on it, you can also go with a smaller resolution, paying little regard to how big the part is.
If it's a part with text or some other pattern that would end up too blurry and pixelated, you need a higher resolution.
To put it simply: a 1024×1024 image for a button on your jeans is a ridiculous waste of texture memory.

Tip: Use UV test maps. They let you see which pieces have too much texture resolution and which don't have enough, so you can adjust accordingly.




And that’s pretty much it.


– An Introduction to 3D –

This article is a basic introduction to what 3D is and how Second Life makes use of it.

To understand what 3D is and how it works, imagine a map. The map is a flat, 2-dimensional image, or a 2-dimensional space with pixels in it. 2D has only two coordinates for every pixel: X and Z, or left/right and forward/back (example: 3 steps left, 4 steps forward, or x-3 z4).

Now add a third coordinate to the map: Y, or up/down. You now have a 3-dimensional map, or a 3-dimensional space with a 3D model in it.

More examples of how 3D space works.


A 3D model is made out of points called vertices (I'll talk more about those further down).

The coordinates of a single example point (represented by a yellow ball) in 3D space are x6 y6 z4. All three dimensions have a grid of their own that the points snap to: a point can only sit on a grid step, never in between. The good news is that the grid is extremely fine; coordinates are stored with so much precision that the smallest step would be at a nano scale, far below anything you would ever model, so you don't have to worry about it. Of course, a lot of programs let you set up your own, coarser grid for precision modeling.

Note: a lot of 3D applications (like Second Life and Blender) use Z as up/down and Y as forward/back, instead of Y for up/down and Z for forward/back. Some programs and game engines use their own orientations and have all three of the XYZ axes mixed up and swapped around; it would be awesome if everyone agreed on a standard.
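Converting a model between the two conventions is just a coordinate swap. A minimal sketch, assuming both systems are right-handed (the function name is mine):

```python
def y_up_to_z_up(x, y, z):
    """Convert a point from a Y-up convention to a Z-up one
    (like Blender/Second Life): the old Y becomes Z, and the
    old Z becomes -Y to keep the coordinate system right-handed."""
    return (x, -z, y)

print(y_up_to_z_up(1, 2, 3))  # (1, -3, 2)
```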

A 3D model is made out of points called vertices, and lines that connect the points, called edges. Lines connected in a loop make a polygon. There are 3-point and 4-point polygons (4-point polygons are easier to work with, but they still get broken down by the graphics card into two triangles at the rendering level, so in a sense there is no such thing as a 4-point polygon; when exporting a model to SL, all the 4-point polygons are split in half to make triangles, a process called triangulation). The inner surface of a polygon is called a face (imagine the polygon as a frame and the face as the picture in that frame). Polygons that are connected to each other and share edges form a thing called geometry, also called a wireframe.

Geometry made only of 3-point polygons is called a mesh. Meshes are mostly used in video games and other applications that do real-time 3D rendering, such as Second Life.
4-point, or quad, geometry is mostly used in CGI movies and model editing, but it gets triangulated when exported to video games.
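Triangulation itself is trivial: a quad is split along one of its diagonals into two triangles. A sketch, assuming the quad is given as four vertex indices:

```python
def triangulate_quad(quad):
    """Split a 4-point polygon (a, b, c, d) into two triangles
    along the a-c diagonal -- what exporters do automatically."""
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

print(triangulate_quad((0, 1, 2, 3)))  # [(0, 1, 2), (0, 2, 3)]
```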

A 3D model file holds the XYZ coordinates of each point, as well as some additional information: which lines connect to which points, surface materials and UV data. Many 3D file formats hold this data as plain text and can be opened and edited in Notepad (a good way to manually repair corrupted files).

Here's what a simple cube looks like.

These are the point coordinates. They show how far away each point is from 0,0,0, which is the reference point and the absolute center of the 3D space the model is in.
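For illustration, here is roughly what a cube looks like in the Wavefront OBJ format, one of those plain-text formats: each `v` line holds a vertex's XYZ coordinates, and each `f` line lists which vertices (counted from 1) form a polygon:

```
# a unit cube: 8 vertices, 6 quad faces
v -1 -1 -1
v  1 -1 -1
v  1  1 -1
v -1  1 -1
v -1 -1  1
v  1 -1  1
v  1  1  1
v -1  1  1
f 1 2 3 4
f 5 6 7 8
f 1 2 6 5
f 2 3 7 6
f 3 4 8 7
f 4 1 5 8
```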

Surface materials are groups of polygons that the model is divided into, each group can have its own color, shading and texture. In Second Life, materials are called faces(not to be confused with normals).

More advanced model and materials.

A UV map is a 2D representation of the surface of a 3-dimensional object. To understand how a UV map works, take a piece of paper and roll it up into a cylinder: you have a 3-dimensional shape. Now unroll the cylinder and lay it flat on the table: you now have a UV layout of that shape. Each point and line in the UV map corresponds to a point and line on the 3-dimensional object.
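The paper-cylinder unwrap can even be written down as a formula: U is the angle around the cylinder's axis, V is the height along it. A sketch, assuming the cylinder's axis runs along Z (the function name is mine):

```python
import math

def cylinder_uv(x, y, z, height):
    """Project a point on a cylinder (axis along Z) to UV space:
    U = angle around the axis mapped to 0..1, V = height mapped to 0..1."""
    u = (math.atan2(y, x) + math.pi) / (2 * math.pi)
    v = z / height
    return (u, v)

print(cylinder_uv(1, 0, 1, 2))  # (0.5, 0.5)
```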

A more advanced example of how the UV map works, would be a paper craft model.

UV maps can be cut up and rearranged to make better use of the available UV space. This helps make the most of a given texture resolution: the more texture pixels a polygon takes up, the better the image quality in that particular surface area.

Now, moving on to SL.

What are prims? Prim is short for primitive: a basic geometric shape that you start with and turn into more advanced shapes. In Second Life, prims are a set of simple geometric shapes that you can shape into more complex objects, then combine and link to make advanced builds. SL prims come preset with different material surfaces, UV maps and advanced transformation handles. The idea behind prims was basically to build houses from; the whole build menu in SL has an interface similar to a video game map editor. But people on SL figured out they could use it to build avatar attachments and other cool stuff instead of houses.

Notes: While the idea of real-time 3D object manipulation in a game, to create content, is really neat in theory, and is one of the biggest selling points of SL, in practice it has a few problems because of the lazy and resource-wasteful way it was executed. Prim builds being the main cause of all the frame rate lag on SL (not to be confused with network lag) is one of them, and here's why. This example shows a cube made out of 162 prims (on the left) and the exact same cube made out of one single mesh (on the right).

Prim builds waste a lot of polygons in places where they aren't needed to give the object its shape and aren't even seen by the camera. It's hard for computers to process and render all this geometry, and it often results in noticeable drops in frame rate, especially on older computers. It is not a good idea to use prims to make avatar attachments (20 avatars on screen, each with multiple 250-prim attachments all over their body, will lag you back to December) or anything else for that matter that isn't a house.

60 triangles were wasted just to make this single corner of the cube, while the mesh version uses only 1 triangle.

Prims are very inflexible: you need a lot of them to do the simplest things, and in the end, some shapes are just impossible. And this brings us to sculpted prims.

What are sculpted prims? Sculpted prims, or sculpts, are sphere prims shaped by a sculpt map into shapes that cannot be achieved with the normal prim transformation handles.
Here is how it works: you make a shape out of a ball or cylinder primitive in a 3D program; your shape has to have a spherical UV map. When you are finished with your 3D shape, you generate a 64×64 color image called the sculpt map and export it to SL. Using the UV map, the RGB values of every pixel on the sculpt map represent the XYZ values of every point in your object: a 3D map in a 2D format. SL uses the shape information in the sculpt map to push every point of a sphere prim into the same position it had in your 3D program, giving it the shape that you made, so it's kind of like sculpting a shape out of a chunk of rock.
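The pixel-to-point idea can be sketched in a few lines: each axis gets scaled from the object's bounding box into the 0–255 range of one color channel. This is a simplified model of the encoding, not the exact SL format, and the names are mine:

```python
def encode_point(x, y, z, bounds):
    """Pack an XYZ position into an RGB pixel: each axis is scaled
    from the object's bounding box into the 0-255 channel range."""
    lo, hi = bounds
    return tuple(round((v - lo) / (hi - lo) * 255) for v in (x, y, z))

def decode_pixel(r, g, b, bounds):
    """Recover the (quantized) XYZ position from an RGB pixel."""
    lo, hi = bounds
    return tuple(c / 255 * (hi - lo) + lo for c in (r, g, b))

rgb = encode_point(0.5, -1.0, 1.0, bounds=(-1.0, 1.0))
print(rgb)  # (191, 0, 255)
print(decode_pixel(*rgb, (-1.0, 1.0)))  # roughly (0.498, -1.0, 1.0)
```

Note how the decoded X is no longer exactly 0.5; that rounding is the detail loss discussed further down.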

The sculpt map method imposes a lot of limitations on the creator and challenges them to come up with creative ways to overcome those limitations. At the start, you were bound to objects of 16×16 segments (32×32 sculpt map) or 32×32 segments (64×64 sculpt map). Later, LL implemented the ability to create custom-resolution sculpt maps to better optimize your shapes: you could, for example, use a 16×128 sculpt map (8×64 segments) to make better use of the available polygons. But not all viewers supported this feature correctly, and these kinds of sculpts often rendered out broken in most third-party viewers (just like everything else, because TPVs always sucked).

When optimizing your sculpts, you also had to keep in mind how SL would decimate the segmentation of the sculpt to make LODs (levels of detail), if you didn't want your object to lose its shape or just turn to garbage when zoomed out.

But even with all the optimization, sculpt maps are still very resource-wasteful, since you cannot remove any of the polygons that the camera cannot see or that aren't necessary to give the object its shape. And since you have a pretty much fixed amount of polygons and cannot do anything that would change the grid-like topology of your object (which in turn would break the spherical UV map), you still end up using multiple sculpted prims to make more complex shapes. Making sculpts is just a really awkward, time-consuming and frustrating process, and people who think making sculpts is easier than making meshes, and who kept making them long after mesh import was implemented, refusing to learn mesh (there's nothing to learn to begin with: if you know how to make sculpts, you are already a master of mesh), are just plain crazy.


For the sake of demonstration, I took a gun made out of sculpted prims, stripped away all the polygons that aren't necessary to give the gun its shape, and made a "mesh" version of it. In reality, I would have to redo the gun from scratch: because of the flawed nature of sculpted prim geometry, it still has too many polygons; it shouldn't be more than 2k or 3k.

Click to see the full size image

Notes: Any experienced 3D modeler who isn't familiar with SL and how it works would have their mind boggled after seeing the blasphemous wireframes of prim and sculpted prim builds. And you know, in this sense, SL is really mind-blowingly amazing in what huge quantities of polygons it can process and render on screen, along with real-time shadows, lighting and DOF.

Anyway, on top of that, what you see in SL is a really loose representation of what you actually created in your 3D program, since a lot of fine detail is lost: RGB values can only go as far as 255×255×255, which means the resolution of the grid that all the points snap to is very low. On top of that, most plug-ins out there don't do a good job of recording that detail to begin with, not to mention image compression (if you don't know about things like lossless compression). As a result, you get bumpy and chewed-up objects that sometimes look like potatoes, or, as one of my friends described it, "look like they have assholes at the poles". Most plug-ins are very picky and limited themselves, adding their own limits to an already limited method (Primstar for Blender is so far the best sculpt map plug-in, since it has no limitations of its own and can make very precise sculpts without loss of detail, apart from the loss you get from the 255 grid). In the end, sculpts are only really good for making rocks :|
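You can estimate the scale of that detail loss directly: with only 256 values per channel, the snapping grid's step is the object's size divided into 255 intervals. A quick sketch (the function name is mine):

```python
def sculpt_grid_step(object_size_m):
    """Smallest position step a sculpt map channel can represent:
    the object's bounding-box size split into 255 intervals."""
    return object_size_m / 255

# a 2 m object snaps to steps of roughly 8 mm -- enough to look "chewed up"
print(round(sculpt_grid_step(2.0), 4))  # 0.0078
```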

Click to see full size images.

Notes: I don't get why LL went with the sculpt map image idea to begin with; you would get far better results if the XYZ vertex information were saved in a text format, without any loss of detail.

The name "sculpt" itself is rather confusing, both to those who work with 3D and to those who don't know anything about it.
In the 3D modeling world, sculpting means literally carving detail into a 3D model, using tools that can manipulate, add or remove polygonal detail. But on SL, sculpting means using a special color image to deform a sphere prim.
SL players with no prior experience or knowledge of 3D often confuse programs meant for actual 3D sculpting with SL sculpt creation. They think these programs were created to make sculpts for SL because it says "sculpting" somewhere in their name or description. While some of them might actually have a plug-in that lets you sort of turn your creation into a sculpt map, most of the time they are the worst possible starting point for inexperienced users who want to learn to make stuff for SL, due to the complexity of programs aimed at industry professionals rather than the average Joe.

Both prims and sculpts are meshes, just like everything else on SL, from the Linden ground and water to the Linden trees and avatars. Everything rendered on your screen in SL is made out of 3-point polygons, and this brings us to mesh import.

Click to see full size image.

Mesh import is LL's latest addition to the SL content creator's toolkit. It means you no longer have to use all kinds of indirect and often awkward workarounds, such as sculpt maps and prim-arranging scripts, to bring your creations to SL. You can now import your models straight from your 3D program, without having to make them in a specific way, turn them into sculpt maps, and hope they won't come out as a jumbled mess. It removes many of the limitations imposed by the sculpt map method: you now have full freedom over topology, different UV solutions and materials, you can even rig your meshes to the SL skeleton, and you are free to use any tools you want to shape a mesh object, since you no longer have to worry about segmentation or screwing up a spherical UV map.

And so this wraps up the introduction to 3D, now, on to the tutorials!