– M4 Venus neck blending tutorial –

This tutorial will show you how to blend your Kemono/AV2/Rikugou/LL body skin texture with your M4 Venus head’s neck using Blender.

I will assume you already know the basics of how to use Blender; if not, this older M4 neck blending tutorial covers the process in greater detail.

If you haven’t already, download the M4 Venus texture development kit and find the “M4V_neck blending kit.blend” file and open it in Blender.

What is where (object layers)

Step 1, baking the neck texture.

Make sure you are in “Blender Render” mode at the top bar.

Select the object layers you will be working with. For example, if you are transferring a texture from the Kemono body UV onto the M4 neck UV, select both of those layers (you can select multiple layers by holding Shift).

Select the source object (a neck with original UV or the M4 head stub).

Go to the texture data tab on the right and click “New”.

Click “Open” and select the source texture that you want to bake onto the M4 UV neck.

Now hold shift and select the M4 UV object that you will be transferring the texture onto.

Press Tab to enter edit mode, then press A to select all polygons if they aren’t already selected.

In the UV editor window on the right, click “+New” to create a new image, set a name and set the resolution to 512 x 256 and click “OK”.

Go to render tab and scroll down to baking options.

Bake mode has to be set to “Texture”.
The “Selected to active” checkbox needs to be marked.
Margin needs to be at least 32 for best results (it extends the edges of the rendered image to fix problems like black seams).
Leave “Distance” and “Bias” alone for now, unless you are getting spots where the texture isn’t baked right.

Click “Bake”. Once it’s done, save your image somewhere via the “Image” menu in the UV editor window on the right.
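If you’re curious what the margin setting actually does under the hood, here’s a rough pure-Python sketch of the idea (an illustration only, not Blender’s actual code): baked edge pixels get copied outward into the empty space around each UV island, so texture filtering near seams doesn’t pick up black.

```python
# Sketch of bake "margin": spread each UV island's edge pixels outward so
# filtering near seams samples island colors instead of empty (black) space.
# Pixels are RGB tuples; None marks unbaked (empty) texture space.

def dilate(image, passes):
    """Spread non-None pixels into neighboring None pixels, `passes` times."""
    h, w = len(image), len(image[0])
    for _ in range(passes):
        out = [row[:] for row in image]
        for y in range(h):
            for x in range(w):
                if image[y][x] is not None:
                    continue
                # take the first filled 4-neighbor, if any
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and image[ny][nx] is not None:
                        out[y][x] = image[ny][nx]
                        break
        image = out
    return image

# A 1x5 strip: one baked skin pixel, the rest empty ("black seam" territory).
strip = [[None, None, (200, 150, 120), None, None]]
padded = dilate(strip, 2)
print(padded[0])  # the skin color has spread 2 pixels in each direction
```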
Step 2, baking the head texture.

Pretty much the same deal as before: select the M4 head stub object and set its texture in the image tab to the texture of your head, hold Shift and select the M4 neck onto which you want to bake your head texture, make a new image in the UV window, bake your texture and save it.
Step 3, merging the baked textures.

I won’t go deep into details on how to use an image editor since I assume, as a skin maker, you already know how it works.

So anyway, fire up your image editor and paste both baked images into it, layering one over the other.

Select a very soft eraser and start erasing away the top or bottom half of one of the textures until the two blend together.

You can test how your textures blend by loading them onto the included necks, or test them in-game using the local texture option.

– M4 neck blending –

This tutorial will show you how to blend your body skin texture with your M4 head neck using Blender.

If you haven’t already, download the M4 Texture Developer Kit and find the “Neck Texture Bake.blend” file and open it in Blender.

For Avatar 2.0

First, make sure you are in Blender Render mode.

1. Select the “M4 UV sample AV2” object from the list, find its object layer and make it visible by clicking on it, then go to the texture tab, load the head texture you want to blend the neck to, and click “Open”.

2. Select the “Av2 neck M4 UV” object from the list (it’s going to be in the same layer as the “M4 UV sample AV2” object) and go to edit mode by pressing Tab on your keyboard (if it isn’t selected already, select the wireframe in the UV window by pressing A), then create a new 1024×512 texture.

3. Select the “M4 UV sample AV2”, hold Shift and select the “Av2 neck M4 UV”, then go to the render tab and scroll down to the bake menu. If it isn’t set up already, set the bake mode to texture, mark the “Selected to active” checkbox, set margin to 64px and click bake.

4. In the UV window, go to the Image menu and save your baked image; this will be the top part of the neck.

5. Select the “Av2 neck UV” object and go to the texture tab again and load the Avatar 2.0 body texture you want to blend the neck to.

6. Select the “Av2 neck M4 UV-B” object and go to its object layer, press Tab to enter edit mode and select all by pressing A, then go to the Image menu and create a new 1024×512 image for it.

7. Hold down shift and select the layer under the current layer you are in.

8. Select the “Av2 neck M4 UV-B” again, hold shift and select “Av2 neck UV” from the object list.

9. Go to the render tab, press bake again and save your baked image; this will be the bottom part of the neck.

10. Fire up your favorite image editor, paste both of your baked images into it, one on top of the other, and then use a very soft eraser brush to blend them together.


For Kemono

Everything is pretty much the same here, except the Kemono parts are in the second object layer group and there is no “Kem neck M4 UV-B”; the “Kem neck M4 UV” is used in both bakes, so you will need to hold Shift and select both layers for “M4 UV sample Kem” and “Kem neck M4 UV”.

That is all.

– How to optimize your UV maps –

Okay, so it’s time for another tutorial (one that is long overdue).
This time I’m going to teach you how to do your UV maps right.

A lot of creators on SL do not understand the importance of optimizing their UV maps. They arrange their UVs in the laziest ways possible, or don’t even bother doing that, just leaving them how they are after unwrapping.


Understanding the importance of good UV layout.

Good UV layouts let you save texture memory: you can fit more detail on your texture without having to increase its resolution.
From a user’s perspective, the Second Life client is limited to roughly 500 MB of video memory (over 1 GB for 64-bit viewers). A 1024×1024 texture is 4 MB, while a 512×512 image is just 1 MB. Smaller texture sizes mean faster downloading, more textures fitting in your video memory, and less lag.
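You can sanity-check those numbers yourself: an uncompressed RGBA texture takes width × height × 4 bytes once decoded. A quick Python check (the function name is just mine, for illustration):

```python
# Uncompressed RGBA textures cost width * height * 4 bytes of video memory
# once decoded, which is where the 4 MB / 1 MB figures above come from.

def vram_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

for size in (1024, 512, 256):
    mb = vram_bytes(size, size) / (1024 * 1024)
    print(f"{size}x{size}: {mb} MB")
# 1024x1024 -> 4.0 MB, 512x512 -> 1.0 MB, 256x256 -> 0.25 MB
```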


From a creator’s perspective, better use of the available UV space means more texture detail and fewer blurry or stretched-out textures. You can pack a lot more things onto a single texture image, without having to split it up into several different images or use ridiculous texture resolutions.

Here’s an example of what I mean; this is an unoptimized 1024×1024 texture.

Red shows unused UV space that has gone to waste.
Purple shows things that the player will never see, resulting in more wasted UV space.
Green shows the only surface area that the player will ever see; all of this green stuff could have easily been fitted onto a 512×512 image or even smaller!

With bad UVs, nobody wins: users have to wait longer for textures to download, bloating their texture memory and increasing their rendering weight, and creators still can’t put all that much detail into their things even if they max out the resolutions.

So how do you make your UVs all neat and tidy?

Well, first of all, sit down and take a moment to get familiar with the UV tools.
Look up some UV tutorials on Google or YouTube, memorize the tools (if you have a bad memory, take notes!) and practice a little. If you can’t do something as simple as that, you have no business creating content for SL.

The example we will work with
The example we will work with
Now that you know how to use your tools of the trade, take your unwrapped UV, then rotate, scale and arrange the pieces on the page against a corner until your UV forms a square or rectangular shape, whichever the circumstances allow. Things the camera doesn’t see, or things that have no detail, can be made smaller to save UV space.

Once done, scale the UV to take up the whole page.

Now, if you ended up with a rectangular shape instead of a square like I did in this example, do this (the following example is for Blender, but a similar process can be used in any other program):

Just scale the UV to take up the rest of the remaining UV space.
Then click the “New” button under the UV window to create a new texture and set it to 256×128, 512×256 or 1024×512 if your UV is a horizontal rectangle like mine, or 128×256, 256×512 or 512×1024 if your UV is a vertical rectangle.

The UV page will take the shape of the texture you load or create.

What resolution you should go with depends on how big the object is relative to the average camera distance to it, and how much detail it will have.

If it’s something small, like a shoelace eyelet, it can be a tiny image; if it’s something as big as an avatar body, you can go all out on resolution.
If it’s a smooth surface with little to no detail on it, you can also go with smaller resolutions, paying little regard to how big the part is.
If it’s a part that has text or some other pattern on it that will end up too blurry and pixelated, you need higher resolutions.
To put it simply, a 1024×1024 image for a button on your jeans is an absurd waste of texture memory.

Tip: use UV test maps; you can see which pieces have too much texture resolution and which don’t have enough, and adjust them accordingly.
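In case you don’t have a test map handy, the core of one is just a high-contrast pattern. Here’s a minimal pure-Python checker sketch (real test grids usually add numbered and colored cells on top of this):

```python
# A UV test map is any pattern with enough contrast to reveal stretching and
# resolution differences between UV pieces. Minimal checkerboard generator.

def checker(width, height, cell):
    """Return a 2D grid of 0/1 values forming a checkerboard of `cell`-sized squares."""
    return [[((x // cell) + (y // cell)) % 2 for x in range(width)]
            for y in range(height)]

pattern = checker(8, 8, 2)
for row in pattern:
    print("".join("#" if v else "." for v in row))
```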




And that’s pretty much it.

– Fur texture tutorial… sorta –

Some time ago, someone asked me to make a fur pattern tutorial. I kept thinking about it, but I couldn’t come up with much of a tutorial since the brush I’m using pretty much does it all for me.

So here’s the set of brushes I’m using for GIMP; you can download it here: http://coyotemange.deviantart.com/art/Fur-Brush-Pack-211549772



So now for the tutorial part: what you need to do is set a dynamic brush size based on pressure, and alternate between the few different brushes as you go so that it doesn’t all look like a mass of copy-paste. Some smaller detail is impossible to do with the large brushes, so the fine hair brushes can be used on those parts. Finally, study pictures of animals to see what kinds of fur thickness they have in different body areas, as well as fur colors.



– Metasequoia educational material dump –

Here’s a bunch of stuff that might help you learn how to use Metasequoia.

These are some old interface explanation sheets I once made for a friend who was getting into Meta. They are old and outdated and lack some of the new features that later versions of Meta have, but the majority is still the same.

UI  materialsandlayers  menus  mirroring

The ui.png is missing some of the new features and modes of operation, and doesn’t explain some of the buttons. The materialsandlayers.png forgets to explain how you can group layers into groups and sub-groups (kind of like a directory tree) using the arrow buttons and the triangle on each layer that can show or hide all the sub-layers belonging to that layer. The mirroring.png warns about a symmetry bug that has long been fixed.

Here are some YouTube videos.

This video shows the box modeling + subdivision method.

This video shows how you can use the paint tool in meta to help you with your texturing.

This is a tutorial about rigging avatars in Blender, but at the beginning it shows me making a simple demo avatar in Meta.

This video shows what most of the tools do.

Okay, these two tutorials are for another program, but in part 2 of both of them I continue working in Metasequoia.


Here are some more resources from other people.

This thread is full of useful links and videos for modeling anime characters and stuff like that.

This site has lots of useful Metasequoia tutorials and videos; click the links in the menu on the left side of the page to see pages of different tutorials.

– Texture editing/recoloring –

Before we begin, I need to familiarize you with a few concepts.

– What is color? –

Light is made of particles called photons. All electromagnetic radiation travels in waves with different wavelengths, and in the case of visible light, the wavelength defines the color.
Different surfaces interact differently with photons of different colors: some photons are absorbed by the object, others get bounced back, and the color that bounces off the object is the color we see with our eyes. For example, tomatoes aren’t really red; they just deflect red light.
A white object deflects all light equally; a black object absorbs all light.
Long story short, the color of the photons deflected from the object’s surface is the color we perceive the object to have.

– What is ambiance? –

Ambiance/radiosity is the illumination created by light reflected off object surfaces. If, say, you have a white ball in a room with a red wall, the side of the ball facing the red wall will be slightly red too, because of the red photons bouncing off the wall.
Here’s an example: http://androidarts.com/tuts/radiosity.jpg
Here’s another simple example: http://www.fxguide.com/wp-content/uploads/2012/03/Cornell_Box_Vray.jpg
Notice the green, red and white reflections on the blue object and the white walls, as well as the blue reflection on the ceiling.

– So, how does this apply to texture modding? –

In order for your textures not to look grey and dull, you need to apply the same principles of ambiance to them.
Here’s an example; the boring skin-tone thing has the same dusty grey shading that should be avoided: http://androidarts.com/tuts/middle.jpg It looks the same as if it were a build menu recolor.

– If it’s too much to swallow: a simplified trick –

You need a tone theme: an overall consistent tone of ambiance throughout all your colors.
You can either be stylistic and pick a specific color for your ambiance, or use the most common everyday ambiance that you would have on SL if its lighting worked like in real life. Basically, that would be half sky-blue and half green/brown ground sort of ambiance, I guess. Or, if that is too complicated, just pick any color really; it just has to be similar but a bit different. For example, if you have blue, pick a different and slightly purple tone of blue.
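If you want a concrete way to get that “similar but a bit different” color, one option is to nudge the hue rather than just the brightness. A small Python sketch using the standard colorsys module (the color values and shift amount are just examples I picked):

```python
# One way to get a "similar but slightly different" ambiance color: rotate
# the hue in HSV space instead of darkening the color. Uses only the stdlib.

import colorsys

def shift_hue(rgb, degrees):
    """Rotate the hue of an (r, g, b) 0-255 color by the given degrees."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + degrees / 360.0) % 1.0
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

blue = (40, 90, 220)
print(shift_hue(blue, 25))  # a slightly purple-leaning blue for the shadows
```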

– So, assuming you have picked your ambiance, here are a few methods you can use to apply it to your textures. –

This will be the shadow layer.


And this will be your Color layer under the shadow layer.


You need to find a tool or filter in Photoshop (or whatever program you are using) that lets you select a color in the shadow layer image and turn it into a gradient alpha, similar to how you would remove a green background with chroma keying, only here it turns the entire green channel into alpha. If you are unable to find one, plan B would be to set the layer to multiply mode.
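For reference, here is what multiply mode does per pixel, sketched in plain Python (an illustration of the math, not any particular editor’s code):

```python
# "Multiply" blend mode darkens the color layer by the shadow layer: white
# shadow pixels leave the color untouched, black ones turn it black.

def multiply(color, shadow):
    """Blend two (r, g, b) 0-255 pixels in multiply mode."""
    return tuple(c * s // 255 for c, s in zip(color, shadow))

skin = (220, 170, 140)
print(multiply(skin, (255, 255, 255)))  # white shadow: unchanged
print(multiply(skin, (128, 128, 128)))  # mid grey: roughly half as bright
print(multiply(skin, (0, 0, 0)))        # black shadow: black
```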


And this is what you should get

Now you need to find the RGB or levels filter and shift that layer toward the color you want your ambiance to have.




And you get this.


Now everything is colored in the same tone, but it might not look too good on some of the colors you have, so here’s where the magic wand selection tool comes into play.


You can select the colors/areas you want to fine-tune and play some more with the RGB levels (do so by selecting them on the color layer, then switching to the shadow layer and editing it).



And you’re done.


Now, I realize the example image I made isn’t clear enough to actually see the differences, so here are some more examples.

On the left you have a typical boring recolor that uses the same color and the same tone for its shadows, only at a different brightness level. And on the right you have shading using the method I described above.


Let’s say you want a nice light blue; here’s what you get if you just change the prim color or try to recolor the texture the usual way.


Let’s fix this using the ambiance-colored shadow method.


Oh, and here’s one more trick you can do: take another copy of the shadow texture, and this time isolate all the speculars using the levels filter.


Put this on top of your texture using the addition/additive layer mode, or use color-to-alpha to turn all the black into alpha if you don’t have such a mode or it works differently than in my examples. Play around with its color levels as well, and you should get this.
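And the additive mode used in this step, per pixel, looks like this (again just a sketch of the math, not any editor’s actual code):

```python
# "Additive" blend mode sums channel values and clamps at 255: a black
# specular layer contributes nothing, bright spots add highlights.

def additive(base, specular):
    """Blend two (r, g, b) 0-255 pixels in additive mode."""
    return tuple(min(255, b + s) for b, s in zip(base, specular))

base = (60, 90, 180)
print(additive(base, (0, 0, 0)))       # black specular layer: no change
print(additive(base, (100, 100, 90)))  # highlight area gets brighter
```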

Oh, and by the way, one common mistake people make on SL is using too much white and especially too much black. You shouldn’t use pure black for your black parts or the detail will disappear into the blackness. Pick a really dark grey tone for your black parts, and a really light cream or blueish color for your whites.

Like in this image: the white isn’t really white and the black isn’t really black if you take them out of context and compare them to actual white and actual black.


Well, that’s it. I hope this isn’t too confusing; I will probably need to sit down and reorganize this tutorial a bit to make it simpler.

– Metasequoia – Introduction and setting it up –

– Introduction –

Before I begin any tutorials, I first need to introduce you to my tools of the trade, since all these tutorials will be oriented toward them and might not be useful to you if you are using another software package.

Metasequoia is a popular (in Japan) 3D modeling program. It is simple, straightforward and easy to understand, good for both beginners just getting into 3D and people with prior 3D experience, and well suited for creating video game models and 3DCG models.

The Metasequoia project was started by Dr. O. Mizno in 1998 and is still in ongoing development. The program has freeware and shareware versions; the shareware version requires a license if you want to save to formats other than its native .MQO format and to use plugins, but apart from that, you can freely use it to learn and create stuff. The license is pretty cheap, just $45, and if you are hesitant to buy it, you can get a free 30-day trial license to try it out. You don’t really need it, though, since you can try the program without a license anyway; you only need a license to use plugins and export your stuff to Blender, which is another program I am using and will get into later. Or you could always find a plugin for Blender that can import .MQO files.

You can get the program here: http://www.metaseq.net/metaseq/index.html
English version of the page: http://www.metaseq.net/english/index.html




– Setting it up –


Assuming you have already downloaded and extracted the program, launched it and selected English as your language, the very first thing you need to do is go to File and disable basic mode; basic mode hides some of the important tools that I’m going to be using in my tutorials.


Now go to File again, and this time go to Configuration. This is where you set the program up, and the very first thing you need to do here is set the UI theme. The default UI looks really ugly and is hard to understand, so I am using a custom UI that I made out of an older theme that used to come with Metasequoia before it was discontinued because it didn’t have buttons for all the new features that were added later. You can download my theme here.

Extract the mo24.style and mo24.bmp into your Metasequoia Data folder. Now, in the screen settings, below the language drop-down menu where it says Style, select mo24.style.
Set the status bar to align bottom; the status bar will show you tooltips and other useful information.


In the Preview tab, set the texture resolution to maximum and enable anti-aliasing if you want smooth lines and edges.


In the File tab, you can set the default directories where the program will look for your work material, like textures and bump maps, and you can also enable automatic backup of your files so that you won’t accidentally lose any work (I have it disabled because I have my own way of making backups). You should also set the file associations for .MQO files.


In the System tab, set max undo to 1000 and memory to 512 MB; that way you can always undo everything if you screwed something up, didn’t notice it right away and kept working.


In the Edit tab, set the shape of the rotation handle to arrows instead of rings, because those rings are nearly impossible to grab at some camera angles.


And you are done with the configuration menu and can click OK. We are almost done; just a few more things left to do. You can now see stuff grouped into tabs on the left side of the screen. The very first tab is called System; find the two buttons called ObjPanel and MatPanel and press them both. This opens the object panel and the material panel; grab them and dock them to the right side of your screen. The object panel is an object layer window that works exactly the same way as layers do in 2D programs like Photoshop. The material panel is kind of like your color/texture palette.


Finally, in the edit options tab, set the type of selection you want, free line or box, and you are all done.


Now you are ready to begin work. My next tutorial will show what button does what and that sort of stuff, but until I actually get around to it, you could try figuring that out on your own by clicking things and seeing what they do.

– An Introduction to 3D –

This article is a basic introduction to what 3D is and how Second Life utilizes it.

To understand what 3D is and how it works, imagine a map. The map is a flat, 2-dimensional image, or a 2-dimensional space with pixels in it. 2D has only 2 coordinates for every pixel: X and Z, or left/right and forward/back (example: 3 steps left, 4 steps forward, or x-3 z4).

Now add a third coordinate to the map, Y or up/down. You now have a 3 dimensional map or a 3 dimensional space with a 3d model in it.

More examples of how 3D space works.


A 3D model is made out of points called vertices (I’ll talk more about those further down).

The coordinates of a single example point (represented by a yellow ball) in 3D space are x6 y6 z4. All 3 dimensions have a grid of their own that the points snap to; a point can only snap to a whole number like 1, 2 or 3, never in between. The good thing is that those numbers usually have many digits, like 60000 60000 40000, so your point can snap anywhere between 60000 and 69999; single-digit steps would already be nano scale, and you would never even work at that scale, so you don’t have to worry about it. Of course, a lot of programs allow you to set up your own grid for precision modeling.

Note: a lot of 3D applications (like Second Life and Blender) use Z as up/down and Y as back/forward, instead of Y for up/down and Z for back/forward. Some programs or game engines use their own orientations and have all 3 of the XYZ axes mixed up and swapped around; it would be awesome if people agreed on a standard.

A 3D model is made out of points called vertices, and lines that connect the points, called edges; lines connected in a loop make a polygon. There are 3-point and 4-point polygons (4-point polygons are easier to work with, but they still get broken down by the graphics card into 2 triangles at the rendering level, so in a sense there is no such thing as a 4-point polygon; when exporting a model to SL, all the 4-point polygons are split in half to make triangles, a process called triangulation). The inner surface of a polygon is called a face (imagine the polygon as a frame and the face as a picture in that frame; the direction a face points is called its normal). Polygons that are connected with each other and share edges form geometry, also called a wireframe.

Geometry made only of 3-point polygons is called mesh; mesh is mostly used in video games and other applications that use real-time 3D rendering, such as Second Life.
4-point or quad geometry is mostly used in CGI movies and model editing, but it gets triangulated when exported to video games.
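The triangulation mentioned above can be sketched in a few lines of Python (illustrative only; real exporters also handle n-gons, winding order and so on):

```python
# Triangulation: every 4-point polygon is split into two triangles along a
# diagonal before the GPU renders it. Polygons are tuples of vertex indices.

def triangulate_quad(quad):
    """Split a quad (a, b, c, d) into two triangles sharing the a-c diagonal."""
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

def triangulate(polygons):
    """Triangulate a mixed list of 3- and 4-point polygons."""
    tris = []
    for poly in polygons:
        if len(poly) == 3:
            tris.append(tuple(poly))
        elif len(poly) == 4:
            tris.extend(triangulate_quad(poly))
    return tris

# One quad face (indices into a vertex list) becomes two triangles:
print(triangulate([(0, 1, 2, 3)]))  # [(0, 1, 2), (0, 2, 3)]
```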

A 3D model file holds the XYZ coordinates of each point, as well as some additional information like which lines connect to those points, surface materials and UV data. Most 3D file formats hold this data in plain text and can be opened and edited using Notepad (a good way to manually repair corrupted files).

Here’s what a simple cube looks like.

These are the point coordinates; they show how far away each point is from 0,0,0, which is the reference point and the absolute center of the 3D space the model is in.
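For illustration, here’s roughly what such a plain-text file looks like: a minimal cube in Wavefront OBJ style, parsed back with a few lines of Python. (Real files also carry UVs, normals and materials; this is just the bare bones.)

```python
# A unit cube in plain-text Wavefront OBJ style: "v" lines are vertex XYZ
# coordinates measured from the model's 0,0,0 origin, "f" lines are faces
# listing 1-based vertex indices. Simplified illustration of the idea.

cube_obj = """\
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
v 0 0 1
v 1 0 1
v 1 1 1
v 0 1 1
f 1 2 3 4
f 5 6 7 8
f 1 2 6 5
f 2 3 7 6
f 3 4 8 7
f 4 1 5 8
"""

vertices = [tuple(map(float, line.split()[1:]))
            for line in cube_obj.splitlines() if line.startswith("v ")]
faces = [line.split()[1:]
         for line in cube_obj.splitlines() if line.startswith("f ")]
print(len(vertices), "vertices,", len(faces), "quad faces")  # 8 vertices, 6 quad faces
```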

Surface materials are groups of polygons that the model is divided into; each group can have its own color, shading and texture. In Second Life, materials are called faces (not to be confused with normals).

More advanced model and materials.

A UV map is a 2D representation of the surface of a 3-dimensional object. To understand how a UV map works, take a piece of paper and roll it up into a cylinder; you now have a 3-dimensional shape. Unroll that cylinder and lay it flat on the table, and you have a UV layout of that shape. Each point and line in the UV map belongs to a point and line on the 3-dimensional object.
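In coordinates, the paper-cylinder analogy looks like this: U is how far around the circumference a point is, and V is its height. A minimal Python sketch (my own simplified unwrap, not any program’s actual code):

```python
# The paper-cylinder analogy in numbers: for a point on the side of a
# vertical cylinder, U is the fraction of the way around the circumference
# and V is the fraction of the height. Simplified illustration.

import math

def cylinder_uv(x, y, z, height):
    """Map a point on a vertical cylinder's surface to (u, v) in 0..1."""
    u = (math.atan2(y, x)) / (2 * math.pi) % 1.0  # angle around the axis
    v = z / height                                # fraction of the height
    return u, v

# A point on the side of a height-2 cylinder, halfway up:
print(cylinder_uv(1.0, 0.0, 1.0, 2.0))  # (0.0, 0.5): on the seam, mid-height
```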

A more advanced example of how a UV map works would be a papercraft model.

UV maps can be cut up and arranged to make better use of the available UV space. This helps make the best of a given texture resolution: the more texture pixels a polygon takes up, the better image quality it will have in that particular surface area.

Now, moving on to SL.

What are prims? “Prim” is short for “primitive”; a primitive is a basic geometrical shape that you start with and turn into more advanced shapes. In Second Life, prims are a set of simple geometrical shapes that you can shape, combine and link to make more complex objects. SL prims come preset with different material surfaces, UV maps and advanced transformation handles. The idea of prims was basically to build houses from; the whole build menu in SL has an interface similar to a video game map editor, but people on SL figured they could use prims to build avatar attachments and other cool stuff instead of houses.

Notes: While the idea of real-time 3D object manipulation in a game to create content is really neat in theory, and is one of the biggest selling points of SL, in practice it has a few problems because of the lazy and resource-wasteful way it was executed. Prim builds being the main cause of all the frame rate lag on SL (not to be confused with network lag) is one of them, and here’s why. This example shows a cube made out of 162 prims (on the left) and the same exact cube made out of one single mesh (on the right).

Prim builds waste a lot of polygons in places where they aren’t needed to give the object its shape and aren’t even seen by the camera. It’s hard for computers to process and render all this geometry, and it often results in noticeable drops in frame rate, especially on older computers. It is not a good idea to use prims to make avatar attachments (20 avatars on screen with multiple 250-prim attachments all over their bodies will lag you back to December), or anything else for that matter that isn’t a house.

60 triangles were wasted just to make this single corner of the cube, while the mesh version only uses 1 triangle.

Prims are very inflexible; you need a lot of them to do the simplest things, and in the end, some shapes are just impossible. And this brings us to sculpted prims.

What are sculpted prims? Sculpted prims, or sculpts, are sphere prims shaped, using a sculpt map, into shapes that cannot be accomplished with the normal prim transformation handles.
Here’s how it works: you make a shape out of a ball or cylinder primitive in a 3D program; your shape has to have a spherical UV map. When you are finished with your 3D shape, you generate a 64×64 color image called the sculpt map and export it to SL. Via the UV map, the RGB value of every pixel on the sculpt map represents the XYZ position of a point on your object: a 3D map in 2D format. SL uses the shape information in the sculpt map to push every point of a sphere prim into the same position it was in your 3D program, giving it the shape that you made; in that sense it’s kind of like sculpting a shape out of a chunk of rock.
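The pixel-to-point mapping can be sketched in a couple of lines of Python (a simplified illustration of the idea; SL’s actual decoding involves more than this):

```python
# Sketch of the sculpt-map idea: each pixel's RGB value (0-255 per channel)
# is read back as an XYZ position inside the prim's bounding box. Simplified
# illustration of the decode direction only.

def decode_pixel(rgb, bbox_size=(1.0, 1.0, 1.0)):
    """Turn one sculpt-map pixel into an XYZ offset within the bounding box."""
    return tuple(c / 255.0 * s for c, s in zip(rgb, bbox_size))

# 0 maps to the box's near corner, 255 to the far corner:
print(decode_pixel((0, 0, 0)))        # (0.0, 0.0, 0.0)
print(decode_pixel((255, 255, 255)))  # (1.0, 1.0, 1.0)
```

Note that each axis only has 256 possible values; that coarse grid is exactly the loss of detail discussed further down.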

The sculpt map method imposes a lot of limitations on the creator and challenges them to come up with creative ways to overcome those limitations. At the start, you were bound to 16×16 (32×32 sculpt map) or 32×32 (64×64 sculpt map) segment objects. Later, LL implemented the ability to create custom-resolution sculpt maps to better optimize your shapes; you could have, for example, a 16×128 sculpt map (8×64 segments) to make better use of the available polygons. But not all viewers supported this feature correctly, and these kinds of sculpts often rendered out broken in most third-party viewers (just like a lot of other things in TPVs).

When optimizing your sculpts, you also had to keep in mind how SL would decimate the segmentation of the sculpt to make LODs (levels of detail), if you wanted your object not to lose its shape or turn to garbage when you zoomed out.

But even with all the optimization, sculpt maps are still very resource-wasteful, since you cannot remove any of the polygons that the camera cannot see or that aren’t necessary to give the object its shape. And since you still have a pretty much fixed amount of polygons and cannot do anything that would change the grid-like topology of your object (which in turn would break the spherical UV map), you still end up using multiple sculpted prims to make more complex shapes. Making sculpts is just a really awkward, time-consuming and frustrating process, and people who think making sculpts is easier than making meshes, and still make them long after mesh import was implemented, refusing to learn mesh (there’s nothing to learn to begin with; if you know how to make sculpts, you are already a master of mesh), are just crazy.


For the sake of demonstration, I took a gun that is made out of sculpted prims, stripped it of all the polygons that aren’t necessary to give the gun its shape, and made a “mesh” version of it. In reality, I would have to redo the same gun from scratch; because of the flawed nature of sculpted prim geometry, it still has too many polygons. It shouldn’t be more than 2k or 3k.

Click to see the full size image

Notes: Any experienced 3D modeler who isn’t familiar with SL and how it works would have their mind boggled after seeing the blasphemous wireframes of prim and sculpted prim builds. In this sense, SL is actually mind-blowingly amazing in what huge quantities of polygons it can process and render on screen, along with real-time shadows, lighting and DOF.

Anyway, on top of that, what you get on SL is a really loose representation of what you actually created in your 3D program, since lots of fine detail is lost: RGB values can only go as far as 255×255×255, which means the resolution of the grid that all the points snap to is very coarse. On top of that, most plugins out there don’t do a good job of recording that detail to begin with, not to mention image compression (look up lossless compression if you don’t know about it). As a result, you get bumpy and chewed-up objects that sometimes look like potatoes and, as one of my friends described it, “look like they have assholes at the poles”. Most plugins are very picky and limited by themselves, adding their own limits to an already limited method (Primstar for Blender is so far the best sculpt map plugin, since it has no limitations of its own and can make very precise sculpts without loss of detail, apart from the loss you get from the 255-step grid). In the end, sculpts are only really good for making rocks :|

Click to see full size images.

Notes: I don’t get why LL went with the sculpt map image idea to begin with; you would get far better results if the XYZ vertex information was saved in a text format, without any loss of detail.

The name “sculpt” itself is rather confusing, both to those who work with 3D and those who don’t know anything about it.
In the 3D modeling world, sculpting means literally carving detail into a 3D model using tools that can manipulate, add or remove polygonal detail. On SL, sculpting means using a special color image to deform a sphere prim.
SL players that have no prior experience or knowledge of 3D often confuse programs meant for actual 3D sculpting with SL sculpt creation; they think these programs were created to make sculpts for SL because it says “sculpting” somewhere in their name or description. While some of these might actually have a plugin that lets you sort of turn your creation into a sculpt map, most of the time they are the worst starting point for inexperienced users who want to learn to make stuff for SL, due to the complexity of programs that are aimed at industry professionals and not the average Joe.

Both prims and sculpts are meshes, just like everything else on SL, from the Linden ground and water to the Linden trees and avatars. Everything rendered on your screen in SL is made out of 3-point polygons, and this brings us to mesh import.

Click to see full size image.

Mesh import is LL’s latest addition to the SL content creator’s toolkit. It means you no longer have to use all kinds of indirect and often awkward workarounds, such as sculpt maps and prim arranging scripts, to bring your creations to SL. You can now import your models straight from your 3D program, without having to make them in a specific way, turn them into sculpt maps and hope they won’t come out a jumbled mess when imported. Mesh import removes many of the limitations imposed by the sculpt map method: you now have full freedom to use any topology you want, different UV solutions and materials, you can even rig your meshes to the SL skeleton, and you are free to use any tools you want in shaping a mesh object, since you no longer have to worry about segmentation or screwing up a spherical UV map.

And so this wraps up the introduction to 3D, now, on to the tutorials!