Ganymede Usage Guide

API Version: 0.26.4
Date: 2021-06-07

This document will guide you through the process of logging in to Meshcapade's online scan-processing API (Ganymede), uploading a 3D scan file, requesting an alignment, and then downloading the result.

0. Quickstart

A Python command-line client is available. To install it, run the following command:

pip install git+https://gitlab+deploy-token-48796:niSxdsQ9tUXjAaoxp4_s@gitlab.com/meshcapade/ganymede-client.git

Avatars from scans

The following is an example alignment with the ganymede client for avatars from scans:

ganymede align --gender female --infile my_scan.obj my_result.obj

This will upload my_scan.obj to the API, request an alignment, wait for it to complete, and then download the resulting file into my_result.obj

For the full set of available options, see ganymede align --help

Avatars from measurements

The following is an example alignment with the ganymede client for avatars from measurements:

ganymede measurement --gender female --height 186 my_result.zip

This will request an alignment from measurements with a height of 186cm, wait for it to complete, and then download the resulting file into my_result.zip

For the full set of available options and available measurements, see ganymede measurement --help

All interactive API endpoints and actions are available on readme.io

Note: initial login will require a password change, so please follow step 1 (below) to complete a web login at least once before using the client.

1. Log in

  1. Point your browser at: https://api.digidoppel.com/ganymede/login.html
  2. Enter your credentials (if you don't have any, sign up here)
  3. You should receive your auth token; it expires in one hour and will need to be renewed after that. It should look something like this:
  {
    "message": "Login successful!",
    "token": "eydrafWOisIdf1MVsldfZcsmdhcL3pf0aGRJSgENZb3V4cnpFeXZdKQnRwbnhBSTdocjVZ1VubkpERTf0iLCJhbciOiJSUzI1NiJ9",
    "expires_in": 3600
  }
  • The API base URL is: https://api.digidoppel.com/ganymede
  • All API requests require an authorization header: Authorization: $TOKEN
  • POST requests additionally require content-type: Content-Type: application/json

Programmatic login

Ganymede uses AWS Cognito for user management and authentication.
For API integration, here is sample Python code for programmatic login, using the boto3 and warrant-lite libraries:

import boto3
import botocore.config
from warrant_lite import WarrantLite

REGION = 'eu-central-1'
POOL_ID = 'eu-central-1_zIyGPqFrl'
CLIENT_ID = 'b8rb3ei1qdv9okjkht0jlro2i'

def get_headers(username, password):
    '''Log in and fetch authentication headers for subsequent API requests'''
    client = boto3.client(
        'cognito-idp',
        region_name=REGION,
        config=botocore.config.Config(signature_version=botocore.UNSIGNED),
        aws_access_key_id='',
        aws_secret_access_key='')
    aws = WarrantLite(username=username, password=password,
                      pool_id=POOL_ID, client_id=CLIENT_ID, client=client)
    tokens = aws.authenticate_user()
    id_token = tokens['AuthenticationResult']['IdToken']
    headers = {'Authorization': id_token, 'Content-Type': 'application/json'}
    return headers

Cognito SDKs for other languages are also available, e.g. AWS Amplify for JavaScript.

The next steps will demonstrate each request via curl command line examples.
Some environment setup is included here for convenience:

URL="https://api.digidoppel.com/ganymede"
TOKEN="xxxx-xxxx-xxxx-xxxx" # Insert the actual token received after login

2. Create an asset

Send a POST request to the /asset endpoint, with body containing:

  {
    "filename": "SomeScan.obj"
  }

curl -sXPOST -H "Authorization: $TOKEN" -H "Content-Type: application/json" --data '{"filename": "SomeScan.obj"}' $URL/asset

The response will include an asset_id and a signed upload url, and should look something like this:

  {
    "asset_id": "07a937a8-cb19-11e8-8c7a-ba5e0fbfa487",
    "asset_type": "scan_3d",
    "filename": "SomeScan.obj",
    "state": "ready",
    "upload": {
       "url": "https://signed-PUT-url",
       "expires_in": 600
    }
  }
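From this response a client needs the asset_id and the signed upload URL. A minimal Python sketch of extracting them, using the sample values shown above:

```python
import json

# Sample response body, as returned by POST /asset (values from the example above).
response_body = '''
{
  "asset_id": "07a937a8-cb19-11e8-8c7a-ba5e0fbfa487",
  "asset_type": "scan_3d",
  "filename": "SomeScan.obj",
  "state": "ready",
  "upload": {
     "url": "https://signed-PUT-url",
     "expires_in": 600
  }
}
'''

asset = json.loads(response_body)
asset_id = asset['asset_id']        # needed for all later requests
upload_url = asset['upload']['url'] # signed PUT target for the scan file
```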

3. Upload the asset

Send a PUT request with the file data to the signed URL (no extra headers needed):

curl -sXPUT --upload-file local/dir/SomeScan.obj "https://signed-PUT-url"
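The same step as a Python sketch, using only the standard library. The `opener` argument is an illustrative injection point (not part of any official client) so the function can be exercised without network access:

```python
import urllib.request

def upload_asset(signed_url, path, opener=urllib.request.urlopen):
    '''PUT the file at `path` to the signed upload URL; returns the HTTP status.

    No extra headers are needed: the signature is embedded in the URL itself.
    '''
    with open(path, 'rb') as f:
        data = f.read()
    req = urllib.request.Request(signed_url, data=data, method='PUT')
    with opener(req) as resp:
        return resp.status
```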

4. Verify assets (optional)

Send a GET request to $URL/asset/{asset_id} to view the uploaded asset and get a signed download url for the original data.

curl -sXGET -H "Authorization: $TOKEN" $URL/asset/{asset_id}

5. Request an alignment

Send a POST request to the /asset/{asset_id}/alignment endpoint, with body containing job parameters:

  {
    "gender": "male"
  }

Please see the API reference below for the full set of request parameters.

curl -sXPOST -H "Authorization: $TOKEN" -H "Content-Type: application/json" -d '{"gender": "male"}' $URL/asset/{asset_id}/alignment

The response will contain a new sub_id for the generated asset as well as the job parameters:

  {
    "asset_id": "07a937a8-cb19-11e8-8c7a-ba5e0fbfa487",
    "sub_id": "alignment/b87f9436-cb24-11e8-a7d4-6261940306cf",
    "parameters": {
       "gender": "male"
    },
    "asset_type": "alignment",
    "filename": "SomeScan_alignment.obj",
    "state": "pending"
  }

6. Check alignment status

Send a GET request to the /asset/{asset_id}/{sub_id} endpoint to check for results.

curl -sXGET -H "Authorization: $TOKEN" $URL/asset/{asset_id}/{sub_id}
  • "state": "ready" indicates the alignment is ready to download, and the reply will include a signed download url:
  {
    "download": {
      "url": "https://signed-GET-url",
      "filesize": 2867182,
      "expires_in": 600
    }
  }
  • "state": "error" indicates an error occurred during processing. The accompanying "message" field should indicate the type of error.
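A client typically polls this endpoint until a terminal state is reached. A sketch of such a loop; `fetch_status` is a hypothetical stand-in for the GET request above, and the interval and timeout values are illustrative:

```python
import time

def wait_for_alignment(fetch_status, poll_interval=30, timeout=3600):
    '''Poll until the alignment reaches a terminal state.

    fetch_status: callable returning the parsed JSON body of
    GET /asset/{asset_id}/{sub_id}. Returns the signed download URL
    on "ready", raises on "error" or timeout.
    '''
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status['state'] == 'ready':
            return status['download']['url']
        if status['state'] == 'error':
            raise RuntimeError(status.get('message', 'alignment failed'))
        time.sleep(poll_interval)
    raise TimeoutError('alignment did not finish within the timeout')
```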

7. Download result

Send a GET request to the signed URL to download the result file.

curl -sXGET "https://signed-GET-url" > SomeScan_alignment.obj
Note: Alternatively, you may also download the result through your browser by pasting the URL into your address bar.

Well-conditioned inputs and best practices

The chance of a successful alignment, and its quality, depend on the input data and on the parameters passed with the processing request. The ideal input pose is an A-pose, with fingers splayed and legs not touching at the thigh; this is also a good starting pose for animation purposes. Poses that do not conform to one of the supported archetypes may still work, but the further the input mesh is from these poses, the greater the chance of alignment failure. A good pose is the cornerstone of a good alignment.

Please refer to the Wiki for our best practices on what kind of input can be consumed.

Pose guide

Animation guide

A to Bodybuilder
A to Catwalk
A to Dancing in Rain
A to Hands Front
A to Hip Hop
A to Irish Dance
A to Model
A to Salsa
A to Stretches
A to Walk
Contra Pose
Wide to A Pose
Wide to Arms Retracted
Wide to Catwalk
Wide to I Pose
Wide to Squat
Wide to Toe Touch


Pay attention to the scale (input_units) of the input mesh. If it diverges from the scale that you pass to the alignment process (or from the default, if you do not pass an argument), alignment time can skyrocket or alignments can fail. Automatic detection and mitigation of scale mismatches is on the roadmap.
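A quick local sanity check can catch most unit mismatches before upload. This sketch assumes a standing human subject roughly 1.5-2.1 m tall; the thresholds and the vertex-tuple representation are illustrative:

```python
def guess_units(vertices, up_axis=1):
    '''Guess input_units from the mesh extent along the up axis,
    assuming a standing human subject (roughly 1.5-2.1 m tall).

    vertices: iterable of (x, y, z) tuples; up_axis: index of the up axis.
    '''
    coords = [v[up_axis] for v in vertices]
    height = max(coords) - min(coords)
    if height < 3:      # ~1.8 for a subject modeled in meters
        return 'm'
    if height < 300:    # ~180 for centimeters
        return 'cm'
    return 'mm'         # ~1800 for millimeters

# Example: a bounding box 1.8 units tall suggests meters.
print(guess_units([(0, 0, 0), (0, 1.8, 0)]))  # m
```

If the guess disagrees with the input_units you intend to pass, rescale the mesh or adjust the parameter before requesting the alignment.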

Local coordinate system

  • Alignment takes place in the local space of the mesh object in the uploaded file.
  • The local coordinate system should be centered approximately at the center of the input mesh, midway between the feet.
  • Make sure the input mesh faces the look_axis and the head end of the body points towards the up_axis.

Remove mouth and eye bags

The alignment process does not cope well with large internal geometric volumes such as the mouth bags and eye bags found in typical artist-generated animation meshes. The "anonymize" parameter can mitigate this; if it does not, these portions should be cut from the input mesh.


Keep hair above the neck

Input meshes with hair occluding the neck will not function well.


Tight fitting clothing

Input meshes with clothing do work, but the tighter the clothing, the better. Loose or protuberant clothing or adornments may cause alignment to fail entirely, take a very long time, or produce subpar results. Keep in mind that the training data consisted mostly of nude subjects, so any clothing included in the scan will be treated as if it were the surface of the subject's skin.


Limitations and processing notes

  • Currently, low-resolution FBX results are not supported. Support is on the roadmap.
  • When using the Ganymede client, the output will overwrite any existing local file of the same name without warning. Overwrite protection is on the roadmap.
  • Typical alignment times are under 20 minutes with well-conditioned inputs. If an alignment takes over an hour, it may have failed.

8. API reference

Available endpoints and methods

The full set of interactive API endpoints and actions are available on readme.io

A non-exhaustive list of available endpoints:

  1. Asset collection: /asset
    • POST: Register a new asset
  2. Individual asset: /asset/{asset_id}
    • GET: Retrieve information for an asset
    • DELETE: Delete an asset
  3. Alignment sub-collection: /asset/{asset_id}/alignment
    • POST: Request alignment for an asset
    • GET: List all alignments for an asset
  4. Individual alignment: /asset/{asset_id}/alignment/{alignment_id}
    • GET: Retrieve information for an alignment

Supported file types


The default mode of operation for the API is to accept and produce single .obj files.
Scan input as point-cloud .ply and compressed .zip files is also supported. Zip files must contain:

  • Exactly one scan file (.obj or .ply)
  • Optionally, one texture image (.png or .jpg)

If a texture image is present the API will automatically attempt to re-map it to the output mesh geometry.

Note: Please make sure the filenames inside a zip archive do not start with a digit or a special character.
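These zip constraints can be checked locally before upload. A sketch using only the standard library; the interpretation of "no digit or special character" as "must start with a letter" is an assumption, as are the helper's name and error messages:

```python
import re
import zipfile

def check_scan_zip(path):
    '''Validate a scan zip against the rules above: exactly one .obj/.ply
    scan, at most one .png/.jpg texture, filenames starting with a letter.

    Returns (scan_name, texture_name_or_None); raises ValueError otherwise.
    '''
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
    scans = [n for n in names if n.lower().endswith(('.obj', '.ply'))]
    textures = [n for n in names if n.lower().endswith(('.png', '.jpg'))]
    if len(scans) != 1:
        raise ValueError('zip must contain exactly one .obj or .ply scan file')
    if len(textures) > 1:
        raise ValueError('zip may contain at most one texture image')
    for n in names:
        if not re.match(r'[A-Za-z]', n):
            raise ValueError(f'filename {n!r} should start with a letter')
    return scans[0], (textures[0] if textures else None)
```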


The default output for a single scan input is an aligned .obj mesh.
If output_format: fbx is selected, the output will be a single rigged FBX file in the specified output pose. Note that this overrides file naming invisibly: a file named 'object_aligned.obj' will actually be an FBX file with an incorrect extension if this parameter is set to fbx.
You may alternatively supply the output_animation parameter for animated output.

Note: Supplying an output_animation parameter will override output_format to FBX, even if a file extension is supplied; the output will be an FBX file.

Sample animation options include:

  • a-walk: Starting from A-pose, moving into a walking animation
  • wide-a: Starting from a wide stance, moving into A-pose

More animation options will be added. Custom animations can also be created from motion capture or 4-D scan sequences.

If a .zip file is provided as input, the output will similarly be a .zip containing the aligned scan and, if present, the re-mapped output texture file.


Alignment Request Schema

The full set of available parameters for an alignment request is listed below.

  • gender is the only required parameter
  • Notable parameter defaults:
    • input_pose: a
    • input_units: m
    • up_axis: y
    • look_axis: z
  • Output pose and units will default to "same as input" unless otherwise specified
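As a sketch, a small helper that validates the one required parameter and fills in the defaults listed above. The parameter names, values, and defaults come from this section; the helper itself (`alignment_request`) is hypothetical, not part of any official client:

```python
import json

# Server-side defaults, as listed in this section (assumed complete for
# illustration only).
DEFAULTS = {
    'input_pose': 'a',
    'input_units': 'm',
    'up_axis': 'y',
    'look_axis': 'z',
}

def alignment_request(gender, **params):
    '''Build an alignment request body; gender is the only required field.

    Any parameter not given explicitly is filled in with its documented
    default, so the request spells out what the server would assume anyway.
    '''
    if gender not in ('male', 'female'):
        raise ValueError("gender must be 'male' or 'female'")
    body = dict(DEFAULTS)
    body.update(params, gender=gender)
    return json.dumps(body)
```

The resulting string can be passed directly as the POST body in step 5.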
  {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Alignment Request",
    "description": "Request to align a 3D Scan",
    "type": "object",
    "properties": {
      "gender": {
        "description": "Gender of the input scan. This parameter will not affect the fitting process, which instead uses the shape of the input scan, but it will affect animation and deformation, as the sample data from males and females is distinct. However, if 'none' is passed to the refinement parameter, the fitting process ignores the shape (but not the pose) of the scan, and the fitted mesh will more closely resemble the gender-defined model.",
        "type": "string",
        "enum": ["male", "female"]
      },
      "input_template": {
        "description": "Body template to use for alignment. The difference here is just to the proportions of the starting point used to fit the mesh. No data from children has been used to generate the model.",
        "type": "string",
        "enum": ["default", "child"]
      },
      "input_pose": {
        "description": "Initial pose for alignment. See above for pose guide and definitions. Getting this closer to the reality of your scan can speed up the alignment process.",
        "type": "string",
        "enum": ["a", "t", "i", "u"],
        "default": "a"
      },
      "input_hands": {
        "description": "Initial hand pose for alignment.",
        "type": "string",
        "enum": ["splay", "curl", "fist", "relaxed"],
        "default": "splay"
      },
      "input_units": {
        "description": "Units of the input scan.",
        "type": "string",
        "enum": ["m", "cm", "mm"],
        "default": "m"
      },
      "output_pose": {
        "description": "Target output pose. See above for pose guide and definitions.",
        "type": "string",
        "enum": ["scan", "a", "t", "i", "u"],
        "default": "scan"
      },
      "output_hands": {
        "description": "Target output hand pose.",
        "type": "string",
        "enum": ["scan", "splay", "curl", "fist", "relaxed"],
        "default": "scan"
      },
      "output_units": {
        "description": "Units of the aligned output.",
        "type": "string",
        "enum": ["m", "cm", "mm", "inches"]
      },
      "output_format": {
        "description": "File format of the aligned output.",
        "type": "string",
        "enum": ["obj", "fbx", "pc2"],
        "default": "obj"
      },
      "output_filename": {
        "description": "Server-side filename of the aligned output (without extension).",
        "type": "string"
      },
      "output_texture_name": {
        "description": "Server-side filename of the output texture (without extension).",
        "type": "string"
      },
      "output_texture_resolution": {
        "description": "Resolution of the output texture.",
        "type": "number",
        "default": 4096
      },
      "output_animation": {
        "description": "Animation to apply to the aligned output; incompatible with the 'low' resolution parameter.",
        "type": "string"
      },
      "platform_height": {
        "description": "Height of base to remove before aligning, in meters. Useful for omitting points comprising the surface the scan subject was standing on.",
        "type": "number"
      },
      "up_axis": {
        "description": "Up axis of the input scan.",
        "type": "string",
        "enum": ["x", "y", "z", "-x", "-y", "-z"],
        "default": "y"
      },
      "look_axis": {
        "description": "Look axis of the input scan.",
        "type": "string",
        "enum": ["x", "y", "z", "-x", "-y", "-z"],
        "default": "z"
      },
      "refinement": {
        "description": "Refinement level for the alignment process. Higher settings result in a slower alignment that is more biased towards matching the precise surface defined by the input point cloud. 'none' ignores the surface of the scan and only uses derived parameters (e.g., waist circumference, arm length, etc.) and fits to the gendered model archetype specified by the gender parameter.",
        "type": "string",
        "enum": ["high", "standard", "low", "none"],
        "default": "standard"
      },
      "resolution": {
        "description": "Resolution of the aligned output mesh; available resolutions are 6890, 27578, and 110306 vertices.",
        "type": "string",
        "enum": ["high", "medium", "low"],
        "default": "medium"
      },
      "anonymize": {
        "description": "This causes the alignment to skip fitting the output to the input head, leading to an anonymized output.",
        "type": "boolean",
        "default": false
      }
    },
    "required": ["gender"]
  }