
Object Fusion Tracker

Related documentation
Visual SLAM Tool
Tracker Coordinate System
Visual SLAM Learning Guide

The Object Fusion Tracker loads the map file and renders a 3D object on it.
After the target is recognized and its initial pose is acquired through the MAXST SDK, ARKit is used for tracking.
※ To use ARKit, you must enter the actual size of the target. (See Start / Stop Tracker.)

The biggest difference from the existing Object Tracker is how tracking is maintained. The existing Object Tracker tracks through the frames received from the RGB camera, so, due to the nature of RGB cameras, tracking is lost when the target leaves the camera frame or offers too few feature points. The Object Fusion Tracker, on the other hand, tracks through ARKit, which learns the environment the target lies in, so tracking is maintained even when the target leaves the camera frame or only a few feature points remain.

Please refer to the Visual SLAM Learning Guide to create a more precise map while scanning the 3D space.

Create Instance
Start / Stop Tracker
Use Tracking Information
Set Map
Add / Replace Map
Change Tracking Mode

Create Instance

>ObjectFusionTrackerViewController.swift

var trackingManager:MasTrackerManager = MasTrackerManager()

Start / Stop Tracker

trackingManager.isFusionSupported()
This function checks whether the device in use supports Fusion.
The return value is a Bool: true means the device is supported, false means it is not.

trackingManager.getFusionTrackingState()
This function returns the tracking state of the current Fusion.
The return value is an Int: -1 means tracking is not working properly, 1 means it is working properly.
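
As a minimal sketch, these two calls can gate tracker startup and serve as a per-frame health check; startFusionTrackerIfSupported(), checkFusionState(), and the log messages are illustrative names, not SDK API:

func startFusionTrackerIfSupported() {
    // Start the Object Fusion Tracker only on devices that support Fusion.
    guard trackingManager.isFusionSupported() else {
        print("This device does not support Fusion")
        return
    }
    trackingManager.start(.TRACKER_TYPE_OBJECT_FUSION)
}

func checkFusionState() {
    // -1 means Fusion tracking is not working properly, 1 means it is.
    if trackingManager.getFusionTrackingState() == -1 {
        print("Fusion tracking lost - consider guiding the user to rescan")
    }
}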

To start / stop the Tracker after loading the map, refer to the following code.

>ObjectFusionTrackerViewController.swift

    func startEngine() {
        ...
        
        if trackingManager.isFusionSupported() {
            // Start the Object Fusion Tracker before registering map data.
            trackingManager.start(.TRACKER_TYPE_OBJECT_FUSION)
            // First call to addTrackerData(): pass the path of the 3dmap.
            let objectTrackerMapPath:String = Bundle.main.path(forResource: "obj_190712_7", ofType: "3dmap", inDirectory: "data/SDKSample")!
            trackingManager.addTrackerData(objectTrackerMapPath)
            // Second call: set the actual size of the target. (Unit: m)
            trackingManager.addTrackerData("{\"object_fusion\":\"set_length\",\"object_name\":\"obj_190712_7\", \"length\":0.53}")
            trackingManager.loadTrackerData()
        }
    }

@objc func resumeAR()
{
    ...
    // Restart the Object Fusion Tracker when the AR view resumes.
    trackingManager.start(.TRACKER_TYPE_OBJECT_FUSION)
}

@objc func pauseAR()
{
    // Stop tracking while the AR view is paused.
    trackingManager.stopTracker()
    ...
}

On the first call to addTrackerData(), pass the path of the 3dmap as the parameter.
The second call to addTrackerData() sets the actual size of the 3dmap. (Unit: m)
object_name is the file name of the 3dmap.
length is the actual distance between the two anchors you marked last when creating the 3dmap. (Please refer to the Visual SLAM Tool.)
You must enter the actual size of the target; if you do not enter the correct size, the content will not be augmented properly.
The calls must be made in the following order: startTracker(), addTrackerData(), loadTrackerData().
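
If the map name or measured length is only known at runtime, the set_length message can be assembled from variables. A minimal sketch, where mapName and measuredLength are hypothetical values used for illustration:

// Build the set_length JSON at runtime. mapName must match the
// 3dmap file name; measuredLength is the anchor distance in meters.
let mapName = "obj_190712_7"
let measuredLength = 0.53
let setLengthMessage = "{\"object_fusion\":\"set_length\",\"object_name\":\"\(mapName)\", \"length\":\(measuredLength)}"
trackingManager.addTrackerData(setLengthMessage)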

Use Tracking Information

To use the tracking information, refer to the following code.

>ObjectFusionTrackerViewController.swift

func draw(in view: MTKView) {
    ...
    
    // Update and read the latest tracking state for this frame.
    let trackingState:MasTrackingState = trackingManager.updateTrackingState()
    let result:MasTrackingResult = trackingState.getTrackingResult()
    
    let backgroundImage:MasTrackedImage = trackingState.getImage()
    let backgroundProjectionMatrix:matrix_float4x4 = cameraDevice.getBackgroundPlaneProjectionMatrix()
    
    let projectionMatrix:matrix_float4x4 = cameraDevice.getProjectionMatrix()
    
    // Draw the camera image as the background.
    if let cameraQuad = backgroundCameraQuad {
        cameraQuad.setProjectionMatrix(projectionMatrix: backgroundProjectionMatrix)
        cameraQuad.draw(commandEncoder: commandEncoder, image: backgroundImage)
    }
    
    let trackingCount:Int32 = result.getCount()
    
    // Render a textured cube on every trackable found in this frame.
    for i in stride(from: 0, to: trackingCount, by: 1) {
        let trackable:MasTrackable = result.getTrackable(i)
        let poseMatrix:matrix_float4x4 = trackable.getPose()
        
        textureCube.setProjectionMatrix(projectionMatrix: projectionMatrix)
        textureCube.setPoseMatrix(poseMatrix: poseMatrix)
        textureCube.setTranslation(x: 0.0, y: 0.0, z: -0.15)
        textureCube.setScale(x: 0.3, y: 0.3, z: 0.3)
        textureCube.draw(commandEncoder: commandEncoder)
    }
    ...
}

Set Map

By calling addTrackerData() to register the map file and then calling loadTrackerData(), the space can be tracked. To set a map, refer to the following code.

>ObjectFusionTrackerViewController.swift

func startEngine()
{
    ...
    trackingManager.addTrackerData(objectTrackerMapPath)
    trackingManager.loadTrackerData()
}

Add / Replace Map

※ You can add multiple 3dmaps to recognize one of several objects. We recommend adding up to three maps.

  1. Create a map file by referring to the Visual SLAM Tool.

  2. Copy the received map file to the desired path.

  3. Set a map.

  4. If you have an existing map file, call addTrackerData() and loadTrackerData() after calling trackingManager.removeTrackerData(), as shown in the sketch below.
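
A minimal sketch of step 4, assuming the tracker is already running; replaceMap and its parameters are illustrative names, not SDK API:

// Replace the currently loaded map with a new one.
// removeTrackerData() clears previously registered maps, then the
// new 3dmap and its actual size (in meters) are registered and loaded.
func replaceMap(path: String, mapName: String, length: Double) {
    trackingManager.removeTrackerData()
    trackingManager.addTrackerData(path)
    trackingManager.addTrackerData("{\"object_fusion\":\"set_length\",\"object_name\":\"\(mapName)\", \"length\":\(length)}")
    trackingManager.loadTrackerData()
}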