
Image Fusion Tracker iOS Tutorial

1. Overview
2. iOS Development
2.1 Create Instance
2.2 Start / Stop Tracker
2.3 Use Tracking Information
2.4 Set Target Image
2.5 Add / Replace Target Image
2.6 Train Target Image Instantly
2.7 Change Tracking Mode
3. Reference
3.1 API Reference
3.2 Sample


1. Overview

This tutorial covers developing the Image Fusion Tracker on the iOS platform. Refer to the Image Tracker Introduction for detailed information.

To set a target for the Image Fusion Tracker, the Target Manager must first produce a 2D map file of the target. Please refer to Target Manager and Recommended Conditions for Target Images.

Once the 2D map file is successfully created, proceed to iOS Development to continue the tutorial.

Refer to Tracker Coordinate System to better understand the 3D coordinate system of the Image Tracker.

After image recognition and initial poses are acquired through the MAXST SDK, ARKit takes over tracking.

※ To use ARKit, you must enter the actual size of the target. (See Start / Stop Tracker)

Prerequisites
Target Manager
Recommended Conditions for Target Images
Tracker Coordinate System


2. iOS Development

Start developing in Xcode using Swift. Refer to Requirements & Supports to find out which devices are supported.

The AR SDK has to be properly integrated into an iOS UIViewController. Refer to the Life Cycle documents for details.


2.1 Create Instance

ImageFusionTrackerViewController.swift

var trackingManager: MasTrackerManager = MasTrackerManager()

2.2 Start / Stop Tracker

trackingManager.isFusionSupported()
Checks whether or not your device supports Fusion.
The return value is a Bool: true means the device in use is supported; false means it is not.

trackingManager.getFusionTrackingState()
Returns the current Fusion tracking state.
The return value is an Int: -1 means tracking is not working properly, and 1 means it is working properly.
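As an illustrative sketch, the two calls above can be combined to guard tracker startup and to surface the Fusion state. This assumes only the MasTrackerManager API shown in this tutorial; the helper function names are hypothetical.

```swift
// Sketch only: assumes the MaxstARSDK framework and the
// MasTrackerManager API described in this tutorial.
import MaxstARSDK

// Start the fusion tracker only on ARKit-capable hardware.
func startFusionTrackerIfSupported(_ trackingManager: MasTrackerManager) {
    guard trackingManager.isFusionSupported() else {
        print("This device does not support Fusion.")
        return
    }
    trackingManager.start(.TRACKER_TYPE_IMAGE_FUSION)
}

// Report the current Fusion state: -1 means tracking is not
// working properly, 1 means it is working properly.
func logFusionState(_ trackingManager: MasTrackerManager) {
    let state = trackingManager.getFusionTrackingState()
    print(state == 1 ? "Fusion tracking OK" : "Fusion tracking not ready")
}
```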

To start / stop the tracker after loading the map, refer to the following code.

ImageFusionTrackerViewController.swift


func startEngine() {
    self.trackingManager.start(.TRACKER_TYPE_IMAGE_FUSION)
    self.trackingManager.setTrackingOption(.NORMAL_TRACKING)
    self.trackingManager.loadTrackerData()
}

@objc func resumeAR()
{
    ...
    trackingManager.start(.TRACKER_TYPE_IMAGE_FUSION)
}

@objc func pauseAR()
{
    trackingManager.stopTracker()
    ...
}

When creating a 2dmap, the actual size must be entered correctly (unit: m).
You must enter the actual size of the target; if you do not, the content will not be augmented properly.
The calls must run in the following order: startTracker(), addTrackerData(), loadTrackerData().


2.3 Use Tracking Information

In the folder where the SDK is installed, go to the 'data > SDKSample > Original > ImageTarget' folder, where you will find the sample target images. Print the images.

If you use the sample code, the following content will be augmented for each image.

  • Blocks.jpg: The alpha video is augmented.
  • Lego.jpg: The normal video is augmented.
  • Glacier.jpg: The cube with the texture is augmented.

To apply tracking results to augmented objects, refer to the following code.

ImageFusionTrackerViewController.swift

func draw(in view: MTKView) {
        ...
            let trackingState: MasTrackingState = trackingManager.updateTrackingState()
            let result: MasTrackingResult = trackingState.getTrackingResult()

            let backgroundImage: MasTrackedImage = trackingState.getImage()
            let backgroundProjectionMatrix: matrix_float4x4 = cameraDevice.getBackgroundPlaneProjectionMatrix()

            let projectionMatrix: matrix_float4x4 = cameraDevice.getProjectionMatrix()
        
            if let cameraQuad = backgroundCameraQuad {
                cameraQuad.setProjectionMatrix(projectionMatrix: backgroundProjectionMatrix)
                cameraQuad.draw(commandEncoder: commandEncoder, image: backgroundImage)
            }
            
            let trackingCount:Int32 = result.getCount()
            
            if trackingCount > 0 {
                for i in stride(from: 0, to: trackingCount, by: 1) {
                    let trackable:MasTrackable = result.getTrackable(i)
                    let poseMatrix:matrix_float4x4 = trackable.getPose()
                    
                    if trackable.getName() == "Lego" {
                        if videoCaptureController.getState() == MEDIA_STATE.PLAYING {
                            videoCaptureController.play()
                            videoCaptureController.update()
                            
                            videoPanelRenderer.setProjectionMatrix(projectionMatrix: projectionMatrix)
                            videoPanelRenderer.setPoseMatrix(poseMatrix: poseMatrix)
                            videoPanelRenderer.setTranslation(x: 0.0, y: 0.0, z: 0.0)
                            videoPanelRenderer.setScale(x: 0.26, y: 0.15, z: 1.0)
                            videoPanelRenderer.draw(commandEncoder: commandEncoder, videoTextureId: videoCaptureController.getMetalTextureId())
                        }
                    } else if trackable.getName() == "Blocks" {
                        if chromakeyVideoCaptureController.getState() == MEDIA_STATE.PLAYING {
                            chromakeyVideoCaptureController.play()
                            chromakeyVideoCaptureController.update()
                            
                            chromakeyVideoPanelRenderer.setProjectionMatrix(projectionMatrix: projectionMatrix)
                            chromakeyVideoPanelRenderer.setPoseMatrix(poseMatrix: poseMatrix)
                            chromakeyVideoPanelRenderer.setTranslation(x: 0.0, y: 0.0, z: 0.0)
                            chromakeyVideoPanelRenderer.setScale(x: 0.26, y: 0.18, z: 1.0)
                            chromakeyVideoPanelRenderer.draw(commandEncoder: commandEncoder, videoTextureId: chromakeyVideoCaptureController.getMetalTextureId())
                        }
                    } else if trackable.getName() == "Glacier" {
                        textureCube!.setProjectionMatrix(projectionMatrix: projectionMatrix)
                        textureCube!.setPoseMatrix(poseMatrix: poseMatrix)
                        textureCube!.setTranslation(x: 0.0, y: 0.0, z: -0.025)
                        textureCube!.setScale(x: 0.15, y: 0.15, z: 0.05)
                        textureCube!.draw(commandEncoder: commandEncoder)
                    } else {
                        colorCube!.setProjectionMatrix(projectionMatrix: projectionMatrix)
                        colorCube!.setPoseMatrix(poseMatrix: poseMatrix)
                        colorCube!.setTranslation(x: 0.0, y: 0.0, z: -0.075)
                        colorCube!.setScale(x: 0.15, y: 0.15, z: 0.15)
                        colorCube!.draw(commandEncoder: commandEncoder)
                    }
                }
            } else {
                videoCaptureController.pause()
                chromakeyVideoCaptureController.pause()
            }
        ...
    }

2.4 Set Target Image

By calling addTrackerData() to register the map file and then calling loadTrackerData(), the target image can be tracked. To set a target image, refer to the following code.

ImageFusionTrackerViewController.swift

func startEngine()
{
    ...
    let blocksTrackerMapPath:String = Bundle.main.path(forResource: "Blocks", ofType: "2dmap", inDirectory: "data/SDKSample")!
    let glacierTrackerMapPath:String = Bundle.main.path(forResource: "Glacier", ofType: "2dmap", inDirectory: "data/SDKSample")!
    let legoTrackerMapPath:String = Bundle.main.path(forResource: "Lego", ofType: "2dmap", inDirectory: "data/SDKSample")!
        
    trackingManager.start(.TRACKER_TYPE_IMAGE_FUSION)
    trackingManager.setTrackingOption(.NORMAL_TRACKING)
    trackingManager.addTrackerData(blocksTrackerMapPath)
    trackingManager.addTrackerData(glacierTrackerMapPath)
    trackingManager.addTrackerData(legoTrackerMapPath)
        
    trackingManager.loadTrackerData()
}

2.5 Add / Replace Target Image

  1. Create a map file by referring to Documentation > Tools > Target Manager.

  2. Download the file you created.

  3. Unzip the downloaded file and copy it to the desired path.

  4. Set a target image.
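The steps above can be sketched as follows. The file name "My2DTarget.2dmap" and the bundled source location are hypothetical; only addTrackerData() and loadTrackerData() from this tutorial are assumed.

```swift
// Sketch only: copies an unzipped 2dmap into the app's Documents
// directory and registers it with the tracker. "My2DTarget.2dmap"
// and the bundled source path are hypothetical example names.
import Foundation
import MaxstARSDK

func installAndLoadTarget(_ trackingManager: MasTrackerManager) throws {
    let fileManager = FileManager.default
    let documents = try fileManager.url(for: .documentDirectory,
                                        in: .userDomainMask,
                                        appropriateFor: nil,
                                        create: true)
    let destination = documents.appendingPathComponent("My2DTarget.2dmap")

    // Copy the downloaded, unzipped map file to the desired path (step 3).
    if let source = Bundle.main.url(forResource: "My2DTarget", withExtension: "2dmap"),
       !fileManager.fileExists(atPath: destination.path) {
        try fileManager.copyItem(at: source, to: destination)
    }

    // Set it as a target image (step 4).
    trackingManager.addTrackerData(destination.path)
    trackingManager.loadTrackerData()
}
```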


2.6 Train Target Image Instantly

If you want to use a raw image file as an image target without an offline training process via Target Manager, enter a JSON object as the first parameter of addTrackerData().

A sample JSON format is shown below.

{
    "image":"add_image",
    "image_path":"ImageTarget/Blocks.png",
    "image_width":0.26
}

The "image":"add_image" pair must come first. The value of "image_path" is the image path, and the value of "image_width" is the real width (in meters) of the image target.
A sample code is shown below.

trackingManager.addTrackerData("{\"image\":\"add_image\",\"image_path\":\"ImageTarget/Blocks.png\",\"image_width\":0.26}");

The value of "image_path" must be an absolute path to the image file. Instant training permits only jpg and png formats. The image width in pixels should be more than 320; 640 is best.

※ Instant training of an image takes about twice as long as loading a 2dmap.
※ You must call loadTrackerData() after calling addTrackerData().
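As a sketch (assuming only the addTrackerData() and loadTrackerData() calls from this tutorial), the JSON string can be assembled in Swift. Building it by hand keeps the "image":"add_image" pair first; the image path and width parameters are example values.

```swift
// Sketch only: builds the instant-training JSON by hand so that
// the "image":"add_image" pair stays first, then registers the
// raw image with the tracker. Parameter values are examples.
import MaxstARSDK

func addInstantTarget(_ trackingManager: MasTrackerManager,
                      imagePath: String,
                      widthInMeters: Double) {
    let json = "{\"image\":\"add_image\"," +
               "\"image_path\":\"\(imagePath)\"," +
               "\"image_width\":\(widthInMeters)}"
    trackingManager.addTrackerData(json)
    trackingManager.loadTrackerData()   // must follow addTrackerData()
}
```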


3. Reference

These are additional references for developing the Image Fusion Tracker.


3.1 API Reference

The following documents explain the classes and functions used to run the Image Fusion Tracker.

MaxstAR Class

TrackerManager Class

CameraDevice Class


3.2 Sample

For information on building and running the Image Fusion Tracker sample, refer to Sample.
