Machine Learning
Enhancing User Experience
NatDevice is built to integrate tightly with NatML, our cross-platform machine learning runtime for Unity Engine. NatML provides hardware-accelerated machine learning, bringing ML features to interactive media developers.
Computer vision models are becoming extremely popular for building interactive experiences, especially in user-generated content (think TikTok). The first step in building these experiences is streaming the camera preview.

Detecting objects from the camera stream.
The typical workflow begins by fetching model data from NatML and creating a predictor:
// Fetch model data from NatML
var modelData = await MLModelData.FromHub("@natsuite/mobilenet-v2");
// Deserialize the model
var model = modelData.Deserialize();
// Create the MobileNet predictor
var predictor = new MobileNetv2Predictor(model, modelData.labels);
Next, we start the camera preview with NatDevice:
// Discover a camera device
var query = new MediaDeviceQuery(MediaDeviceCriteria.CameraDevice);
var device = query.current as CameraDevice;
// Create a texture output
var textureOutput = new TextureOutput();
// Start the camera preview
device.StartRunning(textureOutput);
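On mobile platforms, the app must hold camera permissions before the preview can start. A minimal sketch of how that check might look; the RequestPermissions method and the PermissionStatus return type shown here are assumptions to verify against your NatDevice version's API reference:

```csharp
// Request camera permissions before starting the preview.
// NOTE: the exact method name and return type vary across NatDevice versions;
// this assumes a RequestPermissions<CameraDevice> method returning a status.
var permissionStatus = await MediaDeviceQuery.RequestPermissions<CameraDevice>();
if (permissionStatus != PermissionStatus.Authorized) {
    Debug.LogError("User did not grant camera permissions");
    return;
}
```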
Then in the Update function, we can create an MLImageFeature from the camera image and make ML predictions:
void Update () {
// Check that the preview has started
var previewTexture = textureOutput.texture;
if (!previewTexture)
return;
// Create an image feature from the preview texture
var imageFeature = new MLImageFeature(previewTexture);
// Set the image feature pre-processing config
(imageFeature.mean, imageFeature.std) = modelData.normalization;
imageFeature.aspectMode = modelData.aspectMode;
// Make a prediction
var (label, score) = predictor.Predict(imageFeature);
...
}
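When the scene is torn down, the camera and model resources should be released. A minimal sketch, assuming the script holds the device, textureOutput, and model created above as fields; StopRunning on CameraDevice and Dispose on the output and model are assumptions to verify against your NatDevice/NatML versions:

```csharp
void OnDisable () {
    // Stop the camera preview
    device.StopRunning();
    // Release the texture output
    textureOutput.Dispose();
    // Release the ML model
    model.Dispose();
}
```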