Because a user can move the recognized image, the AR avatar should follow the real-world object's position changes. Implementing such functionality can be tricky and is not provided by ARKit out of the box. You can use the image tracking that was added in ARKit 2.0, but in that case you lose the ability to track planes. So how can we support dynamic image tracking together with plane detection?

We will build an image recognition application, which means we need a device running iOS 11.3 or higher. The other requirements are basic for any ARKit app. Once you have those, you are ready to go.

Project setup

Download the starter project. It is based on Xcode's Augmented Reality App template. The project contains a PDF of the anchor image that we are going to detect on the scene and an AN-225 "Mriya" plane model that we are going to place on top of the detected image. I have also added the anchor image to a special container inside the Assets.xcassets folder: an AR Resource Group.

From this moment on, Apple handles the image recognition, so we don't need to worry about it. The system will try to find the images loaded from the AR Resources folder. Once it finds one, it adds or updates a corresponding ARImageAnchor that represents the detected image's position and orientation. The next step is to place an object on top of the freshly found anchor.

Model placement and transformation

After the image is found and its anchor is added to the scene, we can load and place the model on top of it. Model placement logic can be implemented in the renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) function, which gets called whenever an ARAnchor is found for the first time.

Let's break down what will be done in the next code snippet:

- Calculate the difference between the model and image sizes.
- Transform the model to match the heading and scale of the real-world image.
- Place the model on the scene with an appearance animation.

We will use the node's transform to get the image's coordinates and heading. To make the virtual object appear the same size as the detected image, we need the ratio between the real image's size and the model's size. For this calculation, we get the image's physical size from imageAnchor.referenceImage and the model's size from its boundingBox property. The resulting ratio is then used in the node's transform to properly scale the virtual object.

To animate the motion, we need a path for the object to follow. ARKit provides only the freshly detected image's position. Therefore, if we want the object to move rather than teleport, we have to calculate the path between the object's current position and the image anchor's current position. To solve this problem, we can use linear interpolation. Because the animation is done on a per-frame basis and we only have the initial and final values, we also need to calculate what portion of the path the object has already passed. To do this, let's store the animation start time and duration. This allows us to derive the missing coordinates from just the object's start position, its final position, and the average moving speed.
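To recap the detection setup, here is a minimal sketch of how the session can be configured to look for the images stored in the AR Resource Group. The group name "AR Resources" matches the template default; your project may use a different one.

```swift
import ARKit

final class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        // Load reference images from the "AR Resources" group in Assets.xcassets.
        if let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources",
            bundle: nil
        ) {
            // World tracking keeps plane detection available while
            // still detecting the loaded images.
            configuration.detectionImages = referenceImages
        }
        sceneView.session.run(configuration)
    }
}
```

Because this uses ARWorldTrackingConfiguration rather than ARImageTrackingConfiguration, plane detection remains available alongside image detection.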
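The placement steps above can be sketched as follows. The model file name "plane.scn" and the fade-in duration are assumptions for illustration; substitute the names used in your project.

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    // "plane.scn" stands in for the AN-225 model shipped with the starter project.
    guard let modelScene = SCNScene(named: "art.scnassets/plane.scn"),
          let modelNode = modelScene.rootNode.childNodes.first else { return }

    // 1. Calculate the difference between the model and image sizes.
    let physicalSize = imageAnchor.referenceImage.physicalSize
    let (minBound, maxBound) = modelNode.boundingBox
    let modelWidth = CGFloat(maxBound.x - minBound.x)
    let ratio = Float(physicalSize.width / modelWidth)

    // 2. Transform the model to match the scale of the real-world image.
    //    The node ARKit passes in already carries the image's position and heading.
    modelNode.scale = SCNVector3(ratio, ratio, ratio)

    // 3. Place the model on the scene with an appearance animation.
    modelNode.opacity = 0
    node.addChildNode(modelNode)
    modelNode.runAction(.fadeIn(duration: 0.5))
}
```

Adding the model as a child of the anchor's node means its initial transform matches the detected image for free; only the scale needs to be adjusted.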
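The per-frame interpolation described above can be sketched with a small helper that stores the start position, target position, start time, and duration. The type and property names here are illustrative, not from the original project.

```swift
import SceneKit

// A minimal linear-interpolation sketch: given the stored start/end positions
// and timing, compute where the object should be on the current frame.
struct MoveAnimation {
    let start: SCNVector3       // object position when the move began
    let end: SCNVector3         // freshly detected image anchor position
    let startTime: TimeInterval // when the move began
    let duration: TimeInterval  // total animation duration

    // Position for the given frame timestamp.
    func position(at time: TimeInterval) -> SCNVector3 {
        // Portion of the path already passed, clamped to [0, 1].
        let t = Float(min(max((time - startTime) / duration, 0), 1))
        return SCNVector3(
            start.x + (end.x - start.x) * t,
            start.y + (end.y - start.y) * t,
            start.z + (end.z - start.z) * t
        )
    }
}
```

Calling position(at:) from renderer(_:updateAtTime:) each frame moves the object smoothly toward the anchor instead of teleporting it; once t reaches 1 the object rests at the image's latest position.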