Although the beginnings of AR technology go back to the early 1990s, the level of interest it has gained in recent years has been huge.
Consider the huge popularity of AR games such as Pokémon Go, or the massive number of pictures taken with Snapchat's funny filters and lenses.
This popularity has not gone unnoticed by major technology companies around the world.
Snap, Snapchat's parent company, bought the Israeli AR developer Cimagine for around 30-40 million dollars at the end of 2016. On the other side of the globe, tech giant Google has invested around 500 million dollars in Magic Leap, a maker of mixed reality glasses.
The increase in interest in augmented reality is obviously related to the better accessibility of the required equipment: in most cases, all that is needed is a decent smartphone. The underlying technologies have also advanced considerably in recent years.
Different types of AR
AR experiences can be divided into two main types: marker-based and markerless. It is the latter that currently captures the imagination of creators.
But to get a good grasp of both types of AR, let's do a quick breakdown.
[table id=4 /]
Marker based AR
The main idea behind marker-based AR is to give indicators of where certain things are in a given space. Traditional AR technology uses a marker to determine where to place digital content onto the live camera feed.
By recognizing the marker's specific image pattern, the system knows how the object should be oriented, what its scale should be, and how it should be rotated to match the surface. This approach is fast and accurate, but it requires the marker to remain visible on the screen (within the device's view range).
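To illustrate the idea, here is a simplified, hypothetical sketch (plain NumPy; real AR libraries, such as OpenCV's ArUco module, recover a full 3D pose from the marker) that derives the scale and in-plane rotation of a square marker from its four detected corner points:

```python
import numpy as np

def marker_pose_2d(corners):
    """Estimate position, scale and in-plane rotation of a square marker
    from its four detected corners (top-left, top-right, bottom-right,
    bottom-left), given in image pixel coordinates."""
    c = np.asarray(corners, dtype=float)
    top_edge = c[1] - c[0]                  # vector along the marker's top edge
    scale = np.linalg.norm(top_edge)        # edge length in pixels
    angle = np.degrees(np.arctan2(top_edge[1], top_edge[0]))  # rotation in degrees
    center = c.mean(axis=0)                 # where to anchor the digital content
    return center, scale, angle

# An axis-aligned 100x100 marker whose top-left corner sits at (50, 50):
center, scale, angle = marker_pose_2d([(50, 50), (150, 50), (150, 150), (50, 150)])
print(center, scale, angle)  # → [100. 100.] 100.0 0.0
```

Once the center, scale and rotation are known for every frame, the digital object can be drawn so that it appears glued to the marker.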
Yet, there are some obvious downsides to this approach. The first is an inherent restriction in scope and scale: the experience only works in places where a marker already exists. Additionally, every marker needs to be properly set up before use.
Due to the limitations of fixed physical markers, developers started looking for solutions that would do away with them entirely, which brings us to markerless AR.
At the very beginning, a combination of technologies such as GPS, the accelerometer and a digital compass was used to help determine the location and orientation of the device.
However, these solutions were not very accurate, so work began on an additional image-analysis technology called SLAM (Simultaneous Localization and Mapping).
This technology is based on scanning the environment and creating maps used for placing digital objects. Thanks to it, placed objects stay where they are: even when they leave the device's view range, they do not move or bounce as the user walks around.
Think of it as though you took a picture of a room from as many angles as possible, with some extra grid lines attached, so you know what is where and to what size.
So, based on where you are, you just have to take a quick look at your surroundings and match it against the stored image to know your exact location within that space.
Throw in the processing power of modern hardware and you can have this done super fast.
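The "match against the stored map" step above can be sketched in a heavily simplified, hypothetical form (plain NumPy, nothing like a production SLAM pipeline): each feature currently seen by the camera is matched to its nearest stored map feature, and the matched landmarks give a rough estimate of which part of the map the device is looking at.

```python
import numpy as np

def relocalize(current_descriptors, map_descriptors, map_positions):
    """Toy relocalization: match each currently visible feature descriptor
    to its nearest neighbour in the stored map (L2 distance), and return
    the centroid of the matched landmarks as a rough "where am I looking"
    estimate. Real SLAM systems solve a full 6-DoF pose instead."""
    matched = []
    for d in np.asarray(current_descriptors, dtype=float):
        dists = np.linalg.norm(map_descriptors - d, axis=1)  # distance to every map feature
        matched.append(map_positions[np.argmin(dists)])      # best-matching landmark
    return np.mean(matched, axis=0)

# A tiny hypothetical map: three landmarks with 2-D descriptors and positions
map_desc = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
map_pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])

# Two slightly noisy observations of the first two landmarks
estimate = relocalize([[0.9, 0.1], [0.1, 0.9]], map_desc, map_pos)
print(estimate)  # → [5. 0.]
```

The real systems do this with thousands of features per frame and robust pose estimation, which is exactly where the processing power of modern hardware comes in.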
Best Markerless AR SDKs with SLAM support
Let’s talk about the building blocks of AR technologies, also known as SDKs (Software Development Kits). As a quick intro, an SDK is a set of tools put together to ease the creation of an application.
In this case we are looking at SDKs targeted at AR technologies. As you can guess, each of the two most popular mobile systems has its own AR SDK.
Apple presented its solution, called ARKit, in 2017 on the occasion of the premiere of iOS 11. Thanks to it, all iPhone and iPad users with at least an A9 CPU in their devices could enjoy augmented reality.
Thanks to the continuous development of the SDK (version 3.5 is currently available) as well as hardware innovations (the TrueDepth camera, the LiDAR scanner), Apple devices are considered the best for AR.
This doesn’t mean that Android users have been left out in the cold.
Google has more experience in this field than Apple. Earlier projects such as Google Glass and Tango certainly contributed to the creation of its AR SDK, called ARCore, which was made available to developers at the beginning of 2018.
Their tool is praised for its surface mapping feature, great object positioning and anchoring (to place digital objects accurately) plus excellent motion tracking (determines the device’s position and orientation in regard to its movement).
At the beginning, its biggest problem was the limited range of devices that supported ARCore (Android 7.0 or later was required), but today, thanks to the development of smartphone technology, this should no longer be a problem.
The popularity of AR and access to the native SDKs has also enabled other companies to build their own AR tools, known as frameworks.
The most popular SDK of this type is Wikitude, a cross-platform tool that offers solutions for Android, iOS and smart glasses.
It features image and object recognition with tracking, its own SLAM technology, geolocation, and a SMART system that chooses the best way to implement instant tracking for the device it is running on.
It also supports other platforms and frameworks, including the Unity engine.
MxT Tracking from Marxent is another example of an SDK solution. The creators of this framework boast of almost instant initialization, compared to ARKit and ARCore, which need a few seconds to establish a tracking plane and render a 3D object into the scene.
Another good example is the work of the Kudan company. The studio is working on Artificial Perception technology, which – thanks to SLAM algorithms – is meant to significantly expand the capabilities of modern devices in the fields of virtual reality and robotics.
Their solution, called KudanSLAM, is very flexible in terms of camera setup and offers high speed with low CPU consumption (less than 5% on mobile devices). Object positioning in Kudan’s AR apps is highly precise and accurate, with no drifting or trembling.
An alternative to the paid frameworks is the open-source OpenCV library, which contains tools for image analysis. If you need to find a specific shape in an image (e.g. a human face) or isolate an area of a specific color, it can be a good tool for the job.
In one of our projects, we used a tool built with OpenCV to develop a simple game in which the player had to swallow chocolates appearing on the screen; the position of the user’s mouth was determined from the live camera feed.
In addition, the application lets you add funny lenses to the user’s image (similar to Snapchat), such as glasses, a beard or a hat.
The future of AR markerless technology
Of course, markerless technology is not perfect and requires more powerful devices than marker-based solutions. A lot also depends on the lighting conditions and the scanned surface itself: a plain white wall, for example, is much harder to scan than a floor with a visible texture.
However, we must remember that it is still being developed, and in the future, apart from faster surface mapping, it will likely be able to extract depth from the image (so that objects in the foreground occlude objects further away).
Some steps in this direction can already be seen, e.g. during NBA broadcasts a virtual shot clock measuring the action time is displayed on the court. However, more than once I have seen the clock rendered incorrectly (appearing on the players’ uniforms) and hidden by the broadcasters.
Image with digital shot clock (Photo illustration by State.Image via TNT)
Another example is the People Occlusion technology available in the latest version of ARKit, 3.5, which is already able to amaze with what it can do.
All this means that the next breakthrough is closer than it seems, and the future will definitely bring us many interesting, useful and innovative applications.
Senior AR/VR Developer and Lead Programmer at 4Experience. Having started his professional career in 2010, he's completed well over 30 major software projects.