Blockhead is an experiment in projecting a point from the 3D ARKit environment to the 2D frame buffer, then back to 3D as a texture on an object.

Blockhead screenshot

Blockhead

There and back again, a pixel's tale...

One of the fascinating things to me is how ARKit enables a blending of visual realities (what you see and what the device shows you). I wanted to explore this in detail, and learn how a pixel can be projected from 3D space into 2D coordinates, then re-applied back in 3D space. Blockhead is an experiment to do something visually interesting with ARKit and the TrueDepth camera, and to develop some utilities that make all the coordinate space transforms easier to grasp.

The project was built from the default ARKit, single-view Xcode project. All of the 3D and 2D processing happens in the ARSCNViewDelegate.renderer(_:nodeFor:) delegate method. As the code evolves, things will move into new classes and extensions, but for now everything lives in one place to keep all the transforms in logical order.

  1. Create an SCNBox node when a face is detected
  2. Position the box node over the face node
  3. Determine a bounding box around the face
  4. Project the bounding box from local space into world space
  5. Project the bounding box from world space to screen coordinates
  6. Create a red rectangle with the screen coordinates
  7. Project the screen rectangle to the frame buffer coordinates
  8. Calculate the texture coordinates from the frame buffer coordinates
  9. Filter the clamped texture
  10. Apply the clamped texture to the box node
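The screen-to-frame-buffer leg of this pipeline (steps 7–8) can be sketched with ARKit's display transform, which maps normalized image coordinates to normalized view coordinates; inverting it goes the other way. This is a minimal sketch, not the project's actual code — the function name and parameters are illustrative, and Blockhead currently pins the orientation to landscape right.

```swift
import ARKit
import UIKit

// Sketch of steps 7–8: map a screen-space rectangle into normalized
// texture coordinates on the captured frame buffer. Hypothetical helper;
// the real project keeps this inline in the delegate method.
func textureRect(for screenRect: CGRect,
                 frame: ARFrame,
                 viewportSize: CGSize,
                 orientation: UIInterfaceOrientation) -> CGRect {
    // Normalize the screen rect into 0–1 view coordinates.
    let normalized = CGRect(x: screenRect.minX / viewportSize.width,
                            y: screenRect.minY / viewportSize.height,
                            width: screenRect.width / viewportSize.width,
                            height: screenRect.height / viewportSize.height)
    // displayTransform goes image → view, so invert it for view → image.
    let toImage = frame.displayTransform(for: orientation,
                                         viewportSize: viewportSize).inverted()
    return normalized.applying(toImage)
}
```

The resulting normalized rectangle can be handed to CoreImage (or used directly as texture coordinates), since both work in the frame buffer's own space.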

The bounding box projection uses the center and radius of the face geometry, and projects points on all three axes. When projected to 2D, the largest radius is then used to determine the 2D bounding box. This ensures that the bounding box always surrounds the face geometry regardless of orientation.
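The radius-based projection described above might look something like the following sketch. The `center` and `radius` values are assumed to come from the face geometry; the function name and signature are hypothetical, not the project's actual API.

```swift
import ARKit
import SceneKit

// Sketch: project the face center plus a radius offset along each local
// axis, then keep the largest projected 2D distance so the bounding box
// covers the face at any orientation.
func boundingRect(center: SCNVector3, radius: Float,
                  faceNode: SCNNode, in sceneView: ARSCNView) -> CGRect {
    let centerWorld = faceNode.convertPosition(center, to: nil)
    let center2D = sceneView.projectPoint(centerWorld)

    let offsets: [SCNVector3] = [
        SCNVector3(radius, 0, 0),
        SCNVector3(0, radius, 0),
        SCNVector3(0, 0, radius)
    ]

    var radius2D: CGFloat = 0
    for offset in offsets {
        let local = SCNVector3(center.x + offset.x,
                               center.y + offset.y,
                               center.z + offset.z)
        let world = faceNode.convertPosition(local, to: nil)
        let p = sceneView.projectPoint(world)
        // Largest 2D distance from the projected center wins.
        let d = hypot(CGFloat(p.x - center2D.x), CGFloat(p.y - center2D.y))
        radius2D = max(radius2D, d)
    }
    return CGRect(x: CGFloat(center2D.x) - radius2D,
                  y: CGFloat(center2D.y) - radius2D,
                  width: radius2D * 2,
                  height: radius2D * 2)
}
```

Taking the maximum over all three axes is what makes the box orientation-independent: whichever axis is most foreshortened still cannot shrink the result below the largest projected radius.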

The pixellation effect is done with CoreImage on only the pixels within the bounding box, and runs at 60 fps on an iPhone 11 Pro.
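A pass like that can be sketched with CoreImage's built-in pixellate filter: apply it to the captured image, crop the result to the bounding box, and composite it back over the original. This is an assumption about the approach, not Blockhead's actual implementation; the function and its defaults are illustrative.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: pixellate only the region inside `rect` (in image coordinates).
// CIPixellate runs over the whole image, so the crop + composite limits
// the visible effect to the bounding box.
func pixellate(_ image: CIImage, in rect: CGRect, scale: Float = 20) -> CIImage {
    let filter = CIFilter.pixellate()
    filter.inputImage = image
    filter.scale = scale
    guard let pixellated = filter.outputImage else { return image }
    return pixellated.cropped(to: rect).composited(over: image)
}
```

Because CoreImage is lazy, the crop happens before any pixels are rendered, so only the bounding-box region is actually computed — consistent with the 60 fps figure above.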

The top-left thumbnail shows the entire AR frame buffer with an overlay indicating the texture coordinates; the same overlay is drawn on screen. When the two are aligned, the 2D transforms are working as expected. There are on-screen toggles for those features, plus pixellation and face geometry visibility.

Try it yourself

Simply clone the repo and open Blockhead.xcodeproj. You will need to supply your own Developer Team ID and Bundle ID to build onto a device.

TODOs

Transform Utilities

All the transforms are written in order of operation, and some could be encapsulated into their own functions.

Device Orientation Support

Right now the app is limited to "landscape right" to simplify the frame-buffer-to-texture process, but it needs to support all orientations to correctly build the transform utilities.

Texture Rotation

When the face geometry rotates, the texture applied to the cube rotates further than expected. This is because the texture is always square-aligned with the frame buffer, and should be inversely rotated to compensate. This will require some additional calculation of the texture size, based on the largest diameter when rotated.
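One way to sketch that compensation: counter-rotate the texture by the face's roll angle, and enlarge the sampled square so the rotated texture still covers the cube face. For a square of side s rotated by θ, the axis-aligned bounding square has side s · (|cos θ| + |sin θ|). This is a proposed approach, not existing code; the function and its return shape are hypothetical.

```swift
import Foundation
import CoreGraphics

// Sketch: given the texture's side length and the face's roll angle,
// return an enlarged side length plus an inverse-rotation transform
// (about the texture center, in normalized 0–1 texture space).
func compensatedTexture(side: CGFloat,
                        roll: CGFloat) -> (side: CGFloat, transform: CGAffineTransform) {
    // Side of the axis-aligned square that contains the rotated texture.
    let expanded = side * (abs(cos(roll)) + abs(sin(roll)))
    // Counter-rotate about the center so the texture stays upright.
    let transform = CGAffineTransform(translationX: 0.5, y: 0.5)
        .rotated(by: -roll)
        .translatedBy(x: -0.5, y: -0.5)
    return (expanded, transform)
}
```

In SceneKit the returned transform could be assigned to the material's `contentsTransform` (as an SCNMatrix4) so sampling, not geometry, does the rotating.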

Rear Camera Support

Using the front camera makes it easy to see the app's various debug views, but it should also support the rear camera on iPad Pros.

Cube Model Culling

The cube is a very simple SCNBox bound to the ARKit-detected face geometry. When the head tilts up or down, the cube is not cut out where the neck and top of the head exist. I need to figure out how to add geometry, or modify the cube geometry, to hide those parts of the cube so that the cube appears to really surround the head and face. A long time ago I did some experiments in the Unreal Engine where objects had a "negative" mode in which geometry intersections would "cut out" other objects; I am not sure if there is an equivalent in SceneKit.

Visual polish

When the face is no longer recognized, the cube could smoothly disappear instead of remaining on screen.
