What you'll learn

This codelab will guide you step by step to create a "Playground" type application. This application shows a gallery of objects that can be placed, or anchored, in the augmented world. You can then take a photo of the AR scene and save it to Photos.

Sceneform provides a high-level API for rendering 3D models using Java, which makes creating AR experiences easier.

This codelab introduces some of the Sceneform API and walks you through:


The codelab computer should have everything you need:

Later on we'll also copy some 3D assets from the sample project on GitHub. You can download these sample assets from the GitHub repository. The downloaded zip file includes the completed project for reference.

For more information about getting started, see the Sceneform documentation.

Now that you have everything you need, let's get started!

In Android Studio, create a new project targeting API level 24 (Android 7.0) or later:

Phone and Tablet: API 24, Android 7.0 (Nougat)

Basic Activity - this template includes a floating action button. We'll use this button to take a photo.

Use the default name "MainActivity" for the activity.

In the gradle script we need to enable Java 8 and add the dependencies for Sceneform.

Enable Java 8

Sceneform uses language constructs from Java 8. For projects that have a min API level less than 26, you need to explicitly add support for Java 8.

In the android {} section of app/build.gradle add:

compileOptions {
   sourceCompatibility JavaVersion.VERSION_1_8
   targetCompatibility JavaVersion.VERSION_1_8
}
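Enabling Java 8 matters because Sceneform's listener APIs rely heavily on lambda expressions. As a quick illustration of the lambda syntax used throughout this codelab (a plain-Java sketch, not Sceneform code, with an illustrative name):

```java
import java.util.function.Function;

public class LambdaDemo {
    public static void main(String[] args) {
        // A Java 8 lambda: the same syntax we'll use later for
        // Sceneform's update and tap listeners.
        Function<String, String> formatter = message -> "Got: " + message;
        System.out.println(formatter.apply("frame update"));
    }
}
```

If your min API level is 26 or higher, the toolchain enables this support automatically.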

Add the ARCore and Sceneform dependencies

In the app/build.gradle file add dependencies for the Sceneform API and the Sceneform UX elements.

In the dependencies {} section add:

implementation "com.google.ar.sceneform.ux:sceneform-ux:1.8.0"

Press the "Sync now" link to update the project.

That does it for the project setup. Now let's add the Sceneform ArFragment to the view layout.

There are several aspects of making a great AR experience that involve interacting with multiple views. This includes things such as displaying a graphical indicator that the user should move the phone in order for ARCore to detect planes and handling gestures for moving and scaling objects. To do this, you'll add ArFragment to the app/res/layout/content_main.xml file.

Open content_main.xml and let's add the fragment and the view. Here's the text of the layout file, but feel free to use the graphical view if that is more comfortable for you.

Replace the existing TextView element with the fragment:

<fragment android:name="com.google.ar.sceneform.ux.ArFragment"
   android:id="@+id/sceneform_fragment"
   android:layout_width="match_parent"
   android:layout_height="match_parent" />

Add ARCore AndroidManifest entries

ARCore requires specific entries in the AndroidManifest. These declare that the application uses the camera and AR features, and designate the application as requiring ARCore to run. The last metadata entry is used by the Play Store to filter out the application for users whose devices don't support ARCore.

Open app/manifests/AndroidManifest.xml and in the <manifest> section add these elements:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />

Then add the metadata in the <application> section:

<meta-data android:name="com.google.ar.core" android:value="required" />

Add the ArFragment field

We'll be referencing the fragment a lot as we work with our AR scene. To make things easier, open MainActivity and add a member variable at the top of the class:

private ArFragment fragment;

Initialize it at the bottom of the onCreate() method. Since we're using a fragment, we need to use the fragment manager to find the fragment:

fragment = (ArFragment)
        getSupportFragmentManager().findFragmentById(R.id.sceneform_fragment);

Great! Now we have the minimum amount of code to start using ARCore and make sure it works. Next, let's try it out!

The ArFragment we just added handles the permissions, ARCore session creation, and the plane discovery UI.

The first thing you should see is the request to use the camera.

Then you should see the indicator that you need to move the phone in order to detect a plane. Planes are detected on flat surfaces that have some "texture" or pattern (as opposed to a pure white desktop).

The floor is usually a good place to find planes! Once a plane is detected, a white dotted pattern will highlight where the plane is located.

If there is an error, you'll want to resolve it before continuing with the codelab.

This application presents a set of models that can be placed in the AR scene. We could use drag and drop to select one of the models and drag it onto the view. However, what seems to work best is to simply look at where you want to place the model and tap it. This keeps your fingers out of the way so you can see better, and makes it less cumbersome than dragging to hold the phone in the right place.

To do this, we'll add a pointer in the form of an overlay. The overlay will always be centered on the screen, and when we take a picture of the scene later, the pointer will not be in the image.

The View overlay needs a Drawable, so select File > New > Java Class to create a new class named PointerDrawable. It extends Drawable, so set the superclass to android.graphics.drawable.Drawable.

Click on the red "light bulb" in the editor and select "Implement Methods". This will generate the placeholder methods.

Our pointer will have two states: enabled, meaning an object can be dropped on the scene at that location, and disabled, meaning it can't.

We need 2 member variables:

Add these fields at the top of the class.

private final Paint paint = new Paint();
private boolean enabled;

Add a getter and setter for enabled

public boolean isEnabled() {
  return enabled;
}

public void setEnabled(boolean enabled) {
  this.enabled = enabled;
}
Now implement the draw method. We'll draw a circle in green when enabled, and an X in gray when disabled.

@Override
public void draw(@NonNull Canvas canvas) {
  float cx = canvas.getWidth() / 2;
  float cy = canvas.getHeight() / 2;
  if (enabled) {
    paint.setColor(Color.GREEN);
    canvas.drawCircle(cx, cy, 10, paint);
  } else {
    paint.setColor(Color.GRAY);
    canvas.drawText("X", cx, cy, paint);
  }
}

That's sufficient for our purposes; the other generated methods can be left as stubs.

Go back to MainActivity and let's initialize the pointer, adding the code to enable and disable it based on the tracking state from ARCore and whether the user is looking at a plane detected by ARCore.

Add 3 member variables to MainActivity:

 private PointerDrawable pointer = new PointerDrawable();
 private boolean isTracking;
 private boolean isHitting;

At the bottom of onCreate() add a listener to the ArSceneView scene which will get called before processing every frame. In this listener we can make ARCore API calls and update the status of the pointer.

We'll use a lambda to first call the fragment's onUpdate method, then we'll call a new method in MainActivity called onUpdate.

fragment.getArSceneView().getScene().addOnUpdateListener(frameTime -> {
  fragment.onUpdate(frameTime);
  onUpdate();
});

Implement onUpdate().

First, update the tracking state. If ARCore is not tracking, remove the pointer until tracking is restored.

Next, if ARCore is tracking, check for the gaze of the user hitting a plane detected by ARCore and enable the pointer accordingly.

private void onUpdate() {
  boolean trackingChanged = updateTracking();
  View contentView = findViewById(android.R.id.content);
  if (trackingChanged) {
    if (isTracking) {
      contentView.getOverlay().add(pointer);
    } else {
      contentView.getOverlay().remove(pointer);
    }
    contentView.invalidate();
  }

  if (isTracking) {
    boolean hitTestChanged = updateHitTest();
    if (hitTestChanged) {
      pointer.setEnabled(isHitting);
      contentView.invalidate();
    }
  }
}

updateTracking() uses ARCore's camera state and returns true if the tracking state has changed since last call.

private boolean updateTracking() {
  Frame frame = fragment.getArSceneView().getArFrame();
  boolean wasTracking = isTracking;
  isTracking = frame != null &&
               frame.getCamera().getTrackingState() == TrackingState.TRACKING;
  return isTracking != wasTracking;
}

updateHitTest() also uses ARCore to call Frame.hitTest(). As soon as any hit is detected, the method returns. We also need the center of the screen for this method, so add a helper method getScreenCenter() as well.

private boolean updateHitTest() {
  Frame frame = fragment.getArSceneView().getArFrame();
  android.graphics.Point pt = getScreenCenter();
  List<HitResult> hits;
  boolean wasHitting = isHitting;
  isHitting = false;
  if (frame != null) {
    hits = frame.hitTest(pt.x, pt.y);
    for (HitResult hit : hits) {
      Trackable trackable = hit.getTrackable();
      if (trackable instanceof Plane &&
              ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
        isHitting = true;
        break;
      }
    }
  }
  return wasHitting != isHitting;
}

private android.graphics.Point getScreenCenter() {
  View vw = findViewById(android.R.id.content);
  return new android.graphics.Point(vw.getWidth() / 2, vw.getHeight() / 2);
}

Great! Now that we have the pointer implemented, let's try it out and make sure it works.

Now let's run again. As the application starts, the pointer should not be visible until ARCore starts tracking.

As you move around the phone, a plane will be detected, and you should see the pointer be enabled and disabled as the pointer moves on and off the plane.

Now, let's import the 3D models and create some items to add to the gallery.

Sceneform has an Android Studio plugin which assists in importing 3D models, adding them to the gradle file, and providing a preview of the model. This plugin is already installed for you. If you want to use it in your own projects, you can find it in the "Browse Repository" section for plugins in the Android Studio Preferences.

Now, let's use the plugin to import some models!

Android Studio 3.1 supports a new folder type named "sampledata". This folder is used for design-time sample data. For our purposes, we'll keep the source 3D assets in this directory. Files in sampledata are not added to the APK, but are available in the editor. To make Sceneform-compatible models, we'll add conversion tasks to Gradle, which place the converted assets in the assets directory so they are available at runtime.
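For reference, the conversion task the Sceneform plugin adds to app/build.gradle looks roughly like this (the paths shown are for the andy_dance model imported later; treat this as a sketch, since the plugin generates the exact entry for you):

```groovy
// Added automatically by the Sceneform plugin when a model is imported.
sceneform.asset('sampledata/models/andy_dance.fbx', // source asset
        'default',                                  // material
        'sampledata/models/andy_dance.sfa',         // .sfa output file path
        'src/main/assets/andy_dance')               // .sfb output path, in assets
```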

Create the sample data directory by right-clicking app in the project view and selecting New > Sample Data Directory.

Download the sampledata.zip resources from GitHub. This contains 2 directories:

There are 4 models to use for the codelab to build a "table-top" scale scene (as opposed to a "room-scale" scene): andy_dance.fbx, Cabin.obj, House.obj, and igloo.obj. These models should now be in app/sampledata/models (if they are not, go back one step and copy the models). Each model has multiple files associated with it.

All these models are authored by Google and covered by the CC-BY license. You can find them and lots of other models at the Poly website.

We'll use the Sceneform plugin to add the conversion tasks to the build.gradle file and preview the models.

Select app/sampledata/models/andy_dance.fbx, right-click to open the context menu, and pick New > Sceneform Asset. This opens the import wizard page with everything initialized to reasonable values.

Hit "+" to add Animation Files and select sampledata/models/andy_dance.fbx.

Press Finish to start the importing of the model.

The output of this import process is a text '.sfa' file which is compiled into a binary '.sfb'. The .sfa file contains a JSON-like specification for the asset, and the parameters there may be adjusted by hand. The '.sfb' file is the compiled binary resource for use in your application. The Sceneform plugin allows the '.sfb' file to be opened and adjusted in the editor, but note that any edits made are saved to the '.sfa' file and synced into the '.sfb' binary asset.

When the import completes, the plugin will open the '.sfb' file in the editor. The Sceneform viewer is also opened, showing the imported model:

A common parameter to modify is 'scale'. Add a line with scale: 0.25, in the 'model' object. When done it should look like this:

model: {
  scale: 0.25,
  attributes: [
    ...
  ],
  collision: {},
  file: 'sampledata/models/andy_dance.fbx',
  name: 'andy_dance',
  recenter: 'root',
}

Convert Cabin.obj

Now convert the other models using the same process for app/sampledata/models/Cabin.obj.

You might notice that the Cabin model is a lot bigger than the andy model. This is a common situation in dealing with 3D models. Fortunately, we can adjust the size when converting the model by editing the Cabin.sfa file.

Find the line that has scale: and change the value to 0.0005

Save the file and when it rebuilds, the cabin will appear smaller.

Convert House.obj

The model file is app/sampledata/models/House.obj. It does not need any adjustments.

Convert igloo.obj

The model file is app/sampledata/models/igloo.obj. It looks a little big as well, so scale it to 0.25.

Now we have our 3D assets, let's add them to the gallery.

Now we'll add a simple list of models we can add to our augmented world. A RecyclerView would be great for showing a scrolling list of items, but that's a topic for another day; we'll just use a LinearLayout.

Add the LinearLayout to the layout file

Open up app/res/layout/content_main.xml and directly below the <fragment> element, add the LinearLayout.

Set the attributes of the LinearLayout:

Add the layout constraints to keep it at the bottom of the screen.
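Put together, the element looks something like the following sketch. The id gallery_layout matches the lookup used later in initializeGallery(); the constraint values are assumptions based on the finished sample, so adjust them to taste:

```xml
<LinearLayout
    android:id="@+id/gallery_layout"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    android:orientation="horizontal"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintTop_toBottomOf="@+id/sceneform_fragment"
    app:layout_constraintVertical_weight="1" />
```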


Adjust the fragment layout

Change the fragment layout so the fragment occupies the top part of the screen:
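A sketch of the adjusted fragment element follows. The vertical weight of 9 matches the value used later in this codelab; the specific constraint attributes are assumptions about the finished layout:

```xml
<fragment android:name="com.google.ar.sceneform.ux.ArFragment"
    android:id="@+id/sceneform_fragment"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    app:layout_constraintTop_toTopOf="parent"
    app:layout_constraintBottom_toTopOf="@+id/gallery_layout"
    app:layout_constraintVertical_weight="9" />
```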

When it is updated, it will look like this:


Now we'll build the gallery. Each item in the gallery has a name, a uri to the sfb model in the assets directory, and a resource id of the thumbnail image of the model.

In MainActivity add a method at the end of the class named initializeGallery().

In there, first get the gallery layout view, then create the items, and add them to the gallery.

For each item, create an ImageView for the thumbnail, and add an onClickListener to handle adding the model to the scene.

private void initializeGallery() {
  LinearLayout gallery = findViewById(R.id.gallery_layout);

  ImageView andy = new ImageView(this);
  andy.setImageResource(R.drawable.droid_thumb);
  andy.setContentDescription("andy");
  andy.setOnClickListener(view -> {addObject(Uri.parse("andy_dance.sfb"));});
  gallery.addView(andy);

  ImageView cabin = new ImageView(this);
  cabin.setImageResource(R.drawable.cabin_thumb);
  cabin.setContentDescription("cabin");
  cabin.setOnClickListener(view -> {addObject(Uri.parse("Cabin.sfb"));});
  gallery.addView(cabin);

  ImageView house = new ImageView(this);
  house.setImageResource(R.drawable.house_thumb);
  house.setContentDescription("house");
  house.setOnClickListener(view -> {addObject(Uri.parse("House.sfb"));});
  gallery.addView(house);

  ImageView igloo = new ImageView(this);
  igloo.setImageResource(R.drawable.igloo_thumb);
  igloo.setContentDescription("igloo");
  igloo.setOnClickListener(view -> {addObject(Uri.parse("igloo.sfb"));});
  gallery.addView(igloo);
}

Add the addObject method

This method is called when one of the items in the gallery is clicked. It performs a hit test to determine where in 3D world space the object should be placed, then starts loading the model with modelLoader.loadModel(), which ultimately places the object.

private void addObject(Uri model) {
  Frame frame = fragment.getArSceneView().getArFrame();
  android.graphics.Point pt = getScreenCenter();
  List<HitResult> hits;
  if (frame != null) {
    hits = frame.hitTest(pt.x, pt.y);
    for (HitResult hit : hits) {
      Trackable trackable = hit.getTrackable();
      if (trackable instanceof Plane &&
              ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
        modelLoader.loadModel(hit.createAnchor(), model);
        break;
      }
    }
  }
}


Add a ModelLoader class

Create a ModelLoader class to start the asynchronous loading of the 3D model using the ModelRenderable builder. The activity class can be replaced or destroyed at any point, even while a model is loading. A weak reference is used to ensure the ModelLoader respects the Activity lifecycle. This codelab uses small models, but larger models could take substantially longer to load.
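To see why the weak reference matters, here is a minimal plain-Java sketch of the guard pattern, outside of Android. The names are illustrative; clear() simulates the activity being garbage collected mid-load:

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        String owner = new String("MainActivity");
        WeakReference<String> ref = new WeakReference<>(owner);

        // While the owner is alive, get() returns it and work can proceed.
        System.out.println(ref.get() != null);  // true

        // Once the referent is gone (simulated here with clear()),
        // get() returns null and the loader must bail out, exactly
        // like the null checks in ModelLoader below.
        ref.clear();
        System.out.println(ref.get() == null);  // true
    }
}
```

Because the loader only ever holds a weak reference, a destroyed activity can be collected even if a model load is still in flight.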

public class ModelLoader {
    private final WeakReference<MainActivity> owner;
    private static final String TAG = "ModelLoader";

    ModelLoader(WeakReference<MainActivity> owner) {
        this.owner = owner;
    }

    void loadModel(Anchor anchor, Uri uri) {
        if (owner.get() == null) {
            Log.d(TAG, "Activity is null.  Cannot load model.");
            return;
        }
        ModelRenderable.builder()
                .setSource(owner.get(), uri)
                .build()
                .handle((renderable, throwable) -> {
                    MainActivity activity = owner.get();
                    if (activity == null) {
                        return null;
                    } else if (throwable != null) {
                        activity.onException(throwable);
                    } else {
                        activity.addNodeToScene(anchor, renderable);
                    }
                    return null;
                });
    }
}

Additionally, in MainActivity add an instance of the ModelLoader:

 private ModelLoader modelLoader;

In onCreate initialize the modelLoader:

modelLoader = new ModelLoader(new WeakReference<>(this));

Add addNodeToScene

addNodeToScene() builds two nodes and attaches them to the ArSceneView's scene object.

The first node is of type AnchorNode. Anchor nodes are positioned based on the pose of an ARCore Anchor. As a result, they stay in the same place relative to the real world.

The second node is a TransformableNode. We could use the base class, Node, for placing the objects, but Node does not have the interaction functionality to handle moving, scaling, and rotating based on user gestures.

Once the nodes are created and connected to each other, connect the AnchorNode to the Scene, and select the node so it has the focus for interactions.

public void addNodeToScene(Anchor anchor, ModelRenderable renderable) {
  AnchorNode anchorNode = new AnchorNode(anchor);
  TransformableNode node = new TransformableNode(fragment.getTransformationSystem());
  node.setRenderable(renderable);
  node.setParent(anchorNode);
  fragment.getArSceneView().getScene().addChild(anchorNode);
  node.select();
}

Add onException

If the network is down, loading a model remotely will fail. Add an onException method to surface the error in a dialog:

public void onException(Throwable throwable) {
    AlertDialog.Builder builder = new AlertDialog.Builder(this);
    builder.setMessage(throwable.getMessage())
            .setTitle("Codelab error!");
    AlertDialog dialog = builder.create();
    dialog.show();
}

Great work! Now we just need to call initializeGallery() from onCreate(), at the end of the method:

initializeGallery();

Now when you start the app, the pointer will appear when ARCore starts tracking.

When a plane is detected, and you're looking at it, the pointer will turn green.

Tap one of the items and it will be created at that point!

You can then move the object around on the plane, scale it, or rotate it.

Add the Sceneform animation dependency

In the app/build.gradle file add optional dependencies for Sceneform animation elements.

In the dependencies {} section add:

implementation "com.google.ar.sceneform:animation:1.8.0"

Start the animation

In MainActivity add a method at the end of the class named startAnimation().

public void startAnimation(TransformableNode node, ModelRenderable renderable) {
    if (renderable == null || renderable.getAnimationDataCount() == 0) {
        return;
    }

    ModelAnimator animator = new ModelAnimator(renderable.getAnimationData(0), renderable);
    animator.start();
}

At the end of addNodeToScene(), call the new function:

startAnimation(node, renderable);

You can test the application now and see the animation. With the current code the animation only plays through once.

Set an onTapListener to pause/resume the animation

Next we will make it so that the animation plays and stops when tapped.

Add a function to toggle the state of the animation.

public void togglePauseAndResume(ModelAnimator animator) {
    if (animator.isPaused()) {
        animator.resume();
    } else if (animator.isStarted()) {
        animator.pause();
    } else {
        animator.start();
    }
}

At the bottom of startAnimation set an onTapListener to call the new function

node.setOnTapListener(
        (hitTestResult, motionEvent) -> {
            togglePauseAndResume(animator);
        });

Now tapping the dancing animation will trigger it to start or stop.

Now let's add the photo capturing. This will change the floating action button to save an image of the ArSceneView into the photos directory and launch an intent to view it.

First let's change the floating action button to be a camera instead of an envelope.

In app/res/layout/activity_main.xml

Find the floating action button and change the srcCompat to be the camera icon:

app:srcCompat="@android:drawable/ic_menu_camera" />

We also need a file specifying the paths our application will write to. This file needs to be an XML resource file. Add it by selecting app/res in the project view, right-clicking, and selecting New > Directory. Name the directory xml.

Select the xml directory and create a new XML file named: paths.xml.

Add the external path to the images:

<?xml version="1.0" encoding="utf-8"?>
<paths xmlns:android="http://schemas.android.com/apk/res/android">
   <external-path name="my_images" path="Pictures" />
</paths>

In AndroidManifest.xml, add the FileProvider. This is needed to securely pass the URI of the photo we took to the photos app via an intent.

Inside the <application> element add:

<provider
    android:name="android.support.v4.content.FileProvider"
    android:authorities="${applicationId}.ar.codelab.name.provider"
    android:exported="false"
    android:grantUriPermissions="true">
    <meta-data
        android:name="android.support.FILE_PROVIDER_PATHS"
        android:resource="@xml/paths"/>
</provider>


Add permission for external storage

This has two parts. First, add it to the manifest next to the CAMERA permission:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Next, we have to request this permission at runtime. We'll extend the fragment so we can request additional permissions. Create a new class called WritingArFragment with a superclass of com.google.ar.sceneform.ux.ArFragment.

Add the write external storage permission as an additional permission. This will make the fragment prompt the user for consent when the application starts.

public class WritingArFragment extends ArFragment {
    @Override
    public String[] getAdditionalPermissions() {
        String[] additionalPermissions = super.getAdditionalPermissions();
        int permissionLength = additionalPermissions != null ? additionalPermissions.length : 0;
        String[] permissions = new String[permissionLength + 1];
        permissions[0] = Manifest.permission.WRITE_EXTERNAL_STORAGE;
        if (permissionLength > 0) {
            System.arraycopy(additionalPermissions, 0, permissions, 1, additionalPermissions.length);
        }
        return permissions;
    }
}

Then change the content_main.xml layout to use WritingArFragment instead: update the android:name attribute on the fragment element to point at your new class, leaving the other attributes unchanged.

   app:layout_constraintVertical_weight="9" />

Add generateFilename method

A unique file name is needed for each picture we take. The filename is generated from the standard pictures directory plus an album name of "Sceneform". Each image name is based on the current time, so pictures won't overwrite each other. This path is also related to the paths.xml file we added previously.

private String generateFilename() {
  String date =
          new SimpleDateFormat("yyyyMMddHHmmss", java.util.Locale.getDefault()).format(new Date());
  return Environment.getExternalStoragePublicDirectory(
          Environment.DIRECTORY_PICTURES) + File.separator + "Sceneform/" + date + "_screenshot.jpg";
}

Add saveBitmapToDisk method

saveBitmapToDisk() writes out the bitmap to the file.

private void saveBitmapToDisk(Bitmap bitmap, String filename) throws IOException {

    File out = new File(filename);
    if (!out.getParentFile().exists()) {
        out.getParentFile().mkdirs();
    }
    try (FileOutputStream outputStream = new FileOutputStream(filename);
         ByteArrayOutputStream outputData = new ByteArrayOutputStream()) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, outputData);
        outputData.writeTo(outputStream);
        outputStream.flush();
    } catch (IOException ex) {
        throw new IOException("Failed to save bitmap to disk", ex);
    }
}

Add the takePhoto method

The method takePhoto() uses the PixelCopy API to capture a screenshot of the ArSceneView. It is asynchronous since it actually happens between frames. When the listener is called, the bitmap is saved to the disk, and then a snackbar is shown with an intent to open the image in the Pictures application.

private void takePhoto() {
    final String filename = generateFilename();
    ArSceneView view = fragment.getArSceneView();

    // Create a bitmap the size of the scene view.
    final Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
            Bitmap.Config.ARGB_8888);

    // Create a handler thread to offload the processing of the image.
    final HandlerThread handlerThread = new HandlerThread("PixelCopier");
    handlerThread.start();
    // Make the request to copy.
    PixelCopy.request(view, bitmap, (copyResult) -> {
        if (copyResult == PixelCopy.SUCCESS) {
            try {
                saveBitmapToDisk(bitmap, filename);
            } catch (IOException e) {
                Toast toast = Toast.makeText(MainActivity.this, e.toString(),
                        Toast.LENGTH_LONG);
                toast.show();
                return;
            }
            Snackbar snackbar = Snackbar.make(findViewById(android.R.id.content),
                    "Photo saved", Snackbar.LENGTH_LONG);
            snackbar.setAction("Open in Photos", v -> {
                File photoFile = new File(filename);

                Uri photoURI = FileProvider.getUriForFile(MainActivity.this,
                        MainActivity.this.getPackageName() + ".ar.codelab.name.provider",
                        photoFile);
                Intent intent = new Intent(Intent.ACTION_VIEW, photoURI);
                intent.setDataAndType(photoURI, "image/*");
                intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
                startActivity(intent);
            });
            snackbar.show();
        } else {
            Toast toast = Toast.makeText(MainActivity.this,
                    "Failed to copyPixels: " + copyResult, Toast.LENGTH_LONG);
            toast.show();
        }
        handlerThread.quitSafely();
    }, new Handler(handlerThread.getLooper()));
}

The last step is to change the floating action button onClickListener to call takePhoto(). This is in onCreate(). When you are done, it should look like this:

fab.setOnClickListener(view -> takePhoto());

Well Done! Now try it out one more time and take a picture!

Well done working through this codelab! A quick recap of what was covered:

Other Resources

As you continue your ARCore exploration, check out these other resources: