Since childhood, I have loved cars; perhaps it should not be surprising that my graduate studies would involve cars in some form or another. However, I count myself extremely fortunate that my advisor (Mike!) let me formulate and investigate a research problem for my PhD thesis that involved an automobile.

In short, with AutoPoint, I designed and developed techniques that used a car's window as an interactive and transparent display surface. AutoPoint enabled passengers to select and engage with Points-of-Interest (PoI) outside the car. The photo above shows the two distinct versions I developed: one for the lab, and the other a prototype I tested by driving people around town.


Take a look at the photo below; notice how car dashboards have changed over the years as technology has progressed.

Clockwise from top-left: the 1923 Ford Model T with an ammeter as the only dial on the dashboard; the 1948 Tucker Torpedo [65] with many more dials and switches; the 1965 Cadillac DeVille [63] with A/C and a radio on the dashboard; the 1988 Nissan Heritage [33] with a car phone and digital displays driven by an onboard computer. 

With the recent strides made in self-driving technology, cars and car interiors were due for a change. As I saw it, there were two competing visions: a) bringing entertainment (e.g. movies) to a backseat passenger on a larger screen, and b) using the glass surfaces around a passenger as transparent interactive displays to engage with the world going by.

concept sketch for AutoPoint; the idea was to allow a passenger to point outside to capture a portion of the world as seen through that window, and thereafter use the window surface to interact with the captured content. 

Without going into too much detail, here are a couple of fundamental questions that I wanted to explore:

  • How does one solve occlusion issues? This would obviously depend on how close a target in the real world was to the passenger, and how fast the car was moving.
  • What if something interesting was nearby (admittedly a loose term), but the passenger had no way of knowing it?
  • How can we allow for open-ended exploration of a space that the car was driving through?

After getting the go-ahead from Mike, I went about developing the system in the lab. A visit to a junkyard and $100 led to a Volkswagen Passat rear door. Many visits to hardware stores (mostly Home Depot) and many hours in the lab led to this: a car door mounted on a platform that could be wheeled around easily.

this setup also had about 2 inches of height adjustment via the carriage bolts used to mount the wheels; this flexibility was important to ensure that when the time came to have participants over, I could get the ergonomics just right. 

After removing the rear window from the door, I made a replacement window assembly from an IR touchscreen panel, a sheet of glass, and switchable privacy tint (from a company called SmartTint). This let me use a rear-projection setup: with the tint set to opaque, the window worked great as a projection surface; with it set to transparent, someone could see the world outside – in this case, a TV screen showing a 3D world.

left: the custom window assembly mounted on the car door; right: a canopy ensured a more "car-like" environment – notice the projector behind the window and, in the background, the TV showing the 3D world made in Unity. 

I plugged the switchable tint into a power strip controlled by a solid-state relay (SSR) that I'd hooked up to an Arduino.

SSR + Arduino + Power strip to control when the window would become transparent or opaque 

To simulate the outside world, I built a virtual 3D city in Unity, modeled after Chicago but customized to my tastes with the addition of mountains :). This city is what someone sitting alongside the car door would see on the TV.

3D world the car "drove" through. Made in Unity, with pre-assigned targets I would ask users/participants to find

I designed a bunch of interactions (multitouch, or gesture-based via a Leap Motion sensor) that enabled a user to acquire a point of interest. Here's a short video of the system in action (using an interaction I called World Tilt to peer over something that was occluding a target).

as you can see, the field of view turned out okay; here, a target is acquired by tapping the window to take a snapshot of the world as seen through it, and then using World Tilt to narrow down the target

Other than pilot testing the lab setup with colleagues, I had 36 participants in the lab test out the system and the various interaction techniques.

After this in-lab phase, I had to come up with a way to make a similar system that could be tested in the real world. After trying roof-mounted projectors, video cameras, etc., I settled on an iPad mounted on the window. Not ideal, but I was able to get the experience of using this system to be very similar to the in-lab setup.

prototype mounts for the iPad that I designed and 3D printed.

The prototype above didn't work well with the natural curvature of most car windows; the iPad tended to point too far up (this could perhaps have been taken care of with a custom lens, but there was a simpler solution).

a custom mount I built out of aluminium; I blacked out the rest of the window with chalkboard paper (an idea from a wonderful employee at the local Ace Hardware store) and affixed the iPad to the mount with industrial-strength Velcro. 

The iPad app had two modes: a map mode (made using Apple Maps) and a stream mode (basically a live video stream of the world going by). Each mode offered numerous interaction techniques that mimicked the ones tested in the lab (World Tilt, for instance, plus one called Time Slices that let people go back to something they might have missed).

the interface of the iPad app with its two distinct views: live view and map view, along with interaction techniques such as time slicing and world tilt

One difference in the testing methodology: instead of finding pre-assigned targets, participants were free to collect snapshots of their choice during the drive.

drive loop from Northwestern University's campus to the Bahá'í temple

Here's a short video of this system in action, where a participant captured something as we drove by:

If you've made it this far, thank you for your time :).


Software | Unity with C#, a microcontroller programmed in C++, and Swift for an iOS app.
Hardware | engineered custom mounts & sensor assemblies to alter a car window.
Evaluation / UX | tested in a lab with 36 participants, and in a car with 14 participants driving around Evanston & Wilmette.